
Over the last year I've been learning Swift and starting to put together some iOS apps. I'd definitely class myself as a Swift beginner.

I'm currently building an app, and today I used ChatGPT to help with a function I needed to write. I found myself wondering if somehow I was "cheating". In the past I would have used YouTube videos, online tutorials and Stack Overflow, and adapted what I found to work for my particular use case.

Is using ChatGPT different? ChatGPT explains the code it writes, and the code often still needs fettling to get it working. That makes me think it's a useful learning tool: as long as I take the time to read the explanations given and make sure I understand what the code is doing, it's probably a good thing on balance.

I was just wondering what other people's thoughts are?

Also, as a side note, I found that chucking code I had written into ChatGPT and asking it to comment every line was pretty successful, and a big time saver :D

top 15 comments
[-] krixcrox@programming.dev 3 points 1 year ago

Back in the day people looked things up in books. Then the internet came along and you didn't need those heavy books anymore; you just typed your question into a search engine. Today we use ChatGPT to do the "searching" for us (obviously it's not actually searching the internet, but you get what I mean). It's just another step in making coding, and learning to code, easier and more accessible.

[-] TeaHands@lemmy.world 3 points 1 year ago* (last edited 1 year ago)

If you understand the code and are able to adapt it for your needs, it's no different from copy-pasting from other sources, imo. It's just a time saver.

If you get to the point where you're blindly trusting it with no ability to understand what it's doing, then you have a problem. But that applied to Stack Overflow too.

[-] KindaABigDyl@programming.dev 2 points 1 year ago

Cheating who?

[-] JackbyDev@programming.dev 2 points 1 year ago* (last edited 1 year ago)

No, it's not cheating, but also please don't blindly trust it. Random people on the internet can be wrong too but people can at least correct them if they are. Stuff ChatGPT outputs is fresh for your eyes only.

Edit: typo

[-] mrkite@programming.dev 1 points 1 year ago

Agreed. While I've never used ChatGPT on an actual project, I've tested it on theoretical problems and I've never seen it give an answer that didn't have a problem.

So I would treat it like any answer on Stack Overflow: use it as a start, but definitely customize it and fix any edge cases.

[-] danc4498@lemmy.world 2 points 1 year ago

Anything that isn't assembly language is cheating.

[-] SJ_Zero@lemmy.fbxl.net 1 points 1 year ago

Over time you'll realize ChatGPT has giant holes.

As a developer you use tools every day -- you probably use a rapid GUI tool, a compiler, APIs -- things you probably couldn't build on your own. Even under MS-DOS you were using BIOS or MS-DOS interrupts. The PC also handles a lot of stuff for you.

So it's just another tool, and it doesn't do everything, so treat it as one thing in your pouch. Don't rely on it too much, and be mindful of IP concerns -- legally, AI is like a monkey with a camera: you can't copyright whatever it creates.

[-] Dazawassa@programming.dev 0 points 1 year ago

This! Not even 2 months ago a classmate of mine was convinced he could prompt ChatGPT to write a program that would both encrypt and decrypt files and store the key securely. It didn't do a single one of those things, so we've got a long way to go before we get ChatGPT operating systems.

[-] colonial@lemmy.world 3 points 1 year ago

I recently took an "intro to C" course at my university, despite already having some experience - they wouldn't let me test out - so I ended up helping a few of my classmates. Some had made the rookie mistake of "posting the assignment into ChatGPT and hitting enter," whereupon their faces were eaten by nasal demons.

Here's the worst example I saw, with my comments:

char* getName() {
    // Dollar store ass buffer
    char name[1];

    printf("Enter your name: ");
    // STACK GOES BOOM
    scanf("%s", name);
    
    // Returning stack-allocated data, very naughty
    return name;
}

Sighs
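For contrast, a version that avoids all three bugs might look something like this (a minimal sketch, not from the original thread; the function name, buffer size, and caller-supplies-the-buffer design are my own choices):

```c
#include <stdio.h>
#include <string.h>

// Caller supplies the buffer, so no stack-allocated data escapes the function.
char *get_name(char *buf, size_t size) {
    printf("Enter your name: ");
    // fgets bounds the read to the buffer size, unlike a bare scanf("%s", ...)
    if (fgets(buf, (int)size, stdin) == NULL)
        return NULL;
    // strip the trailing newline, if one fit in the buffer
    buf[strcspn(buf, "\n")] = '\0';
    return buf;
}
```

Passing the buffer and its size in from the caller is the usual C idiom for this; the alternative of `malloc`-ing inside the function also works but pushes a `free` obligation onto the caller.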

[-] mrkite@programming.dev 0 points 1 year ago

My favorite test for ChatGPT is to ask it to write a function to divide two numbers in 6502 assembly. Not only is there no DIV instruction to rely on, but the 6502 is very register starved, so you get a lot of screwups. Here's one example of what ChatGPT generated:

; Inputs:
; - A: dividend
; - X: divisor

DIVISION:
    LDX #0       ; initialize quotient to 0
    CPX #0       ; check for divide-by-zero error
    BEQ DIVIDE_BY_ZERO
    CLC          ; clear carry flag
DIV_LOOP:
    CMP X,A      ; compare divisor with dividend
    BCC DIV_DONE ; if divisor is greater, division is complete
    SEC          ; set carry flag
    ROL A        ; shift dividend left
    INX          ; increment quotient
    JMP DIV_LOOP ; continue division
DIV_DONE:
    RTS          ; return from subroutine
DIVIDE_BY_ZERO:
    ; handle divide-by-zero error here
    RTS

You can see it immediately overwrites the divisor with the quotient, so this thing will always give a divide by zero error. But even if it didn't do that, CMP X,A is an invalid instruction. But even if that wasn't invalid, multiplying the dividend by two (and adding one) is nonsense.
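The algorithm the model was groping for is ordinary shift-and-subtract (restoring) long division. Sketched in C rather than 6502 assembly for clarity (the function name and signature are mine; a correct 6502 routine would implement the same eight-iteration loop with ROL, CMP and SBC):

```c
#include <stdint.h>

// 8-bit unsigned long division by shift-and-subtract.
// Caller must check divisor != 0 first -- the check the
// generated assembly above attempted and then broke.
uint8_t div8(uint8_t dividend, uint8_t divisor, uint8_t *remainder) {
    uint8_t quotient = 0;
    uint8_t rem = 0;
    for (int i = 7; i >= 0; i--) {
        // shift the next dividend bit into the running remainder
        rem = (uint8_t)((rem << 1) | ((dividend >> i) & 1));
        quotient <<= 1;
        if (rem >= divisor) {   // trial subtraction succeeds
            rem -= divisor;
            quotient |= 1;      // record a 1 bit in the quotient
        }
    }
    if (remainder)
        *remainder = rem;
    return quotient;
}
```

Note that the only left-shifting happens to the running remainder and quotient, not to the dividend itself, which is exactly where the generated code went off the rails.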

[-] Deely@programming.dev 0 points 1 year ago

Honestly, I still don't get it. Every dialogue with ChatGPT where I try to do something meaningful ends with ChatGPT hallucinating. It answers general questions, but it imagines something every time. I ask for a list of command-line renderers, it returns a list with a few renderers that don't have a CLI interface. I ask about a library that does something, it returns 5 libraries including one that definitely can't do it. And so on, and so on. ChatGPT is good at trivial tasks, but I don't need help with trivial tasks; I can do trivial tasks myself... Sorry for the rant.

[-] axo10tl@sopuli.xyz 2 points 1 year ago* (last edited 1 year ago)

That's because ChatGPT and LLMs are not oracles. They don't take into account whether the text they generate is factually correct, because that's not the task they're trained for. They're only trained to generate the next statistically most likely word, then the next word, and then the next one...

You can take a parrot to a math class, have it listen to lessons for a few months and then you can "have a conversation" about math with it. The parrot won't have a deep (or any) understanding of math, but it will gladly replicate phrases it has heard. Many of those phrases could be mathematical facts, but just because the parrot can recite the phrases, doesn't mean it understands their meaning, or that it could even count 3+3.

LLMs are the same. They're excellent at reciting known phrases, even combining popular phrases into novel ones, but even then the model lacks any understanding behind the words and sentences it produces.

If you give an LLM a task in which your objective is to receive factually correct information, you might as well be asking a parrot - the answer may well be factually correct, but it just as well might be a hallucination. In both cases the responsibility of fact checking falls 100% on your shoulders.

So even though LLMs aren't good for information retrieval, they're exceptionally good at text generation. The ideal use cases for LLMs thus lie in the domain of text generation, not information retrieval or facts. If you recognize and understand this, you're all set to use ChatGPT effectively, because you know what kinds of questions it's good for, and what kinds it's absolutely useless for.

[-] Dazawassa@programming.dev 1 points 1 year ago

No, you aren't the only one. I've prompted ChatGPT before for SFML library commands, and every time it's given me commands that either don't work anymore or just never existed.

[-] truthy@programming.dev 0 points 1 year ago* (last edited 1 year ago)

No, it's not cheating. But you are expected to understand what your code does and how.

And this brings us to the explanations it provides. Keep in mind that these AI tools excel at producing content that seems right. But they may very well be hallucinating. And just as with code, small details and exact concepts matter.

I would therefore recommend that you verify your final code against the official documentation, to make sure you actually understand it.

In the end, as long as you don't trust the AI, neither for solutions nor for knowledge, it's just another tool. Use it where it fits.

[-] balder1993@programming.dev 1 points 1 year ago

I'd go as far as saying you should know what every line of code does, or you risk the whole thing having unexpected side effects. When you understand what the code is doing, you know which parts you should test.

this post was submitted on 24 Jun 2023
4 points (100.0% liked)

Programming
