• 1stTime4MeInMCU@mander.xyz · 4 days ago

    I’m convinced that people who can’t tell when a chatbot is hallucinating are also bad at telling whether anything else they read is true. What are you reading online that you’re not fact-checking anyway? If you’re writing a report, you don’t pull the first fact you find and call it good; you find a couple of citations for it. If you’re writing code, you don’t just write the program and assume it’s correct; you test it. It’s just a tool, and I think most people are coping because they’re bad at using it.
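
    As a minimal sketch of what “you test it” means in practice (`slugify` here stands in for a hypothetical chatbot-suggested helper; the assertions are the part you write yourself):

    ```python
    import re

    # Hypothetical chatbot-suggested helper: lowercase, replace runs of
    # punctuation/whitespace with hyphens, trim stray hyphens.
    def slugify(title: str) -> str:
        return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

    # The part you don't skip: check it against cases you actually care about.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces   everywhere ") == "spaces-everywhere"
    assert slugify("") == ""
    print("all checks passed")
    ```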

    • BluesF@lemmy.world · 4 days ago

      Yeah. GPT models are in a good place for coding tbh, I use it every day to support my usual practice, and it definitely speeds things up. It’s particularly good for things like identifying niche Python packages & providing example use cases, so I don’t have to learn shitloads of syntax that I’ll never use again.

      • Aceticon@lemmy.world · 4 days ago

        In other words, it’s the new version of copying code from Stack Overflow without going to the trouble of properly understanding what it does.

        • Rekorse@sh.itjust.works · 4 days ago

          Pft, you must have read that wrong, it’s clearly turning them into master programmers one query at a time.

        • BluesF@lemmy.world · 4 days ago

          I know how to write a tree traversal, but I don’t need to, because there’s a Python module that does it. That was already the case before LLMs. Honestly, I hardly ever need to do a tree traversal now, and I don’t particularly want to go to the trouble of learning how this particular module needs me to format its input for the one time this year I’ll need one. I’d rather have something made for me so I can move on to my primary focus, which is not tree traversals.

          It’s not about avoiding understanding, it’s about avoiding unnecessary extra work. And I’m not talking about saving the years of work it takes to learn how to code; I’m talking about the 30 minutes it would take to learn a module I might never use again. If I do use it again, or if there’s a problem, I’ll probably do it properly the second time. But why do it now, if there’s a tool that can do it for me with minimum fuss?
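
          To make that trade-off concrete, a sketch (assuming the anytree package as the “niche Python module”; any tree library would do) with the library call on top and the hand-rolled traversal it replaces below:

          ```python
          from anytree import Node, PreOrderIter  # pip install anytree

          # A small tree: root -> a -> a1, root -> b
          root = Node("root")
          a = Node("a", parent=root)
          Node("a1", parent=a)
          Node("b", parent=root)

          # Library version: one import, one call, nothing to maintain.
          print([n.name for n in PreOrderIter(root)])  # ['root', 'a', 'a1', 'b']

          # Hand-rolled equivalent, for comparison:
          def preorder(node):
              yield node.name
              for child in node.children:
                  yield from preorder(child)

          print(list(preorder(root)))  # same output
          ```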

        • archomrade [he/him]@midwest.social · 4 days ago

          The usefulness of Stack Overflow or a GPT model completely depends on who is using it and how.

          It also depends on who or what is answering the question, and I can’t tell you how many times someone new to SO has been scolded or castigated for needing/wanting help understanding something another user thinks is simple. For all of the faults of GPT models, at least they aren’t outright abusive to novices trying to learn something new for themselves.

          • Aceticon@lemmy.world · 4 days ago (edited)

            I fully expect an LLM trained on Stack Overflow is quite capable of being just as much of an asshole as a Stack Overflow user.

            Joking aside: whilst I can see that “not going to the trouble of understanding the code you got” is much the same whether the source is Stack Overflow or an LLM (Stack Overflow naturally has more context around a solution, including other possible solutions, but an LLM can be interrogated further for more details), I think only time will tell whether relying on an LLM ultimately makes for less well-informed programmers than being a heavy user of Stack Overflow.

            What I do think is more certain is that figuring out a solution yourself is a much better way to learn than getting it from an LLM or Stack Overflow. Though I can understand that the time for that slower method is often not available, and it’s an investment that only pays off if you face similar problems in the future, so sometimes it’s simply not worth it.

            The broader point I made still stands: there is a class of programmers who are copy-and-paste coders (no idea if the poster I originally replied to is one or not), for whom an LLM is just a faster-to-query Stack Overflow.

            • archomrade [he/him]@midwest.social · 4 days ago

              There will always be a class of programmers/people who choose not to interrogate or seek to understand the information conveyed to them. That doesn’t negate the value provided by tools like Stack Overflow or ChatGPT, and I think OP was expressing that value.