and as always, the culprit is ChatGPT. Stack Overflow Inc. won’t let their mods take down AI-generated content

  • Hyperz@beehaw.org · 3 points · 1 year ago

    It seems to me like StackOverflow is really shooting themselves in the foot by allowing AI-generated answers. Even if we assume that all AI-generated answers are “correct”, doesn’t that completely defeat the purpose of the site? Like, if I were seeking an answer to some Python-related problem, why wouldn’t I just go straight to ChatGPT or a similar language model instead? That way I also wouldn’t have to deal with some of the other issues that plague StackOverflow, such as “this question is a duplicate of <insert unrelated question> - closed!”.

    • OrangeSlice@lemmy.ml · 2 points · 1 year ago

      I think what sites have been running into is that it’s difficult to tell what is and is not AI-generated, so enforcing a ban is difficult. Some would say that it’s better to have an AI-generated response out there in the open, where it can be verified and prioritized appropriately based on user feedback. If there’s a human-generated response that’s higher-quality, then that should win anyway, right? (Idk tbh)

      • Hyperz@beehaw.org · 0 points · 1 year ago

        Yeah, that’s a good point. I have no idea how you’d go about solving that problem. Right now you can still sometimes sort of tell when something was AI-generated. But if we extrapolate the past few years of advances in LLMs, say, 10 years into the future… there will be no telling what’s AI and what’s not. Where does that leave sites like StackOverflow, or indeed many other types of sites?

        This then also makes me wonder how these models are going to be trained in the future. What happens when, for example, half of the training data is the output of previous models? How do you possibly steer/align future models and prevent compounding errors and bias? Strange times ahead.

        • OrangeSlice@lemmy.ml · 1 point · 1 year ago

          > This then also makes me wonder how these models are going to be trained in the future. What happens when, for example, half of the training data is the output of previous models? How do you possibly steer/align future models and prevent compounding errors and bias? Strange times ahead.

          Between this and “deep fake” tech, I’m kinda hoping for a light Butlerian Jihad that gets everyone to log tf off and exist in the real world, but that’s kind of a hot take.

          • Hyperz@beehaw.org · 3 points · 1 year ago

            But then they’d have to break up with their AI girlfriends/boyfriends 🤔.

            spoiler: I wish I was joking.

  • pAULIE42o@beehaw.org · 1 point · 1 year ago

    I’m no pro here, but I think the underlying ‘issue’ is that soon these types of sites will be driven by AI. Mods will just look over the content, but sadly I think the days of a mod being the most intelligent person in the room are numbered.

    I don’t trust AI output/answers today, but tomorrow they’re going to be spot-on and answer better than we can. :/

    I think the Inc.s [corporations] see the writing on the wall and are just getting everyone ready for the inevitable ASAP.

    What say you?

    • Pigeon@beehaw.org · 1 point · 1 year ago

      I dunno about “tomorrow”. Eventually, maybe. But today’s AIs are just language models. If there are no humans answering questions and creating new reporting on new events/tech/etc., then the AI can’t be trained on their output and won’t be able to say a single thing about those new topics. It’ll pretend to and make shit up, but that’s it.

      Being just language models - really great ones, but still without any understanding of the content of what they say whatsoever - they’re currently in a state of making shit up all the time. All they care about is the likelihood that one word or phrase or paragraph might typically follow another, which makes for truthy-sounding language, but that’s often very far from the actual truth.
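      A deliberately tiny toy to illustrate that “one word follows another” point (a bigram counter, nowhere near a real LLM, purely for illustration): the model below only knows how often words followed each other in its training text, so it can produce fluent-looking output while nothing anywhere checks whether that output is true.

      ```python
      # Toy bigram "language model": it picks the next word purely from
      # how often that word followed the previous one in the training text.
      # A vastly simplified stand-in for a real LLM, for illustration only.
      from collections import Counter, defaultdict
      import random

      training_text = "the cat sat on the mat and the cat ate the fish".split()

      # Count which word tends to follow which.
      followers = defaultdict(Counter)
      for prev, nxt in zip(training_text, training_text[1:]):
          followers[prev][nxt] += 1

      def generate(start: str, length: int = 8) -> str:
          """Sample likely-looking continuations; fluency is the only criterion."""
          words = [start]
          for _ in range(length):
              options = followers.get(words[-1])
              if not options:
                  break
              next_words, counts = zip(*options.items())
              words.append(random.choices(next_words, weights=counts, k=1)[0])
          return " ".join(words)

      print(generate("the"))  # fluent-sounding, but never checked against reality
      ```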

      The only way to get around that is to create AI that isn’t just a pile of language algorithms, and that’s an entirely different beast from what we’re dealing with now - who knows how far off, if it’s even possible. You can’t just iteratively improve a language algorithm into something that isn’t a language algorithm anymore.

      • kevin@beehaw.org · 1 point · 1 year ago (edited)

        I imagine it’ll be possible in the near future to improve the accuracy of technical AI content fairly easily. It’d go something along these lines: have an LLM generate a candidate response, then have a second LLM validate that response. The validator would have access to real references it can use to check some form of correctness; e.g. a Python response could be run through a Python interpreter to make sure it, to some extent, does what it is purported to do. The validator then either decides the output is most likely correct, or generates some sort of feedback asking the first LLM to revise until the answer passes validation. This wouldn’t catch 100% of errors, but a process like this could significantly reduce the frequency of hallucinations, for example.
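        A minimal sketch of that loop, under loose assumptions: ask_llm() below is a hypothetical placeholder for whatever model API would actually be called, and the only “validation” is running the candidate Python in a subprocess, which checks far less than whether the answer does what was asked (real use would also need proper sandboxing).

        ```python
        # Rough sketch of the generate-then-validate idea described above.
        # ask_llm() is a hypothetical stand-in for a real model call.
        import subprocess
        import sys

        def ask_llm(prompt: str) -> str:
            """Placeholder for the generator/reviser LLM call."""
            raise NotImplementedError

        def validate_python(code: str, timeout: float = 5.0) -> tuple[bool, str]:
            """Run candidate code in a subprocess and report whether it executed cleanly."""
            try:
                result = subprocess.run(
                    [sys.executable, "-c", code],
                    capture_output=True, text=True, timeout=timeout,
                )
            except subprocess.TimeoutExpired:
                return False, "timed out"
            return result.returncode == 0, result.stderr

        def answer_with_validation(question: str, max_rounds: int = 3) -> str:
            candidate = ask_llm(question)
            for _ in range(max_rounds):
                ok, error = validate_python(candidate)
                if ok:
                    return candidate
                # Feed the failure back and ask for a revision, as suggested above.
                candidate = ask_llm(question + "\n\nYour previous answer failed:\n" + error + "\nPlease revise.")
            return candidate  # best effort; still no guarantee it actually answers the question
        ```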

        • Tutunkommon@beehaw.org · 1 point · 1 year ago

          The best description I’ve heard is that an LLM is good at figuring out what the correct answer should look like, not necessarily what it actually is.