I know there are other plausible reasons, but I thought I’d use this juicy title.

What does everyone think? As someone who works outside of tech, I’m curious to hear the collective thoughts of the tech minds on Lemmy.

    • Sekrayray@lemmy.world (OP) · 8 months ago

      Yeah, I have this completely unfounded gut feeling that they may have created something close to AGI internally, and that’s what led them to slam on the brakes.

      Their weird for-profit and not-for-profit board structure makes me think it was crafted that way in the event of a rapid acceleration, and the pace and format of this firing make me wonder if it was a last-ditch effort to keep the genie from leaving the bottle.

      • ∟⊔⊤∦∣≶@lemmy.nz · 8 months ago

        No, no need at all to worry about that kind of thing.

        AI (LLMs) is still just a box that spits things out when you put things in: a digital Galton board. That’s it.
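
        To make the Galton-board analogy concrete, here’s a toy sketch (my own illustration, nothing to do with any actual OpenAI system): balls bounce left or right at each peg, and the shape of the pile that comes out is entirely determined by the board, much as a trained model’s outputs are determined by its weights.

        ```python
        import random
        from collections import Counter

        def galton_board(num_balls: int, num_rows: int) -> Counter:
            """Drop balls through rows of pegs; each peg knocks a ball left or
            right with equal probability. A ball's bin is just how many times
            it bounced right."""
            bins = Counter()
            for _ in range(num_balls):
                bins[sum(random.random() < 0.5 for _ in range(num_rows))] += 1
            return bins

        # The distribution that falls out is fixed by the board's structure,
        # much as a trained model's responses are fixed by its weights.
        print(sorted(galton_board(10_000, 12).items()))
        ```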

        This is not going to take over the world.

            • Cogency@lemmy.world · 8 months ago

              And also not that different from how most people would describe their fellow earthers.

              I.e., we aren’t that much more complicated than that when you get right down to the philosophical breakdown of what an “I” is.

        • Sekrayray@lemmy.world (OP) · 8 months ago

          I mean, I don’t think AGI necessarily implies the singularity, and I doubt the singularity will ever come from LLMs. But when you look at human intelligence, one could make the argument that it’s a glorified input-output system, just like LLMs.

          I’m not sure. There are a lot of things going on in the background of even human intelligence that we don’t understand.

          • agent_flounder@lemmy.world · 8 months ago

            Yes, except human brains can learn things without the typical manual training and tweaking you see in ML. In other words, LLMs can’t just start from an initial “blank” state and train themselves autonomously. A baby starts from an initial state and learns about objects, calibrates its eyes, proprioception, and movement, then learns to roll over, crawl, stand, walk, and grasp, learns to understand language and then speak it, etc. Of course there’s parental involvement and all that, but nothing like someone training an LLM on a massive dataset.
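
            For contrast, here’s a bare-bones sketch of what that manual training looks like in practice (a hypothetical toy model and toy data, not any real LLM pipeline): the data, the loss, and the learning rate all have to be chosen and supplied by a person.

            ```python
            import torch
            from torch import nn

            # Hypothetical toy dataset: a person has to collect, clean, and label it.
            inputs = torch.randn(64, 10)
            targets = torch.randn(64, 1)

            model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
            loss_fn = nn.MSELoss()
            # The learning rate is one of many knobs a human tunes by hand.
            optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

            for epoch in range(100):
                optimizer.zero_grad()
                loss = loss_fn(model(inputs), targets)
                loss.backward()   # gradients flow from the human-chosen loss
                optimizer.step()  # weights get nudged; none of this happens on its own
            ```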

          • xmunk@sh.itjust.works · 8 months ago

            Spin up AI Dungeon with ChatGPT and see how compelling it is once you run out of script.

            • Sekrayray@lemmy.world (OP) · 8 months ago

              Really good point. I’ve actually messed around a lot with GPT as a 5e DM, and you’re right: as soon as it needs to generate unique content, it just leads you in an infinite loop that goes nowhere.

              • ∟⊔⊤∦∣≶@lemmy.nz · 8 months ago

                I’ve had some amazing fantasy conversations with LLMs running on my own GPU: family and world history, tribal traditions, flora and fauna, etc. It’s impressive and a lot of fun.
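
                A minimal sketch of that kind of local setup, assuming the Hugging Face transformers library (the model name is only an example; any local text-generation model would do):

                ```python
                from transformers import pipeline

                # Load a small open model onto the local GPU (device=0).
                # "Qwen/Qwen2.5-0.5B-Instruct" is just an example; swap in whatever you run locally.
                generator = pipeline(
                    "text-generation",
                    model="Qwen/Qwen2.5-0.5B-Instruct",
                    device=0,
                )

                prompt = "Describe the tribal traditions of a mountain-dwelling fantasy people."
                result = generator(prompt, max_new_tokens=200, do_sample=True, temperature=0.8)
                print(result[0]["generated_text"])
                ```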

      • ilinamorato@lemmy.world · 8 months ago

        I’m very doubtful that an AGI is possible with our current understanding of technology.

        Current models have the appearance of intelligence because they’ve been trained on the entire Internet (which also has the appearance of intelligence), but at their core they’re still predictive pattern matchers: a pile of linear algebra that can be stirred around to get an output. Useful. But if eight billion people all wrote down their answer to a question and we averaged them all out, we’d get a pretty good answer that appeared intelligent as well, and the human race as a whole isn’t a distinct intelligence.

        Data manipulated on a large scale, especially when it’s bounded by rules and perturbed with random noise, yields surprising and often even poignant results. That’s all AI is right now: a more-or-less average of the internet. Your prompt just points it toward a particular corner of it.
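
        To make the “predictive pattern matcher” point concrete, here’s a deliberately tiny sketch of the idea (a bigram model, nowhere near a real transformer): it records which word tends to follow which in its training text and samples from that, and the prompt only decides where in that table it starts.

        ```python
        import random
        from collections import defaultdict

        # A tiny stand-in for "the entire Internet".
        corpus = "the cat sat on the mat and the dog sat on the rug".split()

        # Record which word follows which (real LLMs learn vastly richer
        # statistics, but the job is the same: predict the next token).
        follows = defaultdict(list)
        for current, nxt in zip(corpus, corpus[1:]):
            follows[current].append(nxt)

        def generate(start: str, length: int = 8) -> str:
            word, out = start, [start]
            for _ in range(length):
                options = follows.get(word)
                word = random.choice(options) if options else random.choice(corpus)
                out.append(word)
            return " ".join(out)

        print(generate("the"))  # the prompt just points at a corner of the table
        ```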

        • v_krishna@lemmy.ml · 8 months ago

          “the human race as a whole isn’t a distinct intelligence”

          I don’t know if it’s quite that simple; (some) cognitive scientists and Marvin Minsky might disagree too. Pedantic asshattery aside, AGI might be an intelligence so fundamentally different from our own ego/narrative/first-person-perspective intelligence that we’d have trouble recognizing it as such.

          • ilinamorato@lemmy.world · 8 months ago

            Well, the big thing is that, right now, the “intelligence” doesn’t exist without a prompt. It has no agency or continuity outside of our requests. It also has no reasoning or thought process that we can discern, just an algorithm. It’s fundamentally not distinct from basic computers, which means that if it is intelligence, so are our servers and smartwatches and satellite phones and Switch OLEDs.

          • ilinamorato@lemmy.world · 8 months ago

            Yeah. I mean, quantum computing might upend some of my assumptions, but in the long run we’re probably going to have nailed down a decent definition of sentience before we have to wonder if computers have it.