Elon Musk has launched a new AI company called xAI, with the stated goal of understanding "the true nature of the universe." The xAI team includes AI researchers who have worked at companies such as OpenAI, Google, Microsoft, and DeepMind. Little is currently known about xAI, except that Musk is seeking funding from SpaceX and Tesla to start it. The xAI team plans to host a Twitter Spaces discussion on July 14th to introduce themselves. xAI is separate from Musk's X Corp, but will work closely with his other companies, such as Tesla and Twitter.

  • Bad3r@lemmy.one · 49 points · 1 year ago

    Can’t wait to stop hearing about this clown. He did the worst thing possible: made people switch to Facebook again.

  • frog 🐸@beehaw.org · 25 points · 1 year ago

    Given that all of the existing “AI” models are in fact not intelligent at all, and are basically glorified predictive text… any insights an AI could come up with about the true nature of the universe would likely be like one of those sayings that initially sounds deep and meaningful, but is in fact completely inane and meaningless. Calling it now: it’ll come out with “if you immediately know the candlelight is fire, then the meal was cooked a long time ago”.

    • kent_eh@lemmy.ca · 4 points · 1 year ago

      any insights an AI could come up with about the true nature of the universe would likely be like one of those sayings that initially sounds deep and meaningful, but is in fact completely inane and meaningless.

      There’s a term for that: a deepity.

    • raccoona_nongrata@beehaw.org · 4 points · edited · 1 year ago

      While I don’t think an AI will figure out some great Truth behind the universe, and ChatGPT is indeed just a convincing text predictor, if what they claim about GPT-4 is true then it’s possible it has some level of general intelligence.

      When tested, GPT-4 was asked to do things that would not have appeared in its training data. For example, it was asked to provide instructions for stacking a book, a laptop, six eggs, a bottle, and a nail. ChatGPT gave the predictable “dumb” answer of just saying something like “place the laptop on the ground, then stack one egg on top, then stack the second egg on the first, then the book”, etc.

      But GPT-4 gave correct instructions: place the book on a flat surface, space the eggs evenly on the book to distribute the weight, put the laptop on top of the eggs, then the bottle on the laptop, and finally the nail on the bottle cap with the point upward. This implies that the model understands the shapes of the objects and how they would interact physically in the world.

      It was also able to do things like write code that draws a unicorn, something it would never have seen in any of its text data. It could even take a modified version of that code that draws the unicorn without its horn and in a mirrored position, and correct the code to add the horn back on the unicorn’s head, suggesting it knew where the head was even when the image was reoriented.

      It even potentially shows an understanding of theory of mind, being able to theorize about where a person might think an object is if it were moved while they were out of the room.

      So it’s possible AI may show us that a lot of intelligence, even our own, actually manifests from much simpler rules and conditions than we initially thought.

      • frog 🐸@beehaw.org · 4 points · 1 year ago

        Given the absolutely vast amounts of data that go into these models, especially the most recent ones, I’m sceptical that there was absolutely nothing in the training data from a WikiHow article about stacking objects, or from tutorials about how to write code that draws animals. I read an article a few months ago about someone asking an “AI” to create a crochet pattern for a narwhal, and the resulting pattern did indeed look something like a narwhal, in that it had all the right parts in roughly the right places, even if it was still a ghastly abomination. There’s no evidence that the “AI” actually understood what it was creating: there are plenty of narwhal crochet patterns online which were included in its datasets, and it simply predicted a pattern based on those.

        I’m inclined to believe the unicorn code is the same. It doesn’t need to understand the concept of a head, or even a unicorn, to be able to predict the code for a unicorn without a horn. In the vastness of the internet, there is undoubtedly a tutorial out there that has some version of “you can turn your unicorn into a horse by removing this bit of code”. There are probably tutorials out there for “if you want your unicorn facing the other way, do it like this”, too. Its training data will always include the lines of code for the horn as part of the code for the head. It’s not like there’s code out there for “how to draw a unicorn with a horn on its butt” (although I’m open to being proved wrong on this; I’m sure somebody on the internet has a thing for unicorns with horns on their butts instead of their heads, but that’s unlikely to be the most predictable structure for the code). So predictive text ability alone would make it unlikely for the horn code to end up anywhere near the butt code.

        The training data likely also includes all the many, many texts out there describing how to test for a theory of mind, so the ability to predict what someone writing about theory of mind would say (including descriptions of how a child/animal passing a theory of mind test will predict where objects are) doesn’t prove that an “AI” has a theory of mind.

        So I remain very, very sceptical that there is any general intelligence in the latest versions. They just have larger datasets and more refined predictive abilities, so the results are more accurate and less prone to hallucination. That’s not the same as evidence of actual consciousness. I’d be more convinced if it correctly completed a brand new puzzle, which has never been done before and has not been posted about on the internet or written about in scientific journals or text books. But so far all the evidence of general intelligence is predicting the response to a question or puzzle for which there is ample data about the correct response.

      • Haatveit@beehaw.org · 3 points · 1 year ago

        To me, what is surprising is that people refuse to see the similarity between how our brains work and how neural networks work. I mean, it’s in the name. We are fundamentally the same, just on different scales. I believe we work exactly like that, but with way more inputs and outputs and a much deeper network; the fundamental principles, I think, are the same.
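        (The shared fundamental principle being referred to here is the artificial neuron: a weighted sum of inputs pushed through a nonlinearity. A minimal illustrative sketch, with made-up weights and not modelling any particular network, let alone a biological neuron:)

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs, plus a bias term
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid nonlinearity squashes the result into (0, 1)
    return 1 / (1 + math.exp(-activation))

# Two inputs, arbitrary weights: activation = 0.8 - 0.2 + 0.1 = 0.7
print(neuron([1.0, 0.5], [0.8, -0.4], 0.1))  # → roughly 0.668
```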

        • Parsnip8904@beehaw.org · 0 points · 1 year ago

          The difference is that somehow the nets in our brains are creating emergent behaviour, while the nets in code, even with a lot more power, aren’t. I feel we are probably missing something pivotal in how we construct them.

          • Haatveit@beehaw.org · 2 points · 1 year ago

            I’m not so sure we’re missing that much, personally. I think it’s more just sheer scale, as well as the complexity of the input and output connections (unlike machine learning networks, living things tend to have a much more ‘fuzzy’ sense of inputs and outputs). And of course sheer computational speed; our virtual networks are basically at a standstill compared to the parallelism of a real brain.

            Just my thoughts though!

    • gt24@lemmy.sdf.org · 3 points · 1 year ago

      AI coming up with sayings of that type is something already being done (https://inspirobot.me/). YouTube reaction videos exist referring to that site (like “Ai Generates Hilarious Motivational Posters” by jacksepticeye).

      • frog 🐸@beehaw.org · 3 points · 1 year ago

        Somebody should tell Musk there’s already an AI that can determine the “true nature of the universe”, so he doesn’t need to waste his money. Inspirobot seems to have done it already. 😀

  • audaxdreik@pawb.social · 24 points · 1 year ago

    I’d like to see evidence that he understands literally any other, smaller component of the universe first.

  • EijiT@lemmy.fmhy.ml · 11 points · 1 year ago

    understand the true nature of the universe

    This sounds like a line from some kind of cult

  • lightninhopkins@beehaw.org · 11 points · 1 year ago

    Hired a bunch of dudes that he pays to hang around with and feel smart.

    His ego must be smarting after the Twitter debacle.

  • Grimpen@lemmy.ca · 10 points · 1 year ago

    Heard about this earlier… on Mastodon.

    I think “working closely” with Twitter might not be such an advantage.

  • Atiran@lemm.ee · 8 points · 1 year ago

    How has he not been forced out of Tesla yet? Does he spend any time with them at all?

    • KRAW@linux.community · 6 points · 1 year ago

      According to a friend of mine who is an employee, he shows up every now and then to throw a fit and go on firing sprees.

    • Fauxreigner@beehaw.org · 1 point · 1 year ago

      Your second question answers your first. The more he leaves Tesla alone, the better for them. It’s probably pretty hard to determine how much having his name associated with Tesla is hurting the company vs. how much it would help or hurt to get rid of him, but as long as he’s attached yet absent, it’s very much a “devil you know” situation.

  • Elyssa@beehaw.org · 4 points · 1 year ago

    Musk starts a company to verify what he’s known all along:

    Universe Simulation Detection: running... Result: Confirmed

  • MrSpArkle@lemmy.ca · 3 points · 1 year ago

    This fool gonna build a Total Perspective Vortex and about to be shocked he ain’t it.