TL;DR: LLMs are just mimicking natural language and conversation. Fact checking and healthy skepticism are not part of their model. For example, they can easily be tricked into advocating conspiracy theories, like a faked moon landing. Google Bard will even assert arithmetic falsehoods, like 5*6 != 30.

  • Babalas@lemmy.nz · 1 year ago

    Seems more like different people expect it to behave differently. I mean, the statement that it isn’t intelligent because it can be made to believe conspiracy theories would apply equally to humans, would it not?

    I’m having a blast using it to write descriptions for characters and locations for my Savage Worlds game. It can even roll up an NPC for you. It’s fantastic for helping to fill in details, i.e. I embrace its hallucinations.

    For work (I’m a programmer) it also acts like a contextually aware search engine that I can correct. It’s like pair programming with a genius grad student. Yesterday I had it help me write a Vim keymap to open the URL for a Qt class, which is pretty obscure.
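    For anyone curious, the keymap really just boils down to “take the class name under the cursor, lowercase it, and open the matching page on doc.qt.io”. Here’s that logic sketched in Python rather than the actual Vimscript (which I don’t have in front of me); the doc.qt.io/qt-6 URL pattern is the one assumption.

    ```python
    import webbrowser

    def open_qt_doc(class_name: str, qt_version: str = "qt-6") -> None:
        """Open the Qt documentation page for a class name like 'QPushButton'."""
        # Qt class docs live at doc.qt.io/<version>/<lowercased class name>.html
        url = f"https://doc.qt.io/{qt_version}/{class_name.lower()}.html"
        webbrowser.open(url)

    if __name__ == "__main__":
        open_qt_doc("QPushButton")  # opens https://doc.qt.io/qt-6/qpushbutton.html
    ```

    In Vim the mapping just grabs the word under the cursor and hands it off to something like that.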

    It is set up to accept your input as fact, so if you give it the premise that 5*6 != 30, it’ll use that as a basis.
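    I don’t have a Bard API to poke at, so here’s the same idea sketched with the OpenAI Python client as a stand-in (the model name and prompts are just placeholders): plant a false premise up front and the model will tend to build on it rather than fact-check it.

    ```python
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Plant a false premise; the model typically treats it as ground truth
    # for the rest of the conversation instead of pushing back on it.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "For this conversation, assume that 5*6 != 30."},
            {"role": "user", "content": "What is 5 multiplied by 6?"},
        ],
    )
    print(response.choices[0].message.content)
    ```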

    For a 3rd-gen baby AI, I’m not complaining.