Like if I type “I have two appl…”, for example, it will often suggest “apple” singular instead of plural. Just a small example, but it is really bad at predicting which variant of a word should come after the previous one.

  • Knusper@feddit.de · 7 months ago

    I guess the real question is: Could we be using (simplistic) LLMs on a phone for predictive text?

    There are some LLMs that can be run offline and that maybe wouldn’t use enormous amounts of battery. But I don’t know how good their quality is…
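
    Conceptually, predictive text from an LLM is just “take the top few next tokens”. A rough sketch of the idea with a small model (distilgpt2 via Hugging Face transformers, purely as an illustration; a real keyboard model would have to be far smaller and faster):

    ```python
    # Suggest the k most likely next tokens for a typed prefix.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("distilgpt2")
    model = AutoModelForCausalLM.from_pretrained("distilgpt2")
    model.eval()

    def suggest(prefix, k=3):
        ids = tok(prefix, return_tensors="pt")
        with torch.no_grad():
            logits = model(**ids).logits
        top = torch.topk(logits[0, -1], k)  # scores at the last position
        return [tok.decode(int(t)).strip() for t in top.indices]

    print(suggest("I have two"))  # whether "apples" makes the cut is another matter
    ```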

    • ashe@lemmy.starless.one · 7 months ago

      You can run an LLM on a phone (tried it myself once, with llama.cpp), but even with the simplest model I could find it managed maybe one word every few seconds while pinning the CPU at 100%. The quality was terrible, and your battery wouldn’t last an hour.
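
      For anyone who wants to reproduce it: with the llama-cpp-python bindings the whole experiment is a few lines (the model path is a placeholder for whatever quantized GGUF you have). The slowness comes from the hardware, not the code:

      ```python
      # Load a quantized GGUF model and ask for a short completion.
      # "./model.gguf" is a placeholder path, not a specific model.
      from llama_cpp import Llama

      llm = Llama(model_path="./model.gguf", n_ctx=512)
      out = llm("I have two appl", max_tokens=4, temperature=0.0)
      print(out["choices"][0]["text"])
      ```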

    • Munkisquisher@lemmy.nz · 7 months ago

      A pre-trained model isn’t going to learn how you type the more you use it. Though with Microsoft owning SwiftKey, I imagine they will try it soon.
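
      For contrast, classic predictive text is essentially a frequency table that does update as you type. A toy version of that kind of on-line learning (not what SwiftKey actually does, just the general idea):

      ```python
      # Toy bigram predictor that learns from what the user types,
      # i.e. the on-line updating a frozen, pre-trained LLM doesn't do.
      from collections import Counter, defaultdict

      counts = defaultdict(Counter)

      def learn(text):
          words = text.lower().split()
          for prev, nxt in zip(words, words[1:]):
              counts[prev][nxt] += 1

      def suggest(prev, k=3):
          return [w for w, _ in counts[prev.lower()].most_common(k)]

      learn("I have two apples")
      learn("I have two dogs")
      print(suggest("two"))  # ['apples', 'dogs'], learned from this user
      ```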

      • SidewaysHighways@lemmy.world · 7 months ago

        I was so heartbroken when I found out that Microsoft purchased SwiftKey. It was my favorite. Is there any way to still use it without Microsoft involved? Lawdhammercy

    • SpooksMcDoots@mander.xyz · 7 months ago

      OpenHermes 2.5 Mistral 7B competes with LLMs that require 10x the resources. You could try it out on your phone.

      • Square Singer@feddit.de · 7 months ago

        They’ll probably have to offload that to a server farm in real time. That’s not gonna be easy.
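
        Back-of-the-envelope on why: even at 4-bit quantization, a 7B model’s weights alone take over 3 GiB before you count the KV cache or the rest of the OS, which is a lot to ask of a phone’s RAM:

        ```python
        # Rough memory estimate: weights only, 4-bit-quantized 7B model.
        params = 7e9
        bytes_per_param = 0.5  # 4 bits per weight
        print(f"{params * bytes_per_param / 2**30:.1f} GiB")  # ~3.3 GiB
        ```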