• DocRekd@lemm.ee · 1 year ago

    Nowadays, LLMs can be run on consumer hardware, so the “dead battery” analogy falls short here too.

    • FLeX@lemmy.world · 1 year ago

      With the same efficiency? I’d be interested in an example.

      Why is everyone using these crappy SaaS offerings, then?

      • AdrianTheFrog@lemmy.world · edited · 1 year ago

        Llama 2 and its derivatives, mostly. A simple local UI is available here.

        Not as good as ChatGPT 3.5 in my experience. It just kind of falls apart on anything too complex, and is a lot more likely to get things wrong.

        I tried it out using the ‘Open-Orca/OpenOrcaxOpenChat-Preview2-13B’ model (4-bit, 32g quantization). It’s surprisingly fast at generating; it seems significantly faster than ChatGPT on my 3060 (with ExLlama).
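        For anyone curious what loading a quant like that looks like in code, here’s a rough sketch using the transformers/auto-gptq integration rather than ExLlama itself (whose Python API I won’t try to quote from memory). The repo id is a placeholder for whichever GPTQ upload of the model you actually grab:

        ```python
        # Sketch: loading a 4-bit GPTQ quant via transformers + auto-gptq
        # (assumes `pip install transformers optimum auto-gptq` and a CUDA GPU).
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "TheBloke/OpenOrcaxOpenChat-Preview2-13B-GPTQ"  # placeholder repo id
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        # device_map="auto" spreads the layers across available GPU/CPU memory
        model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

        inputs = tokenizer("What is a quantized model?", return_tensors="pt").to(model.device)
        output = model.generate(**inputs, max_new_tokens=128)
        print(tokenizer.decode(output[0], skip_special_tokens=True))
        ```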

        There are also some models tuned specifically to actually answer your requests instead of giving the ‘As an AI language model’ kind of response.

        Edit: just tried a newer model (dolphin-2.1-mistral-7b) and it’s a lot better.
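        If you’d rather skip a web UI entirely, a minimal way to run a GGUF quant of dolphin-2.1-mistral-7b is llama-cpp-python; the model path below is an assumption about which quant file you downloaded:

        ```python
        # Minimal sketch: running a GGUF quant of dolphin-2.1-mistral-7b locally
        # with llama-cpp-python (`pip install llama-cpp-python`).
        from llama_cpp import Llama

        llm = Llama(
            model_path="./dolphin-2.1-mistral-7b.Q4_K_M.gguf",  # assumed filename
            n_ctx=4096,       # context window
            n_gpu_layers=-1,  # offload all layers to the GPU if the build supports it
        )

        out = llm("Explain what a quantized LLM is in one sentence.", max_tokens=128)
        print(out["choices"][0]["text"])
        ```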

      • DocRekd@lemm.ee · 1 year ago

        For the same reason SaaS is popular in general: yes, you could get a VPS, install all the needed software on it, and keep it up to date, or you could pay a company to do all that for you.