• kill_dash_nine@lemm.ee
    16 days ago

    You would be surprised. If you haven’t tried running an LLM on Apple silicon, it’s pretty snappy, but like everywhere else, RAM can be a significant limiting factor unless the model is trimmed down to do very specific things, which reduces its size.
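    To put rough numbers on the RAM point: weight memory is roughly parameter count times bytes per weight, which is why quantized or trimmed models fit on machines that full-precision ones don’t. This is just a back-of-envelope sketch (weights only, ignoring KV cache and runtime overhead), with a 7B model picked as an arbitrary example:

    ```python
    def weight_memory_gib(params_billions: float, bits_per_weight: int) -> float:
        """Approximate RAM for the weights alone, in GiB."""
        bytes_total = params_billions * 1e9 * bits_per_weight / 8
        return bytes_total / (1024 ** 3)

    # A 7B-parameter model at different precisions:
    for bits in (16, 8, 4):
        print(f"7B @ {bits}-bit: ~{weight_memory_gib(7, bits):.1f} GiB")
    # 16-bit needs ~13 GiB just for weights; 4-bit squeezes into ~3.3 GiB,
    # which is the difference between fitting in unified memory or not.
    ```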