• @[email protected]
    64 months ago

    That’s why I’ve stopped using non-local LLMs. Ollama works just fine on my aging RTX 2060.