• PushButton@lemmy.world · 19 days ago

    During that time, you can easily install Ollama on an old computer.

    With a client like Oatmeal, you can save, reload, or delete your sessions as you wish, so your model remembers what you want.

    I am running llama3.1:8b; it’s good enough for day-to-day operations.
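
    If you’re curious what talking to that local model looks like, here is a minimal sketch against Ollama’s HTTP API in Python. It assumes the server is listening on its default port (11434) and that llama3.1:8b has already been pulled; the prompt is just an example.

    ```python
    # Minimal sketch: query a local Ollama server over its HTTP API.
    # Assumes Ollama is running on the default port and that
    # "llama3.1:8b" has already been pulled.
    import json
    import urllib.request

    payload = {
        "model": "llama3.1:8b",
        "prompt": "Summarize my day-to-day TODO list in one sentence.",
        "stream": False,  # ask for a single JSON response instead of a stream
    }

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
    ```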

    • Need for a spyware: 0
    • Need to take screenshots of my desktop: 0
    • Need to buy another computer for the hype chipset: 0
    • Need of Microsoft bullshit: 0

    My old computer is apparently “not good enough” for Windows 11, but it’s surely good enough for my personal AI running on Linux!

    • red_pigeon@lemm.ee · 19 days ago

      Interesting. A few questions, if I may.

      Are you running Ollama on the same system as the one consuming it? If so, does it always run in the background? Does it impact the performance of other applications while it runs in the background?

      • PushButton@lemmy.world · 19 days ago

        No, Ollama is running on an old PC with a GeForce 1060 and 16 GB of RAM…

        Yes, it’s a “webserver” running in the background exposing an API.

        However, if I “top” my system without chatting, it sits at 0% usage; it’s only when I ask something that the system peaks at around 55-70% CPU.

        You have to understand there are two things here: the server and the model. The server is always running, but it requires next to nothing in terms of resources.

        The model is what computes your answers; that is the heavy part. It’s loaded on use, then unloaded after a delay.

        TL;DR: To answer your real question, you could run Ollama on the same system you are using.
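
        If you want to control that unload delay per request, the API accepts a keep_alive field; here is a minimal sketch, assuming the same default port and model as above:

        ```python
        # Rough sketch: ask Ollama to keep the model resident for 10 minutes
        # after this request instead of the default unload delay.
        # Assumes the server is on the default port and llama3.1:8b is pulled.
        import json
        import urllib.request

        payload = {
            "model": "llama3.1:8b",
            "prompt": "ping",
            "stream": False,
            "keep_alive": "10m",  # how long the model stays loaded after answering
        }

        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )

        with urllib.request.urlopen(req) as resp:
            json.loads(resp.read())  # model is now loaded and will idle for ~10 minutes
        ```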

      • brucethemoose@lemmy.world · 17 days ago (edited)

        You can use larger “open” models through free or dirt-cheap APIs though.

        TBH, local LLMs are still kinda “meh” unless you have a high-VRAM GPU. I agree that 8B is kinda underwhelming, but the step up to something like Qwen 14B is enormous.
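
        As a rough sketch of what using a hosted larger model looks like, most of those providers expose an OpenAI-compatible endpoint; the base URL, API key, and model name below are placeholders, not a recommendation of any specific provider:

        ```python
        # Sketch: calling a larger open-weight model through a hosted,
        # OpenAI-compatible API. The base_url, api_key, and model name are
        # hypothetical placeholders; substitute whatever provider you pick.
        from openai import OpenAI

        client = OpenAI(
            base_url="https://api.example-provider.com/v1",  # placeholder provider
            api_key="YOUR_API_KEY",
        )

        reply = client.chat.completions.create(
            model="qwen-2.5-14b-instruct",  # example of a larger open model
            messages=[{"role": "user", "content": "Hello from an old laptop!"}],
        )
        print(reply.choices[0].message.content)
        ```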