Hi all, I’d like to hear some suggestions on self-hosting LLMs on a remote server and accessing them via a client app or a convenient website. Tell me about your setups, or about products that left a good impression on you.

I’ve hosted Ollama before, but I don’t think it’s intended for remote use. Then again, I’m not really an expert, and maybe there are other options, like add-ons.

Thanks in advance!

    • hendrik@palaver.p3x.de · 14 days ago

      That depends on the use case. An hour of RTX 4090 compute is about $0.69, while the graphics card alone is around $1,600.00, plus the rest of the computer and the electricity bill. I’d say you need to use it 4,000+ hours to break even. I’m not doing that much gaming and AI stuff, so I’m better off renting a cloud GPU by the hour. Of course you can optimize that: buy an AMD card, use smaller AI models and pay for less VRAM. But each of those options still has a break-even point you need to pass.
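
      Back-of-the-envelope, the break-even math looks roughly like this. The rental and card prices are the ones above; the rest-of-PC cost and electricity price are my assumptions, not hard numbers:

      ```python
      # Rough break-even sketch. Rental and card prices are from the comment above;
      # the rest-of-PC cost and electricity price are assumptions.
      RENTAL_PER_HOUR = 0.69          # cloud RTX 4090, USD/h
      CARD_PRICE = 1600.00            # RTX 4090 alone, USD
      REST_OF_PC = 800.00             # assumed: CPU, RAM, PSU, case, ...
      POWER_PER_HOUR = 0.45 * 0.30    # assumed: ~450 W at $0.30/kWh

      # Owning only pays off once the rental fees you avoid exceed the upfront cost.
      hours = (CARD_PRICE + REST_OF_PC) / (RENTAL_PER_HOUR - POWER_PER_HOUR)
      print(f"break-even after roughly {hours:,.0f} hours")   # ~4,300 h with these numbers
      ```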

      • just_another_person@lemmy.world · 14 days ago

        Yes, but running an LLM isn’t an on-demand workload, it’s always on. If you go the GPU route over CPU, you’re paying for a 24/7 GPU instance.

        • hendrik@palaver.p3x.de · 14 days ago

          Well, there’s both. I’m with RunPod and they bill me for each second the cloud instance runs. I can have it running 24/7, or 30 minutes on demand, or just 20 seconds if I want to generate a single reply or image. Behind the scenes it’s Docker containers, and one of the services is an API you can hook into. Upon request it’ll start a container, do the compute, and then, at your option, either shut down immediately, meaning you’d have paid something like 2 cents for that single request, or keep listening for more requests until an arbitrary timeout is reached. Other services offer similar things, and some other (ready-made) services instead charge a fixed price per ingested or generated token.
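
          For illustration, a single on-demand request against one of those serverless endpoints looks roughly like this. It’s a sketch assuming a RunPod-style /runsync endpoint; the endpoint ID is a placeholder and the exact payload schema depends on the worker template you deploy:

          ```python
          import os
          import requests

          ENDPOINT_ID = "your-endpoint-id"           # placeholder, not a real endpoint
          API_KEY = os.environ["RUNPOD_API_KEY"]     # assumed to be set in your environment

          # /runsync spins up a container if none is warm, runs the job, and returns the result.
          resp = requests.post(
              f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync",
              headers={"Authorization": f"Bearer {API_KEY}"},
              json={"input": {"prompt": "Write one short sentence about GPUs."}},
              timeout=300,
          )
          resp.raise_for_status()
          print(resp.json())   # whether the worker then idles or shuts down is an endpoint setting
          ```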

            • hendrik@palaver.p3x.de · 14 days ago

              What’s the difference for this task? You can rent it 24/7 as a crude web server, or run a Linux desktop inside it, or do pretty much anything you could do with other kinds of servers. I don’t think the exact technology matters. It could be a VPS, virtualized with KVM, or a container. And for AI workloads these containers have several advantages: you can spin them up within seconds, scale them, etc. I mean, you’re right, this isn’t a bare-metal server that you’re renting. But I think it aligns well with OP’s requirements?!

                • ddh@lemmy.sdf.org · 10 days ago

                  Running an LLM can certainly be an on-demand service. Apart from training, which I don’t think we are discussing, GPU compute is only used while responding to prompts.

    • EmbarrassedDrum@lemmy.dbzer0.com (OP) · 14 days ago

      No, but I have a free instance on Oracle Cloud and that’s where I’ll run it. If it’s too slow or no good I’ll stop using it, but there’s no harm in trying.

      • ddh@lemmy.sdf.org · 10 days ago

        I’d be interested to see how it goes. I’ve deployed Ollama plus Open WebUI on a few hosts, and small models like Llama3.2 run adequately (at least as fast as I can read) even on an old i5-8500T with no GPU. The Oracle Cloud free tier might work OK.
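
        If you want a quick sanity check from another machine once it’s up, you can hit Ollama’s HTTP API directly. A minimal sketch, assuming Ollama is reachable on its default port 11434 and the llama3.2 model has been pulled; the host name is a placeholder:

        ```python
        import requests

        OLLAMA_URL = "http://your-server:11434/api/generate"   # placeholder host, default Ollama port

        resp = requests.post(
            OLLAMA_URL,
            json={"model": "llama3.2", "prompt": "Say hello in five words.", "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        print(resp.json()["response"])   # the generated text
        ```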