We need more people to code like this.
Everyone codes like it will be the only process running on their dev machine. Before you know it, you’ve opened a word processor, a mail client and a chat program. 8GB RAM is gone and the 3.2GHz 8-core CPU is idling at 35%.
Any processor can run LLMs. The only issues are how fast it goes and how much RAM it has access to. And you can trade the latter for disk space if you’re willing to sacrifice even more speed.
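Rough sketch of that RAM-for-disk trade (not from the article, just a toy illustration with NumPy using a dummy weight file created on the spot): memory-map the weights so the OS pages them in from disk on demand instead of holding them all in RAM.

```python
import numpy as np

# Create a small dummy weight file on disk (a stand-in for a real checkpoint).
rows, cols = 256, 256
np.random.rand(rows * cols).astype(np.float32).tofile("weights.f32")

# Memory-map it: the weights stay on disk and the OS pages them in on demand,
# so resident RAM stays small at the cost of much slower access.
weights = np.memmap("weights.f32", dtype=np.float32, mode="r", shape=(rows, cols))

# One matrix-vector product, faulting in only the pages it actually touches.
x = np.random.rand(cols).astype(np.float32)
y = weights @ x
print(y.shape)
```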
If it can add, it can run any model
Yes, and a big part of AI advancement is running it on leaner hardware, using less power, with more efficient models.
Not every team is working on building bigger with more resources. Showing off how much they can squeeze out of minimal hardware is an important piece of this.
Yeah, the Church-Turing thesis holds that you can run an LLM on a Casio wristwatch (if for some reason you wanted to do that). I can’t imagine this is exactly what you’d call ‘good’…
The story that this 260K parameter model generated (in the screenshot of their report):
Sleepy Joe said: “Hello, Spot. Do you want to be careful with me?” Spot told Spot, “Yes, I will step back!”
Spot replied, “I lost my broken broke in my cold rock. It is okay, you can’t.” Spoon and Spot went to the top of the rock and pulled his broken rock. Spot was sad because he was not l
It’s still impressive that they got it running. I look forward to seeing if their BitNet architecture produces better results on the same hardware.
Before we go, please note that EXO is still looking for help. If you also want to avoid the future of AI being locked into massive data centers owned by billionaires and megacorps and think you can contribute in some way, you could reach out.
Unfortunately Biden just signed an executive order allowing data centers to be built on federal land…
And Trump is 100% going to sell it off to the highest bidder and/or give a bunch to Elmo
There’s a huge bit of space between “more datacenter space” and “ah well, on-prem and self-host are dead”. Like, this is a 2024-voter level of apathy.
Well…hold on a second. I was with you through most of that, but then you said Elmo at the end.
Maybe I’m stupid…maybe there’s another Elmo that makes WAAAAAY more logical sense. But what I’M envisioning is this army of AI Sesame Street Elmos harvesting your data, tracking/stalking you all day, and plotting the best time to come and tickle you.
I don’t get it. What are you SAYING???
Elongated Muskrat
This is like saying “my 30-year-old bike still works under very specific conditions”
You exist in a world where people overclock systems to eke out 3% more performance from the metal, and somehow hammering some performance out of the software seems wasteful?
This kind of thinking seems to be a “slow? Throw more hardware at it” kind of mentality that I only see in … Wait; you’re a Java programmer.
Or Python, don’t leave us out.
What is this, Dr. Sbaitso?
THIS IS NOT THE ONION?!
You can run AI on a smartwatch, it’s just not going to be very intelligent. The fact that you can technically do it isn’t necessarily very impressive.
I installed GPT4All and tested all the models available on it, and it sucks …
If it can work in such a small space, think of what it can do on even a low-end Android phone.
I don’t need my phone to write me stories, but I would like it to notice when my flight is running late and either call up customer support and book a new connection or get a refund (like Facebook’s M was bizarrely adept at doing), or just let my contacts in the upcoming meeting know I’m a bit late.
If it searches my mail I’d like it to be on the device and never leverage a leaky, insecure cloud service.
With a trim setup like this, they’ve shown it’s possible.
Why would you need an LLM for that?
We have a standard, it’s called RSS.
We have scripting. We also have visual scripting. That there’s no customer-facing tool for that … is not the customer’s fault, but it’s not a sign of some fundamental limitation either.
Customer support would, in fact, be happier with an e-mail from a template than with a robot call (and they likely have robot detection and would drop such calls anyway).
Informing your contacts is better done with a template too.
However, now that I think about it, if such a tool existed, it could use an LLM as a fallback in each case where we don’t have a better option: no good source of data about your flights, no fitting e-mail template, some point of that template lacking, or confusion while parsing the company page for the support e-mail address.
But that would still be better as some “guesser” element in a visual script, one used only when there’s nothing more precise.
I think such a product could be more popular than a bare LLM that you tell to do something and are never certain whether it’s going to do a wildcard weirdness or be fine. Something roughly like the sketch below.
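A very rough sketch of that fallback idea, in Python; every function name here is a hypothetical placeholder, not any real API. The deterministic sources and templates run first, and the “guesser” is only consulted when they come up empty.

```python
from typing import Optional

def flight_status_from_airline_feed(flight_no: str) -> Optional[int]:
    """Preferred path: a structured source (airline API, calendar, RSS). Stubbed out."""
    return None  # pretend the structured source had nothing for us

def templated_delay_email(flight_no: str, delay_min: int) -> str:
    """Preferred path: a fixed template - predictable, and no robot calls."""
    return (f"Hello, flight {flight_no} is delayed by roughly {delay_min} minutes. "
            f"Please advise on rebooking or a refund.")

def llm_guess(prompt: str) -> str:
    """Last resort: the 'guesser' element. Stubbed; imagine a small on-device model."""
    return f"[LLM draft for: {prompt}]"

def draft_support_email(flight_no: str, assumed_delay_min: int = 45) -> str:
    delay = flight_status_from_airline_feed(flight_no)
    if delay is not None:
        # Precise data available -> use the boring, reliable template.
        return templated_delay_email(flight_no, delay)
    # Nothing precise available -> fall back to the guesser, clearly marked as such.
    return llm_guess(f"write a polite delay e-mail for flight {flight_no}")

print(draft_support_email("LH123"))
```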
Me: Hmmmmmmmmm…I only vaguely have an idea what’s even being discussed here. They somehow got Windows 98 to run on a Llama. Probably picked a llama because if they used a camel it would BSOD.
Username checks out
Let me know when someone gets an LLM to run on a Bendix G15. This Pentium stuff is way too new.
Imagine how much better it would run on a similar-era version of Red Hat, Gentoo, or BeOS.
They just proved that the hardware was perfectly capable even with an absolute garbage middle layer; the operating system is what matters for propelling the hardware’s potential forward into a usable form.
Many people may not remember, but there were a few Linux distributions around at the time. Certainly, they would have been able to make better use of the hardware had enough developers worked on it.
But the hardware is not capable. It’s running a minuscule custom 260K LLM, and the “claim to fame” is that it wasn’t slow. Great? We already know tiny models are fast; they’re just not as accurate and perform worse than larger models. All they did was make an even smaller than normal model. This is akin to getting Doom to run on anything with a CPU: while cool and impressive, it doesn’t do much for anyone beyond being an exercise in doing something because you can.
With your first sentence, I can say you’re wrong. My 1997-era DX4-75 MHz ran Red Hat wonderfully. And SUSE, and Gentoo.
As for the rest? You don’t know what an AI/LLM would’ve looked like on a processor from the era. No one even thought of it then. That doesn’t mean it can’t run it. It just means you can’t imagine that.
Fortunately, I do not lack imagination for what could be possible.
With your first sentence, I can say you’re wrong.
Except I’m not wrong. The model they ran is roughly four orders of magnitude smaller than even the smallest “mini” models that are generally available; see TinyLlama 1.1B [1] or Phi-3 3.8B mini [2] to compare against. Most “mini” models range from 1 to about 10 billion parameters, which makes running them incredibly inefficient on older devices (rough numbers after the links below).
That doesn’t mean it can’t run it. It just means you can’t imagine that.
But I can imagine it. In fact, I could have told you it would have needed a significantly smaller model in order to run at an adequate pace on older hardware. It’s not at all a mystery; it’s a known factor. I think it’s absolutely cool that they did it, but let’s not pretend it’s more than what it is - a modern version of running Doom on non-standard hardware.
[1] https://huggingface.co/TinyLlama/TinyLlama-1.1B-step-50K-105b
[2] https://ollama.com/library/phi3:3.8b-mini-128k-instruct-q5_0
[3] https://www.thirtythreeforty.net/posts/2019/12/my-business-card-runs-linux/
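For a rough sense of scale, my own back-of-the-envelope arithmetic (assuming 4 bytes per weight at fp32 and about 0.5 bytes per weight at 4-bit quantization):

```python
# Memory needed just to hold the weights, which is roughly why a 260K model
# fits on a Pentium-class machine and a billion-parameter model does not.
models = {
    "stories-style 260K": 260_000,
    "TinyLlama 1.1B": 1_100_000_000,
    "Phi-3 mini 3.8B": 3_800_000_000,
}

for name, params in models.items():
    fp32_mib = params * 4 / 2**20    # 4 bytes per weight at fp32
    q4_mib = params * 0.5 / 2**20    # ~0.5 bytes per weight at 4-bit
    print(f"{name:>18}: {fp32_mib:9.1f} MiB fp32, {q4_mib:8.1f} MiB at ~4-bit")
```

That puts the 260K model around 1 MiB of weights, versus multiple gigabytes for the 1B+ “mini” models, which is the whole gap being argued about here.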
We can all imagine it; it wouldn’t run very well, as shown in the article…