It’s not close to 100%, it is by formal definition 100%. It’s a calculus thing: when a value y depends on a value x, and y approaches 1 as x approaches infinity, then in the limit where x is infinite, y = 1.
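The limit being described can be sketched numerically. This is a minimal illustration, not anything from the thread: assume each "monkey" gets one attempt at a short target string over a 26-letter alphabet, and compute the chance that at least one of n monkeys succeeds.

```python
# Chance that at least one of n independent monkeys types a given
# string in a single attempt. Target length and alphabet size are
# illustrative assumptions.

def p_at_least_one(n: int, target_len: int, alphabet: int = 26) -> float:
    p_single = alphabet ** -target_len   # one monkey, one attempt
    return 1 - (1 - p_single) ** n       # complement of "every monkey misses"

for n in (10**3, 10**6, 10**9):
    print(f"{n:>10} monkeys -> P = {p_at_least_one(n, target_len=3):.6f}")
```

Even with a tiny per-monkey probability, the total probability climbs toward 1 as n grows, which is the limit the comment is pointing at.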
Indeed, the formal definition doesn’t actually specify how many monkeys will write what, given an infinite number of monkeys; it’s unknowable (that’s just how probability is). We only know that it will almost surely happen, but that doesn’t mean it will happen an infinite number of times.
The infinite-time version is just as vague: one monkey will almost surely type a specific thing, eventually, given infinite time to type it. This is because when you throw infinities at probability, every event with nonzero probability tends to certainty. Given an infinite amount of time, all things that can happen will almost surely happen, eventually.
Almost surely; I’m quoting mathematicians. Because an infinite anything also includes events that are possible but have probability zero. So, sure, the probability is 100% (more accurately, it tends to 1 as the number of monkeys approaches infinity), but that doesn’t mean it will occur. Just like 0% doesn’t mean it won’t, because, well, infinity.
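The "probability zero doesn’t mean impossible" point can be written out explicitly. A standard illustration (my example, not from the thread) is a uniform random draw from an interval:

```latex
% For X uniform on [0,1], every individual point has probability zero,
% yet some point is always realized:
P(X = c) = 0 \quad \text{for every } c \in [0,1],
\qquad\text{yet}\qquad
P\big(X \in [0,1]\big) = 1.
% Likewise, with n monkeys and per-monkey success probability p > 0:
\lim_{n \to \infty} \Big( 1 - (1-p)^n \Big) = 1 .
```

"Almost surely" is the technical term for exactly this: probability 1, with a probability-0 set of exceptions still allowed.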
Calculus is a bitch.
In typical statistical-mathematician fashion, it’s ambiguously “almost surely at least one”. Infinity is very large.
The whole point is that one of the terms has to be infinite. But it also works with an infinite number of monkeys: one will almost surely start typing Hamlet right away.
The interesting part is that it has already happened: an ape already typed Hamlet, and we call him Shakespeare. But at the same time, monkeys aren’t random letter generators; they are very intentional, conscious beings and not truly random at all.
Or you could, you know, pay a person a living wage to be physically present at the store to assist shoppers and review the sales.
Or, hear me out. Maybe a 70% review requirement is not automation at all. Just saying.
The only money to be made in the LLM craze is data scraping, collection, filtering, collation and data set selling. When in a gold rush, don’t dig, sell shovels. And AI needs a shit ton of shovels.
The only people making money are Nvidia, the third-party data center operators and the data brokers. Everyone else running and using the models is losing money. Even OpenAI, the biggest AI vendor, is running at a loss. Eventually the bubble will burst, and the data brokers will still have something to sell. In the meantime, the fastest way to increase model performance is to increase the size, and that means more data is needed to train them.
I agree it is not that they can’t, it’s that they haven’t wanted to. But the truth is they haven’t done it. Current offers match exactly what I described in my comment. Intel and AMD have been resting on their laurels, and ARM is coming for their lunch unless they move quickly.
Power efficiency. ARM promises the same performance at lower temps and wattage than x86 at competitive price points. That’s a really attractive proposition for the laptop market. x86 can be as small-format, as power-efficient, as cheap, or as powerful as ARM, but not all at the same time.
Most likely, as with all AI-as-a-service startups. After a certain mass of users the models can’t keep up, so to reduce response times they pay offshore firms to have real people answer the chat. Unfortunately, doctors willing to answer a chat all day are far less numerous than cheap labor.
It means nothing; it’s just a check you write, and then you get to say “I certify my OS is Unix”. The slightly more technical part is POSIX compliance, but modern OSs are such massive, complex beasts today that those certifications cover only a tiny part of the system and are slowly but surely becoming irrelevant over time.
Apple got OS X Unix-certified because it was cheap and it got them off the hook in a lawsuit. That’s it.
Luddites weren’t against new technology; they were against the aristocrats using new technology as a tool or excuse to oppress and kill the laboring class. The problem was not the new technology; the problem was that people were dying of hunger and being laid off in droves. Destroying the machinery, which they themselves almost always operated in said aristocrats’ factories, was an act of protest, just like a riot or a strike. It was a form of collective bargaining.
You assume most stock investors read beyond the headline, you assume wrong.
Well, you see, that’s the really hard part of LLMs. Getting good results is a direct function of the size of the model: the bigger the model, the more effective it can be at its task. However, there’s something called the compute-efficient frontier (there’s a technical but neatly explained video about it). Basically, you can’t make a model more effective at its computations beyond that boundary for any given size. The only way to make a model better is to make it larger (what most megacorps have been doing) or to radically change the algorithms and methods underlying the model. The latter has proven extraordinarily hard, mostly because understanding what is going on inside the model requires thinking in rather abstract and esoteric mathematical principles that bend your mind backwards.

You can compress an already trained model to run on smaller hardware, but to train one you still need humongously large datasets and power-hungry processing. This is compounded by the fact that larger and larger models are ever more expensive while providing rapidly diminishing returns.

Oh, and we are quickly running out of quality usable data, so shoveling in more data after a certain point actually starts to produce worse results, unless you dedicate thousands of hours of human labor to producing, collecting and cleaning new data. That’s all before you even address data poisoning, where previously LLM-generated data is fed back in to train a model; it is very hard to prevent, and it makes models devolve into incoherence after a couple of generations.
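The diminishing-returns claim can be sketched with a Chinchilla-style scaling law, where loss falls as a power law in parameter count N and training tokens D. The coefficients below are made-up illustrative values, not the published fits; only the power-law shape matters here.

```python
# Illustrative scaling-law sketch: loss = E + A/N^alpha + B/D^beta.
# Every constant here is an assumption chosen for illustration.

def loss(n_params: float, n_tokens: float,
         e: float = 1.7, a: float = 400.0, b: float = 400.0,
         alpha: float = 0.34, beta: float = 0.28) -> float:
    return e + a / n_params**alpha + b / n_tokens**beta

prev = None
for n in (1e9, 1e10, 1e11, 1e12):
    cur = loss(n, n_tokens=1e12)
    gain = "" if prev is None else f"  gain vs previous: {prev - cur:.4f}"
    print(f"{n:.0e} params  loss = {cur:.4f}{gain}")
    prev = cur
```

Each 10x jump in size buys a smaller loss reduction than the last one, which is why "just make it bigger" gets so expensive so fast.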
Replacing garbage with sewer water. Not exactly an improvement.
Bazzite comes with Wine all set up by default. KDE’s file manager can automatically run an .exe with Wine in a default prefix.
Most distributions and DEs already package Wine in a set-it-and-forget-it configuration. By default, Wine has a system-wide prefix, so clicking any .exe in the file system automatically runs it in that default prefix. This way of doing things predates WSL by a long time. It is just safer and better practice to set up a new prefix for every piece of software, especially games.
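The per-program prefix pattern looks roughly like this, assuming Wine is installed; the prefix path and installer name are placeholders, not anything specific.

```shell
# Default system-wide prefix lives at ~/.wine; double-clicking an .exe
# usually runs it there. Safer pattern: one isolated prefix per program.
# "$HOME/.wine-prefixes/mygame" and mygame-setup.exe are examples.

export WINEPREFIX="$HOME/.wine-prefixes/mygame"
mkdir -p "$WINEPREFIX"
wineboot --init             # initialize the prefix (fresh C: drive, registry)
wine ./mygame-setup.exe     # this install only touches the new prefix
```

Run future launches with the same WINEPREFIX exported, and a broken install can be deleted by removing just that one directory.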
Some Linux installers will refuse to erase the BitLocker partition automatically. Then you have to erase it manually before running the installer.
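The manual erase can be done from a live session along these lines; `/dev/nvme0n1p3` is a placeholder, so verify the actual device with `lsblk` before running anything destructive.

```shell
# Identify the BitLocker partition first (look for the BitLocker/NTFS
# signature in the FSTYPE column). The device name below is an example.
lsblk -f

# Erase the partition's filesystem signatures so the installer sees
# free space. This destroys the data on that partition.
sudo wipefs -a /dev/nvme0n1p3
```

After that, rerun the installer and it should offer the freed partition normally.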
Except that’s in the real world of physics. In this mathematical/philosophical hypothetical scenario, x is infinite, and thus the probability is 1. It doesn’t just approach infinity; it is infinite.