Windows 11
It was last year. You missed it.
Oh come on, there’s nothing irresponsible or creepy about wanting to collect retinal images of every person on the planet into a single, Internet-connected database. You’re just being paranoid.
Hahhh… well really, the only way to test backups is to try to restore from them. VMs are extremely helpful for this - you can restore a backup into a VM mirror of your production system, see if it works as expected, and wipe it and start over if it doesn’t.
I second this - virtualization is the easiest way to branch out and try new things. You can keep the working system you already have, and also experiment with other systems.
A further advantage is that you can run services in separate VMs, which helps if you need isolated contexts for security, privacy, or stability reasons. And, if you break something while you’re learning, you can just delete that VM and start over without affecting your other working services.
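If it helps, here’s roughly what that restore-test loop can look like. This is just a minimal sketch, assuming restic for backups and a libvirt scratch VM; every name in it (the VM, the snapshot, the repo path, “myservice”) is a placeholder for your own setup:

```python
#!/usr/bin/env python3
"""Throwaway restore test: revert a scratch VM to a clean snapshot,
push the latest backup into it, and run a smoke test. Every name here
(VM, snapshot, repo, service) is a placeholder for your own setup."""
import subprocess

VM = "restore-test"            # scratch libvirt VM, never production
SNAPSHOT = "clean"             # snapshot taken while the VM was shut off
REPO = "/srv/backups/restic"   # restic repository (swap in your tool)
GUEST = "root@restore-test"    # SSH address of the scratch VM

def run(*cmd: str) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Reset the scratch VM to a known-clean state and boot it.
run("virsh", "snapshot-revert", VM, SNAPSHOT)
run("virsh", "start", VM)

# 2. Restore the latest backup locally, then copy the data into the guest.
run("restic", "-r", REPO, "restore", "latest", "--target", "/tmp/restore")
run("rsync", "-a", "/tmp/restore/srv/", f"{GUEST}:/srv/")

# 3. Smoke test: does the service actually come back up on restored data?
run("ssh", GUEST, "systemctl", "restart", "myservice")
run("ssh", GUEST, "systemctl", "is-active", "myservice")
print("restore test passed; wipe the VM and do it again next month")
```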
The revolution will not be televised.
I’m sure he’ll clean the rug.
Java is to JavaScript as car is to carpet.
I mean… the Pokémon kind of get forced into combat for your entertainment… so more like cockfighting?
Jenny’s number: (area code) 867-5309
Of course it probably doesn’t matter if you also use a credit card to make the purchase - every single purchase is fed into your personal consumer profile.
You probably agreed to it when you installed the app.
AI learning isn’t the issue; it’s not something we will be able to put a lid on either way.
So… there is no Artificial Intelligence. The AI cannot hurt you. It is just a (buggy) statistical language parsing system. It does not think, it does not plan, it does not have goals, it does not understand, and it doesn’t even really “learn” in a meaningful sense.
It will either destroy or save the world.
If we’re talking about machine learning systems based on multi-dimensional statistical analyses, then it will do neither. Both extremes are sensationalism, and any argument that either outcome will come from the current boom of ML technology is utter nonsense designed to drive engagement.
It doesn’t need to learn much to do so besides evolving actual self-agency and sovereign thought.
Oh, is that all?
No one on the planet has any idea how to replicate the functionality of consciousness. Sam Altman would very much like you to believe that his company is close to achieving this so that VCs will see the public interest and throw more money at him. Sam Altman is a snake oil salesman.
What is a huge issue is the secretive, non-consensual mining of people’s identities and expressions.
And then acting all normal about it.
This is absolutely true and correct and the collection and aggregation of data on human behavior should be scaring the shit out of everyone. The potential for authoritarian abuses of such data collection and tracking is disturbing.
That bug was so egregious, it demonstrates a rare level of incompetence.
I wish so much this was true, but it super isn’t. Some of the recent Cisco security flaws are just so brain-dead stupid you wonder if they have any internal quality control at all… and, well, there was the CrowdStrike thing…
Is “dragged” the new “slammed”?
- a few git repos (pushed, and backed up for the important stuff) with all the docker compose files, keys and such (the 5%)
Um, maybe I’m misunderstanding, but you’re storing keys in git repositories which are where…?
And remember, if you haven’t tested your backups then you don’t have backups!
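A test doesn’t have to be elaborate, either. Here’s a minimal sanity-check sketch, again assuming restic; the repo and data paths are placeholders, and files that changed since the last snapshot will show up as mismatches too (which is also useful, since it tells you the backup is stale):

```python
#!/usr/bin/env python3
"""Backup sanity check: restore the latest snapshot into a temp dir
and compare file hashes against the live data. restic and the paths
are assumptions; swap in whatever backup tool you actually use."""
import hashlib
import subprocess
import tempfile
from pathlib import Path

REPO = "/srv/backups/restic"   # placeholder repository path
LIVE = Path("/srv/important")  # the data the backup claims to protect

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

with tempfile.TemporaryDirectory() as tmp:
    # restic recreates the original absolute paths under --target.
    subprocess.run(
        ["restic", "-r", REPO, "restore", "latest", "--target", tmp],
        check=True,
    )
    restored = Path(tmp) / LIVE.relative_to("/")
    bad = [
        p.relative_to(LIVE)
        for p in LIVE.rglob("*")
        if p.is_file()
        and (
            not (restored / p.relative_to(LIVE)).is_file()
            or sha256(restored / p.relative_to(LIVE)) != sha256(p)
        )
    ]
    print("backup OK" if not bad else f"MISMATCH: {bad[:10]}")
```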
You jest but… delay line memory
I see, so your argument is that because the training data is not stored in the model in its original form, it doesn’t count as a copy, and therefore it doesn’t constitute intellectual property theft. I had never really understood what the justification for this point of view was, so thanks for that, it’s a bit clearer now. It’s still wrong, but at least it makes some kind of sense.
If the model “has no memory of training data images”, then what effect do the images have on the model? Why is the training data necessary? What is its function?
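To make that question concrete, here’s a toy analogy (plain least-squares fitting, nothing like a real image model): the data’s function is to steer the parameters during training, and what remains afterwards is a statistical summary of the data rather than the samples themselves.

```python
import numpy as np

# Toy "model": fit y = w*x + b to noisy points by gradient descent.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 3.0 * x + 1.0 + rng.normal(0, 0.1, 100)   # stand-in for training data

w, b = 0.0, 0.0
for _ in range(2000):
    err = (w * x + b) - y
    w -= 0.1 * (err * x).mean()   # every sample nudges the parameters
    b -= 0.1 * err.mean()

print(round(w, 2), round(b, 2))   # ~3.0, ~1.0
# What survives training is a statistical summary of the data (here,
# two numbers), not any individual (x, y) pair; but with no data there
# would be nothing to summarize. That is the training data's "function".
```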
So it’s possible to have two different GPUs rendering content which is then output to a single monitor at the same time? Could I have a game rendered by a discrete GPU running in a window being handled by an Xorg session rendered by an integrated GPU? Do I understand this correctly? Would it matter if the video output was physically connected to the discrete GPU or the motherboard, or is that configurable?
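Yes, that’s essentially PRIME render offload: the discrete GPU renders the window into offscreen buffers, and the GPU driving the display composites them, so the monitor normally stays plugged into the motherboard/iGPU port. A minimal launcher sketch, assuming Linux with either Mesa or the NVIDIA proprietary driver (glxgears is just a stand-in test program):

```python
#!/usr/bin/env python3
"""Run one program on the discrete GPU while the desktop session keeps
rendering on the integrated one (PRIME render offload). glxgears is
just a test program; the env var names are the standard Mesa and
NVIDIA-proprietary-driver ones."""
import os
import subprocess

env = os.environ.copy()
env["DRI_PRIME"] = "1"                       # Mesa drivers: use the dGPU
env["__NV_PRIME_RENDER_OFFLOAD"] = "1"       # NVIDIA proprietary driver
env["__GLX_VENDOR_LIBRARY_NAME"] = "nvidia"  # route GLX to NVIDIA's libs

# The dGPU renders into offscreen buffers and the iGPU driving the
# display composites them, so the monitor stays on the motherboard port.
subprocess.run(["glxgears"], env=env, check=True)
```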