I’m a little surprised to hear people so willing to let the government of Ireland determine who they are allowed to hate and for what reasons.
No, the tongs are for when the vacuum wants to take you alive.
It’s a good thing none of them were armed…
He’ll also have to come and help, unless he just wants to watch you go at it alone.
Mine says “cleaning”, “charging”, etc.
I disagree with you, because a modern human could offer the people of the distant past (with their far less advanced technology) solutions to their problems which would seem miraculous to them. Things that they thought were impossible would be easy for the modern human. The computer may do the same for us, with a solution to climate change that would be, as you put it, magically ecological.
With that said, the computer wouldn’t be giving humans suggestions. It would be the one in charge. Imagine a group of chimpanzees that somehow create a modern human. (Not a naked guy with nothing, but rather someone with all the knowledge we have now.) That human isn’t going to limit himself to answering questions for very long. This isn’t a perfect analogy because chimpanzees don’t comprehend language, but if a human with a brain just 3.5 times the size of a chimpanzee’s can do so much more than a chimpanzee, a computer with computational capability orders of magnitude greater than a human’s could be a god compared to us. (The critical thing is to make it a loving god; humans haven’t been good to chimpanzees.)
I don’t think you’re imagining the same thing they are when you hear the word “AI”. They’re not imagining a computer that prints out a new idea that is about as good as the ideas that humans have come up with. Even that would be amazing (it would mean that a computer could do science and engineering about as well as a human) but they’re imagining a computer that’s better than any human. Better at everything. It would be the end of the world as we know it, and perhaps the start of something much better. In any case, climate change wouldn’t be our problem anymore.
Doing things like this when you’re certainly being recorded is a great idea. Wait, hold on…
My issue with this is that it works well with sample code but not as well in real-world situations where maintaining state is important. What if rider.preferences were expensive to calculate?
Note that this code will ignore a rider’s preferences if it finds a lower-rated driver before a higher-rated driver.
With that said, I often work on applications where even small improvements in performance are valuable, but that is far from universal in software development. (Generally developer time is much more expensive than CPU time.) I use C++, so I can read this like pseudocode, but I’m not familiar with language features that might address my concerns.
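To make the concern concrete, here’s a minimal C++ sketch of what I mean. Rider, Driver, and Preferences are hypothetical stand-ins (not the actual types from the article); the only point is that the expensive call happens once, outside the loop:

```cpp
#include <vector>

// Hypothetical stand-ins for the article's types, just to illustrate caching.
struct Preferences {
    double min_rating;
};

struct Driver {
    double rating;
};

struct Rider {
    // Imagine this does expensive work (database lookups, network calls, etc.).
    Preferences preferences() const { return Preferences{4.5}; }
};

// Return the drivers this rider would accept, computing preferences only once.
std::vector<Driver> acceptable_drivers(const Rider& rider,
                                       const std::vector<Driver>& drivers) {
    const Preferences prefs = rider.preferences();  // cached outside the loop

    std::vector<Driver> result;
    for (const Driver& d : drivers) {
        if (d.rating >= prefs.min_rating) {
            result.push_back(d);
        }
    }
    return result;
}
```

Whether the language in the article offers a tidier way to get the same effect (memoization, lazy fields, etc.) is exactly what I’m not sure about.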
Your argument is reasonable, although I don’t think the fact that Google is aligned with the USA and Western Europe is a coincidence. This anti-trust action is itself a demonstration of the power that the US government does have over Google, and Google knows better than to provoke the use of that power. Anti-trust law is largely a matter of the government’s opinion rather than objective rules, so Google has no effective legal defense other than keeping the government’s opinion of it favorable.
I don’t think Google could get away with deliberately manipulating elections in the way that you propose. Even if it were to tilt the outcome from one established party to another, that party would not be beholden to it. (If the party that it helped knew that it helped, then unless that party controlled Google, it would rightly consider Google a threat rather than an ally.) Furthermore, manipulating elections would have a huge risk of being revealed and facing devastating blowback. Engineers rather than the board of directors are the ones who actually make Google function and those engineers would be neither oblivious to nor loyal to some plan for domination by the board of directors.
With that said, I disagree with you primarily because I’m very risk-averse when it comes to matters like this. Right now, the “juggernaut like Google that is The Internet” is working in our favor and if we break it up then we won’t have a juggernaut working in our favor anymore. We would be better off if we were able to accomplish what you propose while retaining dominance of the internet, but IMO the reward is not worth the risk of forfeiting that dominance. Those who are losing need to take risks but those who are winning should not, and right now the USA is winning.
I don’t see how this is bootlicking. I don’t gain anything from saying it; it’s just my sincere opinion. The USA as it is now, with the tech billionaires, is very rich and very powerful, and this does benefit ordinary Americans and not just tech billionaires. My impression is that many people on Lemmy focus on the problems in the USA and lose perspective of how good it is here compared to pretty much everywhere else. There’s a reason why so many people are desperate to immigrate, and that’s because they will be better off here even as poor Americans.
I expect some people are going to think of countries like Sweden, where the standard of living is claimed to be better than it is in the USA. I’m not convinced that it actually is; I’d rather live here than there. However, even if people in Sweden do enjoy a higher standard of living, it’s because they benefit from the world order established and maintained by the USA since the Second World War. Their defense and their access to international trade are subsidized by the USA. (That’s one thing Trump is right about, although the way he went about saying so was foolish because it undermined the perception of NATO unity that is so important.) If the USA declines, Europe will decline with it.
And yet somehow I trust Google acting in its own self-interest to benefit Americans more than the government breaking up Google with the intent of benefiting Americans. American companies dominate the internet (outside of China), this is to America’s great advantage, and I don’t think the government should risk losing that advantage.
Nothing can fix things because teenagers will not cooperate. If Instagram could identify all its teenage users, those users would move to a platform that couldn’t. The only thing the restrictions achieve is a reduction in the market share of the platform with the restrictions.
So far “more data” has been the solution to most problems, but I don’t think we’re close to the limit of how much useful information can be learned from the data, even if we’re close to the limit of how much data is available. Look at the AIs that can’t draw hands. There are already many pictures of hands from every angle in their training data. Maybe just having ten times as many pictures of hands would solve the problem, but I’m confident that if that were not possible, doing more with the existing pictures would also work.* Algorithm design just needs some time to catch up.
*I know that the data that is running out is text data. This is just an analogy.
What occasions are you referring to? I know people claim that Israeli use of white phosphorus munitions is illegal, but the law is actually quite specific about what an incendiary weapon is. Incendiary effects caused by weapons that were not designed with the specific purpose of causing incendiary effects are not prohibited. (As far as I can tell, even the deliberate use of such weapons in order to cause incendiary effects is allowed.) This is extremely permissive, because no reasonable country would actually agree not to use a weapon that it considered effective. Something like the firebombing of Dresden is banned, but little else.
Incendiary weapons do not include:
(i) Munitions which may have incidental incendiary effects, such as illuminants, tracers, smoke or signalling systems;
(ii) Munitions designed to combine penetration, blast or fragmentation effects with an additional incendiary effect, such as armour-piercing projectiles, fragmentation shells, explosive bombs and similar combined-effects munitions in which the incendiary effect is not specifically designed to cause burn injury to persons, but to be used against military objectives, such as armoured vehicles, aircraft and installations or facilities.
The issue I have with referring to the current situation as a bubble is that this isn’t just hype. The technology really is amazing, and far better than what people had been expecting. I do think that most current attempts to commercialize it are premature, but there’s such a big first-mover advantage that it makes sense to keep losing money on attempts that are too early in order to succeed as soon as it is possible to do so.
Multiple studies are showing that training on data contaminated with LLM output makes LLMs worse, but there’s no inherent reason why LLMs must be trained on this data. As you say, people are aware of it and they’re going to be avoiding it. At the very least, they will compare the newly trained LLM to their best existing one and if the new one is worse, they won’t switch over. The era of being able to download the entire internet (so to speak) is over but this means that AI will be getting better more slowly, not that it will be getting worse.
I don’t disagree, but before the recent breakthroughs I would have said that AI is like fusion power in the sense that it has been 50 years away for 50 years. If the current approach doesn’t get us there, who knows how long it will take to discover one that does?
It would be odd if AI somehow got worse. I mean, wouldn’t they just revert to a backup?
Anyway, I think (1) is extremely unlikely but I would add (3) the existing algorithms are fundamentally insufficient for AGI no matter how much they’re scaled up. A breakthrough is necessary which may not happen for a long time.
I think (3) is true but I also thought that the existing algorithms were fundamentally insufficient for getting to where we are now, and I was wrong. It turns out that they did just need to be scaled up…
I oppose letting anyone define hate speech as a matter of principle, because even if I agree with the definition completely now, I may not continue to agree with the definition in the future. Look at what has been happening in the USA since the October 7 attack: a lot of people I had considered my political allies turned out to have beliefs I consider to be hateful, and meanwhile these people consider my own beliefs hateful. The solution is not to empower a single central authority to decide which sort of hate is allowed. It is (as it has always been) to maintain the principle of free speech.