

That doesn’t happen with DEI and it didn’t happen with ‘diversity hires’ either.
White men do love to hire other white men who aren’t qualified though. That happens all the fucking time.
Source: am white man
Also known as snooggums on midwest.social and kbin.social.
I knew it was going to get bad when the Fox News audience was fine with vilifying Mr. Rogers, but I didn’t expect them to go full Nazi.
His citizenship is invalid because he lied and he should be deported.
It won’t happen because he has too much access to money, but that is literally true.
How do you validate the accuracy of what it spits out?
Why don’t you skip the AI and just use the thing you use to validate the AI output?
It makes perfect sense if you do mental acrobatics to explain why a wrong answer is actually correct.
They’re pretty good at summarizing, but don’t trust the summary to be accurate, just to give you a decent idea of what something is about.
That is called being terrible at summarizing.
AI is slower, less efficient, and less accurate than the older search algorithms.
If writing the prompt takes as long as writing the dumbed-down text yourself, then the only time you saved was the time spent learning how to write dumbed-down text. Plus you need to know what dumbed-down text should look like to judge whether the output is dumbed down but still accurate.
If you always rely on the AI to correct your tone or simplify your language, you’re choosing not to build an essential skill, one every bit as important to doing your job well as knowing how to correctly configure an ACL on a Cisco managed switch.
This is such a good example of how AI/LLMs/whatever are being used as a crutch, one far more impactful than a spellchecker. A spellchecker catches typos or helps with unfamiliar words, but doesn’t replace the underlying skill of communicating to your audience.
The dumbed-down text is basically as long as the prompt. Plus you have to double-check it to make sure it didn’t say ‘outrage’ instead of ‘outage’, just like if you wrote it yourself.
How do you know the answer about why RIP was replaced with RIPv2 is accurate and not just a load of bullshit, like telling you to put glue on pizza?
Are you really saving time?
The exact same scenario plays out when .ml users chime in on a .world news thread about China/Russia, and the reverse happens too. On .world the .ml tankies get downvoted into the ground, and on .ml the .world users who call out tankie shit get banned. That is an instance culture clash that fits the exact scenario.
As for the anti-Dem stuff: plenty of us who vote for them don’t actually like them, and it doesn’t take bots to drum up votes for posts that criticize them. But we will downvote the ones that seem to be discouraging others from voting Dem. If bots were brigading, the anti-Dem posts would get upvoted even more on .world.
There are likely some malicious actors, and probably some vote manipulation. But overall it seems far more likely that on Lemmy the vast majority of both posting and voting still comes from valid users, with the malicious actors trying to sway opinion directly instead of through bots.
What do you think are the views being promoted by bots on Lemmy?
Are there accounts you think are bots, or are you assuming that anyone with opinions differing from the people you know in real life must be a bot? I know people who have wildly different views in real life, some of whom I avoid because of those views.
In the case of Lemmy, it is more likely that the members of communities are people, because the population is small enough that a mass influx of bots would be easy to notice compared to reddit. Plus Lemmy communities tend to have obvious rules and enforcement that filter out people who aren’t on the same page.
For example, you will notice that the general opinions on .world, .ml, and blahaj fit their local instance cultures, and trying to change that with bots would likely run afoul of the moderation or the established community members.
It is far easier to hide bots within a large pool of potential users than within a smaller one.
Their goal is to create AI agents that are indistinguishable from humans and capable of convincing people to hold certain positions.
A very large portion of people, possibly more than half, do change their views to fit in with everyone else. So an army of bots pretending to hold a view will sway a significant portion of the population just through repetition and exposure, creating the assumption that most other people think that way. The bots don’t even need to be convincing at all, just present an overwhelming appearance of conformity.
Are you too stupid to understand the difference between hosting software source code and modifying it?
Microsoft/GitHub are not the ones making changes through commits.
Also, have anyone who understands basic car design point out why shit like electronic door-opening buttons is a terrible idea in emergencies. Or why locking the doors during a software update is stupid. Or why putting in electronics not designed for extreme heat is terrible. Or that trying to use cameras in bad weather isn’t any better than human eyes…
They shipped 39k Cybertrucks against backlogged preorders that were based on a completely different description of what the truck would be.
Plus the initial sales were to people who had already committed to preorders at a lower price for a truck that was hyped up to be far better than the end result.
Cybertrucks are basically No Man’s Sky, but without the possibility of being good in half a decade.
It may be partly deliberate, but it could just be over-tuning to what they know.
It is deliberately handling things in a nonstandard way that works ‘better’, which has been common with whatever browser has the largest share of users for a long, long time.
Or you might eliminate some that are exactly what you are looking for because the summaries are inaccurate.
Guess it depends on whether an unreliable system is still better than being overwhelmed with choices.