• 0 Posts
  • 77 Comments
Joined 2 years ago
Cake day: June 10th, 2023


  • Exactly. The big problem with LLMs is that they’re so good at mimicking understanding that people forget they don’t actually understand anything beyond language itself.

    The thing they excel at, and should be used for, is exactly what you say - a natural language interface between humans and software.

    Like in your example, an LLM doesn’t know what a cat is, but it knows what words describe a cat based on training data - and for a search engine, that’s all you need.
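    To make that search-engine point concrete, here’s a toy sketch (my own illustration, not anything from the thread): ranking documents purely on word association, using a bag-of-words count vector as a crude stand-in for the learned embeddings a real system would use. Nothing here “knows” what a cat is; word overlap alone ranks the cat-like description first.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count vector.
    # A real search system would use a learned text-embedding model instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical mini "index" of documents.
docs = [
    "a small furry pet that purrs and chases mice",
    "a guide to repairing diesel engines",
]

query = "furry pet that purrs"
q = embed(query)
best = max(docs, key=lambda d: cosine(q, embed(d)))
print(best)  # the cat-like description wins on word overlap alone
```

    The point of the toy: the ranking works entirely on which words co-occur, with no concept of "cat" anywhere, which is exactly why word-association machinery is all a search front end needs.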




  • I’m a bit late to the party, but I’m inclined to agree with the majority here. Your choice to have their cookies deleted on browser close adds more friction to an already high-friction process - you managed to get them to switch over; you don’t want to undo all that over cookies, of all things.

    You have to remember, it is their machine at the end of the day, and while you might be able to put up with constantly redoing 2FA because of cookie deletion, they clearly can’t… And if that’s going to be the dealbreaker, you’re far better off forgetting cookie deletion for now and focusing on more passive privacy options like blocking third-party cookies, trackers, and ads.




  • I feel like in most cases if a product has such bad reviews that it kills the company that made it, there’s a good reason for that.

    Of course there are exceptions, and a reviewer is expected to do their due diligence to make sure they’re giving an honest, accurate, and reasonable review - but no company should be shielded from being told its product isn’t good if it isn’t.




  • Because it’s worth knowing beforehand what a company is really like behind closed doors.

    Some companies are great, some suck in standard corporate fashion, but there are some out there that are exceptional in sucking…

    I’ll use myself as an example… at the last company I worked for, our team was constantly given deadlines that were impossible to meet within working hours. The company basically refused to pay for what was essentially mandatory overtime required to catch up - wage theft by another name.

    Fortunately my role allowed me to push back, but most of my peers couldn’t - we were all straight out of university; some needed the money/job, but most just didn’t know how to fight in a corporate environment.

    Not to mention that a few folks who did try to complain about the company conveniently found themselves fired for miscellaneous breaches of contract. From what I heard, one was even fired based on their reaction to being told they were being dismissed - a trap, plain and simple.

    If you’re wondering why we didn’t sue or anything like that, again we were all straight out of uni, we barely knew what our working rights were…

    Which is why Glassdoor was important - it was how most of these folks got word out about the company and tried to warn other potential candidates of what they were walking into.

    The company knew about it too, because they posted multiple fake reviews to try to drown out the real ones. I know for a fact that if they had been able to find out who posted those reviews, they would have retaliated, likely in the form of litigation.





  • I don’t mind having my own arguments thrown back in my face, but I do disagree with the premise that humans are anything like LLMs.

    We have more than just a catalogue of conversational training data. We are hugely influenced by our current emotions, experiences, and traumas/fears.

    I do agree with the idea that we shouldn’t give too much power to one person, but I’d argue it’s due to a lack of objectivity and a tendency towards selfish actions, rather than acting like an LLM.

    Ultroning the world to achieve world peace isn’t exactly the best outcome, especially for the innocent folks caught in the crossfire.