In many cases it should be fine to point them all at the same server. You’ll just need to make sure there aren’t any collisions between schema/table names.
Formerly /u/Zalack on Reddit.
Also Zalack@kbin.social
I’m not saying there aren’t downsides, just that it isn’t a totally crazy strategy.
You’re being sarcastic but even small fees immediately weed out a ton of cruft.
While that’s true, we have to allow for the fact that our own intelligence, at some point, is an encoded model of the world around us. Probably not through something as rigid as precise statistics, but our consciousness is somehow an emergent phenomenon of the chemical reactions in our brains that on their own have no real understanding of the world either.
I do have to wonder if at some point, consciousness will spontaneously emerge as we make these models bigger and more complex and – maybe more importantly – start layering specialized models on top of each other that handle specific tasks and then hand the result back to another model, creating feedback loops. I’m imagining a neural network that is trained on something extremely abstract, like figuring out, from the raw input data, which specialist model would be best suited to process that data, then, based on the result, which model would be best suited to refine it. Something we train to basically be an executive function with a bunch of sub-models available to it.
Could something like that become conscious without realizing it’s “communicating” with us? The program executing the LLM might reflexively process data without any concept that it’s text, yet still be emergently complex enough, when reflecting on its own processes, to reach self-awareness. It wouldn’t realize the data represents a link to other conscious beings.
As a metaphor, you could teach a very smart dog to respond to certain basic arithmetic problems. They would get things wrong the moment you prompted them to do something outside their training, and they wouldn’t understand they were doing math even when they got it “right”, but they would still be sentient, if not sapient, despite that.
It’s the opposite side of the philosophical zombie. A philosophical zombie behaves exactly as a human would, but is a surface-level automaton with no inner life.
But I propose that we also consider the inverse philosophical zombie: an entity that behaves like an automaton, but has an inner life that hasn’t recognized its input data as evidence of an external world outside its own bounds. Something that might not even recognize it’s executing a program, the same way we aren’t consciously aware of the chemical reactions our brains are executing to make us think.
I don’t believe current LLMs are anywhere near complex enough to give rise to that sort of thing, but they are also still pretty early in their development and haven’t started to be heavily layered and interconnected the way I think they’ll end up.
At the very least it makes for a fun Sci-fi premise.
Yeah. Part of me has to wonder what – if any – back-channel agreements there are between Gwynne Shotwell and the DoD for if/when Musk does something truly compromising.
This reminded me of an old joke:
Two economists are walking down the street with their friend when they come across a fresh, steaming pile of dog shit. The first economist jokingly tells the other, “I’ll give you a million dollars if you eat that pile of dog shit.” To his surprise, the second economist grabs it off the ground and eats it without hesitation. A deal is a deal, so the first economist hands over a million dollars.
A few minutes later they come across a second pile of shit. The second economist, wanting to give his peer a taste of his own medicine, says he’ll give the first economist a million dollars if he eats it. The first economist agrees and does so, winning him a million dollars.
Their friend, rather confused, asks what the point of all that was: the first economist gave the second economist a million dollars, and then the second economist gave it right back. All they’ve accomplished is eating two piles of shit.
The two economists look rather taken aback. “Well sure,” they say, “but we’ve grown the economy by two million dollars!”
I actually don’t think that’s the case for languages. Most languages start out from a desire to do some specific thing better than other languages rather than do everything.
Compiled Rust is fast.
Compiling Rust is slow.
Also, my understanding is that rust-analyzer has to expand all Rust macros so it can check them properly. That’s not something a lot of static analysis tools do for things like C++ templates.
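A minimal sketch of why that expansion matters (the `make_point!` macro and `Point` struct here are made up for illustration): the struct literal only exists after the macro expands, so a tool that wants to type-check the call site or offer completions on the result has to actually run the expansion first.

```rust
// The code `make_point!` produces doesn't exist in the source text;
// an analyzer must expand the macro to know what the call evaluates to.
macro_rules! make_point {
    ($x:expr, $y:expr) => {
        // Only after expansion is this a `Point { x, y }` struct literal.
        Point { x: $x, y: $y }
    };
}

struct Point {
    x: i32,
    y: i32,
}

fn main() {
    let p = make_point!(3, 4);
    // Completions and type errors on `p.x` depend on the expanded code.
    println!("{}", p.x + p.y); // prints 7
}
```

A C++ tool that skips full template instantiation can still parse the surrounding code; skipping macro expansion in Rust would leave the analyzer with no idea what `p` even is.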
That’s not the actual reason. Hexbear was openly advocating for their “army” to brigade other instances once it started federating. It just so happens that the basis of that brigading was going to be political.
Lemmy.world pre-emptively decided it wasn’t worth the hassle of having to deal with that.
That’s very subjective. I have yet to find a Linux desktop I like as much as macOS, especially when it comes to Wacom drivers. The stylus response time/curve almost always feels wrong.
Also, I’ve worked with designers who can get something that looks and feels fully professional on a first pass, so it’s not just newness for Lemmy.
IMO FOSS has really great offerings when it comes to libraries or other highly technical code.
But something about either the community or incentive structure results in sub-par UI/UX. Obviously not a rule, but definitely a trend I’ve noticed.
Self-driving cars could actually be kind of a good stepping stone to better public transit while making more efficient use of existing roadways. You hit a button to request a car, it drives you wherever you need to go, and then it gets tasked to pick up the next person. Where you used to need 10 cars for 10 people, you now need one.