• 14 Posts
  • 1.06K Comments
Joined 2 years ago
Cake day: June 15th, 2023

  • I blame Bavaria. If Germany had multiple price zones like other European countries instead of one giant one, prices would plummet here in the north while they’d explode in Bavaria. The state that doesn’t want wind power, does want nuclear power, but already knows ahead of time that its geology (lots of mountains and granite) is unsuitable for nuclear waste storage. Meanwhile, north German wind power and Scandinavian hydro dams complement each other perfectly. The Bavarians could do the same with the Austrians; they just don’t. They want to eat their cake and have it, too.


  • After reading through the abstract, the article is pop-sci bunk: they developed a method to save additional space with constant-time overhead.

    Which is certainly novel and nice and all kinds of things, but it’s just a tool in the toolbox. Making things more optimal in theory says little about things being faster in practice, because the theoretical cost models never match what real-world machines are actually doing. In algorithm classes we learn to analyse sorting algorithms by the number of comparisons, and indeed the minimum necessary is Ω(n log n); in the real world, it’s the number of cache misses that matters: CPUs can compare numbers basically instantly, and getting the stuff you want to compare from memory to the CPU is where the time is spent. It can very well be faster to make more comparisons if it means you get fewer, or more regular (so that the CPU can predict and prefetch), data transfers.

    Consulting my crystal ball, I see this trickling down into at least the minds of the people who develop the usual KV stores, the database engineers, etc. Maybe it’ll help, maybe it won’t; those things are already incredibly optimised. Never trust a data structure optimisation you didn’t benchmark. Never trust any optimisation you didn’t benchmark, actually. Do your benchmarks; you’re not smarter than reality. In case it does help, it’s going to trickle down into the standard implementations of data structures that languages ship with.
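    To make the cache point concrete, here’s a minimal benchmark sketch (my illustration, nothing from the article): both loops below do the same number of additions over the same data, but the sequential walk lets the hardware prefetcher hide memory latency while the pseudo-random walk misses the cache on nearly every access, and that difference usually dwarfs the cost of the arithmetic itself. Compile with --release, otherwise you’re mostly measuring the unoptimised build.

```rust
use std::time::Instant;

fn main() {
    // Same amount of arithmetic, very different memory access patterns.
    let n: usize = 1 << 24; // ~16M elements
    let data: Vec<u64> = (0..n as u64).collect();

    // Sequential walk: the hardware prefetcher keeps the CPU fed.
    let t = Instant::now();
    let mut sum = 0u64;
    for &x in &data {
        sum = sum.wrapping_add(x);
    }
    println!("sequential: {:?} (checksum {})", t.elapsed(), sum);

    // Pseudo-random walk over the same data: same work per element,
    // but nearly every access is a cache miss.
    let t = Instant::now();
    let mut sum = 0u64;
    let mut state: u64 = 0x9E3779B97F4A7C15; // arbitrary LCG seed
    for _ in 0..n {
        state = state
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        let idx = (state % n as u64) as usize;
        sum = sum.wrapping_add(data[idx]);
    }
    println!("random:     {:?} (checksum {})", t.elapsed(), sum);
}
```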

    EDIT: I was looking at this paper, not this one. It’s actually disproving a conjecture of Yao, who has a Turing Award, so certainly a nice feather to have in your cap. It’s also way more into the theoretical weeds than I’m comfortable with. This may have applications, or it may go the way of the Karatsuba algorithm: faster only if your data is astronomically large, while for (most) real-world applications the constant overhead outweighs the asymptotic speedup.
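    As for the Karatsuba comparison, the crossover logic looks roughly like the sketch below (again my illustration, not anything from the paper): an asymptotically better recursion that only pays off above some empirically tuned threshold, with the “slow” quadratic algorithm winning below it. It multiplies polynomials rather than big integers to keep carries out of the picture, and the THRESHOLD value is made up; a real one would have to come from a benchmark.

```rust
// Schoolbook polynomial multiplication: O(n^2) coefficient multiplies,
// but tiny constants and cache-friendly inner loops.
fn schoolbook(a: &[u64], b: &[u64]) -> Vec<u64> {
    let mut out = vec![0u64; a.len() + b.len() - 1];
    for (i, &x) in a.iter().enumerate() {
        for (j, &y) in b.iter().enumerate() {
            out[i + j] = out[i + j].wrapping_add(x.wrapping_mul(y));
        }
    }
    out
}

// Hypothetical cutoff; in a real library this comes from benchmarking.
const THRESHOLD: usize = 32;

// Karatsuba: ~O(n^1.58) multiplies, but extra additions, allocations and
// recursion overhead. Inputs must have equal length n; coefficients are
// taken modulo 2^64 to keep the example short.
fn karatsuba(a: &[u64], b: &[u64]) -> Vec<u64> {
    let n = a.len();
    debug_assert_eq!(n, b.len());
    if n <= THRESHOLD || n % 2 != 0 {
        return schoolbook(a, b); // below the crossover, the "slow" algorithm wins
    }
    let half = n / 2;
    let (a0, a1) = a.split_at(half);
    let (b0, b1) = b.split_at(half);

    let low = karatsuba(a0, b0);   // a0*b0
    let high = karatsuba(a1, b1);  // a1*b1
    let a_sum: Vec<u64> = a0.iter().zip(a1).map(|(x, y)| x.wrapping_add(*y)).collect();
    let b_sum: Vec<u64> = b0.iter().zip(b1).map(|(x, y)| x.wrapping_add(*y)).collect();
    let mut mid = karatsuba(&a_sum, &b_sum); // (a0+a1)*(b0+b1)
    for i in 0..low.len() {
        // mid := a0*b1 + a1*b0
        mid[i] = mid[i].wrapping_sub(low[i]).wrapping_sub(high[i]);
    }

    let mut out = vec![0u64; 2 * n - 1];
    for (i, &c) in low.iter().enumerate() { out[i] = out[i].wrapping_add(c); }
    for (i, &c) in mid.iter().enumerate() { out[i + half] = out[i + half].wrapping_add(c); }
    for (i, &c) in high.iter().enumerate() { out[i + 2 * half] = out[i + 2 * half].wrapping_add(c); }
    out
}

fn main() {
    // (1 + 2x + 3x^2 + 4x^3) * (5 + 6x + 7x^2 + 8x^3)
    let a: [u64; 4] = [1, 2, 3, 4];
    let b: [u64; 4] = [5, 6, 7, 8];
    println!("{:?}", karatsuba(&a, &b)); // same result as schoolbook(&a, &b)
}
```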


  • There’s plenty of Schemes that aren’t fully standards-compliant, but I don’t think leaving out eval is common – it’s easy to implement, and nothing about the standard says that it needs to run code fast.

    Just wanted to point out that eval is the real static vs. dynamic boundary. As to evil: sure, you shouldn’t run just any code you find without having a sandbox in place. C’s way to do the same thing is to call cc followed by dlopen, which is way scarier, which is why people just link in Lua or something instead. I guess in <currentyear> you should probably include a wasm runtime instead of using dlopen.
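    For the curious, the cc-followed-by-dlopen dance looks roughly like this sketch (not production code: the C snippet, the /tmp paths and the add symbol are made up for illustration, it assumes a Unix-ish system with cc on the PATH, and it pulls in the libc crate for dlopen/dlsym):

```rust
use std::ffi::CString;
use std::io::Write;
use std::process::Command;

fn main() {
    // 1. Write some C source to disk -- this is our "eval" input.
    let c_source = "int add(int a, int b) { return a + b; }";
    std::fs::File::create("/tmp/snippet.c")
        .and_then(|mut f| f.write_all(c_source.as_bytes()))
        .expect("write C source");

    // 2. Shell out to the C compiler to build a shared object.
    let status = Command::new("cc")
        .args(["-shared", "-fPIC", "-o", "/tmp/snippet.so", "/tmp/snippet.c"])
        .status()
        .expect("run cc");
    assert!(status.success(), "compilation failed");

    // 3. dlopen the result and look up the symbol -- the scary part:
    //    the loaded code runs with full process privileges, no sandbox.
    unsafe {
        let path = CString::new("/tmp/snippet.so").unwrap();
        let handle = libc::dlopen(path.as_ptr(), libc::RTLD_NOW);
        assert!(!handle.is_null(), "dlopen failed");

        let name = CString::new("add").unwrap();
        let sym = libc::dlsym(handle, name.as_ptr());
        assert!(!sym.is_null(), "dlsym failed");

        let add: extern "C" fn(i32, i32) -> i32 = std::mem::transmute(sym);
        println!("add(2, 3) = {}", add(2, 3)); // prints 5

        libc::dlclose(handle);
    }
}
```

    Every one of those steps runs unsandboxed native code with the full privileges of the process, which is exactly why it’s scarier than a Scheme eval or an embedded Lua/wasm interpreter.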



  • Rust has affine types and gets close to linear when you include #[must_use] (you can still let _ = foo, but at least it won’t be an accident; also, drop code isn’t guaranteed to run, and there are good reasons for that). For refinement types, there’s a library for that. GADTs… I mean, sure, trait magic can get annoying, and coming from Haskell you’d want to do more in the type system, but in the end the idiomatic Rust way to do many of those things is with macros. Which, unlike Haskell, Rust is actually really good at. Really good. Tack-refinement-types-onto-the-language kind of good.
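    A small sketch of the affine-types point (the Ticket type and function names are made up for illustration): the compiler rejects using a moved value twice, #[must_use] warns when a value is silently dropped, and let _ = … is the explicit opt-out mentioned above.

```rust
// A token that should not be silently discarded.
#[must_use = "a Ticket must be redeemed or explicitly discarded"]
struct Ticket(u32);

fn issue() -> Ticket {
    Ticket(42)
}

fn redeem(t: Ticket) -> u32 {
    // Takes ownership: after calling this, the caller can't use `t` again.
    t.0
}

fn main() {
    let t = issue();
    let v = redeem(t);
    println!("redeemed {v}");

    // redeem(t);       // error[E0382]: use of moved value -- affine, not duplicable
    // issue();         // warning: unused `Ticket` that must be used
    let _ = issue();    // deliberate discard: allowed, but it has to be spelled out
}
```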

    Proving tools, honestly: there’s hardly any fully proven software out there (seL4 being the flagship example), and the proofs live in a dedicated proof assistant (Isabelle/HOL in seL4’s case; Coq is the other big player). Which Rust will never, ever, compete with on their home turf.