Came to post the same. Seems like the most awkward possible way to phrase that.
Your “Disks not included” suggestion, or heck, just “empty” would surely be better.
eSATAp! What a wild combination.
Not actually a terrible idea, even if it was frequently limited to powering 2.5" drives due to the lack of 12V. Some had extra contacts for that, but most that I saw didn’t.
It does however affect getting updates from government agencies, and others who insist on only disseminating real-time information to the public via Twitter.
For instance: https://twitter.com/WakaKotahiWgtn
This is the account for traffic events (road closures, traffic accidents, etc) in my city. Not signed in, the latest visible post is from February 2023.
Since I don’t have a Twitter account, this is now functionally useless.
Since the realistic competitor here is probably magnetic tape, current-generation (LTO9) media can transfer at around 400MB/s, taking 12 hours and change to fill an 18TB tape.
Earlier archival optical disc formats (https://news.panasonic.com/global/stories/798) claimed 360MB/s, but I believe that is six double-sided discs writing both sides simultaneously (12 streams), so 30MB/s per stream. Filling the same six 300GB discs would take about an hour and a half.
Building the library to handle and read/write media in bulk is always the issue though. The above optical system fit 1.9PB in the space of a server rack (and I didn’t see any options to expand further when that was current technology), and by the looks of it has 7 units that can each be writing a set of discs (call that 2.5GB/s total).
In the same single rack you’d fit 560 LTO tapes (10.1PB for LTO9) and 21 drives (8.4GB/s).
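If you want to play with those numbers, here’s the back-of-envelope in Python (the rack figures are just the ones quoted above, so treat it as illustrative rather than vendor specs):

```python
# Rough sanity check of the rack-level numbers quoted above (not vendor specs).

def fill_time_hours(capacity_tb: float, rate_mb_s: float) -> float:
    """Hours to write a given capacity at a sustained rate."""
    return capacity_tb * 1e6 / rate_mb_s / 3600

# Single media
print(f"LTO9, 18TB @ 400MB/s:       {fill_time_hours(18, 400):.1f} h")       # ~12.5 h
print(f"6x 300GB optical @ 360MB/s: {fill_time_hours(6 * 0.3, 360):.1f} h")  # ~1.4 h

# Per-rack comparison
optical_rack_pb = 1.9               # quoted library capacity
optical_rack_gbps = 7 * 0.36        # 7 units each writing at 360MB/s
lto_rack_pb = 560 * 18 / 1000       # 560 LTO9 tapes
lto_rack_gbps = 21 * 0.4            # 21 drives at 400MB/s

print(f"Optical rack: {optical_rack_pb:.1f} PB, {optical_rack_gbps:.2f} GB/s")
print(f"LTO9 rack:    {lto_rack_pb:.1f} PB, {lto_rack_gbps:.1f} GB/s")
```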
So they have a bit of catching up to do, especially with LTO10 (due in the next year or so) doubling the capacity and further increasing the throughput.
There’s also the small matter that every one of these massive increases in optical disc capacity in recent years has turned out to be vapourware. I mean I don’t doubt that they will achieve it someday, but they always seem to go nowhere.
From the video description:
I have been a Samsung product user for many years, and I don’t plan to stop anytime soon
And all sympathy I had for this person just vanished. If you don’t demand better, they will keep doing - and getting away with - shit like this.
Voting with your wallet might be the one voice you have left in this world, what a way to squander it by continuing to buy products from companies whose representatives behave in this manner.
The estimated training time for GPT-4 is 90 days though.
Assuming you could scale that linearly with the amount of hardware, you’d get it down to about 3.5 days. From four times a year to twice a week.
If you’re scrambling to get ahead of the competition, being able to iterate that quickly could very much be worth the money.
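A trivial bit of arithmetic, assuming perfectly linear scaling (which a real training run won’t achieve):

```python
# Implied numbers if the 90-day estimate above scaled linearly with hardware (it won't, exactly).
baseline_days = 90
target_days = 3.5

hardware_multiple = baseline_days / target_days   # ~26x the hardware
runs_per_year_before = 365 / baseline_days        # ~4 per year
runs_per_week_after = 7 / target_days             # ~2 per week

print(f"Implied hardware multiple: {hardware_multiple:.0f}x")
print(f"Before: {runs_per_year_before:.1f} runs/year; after: {runs_per_week_after:.1f} runs/week")
```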
And spatulas. Don’t forget the spatulas.
I’d be curious to see how much cooling a SAS HBA would get in there. Looking at Broadcom’s 8 external port offerings, the 9300-8e reports 14.5W typical power consumption, 9400-8e 9.5W, and 9500-8e only 6.1W. If you were considering one of these, definitely seems it’d be worth dropping the money on the newest model of HBA.
I’m definitely curious, would only personally need it to be NAS + Plex server for which either of the CPUs they’re offering is a bit overkill, but it’s nice that it fits a decent amount of RAM, and you’re not forced to choose between adding storage or networking.
Single-sided drives can be up to 4TB though, no?
It was linked a little up thread, but since you’re (probably) referring to the “Space-cadet” keyboard, it was seven.
Technically, they drew a distinction between the “shift” keys (of which there were three), and the other modifiers (four).
In modern times (or for Linux at least), Meta has essentially coalesced with Alt, so the modifiers we’ve retained are Control, Alt, and Super (Windows), with only “Hyper” having been lost along the way.
The remaining two shifts (also lost to time) were “Top” (symbols) and “Front” (Greek), with the Greek supporting combining with shift (there’s a table on that Wiki page).
I was going to write a rebuttal. And then I decided that the “zero points” speech from Billy Madison will suffice.
The phrase “cutting off your nose to spite your face” comes to mind.
Not in so much detail, but it’s also really hard to define unless you have one specific metric you’re trying to hit.
Aside from the included power/cooling costs, we’re not (overly) constrained by space in our own datacentre so there’s no strict requirement for minimising the physical space other than for our own gratification. With HDD capacities steadily rising, as older systems are retired the total possible storage space increases accordingly…
The performance of the disk system is honestly pretty good too when it’s adequately provisioned with RAM and SSD cache. Assuming the cache tiers are big enough to hold the working set across the entire storage fleet (you could never have just one multi-petabyte system), the abysmal performance of HDDs really doesn’t come into it: filesystems like ZFS coalesce random writes into periodic sequential writes, and sequential performance is… adequate.
Not mentioned either: support costs - which typically start in the range of 10-15% of the hardware price per year - do eventually curve upward. For one brand we use, the per-terabyte cost bottoms out at 7 years of ownership, then starts to increase again as yearly support costs for the older hardware rise. But you always have the option to pay the inflated price and keep it, if you’re not ready to replace.
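To show the shape of that curve with entirely made-up numbers (the price, capacity, and support escalation below are hypothetical, not anything we actually pay):

```python
# Illustrative only: per-TB-per-year cost bottoms out, then rises as support costs escalate.
hardware_cost = 500_000                 # hypothetical purchase price ($)
usable_tb = 3_500                       # hypothetical usable capacity
base_support = 0.12 * hardware_cost     # ~12% of hardware price per year
escalation = 1.25                       # hypothetical 25%/year support increase after year 4

for years in range(1, 11):
    support_total = sum(base_support * escalation ** max(0, y - 4) for y in range(1, years + 1))
    cost_per_tb_year = (hardware_cost + support_total) / usable_tb / years
    print(f"{years:2d} years of ownership: ${cost_per_tb_year:,.0f}/TB/year")

# With these numbers the minimum lands around years 7-8, then creeps back up.
```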
And again with the QLC, you’re paying for density more than you are for performance. On every fair metric you can imagine aside from the TB/RU density - latency, throughput/capacity, capacity/watt, capacity/dollar - there are a few tens of percent in it at most.
Being in an HPC-adjacent field, can confirm.
Looking forward to LTO10, which ought to be not far away.
The majority of what we’ve got our eye on for FY '24 are SSD systems, and I expect in '25 it’ll be everything.
There’s some space occupied by the servo tracks (which align the heads to the tape) in LTO, but if we ignore that…
Current-generation LTO9 has 1035m of 12.65mm wide tape, for 18TB of storage. That’s approximately 13.1m², or just under 1.4TB/m².
A 90 minute audio cassette has around 90m of 6.4mm wide tape, or 0.576m². At the same density it could potentially hold around 800GB.
DDS (which was data tape in a similar form factor) achieved 160GB in 2009, although there’s a lot more tape in one of those cartridges (153m).
Honestly, you’d be better off using the LTO. Because they’re single-reel cartridges (the 2nd is inside the drive), they can pack a lot more tape into the same volume.
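The arithmetic, using the figures quoted above (the cassette numbers especially are rough):

```python
# Areal density comparison using the figures quoted above; treat the cassette numbers as rough.

def tb_per_m2(capacity_tb: float, length_m: float, width_mm: float) -> float:
    return capacity_tb / (length_m * width_mm / 1000)

lto9_density = tb_per_m2(18, 1035, 12.65)
print(f"LTO9: {lto9_density:.2f} TB/m^2")                  # ~1.37 TB/m^2

cassette_area_m2 = 90 * 6.4 / 1000                         # ~0.58 m^2, as above
print(f"Cassette at LTO9 density: {cassette_area_m2 * lto9_density * 1000:.0f} GB")  # ~800GB
```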
We’ve done this exercise recently for multi-petabyte enterprise storage systems.
Not going to name brands, but in both cases this is usable capacity (after RAID and hot spares), in a high-availability (multi-controller / cluster) system, including vendor support and power/cooling costs, but not counting a $/RU cost (because we run our own datacenter) that a company in a colo would be paying:
Note that the total power consumption for ~3.5PB of HDD vs ~5PB of flash is within spitting distance, but the flash system occupies a third of the total rack space doing it.
As this is comparing to QLC flash, the overall system performance (measured in Gbps/TB) is also quite similar, although - despite the QLC - the flash does still have a latency advantage (moreso on reads than writes).
So yeah, no. At <1.5× the per-TB cost for a usable system - the cost of one HDD vs one SSD is quite immaterial here - and at >4× the TB-per-RU density, you’d have to have a really good reason to keep buying HDDs. If lowest-possible-price is that reason, then sure.
Reliability is probably higher too; with the >300 HDDs needed to build that system, you’re going to expect a few failures.
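To make the ratios concrete, here’s a sketch with invented figures chosen to sit in the same ballpark as the comparison above (not our actual quotes):

```python
# Invented figures in the same ballpark as the ratios described above; not real vendor pricing.
systems = {
    "HDD":       {"usable_pb": 3.5, "rack_units": 36, "cost_per_tb": 100},  # hypothetical
    "QLC flash": {"usable_pb": 5.0, "rack_units": 12, "cost_per_tb": 140},  # hypothetical, <1.5x $/TB
}

for name, s in systems.items():
    tb = s["usable_pb"] * 1000
    print(f"{name:10s}: {tb / s['rack_units']:6,.0f} TB/RU, ${tb * s['cost_per_tb']:,.0f} total")

# Density works out to >4x in favour of flash here (~417 vs ~97 TB/RU), at ~1.4x the $/TB.
```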
They’re not really particularly low power.
Quick search suggests around 8W power consumption with a 2 ohm heater, which at the approximately 4V of a charged Lithium-Ion battery (V=IR, P=VI) checks out to around a 2A draw.
Similar results suggest the batteries inside are in the neighbourhood of 0.75Ah (3.7V nominal) = 2.8Wh. I don’t know how much of that capacity actually gets used during the “lifespan” of the vape, but I’d guess half would be a good estimate. In any case, probably safe to assume you need to pack around 2Wh in at minimum.
A Lithium AA battery (Li-FeS2 chemistry) gives you 3.4Ah @ 1.5V = 5.1Wh, but has a maximum discharge current of 2.5A (only 3.8W). The AAA is only 1.2Ah with 1.5A discharge, but two of them would give you 3.6Wh and 4.5W, closer to the target but still under.
You could probably arrange this in some sort of configuration whereby the batteries charge a capacitor and that runs the heater; at those kinds of numbers it’d need to be at most a two-seconds-off-for-one-second-on deal, but that honestly seems like it should be fine for, y’know, vaping. Might just need an on/off switch to avoid draining the batteries when you’re not using it.
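The back-of-envelope, with the ballpark figures above (all approximate):

```python
# Ballpark duty-cycle arithmetic for the figures above; everything is approximate.
heater_r = 2.0                                   # ohms
liion_v = 4.0                                    # volts, charged Li-ion
heater_power = liion_v ** 2 / heater_r           # P = V^2/R, ~8 W
heater_current = liion_v / heater_r              # I = V/R, ~2 A

# Two lithium AAA cells in series: ~3 V at a 1.5 A max continuous discharge
aaa_pair_power = 3.0 * 1.5                       # ~4.5 W available
duty_cycle = aaa_pair_power / heater_power       # fraction of the time the heater could run

print(f"Heater: {heater_power:.1f} W at {heater_current:.1f} A")
print(f"Duty cycle from 2x AAA: {duty_cycle:.0%} (~1 s on per {1 / duty_cycle - 1:.1f} s off)")

# Energy isn't the problem (2x AAA = ~3.6 Wh vs the ~2 Wh target); peak power is.
```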
But I guess we’re at the point where manufacturing Li-Po cells happens in such vast quantities that the extra electronics to charge a capacitor from a 1.5V battery probably cost more.
What sets some of Boox’s models apart from the other e-readers is they’re full Android devices; you can install most apps from the Play Store. Perhaps not as great for battery life, but a world apart so far as functionality goes (and you can even install the other e-book vendors’ apps if you have existing purchased content).
In the “pocketable” size category there’s the Palma, a phone form-factor device (I have one of these, it has been great); the Page, which looks very much inspired by the design of the Kindle Oasis; or the Tab Mini C, which has a colour e-ink display.
To expand on @doeknius_gloek’s comment, those categories usually directly correlate to a range of DWPD (endurance) figures. I’m most familiar with buying servers from Dell, but other brands are pretty similar.
Usually, the split is something like this:
(For comparison, consumer SSDs frequently have endurance ratings of only 0.1 - 0.3 DWPD, and I’ve seen as low as 0.05.)
You’ll also find these tiers roughly line up with the SSDs that expose different capacities while having the same amount of flash inside; where a consumer drive would be 512GB, an enterprise RI would be 480GB, and a MU/WI only 400GB. Similarly 1TB/960GB/800GB, 2TB/1.92TB/1.6TB, etc.
If you only get a TBW figure, just divide by the capacity and the length of the warranty (in days) to get DWPD. For instance, a 1.92TB drive rated for 1 DWPD with a 5-year warranty might list 3.5PBW.
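The conversion is simple enough to script, matching the example above:

```python
# DWPD <-> TBW conversion, matching the 1.92TB / 1 DWPD / 5-year example above.

def tbw_from_dwpd(capacity_tb: float, dwpd: float, warranty_years: float) -> float:
    return capacity_tb * dwpd * 365 * warranty_years

def dwpd_from_tbw(tbw: float, capacity_tb: float, warranty_years: float) -> float:
    return tbw / (capacity_tb * 365 * warranty_years)

print(f"{tbw_from_dwpd(1.92, 1, 5):,.0f} TBW")      # ~3,504 TBW, i.e. ~3.5 PBW
print(f"{dwpd_from_tbw(3504, 1.92, 5):.2f} DWPD")   # ~1.0
```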
Probably best to look at it as a competitor to a Xeon D system, rather than any full-size server.
We use a few of the Dell XR4000 at work (https://www.dell.com/en-us/shop/ipovw/poweredge-xr4510c), as they’re small, low power, and able to be mounted in a 2-post comms rack.
Our CPU of choice there is the Xeon D-2776NT (https://www.intel.com/content/www/us/en/products/sku/226239/intel-xeon-d2776nt-processor-25m-cache-up-to-3-20-ghz/specifications.html), which features 16 cores @ 2.1GHz, 32 PCIe 4.0 lanes, and is rated 117W.
The 4584PX, ostensibly the top of this range, also with 16 cores but at double the clock speed, 28 PCIe 5.0 lanes, and 120W, seems like it would be a perfectly fine drop-in replacement for that.
(I will note one significant difference: the Xeon does come with a built-in NIC, in this case the 4-port 25Gb “E823-C”, saving you space and PCIe lanes in your system.)
As more PCIe 5.0 expansion options land, I’d expect the need for large quantities of PCIe to diminish somewhat. A 100Gb NIC would only require a x4 port, and even a x8 HBA could push more than 15GB/s. Indeed, if you compare the total possible PCIe throughput of those CPUs, 32x 4.0 is ~63GB/s, while 28x 5.0 gets you ~110GB/s.
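In rough numbers (usable per-lane throughput after encoding overhead, so approximate):

```python
# Approximate usable throughput per PCIe lane (GB/s), after encoding overhead.
GB_S_PER_LANE = {"4.0": 1.97, "5.0": 3.94}

def total_gb_s(gen: str, lanes: int) -> float:
    return GB_S_PER_LANE[gen] * lanes

print(f"Xeon D-2776NT, 32x 4.0: {total_gb_s('4.0', 32):.0f} GB/s")   # ~63 GB/s
print(f"4584PX, 28x 5.0:        {total_gb_s('5.0', 28):.0f} GB/s")   # ~110 GB/s

# 100Gb Ethernet is 12.5 GB/s; a x4 Gen5 slot (~15.8 GB/s) covers it.
print(f"x4 Gen5: {total_gb_s('5.0', 4):.1f} GB/s vs 12.5 GB/s for 100GbE")
```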
Unfortunately, we’re now at the mercy of what server designs these wind up in. I have to say though, I fully expect it is going to be smaller designs marketed as “edge” compute, like that Dell system.