Nvidia does not have a strong history of open sourcing things, to say the least. That last bit sounds like pure hopium
If you use the right color of light (tuned just slightly below the frequency the atoms absorb at), then the Doppler effect means that the atoms will only absorb (and be pushed by) light that they are heading towards, because their motion shifts that light up into resonance. That means the light will always act as a brake for the atoms and never an accelerator, so the gas will cool. If you do this from all directions, the gas will start to stay still in one place and get very close to absolute zero. Idk, I just read the Wikipedia article, but that is my best attempt at an ELI18
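If it helps make "right color" concrete, here's a rough back-of-the-envelope in Python. The rubidium-87 numbers are my own assumption (it's the textbook laser-cooling example), so treat this as a sketch, not gospel:

```python
# Ballpark numbers for Doppler cooling. The rubidium-87 D2 line values
# below are an assumption on my part (a common laser-cooling example).
import math

hbar = 1.0546e-34             # reduced Planck constant, J*s
k_B = 1.3807e-23              # Boltzmann constant, J/K
c = 2.998e8                   # speed of light, m/s

f0 = 384.23e12                # Rb-87 D2 transition frequency, Hz
gamma = 2 * math.pi * 6.07e6  # natural linewidth, rad/s

# An atom moving at speed v toward the laser sees the light blue-shifted
# by roughly f0 * v / c, so a laser tuned slightly below f0 is only
# "seen" on resonance by atoms heading into it.
v = 300.0                     # typical room-temperature atomic speed, m/s
shift = f0 * v / c
print(f"Doppler shift at {v} m/s: {shift / 1e6:.0f} MHz")  # ~384 MHz

# The technique bottoms out at the Doppler limit T_D = hbar*gamma / (2*k_B)
T_D = hbar * gamma / (2 * k_B)
print(f"Doppler cooling limit: {T_D * 1e6:.0f} microkelvin")  # ~146 uK
```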
You’re not wrong that GPU and AI silicon design are tightly coupled, but my point was that both of the GPU manufacturers are dedicating hardware to AI/ML in their consumer products. Nvidia has the tensor cores in its GPUs, which it justifies to consumers with DLSS and RT but which were clearly designed for AI/ML use cases when it introduced them with Turing. AMD has the XDNA AI Engine that it is putting in its APUs, separate from its RDNA GPUs
Fair enough. Was just asking because the choice of company surprised me. AMD is putting "AI Engines" in its new CPUs (a separate silicon design from its GPUs), and while Nvidia largely sells only GPUs, which are less universal, it has had dedicated AI hardware (tensor cores) in its offerings for the past three generations. If anything, Intel is barely keeping up with its competition in this area (for the record, I see vanishingly little value in the focus on AI as a consumer, so this isn’t really a ding on Intel in my books, more so an observation from a market-forces perspective)
Why call out Intel? Pretty sure AMD and Nvidia are both putting dedicated AI hardware in all of their new and upcoming product lines. From what I understand they are even generally doing it better than Intel. Hell, Qualcomm is advertising their AI performance on their new chips and so is Apple. I don’t think there is anyone in the chip world that isn’t hopping on the AI train
How nearby is nearby, though? And, in the context of the proposed use case of defending a crowded stadium in a populated area, does this also put people downrange who could be impaired by the pellets?
You are giving ants way too much credit. Those fuckers are brutal war criminals, the lot of them. Humans are bad, but we’ve had nukes for almost 80 years without glassing ourselves, ants wouldn’t last a day
Blower specifically refers to coolers that are designed to blow air through the GPU heatsink and then out the back of the case.

In contrast, open-air coolers use (typically more numerous and larger) fans to force air at the GPU heatsink without much concern for where it goes after that, so the air ends up partially blown out the back of the case and partially recirculated back into the rest of the case, where the case fans are hopefully promoting enough exchange that ambient temps remain sufficiently low. The recirculation is less than ideal, but it is offset by the larger fans and heatsinks for a typically quieter and cooler solution. The fans can be larger because they are blowing on the larger side/cross-section of the heatsink.

Pass-through coolers are a somewhat newer variant of open-air coolers, common on newer Nvidia cards, that push or pull air through a section of the heatsink that is not blocked by the PCB on one side, so air flows through the heatsink with less back pressure for more efficient dissipation, at the expense of a more compact PCB to fit all the GPU components on
There is a net loss of potable water (or potable-water capacity, if you prefer), which is often the capacity bottleneck before non-potable water due to the infrastructure required to produce it. And according to a comment above, Microsoft is using evaporative coolers, which work precisely by losing water (through evaporation). It’s not a 100% loss rate to the watershed, but it’s not net zero either
I detest defending Comcast, but are you positive it was 1.2 Gbps and not 1.2 GBps? Because 1.2 GBps is about 10 Gbps
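For anyone fuzzy on the distinction, the unit difference is just a factor of eight (1 byte = 8 bits), so a quick sanity check:

```python
# Bits (b) vs. bytes (B): network speeds are usually quoted in bits/s,
# while file transfers are often shown in bytes/s. 1 byte = 8 bits.
advertised_GBps = 1.2
print(f"{advertised_GBps} GBps = {advertised_GBps * 8:.1f} Gbps")  # 9.6 Gbps, i.e. ~10 Gbps
```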
Hi! I just want to say fuck you for making me laugh at such a bad pun. I thought I had taste. I’m devastated