  • CUDA is a proprietary platform that (officially) only runs on Nvidia cards, so making projects that use CUDA run on non-Nvidia hardware is not trivial.

    I don’t think the consumer-facing stuff can be called a monopoly per se, but Nvidia can easily force proprietary features onto the market (G-Sync before they adopted VESA Adaptive-Sync, DLSS, etc.) because they have such a large market share.

    Assume a scenario where Nvidia has 90% market share and its cards still only support adaptive sync via the proprietary G-Sync solution. Display manufacturers will obviously want to cater to the market, so most displays will release with support for G-Sync instead of VESA Adaptive-Sync, and 9 out of 10 customers will likely buy a G-Sync display as they have Nvidia cards. Now everyone has a monitor supporting some form of adaptive sync. AMD and Nvidia release their new GPU generation and, viewed in isolation (in this hypothetical scenario), AMD cards are 10% cheaper for the same performance and efficiency as their Nvidia counterparts. The problem for AMD is that even though they have the better cards per dollar, 9 out of 10 people would need a new display to get adaptive sync working with an AMD card (because their current display only supports the proprietary G-Sync), and AMD can’t possibly undercut Nvidia by so much that the price difference also covers a new display. The result: 9 out of 10 customers go for Nvidia again.

    To be fair to Nvidia, most of their proprietary features are somewhat innovative. When G-Sync first came out, VESA Adaptive-Sync wasn’t really a thing yet. DLSS was way better than any other upscaler in existence when it released and it required hardware that only Nvidia had.

    But with CUDA, it’s a big problem. Entire software projects just won’t (officially) run on non-Nvidia hardware, so Nvidia can charge whatever they want (unless what they’re charging exceeds the cost of switching to competitor products and, importantly, porting over the affected software projects).
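
    As a rough illustration of what that porting can involve: for plain CUDA source, AMD’s HIP toolchain offers a translation path, roughly along these lines (the file names are made up for the example, and real projects that depend on Nvidia-only libraries are considerably more work):

      # translate CUDA source to HIP (hypothetical file name)
      hipify-perl vector_add.cu > vector_add.hip.cpp
      # build the translated source with the HIP compiler driver
      hipcc vector_add.hip.cpp -o vector_add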





  • To be fair, USB-C, especially with Thunderbolt, is much more universal. There are adapters for pretty much every “legacy” port out there so if you really need FireWire you can have it, but it’s clear why FireWire isn’t built into the laptop itself anymore.

    The top MacBook Pro is also the 2016+ pre-Apple Silicon chassis (which was also used with M chips, but more as a leftover), while the newer MacBook Pro chassis at least brought back HDMI and an SD card reader (and MagSafe as a dedicated charging port, although USB-C still works fine for that).

    Considering modern “docking” solutions only need a single USB-C/Thunderbolt cable for everything, these additional ports only matter when on the go. HDMI comes in handy for presentations for example.

    I’d love to see at least a single USB-A port on the MacBook Pro, but that’s likely never coming back. USB-C to A adapters exist though, so it’s not a huge deal. Ethernet can be handy as well, but most use cases for that are docked anyway.

    I like the Framework concept the most: also “only” 4 ports (on the 13" at least, plus a built-in combo jack), but with adapter cards you can configure them to whatever you need at that point in time, and the cards slide into the chassis instead of sticking out like dongles would. I usually go for one USB-C/Thunderbolt port on each side (so charging works on either side), a single USB-A and video out in the form of DisplayPort or HDMI. Sometimes I swap the video out (which also works via USB-C, obviously) for Ethernet, even though the Ethernet card sticks out. For a (retro) LAN party, I used 1 USB-C, USB-A (with a 4-port hub for wired peripherals), DisplayPort and Ethernet.



  • SMS, iMessage and now RCS have been working well for me and I’ve been (primarily) using iPhones for the past 8 years now.

    The Messages app shows in the text field what type of message (iMessage/SMS/RCS) you’re about to send, and it labels which type each sent or received message is as well.

    One thing I could see going wrong is a phone number still being registered with iMessage because it wasn’t deregistered after switching to an Android phone, for example.

    Another (imo more likely) issue is RCS itself: some carriers don’t seem to handle it too well as of now. iOS seems to have implemented the base standard, while Google added proprietary extensions to said “standard” in Android, like end-to-end encryption. I’ve never had issues sending or receiving RCS messages from/to Android devices, but there might be some hiccups for some people, as RCS - even though it’s called a “standard” - isn’t really standardized.

    Not sure what’s so insane from Apple’s side about any of that.







  • Not sure if you still encounter the issue, but I finally did some trial and error.

    It doesn’t seem to be related to the AMD GPU, as I briefly swapped it out for an Intel Arc A750 and had the same issue. I then went ahead and tried disabling most onboard devices on my mainboard (ASUS ROG Strix B650E-E) and sure enough: that fixed it. I then re-enabled them one by one, trying to wake the PC from sleep each time, and narrowed it down to the on-board Bluetooth.

    Do you happen to have a mainboard that has the “MediaTek MT7922A” (or AMD rebranded variant “AMD RZ616”) Wi-Fi/Bluetooth card? If so, try disabling the Bluetooth portion of it in the BIOS.
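
    If you’re on Linux and not sure which Wi-Fi/Bluetooth card your board uses, something like this can help identify it (the grep patterns are just a rough filter; the exact device strings vary by vendor):

      # the Wi-Fi portion usually shows up as a PCIe device
      lspci -nn | grep -i -E 'network|wireless'
      # the Bluetooth portion is typically attached via internal USB
      lsusb | grep -i -E 'bluetooth|mediatek'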



  • The main thing (by far) degrading a battery is charging cycles. After 7 years with, say, 1,500 cycles, most batteries will have degraded far beyond “80%” (which is always just an estimate from the electronics anyway). Yes, you can help a bit by limiting charging rate, heat and the min/max %, but it’s not going to be a night and day difference. After 7 years of daily use, you’re going to want to swap the battery, if not for the capacity reduction then because of safety concerns.


  • I think I have a simple function in my .zshrc file that updates flatpaks and runs dnf or zypper depending on what the system uses. The file is synced between machines as part of my dotfiles sync, so I don’t have to install anything separate. The interfaces of most package managers are stable, so I haven’t had to touch the function (a rough sketch is at the end of this comment).

    This way I don’t have to deal with a package that’s on a different version in different software repositories (depending on the distribution), or with installing and updating it manually.

    But that’s just me; I tend to keep things as simple as possible for maximum portability and avoid having too many abstraction layers.
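
    For reference, a minimal sketch of what such a function could look like (the function name and exact flags here are just an illustration):

      # update flatpaks, then whichever system package manager is present
      up() {
        command -v flatpak >/dev/null && flatpak update -y
        if command -v dnf >/dev/null; then
          sudo dnf upgrade --refresh
        elif command -v zypper >/dev/null; then
          sudo zypper refresh && sudo zypper update
        fi
      }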



  • Technically, wired charging degrades the battery less than wireless charging, mainly because of the excessive heat generated by the latter. In the same way, slower wired charging generates less heat than fast charging. Lower and upper charging limits also help (the tighter the better).

    But I personally don’t bother with it. In my experience, battery degradation and longevity mostly come down to the “battery lottery”, comparable to the “silicon lottery” where some CPUs overclock/undervolt better than others. I’ve had phone batteries that were mostly charged with a slow wired charger degrade earlier and more than others that were almost exclusively charged wirelessly. No battery is an exact copy of another. Heck, I once had a 2-month-old battery die on me after just ~20 cycles. It happens.

    Sure, on average you might get a bit more life out of your batteries, but in my opinion it’s not worth it.

    The way I see it with charging limits: sure, your battery might degrade 5% more over the span of 2 years when always charging it to 100% (all numbers here are just wild estimates and, again, depend on your individual battery). But when you limit charging to 80%, for example, you give up 20% of the capacity from the get-go (even if the limited battery sits at, say, 90% health after those 2 years instead of 85%, you still only get 0.8 × 90% = 72% of the original capacity on a full charge, compared to 85% when always charging to full). Unless of course you know exactly on which days you need a 100% charge and plan your charging ahead of time accordingly.

    Something I personally could never be bothered with. I want to use my device without having to think about it. If that means having to swap out the battery one year earlier, then so be it.