That’s a question I always asked myself.
Currently, I’m running Debian on both my servers, but I’m considering switching to Fedora CoreOS, since I already use Fedora Atomic on my desktop and feel very comfortable with it.

There’s always the mentality that using a “stable” host OS is better, for the following reasons:

  • Things not changing means less maintenance, and nothing breaks compatibility all of a sudden.
  • Less chance of breakage.
  • Services are up to date anyway, since they are usually containerized (e.g. with Docker).
  • And, for Debian especially, software availability and documentation are among the best, since it’s THE server OS.

My question is: how many of these pro-arguments will I lose when I switch to something less stable (with more frequent updates), in my case Fedora Atomic?


My pro-arguments in general for it would be:

  • The host OS image is very minimal, and I think the core packages should run very reliably. In the worst case, if something breaks, I can always roll back. Even the desktop OS (Silverblue), which is “bloated” compared to the server image, has run extremely reliably and pretty much bug-free in the past.
  • I can always use Podman/Toolbx, for example, to run services that were made for Debian, and for everything else there’s Docker and more. So software availability shouldn’t be an issue.
  • I feel relatively comfortable using containers, and I think the security benefits in particular sound promising.
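The rollback mentioned above is handled by rpm-ostree on Fedora Atomic hosts, and Debian-targeted services can run in containers via Podman. A minimal sketch of both (standard rpm-ostree and podman commands; the container image and service name are placeholders):

```shell
# List deployments: the booted one plus the previous one kept for rollback
rpm-ostree status

# If an update misbehaves, point the bootloader back at the previous
# deployment and reboot into it
sudo rpm-ostree rollback
sudo systemctl reboot

# A service built for Debian can still run containerized on the host
# (image and command here are just examples)
sudo podman run -d --name myservice debian:bookworm sleep infinity
```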

Cons:

  • I don’t have much experience. Everything I do related to my servers, e.g. getting a new service running, troubleshooting, etc., is hard for me.
  • Because of that, I often don’t have “workarounds” in mind (e.g. using Toolbx instead of installing something on the host directly).
  • Distros other than Debian and a few others aren’t the standard, so documentation and software availability aren’t as good.
  • Containerization adds another layer of abstraction. For example, if my webcam doesn’t work, is it because of a missing driver, Docker, the service, the cable not being plugged in, or something entirely different? Troubleshooting would get harder that way.

On my “proper” server I mainly use Nextcloud, deployed from a Docker image.
My Raspberry Pi, on the other hand, is only used as a print server, running OctoPrint for my 3D printer. I installed OctoPrint in the form of OctoPi, a Raspbian-based distro with OctoPrint pre-installed, which is the recommended way.
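For reference, running Nextcloud from the official Docker image as mentioned above usually boils down to something like this (a sketch; the host port and volume name are assumptions, and a real setup would typically add a database container alongside it):

```shell
# Run the official Nextcloud image with persistent storage
# (port and volume values are examples)
docker run -d --name nextcloud \
  --restart unless-stopped \
  -p 8080:80 \
  -v nextcloud_data:/var/www/html \
  nextcloud:latest
```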

With my “proper” server, I’m not really unhappy with Debian. It works and the server is running 24/7. I don’t plan to change it for the time being.

Regarding the Raspi especially, it looks quite a bit different. I think I will just try it and see if I like it.

Why?

  • It runs only rarely. Most of the time the device is powered off; I only power it on a few times per month when I want to print something. This actually works out well, since the OS needs to reboot to apply updates, and it updates itself automatically, so I don’t have to SSH into it from time to time, which reduces maintenance.
  • And, last but not least, I’ve lost my password. I can’t log in or update it anymore, so I have to reinstall anyway.
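If the Pi ends up on Fedora CoreOS, the automatic reboot-to-update behavior mentioned above is driven by the Zincati agent, which can be pinned to a maintenance window. A sketch of that configuration (keys follow Zincati’s documented periodic strategy; the days and times here are examples):

```shell
# Constrain automatic update reboots to a weekend window
sudo tee /etc/zincati/config.d/55-updates-strategy.toml <<'EOF'
[updates]
strategy = "periodic"

[[updates.periodic.window]]
days = [ "Sat", "Sun" ]
start_time = "22:00"
length_minutes = 60
EOF
```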

What is your opinion about that?

  • Avid Amoeba@lemmy.ca
    10 months ago

    So you’ve listed some important cons. I don’t see the why outweighing those cons. If the why is “I really wanna play with this,” then perhaps that outweighs the cons.

    BTW, on production servers we often don’t do updates at all, because updates can break things in unexpected ways. Instead we apply updates to the base OS in a preproduction environment, build an image out of it, test it, and ship that image to the data centers where our production servers are. We test it some more in a staging environment. Then the update becomes: spin up new VMs in the production environment from the new image and destroy the old ones.
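    The bake-test-replace workflow described above can be sketched like this (every command name here is a hypothetical placeholder for whatever image builder and orchestration tooling is actually in use, not a real toolchain):

```shell
# Hypothetical image-based rollout (all tool names are placeholders)
build-image --base preprod-os-updated --out app-v42.img  # bake after preprod updates
run-tests app-v42.img                                    # validate before shipping
ship-image app-v42.img dc-east dc-west                   # distribute to data centers
launch-vms --image app-v42.img --env staging             # test some more in staging
launch-vms --image app-v42.img --env production          # blue/green: new VMs come up...
destroy-vms --env production --image app-v41.img         # ...then the old VMs go away
```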

    • nottelling@lemmy.world
      10 months ago

      Yup, treating VMs like containers. The alternative, older-school method is cold snapshots of the VM: apply patches/updates (after pre-prod testing & validation), usually in an A/B or red/green phased rollout, and roll back the snapshots when things go tits up.