Hi, I’ve been thinking for a few days about whether I should learn Docker or Podman. I know that Podman is more FOSS and I like it more in theory, but maybe it’s better to start with Docker, for which there are a lot more tutorials. On the other hand, maybe it’s better to learn Podman straight away, since I don’t know either of the two and wouldn’t have to change habits later. What do you think? For context: I know how containers work in theory, and I know Linux reasonably well I think, but I’ve never actually used Docker or Podman. In other words: if I want to eventually end up with Podman, is it easier to start with Docker and then learn Podman, or to start with Podman right away? Thanks in advance
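
From what I’ve read, Podman’s CLI deliberately mirrors Docker’s, so most Docker tutorials should transfer almost verbatim; a minimal sketch of what I mean (untested by me, image/name/ports are just examples, and the podman-docker package apparently even provides a docker alias):

podman run -d --name web -p 8080:80 docker.io/library/nginx   # same syntax as `docker run`
podman ps                                                     # same as `docker ps`
podman stop web && podman rm web                              # cleanup, same as with docker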

  • stepanzak@iusearchlinux.fyiOP
    9 months ago

    Do you selfhost stuff on bare metal? I feel like most projects provide containers as their officially supported packages.

    • SaintWacko@midwest.social
      9 months ago

      They’re being useless, but what I do is use Proxmox and install each of my services in its own LXC container.
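
      For a rough idea, creating one of those containers from the Proxmox shell looks something like this (template name, VMID and options are just examples; check pveam for what’s current):

      pveam update                         # refresh the template index
      pveam download local debian-12-standard_12.7-1_amd64.tar.zst
      pct create 101 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
        --hostname myservice --memory 1024 \
        --net0 name=eth0,bridge=vmbr0,ip=dhcp
      pct start 101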

      • TCB13@lemmy.world
        9 months ago

        You’re using LXC… so you may want to learn about Incus/LXD, made by the same people behind LXC; it can work as a full replacement for Proxmox in most scenarios. Here are a few reasons:

        • It sits under the Linux Containers project and is open source;
        • It’s available in Debian 12’s repositories;
        • Unlike Proxmox, it won’t withhold important fixes behind the subscription (paid) repositories;
        • It’s way, way lighter;
        • LXC was hacked into Proxmox (they simply removed OpenVZ from their product and added LXC), and it will never be as compatible and smooth as Incus;
        • It also has a WebUI;

        Why not try it? :)
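
        If you want a quick taste, a minimal sketch on Debian 12 (depending on your release the package may come from backports; the container name is just an example):

        sudo apt install incus
        sudo incus admin init                  # interactive first-time setup
        sudo incus launch images:debian/12 c1  # create and start a container
        sudo incus exec c1 -- bash             # get a shell inside it
        sudo incus list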

    • 2xsaiko@discuss.tchncs.de
      9 months ago

      I use distro packages. In the rare case something isn’t packaged yet, I package it myself. And for isolation, systemd services can do most of the things docker can if you need it (check systemd-analyze security).
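
      For example, to see how exposed your units actually are (the service name is just an example):

      systemd-analyze security                 # exposure score for every loaded service
      systemd-analyze security nginx.service   # detailed per-setting report for one unit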

      For just hosting services that could instead run as normal system services, docker makes your setup a lot more complex (especially on the networking side) for little if any gain. Unless I need to spin something up multiple times temporarily on demand, or something has a hard dependency on docker, I’m not going to bother with it anymore.

      • Victor@lemmy.world
        9 months ago

        Not sure why all the downvotes without any explanation.

        I too don’t use docker for my services. I run Plex on my Arch install via the provided AUR package. 🤷‍♂️ Nobody told me I needed to do otherwise, with docker or anything else. Not sure why that would be better in any way. It could hardly be more performant, right? And it’s as simple as enabling the service and forgetting about it.
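
        For anyone curious, the whole setup is roughly this (assuming an AUR helper like yay; package and service names as they appear on the AUR / in the shipped unit):

        yay -S plex-media-server                      # build and install from the AUR
        sudo systemctl enable --now plexmediaserver   # start now and at every boot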

        • Nibodhika@lemmy.world
          9 months ago

          Maybe they’re taking issue with his answer of “using an OS”, which implies other people somehow aren’t? IDK.

          But as for you: if you’re running just one or two services on a machine you also use for other stuff, using packages and system services is perfectly fine. If you have dedicated hardware for it (or plan on having it), it starts to make sense to look at ways of making things easier for yourself in the long run. Docker solves lots of issues no one’s talking about (because no one is facing them anymore), e.g.:

          • Different services requiring different versions of the same library/database/etc.
          • Moving your service from one computer to another
          • Services requiring specific steps for updates (this is not entirely gone, but it’s much better, and it prevents you from breaking your services with a routine operation like updating your system)
          • Pinning versions of services until you decide to update, without sacrificing system updates for it (I know you can pin a version of a package, but if you don’t upgrade it, it will break when you upgrade its dependencies)
          • Easily mapping ports or blocking access in a generic way, with no need to discover how each service’s config file handles it; you can just do it at the container level, e.g. databases that can’t be accessed from the network or even from within the host machine (they can obviously still be reached from the host system, just not in the traditional way, so a user who gains access to your machine under an account that isn’t allowed to use docker can’t touch them)
          • Isolation between services
          • Isolation from the host machine
          • Reproducibility of services (i.e. one small docker compose file guarantees a reproducible set of services; see the sketch below)
          • Ensuring that no service runs as root on the host (even if it insists on running as root inside its container)
          • Spinning up services in minutes to test stuff, and cleaning them out thoroughly in seconds
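
          To make the reproducibility point concrete, a minimal sketch (image, tag, ports and paths are just examples, not a recommended setup):

          mkdir -p ~/nextcloud && cd ~/nextcloud
          cat > compose.yaml <<'EOF'
          services:
            app:
              image: nextcloud:29
              ports:
                - "8080:80"            # host port 8080 -> container port 80
              volumes:
                - ./data:/var/www/html
              restart: unless-stopped
          EOF
          docker compose up -d   # the same small file brings up the same service on any host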

          There are probably many more reasons to use docker. Plus, once you’ve learned it, it’s very easy for small self-hosted stuff, so there’s really no reason not to use it. Every time I see someone saying they don’t use docker and don’t understand why people use it, I’m a bit baffled; it’s like someone claiming they don’t understand why people use knives to cut bread when the two-handed axe they use for chopping wood works too (like, yes, it does work, but it’s obviously not the best tool for the job).

            • Nibodhika@lemmy.world
              9 months ago

              Yes, I’m aware of that, having written several systemd units for my own services in the past. But you’re not likely to get any of that by default when you just install from the package manager, which is what’s being discussed here; most people will just use the default systemd unit provided, and in the vast majority of cases it doesn’t provide the same level of isolation the default docker compose file does.

              We’re talking about ease of setting things up; anything you can do in docker you can do without it, it’s just a matter of how easy it is to get good defaults. A similar argument to the one you made would be that you can also install multiple versions of databases directly on your OS.

              For example, I’m 99% sure the person I replied to has this unit file for the service:

              [Unit]
              Description=Plex Media Server
              After=network.target network-online.target
              
              [Service]
              # In this file, set LANG and LC_ALL to en_US.UTF-8 on non-English systems to avoid mystery crashes.
              EnvironmentFile=/etc/conf.d/plexmediaserver
              ExecStart=/usr/lib/plexmediaserver/Plex\x20Media\x20Server
              SyslogIdentifier=plexmediaserver
              Type=simple
              User=plex
              Group=plex
              Restart=on-failure
              RestartSec=5
              StartLimitInterval=60s
              StartLimitBurst=3
              
              [Install]
              WantedBy=multi-user.target
              

              That gives some good user isolation, but almost nothing else, and I doubt that someone who argued that installing from the package manager is easier will run systemctl edit on what they just installed to add extra security features, something like the drop-in below.
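
              For the record, it wouldn’t take much; a sketch of a hardening drop-in (the flags and the writable path are assumptions for Plex, so compare systemd-analyze security plexmediaserver before and after, and test):

              sudo mkdir -p /etc/systemd/system/plexmediaserver.service.d
              sudo tee /etc/systemd/system/plexmediaserver.service.d/harden.conf <<'EOF'
              [Service]
              NoNewPrivileges=true
              PrivateTmp=true
              ProtectSystem=strict
              ProtectHome=true
              ReadWritePaths=/var/lib/plex
              EOF
              sudo systemctl daemon-reload
              sudo systemctl restart plexmediaserver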

              • Victor@lemmy.world
                9 months ago

                Can confirm, have this file. Can also confirm I will not be learning unit files, because I don’t know enough to know the provided one is insufficient; the wiki makes no mention of it. You are spot on.

                • Nibodhika@lemmy.world
                  9 months ago

                  Btw, I don’t mean any of that as an insult or anything of the sort; I do the same with the services I install from the package manager, even though I’m aware of those security flags, what they do, and how to add them.

              • TCB13@lemmy.world
                9 months ago

                But you’re not likely to get any of that by default when you just install from the package manager, which is what’s being discussed here

                This is changing… Fedora is planning to enable the various systemd service hardening flags by default, and so is Debian.

                We’re talking about ease of setting things up; anything you can do in docker you can do without it

                Yes, but at what cost? At the cost of being overly dependent on some cloud service / proprietary solution like Docker Hub / Kubernetes? Remember that the alternative is packages from your Linux repository that can be easily mirrored, archived offline, and whatnot.

                • Nibodhika@lemmy.world
                  9 months ago

                  You’re not forced to use Docker Hub or Kubernetes; in fact, I use neither. And if a team chooses to host their images on Docker Hub, that’s their choice; it’s like saying git is bad because Microsoft owns GitHub, or that installing software X from the repos is better than compiling it because you’d need to use GitHub to get the code.

                  Also, docker images can be easily mirrored, archived offline, etc., and they will keep working long after the packages you archived stop working because the base version of some library got updated.
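
                  A sketch of that round-trip (image name and tag are just examples):

                  docker pull nginx:1.27                     # fetch once while online
                  docker save -o nginx-1.27.tar nginx:1.27   # archive to a tarball
                  docker load -i nginx-1.27.tar              # restore on the offline box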

                  • TCB13@lemmy.world
                    9 months ago

                    Yet people choose to use those proprietary solutions and platforms because it’s easier. It’s just like Chrome: there are other browsers, yet people go for Chrome.

                    And it’s significantly harder to archive and have functional offline setups with Docker than it is with an APT repository; it’s a hack, not something Docker was designed for.

          • Victor@lemmy.world
            9 months ago

            Pretty good points. I especially like the no-root and isolation aspects, as well as the reproducibility aspect.

            But I don’t have enough services to warrant learning docker at a deeper level yet, and they aren’t exposed on the internet yet either. Just local services so far. But all of those points are good to consider. Thanks for replying, friend! 🤝

        • SpaceNoodle@lemmy.world
          9 months ago

          People love to hate on people who don’t care for containers.

          Also, I’m guessing that nobody here actually knows what it means to run code on bare metal.

          What you’re doing is fine. No need to make life harder for yourself.

          • Victor@lemmy.world
            9 months ago

            People love to hate on people who don’t care for containers.

            Maybe so. 😕

            what it means to run code on bare metal

            I’m guessing it means something slightly different than what most people think, namely to just run it in the OS. Would you explain to me what it really means?

            • SpaceNoodle@lemmy.world
              9 months ago

              Bare metal would mean without an OS to manage peripherals, resources, even other tasks - like you might find on a resource-constrained embedded system.

            • ImTryingLemmy@lemmy.world
              9 months ago

              The OS is in between the service and the bare metal. Something like OPNsense can be said to be running on bare metal because the OS and the firewall service are so intertwined. However, something like firewalld isn’t running on the bare metal because it’s just a service of the operating system.

              That’s how I understand it anyway, I’m not a pro