In a few months, I will have the space and infrastructure to join the self-hosting community. I’m trying to prepare, since I know it can be challenging, but I’ve somehow ended up with more questions than answers.

For context, I want to run a server with torrents, media (Plex, Jellyfin or something else entirely - I haven’t made a decision yet), photos (Immich, if it’s stable, or something else), Rook, Paperless, Home Assistant, Frigate, AdGuard Home… Possibly lots more. Also, I will need storage - I’m planning for 3x18TB drives to begin with, but will certainly be adding more later.

My initial intention was to set up a NAS in a SilverStone CS382 (or a Jonsbo N3/N5, if they’re at a reasonable price). I’ve heard good things about Unraid and its ability to run Docker. On the other hand, I’m hearing good things about Proxmox or NixOS with NAS software running in a VM too - but doing that with Unraid seems hacky. Maybe I should run a NAS and a separate server? That’d be more costly and seems like more maintenance work with no real benefit. Maybe I should go with TrueNAS in a VM? If I don’t do anything other than NAS with it, TrueNAS shouldn’t be that hard to set up, right?

I’m also wondering whether I should go with Intel for QuickSync, AMD, Arc graphics, or something else entirely. I’ve read that AV1 is getting popular - is AMD getting more support there? I will buy Intel if it’s clearly the better option, but I’m team red and would prefer AMD.

Also, could anyone with a non-technical SO tell me how they get on with your self-hosted services? I’ve read about Cloudflare Tunnels and Tailscale, which will be a breeze for me, but I’ve got to think about other users as well.

That’s another concern for me - am I correct in thinking Tailscale and Cloudflare Tunnels are all I need to access the server remotely? I will probably set up a PiKVM (or the RISC-V one) as well - can that be exposed too? I will have a Dream Machine from Ubiquiti, so anything that needs to run for remote access, I can run there. I’m not looking to set up anything more complicated like WireGuard - it’s too much.

For additional context, I’m a software developer, I know my way around Docker and the command line, and I consider myself tech-savvy, but I’m not looking to spend every weekend reading changelogs and doing manual updates. I want to have an upgrade path (that’s why I’m not going with Synology, for example), but I also don’t want to obsess over it. Money isn’t much of an issue - I can spare $1-2k on the build, not including the drives.

Any feedback and suggestions appreciated :)

  • originalucifer@moist.catsweat.com · 18 points · 3 months ago

    one piece i highly recommend is running your torrent solution in a container with the network set to a gluetun container. no fuss, no muss, vpn’d torrenting.
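
    roughly this shape, if it helps. provider, key and paths are placeholders, and gluetun’s exact env vars depend on which vpn you use:

    ```bash
    # vpn container comes up first; the torrent web ui port gets published here
    docker run -d --name gluetun \
      --cap-add=NET_ADMIN --device /dev/net/tun \
      -e VPN_SERVICE_PROVIDER=mullvad \
      -e VPN_TYPE=wireguard \
      -e WIREGUARD_PRIVATE_KEY=changeme \
      -p 8080:8080 \
      qmcgaw/gluetun

    # torrent client joins gluetun's network namespace, so all of its
    # traffic (and its web ui) goes through the vpn container
    docker run -d --name qbittorrent \
      --network=container:gluetun \
      -e WEBUI_PORT=8080 \
      -v /srv/qbittorrent/config:/config \
      -v /srv/downloads:/downloads \
      lscr.io/linuxserver/qbittorrent
    ```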

    for the nas piece, im a big fan of the nas device being single purpose. its life should only exist in fileserving. i have several redundant nas devices and then a big ol app server.

    my goal is the ron popeil method of ‘set it and forget it’

    • NarrativeBear@lemmy.world · 4 points · 3 months ago

      I personally run truenas on a standalone system to act as my NAS network wide. It never goes offline and is up near 24/7 except when I need to pull a dead drive.

      Unraid is my go-to right now for self-hosting, as its learning curve for Docker containers is fairly gentle. I find I reboot that system from time to time, so it’s not something I use as a daily NAS solution.

      Proxmox I run as well, on a standalone system. This is my go-to for VM instances - it’s really easy to spin up any OS I would need for any purpose. I run things like Home Assistant on this machine, for example. And its uptime is 24/7.

      Each operating system has its advantages, and all three could potentially do the same things. Though I do find a containerized approach prevents long periods of downtime if one system goes offline.

      • sorghum@sh.itjust.works · 2 points · 3 months ago

        TrueNAS is switching apps from Kubernetes to Docker. Might be worth waiting till October if you want to spin up something new. I’ve got to figure out how to migrate my TrueCharts apps, or find the equivalents, when the time comes to upgrade.

      • sodamnfrolic@lemmy.sdf.org (OP) · 1 point · 3 months ago

        Why do you need both Unraid and TrueNAS? Don’t they do the same thing? What’s the downside to running TrueNAS in a VM on Proxmox vs. a dedicated machine?

        • NarrativeBear@lemmy.world · 2 points · 3 months ago

          Comes down to personal preference, really. Personally I have been running TrueNAS since the FreeBSD days and it’s always been on bare metal. There’s no reason you could not virtualize it, and I have seen it done.

          I do run pfSense virtualized on my Proxmox machine. It runs great once I figured out all the hardware passthrough settings. I do the same with GPU passthrough for a retro gaming VM on the same Proxmox machine.

          The only thing I don’t like is that when you reboot your Proxmox machine, the PCI devices don’t retain their mapping IDs. So a PCI NIC I have in the machine causes the pfSense VM not to start.

          The one thing to take into account with Unraid vs TrueNAS is the difference in how they do RAID. Unraid allows drives of different sizes in its setup, but it does not provide the same redundancy as TrueNAS. TrueNAS requires disks to be the same size inside a vdev, but you can have multiple vdevs in one large pool. One vdev can be 5 drives of 10TB and another vdev can be 5 drives of 2TB. You can always swap any drive in TrueNAS with a larger one, but it will only be used as if it were the size of the smallest disk in the vdev.

      • bobs_monkey@lemm.ee · 1 point · 3 months ago

        What hardware are you running your truenas setup on? I have an old computer that I’ve had freenas on that finally died.

        • NarrativeBear@lemmy.world · 2 points · 3 months ago

          Intel Core i5-750 @ 2.67GHz with 16GB of RAM and 165TB of storage. The motherboard is an Asus Deluxe, 10+ years old. And a 10Gb NIC. All inside a Fractal Design XL case.

          The hardware is by no means top of the line, but you don’t need much for a NAS.

    • myersguy@lemmy.simpl.website · 1 point · 3 months ago · edited

      im a big fan of the nas device being single purpose. its life should only exist in fileserving. i have several redundant nas devices and then a big ol app server.

      This is the way. Except my “big ol’ app server” is an n95 mini pc that sips power.

    • sodamnfrolic@lemmy.sdf.org (OP) · 1 point · 3 months ago

      What device do you use for your NAS? I’m looking to have a TB of RAID 0 SSD cache, and if I were to have a dedicated NAS, I would probably go for something with an ITX mobo, or something like a Ugreen NAS running Unraid. Doesn’t the hardware needed for a performant NAS go underutilized then?

      • originalucifer@moist.catsweat.com · 1 point · 3 months ago

        well, i do try and keep all my data hot in the server… i am using the devices as redundancy, so no raid anywhere. 24TB with 3 copies.

        ive got a couple of 6-bay readynas i got second hand and a plain ol’ ubuntu workstation i crammed with 6 drives and an ssd.

        the app server runs off 6 local disks (and an ssd for the os), which are replicated to the 2 nas boxes as logical copies (they pull); the workstation gets a drive-for-drive physical copy.

        rsync is your friend.
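
        the pull jobs are nothing fancy, roughly this in cron on each nas (host and paths made up):

        ```bash
        # mirror the app server's data share, keeping perms/acls/xattrs and
        # deleting anything that was removed at the source
        rsync -aHAX --delete --partial appserver:/srv/data/ /mnt/pool/replica/
        ```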

  • schizo@forum.uncomfortable.business · 15 up / 1 down · 3 months ago

    I just went with a plain boring Ubuntu box, because all the “purpose built” options come with compromises.

    Granted, this is about as hard-mode as it can get, but on the other hand I have 100% perfect support for any damn thing I feel like using, regardless of what state of support some more specialized OS has for the thing in question.

    I probably wouldn’t recommend this if you’re NOT very well versed in Linux sysadmin stuff, and probably wouldn’t recommend it to anyone who doesn’t have any interest in sometimes having to fix a broken thing, but I’m 3 LTS upgrades, two hardware swaps, a full drive replacement, and most of a decade into this build, and it does exactly what I want 95% of the time.

    I would say, though, that containerizing EVERYTHING is the way to go. Worst case, you blow up a single service and the other dozen (or two, or three…) keep right on running like you did absolutely nothing. I can’t imagine maintaining the 70ish containers I’m running without them actually being containers and/or without me being a complete nutcase that runs around the house half naked muttering about the horrors of updates.
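
    For what it’s worth, if each service is its own Compose stack, keeping the whole pile current mostly boils down to a loop like this on a schedule (the directory layout is just an example):

    ```bash
    # walk every stack, pull newer images, recreate whatever changed
    for d in /srv/stacks/*/; do
      (cd "$d" && docker compose pull && docker compose up -d)
    done
    docker image prune -f   # drop the superseded images
    ```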

    I’m not anti-Cloudflare, so I use a mix of tunnels, their normal proxy, as well as some rawdogging of services with direct port forwards/a local nginx reverse proxy.

    Different services, different needs, different access methods.

    • Avid Amoeba@lemmy.ca · 3 up / 1 down · 3 months ago

      This is the way. I’ve been running this machine since Ubuntu 14.04 LTS. The platform swapped from an AMD Phenom, to an Intel i7, to an AMD Ryzen, now to a bigger Ryzen. SSDs went from a single SATA drive, to NVMe, to a 512G NVMe mirror, to a 1T NVMe mirror. The storage went from a single 4T disk, to an 8T mirror in a NAS, to an 8T directly attached mirror, to 24T RAIDz, to 48T RAIDz. I’ve now activated the free Ubuntu Pro tier, so if Canonical is still around in 2032, this machine can operate for another 8 years with just hardware swaps on failure.

  • Telodzrum@lemmy.world · 9 points · 3 months ago · edited

    I have been extremely happy with Unraid. It is by far the most beginner-friendly option and there isn’t an easier solution when it comes to expanding capacity. I run my nzb client and all of my *arr containers on it. My media server is on a used SFF PC I grabbed for cheap — so QuickSync can run on the bare metal. It’s been a great stack for years.

  • mspencer712@programming.dev · 7 points · 3 months ago

    Married, we both work from home, and we’re in an apartment.

    First, none of my weird stuff sits between her work and living-room PCs and the internet. The cable modem connects to a normal consumer router (OpenWrt) with four LAN ports. Two of those are directly connected to her machines (requiring a 150-ish-foot cable for one), and two connect to my stuff. All of my stuff can be down and she still has internet.

    Second, no rack mount servers with loud fans, mid tower cases only. Through command line tools I’ve found some of these are in fact capable of a lot of fan noise, but this never happens in normal operation so she’s fine with it.

    Separately I’d say, have a plan for what she will need if something happens to you. Precious memories, backups, your utility and service accounts, etc. should remain accessible to her if you’re gone and everything is powered off - but not accessible to a burglar. Ideally label and structure things so a future internet installer can ignore your stuff and set her up with normal consumer internet after your business internet account is shut off.

    Also keep in mind that if you both switch over so every movie and show you watch only ever comes from Plex (which we both like), in an extended power outage all of your media will be inaccessible. It might be good to save a few emergency-entertainment shows to storage you can browse from your phone - a USB or iXpand drive you can plug directly into your phone, for example.

  • catloaf@lemm.ee · 5 points · 3 months ago

    I have one mini-ATX server with four drives in RAID 10. I find it easier to manage everything in one device. It runs Proxmox, with Almalinux in a VM that runs my Docker containers. Yes, it’s a layer of inefficiency, but I keep it that way partially because I migrated the VM to Proxmox from ESXi, and partially because I’m not confident in LXC being able to do everything Docker can.

    I also run it that way because I have a handful of other VMs.

  • NuXCOM_90Percent@lemmy.zip · 5 points · 3 months ago · edited

    A LOT of questions there.

    Unraid vs Truenas vs Proxmox+Ceph vs Proxmox+ZFS for NAS: I am not sure if Unraid is ONLY a subscription these days (I think it was going that way?) but for a single machine NAS with a hodgepodge of drives, it is pretty much unbeatable.

    That said, it sounds like you are buying dedicated drives. There are a lot of arguments for not having large spinning disk drives (I think general wisdom is 12 TB is the biggest you should go for speed reasons?), but at 3x18 you aren’t going to really be upgrading any time soon. So Truenas or just a ZFS pool in Proxmox seems reasonable. Although, with only three drives you are in a weird spot regarding “raid” options. Seeing as I am already going to antagonize enough people by having an opinion, I’ll let someone else wage the holy war of RAID levels.

    I personally run Proxmox+Ceph across three machines (with one specifically set up to use Proxmox+ZFS+Ceph so I can take my essential data with me in an evacuation). It is overkill and Proxmox+ZFS is probably sufficient for your needs. The main difference is that your “NAS” is actually a mount that you expose via SMB and something like Cockpit. Apalrd did a REALLY good video on this that goes step by step and explains everything and it is well worth checking out https://www.youtube.com/watch?v=Hu3t8pcq8O0.
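
    For reference, the bare-bones version of that (ignoring what Cockpit’s file-sharing plugin does for you in the UI) is just a dataset plus a Samba share on the host - names below are placeholders:

    ```bash
    # carve a dataset out of the pool and share it over SMB
    zfs create tank/share
    apt install -y samba

    # append a share stanza to /etc/samba/smb.conf:
    #   [share]
    #      path = /tank/share
    #      read only = no
    #      valid users = youruser

    smbpasswd -a youruser      # the user must already exist on the host
    systemctl restart smbd
    ```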

    Ceph is always the wrong decision. It is too slow for enterprise and too finicky for home use. That said, I use ceph and love it. Proxmox abstracts away most of the chaos but you still need to understand enough to set up pools and cephfs (at which point it is exactly like the zfs examples above). And I love that I can set redundancy settings for different pools (folders) of data. So my blu ray rips are pretty much YOLO with minimal redundancy. My personal documents have multiple full backups (and then get backed up to a different storage setup entirely). Just understand that you really need at least three nodes (“servers”) for that to make sense. But also? If you are expanding it is very possible to set up the ceph in parallel to your initial ZFS pool (using separate drives/OSDs), copy stuff over, and then cannibalize the old OSDs. Just understand that makes that initial upgrade more expensive because you need to be able to duplicate all of the data you care about.
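
    To make the per-pool redundancy bit concrete, it is roughly this on the CLI (pool names invented; Proxmox’s UI covers most of it anyway):

    ```bash
    # bulk media: two copies is plenty, losing it is an inconvenience
    ceph osd pool create media 64
    ceph osd pool set media size 2
    ceph osd pool set media min_size 1

    # documents: keep three full replicas across the nodes
    ceph osd pool create documents 32
    ceph osd pool set documents size 3
    ```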

    I know some people want really fancy NASes with twenty million access methods. I want an SMB share that I can see when I am on my local network. So… barebones cockpit exposing an SMB share is nice. And I have syncthing set up to access the same share for the purpose of saves for video games and so forth.

    Unraid vs Truenas vs Proxmox for Services: Personally? I prefer to just use Proxmox to set up a crapton of containers/vms. I used Unraid for years but the vast majority of tutorials and wisdom out there are just setting things up via something closer to proxmox. And it is often a struggle to replicate that in the Unraid gui (although I think level1techs have good resources on how to access the real interface which is REALLY good?).

    And my general experience is that truenas is mostly a worst of all worlds in every aspect and is really just there if you want something but are afraid of/smart enough not to use proxmox like a sicko.

    Processor and Graphics: it really depends on what you are doing. For what you listed? Only frigate will really take advantage and I just bought a Coral accelerator which is a lot cheaper than a GPU and tends to outperform them for the kind of inference that Frigate does. There is an argument for having a proper GPU for transcoding in Plex but… I’ve never seen a point in that.

    That said: A buddy of mine does the whole vlogger thing and some day soon we are going to set up a contract for me to sit down and set her up an exporting box (with likely use as a streaming box). But I need to do more research on what she actually needs and how best to handle that and she needs to figure out her budget for both materials and my time (the latter likely just being another case where she pays for my vacation and I am her camera guy for like half of it). But we probably will grab a cheap intel gpu for that.

    External access: Don’t do it, that is a great way to get hacked.

    That out of the way. My nextcloud is exposed to the outside world via a cloudflare tunnel. It fills me with anxiety but as long as you regularly update everything it is “fine”.

    My plex? I have a lifetime plex pass so I just use their services to access it remotely. And I think I pay an annual fee for homeassistant because I genuinely want to support that project.

    Everything else? I used to use WireGuard (and OpenVPN before it) but eventually switched to Tailscale. I like the control the former provided, but much prefer the model where I expose individual services (well, VMs). It is nice to have access to my Cockpit share when I want to grab a file in a hotel room, but there is zero reason anything needs access to my qBittorrent or Calibre or OPNsense setup. Let alone even seeing my desktop that I totally forgot to turn off.

    But the general idea I use for all my selfhosted services is: The vast majority of interactions should happen when I am at home on my home network. It is a special case if I ever need to access anything remotely and that is where tailscale comes in.

    Theoretically you can also do the same via wireguard and subnetting and vlans but I always found that to be a mess to provide access both locally and remotely and the end result is I get lazy. Also, Tailscale is just an app on basically any machine whereas wireguard tends to involve some commands or weird phone interactions.
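
    And the Tailscale side really is about this much work per machine or VM (the serve step is optional, and its flags have shifted a bit between releases):

    ```bash
    curl -fsSL https://tailscale.com/install.sh | sh
    sudo tailscale up                # log the node into your tailnet

    # optionally publish one local service over HTTPS inside the tailnet
    sudo tailscale serve --bg 8080
    ```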

    • ahal@lemmy.ca · 1 point · 3 months ago

      You can still buy a lifetime licence for Unraid - it’s just a lot more expensive.

    • sodamnfrolic@lemmy.sdf.org (OP) · 1 point · 3 months ago

      I only said 3 drives because supposedly it’s better than 2, which would mean simple mirroring - I’ll cross that bridge when I get to it, shouldn’t be hard to get answers there.

      Is Cloudflare Tunnels really this problematic? I thought Tunnels and Tailscale would be safe… If I can’t expose those services, I’d rather pay for SaaS alternatives.

      Thank you for the other tips.

      • NuXCOM_90Percent@lemmy.zip · 2 up / 1 down · 3 months ago · edited

        More drives is always better. But you need to understand how you are making it better.

        https://en.wikipedia.org/wiki/Standard_RAID_levels is a good breakdown of the different RAID levels. Those are slightly different depending on if you are doing “real”/hardware RAID or software raid (e.g. ZFS) but the principle holds true and the rest is just googling the translation (for example, Unraid is effectively RAID4 with some extra magic to better support mismatched drive sizes)

        That actually IS an important thing to understand early on. Because, depending on the RAID model you use, it might not be as easy as adding another drive. Have three 8 TB and want to add a 10? That last 2 TB won’t be used until EVERY drive has at least 10 TB. There are ways to set this up in ZFS and Ceph and the like but it can be a headache.
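
        With ZFS, for example, growing by swapping disks looks roughly like this (device names made up) - the extra space only shows up once the last small disk is gone:

        ```bash
        zpool set autoexpand=on tank    # let the vdev grow once all members are bigger
        zpool replace tank sda sdd      # 8 TB -> 10 TB, wait for the resilver
        zpool replace tank sdb sde
        zpool replace tank sdc sdf      # only now does the usable space increase
        ```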

        And the issue isn’t the cloudflare tunnel. The issue is that you would have a publicly accessible service running on your network. If you use the cloudflare access control thing (login page before you can access the site) you mitigate a lot of that (while making it obnoxious for anything that uses an app…) but are still at the mercy of cloudflare.

        And understand that these are all very popular tools for a reason. So they are also things hackers REALLY care about getting access to. Just look up all the MANY MANY MANY ransomware attacks that QNAP had (and the hilarity of QNAP silently re-enabling online services with firmware updates…). Because using a botnet to just scan a list of domains and subdomains is pretty trivial and more than pays for itself after one person pays the ransom.

        As for paying for that? I would NEVER pay for nextcloud. It is fairly shit software that is overkill for what people use it for (file syncing and document server) and dogshit for what it pretends to be (google docs+drive). If I am going that route, I’ll just use Google Docs or might even check out the Proton Docs I pay for alongside my email and VPN.

        But for something self hosted where the only data that matters is backed up to a completely different storage setup? I still don’t like it being “exposed” but it is REALLY nice to have a working shopping list and the like when I head to the store.

  • monkeyman512@lemmy.world · 3 points · 3 months ago · edited

    What my setup will soon be for hardware: Gen 2 AMD Epyc 16-core CPU, Supermicro motherboard with lots of PCIe slots, 128GB of RAM, Intel Arc A40 GPU, and an HBA card attached to a Supermicro disk shelf.

    Software: Proxmox for the host OS, TrueNAS Scale (just NAS) in a VM with the HBA card passed through, Plex in a VM with the Intel GPU passed through, and 3 VMs for Docker Swarm (headless Debian).

    Other thoughts: Cloudflare will only be helpful for things you want exposed to the internet. If you do that, make sure you have a reverse proxy. This is how I expose services for non-tech family.
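
    If you go the Cloudflare Tunnel route, the CLI side is roughly this (hostname, service and the credentials path are placeholders - the tunnel ID comes from the create step, and the dashboard can do all of this too):

    ```bash
    cloudflared tunnel login
    cloudflared tunnel create homelab
    cloudflared tunnel route dns homelab jellyfin.example.com

    # ~/.cloudflared/config.yml maps the public hostname to the local service:
    #   tunnel: homelab
    #   credentials-file: /root/.cloudflared/<tunnel-id>.json   # written by "tunnel create"
    #   ingress:
    #     - hostname: jellyfin.example.com
    #       service: http://localhost:8096
    #     - service: http_status:404

    cloudflared tunnel run homelab
    ```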

    VPN will be more secure, but can also be more of a pain. I generally only do that for things only I need or only techy savvy people will use.

  • Decronym@lemmy.decronym.xyz (bot) · 3 points · 3 months ago · edited

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    ESXi: VMware virtual machine hypervisor
    HTTP: Hypertext Transfer Protocol, the Web
    IP: Internet Protocol
    IoT: Internet of Things for device controllers
    LTS: Long Term Support software version
    LXC: Linux Containers
    NAS: Network-Attached Storage
    NFS: Network File System, a Unix-based file-sharing protocol known for performance and efficiency
    NUC: Next Unit of Computing brand of Intel small computers
    NVMe: Non-Volatile Memory Express interface for mass storage
    Plex: Brand of media server package
    RAID: Redundant Array of Independent Disks for mass storage
    SATA: Serial AT Attachment interface for mass storage
    SMB: Server Message Block protocol for file and printer sharing; Windows-native
    SSD: Solid State Drive mass storage
    VPN: Virtual Private Network
    ZFS: Solaris/Linux filesystem focusing on data integrity
    nginx: Popular HTTP server

    [Thread #943 for this sub, first seen 31st Aug 2024, 15:55] [FAQ] [Full list] [Contact] [Source code]

  • linearchaos@lemmy.world · 3 points · 3 months ago

    I’m running something surprisingly close to most of what you’re asking for, sans Immich - I’m waiting for it to stabilize first. The warning at the top of their site that says it’s under constant development and not to use it as your primary picture store is a bit worrisome.

    Unraid with 2 video cards

    • Plex container (primary video card)
    • Plex VM (the passed-through secondary card handles DVR and backups; it’s also my Steam Remote Play host)
    • Home Assistant VM (running it in a VM is nicer than a container because of HAOS)
    • Jellyfin container
    • All the video services pull from the same catalog. I use Jellyfin frequently but secondarily; it’s my backup in case Plex heads in a direction I don’t like. They’ve already shown some indications I’m not going to like them in the future.
    • Deluge+VPN container
    • Cloudflare container (first setup is actually a pain in the ass)
    • Tailscale plugin
    • SearXNG container (self-hosted search engine)
    • Pi-hole in a container
    • Pi-hole on a Raspberry Pi

    Plex gets accessed remotely via its own remote capabilities

    Jellyfin gets accessed remotely via tailscale

    SearXNG is accessed remotely via Cloudflare

    I have a secondary Plex server sitting on a raspberry pi with the backup pi hole

    I am preparing to set up a peertube. Haven’t had a lot of luck with the container on unraid. I run a fair amount of proxmox at work so I’ll probably just use proxmox for it.

    I run a separate dedicated system completely for my cameras. Not running Frigate yet, but I’ll get around to it eventually; using Blue Iris at the moment.

    My unraid gets as much uptime as updates allow. I love being able to just jbod my media discs together and still have some protection with parity.

    I find the containerized version of Plex to be more stable than my VM version but that’s probably my own fault as I’m oversubscribing the vm.

  • ahal@lemmy.ca · 5 up / 2 down · 3 months ago · edited

    I’m currently using Unraid for pretty much everything you listed, and I love it so much. I really appreciate being able to set up almost everything through the web interface. It makes my hobbies feel fun rather than just an extension of my day job.

    That said, I bought the licence before they switched to a subscription model. So if I were starting over I might look into free alternatives.

  • Atherel@lemmy.dbzer0.com · 2 points · 3 months ago

    Was there 6 months ago, so I’ll just share what I did. OS: I went with Unraid because you can mix different-sized HDDs without losing space; just make sure the parity disk is the same size as or bigger than the biggest data disk. I back up everything with Duplicacy to a stupid NAS (a WD MyBook I got second hand) and to external hosting via SSH.

    Most of the CPU is used for video transcoding, so I went with a 12th-gen i3-12100; it’s more than enough for my usage. Just don’t make the same mistake I did… I really recommend a better cooler than the boxed one. It can get loud when Unmanic starts converting bigger videos to H.265.

    My normal PC is fully team red, as it just works better on Linux for gaming, but for a NAS, 12th-gen Intel seems to be the way to go as far as my research shows.

    I don’t use a GPU, and the slot for it is used for a Dell PERC H310 SAS controller in IT mode for more disks.

    Most services are not exposed, and I use WireGuard to access my server remotely. Individual Docker services are exposed with Nginx Proxy Manager and DynDNS. My domain is set to resolve to local IP addresses when at home or on the VPN, so I can always use the same hostnames with valid certificates. I use a simple bash script in a cron job to update my DNS zone.
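
    The script itself is provider-specific, but the general shape is just “find my public IP, push it to the DNS API” - something like this if the zone happened to be on Cloudflare (IDs and token are placeholders):

    ```bash
    #!/usr/bin/env bash
    # cron: */5 * * * * /usr/local/bin/update-dns.sh
    set -euo pipefail

    TOKEN="changeme"               # API token
    ZONE_ID="changeme"
    RECORD_ID="changeme"
    HOST="home.example.com"

    IP=$(curl -fsS https://api.ipify.org)    # current public IPv4

    # overwrite the A record with the current address
    curl -fsS -X PUT \
      "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records/${RECORD_ID}" \
      -H "Authorization: Bearer ${TOKEN}" \
      -H "Content-Type: application/json" \
      --data "{\"type\":\"A\",\"name\":\"${HOST}\",\"content\":\"${IP}\",\"ttl\":120}"
    ```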

    I have other hardware to play around with and did work with Proxmox and other solutions, but this NAS had to just work without a lot of tinkering, and I’m really happy with it.

  • hperrin@lemmy.world · 3 up / 1 down · 3 months ago · edited

    The way I’ve done it is Ubuntu Server with a bunch of Docker Compose stacks, one for each service I run. They all get their own subdomain, which runs through the Nginx Proxy Manager service to forward to the right port. The Portainer service lets me inspect things and poke around, but I don’t manage anything through it. I want it all to be super portable, so if Ubuntu Server becomes too annoying, I can pack it all up and plop it into something like Fedora Server.
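
    Most of that portability comes from the layout - one directory per service, with everything it owns next to its compose file (paths here are just an example):

    ```bash
    # ~/stacks/npm/docker-compose.yml, ~/stacks/jellyfin/docker-compose.yml, ...
    # each stack keeps its data as bind mounts inside its own directory

    cd ~/stacks/jellyfin && docker compose up -d    # (re)start one service

    # moving hosts is basically copying the tree and starting it again
    rsync -a ~/stacks/ newserver:stacks/
    ```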

  • pwet@lemmynsfw.com · 2 points · 3 months ago

    After years of messing around with cheap and unreliable hardware and complicated setups, I settled on a very stable and simple setup: one huge Dell server with a lot of spare SAS bays and plenty of empty memory slots, driven by Proxmox. Within it, only 4 VMs: one for pfSense, one for Home Assistant, one for Docker, and one for ISPConfig, as I host for some friends. I ended up ditching TrueNAS, as it was such a pain to maintain and totally useless for my use case. Proxmox is good enough to run a simple ZFS NAS if you don’t need to manage dozens of shares and users. It’s now so hassle-free that I’m starting to feel inclined to break something just for the sake of it.

    • ripcord@lemmy.world · 2 points · 3 months ago · edited

      Which model Dell?

      Buying few-year old enterprise gear can be a really cost-effective way to get a ton of power and expandability. But the noise, footprint, and power requirements seem pretty niche, even for homelab/selfhost people.

      But I’m curious if you’re talking about a full-depth rack system like I’m assuming, or something else.

      Personally, I switched to a handful of very small-footprint systems (mostly NUC/SFF PCs, and some laptops). And use cheap jbod enclosures when I need to add external storage.

      • pwet@lemmynsfw.com · 2 points · 3 months ago

        It’s an R730XD. It draws 168W idle with 128GB of 2400MHz RAM and 8 x 3.5" spinning drives. I started with small desktop computers, but I ended up compromising on everything: RAM, disks, cards. Everything was breaking one after another, mostly because of heat, I would think. I (and my family) was constantly annoyed by the outages, so I invested in a proper rack in my garage. It’s sometimes noisy, it’s somewhat power hungry, but god… professional hardware is so comfortable to work with. iDRAC, IPMI, very good temperature management, lots of room for upgrades, reliability - I wouldn’t go back to the nightmare of half-assed computers. It’s not only the hardware side, to be honest. Using Traefik has been a massive improvement that eases my reverse proxying. Finally getting rid of TrueNAS was a huge relief. And switching from being a hardcore 20-year Gentoo user to a Portainer noob was a clever move to finally get some time to use the services I host instead of messing around with hundreds of config files.

        By the way, I do not understand the huge paranoia about exposing services to the internet. I’m happy to share my mail, websites, Jellyfin, cloud services and whatever else with everyone interested. In the more than 30 years I’ve been online, I’ve never been hacked in any way. I might be lucky.

  • n4sdaq@lemmy.dbzer0.com · 4 up / 2 down · 3 months ago

    Been running Unraid for almost a year. I was previously running Windows with nearly zero insight into the health of my apps, RAID, etc. Made me very nervous. Unraid makes it all so easy.

    I’m running many of the apps you mentioned and the implementation of docker on Unraid is easy to install, update, etc. I used docker on Windows but it was not the same. I’m not a software dev, so I’m not sure why you said Unraid’s docker implementation is hacky. It seems good to me.

    The reason I switched to Unraid was that I had to add more storage to my RAID, and that was impossible in Windows without destroying the RAID and losing my data. I considered TrueNAS, but my understanding is the same is true there. They’re supposed to be adding that capability soon™️, but who knows when that’ll actually be available and reliable. Unraid lets you add more storage whenever, and the drives don’t have to match. I love the flexibility.

    I use Nginx Proxy Manager docker to access my apps externally. My SO is not tech savvy and after setting up the individual apps with the domain I have, it’s usually smooth sailing. If I ever need to do any mucking about with the server itself, I turn on UI teleport. I also have a PiKVM but have only needed to use it a couple times. It’s just not necessary with how reliable Unraid is.

    My server has an i5-6600 and utilizes QuickSync, which is great. More energy efficient than a dedicated GPU. I’ve considered adding a GPU but I haven’t run into a situation where I need it.
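
    In container terms, using QuickSync mostly just means handing /dev/dri to the media server - e.g. with Jellyfin (paths are placeholders; in Unraid’s template the equivalent is adding --device=/dev/dri under extra parameters):

    ```bash
    # Jellyfin as an example; Plex is the same idea
    docker run -d --name jellyfin \
      --device /dev/dri:/dev/dri \
      -v /mnt/user/appdata/jellyfin:/config \
      -v /mnt/user/media:/media \
      -p 8096:8096 \
      lscr.io/linuxserver/jellyfin
    ```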

    Tldr: highly recommend Unraid.

  • Justin@lemmy.jlh.name · 5 up / 3 down · 3 months ago

    Unraid is bad at being a NAS and bad at Docker. Go with a separate NAS and application server.