Software: first and foremost, it must be unix-like and able to communicate both ways with an OpenWrt router and the devices on the local network (Android, Windows, Linux, iPadOS systems). Must be very secure, enterprise-grade or close to it. Must be free and open-source. Must be somewhat fault-tolerant (so no Arch or Gentoo or anything like that, i don’t feel like recompiling the server’s system daily). Must have these in the base repos or easily installable some other way: a secure ssh client (OpenSSH or that caliber), software that lets me securely control and see the server’s gui from Android (RustDesk or similar), (optionally i2p, dnscrypt, vpn clients; not needed if the router has them, just in case of emergency), ip camera management software, a high-security intrusion-detection system, an https server with css and js support (preferably command-line). Window manager: must support a very easy to use and lightweight tiling window manager (like i3wm), or if not, its installation and configuration needs to be possible and documented.
Hardware: affordable, x86_64 architecture, should be able to handle all of this at the same time without freezing or overheating (i live in Hungary, so it should handle up to 40°C air temperature with stock fans, or there should be space for more fans; liquid cooling is a no-go).
I have considered these operating systems. Are any of these bad ideas? What you recommend that is not here?
AlmaLinux, Alpine Linux, Ubuntu Server, Rhino Linux (unofficial Ubuntu rolling), Debian Testing, Void Linux, FreeBSD
I would always recommend good old Debian for a mostly “it just works” experience. You’ll find Debian packages for most if not all of the things you mentioned. Alternatively you could go the steeper route and use an immutable OS like Fedora CoreOS or Fedora Silverblue for a more desktopy experience.
Hardware-wise I’ve been told the Intel NUC kits work wonders, or similarly specced boxes from Asia. You might get 32GB RAM and a fairly recent CPU for <400€. Personally I’m using a 12-year-old Mac Mini until it dies, running Debian.
i have all of those things on my ubuntu server, but i wouldn’t characterize all of them as enterprise-grade. my ubuntu server is based on off-the-shelf hardware from a couple years ago and it does all of those things that you described, plus more.
it’s my wifi router, my data storage backup, my homemade security system, my media server, & my cat’s favorite warm spot, all within a tiny case the size of a toaster with lots of hard drives. it runs 2 kvm/qemu virtual machines on top of the bare iron, and both use pci passthrough: the first is based on the pfSense soft firewall & router and also serves to air-gap the bare-iron server from the internet, and the second is windows 10 and provides wifi 6 & 7 speeds with the windows ap driver.
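to give a rough idea of what the passthrough side looks like (the pci address, device id and vm/file names here are made up, not my actual config; check your own with lspci -nn):

```shell
# detach the wifi card from the host and hand it to vfio-pci (example IDs)
modprobe vfio-pci
echo "0000:03:00.0" > /sys/bus/pci/devices/0000:03:00.0/driver/unbind
echo "8086 2725" > /sys/bus/pci/drivers/vfio-pci/new_id

# then attach it to the guest as a hostdev; the XML file describes the PCI device
virsh attach-device win10-ap wifi-hostdev.xml --config
```

all of this needs root and an IOMMU-capable board with VT-d/AMD-Vi enabled, so treat it as a sketch, not a recipe.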
i wouldn’t describe any of it as enterprise-grade since it’s all a bit hacky: for example, the server is mostly headless, but i did install the X server & vnc because i use the motion project along with a bunch of old androids to create a homemade security monitoring system, and that requires a browser. this means i can now access the server’s gui anywhere i want, but it’s subject to vnc’s limitations.
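if anyone wants to try the same motion setup, it’s basically one config file; something like this (the camera URL, paths and ports are examples, check the motion docs for the options your version supports):

```
# /etc/motion/motion.conf - minimal sketch, all values illustrative
daemon on
# an old android phone running an IP-camera app exposes an mjpeg URL like this
netcam_url http://192.168.1.50:8080/video
width 1280
height 720
framerate 25
# where motion-detected event clips/images get saved
target_dir /var/lib/motion
# live stream viewable from a browser on the LAN
stream_port 8081
stream_localhost off
```

one config section like this per camera, and the browser just points at the stream port.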
however the things that come from the soft firewall are definitely enterprise-grade: the vpn works well, i can use both it and ssh from anywhere in the world to access my home network, and i could theoretically add a remote check-in capability from a new project that reacts to incoming connections.
the only thing i don’t think it could handle is the high temperatures; the case is compact so i doubt its thermals are any good.
so i should avoid mini and compact pcs then, right?
for heat: you’re either going to want a blade server with lots of fans or a big, mostly empty case.
the size of a toaster with lots of harddrives
a pretty big toaster then, isn’t it?
it’s the length of 3 spinning-platter hard drives and the width of 3 of them; it’s smaller than my actual toaster.
well yeah, actually it depends on how they are oriented. if they are “hanging” with the connectors facing the bottom/top, it does not make the box big in the dimensions I imagined.
I run Slackware on all my servers
i heard it is extremely hard to use. is it true?
Preface: Not the person you responded to.
I’ve never used Slackware myself, but it’s probably the oldest distribution out there. It’s supposed to be stable AF, doesn’t “fix” what ain’t broken, and is very old school in its efficiency mindset. This means it’s indeed not likely to hold your hand through things, but it’s also very thoroughly documented at this point, and any help you find online is much more likely to still (mostly) work regardless of its age, unlike with most other more frequently updated distros. It’s meant to be reliable, not fancy.
Security, being up to date, stability, ease of use. All of these are important, but in that order.
It is the oldest distribution and tries not to modify any source, so as to keep things pure to the vision of the maintainer of whatever software you have installed. It doesn’t hold your hand; there is no automatic find-and-install of dependencies, for example, but then again that’s one of its advantages: you know what you have installed and why. I picked up a Raspberry Pi a while back and gave their Raspbian a try. Booted it up, ran its update, and saw a Microsoft repo get added and stuff from it start to download, so I unplugged it real quick, put Slackware ARM on that microSD card, and never looked back at the Raspbian/Debian stuff again.
i couldn’t live with no automatic dependency resolving. It is like booting up without a package manager, network connection, gui, or sudo command. I want a server, not a broken system to fix.
It already has all that. And the reason it doesn’t do it automatically is so that you can do it yourself and know what’s going on. I’m running Nextcloud at home, for example, and Apache, MySQL, etc. were already there, so it was like 30 minutes to download, install, and set up Nextcloud; very simple, easy and fast to spin up new servers. There are third-party package managers like sbopkg that do resolve dependencies, so you still can have that if you want.
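To give a concrete feel for it (the package and build names below are illustrative; on a real box you’d read the README on SlackBuilds.org first to see what a build expects):

```shell
# Slackware's native tool installs a package as-is, with no dependency chasing
installpkg htop-3.2.2-x86_64-1.txz

# third-party helpers like sbopkg build from SlackBuilds.org if you want some automation
# (the build name here is a hypothetical example)
sbopkg -i some-slackbuild
```

You decide the install order yourself, which is exactly the “you know what you have installed and why” trade-off.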
Debian.
I have considered these operating systems. Are any of these bad ideas? What you recommend that is not here?
why not Debian? Or perhaps Proxmox (but only if you are interested in virtualization-based separation)?
If they want to run all those services, they will absolutely need some kind of separation like VMs or containers, else it will very quickly become a mess.
Absolutely. I think having this in mind would probably also solve the outdated-packages problem: the Docker-based services won’t depend on it, and unless OP wants it to be a full-blown desktop system too, the older packages shouldn’t get in the way.
Proxmox has been pretty good to me. I have an ancient office PC that has Proxmox installed as the hypervisor. It’s based on Debian but everything is done via a web interface (you can ssh or whatever into it too if you need to). Then I have Debian with Docker containers, TrueNAS, and Home Assistant all installed as VMs. The benefit is you can put mission-critical stuff on the “boring” Debian VM and then have fun with whatever you want to tinker with on an entirely different OS/virtual machine. I also use wireguard-easy, which is stupid simple to set up a VPN with. I would strongly recommend keeping all management of the server on the local network and using a VPN to connect. This will get you the “enterprise-grade” security. Anything public should go through a reverse proxy/DMZ VM if you host something on the Internet. Use Cloudflare or similar as an extra layer if you need a domain name and want a buffer between users and your network. Keep that device and software up to date and you should have a great defense.
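For reference, wireguard-easy (wg-easy) is basically one container; a docker-compose sketch looks roughly like this (the image path and env var are from memory, double-check the wg-easy README for the current tag and settings):

```yaml
services:
  wg-easy:
    image: ghcr.io/wg-easy/wg-easy   # official image; verify tag against the README
    environment:
      - WG_HOST=vpn.example.com      # placeholder: your public hostname or IP
    volumes:
      - ./wg-data:/etc/wireguard
    ports:
      - "51820:51820/udp"            # wireguard itself
      - "51821:51821/tcp"            # web UI, keep this LAN-only
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    restart: unless-stopped
```

The web UI generates client configs/QR codes, which is what makes it so simple.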
IDS-wise, it’s a lot of work. You’re better off spending that time building security by design: do the above and ensure anything that touches the public Internet has as few permissions as possible (no running the web server as root/your user account), manage your firewalls, etc. If you do want the challenge, or are interested in learning something like Security Onion, Wazuh or whatnot, don’t let that stop you.
Hardware-wise, affordability and uptime requirements could mean it’s cheaper to have a backup machine. Proxmox has features to support high availability, where if one of your physical servers goes down, another can take over (2 physical servers that are copies of each other). You could have a decent workstation and then a used PC or whatnot as the backup. More important is probably a UPS, and workstation gear unless you want a screaming server jet in whatever room it goes in. Nothing you’ve mentioned seems too performance-heavy, so technical PC recommendations are going to vary based on expected traffic or use cases. My 2014 DDR3 office PC manages just fine, but it serves very few people and sits in an air-conditioned space. You could probably price out mid-grade consumer equipment for the main server and a used office PC for redundancy.
is it a big problem if i don’t use virtualization? And i think if i ever need a public website, i will use another machine to host that, or a Docker container. Also, what kind of cpu is needed and how much ram? i don’t want a headless server, since the surveillance stuff needs a graphical environment; my best bet would be a lightweight x11 window manager.
Virtualization can be nice in that you can tinker and not worry about dependencies. Plus you can have one resource that’s stable on FreeBSD, another that works well on Unix, etc.
Headless servers can run surveillance stuff via web interfaces or API/app integrations. Plus you can use the GUI via VNC, SPICE or another service to get to your x11 environment. I find Proxmox easier than docker/containers as most of my troubleshooting happens there. I’ve got security cameras linked to Home Assistant and it’s all headless. You could plug a monitor in and pass that to a virtual machine to get the desktop experience.
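As a sketch of the headless-but-with-GUI idea (display numbers, ports and the window manager are just examples; needs Xvfb, i3 and x11vnc installed):

```shell
# run a virtual X display with a lightweight tiling wm, then export it over VNC
Xvfb :1 -screen 0 1280x720x24 &
DISPLAY=:1 i3 &
# -localhost keeps the VNC port off the network; reach it via an ssh tunnel or VPN
x11vnc -display :1 -rfbport 5901 -localhost &

# from a client machine:
#   ssh -L 5901:localhost:5901 user@server
# then point a VNC viewer at localhost:5901
```

That way the server never needs a monitor or a real GPU-driven X session at all.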
Hardware recommendations are going to need more information. Number of users? Number of cameras/tasks the server is expected to do concurrently, will you have media/NAS hosting and if so, how much space and how fast do you want that to be?
Your use case in the OP for less than 4 users could probably be run on a potato (my potato is bottlenecked by wifi @ 10MBps). 10-15 users streaming media or 20 cameras constantly streaming to a monitor could easily eat up a decent chunk of resources.
If you’re not exposing anything to the Internet, you probably don’t need an IDS. It’s a lot of effort to tune it and reduce false positives, and the benefits are probably not worth it unless this is a business use case. Enterprise IDS/SIEMs used by actual companies are typically not FOSS, because they need the support provided by the vendor.
it will be around 5-15 users at the same time (end devices), 5-10 cameras (720p, 25fps, with lightweight motion detection), it will surely host that, and some ids like Snort or Suricata (not actual enterprise software, only something that is open-source and tries to imitate such security), maybe 1-5 static websites. In an emergency, it will take over the file server, i2p, dnscrypt and vpn hosting as well. And there should still be some resources free, for stability and performance. I have 15-20 mb/s wi-fi, according to Ookla speedtests and torrent downloads (i’m living next to a forest). oh and i would like to mitigate ddos attacks, at least with the basic blackholing method (a null route that simply drops the excess traffic instead of trying to serve it). However, if i can configure a more reliable method, then i will use that.
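the basic blackholing i mean is just a null route, something like this (the address is an example documentation range, and the nftables rule assumes an inet filter table with an input chain already exists):

```shell
# null-route an abusive source range so traffic to/from it is simply dropped
ip route add blackhole 203.0.113.0/24

# or rate-limit new connections with nftables instead of dropping a whole range
nft add rule inet filter input tcp dport 443 ct state new limit rate over 50/second drop
```

both need root, and neither helps against a real volumetric ddos that saturates the uplink; that part has to be absorbed upstream.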
Based on your description, you’re exposing something to the Internet. You absolutely should virtualize/containerize things and use a reverse proxy. Use Cloudflare for the domain name registration and take advantage of their ddos protection. Keeping everything virtualized/separated would also give an IDS a fighting chance, since an attacker would have to pivot if you bothered to set up firewalls between the devices.
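The reverse proxy in front of static sites can be as small as this nginx sketch (hostnames, IPs and cert paths are placeholders):

```
# only the proxy VM is exposed; the actual web server VM sits on the LAN
server {
    listen 443 ssl;
    server_name example.org;                     # placeholder domain
    ssl_certificate     /etc/ssl/certs/example.org.pem;
    ssl_certificate_key /etc/ssl/private/example.org.key;

    location / {
        proxy_pass http://192.168.10.20:8080;    # internal web server VM
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

If the internal VM is ever compromised, the attacker still isn’t on the box that faces the Internet, which is the point of the DMZ layout.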
If you have the space for some used servers, you can find something affordable. Any enterprise server will be loud and electricity costs should be factored.
If you don’t have the space for a noisy server, an old workstation on the used market can be affordable. Otherwise you can build something yourself using consumer parts. A Ryzen 5 (Ryzen will allow you to use ECC RAM, which is something you might want) or an i7/Xeon from the previous generation or two should be more than enough. Add 32-64GB of RAM and an SSD boot drive. I’d probably get HDDs designed for surveillance to save cost, and put your file-server storage on an SSD separate from the OS. Backups of VMs are stupid easy too, which means you’re more likely to bother using and testing them.
Edit: forgot about the GPU. If you’re using it as a media server and need transcoding, or have another reason, an external GPU like the NVIDIA Quadro P600 or M4000 will work. Use this link to figure out what you need (you don’t have to use Plex, it’s just a guideline).
do i really need such strong hardware for hosting these basic things? my dream gaming pc isn’t that powerful. This seems very unrealistic; what you mentioned is top-tier hardware.
All of those components should be bought used and a few generations behind to save cost. A used Quadro M4000 is about $100 USD in the US. A used Xeon-based office PC, all in, should be ~$400-600 USD max stateside, and you can find whichever drives you need to add. I don’t know what your local economy is like or what you can expect. If you’re able to find a used office PC or an older device, give that a try and see if it works. If you have 15 users all hitting a computer, it’s going to take resources, and those resources are going to depend on what they’re doing. If you want enterprise fault tolerance, ECC may be worth the extra cost. If you want to budget it out, you can probably get everything you want running on something 4-5 generations behind for around $100 USD plus drive costs.
Consider whether you’re doing media streaming, like a Plex/Jellyfin server. It would be kinda similar to playing 15 YouTube videos on your desktop.
If it’s 15 users with maybe 2-3 hitting it at any one time, then you can build cheaper and get decent performance. If you’re just hosting static pages/simple programs with low resource requirements, anything post-2010 with 4 cores and 8GB RAM will probably run it fine and work as file storage for the cameras.
any quadro cards are very rare in my country; it is hard to find one, especially on the used market. And around 4-5 users will be on the network at the same time, plus the cameras. $400 would be too much, but $100 is pretty good. currently i’m browsing used PCs from 2012-2016 in the $100 category.
ps: I want to avoid Debian. I need more recent software than that. Maybe Debian Testing can come into consideration, but certainly not the main stable release.
can you give examples of what you need more recent versions of?
anything that has even a little to do with security. Not like a live release environment where i grab packages almost instantly, but i don’t think my server could be secure with packages that are 5 months to 2-3 years old.
Ubuntu is debian-based, and their repositories are kept pretty up-to-date. They offer a server config.
what about Rhino (it is Ubuntu’s unofficial rolling distro)?
I’m confused. Your OP seems to describe wanting something stable and “fault-tolerant,” but then you go and ask about an unofficial rolling distro? I think you should figure out what your priorities are first.
i have priorities. And fresh software is a higher priority than being ultra stable and fault-tolerant. I used Tumbleweed, which is a rolling release, and it was perfectly stable. I would use SUSE’s server without question, if it were free.
I didn’t mean to imply you didn’t have priorities, just that a couple of them seemed to be conflicting. To me, what you described called more for reliability than cutting edge. I understand your concern with getting security updates expediently, but you can get those with less system stability risk using a more standard distro.
I haven’t used SUSE in a very long time, but as I recall Tumbleweed is an official product of theirs. I’ve not heard of Rhino until now, which gives me pause in considering it, let alone the fact it’s not backed by a known significant team. There’s nothing wrong with that, but when setting up a server like you’re describing, I’d rather it not require a significant amount of time at random once I’ve got it up and running, which is what can happen when relying upon less vetted software.
It’s your choice, obviously. Rhino looks like it might make a nice desktop to play with, but I personally would be hesitant to use it for a server because I just don’t have the time to deal with problems at random; I’ve got enough of those already in my life. Your priorities are obviously different, and there’s no denying that even things going awry on your server can be a plus from a learning perspective. I would really be concerned about the project being abandoned, though, since it’s just a year old.
Good luck whichever way you choose to go.