this post was submitted on 10 Jan 2024
55 points (93.7% liked)

Hello there lemmings! I've finally worked up the courage to buy a low-power mini PC to be my first home server (Ryzen 5500U, 16 GB RAM, 512 GB SSD, and I already have a 6 TB external HDD). I have basically no hands-on experience with Debian- or Fedora-based systems, since my daily drivers are Arch-based (although I'm planning to switch my laptop over to Fedora).

What are your experiences with Debian and Rocky as a home server OS?

all 45 comments
[–] lemmyvore@feddit.nl 17 points 10 months ago (1 children)

Debian stable is a very solid choice for a server OS.

It depends on how you're going to host your services though. Are you going to use containers (what kind), VMs, a mix of the two, install directly on the host system (and if so where do you plan to source the packages)?

I've kept my Debian system very basic, installed latest Docker from the official apt repo, and I've installed almost every service in a docker container. Only things installed directly on host are docker, ssh, nfs and avahi.
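
For reference, installing it from their apt repo on Debian looks roughly like this (double-check Docker's current install docs; I'm assuming bookworm here):

    # prerequisites and Docker's signing key
    sudo apt-get update && sudo apt-get install -y ca-certificates curl
    sudo install -m 0755 -d /etc/apt/keyrings
    sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
    # add the repo and install the engine + compose plugin
    echo "deb [signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian bookworm stable" | sudo tee /etc/apt/sources.list.d/docker.list
    sudo apt-get update
    sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin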

[–] PrivateNoob@sopuli.xyz 6 points 10 months ago (1 children)

I'm going full container mode if possible, or I'll just build the Docker images myself.

  • Jellyfin
  • OneDrive alternative (probably Nextcloud)
  • Personal website + its backend, or just the backend (might not host this tho, since it's a high security risk to my personal data)
  • Pi-hole
  • Probably other ideas which seem fun to host
[–] lemmyvore@feddit.nl 6 points 10 months ago (4 children)

Make sure you use a Docker image that tracks the stable version of Jellyfin. The official image jellyfin/jellyfin tracks unstable. Not all plugins work with unstable, and switching to stable later is difficult. This trips up lots of people and locks them into unstable, because by the time they figure it out they've customized their collection a lot.

The linuxserver/jellyfin image carries stable versions but you have to go into the "Tags" tab and filter for 10. to find them (10.8.13 pushed 16 days ago is the latest right now).

To use that version you say "image: linuxserver/jellyfin:10.8.13" in your docker compose instead of "image: linuxserver/jellyfin:latest".
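
A minimal compose service with a pinned tag might look something like this (the paths and ports are just placeholders, adjust to your setup):

    services:
      jellyfin:
        # pinned stable release instead of :latest
        image: linuxserver/jellyfin:10.8.13
        volumes:
          - ./jellyfin/config:/config    # config lives outside the container
          - /mnt/media:/data/media
        ports:
          - "8096:8096"
        restart: unless-stopped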

This approach has the added benefit of letting you control when you want to update Jellyfin, as opposed to :latest which will get updated whenever the container (re)starts if there's a newer image available.

While constantly upgrading your images sounds good in theory, eventually you'll find that new versions sometimes break things (especially if they track unstable versions). When that happens you will want to go back to a known good version.

What I do is check the tags every once in a while; if there's a newer version I comment out the previous "image:" line and add one with the new version, then destroy and recreate the container (the data survives because you configure it to live on a mounted volume, not inside the container). If there's any problem I can destroy it, switch back to the old version, and bring it up again.
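
In practice that's just a tag change plus a recreate, something like this (assuming the service is called "jellyfin" in your compose file):

    # after editing the image: tag in docker-compose.yml
    docker compose pull jellyfin
    docker compose up -d jellyfin    # recreates the container from the new image
    # to roll back, change the tag back and run the same two commands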

[–] PrivateNoob@sopuli.xyz 2 points 10 months ago

Oh, that explains the two images then, linuxserver and the official jellyfin one. It always seemed kinda strange to me.

Luckily my uni hosted a Docker course and I binge-watched a beginner LinkedIn Learning course about it too, but I'm really grateful for your in-depth guide. Guys like you really make Lemmy like the old Reddit we used to have and cherish in our hearts. :3

[–] idefix@sh.itjust.works 1 points 10 months ago (1 children)

The official image jellyfin/jellyfin tracks unstable

Why did they make that choice? I'm on this version right now and didn't know it was unstable. I've found it very difficult to find information about Docker images in general; it's a pity there aren't a few lines explaining what each one contains.

[–] lemmyvore@feddit.nl 2 points 10 months ago

It's more like "latest" tracks unstable, because unstable evolves much faster and puts out versions more often. Unfortunately there's a practice going around of just using the :latest tag for everything, and people don't always stop to consider the implications (which may be different for each project).

[–] hedgehog@ttrpg.network 1 points 10 months ago (1 children)

I thought the official jellyfin images on the versioned tags (like “10.8.13”) were stable - are they not?

[–] lemmyvore@feddit.nl 2 points 10 months ago

Oh right, I filtered for "10." and got an unstable image, so I thought they didn't have them. Yeah, those are stable too.

[–] SpaceCadet@feddit.nl 1 points 10 months ago

The official image jellyfin/jellyfin tracks unstable

Huh? That doesn't appear to be the case. jellyfin/jellyfin:latest, which is what they tell you to use in the installation instructions, gives me 10.8.13, which appears to be the latest stable release.

There are newer, unstable versions available on Docker Hub as well, but latest doesn't give you those. After all, latest is just a tag with no special meaning of its own; it doesn't necessarily give you the most recent build.

[–] stefenauris@pawb.social 14 points 10 months ago (1 children)

Debian is a distro of few surprises, with stable but slightly out-of-date packages. Its software repositories are vast and supported across pretty much every architecture you could think of running Linux on.

Meanwhile the world of RHEL has been turned upside down, with Red Hat essentially putting a paywall around their sources. Although Rocky currently promises to remain bug-for-bug compatible with RHEL, it remains to be seen whether they can keep doing so (in my opinion).

[–] PrivateNoob@sopuli.xyz 3 points 10 months ago

Yeah, that's one of the main reasons I'm interested in your experience. The fairly recent source lockdown is definitely shaky in general, although I believe Rocky's message that they won't have to roll their shutters down.

[–] ikidd@lemmy.world 11 points 10 months ago

Use Debian, make your life easier. Chances are the RHEL clones are going to get frozen out, but there will always be Debian, and it's the most community-supported server mainline anyway.

[–] NotATurtle@lemmy.dbzer0.com 11 points 10 months ago (2 children)

What surprised me with Debian is that it comes as a very minimal installation, so you'll have to set up stuff like sudo yourself.

[–] exu@feditown.com 2 points 10 months ago

If you don't set a root password during the install, it'll add the user you create to the sudo group.
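
If you did set a root password, adding sudo afterwards is roughly (replace "youruser" with your actual user):

    su -                        # become root
    apt install sudo
    usermod -aG sudo youruser   # add your user to the sudo group
    # log out and back in for the group change to take effect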

[–] yianiris@kafeneio.social -5 points 10 months ago (1 children)

That's 23 seconds of your life wasted, but how would you set it up? NOPASSWD?

Most experts consider that insecure. People do it for convenience, but say rogue code gets run by your user while sudo is open like that ... done, your system belongs to someone else now.

@NotATurtle @PrivateNoob

[–] PrivateNoob@sopuli.xyz 2 points 10 months ago (2 children)

Hmmm interesting, so having no sudo is a security move then?

[–] ikidd@lemmy.world 8 points 10 months ago

Sudo is fine, just use a good password. Anyone setting up NOPASSWD has given up on security; it's not a thing in real practice.

[–] yianiris@kafeneio.social 2 points 10 months ago

That's a strict position some people hold, but it's not what I said. Editing /etc/sudoers and giving sudo or wheel group users NOPASSWD access is what's insecure.

With NOPASSWD set,

sudo chmod 1777 /tmp

will not ask you for a password; it's like bypassing sudo entirely.

If you open sudoers you'll see what I'm saying. In Debian/Ubuntu it's the sudo group, in Arch/Void ... it's the wheel group.
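
For example, the two kinds of entries look something like this (illustrative lines only):

    # default on Debian/Ubuntu: members of sudo can run anything, but must enter their password
    %sudo   ALL=(ALL:ALL) ALL
    # the risky variant: members of wheel run anything with no password prompt at all
    %wheel  ALL=(ALL:ALL) NOPASSWD: ALL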

@PrivateNoob

[–] vynlwombat@lemmy.world 9 points 10 months ago (1 children)

What would you like to do with your home server?

[–] PrivateNoob@sopuli.xyz 5 points 10 months ago (2 children)

Ahh yeah, I forgot to mention that.

  • Jellyfin
  • OneDrive alternative (probably Nextcloud)
  • Personal website + its backend, or just the backend
  • Pi-hole
  • Probably other ideas which seem fun to host
[–] haui_lemmy@lemmy.giftedmc.com 7 points 10 months ago

Hi! Here’s you, like 2 yrs down the road. I have no opinion on the server OS since I started with ubuntu server but my projects went a similar direction.

One major thing I'd recommend is thinking about security: web-facing servers with your private data on them are a very bad idea. So unless you mean a website for personal use, I'd split the "home" server and the "personal web server" or VPS in two, so the stuff you want others to use unsupervised is separate from the stuff you use at home and on the road.

Another thought is bandwidth: unless you have insane upload, I'd stay away from hosting web-facing stuff like websites, game servers and social media instances at home. That works better on a cheap VPS with gigabit bandwidth up and down. Way less hassle and fewer security issues.

[–] huskypenguin@sh.itjust.works 2 points 10 months ago (1 children)

I would do TrueNAS Scale + Portainer

[–] PrivateNoob@sopuli.xyz 1 points 10 months ago

Honestly yeah, that's the more productive option, but I want to learn to set things up by myself.

[–] frap129@lemmy.maples.dev 8 points 10 months ago (1 children)

As others have said, Debian is very minimal, so if you'd prefer to set up and configure the whole system yourself, Debian is a good choice.

Personally, I prefer fedora server. It comes with more things configured out of the box (zram and sysctl configs for example) as well as better security defaults (selinux included with proper policies) and first class support for container infrastructure. Ultimately you could achieve a similar end result with debian, but for my homeservers I prefer to let the fedora team handle most of the system configuration for me.

[–] fluffyb@lemmy.fluffyb.net 2 points 10 months ago

I would be careful if they wanna use ZFS though. Fedora can be a bit quick with kernels, meaning a kernel can come out that isn't yet supported by ZFS. That causes the ZFS kernel module to fail to build against the new kernel, and so you lose ZFS on the next boot.

Almost happened to me tracking debian testing a while back.
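
On Fedora, one way to guard against that is to hold the kernel until ZFS catches up (assuming the dnf versionlock plugin):

    sudo dnf install python3-dnf-plugin-versionlock
    sudo dnf versionlock add kernel kernel-core kernel-modules
    # remove the lock once the new kernel is supported by ZFS:
    # sudo dnf versionlock delete kernel kernel-core kernel-modules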

[–] Sebbe@lemmy.sebbem.se 7 points 10 months ago (1 children)

I run Debian on my server and while it's sometimes annoying how old a lot of packages are, it's ridiculously stable.

[–] PrivateNoob@sopuli.xyz 3 points 10 months ago (1 children)

How annoying do you find the outdated packages?

[–] Sebbe@lemmy.sebbem.se 5 points 10 months ago* (last edited 10 months ago) (1 children)

Mostly not at all but sometimes I want to try some new features and that's when it gets annoying. Right now, I'd like to try passing encoding capability from my APU to a VM I'm hosting but it requires Mesa 23 and Debian is on 22.

[–] HumanPerson@sh.itjust.works 3 points 10 months ago (1 children)

You can use the backports repository fairly easily. I did for the kernel and had no issues.
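
For anyone curious, on bookworm that's roughly (assuming Debian 12; adjust the codename otherwise):

    # enable the backports repo
    echo "deb http://deb.debian.org/debian bookworm-backports main" | sudo tee /etc/apt/sources.list.d/backports.list
    sudo apt update
    # install a specific package from backports, e.g. a newer kernel
    sudo apt install -t bookworm-backports linux-image-amd64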

[–] Sebbe@lemmy.sebbem.se 1 points 9 months ago

Thanks for the tip but Mesa is not in the backports repo.

[–] r0bi@infosec.pub 7 points 10 months ago (1 children)

I was running CentOS then migrated to Rocky. It handles various VMs and containers great and has been trouble free for years. 10 core Haswell-era Xeon with 64 GB RAM and a lot of ZFS storage.

I moved from Arch to Fedora on my desktop/laptop as well. Really helps my mental state not keeping up with the different distro-specific knowledge between hosts.

[–] PrivateNoob@sopuli.xyz 3 points 10 months ago (1 children)

Did you get bored of dealing with package dependencies and always relying on the AUR when you wanted to install corpo software? I'm planning to take the Arch-to-Fedora pill too tbf.

[–] r0bi@infosec.pub 2 points 10 months ago

Somewhat, but it was driven more by the server-side decision. I wanted something I could set and forget, that didn't have a ton of updates but prioritized stability/security patches.

Of course, speaking of packages I do regularly use rpmfusion and epel for the extra stuff the normal repos don't have, but I understand why.

Also, being a heavy user of KVM and PCIe/GPU passthrough, I found the experience easier and less likely to break between updates. A lot of Red Hat devs work on those subsystems, so I assume they're better QA'd.

[–] ExLisper@linux.community 5 points 10 months ago

My experience with Debian is good.

[–] hellvolution@lemmygrad.ml 5 points 10 months ago

I use Debian for everything, from games to servers! The best distro, by far!

[–] Adincar@discuss.tchncs.de 4 points 10 months ago

I'm using Rocky on my main server at the moment. I was/am used to Debian-based operating systems beforehand, but wanted to learn Red Hat without dealing with Oracle directly.

It was definitely a steep curve getting to understand the OS, but I'm quite happy with the stability of Rocky and it does everything I need and more. I think the real question is which one you'd get more enjoyment out of as far as learning goes, and personally I don't think the learning curve is as steep with Debian.

The best thing I can advise is to back up your data regularly, and if you're not vibing or something breaks, don't be afraid to change to something different, though as an Arch user I'm sure you're used to things breaking.

[–] YawnTor@infosec.pub 4 points 10 months ago (1 children)

I have a home lab consisting of 9 mini PCs running Docker Swarm. They're from various manufacturers, Intel, ASRock, Minisforum, etc. I originally tried to use Debian to build out the environment but it couldn't find the network interfaces, or storage, or whatever else. So I made a Rocky 9 install drive and tried that. Every machine came up with all hardware recognized on the first try. So, that's what I've been running for just about two years now. No complaints.

[–] PrivateNoob@sopuli.xyz 4 points 10 months ago (1 children)

Good to hear that. How many containers do you run if you need 9 mini PCs for those?

[–] YawnTor@infosec.pub 3 points 10 months ago

I use three systems for manager nodes so they don't get much work. Mostly Traefik and a few other administrative services. I have about 80 containers running on the six worker nodes.
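
For context, keeping workloads off the managers is just a placement constraint in the stack file, something like this (the service and image names here are made up):

    services:
      myapp:
        image: myapp:1.0           # hypothetical service
        deploy:
          replicas: 2
          placement:
            constraints:
              - node.role == worker    # only schedule on worker nodes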

[–] knfrmity@lemmygrad.ml 2 points 10 months ago

I started my Linux journey with a Raspberry Pi and Debian based PiOS four years ago and I haven't felt the need to mess with that. Since then I have added other machines running other distros, but the Pi running PiOS is always on and always reliable.

[–] wmassingham@lemmy.world 2 points 10 months ago

I've been using Alma for a while and been happy with it. Like RHEL types, it's slightly behind on versioning, but that's by design.

[–] vanderbilt@beehaw.org 1 points 10 months ago

Having used both:

Debian is very easy to manage; it has a ton of packages and mostly sane defaults. Ubuntu's user-friendliness owes a lot to Debian. I do not like the state of package management, however. Dpkg is in need of some upgrades, and the deb package format has some security concerns.

Rocky, being RHEL-derived, is, as expected, exceptionally stable. I personally find DNF to be the superior package manager and I have historically run into fewer issues with it. Repos are extensive, especially with COPR and RPM Fusion, but not as good as Debian's.

For a simple home server use Debian. If you want experience with enterprise Linux use Rocky.