this post was submitted on 16 Nov 2023

Homelab

I currently have a 10-year-old off-the-shelf NAS (Synology) that needs replacing soon. I haven't done much with it other than the simple things I mention later, so I still consider myself a novice when it comes to NAS, servers, and networking in general, but I've been reading a bit lately (which led me to this sub). For a replacement I'm wondering whether to get another Synology, use an open source NAS/server OS, or just use a Windows PC. Windows is by far the OS I'm most comfortable with, so I'm drawn to the final option. However, I regularly see articles and forum posts which frown upon the use of Windows for NAS/server purposes even for simple home-use needs, although I can't remember reading a good explanation of why. I'd be grateful for some explanations as to why Windows (desktop version) is a poor choice as an OS for a simple home NAS/server.

Some observations from me (please critique if you see any issues in my thinking):

  • I initially assumed it was because Windows likely causes high idle power consumption, as it's a large OS. But I recently measured the idle power consumption of a Celeron-based mini PC running Windows and found it to be only 5W, which is lower than my Synology NAS when idle. It seems to me that any further power consumption savings that might be achieved by a smaller OS, or a more modern Synology, would be pretty negligible in terms of running costs.
  • I can see a significant downside of Windows for DIY builds is the cost of a Windows license. I wonder if this accounts for most of the critique of Windows? If I went the Windows route I wouldn't do a DIY build; I would start with a PC which had a Windows OEM license.
  • My needs are very simple (although I think they probably represent a majority of home user needs). I need a device which is accessible 24/7 on my home network and 1) can provide SMB file shares, 2) act as a target for backing up other devices on the home network, 3) run cloud backup software (to back itself up to an off-site backup location), 4) run a media server (such as Plex), and 5) provide 1-drive redundancy via RAID or a RAID-like solution (such as Windows Storage Spaces). It seems to me Windows is fine for this, and people who frown upon Windows for NAS/server usage probably have more advanced needs.
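The running-cost point in the first bullet can be sanity-checked with some quick arithmetic. A rough sketch in Python (the $0.30/kWh rate and the 15W comparison figure are illustrative assumptions, not measurements from this post):

```python
# Estimate the annual electricity cost of an always-on machine.
# The 5 W figure is the measured idle draw mentioned above; the
# 15 W comparison and the $0.30/kWh rate are illustrative guesses.
HOURS_PER_YEAR = 24 * 365  # 8760

def annual_cost(idle_watts: float, price_per_kwh: float = 0.30) -> float:
    """Yearly electricity cost for a machine idling at idle_watts."""
    kwh_per_year = idle_watts * HOURS_PER_YEAR / 1000
    return kwh_per_year * price_per_kwh

print(f"5 W box:  ${annual_cost(5):.2f}/year")
print(f"15 W box: ${annual_cost(15):.2f}/year")
```

Even tripling the idle draw adds only around $26 a year at that rate, which supports the observation that OS-level power savings are negligible for running costs.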
[–] Skwide@alien.top 2 points 10 months ago

For a server:

Docker is Linux in a jailed namespace (network, filesystem, process tree, etc.).

Docker hosted on Linux is efficient.
Docker hosted on anything else, less so.

[–] Freonr2@alien.top 2 points 10 months ago

Never been a better time to try Linux. Ubuntu is pretty easy to get started with (download and set up a bootable USB, stick it in and go) and ChatGPT is extremely good at walking you through any questions. You don't even need to ask highly technical questions; just tell it your goal and your system.

"I just installed Ubuntu 22.04 on my computer and want to SSH into it from a Windows computer on my network, how do I do that?"

"I want to download a file from my Ubuntu command line, how do I do that?"

"I want to setup a share that both Windows and Linux computers can access over my network, how do I do that?"

"I have a github action runner provided by github that includes a run.sh file that needs to run constantly. I want to setup as a background service on my Ubuntu Linux computer so it will always be running as long as the computer is on, how can I do that?"

It will spit out every command you need, in order, the contents of a .service file, tell you how to monitor it, and so on. You can ask it what each line does, what the parameters mean, etc. It's like having a mid-level sysadmin at your fingertips. It will interpret any errors you get, and tell you how to fix them.
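For the last prompt, the answer would be a systemd unit. A minimal sketch of what such a .service file might look like; the unit name, user, and paths here are hypothetical placeholders, not from the thread:

```ini
# /etc/systemd/system/actions-runner.service (hypothetical name and paths)
[Unit]
Description=GitHub Actions runner
After=network-online.target

[Service]
Type=simple
User=runner
WorkingDirectory=/home/runner/actions-runner
ExecStart=/home/runner/actions-runner/run.sh
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

You'd then enable it with `sudo systemctl enable --now actions-runner` and watch it with `systemctl status actions-runner` or `journalctl -u actions-runner -f`.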

Perfect? Maybe not, but it's close for a remarkable variety of tasks. It may be, and I'm not joking, 20 times more productive and time-efficient than Google searches, reading Stack Overflow posts, reading documentation/man pages and trying to decipher what you really need out of any of those sources.

I'm sure some are too paranoid to ask ChatGPT certain things for privacy reasons, and I would anonymize anything you paste in; probably just be a bit mindful of anything involving permissions (you can also ask what security risks exist in doing something). Just normal ChatGPT 3.5 (free) is extremely knowledgeable about Linux CLI and administration, along with common packages and apps you'd want to use.

[–] lightmatter501@alien.top 2 points 10 months ago

For me, #1 is license costs. I’ve taken home some servers which would require me to buy 4+ windows server licenses because 16 physical cores is a number for entry-level servers at this point. For the cost of those licenses, I could almost buy a new server with a similar amount of cores every single year.

Second, the brand-new filesystem, ReFS (which needs licenses), has just about caught up to what ZFS had in 2005. The biggest omission is that 2005 ZFS could be your root filesystem. This is less important on *nix systems where your root can be tiny, but Windows insists on storing tons of stuff on C:, which still needs to be NTFS. ZFS also has 22 years of production testing and still has lots of development.

Third, I want to use containers, and windows uses a Linux VM to do that, so why not skip the middle man?

[–] killermouse0@alien.top 1 points 10 months ago

I can get behind your pragmatic analysis. If it works, is low power, easy to manage, etc then that might be a good choice! One thing to possibly also consider: how future proof would you say it is?

[–] CryptographerOdd6143@alien.top 1 points 10 months ago

The S in NAS refers to storage, and storage usually means being able to use multiple drives together as a single volume. Windows doesn't work well for this primary NAS use case.

[–] EnkiAnunnaki@alien.top 1 points 10 months ago

RAM management is terrible, and I just ran Windows Update on my gaming PC last night and it went into a boot loop. It's been a while since I've heard about a *nix platform running into boot loops on system updates.

[–] jayaram13@alien.top 1 points 10 months ago

Honestly, you do you. Stick to what works with your workflow and use case.

However, given that you're in r/homelab, it's reasonable to think you're open to learning new things. With that said, Windows has tended not to be as stable as Linux (hence the dominance of Linux in the server world).

Windows' approach to drivers and software isn't as clean as Linux's. Uninstalling software is not guaranteed to remove everything in Windows.

Windows license is another minus.

Plus, given that it isn't open source, and given its dominance in the desktop world, lots of viruses tend to target Windows, and we don't get patches in a timely manner. Plus, there's a history of patches breaking things in Windows.

Linux and Unix tend to be simple and stable. Synology is a very good NAS, which combines the robustness of Linux with a fantastic GUI. I'd personally urge you to get another Synology or explore XPEnology.

But barring that, your use case today is simple enough and if you think Windows is sufficient, go for it.

If you want to also get learning out of it, explore TrueNAS SCALE. It's based on Debian and is fantastic. You can also run it under Proxmox for various VM and LXC magickery.

[–] AlarmDozer@alien.top 1 points 10 months ago

Because it’s a waste of a perfectly good gaming PC.

Plus, you’d have to reboot every 3 days for updates.

[–] Alex_2259@alien.top 1 points 10 months ago

Not that I encourage it, but home users seldom pay MSRP for Windows licenses, or at all. Getting around the licensing, while ridiculously unlikely to get you busted, is a hassle.

The answer is there are just better options you can install on top of Linux or BSD that are easier to manage, a better experience (nice web panels, not an RDP GUI or a clunky thick client), and have zero licensing concerns to pay for or work around.

I wouldn't host a share directly from the Linux CLI; for some reason I always found that to be kind of a pain, though it works. There are easy solutions like TrueNAS or OpenMediaVault, container-based options, and you can take the coward's way out with Portainer (that's what I do) to run tons of really lightweight services.

Windows is fine, just not the best, unless you're doing something that works better on it or needs it.

[–] Failboat88@alien.top 1 points 10 months ago

I always recommend windows to people who want a home server that's easy to maintain. Homelabbing is more about learning and trying new things out.

A NUC with a nice-sized external drive can do a lot, and they come with Windows; not to mention it can run fine unactivated. A lot of the services people run use Mono to run the Windows app. Anyone who has used Windows can install an exe, but not everyone is willing to use the command line in Linux.

The home server and self-hosted communities are more focused on what you're looking for.

[–] Mint_Fury@alien.top 1 points 10 months ago (2 children)

Lots of great responses here; I won't reiterate what everyone has already explained. The big benefits imo are redundancy using better file systems like ZFS (TrueNAS) or BTRFS (Synology, unRAID), and in general better management of the drives and the data stored on them. These appliances support more robust RAID configs as well, so you have a lot less risk of losing data. The other big one is simplicity for what you need it to do. Creating an SMB share on a PC using Windows isn't hard, but it's not nearly as simple as the 3 clicks it takes on a purpose-built OS. These OSes also usually have built-in solutions for hosting any other apps you may want to play with. That's just my two cents.

[–] ProbablePenguin@alien.top 1 points 10 months ago

The main downsides of windows for a server are:

  • Forced reboots

  • More RAM/Storage usage for the OS

  • No option for ZFS or similar data-protection filesystems; Storage Spaces provides basic RAID, but the performance can be fairly low.

  • Needs a license

  • Less general availability of self-hosted software, but you can run Docker for Windows as a way around that.

However there are some upsides: it's very easy to set up and manage, SMB shares are super easy, and some backup software like Veeam B&R is Windows-only.
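As a concrete picture of the Docker workaround mentioned in the last bullet, a media server can be declared in one small compose file. A minimal sketch, assuming the popular linuxserver.io Plex image and hypothetical host paths:

```yaml
# docker-compose.yml (paths and timezone are placeholders)
services:
  plex:
    image: lscr.io/linuxserver/plex:latest
    environment:
      - PUID=1000        # run as this host user/group
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - /srv/plex/config:/config   # Plex database and settings
      - /srv/media:/media:ro       # media library, mounted read-only
    network_mode: host             # simplest for LAN discovery
    restart: unless-stopped        # comes back up after reboots
```

The same file works on Docker Desktop for Windows or on Linux, which is part of why containers blunt the software-availability point.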

[–] More_Leadership_4095@alien.top 1 points 10 months ago

If you run Task Manager (or whatever Windows is calling its resource monitor these days) to watch CPU usage, right next to a headless Debian server running only htop, you will straight up see the answer to your question.

That is overhead.

[–] Overall-Tailor8949@alien.top 1 points 10 months ago

Cost of the OS is a large part of it.

Another is Windows FORCING reboots for updates when IT wants to do it.

A third is the "computing" overhead that Windows demands. Even running the GUI, Linux uses much less of the CPU's time than Windows.

Ubuntu is as easy to set up as Windows (unless you have very odd hardware/software), doesn't force updates down your throat, frees up more CPU time and RAM for the applications YOU want to run, and best of all won't cost you a thin dime to purchase, unless you want to make a purely voluntary donation to the developers.

[–] Content_Yak_7907@alien.top 1 points 10 months ago

It's not frowned upon; it's just not made for it. It's made for desktop use. It's also more unreliable/unstable; I just ran into some problems yesterday after updates. You DO NOT have to pay for a license, however. The only features of Windows you lack (afaik) on an unlicensed install are desktop customization and some kind of remote access that I have never used myself. Docker and VMs will work equally well on Windows.

Don't be discouraged from using a dedicated NAS operating system though; there are a lot of video tutorials and documentation. Worst case scenario, you wipe it and start over with Windows.

[–] MikeHods@alien.top 1 points 10 months ago

Personally, unless I need Active Directory, I actively avoid MS Server. One of the biggest issues for me is the lack of native Docker support. If I have to run WSL or a VM for Docker support, then I'd rather just run Linux and cut out the middleman.

[–] d-cent@alien.top 1 points 10 months ago

You could use Windows fine, I am sure. It's just that the majority of us have other things we like to run, and they're all developed for Linux or Docker. The other reason people move away from Windows is that it uses more RAM and CPU; you are doing so little that it isn't really that big of an issue for you. If you decide to add anything else or run Docker, you will probably see why the majority go with Linux, because you will run into issues.

[–] Sinister_Crayon@alien.top 1 points 10 months ago

This has been a great discussion here... let me add a few things from my perspective of 30-odd years in the IT space:

  • I like to use stuff that's fit for purpose. Windows 10, Windows 11 and such are desktop operating systems that are fit for their purpose and are very good at it. But they're less optimal for server-type workloads. Microsoft themselves provide a different operating system for that purpose but it has a different cost model that is a lot higher.
  • Access to the GUI is necessary to run Windows. NAS devices and such have the ability to run "headless"; that is, no keyboard, monitor, or mouse. NAS devices also have a "network first" mentality where everything must be accessible on the network even in the event of a system failure. Recovery cannot require a monitor if you can't plug one in! Windows (even Server) often requires physical access.
  • Server-focused platforms like NAS provide a lot of capabilities that Windows does not because of the nature of their platforms. For example Synology allows growing your storage easily while Windows requires a lot more technical knowledge to accomplish that.
  • Going back to fit-for-purpose; NAS devices provide security that isn't necessarily there with Windows. Windows has a lot of "moving parts"... in addition to the operating system there are a bunch of ancillary libraries, tools and software that may or may not be used when using Windows as a server. All of these additional tools and libraries provide another potential vector for security breaches especially if not individually maintained thus increasing the maintenance requirements of the system. NAS devices give you the basics of what they need to operate and no more... well that's until you start adding service packages to a Synology. But even then they will all be managed through the stock package manager and thus updated and maintained, and will still only be as much as you need to get the job done.

As far as my most recent experience with desktop Windows that I find irritating, there are a couple of reasons I still wouldn't use it as a server platform:

  • Microsoft has a tendency to randomly update your settings, overriding your own settings with what they think are better. A good example that hit me recently is that some recent update overrode my power management settings on a PC I have set up as a headless desktop I then connect to using NX. I had it set to never sleep... suddenly it started sleeping. I had to reset it in order to get back to where I wanted it. This is not the first time this has happened, and I've had other issues along these lines. 24x7 isn't possible when your PC goes to sleep...
  • Windows lacks a really solid local filesystem. NTFS is OK and is pretty performant but it lacks a lot of the more advanced features of filesystems from NAS vendors or *NIX systems; ZFS and others have checksumming and scrubbing, most NAS vendors allow scheduled data integrity checks and the like... things like that.
  • Software RAID in Windows is acceptable, but is not great. It's hard to understand when things aren't working properly and thus plan to replace failed hardware.

Hope that helps :)

[–] sk-sakul@alien.top 1 points 10 months ago

Windows 10 starts to behave weirdly after like 60 days of uptime. USB devices are not detected, drivers randomly restart, ...

Linux just runs...

Also Win 10 installs updates more or less randomly...

[–] BigYoSpeck@alien.top 1 points 10 months ago

If you just want to share some folders on an existing Windows computer with the network, without having an entire other system running, then fair enough. But you've got the additional load that places on it while you're using it for other purposes.

If it's a dedicated computer then Windows and in fact any operating system with a full GUI is overkill

Setting up a dedicated server can require a little extra effort initially, but once it's up and running you can almost forget about it, it's worth the effort for a less resource hungry and more stable system

[–] notdoreen@alien.top 1 points 10 months ago (1 children)

Forced updates and forced restarts. If you want a server that's available 24/7, that's a no-no.

That's what did it for me. I started my self-hosting journey on a Windows 10 machine; I installed Docker on it and all of my containers. Every time Windows forced an update and auto-restarted, I had to manually go back in, log in, and spin up Docker again and every container. (I now know that a lot of this can be automated, but it's a lot easier to manage a Linux server now.) Plus the system requirements are significantly less. The Windows OS alone takes up a chunk of your storage and RAM right off the bat.

I do have one Windows server VM because I enjoy the file system, and it doesn't do forced resets, but most of my infrastructure is made up of Linux VMs on bare metal Proxmox machines.

[–] bufandatl@alien.top 1 points 10 months ago

Windows bad. Linux good. BSD better.

For real though: Windows costs money and uses a lot of resources. And the desktop version is missing vital parts you might want on a Windows server, like a domain controller, DHCP server, web server, Hyper-V, etc.

Those reasons also have most people running Linux or even BSD, because they are pretty lightweight, especially when used headless. Also, as open source, they are mostly free of cost. And when you virtualize on a free and open source hypervisor like XCP-ng or Proxmox, you can run many more small Linux VMs than Windows VMs, as Windows VMs need more resources.

[–] MarkB70s@alien.top 1 points 10 months ago

I am going to give a perspective that may go along with the OP's.

I spent this last summer building up a Proxmox server and putting Plex on it, and a few VMs for things I thought I would need. I also just noticed my Synology DS1515+ is not getting new versions of DSM, so it's probably getting close to needing to be retired.

Six months later, this is my new plan:

  1. Retire the Synology DS1515+
  2. Replace my main Windows box that is 9 years old with something current.
  3. Take a couple of drives out of the Synology, move them into the Windows box, and do a simple dynamic mirror
  4. Put Plex server on Windows
  5. Let it run 24x7

There are two people in my house, no reason not to let it run 24x7 hosting everything I need. There is a fear of power usage, but, I will monitor that to see if I need to spin up a low power server.

But I really do not need to separate it out across multiple versions of Linux.

[–] Pericombobulator@alien.top 1 points 10 months ago

You could run desktop Windows, but if you get the Pro version then you can RDP into it from your desktop/laptop. That makes administering it very easy, like working on it locally.

Personally, I run Ubuntu Server (took a little learning), which I choose to run on Proxmox. You can just run it on bare metal. I then just install the media-related packages I use: Plex, Sonarr, Radarr, SABnzbd, etc.

[–] morningreis@alien.top 1 points 10 months ago

Because when you start trying to run actual services, the home-user side of it is going to kick in and make it unreasonably difficult to do simple things such as creating a file share, managing permissions, getting your services to work through a firewall, etc. All things which are typically just simple text file configs in Linux, or just a few simple commands.
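To make the "simple text file configs" point concrete: on Linux, an SMB share is a few lines in /etc/samba/smb.conf. A minimal sketch; the share name, path, and user are placeholders:

```ini
# /etc/samba/smb.conf fragment (hypothetical share and user)
[media]
   path = /srv/media
   browseable = yes
   read only = no
   valid users = alice
```

After editing, you'd create the Samba user with `sudo smbpasswd -a alice` and reload with `sudo systemctl reload smbd`; the share then appears to Windows clients as \\server\media.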

However if you do persevere and get everything working, it's not going to last. Windows is going to decide what's best for you and you will be left trying to figure out what settings were wiped or reverted to default when Windows updated itself without asking.

And then if you're running a server where you want performance, stability, and security, you don't want extra crap running, because all those other services that you absolutely don't need will start interfering with the services that you do need. Linux VMs or containers on a hypervisor are very popular because you can spin up multiple lightweight instances of the OS to perform a single or limited set of functions, so if something breaks, that breakage doesn't spill over to the other instances. With Windows, you'd have to spin up a 10GB+ instance each time for this approach, because you'd have so much extra stuff you do not need.

[–] cmmmota@alien.top 1 points 10 months ago

Server-oriented Linux distros are designed with server workflows and high availability in mind. Desktop Windows isn't. However, if you're not running mission-critical services, who cares? Do whatever is the most practical for you.

[–] erikpt@alien.top 1 points 10 months ago

Simple, the SMB connection limit in Windows Pro editions (7,8,10,11) is 20. In previous editions it was 10, and I can't find a reliable number for the non-pro editions.

That may sound like a lot, but if you have lots of devices reading/writing to it, you'll quickly run out. It's also kind of a resource hog when there are lightweight, easy-to-set-up, dedicated NAS operating systems with no user limitations available for free, like TrueNAS (fka FreeNAS) and others.

[–] mrtramplefoot@alien.top 1 points 10 months ago (2 children)

I use windows 10 pro for my nas/media server. I run drivepool and it works great for me. I run a Pentium gold g6400 and it's more than enough power. It might use a bit more RAM, but I'll buy another 8gb of RAM before I spend eons trying to learn how to do something in Linux.

[–] faygo1979@alien.top 1 points 10 months ago

The old Windows Home Server was awesome. The problem with standard Windows is I don't remember if you can do some sort of software RAID or not. You could build it with a hardware RAID on secondary drives, and just share those out, and you would be fine. Not any different than the old Windows file servers years ago.

[–] NeedSomeHelpHere4785@alien.top 1 points 10 months ago

It's not that you can't, or necessarily shouldn't, but other things are so much better. Personally I like working in Windows, so I run a Windows VM on Proxmox along with TrueNAS and Ubuntu. Windows is resource hungry, and other OSes do server things a lot better and don't do the bothersome things desktop Windows wants to do.

[–] FritzGman@alien.top 1 points 10 months ago

If you want to get the same resiliency as a NAS, you need to have multiple hard drives, so a mini PC won't cut it. Single-drive redundancy is an oxymoron, because once the disk goes, so does your redundancy. That said, if you are OK with a single disk because of cloud backups, no issue. I would consider how long it would take me to restore it all should the disk go. It's not as fast as you think, especially when you get into the terabytes conversation.

If you want to run Plex for media streaming, you'll find the resource consumption of windows just for existing plus all the other things running may impact your quality of digital life should more than one person stream something at the same time. Just check the number of "Service Host:" processes running on your windows machine. All the windows specific ones add up after a while.

Windows updates not only patch security flaws but also introduce new features or remove old ones. This can sometimes impact what you are doing with it because they try to steer you to their ecosystem of products with the changes they introduce. It can also break something that works because it isn't a dedicated appliance meant to service that one function.

Multiple NICs. I think there might be mini PCs that come with that nowadays and PCs in general can run multiple NICs. However, Windows networking used to be notoriously bad at managing multiple network card connectivity. Not sure if that is still true as I don't work with Windows too much anymore but if it is and that was in your plans, might want to make sure it can do what you think it can do with the version of windows you get. They still have Windows Pro vs Windows Home right?

Those are some of things I would consider. In any case, your post sounds like your mind is already made up. In the end, you will have to live with it so what you think is really what matters.

[–] nowhereman1223@alien.top 1 points 10 months ago

End User Windows has a shit history of forcing updates on you, and reboots just because you waited too long.

End User Windows is also not great at managing large numbers of storage drives.

They also aren't great to manage remotely.

[–] WebMaka@alien.top 1 points 10 months ago

Short answer: desktop versions are tuned for application performance, especially foreground vs. background, while server versions are tuned for, well, being used as a server: multitasking with less of a performance penalty for background applications (like server daemons) and of course greater uptime.

There's also much more bloat on the desktop side, as it's targeting consumers and not IT.

[–] MairusuPawa@alien.top 1 points 10 months ago

tldr: the OS is shit, it's super expensive, keeping it actually secure is hell, it's nowhere near as good as ZFS-based solutions

[–] SimonKepp@alien.top 1 points 10 months ago

Desktop editions of Windows can be used for a simple home NAS. However, they don't have a lot of advanced features supporting that use case. Once you have a NAS, all of your digital data tends to end up on it, which makes it a very critical system. Desktop Windows has one significant advantage: you can run Backblaze's unlimited personal-computer backup on it to secure a backup of your critical and non-critical data in case of a disaster. On the downside, there are no good RAID features available for desktop Windows, making your data very vulnerable to drive failures, which are quite common. I personally prefer to run a home NAS on something supporting the ZFS file system, such as Linux with OpenZFS or TrueNAS, but it is very important to choose a system based on your own skills, so you are able to set it up and manage it safely.

[–] nzulu9er@alien.top 1 points 10 months ago

Windows is feature-rich and is stable, secure, and just works. Storage Spaces with ReFS and controlled folder access are a short list of features valuable to any storage solution.

[–] SilentDecode@alien.top 1 points 10 months ago
  1. It isn't meant for 24/7 usage and it also doesn't have the nice features other options do have (including Windows Server)
  2. Windows Updates is just annoying
  3. Why would you if there are plenty better tools for a specific job
  4. Just because you know it doesn't mean you should use it for this. Maybe it's time you learn a new trick, such as Linux-based stuff
  5. Extremely heavy for simple tasks
  6. License costs

And to answer the points you made:

  1. Windows is always busy with something, so yeah, by default you have higher usage.
  2. Valid point
  3. Simple needs require simple tools. So this is a perfect opportunity for TrueNAS.

I have nothing against Windows, so don't get me wrong. But there are so many better ways to do stuff, also where you don't have to pay for licensing.

[–] highedutechsup@alien.top 1 points 10 months ago

I think the issue is here: "I can't remember reading a good explanation of why." One can make any number of excuses for why or why not to do things; it really is about what you want to do.

I use Windows Core solely for my file storage. I have thought about moving to a solution like TrueNAS, but didn't find it as simple as my current solution. I found Windows Core runs HDSentinel and SSH/iSCSI/NFS/Samba easily, and the uptime is well over 3 years. I block outside access and never do updates. Since I have a mixture of Windows and Linux/BSD/ARM/Apple products in my home, the file server is dedicated to file storage and does nothing else. Windows Core is stable and always up.

I only use ReFS because I did have corruption with NTFS, and it has been super stable. I know people love ZFS, but for me it was an unmitigated shit show of mismatched utilities and documentation, and losing containers on reboot with Proxmox was a PITA, as was getting them back. Since I have 20TB drives, I don't do any RAID or anything like that, just straight drive presentation.

[–] dafzor@alien.top 1 points 10 months ago

Because linux is just easier and less annoying.

I too started with a Windows server NAS, since it was what I was more familiar with, but I eventually moved to a Linux server NAS and would never go back.

Systems like OMV and unRaid are built to be a NAS, so after the install (which is very easy), the common/popular use cases will be covered out of the box, or a plugin install away, without leaving the built-in WebUI.

Stuff such as:

  • Set up a bunch of random disks as a single one
  • Set up a backup server for Windows/Mac clients
  • Set up syncing to a cloud provider
  • Run an additional service in a container
  • Save all your NAS settings so you can restore them on a new server later

In Windows, you'd have to install a bunch of independent programs designed for single-user desktops, with different configurations and UIs, and try to make them work as a server.

All while also fighting against default OS settings and licensing limitations, since Windows was never designed to be a NAS but a desktop OS, and Microsoft wants servers to use server licenses.

Not to mention Windows just isn't popular server software outside of enterprise, due to the high cost, so most tools will only support Linux and won't even have a Windows version.

[–] szakes1@alien.top 1 points 10 months ago

Unless you feel comfortable setting everything up via a GUI, Linux can be configured using just the CLI. It's a major game changer when it comes to OS administration.

[–] officiallyStephen@alien.top 1 points 10 months ago

I have found that running windows without reboots leads to a lot more issues than Ubuntu (or ideally Ubuntu server). I don’t know if the OS just has memory leaks or what but continuous runtime is just not that great on Windows Desktop

[–] schokelafreisser@alien.top 1 points 10 months ago

If you want, you could try OpenMediaVault. It is a very simple and clean OS for NAS functionality, based on Debian. It has simple plugins for the basic functions, but you can install anything on it in Docker. There are great tutorials online.

[–] thetredev@alien.top 1 points 10 months ago

I wouldn't say it's frowned upon. It's just... assuming you are not going the container route, then it's basically the same thing that it always has been with any OS before LXC (and after that Docker) became a thing: One machine for multiple applications (bare metal or VM, doesn't matter). Managing and maintaining those without causing too much downtime is a sometimes unachievable task.

Generally speaking: since Docker became a thing, it really doesn't matter which OS you use to run which application from which image type (Linux or Windows, doesn't matter either).

My personal opinion:

  • Use a "real" hypervisor as the underlying OS: ESXi, Proxmox, KVM standalone, whatever suits your needs and skills. Why? Because the OS is made for hypervisor tasks. Windows Server or Desktop with Hyper-V may work well with Windows guests, but managing those, especially with multiple bare metal nodes, may be unintuitive to say the least.
  • Use Windows as a VM to run a Windows application
  • Use Linux as a VM with Docker to run multiple Linux applications

That's how I do it.
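
As a sketch of the "Linux VM with Docker for multiple Linux applications" layer, a minimal docker-compose file could look like the following. The images are real public ones, but the service selection and host paths are just placeholders, not recommendations:

```yaml
# docker-compose.yml inside the Linux VM: one VM, several Linux apps.
# Images exist on Docker Hub; host paths below are example placeholders.
services:
  samba:
    image: dperson/samba          # SMB file sharing
    network_mode: host
    volumes:
      - /srv/share:/share         # directory to export
    restart: unless-stopped
  plex:
    image: plexinc/pms-docker     # media server
    network_mode: host
    volumes:
      - /srv/media:/data          # media library
    restart: unless-stopped
```

One `docker compose up -d` in the VM then brings everything up, and the whole stack is captured by snapshotting that single VM.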

Edit: of course nothing stops you from running Windows Desktop bare metal as a NAS server. That's a perfectly valid thing to do. BUT: you have the same problem as running bare-metal Linux as a NAS server: how do you achieve backups/snapshots? I know it's certainly not impossible, but using a VM is many times more convenient. This is the main reason to use VMs.
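
To illustrate how cheap VM backups get on a hypervisor: on Proxmox, a nightly snapshot-mode backup is a one-liner. Assuming a VM with ID 100 and a storage named "backups" (both IDs are made up for the example), a system crontab entry could be:

```shell
# Hypothetical /etc/cron.d entry: nightly snapshot-mode vzdump backup
# of VM 100 to the Proxmox storage called "backups".
0 3 * * * root vzdump 100 --mode snapshot --storage backups --compress zstd
```

Snapshot mode means the VM keeps running while it's backed up, which is exactly what's awkward to replicate on a bare-metal install.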

[–] Professional-Bug2305@alien.top 1 points 10 months ago

NAS software is friendlier and just works. Windows doesn't have a great RAID option, but NAS OSes do. Windows Desktop also has a lot of artificial limits, is less stable, and will reboot for patches at inconvenient times.

In simplest terms, it's like using a flat screwdriver on a Phillips head. It'll technically work, but there are just plain better options that work better and easier.

[–] justpassingby_thanks@alien.top 1 points 10 months ago

I have had two Synology boxes and they were my first foray into servers and backup at home. I'm an amateur, or was, and always will be as far as enterprise things go. My recent Synology DS1019+ acts as an awesome Plex server using Docker, only because it still has Intel graphics for transcoding. Most recent Synology boxes have moved to AMD, or to Intel chips without integrated graphics. For that reason and that reason alone, I'd get a small box with an iGPU (my preference is Linux). That means you'd still need a proper NAS for media. Maybe just slowly update the disks in your old Synology, or do a DIY build, or learn something new. But honestly, for my NAS purposes I just trust Synology, and I have learned all sorts of other options along the way.

[–] eagle6705@alien.top 1 points 10 months ago

The biggest limitation is the connection limit: Windows desktop editions only allow 20 concurrent inbound SMB connections. While 2-3 users won't matter, once you get past 10 connections or so you will start running into issues.

[–] sintheticgaming@alien.top 1 points 10 months ago

Given that you're "most comfortable" with Windows, that is probably the number one reason why you should go with something other than Windows. I think you should always get out of your comfort zone and expand your knowledge. Sure, you can keep using Windows, but why not branch out! Hell, if you really want to take a leap of faith, load up TrueNAS Core 🤣

[–] Perfect_Sir4820@alien.top 1 points 10 months ago

I migrated my server (mainly Plex but lots of other stuff too) from Windows bare metal, to Docker on Windows, to Docker on Linux. The main reason was to avoid Docker running in a VM, with the overhead and networking issues that a native Linux install avoids. In the end I'm very glad I did. Docker just makes everything so easy to set up, manage, back up and migrate if needed. My server is very stable and almost never needs to be fiddled with or restarted. I also learned a ton, which then springboarded into other homelab stuff like running a Proxmox server, an OPNsense firewall, a remote gaming server, etc.
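
For what it's worth, the "networking issues" part mostly comes down to host networking, which only works on native Linux. A minimal Plex compose sketch (using the official `plexinc/pms-docker` image; paths and the claim token are placeholders):

```yaml
# Plex via Docker on bare-metal Linux. network_mode: host gives Plex
# direct LAN visibility for discovery/DLNA, which Docker-in-a-VM breaks.
# Host paths, TZ, and PLEX_CLAIM are example placeholders.
services:
  plex:
    image: plexinc/pms-docker
    network_mode: host
    environment:
      - TZ=Etc/UTC
      - PLEX_CLAIM=claim-XXXXXXXX   # short-lived token from plex.tv/claim
    volumes:
      - /srv/plex/config:/config    # Plex metadata/database
      - /srv/media:/data            # media library (read-mostly)
    restart: unless-stopped
```

Backing up then reduces to copying the `/config` volume, which is what makes migration between hosts so painless.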

[–] IBreedBagels@alien.top 1 points 10 months ago

It depends on who you're talking to.. Windows can work perfectly fine..

The more "techy" the environment you're talking in, the more in depth answers you'll get. But in general:

- It's bulky; there's a ton of overhead that simply isn't needed. Lots of bloatware and background services that are basically mandatory and don't exist on other systems.

- Security concerns... Nobody trusts Microsoft, and there's too much user security involved even when you've purchased and own the freaking thing. Sometimes you have to jump through multiple hoops to accomplish a basic task because of some "permissions" BS.

- Lack of customizability... There's not a ton you can do as far as customization besides writing your own programs/scripts. Any third-party route you might take will just add more unnecessary bloat.

- As for the actual work being done, the "NAS" portion, the native management is horrendous. It just feels clunky and unreliable (I've run many Windows NAS environments). It can run just fine, it just doesn't give you that warm "this is gonna work" feeling...

- I think one of the BIGGEST issues here is price... Windows isn't free. At least, if you're a law-abiding citizen lol.

[–] liverwurst_man@alien.top 1 points 10 months ago

Windows has more overhead, is more expensive, is less interesting/fun IMO, has poor data-parity features, and gets less of the homelab community's attention than any purpose-built Linux-based homelab OS. But it will definitely do the job with minimum effort from you.
