I see no problem putting all the machines into a single cluster. By the way, what are you using for shared storage?
As mentioned, if a drive dies, you just take it out, replace it with a new one, and start the RAID rebuild. The vendor should have a guide on this with detailed steps.
Well, on Windows, for bit rot prevention there is ReFS, but the problem is that it can go RAW for no apparent reason. Happened to me several times. As for RSTe (Intel vROC), it has poor performance and isn't reliable either. Plus, I'm not sure how migration would go if you want to move the array to another system.
I think that should be possible, but I would prefer the second option mentioned: the Hyper-V role with a NAS OS VM and the drives passed through to it. Then build the RAID inside the VM.
That's a very decent setup. What are you running on it?
Hmm, I guess the biggest IOPS and latency hit will come from the storage protocol. I mean, with 10GbE and iSCSI or NFS, you might not feel the benefits of NVMe, especially in terms of latency. And as far as I know, there is no NVMe-oF support yet.
Depends on the amount of data you are writing and the DWPD rating of the SSD. Also, take parity write amplification into account if you're doing RAID: https://support.liveoptics.com/hc/en-us/articles/360000498588-Average-Daily-Writes
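To make that concrete, here's a rough back-of-the-envelope sketch of the endurance math. All the numbers in the example (daily writes, drive size, drive count, the ~2x parity write-amplification factor for RAID 5) are made-up illustrations, not values from this thread:

```python
# Rough endurance check: what DWPD rating do the SSDs need?
# All figures below are hypothetical example values.

def required_dwpd(daily_writes_tb, drive_capacity_tb, num_drives,
                  raid_write_amplification=1.0):
    """Drive Writes Per Day each SSD must sustain.

    raid_write_amplification: extra writes caused by parity; ~2.0 is a
    common rule of thumb for small random writes on RAID 5.
    """
    # Total writes (host writes plus parity overhead) spread across drives.
    writes_per_drive_tb = daily_writes_tb * raid_write_amplification / num_drives
    return writes_per_drive_tb / drive_capacity_tb

# Example: 2 TB written per day across four 1.92 TB SSDs in RAID 5.
print(round(required_dwpd(2.0, 1.92, 4, raid_write_amplification=2.0), 2))
```

If the result comes out well under the drive's rated DWPD (many read-intensive enterprise SSDs are rated around 1 DWPD), you have headroom; if not, look at higher-endurance models.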
Very nice and clean setup. Looks great!
Well, R720 is quite old. I would look into R730/R630 options. Or ideally, use some hardware that you already have. An old laptop with Proxmox might very well be a start.
Looks like a really cool setup! Nicely done.
I would go for SSDs if I needed speed. SSD longevity is just fine. Any drive can die when you leave it unused for a decade.
That's still a good usage of the hardware. Main thing is that it does the job.