this post was submitted on 16 Apr 2024
93 points (93.5% liked)

Selfhosted

top 47 comments
[–] MrJameGumb@lemmy.world 71 points 7 months ago* (last edited 7 months ago) (1 children)

I don't think you should be using it anymore if it's getting hot enough to cook a pizza...

[–] Krafting@lemmy.world 20 points 7 months ago

Cooked perfectly!

[–] NoIWontPickAName@kbin.earth 31 points 7 months ago (1 children)

You knew when you took the picture with the pizza that most of the comments would be about the pizza didn’t you?

Also, if you place it over a vent, does it double as a pizza keeper warmer?

[–] Krafting@lemmy.world 6 points 7 months ago

Kinda, but I cooked the pizza, it was there when I wanted to post something about the server, so I couldn't resist ahah

To be a good pizza keeper warmer, I'd definitely have to remove the 12 fans inside

[–] skittlebrau@lemmy.world 18 points 7 months ago* (last edited 7 months ago) (1 children)

Serving pizza and files. What a time to be alive.

mv pizza.01 /srv/mouth/

[–] WhyAUsername_1@lemmy.world 5 points 7 months ago

In Linux everything is a file!

[–] peregus@lemmy.world 16 points 7 months ago* (last edited 7 months ago) (1 children)

Avoid hardware RAID (have a look at this). Use Linux MD or BTRFS or ZFS.
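
For reference, a Linux MD mirror is only a couple of commands; the device names below are placeholders for whatever disks the OS actually exposes:

    # create a two-disk RAID 1 array (placeholder device names)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
    # watch the initial sync, then put a filesystem on it
    cat /proc/mdstat
    mkfs.ext4 /dev/md0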

[–] Krafting@lemmy.world 16 points 7 months ago (4 children)

It's a 2004 server; you can't do anything but HW RAID on it. Also, it's using Ultra SCSI (and you shouldn't be using that in 2024 either ahah)

[–] lemmyreader@lemmy.ml 4 points 7 months ago* (last edited 7 months ago) (1 children)

SCSI was the crème de la crème ages ago! Isn't it just a matter of going into its BIOS, configuring the hardware RAID (go for mirror only!?), enduring the noise it probably makes, and installing? :)

[–] Krafting@lemmy.world 2 points 7 months ago (2 children)

Indeed! I have a lot of SCSI disks, PCI cards and a few cables too! (Also, SCSI is fun to pronounce... SKEUZY.) But on this server, the RAID card doesn't have any option to create a RAID array in its BIOS; from what I can tell it needs special software, and I can't find good tutorials or documentation out there :(

[–] spacepotato@lemmy.world 4 points 7 months ago (1 children)

You can find the 7.12.x support CD for that controller at https://www.ibm.com/support/pages/ibm-serveraid-software-matrix. I'm pretty sure that server model did not support USB booting so you'll need to burn that to a disc. This will be the disc to boot off of to create your array(s).

I forget if the support CD had the application you would install in Windows to manage things after installation or not, or if that's only on the application CD. Either way you'll find several downloads for various OS drivers and the applications from that matrix.

[–] Krafting@lemmy.world 2 points 7 months ago

Thanks for the link! I'll definitely need to try this... I have a few CDs laying around, I'll burn one!

[–] metaStatic@kbin.social 2 points 7 months ago (1 children)

My SCSI controller's setup has to be entered during boot to manage RAID. It also has an external battery that needed replacing (which cost more new than just buying a new card ... with the exact same battery). So if you're not in verbose boot mode, figure that out and see if the controller tells you which function key it needs.

Figuring out this old stuff is most of the fun of running it; I would sell it as scrap before actually hosting anything on it.

[–] Krafting@lemmy.world 1 points 7 months ago

Yeah, I already have the key combo to enter the RAID card BIOS: Ctrl + i

And yeah, I won't be hosting anything on it obviously, I just love old hardware and trying to push it to its limit!

[–] peregus@lemmy.world 2 points 7 months ago

I did not know that

[–] teawrecks@sopuli.xyz 2 points 7 months ago (1 children)

Why is that? Does the motherboard effectively just not have enough inputs for all the disks, so that's why you need dedicated hardware that handles some kind of raid configuration, and in the end the motherboard just sees it all as one drive? I never really understood what SCSI was for. How do the drives connect, SATA/PATA/something else?

[–] Krafting@lemmy.world 3 points 7 months ago

SCSI is its own thing, created to fix some issues with IDE iirc. The drive backplane is directly attached to the motherboard, well, more specifically to the RAID card on the motherboard; the RAID card then gives the OS/motherboard access to the configured RAID volume you have created, but not to the disks themselves.

[–] catloaf@lemm.ee 1 points 7 months ago

Well, you could make each disk its own RAID 0 array. There would probably be performance overhead compared to just using the hardware RAID though.

[–] Emmie@lemm.ee 12 points 7 months ago

It really ties the room together

[–] austinfloyd@ttrpg.network 9 points 7 months ago (2 children)

For a non-pizza comment: I've been out of the hardware game for a while, but the last time I had to set one of these up for RAID, the paper manual (which can probably be found digitally) was helpful. I also vaguely recall RAID 5 either having issues or being unavailable.

[–] austinfloyd@ttrpg.network 4 points 7 months ago

It's slowly coming back to me... There was a floppy disk that you needed to launch the RAID config? Also, the platform ran pretty well with Debian 4.0, if you're debating what to run on it.

[–] just_another_person@lemmy.world 1 points 7 months ago

It's pretty straightforward. The RAID controller has its own BIOS. Set up what you want. Done.

[–] sabreW4K3@lazysoci.al 7 points 7 months ago (1 children)
[–] Krafting@lemmy.world 6 points 7 months ago (1 children)

Air, mostly!

(but also merguez and pepper)

[–] UndulyUnruly@lemmy.world 2 points 7 months ago

merguez

I see you are a person of culture.

[–] possiblylinux127@lemmy.zip 6 points 7 months ago (1 children)

I like how you have a pizza on the top. Probably not a great place for it long term.

[–] mipadaitu@lemmy.world 8 points 7 months ago

Just keeping lunch warm.

[–] Endorkend@kbin.social 6 points 7 months ago (1 children)

You should replace that thing with something more modern. I had a 5000P chipset system someone gave me with dual quad cores and an assload of RAM.

The shitty box idled at over 400W. I went as far as getting low-power RAM and the newest CPU it would support that also supported frequency and power scaling, and it still used over 400W at idle.

This while I had a Xeon E5 box that was only a few years newer, used more like 50W at idle, and utterly decimated the 5000-series box in CPU performance.

You're probably better off fetching some old Ryzen 1800X system off eBay for higher performance and leagues lower power consumption.

As for the RAID, don't use it. Hardware RAID has always been shit, and in modern Linux and Windows it's as good as completely deprecated.

[–] Krafting@lemmy.world 8 points 7 months ago (2 children)

You're missing the point, it's not about using old hardware to daily-drive it, it's about the fun and thrill of discovering ancient hardware, software and technologies! I'll definitely need to see how much power this one draws though, but with only 1 of its 2 CPUs I'd say around 200W for something this old

[–] metaStatic@kbin.social 3 points 7 months ago (1 children)

I have an HP ProLiant DL380 G7, basically the last server with a front-side bus, and all the comments about it were about performance per watt.

and they're not wrong.

I just don't think this is the community for old servers like this. Self-hosting is very much a practical consideration, and the money spent on electricity running anything useful on these old things is better spent on a Raspberry Pi or standalone NAS or something.

[–] Krafting@lemmy.world 6 points 7 months ago

In my opinion, selfhosting is also about discovering how (and what) you could selfhost with old hardware and OSes, just for fun and to understand a bit more about the history of hardware.

But yeah, for 24/7 services I have other, way more modern servers and also an OrangePi

[–] Endorkend@kbin.social 2 points 7 months ago (2 children)

Oh, I get it. But a baseline HP ProLiant from that era is just an x86 system, barely different from a desktop today but worse/slower/more power-hungry in every respect.

For history and "how things changed", go for something like a Sun Fire system from the mid-2000s (280R or V240 are relatively easy and cheap to get and are actually different) or a ProLiant from the mid-to-late '90s (I have a functioning Compaq ProLiant 7000, which is HUGE and a puzzle box inside).

x86 computers haven't changed much at all in the past 20 years, and you need to go to the rarer models (like blade systems) to see an actual deviation from the basic PC-like form factor we've been using all that time, with unique approaches to storage and performance.

For self-hosting, just use something more recent that falls within your price class (usually 5-6 years old becomes highly affordable). Even a Pi is going to trounce a system that old, and it actually has a different form factor.

[–] Krafting@lemmy.world 1 points 7 months ago

I would love to actually get my hands on some Sun gear, they look really cool! Or even some Itanium-powered servers! This one is an IBM server that I got for free, and exploring the software to use it is a bit of a challenge; it's pretty different from how you configure servers nowadays. (Also, a floppy drive on a server, this is what I call awesome!)

For selfhosting real stuff, I do have modern gear and an OrangePi too!

[–] Krafting@lemmy.world 1 points 7 months ago (1 children)

I also looked up the Compaq ProLiant 7000, and this thing is huge indeed, but it does look awesome!

[–] Endorkend@kbin.social 3 points 7 months ago

They have a secondary motherboard that hosts the Slot CPUs, 4 single-core P3 Xeons. I also have the Dell equivalent model, but it has a bum mainboard.

With those '90s systems, to get Windows NT to use more than 1 CPU, you had to get the appropriate Windows version that actually supported them.

Now you can simply upgrade from a 1- to a 32-core CPU and Windows and Linux will pick up the difference and run with it.

In the NT 3.5 and 4 days, you actually had to either do a full reinstall or swap out several parts of the kernel to get it to work.

Downgrading took the same effort, as a multiprocessor Windows kernel ran really badly on a single-core system.

As for the Sun Fires, the two models I mentioned tend to be readily available on eBay in the 100-200 range and are very different inside from an x86 system. You can go for the 400 or higher series to get even more difference, but getting a complete one of those can be a challenge.

And yes, the software used on some of these older systems was a challenge in itself, but they aren't really special; they're pretty much like having different vendors' RGB controller software on your system, a nuisance that you should try to get past.

For instance, the IBM 5000 series RAID cards were simply LSI cards with IBM-branded firmware.

The first thing most people do is put the actual LSI firmware on them so they run decently.

[–] HumanPerson@sh.itjust.works 4 points 7 months ago* (last edited 7 months ago)

Did you use it to cook the pizza?

[–] mlg@lemmy.world 4 points 7 months ago* (last edited 7 months ago) (1 children)

I have a (crappy) PowerEdge and know for a fact that that's the wrong end of any rack server to put the pizza on.

The only heat would be from the drive backplane; all the boiling-hot CPUs, RAM, and expansion cards are further back.

[–] Krafting@lemmy.world 2 points 7 months ago

Who said it was to keep it warm? Maybe it's to cool it off before eating it :)

Also, drives can get pretty hot

[–] li10@feddit.uk 4 points 7 months ago

wats on the pzaa

[–] lemmyreader@lemmy.ml 3 points 7 months ago

pizza for scale :)

[–] Freestylesno@lemmy.world 3 points 7 months ago

Does it cook pizza?

[–] Penta@lemmy.world 2 points 7 months ago
[–] lnxtx@feddit.nl 1 points 7 months ago (1 children)
[–] Krafting@lemmy.world 3 points 7 months ago

Intel Xeon 3.2GHz (yes that's the whole model number), 4 gigs of DDR2 RAM and 3x 73GB Ultra SCSI disks!

[–] Decronym@lemmy.decronym.xyz 1 points 7 months ago* (last edited 7 months ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Fewer Letters More Letters
NAS Network-Attached Storage
PCIe Peripheral Component Interconnect Express
RAID Redundant Array of Independent Disks for mass storage
SATA Serial AT Attachment interface for mass storage
SSD Solid State Drive mass storage
ZFS Solaris/Linux filesystem focusing on data integrity

[Thread #684 for this sub, first seen 16th Apr 2024, 22:05] [FAQ] [Full list] [Contact] [Source code]

[–] pyrosis@lemmy.world 1 points 7 months ago (1 children)

I think I would get rid of that optical drive and install a converter for another drive, like a 2.5" SATA. That way you could get an SSD for the OS and leave the bays for RAID.

Other than that, what you want to put on this beast and whether you want to utilize the hardware RAID will determine the recommendations.

For example, if you are thinking of a file server with ZFS, you need to disable the hardware RAID completely by getting the controller to expose the disks directly to the operating system. Most would investigate whether the RAID controller can be flashed into IT mode for this. If not, some controllers do support a simple JBOD mode, which would be better than using the RAID in a ZFS configuration. ZFS likes to directly maintain the disks. You can generally tell it's correct if you can see all your disk serial numbers during setup.
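
A quick sanity check that the controller is really passing the disks through (and not hiding them behind one virtual volume) is to look for per-disk serial numbers, something along these lines:

    # with proper pass-through/JBOD, each physical disk shows up with its own serial
    ls -l /dev/disk/by-id/
    lsblk -o NAME,MODEL,SERIAL,SIZE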

Now, if you do want to utilize the RAID controller and are interested in something like Proxmox or just a simple Debian system, I have had great performance with XFS on hardware RAID. You lose out on some advanced copy-on-write features, but if disk I/O is your focus, consider it worth playing with.
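
If you do go the hardware RAID + XFS route, formatting the controller's virtual disk is about as simple as it gets; the device name below is a placeholder for whatever the RAID volume appears as:

    # the controller presents the whole array as one block device (placeholder name)
    mkfs.xfs -L data /dev/sda
    mkdir -p /srv/data && mount /dev/sda /srv/data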

My personal recommendation is to get rid of the optical drive and replace it with a 2.5" converter for more installation options. I would also recommend maxing out that RAM and possibly upgrading the network card to a 10Gb NIC if possible. It wouldn't hurt to investigate the power supply; the original may be a bit dated, and you may find a more modern supply that is more energy efficient.

My general OS recommendation would be Proxmox installed in ZFS mode with an ashift of 12.

(It's important to get this number right for performance because it can't be changed after creation: 12 for disks and most SSDs, 13 for more modern SSDs.)
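
Since ashift is just the sector size as a power of two (12 → 4096 bytes, 13 → 8192), it's worth checking what the disks actually report before creating the pool, for example:

    # logical vs physical sector size reported by each disk
    lsblk -o NAME,LOG-SEC,PHY-SEC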

Only do ZFS if you can bypass all the RAID functions.

I would install the rpool as a basic ZFS mirror on a couple of SSDs. When the system boots, I would log into the web GUI and create another ZFS pool out of the spinners, ashift 12. If this is mostly a pool for media storage, I would make it a Z2 (RAIDZ2). If it is going to have VMs on it, I would make it a RAID 10 style pool (striped mirrors); disk I/O is significantly improved for VMs in a RAID 10 style ZFS pool.
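
Assuming the disks end up exposed directly, the two layouts described above would look roughly like this (pool and device names are placeholders):

    # "RAID 10 style": striped mirrors, better disk I/O for VMs
    zpool create -o ashift=12 tank mirror sda sdb mirror sdc sdd
    # or, for mostly media storage, a RAIDZ2 pool instead
    zpool create -o ashift=12 tank raidz2 sda sdb sdc sdd sde sdf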

From here, for a bit of easy ZFS management, I would install Cockpit on top of the hypervisor with the ZFS plugin. That should make it really easy to create, manage, and share ZFS datasets.
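
On a Debian-based hypervisor that part is roughly the following (Cockpit itself is packaged; the ZFS plugin comes from its own upstream project, so I won't guess its exact install steps):

    # Cockpit is in the Debian repos; the ZFS management plugin is a separate add-on
    apt install cockpit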

If you read this far and have considered a setup like this, one last warning: use the Proxmox web UI for all the tasks you can. Do not use the Cockpit web UI for much more than ZFS management.

Have fun creating LXCs and VMs for all the services you could want.

[–] Krafting@lemmy.world 1 points 7 months ago

Hey, it's a 2005 server: it can't do IT mode, it only has Ultra SCSI 70GB drives, a 10Gb NIC would be useless (it's only PCI, not PCIe), and it's DDR2 RAM and single-core processors only too!

I'll probably install Debian; I had fun trying Windows Server 2003. It has a floppy drive too, and I'll definitely keep the DVD and floppy drive in there! (The CD drive is IDE btw.) And you can only configure the RAID array via a CD provided by IBM (no, you cannot boot this CD from a USB key, as the software on the CD looks for the DVD drive and not a USB key).

Most of what you said would be accurate for recent servers though, but not here, not at all ahah!