SamSausages

[–] SamSausages@alien.top 1 points 11 months ago

I do this with ZFS, using a keyfile and a script that runs at boot to unlock/mount.

I put the keyfiles on a USB drive. (Make sure you have backups!) The USB drive is hidden; I won't go into details on how I did that, since there are several ways to do it and you can get pretty creative.

If someone steals my server, they need to know where I hid my USB drive, or they won't be able to get to any of the encrypted datasets.
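
For anyone curious, the boot script can be something as simple as the sketch below. This is a minimal example, assuming a USB drive with a hypothetical filesystem label of "keys" and datasets whose keylocation property points at a keyfile on that drive; it's not the exact script I run:

```bash
#!/bin/bash
# Minimal sketch of a boot-time unlock script. Assumes a USB drive with a
# hypothetical filesystem label "keys", and datasets whose keylocation
# property points at a keyfile on that drive, e.g.:
#   zfs set keylocation=file:///mnt/keys/tank.key tank/secure
set -euo pipefail

KEYDEV="/dev/disk/by-label/keys"
KEYMNT="/mnt/keys"

mkdir -p "$KEYMNT"
mount -o ro "$KEYDEV" "$KEYMNT"

zfs load-key -a    # load keys for every dataset whose keyfile is now reachable
zfs mount -a       # mount the unlocked datasets

umount "$KEYMNT"   # unmount the USB drive so the keyfile isn't left exposed
```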

[–] SamSausages@alien.top 1 points 11 months ago

X12SDV-4C-SP6F

Man, I have been looking into the X11SDV-8C-TP8F and I'm so close to pulling the trigger. I love the form factor. Do you know of any boards with 25GbE networking that also have QuickAssist (QAT) enabled like the X11SDV-8C-TP8F? The jump from 4 cores to 8 cores on those adds QAT.

[–] SamSausages@alien.top 1 points 11 months ago

I looked at those cases, but I ended up going with this one because it's easier to expand and because of the PSU size. You can find them for less elsewhere:
https://www.amazon.com/gp/product/B095YMXW1K/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1

Have a look at these HDDs; I have seen them as low as $150. I run 20 of them right now, and some have 3 years of runtime at this point. I've been very happy with this vendor; the drives usually arrive with only a few hours of power-on time:
https://www.disctech.com/Western-Digital-UltraStar-DC-HC530-WUH721414ALE604-0F31156-0F31284-14TB-3.5-7.2K-RPM-512e-SATA-6Gb

Storage type:
If you need the speed, stick with ZFS.
But it really depends on the data and how risk averse you are (i.e., how easy the data is to replace). I started out all ZFS; now I use this method:

Unraid array: for easily replaceable media, like movies (where I have a list and can easily re-download/upload), I use only the Unraid array. Mainly for data that is written once and read often.
Downside:
Unraid write speed is slow. No scrubbing.
Upside:
Only the one disk that has the data spins up. This saves me about 180-200W of power and a lot of wear on the drives.
It's also very storage efficient: I only run 2 parity disks with 20 drives. I wouldn't do that on a regular raidz, but with Unraid the data lives on each individual disk, so you don't lose the entire array if you lose more than 2 disks. That changes the risk math.

Then for all my critical data that I need ZFS speed and scrubbing for (home pics/media), I set up a raidz1 pool with 4x Intel P4510 4TB.
Those I got for $200/pc new on eBay.
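
For reference, creating a 4-drive raidz1 pool like that is a one-liner. Pool name and device paths here are hypothetical; use your actual /dev/disk/by-id paths:

```bash
# Hypothetical pool/device names; use the real /dev/disk/by-id paths.
zpool create -o ashift=12 fastpool raidz1 \
  /dev/disk/by-id/nvme-INTEL_SSDPE2KX040T8-disk1 \
  /dev/disk/by-id/nvme-INTEL_SSDPE2KX040T8-disk2 \
  /dev/disk/by-id/nvme-INTEL_SSDPE2KX040T8-disk3 \
  /dev/disk/by-id/nvme-INTEL_SSDPE2KX040T8-disk4
```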

[–] SamSausages@alien.top 2 points 11 months ago

Can be safer. Can be worse.

A poorly configured self hosted vaultwarden can be a major security issue.

A properly configured one is arguably safer than hosting with a 3rd party. LastPass taught me that one.

If you configure it so it's not exposed to the web and is only accessible through a VPN, like Tailscale, it can be quite robust.
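
As a rough sketch of that "VPN only" idea, assuming Docker and Tailscale are already running (the data path is hypothetical): bind the published port to the host's tailnet address so Vaultwarden never listens on the public interface:

```bash
# Minimal sketch: publish Vaultwarden only on this host's Tailscale address,
# so it's reachable over the tailnet but not from the public internet.
TS_IP="$(tailscale ip -4)"   # e.g. 100.64.0.10

docker run -d --name vaultwarden \
  -v /srv/vaultwarden:/data \
  -p "${TS_IP}:8080:80" \
  vaultwarden/server:latest
```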

[–] SamSausages@alien.top 1 points 11 months ago

That sounds easy enough, but it creates a situation where I don't know which updates are important (security) and which are minor. So I have to read the release notes for each update and then decide whether I need it to patch a security vulnerability.
Whereas with the other method, I know an update is likely critical.
For some those frequent updates are a +, for me it is not. So use what works best for you!

But right now I couldn't use OPNsense even if I wanted to, as it's not FIPS compliant due to them still using the deprecated, EOL OpenSSL 1.1.1, with no date set to move to v3.
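
If you want to verify what a box is actually shipping, it's a one-line check from the shell:

```bash
openssl version
# "OpenSSL 1.1.1x" = EOL since Sept 2023, no more public security updates;
# "OpenSSL 3.x" is the currently supported branch.
```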

[–] SamSausages@alien.top 1 points 11 months ago

Chuckle, butthurt downvotes but not one comment disputing anything I said. Enjoy the deprecated OpenSSL without security updates.

[–] SamSausages@alien.top 1 points 11 months ago (1 children)

No, I like pfSense because it has less frequent updates and is better documented.

Here is one of the better guides that helps you config much of what you are talking about:

https://nguvu.org/pfsense/pfsense-baseline-setup/

Plus, OPNsense gets much of its code from the work done by pfSense, and often has to wait on them to push the code. Just look at what happened with TLS 1.3.

[–] SamSausages@alien.top 1 points 1 year ago

And a big part of the reason is taxes and regulations. People with $$ don't care, but everyone in the bottom 75% takes a big hit relative to their income.

[–] SamSausages@alien.top 1 points 1 year ago

Self-hosted git repository.

I set up Gitea on my server and use it to track version changes of all my scripts.

And I use a combination of the wiki and .md (readme) files for how-tos and any inventory I'm keeping, like IP addresses, CPU assignments, etc.

But mainly it's all in .md files formatted with Markdown.
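
The workflow itself is just plain git. A sketch with a made-up Gitea URL, user, and repo name (create the empty repo in the Gitea web UI first):

```bash
# Hypothetical Gitea URL/user/repo; create the empty repo in the web UI first.
cd ~/scripts
git init -b main
git add .
git commit -m "Initial import of server scripts"
git remote add origin https://gitea.lan/sam/scripts.git
git push -u origin main
```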

[–] SamSausages@alien.top 1 points 1 year ago

I do this at the filesystem level, not the file level, using ZFS.

Unless the container has a database, I just use ZFS snapshots. If it has a database, my script dumps the database first and then takes a ZFS snapshot. That snapshot is then sent via syncoid (sanoid's replication companion) to a ZFS disk in a separate backup pool.

This is a block-level backup, so it only backs up the data blocks that actually changed.
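
As a sketch of that pattern for a single container — the container name, dataset paths, and the Postgres dump are my assumptions for illustration, not the exact script:

```bash
#!/bin/bash
# Sketch: dump the DB first, then snapshot, then replicate the snapshot.
# Container name, dataset, and Postgres details are hypothetical.
set -euo pipefail

DATASET="tank/appdata/myapp"
SNAP="${DATASET}@backup-$(date +%Y%m%d-%H%M%S)"

# 1. Dump the database into the dataset so the dump is inside the snapshot.
docker exec myapp-db pg_dump -U myapp myapp > /tank/appdata/myapp/db-dump.sql

# 2. Take the snapshot (cheap; blocks are shared until they change).
zfs snapshot "$SNAP"

# 3. Incrementally replicate the dataset to the backup pool with syncoid.
syncoid --no-sync-snap "$DATASET" backup/appdata/myapp
```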

[–] SamSausages@alien.top 1 points 1 year ago (1 children)

I don't use PhotoPrism, but I have experienced something similar with other docker containers. What is most likely happening is that something, like headers/ports, needs to be forwarded by NPM, usually by adding additional config in the "Advanced" tab in NPM.
Sorry, I'm not familiar enough with PhotoPrism to know exactly what needs to be added to the config, but since nobody has replied, I thought this might at least give you a direction to search in.
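
One common culprit with containers like this is websocket support: NPM has a "Websockets Support" toggle on the proxy host, and if the Upgrade/Connection headers aren't forwarded the UI only half-works. A quick way to probe that from the shell — the hostname and endpoint path below are guesses, adjust for your setup:

```bash
# Probe whether the proxy forwards websocket upgrades (hostname and
# endpoint path are hypothetical; adjust for your setup).
# "101 Switching Protocols" = good; 400/502 = NPM isn't passing the headers.
curl -i -N \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: $(openssl rand -base64 16)" \
  https://photos.example.com/api/v1/ws
```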
