keen1320

joined 1 year ago
[–] keen1320@lemmy.world 1 points 8 months ago

Yep, I understand that. I didn’t know whether RAID 6, with its parity data, would be able to repair a file from that. I figured RAID would protect it from corruption, but apparently not. I didn’t overwrite the file with a bad one; it just stopped working.

 

I have a RAID 6 volume with data protection on my DS1621+. A video file seems to have become corrupt - it worked in Plex a few months ago, but now it won’t play at all. I’ve tried VLC and the built-in DSM video player; nothing works. No other files appear to be corrupt, and all drives show as Healthy.

Is it possible to repair the file, and if so, how would I do that? My research only turns up results where an entire volume is corrupt; in this case I’d like to recover just a single file.

[–] keen1320@lemmy.world 1 points 1 year ago (1 children)

Again, pardon my ignorance when it comes to Kubernetes. Why would I use something like k0s instead of just regular old Docker? I suspect PCIe passthrough will have similar challenges on both k0s and Docker, whereas on Proxmox it's been relatively painless.

This might be better suited for a different community, in which case I'll make a post where appropriate. I'm not familiar with some of the Kubernetes terminology - batteries, pod/manifest (is this similar to stacks/docker compose?), NodePort?

[–] keen1320@lemmy.world 1 points 1 year ago (3 children)

I apologize for my ignorance when it comes to Kubernetes - I sort of wrote it off as complete overkill for a home lab when my very basic understanding was that it was essentially a load balancer. After some light research, I'm beginning to understand that it could be a better solution than a full-blown hypervisor.

If I understand your comment correctly, you're suggesting to simply run a lightweight distro and install k0s or k3s to run containers? What would be an ideal bare metal OS for this? What would be pros/cons to k0s vs k3s in a home lab environment, or is that simply a matter of personal preference? What would be the best way to connect to my media - SMB, NFS, something else? Or are the differences here irrelevant? Any concerns (permissions, IO latency) when passing an NFS mount from host into a container, or is there an even better way to do something like that entirely within the container?
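For context, the shape of what I'd do today with plain Docker is roughly this - the IP, paths, and image name below are just stand-ins for my actual setup:

```
# Mount the NFS share on the host, then bind-mount it into the container.
# IP, paths, and image name are placeholders.
sudo mount -t nfs 192.168.1.10:/volume1/media /mnt/media
docker run -d --name plex \
  -v /mnt/media:/media:ro \
  plexinc/pms-docker
```

I'm wondering whether the equivalent in k0s/k3s is just as straightforward, or whether there's a more idiomatic way to do it.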

 

I’m looking for some feedback on my Plex system architecture.

All my media is stored on a Synology DS1621+, six 4 TB drives in RAID 6 with one acting as a hot spare. All four network ports are bonded into a 4 Gbps link to an Ubiquiti USW-48-POE.

Previously, I ran Plex in a Docker container on the NAS. This setup was stable; however, the NAS only has 4 GB of memory shared between Plex, several other Docker services, and regular DSM overhead. Plus, the processor is not very powerful (AMD Ryzen V1500B, ~5400 PassMark).

A few months ago I repurposed some old desktop PC parts to build a home lab Proxmox server (Core i7-6700K [~8900 PassMark], 32 GB memory, GTX 970, an old 2.5” SATA SSD for guest OS disks, 1G networking on the motherboard). I’m running Plex on an Ubuntu VM with the GPU passed through directly to the guest OS; Plex is not containerized in Ubuntu. The VM has 8 CPU cores and 8 GiB of memory (Proxmox allocates memory in GiB). My Plex media is accessed via a persistent NFS mount in Ubuntu (it had been SMB until a DSM update broke something and the VM could no longer read the directory contents).
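For reference, the persistent NFS mount is just an fstab entry along these lines - the NAS IP, volume, and share names here are placeholders:

```
# /etc/fstab on the Ubuntu VM - NAS IP and share path are placeholders
192.168.1.10:/volume1/media  /mnt/media  nfs  defaults,_netdev  0  0
```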

The main purpose of the change from NAS to VM was to make use of the extra CPU/GPU horsepower and memory I had lying around, but I worry that the added layers of complexity (hypervisor/VM, PCIe passthrough, NFS mounts) introduce more opportunities for performance issues. I have noticed more frequent hiccups/buffering/transcoding since the change, but I’m not sure whether it’s related to my setup or whether those issues lie with the client devices and/or the files themselves (e.g. a weird container format that the client can’t play natively).

Any critique or recommendations on the system architecture? Should I get a dedicated NIC to pass through to my VM? A dedicated NVMe drive passed through as a guest OS disk? Ditch Proxmox altogether and go back to a Docker container on the Synology?

[–] keen1320@lemmy.world 3 points 1 year ago

The Fn and Ctrl keys can be switched in software. I have a work-issued Lenovo with a similar layout, and they can be soft-swapped in the BIOS. There’s also a desktop utility that does the same, but I don’t know whether there’s a Linux version of it. I totally agree the physical layout is annoying, but it has a simple fix.

 

This morning I updated my DS1621+ from DSM 7.1.1-42962 U6 to DSM 7.2-64570 U1. After the update I am no longer able to access shared folders from my Ubuntu machine. All errors in the terminal and in Portainer indicate "access denied", even though no passwords have changed, and I can still access the shared folders from Windows. I mount the shared folders with SMB, but I also tried NFS when SMB stopped working, since it appears to be easier to manage (no usernames/passwords?).
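For reference, the SMB mount that now fails looks roughly like this - the IP, share name, and credentials are placeholders:

```
# CIFS/SMB mount that started returning "access denied" after the update.
# Server IP, share name, mount point, and credentials are placeholders.
sudo mount -t cifs //192.168.1.10/media /mnt/media \
  -o username=myuser,password=mypass,vers=3.0
```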

Any help or direction to a fix is greatly appreciated.

EDIT: Realized I had a typo in my mount command for the NFS share - I was using ip.address:/shared_folder instead of ip.address:/volume_name/shared_folder. Fixed that and now have no problems using NFS to mount the shared folders at the same mount point as before. For me that's a suitable workaround, and presumably a better solution than SMB anyway, since both client and server are Linux.
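Concretely, the only difference was the missing volume prefix in the export path - the IP, volume number, and share name below are placeholders:

```
# What I had (fails - Synology NFS exports live under /volumeN):
sudo mount -t nfs 192.168.1.10:/media /mnt/media

# What works (IP, volume, and share name are placeholders):
sudo mount -t nfs 192.168.1.10:/volume1/media /mnt/media
```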