stardustsystem

joined 1 year ago
[–] stardustsystem@lemmy.world 1 points 1 week ago (1 children)

So CosmOS does run through Compose files, but it generates them on the fly and gives you a moment before runtime to review them and make any changes.

Am I understanding right that your idea here is to put the Volumes on the NFS share and run them from there, as opposed to having the data outside of a Volume, just sitting on an NFS mount?
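
If that's the idea, I'd guess the generated Compose file would need a named volume backed by NFS, something like this sketch (the server address and export path are made up for the example):

    volumes:
      nextcloud_data:
        driver: local
        driver_opts:
          type: nfs
          o: addr=192.168.1.10,rw,nfsvers=4
          device: ":/export/appdata/nextcloud"

    services:
      nextcloud:
        image: nextcloud
        volumes:
          - nextcloud_data:/var/www/html

That way Docker mounts the NFS export itself when the container starts, instead of the data just living under a folder that happens to be an NFS mount.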

[–] stardustsystem@lemmy.world 1 points 1 week ago (1 children)

I'm still early enough in that if something's wrong or not ideal about the config, I can go scorched earth and have the whole thing back up and running in an hour or two.

Is there a better filesystem that I could share out for this kind of thing? My RAID Array is run through OpenMediaVault if that helps.

[–] stardustsystem@lemmy.world 1 points 1 week ago

That tracks with my experience as well. I've been trying to get a system set up where the OS and Docker live on a small disk by themselves and then go out to the larger RAID Array to load their data. But it's sounding like that's not really going to work the way I want it to (probably why it's crashed on me so many times, too).

[–] stardustsystem@lemmy.world 1 points 1 week ago (1 children)

So I have a 2TB NVMe for VM Host Disks, and a 72TB RAID Array on my server. My hope is to have the OS and Docker on the 32GB drive I set up for the VM (which lives on the NVMe), and then have all the files related to the webapps live in a folder on the RAID Array, in a section meant just for that.

But the other responses in this thread make me think that's not really going to be an option. Maybe I could make a very large VM Host Disk, put it on the RAID Array, and let Docker forget about the mount points entirely...

 

Hello everybody, happy Monday.

I'm hoping to get a little help with my most recent self-hosting project. I've created a VM on my Proxmox instance with a 32GB disk and installed Ubuntu, Docker, and CosmOS on it. Currently I have Gitea, Home Assistant, NextCloud, and Jellyfin installed via CosmOS.

If I want to add more services to Cosmos, then I need to be able to move the containers from the VM's 32GB disk onto an NFS share mounted on the VM, which currently has something like 40TB of storage. My hope is that moving these containers will let them grow on their own terms while leaving the OS disk the same size.

Would some kind of link allow me to move the files to the NFS share while making them still appear in their current locations in the host OS (Ubuntu 24.04)? I'm not concerned about the NFS share being unavailable: it runs on the same server that virtualizes everything else, and it's configured to start before everything else, so the share should be up and running by the time this VM needs it in any situation. If anyone can see an obvious problem with that premise, though, I'd love to hear about it.
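
To make the question concrete, what I'm imagining is either a symlink or a bind mount, roughly like this in /etc/fstab (the address and paths are invented for the example):

    # Mount the NFS share, then bind a subfolder back over the path the containers already use
    192.168.1.10:/export/appdata  /mnt/appdata             nfs4  defaults,_netdev  0  0
    /mnt/appdata/docker-volumes   /var/lib/docker/volumes  none  bind              0  0

Part of what I'm asking is whether Docker will tolerate its volume directory living on NFS like that, or whether that's inviting the locking and permissions problems I've seen mentioned elsewhere.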

[–] stardustsystem@lemmy.world 21 points 1 month ago

The "Promised Land" should become like the Garden of Eden - Promised to God's people, but God's people fucked up and failed to follow his instructions, so they had to leave.

Return it to the animals. At least they won't send missile strikes wherever they think they should be "living".

[–] stardustsystem@lemmy.world 1 points 2 months ago

Honestly, this is the point where I'd just make a new VM and manually migrate what I need to

[–] stardustsystem@lemmy.world 3 points 2 months ago (6 children)

Hyper-V will work with a physical disk, but be warned: the wizard you run through when making a VM will make it look like your only options are giving the VM a VHD file for storage or nothing at all. Just attach no storage to the VM initially, then go into the VM settings after the wizard is complete to attach something besides a VHD.

I can't entirely remember if it handles individual partitions, but I know it can boot particular disks, and if the setting exists, that's where it would be.
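
If it helps, the PowerShell route I remember goes roughly like this (the VM name and disk number are just examples):

    # The disk has to be offline on the host before Hyper-V will pass it through
    Set-Disk -Number 2 -IsOffline $true
    # Attach the raw physical disk to the VM's SCSI controller
    Add-VMHardDiskDrive -VMName "MyVM" -ControllerType SCSI -DiskNumber 2

Worth double-checking the parameters though, I'm going from memory here.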

[–] stardustsystem@lemmy.world 14 points 2 months ago* (last edited 2 months ago) (2 children)

Windows 7 was a competent OS with low system requirements, a stable kernel, a simple feature set that was well-known and useful, an interface that was comprehensible and clearly conveyed to the user, compatibility options for the really old stuff, and no requirement for extra investment or online accounts. It remains the Best version of Windows in my eyes.

8 took away the comprehensible UI, the low-spec options, and the lack of online service requirements; then 10 further complicated the UI and filled the OS with ads; then 11 bloated the feature set, added even more ads, borked compatibility, and made the online accounts a requirement unless you pay extra and/or know what you're doing.

Textbook Enshittification

[–] stardustsystem@lemmy.world 10 points 3 months ago (2 children)

Of course now he wants to cooperate

[–] stardustsystem@lemmy.world 4 points 4 months ago* (last edited 4 months ago)

PRAISE BE TO THE LINE for some reason

[–] stardustsystem@lemmy.world 8 points 4 months ago* (last edited 4 months ago)

Forget about EA; they're a different company. Ubisoft is the one you want to worry about: they own Watch_Dogs and all related copyrights, like DedSec.

[–] stardustsystem@lemmy.world 94 points 5 months ago (9 children)

But muh platform growth!?!?! It just needs more AI, that'll get the people upgrading

 

Hey folks! Hope your day's going good.

I'm hoping someone else has had this problem or knows the application well enough to help me. I'm moving my main desktop from W10 to Linux (Q4OS, Debian-based) and it's gone well so far.

The only thing I truly need Windows for is work, so I've decided to build a Win11 VM on my Proxmox server and remote into it when I need to do work there. Install went smoothly, and my M365 user is the Admin of the W11 box. Remote Desktop is enabled, and my user is added to the Remote Desktop Users group on the local machine.

I had issues remoting in from anywhere, but after some research I was able to make a shortcut that worked on a Windows machine by adding the below options to the .rdp file. With these added, a web page opens and takes me through M365 authentication, and then I remote in.

    username:s:.\AzureAD\name@domain.tld
    enablecredsspsupport:i:0
    authentication level:i:2

(Note: email address changed for anonymity.)

I've tried and failed several different ways to remote into this machine via Remmina. It works as described from Windows machines, but Remmina doesn't seem able to open the web page that lets me sign in. Instead, I get Remmina's own login prompt, which I've so far been unable to log in through. This happens whether I create a profile from scratch or import the previously mentioned .rdp file.
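
Since Remmina drives FreeRDP under the hood, my next step is probably to test with a bare xfreerdp command to rule Remmina itself out, something like this (the host is a placeholder):

    xfreerdp /v:192.168.1.50 /u:"AzureAD\name@domain.tld" /cert:ignore

If that fails the same way, I'd assume the problem is FreeRDP not handling the web-based M365 sign-in rather than anything Remmina-specific.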

I have 2 Windows 10 VMs which are just regular standalone machines, and I have no trouble remoting into them; it's just the Azure/Entra-joined machine that causes this.

I'd like to use my Azure account on the VM so I can keep work at work, so to speak, and so I don't have to activate Windows (a license is included in my business account). If anyone's got some kind of solution, or can tell me how to apply the options above in Remmina, I'd love to hear it.

 

Hi selfhosted! Hope you're having a good day :)

I'm pretty new to self-hosting and have been traipsing through a minefield attempting to make NextCloud AIO work inside Docker. The instance runs for a few days or weeks, then the website starts getting extremely slow, then it dies entirely. Usually, either the ClamAV or the Apache container gets stuck in an unhealthy state that no number of reboots or reinstalls can fix.

Quick context for how this all works. I have one machine that runs Proxmox and a group of VMs for various purposes. One such VM runs my Nextcloud. This VM is running Ubuntu 23.10, Docker, and the NextCloud AIO package.

Another VM hosts OpenMediaVault, which contains a set of SMB Shares mounted to the host VM that act as storage for NextCloud. The symlinks (I think I'm using that word right) on the host VM have user and group permissions updated according to AIO's documentation. Proxmox is configured to boot this VM first, then boot the rest in sequence once the files are available.
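
For reference, the mounts in the Nextcloud VM's fstab look something like this (the address, share name, and credentials file are anonymized; uid/gid 33 is www-data, which is my reading of the AIO docs):

    # SMB share from the OMV VM, owned by www-data (33:33) so the AIO containers can write to it
    //192.168.1.20/ncdata  /mnt/ncdata  cifs  credentials=/root/.smbcredentials,uid=33,gid=33,_netdev  0  0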

Right now I've got Nextcloud handling Synchronization of Files, Calendars, Contacts, and Kanban boards via the Deck extension. Everything else can be abandoned at this point; these are the only functions I'm truly using. If this gives you an idea for an alternative app, I'd love to hear it.

So after AIO broke for about the 5th time in the 8 months since I started trying to self-host it, I've been looking at alternatives. Before I go that route, I want to try installing Nextcloud without Docker. Some of the posts I've read here suggest that the Docker distribution of NextCloud has serious issues with stability and with safely installing updates.

I plan to make a new VM entirely for this, Distro undecided. I still want to run it as a VM and still use my SMB shares for bulk storage.

So where would I begin if I planned to install NextCloud directly to the VM rather than through Docker?

 

Hey /c/selfhosted! Reddit refugee here with a couple questions on things I'm a bit uncertain about. I'll try to keep it brief, but I can clarify anything that needs clarifying.

I came into a little money recently and I'm coming into some more in the nearish future. My plan is to put some of that into a new server build that I'll use to host VMs running Docker, Portainer, and Nextcloud for starters. Vaultwarden, Jellyfin, Gitea, and some kind of dashboard site will come once I get NextCloud in a good place (I'm torn between Dashy and Heimdall, so if anyone's got opinions I'd love to hear them.) I plan to add more once I'm more comfortable with Docker, and once I have a better idea of how to keep all these things organized and backed up.

I have two domains I'm going to use for these, one for test and one for "prod". I use quotes because all of these things are for me only until I'm confident enough to invite my family. I don't plan to make anything that's going to be used by more than a handful of people overall.

I've been trying all this with an old server I got off Craigslist, onto which I installed Server 2019. I know IIS is a thing, but I'm not certain how, or even if, IIS plays with Docker, which has me questioning whether Windows Server is even worth messing with on the new hardware. Right now, I have a VM set up in Hyper-V which is hosting Docker/Nextcloud in what I'm considering a test environment, but it's not accessible outside the home. Mostly I did this to learn Hyper-V for work, so I'm not married to Windows Server, or even Windows, for all this.

The other problem, of course, is DNS. It does appear that my ISP has given me a static address (or at least they haven't changed it since I moved in 6 months ago). Assuming that's true, I'm not certain how I'd go about configuring a DNS server at home and making it accessible outside my home. If anyone's got any resources they want to recommend for setting up a DNS server in-home for this kind of thing, I would love to see them.
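
For what it's worth, my rough mental model is a public A record per domain pointing at my home IP, plus a dynamic DNS client like ddclient keeping it current in case the ISP ever does change it, something like this (the provider and credentials are invented):

    # /etc/ddclient.conf - hypothetical dyndns2-compatible provider
    protocol=dyndns2
    use=web, web=checkip.dyndns.org
    server=members.dyndns.org
    login=myuser
    password='mypassword'
    home.example.com

Please correct me if that model is wrong, since I've never run this outside a LAN.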

tl;dr

  1. Is there any advantage to using Windows Server as the host vs. some flavor of Linux or even Windows Pro, or am I just wasting my time (assume cost is not a factor)?
  2. Am I making my life harder by trying to manage DNS through Windows Server, and if so, is there an alternative? Linux alternatives also accepted.