IIRC Docker on Windows lives inside WSL, so everything is done on Linux anyway. What are the issues you're getting?
NRoach44
The typical way involves something outside your network acting as a proxy. Your home network VPNs to this proxy, then the proxy sends requests down to your homelab.
I used a VPS and a VPN, I would connect to the VPN endpoint on the VPS, and then route all traffic back down to home.
You can also run a reverse proxy on the VPS, so it does TLS for clients, and speaks to the servers direct over the VPN.
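As a sketch of that reverse-proxy setup, a minimal nginx server block on the VPS could terminate TLS for clients and forward to the home server over the tunnel (the hostname, certificate paths, port and the 10.0.0.2 tunnel address are all placeholders):

```nginx
server {
    listen 443 ssl;
    server_name app.example.com;   # placeholder domain

    ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    location / {
        # 10.0.0.2 is the homelab's address on the VPN tunnel
        proxy_pass http://10.0.0.2:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The nice property is that the plaintext hop only ever travels inside the VPN, and the VPS holds the certificates, so nothing at home needs a public IP.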
Another option is something like Cloudflare Tunnels, where Cloudflare does the "VPS and VPN" part of the above; the tradeoff is that you have to trust Cloudflare rather than yourself (which may be a positive or not, depending on your perspective).
Lastly you could use something like tinc (which needs something on the outside to act as a negotiator) to form a mesh between NAT'd devices.
It means that if someone breaks out of your container, they can only do things that user can do.
Can that user access your private documents (are these documents in a container that also runs under that user)?
Can that user sudo?
Can that user access SSH keys and jump to other computers?
Generally speaking, the answer to all of these should be "no", meaning that each group of containers (grouped by risk level etc.) gets its own account.
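A minimal sketch of that separation with Docker Compose, assuming you've already created a dedicated unprivileged account per risk group (the image names and UIDs 1001/1002 here are placeholders):

```yaml
services:
  wiki:                       # hypothetical low-risk, documents-adjacent service
    image: example/wiki
    user: "1001:1001"         # dedicated account, no sudo, no SSH keys

  game-server:                # hypothetical internet-facing, higher-risk service
    image: example/game
    user: "1002:1002"         # different account: a breakout here
                              # can't read the wiki account's files
```

The point is simply that the two accounts share nothing: neither appears in sudoers, neither owns SSH keys, and each can only touch its own volumes.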
There's mt32-pi, a bare-metal app that emulates the classic Roland MT-32 MIDI synthesizer.
For better or worse, the Pi (2 and later) seems to be the only SBC with a video output that can do 240p and other funky CRT resolutions (via the DPI interface on the GPIO header).
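For illustration, custom DPI modes are set via `dpi_timings` in the Pi's config.txt. The option names below are real, but the timing numbers are placeholders that must be tuned to your CRT, so treat this as a sketch only:

```ini
# /boot/config.txt sketch for ~320x240 progressive over DPI
# (illustrative values, not tested timings)
enable_dpi_lcd=1
display_default_lcd=1
dpi_group=2
dpi_mode=87                 # "custom mode": use dpi_timings below
dpi_output_format=0x6f006   # depends on your RGB wiring
dpi_timings=320 1 20 30 38 240 1 4 3 15 0 0 0 60 0 6400000 1
```

Porch, sync and pixel-clock values vary per display; the Raspberry Pi video documentation lists the exact field order for `dpi_timings`.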
(If you buy a suitable device) You don't have to use the preloaded OS (see GrapheneOS, LineageOS etc).
No, this is
- buying a surface from Microsoft
- immediately wiping it and installing Linux
- Microsoft then forcing you to authenticate using a device that is tied to your account only via purchase records, NOT login records, AND disabling other forms of auth
Just a point of clarification: don't use RAID 5 for arrays bigger than 2-4 TB. The rebuild reads so much data that the drives' unrecoverable read error (URE) rate practically guarantees a read error during the rebuild, which may cause the controller to trash the array.
That and rebuilding that much data might push one of the drives over the edge anyway.
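A back-of-the-envelope sketch of that claim in Python, assuming the commonly quoted consumer-drive URE spec of 1 error per 10^14 bits and a hypothetical 4-drive RAID 5 of 4 TB disks (a rebuild must read all three surviving drives, ~12 TB):

```python
import math

URE_RATE = 1e-14          # unrecoverable read errors per bit (typical consumer spec)
bits_read = 3 * 4e12 * 8  # rebuild reads 3 surviving 4 TB drives, in bits

# P(at least one URE) = 1 - (1 - rate)^bits_read.
# Use log1p/exp so the tiny rate doesn't vanish in floating point.
p_failure = 1 - math.exp(bits_read * math.log1p(-URE_RATE))
print(f"Chance of hitting a URE during rebuild: {p_failure:.0%}")  # ~62%
```

Even under these rosy assumptions the rebuild fails more often than not, which is the usual argument for RAID 6 (or mirrors) at these capacities.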
The linked article — and others — explain that in Android 10+, (a) executable binaries can no longer reside in a read/write directory, and (b) access to /sdcard will go away. Simply put, these changes destroy my application's ability to function, and that of Termux as well.
That sounds like proper security to me? Inability to access the user's storage is a bit lame, but they've been moving to nicer APIs for that anyway.
Android is a mobile phone OS, not desktop / embedded Linux.
One thing that people miss - either out of ignorance, or because it goes against the narrative - is that systemd is modular.
One part handles init and services (and related things like mounts and sockets, because it makes sense to do that), one handles user sessions (logind), one handles logging (journald), one handles networking (networkd) etc etc.
You don't have to use networkd, or their EFI bootloader, or their kernel-install tool, or the other hostname/name-resolution/userdb/tmpfiles etc etc tools.
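As an illustration of just the init/service piece: a unit file is a small declarative config handed to the service manager, and it works the same whether or not networkd, resolved or any of the optional components are installed (the service name and paths here are hypothetical):

```ini
# /etc/systemd/system/example-app.service (hypothetical)
[Unit]
Description=Example app
After=network-online.target

[Service]
ExecStart=/usr/local/bin/example-app
User=exampleapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Meanwhile a distro can ship NetworkManager instead of networkd, GRUB instead of systemd-boot, and rsyslog alongside journald, and none of this file changes.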
Where are these OEMs that allow proper bootloader unlocking on most of their range?
Google, Sony...? Huawei stopped doing it, and Oppo and Samsung don't, last I checked.