Coki91

Hello, this is a continuation of my previous post, where I explained how to set up a fully encrypted Remote Desktop experience for you or your friends to use on your Machine

In that post I received a comment asking for basically what we're about to do here, and I thought it was a cool idea: integrating Containers so we can spin up Full Virtual Desktops on demand, with the Pros and Cons that entails (more on those at the end); so

Let's establish things

  • We're using the previous post as a baseline; I'll assume everything there has been set up as explained, and here we'll focus on understanding and building the Container and the SSH environment. If you already read and followed that post, please take another look, as I have updated it to a better version that's basically required for things to work here, particularly the pipewire part
  • Everything established in the previous post holds true: we will be using a wlroots Wayland Compositor, Pipewire, SSHFS and a Terminal Emulator for all our interactive needs
  • I'll be using Podman on Arch Linux with an Arch Linux container image; it's best you stick to an image of your own Distribution

Where do we start?

First, we'd better install Podman, as things are straight up not going to work without it; refer to your Distribution's instructions on how to install it and get rootless support going, it's really easy

Now that the hard part is done, let's get to building a Container Image. We will build an image for each user we want to host, to keep things easy and to have a reproducible environment should a user want to restart their Space

We have to write a Containerfile like this:

# Base distribution image to use
FROM archlinux 
# User name, passed in at build time with --build-arg
ARG USER
# Create the Home Folder in the image
WORKDIR /home   
# Create the user Home folder in the image
WORKDIR ${USER} 
# Distro-Specific set-up for the package manager inside the image
RUN pacman-key --init  
# Distro-Specific set-up for the package manager inside the image
RUN pacman-key --populate archlinux  
# Installing base user packages inside the image
RUN pacman -Syu base-devel sudo bash htop git nano fastfetch curl wget --noconfirm --needed 
# Making ALL users able to use SUDO
RUN echo "ALL ALL=(ALL:ALL) NOPASSWD:ALL" >> /etc/sudoers  
# Copy userspace initialization script into the image
COPY ./initialization.sh /initialization.sh
# Make initialization script executable inside the image
RUN chmod +x /initialization.sh 
# Set the user's default entrypoint to the BASH shell
CMD /bin/bash 

Take a moment to read the comments on each of the Lines and adjust things to match your Distribution of preference; all Distro-Specific steps should be replaced with what your Distribution needs, along with whatever you want to be available in the container for your user from the get-go

There are 3 elements to take note of:

  • ARG USER; this declares a variable passed in during build-time of the image, and its only use is to create the user's Home, as this is the best moment to do so
  • RUN echo "ALL ALL=(ALL:ALL) NOPASSWD:ALL" >> /etc/sudoers; this makes all Users able to use SUDO without a password, which might seem risky at first, but don't worry: it will be undone as soon as the User gets control of the container; it is only there for the Set-up Process to function
  • COPY ./initialization.sh /initialization.sh and RUN chmod +x /initialization.sh; this is a script we're going to write next that will set up the Container to be usable much like a Bare-Metal installation

Finished customizing your Containerfile to your needs? Next up, we'll write that script I mentioned, like this:

#!/bin/bash
echo "Welcome to your Personal Container. This script will ensure Userspace is properly initialized"
if [ "x${SUDO_USER}" = "x" ]; then
  echo "This script is not being run with SUDO; run it with SUDO or edit it to work with whatever privilege-elevation system you're using"
  exit 1
fi

rm /home/"$SUDO_USER"/.bashrc
cp -r /etc/skel/. /home/"$SUDO_USER"
#echo "Please input your password for SUDO within the container"
#passwd $SUDO_USER
export PASSWORD="changeme"
echo "$SUDO_USER:$PASSWORD" | chpasswd
echo "$USER:$PASSWORD" | chpasswd
echo "Your (and root's) password within the container is: $PASSWORD"
echo "You can (and should) change it by invoking 'passwd' on both accounts!"

chown -R "$SUDO_USER" /home/"$SUDO_USER"
sed -i '$d' /etc/sudoers
echo "ALL ALL=(ALL:ALL) ALL" >> /etc/sudoers
echo "All done, this should be usable now!"
rm "$0"

This script is rather simple, but important. You see, Containers are not usually built to be comfortable User Spaces, but to be fast and discardable; that's cool, but with a bit of tweaking we can make them suit our needs. This is all the tweaking required, all within the Container:

  • We initialize the User's Home
  • We give a default Password to the user and root to be able to restrict SUDO to only users who know their Passwords
  • We restrict SUDO as mentioned and prompt the user to change their password
  • We delete the Script
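The sudoers juggling can be sanity-checked outside the container on a throwaway file; this is just a demonstration of the echo/sed pair from the build and the script, not something you need to run (in the real sudoers file there are of course other lines above):

```shell
# Reproduce the script's sudoers swap on a scratch file:
# the build-time NOPASSWD rule is the LAST line of the file,
# so the script deletes it and appends the password-required rule
tmp=$(mktemp)
echo "ALL ALL=(ALL:ALL) NOPASSWD:ALL" >> "$tmp"  # added by the Containerfile
sed -i '$d' "$tmp"                               # drop that last line
echo "ALL ALL=(ALL:ALL) ALL" >> "$tmp"           # passwords required again
cat "$tmp"
```

This relies on the NOPASSWD rule being the last line appended, which is why the Containerfile adds it at the very end.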

With the Initialization Script and the Containerfile written up and saved in the same directory, we now have to give the User's account read access to both of them; I personally just copy them to their Home Folder and make them read-only, but you can get them over to them however you'd like

Since Podman is being used rootless, we naturally want to switch to that User's account (unless we already ARE the user) and get things set up

First we build the Container image with a recognizable name, to make things easier for us, AND pass the user's name as a Build Flag, as our Containerfile expects; such a command could go like:

podman build --tag "$USER"_container --build-arg USER="$USER" </directory/to/containerfilefolder>

If that succeeds with no errors, we're basically done. Next, we should make the user's home inside the container a folder that lives inside their real Home, so that all their files are accessible through Remote File Sharing; for example:

sshfs_directory="$HOME"/."$USER"_container
mkdir -p "$sshfs_directory"

And then the indispensable part: we make the user's container run the initialization script with sudo on first start, by adding it to their .bashrc

echo "sudo /initialization.sh" >> "$sshfs_directory"/.bashrc

As the script removes both .bashrc and itself, this won't happen every time, only on the first run of the Container
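To see why this is a one-shot, trace what happens on first start: .bashrc runs the script, and the script deletes both .bashrc and itself. A scratch-file sketch of that sequence (real paths would be inside the container):

```shell
# Simulate the first-run hook lifecycle with scratch files
home=$(mktemp -d)                                  # stands in for /home/$USER
echo "sudo /initialization.sh" >> "$home"/.bashrc  # the hook we seeded
rm "$home"/.bashrc                                 # what initialization.sh does
test -e "$home"/.bashrc && echo "hook still present" || echo "hook gone"
```

In the real container the script then restores a clean .bashrc from /etc/skel, so the user gets a normal shell from the second start onwards.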

Finally we're ready to create the Container; I have compiled this podman create command with everything that's necessary, like so:

podman create -ti \ # Make the Container interactable
--mount type=bind,src="$XDG_RUNTIME_DIR",dst="$XDG_RUNTIME_DIR" \ # Mount the Real user's runtime into the Container user's runtime, allowing Remote Clients to connect into the Container
--mount type=bind,src="$sshfs_directory",dst=/home/"$USER" \ # Mount the Real user's Home Container Directory onto the Container user's Home directory
--env=XDG_RUNTIME_DIR="$XDG_RUNTIME_DIR" \ # Pass the Real user's runtime directory variable to the Container, needed for all Runtime applications to work
--env=USER="$USER" \ # Set the USER environment variable inside the Container like all SHELL logins do, which is expected by a lot of things
--env=SHELL="$SHELL" \ # Set the Container's SHELL environment variable, which is really useful
--device /dev/dri:/dev/dri \ # Give the container access to the Machine's Graphics Hardware
--device /dev/snd:/dev/snd \ # Give the container access to the Machine's Audio Hardware
--userns keep-id:uid=$(id -u),gid=$(id -g) \ # Run the Container as a full copy of the user, which is necessary for things like running Wayland and accessing container Files Locally or Remotely
--name "$USER"_container \ # Name the container appropriately to easily identify, start or remove it
--hostname "$USER"_container \ # Name the container's runtime to be easily identified
localhost/"$USER"_container # Use the container image we just built

Read all the comments to understand what's going on, and after you're done make sure to delete all the comments (the # and everything after it on each line), otherwise the command will fail
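If deleting the comments by hand is tedious, a sed one-liner can strip them; a small sketch on the first line of the command above (the pattern assumes every comment starts with " # "):

```shell
# Strip a trailing " # comment" from an annotated command line,
# leaving the line-continuation backslash intact
line='podman create -ti \ # Make the Container interactable'
printf '%s\n' "$line" | sed 's/ # .*//'
```

Running the whole annotated command through that sed expression gives you the clean, runnable version.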

This step can take some time, as Podman has to do quite a bit of processing to map the User's namespace into the given image, so give it a moment

After that succeeds, our user could already log in through SSH and start using the Container IF they're not going to use Wayland or Audio; we have to set up both of these for the Container after each log-in. This is easy: we just add some lines to the real user's .bash_profile (outside the container!) to do everything on each SSH log-in, for example:

if ! [ "x${SSH_TTY}" = "x" ]; then
  if (find $XDG_RUNTIME_DIR/pulse-server-*) >/dev/null 2>&1; then 
    export PULSE_SERVER=unix:$(find $XDG_RUNTIME_DIR/pulse-server-* | tail -n 1)
    podman update --env PULSE_SERVER=$PULSE_SERVER "$USER"_container
  else
    podman update --unsetenv PULSE_SERVER "$USER"_container
  fi
  if ! [ "x${WAYLAND_DISPLAY}" = "x" ]; then
    podman update --env WAYLAND_DISPLAY=$WAYLAND_DISPLAY "$USER"_container
  else
    podman update --unsetenv WAYLAND_DISPLAY "$USER"_container
  fi
#  podman start -a "$USER"_container
#  exit
else
  podman update --unsetenv PULSE_SERVER "$USER"_container
  podman update --unsetenv WAYLAND_DISPLAY "$USER"_container
fi

This is a modified version of the .bash_profile explained in the first post; here we just add commands to update the container's environment variables as needed, since these are volatile on each SSH log-in and are required for Wayland and Audio to work properly. Both versions are compatible, so the user can use the Remote Machine either way

However, if we added "podman start -a "$USER"_container" and "exit", one after the other, to .bash_profile, it would basically jail the user into the Container on SSH log-in: their session will only start once they are inside it, and be immediately terminated once they exit it. This way they will be unable to do ANYTHING outside the Container, and to me this is desirable behavior; if you're not into that, you can leave those lines out and the container will just be an option to them
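Concretely, the jail variant just means the SSH branch of that .bash_profile ends with those two lines uncommented; as a fragment:

```shell
# At the end of the SSH branch in the real user's .bash_profile:
# drop the user straight into the container, then end the host
# session the moment the container exits
podman start -a "$USER"_container
exit
```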

Should you wish to Jail them, also make sure to keep .bash_profile read-only to the user, as they can otherwise modify it through Remote File Sharing

So how does connecting look for the user? Mostly the same as in the first post, but just as a refresher, here's an example script for connecting to our Remote Host:

mkdir -p "$HOME"/.remote_files
sshfs -p 7777 user@192.168.100.2:"$HOME"/."$USER"_container "$HOME"/.remote_files
waypipe --video h264 ssh -p 7777 -l user -R $XDG_RUNTIME_DIR/pulse-server-"$RANDOM":$XDG_RUNTIME_DIR/pulse/native 192.168.100.2
umount "$HOME"/.remote_files

And that's it, we're done. On their next log-in through SSH your user will be dropped inside a Container which, as far as I've tested, works pretty well as a Virtual Desktop

The obvious advantages of doing all of this are

  • The User can install and run any software without requiring root to intervene as they can do anything they want on the Container
  • The Administrator can easily limit the Resources a guest user can access (CPU, RAM, GPU) with podman commands should they wish to offer an affordable workspace for them (check the documentation for this)
  • If the User's space is compromised, the problem is only as big as the user's access to the real user's permissions
  • As opposed to Virtual Machines, this set-up only occupies resources on demand and has minimal overhead

While the obvious disadvantages basically summarize as: it's not as flexible as a Virtual Machine

But I still find it novel and useful, for example for testing software before committing to it; also, you could donate me a container like this on your machine, since I could use the processing power ;)

That's all from me today. Have suggestions? Let me know what can be done better and I'll update this post. Thanks for reading and have a good one

[–] Coki91@lemmy.world 1 points 4 months ago* (last edited 4 months ago)

Not in Docker, but in Podman; it uses all the same syntax, so it should be completely possible, with the same caveats as listed in the post using this method (X11 running on a Wayland Compositor through Xwayland)

I believe it warrants another post, as it requires quite a bit of set-up, although it should be completely scriptable; still rough around the edges, but the fact that podman is daemon-less makes it possible to use both on demand and without root as well

[–] Coki91@lemmy.world 1 points 4 months ago

I'm having a hard time understanding your comment; do you mean your goal is to have a light client on the Local Machine and a dependable (sturdy/not crashing) Server on the Remote Machine? If so, this isn't quite it

Besides the SSH server, all components used initialize on demand, and nothing other than the SSH Daemon is responsible for stability, so that's for you to judge; other components may error or crash at their own leisure, just like the applications you would execute within them may

I have only heard of Moonlight and Sunshine; you should totally check them out. Using them was also a consideration before I managed to do what I posted here with this "much" configuration, and for my case it is completely dependable with no admin intervention

[–] Coki91@lemmy.world 4 points 4 months ago

I think in the context of the post, you're missing everything else that's not remote file sharing

[–] Coki91@lemmy.world 1 points 4 months ago

I have written up another comment with more detail on this, but to briefly summarize: there isn't any outrageous latency that isn't present in other remote control applications (Xrdp, for example)

I'm not well versed in Gstreamer; in my mind it is a new ffmpeg, and in fact ffmpeg is what waypipe uses for its video backend, so perhaps it's also possible to extend the waypipe code to add a bitrate switch on the ffmpeg calls. There's nothing like that right now, though; it would be a nice suggestion to the project

[–] Coki91@lemmy.world 2 points 4 months ago* (last edited 4 months ago)

There are actually some issues with games that make the Mouse Cursor stick to the center and turn the camera with movement (like most FPS do), which also happened to me on Windows: the Mouse has like 10x the sensitivity and is hardly controllable. Other games that don't do that (like point-and-click games where the Cursor is free) don't present this issue

Besides that, I have played mostly keyboard-centric games, a fighting-game style MMO where I have engaged in both PvE and PvP Gameplay, and besides the bullshit moments in PvP where the Wi-fi turned on me, I really can't complain about performance. I haven't played Souls games OR games using a joystick controller; it's probably something that needs testing

If you mean the compression that waypipe offers with -c: I used "-c none" for the longest time, thinking it would free my CPU of all possible overhead. That's probably the right play on a local network, but some time ago I got rid of it to see, and again I don't notice a difference; so if you plan to go out and connect back through a private LAN like Tailscale, you may want to keep the default compression algorithm

[–] Coki91@lemmy.world 2 points 4 months ago* (last edited 4 months ago) (2 children)

Right, so as I mentioned in the post, hardware-accelerated encoding for waypipe greatly increases performance, and that's the one thing I can tell you makes the night-and-day difference in this setup

In my experience, with my two low-end devices on my local network, one of them 10 years old (the Local PC) on Ethernet and the other (the "Remote" PC) on 5GHz Wi-fi: audio is 1:1, I can barely tell if there's delay, except when it stutters due to the Wi-fi being Wi-fi. The video is clearly compressed with h264, so colors are a bit off, but it doesn't struggle either; latency and responsiveness feel right. Like I mentioned in the post, WINE Games are my use case for this, and playing Games sets a high bar for that. Maybe at some point my processor gets jacked to 100% due to the games and it struggles, but it has never crashed, and it recovers once intensity tones down

Compared to other systems, I have only used Xrdp and Microsoft's RDP on Windows, and I don't feel any difference besides the lack of additional features like drag and drop between devices, which isn't a dealbreaker for me

[–] Coki91@lemmy.world 12 points 4 months ago

The title kinda reads like a question, so you're not wrong thinking I'm asking one; however, that's oriented more towards search engines picking up this post, as users often type straight up looking for help on something. That's the idea of it, but in the first paragraph I explain that this is not a question post

 

Hello, I am writing this because this topic was at first a question I had and I couldn't find an answer to; information about it online is scarce and outdated, so here I am to share what I have figured out; so

Let's establish things

  • Remote Machine = The device processing the program/audio and holding the files, streaming them over to the Local Machine
  • Local Machine = The device which initiates the connection to the Remote Machine, hears the audio, interacts with the programs and receives the files requested

What we'll be using (L/R means Local or Remote respectively)

  • SSH (openSSH) L&R
  • waypipe L&R
  • pipewire, pipewire-pulse and wireplumber L
  • sshfs L
  • Any wlroots based Compositor L
  • Any Terminal Emulator L
  • FUSE L

In my case my compositor of choice will be Labwc. Keep in mind all of the components used have a lot of options, and you could benefit from checking out what's hot in each of them; I will only cover settings up to the point of things WORKING

First things first: install the packages, and on the Local Machine make sure you have your sound system running for your User. If you hear audio already you should be okay; otherwise review your specific distro's documentation on how to start the services

For example on Arch: systemctl --user start pipewire pipewire-pulse wireplumber

Next, start your Compositor of choice and open up a terminal emulator; you should first make a connection from the Local Machine to the Remote Machine with SSH and your credentials, so

For example: ssh -p 7777 -l user 192.168.100.2

Managed to connect to your Remote Machine? Great, now we'll need to do the set-up: we're going to need an environment variable set automatically on each SSH login

This variable has to be set on SSH LOGIN ONLY, so as to not disturb the Remote Machine's local audio playback when it is used locally. There may be many ways to set this up; in my case I'm gonna add this line to ~/.bash_profile

For example: if ! [ "x${SSH_TTY}" = "x" ]; then export PULSE_SERVER=unix:$(find $XDG_RUNTIME_DIR/pulse-server-* | tail -n 1); fi

This will automatically execute on login, evaluating whether it's an SSH login, and setting the environment variable PULSE_SERVER, which tells applications running locally (on the Remote Machine) that the audio socket is the SSH socket we will forward, sending audio back to your Local Machine's Audio Service (Encrypted)

Next, we would like to remove the audio socket when we're done, so it doesn't bloat our filesystem, right? We can add a line to ~/.bash_logout for this purpose, just like we just did

For example: if ! [ "x${SSH_TTY}" = "x" ]; then rm "${PULSE_SERVER#*:}"; fi
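The only tricky bit in that cleanup line is the parameter expansion: ${PULSE_SERVER#*:} strips everything up to and including the first ":", turning the "unix:/path" value back into a plain path that rm can delete. A quick illustration (the socket path is made up):

```shell
# "${VAR#*:}" removes the shortest prefix matching "*:" from $VAR,
# so the "unix:" transport prefix disappears and the path remains
PULSE_SERVER="unix:/run/user/1000/pulse-server-12345"  # example value
echo "${PULSE_SERVER#*:}"
```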

When this is done, we can exit the Remote Machine's shell and everything is basically ready on the Remote Machine

Now on our Local Machine we have to modify our SSH command to forward the audio socket we have mentioned prior

For example: ssh -p 7777 -l user -R $XDG_RUNTIME_DIR/pulse-server-"$RANDOM":$XDG_RUNTIME_DIR/pulse/native 192.168.100.2

The important part is -R $XDG_RUNTIME_DIR/pulse-server-"$RANDOM":$XDG_RUNTIME_DIR/pulse/native, which creates the UNIX Stream socket on the remote machine, naming it "pulse-server-" plus a random number to prevent multi-session conflicts, and links it to your Local Machine's audio socket
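To make the naming concrete, this is the kind of remote socket path the -R flag ends up creating (just computing the name here, not opening anything; the fallback values are only for the demo):

```shell
# Compose the remote socket path the way the ssh -R flag does;
# $RANDOM gives each SSH session its own socket name
XDG_RUNTIME_DIR="${XDG_RUNTIME_DIR:-/run/user/$(id -u)}"  # demo fallback
sock="$XDG_RUNTIME_DIR/pulse-server-${RANDOM:-$$}"
echo "$sock"
```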

This makes applications on the Remote Machine talk directly to your Local Machine's audio server, playing audio there; everything basically functions as if it were running Locally. Due to this, the audio stream is uncompressed; you might want to add -C to the SSH command for all the data to be streamed with compression if you have a limited data plan

Next up, we should set up waypipe. This application allows forwarding both Wayland-native applications AND Wayland compositors themselves, so if there is an X11-only application you can't forward through Waypipe, you can start a compositor instead and use it from there (WINE games, to name my use case), just like a Remote Desktop

For example: waypipe --video h264 ssh -p 7777 -l user -R $XDG_RUNTIME_DIR/pulse-server-"$RANDOM":$XDG_RUNTIME_DIR/pulse/native 192.168.100.2

In my example command I use hardware-accelerated video encoding, which greatly increases performance; you may just want to use waypipe alone, however, which uses default settings. I highly recommend reading the waypipe documentation to achieve the best performance for your setup, and testing it with your application of choice

For example: WLR_RENDERER=gles2 labwc (executed in the Remote Machine's Shell; will open it in your Local Machine's Compositor)

Finally, for setting up Remote File Access we use sshfs prior to connecting to the Remote Machine. This utility mounts a Remote Filesystem on a local directory through SSH and FUSE, using the sftp protocol, which is all encrypted

For example: sshfs -p 7777 user@192.168.100.2:/home/user/RemoteDirectory/ /home/user/LocalDirectory/

Nice, now we have it all set up and ready to work, so we can finally make it convenient to use. In my case, I prefer to run all of this as a script easily accessible from my terminal: a single command that executes the script located in my scripting environment, plus a final command that just unmounts the Remote Filesystem when we're done

For an example script:

sshfs -p 7777 user@192.168.100.2:/home/user/RemoteDirectory/ /home/user/LocalDirectory/
waypipe --video h264 ssh -p 7777 -l user -R $XDG_RUNTIME_DIR/pulse-server-"$RANDOM":$XDG_RUNTIME_DIR/pulse/native 192.168.100.2
umount /home/user/LocalDirectory/

Let's say we create this script and save it in our home folder; we just have to make it executable (chmod +x <path-to-script>) and run it from our Terminal Emulator

For example: ./Remote\ Machine1.sh

And it will automatically set everything up for us and ask for our Credentials; we have a perfect workspace that imitates a remote desktop experience, on Wayland (this may not be exclusive to Wayland, but that's what I'm doing here)
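The make-executable-and-run step can be sketched end-to-end with a scratch file standing in for the real connection script:

```shell
# Make a saved script executable and run it, as described above;
# mktemp stands in for the real path of the connection script
script=$(mktemp)
printf '#!/bin/sh\necho "remote session placeholder"\n' > "$script"
chmod +x "$script"   # same as chmod +x on your real script
"$script"            # same as ./Remote\ Machine1.sh
```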

Did I miss anything? Have suggestions? Let me know what can be done better and I'll update this post, thanks for reading and have a good one