moonpiedumplings

joined 2 years ago
[–] moonpiedumplings@programming.dev 2 points 3 days ago (2 children)

don’t understand why you treat it as all or nothing problem. It’s clearly not

There are clear alternatives to using developer install scripts to install software though: package managers

And they are not using package managers because clearly they don’t meet their needs.

Developers incorrectly believe that they need to vendor dependencies or control the way their software is installed, which distro package managers don't let them do. So instead of pointing to the way their software (Deno, Rust) is already packaged in nixpkgs, they point people at the install script. To be fair, Deno does mention nixpkgs, and Rust mentions apt, but only in less immediately visible docs; the first recommendation is the install script.
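
To sketch what I mean (package names and availability vary by distro, so treat these as illustrative rather than exact):

# Deno through an existing package manager instead of curl | sh
nix profile install nixpkgs#deno     # nixpkgs, with flakes enabled
sudo pacman -S deno                  # Arch
# Rust toolchain through the distro instead of rustup.rs's script
sudo apt install rustc cargo         # Debian/Ubuntu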

The core problem mentioned here is one of packager control vs developer control. With an install script that downloads a binary (usually with vendored dependencies), the developer has control over things like the version of the software, how it is installed, and what libraries it uses. They like this for a variety of reasons, but it often comes to the detriment of user security, for the reasons I have mentioned above. Please, please read the blog post about static linking, or look into my cargo audit. Developers are not security experts and should not be the ones installing software on users' machines, even though they want to and continue to do so.

On the other hand, package maintainers value the security of users more than things like getting a new version out. Package maintainers do take control over how packages are installed, often sticking with older versions to avoid newly introduced security vulnerabilities, at the cost of keeping the same set of non-security bugs, and sometimes developers whine about this, like when the Bottles devs tried to get unofficial versions of Bottles taken down. Bottles even intentionally broke non-flatpak builds.

But I don't care about developer control. I don't care about the newest version. I don't care about the latest features. I don't care about the non-security bugs not getting ironed out until the next stable release. Developers care about these things.

But I care only about the security of the users. And that means stable release. That means package managers. That means developers not installing software.

[–] moonpiedumplings@programming.dev 3 points 3 days ago (4 children)

It’s just a way to make bash installers more secure.

bash installers from the developers, and vendored/pinned dependencies in general, will never be secure, for the reasons I mentioned above. Developers are bad at security. Developers should not be installing software on people's machines.

[–] moonpiedumplings@programming.dev 4 points 3 days ago (6 children)

I said that the tool would have to be installed by default on the main distros. I would be a single binary and a man page. I don’t think it would be very difficult to get it included.

It is very difficult. The core problem here is that the philosophy of the distros will cause them to avoid this tool for various reasons. Minimalist distros, like Arch, will avoid it by default because they are minimal. On the other hand, Debian really dislikes users installing things outside of packages, for a variety of reasons that could be their own post, but the short version is that they also won't package this tool. A Gentoo developer explains some of this, but also why statically compiled (single binary) setups are disliked by distro packagers as well.

It's a very long post, but to paraphrase a common opinion from it: developers are often bad at actually installing software and cannot really be trusted to manage their own installer and the dependencies of the software they create. For example, here is a pastebin of me running cargo-audit on Deno. Just in that pastebin, there are two CVEs, one rated 5.9, plus an unmaintained package. One of the CVEs even has a patch available. But, in the Cargo.lock:

[[package]]
name = "hickory-proto"
version = "0.25.0-alpha.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d063c0692ee669aa6d261988aa19ca5510f1cc40e4f211024f50c888499a35d7"

They have "vendored" and "pinned" the package, meaning that it is essentially stuck on an insecure version. Although I'm sure that this version will be updated shortly, what sometimes happens is that a non-backwards compatible update that includes a security fix is released, and lazy developers, instead of updating their software, will pin the insecure version.

In a distro's package manager, the distro would step up to patch vulnerabilities like that one, or do security maintenance for unsupported packages. Although Debian's extremely slow movement is frustrating, it is a particularly excellent example of this: packages are kept backwards compatible for the duration of their lifecycle in a stable release, so a developer packaging for Debian would have no need to pin a version, yet would still get security updates for the libraries they use for 6 years.

Deno is an extremely popular package, and thankfully it has very few issues, but I have seen much worse than this. It's because of issues like these that I am generally opposed to developers acting as the package maintainers for their own software; I think that should be left to distro maintainers or dedicated package maintainers.

There’s 0 security. Even tarballs are usually provided with MD5 checksum that you can verify client side. With bash there’s nothing

MD5 hashes are not enough. Modern packaging systems, like Debian's or Arch's, have developers sign the packages, to ensure that it was the real developer (or at least someone on the real developer's computer...) who uploaded the package. An MD5 hash alone gives you no such verification.
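
Roughly the difference, as a sketch (the file names here are placeholders):

# a checksum only proves the file wasn't corrupted in transit
md5sum release.tar.xz                      # compare by eye against the number on the website
# a signature proves who produced the file, assuming you trust the key
gpg --import maintainer-key.asc            # the maintainer's public key
gpg --verify release.tar.xz.sig release.tar.xz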

The other step needed is reproducible builds: If multiple people build a package, they will have the same output. I can verify the XZ tarball and see that the MD5 hash matches, but it's meaningless when that tarball has a backdoor in it, because they added something when they compiled it on their own machine (real story btw, also the xz backdoor didn't make it into Debian stable because of Debian's slow release policy and the fact that they essentially maintain and build forks of their own packages).

If the rust binary is not being built reproducibly, then it's meaningless to verify the MD5 hash.
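
And the check that reproducible builds make possible is trivially simple, which is the whole point (paths are placeholders):

# build the same source yourself, then compare against the published binary
sha256sum ./my-build/deno ./downloaded/deno
# if the build is reproducible the hashes match; if they don't, the published
# binary contains something that isn't in the source you audited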

that all those CD tools were specifically tailored to run as workers in a deployment pipeline

That's CI 🙃

Confusing terms, but yeah. ArgoCD and FluxCD just read from a git repo and apply it to the cluster. In my linked git repo, Flux is used to install "helmreleases", but Argo has something similar.
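
To give an idea of what one of those looks like in the git repo Flux watches, here's a rough sketch (the chart, repo URL, and paths are just the podinfo example from Flux's docs, and the API versions are from memory):

cat > apps/podinfo/helmrelease.yaml <<'EOF'
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 10m
  url: https://stefanprodan.github.io/podinfo
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo
  namespace: default
spec:
  interval: 10m
  chart:
    spec:
      chart: podinfo
      sourceRef:
        kind: HelmRepository
        name: podinfo
        namespace: flux-system
EOF
git add apps/podinfo/helmrelease.yaml && git commit -m "add podinfo" && git push
# Flux notices the commit and installs/upgrades the chart on its own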

[–] moonpiedumplings@programming.dev 5 points 3 days ago* (last edited 3 days ago) (8 children)

But all the website already use bash scripts.

I mentioned an alternative to what these websites do: using a package manager to install these instead of their bash scripts.

It’s not a package manager based on bash.

Both of the bash scripts you mentioned as an example are being used to install software. If you have examples of bash scripts that do things other than install software, then it's worth discussing how to handle those.

However, the reason bash is so popular for use cases like configuration scripts or an Arch install script is that no software besides wget/curl and bash is required to run it. Having to get an extra tool onto the Arch install ISO just to run an install script in bash, or to run a script that installs tools on a fresh, clean install, somewhat defeats the point of the script being written in bash imo.

It’s secure way to distribute bash scripts that are already being distributed in a insecure way.

Bash is inherently insecure. I consider security to cover not just malice, but also footguns like the Steam issue mentioned above. Centralizing all the bash scripts into a "repo" doesn't fix the issues with arbitrary bash scripts.

And if you are concerned about malice, then the bash scripts almost always download a binary that does further arbitrary code execution and cannot be audited. What's the difference between a bash script from the developer's website and a binary from the developer's website?

[–] moonpiedumplings@programming.dev 7 points 3 days ago (2 children)

There is also no way to verify that the software that is being installed is not going to do anything bad. If you trust the software then why not trust the installation scripts by the same authors

Just because I trust the authors to write good software in a popular programming language, doesn't mean I trust them to write shell scripts in a language known for footguns.

[–] moonpiedumplings@programming.dev 8 points 3 days ago* (last edited 3 days ago) (13 children)

The problem with a central script repository is that bash scripts are difficult to audit, both for malicious activity, but also for bad practices and user errors.

A Steam bug in their bash script once deleted a user's home directory.

Even though the AUR is "basically" bash scripts, it's acceptable because it uses its own format that calls other scripts under the hood, and the standardized format makes it easier to audit. Although I have heard a few stories of issues with this, like one poorly made AUR package moving someone's /bin to /opt and breaking everything.
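
For reference, a PKGBUILD is still bash, but every package exposes the same named variables and functions, which is what makes skimming one for problems feasible. A minimal, made-up example:

pkgname=example-tool
pkgver=1.0.0
pkgrel=1
arch=('x86_64')
url="https://example.com/example-tool"
license=('MIT')
source=("$url/releases/$pkgname-$pkgver.tar.gz")
sha256sums=('SKIP')   # real packages pin a checksum here

build() {
  cd "$pkgname-$pkgver"
  make
}

package() {
  cd "$pkgname-$pkgver"
  make DESTDIR="$pkgdir" install
}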

So in my opinion, a package manager based on bash basically doesn't work because of these issues. All modern packaging uses some kind of actual standardized format, to make it easier to audit and develop, and to either mitigate package maintainer/creator error, or to prevent it entirely.

If you want to install tools on another distro that doesn't package them currently, I think nix, Junest, or distrobox are good solutions, because they essentially give you access to the package managers of other distros. Nix in particular has the most packages out of any distro, even more than the AUR and arch repos combined.
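
A rough sketch of what that looks like in practice (container and package names are just examples):

# distrobox: an Arch userland, and therefore pacman/the AUR, on any host distro
distrobox create --name arch --image docker.io/library/archlinux:latest
distrobox enter arch                 # then pacman -S whatever inside it

# nix: install a single package from nixpkgs on any distro
nix profile install nixpkgs#deno     # needs flakes enabled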

[–] moonpiedumplings@programming.dev 2 points 3 days ago (2 children)

garden seems similar to GitOps solutions like ArgoCD or FluxCD for deploying helm charts.

Here is an example of authentik deployed using helm and fluxcd.

[–] moonpiedumplings@programming.dev 4 points 3 days ago (1 children)

Firstly, I want to say that I started with podman (an alternative to docker) and Ansible, but I quickly ran into issues. The last issue I encountered, and the final straw, was that Ansible would not actually change a container's configuration unless I used it to destroy and recreate the container.

Without quadlets, podman manages its own state, which has issues, and was the entire reason I was looking into alternatives to podman for managing state.
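
For anyone unfamiliar, a quadlet is just a systemd unit that podman turns into a container, so systemd owns the state. Something like this (image and port are placeholders):

mkdir -p ~/.config/containers/systemd
cat > ~/.config/containers/systemd/web.container <<'EOF'
[Unit]
Description=Example web container

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Install]
WantedBy=default.target
EOF
systemctl --user daemon-reload
systemctl --user start web.service   # quadlet generates web.service from web.container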

More research: https://github.com/linux-system-roles/podman: I found an Ansible role to generate podman quadlets, but I don't really want to include Ansible roles in my existing Ansible roles. Also, it takes in Kubernetes yaml, which is very complex for what I am trying to do. At that point, why not just use a single-node Kubernetes cluster and let Kubernetes manage state?

So I switched to Kubernetes.

To answer some of your questions:

Am I really supposed to have a collection of small yaml files for everything, that I use with kubectl apply -f ?? It feels wrong and way too “by hand”! Is there a more scripted way to do it? Should I stay with everything in Ansible ??

So what I (and the industry) use is called "GitOps". Essentially, you have a git repo, and the software automatically pulls the repo and applies the configs to the cluster.

Here is my gitops repo: https://github.com/moonpiedumplings/flux-config. I use FluxCD for GitOps, but there are other options like Rancher's Fleet or the most popular ArgoCD.
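
If you go the Flux route, the day-to-day workflow is roughly this (owner/repo/paths are placeholders, and the flags are from memory, so check flux's docs):

# one-time: point flux at your repo; it installs itself into the cluster
flux bootstrap github --owner=<you> --repository=flux-config --path=clusters/main --personal

# after that, "deploying" is just editing yaml and pushing
git add clusters/main/some-app.yaml && git commit -m "add some-app" && git push
flux reconcile kustomization flux-system   # optional: apply now instead of waiting for the next sync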

As a tip, you can search GitHub for pieces of code to reuse. I usually search path:*.y*ml plus a few keywords to find appropriate pieces of yaml.

I see little to no example on how to deploy the service containers I want (pihole, navidrome, etc.) to a cluster, unlike docker-compose examples that can be found everywhere. Am I looking for the wrong thing?

So the first issue is that Kubernetes doesn't really have "containers". Instead, the smallest controllable unit in Kubernetes is a "pod", which is a collection of containers that share a network device. Of course, pods for selfhosted services like the type this community is interested in will rarely have more than one container in them.

There are ways to convert a docker-compose to a kubernetes pod.
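
For example, kompose can do a rough first pass at the translation (it expects the usual compose file name):

kompose convert -f docker-compose.yml   # writes kubernetes yaml next to the compose file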

But in general, Kubernetes doesn't use compose files for premade services, but instead helm charts. If you are having issues installing specific helm charts, you should ask for help here so we can iron them out. Helm charts are pretty reliable in my experience, but they do seem to be more involved to set up than docker-compose.
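
The general shape of installing a chart, using cert-manager as the example since it comes up below (flags are from memory, so double check against the chart's own docs):

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true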

Even official doc seems broken. Am I really supposed to run many helm commands (some of them how just fails) and try and get ssl certs just to have Rancher and its dashboard

So what you're supposed to do is deploy an "ingress" (k3s comes with traefik by default), and then use cert-manager to automatically get letsencrypt certs for ingress "objects".

Actually, traefik comes with its own way to get SSL certs (in addition to ingresses and cert-manager), so you can look into that as well, but I decided to use the standardized ingress + cert-manager method because it is also compatible with other ingress software.
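
Roughly what that looks like once cert-manager and a ClusterIssuer exist; the hostname, service, port, and issuer name ("letsencrypt") are placeholders for your own setup:

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: navidrome
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  ingressClassName: traefik
  rules:
    - host: music.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: navidrome
                port:
                  number: 4533
  tls:
    - hosts:
        - music.example.com
      secretName: navidrome-tls   # cert-manager creates and renews the cert in this secret
EOF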

Although it seems complex, I've come to really, really love Kubernetes because of features mentioned here. Especially the declarative part, where all my services can be code in a git repo.

[–] moonpiedumplings@programming.dev 1 points 3 days ago* (last edited 3 days ago) (1 children)

Can you share your setup? I'd really like that, because I don't think nixgl works with GPGPU apps like CUDA or lc0 (a chess engine that uses the GPU).

EDIT:

Is this it: https://github.com/juipeltje/configs/blob/54e971f6a6da47d6cfd02a6409be97d5e1051b0f/scripts/cron/nix-drivers.sh ?

Although this seems like it would just symlink OpenGL? What if I wanted GPGPU, like OpenCL or CUDA?

Edit2: wait, I think I might be misunderstanding how it works. I think nixgl already supports opencl, because it's mesa that implements opencl and opengl. But how would I get mesa support? I don't think the symlink to /run trick works.

 

cross-posted from: https://programming.dev/post/18069168

I couldn't get any of the OS images to load on any of the browsers I tested, but they loaded for other people I tested it with. I think I'm just unlucky.

Linux emulation isn't too polished.

 

According to the archwiki article on a swapfile on btrfs: https://wiki.archlinux.org/title/Btrfs#Swap_file

Tip: Consider creating the subvolume directly below the top-level subvolume, e.g. @swap. Then, make sure the subvolume is mounted to /swap (or any other accessible location).

But... why? I've been researching for a bit now, and I still don't understand the benefit of a subvolume directly below the top level subvolume, as opposed to a nested subvolume.
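
For context, this is the layout the tip is describing, as far as I understand it (device, size, and mount points are just examples):

# mount the top-level subvolume (id 5) and create @swap as a sibling of @, @home, etc.
mkdir -p /mnt/btrfs-root /swap
mount -o subvolid=5 /dev/sda2 /mnt/btrfs-root
btrfs subvolume create /mnt/btrfs-root/@swap
mount -o subvol=@swap /dev/sda2 /swap

# then the usual no-CoW swapfile dance
truncate -s 0 /swap/swapfile
chattr +C /swap/swapfile
fallocate -l 4G /swap/swapfile
chmod 600 /swap/swapfile
mkswap /swap/swapfile && swapon /swap/swapfile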

At first I thought this might be because nested subvolumes are included in snapshots, but that doesn't seem to be the case, according to a reddit post... but I can't find anything about this on the arch wiki, gentoo wiki, or the btrfs readthedocs page.

Any ideas? I feel like the tip wouldn't just be there just because.

 

I've recently done some talks for my school's cybersecurity club, and now I want to edit them.

My actual video editing needs are very simple, I just need to clip parts of the video out, which basically every editor can do, as per my understanding.

However, my videos were recorded on my phone, and I don't have a presentation mic or anything of the sort, so background noise, including people talking, has slipped in. From my understanding, it's trivial to filter out general noise from audio, since human voices sit in a specific frequency range, even "live", like during recording or during a game, but filtering out voices is harder.

However, it seems that AI can do this:

https://scribe.rip/axinc-ai/voicefilter-targeted-voice-separation-model-6fe6f85309ea

Although, it seems to only work on .wav audio files, meaning I would need to separate out the audio track first, convert it to wav, and then re-merge it back in.
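
That part at least is straightforward with ffmpeg (file names are placeholders):

# pull the audio track out as wav
ffmpeg -i talk.mp4 -vn -acodec pcm_s16le -ar 16000 -ac 1 audio.wav
# ...run the voice-separation model on audio.wav, producing cleaned.wav...
# put the cleaned audio back, leaving the video stream untouched
ffmpeg -i talk.mp4 -i cleaned.wav -map 0:v -map 1:a -c:v copy -c:a aac talk-clean.mp4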

Before I go learning how to do this, I'm wondering if there is already an existing FOSS video editor, or plugin to an editor that lets me filter the video itself, or a similar software that works on the audio of videos.

 

cross-posted from: https://programming.dev/post/5669401

docker-tcp-switchboard is pretty good, but it has two problems for me:

  • Doesn't support non-ssh connections
  • Containers, not virtual machines

I am setting up a simple CTF for my college's cybersecurity club, and I want each competitor to be isolated to their own virtual machine. Normally I'd use containers, but they don't really work for this, because it's a container escape ctf...

My idea is to deploy linuxserver/webtop as the entry point for the CTF (with the insecure option enabled, if you know what I mean), but it only supports one user at a time; if multiple users attempt to connect, they all see the same X session.

I don't have too much time, so I don't want to write a custom solution. If worst comes to worst, then I will just put a virtual machine on each of the desktops in the shared lab.

Any ideas?

 

So basically, my setup has everything encrypted except /boot/efi. This means that /boot/grub is encrypted, along with my kernels.

I am now attempting to get secure boot set up, to lock some stuff down, but I encountered this issue: https://bbs.archlinux.org/viewtopic.php?id=282076

Now I could sign the font files... but I don't want to. Font files and grub config are located under /boot/grub, and therefore encrypted. An attacker doing something like removing my hard drive would not be able to modify them.

I don't want to go through the effort of signing font files; does anyone know if there is a version of grub that doesn't do this?

Actually, preferably, I would like a version of grub that doesn't verify ANYTHING. Since everything but grub's efi file is encrypted, it would be so much simpler to only do secure boot for that.
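
i.e. something like this, where only the one unencrypted EFI binary gets signed (the path depends on the install, and sbctl is just one of several ways to manage the keys):

sbctl create-keys                             # generate your own secure boot keys
sbctl enroll-keys --microsoft                 # enroll them, keeping Microsoft's for firmware/option ROMs
sbctl sign /boot/efi/EFI/GRUB/grubx64.efi     # sign only grub's EFI binary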

And yes, I do understand there are security benefits to being able to prevent an attacker that has gained some level of running access to do something like replacing your kernel. But I'm less concerned about that vector of attack, I would simply like to make it so that my laptops aren't affected by evil maid attacks, without losing benefits from timeshift or whatnot.

I found the specific commit where grub enforces verification of font files: https://github.com/rhboot/grub2/commit/539662956ad787fffa662720a67c98c217d78128

But I don't really feel interested in creating and maintaining my own fork of grub, and I am wondering if someone has already done that.

 

I'm having trouble with networking on Linux. I am renting a VPS with only one NIC, one IPv4 address, and a /64 range of IPv6 ones. I want to deploy OpenStack Neutron to this VPS, but Neutron is designed to be run on machines with two NICs: one for normal network access, and one entirely dedicated to virtualized networking, like, in my case, giving an OpenStack virtual machine a public IPv6 address. I want to create a virtual NIC that can get its own public IPv6 addresses for the VMs, without losing functionality of the main NIC, and I also want the VMs to have IPv4 connectivity. I know this setup is possible, as the OpenStack docs say so, but they didn't cover how to do it.

Docs: https://docs.openstack.org/kolla-ansible/latest/reference/networking/neutron.html#example-shared-interface

There is an overview of what you need to do there, but I don't understand how to turn it into a usable setup. In addition, it seems you would need to give the VMs public IPv4 addresses for them to have internet connectivity. I would need to create a NAT-type network that routes through the main working interface, and then put the neutron interface partially behind that, in order for IPv4 connectivity to happen.
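
The IPv4 half of that is, as far as I can tell, plain masquerading; a rough sketch of the idea (interface names and the private subnet are placeholders, and this ignores the Neutron side entirely):

# a bridge for the VMs, with a private IPv4 range
ip link add br-vms type bridge
ip addr add 192.168.100.1/24 dev br-vms
ip link set br-vms up

# NAT the VMs' IPv4 traffic out through the VPS's single real NIC (eth0 here)
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o eth0 -j MASQUERADE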

I've been searching around for a bit, so I know this exact setup is possible: https://jamielinux.com/docs/libvirt-networking-handbook/multiple-networks.html#example-2 (last updated in 2016, outdated)

But I haven't found an updated guide on how to do it.
