moonpiedumplings

joined 2 years ago
[–] moonpiedumplings@programming.dev 2 points 2 weeks ago* (last edited 2 weeks ago)

Late reply but I also recommend going through flathub for screenwriting apps if you want more. I saw some options that looked pretty good, although many were proprietary.

[–] moonpiedumplings@programming.dev 2 points 2 weeks ago (2 children)

Not really? From this page, all it looks like you need is a salsa.debian.org account. They call this being a "Debian developer", but registration on Debian Salsa is open to anybody, and you can just sign up.

Once you have an account, you can use Debian's Debusine normally. I don't really see how this is any different from being required to create an Ubuntu/Launchpad account for a PPA. This is really just pedantic terminology: Debian considers anybody who contributes to their distro in any way to be a "Debian Developer", whereas Ubuntu doesn't.

If you don't want to create an account, you can self host debusine — except it looks like you can't self host the server that powers PPA's. I consider this to be a win for Debusine.

Make sure you stream with the "linux" tag so that people who follow that tag, like me, can find you!

[–] moonpiedumplings@programming.dev 1 points 2 weeks ago* (last edited 2 weeks ago)

Proxmox is based on Debian, with its own virtualization packages and system services that do something very similar to what libvirt does.

Libvirt + virt-manager also uses QEMU/KVM as its underlying virtual machine software, meaning performance will be identical.

Although perhaps there will be a tiny difference due to libvirt's use of the more performant SPICE protocol for graphics versus Proxmox's noVNC, it doesn't really matter.

The true minimal setup is to just use QEMU/KVM directly. Virtual machine performance will be the same as with libvirt, in exchange for a very small reduction in overhead.
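For reference, a minimal direct QEMU/KVM invocation looks something like this (disk and ISO filenames here are hypothetical examples):

```shell
# Create a disk image for the VM (qcow2 grows on demand)
qemu-img create -f qcow2 disk.qcow2 20G

# Boot an installer ISO with KVM acceleration, 4 GiB RAM, 2 vCPUs
qemu-system-x86_64 \
  -enable-kvm \
  -m 4096 -smp 2 \
  -drive file=disk.qcow2,format=qcow2,if=virtio \
  -cdrom installer.iso \
  -boot d
```

Everything libvirt or Proxmox does ultimately drives an invocation like this; they just add management, storage, and networking conveniences on top.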

[–] moonpiedumplings@programming.dev 1 points 2 weeks ago* (last edited 2 weeks ago)

Idk what to tell you. I linked to sources showing that flathub signs everything, and that flatpak refuses to install unsigned packages by default.

If you have anything contrary feel free to link it.

Also, you multi-replied to this comment. Sometimes I had this issue with Eternity.

I have a similar setup, and even though I am hosting git (forgejo), I use ssh as a git server for the source of truth that k8s reads.

This prevents an ouroboros dependency where flux is using the git repo from forgejo which is deployed by flux...
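As a sketch, the source-of-truth repo can just be a bare repo over SSH, with flux bootstrapped against it (the host, user, and paths below are hypothetical):

```shell
# On the SSH host: create a bare repo to act as the source of truth
ssh git@example.com 'git init --bare ~/infra.git'

# Bootstrap flux against that repo instead of a forge-hosted one
flux bootstrap git \
  --url=ssh://git@example.com/home/git/infra.git \
  --branch=main \
  --path=clusters/home \
  --private-key-file=/path/to/deploy_key
```

Since plain SSH has no dependency on the cluster, flux can always reach its source even when Forgejo itself is down or being redeployed.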

[–] moonpiedumplings@programming.dev 1 points 2 weeks ago* (last edited 2 weeks ago) (4 children)

From Flathub's docs: https://docs.flathub.org/blog/app-safety-layered-approach-source-to-user#reproducibility--auditability

The build itself is signed by Flathub’s key, and Flatpak/OSTree verify these signatures when installing and updating apps.

This does not seem to be optional or up to the control of each developer or publisher who is using the flathub repos.

Of course, unless you mean packages via flatpak in general?

Hmmm, this is where my research leads me.

https://docs.flatpak.org/en/latest/flatpak-builder.html#signing

Though it generally isn’t recommended, it is possible not to use GPG verification. In this case, the --no-gpg-verify option should be used when adding the repository. Note that it is necessary to become root in order to update a repository that does not have GPG verification enabled.

Going further, I found a relevant GitHub issue where a user ran into flatpak refusing to install an unsigned package and asked for a CLI flag to bypass the block.

I don't really see how this is any different from apt refusing to install unsigned packages by default but allowing a command line flag (--allow-unauthenticated) as an escape hatch.

To be really pedantic, apt key signing is also optional; it's just that apt is configured to refuse to install unsigned packages by default, so therefore all major repos sign their packages with GPG keys. Flatpak appears to follow this exact same model.
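For comparison, both escape hatches look like this (the package and remote names are hypothetical):

```shell
# apt: signature checking is on by default; bypassing it must be explicit
apt-get install --allow-unauthenticated some-package

# flatpak: a remote without a GPG key must be added with an explicit opt-out
flatpak remote-add --no-gpg-verify unsigned-remote https://example.com/repo
```

In both cases the default is verification, and skipping it requires a deliberate flag.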

[–] moonpiedumplings@programming.dev 0 points 2 weeks ago* (last edited 2 weeks ago) (6 children)

This is not true. Flatpaks from flathub are signed with a gpg key.

Now admittedly, they use a single release key for all their signing, which is much weaker than the traditional distro's model of having multiple package maintainers sign off on a release.

But the packages are signed.

Edit: snaps are signed in a similar way.

sandboxing is not the best practice on Linux… So I’m better off with Qubes than with Secureblue

No, no, no.

It's not that sandboxing isn't best practice, it's just that attempting to "stack" Linux sandboxes is mostly ineffective. If I run KVM inside Xen, I get more security. If I run a Linux container inside a Linux container, I only get the benefit of one layer. But Linux sandboxes are good practice.

I do agree that secureblue sucks, but I don't understand your focus on Qubes. To elaborate on my criticisms, let me reply to this comment:

Many CVE’s for Xen were discovered and patched by the Qubes folks, so that’s a good thing…

If you really, really care about security, it's not enough to "find and patch CVEs". The architecture of the software must be organized in such a way that certain classes of vulnerabilities are impossible, so that no CVEs exist in the first place. A lack of separation between different privilege levels turns a normal bug into a critical security issue.

Xen having so many CVEs shows that it has some clear architectural flaws, and that despite technically being a "microkernel", the isolation between its components is not enough to prevent privilege escalation flaws.

gVisor having very few CVEs over its lifespan shows it has a better architecture. Same for OpenBSD: despite having a "monolithic" kernel, I would trust OpenBSD more in many cases (I'll elaborate later).

Now, let's talk about threat model. Personally, I don't really understand your fears in this thread. You visited a site, got literally jumpscared (not even phished), and are now looking at Qubes? No actual exploit occurred.

You need to understand that the sandboxing that browsers use is one of the most advanced in existence currently. Browser escapes are mostly impossible... mostly.

In addition, you need to know that, excluding OpenBSD, gVisor, and a few other projects, almost all other projects will have a regular outpouring of CVEs at varying rates, depending on how well they are architected.

Xen is one of those projects. Linux is one of those projects. Your browser is one of those projects. Although I consider Linux a tier below in security, I consider Xen and browsers to exist at a similar tier of security.

What I'm trying to say is that any organization/entity that is keeping a browser sandbox escape will most definitely have a Linux privilege escalation vulnerability, and will probably also have a Xen escape and escalation vulnerability.

The qube with the browser might get compromised, but dom0 would stay safely offline, that’s my ideal, not the utopic notion of never possibly getting attacked and hacked.

This is just false. Anybody who is able to do the very difficult task of compromising you through the browser will probably also be able to punch through Xen.

not the utopic notion of never possibly getting attacked and hacked.

This is true actually. Browser exploits are worth millions or even tens of millions of dollars. And they can only really be used a few times before someone catches them and reports them so that they are patched.

Why would someone spend tens of millions of dollars to compromise you? Do you have information worth millions of dollars on your computer? It's not a "utopic notion", it's being realistic.

If you want maximum browser security, ~~disable javascript~~ use chromium on openbsd. Chromium has slightly stronger sandboxing than firefox, although chromium mostly outputs CVE's at the same rate as firefox. Where it really shines, is when combined with Openbsd's sandboxing (or grapheneos' for phones).

Sure, you can run Xen under that setup. But there will be no benefit, you already have a stronger layer in front of Xen.

TLDR: Your entire security setup is only actually as strong as your strongest layer/shield. Adding more layers doesn't really offer a benefit, and chasing even stronger layers is a waste of your time because you aren't a target.

[–] moonpiedumplings@programming.dev 6 points 3 weeks ago (1 children)

Proxmox is based on debian and uses debian under the hood...

[–] moonpiedumplings@programming.dev 2 points 3 weeks ago (2 children)

To answer your first question, kind of. gVisor (by Google, btw) uses the Linux kernel's sandboxing to sandbox the gVisor process itself.

Distrobox also uses the Linux kernel's sandboxing, which is how Linux-based containers work.

Due to the attack surface of the Linux kernel's sandboxing components, the ability to create sandboxes or containers inside sandboxes or containers is usually restricted.

What this means is that to use gVisor inside docker/podman (distrobox), you must either loosen the (kinda nonexistent) distrobox sandbox, or disable the sandboxing that gVisor applies to itself. You lose the benefit, and you would be better off just using gVisor alone.

It's complicated, but basically the Linux kernel's container/sandboxing features can't really be "stacked".
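To illustrate, gVisor is normally plugged into docker directly as an OCI runtime rather than nested inside another container (this assumes runsc is already installed on the host):

```shell
# Register runsc as a docker runtime (writes it into /etc/docker/daemon.json)
sudo runsc install
sudo systemctl restart docker

# Run a container under gVisor's sandbox instead of plain runc
docker run --rm --runtime=runsc alpine uname -a
```

Used this way, gVisor sits between the container and the host kernel as one clean layer, which is the configuration it's designed for.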

 

https://security-tracker.debian.org/tracker/CVE-2024-47176, archive

As of 10/1/24 3:52 UTC, Trixie/Debian Testing does not have a fix for the severe cupsd security vulnerability that was recently announced, despite Debian Stable and Unstable having one.

Debian Testing is intended for testing, and not really for production usage.

https://tracker.debian.org/pkg/cups-filters, archive

So the way Debian Unstable/Testing works is that packages go into unstable for a bit, and then are migrated into testing/trixie.

Issues preventing migration: Too young, only 3 of 5 days old

Basically, security vulnerabilities are not really a priority in testing, and everything waits for a bit before it updates.
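One way to check where a fix has actually landed is rmadison from devscripts, which lists a package's version in each Debian suite (output shape approximate):

```shell
# Show which version of cups-filters is in each Debian suite
rmadison cups-filters
```

If the fixed version shows up next to unstable but not testing, it's still sitting in the migration queue.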

I recently saw some people recommending Debian Testing as "Debian, but not as unstable as Sid, with newer packages than Stable", which is a pretty bad idea. Testing is not really intended for production use.

If you want newer, but still stable, packages from the same repositories, then I recommend (not an exhaustive list, of course):

  • Opensuse Leap (Tumbleweed works too but secure boot was borked when I used it)
  • Fedora

If you are willing to mix and match sources for packages:

  • Flatpaks
  • distrobox — run other distros in docker/podman containers and use apps through those
  • Nix

These can get you newer packages on a more stable distro safely.
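For example, the distrobox route looks something like this (the image tag is just an example):

```shell
# Create an Arch container to pull newer packages from
distrobox create --name arch --image docker.io/library/archlinux:latest

# Enter it and install/run apps as usual
distrobox enter arch

# From inside the container: export an app so it shows up on the host
distrobox-export --app firefox
```

The host stays on its stable package base while individual apps come from the rolling distro inside the container.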

 

cross-posted from: https://programming.dev/post/18069168

I couldn't get any of the OS images to load on any of the browsers I tested, but they loaded for other people I tested it with. I think I'm just unlucky.

Linux emulation isn't too polished.

 

I couldn't get any of the OS images to load on any of the browsers I tested, but they loaded for other people I tested it with. I think I'm just unlucky.

Linux emulation isn't too polished.

 

According to the archwiki article on a swapfile on btrfs: https://wiki.archlinux.org/title/Btrfs#Swap_file

Tip: Consider creating the subvolume directly below the top-level subvolume, e.g. @swap. Then, make sure the subvolume is mounted to /swap (or any other accessible location).

But... why? I've been researching for a bit now, and I still don't understand the benefit of a subvolume directly below the top level subvolume, as opposed to a nested subvolume.

At first I thought this might be because nested subvolumes are included in snapshots, but that doesn't seem to be the case, according to a reddit post... but I can't find anything about this on the arch wiki, gentoo wiki, or the btrfs readthedocs page.

Any ideas? I feel like the tip wouldn't just be there for no reason.
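For reference, the setup the tip describes would look roughly like this (assuming btrfs-progs >= 6.1 for mkswapfile; the device, mount points, and size are examples):

```shell
# Mount the top-level subvolume (subvolid=5) somewhere temporary
mount -o subvolid=5 /dev/sdX /mnt

# Create the @swap subvolume directly below the top level
btrfs subvolume create /mnt/@swap

# Mount it at /swap and create a swapfile with the right attributes
mkdir -p /swap
mount -o subvol=@swap /dev/sdX /swap
btrfs filesystem mkswapfile --size 4g /swap/swapfile
swapon /swap/swapfile
```

mkswapfile handles the NOCOW attribute and compression exclusions that a btrfs swapfile requires; on older btrfs-progs those steps have to be done manually.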

 

I've recently done some talks for my school's cybersecurity club, and now I want to edit them.

My actual video editing needs are very simple: I just need to clip parts of the video out, which basically every editor can do, as far as I understand.

However, my videos were recorded on my phone, and I don't have a presentation mic or anything of the sort, so background noise, including people talking, has slipped in. From my understanding, it's trivial to filter out general noise from audio, even "live" (during recording or a game), since human voices occupy a specific frequency range, but filtering out other voices is harder.

However, it seems that AI can do this:

https://scribe.rip/axinc-ai/voicefilter-targeted-voice-separation-model-6fe6f85309ea

Although, it seems to only work on .wav audio files, meaning I would need to separate out the audio track first, convert it to wav, and then re-merge it back in.

Before I go learning how to do this, I'm wondering if there is already an existing FOSS video editor, or plugin to an editor that lets me filter the video itself, or a similar software that works on the audio of videos.
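The extract/convert/re-merge round trip itself is straightforward with ffmpeg (filenames are hypothetical):

```shell
# Extract the audio track as wav (strip the video)
ffmpeg -i talk.mp4 -vn -acodec pcm_s16le talk.wav

# ... run the voice-filter model on talk.wav, producing talk-clean.wav ...

# Re-merge: copy the original video stream untouched, take audio from the cleaned wav
ffmpeg -i talk.mp4 -i talk-clean.wav -map 0:v -map 1:a -c:v copy -c:a aac talk-clean.mp4
```

Since the video stream is copied rather than re-encoded, this loses no video quality and runs quickly.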

 

cross-posted from: https://programming.dev/post/5669401

docker-tcp-switchboard is pretty good, but it has two problems for me:

  • Doesn't support non-ssh connections
  • Containers, not virtual machines

I am setting up a simple CTF for my college's cybersecurity club, and I want each competitor to be isolated to their own virtual machine. Normally I'd use containers, but they don't really work for this, because it's a container escape ctf...

My idea is to deploy linuxserver/webtop as the entry point for the CTF (with the insecure option enabled, if you know what I mean), but it only supports one user at a time; if multiple users attempt to connect, they all see the same X session.

I don't have too much time, so I don't want to write a custom solution. If worst comes to worst, then I will just put a virtual machine on each of the desktops in the shared lab.

Any ideas?
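The worst-case fallback could at least be semi-automated; cloning one prepared VM per competitor with virt-clone would look something like this (VM names and count are hypothetical):

```shell
# Clone a prepared base VM once per competitor and start each clone
for i in $(seq 1 10); do
  virt-clone --original ctf-base --name "ctf-user$i" --auto-clone
  virsh start "ctf-user$i"
done
```

Each competitor then gets an identical but fully isolated machine, which is exactly what a container-escape CTF needs.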

 

docker-tcp-switchboard is pretty good, but it has two problems for me:

  • Doesn't support non-ssh connections
  • Containers, not virtual machines

I am setting up a simple CTF for my college's cybersecurity club, and I want each competitor to be isolated to their own virtual machine. Normally I'd use containers, but they don't really work for this, because it's a container escape ctf...

My idea is to deploy linuxserver/webtop as the entry point for the CTF (with the insecure option enabled, if you know what I mean), but it only supports one user at a time; if multiple users attempt to connect, they all see the same X session.

I don't have too much time, so I don't want to write a custom solution. If worst comes to worst, then I will just put a virtual machine on each of the desktops in the shared lab.

Any ideas?

 

So basically, my setup has everything encrypted except /boot/efi. This means that /boot/grub is encrypted, along with my kernels.

I am now attempting to get secure boot set up, to lock some stuff down, but I encountered this issue: https://bbs.archlinux.org/viewtopic.php?id=282076

Now I could sign the font files... but I don't want to. Font files and the grub config are located under /boot/grub, and are therefore encrypted. An attacker doing something like removing my hard drive would not be able to modify them.

I don't want to go through the effort of signing font files; does anyone know if there is a version of grub that doesn't do this?

Actually, preferably, I would like a version of grub that doesn't verify ANYTHING. Since everything but grub's efi file is encrypted, it would be so much simpler to only do secure boot for that.

And yes, I do understand there are security benefits to being able to prevent an attacker that has gained some level of running access to do something like replacing your kernel. But I'm less concerned about that vector of attack, I would simply like to make it so that my laptops aren't affected by evil maid attacks, without losing benefits from timeshift or whatnot.
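Signing only grub's EFI binary with your own enrolled keys would look something like this (paths and key names are hypothetical, using sbsigntools):

```shell
# Sign just the grub EFI binary; everything else lives on the encrypted disk
sbsign --key db.key --cert db.crt \
  --output /boot/efi/EFI/arch/grubx64.efi \
  /boot/efi/EFI/arch/grubx64.efi.unsigned

# Verify the signature against the enrolled certificate
sbverify --cert db.crt /boot/efi/EFI/arch/grubx64.efi
```

That's the whole secure boot surface in this setup, which is why grub's insistence on also verifying files on the encrypted partition is so frustrating.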

I found the specific commit where grub enforces verification of font files: https://github.com/rhboot/grub2/commit/539662956ad787fffa662720a67c98c217d78128

But I don't really feel interested in creating and maintaining my own fork of grub, and I am wondering if someone has already done that.

 

I'm having trouble with networking on Linux. I am renting a VPS with only one NIC, one IPv4 address, and a /64 range of IPv6 addresses. I want to deploy OpenStack Neutron to this VPS, but Neutron is designed to be run on machines with two NICs: one for normal network access, and one entirely dedicated to virtualized networking, like, in my case, giving an OpenStack virtual machine a public IPv6 address. I want to create a virtual NIC that can get its own public IPv6 addresses for the VMs, without losing functionality of the main NIC, and I also want the VMs to have IPv4 connectivity. I know this setup is possible, as the OpenStack docs say so, but they didn't cover how to do it.

Docs: https://docs.openstack.org/kolla-ansible/latest/reference/networking/neutron.html#example-shared-interface

There is an overview of what you need to do here, but I don't understand how to turn it into a usable setup. In addition, it seems you would need to give the VMs public IPv4 addresses for them to have internet connectivity. I would need to create a NAT-type network that routes through the main working interface, and then put the Neutron interface partially behind that, for IPv4 connectivity to work.

I've been searching around for a bit, so I know this exact setup is possible: https://jamielinux.com/docs/libvirt-networking-handbook/multiple-networks.html#example-2 (last updated in 2016, outdated)

But I haven't found an updated guide on how to do it.
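As a rough sketch with plain iproute2 and nftables (interface names and the prefix below are placeholders), the idea would be a bridge that owns a slice of the /64, with forwarding enabled and IPv4 masqueraded out the main NIC:

```shell
# Bridge that the VMs / Neutron will attach to
ip link add br-vms type bridge
ip link set br-vms up

# Give the bridge the routed /64 (placeholder prefix)
ip addr add 2001:db8:1234:5678::1/64 dev br-vms

# Let the host route between the bridge and the main NIC (eth0)
sysctl -w net.ipv6.conf.all.forwarding=1
sysctl -w net.ipv4.ip_forward=1

# Private IPv4 for the VMs, NATed out the main interface
ip addr add 192.168.100.1/24 dev br-vms
nft add table ip nat
nft 'add chain ip nat postrouting { type nat hook postrouting priority 100 ; }'
nft add rule ip nat postrouting oifname "eth0" ip saddr 192.168.100.0/24 masquerade
```

One caveat: if the provider's /64 is on-link rather than routed, the host would also need NDP proxying (e.g. ndppd) for the VMs' addresses to be reachable, which is a whole extra can of worms.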
