talkingpumpkin

joined 2 years ago

I really don't get your reasoning, but I recommend helix (because I recommend it to everybody).

It's a pleasure to use, and it's... also not widespread or old enough to have any reported CVE ;)

Oh, it's written in Rust IIRC, so it probably doesn't suckless.

Don't tear down your server just to have fun - set up a VM (or get one of those mini PCs), call it "playground" and have fun there.

Redo your server after you've tried different things, and only if you feel like you found something that is worth it.

Experimenting with different distros can teach you a lot (especially if you try very different ones - mint and debian aren't all that different) and I do recommend you do it, just don't do it in production :)

I'd say it's because:

  1. the people who ask for recommendations won't like (or understand) debian? (it's just "old packages this" and "outdated that" for most people)
  2. the people who do use and appreciate debian don't read "I hate windows pls recommend me a distro" posts (or at least don't reply as often as the fanboys)

And, no, I don't use debian myself.

but when I finally switched over to Debian, everything just worked!

That's most probably because you learned how to use your system without breaking it in the meantime :)

So I've been using it for a while! :)

What is the big deal about 4.4.0?

[–] talkingpumpkin@lemmy.world 2 points 1 day ago (3 children)

Is this the stable release of the rust rewrite?

[–] talkingpumpkin@lemmy.world 9 points 2 days ago* (last edited 2 days ago) (1 children)

Should I just learn how to use Docker?

Since you are not tied to docker yet, I'd recommend going with podman instead.

They are practically the same and most (all?) docker commands work on podman too, but podman is more modern (second generation advantage) and has a better reputation.

As for passing a network interface to a container, it's doable and IIRC it boils down to moving the interface into the container's network namespace.
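
If you ever want to try it, the gist is something like this (container name, interface and address are made up - consider it a sketch, not a tested recipe):

# find the container's network namespace via its PID, then move the NIC into it
pid=$(podman inspect -f '{{.State.Pid}}' mycontainer)
sudo ip link set eth1 netns "$pid"
# configure the interface from inside that namespace
sudo nsenter -t "$pid" -n ip addr add 192.168.20.5/24 dev eth1
sudo nsenter -t "$pid" -n ip link set eth1 up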

Unless you have specific reasons to do that, I'd say it's much easier to just forward ports from the host to containers the "normal" way.

There's no limit to how many different IPs you can assign to a host (you don't need a separate interface for each one) and you can use a given port on different IPs for different things.

For example, I run soft-serve (a git server) as a container. The host has one "management" IP (192.168.10.243) where openssh listens on port 22 and another IP (192.168.10.98) whose port 22 is forwarded to the soft-serve container (via podman run [...] -p 192.168.10.98:22:22).
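
Spelled out, that setup is roughly the following (the interface name, volume and image are from memory, double-check them before copying):

# give the host a second IP on the same NIC
sudo ip addr add 192.168.10.98/24 dev eth0
# bind the container's SSH port to that IP only
# (the container-side port depends on how soft-serve is configured)
podman run -d --name soft-serve \
  -v soft-serve-data:/soft-serve \
  -p 192.168.10.98:22:22 \
  charmcli/soft-serve:latest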

[–] talkingpumpkin@lemmy.world 24 points 2 days ago

You don't need to change distro in order to change desktop environment: just install gnome/kde/whatever if you want to give different ones a spin (you don't need to uninstall your current desktop environment either - you can have multiple ones and choose which one to use when you log in)
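
For instance, on a Debian/Ubuntu-flavoured system it would be something like this (package names are examples and differ between distros and desktops):

sudo apt install kde-plasma-desktop   # installs Plasma alongside your current DE
# log out, then pick the session you want from the menu on the login screen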

[–] talkingpumpkin@lemmy.world 2 points 3 days ago* (last edited 3 days ago)

That way "just build t8plus" becomes nixos-rebuild build --option eval-cache false --flake './nixcfg#t8plus' (the flake is at ./nixcfg/flake.nix).

 

Here it is https://codeberg.org/gmg/concoctions/src/branch/main/sh-scripts/nixos-rebuild

(if you try it and find any bugs, please let me know)

edit: I didn't realize the screenshot shows just instead of nixos-rebuild... just runs a script (a "recipe") that calls nixos-rebuild, so the output shown is from the (wrapped) nixos-rebuild
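
Roughly, the idea is the pattern below (a sketch of mine, not the actual script from the link above):

#!/usr/bin/env bash
# wrap nixos-rebuild so short invocations expand to the full flake command
set -eu
action="$1"   # e.g. build, switch, ...
host="$2"     # e.g. t8plus
exec nixos-rebuild "$action" --option eval-cache false --flake "./nixcfg#${host}"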

[–] talkingpumpkin@lemmy.world 18 points 5 days ago* (last edited 5 days ago) (3 children)

Is macOS "absolutely no CLI"? It wasn't when I was using it (admittedly, some 10 years ago), except maybe for the basic things which any mainstream linux distro also provides.

What about Windows? Back in the day I would have paid to have a semi-decent CLI instead of being forced to use regedit (I hear regedit is still going strong, but I've not touched Windows for an even longer period than macOS)

[–] talkingpumpkin@lemmy.world 1 points 6 days ago

The future is 3.0 quantum AI blockchain .com (also orchestrated application server RAD microservices enterprise edition, but TBH those fads weren't as bad as the current ones)

[–] talkingpumpkin@lemmy.world 4 points 6 days ago

And there’s ceo’s listening to these ~~folks~~ fools [FTFY]

I think it's more like the other way around: the current exaggerated faith in AI did not start from the bottom

[–] talkingpumpkin@lemmy.world 16 points 1 week ago

Honestly, do we need a legal definition of what "self hosting" is and what isn't?

I didn't see your post and in the modlog I can only see its title: "Apparently I'm into Web3, says Netcup" [ed: Netcup is a hosting company].

If your post was discussing stuff specific to your hosting provider, then the mods did well in removing it - if you were talking about things that would have interested this community, then they have probably been too rash in removing the post.

 

I'm trying to get my scripts to have precedence over the home manager stuff.

Do you happen to know how to do that?

(not sure it's relevant, but I'm using home-manager in tumbleweed, not nixos)


edit:

Thanks for the replies - I finally got time to investigate this properly so here's a few notes (hopefully useful for someone somehow).

~/.nix-profile/bin is added (prepended) to the path by the files in /nix/var/nix/profiles/default/etc/profile.d/, which are sourced every time my shell (fish, but it should be the same for others) starts (see rg -L nix/profiles /etc 2> /dev/null for how they are sourced).

The path I set in home-manager (via home.sessionPath, which is added (prepended) to home.sessionSearchVariables.PATH) ends up in ~/.nix-profile/etc/profile.d/hm-session-vars.sh, which is sourced via ~/.profile once per session (I think? certainly not when I start fish or bash). This may be due to how I installed home-manager... I don't recall.

So... the solution is to set the path again in my shell (possibly via programs.fish.shellInitLast - I didn't check yet).
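
Concretely, something like this at the very end of the fish config (which is what programs.fish.shellInitLast would end up emitting - the path is just an example) should do the trick:

# re-prepend my scripts dir so it wins over the ~/.nix-profile/bin entry
fish_add_path --move --prepend ~/.local/bin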

 

I'd like to give my users some private network storage (private from me, ie. something encrypted at rest with keys that root cannot obtain).

Do you have any recommendations?

Ideally, it should be something where files are only decrypted on the client, but server-side decryption would be acceptable too as long as the server doesn't save the decryption keys to disk.

Before someone suggests that, I know I could just put LUKS-encrypted disk images on the NAS, but I'd like the whole thing to have decent performance (the idea is to allow people to store their photos/videos, so some may have several GB of files).


edit:

Thanks everyone for your comments!

TLDR: cryfs

Turns out I was looking at the problem from the wrong point of view: I was looking at sftpgo and wondering what I could do on the server side, but you made me realise this is really a client issue (and a solved one at that).

Here's a few notes after investigating the matter:

  • The use case is exactly the same as using client-side encryption with cloud storage (dropbox and those other things we self-hosters never use).
  • As an admin I don't have to do anything to support this use case, except maybe guiding my users in choosing what solution to adopt.
  • Most of the solutions (possibly all except cryfs?) encrypt file names and contents but leak the directory structure and file sizes (meaning I could pretty much guess whether they are storing their photos or... unsavory movies).
  • F-droid has an Android app (called DroidFS) that supports gocryptfs and cryfs

I'll recommend my users try cryfs before any other solution. Others that may be worth looking at (in order): gocryptfs, cryptomator, securefs.

I'll recommend my users avoid cryptomator if possible, despite its popularity: it's one of those commercial open source projects with arbitrary limitations (5 seats, whatever that means) and may add nag screens or require people to migrate to some fork in the future.

ecryptfs is to be avoided at all costs, as it seems unmaintained.
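
For the record, the client-side workflow with cryfs boils down to this (hostnames and paths are made up):

# mount the network share however you prefer (sshfs/smb/nfs/...)
sshfs user@nas.example.lan:/storage ~/nas
# keep the encrypted vault on the share and mount it locally with cryfs:
# only ciphertext ever touches the NAS, plaintext exists only under ~/private
cryfs ~/nas/vault ~/private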

19
submitted 4 months ago* (last edited 4 months ago) by talkingpumpkin@lemmy.world to c/europe@feddit.org
 

Delusional.

 

A lot of instructions for self-hosted containers include volume mounts like:

docker run ...
  -v /etc/timezone:/etc/timezone:ro \
  -v /etc/localtime:/etc/localtime:ro \
  ...

but every time I tried skipping those mounts, everything seemed to work perfectly.

Are those mounts only necessary in specific cases?

PS:

Bonus question: other containers' instructions say to define the TZ variable. Is that only needed when one wants a container to use a different timezone than the host?

 

Prometheus Alertmanager and Grafana (especially Grafana!) seem a bit too involved for monitoring my homelab (Prometheus itself is fine: it does collect a lot of statistics I don't care about, but it doesn't require configuration so it doesn't bother me).

Do you know of simpler alternatives?

My goals are relatively simple:

  1. get a notification when any systemd service fails
  2. get a notification if there is not much space left on a disk
  3. get a notification if one of the above can't be determined (eg. server down, config error, ...)

Seeing graphs with basic system metrics (eg. cpu/ram usage) would be nice, but it's not super-important.

I am a dev so writing a script that checks for whatever I need is way simpler than learning/writing/testing yaml configuration (in fact, I was about to write a script to send heartbeats to something like Uptime Kuma or Tianji before I thought of asking you for a nicer solution).
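
Something along these lines is what I had in mind (the push URL is a placeholder for whatever Uptime Kuma/Tianji gives you, and the thresholds are arbitrary):

#!/usr/bin/env bash
# cron/systemd-timer script: only send the heartbeat when everything looks fine,
# so the monitor alerts when heartbeats stop arriving
set -eu
failed=$(systemctl --failed --no-legend | wc -l)
disk=$(df --output=pcent / | tail -n 1 | tr -dc '0-9')
if [ "$failed" -eq 0 ] && [ "$disk" -lt 90 ]; then
    curl -fsS "https://kuma.example.lan/api/push/XXXXXXXX" > /dev/null
fi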

 

I'm not very hopeful, but... just in case :)

I would like to be able to start a second session in a window of my current one (I mean a second session where I log in as a different user, similar to what happens with the various ctrl+alt+Fx, but starting a graphical session rather than a console one).

Do you know of some software that lets me do it?

Can I somehow run a KVM guest using my host disk as the disk for the VM (and without breaking stuff)?

 

I have two subnets and am experiencing some pretty weird (to me) behaviour - could you help me understand what's going on?


Scenario 1

PC:                        192.168.11.101/24
Server: 192.168.10.102/24, 192.168.11.102/24

From my PC I can connect to .11.102, but not to .10.102:

ping -c 10 192.168.11.102 # works fine
ping -c 10 192.168.10.102 # 100% packet loss

Scenario 2

Now, if I disable .11.102 on the server (ip link set <dev> down) so that it only has an ip on the .10 subnet, the previously failing ping works fine.

PC:                        192.168.11.101/24
Server: 192.168.10.102/24

From my PC:

ping -c 10 192.168.10.102 # now works fine

This is baffling to me... any idea why it might be?


Here's some additional information:

  • The two subnets are on different vlans (.10/24 is untagged and .11/24 is tagged 11).

  • The PC and Server are connected to the same managed switch, which however does nothing "strange" (it just leaves tags as they are on all ports).

  • The router is connected to the aforementioned switch and set to forward packets between the two subnets (I'm pretty sure I've configured it that way, plus IIUC the ping in the second scenario wouldn't work without forwarding).

  • The router also has the same vlan setup, and I can ping both .10.1 and .11.1 with no issue in both scenarios 1 and 2.

  • In case it may matter, the PC (machine 1) has the following routes, set up by NetworkManager from DHCP:

default via 192.168.11.1 dev eth1 proto dhcp              src 192.168.11.101 metric 410
192.168.11.0/24          dev eth1 proto kernel scope link src 192.168.11.101 metric 410
  • In case it may matter, the Server (machine 2) uses systemd-networkd and the routes generated from DHCP are slightly different (after dropping the .11.102 address for scenario 2, the relevant routes of course disappear):
default via 192.168.10.1 dev eth0 proto dhcp              src 192.168.10.102 metric 100
192.168.10.0/24          dev eth0 proto kernel scope link src 192.168.10.102 metric 100
192.168.10.1             dev eth0 proto dhcp   scope link src 192.168.10.102 metric 100
default via 192.168.11.1 dev eth1 proto dhcp              src 192.168.11.102 metric 101
192.168.11.0/24          dev eth1 proto kernel scope link src 192.168.11.102 metric 101
192.168.11.1             dev eth1 proto dhcp   scope link src 192.168.11.102 metric 101

solution

(please do comment if something here is wrong or needs clarifications - hopefully someone will find this discussion in the future and find it useful)

In scenario 1, packets from the PC to the server are routed through .11.1.

Since the server also has an .11/24 address, packets from the server to the PC (including replies) are not routed and instead just sent directly over ethernet.

Since the PC does not expect replies from a different machine than the one it contacted, they are discarded on arrival.

The solution to this (if one still thinks the whole thing is a good idea), is to route traffic originating from the server and directed to .11/24 via the router.

This could be accomplished with ip route del 192.168.11.0/24, which would however break connectivity with .11/24 addresses (similar reason as above: incoming traffic would not be routed but replies would)...

The more general solution (which, IDK, may still have drawbacks?) is to setup a secondary routing table:

echo 50 mytable >> /etc/iproute2/rt_tables # this defines the routing table
                                           # (see "ip rule" and "ip route show table <table>")
ip rule add from 192.168.10/24 iif lo table mytable priority 1 # "iif lo" selects only 
                                                               # packets originating
                                                               # from the machine itself
ip route add default via 192.168.10.1 dev eth0 table mytable # "dev eth0" is the interface
                                                             # with the .10/24 address,
                                                             # and might be superfluous

Now, in my mind, that should break connectivity with .10/24 addresses just like ip route del above, but in practice it does not seem to (if I remember I'll come back and explain why after studying some more)

 

I want to have a local mirror/proxy for some repos I'm using.

The idea is having something I can point my reads to so that I'm free to migrate my upstream repositories whenever I want and also so that my stuff doesn't stop working if some of the jankier third-party repos I use disappear.

I know the various forgejo/gitea/gitlab/... (well, at least some of them - I didn't check the specifics) have pull mirroring, but I'm looking for something simpler... ideally something with a single config file where I list what to mirror and how often to update, and which then allows anonymous read access over the network.
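
(To frame the question a bit: the DIY version I'd rather not maintain myself would be roughly a cron job plus git daemon, something like the sketch below - URLs and paths are made up.)

# refresh the mirrors from cron however often you like
git clone --mirror https://example.com/upstream/repo.git /srv/mirrors/repo.git  # first time only
git -C /srv/mirrors/repo.git remote update --prune                              # subsequent runs
# serve everything under /srv/mirrors read-only over git://
git daemon --base-path=/srv/mirrors --export-all --reuseaddr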

Does anything come to mind?
