talkingpumpkin

joined 2 years ago

on windows it would be to scan your stuff, make sure it's the real site etc

It's the same on Linux (*), with two big differences:

  1. you'll install most (all?) of your software from the repos of your distro of choice, so most of the time you don't have to worry
  2. linux is inherently more secure than Windows (and AFAIK there are fewer viruses targeting it, either because they are harder to write or because it's a smaller target), so you are not as likely to catch viruses.

If you install niche software from app stores (even reputable ones), you'll have to make sure to check it's the real deal (I think both the snap store and flathub had fake cryptowallets?), but if you stick to relatively mainstream software you'll be fine (I mean, it's not like you'll find fake versions of steam or blender on flathub).

That said, the risk is there just as with Windows (or your phone, or anything else): a good operating system helps, but ultimately you are the real line of defense.

(*) well, IDK about scanning... generally speaking, if you feel like you have to scan something before opening it, just don't open it :) (yeah I know it's not possible if - eg - you receive files from customers)

[–] talkingpumpkin@lemmy.world 1 points 1 day ago (2 children)

i use an hp printer, and need to be able to use it on linux.

Then research if your specific model has compatibility issues (AFAIK HP stuff generally works well, but... it's up to you to check before buying)

i expect to be able to use the laptop and not think about the os too much

That will happen if you are lucky or if you buy hardware that specifically supports linux.

Would you expect macos to run on a dell computer? would you expect windows to run on a mac? linux has much broader compatibility, but it's no different: if it doesn't work on your PC it's not linux's fault.

my goal of using linux is being far from malware

Just follow basic hygiene and you'll be fine. Most importantly, don't install malware yourself (chrome is available on linux too and, sadly, it's also widely used).

[–] talkingpumpkin@lemmy.world 3 points 3 days ago (1 children)

I've not looked into it much yet, but https://radicle.xyz/ seems interesting.

It's kind of a bittorrent-powered codeberg and it looks like it's worth playing around with (even though it might not save you much bandwidth... IDK how popular it is, but source usually doesn't weigh that much).

[–] talkingpumpkin@lemmy.world 3 points 1 week ago

Thanks for checking and reporting back! (I was too lazy to do that)

[–] talkingpumpkin@lemmy.world 8 points 1 week ago (2 children)

Doesn't the AGPL just say that you can't keep your changes/improvements private? (honest question: I seem to recall so, but I'm not really sure)

[–] talkingpumpkin@lemmy.world 1 points 1 week ago

I just meant that anything can happen eventually - debian wasn't the happiest example

[–] talkingpumpkin@lemmy.world 1 points 1 week ago (3 children)

could Red Hat eventually take control of the project?

Yes, and they could eventually take control of debian too.

Why bother mitigating such far-fetched risks though?

The mitigation cost is similar to the remediation one (ie. you'll just have to switch distro either way), and it's also likely to go down as the risk increases (ie. people will fork off fedora far sooner than the risk of it actually doing whatever bad things you fear Red Hat is gonna do to it becomes a practical concern).

BTW: are you aware the Linux Foundation is a US entity and funded by (among others) most US IT megacorps? (interestingly, amazon/aws is only a silver member - Bezos must really be a cheapskate)

[–] talkingpumpkin@lemmy.world 3 points 1 week ago

...or we could just stop paying attention to him (which we will do when it's no longer funny). He can do as he wants :)

[–] talkingpumpkin@lemmy.world 23 points 2 weeks ago (2 children)

...and to think I used to actually be excited about bcachefs (back in the day)

┐(´-`)┌

[–] talkingpumpkin@lemmy.world 8 points 2 weeks ago

OP, you can also use named icons from your theme

[Desktop Entry]
Icon=folder-videos

I think you can use any name from stuff inside /usr/share/icons, but I'm not 100% sure
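If you want to poke around for valid names yourself, a rough search over the usual icon directories does the trick (paths vary by distro and theme, so treat this as a sketch):

```shell
# list installed icon names containing "video",
# stripping directories and file extensions
find /usr/share/icons "$HOME/.local/share/icons" -name '*video*' 2>/dev/null \
  | sed 's|.*/||; s|\.[a-z]*$||' \
  | sort -u
```

AFAIK the names come from the freedesktop icon naming spec, so anything a theme ships under those directories should be fair game for Icon=.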

[–] talkingpumpkin@lemmy.world 7 points 2 weeks ago (1 children)

I actually like Debian’s slow update cycle, as I don’t want to be bothered often with setting up my system again.

I've been there too!

Updating to a new version is such a chore: you have to follow the news, then wonder how long to wait before updating, then set aside at least a few hours for the actual update (well, for fixing what may go wrong - not that stuff actually goes wrong, but you still set aside some time just in case).

The solution to this is in the exact opposite direction you'd imagine.

For a few years (since last time I got a new PC), I've been running a rolling distro (tumbleweed *) and... it's been great: no big updates, just incremental ones.

If anything breaks (it never happened to me: there have been times when errors prevented the system from updating, but it never broke on me), you just boot the snapshot from before the last update and try again in a few hours/days.

I want something as close to “set it and forget it” as possible.

That's nixos :) It takes a long time to "set" (and you never really finish doing it) but you can switch to a new PC at any time and have your exact system on it (bar the few things you have to change to account for the different hardware, of course).


* I hear that with arch&co you actually have to follow the release notes, as sometimes there are manual tasks to do - it's not so in tumbleweed (at least, as far as I know and as far as my experience goes) - IDK about other rolling distros (or debian testing/sid)

[–] talkingpumpkin@lemmy.world 6 points 2 weeks ago (2 children)

What does "powerful" distro mean? (no, I've not watched the video - just curious what it means)

 

I'm looking for a forgejo cli (something similar to gh for github or glab for gitlab - neither of which I've ever used).

I found one named forgejo-cli and another named fgj but, from a quick look at the source, both seem to save my API key in a plaintext file, which... I just find unacceptable (and, frankly, quite dumb).

Do you know of any others?

 

Here it is https://codeberg.org/gmg/concoctions/src/branch/main/sh-scripts/nixos-rebuild

(if you try it and find any bugs, please let me know)

edit: I didn't realize the screenshot shows just instead of nixos-rebuild... just runs a script ("recipe") that calls nixos-rebuild, so the output shown is from the (wrapped) nixos-rebuild

 

I'm trying to get my scripts to have precedence over the home manager stuff.

Do you happen to know how to do that?

(not sure it's relevant, but I'm using home-manager in tumbleweed, not nixos)


edit:

Thanks for the replies - I finally got time to investigate this properly so here's a few notes (hopefully useful for someone somehow).

~/.nix-profile/bin is added (prepended) to the path by the files in /nix/var/nix/profiles/default/etc/profile.d/, sourced every time my shell (fish, but it should be the same for others) starts (rg -L nix/profiles /etc 2> /dev/null for how they are sourced).

The path I set in homemanager (via home.sessionPath, which is added (prepended) to home.sessionSearchVariables.PATH) ends up in .nix-profile/etc/profile.d/hm-session-vars.sh, which is sourced via ~/.profile once per session (I think? certainly not when I start fish or bash). This may be due to how I installed home-manager... I don't recall.

So... the solution is to set the path again in my shell (possibly via programs.fish.shellInitLast - I didn't check yet).
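The stopgap amounts to re-prepending your scripts dir after the nix profile scripts have run; a sketch (with ~/bin standing in for wherever your scripts actually live):

```shell
# put this at the very end of shell init, after ~/.nix-profile/bin is prepended;
# idempotent: only re-prepends if ~/bin isn't already first in PATH
case "$PATH" in
  "$HOME/bin":*) ;;                  # already first, nothing to do
  *) PATH="$HOME/bin:$PATH" ;;
esac
export PATH
```

The case guard keeps the PATH from growing on every nested shell, which a bare `PATH="$HOME/bin:$PATH"` would do.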

 

I'd like to give my users some private network storage (private from me, ie. something encrypted at rest with keys that root cannot obtain).

Do you have any recommendations?

Ideally, it should be something where files are only decrypted on the client, but server-side decryption would be acceptable too as long as the server doesn't save the decryption keys to disk.

Before someone suggests that, I know I could just put LUKS-encrypted disk images on the NAS, but I'd like the whole thing to have decent performance (the idea is to allow people to store their photos/videos, and some may have several GB of files).


edit:

Thanks everyone for your comments!

TLDR: cryfs

Turns out I was looking at the problem from the wrong point of view: I was looking at sftpgo and wondering what I could do on the server side, but you made me realise this is really a client issue (and a solved one at that).

Here's a few notes after investigating the matter:

  • The use case is exactly the same as using client-side encryption with cloud storage (dropbox and those other things we self-hosters never use).
  • As an admin I don't have to do anything to support this use case, except maybe guiding my users in choosing what solution to adopt.
  • Most of the solutions (possibly all except cryfs?) encrypt file names and contents but leak the directory structure and file sizes (meaning I could pretty much guess whether they are storing their photos or... unsavory movies).
  • F-droid has an Android app (called DroidFS) that supports gocryptfs and cryfs.

I'll recommend my users try cryfs before any other solution. Others that may be worth looking at (in order): gocryptfs, cryptomator, securefs.

I'll recommend my users avoid cryptomator if possible, despite its popularity: it's one of those commercial open source projects with arbitrary limitations (5 seats, whatever that means) and may have nag screens or require people to migrate to some fork in the future.

ecryptfs is to be avoided at all costs, as it seems unmaintained.

19
submitted 5 months ago* (last edited 5 months ago) by talkingpumpkin@lemmy.world to c/europe@feddit.org
 

Delusional.

 

A lot of instructions for selfhosted containers include volume mounts like:

docker run ...
  -v /etc/timezone:/etc/timezone:ro \
  -v /etc/localtime:/etc/localtime:ro \
  ...

but every time I tried to skip those mounts, everything seemed to work perfectly.

Are those mounts only necessary in specific cases?

PS:

Bonus question: other containers' instructions say to define the TZ variable. Is that only needed when one wants a container to use a different timezone than the host?
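For what it's worth, I believe TZ is read directly by the C library's time functions, so (as long as tzdata is present in the image) the variable alone shifts the container's local time - easy to see on any host:

```shell
# TZ is honored by libc itself, no mounts needed
TZ=UTC date +%Z       # prints UTC
TZ=Asia/Tokyo date    # local time in Tokyo (assuming tzdata is installed)
```

Which would suggest the /etc/timezone and /etc/localtime mounts are mostly about making the container follow the host's timezone without hardcoding it.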

 

Prometheus-alertmanager and grafana (especially grafana!) seem a bit too involved for monitoring my homelab (prometheus itself is fine: it does collect a lot of statistics I don't care about, but it doesn't require configuration so it doesn't bother me).

Do you know of simpler alternatives?

My goals are relatively simple:

  1. get a notification when any systemd service fails
  2. get a notification if there is not much space left on a disk
  3. get a notification if one of the above can't be determined (eg. server down, config error, ...)

Seeing graphs with basic system metrics (eg. cpu/ram usage) would be nice, but it's not super-important.

I am a dev so writing a script that checks for whatever I need is way simpler than learning/writing/testing yaml configuration (in fact, I was about to write a script to send heartbeats to something like Uptime Kuma or Tianji before I thought of asking you for a nicer solution).
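To give an idea, the sort of script I have in mind (a rough sketch: the notify function and the 90% threshold are placeholders, wire it to mail/ntfy/whatever and run it from a timer):

```shell
#!/bin/sh
# sketch: alert on failed systemd units and on nearly-full filesystems

notify() { echo "ALERT: $*"; }   # placeholder: swap in your real notifier

# goal 1: any failed systemd units?
systemctl list-units --state=failed --no-legend --plain \
  | awk '{print $1}' \
  | while read -r unit; do notify "unit failed: $unit"; done

# goal 2: any filesystem above 90% usage? (skip pseudo-filesystems)
df -P -x tmpfs -x devtmpfs | tail -n +2 \
  | awk '{ gsub("%","",$5); if ($5+0 > 90) print $6, $5"%" }' \
  | while read -r mount pcent; do notify "low space on $mount ($pcent used)"; done
```

Goal 3 then reduces to "the script itself failed to run or report", which is exactly what a heartbeat to something like Uptime Kuma would cover.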

 

I'm not very hopeful, but... just in case :)

I would like to be able to start a second session in a window of my current one (I mean a second session where I log in as a different user, similar to what happens with the various ctrl+alt+Fx, but starting a graphical session rather than a console one).

Do you know of some software that lets me do it?

Can I somehow run a KVM using my host disk as the disk for the guest VM (and without breaking stuff)?

 

I have two subnets and am experiencing some pretty weird (to me) behaviour - could you help me understand what's going on?


Scenario 1

PC:                        192.168.11.101/24
Server: 192.168.10.102/24, 192.168.11.102/24

From my PC I can connect to .11.102, but not to .10.102:

ping -c 10 192.168.11.102 # works fine
ping -c 10 192.168.10.102 # 100% packet loss

Scenario 2

Now, if I disable .11.102 on the server (ip link set <dev> down) so that it only has an ip on the .10 subnet, the previously failing ping works fine.

PC:                        192.168.11.101/24
Server: 192.168.10.102/24

From my PC:

ping -c 10 192.168.10.102 # now works fine

This is baffling to me... any idea why it might be?


Here's some additional information:

  • The two subnets are on different vlans (.10/24 is untagged and .11/24 is tagged 11).

  • The PC and Server are connected to the same managed switch, which however does nothing "strange" (it just leaves tags as they are on all ports).

  • The router is connected to the aforementioned switch and set to forward packets between the two subnets (I'm pretty sure I've configured it so, plus IIUC the scenario 2 ping wouldn't work without forwarding).

  • The router also has the same vlan setup, and I can ping both .10.1 and .11.1 with no issue in both scenarios 1 and 2.

  • In case it may matter, the PC has the following routes, set up by networkmanager from dhcp:

default via 192.168.11.1 dev eth1 proto dhcp              src 192.168.11.101 metric 410
192.168.11.0/24          dev eth1 proto kernel scope link src 192.168.11.101 metric 410
  • In case it may matter, the Server uses systemd-networkd and the routes generated from DHCP are slightly different (after dropping the .11.102 address for scenario 2, of course the relevant routes disappear):
default via 192.168.10.1 dev eth0 proto dhcp              src 192.168.10.102 metric 100
192.168.10.0/24          dev eth0 proto kernel scope link src 192.168.10.102 metric 100
192.168.10.1             dev eth0 proto dhcp   scope link src 192.168.10.102 metric 100
default via 192.168.11.1 dev eth1 proto dhcp              src 192.168.11.102 metric 101
192.168.11.0/24          dev eth1 proto kernel scope link src 192.168.11.102 metric 101
192.168.11.1             dev eth1 proto dhcp   scope link src 192.168.11.102 metric 101

solution

(please do comment if something here is wrong or needs clarifications - hopefully someone will find this discussion in the future and find it useful)

In scenario 1, packets from the PC to the server are routed through .11.1.

Since the server also has an .11/24 address, packets from the server to the PC (including replies) are not routed and instead just sent directly over ethernet.

Since the PC does not expect replies from a different machine than the one it contacted, they are discarded on arrival.

The solution to this (if one still thinks the whole thing is a good idea) is to route traffic originating from the server and directed to .11/24 via the router.

This could be accomplished with ip route del 192.168.11.0/24, which would however break connectivity with .11/24 addresses (for a similar reason as above: incoming traffic would not be routed but replies would be)...

The more general solution (which, IDK, may still have drawbacks?) is to setup a secondary routing table:

echo 50 mytable >> /etc/iproute2/rt_tables # this defines the routing table
                                           # (see "ip rule" and "ip route show table <table>")
ip rule add from 192.168.10/24 iif lo table mytable priority 1 # "iif lo" selects only 
                                                               # packets originating
                                                               # from the machine itself
ip route add default via 192.168.10.1 dev eth0 table mytable # "dev eth0" is the interface
                                                             # with the .10/24 address,
                                                             # and might be superfluous

Now, in my mind, that should break connectivity with .10/24 addresses just like ip route del above, but in practice it does not seem to (if I remember, I'll come back and explain why after studying some more).
