PlutoniumAcid

joined 1 year ago
[–] PlutoniumAcid@lemmy.world 2 points 2 weeks ago (1 children)

Yeah, so if I don't see it coming, I'm not scared.

[–] PlutoniumAcid@lemmy.world 5 points 2 weeks ago

Also romanticised in the famous novel The Neverending Story.

[–] PlutoniumAcid@lemmy.world 7 points 1 month ago (2 children)

What a stupid question. Just go visit it??

[–] PlutoniumAcid@lemmy.world 2 points 1 month ago (1 children)

That sounds awfully complicated for home use.

[–] PlutoniumAcid@lemmy.world 85 points 2 months ago (7 children)

Zero trust, but you have to use Amazon AWS, Cloudflare, and make your own Telegram bot? And have the domain itself managed by Cloudflare.

Sounds like a lot of trust right there... Would love to be proven wrong.

[–] PlutoniumAcid@lemmy.world 2 points 2 months ago

Barbarian planets are called meteors.

[–] PlutoniumAcid@lemmy.world -3 points 2 months ago

You should worry about your writing skills. Try some punctuation, for starters.

[–] PlutoniumAcid@lemmy.world 5 points 2 months ago (1 children)

Yup. They burn heavy bunker fuel - the sludge that is too bad to be used for anything else.

Considering the amount of shipping, it's horrendous.

But - and there's always another view - I don't know how much energy you'd need to haul that much cargo by other means like rail and trucks. One container ship carries as much as a thousand trains could carry. Vessels are really, really large, which makes them quite efficient.

[–] PlutoniumAcid@lemmy.world 1 points 3 months ago

than reproducing a desktop

Oh you sweet summer child 😊

I get what you mean, I really do, but the mobile launcher is very different from desktop.

Let me find an image of what a mobile desktop used to look like. It was literally the original Windows 95 desktop, complete with Recycle Bin, Start menu, and taskbar. That simply does not work on a mobile device, and modern phone launchers are light years ahead of those olden days.

I hope this link works:

https://external-content.duckduckgo.com/iu/?u=https%3A%2F%2Fforum.winworldpc.com%2Fuploads%2Feditor%2Fd2%2Ftoe3rv6udc3v.gif&f=1&nofb=1&ipt=3aece61729c72898491d3dbfe7c7c767230ba63c1bc175f932e91874ab6bb5f6&ipo=images

[–] PlutoniumAcid@lemmy.world 4 points 3 months ago

Yes, you are right of course. It's a sad state of affairs.

[–] PlutoniumAcid@lemmy.world 5 points 3 months ago (1 children)

😲 13 years?? I had no idea! Wild. Okay, maybe I need to find a modern alternative - but I haven't found anything yet that's as nice as Nova was.

 

I run an old desktop mainboard as my homelab server. It runs Ubuntu smoothly at loads between 0.2 and 3 (whatever unit that is).

Problem:
Occasionally, the CPU load skyrockets above 400 (yes really), making the machine totally unresponsive. The only solution is the reset button.

Solution:

  • I haven't found the cause yet, but I think a reboot every few days would prevent it from ever happening. That could be done easily with a crontab line.
  • Alternatively, I would like some dead-simple script running in the background that simply looks at the CPU load and executes a reboot when the load climbs over a given threshold.

--> How could such a CPU-load-triggered reboot be implemented?


edit: I asked ChatGPT to help me create a script that is started by crontab every X minutes. The script has a kill threshold that does a kill -9 on the top process, and a higher reboot threshold that ... reboots the machine. Before doing either (or neither), it writes a log line. I hope this will keep my system running, and I will review the log file to see how it fares. Or it might inexplicably break my system. Fun!
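For reference, a minimal sketch of the kind of watchdog described in the edit - the thresholds, paths, and cron interval are assumptions, not tested values:

```
#!/bin/sh
# Hypothetical load watchdog - run from root's crontab, e.g. every 5 minutes:
#   */5 * * * * /usr/local/bin/load-watchdog.sh
# Thresholds and log path are placeholders.
KILL_THRESHOLD=100
REBOOT_THRESHOLD=200
LOGFILE=/var/log/load-watchdog.log

# 1-minute load average, integer part only (/proc/loadavg always uses a dot)
LOAD=$(cut -d. -f1 /proc/loadavg)

if [ "$LOAD" -ge "$REBOOT_THRESHOLD" ]; then
    echo "$(date -Is) load=$LOAD -> rebooting" >> "$LOGFILE"
    /sbin/reboot
elif [ "$LOAD" -ge "$KILL_THRESHOLD" ]; then
    # kill -9 the single most CPU-hungry process, and log which one it was
    TOP=$(ps -eo pid,%cpu,comm --sort=-%cpu --no-headers | head -n1)
    echo "$(date -Is) load=$LOAD -> kill -9 ${TOP}" >> "$LOGFILE"
    kill -9 "$(echo "$TOP" | awk '{print $1}')"
else
    echo "$(date -Is) load=$LOAD ok" >> "$LOGFILE"
fi
```

The kill branch is the risky part: if the top process happens to be something essential, the -9 will take it down, so the log is worth watching for a while before trusting it.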

 
37
submitted 9 months ago* (last edited 9 months ago) by PlutoniumAcid@lemmy.world to c/selfhosted@lemmy.world
 

TLDR: VPN-newbie wants to learn how to set up and use VPN.

What I have:

Currently, many of my selfhosted services are publicly available via my domain name. I am aware that it is safer to keep things closed and use a VPN to access them -- but I don't know how that works.

  • domain name mapped via Cloudflare > static WAN IP > ISP modem > Ubiquiti USG3 gateway > Linux server and Raspberry Pi.
  • ports 80 and 443 forwarded to Nginx Proxy Manager; everything else closed.
  • Linux server running Docker and several containers: NPM, Portainer, Paperless, Gitea, Mattermost, Immich, etc.
  • Raspberry Pi running Pi-hole as DNS server for LAN clients.
  • Synology NAS as network storage.

What I want:

  • access services from WAN via Android phone.
  • access services from WAN via laptop.
  • maybe still keep some things public?
  • noob-friendly solution: needs to be easy to "grok" and easy to maintain when services change.
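For what it's worth, the usual answer to this setup is a WireGuard tunnel into the LAN. A very rough sketch, assuming WireGuard runs on the Linux server, UDP 51820 is forwarded to it by the USG3, and all keys and addresses below are placeholders:

```
# on the Ubuntu server
sudo apt install wireguard

# one key pair for the server, one per client (phone, laptop)
wg genkey | tee server.key | wg pubkey > server.pub
wg genkey | tee phone.key  | wg pubkey > phone.pub

# /etc/wireguard/wg0.conf on the server (addresses are placeholders)
sudo tee /etc/wireguard/wg0.conf > /dev/null <<'EOF'
[Interface]
Address = 10.10.10.1/24
ListenPort = 51820
PrivateKey = <contents of server.key>

[Peer]
# Android phone
PublicKey = <contents of phone.pub>
AllowedIPs = 10.10.10.2/32
EOF

sudo sysctl -w net.ipv4.ip_forward=1   # let tunnel clients reach the rest of the LAN
sudo systemctl enable --now wg-quick@wg0
```

The phone and laptop then get the mirror image in the official WireGuard app: their own private key, the server's public key, Endpoint = your-domain:51820, AllowedIPs = 192.168.1.0/24 plus 10.10.10.0/24, and DNS pointed at the Pi-hole. Depending on the gateway you may also need a NAT rule or a static route for the 10.10.10.0/24 range. Once that works, ports 80/443 only need to stay open for whatever you still want public.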
 

I have some jet lighters in my shop. I'm not a smoker, but they are useful for other things too. My problem is that they stop working after a while.

When I buy them they are fine, push the button, clear "click" sound and a fine hot jet of fire. After a while though, they simply won't fire anymore, even though the little window shows that there's plenty of gas inside.

Do these use the same propane/butane as regular lighters?

 

Printing here with eSun PLA at 215 °C on a Prusa Mini, and there are lots of hairline strings.

What's causing those strings? Temp too low?

 

Background:

  • At work we use MS Office, because who doesn't. We used to have a central file server with lots of well sorted directories.
  • Then Corporate decided to ditch that: everything must move into OneDrive so there is always a Data Owner.
  • The local boss had to move everything from the network share into his own OneDrive, and then share, with each of us, the folders that were relevant to each of us.
  • This sounds like distributed storage, which is probably smart in some way.

In reality, it's shit. Everything is now a link to "corporateName.sharepoint.com" in the browser, and it's a hassle to find any of it in the file explorer. Someone just shared a folder with me; I see it in my browser. How do I get it from the browser into a normal folder view? Should I forget about on-disk storage; is everything today just a browser bookmark?

Worse, I have no idea what's where. Some people share some stuff and somehow it ends up in my OneDrive, but what's the context of it?

This seems so wrong to me. Am I just not "getting" it??

 

TLDR:

  • Update: the server software has a bug in generating and saving certificates. The bug has been reported; as a workaround I added the local IP to my local 'hosts' file so I can continue (but that does not solve it, of course).
  • I suspect there's a problem with running two servers off the same IP address, each with their own DNS name?

Problem:

  • When I enter https://my.domain.abc into Firefox, I get an error ERR_SSL_UNRECOGNIZED_NAME_ALERT instead of seeing the site.

Context:

  • I have a static public IP address, and a Unifi gateway that forwards ports 80 and 443 to my server at 192.168.1.10, where Nginx Proxy Manager is running as a Docker container. This also gives me a Let's Encrypt certificate.
  • I use Cloudflare and have a domain foo.abc pointed to my static public IP address. This domain works, and also a number of subdomains with various Docker services.
  • I have now set up a second server running yunohost. I can access this on my local LAN at https://192.168.1.14.
  • This yunohost is set up with a DynDNS xyz.nohost.me. The current certificate is self-signed.
  • Certain other ports that yunohost wants (22,25,587,993,5222,5269) are also routed directly to 192.168.1.14 by the gateway mentioned above.
  • All of the above context is OK. Yunohost diagnostics says that DNS records are correctly configured for this domain. Everything is great (except reverse DNS lookup which is only relevant for outgoing email).

Before getting a proper certificate for the yunohost server and its domain, I need to make the yunohost reachable at all, and I don't see what I am missing.

What am I missing?
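One thing worth checking: that error means whichever server answered the TLS handshake did not recognise the hostname the browser sent (SNI). Since 80/443 are forwarded to NPM on 192.168.1.10, requests for the yunohost name most likely land there as well. A quick way to see who actually answers for each name (the public IP below is a placeholder):

```
# ask the public IP for a certificate, sending each hostname as SNI,
# and print the subject of whatever comes back
openssl s_client -connect YOUR.PUBLIC.IP:443 -servername foo.abc </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer

openssl s_client -connect YOUR.PUBLIC.IP:443 -servername xyz.nohost.me </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer
```

If the second command comes back empty or with the wrong certificate, the request probably never reaches yunohost at all, and the likely fix is either a proxy host in NPM for that name (forwarding to 192.168.1.14) or not sharing port 443 between the two boxes.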

 

I mean, the simplest answer is to lay a new cable, and that is definitely what I am going to do - that's not my question.

But this is a long run, and it would be neat if I could salvage some of that cable. How can I discover where the cable is damaged?

One stupid solution would be to cut the cable in half, crimp the new ends, and test each half. Repeat iteratively. I would end up with a few broken cables and a bunch of tested ones, but they might be short.

How do the pros do this? (Short of throwing the whole thing away!)
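For the record, the pro tool for this is a TDR (time-domain reflectometer), or a network tester that reports per-pair fault distance; many managed switches have a built-in cable test that does the same. On Linux, some NICs expose the PHY's own TDR through ethtool - support depends on the driver, so treat this as a maybe:

```
# needs a reasonably recent kernel/ethtool and a PHY with cable-test support;
# reports per-pair OK/open/short plus an approximate distance to the fault
sudo ethtool --cable-test enp3s0    # interface name is a placeholder
```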

 

edit: you are right, it's the I/O WAIT that is destroying my performance:
%Cpu(s): 0,3 us, 0,5 sy, 0,0 ni, 50,1 id, 49,0 wa, 0,0 hi, 0,1 si, 0,0 st
I could clearly see it using nmon (d, then l) as suggested by @SayCyberOnceMore. Not quite sure what to do about it, as it's simply my sdb1 drive, which is a Samsung 1TB 2.5" HDD. I have now ordered a 2TB SSD, and maybe I will reinstall from scratch on that new drive as sda1. I realize that's just treating the symptom and not the root cause, so I should probably also look for that root cause. But that's for another Lemmy thread!

I really don't understand what is causing this. I run a few very small containers, and everything is fine - but when I start something bigger like Photoprism, Immich, or even MariaDB or PostgreSQL, then something causes the CPU load to rise indefinitely.

Notably, the top command doesn't show anything special: nothing eats RAM, nothing uses 100% CPU. And yet the load is rising fast. If I leave it be, my ssh session loses its connection. Hopping onto the host itself shows a load of over 50, or even over 70. I don't grok how a system can even get that high at all.

My server is an older Intel i7 with 16GB RAM running Ubuntu 22.04 LTS.

How can I troubleshoot this, when 'top' doesn't show any culprit and it does not seem to be caused by any one specific container?

(This makes me wonder how people can run anything at all off of a Raspberry Pi. My machine isn't "beefy", but a Pi has far less power.)
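For anyone else hitting this, a few commands that help attribute I/O wait to a device and a process (iostat and pidstat come from the sysstat package, iotop is its own package):

```
sudo apt install sysstat iotop

iostat -x 2          # per-device utilisation and wait times, refreshed every 2 s
sudo iotop -o        # only processes currently doing disk I/O
pidstat -d 2         # per-process read/write rates, text-only

# processes stuck in uninterruptible sleep ("D" state) are what drive the load up
ps -eo state,pid,cmd | awk '$1=="D"'
```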

 

I am looking to buy a 3D printer for my son (and for myself too). We want to print, not tinker, so it should be something that gives great results right from the start.

Can you guide me to a sensible choice?

My first choice would have to be the Prusa MK3S+, but it is outside the price range I am shopping in -- unless I buy used. Would that be a bad idea?

Realistic choices:

  • €380 used Prusa MK3S+, with 10 days printing time
  • €400 new Prusa Mini+
  • €250 new Ender 3 V2 Neo

Criteria:

  • High quality, no hassle. I want to print, not tinker.
  • Preferably (semi)assembled.
  • Auto bed leveling.
  • Auto error detection (filament, power, etc.?).
  • Budget up to 600 EUR/USD including extras, excluding filament.
  • Speed is not important.
  • Size is not important.
  • Must not be cloud-based.

Questions:

  • Surface?! Smooth, satin, or textured? (Why) Should I have more than one kind?
  • (Why) Do I need an enclosure?
 

TLDR: I consistently fail to set up Nextcloud on Docker. Halp pls?

Hi all - please help out a fellow self-hoster, if you have experience with Nextcloud. I have tried several approaches but I fail at various steps. Rather than describe my woes, I hope that I could get a "known good" configuration from the community?

What I have:

  • a homelab server and a NAS, wired to a dedicated switch using priority ports.
  • the server is running Linux, Docker, and Nginx Proxy Manager (NPM), which takes care of domains and SSL certs.

What I want:

  • a docker-compose.yml that sets up Nextcloud without SSL. Just that.
  • ideally but optionally, the compose file might include Nextcloud office-components and other neat additions that you have found useful.

Your comments, ideas, and other input will be much appreciated!!
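To make "without SSL" concrete, this is the rough, untested shape I have in mind - Nextcloud speaking plain HTTP inside Docker, MariaDB next to it, and NPM terminating SSL in front. Names, passwords, and the host port are placeholders; translating it into a docker-compose.yml is mechanical.

```
docker network create nextcloud-net

docker run -d --name nextcloud-db --network nextcloud-net \
  -e MARIADB_ROOT_PASSWORD=changeme-root \
  -e MARIADB_DATABASE=nextcloud \
  -e MARIADB_USER=nextcloud \
  -e MARIADB_PASSWORD=changeme \
  -v nextcloud-db:/var/lib/mysql \
  mariadb:10.11

docker run -d --name nextcloud --network nextcloud-net \
  -e MYSQL_HOST=nextcloud-db \
  -e MYSQL_DATABASE=nextcloud \
  -e MYSQL_USER=nextcloud \
  -e MYSQL_PASSWORD=changeme \
  -v nextcloud-data:/var/www/html \
  -p 8081:80 \
  nextcloud:apache
```

NPM would then get a proxy host for the Nextcloud domain pointing at the server's LAN IP on port 8081, and that domain goes into trusted_domains in Nextcloud's config.php.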

 

TLDR: I am running some Docker containers on a homelab server, and the containers' volumes are mapped to NFS shares on my NAS. Is that bad for performance?

  • I have a Linux PC that acts as my homelab server, and a Synology NAS.
  • The server is fast but has only a 100GB SSD.
  • The NAS is slow(er) but has oodles of storage.
  • Both devices are wired to their own little gigabit switch, using priority ports.

Of course it's slower to run off HDDs than off an SSD, but I do not have a large SSD. The question is: (why) would it be "bad practice" to separate compute and storage this way? Isn't that pretty much what a data center does?
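Concretely, the mapping looks like this - a named Docker volume backed by an NFS export on the NAS (the IP and export path are placeholders):

```
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.20,nfsvers=4,rw \
  --opt device=:/volume1/docker/immich \
  immich-data

# containers then mount it like any other named volume
docker run --rm -v immich-data:/data alpine ls /data
```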
