suicidaleggroll

joined 1 week ago
[–] suicidaleggroll@lemm.ee 3 points 1 week ago

What does farming have to do with anything? It was a paid endorsement, an infomercial run by the sitting President of the US. It's disgusting.

[–] suicidaleggroll@lemm.ee 21 points 1 week ago* (last edited 1 week ago) (1 children)

Elon is really unpopular right now and “regular” republicans feel like he and DOGE are attacking them.

Only those who have been personally harmed. The rest of them, and that's the vast majority, actually believe Elon is cleaning up the government and finding/eliminating corruption. I work with one; he's convinced that everything DOGE has found and cut is corruption/fraud, that the kids rooting through the Treasury are geniuses capable of finding things that no other administration has been able to (or been willing to?) find, and that ultimately the government will run more efficiently and taxes can be lower for regular people when it's all said and done. They're too deep in the rabbit hole to see the light.

[–] suicidaleggroll@lemm.ee 5 points 1 week ago* (last edited 1 week ago)

Best luck I've had with laptops has been Razer, actually. They're gaming laptops, so a bit warm and loud and the battery life isn't great, but they're built like a brick, can be easily opened, all parts are easily replaceable/upgradeable, and since they generally use Intel everything, Linux compatibility is solid as well (except for RGB lighting and stuff, but with OpenRazer and Polychromatic even that usually works except for brand new models).

My last laptop was a Razer Blade 14 which ran great for like 6 years before I just got bored and decided I wanted to upgrade to a newer model with a better display. Over the 6 years I used it I upgraded the RAM, upgraded the SSD, added a second SSD, upgraded the WiFi card, etc. It ran literally 24/7 during that entire time other than brief moments when I shut it down to throw in a backpack for travel; the only thing I had to replace for maintenance was the battery. I now have a Razer Blade 16 which has been great for the last year, zero issues, also running 24/7.

Before Razer I used Dell, Lenovo, HP, and Asus. None of them lasted more than 2-3 years before either the plastic crap holding it together fell apart, or the monitor, mouse, or keyboard failed, or I wanted/needed to upgrade something that was not user-replaceable (usually RAM or WiFi).

[–] suicidaleggroll@lemm.ee 1 points 1 week ago (1 children)

Would you mind if I added this as a discussion (crediting you and this post!) in the github project?

Yeah that would be fine

[–] suicidaleggroll@lemm.ee 3 points 1 week ago (1 children)

The measure of whether a system of government is good or bad is not "how long it lasts".

[–] suicidaleggroll@lemm.ee 2 points 1 week ago (1 children)

They didn't provide an rsync example until later in the post; the comment about not supporting differential backups is in reference to rsync itself, which is incorrect, because rsync does support differential backups.

I agree with you that not doing differential backups is a problem, I'm simply commenting that this is not a drawback of using rsync, it's an implementation problem on the user's part. It would be like somebody saying "I like my Rav4, it's just problematic because I don't go to the grocery store with it" and someone else saying "that's a big drawback, the grocery store has a lot of important items and you need to be able to go to it". While true, it's based on a faulty premise, because of course a Rav4 can go to the grocery store like any other car, it's a non-issue to begin with. OP just needs to fix their backup script to start doing differential backups.

[–] suicidaleggroll@lemm.ee 6 points 1 week ago (3 children)

The issue is that the purpose of a union is to give power to the powerless, but police already have all the power. Their union makes them unstoppable.

[–] suicidaleggroll@lemm.ee 2 points 1 week ago* (last edited 1 week ago)

My KVM hosts use “virsh backup-begin” to make full backups nightly.
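A minimal sketch of what that nightly loop might look like, using libvirt's `virsh backup-begin` (libvirt 6.0+). The `DRY_RUN` guard and the domain name `vm1` are illustrative; with the guard unset, the real command runs against each guest:

```shell
# Sketch: full backup of each running libvirt guest via "virsh backup-begin".
# DRY_RUN=1 only prints the commands; unset it on a real KVM host.
DRY_RUN=1

backup_domain() {
  if [ -n "$DRY_RUN" ]; then
    echo "virsh backup-begin $1"
  else
    virsh backup-begin "$1"
  fi
}

# On a real host the domain list would come from: virsh list --name
backup_domain vm1
```

Without a backup XML argument, `virsh backup-begin` performs a full backup with libvirt's defaults; a custom XML can select disks and targets.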

All machines, including the KVM hosts and laptops, use rsync with --link-dest to create daily incremental versioned backups on my main backup server.

The main backup server pushes client-side encrypted backups which include the latest daily snapshot for every system to rsync.net via Borg.

I also have two DASs with two 22 TB encrypted drives in each. One of these is plugged into the backup server while the other sits powered off in a drawer in my desk at work. The main backup server pushes all backups to this DAS weekly, and I swap the two DASs roughly monthly so the one in my desk at work is never more than a month or so out of date.

[–] suicidaleggroll@lemm.ee 2 points 1 week ago* (last edited 1 week ago) (3 children)

It’s not a drawback because rsync has supported incremental versioned backups for over a decade, you just have to use the --link-dest flag and add a couple lines of code around it for management.

[–] suicidaleggroll@lemm.ee 4 points 1 week ago* (last edited 1 week ago) (1 children)

But from a grammatical sense it’s the opposite. In a sentence, a comma is a short pause, while a period is a hard stop. That means it makes far more sense for the comma to be the thousands separator and the period to be the stop between integer and fraction.

[–] suicidaleggroll@lemm.ee 4 points 1 week ago (1 children)

I hoped it would be better, but all in all I thought it was enjoyable

[–] suicidaleggroll@lemm.ee 3 points 1 week ago* (last edited 1 week ago) (4 children)

Sure, it's a bit hacked together, but not too bad. Honestly the dockcheck portion is already pretty complete, I'm not sure what all you could add to improve it. The custom plugin I'm using does nothing more than dump the array of container names with available updates to a comma-separated list in a file. In addition to that I also have a wrapper for dockcheck which does two things:

  1. dockcheck plugins only run when there's at least one container with available updates, so the wrapper is used to handle cases when there are no available updates.
  2. Some containers aren't handled by dockcheck because they use their own management system, two examples are bitwarden and mailcow. The wrapper script can be modified as needed to support handling those as well, but that has to be one-off since there's no general-purpose way to handle checking for updates on containers that insist on doing things in their own custom way.

Basically there are 5 steps to the setup:

  1. Enable Prometheus metrics from Docker (this is just needed to get running/stopped counts; if those aren't needed it can be skipped). To do that, add the following to /etc/docker/daemon.json (create it if necessary) and restart Docker:
{
  "metrics-addr": "127.0.0.1:9323"
}

Once running, you should be able to run curl http://localhost:9323/metrics and see a dump of Prometheus metrics
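The counts used later in the Python script come from lines like the ones below. A quick sanity check of the extraction logic (the values are made up; on a real host this would be piped from curl instead of a here-string):

```shell
# Sample of the Docker engine metric lines this setup cares about;
# the number after the label is the container count for that state.
metrics='engine_daemon_container_states_containers{state="paused"} 0
engine_daemon_container_states_containers{state="running"} 12
engine_daemon_container_states_containers{state="stopped"} 3'

# Extract the running count the same way the Python script does:
# match the labeled line, take the second whitespace-separated field.
running=$(printf '%s\n' "$metrics" | awk '/state="running"/ {print $2}')
echo "$running"    # prints 12
```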

  2. Clone dockcheck, and create a custom plugin for it at dockcheck/notify.sh:
send_notification() {
  # dockcheck passes the names of containers with available updates as args
  Updates=("$@")

  # Build a comma-separated list, then strip the leading ", "
  UpdToString=$(printf ", %s" "${Updates[@]}")
  UpdToString=${UpdToString:2}

  File=updatelist_local.txt

  echo -n "$UpdToString" > "$File"
}
  3. Create a wrapper for dockcheck:
#!/bin/bash

# Run from the script's own directory so relative paths resolve
cd "$(dirname "$0")" || exit 1

./dockcheck/dockcheck.sh -mni

# The notify plugin only runs when updates exist, so write the
# "None" marker ourselves when it didn't produce a list
if [[ -f updatelist_local.txt ]]; then
  mv updatelist_local.txt updatelist.txt
else
  echo -n "None" > updatelist.txt
fi

At this point you should be able to run your script, and at the end you'll have the file "updatelist.txt", which will contain either a comma-separated list of all containers with available updates, or "None" if there are none. Add this script into cron to run on whatever cadence you want; I use 4 hours.
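For reference, a 4-hour cadence in cron might look like this (the wrapper path is an assumption):

```
# run the dockcheck wrapper every 4 hours
0 */4 * * * /path/to/dockcheck-wrapper.sh
```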

  4. The main Python script:
#!/usr/bin/python3

from flask import Flask, jsonify

import os
import time
import requests

app = Flask(__name__)

# Listen addresses for docker metrics
dockerurls = ['http://127.0.0.1:9323/metrics']

# Other dockerstats servers
staturls = []

# File containing list of pending updates
updatefile = '/path/to/updatelist.txt'

@app.route('/metrics', methods=['GET'])
def get_tasks():
  running = 0
  stopped = 0
  updates = ""

  for url in dockerurls:
      response = requests.get(url)

      if (response.status_code == 200):
        for line in response.text.split("\n"):
          if 'engine_daemon_container_states_containers{state="running"}' in line:
            running += int(line.split()[1])
          if 'engine_daemon_container_states_containers{state="paused"}' in line:
            stopped += int(line.split()[1])
          if 'engine_daemon_container_states_containers{state="stopped"}' in line:
            stopped += int(line.split()[1])

  for url in staturls:
      response = requests.get(url)

      if (response.status_code == 200):
        apidata = response.json()
        running += int(apidata['results']['running'])
        stopped += int(apidata['results']['stopped'])
        if (apidata['results']['updates'] != "None"):
          updates += ", " + apidata['results']['updates']

  if os.path.isfile(updatefile):
    age = time.time() - os.stat(updatefile).st_mtime
    if age < 86400:
      with open(updatefile, "r") as f:
        temp = f.readline()
      if temp != "None":
        updates += ", " + temp
    else:
      updates += ", Error"
  else:
    updates += ", Error"

  if not updates:
    updates = "None"
  else:
    updates = updates[2:]

  status = {
    'running': running,
    'stopped': stopped,
    'updates': updates
  }
  return jsonify({'results': status})

if __name__ == '__main__':
  app.run(host='0.0.0.0')

The neat thing about this program is that it's nestable: if you run steps 1-4 independently on all of your Docker servers (assuming you have more than one), you can pick one of the machines to be the "master" and update the "staturls" variable to point to the other ones, allowing it to collect all of the data from other copies of itself into its own output.

If the output of this program will only need to be accessed from localhost, you can change the host variable in app.run to 127.0.0.1 to lock it down. Once it's running, you should be able to run curl http://localhost:5000/metrics and see the running and stopped container counts and available updates for the current machine and any other machines you've added into "staturls".

You can then turn this program into a service, or launch it @reboot in cron or in /etc/rc.local, whatever fits with your management style, to start it up on boot. Note that it verifies the age of the updatelist.txt file before using it; if the file is more than a day old, something is likely wrong with the dockcheck wrapper script or similar, and rather than using stale output the REST API will report "Error" to let you know.
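One way to start it on boot, sketched as a systemd unit (the unit name and script path are assumptions; adjust to taste):

```ini
# /etc/systemd/system/dockerstats.service (hypothetical name and path)
[Unit]
Description=Docker stats aggregation REST API
After=network.target docker.service

[Service]
ExecStart=/usr/bin/python3 /path/to/dockerstats.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable --now dockerstats; a cron @reboot entry works just as well if you'd rather avoid unit files.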

  5. Finally, the Homepage custom API to pull the data into the dashboard:
        widget:
          type: customapi
          url: http://localhost:5000/metrics
          refreshInterval: 2000
          display: list
          mappings:
            - field:
                results: running
              label: Running
              format: number
            - field:
                results: stopped
              label: Stopped
              format: number
            - field:
                results: updates
              label: Updates