glizzyguzzler

joined 2 years ago

They do not, and I, a simple skin-bag of chemicals (mostly water tho) do say

[–] glizzyguzzler@lemmy.blahaj.zone 1 points 4 weeks ago (4 children)

I was channeling the Interstellar docking computer (“improper contact” in such a sassy voice) ;)

There is a distinction between data and an action you perform on data (matrix maths, codec algorithm, etc.). It’s literally completely different.

An audio codec (not a pipeline) is just actually doing math - just like the workings of an LLM. There’s plenty of work to be done after the audio codec decodes the m4a to get to tunes in your ears. Same for an LLM, sandwiching those matrix multiplications that make the magic happen are layers that crunch the prompts and assemble the tokens you see it spit out.

LLMs can’t think; that’s just the fact of how they work. The problem is that AI companies are happy to describe them in terms that make you think they can think, to sell their product! I literally cannot be wrong that LLMs cannot think or reason; there’s no room for debate, it was settled long ago. AI companies will string LLMs together and let them chew for a while to try to make them catch when they’re dropping bullshit. It’s still not thinking and reasoning, though. They can be useful tools, but LLMs are just tools, not sentient or verging on sentient.

[–] glizzyguzzler@lemmy.blahaj.zone 2 points 1 month ago* (last edited 1 month ago) (9 children)

Improper comparison; an audio file isn’t the basic action on data, it is the data; the audio codec is the basic action on the data

“An LLM model isn’t really an LLM because it’s just a series of numbers”

But the action of turning the series of numbers into something of value (audio codec for an audio file, matrix math for an LLM) is an action that can be analyzed

And clearly matrix multiplication cannot reason any better than an audio codec algorithm. It’s matrix math, it’s cool we love matrix math. Really big matrix math is really cool and makes real sounding stuff. But it’s just matrix math, that’s how we know it can’t think

[–] glizzyguzzler@lemmy.blahaj.zone 7 points 1 month ago (1 children)

It’s literally tokens. Doesn’t matter if it completes the next word or next phrase, still completing the next most likely token 😎😎 can’t think can’t reason can witch’s brew facsimile of something done before

[–] glizzyguzzler@lemmy.blahaj.zone 6 points 1 month ago (15 children)

You can prove it’s not by doing some matrix multiplication and seeing that it’s matrix multiplication. Much easier way to go about it

[–] glizzyguzzler@lemmy.blahaj.zone 12 points 1 month ago (3 children)

Too deep on the AI propaganda there, it’s completing the next word. You can give the LLM base umpteen layers to make complicated connections, still ain’t thinking.

The LLM corpos trying to get nuclear plants to power their gigantic data centers, while AAA devs aren’t trying to buy nuclear plants, says that’s a straw man and that you’re wrong on top of it.

Using a pre-trained and memory-crushed LLM that can run on a small device won’t take up too much power. But that’s not what you’re thinking of. You’re thinking of the LLM only accessible via ChatGPT’s API, with a yuge context length and massive matrices that need hilariously large amounts of RAM and compute power to execute. And it’s still a facsimile of thought.

It’s okay they suck and have very niche actual use cases - maybe it’ll get us to something better. But they ain’t gold, they ain’t smart, and they ain’t worth destroying the planet.

[–] glizzyguzzler@lemmy.blahaj.zone 68 points 1 month ago (49 children)

Can’t help myself, so here’s a rant on people asking LLMs to “explain their reasoning”, which is impossible because they can never reason (not meant to be attacking OP, just attacking the “LLMs think and reason” people and companies that spout it):

LLMs are just matrix math to complete the most likely next word. They don’t know anything and can’t reason.

Anything you read or hear about LLMs or “AI” getting “asked questions” or “explain its reasoning” or talking about how they’re “thinking” is just AI propaganda to make you think they’re doing something LLMs literally can’t do but people sure wish they could.

In this case it sounds like people who don’t understand how LLMs work eating that propaganda up and approaching LLMs like there’s something to talk to or discern from.

If you waste egregiously high amounts of gigawatts to put everything that’s ever been typed into matrices you can operate on, you get a facsimile of the human knowledge that went into typing all of that stuff.

It’d be impressive if the environmental toll making the matrices and using them wasn’t critically bad.

TLDR; LLMs can never think or reason, anyone talking about them thinking or reasoning is bullshitting, they utilize almost everything that’s ever been typed to give (occasionally) reasonably useful outputs that are the most basic bitch shit because that’s the most likely next word at the cost of environmental disaster

[–] glizzyguzzler@lemmy.blahaj.zone 2 points 1 month ago* (last edited 1 month ago)

I got my parents to get a NAS box and stuck it in their basement. They need to back up their stuff anyway. I put in two 18 TB drives (mirrored BTRFS raid1) from Server Part Deals (peeps have said that site has jacked their prices, look for alts). They only need like 4 TB at most. I made a backup samba share for myself. It’s the cheapest Synology box possible, using their software to make a samba share with a quota.

I then set up a wireguard connection on an RPi, taped that to the NAS, and I wireguard into the local network with a script. I mount the samba share and then use restic to back up my data. It works great. Restic is encrypted, I don’t have to pay for storage monthly, their electricity is cheap af, they have backups, I keep tabs on it, everyone wins.
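
For the curious, the whole routine boils down to something like this; a minimal sketch, where the mount points, share address, and repo paths are made-up assumptions rather than my exact setup:

# bring up the tunnel, mount the NAS share, back up, verify, tear down
sudo wg-quick up wg0
sudo mount -t cifs //10.0.0.2/backup /mnt/nas -o credentials=/root/.smbcreds
restic -r /mnt/nas/restic-repo backup /home/me/important-stuff
restic -r /mnt/nas/restic-repo check
sudo umount /mnt/nas
sudo wg-quick down wg0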

Next step is to go the opposite way for them, but no rush on that goal, I don’t think their basement would get totaled in a fire and I don’t think their house (other than the basement) would get totaled in a flood.

If you don’t have a friend or relative to do a box-at-their-house (peeps might be enticed with reciprocal backups), restic still fits the bill. Destination is encrypted, has simple commands to check data for validity.

Rclone crypt is not good enough. Too many issues (path length limits, password “obscured” but otherwise sitting right there, file structure preserved even if names are encrypted). On a VPS I use rclone to be a pass-through for restic to back up a small amount of data to a goog drive. Works great. Just don’t fuck with the rclone crypt for major stuff.
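
(For reference, restic can use rclone remotes directly as a backend, so the pass-through is roughly this; the remote and repo names here are made up:)

restic -r rclone:gdrive:restic-repo init
restic -r rclone:gdrive:restic-repo backup ~/small-important-data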

Lastly I do use rclone crypt to upload a copy of the restic binary to the destination, as the crypt means the binary can’t be fucked with and the binary there means that is all you need to recover the data (in addition to the restic password you stored safely!).
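
That part is literally just (crypt remote name made up):

rclone copy /usr/bin/restic cryptremote:recovery/ # the binary sits encrypted next to the backups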

[–] glizzyguzzler@lemmy.blahaj.zone 6 points 2 months ago (2 children)

I was hoping the distros would just do the scrub/balance work for you - makes it no effort then! Good to know OpenSUSE does it for ya. Searching around, it looks like Fedora doesn’t have anything built in sadly, but the posts are +1 yr old so maaaybe they’ve done something.

[–] glizzyguzzler@lemmy.blahaj.zone 12 points 2 months ago (4 children)

It’s great for single drive, raid 0, and raid 1. Don’t use it for more raid; it is not acceptable for that (raid 10 obv ok). It can still lose data for raid 5/6.

I’m not sure of the tools that Fedora includes to manage BTRFS, but these scripts are great: https://github.com/kdave/btrfsmaintenance. You use them to scrub and balance. Balance is for redistributing blocks, and scrub checks if bits have unexpectedly changed due to bit rot (hardware issue or cosmic ray). Scrub weekly for essential photos, important docs, and the like; monthly for everything else. Balance monthly, or on demand if free drive space is tight and you want a bit more bits.
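
If you’d rather run those by hand instead of via the scripts, the raw commands look like this (the mount point is an assumption):

sudo btrfs scrub start /mnt/pool # read everything, verify checksums, repair from the mirror if raid1
sudo btrfs scrub status /mnt/pool # progress and whether errors were found
sudo btrfs balance start -dusage=50 -musage=50 /mnt/pool # repack block groups that are under 50% full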

RAID 1 will give you bit rot detection with scrub and self-recovery of said bit rot (assuming both drives don’t mystically have the same bit flip, which is very unlikely). Single drive will just detect.

BTRFS snapshot then send/receive is excellent for a quick backup.
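
A minimal sketch of that (paths are assumptions):

sudo btrfs subvolume snapshot -r /mnt/pool/docs /mnt/pool/.snapshots/docs-2024-01-01 # send needs a read-only snapshot
sudo btrfs send /mnt/pool/.snapshots/docs-2024-01-01 | sudo btrfs receive /mnt/backup # replicate onto another BTRFS filesystem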

Remember that a BTRFS snapshot will keep all files in the snapshot, even if you delete them off the live drive. Delete 500 GB of stuff, but the space didn’t reduce? Probably a snapshot is remembering that 500 GB. Delete the snapshot and your space is back.

You can make subvolumes inside a BTRFS volume, which are basically folders but you can snapshot just them. Useful for snapshotting your essential docs folder more often than everything else, or backing it up separately.
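
Making one is a one-liner (path is an assumption):

sudo btrfs subvolume create /mnt/pool/docs # this folder can now be snapshotted on its own schedule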

Lastly, you can disable copy-on-write (CoW) for specific files and directories (chattr +C) or for a whole mount (the nodatacow option). Reduces their safety but increases write speed; good for caches, and I’ve read VM drive images need it for performance.
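
For a cache dir that looks like this (path is an assumption; the flag only affects files created after it’s set, so use a fresh directory):

mkdir -p /mnt/pool/vm-images
chattr +C /mnt/pool/vm-images # new files in here skip CoW
lsattr -d /mnt/pool/vm-images # should show the 'C' attribute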

Overall, great. Built-in and no need to muck with ZFS’s extra install steps, but you get the benefits ZFS has (as long as you’re ok to be limited to RAID 1)

[–] glizzyguzzler@lemmy.blahaj.zone 5 points 2 months ago* (last edited 2 months ago)

Nice OC very relatable 11/10

 
 

What wacky shenanigans will Gabe and the gang get up to during their struggle for dignity and freedom from class oppression??

source: idk, someone sent it to me; apologies to the original creator

 

Edit: Results tabulated, thanks for all y'alls input!

Results fitting within the listed categories

Just do it live

  • Backup while it is expected to be idle @MangoPenguin@lemmy.blahaj.zone @khorak@lemmy.dbzer0.com @dandroid@sh.itjust.works

  • @Darkassassin07@lemmy.ca suggested adding a real long-ass-backup-script to run monthly to limit overall downtime

Shut down all database containers

  • Shut down all containers -> backup @PotatoPotato@lemmy.world

  • Leveraging NixOS impermanence, reboot once a day and backup @thejevans@lemmy.ml

Long-ass backup script

  • Long-ass backup script leveraging a backup method in series @STROHminator@lemmy.world @lemmyvore@feddit.nl

Mythical database live snapshot command

(it seems pg_dumpall for Postgres and mysqldump for mysql (though some images with mysql don't have that command for meeeeee))

  • Dump Postgres via pg_dumpall on a schedule, backup normally on another schedule @RegalPotoo@lemmy.world

  • Dump mysql via mysqldump and pipe to restic directly @youRFate@feddit.de

  • Dump Postgres via pg_dumpall -> backup -> delete dump @2xsaiko@discuss.tchncs.de @SteveDinn@lemmy.ca (see the sketch after this list)

Docker image that includes Mythical database live snapshot command (Postgres only)

  • Make your own docker image (https://gitlab.com/trubeck/postgres-backup) and set to run on a schedule, includes restic so it backs itself up @Undaunted@discuss.tchncs.de (thanks for uploading your scripts!!)

  • Add docker image prodrigestivill/postgres-backup-local and set to run on a schedule, backup those dumps on another schedule @brewery@lemmy.world @Lem453@lemmy.ca (also recommended additionally backing up the running database and trying that first during a restore)

New categories

Snapshot it, seems to act like a power outage to the database

  • LVM snapshot -> backup that @butitsnotme@lemmy.world

  • ZFS snapshot -> backup that @ikidd@lemmy.world (real world recovery experience shows that databases act like they're recovering from a power outage and it works)

  • (I assume btrfs snapshot will also work)

One-liner self-contained command for crontab

  • One-liner crontab that prunes to maintain 7 backups, dumps Postgres via pg_dumpall, zips, then rclones them @DeltaTangoLima@reddrefuge.com

Turns out Borgmatic has database hooks

  • Borgmatic with its explicit support for databases via hooks (autorestic has hooks but it looks like you have to make database controls yourself) @PastelKeystone@lemmy.world
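
For anyone skimming, the dump-then-backup pattern boils down to something like this; the container, user, and repo names are assumptions (restic's --stdin flags are real):

docker exec postgres pg_dumpall -U postgres | restic -r /mnt/backups/repo backup --stdin --stdin-filename postgres-dump.sql

Or the dump -> backup -> delete variant:

docker exec postgres pg_dumpall -U postgres > /tmp/pg-dump.sql
restic -r /mnt/backups/repo backup /tmp/pg-dump.sql
rm /tmp/pg-dump.sql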

I've searched long and hard on this and I haven't really seen a good consensus that made sense. The SEO spam is really slowing me down on this one; stuff like "restic backup database" gets me garbage.

I've got databases in docker containers in LXC containers, but that shouldn't matter (I think).

A me-me about containers in containers, using the mental gymnastics me-me template; the template is split into two sections, with the upper being a simple 3-step gymnastic routine while the bottom has the one being mocked flipping on gymnastic bars, using gymnastic rings, and a balance beam, before finally jetpacking over a burning car. The top says "docker compose up -d" in line with the 3 simple steps of the routine, while the bottom, becoming increasingly more cluttered, says "pass uid/gid to LXC", "add storage devices to LXC", "proxy network", "install docker on every container", and finally "docker compose up -d".


I've seen:

  • Just backup the databases like everything else, they're "transactional" so it's cool
  • Some extra docker image to load in with everything else that shuts down the databases in docker so they can be backed up
  • Shut down all database containers while the backup happens
  • A long ass backup script that shuts down containers, backs them up, and then moves to the next in the script
  • Some mythical mentions of "database should have a command to do a live snapshot, git gud"

None seem turnkey except for the first, but since so many other options exist I have a feeling the first option isn't something you can rest easy with.

I'd like to minimize backup downtime obviously; like, what if the backup for whatever reason takes a long time? I'd denial-of-service myself trying to back up my service.

I'd also like to avoid a "long ass backup script" cause autorestic/borgmatic seem so nice to use. I could, but I'd be sad.

So, what do y'all do to backup docker databases with backup programs like Borg/Restic?

 

[Semi-solved edit]: To answer my question, I was not able to figure out podman. There are just too few community explanations about it for me to pull myself up by my own bootstraps.

So I went for Incus, which has a lot of community explanations (also via searching LXD) and made an Incus container with a macvlan and put the adguard home docker in that. Ran the docker as "root" and used docker compose since I can rely on the docker community directly, but the Incus container is not root-privileged so my goal of avoiding rootful is solved.

Anyone finding this via search, the magic sauce I needed to achieve a technically rootless adguardhome docker setup was:

sudo incus profile create gooner # For networking, it doesn't need to be named gooner
sudo incus profile device add gooner eth0 nic nictype=macvlan parent=enp0s10 # Get your version of 'enp0s10' via 'ip addr', macvlan thing won't work with wifi
sudo incus profile set gooner security.nesting=true
sudo incus profile set gooner security.syscalls.intercept.mknod=true
sudo incus profile set gooner security.syscalls.intercept.setxattr=true
# Pause here and make adguardhome instance in the Incus web UI (incus-ui-canonical) with the "gooner" profile
# Make sure all network stuff from docker-compose.yml is deleted
# Put docker-compose.yml in /home/${USER}/server/admin/compose/adguardhome
printf "uid $(id -u) 0\ngid $(id -g) 0" | sudo incus config set adguardhome raw.idmap - # user id -> 0 (root), user group id -> 0 (root) since debian cloud default user is root
sudo incus config device add adguardhome config disk source=/home/${USER}/server/admin/config/adguardhome path=/server/admin/config/adguardhome # These link adguard stuff to the real drive
sudo incus config device add adguardhome compose disk source=/home/${USER}/server/admin/compose/adguardhome path=/server/admin/compose/adguardhome
# !! note that the adguardhome docker-compose.yml must say "/server/configs/adguardhome/work" instead of "/home/${USER}/server/configs/adguardhome/work"
# Install docker
sudo incus exec adguardhome -- bash -c "sudo apt install -y ca-certificates curl"
sudo incus exec adguardhome -- bash -c "sudo install -m 0755 -d /etc/apt/keyrings"
sudo incus exec adguardhome -- bash -c "sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc"
sudo incus exec adguardhome -- bash -c "sudo chmod a+r /etc/apt/keyrings/docker.asc"
sudo incus exec adguardhome -- bash -c 'echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null'
sudo incus exec adguardhome -- bash -c "sudo apt update"
sudo incus exec adguardhome -- bash -c "sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin"
# Disable port 53 binding
sudo incus exec adguardhome -- bash -c "[ -d /etc/systemd/resolved.conf.d ] || mkdir -p /etc/systemd/resolved.conf.d"
sudo incus exec adguardhome -- bash -c "printf "%s\n%s\n" '[Resolve]' 'DNSStubListener=no' | sudo tee /etc/systemd/resolved.conf.d/10-make-dns-work.conf"
sudo incus exec adguardhome -- bash -c "sudo systemctl restart systemd-resolved"
# Run the docker
sudo incus exec adguardhome -- bash -c "docker compose -f /server/admin/compose/adguardhome/docker-compose.yml up -d"

I'm trying to get rootless podman to run adguard home on Debian 12. I run the docker-compose.yml file via podman-compose up -d.

I get errors that I cannot google successfully, sadly. I do occasionally see shards of people saying things like "I have adguard running with rootless podman" but never any guides. So tantalizing.

I have applied this change so rootless can yoink port 53:

sudo nano /etc/sysctl.conf

net.ipv4.ip_unprivileged_port_start=53 # at end, required for rootless podman to be able to do 53
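
(The setting can be applied without a reboot; this is standard sysctl, nothing podman-specific:)

sudo sysctl -p # reload /etc/sysctl.conf
sysctl net.ipv4.ip_unprivileged_port_start # verify the new value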

(Do I even need that change with a macvlan?)

The sticking point seems to be the macvlan. I want a macvlan so I can host a PiHole as a redundant fallback on the same server. I error with:

Error: netavark: Netlink error: No such device (os error 19)

That error really gets me nowhere searching for it. I am berry sure the ethernet connection is named enp0s10 and spelled right in the docker-compose file, cause I copied and pasted it in.

I tried forcing the backend to "CNI" but probably did it wrong, it complained about:

WARN[0000] Failed to load cached network config: network dockervlan not found in CNI cache, falling back to loading network dockervlan from disk
WARN[0000] 1 error occurred:
        * plugin type="macvlan" failed (delete): cni plugin macvlan failed: Link not found

(I also made a /etc/cni/net.d/90-dockervlan.conflist file for cni but it didn't seem to see it and I couldn't muster how to get it to see it)

Both still occur if I pre-make the dockervlan with:

podman network create -d macvlan -o parent=enp0s10 --subnet 10.69.69.0/24 --gateway 10.69.69.1 --ip-range 10.69.69.69/32 dockervlan

And adjust the compose file's networks: call to:

networks:
    dockervlan:
        external: true
        name: dockervlan

Has anyone succeeded at this or done something similar?

docker-compose.yml:

version: '3.9'
#
# *** NETWORKS ***
#
networks:
    dockervlan:
        name: dockervlan
        driver: macvlan
        driver_opts:
            parent: enp0s10
        ipam:
            config:
              - type: "host-local"
                dst: "0.0.0.0/0"
                subnet: "10.69.69.0/24"
                rangeStart: "10.69.69.69/32" # This range should include the ipv4_address: in services:
                rangeEnd: "10.69.69.79/32"
                gateway: "10.69.69.1"
#
# *** SERVICES ***
#
services:
    adguardhome:
        container_name: adguardhome
        image: docker.io/adguard/adguardhome
        hostname: adguardhome
        restart: unless-stopped
        networks:
            dockervlan:
                ipv4_address: 10.69.69.69 # IP address inside the defined dockervlan range
        volumes:
            - '/home/${USER}/server/configs/adguardhome/work:/opt/adguardhome/work'
            - '/home/${USER}/server/configs/adguardhome/conf:/opt/adguardhome/conf'
            #- '/home/${USER}/server/certs/example.com:/certs' # optional: if you have your own SSL certs
        ports:
            - '53:53/tcp'
            - '53:53/udp'
            - '80:80/tcp'
            - '443:443/tcp'
            - '443:443/udp'
            - '3000:3000/tcp'

podman 4.3.1

podman-compose 1.0.6

Getting a newer podman-compose is pretty easy peasy, idk about newer podman if that's needed to fix this.

 

Alt text: Star Trek Enterprise scene, Riker is labeled “The last pack of pizza rolls in the freezer” and Data is labeled “Me at 3 AM”. Riker is sleeping, some ominous flashing lights start occurring. He looks up, and then looks down the bed in shock. Cut to Data, who is creepily smiling back. Cut back to Riker, who is then pulled slowly down the bed and out of frame, by Data who is off-screen. End alt text.

For greater context, a previous commenter has reminded me that the Riker sliding down the bed scene is from the episode “Schisms” https://memory-alpha.fandom.com/wiki/Schisms_(episode), and Data is not actually the one pulling him off the bed but rather some kooky aliens levitating his ass into subspace or something. Idk what episode Data lookin zonked but creepy is from tho srry!

 

Shout out to SNW’s

 

I have my home network behind a wireguard VPN connection. The wireguard "server" runs on a Debian computer and the home network is handled by an opnsense computer.

Edit: opnsense does DHCP but a switch does the actual local routing, so opnsense isn't involved in 10.0.66.XXX <-> 10.0.69.XXX comms.

My home network is on the subnet 10.0.69.XXX, while the VPN connection gets the subnet 10.0.66.XXX.

Weirdly, this setup worked fine until yesterday for the PS Remote Play app (hard requirement, iOS device). Nothing changed as far as I can tell - but yesterday the PS4 stopped being found by the PS Remote Play app (when I'm home on the 10.0.69.XXX subnet, the PS Remote Play app works fine).

I suspect from what I can google ( https://www.snbforums.com/threads/ps4-remote-play-over-vpn.60629/ , https://github.com/williampiat3/ImprovingPSRemotePlay#longer-and-more-complex-solution ) that to make the PS Remote Play app 100% believe all is well, I need to get my device on 10.0.66.10 to do a broadcast search on the 10.0.69.XXX subnet (or have a 10.0.69.XXX address).

I feel (hope) there is a way to do this, but I am no iptables wizard. Does anyone know how to accomplish this? The linked solutions don't make sense to me in a way I can practically apply.
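
The closest I've got to a concrete idea is a UDP relay on the Debian wireguard box, something like this socat sketch; heavy assumptions here: the discovery port is a guess on my part (I've seen 987 mentioned for PS4 discovery) and I haven't verified any of it:

sudo socat UDP4-RECVFROM:987,fork UDP4-DATAGRAM:10.0.69.255:987,broadcast # rebroadcast VPN-side discovery packets onto the LAN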
