archomrade

joined 1 year ago
[–] archomrade@midwest.social 6 points 11 hours ago

Facebook, Instagram, Twitter, Reddit....

[–] archomrade@midwest.social 13 points 19 hours ago

Uhhh, because these were bombs - bombs that were remotely and indiscriminately detonated. Some of the people were driving, some were standing next to children or on buses full of people. There are reports of children who died because they were standing next to a target at head-level with the pager. There's no guarantee they were even being carried by "Hezbollah's guys".

I don't even know why anyone would assume otherwise. This was a loosely targeted terror attack

[–] archomrade@midwest.social 29 points 1 day ago (5 children)

Likely because the bulk of those wounded by this attack were not Hezbollah

I don't even know how you'd reasonably expect to only injure your targets in an attack as widespread and remote as this one. Seems blatantly indiscriminate at best.

[–] archomrade@midwest.social 4 points 1 day ago

But China is not any more or less likely than any other country to do this type of thing; it really seems like you're associating them with that terrorist attack for no particular reason other than to take advantage of people's imagination.

I don't even know why you'd jump to tie those two things together.

[–] archomrade@midwest.social 24 points 1 day ago (11 children)

I'm honestly surprised peertube has lasted as long as it has

[–] archomrade@midwest.social 2 points 1 day ago

This is the most 'China bad for no particular reason' post I've ever seen

[–] archomrade@midwest.social 25 points 1 day ago (5 children)

Why are we ok with domestic manufacturers doing this though?

[–] archomrade@midwest.social -1 points 1 day ago

It makes a difference, just not to any of us plebes

[–] archomrade@midwest.social 6 points 1 day ago

You should really not use double quotation marks when you're paraphrasing; a lot of people will mistake it for a direct quote

[–] archomrade@midwest.social 1 points 3 days ago

I use this for architecture and it's saved me so much time

[–] archomrade@midwest.social 42 points 6 days ago* (last edited 6 days ago) (7 children)

Indiana is not known for its progressive politics

 

edit: a working solution is proposed by @Lifebandit666@feddit.uk below:

So you’re trying to get 2 instances of qbt behind the same Gluetun vpn container?

I don’t use Qbt but I certainly have done in the past. Am I correct in remembering that in the gui you can change the port?

If so, maybe what you could do is set up your stack with 1 instance in, go into the GUI and change the port on the service to 8000 or 8081 or whatever.

Map that port in your Gluetun config and leave the default port open for QBT, and add a second instance to the stack with a different name and addresses for the config files.

Restart the stack and have 2 instances.
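
To make that concrete, here's a rough compose sketch of my reading of that approach (assuming the first instance's WebUI port was already switched to 8081 in the GUI; names and paths are just illustrative):

services:
  gluetun:
    image: qmcgaw/gluetun:latest
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=mullvad
      # ...wireguard settings as in the stack below
    ports:
      - "8081:8081" # qbittorrent (WebUI port changed to 8081 in the GUI)
      - "8080:8080" # qbittorrent2 (still on the default WebUI port)

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: "qbittorrent"
    network_mode: "service:gluetun"
    volumes:
      - /docker/appdata/qbittorrent:/config

  qbittorrent2:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: "qbittorrent2"
    network_mode: "service:gluetun"
    volumes:
      - /docker/appdata/qbittorrent2:/config # separate config dir so the instances don't share state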


Has anyone run into issues with docker port collisions when trying to run images behind a bridge network (I think I got those terms right?)?

I'm trying to run the arr stack behind a VPN container (gluetun, for those familiar), and I would really like to duplicate a container image within the stack (e.g. a separate download client for different types of downloads). As soon as I set the network_mode to 'service' or 'container', I lose the ability to set the public/internal port of the service, which means any image that doesn't allow setting ports from an environment variable is stuck with whatever the default port is within the application.

Here's an example .yml:

services:
  gluetun:
    image: qmcgaw/gluetun:latest
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=mullvad
      - VPN_TYPE=[redacted]
      - WIREGUARD_PRIVATE_KEY=[redacted]
      - WIREGUARD_ADDRESSES=[redacted]
      - SERVER_COUNTRIES=[redacted]
    ports:
      - "8080:8080" #qbittorrent
      - "6881:6881"
      - "6881:6881/udp"
      - "9696:9696" # Prowlarr
      - "7878:7878" # Radar
      - "8686:8686" # Lidarr
      - "8989:8989" # Sonarr
    restart: always

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: "qbittorrent"
    network_mode: "service:gluetun"
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Chicago # CST/CDT
      - WEBUI_PORT=8080
    volumes:
      - /docker/appdata/qbittorrent:/config
      - /media/nas_share/data:/data

Declaring ports in the qbittorrent service raises an error saying you cannot set ports when using the service network mode. Linuxserver.io has a WEBUI_PORT environment variable, but using it without also setting the service ports breaks it (their documentation says this is due to CSRF issues and port mapping, but then why even include it as a variable?)
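
For reference, this is the minimal combination that compose rejects with that error:

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    network_mode: "service:gluetun"
    ports:
      - "8080:8080" # compose errors out here: port mappings conflict with network_mode: service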

The only workaround I can think of is doing a local build of the image that needs duplication so ports can be configured from environment variables, OR running duplicate gluetun containers for each client, which seems dumb and not at all worthwhile.

Has anyone dealt with this before?

 

Anyone else get this email from Leviton about their Decora light switches and the changes to their ToS expressly permitting them to collect and use behavioral data from your devices?

FUCK Leviton, long live Zigbee and Zwave and all open-sourced standards


My Leviton

At Leviton, we’re committed to providing an excellent smart home experience. Today, we wanted to share a few updates to our Privacy Policy and Terms of Service. Below is a quick look at key changes:

We’ve updated our privacy policy to provide more information about how we collect, use, and share certain data, and to add more information about our users’ privacy under various US and Canadian laws. For instance, Leviton works with third-party companies to collect necessary and legal data to utilize with affiliate marketing programs that provide appropriate recommendations. As well, users can easily withdraw consent at any time by clicking the links below.

The updates take effect March 11th, 2024. Leviton will periodically send information regarding promotions, discounts, new products, and services. If you would like to unsubscribe from communications from Leviton, please click here. If you do not agree with the privacy policy/terms of service, you may request removal of your account by clicking this link.

For additional information or any questions, please contact us at dssupport@leviton.com.

French translation of this Leviton email

Copyright © 2024 Leviton Manufacturing Co., Inc., All rights reserved. 201 North Service Rd. • Melville, NY 11747

Unsubscribe | Manage your email preferences

 

Pretend your only other hardware is a repurposed HP ProDesk and your budget is bottom-barrel

46
submitted 7 months ago* (last edited 7 months ago) by archomrade@midwest.social to c/linux@lemmy.ml
 

I'm currently watching the progress of a 4TB rsync file transfer, and I'm curious why the speeds are less than the theoretical read/write maximum speeds of the drives involved in the transfer. I know there's a lot that can affect transfer speeds, so I guess I'm not asking why my transfer itself isn't going faster. I'm more just curious what the typical bottlenecks are.

Assuming a file transfer between 2 physical drives, and:

  • Both drives are internal SATA III drives with ~~5.0GB/s~~ ~~5.0Gb/s read/write~~ ~210MB/s read/write (this was the mistake: I was reading the SATA III protocol speed as the disk speed)
  • files are being transferred using a simple rsync command
  • there are no other processes running

What would be the likely bottlenecks? Could the motherboard/processor be limiting the speed? The available memory? Or the layout of the files themselves (whether they're fragmented on the volumes or not)?
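
For reference, a rough way to sanity-check raw sequential speeds outside of rsync (device names and paths here are placeholders, not my actual setup):

# buffered read speed straight off the disk, plus cached reads for comparison
sudo hdparm -tT /dev/sdX

# sequential write to the destination filesystem: 1GB test file, bypassing the page cache
dd if=/dev/zero of=/mnt/dest/testfile bs=1M count=1024 oflag=direct status=progress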

 
  • Edit- I set the machine to work last night running memtester and badblocks (read-only). Both tests came back clean, so I assumed I was in the clear. Today, wanting to be extra sure, I ran a read-write badblocks test and watched dmesg while it worked. I got the same errors, this time on ata3.00. Given that the memory test came back clean, and smartctl came back clean as well, I can only assume the problem is with the ata module, or somewhere between the CPU and the ata bus. I'll be doing a BIOS update this morning and then trying again, but it seems to me like this machine was a bad purchase. I'll see what options I have for replacement.

  • Edit-2- I retract my last statement. It appears that only one of the drives is still having issues, which is the SSD from the original build. All write interactions with the SSD produce I/O errors (including re-partitioning the drive), while there appear to be no errors reading or writing to the HDD. Still unsure what caused the original issue on the HDD. Still conducting testing (running badblocks rw on the HDD, might try seeing if I can reproduce the issue under heavy load). Safe to say the SSD needs repair or to be pitched. I'm curious if the SSD got damaged, which would explain why the issue remains after being zeroed out and re-written and why the HDD now seems fine. Or maybe multiple SATA ports have failed now? (Rough versions of the test commands are sketched below for anyone curious.)
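
Sketch of those checks (substitute your actual device; note the -w badblocks test is destructive):

sudo smartctl -a /dev/sdX     # SMART health info and self-test log
sudo memtester 1024 1         # test 1024MB of RAM, one pass
sudo badblocks -sv /dev/sdX   # read-only surface scan
sudo badblocks -wsv /dev/sdX  # read-write test - DESTRUCTIVE, wipes the drive
sudo dmesg -w                 # watch the kernel log for ata/I/O errors while tests run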


I have no idea if this is the forum to ask these types of questions, but it felt a bit like a murder mystery that would be fun to solve. Please let me know if this type of post is unwelcome and I will immediately take it down and return to lurking.

Background:

I am very new to Linux. Last week I purchased a cheap refurbished headless desktop so I could build a home media server, as well as play around with VMs and programming projects. This is my first ever exposure to Linux, but I consider myself otherwise pretty tech-savvy (I dabble in Python scripting in my spare time, but not much beyond that).

This week, I finally got around to getting the server software installed and operating (see details of the build below). Plex was successfully pulling from my media storage and streaming with no problems. As I was getting the docker containers up, I started getting "not enough storage" errors for new installs. I tried purging docker a couple times and still couldn't proceed, so I attempted to expand the virtual storage in the VM. I definitely messed this up: immediately Plex stopped working, and no files were visible on the share anymore. To me, it looked as if it had attempted to take storage from the SMB share to add to the system files partition. I/O errors on the OMV virtual machine for days.

Take two.

I got a new HDD (so I could keep working as I tried recovery on the SSD). I got everything back up (created a whole new VM for docker and OMV). I gave the docker VM more storage this time (I think I was just reckless with my package downloads anyway) and made sure that the SMB share was properly mounted. As I got the download client running (it made a few downloads), I noticed the OMV virtual machine redlining on memory from the Proxmox window. I thought, "uh oh, I should fix that." I tried taking everything down so I could reboot the OMV VM with more memory allocation, but the shutdown process hung on the OMV VM. I made sure all my devices on the network were disconnected, then stopped the VM from the Proxmox window.

On OMV reboot, I noticed all kinds of I/O errors on both the virtual boot drive and the mounted SSD. I could still see files in the share on my LAN devices, but any attempt to interact with the folder stalled and would error out.

I powered down all the VMs and now I'm trying to figure out where I went wrong. I'm tempted to just abandon the VMs and install it all on a bare Ubuntu OS, but I like the flexibility of having the VMs to spin up new OSes and try things out. The added complexity is obviously over my head, but if I can understand it better I'll give it another go.

Here's the build info:

Build:

  • HP ProDesk 600 G1
  • Intel i5
  • upgraded 32GB aftermarket DDR3 1600MHz Patriot RAM
  • KingFlash 250GB SSD
  • WD 4TB SSD (originally an NTFS drive from my Windows PC with ~2TB of existing data)
  • WD 4TB HDD (bought this after the SSD corrupted, so I could get the server back up while I dealt with the SSD)
  • 500Mbps ethernet connection

Hypervisor

  • Proxmox (latest), Ubuntu kernel
  • VM110: Ubuntu-22.04.3-live server amd64, OpenMediaVault 6.5.0
  • VM130: Ubuntu-22.04.3-live, docker engine, portainer
    • Containers: Gluetun, qBittorrent, Sonarr, Radarr, Prowlarr
  • LXC101: Ubuntu-22.04.3, Plex Server
  • Allocations:
    • VM110: 4GB memory, 2 cores (ballooning and swap ON)
    • VM130: 30GB memory, 4 cores (ballooning and swap ON)

Shared Media Architecture (attempt 1)

  • Direct-mounted the WD SSD to VM110. Partitioned and formatted the file system inside the GUI, created a folder share, and set permissions for my share user. Shared it as an SMB/CIFS share.
  • bind-mounted the shared folder to a local folder in VM130 (/media/data)
  • passed the mounted folder to the necessary docker containers as volumes in the docker-compose file (e.g. - volumes: /media/data:/data, etc.); a rough sketch is below
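
Roughly what the mount side looked like, from memory (the IP, share name, and credentials path are placeholders, not my actual values):

# /etc/fstab entry in VM130 mounting the OMV share at /media/data
//192.168.1.110/data  /media/data  cifs  credentials=/root/.smbcreds,uid=1000,gid=1000  0  0

# and the corresponding docker-compose volume entry
volumes:
  - /media/data:/data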

No shame in being told I did something incredibly dumb; I'm here to learn, anyway. Maybe just not learn in a way that destroys 6 months of DVD rips in the process.

 

Does anyone know if this enables any kind of tracking (either through WiFi device logging or network activity)? I've typically used my own modems and routers, so I'm a little wary of a required smart device that I don't have control over.

So far I haven't been able to find much information beyond what's available from CenturyLink
