needanke

joined 1 year ago
[–] needanke@feddit.org 2 points 5 days ago (1 children)

Every 5 minutes seems insane. Why do you need to schedule it that frequently?

[–] needanke@feddit.org 1 points 1 week ago

Also, I wouldn't shy from keeping the database on the same, fast storage as the OS, even if that's flash. Move to an external SSD when you can. HDDs have such long seek times.

Very much true. I installed Immich on my dad's Synology for him, and compared to my own setup at home the speeds are abysmal (it even crashed a few times during the first indexing and ML run). I suspect a major part is that the whole OS runs on an HDD.

If you put the database on an SD card, just ensure you make frequent backups somewhere else. I wouldn't trust flash storage to keep my data safe.
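For the backup part, a single cron entry is usually enough. A minimal sketch, assuming the default `immich_postgres` container name from Immich's docker-compose file and a `/mnt/backup` mount on other storage (both are assumptions, adjust to your setup):

```
# crontab entry: nightly dump of Immich's Postgres database to separate storage.
# Assumes the container is named immich_postgres (Immich's compose default)
# and /mnt/backup lives on a different disk than the SD card.
# Note: % must be escaped as \% inside crontab.
0 3 * * * docker exec -t immich_postgres pg_dumpall --clean --if-exists --username=postgres | gzip > /mnt/backup/immich-db-$(date +\%F).sql.gz
```

This only covers the database; the photo library itself still needs its own copy (rsync, ZFS send, whatever you already use).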

[–] needanke@feddit.org 3 points 2 weeks ago

https://feddit.org/post/19150492

Literal translation of the headline: The rich against inheritance tax.

(Because the name of our minister for the economy translates to "The rich")

[–] needanke@feddit.org 5 points 3 weeks ago

Premium Shitpost mein Herr.

[–] needanke@feddit.org 3 points 1 month ago

Right, he who does not rely on someone else's DNS server shall throw the first stone!

[–] needanke@feddit.org 4 points 1 month ago (2 children)

Furi Labs runs a fully optimized system called Furi OS

If I were to switch to a Linux phone, I'd want it to be made for an open and trusted OS, not the manufacturer's own (which is unknown, to me at least).

[–] needanke@feddit.org 1 points 1 month ago (2 children)

Afaik Synology supports Btrfs, which I honestly prefer at this point if you don't need filesystem-based encryption or professional scaling and caching features.

[–] needanke@feddit.org 4 points 1 month ago

Germany did this too up until a month ago https://lemmy.bestiver.se/post/573659 .

[–] needanke@feddit.org 5 points 1 month ago (1 children)

Could you elaborate why?

[–] needanke@feddit.org 14 points 1 month ago (1 children)

Wasn't the Matrix famously not a utopia, because a utopia ruined 'crop' yields?

[–] needanke@feddit.org 3 points 1 month ago (1 children)

You cannot just say that and then not link it!

 

First off: I am a total beginner when it comes to Docker. I do have some self-hosting experience, but I run pretty much everything in its own LXC and treat it like a full Linux system.

Recently I installed Immich in a container and was surprised to see how well it worked.

This led me to finally tackle something I have been putting off for way too long: installing Nightscout (a self-hosted glucose monitoring & reporting utility).

For that I followed their guide. Everything worked well up until the point where I wanted to connect to the web interface. I started off by entering my domain into the Nightscout container's arguments (in the form subdomain.domain.tld). Then I used my reverse proxy (nginx, not inside Docker) to forward the subdomain to the Docker IP on port 443, then 80, and lastly the one displayed next to the container when listing them with docker ps. None of those worked (I was not able to get a certificate using Let's Encrypt, and got a 404 when connecting without TLS).

I then entered nightscout.[docker-IP] and tried to access it directly, which did not work either.

When googling, I only find guides on how to set up nginx inside Docker, or comparisons between the two.

docker-compose file

version: '3'

x-logging:
  &default-logging
  options:
    max-size: '10m'
    max-file: '5'
  driver: json-file

services:
  mongo:
    image: mongo:4.4
    volumes:
      - ${NS_MONGO_DATA_DIR:-./mongo-data}:/data/db:cached
    logging: *default-logging

  nightscout:
    image: nightscout/cgm-remote-monitor:latest
    container_name: nightscout
    restart: always
    depends_on:
      - mongo
    labels:
      - 'traefik.enable=true'
      # Change the below Host from `localhost` to be the web address where Nightscout is running.
      # Also change the email address in the `traefik` service below.
      - 'traefik.http.routers.nightscout.rule=Host(`localhost`)'
      - 'traefik.http.routers.nightscout.entrypoints=websecure'
      - 'traefik.http.routers.nightscout.tls.certresolver=le'
    logging: *default-logging
    environment:
      ### Variables for the container
      NODE_ENV: production
      TZ: [removed]

      ### Overridden variables for Docker Compose setup
      # The `nightscout` service can use HTTP, because we use `traefik` to serve the HTTPS
      # and manage TLS certificates
      INSECURE_USE_HTTP: 'true'

      # For all other settings, please refer to the Environment section of the README
      ### Required variables
      # MONGO_CONNECTION - The connection string for your Mongo database.
      # Something like mongodb://sally:sallypass@ds099999.mongolab.com:99999/nightscout
      # The default connects to the `mongo` included in this docker-compose file.
      # If you change it, you probably also want to comment out the entire `mongo` service block
      # and `depends_on` block above.
      MONGO_CONNECTION: mongodb://mongo:27017/nightscout

      # API_SECRET - A secret passphrase that must be at least 12 characters long.
      API_SECRET: [removed]

      ### Features
      # ENABLE - Used to enable optional features, expects a space delimited list, such as: careportal rawbg iob
      # See https://github.com/nightscout/cgm-remote-monitor#plugins for details
      ENABLE: careportal rawbg iob

      # AUTH_DEFAULT_ROLES (readable) - possible values readable, denied, or any valid role name.
      # When readable, anyone can view Nightscout without a token. Setting it to denied will require
      # a token from every visit, using status-only will enable api-secret based login.
      AUTH_DEFAULT_ROLES: denied

      # For all other settings, please refer to the Environment section of the README
      # https://github.com/nightscout/cgm-remote-monitor#environment

  traefik:
    image: traefik:latest
    container_name: 'traefik'
    command:
      - '--providers.docker=true'
      - '--providers.docker.exposedbydefault=false'
      - '--entrypoints.web.address=:80'
      - '--entrypoints.web.http.redirections.entrypoint.to=websecure'
      - '--entrypoints.websecure.address=:443'
      - "--certificatesresolvers.le.acme.httpchallenge=true"
      - "--certificatesresolvers.le.acme.httpchallenge.entrypoint=web"
      - '--certificatesresolvers.le.acme.storage=/letsencrypt/acme.json'
      # Change the below to match your email address
      - '--certificatesresolvers.le.acme.email=[removed]'
    ports:
      - '443:443'
      - '80:80'
    volumes:
      - './letsencrypt:/letsencrypt'
      - '/var/run/docker.sock:/var/run/docker.sock:ro'
    logging: *default-logging
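Since nginx outside Docker is already meant to terminate TLS, one option is to bypass Traefik entirely: publish Nightscout's internal port (it listens on 1337 by default) with `ports: - "1337:1337"` on the nightscout service, and point nginx at it. A minimal sketch of the nginx side; the server name, certificate paths, and the docker host IP are placeholders for your own values:

```nginx
# Sketch of an nginx site config, assuming the nightscout service publishes
# port 1337 on the Docker host and this nginx handles TLS (not Traefik).
server {
    listen 443 ssl;
    server_name nightscout.example.com;  # replace with your subdomain.domain.tld

    # replace with your actual certificate paths
    ssl_certificate     /etc/letsencrypt/live/nightscout.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/nightscout.example.com/privkey.pem;

    location / {
        proxy_pass http://192.0.2.10:1337;   # IP of the host running the container
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

If you go this route, either remove the traefik service from the compose file or drop its 80/443 port mappings, otherwise it will fight nginx for those ports. If you keep Traefik instead, the Host(`localhost`) label above has to be changed to your real subdomain.domain.tld, or every request gets a 404.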

 

The sensor is located on the case (not near the exhaust) of the server. With the layout of my apartment this is the only place I can realistically put my server, but sadly it is also the hottest place in my apartment.

The outside temperature is supposed to reach 36°C today so I expect the ambient temp for the server to rise another 2-3 degrees.

1
Recursion (cdn.masto.host)
submitted 7 months ago* (last edited 7 months ago) by needanke@feddit.org to c/programmerhumor@lemmy.ml
 

Edit: Added my 'Solution' at the bottom

Does anyone know how I can unlock the OnePlus 7 Pro's bootloader? I want to switch to LineageOS but am failing at that step during the install.

Original post:

Things I have tried so far:

  • enabling OEM unlock in developer settings and then using fastboot to unlock it. That fails with an error message (a known issue on Android 12; other threads recommend downgrading to 11).
  • using an unofficial Firehose client. I don't know what the issue was; there was no relevant error in the output, even in debugging mode (though I suspect it is the same issue as above).

edit: from what I saw of what the unbrick tool mentioned below does, it seems like someone with more knowledge might be able to flash the correct images onto the correct partitions using that tool (although it is very buggy; I needed to change a bit of the Python code to even get it going without throwing errors)

Things I have tried to downgrade to Android 11:

  • using the official rollback packages. Sadly the links in the official thread are dead and I can't find the rollback packages anywhere else.
  • using the unbrick tool. Since I don't have a Windows PC, I tried it on a friend's laptop, where it just didn't work, without a clear error message. I then set up a new Win 11 VM. There I couldn't install the required drivers (super generic error message, something like "path not found" but without specifying the path).

Edit from LineageOS 22.1:

What ended up working was downgrading to Android 11 using the aforementioned unbrick tool. The issues I had with installing the driver originated in an incomplete unpack of the driver file by the Windows Explorer. After that worked, I was able to downgrade to OOS 11. That's where the next issue came up: I had to upgrade to 12 again to continue the LineageOS install process, but the first update (from the flashed OOS 11 to the latest OOS 11) got stuck at 80% with an unlocked bootloader. For me it worked to update 11 as far as possible, only then unlocking the bootloader, and lastly updating to OOS 12. After that you can continue with the LineageOS install as officially documented. I tried some stuff with TWRP as well (I wanted to dual-boot postmarketOS), but that didn't work either, so I just gave up.

 

I recently moved my files to a new ZFS pool and used that chance to properly configure my datasets.

This led me to discovering ZFS deduplication.

As most of my storage is used by my Jellyfin library (~7-8 TB), which is mostly uncompressed Blu-ray rips, I thought I might be able to save some storage using deduplication in addition to compression.

Has anyone here used that for similar files before? What was your experience with it?

I am not too worried about performance. The dataset in question rarely changes; basically only when I add more media every couple of months. I also overshot my CPU target when originally configuring my server, so there is a lot of headroom there. I have 32 GB of RAM, which is not really fully utilized either (but I also would not mind upgrading to 64 too much).

My main concern is that I am unsure it is useful. I suspect that just because of the amount of data and similarity in type there would statistically be a lot of block-level duplication, but I could not find any real-world data or experiences on that.
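One way to get real numbers for your own pool before enabling anything is `zdb -S <poolname>`, which simulates the dedup table and prints the ratio it would achieve. For a rougher, filesystem-independent check, you can also just hash fixed-size blocks of a few sample files yourself; a minimal sketch (128K mirrors the default ZFS recordsize, but this ignores record alignment, so treat the result as an upper bound):

```python
import hashlib
import sys


def dedup_estimate(paths, blocksize=128 * 1024):
    """Roughly estimate block-level dedup potential by hashing
    fixed-size blocks and counting repeated digests."""
    seen = set()
    total = dupes = 0
    for path in paths:
        with open(path, "rb") as f:
            while chunk := f.read(blocksize):
                digest = hashlib.sha256(chunk).digest()
                total += 1
                if digest in seen:
                    dupes += 1  # this block already occurred somewhere
                else:
                    seen.add(digest)
    return total, dupes


if __name__ == "__main__":
    total, dupes = dedup_estimate(sys.argv[1:])
    if total:
        print(f"{dupes}/{total} blocks duplicated "
              f"({100 * dupes / total:.1f}% potential savings)")
```

For video rips the expectation is usually low: two different encodes almost never share byte-identical, identically-aligned blocks, so dedup tends to pay off mainly for exact file copies, which `zdb -S` will also reveal.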
