chiisana

[–] chiisana@lemmy.chiisana.net 14 points 7 months ago (5 children)

Last time this was asked, I voiced the concern that tying fixed IP addresses to container definitions is an anti-pattern, and I’ll voice it again. You shouldn’t be assigning a fixed IP address to individual services, as that prevents future scaling.

Instead, you should leverage service discovery mechanisms to help your services identify each other and wire themselves up that way.

It seems like NPM has no fitting mechanism for this out of the box, which may suggest your use case is outgrowing what it can do for you in the future. However, Docker Compose stacks can rescue the current implementation with DNS resolution. Try simplifying the networks configuration in your NPM compose file to just this:

   # under the npm service definition:
   networks:
      - npm

# top-level, in the same compose file:
networks:
  npm:
    name: npm_default
    external: true
And your jellyfin compose with something like:

   # under the jellyfin service definition:
   networks:
      - npm
      - jellyfin_net

# top-level, in the same compose file:
networks:
  npm:
    name: npm_default
    external: true
  jellyfin_net:
    name: jellyfin_net
    internal: true

Have the other services in your Jellyfin stack stay only on jellyfin_net (or whatever you name it), so they’re not exposed to npm or other services. Then, in your configs, have npm talk directly to your Jellyfin service by hostname, likely jellyfin or whatever you’ve set as the service name; you may need to include the compose stack name as a prefix, too. This should allow your npm to reach your Jellyfin through the compose network’s built-in DNS.
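
Putting it together, a minimal sketch of the Jellyfin side (the service name jellyfin, the image, and port 8096 are assumptions, adjust to your actual stack):

    services:
      jellyfin:
        image: jellyfin/jellyfin
        networks:
          - npm
          - jellyfin_net

    networks:
      npm:
        name: npm_default
        external: true
      jellyfin_net:
        name: jellyfin_net
        internal: true

With that in place, the forward hostname in npm’s proxy host config would simply be jellyfin (or the stack-prefixed variant) with forward port 8096.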

Good luck!

[–] chiisana@lemmy.chiisana.net 2 points 7 months ago

I don’t use the two you’ve called out, so I cannot guarantee my Google results are accurate, but the principle is similar…

If the app supports external authentication (usually you’d look for things like OIDC, SAML, or SSO in the documentation), then I’d configure the app to do that and skip the Traefik middleware piece.

This is what I’d do for Nextcloud, based on what I’m seeing in this article. That is, when all is said and done, I’d go to https://nexcloud.myunexistent.deployment/ and be greeted with the Nextcloud login screen, where the external authentication option is shown on screen.

A similar setup might be achieved with Home Assistant’s command line authentication provider, which delegates authentication out to a command line script. Alternatively, use the hass-auth-header plugin along with a trusted proxy to delegate authentication out to the reverse proxy.
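
For the Home Assistant route, a minimal configuration.yaml sketch might look like this (the script path is a made-up placeholder, the trusted proxy range needs to match your reverse proxy’s network, and the hass-auth-header component has its own options documented in its repo):

    homeassistant:
      auth_providers:
        # delegate authentication to an external script that validates
        # the supplied credentials and exits 0 on success
        - type: command_line
          command: /config/scripts/check_auth.sh   # hypothetical script
          meta: true
        # keep the built-in provider around as a fallback
        - type: homeassistant

    http:
      # needed when Home Assistant sits behind a reverse proxy
      use_x_forwarded_for: true
      trusted_proxies:
        - 172.16.0.0/12   # adjust to your proxy's address range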

Hope this points you in a relevant direction!

[–] chiisana@lemmy.chiisana.net 1 points 7 months ago

I’m so lucky I got my SO on board with using a password manager early on! However, the passwordless login (after figuring out how to send a user to the enroll stage initially) makes it so simple that we don’t even need the federated Google login.

[–] chiisana@lemmy.chiisana.net 3 points 7 months ago* (last edited 7 months ago) (1 children)

I don’t know about other platforms, but YouTube membership integration is totally implementable on any of them.

The workflow anyone would need to implement is the same flow Discord has implemented:

  1. Perform OAuth to get the user’s own channel using the mine filter on the channels.list endpoint. This way the service knows that SomeOneWatching is the owner of channel UC1234ABCD.
  2. Perform OAuth to fetch the host’s members on a fixed interval via the members.list endpoint, then either match the full list against all known users’ channel IDs, or target an individual user like SomeOneWatching’s UC1234ABCD channel ID with the filterByMemberChannelId parameter.
  3. Upgrade users’ groups on the service to reflect membership accordingly; no direct YouTube partnership is required.
  4. Revisit the flow in step 2 regularly to downgrade users whose memberships are not renewed; beyond PubSubHubbub, which notifies of content updates (new uploads/deletions) on a subscribed channel, YouTube does not offer push notifications for membership changes. This is why there’s always a slight delay when membership status changes.

Source: I’ve worked at a YouTube-adjacent company using all of their public and several proprietary APIs for around 10 years now. I’m fairly familiar with their API offerings.

[–] chiisana@lemmy.chiisana.net 0 points 7 months ago (1 children)

B.C. proposes protections for renters and landlords alike

Proposed changes to legislation around residential rentals will mean more protection for other renters and landlords…

Nothing in the list of changes in the article seems to benefit the landlords, lol. Why do they think they need to claim it benefits landlords to get them on board, when the changes are clearly targeted at benefiting renters?

[–] chiisana@lemmy.chiisana.net 7 points 7 months ago (4 children)

I use Traefik as reverse proxy and Authentik as SSO IdP. When I connect to my “exposed” service, a Traefik middleware determines whether I have the appropriate access credentials established. If so, I get access; if not, I’m bounced over to Authentik, where I enter my username and authenticate via passkey (modern passwordless login gated by private keys behind a biometric unlock). The middleware can also be bypassed based on a pre-established private custom HTTP header, so apps that don’t support the flow (i.e. mobile clients for some apps) can get in directly as well.
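
As a rough illustration, the Traefik side of this can be sketched in dynamic configuration like below (the Authentik outpost address, hostnames, port, and header value are placeholders modeled on this kind of setup, not something to copy verbatim):

    http:
      middlewares:
        authentik-forward-auth:
          forwardAuth:
            # Authentik's embedded outpost endpoint for Traefik (adjust host/port)
            address: http://authentik-server:9000/outpost.goauthentik.io/auth/traefik
            trustForwardHeader: true
            authResponseHeaders:
              - X-authentik-username
              - X-authentik-groups

      routers:
        myapp:
          rule: Host(`app.example.com`)
          middlewares:
            - authentik-forward-auth
          service: myapp
        # a second router that matches the private header and skips the middleware;
        # its longer rule wins by default priority when the header is present
        # (Header() is the Traefik v3 matcher; v2 calls it Headers())
        myapp-bypass:
          rule: Host(`app.example.com`) && Header(`X-Private-Bypass`, `some-long-random-secret`)
          service: myapp

      services:
        myapp:
          loadBalancer:
            servers:
              - url: http://myapp:8080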

[–] chiisana@lemmy.chiisana.net 2 points 7 months ago

It’s not a fully scalable solution, no. Without Swarm, last I checked, it can’t really span multiple hosts. However, it does have the functionality to scale individual services within the same host, if resources are available and the service can benefit from such scaling. It is not uncommon for something to require multiple worker instances, and a hard-assigned IP breaks that pattern.
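
For example, a minimal sketch of scaling a single service within one host with Compose (the service and image names are made up):

    services:
      worker:
        image: example/worker:latest   # hypothetical image
        deploy:
          replicas: 3    # Compose starts three copies of this service
        networks:
          - backend      # no fixed IPs; each replica gets its own address

    networks:
      backend:

The same can be done ad hoc with docker compose up -d --scale worker=3; either way, a hard-assigned ipv4_address (or a container_name) on the service would prevent the extra replicas from starting.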

Service discovery will certainly play a much larger role in even more orchestrated systems, but that doesn’t mean it shouldn’t start here.

[–] chiisana@lemmy.chiisana.net 1 points 7 months ago

Except it is explicitly being told to use a single IP address here. So the engine is either going to go against the explicit assignment or create a conflict within its own network, neither of which is expected behavior.

Just because people are self hosting, doesn’t mean they should be doing things incorrectly.

[–] chiisana@lemmy.chiisana.net 1 points 7 months ago (7 children)

This feels like an anti-pattern that should be avoided. Docker Compose allows individual services to be scaled to more than one instance. By hard-assigning an IP address to a service, how is that going to be scaled in the future?

I don’t know how to reconcile this issue directly for NPM, but the way to do this with Traefik is to use container labels (not hard-assigned IP addresses) so that Traefik can discover the service and wire itself up automatically. I’d imagine there should be a similar way to perform service discovery in NPM?
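
For reference, the Traefik approach looks roughly like this in a service’s compose definition (the router name, hostname, and port are placeholders):

    services:
      jellyfin:
        image: jellyfin/jellyfin
        labels:
          - traefik.enable=true
          - traefik.http.routers.jellyfin.rule=Host(`jellyfin.example.com`)
          - traefik.http.services.jellyfin.loadbalancer.server.port=8096

No IP address appears anywhere; Traefik’s Docker provider watches the labels and routes to whichever container(s) currently back the service, regardless of how many replicas there are.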

[–] chiisana@lemmy.chiisana.net 13 points 7 months ago (2 children)

Most self-hosted DNS-level blocking will be very fast, as it is really easy to keep the block list in RAM. I hosted Pi-hole on an RPi 3 and on an over-provisioned VM (4 cores and 4GB of RAM lol). The only difference I’ve noticed is whether or not the device is hardwired. When my RPi was hardwired into the network, there was no notable difference between the two.

[–] chiisana@lemmy.chiisana.net 3 points 7 months ago* (last edited 7 months ago)

Oh I hear you. Not saying they shouldn’t pay. If anything, they should pay even more for putting extra strain on the public infrastructure. Just pointing out that a vocal minority of people feel entitled, believing that just because they prefer to drive they shouldn’t need to pay, and they will be very vocal about such a tax.

[–] chiisana@lemmy.chiisana.net 5 points 7 months ago (2 children)

I’m with you on this, but anyone who so much as uses transit will hear an earful from those who “prefer to drive”. Source: 10 years of undergrad and grad school, hearing kids who drive complain about the mandatory UPass.
