this post was submitted on 26 Jul 2023

Selfhosted


I'm sure I'm massively overthinking this, but any help would be greatly appreciated.

I have a domain name that I bought through NameCheap and I've pointed it to Cloudflare (i.e. updated the name servers). I have a Synology NAS on which I run Docker and a few containers. Up until now I've done this using IP addresses and ports to access everything (I have a Homepage container running and just link to everything from there).

But I want to set up SSL and start running Vaultwarden, hence purchasing a domain name to make it all easier.

I tried creating an A record in Cloudflare to point to the internal IP of my NAS (and obviously, this couldn't be orange-clouded through CF because it's internal to my LAN). I'm very reluctant to point the A record to the external IP of my NAS (which, for added headache, is dynamic, so I'd need to get some kind of DDNS) because I don't want to expose everything on my NAS to the Internet. In actual fact, I'm not precious about accessing any of this stuff over the internet - if I need remote access I have a Tailscale container running that I can connect to (more on that later in the post). The domain name was purely for ease of setting up SSL and Vaultwarden.

So I guess my questions are:

  • What is the best way to go about this - do I set up DDNS on the NAS, point my domain in Cloudflare at that external IP address, and then use Traefik to expose only the containers I want to access via subdomains?
  • If so, then how do I know that all other ports aren't accessible (I assume because I'm only going to expose ports 80 and 443 in Traefik?)
  • What do other people see (i.e. outside my network) if they go to my domain? How do I ensure they can't access my NAS and see some kind of page?
  • Is there a benefit to using Cloudflare?
  • How would Pi-hole and local DNS fit into this? I guess I could point my router at Pi-hole for DNS and create my A records on Pi-hole for all my subdomains - but what do I need to setup initially in Cloudflare?
  • I also have a RPi that has a (very basic) website on it - how do I setup an A record to have Cloudflare point a sub-domain to the Pi's IP address?
  • Going back to the Tailscale thing - is it possible to point the domain to the IP address of the Tailscale container, so that the domain is only accessible when I switch on the Tailscale VPN? Is this a good idea/bad idea? Is there a better way to do it?

I'm sure these are all noob-type questions, but for the past 6-7 years I've only ever used this internally via IP:port combinations, so I've never had to worry about domain names, external exposure, etc.

Many thanks in advance!

top 35 comments
[–] DRx@lemmy.world 10 points 1 year ago (3 children)

I do this for some Docker containers on my Unraid box, except I use the Zero Trust tunnels. MUCH easier, you can use SSL, and you can set up a login page for users. Also, you don't have to open any ports on your router!

I'm not sure about Synology, but I would assume you can find a "cloudflared" Docker image in the app store.

Check out this YouTube video for a good explanation: https://www.youtube.com/watch?v=ZvIdFs3M5ic
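
For reference, the connector itself is just one more container; a minimal compose sketch, assuming you've already created a tunnel in the Zero Trust dashboard and put its token in TUNNEL_TOKEN:

cloudflared:
    container_name: cloudflared
    image: cloudflare/cloudflared:latest
    restart: unless-stopped
    # the token comes from the Zero Trust dashboard when you create the tunnel
    command: tunnel --no-autoupdate run --token ${TUNNEL_TOKEN}

The hostname-to-service mappings (e.g. vault.mydomain.com -> http://vaultwarden:80) are then configured in the Zero Trust dashboard rather than in compose.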

[–] nef@lemmy.world 4 points 1 year ago

A hundred times this. It's going to be the easiest to set up by a wide margin. https://www.cloudflare.com/products/tunnel/

[–] schmurnan@lemmy.world 1 points 1 year ago (1 children)

Interesting, I've never considered Cloudflare Tunnels. Thanks.

However I do remember seeing this video the other day, that suggests perhaps it's not always the best solution? Not sure this applies here, though: https://www.youtube.com/watch?v=oqy3krzmSMA.

[–] DRx@lemmy.world 2 points 1 year ago (1 children)

Christian brings up some great points worthy of consideration; however, if you're going to use traditional routing through their network (A/CNAME records) you're still doing the same thing. CF will still see your traffic.

The second thing I should say is, I only use Zero Trust for websites I share with family. So, I have SearXNG and wef/voyager containers running through Zero Trust.

For admin stuff, Home Assistant/IoT/IP cams, I use an always-on IPsec VPN on my iPhone, iPad, and Steam Deck (I take it to work and plug into a 3rd monitor)… this is cool because I get 24/7 ad blocking no matter where I am, since it routes all my traffic through my Pi-hole at home. This is a great solution for a single person, but I do not want to manage VPN access for multiple people. So, I agree with Christian in NOT putting admin stuff/sensitive info behind CF at all (Zero Trust OR traditional web routing) unless you fully trust them. Otherwise do a 24/7 VPN like I do.

[–] schmurnan@lemmy.world 1 points 1 year ago

I don’t plan on exposing any of this stuff to anybody other than me. I do plan on spinning up SearX but it’ll only be me using it. I’ve given up trying to convince my family to move away from Google to even DuckDuckGo or Startpage, so there’s no way I’ll convince them to use SearX!

I think, therefore, for accessing away from home I’ll perhaps set up a subdomain that points to the IP of my Tailscale container — that means it’ll be accessible externally but only when I turn on the VPN.

When I’m on my home network I have a VPN on my Mac anyway.

[–] PipedLinkBot@feddit.rocks 0 points 1 year ago

Here is an alternative Piped link(s): https://piped.video/watch?v=ZvIdFs3M5ic

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source, check me out at GitHub.

[–] Crimson_Chin@lemmy.world 4 points 1 year ago (2 children)

If you're using Docker, then just set up NGINX Proxy Manager. It has Let's Encrypt built in, so you literally just fill out a few fields, ask for a new certificate, provide your email, and BAM! All done.

https://nginxproxymanager.com/screenshots/
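
For reference, the quick-start compose from their docs is roughly this sketch (the host paths are placeholders; port 81 is the admin UI):

app:
    image: jc21/nginx-proxy-manager:latest
    restart: unless-stopped
    ports:
      - 80:80
      - 81:81
      - 443:443
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt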

[–] schmurnan@lemmy.world 2 points 1 year ago

Before I was using Traefik I used to use plain NGINX and was pretty happy with it. I made the switch to Traefik after reading some good things about it on Reddit.

More than happy to switch to NPM and give it a try. At this point I have no reverse proxy running at all, so it's not even like I have to swap out Traefik — there's nothing there to begin with.

[–] Dirk@lemmy.ml 1 points 1 year ago

NPM is such a blessing! It works absolutely flawlessly!

[–] Double_A@discuss.tchncs.de 2 points 1 year ago (1 children)

Going back to the Tailscale thing - is it possible to point the domain to the IP address of the Tailscale container, so that the domain is only accessible when I switch on the Tailscale VPN? Is this a good idea/bad idea? Is there a better way to do it?

Yeah that works perfectly. The domain will point to your Tailscale IP, but that IP is not reachable unless you are in the VPN.

On my box I have a Caddy container with the Cloudflare plugin that automatically generates Let's Encrypt certificates, and I can use it to point (sub)domains at specific Docker containers. (see: https://caddy.community/t/how-to-guide-caddy-v2-cloudflare-dns-01-via-docker/8007 )
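
For anyone wanting to copy this setup: the stock caddy image doesn't ship the Cloudflare DNS module, so you build a small custom image first (the linked guide walks through it). The compose side is roughly this sketch; the image name, paths, and token variable are placeholders:

caddy:
    container_name: caddy
    image: my-caddy-cloudflare:latest   # placeholder: a custom build that includes the caddy-dns/cloudflare module
    restart: unless-stopped
    ports:
      - 80:80
      - 443:443
    environment:
      # variable name is your choice; the Caddyfile's tls block references it
      - CF_API_TOKEN=${CF_API_TOKEN}
    volumes:
      - /volume1/docker/caddy/Caddyfile:/etc/caddy/Caddyfile:ro
      - /volume1/docker/caddy/data:/data
      - /volume1/docker/caddy/config:/config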

[–] schmurnan@lemmy.world 1 points 1 year ago

Thanks.

I guess the issue with this, though, is that I don’t always need to access it via Tailscale - I’d only do that when away from home. Perhaps there’s a way to point a subdomain to the Tailscale IP, and that’s only accessible when Tailscale is active? And then use an alternative subdomain to access it the rest of the time? Is that achievable?

[–] Kangie@lemmy.srcfiles.zip 2 points 1 year ago* (last edited 1 year ago) (1 children)

You're on the right track. I'm on mobile so will be brief; I'll edit from a laptop in a while.

You can use subdomains, which is my preferred way of making services work with Traefik, but you could also use a path like example.com/potato to get to the potato service; this may work better with DDNS.

Edit: each subdomain needs to be updated; you might be able to get away with making them all CNAMEs that point at the DDNS record.

You're correct in your assessment that you only expose 80 and 443 for the Traefik container and access everything else through that. Also only use 80 to redirect to 443.

Don't expose the NAS directly to the web; instead, look at port forwarding on your router - it should be able to forward only requests received on 80 and 443 to the NAS while still blocking everything else.

My only complaint about Synology stuff is that I couldn't get Traefik in swarm mode going!

Any questions reach out.

Edit2: consider looking at a cheap VPS or a static IP to eliminate the requirement to expose your NAS directly to the web. Alternatively, run internal DNS for your stuff (including SSL certs from Let's Encrypt) and VPN in (I use WireGuard) when you want to access it.

[–] schmurnan@lemmy.world 1 points 1 year ago (1 children)

Thanks. Yep, subdomains was what I’d planned on: traefik.mydomain.com to access the Traefik dashboard; home.mydomain.com to access the Homepage container. I was planning on spinning up an Authelia container as well to provide 2FA for the services I want protecting. I guess it’d also be nice to have some kind of landing page for traffic coming directly to www.mydomain.com or mydomain.com as well.

Ideally I don’t want to port forward, so would I need to rely on Traefik to redirect the traffic from port 80 to port 443, and then proxy from port 443 to the required container? How do I therefore stop traffic from hitting the DSM admin on ports 5000/5001 for example?

I need to figure out a starting point to get traffic from my domain into my NAS (safely) then start spinning up containers and have Traefik route them appropriately, then I can look at Pi-hole/local DNS and Tailscale. And then I guess SSL.

[–] Kangie@lemmy.srcfiles.zip 1 points 1 year ago (1 children)

Ideally I don’t want to port forward, so would I need to rely on Traefik to redirect the traffic from port 80 to port 443, and then proxy from port 443 to the required container? How do I therefore stop traffic from hitting the DSM admin on ports 5000/5001 for example?

That's not quite how it works - the port forwarding is on your internet gateway to allow traffic on those ports to a specific host internal to your network. That's your only option if you want these services to be available on the wider web.

My recommendation around using 80 to redirect to 443 is because in 2023 there's no reason for that traffic to be unencrypted - just listen on 80 and say "Hey, go to https://example.com" instead.

If you don't care about that you can do internal only DNS + VPN into the network and still get the benefits of free SSL certificates via the LetsEncrypt DNS01 challenge.
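
For reference, that 80-to-443 redirect can live entirely in Traefik's static config as an entryPoint redirection; a minimal sketch, assuming entrypoints named web and websecure:

entryPoints:
  web:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: websecure
          scheme: https
  websecure:
    address: ":443"

With that in place, anything hitting port 80 just gets bounced to HTTPS and nothing is served unencrypted.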

[–] schmurnan@lemmy.world 1 points 1 year ago

Thanks, and yeah sorry, what I meant was to listen on both ports 80 and 443 and have a redirect in Traefik from 80 to 443 - I don't plan on having anything directly accessible over port 80.

As per another post, I've hit a stumbling block:

OK so made a start with this. Spun up a Pi-hole container, added mydomain.com as an A record in Local DNS, and created a CNAME for traefik.mydomain.com to point to mydomain.com.

In Cloudflare, I removed the mydomain.com A record and the www CNAME record.

Doing an nslookup on mydomain.com I get

Non-authoritative answer:
*** Can't find mydomain.com: No answer

Which I guess is to be expected.

However, when I then navigate to http://traefik.mydomain.com in my browser, I’m met with a Cloudflare error page: https://imgur.com/XhKOywo.

Below is the docker-compose of my traefik container:

traefik:
    container_name: traefik
    image: traefik:latest
    restart: unless-stopped
    networks:
      - medianet
    ports:
      - 80:80
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /volume1/docker/traefik:/etc/traefik
      - /volume1/docker/traefik/access.log:/logs/access.log
      - /volume1/docker/traefik/traefik.log:/logs/traefik.log
      - /volume1/docker/traefik/acme/acme.json:/acme.json
    environment:
      - TZ=Europe/London
    labels:
      - traefik.enable=true
      - traefik.http.routers.traefik.rule=Host(`$TRAEFIK_DASHBOARD_HOST`) && (PathPrefix(`/api`) || PathPrefix(`/dashboard`))
      - traefik.http.routers.traefik.service=api@internal

My traefik.yml is also nice and basic at this point:

global:
  sendAnonymousUsage: false

entryPoints:
  web:
    address: ":80"

api:
  dashboard: true
  insecure: true

providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    watch: true
    exposedByDefault: false

log:
  filePath: traefik.log
  level: DEBUG

accessLog:
  filePath: access.log
  bufferingSize: 100

Any ideas what’s going wrong? I’m unclear on why the domain is still routing to Cloudflare.

[–] dm_me_your_feet@lemmy.world 2 points 1 year ago* (last edited 1 year ago) (1 children)

Easiest Solution imo:

  • Get a wildcard DNS record and point it to the public IP of your NAS
  • Deploy the SSL cert (covering your main domain and the subdomains for your Docker containers)
  • Configure the reverse proxy in the Synology config, proxying requests for the subdomains to your Docker containers (you can enforce local-only access to certain services too)
  • Use a static route or local DNS (Pi-hole) to redirect local requests for your public IP to the private IP of your NAS
  • done!
[–] schmurnan@lemmy.world 1 points 1 year ago

Thanks, I’d like to know more about how to go about this approach.

I guess in my head, I want to achieve the following (however I go about it):

  • Access https://mydomain.com from outside my network and hit some kind of blank page that wouldn’t necessarily suggest to the public that anything exists here
  • Access https://mydomain.com from inside my network and hit a login page of some kind (Authelia or otherwise), to then gain access to the Homepage container running in Docker (essentially a dashboard to all my services)
  • Access https://secure.mydomain.com from outside my network and route through to the same as above, only this would be via the Tailscale IP address/container running on my stack to allow for remote access
  • Route all HTTP requests to HTTPS
  • Use the added protection that Cloudflare brings (orange clouds where possible)
  • SSL certificates for all services
  • Ability to turn up extra Docker containers and auto-obtain SSL certs for them
  • Ensure that everything else on my NAS and network is secure/inaccessible other than the services I expose through Traefik

I have no idea where Cloudflare factors in (if at all), nor how Pi-hole factors in (if at all).

Internal stuff I’ve been absolutely fine with. Stick a domain name, a reverse proxy and DNS in front of me and it’s like I’m learning how to code a Hello World app all over again.

[–] MangoPenguin@lemmy.blahaj.zone 1 points 1 year ago (1 children)

How would Pi-hole and local DNS fit into this?

Pihole/local DNS would resolve all your queries when on your local network. So you would add the A/CNAME records for your services there with local IPs.
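
You can add those through the Pi-hole web UI (Local DNS), or, on Pi-hole v5, drop them straight into its config files; roughly, assuming the NAS is 192.168.1.116 (exact file names may vary by version):

# /etc/pihole/custom.list - local A records, hosts-file format
192.168.1.116 mydomain.com

# /etc/dnsmasq.d/05-pihole-custom-cname.conf - local CNAMEs, dnsmasq syntax
cname=traefik.mydomain.com,mydomain.com
cname=home.mydomain.com,mydomain.com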

but what do I need to setup initially in Cloudflare?

Nothing if you just want local usage of the domain name; queries never hit Cloudflare. But you do want the domain at least added to Cloudflare so you can issue SSL certs using Let's Encrypt and its DNS-01 challenge.

What do other people see (i.e. outside my network) if they go to my domain? How do I ensure they can’t access my NAS and see some kind of page?

If you don't open ports on your firewall they wouldn't have any access. Otherwise if you do open the web ports, they generally go to a reverse proxy running somewhere that routes traffic as needed, so you could choose to display some kind of page or just show nothing.

I also have a RPi that has a (very basic) website on it - how do I setup an A record to have Cloudflare point a sub-domain to the Pi’s IP address?

You would need a reverse proxy running either on the Pi or on the NAS that cloudflare points to, then that proxy takes the subdomain and routes it to the appropriate internal IP/service.
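
With Traefik, routing to a non-Docker host like the Pi is typically done with the file provider; a rough sketch of a dynamic config file, assuming the Pi sits at 192.168.1.50 and an entrypoint called websecure:

http:
  routers:
    rpi-site:
      rule: "Host(`pi.mydomain.com`)"
      entryPoints:
        - websecure
      service: rpi-site
  services:
    rpi-site:
      loadBalancer:
        servers:
          - url: "http://192.168.1.50:80"

You'd also need the file provider enabled in traefik.yml (pointing at this file) alongside the docker provider.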

[–] schmurnan@lemmy.world 2 points 1 year ago (1 children)

Thanks. There’s definitely stuff in here I want to do, I just need to figure out the order of play and break it down a bit.

As per reply to another comment.

Do I have to port forward 80 and 443 no matter what? Ideally I don’t want to forward anything.

Do I need DDNS in here somewhere, i.e. create a DDNS and link it to my NAS, create an A record in Cloudflare to point my domain to the external IP of the DDNS? Is that how I get into my NAS from the domain without worrying about the IP changing? How do I then prevent anybody accessing the NAS admin on port 5000/5001, as well as anything else except the containers I expose via Traefik?

[–] MangoPenguin@lemmy.blahaj.zone 1 points 1 year ago (1 children)

Do I have to port forward 80 and 443 no matter what? Ideally I don’t want to forward anything.

You only need to port forward if you want external access without using a VPN or something like that. Like if you wanted friends to be able to access your server for example.

Do I need DDNS in here somewhere, i.e. create a DDNS and link it to my NAS, create an A record in Cloudflare to point my domain to the external IP of the DDNS?

Yes, but only if you want to port forward and have external access. If you want local access only then you don't need port forwarding, DDNS, or any A records in cloudflare.

How do I then prevent anybody accessing the NAS admin on port 5000/5001, as well as anything else except the containers I expose via Traefik?

Assuming you did port forward 80/443, then the NAS admin wouldn't be exposed since it's on different ports.

[–] schmurnan@lemmy.world 1 points 1 year ago (1 children)

Thanks. I realise they’re all pretty basic questions. But brace yourself: more are on their way!

So… no, I don’t want to give external access - I’m not running any services that anyone would want/need access to - other than perhaps my Jellyfin server, but not sure I even want anyone accessing that. So let’s assume for right now, no access to the outside world. Therefore, no port forwarding required.

So to get access to my internal network from the domain, do I simply set up local DNS records in something like Pi-hole, to point mydomain.com to the internal IP of my NAS? Kind of like a network-wide equivalent of modding the /etc/hosts file on my machine?

Perhaps a(nother) silly question but, what’s to stop me doing that now with a completely random domain name? Is there some kind of authentication I’d need to go through to prove that mydomain.com is, in fact, mine? Or does it simply not matter since it’s internal only?

If I’ve understood correctly, then, I don’t need Cloudflare at all in my setup if there’s no external access? Nothing to proxy, nothing to protect?

Assuming I get all of the above working and traffic routing to my containers, how would I then go about setting up SSL? Can that be done through Traefik rather than Cloudflare? Even if the domain isn’t external?

[–] MangoPenguin@lemmy.blahaj.zone 2 points 1 year ago* (last edited 1 year ago) (1 children)

do I simply set up local DNS records in something like Pi-hole, to point mydomain.com to the internal IP of my NAS? Kind of like a network-wide equivalent of modding the /etc/hosts file on my machine?

Yep exactly!

Perhaps a(nother) silly question but, what’s to stop me doing that now with a completely random domain name?

Nothing, it's local to your network only so it only affects you. You could set google.com to return whatever IP you want for example, but it would prevent you from actually accessing google.

If I’ve understood correctly, then, I don’t need Cloudflare at all in my setup if there’s no external access? Nothing to proxy, nothing to protect?

The only thing you need Cloudflare (or another DNS-01 supported service) for, is getting letsencrypt SSL certificates. Since it uses automatically generated public DNS records on your domain name to verify that you own it.

Can that be done through Traefik rather than Cloudflare? Even if the domain isn’t external?

Yep it's done through Traefik either way, their docs should have a section on SSL with cloudflare IIRC.
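
In Traefik that boils down to a certificates resolver in the static config plus a Cloudflare API token in the container's environment; a minimal sketch, with the resolver name and email as placeholders:

certificatesResolvers:
  letsencrypt:
    acme:
      email: you@example.com            # placeholder
      storage: /acme.json
      dnsChallenge:
        provider: cloudflare

The Cloudflare provider reads CF_DNS_API_TOKEN (or CF_API_EMAIL/CF_API_KEY) from the environment, and each router opts in with a label like traefik.http.routers.myservice.tls.certresolver=letsencrypt.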

[–] schmurnan@lemmy.world 1 points 1 year ago (1 children)

Absolute superstar, thanks for your help so far. I’ll make a start on some of this tomorrow and see how far I get — either with Traefik or NPM.

Do I need to do anything with the domain itself on Cloudflare at the moment? Or do I just leave it with its current A record pointing at an IP address (it was done as part of the setup in Cloudflare so I have no idea what that IP address is).

Obviously that domain in reality will just sit there doing nothing.

[–] MangoPenguin@lemmy.blahaj.zone 1 points 1 year ago (1 children)

Yeah you can just leave it, delete the A record if you want to.

[–] schmurnan@lemmy.world 1 points 1 year ago (1 children)

OK so made a start with this. Spun up a Pi-hole container, added mydomain.com as an A record in Local DNS, and created a CNAME for traefik.mydomain.com to point to mydomain.com.

In Cloudflare, I removed the mydomain.com A record and the www CNAME record.

Doing an nslookup on mydomain.com I get

Non-authoritative answer:
*** Can't find mydomain.com: No answer

Which I guess is to be expected.

However, when I then navigate to http://traefik.mydomain.com in my browser, I'm met with a Cloudflare error page: https://imgur.com/XhKOywo.

Below is the docker-compose of my traefik container:

traefik:
    container_name: traefik
    image: traefik:latest
    restart: unless-stopped
    networks:
      - medianet
    ports:
      - 80:80
      - 443:443
    expose:
      - 8080
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /volume1/docker/traefik:/etc/traefik
      - /volume1/docker/traefik/access.log:/logs/access.log
      - /volume1/docker/traefik/traefik.log:/logs/traefik.log
      - /volume1/docker/traefik/acme/acme.json:/acme.json
    environment:
      - TZ=Europe/London
    labels:
      - traefik.enable=true
      - traefik.http.routers.traefik.rule=Host(`$TRAEFIK_DASHBOARD_HOST`) && (PathPrefix(`/api`) || PathPrefix(`/dashboard`))
      - traefik.http.routers.traefik.service=api@internal
      - traefik.http.routers.traefik.entrypoints=traefik

My traefik.yml is also nice and basic at this point:

global:
  sendAnonymousUsage: false

entryPoints:
  web:
    address: ":80"
  traefik:
    address: "8080"

api:
  dashboard: true
  insecure: true

providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    watch: true
    exposedByDefault: false

log:
  filePath: traefik.log
  level: DEBUG

accessLog:
  filePath: access.log
  bufferingSize: 100

Any ideas what's going wrong? I'm unclear on why the domain is still routing to Cloudflare.

[–] MangoPenguin@lemmy.blahaj.zone 1 points 1 year ago (1 children)

It sounds like your client isn't using Pi-hole for DNS; do you see the DNS lookup come through the Pi-hole logs?

[–] schmurnan@lemmy.world 1 points 1 year ago (2 children)

Actually, no I don't see anything coming through.

So the IP address of my router is 192.168.1.1, IP of my NAS is 192.168.1.116.

Checked the DNS on my Mac and it's 192.168.1.1. Checked the DNS on my NAS and it's 192.168.1.1. I changed the DNS in my router to 192.168.1.116.

Have I missed a step somewhere?

[–] MangoPenguin@lemmy.blahaj.zone 1 points 1 year ago (1 children)

It sounds like you haven't updated your router's DHCP server to hand out the Pi-hole IP to clients. You can manually set the DNS server to the Pi-hole IP on your Mac for testing too.

The flow should be: Clients > Pihole > Router > Public DNS

Or you can skip the router: Clients > Pihole > Public DNS

[–] schmurnan@lemmy.world 1 points 1 year ago* (last edited 1 year ago) (1 children)

I wasn't planning on using Pi-hole for DHCP - I have a LOT of reserved addresses on my network and I don't fancy having to move them all over. My hope had been to use Pi-hole for DNS but keep the DHCP reservation with the router.

I've manually updated the DNS on my Mac to 192.168.1.116 and I can now access the Traefik dashboard via http://traefik.mydomain.com:8080 (so, getting there). So some kind of issue with the DNS on my router I think - caching maybe?

[–] MangoPenguin@lemmy.blahaj.zone 1 points 1 year ago (1 children)

Yeah that's fine, you just need to change the DHCP settings on your router so it gives the Pihole IP for DNS. It's possible some routers don't allow that though.

[–] schmurnan@lemmy.world 1 points 1 year ago (1 children)

Figured it out. It’s a weird setting on Netgear routers whereby you have to also update the MAC address. All been working well for the last few hours and getting queries running through Pi-hole.

I’ve also got my Homepage container setup at http://home.mydomain.com and configured Traefik a little further so it’s now accessible from http://traefik.mydomain.com (no port).

For the past few hours I’ve been struggling with getting Pi-hole behind Traefik and accessible using http://pihole.mydomain.com. Only works if I stick /admin on the end, which defeats the object of using a subdomain. Found a forum post suggesting to use Traefik’s addPrefix after declaring the Host as pihole.mydomain.com, which works great for accessing the login screen, but when you enter the password it just loops back to the login screen.

Also tried a few other things that ultimately broke the Pi-hole container and took out my entire connection, as everything is dependent on Pi-hole for DNS! So I need to figure out some kind of resiliency/backup for that (my router is using the NAS IP as its primary and only DNS server).

So, some progress. I’ve set Pi-hole back to IP:port and I’m gonna focus on getting other containers behind Traefik and leave Pi-hole till last. Then and only then will I look at SSL certificates (unless it’s advised to do it earlier?)

Any pointers on any of the above would be appreciated! And thanks again for getting me this far.

[–] schmurnan@lemmy.world 1 points 1 year ago (1 children)

Update from this morning.

So far I've got the Traefik dashboard and my Homepage container using subdomains. Pi-hole is still an issue that I need to figure out.

I've decided to start on the SSL certificates and am following a couple of guides. Once I have those in place, I'll start moving more containers behind subdomains.

I might have to expose my NAS IP to the internet and link it via Cloudflare, because I use ExpressVPN on my Mac at all times, and when it's turned on I can't access any of my subdomains - this is obviously because ExpressVPN uses its own DNS servers and doesn't use the ones I've set. That will probably prevent me from using Vaultwarden (which is the whole purpose of all of this in the first place), because if I'm on the VPN I won't be able to access the Vaultwarden container.

Unless anyone knows of a workaround for that?

Next steps:

  • Get SSL working
  • Figure out how to access subdomains whilst on the VPN (or get a DDNS account, create an A record in Cloudflare and point it at the DDNS IP, and open up ports 80 and 443)
  • Spin up a Vaultwarden container via a subdomain
  • Put all my other services behind subdomains
  • Figure out how to get Pi-hole working via Traefik and subdomain
  • Figure out how to get Tailscale access to my containers when not on my LAN
[–] schmurnan@lemmy.world 1 points 1 year ago

Just a quick update on where I'm up to...

I've managed to get all my containers working behind the Traefik reverse proxy with SSL. I've also deployed a Cloudflare DDNS container in Docker and have linked the external IP address of my Synology NAS to Cloudflare. I haven't port forwarded 80 and 443, though, so it's not accessible over the internet. So I've added local DNS into Pi-hole so I can access all the containers using subdomains.

I've also deployed an Authelia container and have started running through my containers adding 2FA in front of them all.
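
For reference, the usual wiring for that is a Traefik forwardAuth middleware pointing at the Authelia container, attached to each router you want behind 2FA; a rough sketch of the labels (the container name, auth subdomain, and port are assumptions, and newer Authelia releases may use a different verify endpoint):

labels:
  # on the Authelia container: define the middleware
  - traefik.http.middlewares.authelia.forwardauth.address=http://authelia:9091/api/verify?rd=https://auth.mydomain.com
  - traefik.http.middlewares.authelia.forwardauth.trustForwardHeader=true
  - traefik.http.middlewares.authelia.forwardauth.authResponseHeaders=Remote-User,Remote-Groups,Remote-Name,Remote-Email
  # on each service you want protected: attach the middleware to that service's router
  - traefik.http.routers.myservice.middlewares=authelia@docker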

I should probably point out at this juncture that if I encounter any errors, the HTTP 404 page I get is a Cloudflare one - I assume that's expected behaviour?

So, the final three bits I'm struggling with now are:

  • Pi-hole behind the reverse proxy
  • Portainer behind the reverse proxy
  • Accessing Vaultwarden over the internet (because as soon as I leave my house, if the vault hasn't synced then I don't have access to all my passwords) - unless anybody has a better suggestion?

Portainer - I have no idea how I do it, because I use it to manage my containers, so don't have the config for Portainer in Portainer (obviously). So if I screw up the config, how am I getting back in to Portainer to fix it?

And the far more troubling one is Pi-hole. I just cannot get that thing working behind the reverse proxy.

I've followed a few different guides (though none of them are recent), and below is the latest docker-compose I have. It will bring up the login page, but when I log in it keeps returning me to the login page - it won't go to the main admin page.

version: "3.7"

services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    restart: unless-stopped
    networks:
      - medianet
      - npm_network
    ports:
      - 8008:80
      - 53:53/tcp
      - 53:53/udp
    environment:
      - TZ=Europe/London
      - WEBPASSWORD=xxxxxxxxxx
      - FTLCONF_LOCAL_IPV4=192.168.1.116
      - WEBTHEME=default-auto
      - DNSMASQ_LISTENING=ALL
      - VIRTUAL_HOST=pihole.mydomain.com
    volumes:
      - /path/to/pihole:/etc/pihole
      - /path/to/pihole/dnsmasq.d:/etc/dnsmasq.d
    cap_add:
      - NET_ADMIN
    labels:
      - traefik.enable=true
      - traefik.http.routers.pihole.entrypoints=http
      - traefik.http.routers.pihole.rule=Host(`pihole.mydomain.com`)
      - traefik.http.middlewares.pihole-https-redirect.redirectscheme.scheme=https
      - traefik.http.routers.pihole.middlewares=pihole-https-redirect
      - traefik.http.middlewares.pihole-addprefix.addprefix.prefix=/admin
      - traefik.http.routers.pihole.middlewares=pihole-addprefix
      - traefik.http.routers.pihole-secure.entrypoints=https
      - traefik.http.routers.pihole-secure.rule=Host(`pihole.mydomain.com`)
      - traefik.http.routers.pihole-secure.tls=true
      - traefik.http.routers.pihole-secure.service=pihole
      - traefik.http.services.pihole.loadbalancer.server.port=80

networks:
  medianet:
    external: true
  npm_network:
    external: true
[–] darelik@lemmy.world 1 points 1 year ago* (last edited 1 year ago) (1 children)

I think the pihole container needs to be on the host network or macvlan?

[–] schmurnan@lemmy.world 1 points 1 year ago* (last edited 1 year ago)

I've just added in a macvlan network to my Pi-hole compose as well, not sure if it's making any difference or not.
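
For the macvlan route, the compose side looks roughly like this sketch, assuming the NAS's LAN interface is eth0, the LAN is 192.168.1.0/24, and an unused IP is reserved for Pi-hole:

networks:
  pihole_macvlan:
    driver: macvlan
    driver_opts:
      parent: eth0                  # the NAS's LAN interface
    ipam:
      config:
        - subnet: 192.168.1.0/24

and on the pihole service:

    networks:
      pihole_macvlan:
        ipv4_address: 192.168.1.241   # an unused IP on the LAN

One known macvlan quirk is that the Docker host itself can't reach the container's macvlan IP without extra configuration, which matters if the NAS needs to use Pi-hole for its own DNS.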
