Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around self-hosting, please include details to make it clear.
- Don't duplicate the full text of your blog or GitHub here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues with the community? Report them using the report flag.
Questions? DM the mods!
This is standard, but often unwanted, behavior of Docker.
Docker creates a bunch of chain rules, but IIRC it doesn't modify the actual incoming rules (at least it doesn't for me); it just makes a chain rule for every internal Docker network so that all of the services can reach each other.
Yes, it is a security risk, but if you don't have all ports forwarded, someone would still have to breach your internal network first IIRC, and at that point you'd have many, many more problems than Docker.
I think from the devs' point of view (not that it is right or wrong), this is intended behavior, simply because if Docker didn't do this they'd get 1,000 issues opened per day from people saying containers don't work because they forgot to add a firewall rule for a new container.
An option to disable this behavior would be 100x better than the current situation, but what do I know lol
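If you want to see this for yourself, here is a minimal sketch (assuming an iptables-based setup and root access). Docker keeps its rules in its own chains rather than in the INPUT chain that ufw manages, and it reserves the DOCKER-USER chain for rules you add yourself:

```
# Chains Docker manages; published ports show up here as DNAT/ACCEPT rules
# in the FORWARD path, not in the INPUT chain that ufw controls.
sudo iptables -L DOCKER -n --line-numbers
sudo iptables -t nat -L DOCKER -n --line-numbers

# DOCKER-USER is evaluated before Docker's own forwarding rules, so rules
# you add here take effect for container traffic.
sudo iptables -L DOCKER-USER -n --line-numbers
```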
My problem with this is that, when running a public-facing server, it ends up with people exposing containers that really, really shouldn't be exposed.
Excerpt from another comment of mine:
It’s only docker where you have to deal with something like this:
Originally from here, edited for brevity.
Resulting in exposed services. Feel free to look at Shodan or ZoomEye, search engines for internet-connected devices, for exposed instances of this service. This service is highly dangerous to expose, as it gives people a way into your system via the Docker socket.
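If you do have to publish a port on a public-facing box, one way to restrict it is via the DOCKER-USER chain, which Docker evaluates before its own forwarding rules (unlike ufw's rules, which published ports bypass). A rough sketch; eth0 and 192.168.1.0/24 are placeholders for your public interface and trusted network:

```
# Drop forwarded traffic to containers unless it originates from the
# trusted network. Adjust the interface and CIDR to your environment.
sudo iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.0/24 -j DROP
```

Note this won't survive a reboot unless you persist it (e.g. with iptables-persistent or your distro's equivalent).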
So uh, I just spun up a VPS a couple of days ago, a few Docker containers, the usual security best practices... I used ufw to block everything and open only SSH and a couple of others, as that's what I've been told is all I need to do. Should I be panicking about my containers fucking with the firewall?
Docker will only have exposed container ports if you told it to.
If you used `-p 8080:80` (CLI) or `- 8080:80` (docker-compose), then Docker will have dutifully NAT'd those ports through your firewall. You can either not do either of those if it's a port you don't want exposed, or, as @moonpiedumplings@programming.dev says below, you can ensure it's only mapped to localhost (or an otherwise non-public) IP.
Thanks - more detailed reply below :)
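For completeness, a minimal sketch of the localhost-only mapping mentioned above (the image and ports are just examples): binding the published port to 127.0.0.1 keeps it off the public interface, so only processes on the host itself, such as a reverse proxy, can reach it.

```
# Reachable only from the host, not from the internet:
docker run -d -p 127.0.0.1:8080:80 nginx

# docker-compose equivalent:
#   ports:
#     - "127.0.0.1:8080:80"
```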