[–] jbloggs777@discuss.tchncs.de 1 points 2 months ago* (last edited 2 months ago)

wg-quick takes a different approach, using an ip rule to send all traffic (except its own) to a different routing table that only contains the wireguard interface. I topped it up with iptables rules to block everything except DNS and the wireguard UDP port on the main interface. I also disabled IPv6 on the main interface, to avoid any non-RFC1918 addresses appearing at all in the container (my setup runs in a container).
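
Roughly what I mean (a sketch; eth0, wg0 and port 51820 are assumptions for your main interface, tunnel interface and WireGuard port):

    # on the main interface, allow only DNS and the WireGuard UDP port out
    iptables -A OUTPUT -o eth0 -p udp --dport 51820 -j ACCEPT
    iptables -A OUTPUT -o eth0 -p udp --dport 53 -j ACCEPT
    iptables -A OUTPUT -o eth0 -p tcp --dport 53 -j ACCEPT
    iptables -A OUTPUT -o eth0 -j DROP
    # everything else leaves via the tunnel
    iptables -A OUTPUT -o wg0 -j ACCEPT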

edit: you can also do ip rule matching based on uid, so that you can force all non-root users onto your custom routing table.
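
Something like this, assuming wg-quick put its routes in table 51820 (check with ip rule show or wg show wg0 fwmark):

    # send traffic from every non-root uid via the wireguard routing table
    ip rule add uidrange 1-65535 lookup 51820 priority 100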

[–] jbloggs777@discuss.tchncs.de 1 points 2 months ago

It might be a simple issue like ip forwarding not being enabled, or host-level iptables configuration, or perhaps weird and wonderful routing (eg. wireguard or other VPNs).
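
A few quick things to check on the host, for example:

    # 1 means forwarding is enabled
    sysctl net.ipv4.ip_forward
    # enable it until the next reboot
    sudo sysctl -w net.ipv4.ip_forward=1
    # look for REJECT/DROP rules in the FORWARD chain
    sudo iptables -L FORWARD -n -v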

[–] jbloggs777@discuss.tchncs.de 1 points 2 months ago (2 children)

Your k3s/calico networking is likely screwed. Try creating a new cluster with flannel instead.
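
If you're on k3d, something like this gives you a fresh cluster with the default flannel backend (i.e. without any Calico customisation):

    k3d cluster create test-flannel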

[–] jbloggs777@discuss.tchncs.de 1 points 2 months ago (4 children)

Sorry - I totally misread this. You cannot access internet addresses. So it's a routing or NAT issue, most likely.

I assume you are using k3d for this, btw?

So... on the "server" (eg. docker exec -ti k3d-k3s-default-server-0 /bin/sh), you should be able to "ping 8.8.8.8" successfully.

If not, the issue may lie with your host's docker setup.
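
In that case, a couple of host-side checks worth doing (assuming the default iptables-based docker networking):

    # docker should have a MASQUERADE rule for the k3d bridge network
    sudo iptables -t nat -L POSTROUTING -n -v | grep -i masquerade
    # and the host itself should be able to reach the internet
    ping -c 3 8.8.8.8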

[–] jbloggs777@discuss.tchncs.de 1 points 2 months ago* (last edited 2 months ago) (1 children)

Do you have any NetworkPolicies configured that could block ingress (to kube-dns, in kube-system) or egress (in your namespace)? As soon as any ingress or egress NetworkPolicy matches a pod, that pod flips from allow-by-default to deny-by-default for that direction of traffic.
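
If that's what's happening, you'd need something like this in your namespace to allow DNS egress again (a sketch; the namespace name is a placeholder):

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-dns-egress
      namespace: my-namespace      # placeholder: your app namespace
    spec:
      podSelector: {}              # all pods in the namespace
      policyTypes:
        - Egress
      egress:
        - to:
            - namespaceSelector:
                matchLabels:
                  kubernetes.io/metadata.name: kube-system
          ports:
            - protocol: UDP
              port: 53
            - protocol: TCP
              port: 53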

You should also check kubectl get service and kubectl get endpoints in kube-system, as well as kubectl get pods -n kube-system | grep -i dns.
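
Concretely (the service is usually called kube-dns even when CoreDNS pods sit behind it):

    kubectl -n kube-system get service kube-dns
    kubectl -n kube-system get endpoints kube-dns
    kubectl -n kube-system get pods | grep -i dns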

[–] jbloggs777@discuss.tchncs.de 1 points 2 months ago* (last edited 2 months ago) (1 children)

Is the 404 page from Traefik or the backend service?
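
One quick way to tell (hostname and paths below are placeholders): compare the response headers of the failing request with a request that can't possibly match any route; if they look identical, the 404 is coming from Traefik itself rather than your backend.

    curl -sv -o /dev/null https://your-app.example.com/failing-path 2>&1 | grep '^< '
    curl -sv -o /dev/null https://your-app.example.com/definitely-not-a-route 2>&1 | grep '^< '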

[–] jbloggs777@discuss.tchncs.de 2 points 2 months ago* (last edited 2 months ago) (8 children)

I'd be surprised if it's still kube-dns... the service is still named kube-dns, but there will probably be CoreDNS pods behind it. To debug this, you should first make sure that you can resolve DNS by pointing directly at an external DNS server from a pod, and then from the node if that fails. eg. dig @1.1.1.1 google.com, or host google.com 1.1.1.1. It may be a routing/firewall/NAT issue rather than DNS, and this would help track that down.
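
For example, from a throwaway pod (10.43.0.10 is an assumption, the usual k3s ClusterIP for kube-dns; check with kubectl -n kube-system get svc kube-dns):

    # external DNS server first, to rule out routing/NAT
    kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- nslookup google.com 1.1.1.1
    # then against the cluster DNS service itself
    kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- nslookup kubernetes.default.svc.cluster.local 10.43.0.10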

[–] jbloggs777@discuss.tchncs.de 4 points 2 months ago (10 children)

Ok... so your actual issue is with CoreDNS, and you are asking here for a more complicated, custom, untested alternative?

What is your issue with CoreDNS?

[–] jbloggs777@discuss.tchncs.de 4 points 2 months ago (12 children)

You want to resolve *.cluster.local addresses from outside the cluster, on your LAN? That's only useful if you can also route to those addresses... right?

So... assuming you can route to them, you probably want to configure your PowerDNS server to forward requests for that zone to the CoreDNS (kube-dns) service in the cluster, which has a stable ClusterIP.
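
If it's the PowerDNS Recursor doing your LAN resolution, that's a forward-zones entry in recursor.conf along these lines (again, 10.43.0.10 is just the usual k3s default for kube-dns, so check it first):

    # recursor.conf: hand cluster.local queries to CoreDNS inside the cluster
    forward-zones=cluster.local=10.43.0.10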

[–] jbloggs777@discuss.tchncs.de 19 points 2 months ago (8 children)

My pranks were less destructive ... /ctcp nick +++ath0+++ ... it was amazing how often that worked. 🤣

[–] jbloggs777@discuss.tchncs.de 1 points 2 months ago

Did you find a solution?
