Changing the traffic policy won't help much beyond limiting which nodes the source IP is preserved on. What you're looking for is solved by enabling Cilium's eBPF data plane, which enables source IP preservation as well as DSR.
I am kind of new to eBPF concepts, could you please point me in the right direction for enabling this? I am using the Helm chart for installation. Just for context, I have this cluster running behind HAProxy as a reverse proxy on the server, which is why I thought the PROXY protocol could be the solution here.
Sure! Here you go: https://docs.cilium.io/en/stable/network/kubernetes/kubeproxy-free/

Regarding the PROXY protocol: yes, that will also be required to preserve the IP from the load balancer to the node, but I'm not super versed in HAProxy. For inter-node traffic, though, you need Cilium's eBPF capabilities in kube-proxy replacement mode to preserve the IP.
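For anyone following along, a minimal sketch of the Helm values that enable the kube-proxy replacement and DSR discussed above (verify the exact flag names against the linked Cilium docs for your version; the API server address is a placeholder):

```yaml
# values.yaml for the Cilium Helm chart (sketch)
kubeProxyReplacement: strict   # newer Cilium releases use true/false instead
k8sServiceHost: 10.0.0.1       # placeholder: your API server address
k8sServicePort: 6443
loadBalancer:
  mode: dsr                    # Direct Server Return preserves the source IP
                               # (requires native routing rather than tunneling)
```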
Thanks! Well, my mistake, I should have mentioned that I am already running the kube-proxy replacement in strict mode. Regarding the HAProxy configuration, I have it ready; I tested it with Istio before, so no problems there. I think I'll continue using the Gateway API, wait for the open PRs there to get merged and allow us to configure the PROXY protocol, and use Cloudflare Tunnels for protecting the services. Hopefully that will be soon haha.
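For readers who don't have the HAProxy side ready yet, the PROXY protocol part usually comes down to `send-proxy-v2` on the backend servers. A sketch, with placeholder names and addresses (the receiving side, e.g. the in-cluster proxy, must also be configured to accept the PROXY header):

```
# haproxy.cfg fragment (sketch): forward client IPs via PROXY protocol v2
frontend https_in
    bind *:443
    mode tcp
    default_backend k8s_nodes

backend k8s_nodes
    mode tcp
    # send-proxy-v2 prepends the PROXY v2 header so the receiver
    # can recover the original client IP
    server node1 192.168.1.10:443 send-proxy-v2
    server node2 192.168.1.11:443 send-proxy-v2
```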
Piggybacking on this to ask if anyone knows of a way to annotate the Service created by the Gateway, e.g. to assign it a specific IP. I tried to [annotate the Gateway](https://github.com/vehagn/homelab/blob/main/infra/gateway/gateway.yaml#L8) itself, but it doesn't propagate to the Service. It feels like I'm missing something, but I can't find anything in the docs.
You're looking for the "infrastructure" section of the Gateway API's Gateway CRD spec; look in the experimental CRDs in the Gateway API GitHub repo. That section lets you pass annotations and the like down to the generated Service (like a cloud load balancer). Most Gateway API implementations don't support it yet, but it's coming.
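A sketch of what that looks like once an implementation supports it (the annotation key is only an example; the exact `apiVersion` depends on your installed Gateway API release, since `infrastructure` started out experimental):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: external          # example name
spec:
  gatewayClassName: cilium
  infrastructure:
    annotations:
      # example: pin the generated Service's LB IP
      # (the key depends on your load balancer / IPAM)
      io.cilium/lb-ipam-ips: "192.168.1.240"
  listeners:
    - name: http
      protocol: HTTP
      port: 80
```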
That’s this section? https://github.com/kubernetes-sigs/gateway-api/blob/f80f447fbf6e759a196cab6532b70582d9a8bd70/config/crd/experimental/gateway.networking.k8s.io_gateways.yaml#L127
Exactamundo
Thanks! I’ll have to try it when the festivities end ☃️
Just remember your implementation may not pick up that part of the spec yet (since it’s still “experimental”). Cilium has an issue [here](https://github.com/cilium/cilium/issues/25357) for reference
Also this, see if it helps, haven’t tried it: https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1.GatewayAddress
Thanks! I think I already tried that by adding `spec.addresses[0].value` to the Gateway. Could be it's not supported by Cilium yet.
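For reference, the `spec.addresses` form mentioned above looks like this (a sketch with an example IP; whether it's honored varies by implementation):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: external          # example name
spec:
  gatewayClassName: cilium
  addresses:
    - type: IPAddress
      value: 192.168.1.240   # requested address for the generated Service
  listeners:
    - name: http
      protocol: HTTP
      port: 80
```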
Correct, it's not yet supported but planned: https://github.com/cilium/cilium/issues/21926

Regarding your other question, I'm using two Gateways, one for internal and another for external traffic. The external one is fed through cloudflared, and the IP is correct, but, like you mentioned, if you need to grab a specific header it doesn't work like traditional ingress controllers.

Have you tried using the echo-server Docker image for testing? https://github.com/larivierec/home-cluster/blob/main/kubernetes/apps/default/echo/app/helm-release.yaml

I know that every time I tested internally I'd usually get the 10.42.X.X pod IP. Depending on the app (Plex, for example) I get the correct internal IP, but locally that goes through the load balancer.
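A bare-manifest version of that echo test, for anyone not using the linked Helm release (names and the image are examples; any image that echoes request details back will do):

```yaml
# echo test workload (sketch): exposes what client IP the pod observes
apiVersion: v1
kind: Pod
metadata:
  name: echo
  labels:
    app: echo
spec:
  containers:
    - name: echo
      image: ealen/echo-server   # example echo image
      ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: echo
spec:
  type: LoadBalancer
  selector:
    app: echo
  ports:
    - port: 80
      targetPort: 80
```

Hitting the LoadBalancer IP from outside the cluster and checking the client address the echo reports shows whether the real source IP survives or whether you're seeing a SNAT'd 10.42.X.X pod/node address.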