I have a service that looks like this:
apiVersion: v1
kind: Service
metadata:
  annotations:
    oci-network-load-balancer.oraclecloud.com/is-preserve-source: "true"
    oci.oraclecloud.com/load-balancer-type: nlb
    oci.oraclecloud.com/security-rule-management-mode: NSG
    oci-network-load-balancer.oraclecloud.com/external-ip-only: "true"
    oci-network-load-balancer.oraclecloud.com/subnet: "ocid1.subnet.oc1.iad.aaaaaaaanxrf5b6rXXXX"
  name: traefik
  namespace: traefik
spec:
  allocateLoadBalancerNodePorts: false
  externalTrafficPolicy: Local
  healthCheckNodePort: 21450
  internalTrafficPolicy: Local
  ipFamilies:
  - IPv4
  - IPv6
  ipFamilyPolicy: PreferDualStack
  ports:
  - name: web
    nodePort: 8000
    port: 80
    protocol: TCP
    targetPort: web
The assigned subnet is a /64 that is not part of the hosts' networks. When the NLB health-checks the port, it uses the address assigned to the NLB's VIP as its source. However, that same address gets assigned to each host's kube-ipvs0 interface so the host can recognize it for DSR. The result is health check attempts that look like this:
18:41:11.000556 enp0s6 In IP6 2603:XXXX:XXXX:bb02:0:5aa5:e12a:e201.61362 > 2603:XXXX:XXXX:bb01:7731:5fdf:3e71:272e.21450: Flags [S], seq 4140863142, win 64800, options [mss 1440,sackOK,TS val 3747564428 ecr 0,nop,wscale 7], length 0
18:41:11.000644 lo In IP6 2603:XXXX:XXXX:bb01:7731:5fdf:3e71:272e.21450 > 2603:XXXX:XXXX:bb02:0:5aa5:e12a:e201.61362: Flags [S.], seq 1310987368, ack 4140863143, win 65464, options [mss 65476,sackOK,TS val 2049806895 ecr 3747564428,nop,wscale 7], length 0
18:41:11.000657 lo In IP6 2603:XXXX:XXXX:bb02:0:5aa5:e12a:e201.61362 > 2603:XXXX:XXXX:bb01:7731:5fdf:3e71:272e.21450: Flags [R], seq 4140863143, win 0, length 0
The bb02 subnet is the /64 for the LB and the bb01 subnet is the /64 the hosts reside on. The health check fails because the SYN-ACK cannot be routed back out the ethernet device when its destination address is assigned to the host itself: the kernel delivers it locally instead (note the lo interface above), and the connection is reset.
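The failure hinges on the kernel treating any address assigned to any interface (including kube-ipvs0) as local. A quick way to check whether the kernel considers an address locally assigned is to try binding to it; here is a minimal sketch, using ::1 as a stand-in since every locally-assigned IPv6 address behaves the same way:

```python
import socket

def is_local_ipv6(addr: str) -> bool:
    """Return True if the kernel considers addr locally assigned.

    bind() succeeds only for addresses present on some interface --
    the same property that makes a node intercept the NLB health
    check's SYN-ACK instead of routing it back out the ethernet device.
    """
    s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    try:
        s.bind((addr, 0))
        return True
    except OSError:
        return False
    finally:
        s.close()

# ::1 is always local; on an affected node, passing the NLB VIP
# (added to kube-ipvs0 for DSR) would also return True.
print(is_local_ipv6("::1"))
```

On an affected node, the same check against the VIP address from the capture would return True, confirming why the reply never leaves the host.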
IPv4 works fine because the load balancer has a private IP address and uses that as its health-check source address. IPv6 has no private/public address distinction, so the VIP is the only address available. Could the NLB be assigned two IPv6 addresses: one for the VIP and one for the health-check source (which would not end up getting added to the ipvs interface)? At the end of the day, what I want is an NLB DSR solution that works for IPv6.