In a newly set up cluster everything works fine, except for a service that uses UDP as its transport. TCP-based services work perfectly in the cluster. A little bit of tcpdump and Linux networking knowledge reveals the problem and points to the solution.
UDP Echo Service in the Cluster
A node in the cluster has the IP address 192.168.56.17. The cluster is configured with the virtual IP address 192.168.56.16, so all services should be reachable under the virtual address. All TCP-based services work fine, but one UDP-based service causes trouble: the clients simply do not receive any answer.
For a demonstration I use the echo service (port UDP/7) which is started outside of the cluster. A
$ echo "Hello" | nc -u 192.168.56.16 7
does not get an answer; at least there is no echo on the client's screen.
A tcpdump on the line between the client and the server shows the problem:
# tcpdump -nt port 7
IP 192.168.56.24.35466 > 192.168.56.16.echo: UDP, length 6
IP 192.168.56.17.echo > 192.168.56.24.35466: UDP, length 6
The node generates the answer UDP packet with its own IP address as the source. And of course the client drops this answer packet, since it originates from a different IP address than the one the request was sent to. A look into the routing table of the cluster node shows why it uses the dedicated address as the source for outgoing traffic:
# ip route list
192.168.56.0/24 dev eth0 scope link src 192.168.56.17
default via 192.168.56.1 dev eth0 src 192.168.56.17
The routing table tells the kernel to use the dedicated IP address of the node as the source address for any packet generated on the host. Since the UDP answer originates from the node, the kernel consults the routing table for the address to use and gets the dedicated address.
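You can ask the kernel directly which source address it would pick for a given destination. The client address 192.168.56.24 below is taken from the capture above; with the routing table shown, the src field reports the node's dedicated address:

```shell
# Ask the kernel which route and source address it would use
# for a packet to the client:
ip route get 192.168.56.24
# With the routing table above, the output contains
# "src 192.168.56.17" -- the dedicated address, not the virtual IP.
```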
With TCP connections you do not have that problem, since TCP is stateful. The client starts the TCP session, the cluster node answers the TCP-SYN packet with a TCP-SYN/ACK packet, and afterwards both sides know the state of the connection. That state includes the source and destination IP addresses, so the node answers from the virtual address the client connected to.
In a cluster that offers a UDP service it is not enough to configure an IPaddr2 resource. You also need an IPsrcaddr resource. This resource alters the routing table so the node knows it has to use the virtual IP address when it generates packets:
# ip route list
192.168.56.0/24 dev eth0 scope link src 192.168.56.16
default via 192.168.56.1 dev eth0 src 192.168.56.16
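What the resource agent does can be approximated by hand with iproute2; this is only a sketch of the effect, not the agent's actual implementation:

```shell
# Roughly what IPsrcaddr achieves: change the preferred source
# address on the affected routes to the virtual IP.
ip route change 192.168.56.0/24 dev eth0 scope link src 192.168.56.16
ip route change default via 192.168.56.1 dev eth0 src 192.168.56.16
```

Doing this manually is only useful for testing; in production the cluster must manage it, so the change moves with the virtual IP on failover.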
Of course the IPsrcaddr resource has to be started together with the virtual IP address. You can do this by combining both resources in a group.
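With crmsh this could look as follows; the resource and group names (vip, vip-src, vip-group) are placeholders, while ip, cidr_netmask, and ipaddress are the parameters of the respective resource agents:

```shell
# Virtual IP address managed by the cluster:
crm configure primitive vip ocf:heartbeat:IPaddr2 \
    params ip=192.168.56.16 cidr_netmask=24
# Resource that sets the virtual IP as the preferred source address:
crm configure primitive vip-src ocf:heartbeat:IPsrcaddr \
    params ipaddress=192.168.56.16
# Group them so they always run together, in this order:
crm configure group vip-group vip vip-src
</imports>
```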