We have a vSphere cluster consisting of two identical servers running ESXi 6.7U2.
Each server has three 1Gbps uplinks. The uplinks have public IP addresses and are connected to the ISP's top-of-rack switch.
Two of the uplinks carry traffic in and out of ESXi itself (largely for redundancy); the third uplink on each machine carries the external IP address of a VM-based router, to which a different /24 is routed per server. The router VMs then make the two /24s available to both cluster members via two Distributed Switches, one Distributed Switch per /24.
That way, both servers can draw addresses from either /24 pool as needed.
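For reference, this is roughly how I would enumerate that layout with pyVmomi. It's just a sketch; the vCenter address, credentials, and all names in it are placeholders rather than our real ones:

```python
# Rough pyVmomi sketch of how I would list the current layout.
# The vCenter address, credentials, and any names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

# The two Distributed Switches (one per /24) and their port groups.
dvs_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
for dvs in dvs_view.view:
    print("DVS:", dvs.name)
    for pg in dvs.portgroup:
        print("  portgroup:", pg.name)

# Per host: which physical NICs back each DVS proxy switch.
host_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in host_view.view:
    print("Host:", host.name)
    for proxy in host.config.network.proxySwitch:
        print("  %s: pnics=%s" % (proxy.dvsName, list(proxy.pnic or [])))

Disconnect(si)
```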
The servers have a second set of NICs that offer 2x 40Gbps connectivity and are directly connected via DAC cables.
How do we need to configure those interconnect NICs and ESXi to ensure that inter-server traffic from VMs attached to the Distributed Switches only runs through the 2x 40Gbps NICs and never takes a "detour" through the slower 1Gbps side (and its associated vSwitch)?
I read quite a few docs, watched a lot of how-to videos, and searched Google, but I can't figure out how to actually enforce such routing.
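For what it's worth, with something like the sketch below I can at least see which uplinks each distributed port group is configured to use, but that only shows the teaming configuration; it doesn't tell me how to pin the inter-host VM traffic to the 40Gbps side (again, address, credentials, and names are placeholders):

```python
# Sketch (pyVmomi, placeholder address/credentials): dump the active/standby
# uplinks of every distributed port group, i.e. which physical path the VM
# traffic is currently allowed to take. This only shows the teaming config;
# it doesn't enforce anything, which is exactly what I'm missing.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

pg_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
for pg in pg_view.view:
    teaming = getattr(pg.config.defaultPortConfig, "uplinkTeamingPolicy", None)
    if teaming is not None and teaming.uplinkPortOrder is not None:
        print(pg.name,
              "active:", teaming.uplinkPortOrder.activeUplinkPort,
              "standby:", teaming.uplinkPortOrder.standbyUplinkPort)

Disconnect(si)
```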
Finally, what IP addresses should be used for the interconnect NICs? I presume that if we use IP addresses out of the two /24s, traffic from one /24 to the other would be routed out via one router VM and back in via the other router VM, and therefore again be bottlenecked.
Or do we use private IP addresses on each of the 40Gbps NICs and then somehow force a route between the /24s over the interconnect NICs from the ESXi command line?
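To make that last question concrete, this is roughly what I imagine it would look like per host, sketched with pyVmomi. The vmnic names, the 10.99.0.0/24 subnet, and the vSwitch/port-group names are all made up by me, and I don't know whether this is the right approach at all:

```python
# Sketch of my guess (pyVmomi, hypothetical names): put a private IP on the
# 40Gbps side by creating a dedicated standard vSwitch bound to those NICs
# and a VMkernel adapter with an address from a private subnet, on each host.
# Whether this helps VM-to-VM traffic at all is exactly what I'm asking.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi-a.example.local", user="root",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
netsys = host.configManager.networkSystem

# Standard vSwitch backed only by the two 40Gbps NICs (vmnic names are guesses).
vss_spec = vim.host.VirtualSwitch.Specification(
    numPorts=128,
    mtu=9000,
    bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic4", "vmnic5"]))
netsys.AddVirtualSwitch(vswitchName="vSwitch-interconnect", spec=vss_spec)

# Port group for the interconnect on that vSwitch.
pg_spec = vim.host.PortGroup.Specification(
    name="pg-interconnect", vlanId=0,
    vswitchName="vSwitch-interconnect", policy=vim.host.NetworkPolicy())
netsys.AddPortGroup(portgrp=pg_spec)

# VMkernel adapter with a private address (10.99.0.1 here, 10.99.0.2 on the
# other host). How to then steer traffic between the two /24s over this link,
# instead of through the router VMs, is the part I can't work out.
vnic_spec = vim.host.VirtualNic.Specification(
    ip=vim.host.IpConfig(dhcp=False, ipAddress="10.99.0.1",
                         subnetMask="255.255.255.0"),
    mtu=9000)
netsys.AddVirtualNic(portgroup="pg-interconnect", nic=vnic_spec)

Disconnect(si)
```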
My apologies, I am neither an ESXi nor a networking expert.
Big Thanks!
--Marc