santhoshnz wrote:
I understand all the hosts in the VSA cluster must have two dual-port or four single-port network interface cards for the VSA network configuration.
Is it possible to have EtherChannel / Link aggregation enabled for VSA?
E.g., a DL380 G7 comes with 2 dual-port NICs, and if I add one additional 4-port gigabit card I have a total of 8 ports:
2G - VM network + ESXi management network
2G - VSA front end network
2G - VSA back end network
2G - vMotion, HA (ESXi feature network)
Let me know if that's possible and, if so, how to enable it.
Cheers
Santhosh
This is a pretty interesting question to come across, unanswered.
In VSA 5.1, a close reading of the brownfield installation documentation indicates that NIC teaming with an explicit failover order is required.
I too am interested in whether others are running static link-aggregated ports for management/front-end and back-end/vMotion. After putting the VSA into maintenance mode, changing the front-end/management vSwitch0 from teaming with an explicit failover order to IP-hash link aggregation, and bringing the VSA out of maintenance mode, things seemed fine: the cluster stayed up when I pulled the cable for either front-end/management vmnic on either of the two hosts.
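For anyone wanting to try the same change, the teaming-policy switch described above can be made from the ESXi shell with esxcli. This is a sketch only: the vSwitch name (vSwitch0) and uplink names (vmnic0, vmnic1) are assumptions for your environment, and IP-hash teaming also requires a matching static EtherChannel on the physical switch ports.

```shell
# Show the current teaming/failover policy on vSwitch0
esxcli network vswitch standard policy failover get -v vSwitch0

# Change load balancing from explicit failover order to IP hash.
# NOTE: the physical switch ports for these vmnics must be in a
# matching static EtherChannel / LAG, or connectivity will break.
esxcli network vswitch standard policy failover set -v vSwitch0 -l iphash

# With IP hash, all uplinks should be active (no standby adapters).
# vmnic0/vmnic1 are placeholders for your actual uplinks.
esxcli network vswitch standard policy failover set -v vSwitch0 -a vmnic0,vmnic1
```

As in the steps above, do this with the VSA in maintenance mode, and make the physical-switch port-channel change at the same time to avoid a mismatch between the two ends.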
I then put the VSA into maintenance mode to do the same for the back-end/vMotion vmnics, but entering maintenance mode hadn't completed after 20 minutes of waiting. I'm not sure whether that's related to the move to LAG ports; I'll test again tomorrow.
Any other experiences out there?