ZFSRocks wrote:
Which means those LAGs are at least functioning to pass traffic. Whether they would actually load balance is a different question.
What I primarily mean is that the multi-switch LAG might not be fully functional, and that by luck you have not seen the effects of this yet. However, that is just speculation based on how the VMware IP Hash algorithm works.
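To illustrate why a partially broken multi-switch LAG can go unnoticed: IP Hash always maps a given source/destination IP pair to the same single uplink, so a handful of test flows may simply never land on the faulty path. A simplified sketch of that selection logic (the real ESXi implementation differs in detail; this is only an assumption-labeled model, not VMware's code):

```python
import ipaddress

def iphash_uplink(src_ip: str, dst_ip: str, n_uplinks: int) -> int:
    """Simplified IP-hash-style uplink selection: XOR the two IPv4
    addresses as 32-bit integers and take the result modulo the
    number of uplinks. Hypothetical model for illustration only."""
    s = int(ipaddress.IPv4Address(src_ip))
    d = int(ipaddress.IPv4Address(dst_ip))
    return (s ^ d) % n_uplinks

# The same IP pair always selects the same uplink, so traffic between
# two hosts never exercises the other links in the team.
print(iphash_uplink("10.0.0.1", "10.0.0.2", 2))  # → 1, every time
```

The point is the determinism: a ping test between two fixed addresses tells you only that one uplink in the team works.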
Now, I am setting up the LAG on the management interface via the host console rather than via vCenter. The docs say to do it via vCenter, but I don't want to do anything I can't undo if I can't get them to talk. With as much trouble as I have been having, I am worried I will lose the host.
By setting up the LAG, do you mean setting the vSwitch NIC teaming policy to IP Hash and then attaching the specific vmnics?
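If so, from the ESXi shell that would look roughly like the following. This is only a sketch, and the names vSwitch0/vmnic1 are example placeholders for your environment; verify against your own host before running anything:

```shell
# Set the vSwitch load-balancing policy to IP Hash
# (example vSwitch name; adjust to your setup)
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 --load-balancing=iphash

# Attach an additional uplink to the vSwitch
# (example vmnic; adjust to your setup)
esxcli network vswitch standard uplink add \
    --vswitch-name=vSwitch0 --uplink-name=vmnic1
```

Remember that IP Hash on the vSwitch side only makes sense if the physical switch ports are configured as a static EtherChannel.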
You could always quite easily remove the extra vmnics from vSwitch0 through the ESXi DCUI console, so it should be possible to work from the vSphere Client and still be able to revert. That would also make it somewhat easier to look at the settings and verify they are correct.
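For the revert path, besides the DCUI's "Restore Network Settings" option, the extra uplink can also be detached from the ESXi shell. Again a sketch with example names (vSwitch0, vmnic1), not a tested recipe for your host:

```shell
# Detach the extra uplink so only the original vmnic carries
# management traffic again (example names; adjust to your setup)
esxcli network vswitch standard uplink remove \
    --vswitch-name=vSwitch0 --uplink-name=vmnic1
```

Doing this from the DCUI console works even if the management network is unreachable, which is exactly the failure mode you are worried about.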
I am still curious whether the Cisco devices support multi-switch EtherChannel or not. However: do you really need IP Hash / LAG on the management interfaces? You could still get redundancy with multiple vmnics connected to both physical switches by using the NIC teaming policy "Port ID", which needs no physical switch configuration except VLAN tagging.
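For reference, falling back to the Port ID policy from the ESXi shell would look something like this (vSwitch0 is an example name; verify on your host first):

```shell
# Route based on originating virtual port ID: no EtherChannel
# needed on the physical switches, just matching VLAN tagging
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 --load-balancing=portid
```

With Port ID teaming each vmnic can safely go to a different physical switch, which gives you the switch-level redundancy without any of the multi-switch LAG questions.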