Hello Frank,
I read your article and I understand that I had imagined multi-NIC vMotion differently.
Although vMotion in ESXi 5 works better than in 4, I think some improvements should be considered (I'll submit them as feature requests in the hope they can be taken into consideration):
- multi-subnet and multi-VLAN support (not necessarily meaning we have to set static routes in ESXi; an attached L2 network is just fine)
- this feature could lead to better use of switch backplane bandwidth (with a single addressing scheme and a single VLAN, traffic may cross the switch interlink even in non-faulty scenarios)
- some customers (us, for example) have different groups of switches that are not interconnected with each other but are suitable for vMotion traffic
- ESXi should try to send vMotion traffic on a "path" only after having verified that the path is available (with keepalive packets exchanged among hosts, multicast or unicast depending on the needs; see the sketch after this list)
- this would prevent failures like the ones we are experiencing now, and it could even be useful to choose the best path based on keepalive round-trip time
- keepalive round-trip times could even reveal whether a 10G link is performing well or not; the link speed reported by the network interface card driver is not reliable for determining the real speed a vMotion can achieve
- I imagined multi-NIC vMotion as a sort of multipath communication, with the data to transfer split among all the available NICs and reassembled at the destination; after closing the case I understood that it gives an advantage only when more than one VM is vMotioned, otherwise only one NIC is used
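
Just to illustrate the keepalive idea above, here is a minimal sketch (not ESXi code, just a hypothetical Python illustration) of how per-path keepalive round trips could be measured and used to pick the best available path; the peer addresses, port, and timeout are all assumptions, and each peer is assumed to run a simple UDP echo responder:

```python
import socket
import time
from typing import Optional

PORT = 8765          # hypothetical keepalive port
TIMEOUT_S = 0.5      # path considered unavailable if no reply within this time
CANDIDATE_PATHS = ["10.0.1.20", "10.0.2.20"]  # hypothetical peer address per vMotion path

def probe_rtt(peer_ip: str) -> Optional[float]:
    """Send one keepalive and return the round-trip time in seconds, or None if the path is down."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(TIMEOUT_S)
        start = time.monotonic()
        try:
            sock.sendto(b"keepalive", (peer_ip, PORT))
            sock.recvfrom(1024)  # wait for the echo from the peer
        except socket.timeout:
            return None          # no reply: treat this path as unavailable
        return time.monotonic() - start

def best_path() -> Optional[str]:
    """Return the reachable path with the lowest keepalive RTT, or None if all paths are down."""
    results = {ip: probe_rtt(ip) for ip in CANDIDATE_PATHS}
    alive = {ip: rtt for ip, rtt in results.items() if rtt is not None}
    return min(alive, key=alive.get) if alive else None

if __name__ == "__main__":
    chosen = best_path()
    print(f"selected path: {chosen}" if chosen else "no path available")
```

The point is simply that a measured round trip tells you both whether a path is usable at all and roughly how it is performing, which the NIC driver's advertised link speed does not.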
Best regards, Roberto Traversi.