kermic wrote:
I strongly disagree with this one. This will be true only if:
there is a single host accessing the array
there are at least 8 LUNs on the array, all accessed at full throttle at the same time, each LUN via a different NIC (one active path per LUN at a time, unless you're using a 3rd-party multipathing plugin)
You will be able to access an iSCSI array even if you have just a single RJ45 port on your host. Typically people use two for availability.
WBR
Imants
Hi Kermic - I don't agree with your comments.
First of all, the paths are managed by the ESX storage stack, not by the network stack, and ESX will use only one pNIC at a time to send/receive traffic on a given path; that is the ESX architecture. Now, the Dell array has 8 ports in total, 4 ports per controller, so whether you have a single host or multiple hosts, a single LUN or multiple LUNs, you want multiple paths. For best results: there are 8 iSCSI targets here, so you can have 8 paths. The Round Robin PSP will switch to the next path after 1000 IOPS by default, and there is no need for a 3rd-party plugin; of course you can use others as well, but the built-in ESX PSP works well.
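For reference, here is a minimal sketch of how you would check and tune that Round Robin behaviour from the CLI. The device ID is an example only, and the syntax shown is the ESXi 5.x esxcli form (the namespace is different on ESX/ESXi 4.x):

    # List the paths and the current PSP for the LUN (the naa ID below is an example)
    esxcli storage nmp device list -d naa.60012340000000000000000000000001

    # Make sure the device uses the Round Robin PSP
    esxcli storage nmp device set -d naa.60012340000000000000000000000001 --psp VMW_PSP_RR

    # Show / change how many IOPS go down one path before switching (default is 1000)
    esxcli storage nmp psp roundrobin deviceconfig get -d naa.60012340000000000000000000000001
    esxcli storage nmp psp roundrobin deviceconfig set -d naa.60012340000000000000000000000001 --type iops --iops 1000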
Now you have 8 paths, and even if the ESX host has only 2 or 4 NICs you can still have 8 paths, but all of that traffic has to pass through those 2 or 4 NICs. That is why using more NICs for iSCSI is better; the Ethernet layer gets congested easily. In the FC case, even a dual-port 2 Gb HBA won't run into the kind of bandwidth congestion you see on Ethernet.
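To put rough numbers on it (assuming 1 GbE ports on both the array and the host; adjust for your actual link speeds):

    Array side:    8 targets x 1 GbE  = ~8 Gb/s available
    Host, 2 NICs:  2 x 1 GbE          = ~2 Gb/s (all 8 paths share this)
    Host, 4 NICs:  4 x 1 GbE          = ~4 Gb/s
    FC compare:    dual-port 2 Gb HBA = ~4 Gb/s per host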
What he can do here is create an EtherChannel/LAG between the storage and the iSCSI switch, that is, bundle the 4 storage Ethernet ports into the iSCSI switch and give one IP to each. On the ESX side he can use 2 pNICs and 2 VMkernel port groups; that will be better than the current situation. There will still be a bottleneck on the ESX side because of the smaller number of NICs.
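A quick sketch of that host-side setup: two VMkernel ports, each tied to its own pNIC and bound to the software iSCSI adapter. The vmk/vmnic/vmhba names are examples, and again this is the ESXi 5.x esxcli syntax:

    # Each iSCSI port group should have exactly one active uplink, e.g.
    #   iSCSI-1 -> vmk1 -> active: vmnic2, unused: vmnic3
    #   iSCSI-2 -> vmk2 -> active: vmnic3, unused: vmnic2

    # Bind both VMkernel ports to the software iSCSI adapter
    esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
    esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2

    # Verify the bindings, then rescan the adapter
    esxcli iscsi networkportal list --adapter vmhba33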