Welcome to communities!
If the hosts can see the same LUNs and paths after the zone reconfiguration, then everything should be OK: they will see the same datastores and content.
If your FC switch does not allow the same WWNs to be in multiple zones at the same time (it should, but it's worth checking), then expect some storage access downtime while you reconfigure the zones.
I would definitely reconfigure the zones one host at a time:
1) place one host in maintenance mode (no VMs running), and make a note of the NAA IDs and available paths for the storage devices that will be affected by the zone reconfig (see the esxcli sketch after this list)
2) optionally, on that host, unmount the datastore(s) that will be affected by the zone reconfig. DON'T select Delete, as that will destroy the volume metadata.
3) create single-initiator zones for that host. Each zone will typically consist of one HBA port WWN and one storage controller port WWN. Make sure you create a zone for each path (each HBA port - array port pair); a switch-side sketch follows this list
4) create a zone group and include all the single-initiator zones for that host-array pair
5) do a rescan on the host and confirm that you see all the devices (same NAA IDs) and all the paths recorded in step 1)
6) if you unmounted datastores in step 2), mount them back and confirm they're accessible / OK
7) exit maintenance mode
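For reference, here's roughly how steps 1), 2), 5), 6) and 7) look from the ESXi command line. This is just a sketch - "MyDatastore" is a made-up label, so substitute your own datastore name:

    # step 1: enter maintenance mode (or do it from the vSphere Client)
    esxcli system maintenanceMode set --enable true

    # step 1: record NAA IDs and paths, save the output for comparison
    esxcli storage core device list
    esxcli storage core path list

    # step 2 (optional): unmount the affected datastore by label
    esxcli storage filesystem unmount -l MyDatastore

    # step 5: after rezoning, rescan all HBAs and re-check devices/paths
    esxcli storage core adapter rescan --all
    esxcli storage core device list
    esxcli storage core path list

    # step 6: mount the datastore back and check it shows as mounted
    esxcli storage filesystem mount -l MyDatastore
    esxcli storage filesystem list

    # step 7: exit maintenance mode
    esxcli system maintenanceMode set --enable false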
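On the switch side, steps 3) and 4) would look something like this on a Brocade switch (FOS CLI). All the WWNs, alias and zone names below are made-up examples, and the syntax is different on Cisco MDS:

    # optional: aliases for the host HBA port and one array controller port
    alicreate "esx01_hba1", "10:00:00:00:c9:aa:bb:01"
    alicreate "arrayA_p0", "50:06:01:60:3b:a0:11:22"

    # step 3: one single-initiator zone per HBA port / array port pair
    zonecreate "z_esx01_hba1_arrayA_p0", "esx01_hba1; arrayA_p0"

    # step 4: add the zone to the zoning configuration, save and activate
    cfgadd "prod_cfg", "z_esx01_hba1_arrayA_p0"
    cfgsave
    cfgenable "prod_cfg"

Repeat the zonecreate for every HBA port - array port pair on the host before activating the configuration.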
Then move on to the next host. I'd do this during off-peak hours, just in case.
Zoning is done at the FC switch level; you should not need to reconfigure anything on the array side.
If you can, take a backup of your FC switch config before changing anything.
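On Brocade, for example, configupload saves the switch configuration to an external host (the IP, user and path below are placeholders):

    # interactive: prompts for protocol, host, user and path
    configupload

    # or in one line over SCP
    configupload -all -p scp 192.0.2.10,admin,/backups/fcswitch01.cfg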
If you're not an expert on configuring FC switches, get one in.
hope this helps
WBR
Imants