Good network performance is critical for an HCI solution because storage traffic between nodes transits the access (leaf) layer, depending on the network design. Nutanix recommends a leaf-spine network design for optimal bandwidth, easy east-west scaling, and predictable traffic flow, even during switch maintenance.
1. Line rate –>
Ensures that all ports can simultaneously achieve advertised throughput.
2. Low latency –>
Minimizes port-to-port latency, measured in microseconds or nanoseconds. Port-to-
port latency should be no higher than 2 microseconds.
3. Large per-port buffers –>
Handle speed mismatch from uplinks without dropping frames. Avoid using shared
buffers for the 10 GbE ports. Use a dedicated buffer for each port.
4. Nonblocking, with low or no oversubscription –>
Reduces the chance of drops during peak traffic periods.
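To illustrate the oversubscription point above, the following sketch computes a leaf switch's oversubscription ratio. All port counts and speeds here are illustrative assumptions, not Nutanix requirements:

```python
# Hypothetical leaf switch: 48 x 25 GbE downlinks to hosts, 6 x 100 GbE uplinks
# to the spine. These numbers are assumptions chosen for illustration only.

def oversubscription_ratio(downlink_ports, downlink_gbps, uplink_ports, uplink_gbps):
    """Ratio of total downlink to total uplink bandwidth; 1.0 means nonblocking."""
    return (downlink_ports * downlink_gbps) / (uplink_ports * uplink_gbps)

ratio = oversubscription_ratio(48, 25, 6, 100)
print(f"{ratio:.1f}:1")  # 2.0:1 -> at peak, hosts can offer twice the uplink capacity
```

A ratio above 1:1 means that during peak traffic the uplinks can become a bottleneck and frames may be dropped, which is why low or no oversubscription is recommended.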
5. 10 Gbps or faster links for Nutanix CVM traffic –>
Only use 1 Gbps links when 10 Gbps connections are not available (for example, in
a ROBO deployment) or for other guest VM traffic. Limit Nutanix clusters using
1 Gbps links to eight nodes.
6. Fast-convergence technologies –>
Use fast-convergence technologies (such as Cisco PortFast) on switch ports that
connect to the hypervisor host.
7. Nutanix data locality –>
Nutanix data locality helps optimize the use of available east-west network
bandwidth consumed by storage I/O. However, you should still plan for sufficient
bandwidth in the event of maintenance or a node failure.
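When planning bandwidth for a maintenance or failure event, a rough back-of-the-envelope estimate like the following can help; the node capacity and available throughput below are illustrative assumptions:

```python
# Rough planning sketch: how long re-replicating a failed node's data takes over
# the east-west network. Capacity and link headroom are illustrative assumptions.

def rebuild_hours(data_tb, usable_gbps):
    """Hours to re-replicate data_tb of data at usable_gbps sustained throughput."""
    bits = data_tb * 8e12                 # TB -> bits (decimal units)
    seconds = bits / (usable_gbps * 1e9)  # Gbps -> bits per second
    return seconds / 3600

# 20 TB of data on the failed node, 10 Gbps of headroom left for rebuild traffic
print(f"{rebuild_hours(20, 10):.1f} h")  # -> 4.4 h
```

If the network is already saturated with guest VM and storage I/O, the headroom term shrinks and rebuild times stretch accordingly, which is the motivation for planning spare east-west bandwidth.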
8. Jumbo Frames –>
Jumbo frames are Ethernet frames with a payload larger than 1,500 bytes; a
common jumbo frame payload size is 9,000 bytes. Jumbo frames can reduce network
overhead and increase throughput because fewer frames, each carrying a larger
payload, traverse the network.
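The overhead reduction from jumbo frames can be quantified with a quick sketch. The 38-byte per-frame figure below is the standard Ethernet on-wire overhead (header, FCS, preamble, and inter-frame gap):

```python
# Compare per-frame overhead for standard (1,500-byte) vs jumbo (9,000-byte)
# payloads when transferring a fixed amount of data.

OVERHEAD = 38  # bytes on the wire per frame: 14 hdr + 4 FCS + 8 preamble + 12 IFG

def frames_and_overhead(data_bytes, payload):
    """Number of frames needed and total overhead bytes for a transfer."""
    frames = -(-data_bytes // payload)  # ceiling division
    return frames, frames * OVERHEAD

data = 1_000_000_000  # 1 GB transfer
for payload in (1500, 9000):
    frames, waste = frames_and_overhead(data, payload)
    print(f"payload {payload}: {frames} frames, {waste / 1e6:.1f} MB overhead")
```

With 9,000-byte payloads the transfer needs roughly one sixth as many frames, cutting both header overhead and per-frame processing cost. Note that jumbo frames only help if every device in the path is configured for the larger MTU.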
9. Use a maximum of three switch hops –>
Networks with too many switch hops, or with WAN links between only some of the nodes, introduce variable latency and variable throughput, which negatively impact storage latency. Such extended networks also add components (datacenters, zones, switches, and switch fabrics) that can each impact availability.
Nutanix nodes send storage replication traffic to each other in a distributed fashion across the top-of-rack network. One Nutanix node can therefore send replication traffic to any other Nutanix node in the cluster. The network should provide low and predictable latency for this traffic. Ensure that there are no more than three switches between any two Nutanix nodes in the same cluster.
10. Attach CVM and hypervisor host to the native VLAN –>
Nutanix recommends attaching the CVM and hypervisor host to the native VLAN, which allows for easy node addition and cluster expansion. By default, newly installed Nutanix nodes send and receive untagged traffic. If you use a tagged VLAN for the CVM and hypervisor hosts instead, you must configure that VLAN while provisioning the new node, before adding that node to the Nutanix cluster.
11. Use tagged VLANs for all guest VM traffic –>
Use tagged VLANs for all guest VM traffic, and add the required guest VM VLANs to all connected switch ports for hosts in the Nutanix cluster. To reduce broadcast traffic load, limit guest VM VLANs to the smallest number of physical switches and switch ports possible, and remove any VLAN that is no longer needed. Nutanix AHV network automation provisions VLANs for guest VMs automatically.
12. Do not stretch a single Nutanix cluster across sites –>
Do not place Nutanix nodes in the same Nutanix cluster if any of the following exist: the stretched L2 network spans multiple datacenters; the stretched L2 network spans multiple availability zones; or there is a remote link between the two locations.
13. Use separate Nutanix clusters at each physical location –>
Instead of stretching one cluster across sites, deploy an independent Nutanix cluster at each physical location.