Know How! Improve the Network Performance of a Nutanix Cluster

Good network performance is very important for an HCI solution because storage traffic between nodes transits the access layer or leaf layer, depending on the network design. Nutanix recommends a leaf-spine network design for optimal bandwidth, easy east-west scaling, and predictable traffic flow, even during switch maintenance.

[Diagram: leaf-spine design compared with a traditional core/aggregation/access design]

1. Line rate –>
Ensures that all ports can simultaneously achieve advertised throughput.

2. Low latency –>
Minimizes port-to-port latency, measured in microseconds or nanoseconds. Port-to-
port latency should be no higher than 2 microseconds.

3. Large per-port buffers –>
Handle speed mismatch from uplinks without dropping frames. Avoid using shared
buffers for the 10 GbE ports. Use a dedicated buffer for each port.

4. Nonblocking, with low or no oversubscription –>
Reduces the chance of drops during peak traffic periods.

5. 10 Gbps or faster links for Nutanix CVM traffic –>
Only use 1 Gbps links when 10 Gbps connections are not available (example: ROBO
deployment) or for other guest VM traffic. Limit Nutanix clusters using 1 Gbps
links to eight nodes.

6. Fast-convergence technologies –>
Use fast-convergence technologies (such as Cisco PortFast) on switch ports that
connect to the hypervisor host.
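As an illustration only (the interface name is a placeholder and the exact syntax varies by platform and IOS version; this sketch assumes a Cisco IOS switch), an edge port facing a hypervisor host might look like this:

```
interface TenGigabitEthernet1/0/1
 description AHV host uplink
 switchport mode trunk
 spanning-tree portfast trunk
```

This keeps the port from cycling through the spanning-tree listening and learning states, so the host's links come up immediately after a reboot or failover.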

7. Nutanix data locality –>
Nutanix data locality helps optimize the use of available east-west network
bandwidth consumed by storage I/O. However, you should plan for sufficient
bandwidth in the event of maintenance or failure.

8. Jumbo frames –>
Jumbo frames are Ethernet frames with a payload larger than 1,500 bytes; the
typical jumbo frame payload is 9,000 bytes. Jumbo frames can reduce network
overhead and increase throughput because fewer frames cross the network, each
carrying a larger payload.
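A quick way to sanity-check a jumbo-frame path from a Linux host is a ping with the Don't Fragment bit set. The maximum ICMP payload for a 9,000-byte MTU is 9000 minus the 20-byte IPv4 header minus the 8-byte ICMP header; the remote host below is a placeholder:

```shell
# Maximum ICMP payload that fits in a 9000-byte MTU:
# 9000 - 20 (IPv4 header) - 8 (ICMP header) = 8972 bytes
payload=$((9000 - 20 - 8))
echo "$payload"

# On a Linux host, verify the path end to end; -M do sets the
# Don't Fragment bit, so the ping fails if any hop's MTU is
# below 9000. <remote-host> is a placeholder.
# ping -c 3 -M do -s "$payload" <remote-host>
```

If the ping fails with "message too long", some device in the path is still at a smaller MTU; jumbo frames must be enabled end to end on every switch and host in the path.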

9. Use a maximum of three switch hops –>

Networks with too many switch hops, or with WAN links between only some of the nodes, introduce variable latency and variable throughput, both of which negatively impact storage latency. Larger networks also add components (datacenters, zones, switches, and switch fabrics) that can all impact availability.

Nutanix nodes send storage replication traffic to each other in a distributed fashion across the top-of-rack network. One Nutanix node can therefore send replication traffic to any other Nutanix node in the cluster. The network should provide low and predictable latency for this traffic. Ensure that there are no more than three switches between any two Nutanix nodes in the same cluster.
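As a quick sanity check, you can trace the path between two CVMs; each intermediate line in the output is a hop, and you want no more than three switches between any pair of nodes. The IP address below is a placeholder:

```shell
# From one CVM, trace the path to another CVM in the same cluster.
# 10.0.0.12 is a placeholder IP address.
traceroute 10.0.0.12
```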

10. Attach CVM and hypervisor host to the native VLAN –>

Nutanix recommends attaching the CVM and hypervisor host to the native VLAN, which allows for easy node addition and cluster expansion. By default, newly installed Nutanix nodes send and receive untagged traffic. If you use a tagged VLAN for the CVM and hypervisor hosts instead, you must configure that VLAN while provisioning the new node, before adding that node to the Nutanix cluster.
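If you do need a tagged VLAN for the CVM and AHV host, the commands below sketch the usual approach; VLAN 10 is an example, and you should check the AHV networking documentation for your AOS version before running them:

```shell
# On the AHV host: tag the host's br0 port (VLAN 10 is an example)
ovs-vsctl set port br0 tag=10

# On the CVM: tag the CVM's own traffic
change_cvm_vlan 10
```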

11. Use tagged VLANs for all guest VM traffic  –>

Use tagged VLANs for all guest VM traffic and add the required guest VM VLANs to all connected switch ports for hosts in the Nutanix cluster. Limit guest VLANs to the smallest number of physical switches and switch ports possible to reduce the broadcast traffic load. If a VLAN is no longer needed, remove it. Nutanix AHV network automation makes VLAN provisioning for guest VMs automatic.
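On AHV, creating a tagged guest VM network is a one-liner from any CVM; the network name and VLAN ID below are examples:

```shell
# Create a guest VM network on VLAN 20 (name and VLAN are examples)
acli net.create vm-net-20 vlan=20
```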

12. Do not stretch a single Nutanix cluster across locations –>

Do not place Nutanix nodes in the same Nutanix cluster if any of the following exist: the stretched L2 network spans multiple datacenters; the stretched L2 network spans multiple availability zones; or if there is a remote link between two locations.

13. Use separate Nutanix clusters at each physical location –>


Convert a VMware ESXi Cluster to Nutanix AHV | Key Points to Know

Many Nutanix customers want to test Nutanix AHV with their running production environment without getting locked in, and there are many other cases where Nutanix cluster conversion helps with an effortless transition.

Nutanix provides the capability to convert a VMware ESXi cluster to a Nutanix AHV cluster with a single click. Below are some key notes.

  • It is supported when you already have a running VMware cluster.
  • You can roll back at any time if you try AHV and want to revert to VMware.
  • Nutanix Guest Tools (NGT) must be installed in all VMs.
  • All VMs must be connected to a VM network port group, not to the backplane or any internal default port group.
  • Run the validation wizard to find any errors before converting.
  • Admission control must be disabled in vCenter.
  • The NTP server must be reachable.
  • vCenter must be registered in Prism Element.
  • It is better to shut down VMs while you convert the cluster, and it is always recommended to have a good backup of the VMs.
  • CVM memory must be at least 32 GB.
  • Run an NCC health check after conversion.
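The post-conversion health check in the last bullet can be run from any CVM:

```shell
# Run the full NCC health check suite after the conversion completes
ncc health_checks run_all
```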


Thanks

nISHanT

How to Reset the VMware vCenter Appliance Root and SSO Admin Passwords

The following blog shares quick steps to reset the vCenter appliance passwords; you can reset a password without knowing the old one.

Reset the appliance root password:

  • Access the vCenter appliance console from the ESXi host web console.
  • Take a snapshot of the appliance for backup purposes.
  • Reboot the vCenter Server Appliance.
  • After the VCSA Photon OS starts, press the e key to enter the GNU GRUB edit menu.
  • Locate the line that begins with the word linux.
  • Append these entries to the end of the line:

    rw init=/bin/bash

The line should look like the following screenshot:

[Screenshot: GRUB kernel line with rw init=/bin/bash appended]


  • Press F10 to continue booting.
  • Remount the root filesystem read-write:

    mount -o remount,rw /

  • At the command prompt, run passwd and provide a new root password (twice for confirmation):

    passwd

  • Unmount the filesystem (yes, the command really is umount, not a spelling error):

    umount /

  • Reboot the vCenter Server Appliance:

    reboot -f

———————————————————————————————–
Reset the SSO admin password (administrator@vsphere.local)
Enable SSH for the vCenter appliance; you can do this from the appliance management page at https://appliance-IP-address-or-FQDN:5480. Then SSH to the vCenter appliance:

Command> shell
Shell access is granted to root
root@TEC01 [ ~ ]# /usr/lib/vmware-vmdir/bin/vdcadmintool
==================
Please select:
0. exit
1. Test LDAP connectivity
2. Force start replication cycle
3. Reset account password
4. Set log level and mask
5. Set vmdir state
6. Get vmdir state
7. Get vmdir log level and mask
==================

3
Please enter account UPN : administrator@vsphere.local
New password is –
5&jSw/ugarCVf’,Gwum)      –> copy this generated password and use it to log in
==================
Please select:
0. exit
1. Test LDAP connectivity
2. Force start replication cycle
3. Reset account password
4. Set log level and mask
5. Set vmdir state
6. Get vmdir state
7. Get vmdir log level and mask
==================

exit
Thanks!