Change Nutanix Move IP Address After Deployment

  1. SSH to the Nutanix Move VM
  2. admin@move on ~ $ rs
  3. Enter the password
  4. root@move on ~ $ configure-static-ip
  5. Enter the required information as shown in the following example.

Do you want to configure static IPv4 address?(y/N)
y
Enter Static IPv4 Address (e.g. 10.136.97.98)
10.136.72.10
Enter Netmask (e.g. 255.255.255.0)
255.255.255.0
Enter Gateway IP Address (e.g. 10.136.97.1)
10.136.72.1
Enter DNS Server 1 IP Address (e.g. 10.136.74.189)
10.136.72.189
Enter DNS Server 2 IP Address (e.g. 10.136.74.190)
10.136.72.190
Enter Domain (e.g. blr.ste.lab)
blr.ste.lab

6. Retry the failed replication.
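
Before retrying, you can confirm that the new address took effect from the Move root shell. A quick sanity check; note that the eth0 interface name here is an assumption about the appliance, not from the steps above:

root@move on ~ $ ip addr show eth0 | grep inet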

Cross Hypervisor Disaster Recovery, ESXi to AHV or AHV to ESXi

Cross hypervisor disaster recovery (CHDR) provides the ability to migrate the VMs from one
hypervisor to another (ESXi to AHV or AHV to ESXi) by using the protection domain semantics
of protecting VMs, generating snapshots, replicating the snapshots, and then recovering the
VMs from the snapshots.
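
For context, that protection domain workflow can be driven from any CVM with ncli. A minimal sketch, where the protection domain name, VM name, and remote site name are hypothetical placeholders:

nutanix@cvm$ ncli pd create name=CHDR-PD01
nutanix@cvm$ ncli pd protect name=CHDR-PD01 vm-names=app-vm01
nutanix@cvm$ ncli pd add-one-time-snapshot name=CHDR-PD01 remote-sites=DR-SITE retention-time=86400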

Key notes

  1. Install and configure NGT on all the VMs.
  2. When you recover VMs from the AHV hypervisor on the ESXi hypervisor, the VMs come up with an E1000 NIC attached by default. However, if you want the VMs to come up with a VMXNET NIC attached, you must install VMware Tools.
  3. Static IP address preservation is not supported for CHDR workflows. If a VM has a static IP address configured, you must manually reconfigure the VM IP address after its recovery on the target site.
  4. Hypervisor-specific properties like multi-writer flags, independent persistent and independent non-persistent disks, or changed block tracking (CBT) are not preserved in the CHDR operation.

Requirements

  1. Nutanix supports only VMs with flat files. Nutanix does not support vSphere snapshots or delta disk files. If you have delta disks attached to the VM and you proceed with the migration, the VMs are lost.
  2. Nutanix does not support VMs with attached volume groups or shared virtual disks.
  3. Nutanix supports IDE/SCSI and SATA disks only. PCI/NVMe disks are not supported.
  4. Nutanix supports CHDR on both sites with all AOS versions that are not EOL.
  5. Set the SAN policy to OnlineAll for all the Windows VMs for all the non-boot SCSI disks so that they can automatically be brought online (see the diskpart sketch after this list). For more information about setting the SAN policy, see Bringing Multiple SCSI Disks Online.
  6. (For migration from AHV to ESXi) Automatic reattachment of iSCSI-based volume groups (VGs) fails after VMs are migrated to ESXi because vCenter and, in turn, the Nutanix cluster do not obtain the IP addresses of the VMs. Therefore, after migration, manually reattach the VGs to the VMs.
  7. Do not enable VMware vSphere Fault Tolerance (FT) on VMs that are protected with CHDR. If already enabled, disable VMware FT. VMware FT does not allow registration of VMs enabled with FT on any ESXi node after migration. This also results in a failure or crash of the Uhura service.
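
For requirement 5, the SAN policy can be set from an elevated command prompt inside each Windows VM with the built-in diskpart utility. A minimal sketch:

C:\> diskpart
DISKPART> san
DISKPART> san policy=OnlineAll
DISKPART> exit

The bare san command prints the current policy, so you can run it again afterwards to confirm the change took effect.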

Limitations of CHDR

  1. Nutanix cluster nodes to vSphere cluster mapping: The Nutanix data distribution architecture does not support the mapping of the nodes of a Nutanix cluster to multiple vSphere clusters.
  2. Ensure that all the nodes in the Nutanix cluster are in the same vSphere (ESXi) host cluster and that the network is available on all the nodes in the Nutanix cluster at the primary or remote site. Any configuration using Nutanix DR where there is more than one ESXi cluster on top of a single Nutanix cluster is not supported.
  3. Nutanix recommends that each application that constitutes a set of entities be protected by a unique protection domain.
  4. Supports one snapshot per hour as the shortest possible snapshot frequency for asynchronous replication.
  5. Does not support snapshots of entire file systems or storage containers.
  6. Supports a VM as part of a protection domain only if the VM is entirely on a Nutanix datastore and not on external storage.
  7. Does not support running multiple third-party backups in parallel on the same set of VMs; doing so makes the system unstable.
  8. Does not support VMX editing when you restore a VM. As a result, the characteristics of the restored VM, such as MAC addresses, may conflict with other VMs in the cluster.
  9. Replication speed is reduced when deduplication is enabled on storage containers containing protected VMs.

For more details, refer to the Nutanix documentation portal.

What are the Nutanix Cloud Infrastructure (NCI), Nutanix Cloud Manager (NCM) & Nutanix Cloud Platform (NCP) Bundle Licenses?

Nutanix Cloud Infrastructure (NCI) is a complete software stack that unifies your hybrid cloud infrastructure, including compute, storage, network, hypervisors, and containers, in public or enterprise clouds, all with built-in resilience, self-healing, disaster recovery capabilities, and security. It includes enterprise data services and consolidated storage, data protection and disaster recovery, native virtualization and container management, networking, and security.

How it looks in the BOQ

NCI Software Editions

Nutanix Cloud Manager (NCM) offers our customers simplicity and ease of use to build and grow their cloud deployments faster and realize rapid ROI by providing intelligent operations, self-service and orchestration, visibility and governance of spend, security, and teams, all through a unified multi-cloud management solution.

NCM licenses can be purchased and applied based on the number of physical CPU cores in your deployment. Licenses are portable across hardware platforms and are available in 1- through 5-year term options.

By default, NCM provides coverage for all Nutanix and on-prem VMware environments, metered per core. For supporting public cloud environments using the same NCM deployment, customers should purchase the appropriate NCM Cloud SKUs as add-ons. Three public cloud-focused add-ons are available as SKUs: the NCM Self-Service add-on for Public Cloud SKU, the NCM Cost Governance SaaS SKU, and the NCM Security Central SaaS SKU. These add-ons are metered by the number of virtual machines (VMs) managed in the public cloud. Note: For on-prem environments, Cost Governance is available for AHV and ESXi on AOS, and Security Central is available for AHV.

NCM is also available as a fully managed Software as a Service (SaaS) option. Customers can experience multi-cloud self-service, app automation, governance, and security compliance capabilities without needing to run any on-prem Nutanix software. The NCM SaaS offering is available for purchase as à la carte SaaS licenses for these four NCM SaaS modules:

  • NCM SaaS – Operations (in development) 
  • NCM SaaS – Self-Service
  • NCM SaaS – Cost Governance
  • NCM SaaS – Security Central

NCM SaaS licenses are metered by the number of virtual machines (VMs) managed in the public cloud.

Nutanix Cloud Infrastructure (NCI) and Nutanix Cloud Manager (NCM) can be purchased together in three ‘better together’ Nutanix Cloud Platform (NCP) bundles.

Thanks for visiting my blog.

What is ICAP and its Integration with Nutanix Files?

ICAP stands for Internet Content Adaptation Protocol, an open standard adopted to connect devices to enterprise-level virus scan engines. Nutanix Files uses it the same way: to enable communication with external servers hosting third-party antivirus software, so that inbound data (files) in transit can be scanned via a secure proxy before it is sent to the backend destination server.

ICAP Workflow

The ICAP service runs on each Nutanix Files file server and can interact with more than one ICAP server in parallel to support horizontal scale-out of the antivirus server. The scale-out nature of Files and one-click optimization greatly mitigate any antivirus scanning performance overhead. If the scanning affects Nutanix Files FSVM performance, one-click optimization recommends either increasing the virtual CPU resources or scaling out the FSVMs. This feature also helps both the ICAP server and Files scale out, ensuring fast responses from the customer’s antivirus vendor.

Why Nutanix Files Integration with an AV Server is Important

Ransomware is a persistent concern that requires multiple security controls and software layers to mitigate. Antivirus integration is important to protect users from malware and viruses.

Which Third-Party Vendors are Supported with Nutanix Files

  1. Trend Micro
  2. McAfee
  3. BitDefender
  4. Symantec
  5. SentinelOne

How to Configure the Integration

  1. In the Files Console, go to Configuration > Antivirus.
  2. Connect the ICAP server.
    1. Click + Connect ICAP Server. A new row appears for the new ICAP server details.
    2. Enter the following information in the corresponding fields:
      • IP address or hostname
      • Port (the default port number is 1344)
      • Description
    3. To save the configuration, click the check mark icon. For a detected antivirus server, the software tests the validity of the configured server and updates the status to OK.
    4. Ensure the connection status automatically updates to OK.
    5. Click Next.
    6. See the Nutanix Files documentation (https://portal.nutanix.com/page/documents/details?targetId=Files-v4_2:fil-file-server-anti-virus-enable-t.html) for more details.
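
Before or after adding the server in the console, it can help to confirm that the ICAP port is reachable over the network. A quick check from any Linux host with netcat that can reach the antivirus server, where the hostname icap01 is a hypothetical placeholder:

nc -zv icap01 1344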

AOS 6.0 STS Released

AOS integrates storage, compute and networking resources into a scalable, secure, resilient and easy-to-use software-defined infrastructure.

AOS 6.0 has various new features; notably, it enables advanced disaster recovery capabilities for organizations of all sizes and expands workload support for leading big data and analytics applications. It also adds other workload acceleration and enterprise-grade features, such as resiliency improvements that give insight and control over the self-healing process and the ability to reserve capacity.

Please review the release notes before deploying AOS 6.0.

Here is a sample of the latest features:

Disaster Recovery

  • Near-Sync Autonomous Schedules: Native, always-on, near-zero data loss solution for applications running on Nutanix HCI. Available with Protection Domain.
  • DR Dashboard: Provides global visibility into the DR landscape with Leap. Gives teams one place to monitor and manage their ability to keep business operations functional in the face of disaster. Brings observability to DR for meeting and maintaining RPO SLAs, recovery readiness, and DR reports.
  • DR Traffic Encryption: Encrypt DR traffic across WAN links without the need for dedicated encryption hardware or a VPN. Native SSL-based encryption is now integrated with Nutanix AOS, enabling secure replication even where VPN tunnels are unavailable. Available with Protection Domain.
  • DR Replication Performance: Next-gen architecture for data mobility across clouds with improved performance and tight SLAs.
  • Instant Restore for AHV: Instantly recover a workload even though the disk data for the workload is not locally available in the underlying AOS cluster. Migrate the disk data back to (seed) the AOS cluster without interruption to the recovered workload.

Core Data Path

  • Option to reserve rebuild (spare) capacity for a cluster: The Reserve Rebuild Capacity feature simplifies cluster resiliency management by allowing you to reserve the storage capacity required to recover and rebuild from failures. AOS calculates, reserves, and dynamically manages the capacity required for self-healing after a failure.
  • Rebuild progress indicator and ETA: The new rebuild progress indicator in Prism enables administrators to better manage cluster resiliency by providing additional insight into the self-healing process, such as the rate of progress, expected time of completion, and other details for rebuild operations.
  • Brownfield AES: Provides the ability to convert existing pre-AES containers to AES.
  • Support for mixed storage capacity on HW swap: This feature adds support for mixing storage drive capacities for drive RMA and node capacity increase use cases.
  • Improved performance (e.g. metadata re-warming) after failures/CVM restart/change of vdisk host: This feature helps mitigate potential application performance impact during certain CVM restarts or live migrations by proactively pre-filling the metadata cache.

AHV

  • P2P intra-cluster VM live migration: While VM live migration is in progress, if the migrate task restarts, VMs can be left in a paused state or killed in certain scenarios, causing downtime; this feature overcomes these issues.
  • ADS vGPU support: We now have Acropolis Dynamic Scheduling (ADS) support for VMs with virtual GPUs (vGPUs).
  • NVIDIA vGPU console support: Provides GPU support for our VM console (also part of 5.20).
  • Scale-out PE Lazan: With this feature, the cluster state capture phase is distributed among the Lazan slaves to make it faster and more scalable. Each slave then sends the captured state to the Lazan master, making anomaly detection faster; the CPU usage to compute the cluster state is also distributed among the nodes.
  • PC UI using v3 APIs: The goal was to upgrade the APIs used for VM operations in our Prism UI while preserving the existing feature set and workflows.

Please take a moment to read the release notes.

Be sure to share your upgrade experience in the community forums.

Register for our annual .NEXT user conference happening September 20-23, 2021 online. It’s Cloud on your terms.

Upgrade Order For Nutanix Cluster

Nutanix recommends the following order for upgrading Nutanix components:

Upgrade Prism Central 

  1. Upgrade and run NCC on Prism Central.
  2. Upgrade Prism Central.
  3. Run NCC.

 Upgrade Prism Element 

  1. Upgrade and run NCC.
  2. Upgrade Foundation.
  3. Run and upgrade Life Cycle Manager (LCM), but do not upgrade any other component (such as the SATA DOM) yet.
  4. Upgrade AOS.
  5. Run and upgrade Life Cycle Manager (LCM): upgrade all other components.
  6. Upgrade the hypervisor.
  7. Run NCC.
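
For reference, the NCC runs in steps 1 and 7 can be started from any CVM (or from the Prism Central VM for the PC checks). A minimal sketch:

nutanix@cvm$ ncc health_checks run_all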

 

How to Configure VLAN Trunking on Nutanix Cluster

By default, a virtual NIC (VM NIC) on a guest VM operates in access mode. In this mode, the VM NIC can send and receive traffic only over its own specified VLAN. If restricted to using access mode interfaces, a VM running an application on multiple VLANs (such as a firewall application) must use multiple VM NICs—one for each VLAN. Instead of configuring multiple VM NICs in access mode, you can configure a single VM NIC to operate in trunked mode. A VM NIC in trunked mode can send and receive traffic over any number of VLANs in addition to its own VLAN. You can trunk specific VLANs or all VLANs. A VM NIC in trunk mode can convert back to access mode, meaning the VM NIC reverts to sending and receiving traffic only over its own VLAN.

Steps:

acli vm.nic_get <vm_name>
acli vm.nic_create <vm_name> network=<net_name> vlan_mode=kTrunked trunked_networks=<comma-separated VLAN ID list>

 

To update an existing virtual NIC from access mode to trunked mode:

acli vm.nic_update <vm_name> <vm_mac_addr> update_vlan_trunk_info=true vlan_mode=kTrunked trunked_networks=100,200,300

acli vm.get <vm_name>
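
Putting the commands together, a hypothetical end-to-end example for a firewall VM named fw-vm01 (the VM name, network name, and MAC address are illustrative placeholders, not values from the original doc):

acli vm.nic_create fw-vm01 network=vlan0 vlan_mode=kTrunked trunked_networks=100,200,300
acli vm.nic_get fw-vm01
acli vm.nic_update fw-vm01 50:6b:8d:aa:bb:cc update_vlan_trunk_info=true vlan_mode=kTrunked trunked_networks=100,200,300
acli vm.get fw-vm01

The vm.nic_get output lists the NIC’s MAC address needed by vm.nic_update, and the final vm.get should show trunked_networks on the NIC.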

Jumbo frames for Nutanix Cluster

Jumbo frames basically increase the packet payload size, which can be beneficial when pushing a large amount of data through the network.

Nutanix AHV uses the standard Ethernet maximum transmission unit (MTU) of 1,500 bytes for all external communication by default. Jumbo frames are optional, because the standard 1,500-byte MTU still delivers excellent performance, but increasing the MTU can be beneficial in some scenarios.

The default MTU value is 1,500 bytes.

The jumbo frame MTU value is 9,000 bytes.

Nutanix recommends using jumbo frames with Nutanix Volumes when presenting high-performance iSCSI storage to an external server, which can help increase efficiency for applications such as an Oracle database connected to Nutanix Volumes.

Be sure to plan carefully before enabling jumbo frames. Configure the entire network infrastructure from end to end with jumbo frames to avoid packet fragmentation, poor performance, or disrupted connectivity.

Check the current configuration:

root@ahv# ovs-appctl bond/show
root@ahv# ip link | grep mtu

 

Reconfigure the CVM to make the changes:

nutanix@cvm$ sudo ip link set eth0 mtu 9000 
nutanix@cvm$ sudo ip link set eth2 mtu 9000

To permanently modify the MTU so that it persists through a restart, append the line MTU="9000" to the end of the following configuration files on the CVM (tee -a is used here because a plain sudo echo with a >> redirect would be performed by the non-root shell and fail):

nutanix@cvm$ echo 'MTU="9000"' | sudo tee -a /etc/sysconfig/network-scripts/ifcfg-eth0
nutanix@cvm$ echo 'MTU="9000"' | sudo tee -a /etc/sysconfig/network-scripts/ifcfg-eth2

A host reboot is required, so follow the safe reboot procedure.
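
After the reboot, verify that jumbo frames pass end to end without fragmentation. A quick check from the CVM, where 10.0.0.12 is a hypothetical peer CVM IP (8972 = 9000 minus 20 bytes for the IP header and 8 for the ICMP header):

nutanix@cvm$ ping -M do -s 8972 10.0.0.12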