Event IDs 1196 and 1119: DNS operation refused on Cluster Servers

Cluster network name resource failed registration. WINDOWS FAILOVER CLUSTER


Error:

Event IDs 1196 and 1119 (FailoverClustering) appear on the clustered Exchange and SQL servers; although the cluster seems to be fine, the errors are annoying. Cluster network name resource ‘SQL Network Name (MCCNPSQLDB00)’ failed to register DNS name ‘MCCNPSQLDB00.smtp25.org’ over adapter ‘Production VLAN 400’ for the following reason: DNS operation refused.
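
To confirm which nodes are logging these errors, a quick PowerShell check works as well (a minimal sketch; the event IDs and provider come straight from the error above):

# List the most recent failover-clustering DNS registration events on this node
Get-WinEvent -FilterHashtable @{
    LogName      = 'System'
    ProviderName = 'Microsoft-Windows-FailoverClustering'
    Id           = 1196, 1119
} -MaxEvents 10 | Format-List TimeCreated, Id, Message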

Cause:

The cluster name resource, which was added to DNS before the active/passive cluster (or any cluster type) was set up, needs to be updated by the physical nodes on behalf of the resource record itself. When the active node owns the resources, it tries to update the A record in the DNS database, but the DNS record that was created manually does not allow any authenticated user to update a record with the same owner.

Solution:

Delete the existing A record for the cluster name, re-create it, and make sure to select the box that says “Allow any authenticated user…”
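
If you prefer PowerShell over the DNS console, a minimal sketch of a slightly different approach (using the DnsServer and FailoverClusters modules, with the zone and resource names taken from the event above) is to delete the stale record and then let the cluster re-register it with its own permissions:

# Run on the DNS server: remove the stale A record for the cluster name
Remove-DnsServerResourceRecord -ZoneName "smtp25.org" -Name "MCCNPSQLDB00" -RRType "A" -Force

# Run on the cluster node that owns the resource: force DNS re-registration
Get-ClusterResource "SQL Network Name (MCCNPSQLDB00)" | Update-ClusterNetworkNameResource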


Adding and sharing RDM disk to multiple VMs in VMware step by step

I had a requirement to configure two-node Microsoft failover clustering on VMware VMs in my office; the plan was to build a clustered file server. The virtual machines had to be separated from each other and located on different ESXi servers. My first task was to provide my Wintel engineer colleagues with a shared disk from VMware so they could start configuring clustering. I achieved this using an RDM (Raw Device Mapping) disk. An RDM is recognized as a pass-through disk; it is a mapping file that acts as a proxy for a physical device such as a LUN. When you choose RDM over a VMDK on a datastore, you get slightly better performance.


Before starting with RDM I set up and prepared my storage server and ESXi servers: I created two new LUNs on the storage device, configured VMkernel adapters on the ESXi servers, and rescanned the HBA adapters to list the disks. At this point I did not add and format the LUNs as VMFS datastores. For more information on configuring storage and ESXi, check my blog articles below.

Warning: Windows iSCSI is not listed on the VMware HCL as an ESXi iSCSI datastore. I am using it here for demo purposes only.

MICROSOFT WINDOWS 2012 R2 ISCSI TARGET STORAGE SERVER FOR ESXI AND HYPERV
VMWARE ESXI CONFIGURE (VSWITCH) VMKERNEL NETWORK PORT FOR ISCSI STORAGE

To start, I listed my two VMs. Select the first virtual machine, 001, click Actions, then Edit Settings. At the bottom of the Edit Settings wizard, click the New device drop-down, choose SCSI Controller and click Add. Here I am adding the raw device mapping LUNs to a separate SCSI controller. You can add a total of four virtual SCSI controllers, and each controller can have 15 disks attached.


Expand the New SCSI controller settings and set SCSI Bus Sharing to Physical; this allows the virtual disks to be shared by virtual machines on any ESXi server. Keep the controller type at the default (LSI Logic SAS). The bus sharing options are:

None: Virtual disks cannot be shared by other virtual machines.
Physical: Virtual disks can be shared by virtual machines on any ESXi server.
Virtual: Virtual disks can be shared by virtual machines on the same ESXi server.

Click Ok to proceed.

 


Next, click on VM 001 again, go to Actions and select Edit Settings. This time, from the New device list, choose RDM Disk and click Add.


This pops up the Select Target LUN wizard. Select one of the LUNs from the list (choose the correct one by verifying the NAA ID) and click OK.


Expand New hard disk and select the location, keeping it on the shared VMFS datastore. Make sure this datastore is visible on the other ESXi server where the second VM is located. The compatibility mode should be Physical.

Two compatibility modes are available for RDMs. Physical compatibility mode allows direct access to the SCSI device for applications that need lower-level control. Physical mode is useful for running SAN management agents or other SCSI target-based software in the virtual machine, and it also allows virtual-to-physical clustering for cost-effective high availability. Virtual compatibility mode allows the raw device mapping to act exactly like a virtual disk file, including the use of clones and snapshots, so you can realize the benefits of VMFS such as advanced file locking for data protection and snapshots for streamlining development processes.

Next the multi-writer sharing option allows VMFS-backed disks to be shared by multiple virtual machines. This option is also used to support VMware fault tolerance, which allows a primary virtual machine and a standby virtual machine to simultaneously access a .vmdk file.

Virtual Device Node is where you select the SCSI controller on which this new RDM disk will reside; I am selecting SCSI controller path 1:0, created earlier. Press OK to add it.
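
For reference, the same result can be scripted with PowerCLI (a rough sketch under a few assumptions: the VM is named 001 and the NAA ID below is a placeholder for your LUN):

# Find the VM and the raw LUN by its NAA ID (placeholder value)
$vm  = Get-VM -Name "001"
$lun = Get-ScsiLun -VmHost (Get-VMHost -VM $vm) -LunType disk |
       Where-Object { $_.CanonicalName -eq "naa.xxxxxxxxxxxxxxxx" }

# Add the RDM in physical compatibility mode and move it to a new
# LSI Logic SAS controller with physical bus sharing
New-HardDisk -VM $vm -DiskType RawPhysical -DeviceName $lun.ConsoleDeviceName |
    New-ScsiController -Type VirtualLsiLogicSAS -BusSharingMode Physical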

 


The RDM disk is now added to the first virtual machine. Next, log in to the VM, open Server Manager, expand Storage, and under Disk Management right-click Disk 1 in the list and click Online. (I have two disks now: Disk 0 is the OS and the RDM is Disk 1, so this might differ depending on how many disks you already have on the VM.)


Next, initialize the disk: select the disk in the wizard and click OK. Then right-click Disk 1 to create a new simple volume.


In the New Simple Volume wizard, specify the volume size, assign a drive letter, and click Next.


In the Format Partition step, perform a quick format and finish the wizard. Configuration for the first VM is complete; the volume should be listed in My Computer.
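
Inside the guest, the same Disk Management steps can be done with the Storage module in PowerShell (a sketch assuming the RDM shows up as disk number 1 and that drive letter E: is free):

Set-Disk -Number 1 -IsOffline $false              # bring the disk online
Initialize-Disk -Number 1 -PartitionStyle GPT     # initialize it
New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter E |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "SharedDisk"   # quick format is the default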


Next, select the second VM, 002: right-click it and open Edit Settings. Create a new physical SCSI controller as shown for VM 001, then under the New device list choose Existing Hard Disk and click Add.


This opens the Select File wizard with the datastore browser. Select Shared-Disk01, then select the folder and choose the .vmdk under Contents (this is the datastore and path where I created the RDM disk for VM 001 earlier, so verify the folder path). Click OK to proceed.

Set the sharing option to Multi-writer, and the virtual device node should be SCSI controller 1:0, as created earlier. Click OK to proceed with adding the disk.
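
The equivalent PowerCLI for the second VM looks roughly like this (the VM name and the .vmdk path are placeholders; point -DiskPath at the RDM pointer file created for VM 001):

# Attach the existing RDM pointer .vmdk to VM 002 on its own physically shared controller
$vm2 = Get-VM -Name "002"
New-HardDisk -VM $vm2 -DiskPath "[Shared-Disk01] 001/001_1.vmdk" |
    New-ScsiController -Type VirtualLsiLogicSAS -BusSharingMode Physical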


Log in to the second server, go to Disk Management, right-click and select Rescan Disks; this refreshes, detects, and shows the disk in the list.


Next you just have to bring the disk online; there is no need to format it, as it is shared and was already formatted by VM 001.
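
In PowerShell this is just a rescan and an online (again assuming the disk appears as disk number 1):

Update-HostStorageCache                   # same as Rescan Disks
Set-Disk -Number 1 -IsOffline $false      # bring it online; do not re-format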


At this point the virtual machines are ready for the Microsoft failover clustering role and features to be configured.
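
As a starting point for that, here is a hedged sketch of the Windows side (node names, cluster name, and IP address are placeholders): install the feature on both VMs, validate, and create the cluster.

# Run on each VM (or remotely with Invoke-Command)
Install-WindowsFeature Failover-Clustering -IncludeManagementTools

# Validate the two nodes and the shared disk, then build the cluster
Test-Cluster -Node "VM001", "VM002"
New-Cluster -Name "FILECLU01" -Node "VM001", "VM002" -StaticAddress "192.168.1.50"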

Useful articles
POWERCLI: VMWARE ESXI CONFIGURE (VSWITCH) VMKERNEL NETWORK PORT FOR ISCSI STORAGE
POWERCLI VMWARE: CONFIGURE SOFTWARE ISCSI STORAGE ADAPTER AND ADD VMFS DATASTORE
POWERCLI: VIRTUAL MACHINE STORAGE MIGRATE/SVMOTION AND DATASTORE PORT BINDING MULTIPATHING
Emulate HDD as SSD flash disk on Esxi and VMware workstation

 

Source :-

http://vcloud-lab.com/entries/esxi-installation-and-configuration/adding-and-sharing-rdm-disk-to-multiple-vms-in-vmware-step-by-step

 

Nutanix Commands for System Administrators

I'm pretty sure anyone who is either a Nutanix employee or a customer who uses the product on a daily basis has a list somewhere of the commands they use. I decided to create a blog post as a living document of the most used commands. It should be expanded and kept up to date over time.

AHV

configure mgt IP address / network

vi /etc/sysconfig/network-scripts/ifcfg-br0

VLAN tag mgt network

ovs-vsctl set port br0 tag=####

Show OVS configuration

ovs-vsctl show

Show configuration of bond0

ovs-appctl bond/show bond0

Show br0 configuration (for example to confirm VLAN tag)

ovs-vsctl list port br0

List VMs on host / find CVM

virsh list --all | grep CVM

Power On powered off CVM

virsh start [name of CVM from above command]

Increase RAM configuration of CVM

virsh setmaxmem [name of CVM from above command] --config --size ram_gbGiB

virsh setmem [name of CVM from above command] --config --size ram_gbGiB

ESXi

Show vSwitch configurations

esxcfg-vswitch -l

Show physical nic list

esxcfg-nics -l

Show vmkernel interfaces configured

esxcfg-vmknic -l

Remove vmnic from vSwitch0

esxcfg-vswitch -U vmnic# vSwitch0

Add vmnic to vSwitch0

esxcfg-vswitch -L vmnic# vSwitch0

Set VLAN for default VM portgroup

esxcfg-vswitch -v [vlan####] -p "VM Network" vSwitch0

Set VLAN for default management portgroup

esxcfg-vswitch -v [vlan id####] -p "Management Network" vSwitch0

Set IP address for default management interface (vmk0)

esxcli network ip interface ipv4 set -i vmk0 -I [ip address] -N [netmask] -t static

Set default gateway

esxcfg-route [gateway ip]

List VMs on host/find CVM

vim-cmd vmsvc/getallvms | grep -i cvm

Power on powered off CVM

vim-cmd vmsvc/power.on [vm id# from above command]

CVM

VLAN tag CVM  (only for AHV or ESXi using VGT)

change_cvm_vlan ####

Show AHV host physical uplink configuration

manage_ovs show_uplinks

Remove 1gb pNICs from bond0 on AHV host

manage_ovs --bridge_name br0 --bond_name bond0 --interfaces 10g update_uplinks

Configure mgt IP address / network

vi /etc/sysconfig/network-scripts/ifcfg-eth0

Create cluster

cluster -s [cvmip1,cvmip2,cvmipN…] create

Get cluster status

cluster status

Get detailed status of the services local to the current CVM

genesis status

Restart specific service across entire cluster (example below:  cluster_health)

allssh genesis stop cluster_health; cluster start

Show Prism leader

curl localhost:2019/prism/leader

Stop cluster

cluster stop

Start a stopped cluster

cluster start

Destroy cluster

cluster destroy

Discover nodes

discover_nodes

Gracefully shutdown CVM

cvm_shutdown -P now

Upgrade a non-cluster-joined node from a cluster CVM without expanding the cluster

cluster -u [remote node cvmip] upgrade_node

Check running AOS upgrade status for cluster

upgrade_status

Check running hypervisor upgrade status for cluster

host_upgrade_status

Get CVM AOS version

cat /etc/nutanix/release_version

Get cluster AOS version

ncli cluster version

Create a Prism Central instance (should be run on the deployed PC VM, not a cluster CVM)

cluster --cluster_function_list multicluster -s [pcipaddress] create

Run all NCC health checks

ncc health_checks run_all

Export all logs (optionally scrubbed for IP info)

ncc log_collector --anonymize_output=true run_all

ipmitool (NX platform)

These commands are hypervisor agnostic; on ESXi a leading / is required (/ipmitool).

Configure IPMI to use static ip

ipmitool lan set 1 ipsrc static

Configure IPMI IP address

ipmitool lan set 1 ipaddr [ip address]

Configure IPMI network mask

ipmitool lan set 1 netmask [netmask]

Configure IPMI default gateway

ipmitool lan set 1 defgw ipaddr [gateway ip]

Configure IPMI VLAN tag

ipmitool lan set 1 vlan id [####]

Remove IPMI VLAN tag

ipmitool lan set 1 vlan id off

Show current IPMI configuration

ipmitool lan print 1

Show IPMI mode (failover/dedicated)

ipmitool raw 0x30 0x70 0x0c 0

The result will be one of the following

  1. 00 = Dedicated
  2. 01 = Onboard / Shared
  3. 02 = Failover (default mode)

Get IPMI user list

ipmitool user list

Reset IPMI ADMIN user password back to factory (trailing ADMIN is the password)

ipmitool user set password [# of ADMIN user from command above] ADMIN

Reboot the BMC (reboot the IPMI only)

ipmitool mc reset cold

URLs

CVM built-in foundation

http://[cvmip]:8000/gui

Legacy cluster-init (should attempt redirect to foundation on newer AOS)

http://[cvmip]:2100/cluster_init.html

Get cluster status

http://[cvmip]:2100/cluster_status.html

 

 

 

https://acropolis.ninja/helpful-nutanix-commands-cheat-sheet/

Lenovo DS2200 Storage Configuration

Initial configuration of the Lenovo DS2200 is not complicated, but it might be annoying if you are used to doing the initial configuration via the USB method. Here are the steps I followed to configure Lenovo DS2200 storage.

As per the official document from Lenovo, each controller has a default IP address: 10.0.0.2 (controller A) and 10.0.0.3 (controller B). For some reason neither IP address was reachable, so the remaining option was to configure the array by accessing the COM port.

Requirement :-

I would recommend using a Windows 10 machine; if you are on Linux or an older Windows machine, you need to download and install the supporting driver from the Lenovo website https://support.lenovo.com/us/en/ (Select Product Support –> navigate to Storage Products –> browse the page for information about the USB driver).

–> From the storage box, take the USB cable and connect it to the storage controller.

–> Download HyperTerminal: https://hyperterminal-private-edition-htpe.en.softonic.com/download

 

Steps :-  

  1. Connect your computer to storage controller A with the USB cable.
  2. Go to Computer Management; under Storage the controller is detected as “Disk Array USB Port”.
  3. Open HyperTerminal on your computer and connect to the COM port with the settings shown below.

(Screenshot: HyperTerminal COM port settings)

  4. Log in with User Name :- manage and Password :- !manage

  5. To set the IP addresses, run the following commands (one per controller). For example:

# set network-parameters ip 192.168.0.10 netmask 255.255.255.0 gateway 192.168.0.1 controller a

# set network-parameters ip 192.168.0.11 netmask 255.255.255.0 gateway 192.168.0.1 controller b

Enter the following CLI command to verify the new IP addresses:

show network-parameters


1039795335 (Lenovo Guide)

 

Thanks :

Nishant