Seamless Integration Between vSAN and VxRail

It’s been a while since my last post, mostly due to lack of time.

I recently joined the VxRail R&D team at Dell EMC, as part of the CPSD Division.

For this reason I have been learning VxRail while keeping up my vSAN skills. One of the very first tests I performed on a VxRail Gen1 (Phoenix servers) was the blinking LED feature from the vSAN Disk Management tab.

This feature is particularly handy for anyone that needs to locate and replace a disk in a vSAN or VxRail Cluster.

I must say I was impressed with the convergence of both products (vSAN and VxRail Manager) to the point that this simple test worked perfectly.

So let’s see how this is done!

 

  • Blink the LED from the vSAN Disk Management tab in vCenter

 

 

 

  • Log in to VxRail Manager and go to the Physical View tab

 

Blink-2.png

 

This simple feature really shows how well vSAN and VxRail Manager are integrated!

I hope to come back with more posts in the future!

 


Configure Fault Tolerance in VSAN 6.1

VSAN 6.1 introduced support for SMP-FT VMs, so we can now use Fault Tolerance to protect, for example, our vCenter VM.

Fault Tolerance has a number of requirements that must be met before it can be enabled.

A full list of the requirements can be found in the vSphere 6.0 documentation:

https://goo.gl/ld6Tqo

Let’s then see how to configure Fault Tolerance on a VSAN 6.1 Cluster:

Configure Host Networking for FT

FT requires compatible hosts and a dedicated 10 GbE network.

I am using a dvSwitch for the setup in this scenario.

FT1

Set up vmkernel for FT on every host of the Cluster

FT2
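
The screenshots show the wizard, but the same configuration can be sketched from the ESXi shell. This is only a rough sketch, assuming a standard switch port group called "FT-PG" and the 192.168.20.0/24 range (both hypothetical; in my case the vmkernel lives on a dvSwitch and was created through the wizard). Double check the FT logging tag name with "esxcli network ip interface tag get" on your build:

# Create a new vmkernel interface on a hypothetical port group "FT-PG"
esxcli network ip interface add -i vmk2 -p FT-PG

# Give it a static IP on the dedicated FT network (example addressing, one per host)
esxcli network ip interface ipv4 set -i vmk2 -I 192.168.20.11 -N 255.255.255.0 -t static

# Tag the interface for Fault Tolerance logging
esxcli network ip interface tag add -i vmk2 -t faultToleranceLogging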

Verify that you can vmkping all the FT vmkernel IP addresses that were configured for FT Logging.

Using vmkping you can test the network connectivity to each of the VSAN nodes through the FT vmkernel uplink.

FT3

Testing FT vmkernel interfaces with vmkping
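
If you prefer the command line to the screenshot above, the same check is just a couple of vmkping calls (vmk2 and the peer addresses are example values matching the sketch above):

# Ping the FT vmkernel address of every other node, forcing the traffic
# out of the FT interface with -I (here assumed to be vmk2)
vmkping -I vmk2 192.168.20.12
vmkping -I vmk2 192.168.20.13
vmkping -I vmk2 192.168.20.14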

Create a simple VM with 4 vCPUs

FT4

Right-click the VM and turn on FT:

FT5

Turning on Fault Tolerance on the 4vCPU VM

FT6

Select the host that will have the secondary VM

FT7

FT8

Verify that FT is active

FT9

Power On the VM:

FT10

And that’s it! Your VM is now fully protected by FT, with zero RPO/RTO should the host running the primary VM fail.

We can now see the two VM instances, one running on each host:

[root@ds-lab-vsan14:/var/log] esxcli vm process list
SMP-FT VM - Nelson
World ID: 4569207
Process ID: 0
VMX Cartel ID: 4569169
UUID: 42 32 e3 82 f6 b9 51 00-eb 90 dd 0d d5 83 cb 97
Display Name: SMP-FT VM - Nelson
Config File: /vmfs/volumes/vsan:525145947e3307d9-a9e094a5a7db903d/e8ff2456-0046-be24-8b7d-90b11c2465b3/SMP-FT VM - Nelson.vmx
[root@ds-lab-vsan14:/var/log]

[root@ds-lab-vsan12:~] esxcli vm process list
SMP-FT VM - Nelson
World ID: 4560087
Process ID: 0
VMX Cartel ID: 4560086
UUID: 42 32 e3 82 f6 b9 51 00-eb 90 dd 0d d5 83 cb 97
Display Name: SMP-FT VM - Nelson
Config File: /vmfs/volumes/vsan:525145947e3307d9-a9e094a5a7db903d/1ee92456-8068-1d50-641d-90b11c2b5454/SMP-FT VM.vmx
[root@ds-lab-vsan12:~]

Monitor vsan.resync_dashboard From Bash Shell in vCenter Appliance

Hi,

It’s been a while since my last post and this one won’t be too long as I’m very busy with VSAN at the moment.

I just found a very interesting way to monitor the progress of vsan.resync_dashboard with a simple bash one-liner you can execute from within the vCenter Linux appliance. It does not work for the Windows vCenter.


# watch '/usr/bin/rvc -c "vsan.resync_dashboard 1/Datacenter_VSAN/computers/Cluster_VSAN/" -c "quit" root:PaSSW0rd@localhost 2>/dev/null | grep "Total"'

 

Feel free to test it on your VSAN environments to proactively monitor the progress of resyncs.

Avoiding Resyncs When an ESXi Host in a VSAN Cluster Suffers a PSOD

This article is about avoiding unnecessary resyncs when an ESXi host in a VSAN Cluster suffers a PSOD. It’s only valid for VSAN Clusters with 4 or more hosts.

The technical justification is to avoid the unnecessary resync of components that are left in an Absent state following the PSOD.

By default, if an ESXi host suffers a PSOD it will stay up, unresponsive, with the PSOD stack trace displayed on the console. It may take a while until someone realizes that the host is unresponsive and reboots it.

I assume that the default cluster storage policy has not been changed from FTT=1 (Number of Failures to Tolerate = 1).

In an HA-enabled VSAN Cluster, should a PSOD happen, the VMs will be automatically restarted on one of the remaining hosts if the VM folder and VM data objects meet the VSAN requirements for accessibility. There are a few situations where VMs can fail to restart, like the ones referred to in the following post: http://blogs.vmware.com/vsphere/tag/psod.

Three different things can happen here:

  1. The host had running VMs, but the VSAN components of those VMs were not located in the Disk Groups of the crashed host.
  2. The host had some components stored in its Disk Groups, leaving some VMs that are running on another host out of compliance.
  3. A combination of 1 and 2: the host had the VM running and also had one or more components of that VM in its own Disk Groups.

In the first scenario no resync is needed; HA will normally take care of restarting the VM somewhere.

In the second scenario HA will kick in almost immediately, and the resync will start after 60 minutes.

And finally, in the third scenario HA should kick in, the VM should be able to power on on another host, and the resync will start after 60 minutes.

Again, all these scenarios are theoretical and have to be tested in a Lab environment for example.

It is important to mention here that the resync of components after a PSOD will only happen on VSAN Clusters with at least 4 nodes. In a 3-node VSAN Cluster, should a host fail, VSAN will not re-protect the failed components. With a 3-node VSAN Cluster you get re-protection against magnetic disk and SSD failures, but not against host failures.
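
If you are not sure how many members VSAN actually sees, a quick check from any host in the cluster (the field name below is how it appears in the builds I have used):

# The member count is reported by the cluster status command;
# re-protection after a host failure only applies with 4 or more members
esxcli vsan cluster get | grep "Sub-Cluster Member Count"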

The amount of data to be resynced to other hosts can be quite significant and can eventually have a performance impact on the running VMs in the VSAN Cluster. VSAN has an internal scheduler that throttles the resync IO traffic so that it is fast enough to recover from the failure while also trying not to compromise the performance of the running VMs.

Even with the scheduler in place, we want to guarantee that after a PSOD we won't have to start resync operations once the default timeout of 60 minutes expires.

For more information about this setting (ClomRepairDelay), please consult this KB:

Changing the default repair delay time for a host failure in VMware Virtual SAN (2075456)
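
For reference, this is roughly how the setting can be checked and changed from the ESXi shell; treat it as a sketch and follow KB 2075456 for the supported procedure (the value is in minutes, it has to be changed on every host and, depending on the version, a clomd restart may be required):

# Show the current repair delay (default is 60 minutes)
esxcli system settings advanced list -o /VSAN/ClomRepairDelay

# Example only: raise the delay to 90 minutes on this host
esxcli system settings advanced set -o /VSAN/ClomRepairDelay -i 90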

Also, we want to capture the PSOD dump, in order to diagnose the root cause of the crash.

For that purpose, I strongly recommend setting up a Network Dump Collector. The instructions to set up the Network Dump Collector can be found in this KB:

ESXi Network Dump Collector in VMware vSphere 5.x (1032051)
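
Once the Dump Collector service is running on the vCenter side, each ESXi host has to be pointed at it. A minimal sketch (the vmkernel interface, collector IP and port below are assumptions to adapt to your environment):

# Point the host at the Network Dump Collector (vmk0, IP and port are examples)
esxcli system coredump network set --interface-name vmk0 --server-ipv4 192.168.1.50 --server-port 6500

# Enable network coredumps and verify the configuration end to end
esxcli system coredump network set --enable true
esxcli system coredump network check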

So now that we know what we want to achieve, it's just a matter of changing the default ESXi behavior after a PSOD.
Again, the following VMware KB explains how to configure it:

Configuring an ESX/ESXi host to restart after becoming unresponsive with a purple diagnostic screen (2042500)
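
If I remember the KB correctly, the behavior is controlled by the Misc.BlueScreenTimeout advanced setting, in seconds, with 0 meaning the host waits indefinitely on the purple screen; a sketch of setting it from the ESXi shell (verify the option name against KB 2042500 for your version):

# Reboot automatically 120 seconds after a PSOD (0 = stay on the purple screen forever)
esxcli system settings advanced set -o /Misc/BlueScreenTimeout -i 120

# Double check the value
esxcli system settings advanced list -o /Misc/BlueScreenTimeout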

The tricky part of this is that we want to leave enough time for the coredump to be transferred over the network to the Network Dump Collector, and at the same time we want the remaining time (60 - x minutes) to be enough for the ESXi host to reboot completely.
I will leave it up to you to test this setting and figure out how long the ESXi host takes to send the coredump over the network and how long it takes to reboot.

You can try crashing your ESXi host by running the following command in an SSH session:

vsish -e set /reliability/crashMe/Panic 1

Hope this article makes you think about the implications of a PSOD in a VSAN Cluster.

ESXi Scratch Partition on the VSAN Datastore – The Risks

One of the Best Practices for VSAN is to not use the VSAN Datastore for the Scratch Partition or for the Syslog Server.

As per VMware KB:

Creating a persistent scratch location for ESXi 4.x and 5.x (1033696)

http://kb.vmware.com/kb/1033696

Note: It is not supported to configure a scratch location on a VSAN datastore.

 

So what is the reason for this?

Imagine that for some reason you are forced to make an ESXi host leave the VSAN Cluster, and when you type the command:

~ # esxcli vsan cluster leave

You will get this error:

/dev/disks # esxcli vsan cluster leave

Failed to leave the host from VSAN cluster. The command should be retried (Sysinfo error on operation returned status : Failure. Please see the VMkernel log for detailed error information

Vob Stack:

[vob.sysinfo.set.failed]: Sysinfo set operation VSI_MODULE_NODE_umount failed with error status Failure.

)

Basically, the ESXi host can’t leave the Cluster as it’s impossible to release the configured scratch partition that is locked in the VSAN Datastore.

This can also happen if the Syslog folder is configured inside the VSAN Datastore.

To solve this, just connect directly to your host using the vSphere Client and change the Scratch Partition to another location, outside the VSAN Datastore.

Scratch Partition

Syslog Configuration
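
If you prefer to do it from the command line, here is a sketch based on KB 1033696, assuming a hypothetical non-VSAN datastore called datastore1 (the new scratch location only takes effect after a reboot):

# Create a folder for the scratch data outside the VSAN Datastore
mkdir /vmfs/volumes/datastore1/.locker-esxi01

# Point the host at it; the change takes effect on the next reboot
vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/datastore1/.locker-esxi01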

Conclusion:

Follow the best practices and don't ever configure the Scratch Partition or the Syslog output on the VSAN Datastore.

 

 

 

How to Redirect Output from the VSAN Ruby Console to a File

Hi,

I was struggling today to get the output of the Ruby console redirected to a file. I was using the Windows vCenter Ruby Console.

For the Linux vCenter, the regular Linux redirection operators work as usual.

So, in order to get this working you have to modify rvc.bat to look like this:

 

The rvc.bat file is located here:

 

C:\Program Files\VMware\Infrastructure\VirtualCenter Server\support\rvc

 

..\ruby-1.9.3-p392-i386-mingw32\bin\ruby -Ilib -Igems\backports-3.1.1\lib -Igems\builder-3.2.0\lib -Igems\highline-1.6.15\lib -Igems\nokogiri-1.5.6-x86-mingw32\lib -Igems\rbvmomi-1.7.0\lib -Igems\terminal-table-1.4.5\lib -Igems\trollop-1.16\lib -Igems\zip-2.0.2\lib bin\rvc -c "vsan.support_information 1" -c "quit" administrator:VMware123!@localhost

 

 

Once you have modified rvc.bat, just create a shortcut on the Desktop and modify the Target as follows:

 

 

RVC Shortcut
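
The screenshot shows the shortcut Target; the idea is simply to wrap rvc.bat in cmd.exe and redirect everything it prints into the log file. A sketch of what such a Target could look like (the exact quoting is my assumption, the paths are the ones used above):

cmd.exe /c ""C:\Program Files\VMware\Infrastructure\VirtualCenter Server\support\rvc\rvc.bat" > C:\VSAN.log 2>&1"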

 

Then, just click on the shortcut and you will get a text file created at C:\VSAN.log

VSAN Log File

 

For more information about the Ruby Console check this Blog:

http://blogs.vmware.com/vsphere/2014/07/managing-vsan-ruby-vsphere-console.html

 

 

Troubleshooting File Transfer Performance Between VMs – Part 2

So, here we are again to finish this chapter.

After getting the customer on a WebEx session, we managed to go into the BIOS settings of the Dell server and change the System Profile to "Performance".

BIOS Settings

So, we started the ESXi servers and did some testing.

As a reference, before this change the average speed to copy a 9 GB file from one VM to the other was around 20 Mb/s, with some drops during the transfer.

So, here are the results after:

File transfer speed after Power Management change

File Transfer Speed After Power Management Change: 55 Mb/s

Conclusion:

Power Management really matters!