How to create a Nested VMware vSAN 6.6 environment.


After postponing it many times, this week I finally started implementing and properly testing VMware vSAN 6.6 in my home lab. In this case it will be the new All-Flash vSAN 6.6.

What is VMware vSAN?

“VMware vSAN (formerly Virtual SAN), is the industry-leading software powering Hyper-Converged Infrastructure solution. vSAN is a core building block for the Software-Defined Data Center.

What vSAN Does?

As the only native-vSphere storage, vSAN enables you to seamlessly extend virtualization to storage, creating a hyper-converged solution that simply works with your existing tools, skillsets, software solutions and hardware platforms.

vSAN now further reduces risk with the first native HCI security solution, protecting data-at-rest while offering simple management and a hardware-agnostic solution. vSAN continues to offer the broadest set of deployment choices supported by the large, proven vSAN ReadyNode ecosystem of leading server vendors.”

There are 3 Editions: Standard, Advanced, and Enterprise. You can compare the different editions and their features HERE, along with more information about VMware vSAN.

What is new in VMware vSAN 6.6:

  • Industry’s First Native HCI Encryption
  • Stretched Cluster with Local Site Protection
  • vSAN Cloud Analytics
  • Unicast Networking
  • vSAN Management Pack for vRealize Operations
  • Always-On Protection with Enhanced Availability
  • Intelligent Operations and Lifecycle Management
  • Up to 50% Higher All-Flash Performance
  • Support of Next-Generation Workloads
  • Day 1 Support of New Hardware Technologies
  • Certified File Services and Data Protection Solutions

You can check all information HERE.

After a small introduction to VMware vSAN 6.6, let us start the Nested Implementation.

This All-Flash vSAN Nested implementation will be in two parts.

  • Part I  – Create Nested vSphere 6.5 for vSAN 6.6
  • Part II – Configure All-Flash vSAN Nested

This is Part I – Nested VMware vSAN 6.6 Environment

This implementation is fully nested in my vSphere 6.0 environment.

This is a VMware All-Flash vSAN Nested example, for test purposes only; you should not use this type of implementation for any production environment. Follow this article to build a test environment, and if you find that this implementation fits your needs, then build a proper physical environment.

These are the All-Flash vSAN 6.6 System Requirements:

Hardware Host

  • 1Gb NIC; 10Gb NIC recommended
  • SATA/SAS HBA or RAID controller
  • At least one flash caching device and one persistent storage disk (flash or HDD) for each capacity-contributing node

Cluster Size

  • Min. 2 hosts – Max. 64 hosts

vSAN Ready Nodes and the Hardware Compatibility List are available HERE.

Software

  • VMware vSphere 6.5 EP02 (any edition)
  • VMware vSphere with Operations Management 6.1 (any edition)
  • VMware vCloud Suite 6.0 (any edition updated with 6.5)
  • VMware vCenter Server 6.5

My Nested environment:

3x VMware vSphere 6.5.0d ESXi hosts.

Virtual Hardware:

  • 4 vCPU
  • 8GB vMemory
  • 4 Virtual Disks
    • One with 15GB for the ESXi install.
    • One with 5GB for Cache (this needs to be an SSD, so we will need to simulate a virtual SSD disk later on).
    • 2x with 20GB for the vSAN Capacity Tier

Note: The sizes of the Virtual Disks are not mandatory. You can increase or reduce my disk size examples.

Regarding the network, I will try to simulate my physical implementation.

  • 6x Virtual Network Adapters: VMXNET3 (since my physical host has 8 physical network cards, I can use 6 vNICs in my Nested environment, but you can do this with only 2 vNICs).
    • 2x for the Management network (192.168.1.x)
    • 2x for the vSAN/iSCSI Network (192.168.10.x) + the vMotion Network (192.168.0.x)
    • 2x for the VMs Network (192.168.1.x)

Note: We can either use subnets that are all routed, or use a VLAN for each subnet. I will only use routing in this case.

After we create our VM for the ESXi host, we need to prepare the VM to be a Nested ESXi host.

Note: In previous vSphere versions we needed to add some extra values to the VM’s .vmx file (like vhv.enable = “TRUE” or hypervisor.cpuid.v0 = “FALSE”), but with vSphere 6.5 we don’t need to change anything at that level. Still, we need to prepare the VM for vSAN and for running VMs inside.

First, let us prepare the VM to run a proper Nested vSphere 6.5.

Create Nested vSphere vSAN VM


When you create the VM, in the “Guest Operating System” option choose “Other” and then “vSphere 6.5”.


Next, select the VM you created, choose “Edit Settings,” and make the necessary changes (4 vCPU, 8GB vMemory, 1x 5GB Virtual Disk, 2x 20GB Virtual Disks). When you edit the VM settings, you will notice something different from previous Nested vSphere VMs.

With the new Nested vSphere 6.5, using the vSphere 6.5 ISO (with vSAN included), VMware Tools is automatically installed in the VM. The VM is also created automatically with a VMware Paravirtual SCSI virtual disk controller and a VMXNET3 virtual network adapter.

These are updated virtual drivers that improve how the VM communicates with the physical hardware (the hypervisor).

Some of the benefits of using VMware Paravirtual SCSI and Network devices are:

  • Better data integrity compared to the Intel E1000e adapter
  • Lower CPU utilization
  • Less overhead
  • Greater throughput and efficiency

Before we start installing vSphere, we need to make some changes to the vCPU settings so that the Nested VM can work as a hypervisor, with a virtualized CPU able to run x64 VMs.

First, we need to expose the physical CPU to the VM.

Just open the vSphere Web Client (this will not work with the vSphere 6.5 Client) and Edit Settings on your Nested ESXi VM.
Go to CPU and enable the option “Expose hardware-assisted virtualization to the guest OS”, and also change the option “CPU/MMU Virtualization” to “Hardware CPU and MMU” (Use Intel® VT-x/AMD-V for instruction set virtualization and Intel EPT/AMD RVI for MMU virtualization).

Check next image:

If we do not enable the “CPU/MMU Virtualization” option, we will get this error when installing vSphere:

If we do not enable the option “Expose hardware-assisted virtualization to guest OS”, we will get this error when trying to power on an x64 guest OS VM.

We can now start installing our first vSphere 6.5 vSAN server in the VM.

Note: Do not forget that you need the vSphere ISO with vSAN already included, not the standard vSphere 6.5 ISO. For vSAN v6.6, download vSphere and vCenter from HERE.

We will skip the vSphere installation itself, since this is something you should already know if you are trying out vSAN.

After we finish the vSphere 6.5 install, we now need to check our disks (virtual disks) so that we can build our vSAN.

Remember that we created 4 Virtual Disks (one needs to be SSD to work with the All-Flash cache).

Log in to your ESXi shell console and let us check the disk types.

Using the command vdq -q, we will be able to check which disks are eligible for use by vSAN, and also which ones are SSDs.
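As a sketch, the command and a trimmed example of the kind of output to expect (device names and exact fields will differ by environment and build):

```shell
# List all disks and whether they are eligible for vSAN (run in the ESXi shell)
vdq -q

# Example (trimmed) output for the cache disk, before any changes:
# {
#    "Name"     : "mpx.vmhba0:C0:T1:L0",
#    "VSANUUID" : "",
#    "State"    : "Eligible for use by VSAN",
#    "Reason"   : "None",
#    "IsSSD"    : "0",
#    "IsCapacityFlash" : "0",
# },
```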

As we can see in the next image, all disks are vSAN-eligible, but none is an SSD (in particular mpx.vmhba0:C0:T1:L0, the virtual disk we created for the cache). Therefore, we need to make some changes to the VM settings to simulate an SSD disk.

Edit your Nested vSphere VM settings, choose the “Options” tab, then “Advanced – General”, and click “Configuration Parameters”.

We will add a new row in the VM settings.

In this case, we will add a setting stating that the device mpx.vmhba0:C0:T1:L0, our Virtual Disk nr. 2 (identified on the disk controller as SCSI(0:1)), is an SSD disk.

Add the name scsi0:1.virtualSSD with the value 1 (which means enabled, or true).
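Added via Configuration Parameters (or directly in the VM’s .vmx file while the VM is powered off), the new row looks like this:

```
scsi0:1.virtualSSD = "1"
```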

Then click “OK,” close Edit Settings, power on the Nested vSphere VM, and let us recheck whether the disk now displays as an SSD.

Let us use the vdq -q command again to check the devices. As we can now see, mpx.vmhba0:C0:T1:L0 is set with “IsSSD”.

As we want to use this Nested vSAN as All-Flash, we will also change the setting for the other Virtual Disks (20GB) that we will use for the Capacity Tier, so that we have all-SSD disks for both the cache and the Capacity Tier.

Since we have already changed mpx.vmhba0:C0:T1:L0, the other disk devices should be the remaining two eligible ones.

The device mpx.vmhba0:C0:T0:L0 is the one where we installed ESXi, and it is not eligible for vSAN (as we could see in the information above).

Just follow the same procedure to change the settings to SSD disks.
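Assuming the two capacity disks sit at SCSI(0:2) and SCSI(0:3) on the controller (as in the disk layout we created), the equivalent rows would be:

```
scsi0:2.virtualSSD = "1"
scsi0:3.virtualSSD = "1"
```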

After changing the Capacity Tier Disks to SSD, we should now mark them as “IsCapacityFlash” so that we can build our All-Flash vSAN.

To do this, just run the following command:
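The command screenshot is missing here; a sketch of the standard ESXCLI command for tagging a device as capacity flash in vSAN 6.x, applied to the first capacity disk (assuming it is mpx.vmhba0:C0:T2:L0 in this layout):

```shell
# Tag the first capacity-tier disk as capacity flash (ESXi shell)
esxcli vsan storage tag add -d mpx.vmhba0:C0:T2:L0 -t capacityFlash
```

Running vdq -q afterwards should show “IsCapacityFlash” set to “1” for the tagged device.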

Do the same for the second disk: mpx.vmhba0:C0:T3:L0

To finish, we can double-check that all disks are correctly set for All-Flash and marked as SSDs.

We can also verify the SSD disks by connecting with the vSphere Client and looking in the Storage area. Just try to add a new Datastore, and we will see the three remaining Virtual Disks that we created initially, one of which is now an SSD.

Cloning Nested vSphere vSAN

Now that we have finished installing vSphere and configuring the VM for vSAN, we need to create two more ESXi hosts. However, rather than repeating the same configuration, we can simply create clones of this initial Nested vSphere VM.

To create a clone of a Nested vSphere host, we need to make some changes in our source VM.

From outside the guest OS, nothing needs to change, since each clone will get a unique MoRef ID, InstanceUUID, BIOS UUID, and MAC Addresses for each of its virtual network adapters. Inside the Nested vSphere, however, the clone will have some issues.

  • The first issue is that we will get a duplicated MAC Address of the VMkernel interface(s) because the Nested ESXi configuration is the same.
  • The second issue is that we will have a duplicated ESXi System UUID (or VMkernel UUID) which typically should be unique and can sometimes be used for tracking purposes. You can see this System UUID by running the following ESXCLI command: esxcli system uuid get or by looking in esx.conf configuration file

Note: Thanks to William Lam for his help and his article HERE regarding cloning Nested ESXi.

Therefore, we need to make some changes before cloning.

In the source Nested ESXi, we need to set FollowHardwareMac, which automatically updates the VMkernel’s MAC Address if the MAC Addresses of the Nested vSphere VM’s virtual network adapters change. Go to the ESXi shell console.

First, run the command that checks whether the option is enabled or disabled (0 = disabled, 1 = enabled).

As we can see in the image above, it is disabled, so run the next command to enable it.
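Both steps can be sketched with ESXCLI (the advanced option lives under /Net/FollowHardwareMac; verify the path on your build):

```shell
# Check the current value (0 = disabled, 1 = enabled)
esxcli system settings advanced list -o /Net/FollowHardwareMac

# Enable FollowHardwareMac so the VMkernel MAC follows the vNIC MAC
esxcli system settings advanced set -o /Net/FollowHardwareMac -i 1
```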

Next, we need to delete the existing UUID in the /etc/vmware/esx.conf configuration file. After the UUID is deleted, the Nested ESXi will generate a new one when we reboot/power it on.
We can do this by editing the /etc/vmware/esx.conf file with vi, but you can also run a single command that removes it without needing vi.
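A sketch of the one-liner (from William Lam’s article) that blanks out the UUID entry in place:

```shell
# Blank out the /system/uuid line in esx.conf; a new UUID is generated on next boot
sed -i 's#/system/uuid.*##' /etc/vmware/esx.conf
```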

Next, to make sure these changes to esx.conf are persistent, we need to run the backup script on the ESXi host: /sbin/auto-backup.sh

After this, we can power off the source Nested ESXi VM and create a clone. After the clone is finished, we of course need to change the IP Address and DNS name in each new Nested ESXi.

After all our Nested ESXi are cloned, we can now start adding ESXi hosts to the vCenter.

Note: Since this is a vSAN implementation from scratch, I installed a fresh vCenter 6.5 for it, so I will add the Nested ESXi hosts to this new vCenter.

Adding the first Nested ESXi worked fine; adding the second (a clone) finished with an error.

This error occurs because the Nested ESXi clones have duplicated VMFS UUIDs. Since the original Nested ESXi host already had a Datastore, the VMFS UUID is the same in this clone.
To fix this issue, we need to resignature the VMFS volume. Log in to your ESXi clone shell console, list all the Datastores, and resignature the volume.
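A sketch of the commands on the clone (the volume label datastore01 is the default name from the ESXi install in this setup; substitute your own label or UUID):

```shell
# List VMFS volumes detected as snapshots/copies (duplicated UUIDs)
esxcli storage vmfs snapshot list

# Resignature the copied volume by its label
esxcli storage vmfs snapshot resignature -l datastore01
```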

Here is a screenshot of all the commands and changes used to fix the Datastore issues in the Nested ESXi clones.

Besides the VMFS UUID conflict, another issue we can encounter when adding a clone is the name of the existing Datastore (created by default by the ESXi installation). In this case, the default name is datastore01, which is the same on all 3 Nested ESXi hosts.

Even if we rename the Datastore we will run into issues, so the best and fastest solution is to delete the Datastore. Alternatively, delete the Datastores on the source Nested ESXi host before you clone it, and create your Datastores after cloning the ESXi hosts.

Note: To fix this last issue we could also unmount the Datastore and recreate the filesystem with a new Datastore name, but I think that is a significant effort for a clone-and-test environment, when you can simply delete the Datastore in the clone, or in the source ESXi host before cloning.

After going through this process on all cloned Nested ESXi hosts, I was able to add the ESXi hosts to the vCenter. The Nested environment is now finished; in the next article we will start configuring the All-Flash vSAN.

Finished Part I

Read the second part of this article: How to create a Nested VMware vSAN 6.6 environment – Part II

Note: Share this article, if you think it is worth sharing.

©2017 ProVirtualzone. All Rights Reserved
May 13th, 2017 | Storage, Virtualization, vSAN

About the Author:

I have over 20 years of experience in the IT industry and have worked with virtualization for more than 10 years (mainly VMware). I am an MCP, VCP6.5-DCV, VMware vSAN Specialist, Veeam Vanguard 2018/2019, vExpert vSAN 2018/2019, and a vExpert for the last 4 years. My specialties are virtualization, storage, and virtual backups. I work for Elits, a Swedish consulting company, and am allocated to a Swedish multinational networking and telecommunications company as a Tech Lead, acting as a Senior ICT Infrastructure Engineer. I am a blogger and the owner of ProVirtualzone.com.
