In this second article, How to create a Nested VMware vSAN 6.6 environment – Part II – Configure All-Flash vSAN Nested, we will configure vSAN in the vSphere Web Client.
In Part I – Create Nested vSphere 6.5 for vSAN 6.6 we installed and configured all the nested ESXi hosts and also set all Virtual Disks as SSD so they can be used in an All-Flash vSAN.
Note: As in my first article, in this Part II we will go step by step through every detail. This will help not only people who already have experience with vSphere, but also newcomers who are still not comfortable with some of these tasks (networking, storage, etc.).
As noted in the previous article, a vCenter Server Appliance 6.5 is already installed, and the ESXi hosts were already added to vCenter. But let us take a small step back and focus on the Cluster that we need to create and enable vSAN on (we will not use HA or DRS for now).
So, before you add your vSAN ESXi hosts to the vCenter Cluster: if you enable vSAN on that Cluster before creating all the necessary networks and assigning disks, you will get a warning.
Note: Even though we could configure this afterwards, it is easier to create the networks first and then do the vSAN configuration in one pass.
Warning:
Note: Since I want to add some VMs in my further tests, I created a 400GB LUN and added a 100GB disk to each of my vSAN ESXi hosts.
As discussed in the first article, I need to change the Virtual Disk settings so that vSAN recognizes each disk as an SSD.
As we can see in the next image, all the disks I will use for this All-Flash vSAN are SSD (flash).
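If you prefer to script this step instead of editing each Virtual Disk by hand, a minimal pyVmomi sketch like the one below flags every remaining non-flash disk as SSD. The vCenter address, credentials and the "flag everything" approach are assumptions for this nested lab only; treat it as an alternative to the Web Client procedure from Part I.

```python
import ssl

from pyVim.connect import SmartConnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Lab-only connection details (placeholders) and no certificate verification.
# The later sketches in this article reuse this 'si' / 'content' session.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Walk every host and flag any SCSI disk not already reported as flash.
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    storage = host.configManager.storageSystem
    for lun in storage.storageDeviceInfo.scsiLun:
        if isinstance(lun, vim.host.ScsiDisk) and not lun.ssd:
            print("Marking %s on %s as flash" % (lun.canonicalName, host.name))
            WaitForTask(storage.MarkAsSsd_Task(lun.uuid))
view.DestroyView()
```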
Let us create and configure the ESXi networks and also the vSAN network.
Create vSAN Network – Nested VMware vSAN 6.6 environment
First, create a vDS for the vSAN network; it will be used by all vSAN ESXi hosts.
In the vSphere Web Client, click Networking -> Networks tab -> Distributed Switches tab and click New Distributed Switch.
We will name this vDS “vSAN”.
This All-Flash vSAN implementation is all vCenter/vSphere 6.5, so there is no need to use vDS v6.0 for compatibility with older versions. We will stick to the latest, v6.5.
Next, we create the Portgroup that we will use for our VMkernel networks (vMotion, vSAN, and vSphere Replication); it will be named “vSAN-Portgroup”.
Next, check that the information is all correct and click Finish.
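For reference, the same vDS and Portgroup can be created with a short pyVmomi sketch. The datacenter name, uplink names and port count below are assumptions for this lab, and `content` is the connection object from the earlier disk-flagging sketch.

```python
from pyVim.task import WaitForTask
from pyVmomi import vim
# 'content' is the ServiceInstance content from the SmartConnect session shown earlier.

# Find the datacenter (name is a lab assumption) and create the vDS in its
# network folder with two uplinks, as in the wizard.
dc = next(d for d in content.rootFolder.childEntity
          if isinstance(d, vim.Datacenter) and d.name == "Datacenter")

dvs_cfg = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
    name="vSAN",
    uplinkPortPolicy=vim.DistributedVirtualSwitch.NameArrayUplinkPortPolicy(
        uplinkPortName=["Uplink 1", "Uplink 2"]))
create_spec = vim.DistributedVirtualSwitch.CreateSpec(
    configSpec=dvs_cfg,
    productInfo=vim.dvs.ProductSpec(version="6.5.0"))  # stick to the latest version

WaitForTask(dc.networkFolder.CreateDVS_Task(create_spec))

# Add the portgroup that the VMkernel adapters will use.
dvs = next(n for n in dc.networkFolder.childEntity
           if isinstance(n, vim.DistributedVirtualSwitch) and n.name == "vSAN")
pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    name="vSAN-Portgroup", type="earlyBinding", numPorts=32)
WaitForTask(dvs.AddDVPortgroup_Task([pg_spec]))
```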
After creating the vDS, we now need to add the vSAN ESXi hosts to it so that we can then add their physical network adapters to the vDS.
Select your new vDS “vSAN” and click the top icon (Add hosts to the vDS).
Select the hosts that you will add to this vDS. In this case, it is all hosts.
Next, we select which tasks we need. In our case, we will add physical network adapters and also add VMkernel adapters, so we choose those options.
Next, let us select the network adapters that we will use for this vDS and assign an uplink. In our case, we will add vmnic2 and vmnic3 for the vSAN network.
For each host, select the network adapters and an uplink (uplink1 or uplink2). In this case, for the uplink assignment I will choose the Auto-assign option (vCenter will pick a free uplink for each network adapter).
Note: As you remember, we created the vDS with only two uplinks, so only two network adapters per host can be added to this vDS and only two uplinks are available.
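Scripting this step is also possible. The sketch below adds each nested host to the “vSAN” vDS with vmnic2 and vmnic3; the host names are assumptions for this lab, `content` comes from the same SmartConnect session as before, and leaving `uplinkPortKey` unset lets vCenter auto-assign uplinks, as in the wizard.

```python
from pyVim.task import WaitForTask
from pyVmomi import vim
# 'content' comes from the same SmartConnect session as the earlier sketches.

def get_obj(vimtype, name):
    """Small inventory lookup helper reused by the following sketches."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.DestroyView()

dvs = get_obj(vim.DistributedVirtualSwitch, "vSAN")
for host_name in ("vsan-esxi01.lab.local",
                  "vsan-esxi02.lab.local",
                  "vsan-esxi03.lab.local"):
    host = get_obj(vim.HostSystem, host_name)
    member = vim.dvs.HostMember.ConfigSpec(
        operation="add",
        host=host,
        backing=vim.dvs.HostMember.PnicBacking(
            pnicSpec=[vim.dvs.HostMember.PnicSpec(pnicDevice="vmnic2"),
                      vim.dvs.HostMember.PnicSpec(pnicDevice="vmnic3")]))
    spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
        configVersion=dvs.config.configVersion,  # must match the current config
        host=[member])
    WaitForTask(dvs.ReconfigureDvs_Task(spec))
```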
Create VMkernel
After we have added the network adapters and assigned the uplinks, we need to create the VMkernel network.
Click New adapter to add the VMkernel adapter to this host.
Note: A VMkernel network adapter needs to be created on each vSAN ESXi host, so repeat the next steps for each host.
Next, we select the option VMkernel Network Adapter (it is selected by default).
Browse to choose the Portgroup (inside the vSAN vDS) that we will use for this VMkernel network.
In our case, it is the “vSAN-Portgroup” that we created before.
Next, we select the services that we will enable on this VMkernel network. For this vSAN implementation we only need vMotion and vSAN, but since we will use this environment for other tests, we will also enable some other services.
Add an IP address to your VMkernel adapter (as shown in the first article, this vSAN subnet will be 192.168.10.x).
Double-check the information and, if all is correct, click “Finish”.
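The whole per-host VMkernel loop can also be scripted. The sketch below creates the adapter on “vSAN-Portgroup” with a static IP and tags it for vSAN and vMotion traffic; the host-to-IP mapping is an assumption, and `content` and `get_obj()` are the connection and helper from the previous sketches.

```python
from pyVmomi import vim
# 'content' and get_obj() are the connection and helper from the previous sketches.

dvs = get_obj(vim.DistributedVirtualSwitch, "vSAN")
pg = get_obj(vim.dvs.DistributedVirtualPortgroup, "vSAN-Portgroup")

# Host-to-IP mapping on the 192.168.10.x vSAN subnet (an assumption for this lab).
hosts_ips = {"vsan-esxi01.lab.local": "192.168.10.11",
             "vsan-esxi02.lab.local": "192.168.10.12",
             "vsan-esxi03.lab.local": "192.168.10.13"}

for host_name, ip in hosts_ips.items():
    host = get_obj(vim.HostSystem, host_name)
    nic_spec = vim.host.VirtualNic.Specification(
        ip=vim.host.IpConfig(dhcp=False, ipAddress=ip, subnetMask="255.255.255.0"),
        distributedVirtualPort=vim.dvs.PortConnection(
            switchUuid=dvs.uuid, portgroupKey=pg.key))
    # The portgroup name is empty because the vmknic lives on a distributed portgroup.
    vmk = host.configManager.networkSystem.AddVirtualNic("", nic_spec)
    vnic_mgr = host.configManager.virtualNicManager
    vnic_mgr.SelectVnicForNicType("vsan", vmk)     # tag for vSAN traffic
    vnic_mgr.SelectVnicForNicType("vmotion", vmk)  # tag for vMotion traffic
```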
After we finish the VMkernel network, let us go back to the vSAN vDS wizard and finish it.
There are no network issues with the changes we made, so we can continue.
As we can see in the next image, we added three hosts and six network adapters to this vDS.
Click Finish, and the configuration of the vSAN network is complete.
After completing the vSAN network, we will create a second vDS for the Virtual Machine network.
There is no need to go step by step through this new vDS again; I will just show the networks created (vmnic4 and vmnic5 were used for this new vDS).
With the networking finished, we can now enable vSAN in the vCenter Cluster and configure it.
Configure vSAN
Before starting the vSAN configuration, I will briefly explain the three areas that we can set up in the vSAN Cluster.
Option 1:
This option is where we enable/disable vSAN and where we configure it.
Select the vCenter Cluster -> Configure tab -> General option -> Configure… button in the upper right corner.
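If you only want to script the bare “turn vSAN on” step (leaving deduplication, encryption and disk claiming to the wizard described below), a minimal pyVmomi sketch looks like this; the cluster name is an assumption for this lab, and `get_obj()` is the helper from the earlier sketches.

```python
from pyVim.task import WaitForTask
from pyVmomi import vim
# 'content' and get_obj() as in the earlier sketches.

cluster = get_obj(vim.ClusterComputeResource, "vSAN-Cluster")  # lab cluster name
spec = vim.cluster.ConfigSpecEx(
    vsanConfig=vim.vsan.cluster.ConfigInfo(
        enabled=True,
        defaultConfig=vim.vsan.cluster.ConfigInfo.HostDefaultInfo(
            autoClaimStorage=False)))  # we will claim cache/capacity disks ourselves
WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))
```

Within this wizard there are three settings to decide on: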
- Deduplication and Compression
This option enables deduplication and compression, which reduce the amount of data stored on your physical disks. It only works with all-flash disk groups; if it is enabled, it will not be possible to create hybrid disk groups.
Since we have set all our disks as flash disks, and the purpose of this installation is All-Flash vSAN, we will enable this option.
Important Note: If you later need to disable this option or evacuate any disks, all disk groups will be recreated and reformatted.
- Encryption
This will encrypt all the data in your disk groups (flash or hybrid disks).
- Fault Domains and Stretched Cluster
This is the vSAN host failover cluster option. Virtual SAN Stretched Clusters provide the ability to deploy a single Virtual SAN cluster across multiple data centers.
There are different failover options; there is a great document where you can read all about Stretched Clusters, how they work, how to build one, and how each failover option works: check the vSAN Stretched Cluster & 2 Node Guide.
Next, you set your vSAN network. As we can see in the next image, all the VMkernel adapters we created above will be used for the vSAN network.
By default the vSAN VMkernel adapters are shown; if they are not, change the view to see the vSAN VMkernel adapters.
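A quick way to verify the tagging outside the wizard is to ask each host which VMkernel adapter carries vSAN traffic; a small sketch (reusing the same `content` session) could look like this.

```python
from pyVmomi import vim
# 'content' from the same SmartConnect session as before.

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    cfg = host.configManager.virtualNicManager.QueryNetConfig("vsan")
    selected = cfg.selectedVnic or []
    vmks = [v.device for v in cfg.candidateVnic if v.key in selected]
    print("%s -> vSAN traffic on %s" % (host.name, vmks or "no adapter"))
view.DestroyView()
```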
Next, we will claim our disks for the Cache Tier and Capacity Tier disk groups.
First, let us select our disks for the cache. Select the disks you need for the cache tier (in this case, the 5GB disks that we created in our vSAN ESXi hosts) and then click the claim icon.
For the Capacity Tier, follow the same process.
These are all the disks that we can claim for the capacity tier: the ones we added in the first article, plus the 100GB disks that we added in this article.
Note: The number of capacity tier disks must be greater than or equal to the number of cache disks claimed per host.
After we finish claiming disks, this is the final state.
Check all the information and finish the process.
As we can see, we have a full 420GB for the Capacity Tier (2x20GB plus 1x100GB disks per host) and 15GB for the Cache Tier (one 5GB disk per host).
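Claiming disks can also be scripted, although the Web Client wizard is the easier path. The rough sketch below builds one disk group per host, treating the 5GB disk as cache and everything else eligible as capacity; note that in an all-flash setup the capacity devices may additionally need the capacityFlash tag (which the wizard applies for you), so the size threshold and tagging behaviour here are assumptions for this lab.

```python
from pyVim.task import WaitForTask
from pyVmomi import vim
# 'content' from the same SmartConnect session as before.

CACHE_MAX_GB = 10  # in this lab, the single 5GB disk on each host is the cache device

def size_gb(disk):
    """Return a SCSI disk's size in GB."""
    return disk.capacity.block * disk.capacity.blockSize / 1024 ** 3

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    vsan_sys = host.configManager.vsanSystem
    eligible = [r.disk for r in vsan_sys.QueryDisksForVsan() if r.state == "eligible"]
    cache = [d for d in eligible if size_gb(d) <= CACHE_MAX_GB]
    capacity = [d for d in eligible if size_gb(d) > CACHE_MAX_GB]
    if cache and capacity:
        # One disk group per host: the small flash device as cache, the rest as capacity.
        mapping = vim.vsan.host.DiskMapping(ssd=cache[0], nonSsd=capacity)
        WaitForTask(vsan_sys.InitializeDisks_Task([mapping]))
view.DestroyView()
```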
In the next image, under Datastores, we can see the new vSAN datastore (the Capacity Tier) that was created.
This finishes the vSAN configuration. We can now start using vSAN for our VMware infrastructure.
Let us do a quick run-through of the rest of the options.
Option 2:
Here we can configure vSAN internet access. If needed, you can also set a proxy here.
Option 3:
In this option, you can set up your vSAN as an iSCSI target, meaning that you can provide iSCSI LUNs to other ESXi hosts (or other servers) outside of your vSAN Cluster or even outside vSAN altogether.
Select the iSCSI network (VMkernel) that you will use for this iSCSI target and configure it as a simple iSCSI target.
Afterwards, you can create LUNs and add initiators or initiator groups to the LUNs you created.
Note: I plan to write a more detailed article about this option in the future.
With this last option, we finish our Nested All-Flash vSAN implementation.
Before I close this article, I just want to provide an update on an issue that I ran into near the end.
Issues Found
As I said at the beginning of this article, I added three 100GB LUNs from my storage and presented one to each vSAN ESXi host as a disk to add more space to the Capacity Tier. I then configured those disks as SSD. Everything was fine, as we saw above, but then I got some issues in my home storage on those particular LUNs and started getting timeouts and warnings in vSAN.
After a minute or so, the LUNs came back online and the issue disappeared. I then noticed that I had some issues with my storage (some disks were failing), so to be on the safe side I decided to remove these 100GB LUNs from vSAN until I fix my home storage.
As you remember, those LUNs were added to the Capacity Tier alongside the 20GB disks, so if this were a real scenario I would need to evacuate all data from these disks and replace them.
How can we do that?
First, we run a test of the disk removal and check what the result would be. This can be done in the vSAN configuration under the Disk Management option.
Select Disk Groups in the upper right corner and select the disks that you want to remove/test.
Then click the icon to run the test.
As you can see in the next image, in my case it is not possible to evacuate the data, so we need to double-check the reason.
For a better test, I then clicked the evacuate button, and I got the reason (for this particular case) why it is not possible to evacuate the data.
As we can see in the next image, deduplication and compression are enabled, and while this option is enabled it is not possible to evacuate the data without reformatting.
So, as we warned at the beginning of the vSAN configuration, be aware that if you enable deduplication and compression in your vSAN, then whenever you need to replace disks or recreate a disk group you will have to reformat all the disks in that group.
In this case, since there was no data, there was no problem in disabling deduplication and reformatting the disk group.
Just go back to Option 1 in the vSAN configuration and disable deduplication. As you can see in the next image, we get a warning that all disks will be reformatted.
After you disable deduplication, the disks will be reformatted.
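For completeness, disabling deduplication and compression can also be done through the vSAN Management API. The sketch below assumes the vSAN Management SDK for Python (vsanmgmtObjects.py and vsanapiutils.py from VMware's SDK samples) is available, and the cluster name is again an assumption; if in doubt, just use Option 1 in the Web Client as described above.

```python
import ssl

import vsanapiutils          # from VMware's vSAN Management SDK for Python samples
import vsanmgmtObjects       # noqa: F401 - registers the vim.vsan.* managed objects
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

cv = content.viewManager.CreateContainerView(content.rootFolder,
                                             [vim.ClusterComputeResource], True)
cluster = next(c for c in cv.view if c.name == "vSAN-Cluster")  # lab cluster name
cv.DestroyView()

# Ask vCenter's vSAN cluster config system to turn data efficiency off.
vc_mos = vsanapiutils.GetVsanVcMos(si._stub, context=ctx)
config_system = vc_mos["vsan-cluster-config-system"]
spec = vim.vsan.ReconfigSpec(
    dataEfficiencyConfig=vim.vsan.DataEfficiencyConfig(
        dedupEnabled=False, compressionEnabled=False))
task = config_system.VsanClusterReconfig(cluster, spec)
# Monitor the resulting task in the Web Client: every disk group will be reformatted.
print("Reconfigure task started:", task)
```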
We can go back to the Disk Management option and start removing the damaged disks.
Note: You need to select each host to remove the disks from that host. In this case it was the 100GB disks on all hosts, but you can remove just one disk from one host, as long as the rule “The number of capacity tier disks must be greater than or equal to the number of cache disks claimed per host” is not broken.
Just select the disk to remove and click the remove button.
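The removal itself can be scripted as well. The sketch below walks each host's disk group and removes the 100GB capacity devices while asking vSAN to evacuate all data first (which, as shown above, only works once deduplication is disabled); the 100GB size filter is specific to this lab, and `content` is the session from the earlier sketches.

```python
from pyVim.task import WaitForTask
from pyVmomi import vim
# 'content' from the same SmartConnect session as before.

# Ask vSAN to evacuate all data from the disks before they are removed.
evacuate = vim.host.MaintenanceSpec(
    vsanMode=vim.vsan.host.DecommissionMode(objectAction="evacuateAllData"))

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    vsan_sys = host.configManager.vsanSystem
    for mapping in vsan_sys.config.storageInfo.diskMapping:
        bad = [d for d in mapping.nonSsd
               if round(d.capacity.block * d.capacity.blockSize / 1024 ** 3) == 100]
        if bad:
            WaitForTask(vsan_sys.RemoveDisk_Task(disk=bad, maintenanceSpec=evacuate))
view.DestroyView()
```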
As we can see in the next image, everything is now green and it would be possible to evacuate the data if necessary (after we disabled deduplication).
The process will start.
As we can see in the next image, all is green and healthy.
To finish this process and the article, let us check the size of the vSAN datastore after removing the broken disks.
As we can see in the next image, we now have only 120GB in the datastore (Capacity Tier), which is the 2x20GB per host that remain.
After this long Part II – Configure All-Flash vSAN Nested, we have now finished the implementation of this Nested All-Flash vSAN.
We can now play around with this Nested vSAN, do all kinds of tests, and learn more about this great product from VMware.
More vSAN articles will arrive soon on the blog.
Thank you for reading this long two-part article. I hope you find it worth sharing.