Tuesday, April 5, 2016

New Blog Site

This blog has been migrated to a new domain.

All existing articles are already available on the new blog, and all new articles will be published on the new domain.

Since I wanted a more flexible and professional blog, I decided to register my own domain and create a new blog address.

To my old and new readers, thank you for all the visits and feedback since I started this blog in 2010 and reactivated it in 2013.

So please check the new blog address: www.provirtualzone.com

Thank You All!

Luciano Patrão

Saturday, March 19, 2016

VMware: What's New in vCenter and vSphere 6.0 Update 2




A couple of days ago VMware released a new update (Update 2) for vCenter 6.0 and also for ESXi 6.0.

What is new:

vCenter Update 2 Features:
  • Two-factor authentication for the vSphere Web Client: Protect the vSphere Web Client using the following forms of authentication:
    • RSA SecurID
    • Smart card (UPN-based Common Access Card)
  • Support to change vSphere ESX Agent Manager logging level: This release supports dynamic increase or decrease of vSphere ESX Agent Manager (EAM) logging levels without restarting EAM.
  • vSphere Web Client support: vSphere Web Client now supports Microsoft Windows 10 OS.
  • vCenter Server database support: vCenter Server now supports the following external databases:
    • Microsoft SQL Server 2012 Service Pack 3
    • Microsoft SQL Server 2014 Service Pack 1
  • vCenter Server 6.0 Update 2 issues that have been fixed in this update are listed in the Resolved Issues section of the release notes.
Also check the important Known Issues for this release.
 
ESXi 6.0 Update 2 Features:
  • High Ethernet Link Speed: ESXi 6.0 Update 2 supports 25 G and 50 G Ethernet link speeds.
  • VMware Host Client: The VMware Host Client is an HTML5 client that is used to connect to and manage single ESXi hosts. 
Note: It can be used to perform administrative tasks to manage host resources, such as virtual machines, networking, and storage. The VMware Host Client can also be helpful to troubleshoot individual virtual machines or hosts when vCenter Server and the vSphere Web Client are unavailable. More information: VMware Host Client Release Notes.

  • vSphere APIs for I/O Filtering (VAIO) Enhancement:
    • ESXi 6.0 Update 2 supports the IO Filter VASA Provider in a pure IPv6 environment. Resolved Issues 
    • ESXi 6.0 Update 2 supports the VMIOF versions 1.0 and 1.1. Resolved Issues
  • ESXi 6.0 Update 2 issues that have been fixed in this update are listed in the Resolved Issues section of the release notes.
Also check the important Known Issues for this release (a long list).
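If you want to try the new embedded Host Client after applying Update 2, it runs directly on each host and should be reachable in a browser (assuming the default path of the embedded client; check the Host Client Release Notes if your build differs):

https://<ESXi-host-IP-or-FQDN>/ui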

Storage: Dell EqualLogic PS6000 - How to upgrade the Array and Hard Disk firmware


In this article we will cover how to update the firmware (arrays and disks) of a Dell EqualLogic PS6000.

Since the disk firmware needs at least array firmware version 6.x and we are running 7.x, I decided to update the disks first and then update the array firmware to 8.x.

So in this article we will upgrade the PS6000 firmware from version 7.1.4 to version 8.1.1 (V8-1-1-R417753), and our hard disk (Seagate) firmware to v9 (V9.0_DriveFw_2663786033).

First we will check if our hard disks (HD) are on the list for this firmware upgrade. Check the Dell update document (in our case, for this version, 110-6044-R13_DriveFW_UPD.pdf) and look for your HD model.

We can do this in two ways: one using the GUI, the other using the array console (CLI).

Using the GUI:

Connect to your array Group IP in your browser, choose Group > Member, then select the array member name and open the Disks tab.


As we can see in the image above, we have HD firmware versions KD08 and PD04, depending on the HD model.

If we want to check this in the console (CLI), we need to connect to the Group Manager IP address through SSH or FTP.

Note: If you will do the following tasks (uploading files and updating disks) manually, connect at the member level, not the group level, so that we update the disks (and the array) only on the member we are working on. Connect through an IP address assigned to a network interface on the array (for example, Ethernet port 0). Do not connect to the group IP address.
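For example, a direct SSH session to a member looks like this (grpadmin is the array administration account used later in this article; replace the IP with the address assigned to that member's eth0 interface):

# ssh grpadmin@<member-eth0-IP>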

Using CLI – Issue these commands:

# member select <member_name>
# disk select 1
# show

xxx-xxx-grp0> member select xxx-xxx-02
xxx-xxx-grp0(member_xxx-xxx-02)> disk select 1
xxx-xxx-grp0(member_xxx-xxx-02 disk_1)> show




As we can see in the GUI image above, we have HDs with different models, so double-check all your disks against the document.

Example for a second disk:

xxx-xxx-grp0> member select xxx-xxx-02
xxx-xxx-grp0(member_xxx-xxx-02)> disk select 2





After we double-check everything, we can upload our HD firmware file to the array and run the update.

We can simply use FTP to the array and upload the files, or use a tool like WinSCP (my preferred option) to upload the files to the array root.

After your files are on the array, connect to the array console (if you did not use FTP to upload the files) and just run the command "update".

The array will automatically pick up the file that you uploaded to the root.
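If you prefer plain FTP instead of WinSCP, the upload is just a standard FTP transfer (log in with the grpadmin account and remember to switch to binary mode first; the member IP and firmware kit file name are the ones from your own environment):

# ftp <member-eth0-IP>
ftp> binary
ftp> put <HD-firmware-kit-file>
ftp> bye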



As we can see, update found 16 HDs that can be updated, so type "y" to confirm and continue the update.

Note: This disk firmware update will not stop or disrupt the storage or any of the volumes.

Update finished for this array:




Let's have a quick look in the GUI at the HDs we updated.



Now our HDs have firmware KD0A and PD0A for the different models.
After updating our HDs, we can move on to updating the EqualLogic array firmware.

NOTE: Before we start this section, be aware that updating the array firmware requires a reboot of the array. So only start these tasks if you are able to reboot your array after the firmware update.

Again, this can be done in two ways: with the GUI or in the console (CLI).
Since we have two arrays, we will update one with the GUI and the other with the console (CLI).

Using the GUI:

Connect to your array Group IP in your browser, choose Group > Member, then select the array member name and open the Maintenance tab.




As we can see in the image above, we have firmware v7.1.4 and will update it to v8.1.1.
Check your array firmware and compare it to the list HERE. Check whether your firmware can be upgraded directly to this new version (or another one), or whether you first need to go to an intermediate version and then to the latest one.

Firmware downloads and documentation can be found HERE.

Note: You need a Dell Support login to download the files, the documentation, and the version matrix.

Here it is very easy: just click the "Update Firmware" button and a safety dialog appears asking you to confirm your password.




Enter your grpadmin password and continue, then choose the member that you want to update and upload the firmware file.




After adding the file (that we downloaded in the section above), the system recognizes that both members can be updated (last column). We will choose member xxx-xxx-02 to update with the GUI. Just click "Update selected members" to upload the file to the member and start the firmware update.




After that, the update will start and will take a few minutes.

After the update finishes, you will see a warning at the bottom of the GUI informing you that the array needs to be restarted.



Note: The firmware update only takes effect after the reboot.

Using CLI – Issue these commands:

Again, we need to connect to the Group Manager IP address through SSH or FTP.

Note: Once again, connect through an IP address assigned to a network interface on the array (for example, Ethernet port 0). Do not connect to the group IP address. In this case we will connect to array 03, using the IP address of that array's eth0 interface.

First, let's check the array version before doing the upgrade or uploading the files.

After connecting to the array group, we display all array members and then select the one we will work on (in this case xxx-xxx-03):

xxx-xxx-grp0> member show
xxx-xxx-grp0> member select xxx-xxx-03
xxx-xxx-grp0(member_xxx-xxx-03)> show




As we can see in the image above, we have both arrays listed and the full information for the one we chose.
After you double-check the versions and check the Dell matrix for firmware updates, you can start uploading the files to the array.

As with the HD firmware, we need to upload the files to the array (again using WinSCP for this task).

After the file is uploaded (in our case kit_V8.1.1-R417753_666488616.tgz), we just type "update" in the console.

You will get information about the firmware versions; just type "y" to continue.



After this, the manual update will start.

You can again check the progress (%) of the firmware update in the GUI.



When it finishes, you will see these messages in the console:

## Update completed successfully.
## The firmware update will take effect when the array is restarted. 

## To restart the array, enter the restart command at the CLI prompt.

In this case we will restart the Array right away.

xxx-xxx-grp0> restart

There is new firmware in the update area.

As part of applying the new firmware, the active and secondary control
modules will switch roles.  Therefore, the current active control module
will become the secondary after the firmware is applied.

Would you like to load the firmware now? (yes/no) [no] yes


The process will take a while, since the group interface will fail over to the other controller.

17:24:12 Verifying new firmware integrity.
17:25:40
17:25:40 PLEASE NOTE:
17:25:40 The restart process may take up to approximately 10 minutes.
17:25:40 During the restart process, do not restart or power down the array.
17:25:40
17:25:40 Start update of flash memory on secondary controller.
17:25:43 Setting cache to write through
17:25:52 Update of flash memory on secondary controller completed.
17:25:52 Restarting secondary controller.
17:26:12 Waiting for secondary controller to restart...
###... some line for 10/15 times until the controller is restarted.
 17:27:35 Waiting for secondary controller to restart...
17:27:41 Secondary controller successfully restarted.  Start secondary control module synchronization.
17:27:41 Waiting for secondary control module synchronization...
17:28:01 Waiting for secondary control module database synchronization...
17:28:33 iSCSI PR PPool synchronization ..
17:28:33 Waiting for iSCSI PR PPool synchronization...
17:28:34 Secondary controller successfully updated.  Transition current active controller to secondary controller.
17:28:36 Restarting active controller to complete the update.


After this, our Dell EqualLogic PS6000 is upgraded.

As you can see in this image, both arrays now have the updated firmware (meanwhile, I rebooted the second array).



Hope this article helps you update your Dell EqualLogic PS6000 (or other models, since the procedure is similar).

Note: Share this article if you think it is worth sharing.

Monday, March 14, 2016

Veeam upgrade issue: Warning 1327.Invalid Drive D:\

Today, when we tried to upgrade one of our Veeam v8 servers to v9, we got this issue and the upgrade failed.




Checking the upgrade logs (C:\ProgramData\Veeam\Setup\Temp\CatalogSetup.log), I saw that this is related to the VBRCatalog, which is/was pointing to a drive that no longer exists.

"MSI (s) (2C:9C) [22:04:17:969]: Note: 1: 2318 2: 
MSI (s) (2C:9C) [22:04:17:969]: Executing op: FolderRemove(Folder=D:\VBRCatalog,Foreign=0)
MSI (s) (2C:9C) [22:04:17:969]: Note: 1: 1327 2: D:\
Warning 1327.Invalid Drive: D:\"


So we need to fix the catalog issue before restarting the upgrade.

To fix this issue, we need to change the registry key where the VBRCatalog location is set. To do this, we perform the following tasks:

1. Stop the Veeam Services
2. Check the permissions that are set on the folder
3. Unshare the folder and move it to a new location
4. Now share the folder and reset the permissions
5. Then navigate to the following registry key:

    HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup Catalog

    Find the string value called CatalogPath, right-click it and choose Modify

    Under Value data, edit the field to point to the new location

Example: C:\VBRCatalog\ (be aware that you should choose a disk with enough space for the VBRCatalog to grow). In our case, I created a new folder F:\VBRCatalog and then changed the registry value to the path F:\VBRCatalog.


6. Restart the services mentioned above (the Veeam Backup Service, but also the Veeam Backup Catalog Data Service, which is not in a started state because of this issue). A quick PowerShell sketch of steps 1, 5 and 6 is shown below.
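As a sketch only (not part of the official Veeam steps), steps 1, 5 and 6 can also be done from an elevated PowerShell prompt. The registry path and value name are the ones mentioned above; F:\VBRCatalog is the example target folder used in this case, so adjust it to your own drive:

# stop the Veeam services before moving the catalog folder
Get-Service -DisplayName "Veeam*" | Stop-Service
# point the catalog to its new location
Set-ItemProperty -Path "HKLM:\SOFTWARE\Veeam\Veeam Backup Catalog" -Name CatalogPath -Value "F:\VBRCatalog"
# start the Veeam services again
Get-Service -DisplayName "Veeam*" | Start-Service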

You can check Veeam KB1453 here on how to change the VBRCatalog.

Next, try the upgrade again; it should now get past this issue.

Note: Share this article if you think it is worth sharing.

Sunday, March 13, 2016

vCenter 6.0 vMotion Issue: PBM error occurred during PreMigrateCheckCallback



Today we had an issue where it was not possible to do any storage tasks from one of our vCenters.

When adding a virtual disk to a VM we got errors, and trying to Storage vMotion VMs (migrate VMs between datastores) gave this error:
"Relocate virtual machine "VM Name" A general system error occurred: PBM error occurred during PreMigrate-CheckCallback: No connection could be made because the target machine actively refused it Invoking prechecks."

Troubleshooting the vCenter logs (C:\ProgramData\VMware\vCenterServer\logs\vmware-vpx\vpxd.log), I found these entries regarding the issue:

"Date - Time: warning vpxd[06004] [Originator@6876 sub=Default] Failed to connect socket; , >, e: system:10061(No connection could be made because the target machine actively refused it)
Date - Time: warning vpxd[06004] [Originator@6876 sub=Default] Failed to connect socket; , >, e: system:10061(No connection could be made because the target machine actively refused it)
Date - Time: error vpxd[06004] [Originator@6876 sub=HttpConnectionPool-000555] [ConnectComplete] Connect failed to ; cnx: (null), error: class Vmacore::SystemException(No connection could be made because the target machine actively refused it)
Date - Time: info vpxd[05856] [Originator@6876 sub=pbm opID=CF6EDCDB-00000456-a8-9b] PBMCallback: PbmFunctionTracer::~PbmFunctionTracer: Leaving PbmServiceAccess::Connect
Date - Time: error vpxd[05856] [Originator@6876 sub=pbm opID=CF6EDCDB-00000456-a8-9b] [Vpxd::StorageCommon::ServiceClientAdapter::ConnectLocked] Failed to login to service: class Vmacore::SystemException(No connection could be made because the target machine actively refused it)
Date - Time: info vpxd[05856] [Originator@6876 sub=pbm opID=CF6EDCDB-00000456-a8-9b] PBMCallback: PbmFunctionTracer::~PbmFunctionTracer: Leaving PbmService::GetPbmProfileManager
Date - Time: error vpxd[05856] [Originator@6876 sub=pbm opID=CF6EDCDB-00000456-a8-9b] PBMCallback: PbmService::HandleInternalFaultMessage: PBM error occurred during PreMigrateCheckCallback: No connection could be made because the target machine actively refused it
Date - Time: info vpxd[06336] [Originator@6876 sub=vpxLro opID=opId-afe130be-2324-491c-ac81-ca5141e17245-c1-d0] [VpxLRO] -- BEGIN task-internal-35372 -- ServiceInstance -- vim.ServiceInstance.GetServerClock -- 5252c7a2-71fc-247e-a663-e364a80da2c5(52c55130-d25c-8bf1-6457-dcf56b771aa4)
Date - Time: info vpxd[06336] [Originator@6876 sub=vpxLro opID=opId-afe130be-2324-491c-ac81-ca5141e17245-c1-d0] [VpxLRO] -- FINISH task-internal-35372
Date - Time: info vpxd[05856] [Originator@6876 sub=pbm opID=CF6EDCDB-00000456-a8-9b] PBMCallback: PbmFunctionTracer::~PbmFunctionTracer: Leaving PbmCallBack::PreMigrateCheckCallback
Date - Time: error vpxd[05856] [Originator@6876 sub=VmProv opID=CF6EDCDB-00000456-a8-9b] [CallbackManager] Got exception while invoking precheck on PbmCallBack: vmodl.fault.SystemError"

Checking the logs and troubleshooting the issue, I found that this is related to the vCenter service "Profile-Driven Storage".

The next step is to check this service on the vCenter server (Windows version):



As we can see, for some reason the service is stopped. Not only that one, but others too (not related to this issue). After a reboot to apply Windows updates, some of the services did not start (something that happens quite often with the Windows version of vCenter).

Just start the service, then try the Storage vMotion and other storage tasks again, and you will not see the issue anymore.
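If you prefer to do this from an elevated PowerShell prompt instead of the Services console, a minimal sketch would be the following (assuming the service display name contains "Profile-Driven Storage", as it appears in the services list above):

# find the stopped service by its display name and start it
Get-Service -DisplayName "*Profile-Driven Storage*" | Start-Service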

You can find the VMware KB 2118551 related to this issue here.

Hope this helps you fix this issue.

Note: Share this article if you think it is worth sharing.

Wednesday, March 2, 2016

Veeam Vanguard 2016 is now Open

Veeam Vanguard is a program that awards individuals for their contributions around Veeam products: participation in forums, webinars, blog posts, white papers, etc.

Official announcement here


This is the official page: Veeam Vanguard. Nominations for Veeam Vanguard 2016 will be open until the 30th of March. I have applied myself this year (since it is my first time and I do not have much work to show, I do not have high hopes).

In my free time (which I admit is not much), I try to contribute to the Veeam community. Having been a Veeam ProPartner for almost 8 years and working with Veeam since 2005, I have now decided to be more involved in sharing some Veeam knowledge and writing articles about my experience.

These are the articles I have on my roadmap to write here on this blog:
  • Veeam Backup & Replication integration with NetApp Storage (how to)
  • Veeam Backup & Replication for disaster recovery - Replication over the WAN (how to)
I plan to write both as soon as I have some time.

Meanwhile, I will nominate some really good Veeam professionals that I know share a lot of their knowledge with the community, like Vladan and Anthony Spiteri, just to mention two whose work and excellent articles I follow.

If you know anyone you think is worthy of being a Veeam Vanguard, please nominate them HERE; if they contribute to the Veeam community, they deserve to be nominated.

ESXi 6.0 Bug: Deprecated VMFS volume warning reported by ESXi hosts (adding iSCSI LUN)

Today we needed to add a new iSCSI LUN to one of our vCenter 6.0 environments and found a bug in ESXi 6.0.

After creating the LUN on the NetApp, we presented it to the hosts.
Adding the iSCSI datastore to the first host, everything was OK.

But when the rest of the hosts recognized the new datastore, we got a warning on all of them:

"Deprecated VMFS volume(s) found on the host. Please consider upgrading volume(s) to the latest version"








Troubleshooting the host log in /var/log/hostd.log, I found this:

warning hostd[2EFC2B70] [Originator@6876 sub=Hostsvc.DatastoreSystem opID=7878B682-0000041D-2b-bb-41-25e0 user=vpxuser] VMFS volume [/vmfs/volumes/56d6cfc8-c7b45bfc-0cd5-984be167ca4c] of version [0] is not supported.
warning hostd[2EFC2B70] [Originator@6876 sub=Hostsvc.DatastoreSystem opID=7878B682-0000041D-2b-bb-41-25e0 user=vpxuser] UpdateConfigIssues: Deprecated VMFS filesystems detected. These volumes should be upgraded to the latest version 

It seems that ESXi 6.0 has a bug when adding an iSCSI LUN to a host: while the volume is mounting (or still unmounted), the host raises this warning because the version of the filesystem is not known during the initial detection, so it cannot be matched in that initial state (while the LUN is being mounted).
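To confirm that the new datastore is in fact a current VMFS 5 volume (and that the warning is only cosmetic), you can list the filesystems on the host and check the Type column:

# esxcli storage filesystem list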


There is no fix from VMware for this issue yet.

The KB about this bug is here: VMware KB 2109735

Since there is no fix for the issue at the moment, the workaround is to restart the management agents on the hosts that have it. This will clear the warning message.

To restart the management agents, we can go through the Direct Console User Interface (DCUI) and choose the Restart Management Agents option.

Note: This option can disconnect your ESXi host temporarily from vCenter.

Or we can just log in to the ESXi host via SSH (my preferred option) and restart them from the console.

Just use this:

/etc/init.d/hostd restart
/etc/init.d/vpxa restart

Do this on all affected hosts (those showing the warning message) and the warning will be cleared.

Hope this helps you work around this bug.

Note: Share this article if you think it is worth sharing.

Saturday, February 27, 2016

ESXi: After ESXi 5.5 update - No coredump target has been configured. Host core dumps cannot be saved

Today, after an update (just applying the latest bug fix and security patches), our ESXi 5.5 farm had a strange issue.

After applying the VMware updates to 8 ESXi 5.5 hosts (HP DL360 Gen9) and rebooting, I had the same issue on all of them:

Warning: No coredump target has been configured. Host core dumps cannot be saved


First I tried to investigate the updates and check the VMware KB information, and did not see anything that could cause this issue. Either it is a hidden issue, or the issue already existed on these hosts and had never been discovered.

So we tried to find where the issue was.

Start by checking the partitions and looking for the coredump partition state.
List all coredump partitions: # esxcli system coredump partition list



It seems there is no coredump partition configured.
Then check the devices on the host: # esxcfg-scsidevs -c



I can see the SD flash card where ESXi is installed.

So let's check all partitions on this device using partedUtil and see if there is a partition for the coredump (called vmkDiagnostic).

List all partitions: # partedUtil getptbl /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0



There is partition #7 (normally the vmkDiagnostic/coredump partition) but also a #9.
So we need to point the coredump to the proper partition, enable it, and delete the other one.

This type of issue (two coredump partitions) normally happens on ESXi hosts that were upgraded (these hosts were upgraded from 5.0 to 5.5 some time ago).

First, bind the coredump to the right partition: # esxcli system coredump partition set --partition='mpx.vmhba32:C0:T0:L0:7'
Then enable the coredump partition: # esxcli system coredump partition set --enable true
Then list the coredump partitions again to check that it is now set: # esxcli system coredump partition list



As we can see, partition #7 is now enabled and active for coredump, but we still have a second one. So let's delete partition #9.

To delete the partition we use partedUtil again: # partedUtil delete /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0 9

After deleting the partition, let's check that it has been removed from the partition table and that there is only one active coredump partition.

# partedUtil getptbl /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0



All partitions are correct and now only one coredump partition is visible.

After this, the coredump is active and enabled again, and we no longer see the warning on the host (in the vSphere Client).
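As a final check, we can also ask ESXi which partition is configured and active for coredumps (both should now point to partition 7):

# esxcli system coredump partition get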

Hope this helps you correct your coredump issues.

Monday, February 22, 2016

Veeam: How to enable the Direct NFS Access backup feature

In this article we will configure our Veeam Backup infrastructure to use the Direct NFS Access transport mechanism. Since in this infrastructure we use iSCSI (for the backup repository) and NFS (for the VMware VM storage datastores), we need to make some changes to enable Direct NFS Access.

First I will share a design of a Veeam Backup infrastructure without Direct NFS Access.



Note: The Direct NFS Access transport mechanism is only available in Veeam v9.

In the diagram above, I try to show the Veeam backup flow for iSCSI vs. NFS.

In this case we did not have the proper configuration for the Direct NFS Access transport mechanism to work.

Here we have a Veeam Backup server and a Veeam backup proxy.

Current Veeam Backup infrastructure:

192.100.27.x is the iSCSI subnet - vLAN 56
192.128.23.x is the NFS subnet - vLAN 55

192.168.6.x (vLAN 25) is the management subnet, used by the Veeam Backup server, vCenter and most of our ESXi hosts. But we still have some ESXi hosts that use our old management subnet, 192.168.68.x.
This is why we built a new proxy on subnet 192.68.68.x (vLAN 29).

On the Veeam server (a physical server) I have:

1 interface (actually 2 with NIC teaming) on 192.168.6.x for the management network.
1 interface (also 2 with NIC teaming) on 192.168.27.x using the iSCSI initiator for the iSCSI connections.

Veeam proxy (virtual machine):
1 interface on 192.168.68.x (vLAN 29) for the management network.

This was the initial configuration, in which the Veeam Backup server and proxy never used the Direct NFS Access transport mechanism. All backups were always running with [nbd] (network block device, or network mode) or [hotadd] (virtual appliance mode).

Future Veeam Backup infrastructure - we need to add the following:

On the Veeam server (physical server):
1 interface (actually 2 with NIC teaming) on 192.168.6.x for the management network.
1 interface (also 2 with NIC teaming) on 192.168.27.x using the iSCSI initiator for the iSCSI connections.
Add: 1 interface on 192.168.23.x for NFS connections.

Veeam proxy (virtual machine):
1 interface on 192.168.68.x for the management network.
Add: 1 interface on 192.100.27.x using the iSCSI initiator for the iSCSI connections.
Add: 1 interface on 192.168.23.x for NFS connections.

All new interfaces were properly configured on the right vLANs.

Note: All NFS interface subnets or IPs (from the Veeam server and Veeam proxy) need read and write permissions on the storage NFS share, so that Veeam and the storage can communicate through NFS.
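A quick way to confirm that a Windows proxy can actually reach the storage over its new NFS interface is to test TCP port 2049 (the standard NFS port) from the proxy itself. This is just a sketch: the IP below is a placeholder for your storage NFS address, and Test-NetConnection requires Windows 8/Server 2012 or later:

# run from the Veeam server or proxy (PowerShell)
Test-NetConnection -ComputerName <storage-NFS-IP> -Port 2049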

After these changes, we need to set up Veeam so that it can use the proper transport mode.

Main configurations that we should check and configure:
  • Make sure you are on Veeam v9
  • Make sure each Veeam VMware backup proxy has communication on the NFS network
  • Ensure the NFS storage allows those proxies read and write permissions on the NFS share
  • Set the proxies to use the “Automatic selection” transport mode
Limitations for the Direct NFS Access Mode

1. Veeam Backup & Replication cannot parse delta disks in the Direct NFS access mode. For this reason, the Direct NFS access mode has the following limitations:

  • The Direct NFS access mode cannot be used for VMs that have at least one snapshot.
  • Veeam Backup & Replication uses the Direct NFS transport mode to read and write VM data only during the first session of the replication job. During subsequent replication job sessions, the VM replica will already have one or more snapshots. For this reason, Veeam Backup & Replication will use another transport mode to write VM data to the datastore on the target side. The source side proxy will keep reading VM data from the source datastore in the Direct NFS transport mode.

2. If you enable the Enable VMware tools quiescence option in the job settings, Veeam Backup & Replication will not use the Direct NFS transport mode to process running Microsoft Windows VMs that have VMware Tools installed.

3. If a VM has some disks that cannot be processed in the Direct NFS access mode, Veeam Backup & Replication processes these VM disks in the Network transport mode.


The Direct NFS Access feature is implemented automatically if the “Direct storage access” or “Automatic selection” transport mode is selected from a VMware Backup Proxy inside of the Veeam Backup & Replication user interface.

First we need to set up our proxies (the default Veeam proxy and the newly created Veeam proxy) to run with the proper mode.

Go to the proxies, right-click the proxy and choose Properties:


After choosing the proxy, let's check the transport mode and choose the right one.



We should choose Automatic selection. The proxy will then choose the right transport mode to perform the backup. If Direct NFS Access is available, it will be used for the VMs that are on NFS volumes.

Option 3: Even though failover to network mode is enabled by default, you should check that it is enabled. This will prevent backup jobs from failing: if a transport mode is not available, Veeam will fall back to network mode (slower performance).

We should perform all the above tasks for all our proxies (even the default one, called VMware Backup Proxy).

After changing the transport mode in the proxies section, we now need to change which proxy the backup jobs will use.



In the job configuration we should also choose Automatic selection (Veeam will pick the best proxy for the backup and for Direct NFS).

If not all your proxies have access to the NFS storage, then you should follow options 1-1 and 1-2 in the image above. Choose a proxy that has connectivity and permissions to the NFS storage, and that job will always run with that proxy.

Direct NFS Access will be enabled for all types of jobs. This example uses NetApp storage: Enterprise Plus installations can use Backup from Storage Snapshots for NetApp storage, while all other editions can use Direct NFS Access.

Note: In the next article we will talk about Veeam Backup from Storage Snapshots for NetApp storage.

We can check whether a backup is running with Direct NFS Access in the job log.


These are the transport modes that we can see in your backup job log:

[nfs] - Direct NFS Access mode.
[san] - iSCSI and Fibre Channel mode (which does not work with virtual disks on NFS storage).
[nbd] - Network block device mode (or just network mode, the same option we choose for the failover).
[hotadd] - Virtual appliance mode.

Note: The transport mode is shown in the job log for each virtual disk backup.

This is the final design for the Veeam Direct NFS Access transport flow.




Performance improvements that we will have with this new configuration:

The Direct NFS Access will deliver a significant improvement in terms of Read and Write I/O for the relevant job types, so definitely consider using it for your jobs. This improvement will help in a number of areas:
  • Significantly reduce the amount of time a VMware snapshot is open during a backup or replication job (especially the first run of a job or an active full backup)
  • Reduce the amount of time a job requires for extra steps (in particular the sequence of events for HotAdd) to mount and dismount the proxy disks
  • Increase I/O throughput on all job types
FINAL NOTE: I would like to thank Rick Vanover from Veeam for all the help and support implementing Direct NFS Access, and also for some of the ideas for this article (I even used some of his explanations). For all that, thanks again, Rick.

Hope this helps you improve your Veeam Backup infrastructure when using NFS.


Saturday, January 16, 2016

How to install and enable the NetApp® NFS VAAI Plug-in in VMware

 

Let's start with a short explanation of what VAAI (vStorage APIs for Array Integration) is and what the benefits of using it are.

VAAI is an API framework in VMware that enables certain storage tasks to be offloaded from the ESXi host to the physical array. It was first introduced in ESXi 4.1, but only from 5.x on does it support hardware acceleration with NAS storage devices.
It allows certain I/O operations, such as thin provisioning, to be offloaded from the VMware virtualization host to the storage array, which reduces the workload on the virtual server hardware.

You can read more about VAAI in VMware HERE

Our first tasks will be on the NetApp side, so that we can prepare the system to use VAAI with VMware.

Enable NFS vStorage (VAAI) support on the NetApp

Log in to your NetApp console:

First, check if nfs.vstorage is enabled. If not, enable it.

Commands:
# options nfs.vstorage.enable
# vfiler run vfiler_name options nfs.vstorage.enable


In our case (7-Mode CLI) it was disabled/off, so we will enable it.

In 7-Mode CLI:
# options nfs.vstorage.enable on
In Clustered Data ONTAP CLI:
# vserver nfs modify -vserver vserver_name -vstorage enabled
         Note: vserver_name is the name of your SVM

Also activate it for the vFiler (7-Mode CLI):
# vfiler run vfiler_name options nfs.vstorage.enable on
         Note: vfiler_name is the name of your vFiler. In our case it is vfiler0



NetApp recommendations:


If you are using NetApp clustered Data ONTAP, you need to modify the export policy rules for the ESXi servers that use VAAI.

Enter the following command to set nfs as the access protocol for each export policy rule for ESXi servers that use VAAI:
# vserver export-policy rule modify -vserver vs1 -policyname mypolicy -ruleindex 1 -protocol nfs
In the above example:
  • vs1 is the name of the SVM.
  • mypolicy is the name of the export policy.
  • 1 is the index number of the rule.
  • nfs includes the NFSv3 and NFSv4 protocols.
After we have finished our work on the NetApp side, we can start the VMware tasks. But first, let's download the VAAI files from the NetApp support site.

Note: You need a support account to download the files from the NetApp support site HERE.

In the next image we see the download options for both ESXi versions (we will use the one for 5.5).


Files are:
  1. ESXi 6.0
    NetAppNasPlugin.v22.zip - offline bundle
    NetAppNasPlugin.v22.vib - online bundle
  2. ESXi 5.x
    NetAppNasPlugin.v21.zip - offline bundle
    NetAppNasPlugin.v21.vib - online bundle
Now that we have our files, let's start the tasks on the VMware side.

First, let's check if VAAI is enabled on each ESXi host.

Connect to your ESXi shell console

Commands:
# esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove
# esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit
If VAAI is enabled, we should get 1 for each one (on ESXi 5.0 and later, VAAI is enabled by default).



If we get 0, then VAAI is disabled (not the case above). Let's enable it:

Commands:
# esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedInit
# esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedMove



We can also enable and check this in the vSphere Client.

vSphere Client
  • Using the vSphere client, log in to the vCenter server.
  • For each ESXi server, click the server name.
  • In the Software section of the Configuration tab, click Advanced Settings.
  • Select DataMover, and ensure that the DataMover.HardwareAcceleratedMove and DataMover.HardwareAcceleratedInit parameters are set to 1.



Also check that VMFS3.HardwareAcceleratedLocking is enabled.



After we have confirmed that hardware acceleration is enabled, we can install the VAAI plug-in.

Install NetApp VAAI plugin VIB in ESXi

Copy the files that we downloaded from NetApp support to the ESXi host (we can use a tool like WinSCP to do this).

Since I did these tests in my ESXi 5.5 lab, I downloaded the offline bundle NetAppNasPlugin.v21.zip.

In the ESXi shell console:
  • Let's check our offline bundle
   # esxcli software sources vib list -d /VIB/NetAppNasPlugin.v21.zip
After we confirm that the file is there and is OK, we will install it.
  • Install the offline bundle
   # esxcli software vib install -n NetAppNasPlugin -d /VIB/NetAppNasPlugin.v21.zip
In the next image we see that VAAI is installed:



If you want to install the online bundle (VIB file) instead, you need to run the following command.
Install the online bundle
    # esxcli software vib install -v /VIB/NetAppNasPlugin.v21.vib
Note: Always use the full path (for both the online and offline bundle) or the install will not work.

After the plug-in is installed, we need to reboot the ESXi host.
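If you want to do the reboot from the shell as well, a minimal sketch would be the following (assuming the VMs on this host have already been migrated off or powered down; the reason text is just an example):

# esxcli system maintenanceMode set --enable true
# esxcli system shutdown reboot --reason "NetApp NAS VAAI plug-in install"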

After the reboot, check that the NetApp plug-in is installed and enabled in ESXi:
# esxcli software vib list | more
You can see in the next image the NetApp VAAI plug-in installed and enabled (first line).



Now that we know everything is enabled, our volumes/datastores should show Hardware Acceleration as supported.



After checking that our NetApp VAAI plug-in is installed, let's mount a volume to use VAAI (we will do it manually, but we can also do it in the vSphere Client or Web Client).

In case you want to add new volumes and also check them from the shell console:
# esxcfg-nas -a NFS-Simulator-01 -o 164.48.131.50 -s /vol/LUN_NetappTest001_vol


Let's check if the volume is supported and enabled for NetApp VAAI:
# vmkfstools -Ph /vmfs/volumes/NFS-Simulator-01
As we can see in the next image, NAS VAAI Supported is YES.



We can use vmkfstools -Ph to check whether all our volumes are VAAI supported.
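If you want to quickly review every NFS datastore on the host, you can list them and then run the same check per volume. This is only a sketch: the NFS-* name pattern below is an example, so adjust it to your own datastore names:

# esxcfg-nas -l
# for ds in /vmfs/volumes/NFS-*; do vmkfstools -Ph "$ds" | grep -i "NAS VAAI"; done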

After you finish, for testing purposes you can use the NFS plug-in space reservation and copy offload features to make routine tasks more efficient:
  •  Create virtual machines in the thick virtual machine disk (VMDK) format on NetApp traditional or FlexVol volumes and reserve space for the file at the time you create it.
  •  Clone existing virtual machines within or across NetApp volumes:
           1. Datastores that are volumes on the same SVM on the same node.
           2. Datastores that are volumes on the same SVM on different nodes.
           3. Datastores that are volumes on the same 7-Mode system or vFiler unit.
  •  Perform cloning operations that finish faster than non-VAAI clone operations because they do not need to go through the ESXi host.
Hope this article will help you install and enable VAAI in your VMware environment.