Saturday, February 27, 2016

ESXi: After ESXi 5.5 update - No coredump target has been configured. Host core dumps cannot be saved

Today, after an update (just applying the latest bug fix and security patches), our ESXi 5.5 farm had a strange issue.

After applying the VMware updates to 8 ESXi 5.5 HP DL360 Gen9 hosts and rebooting, I had the same issue in all of them:

Warning: No coredump target has been configured. Host core dumps cannot be saved.


First I investigated the updates and checked the VMware KB information, but did not find anything that could cause this issue, whether it was a hidden problem introduced by the patches or an issue that already existed on these hosts and had never been discovered.

So we tried to find where the issue was.

We started by checking the partitions and looking at the coredump partition state.
List all coredump partitions: # esxcli system coredump partition list
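In addition to the list command, esxcli system coredump partition get shows which diagnostic partition is configured and which one is active. When no coredump target is set, both fields come back empty, roughly like this (illustrative output, not an exact capture from these hosts):

# esxcli system coredump partition get
   Active:
   Configured: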



It seems no coredump partition is configured.
Then check the devices on the host: # esxcfg-scsidevs -c
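The screenshot is not reproduced here, but the line to look for is the SD card device used in the commands below, mpx.vmhba32:C0:T0:L0. On these HP hosts it shows up under the vmhba32 controller, with a line roughly like this (size and display name are illustrative and will differ):

mpx.vmhba32:C0:T0:L0  Direct-Access  /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0  7600MB  NMP  Local USB Direct-Access (mpx.vmhba32:C0:T0:L0)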



I can see the SD flash card where ESXi is installed.

So let's check all partitions on this device with partedUtil and see if there is any partition for the coredump (called vmkDiagnostic).

List all partitions: # partedUtil getptbl /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0
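The screenshot is not reproduced here, but on a typical ESXi 5.5 SD-card install the partition table looks roughly like this (geometry and sector numbers are illustrative; what matters is the GUID 9D27538040AD11DBBF97000C2911D1B8, which identifies vmkDiagnostic/coredump partitions):

gpt
973 255 63 15633408
1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128
5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
9 1843200 7086079 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0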



There is partition #7 (normally the vmkDiagnostic/coredump partition), but also partition #9.
So we need to point the coredump to the proper partition, enable it, and delete the other one.

This type of issue (two coredump partitions) normally happens on ESXi hosts that were upgraded (these hosts were upgraded from 5.0 to 5.5 some time ago).

First bind the coredump to the right partition: # esxcli system coredump partition set --partition='mpx.vmhba32:C0:T0:L0:7'
Then enable the coredump partition: # esxcli system coredump partition set --enable true
Then list the coredump partitions again to check that it is now set: # esxcli system coredump partition list
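The screenshots are not reproduced here, but after these two commands the list output should report partition #7 as both active and configured, roughly like this (illustrative):

Name                    Path                                        Active  Configured
----------------------  ------------------------------------------  ------  ----------
mpx.vmhba32:C0:T0:L0:7  /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0:7  true    true
mpx.vmhba32:C0:T0:L0:9  /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0:9  false   false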



As we can see, partition #7 is now enabled and active for the coredump, but we still have a second one. So let's just delete partition #9.

To delete the partition we use partedUtil again: # partedUtil delete /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0 9

After we delete the partition, let's check that it has been removed from the partition table and that only one coredump partition remains.

# partedUtil getptbl /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0
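After the delete, the same command should show only the original vmkDiagnostic partition #7 (again, illustrative output):

gpt
973 255 63 15633408
1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128
5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0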



All partitions are correct and now only one coredump partition is visible.

After this the coredump is active and enabled again, and we no longer see the warning on the host (in the vSphere Client).

Hope this helps you correct your coredump issues.

Monday, February 22, 2016

Veeam: How to enable the Direct NFS Access backup feature

In this article we will configure our Veeam Backup Infrastructure to use the Direct NFS Access transport mechanism. Since this infrastructure uses iSCSI (for the Backup Repository) and NFS (for the VMware VM storage datastores), we need to make some changes to enable Direct NFS Access.

First, here is a design of a Veeam Backup Infrastructure without Direct NFS Access backup.



Note: The Direct NFS Access backup transport mechanism is only available in Veeam v9.

The diagram above shows the Veeam Backup flow for iSCSI vs. NFS.

In this case we did not have the proper configuration for the Direct NFS Access transport mechanism to work.

Here we have a Veeam Backup Server and a Veeam Backup Proxy.

Current Veeam Backup Infrastructure:

192.168.27.x is the iSCSI subnet (vLAN 56)
192.168.23.x is the NFS subnet (vLAN 55)

192.168.6.x (vLAN 25) is the Management subnet, used by the Veeam Backup Server, vCenter and most of our ESXi hosts. But we still have some ESXi hosts that use our old management subnet, 192.168.68.x.
This is why we built a new proxy on that subnet, 192.168.68.x (vLAN 29).

The Veeam Backup Server (physical server) has:

1 interface (2 with NIC Teaming) on 192.168.6.x for the Management network.
1 interface (also 2 with NIC Teaming) on 192.168.27.x using the iSCSI initiator for the iSCSI connections.

The Veeam Proxy (Virtual Machine) has:
1 interface on 192.168.68.x (vLAN 29) for the Management network.

This was the initial configuration, in which the Veeam Backup Server and Proxy could never use the Direct NFS Access transport mechanism. All backups were always running with [nbd] (network block device, or network, mode) or [hotadd] (virtual appliance mode).

Future Veeam Backup Infrastructure: we need to add the following.

On my Veeam Backup Server (physical server):
1 interface (2 with NIC Teaming) on 192.168.6.x for the Management network.
1 interface (also 2 with NIC Teaming) on 192.168.27.x using the iSCSI initiator for the iSCSI connections.
Add: 1 interface on 192.168.23.x for the NFS connections.

On the Veeam Proxy (Virtual Machine):
1 interface on 192.168.68.x for the Management network.
Add: 1 interface on 192.168.27.x using the iSCSI initiator for the iSCSI connections.
Add: 1 interface on 192.168.23.x for the NFS connections.

All new interfaces were properly configured on the right vLANs.

Note: All NFS interface subnets, or IPs (from the Veeam Server and the Veeam Proxy), need read and write permissions on the Storage NFS share, so that Veeam and the Storage can communicate through NFS.
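As an example, when the NFS datastores live on NetApp storage (as in this environment, see the note about Backup from Storage Snapshots further down), that access is granted on the export policy of the exported volume. This is only a sketch for clustered Data ONTAP, and the SVM name, policy name and subnet below are placeholders to replace with your own values:

::> vserver export-policy rule create -vserver svm_nfs01 -policyname vmware_nfs -clientmatch 192.168.23.0/24 -protocol nfs -rorule sys -rwrule sys -superuser sys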

After these changes we need to set up Veeam so that it can use the proper transport mode.

Main configurations that we should check and configure:
  • Make sure you are on Veeam v9
  • Make sure each Veeam VMware Backup Proxy can communicate on the NFS network
  • Ensure the NFS Storage allows those proxies read and write permissions on the NFS share
  • Set the proxies to use the “Automatic selection” transport mode
Limitations for the Direct NFS Access Mode

1. Veeam Backup & Replication cannot parse delta disks in the Direct NFS access mode. For this reason, the Direct NFS access mode has the following limitations:

  • The Direct NFS access mode cannot be used for VMs that have at least one snapshot.
  • Veeam Backup & Replication uses the Direct NFS transport mode to read and write VM data only during the first session of the replication job. During subsequent replication job sessions, the VM replica will already have one or more snapshots. For this reason, Veeam Backup & Replication will use another transport mode to write VM data to the datastore on the target side. The source side proxy will keep reading VM data from the source datastore in the Direct NFS transport mode.

2. If you enable the Enable VMware tools quiescence option in the job settings, Veeam Backup & Replication will not use the Direct NFS transport mode to process running Microsoft Windows VMs that have VMware Tools installed.

3. If a VM has some disks that cannot be processed in the Direct NFS access mode, Veeam Backup & Replication processes these VM disks in the Network transport mode.


The Direct NFS Access feature is used automatically when the “Direct storage access” or “Automatic selection” transport mode is selected for a VMware Backup Proxy inside the Veeam Backup & Replication user interface.

First we need to set up our proxies (the default Veeam proxy and the newly created Veeam proxy) to run with the proper mode.

Go to the Backup Proxies section, right-click the proxy and choose Properties:


After choosing the proxy, let's check the transport mode and choose the right one.



We should choose Automatic selection. The proxy will then choose the right transport mode to perform the backup. If Direct NFS Access is possible, it will be used for the VMs that are on NFS share volumes.

Option 3: Even though failover to Network mode is enabled by default, you should check that it is enabled. This prevents backup jobs from failing: if a transport mode is not available, Veeam will fall back to Network mode (slower performance).

We should perform all the above tasks for all our proxies (even the default one, called VMware Backup Proxy).
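For reference, the same transport mode can also be set from PowerShell on the backup server. This is only a sketch, assuming the Veeam v9 PowerShell snap-in is installed and that the cmdlets below (Get-VBRViProxy / Set-VBRViProxy with -TransportMode) behave the same in your build:

# Load the Veeam Backup & Replication v9 snap-in
Add-PSSnapin VeeamPSSnapin

# Set every VMware backup proxy to automatic transport selection,
# so Direct NFS Access is picked whenever the proxy can reach the NFS network
foreach ($proxy in Get-VBRViProxy) {
    Set-VBRViProxy -Proxy $proxy -TransportMode Auto
}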

After we change the transport mode in the Backup Proxies section, we now need to change which proxy each Backup Job will use.



In the job configuration we should also choose Automatic selection (Veeam will choose the best proxy for the backup and for Direct NFS).

If not all of your proxies have access to the NFS Storage, then you should follow options 1-1 and 1-2 in the image above: choose the proxy that has connectivity and permissions on the NFS Storage, and the job will always run with that proxy.

Direct NFS Access will be enabled for all types of jobs. This example uses NetApp storage: Enterprise Plus installations can use Backup from Storage Snapshots for NetApp storage, while all other editions can use Direct NFS Access.

Note: In the next article we will talk about Veeam Backup from Storage Snapshots for NetApp storage.

We can check whether a backup is running with Direct NFS Access in the job log.


These are the transport modes that we can see in the backup job log:

[nfs] - Direct NFS Access mode.
[san] - iSCSI and Fibre Channel mode (which does not work with virtual disks on NFS storage).
[nbd] - Network Block Device mode (or just network mode, the same option we chose for the failover).
[hotadd] - Virtual Appliance mode.

Note: These transport modes are shown in the job log after each virtual disk backup.

This is the final design of the Veeam Direct NFS Access backup transport mechanism flow.




Performance improvements that we will have with this new configuration:

Direct NFS Access delivers a significant improvement in read and write I/O for the relevant job types, so definitely consider using it for your jobs. This improvement will help in a number of areas:
  • Significantly reduce the amount of time a VMware snapshot is open during a backup or replication job (especially the first run of a job or an active full backup)
  • Reduce the amount of time a job requires for extra steps (in particular the sequence of events for HotAdd) to mount and dismount the proxy disks
  • Increase I/O throughput on all job types
FINAL NOTE: I would like to thank Rick Vanover from Veeam for all the help and support in implementing Direct NFS Access, and also for some of the ideas for this article (I even used some of his explanations). For all that, thanks again Rick.

Hope this helps you improve your Veeam Backup Infrastructure when using NFS.