Tuesday, August 14, 2012

Open Solaris and Debian on NetWorker

Debian, Ubuntu and OpenSolaris are not in the NetWorker support matrix, so don't expect to get any support from EMC.

For Debian, so far I have not encountered any issues following the steps on the uWaterloo website to install the NetWorker client software. I did have an issue with IPv6 with NetWorker on Debian; backup works fine with IPv6 turned off.

I obtained the binaries from Powerlink.
  1. Install alien to convert the rpm packages:
     apt-get install alien
  2. Convert to Debian packages and install:
     alien --to-deb -i lgtoman-7.6.4-1.x86_64.rpm
     alien --to-deb -i --scripts lgtoclnt-7.6.4-1.x86_64.rpm (ignore the scripts warning)
  3. cd /etc/init.d
  4. Get the networker script: Legato NetWorker Startup Script
  5. chmod 755 networker
  6. update-rc.d networker defaults
  7. Update /nsr/res/servers, adding backup.cs.uwaterloo.ca (the /nsr directory isn't created until after networker is started the first time, and even then the servers file isn't created)
  8. ./networker start
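The steps above can be run as one shell session. A sketch, assuming the 7.6.4 x86_64 rpms from Powerlink are in the current directory and the startup script has already been saved as /etc/init.d/networker; the order differs slightly from the list because /nsr only exists after the first start:

```shell
# Install alien so the NetWorker rpms can be converted to debs
apt-get install alien

# Convert and install the man pages and the client package
alien --to-deb -i lgtoman-7.6.4-1.x86_64.rpm
alien --to-deb -i --scripts lgtoclnt-7.6.4-1.x86_64.rpm   # ignore the scripts warning

# Register the startup script
chmod 755 /etc/init.d/networker
update-rc.d networker defaults

# Start the client daemons once so /nsr gets created
/etc/init.d/networker start

# Allow the backup server to contact this client, then restart
echo "backup.cs.uwaterloo.ca" >> /nsr/res/servers
/etc/init.d/networker stop && /etc/init.d/networker start
```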

The uWaterloo website has a reminder for Ubuntu:
Ubuntu 10.10 now has a recover command that can undelete files, and it conflicts with the Legato install.

I found out from a blog that OpenSolaris will not work with NetWorker. I have never tested it myself, but the blog suggests that 7.4.5 seems to work.

========================================================================
NetWorker 8 supports Debian and Ubuntu. OpenSolaris is still not supported, though. Check the EMC Software Compatibility Guide for the latest information.

Sunday, July 29, 2012

Hotadd limitation

Other than the 1 TB vmdk limit with early versions of VDDK (fixed in VDDK 1.1.1), the following are the limitations and suggestions for HotAdd (similar info can be found in vendor support documentation).

1) HotAdd only works on SCSI disks, not IDE disks.
2) Disable automount on the proxy host.
3) To avoid the block size mismatch issue, the VMFS datastore containing the target VM and the VMFS datastore containing the VADP proxy must use the same or a larger block size. For example, if you back up a virtual disk on a datastore with 4 MB blocks, the proxy (vRanger) must also be on a datastore with 4 MB blocks or larger. It is strongly recommended to install the proxy on a VM that resides on a datastore with 8 MB blocks.
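For item 2, on a Windows proxy the automount behaviour is controlled through diskpart. A sketch of an interactive session (the scrub step cleans up entries for previously mounted volumes):

```
C:\> diskpart
DISKPART> automount disable
DISKPART> automount scrub
DISKPART> exit
```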

Other considerations apply not only to HotAdd but also to the other transport modes.
1) Change the workingDir to a datastore with a large enough block size if a VM has more than one vmdk and they sit on datastores with different block sizes. For example, if the C drive is on a datastore with a 1 MB block size and the D drive is on a datastore with a 2 MB block size, workingDir should point to the datastore with the 2 MB block size. See VMware KB: 1012384.
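Per VMware KB 1012384, the override is a one-line entry in the VM's .vmx file. A sketch, with the datastore and VM names made up:

```
workingDir = "/vmfs/volumes/datastore-2mb/myvm"
```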

P.S. Do not use HotAdd if there are dynamic disks in the VM. It is just a headache. See VMware KB: 2006279.

VADP application quiescing

Recently, I have been helping out a friend with a VADP deployment. He had questions about VADP quiescing, and I pointed him to an article on Backup Central. To make sure application quiescing is supported for Windows 2008 R2, make sure ESX 4.1 U1 is deployed (documented in VMware KB: 1031298). However, if a Windows 2008 or Windows 2008 R2 VM is deployed under ESX 4.0, make sure KB: 1028881 is followed.

For Exchange and SQL, I still suggest using the Exchange or SQL agent from the backup software to back them up. When a restore is required, for example with Exchange, you can then easily restore a folder within a mailbox.

Wednesday, July 11, 2012

nsrwatch on NetWorker 7.6.2 in Windows platform

Just upgraded to NetWorker 7.6.2. nsrwatch was only available for Unix and Linux in the past; now it has been added to the Windows platform. Since the Management Console GUI is slow, this really saves a lot of hassle.

Thursday, May 31, 2012

10 GbE performance troubleshooting 1

On a Windows 2003 server with a 1 GbE NIC acting as a dedicated storage node (DSN) writing to a Data Domain device, I see about 140 MB/s (dedup happens on the DSN, and NIC utilization is approx 15%).

Now, with 2 Windows 2008 R2 servers set up with 10 GbE, I copy a file from one Windows 2008 R2 server to the other. It utilizes at most 12% of the 10 GbE link. If I write another file at the same time, utilization rises to about 20%. I followed some of Cisco's suggestions to tweak the OS (the only thing I have not done is jumbo frames). However, I don't see much improvement.
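For reference, the Windows-side TCP knobs on 2008 R2 are exposed through netsh. A sketch of the kind of settings involved; the right values depend on the NIC and driver, so treat these as examples rather than a recommendation:

```
netsh int tcp show global

rem Receive-window autotuning (normal is the 2008 R2 default)
netsh int tcp set global autotuninglevel=normal

rem Receive-side scaling spreads receive processing across cores
netsh int tcp set global rss=enabled

rem TCP chimney offload; enable or disable per the NIC vendor's advice
netsh int tcp set global chimney=disabled
```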

After doing more research, it looks like it is an OS limitation. See this KB article from the HP site:

"There was still perceived TCP performance issue, but it turned out to be a matter of limitations in performance per thread in Windows Server 2003. For instance, if copying only one file from one server hosting a NC522SFP to another using a NC522SFP, only a small fraction of the theoretical 10-Gigabit performance was achieved. However, if multiple sessions were run simultaneously, similar performance gains were seen as with UDP. In other words, the bottleneck was not the NIC."

Hopefully, I will have more time to run tests and pin down the limitation over the summer. Not sure if Linux / Unix will do a better job.

Sunday, May 27, 2012

DataDomain support with NetWorker 7.6.2 and 7.6.3

With NW 7.6.2 Boost devices, the max sessions supported per device is increased. The default max sessions per device is 10, instead of 4 in NW 7.6.1. The dedup ratio is not affected, since dedup for a Boost device happens on the storage node (SN) side. I remember that if AFTD is used, you should set max sessions to 1 per device in 7.6.
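The max sessions attribute on a device can be checked and changed from nsradmin. A sketch of a session, with nw-server and aftd-1 as made-up server and device names:

```
# nsradmin -s nw-server
nsradmin> . type: NSR device; name: aftd-1
nsradmin> update max sessions: 1
nsradmin> quit
```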

With NW 7.6.3, multiplexing is supported for VTL in DataDomain. 

Keep in mind, as mentioned in the older article, NW 7.6.3 does not support DD OS 4.9. If you plan to upgrade from NetWorker 7.6.1 to 7.6.2, make sure you go through the Data Domain integration guide for NetWorker 7.6.2. My plan is to stage the savesets off the old DD devices after migrating to the new ones created in NW 7.6.2. This seems to be the simplest option.

Once migration is complete, a case will be opened with Data Domain to remove the old unused LSU. You cannot delete an old LSU from the NetWorker / Data Domain GUI. Do not delete an LSU that contains data.

Sunday, April 29, 2012

Never mix NDMP and non-NDMP backups on the same media in NetWorker

The following are the limitations of NDMP backup with NetWorker. You can find these in the admin guide, but most people tend to skip them. It will be a nightmare when you find out data cannot be recovered. Some of these limitations probably apply to other backup solutions as well.

For pure NDMP backup (backup to NDMP devices, that is, an FC tape drive presented to the NAS), make sure non-NDMP backups, including index and bootstrap, are sent to a different pool. The same applies to cloning. Media written by NDMP devices should not be accessed by non-NDMP devices. It is not possible to save data from a NAS filer to a tape device using standard NDMP and then write non-NDMP data to the same tape volume; you will have issues during recovery.

NDMP and non-NDMP savesets can be saved to the same volume only if the NDMP backups to that volume were written using NDMP-DSA (NAS data backed up to a device presented to the storage node / NetWorker server).
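A quick way to verify the separation is mminfo. A sketch, with NDMP_Pool as a made-up pool name; bootstrap and index savesets should not show up in the output:

```shell
# List the volumes, clients and savesets written to the NDMP pool
mminfo -q "pool=NDMP_Pool" -r "volume,client,name,ssid"
```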

There is an old KB article on Powerlink that explains it:
"When non-NDMP backups are written to tape, the backup is written directly to the tape by NetWorker in its own proprietary format. When NDMP and non-NDMP data is written to the same tape, the file marks get out of order and as a result the restore cannot position the tape correctly to find the data image.
Fix: Ensure that NDMP data is backed up to its own separate pool to prevent NDMP recovery and scanner problems."

Keep this in mind when migrating your NAS. If you plan to migrate to a different vendor, for example NetApp to BlueArc, make sure you no longer need the backups of the old NAS (in this example, NetApp), or keep the old NAS somewhere in case recovery is required. NetApp and BlueArc run different operating systems; that's why you cannot recover an NDMP backup made by NetApp onto BlueArc.

Also, make sure you have a copy of the index saveset. scanner -i does not work for NDMP backups.

For full details of the limitations and other requirements of NetWorker NDMP backup, please consult EMC support or refer to the EMC documentation.

NetWorker NDMP DSA performance

We tried to back up a Celerra VG2 using NetWorker NDMP. Given the existing hardware, we decided to use NDMP-DSA, since we don't have a VTL license on the Data Domain. We ran 4 streams together, and the total throughput was 30 - 40 MB/s.

(To find out the backup performance on the Celerra, log on to the control station and run the following command, assuming you are backing up a filesystem on Data Mover server_2:)
server_pax server_2 -s -v

We contacted support, and they suggested using the -P option on the backup command, where sn is the storage node the backup data should be sent to.

nsrndmp_save -T dump -M -P sn

We also listed only the sn in the storage node field of the client properties for the Celerra in NetWorker.

After making the changes, we saw performance improve to 100 MB/s.

Keep in mind, if you are running a pure NDMP backup (that is, an FC tape drive connected to the Celerra Data Mover), the backup command will be

nsrndmp_save -T dump

i.e. for most NAS, dump is the only backup type supported. Please refer to the NetWorker and Celerra documentation for more info.
The -M option is for DSA backup; for pure NDMP, do not add -M to the backup command.

Thursday, March 8, 2012

Data Domain ifgroup part III

We tried to configure ifgroup with NetWorker 7.6.1, and it failed to work as expected. For example, with an ifgroup of 2 dual-port 10 GbE NICs, if I unplug a cable, sometimes the backup sessions do not fail over to the other 3 working NICs. If I instead set up failover on the Data Domain NICs, I don't have any issues, as mentioned in my earlier post. I contacted support, and the suggestion was to upgrade NetWorker to 7.6.2 and DD OS to 5.0. I am happy with NetWorker 7.6.1 and DD OS 4.9.x for now and will decide when to upgrade later. Currently, NIC failover on the Data Domain is good enough!

Wednesday, February 15, 2012

VADP with hotadd configuration

A helper VM is not required for VADP with ESX 4.0 or above if HotAdd mode is selected. Make sure you have the necessary license for HotAdd. If VCB is still used in the environment, you can follow Commvault's page "Create a Helper Virtual Machine" to configure the VCB helper VM.

The following are the properties of the proxy VM before the snapshot is mounted.


After the snapshot is created, it is mounted on the proxy host (which is a virtual machine). See the screenshot below.

I backed up 2 VMs with one disk each at the same time; that's why you see two extra virtual disks on the proxy host. Now you probably understand why you need the Advanced edition of ESX 4.1, which includes the HotAdd feature. Otherwise, you can only use the nbd or san transport mode to back up VMs.

Of course, there are limitations on HotAdd, such as the max vmdk size of 1 TB.

Actually, the 1 TB limitation no longer applies as of VDDK 1.1.1 (http://www.vmware.com/support/developer/vddk/VDDK-1.1.1-Relnotes.html).

If NBD is used, there is a 1 TB limitation.