Sunday, July 24, 2016

Backup Unix raw device with NetWorker

To back up a Unix raw device with NetWorker, please follow knowledge base article esg59701 or the video in the EMC community.


Here is the kb article from EMC website.


How can NetWorker be configured to back up a raw device?
How to back up a RAW device?
 
Fact
NetWorker for UNIX/Linux 7.x.x
 
Resolution
Create a directive using rawasm. Rawasm is used to back up /dev entries (block- and character-special files) and their associated raw disk partition data. On some systems, /dev entries are actually symbolic links to device-specific names. Unlike other ASMs, this ASM follows symlinks, allowing the shorter /dev name to be configured.
  1. Open NMC.
  2. Go to Configuration > Directives.
  3. Right-click > New.
  4. In the Name field, enter Raw.
  5. In the Directive field, enter the directive in the following format:
    << /dev >>
    +rawasm: partition1
    +rawasm: partition2
  6. Click on Apply.
  7. Close this window, go to the Clients menu, and select Client Setup.
  8. Select the client with the raw devices.
  9. In the Save set field, specify:
     /dev/partition1
     /dev/partition2
  10. In the Directive field, select Raw.
  11. Click on Apply.
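
To verify the directive before relying on the scheduled backup, a manual test can be run from the client with a local directive file. This is only a sketch: the partition names come from the placeholders above, the server name nsrserver is made up, and the exact save and mminfo options can vary slightly between NetWorker releases.

    Contents of a local directive file, /tmp/raw.dir (same placeholders as the NMC "Raw" directive):
        << /dev >>
        +rawasm: partition1
        +rawasm: partition2

    # Manual test backup of one raw device against the NetWorker server 'nsrserver'
    save -s nsrserver -b Default -f /tmp/raw.dir -N /dev/partition1 /dev/partition1

    # Confirm the save set was written
    mminfo -s nsrserver -q "name=/dev/partition1" -r "name,level,totalsize,savetime"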

  • Note:
     
    Precautions when using rawasm to back up UNIX raw partitions:
    One can specify the rawasm directive to back up raw disk partitions on UNIX.
    However, if the raw partition contains data managed by an active database management system (DBMS), ensure that the partition is offline and the database manager is shut down. For greater flexibility when backing up partitions that contain data managed by a DBMS, use a NetWorker Module application.
    Similarly, if rawasm is used to save a partition containing a UNIX file system, the file system must be unmounted or mounted read-only to obtain a consistent backup.
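    As a rough sketch of quiescing before the backup window (the device name is the placeholder from above, the service name is hypothetical, and the remount syntax shown is the Linux form):
      # Database on the raw partition: stop the DBMS so no writes land mid-backup
      /etc/init.d/mydb stop                # hypothetical service name

      # File system on the partition: unmount it, or remount it read-only
      umount /dev/partition1
      mount -o remount,ro /dev/partition1  # alternative if unmounting is not possible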
     
    Note: Do not specify the rawasm directive to back up or recover raw partitions on Windows. See "Backing up raw partitions on Windows" on page 88 for more information.

    Using rawasm to recover a UNIX raw partition
    When recovering data, rawasm requires that the file system node for the raw device exist prior to the recovery. This protects against the recovery of a /dev entry and the overwriting of data on a reconfigured disk. You can create the /dev entry, having it refer to a different raw partition, and enforce overwrite if needed. If you create the /dev entry as a symbolic link, the data is recovered to the target of the symbolic link.
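    For example (placeholder device names, and recover options can differ by release), re-creating the node and running a non-interactive recover might look like:
      # Make sure the device node exists before recovering
      ls -l /dev/partition1

      # Optionally point a symlink at a different partition to redirect the data
      # ln -s /dev/partition_new /dev/partition1

      # Recover the raw device save set without entering interactive mode
      recover -s nsrserver -a /dev/partition1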
     
    Recovery of a raw partition must occur on a system configured with the same disk environment and same size partitions as the system that performed the backup:
    • If the new partition is smaller than the original partition, the recovery will not complete successfully.
    • If the new partition is larger than the original partition, the estimated size reported upon recovery is not accurate.
    Note: Only level full is supported with rawasm, because NetWorker's traditional backup code is designed for file-system-level backup, not block-level backup.
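
    A quick size check on the target partition before recovering can avoid a failed restore. The command below is the Linux form (prtvtoc is the Solaris equivalent), and the device name is the same placeholder used above:
      # Size of the target partition in bytes; must be >= the original partition
      blockdev --getsize64 /dev/partition1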



ESX host does not discover new path after migrating to new fabric

I once worked on a fabric migration. The customer wanted to move to new equipment and chose not to link the existing and new switches together (no ISL), so that the new fabric would have a clean configuration. The servers connect to two separate fabrics, A and B. In each migration window we had to re-connect a storage controller on one fabric, and at the same time the servers attached to that controller, over to the new fabric. Only one fabric and one controller was worked on at a time. Since the hosts were all ESX and there was no AIX, it was much easier. We completed the job over a weekend during a low-I/O period. What happened was that after connecting to the new switch, the hosts did not see the new path even after the VI team rescanned. I confirmed that zoning had been completed correctly on the new fabric. They suggested that I follow the VMware article "Configuring fibre switch so that ESX Server doesn't require a reboot after a zone set change".


However, I knew RSCN was already enabled on the Cisco switches. Eventually, the VI team agreed to vMotion the VMs off each host and reboot the ESX hosts one by one. After that, the new paths were discovered correctly.
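
For reference, a rescan and path check from the host side looks roughly like the following. This uses the esxcli syntax from ESXi 5.x and later; the older classic ESX hosts in this environment would use esxcfg-rescan vmhbaN and esxcfg-mpath -l instead.

    # Rescan all HBAs for new devices and paths
    esxcli storage core adapter rescan --all

    # List the paths and check that each LUN shows the expected number of active paths
    esxcli storage core path list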


One thing we learned: stay away from boot from SAN for ESX hosts. It took about 30 minutes for each host to come up after a reboot, even though there were not a lot of LUNs in the ESX clusters. Booting ESX from local disk has proven to be much faster.


Also, minimize the number of RDMs! I worked in another messy environment with lots of RDMs, where even a rescan could not happen during business hours.