NetApp Cluster-Mode Snapshots

February 28, 2012

NetApp Snapshot technology is famous for its uniqueness and ease of use. For example, unlike most competing snapshot technologies, you can create a volume snapshot in seconds, regardless of the size of the volume. It is very efficient in storage space usage. And you can create hundreds of snapshot copies based on your needs. These excellent features are there for you in Data ONTAP 8.1 operating in cluster-mode.

The familiar 7-mode commands, such as snap reserve, snap sched and snap list, are still operational in cluster-mode. But cluster-mode has a new set of commands (see Fig. 1), which you can explore by simply typing a command (e.g., snap create) and hitting return (see Fig. 2).
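As a quick sketch of the cluster-mode syntax, creating and listing a snapshot looks roughly like this (the Vserver and volume names here are hypothetical examples, not from the figures):

```shell
cluster1::> snap create -vserver vs1 -volume vol1 -snapshot mysnap
cluster1::> snap show -vserver vs1 -volume vol1
```

The main difference from 7-mode is that cluster-mode commands are Vserver-aware, so the -vserver parameter scopes the operation.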

fig1.PNG

Figure 1: Cluster-mode snapshot commands

fig2.PNG

Figure 2: Cluster-mode snap create’s usage

One thing I did observe is that the cluster-mode snapshot policy seems to take precedence over the 7-mode snap sched command. The default snapshot policy in cluster-mode enables the hourly, daily and weekly snapshot schedules, retaining the following number of copies:

  • Hourly: 6
  • Daily: 2
  • Weekly: 2

If you try to set the snapshot schedule using the command snap sched <volname> 0 0 0, meaning don’t take any scheduled snapshots, you will be surprised to find that this command is ignored, and hourly, daily and weekly snapshot copies are still taken.

There are several ways to change the default snapshot policy in cluster-mode. Here are some examples:

a) Use the snap policy modify command to disable the policy.

b) Under the scope of snapshot policy, use add-schedule, modify-schedule, or remove-schedule to change it to your liking (see Fig. 3).

c) You can also use snap policy create to create a new snapshot policy.
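Putting options (a) through (c) together, a rough sketch might look like this. The policy name "mypolicy" and the parameter names are illustrative; check the built-in help (type the command and hit return) for the exact syntax on your release:

```shell
::> snap policy modify -policy default -enabled false
::> snap policy modify-schedule -policy default -schedule hourly -count 2
::> snap policy create -policy mypolicy -enabled true
```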

fig3.PNG

Figure 3: Cluster-mode snapshot policy commands

In summary, the 7-mode commands, by and large, are still valid for snapshot management. But be aware of the new cluster-mode snapshot policy, which may take precedence.

Categories: netapp

NetApp Powershell with Snaps & Cluster-Mode

February 28, 2012

Many Powershell cmdlets have been developed for NetApp Data ONTAP. This is true for both 7-mode and cluster-mode. Since the cluster-mode cmdlets are relatively new, we’ll take a close look at them here, using a couple of cluster-mode cmdlets to demonstrate how to create a volume snapshot and then restore it.

First, two prerequisites:

  • Powershell v2.0, which you can download and install from the Microsoft website.

  • Data ONTAP Powershell Toolkit v1.7 (DataONTAP.zip), which you can download from the NetApp Community site (see Fig. 1). You need to log in with your NOW credentials to download it.

fig1_2012feb27.PNG

Figure 1: Download DataONTAP.zip from NetApp Community

Note: for Powershell background, there are a number of useful websites with good introductory material.

After you have downloaded the ONTAP Powershell Toolkit v1.7, open a command prompt on your Windows host and create a directory C:\psontap. Unzip the DataONTAP.zip kit to C:\psontap\DataONTAP. Fig. 2 shows the contents after unzipping the toolkit.

fig2_2012feb27.PNG

Figure 2: Unzip DataONTAP.zip

Next, open a powershell prompt by clicking on the icon (see Fig. 3).

fig3_2012feb27.PNG

Figure 3: Click on the Powershell icon to open the Powershell command prompt

Then, initialize the ONTAP Powershell Toolkit using import-module dataontap, as shown in Fig. 4.

fig4_2012feb27.PNG

Figure 4: Import the DataONTAP module

To distinguish cluster-mode cmdlets from 7-mode ones, the mnemonic ‘Nc’ is used. For example, to create a snapshot, you use New-NaSnapshot in 7-mode, but New-NcSnapshot in cluster-mode. Therefore, to discover all the snapshot cmdlets in cluster-mode, you can simply do get-help *NcSnapshot*, as shown in Fig. 5. Note that wildcards are allowed in cmdlet names.

fig5_2012feb27.PNG

Figure 5: Cluster-mode snapshot cmdlets

In order to take a volume snapshot (or manage the FAS controller), you first need to use the cmdlet Connect-NcController to establish communication with the NetApp FAS controller (operating in cluster-mode) from your Windows box (see Fig. 6). When prompted, type in the admin password and hit OK. Note again that the cmdlet is cluster-mode because of the presence of ‘Nc’.
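As a minimal sketch, the connection step looks like this (the controller address is a hypothetical example; Get-Credential pops up the password prompt):

```powershell
# Connect to a cluster-mode controller; 192.168.1.50 is an example address
Connect-NcController 192.168.1.50 -Credential (Get-Credential admin)
```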

fig6_2012feb27.PNG

Figure 6: Establish connection to a FAS controller operating in cluster-mode

Create a Snapshot

Figure 7 shows how to use the Powershell cmdlet New-NcSnapshot to create a volume snapshot called mysnap. Note that here we assume the FlexVol volume test_fv and Vserver vc1 already exist on the controller. The parameter VserverContext is useful because it uniquely identifies the volume belonging to a specific Vserver, in case there are multiple volumes named test_fv that belong to different Vservers.
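In line with the figure, the command is along these lines (using the volume, snapshot and Vserver names described above):

```powershell
New-NcSnapshot -Volume test_fv -Snapshot mysnap -VserverContext vc1
```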

fig7_2012feb27.PNG

Figure 7: Create a snapshot in cluster-mode using Powershell cmdlet

Restore a Snapshot

Suppose after some time you want to restore the snapshot mysnap for whatever reason. You can do that using the cmdlet Restore-NcSnapshotVolume, as shown in Fig. 8. The parameter PreserveLunIds allows the LUNs within the volume being restored to stay mapped, with their identities preserved.
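A sketch of the restore step, using the same names as above (the parameter names follow the figure; remember a snapshot restore reverts the whole volume):

```powershell
Restore-NcSnapshotVolume -Volume test_fv -Snapshot mysnap -VserverContext vc1 -PreserveLunIds
```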

fig8_2012feb27.PNG

Figure 8: Restore a volume snapshot

You can explore other cluster-mode snapshot cmdlets by doing get-help for each cmdlet shown in Fig. 5. I found these cmdlets quite straightforward to use, although a little verbose. And of course, if you have many volumes and snapshot copies, you can write your own scripts based on these cmdlets to streamline the operations.

Categories: netapp

VMware Command Cheat Sheet

February 27, 2012
Esxcfg-Commands
esxcfg-advcfg Set/Get Advance Configuration Parameters (Stored in /etc/vmware/esx.conf)
esxcfg-auth Configure authentication (ADS, NIS, Kerberos)
esxcfg-boot Configure Boot-Options
esxcfg-configcheck Checks format of /etc/vmware/esx.conf (e.g. Used after esx-updates)
esxcfg-dumppart Configure partition for core-dumps after PSOD
esxcfg-firewall Configure ESX-server firewall
esxcfg-hwiscsi Configure hardware iSCSI initiators
esxcfg-info Get information about hardware, resources, storage, … of the ESX-Server
esxcfg-init Used Internally on boot
esxcfg-linuxnet Setup/Remove linux network devices (ethX)
esxcfg-module Enable/Disable/Add new/Query VMKernel modules and set/get parameters for them.
esxcfg-mpath Configure multipathing for Fibre-Channel and iSCSI
esxcfg-nas Configure NFS-datastores (“NFS-client”)
esxcfg-nics Configure physical NICs (vmnicX).
esxcfg-pciid Recreate PCI-device list /etc/vmware/{pci.ids, pcitable, pcitable.linux, vmware-device.map } from the configuration files /etc/vmware/pciid/*.xml
esxcfg-rescan Rescan a SCSI/FC/iSCSI adapter.
esxcfg-resgrp Configure resource groups
esxcfg-route Configure the VMKernel default route
esxcfg-swiscsi Configure /Rescan software iSCSI initiator
esxcfg-upgrade Used for upgrades from ESX2.x to ESX3
esxcfg-vmhbadevs Get information about attached LUNs with /dev/sdX mappings
esxcfg-vmknic Add /Remove /Configure VMKernel NICs.
esxcfg-vswif Add/Remove/Configure ServiceConsole NICs
esxcfg-vswitch Add/Remove/Configure Virtual Switches
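A few read-only examples of the esxcfg commands above, safe to run from the Service Console for a quick inventory:

```shell
esxcfg-nics -l      # list physical NICs with link state and speed
esxcfg-vswitch -l   # list virtual switches and their port groups
esxcfg-nas -l       # list configured NFS datastores
esxcfg-mpath -l     # list multipathing info for attached LUNs
```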
 
esx-Commands
esxnet-support Diagnostic information about Console NICs (Gives Errors in ESX-3.5.0)
esxtop Live Statistics of Virtual Machines (with VM-Names)
esxupdate Tool for updating ESX-3.x
 
Vmware-Commands
Vmware-authd For internal use only (authentication)
Vmware-cmd See vmware-cmd section
vmware-configcheck Check Virtual Machine configuration files (*.vmx)
vmware-config.pl Configure ESX-hostd port, recompile/install VMware VmPerl Scripting API
vmware-hostd Daemon for VI Client connections (should only be started by mgmt-vmware start-script)
vmware-hostd-support Creates /var/log/vmware/hostd-support.tgz
vmware-mkinitrd Creates initrd (initial ramdisk)
vmware-vim-cmd Please see vmware-vim-cmd section
vmware-vimdump Get information about ESX-Server configuration and Virtual Machines.
vmware-vimsh Interactive shell – comparable to vmware-vim-cmd with additional commands
vmware-watchdog Watchdog daemon to keep vmware-hostd running (should only be started by mgmt-vmware start-script)
vmware-webAccess WebAccess daemon for browser-based management (should only be started by vmware-webAccess start-script)

 
Vm-Commands
vmfsqhtool Prints UUID of a device header
vmfsqueuetool Formats all partitions in vmfs queue
vmkchdev Manage PCI devices (give control over the device to VMKernel or Service Console)
vmkdump Manage VMKernel dump partition
vmkerrcode Give description of VMKernel error codes based on decimal or hex value
vmkfstools Create/Remove/Configure VMFS-Filesystems and Virtual Machine .vmdk files (Virtual Disk File)
vmkiscsid iSCSI daemon
vmkiscsi-device iSCSI device information
vmkiscsi-ls List iSCSI devices
vmkiscsi-tool Configure software iSCSI initiator
vmkiscsi-util Get information about iSCSI devices
vmkloader Loads and unloads the VMKernel
vmkload_mod Load/Unload VMKernel modules (e.g. device drivers)
vmklogger Create log messages (like logger, for VMKernel messages)

vmkpcidivy deprecated
vmkping Ping on VMKernel network
vmkuptime.pl Creates HTML output with Uptime/Downtime/Availability
vmres.pl deprecated
vmsnap_all Snapshot all Virtual Machines on an ESX-Server
vmsnap.pl deprecated
vmstat (this is a standard linux command – lists memory/disk access statistics)
vm-support Creates /etc/init.d/esx-<date>.tgz
vmware internal use – can not be started manually
 
Other Commands
vdf Show free disk space of mounted partitions (like df with vmfs-support)
 
Start-Scripts
Scripts inside /etc/init.d/
mgmt-vmware Start/Stop/Restart the daemon for the VI-Client connections
vmkhalt internal use – can not be started manually
vmware internal use – can not be started manually
vmware-functions internal use – can not be started manually
vmware-late internal use – can not be started manually
vmware-vmkauthd internal use – can not be started manually
vmware-vpxa Start/Stop/Restart the daemon for the Virtual Center connections
vmware-webAccess Start/Stop/Restart the daemon for the Web-Interface connections
 
Running Processes
crond Schedule jobs at specific intervals
gpm Mouse support in the text console
init First process which runs every other process
klogd Kernel log daemon
logger Logs messages to /var/log
sshd Provides secure shell access
syslogd Log/Filter daemon with a remote logging ability
vmware-hostd Daemon for VI Client connections
vmkload_app Loads vmware applications (internal use only)
vmklogger Logs VMKernel messages to /var/log/vmware

wsmand Web Services Management
vmware-vmkauthd Daemon for user authentication
vmware-vmx Provides context for a Virtual Machine (internal use only)
vmware-watchdog Checks if vmware processes are running (no connection test → does not restart hung processes)

vpxa Virtual Center agent
webAccess Web-Interface (TomCat-Server)
xinetd Listens on network ports for other daemons and starts them on demand
 
vmware-cmd Commands
Commands for a Virtual Machine (vmware-cmd -h).
getconnectedusers List name and IP of connected users (non-working with esx3.5.0?)
getstate Show current state of VM (Off/On/…)
start Start a VM
stop Stop a VM
reset Reset a VM
suspend Suspend a VM
setconfig Set a variable in the vmx-configuration-file
getconfig Get a variable from the vmx-file
setguestinfo Set guest info variable
getguestinfo Get guest info variable
getproductinfo Get various product info
connectdevice Connect a device
disconnectdevice Disconnect a device
getconfigfile Get path/filename of config file
getheartbeat Get current heartbeat
gettoolslastactive Time since last notification from vmware-tools (in seconds)
getresource Get a VM resource
setresource Set a VM resource
hassnapshot Determine if VM has a snapshot
createsnapshot Create a snapshot
revertsnapshot Revert to last snapshot
removesnapshots Remove all snapshots
answer Answer a question (if VM requires input)
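For example, a typical vmware-cmd session looks like this (the .vmx path is hypothetical; vmware-cmd -l lists the registered paths on your host):

```shell
vmware-cmd -l                                             # list registered VMs
vmware-cmd /vmfs/volumes/datastore1/vm1/vm1.vmx getstate
vmware-cmd /vmfs/volumes/datastore1/vm1/vm1.vmx createsnapshot snap1 "before patching" 0 0
```

The two trailing arguments to createsnapshot are the quiesce and memory flags.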
 
vmware-vim-cmd Commands
hostsvc/ ESX-Server commands
internalsvc/ ESX-Server internal commands
proxysvc/ Web-SDK proxy commands
vimsvc/ VirtualCenter commands
vmsvc/ VM commands
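For instance, to query VMs through vmware-vim-cmd (the VM ID comes from getallvms and will differ on your host):

```shell
vmware-vim-cmd vmsvc/getallvms          # list VMs with their IDs
vmware-vim-cmd vmsvc/power.getstate 16  # power state of VM ID 16 (example ID)
```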
 
Log Files
Logs are in /var/log/vmware/ if no other path is specified.
/etc/syslog.conf Configure logging behaviour
esxcfg-boot.log Boot messages
esxcfg-firewall.log List of executed firewall commands and log messages
esxcfg-linuxnet.log LinuxNet messages
esxupdate.log Debug messages for updates
hostd.log hostd messages
vpx-iupgrade.log Logs for package installations/removals by Virtual Center (e.g. output of rpm -hiv VMware-vpxa-2.5.0-64192.i386.rpm)

vpx/vpxa.log Virtual Center Agent messages
vmfsqueuetool.log VMFSQueueTool messages
webAccess Web-Access messages
/proc/vmware/log VMKernel messages
/var/log/storage-Monitor VMKernel storage monitor messages
/var/log/vmkernel VMKernel messages (info messages only)
/var/log/vmkproxy VMKernel userworld proxy messages
/var/log/vmk-summary VMKernel messages (notice and higher)
/var/log/vmk-warning VMKernel warning messages
Categories: VMware

NetApp commands for Volume / LUN management

February 19, 2012


vol options <volname> fractional_reserve 0

This command sets the fractional reserve to zero percent, down from the default of 100 percent. Note that fractional reserve only applies to LUNs, not to NAS storage presented via CIFS or NFS.

snap autodelete <volname> trigger snap_reserve

This sets the trigger at which Data ONTAP will begin deleting Snapshots. In this case, Snapshots will start getting deleted when the snap reserve for the volume gets nearly full. The current size of the snap reserve can be viewed for a particular volume with the “snap reserve” command.

snap autodelete <volname> defer_delete none

This command instructs Data ONTAP not to exhibit any preference in the types of Snapshots that are deleted. Options for this command include “user_created” (delete user-created Snapshot copies last) or “prefix” (Snapshot copies with a specified prefix string).

snap autodelete <volname> target_free_space 10

With this setting in place, Snapshots will be deleted until there is 10% free space in the volume.

snap autodelete <volname> on

Now that the Snapshot autodelete options have been configured, this command will actually turn the functionality on.

vol options <volname> try_first snap_delete

When a FlexVol runs into an issue with space, this option tells Data ONTAP to first try to delete Snapshots in order to free up space. This command works in conjunction with the next command:

vol autosize <volname> on

This enables Data ONTAP to automatically grow the size of a FlexVol if the need arises. This command works hand-in-hand with the previous command; Data ONTAP will first try to delete Snapshots to free up space, then grow the FlexVol according to the autosize configuration options. Between these two options—Snapshot autodelete and volume autogrow—you can reduce the fractional reserve from the default of 100 and still make sure that you don’t run into problems taking Snapshots of your LUNs.
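Putting the whole sequence together for a hypothetical LUN-hosting FlexVol named lunvol (the -m maximum size and -i increment values are examples; tune them to your aggregate's free space):

```shell
vol options lunvol fractional_reserve 0
snap autodelete lunvol trigger snap_reserve
snap autodelete lunvol defer_delete none
snap autodelete lunvol target_free_space 10
snap autodelete lunvol on
vol options lunvol try_first snap_delete
vol autosize lunvol -m 200g -i 10g on
```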

SnapMirror and Deduplication

February 16, 2012

In a recent blog, I talked about the interaction between deduplication and SnapVault.  In this post I’ll discuss SnapVault’s cousin – SnapMirror.

SnapVault was designed to make efficient D2D backup copies, but SnapMirror has a different purpose – making replication copies.  Using good old Snapshot technology, SnapMirror transfers snapshots from one storage system to another, usually from the data center to an offsite disaster recovery location.

SnapVault and SnapMirror have many similarities, but there is one important item that distinguishes these two cousins: unlike SnapVault, SnapMirror relationships are peer-based and can be reversed.  In fact, when we talk about SnapMirror pairs, we don’t use the terms primary and secondary as we do with SnapVault; instead we refer to source and destination systems.  Either of the SnapMirror systems can be a source or a destination; it just depends on the direction the snapshots are moving.  Take a look at the diagram below to get a better understanding of what I mean:

https://i0.wp.com/media.netapp.com/images/blogs-6a00d8341ca27e53ef01348486ab8d970c-800wi.jpg

I’ve used this diagram in dozens of customer briefings, and I use it to point out the subtle differences between SnapVault and SnapMirror.  First of all, notice the arrows.  SnapVault’s go from left to right only, but SnapMirror’s arrows travel in both directions.  Normally, the SnapMirror source system (the one on the left) controls the flow of application data to servers and clients.  However, if the source system goes down for some reason, the SnapMirror destination system (on the right) takes control, and we call this a “Failover” event.  When we bring the source system back up and revert control to it, we call this a “Failback”.  In either case, Snapshot copies are passed back and forth between the systems to ensure that both the source and destination systems are synchronized to the current point in time, using the most current SnapMirror copy.

Now, let’s talk about using deduplication with SnapMirror.  There are two types of SnapMirror replication, and deduplication behaves differently with each type.

The first type is called Qtree SnapMirror, or QSM.  As the name implies, QSM performs replication at the Qtree level.  What is a Qtree?  It’s a logical subset of a NetApp volume.  Storage admins like to use Qtrees in NAS systems when they need to administer quotas or set access permissions.  Much more info on the whys and hows of Qtrees can be found in the Data ONTAP System Administration Guide on the NOW Support site.

In the context of deduplication, QSM presents a bit of a problem.  Since replication is done at the logical level, any deduplication done at the source will be re-inflated at the destination, and will need to be re-deduplicated.  This kind of defeats the purpose of space reduction.  But there is one valuable use case – if you don’t want to dedupe the source, and only want to deduplicate the destination, QSM makes perfect sense.  Refer to the following diagram:

https://i1.wp.com/media.netapp.com/images/blogs-6a00d8341ca27e53ef0133f15b6b95970b-800wi.jpg

As the diagram shows, with QSM, only the Qtree portion of the volume is replicated and it is only deduplicated at the DR site.  To configure QSM for deduplication, just enable it on the destination volume and set the deduplication schedule to “auto”.  The source volume will remain untouched and the destination volume will deduplicate automatically.  Failovers and Failbacks will work just fine, since any replication from the destination back to the source will be un-deduplicated.

The second type of replication is Volume SnapMirror, or VSM, which takes a different approach.  VSM replicates entire volumes (including Qtrees) at the physical level.  What this means for deduplication is that blocks are replicated once, and any deduplication pointers are sent along with the blocks.  By replicating at the physical level, the destination volume “inherits” deduplication automatically.  Here’s a diagram that shows how VSM works with deduplication:

https://i0.wp.com/media.netapp.com/images/blogs-6a00d8341ca27e53ef01348482dc7a970c-800wi.jpg

To configure VSM for deduplication, enable it on both the source and destination volumes, but only set the deduplication schedule at the source.  The source volume will do all the work and the destination volume will get deduplication for free.  After a Failover/Failback event, you might want to run a deduplication scan on the source volume (sis start -s) to pick up any duplicate blocks that might have been written to the destination during the Failover – but then again, it’s probably a very small amount that won’t be worth the effort.  Your choice.
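A sketch of that VSM setup, with hypothetical volume names (run the commands on the system indicated by the prompt):

```shell
# On the source system: enable dedupe and set the schedule
source> sis on /vol/src_vol
source> sis config -s auto /vol/src_vol

# On the destination system: enable dedupe, but set no schedule
dest>   sis on /vol/dst_vol

# Optionally, after a Failover/Failback, scan the source for duplicates
source> sis start -s /vol/src_vol
```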

In a nutshell, that’s how deduplication and SnapMirror work together.  If you’d like to read a much more complete description, here is an excellent Technical Report that includes best practices.

Categories: netapp

SANtricity E-Series

December 13, 2011

To manage NetApp E-Series arrays, you use the SANtricity Storage Manager software. SANtricity is easy and intuitive to use. Figure 1 shows the first GUI screen after you have selected an array to manage.

santricity1.PNG

The process of configuring storage on an E-Series array is, in my opinion, really not that different from doing it on a FAS storage system. Of course, the commands are different. However, conceptually, they are quite similar. I’m going to mention just a few very basic storage management tasks on an E-Series array and draw some comparisons between E-Series array and FAS storage systems.

Configure Host Access

This step basically establishes the path(s) between the array and host(s) such that the host can access the storage. This is similar to creating igroup(s) on a FAS storage system. Note that with E-Series arrays, you can configure host access manually or automatically. Auto-config involves a host discovery step by SANtricity. To ensure the path is working, SANtricity creates a tiny 20-MB volume on the array and presents it as a 20-MB disk to the host (see Fig. 2). So, don’t be alarmed if you see this disk show up on the host; it’s actually a common practice among a number of storage vendors.

unallocatedLUN.jpg

Create Volume Groups and Volumes

A Volume Group is a logical storage entity that aggregates a number of physical drives. When you create a Volume Group, you select the number of disks as well as a RAID level (e.g., 0, 1, 5 or 6) for that Volume Group. Think of it as an aggregate and FlexVol combined on FAS, although they are not quite the same. Within a Volume Group, you can create one or more volumes, which are similar to LUNs on FAS. Figure 3 shows the relationship between Volume Groups and Volumes.

volumegrp.PNG

Create Host-to-Volume Mappings

This step maps host(s) to volumes so that the host can access the volumes, and thus the storage array. It is very similar to LUN mapping on FAS. Figure 4 captures the state after several volumes have been mapped to the host x3550-test. Note, if multipathing is used, then the proper DSM should be installed and/or configured.

host2volmapping.PNG

Configure Hot Spares

On E-Series arrays, hot spares should be configured so they can be used automatically in place of a failed drive in a volume group. When you configure hot spare disks, you can select which disks, as well as how many, should be hot spares. Note, the hot spares are global, meaning they can be used by any volume group in the array. Figure 5 shows two hot spares that have been configured. This step is different from FAS, since on FAS storage systems, unconfigured disks are usually hot spares automatically.

hotspare.PNG

SANtricity Storage Manager is a powerful storage management tool. Here, I only touched on a few very basic tasks. Yet these simple configuration steps are enough to let a host access an E-Series array and perform I/O operations (read and write) between the host and the array.

Categories: netapp

Space Reclaimer for NetApp SnapDrive

October 18, 2011

SnapDrive for Windows 6.3, which was released last year, introduced support for VMDKs on NFS & VMFS Datastores.

mainscreen.JPG

A couple of quick notes: you need Data ONTAP 7.3.4 or later to use block reclamation with SnapDrive for VMDKs.

You need to have VSC 2.0.1 or later installed, with the Backup and Recovery feature, and also SnapDrive (within the VM) must have network connectivity to the VSC (port 8043, by default) as well as Virtual Center.

Also, SnapDrive cannot create VMDKs for you, in the way it can create RDM LUNs. Instead, you have to create VMDKs the old fashioned way, but once they’re attached to the VM, SnapDrive will be able to manage them.

Okay, so I’ve got a VMDK (my C: Drive), which is in an NFS Datastore.

screen.JPG

I copied 5GB worth of Data into the C: drive, then deleted it. This left my VMDK at 10GB in size.

datastore.JPG

So, Windows took up about 5GB, and the data (which is now deleted), was about 5GB – so let’s kick off space reclamation and see how much space I get back.

Right click on the VMDK, and select “Start Space Reclaimer”.

rightclick.jpg

It will do a quick check, to see if it actually can reclaim any space for you.

screen2.JPG

The confirmation box reckons I can reclaim up to 3GB? Hmm, I was hoping for a bit more than that. Well, let’s run it anyway and see how well it does.

confirm.JPG

It’s worth noting the warning here – while the VSC requires VMs to be shut down in order to reclaim space, SnapDrive runs space reclamation while the VM is powered up – but it will take a backseat to any actual I/O that’s going on, so you might want to run it in a low-usage period.

 

So, I clicked okay, and it kicked off space reclamation – and it even gives me a nice little progress bar.

progress2.JPG

In my lab, it took about 3 minutes, and when it was done, it had shrunk my VMDK down to 5.6 GB.

datastore2.JPG

 

So it was just being modest earlier when it said I could free up to 3GB!

In total, it has reclaimed 5.2GB – which is actually a little more than the data I copied in and deleted to start with!

Categories: netapp