Archive for the ‘netapp’ Category

NetApp Cluster-Mode Snapshots

February 28, 2012 Leave a comment

NetApp Snapshot technology is famous for its uniqueness and ease of use. For example, unlike most competing snapshot technologies, you can create a volume snapshot in seconds, regardless of the size of the volume. It is also very efficient in its use of storage space, and you can create hundreds of snapshot copies based on your needs. These excellent features are all available in Data ONTAP 8.1 operating in cluster-mode.

The familiar 7-mode commands, such as snap reserve, snap sched and snap list, are still operational in cluster-mode. But cluster-mode also has a new set of commands (see Fig. 1), which you can explore by simply typing a command (e.g., snap create) and hitting return (see Fig. 2).


Figure 1: Cluster-mode snapshot commands


Figure 2: Cluster-mode snap create’s usage

One thing I did observe is that the cluster-mode snapshot policy seems to take precedence over the 7-mode snap sched command. The default snapshot policy in cluster-mode enables the hourly, daily and weekly snapshot schedules, with the following retention counts:

  • Hourly: 6
  • Daily: 2
  • Weekly: 2

If you try to set the snapshot schedule using the command snap sched <volname> 0 0 0, meaning take no scheduled snapshots, you will be surprised to find that this command is ignored, and hourly, daily and weekly snapshot copies are still taken.

There are several ways to change the default snapshot policy in cluster-mode. Here are some examples:

a)     Use the snap policy modify command to disable the policy

b)     Within the scope of snapshot policy, use add-schedule, modify-schedule, or remove-schedule to change the policy to your liking (see Fig. 3)

c)     Use snap policy create to create a new snapshot policy


Figure 3: Cluster-mode snapshot policy commands
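As a rough sketch, the three approaches above look something like this at the clustershell prompt. The Vserver name vs1 and the policy name mypolicy are placeholders, and the exact option names may vary by Data ONTAP release, so check the built-in help (type the command and hit return, as above) before running them:

```
::> snap policy modify -vserver vs1 -policy default -enabled false

::> snap policy modify-schedule -vserver vs1 -policy default -schedule daily -newcount 0

::> snap policy create -vserver vs1 -policy mypolicy -enabled true -schedule1 weekly -count1 4
```

Once a custom policy exists, you would typically attach it to a volume via the volume's snapshot-policy attribute.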

In summary, the 7-mode commands, by and large, are still valid for snapshot management. But be aware of the new cluster-mode snapshot policy, which may take precedence.

Categories: netapp Tags: , ,

NetApp PowerShell with Snaps & Cluster-Mode

February 28, 2012 Leave a comment

Many PowerShell cmdlets have been developed for NetApp Data ONTAP, in both 7-mode and cluster-mode. Since the cluster-mode cmdlets are relatively new, we'll take a close look at them here, using a couple of cluster-mode cmdlets to demonstrate how to create a volume snapshot and then restore it.

First, two prerequisites:

• PowerShell v2.0, which you can download and install from the Microsoft website here.

• Data ONTAP PowerShell Toolkit v1.7, which you can download from the NetApp Community here (see Fig. 1). You need to log in with your NOW credentials to download.


Figure 1: Download from NetApp Community


After you have downloaded the Data ONTAP PowerShell Toolkit v1.7, open a command prompt on your Windows host and create a directory C:\psontap. Unzip the kit to C:\psontap\DataONTAP. Fig. 2 shows the contents after unzipping the toolkit.


Figure 2: Unzip

Next, open a PowerShell prompt by clicking on the icon (see Fig. 3).


Figure 3: Click on the PowerShell icon to open the PowerShell command prompt

Then, initialize the Data ONTAP PowerShell Toolkit using Import-Module DataONTAP, as shown in Fig. 4.


Figure 4: Import the DataONTAP module

To distinguish cluster-mode cmdlets from 7-mode ones, the mnemonic 'Nc' is used. For example, to create a snapshot, you use New-NaSnapshot in 7-mode, but New-NcSnapshot in cluster-mode. Therefore, to discover all the snapshot cmdlets in cluster-mode, you can simply do Get-Help *NcSnapshot*, as shown in Fig. 5. Note that wildcards are allowed in cmdlet names.


Figure 5: Cluster-mode snapshot cmdlets

In order to take a volume snapshot (or otherwise manage the FAS controller), you first need to use the cmdlet Connect-NcController to establish communication with the NetApp FAS controller (operating in cluster-mode) from your Windows box (see Fig. 6). When prompted, type in the admin password and click OK. Note again that the cmdlet is cluster-mode because of the presence of 'Nc'.


Figure 6: Establish connection to a FAS controller operating in cluster-mode
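As a minimal sketch, the connection step in Fig. 6 looks like this; the controller name cluster1 is a placeholder for your controller's name or IP address:

```powershell
# Load the toolkit, then connect to the cluster-mode controller.
# Get-Credential pops up the prompt where you type the admin password.
Import-Module DataONTAP
Connect-NcController cluster1 -Credential (Get-Credential admin)
```

If the connection succeeds, the cmdlet returns a controller object, and subsequent Nc cmdlets in the session use that connection implicitly.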

Create a Snapshot

Figure 7 shows how to use the PowerShell cmdlet New-NcSnapshot to create a volume snapshot called mysnap. Note that here we assume the FlexVol volume test_fv and the Vserver vc1 already exist on the controller. The VserverContext parameter is useful because it uniquely identifies the volume belonging to a specific Vserver, in case multiple Vservers each have a volume named test_fv.


Figure 7: Create a snapshot in cluster-mode using Powershell cmdlet
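The command in Fig. 7 boils down to a one-liner. This is a sketch using the example's names (test_fv, vc1, mysnap) and assumes you have already connected with Connect-NcController:

```powershell
# Create a snapshot named "mysnap" of volume test_fv,
# scoped to Vserver vc1 to avoid volume-name collisions
New-NcSnapshot -Volume test_fv -Snapshot mysnap -VserverContext vc1
```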

Restore a Snapshot

Suppose that after some time you want to restore the snapshot mysnap for whatever reason. You can do that using the cmdlet Restore-NcSnapshotVolume, as shown in Fig. 8. The PreserveLunIds parameter allows the LUNs within the volume being restored to stay mapped, with their identities preserved.


Figure 8: Restore a volume snapshot
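As a sketch, the restore in Fig. 8 looks like this, again using the assumed names test_fv, vc1 and mysnap. Keep in mind that reverting a volume discards any changes made after the snapshot was taken:

```powershell
# Revert volume test_fv to the state captured in mysnap,
# keeping LUN mappings and identities intact
Restore-NcSnapshotVolume -Volume test_fv -Snapshot mysnap -VserverContext vc1 -PreserveLunIds
```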

You can explore the other cluster-mode snapshot cmdlets by running Get-Help for each cmdlet shown in Fig. 5. I found these cmdlets quite straightforward to use, although a little verbose. And of course, if you have many volumes and snapshot copies, you can write your own scripts based on these cmdlets to streamline the operations.

Categories: netapp Tags: , , ,

NetApp commands for Volume / LUN management

February 19, 2012 Leave a comment

vol options <volname> fractional_reserve 0

This command sets the fractional reserve to zero percent, down from the default of 100 percent. Note that fractional reserve only applies to LUNs, not to NAS storage presented via CIFS or NFS.

snap autodelete <volname> trigger snap_reserve

This sets the trigger at which Data ONTAP will begin deleting Snapshots. In this case, Snapshots will start getting deleted when the snap reserve for the volume gets nearly full. The current size of the snap reserve can be viewed for a particular volume with the “snap reserve” command.

snap autodelete <volname> defer_delete none

This command instructs Data ONTAP not to exhibit any preference in the types of Snapshots that are deleted. Options for this command include “user_created” (delete user-created Snapshot copies last) or “prefix” (delete Snapshot copies with a specified prefix string last).

snap autodelete <volname> target_free_space 10

With this setting in place, Snapshots will be deleted until there is 10% free space in the volume.

snap autodelete <volname> on

Now that the Snapshot autodelete options have been configured, this command will actually turn the functionality on.

vol options <volname> try_first snap_delete

When a FlexVol runs into an issue with space, this option tells Data ONTAP to first try to delete Snapshots in order to free up space. This command works in conjunction with the next command:

vol autosize <volname> on

This enables Data ONTAP to automatically grow the size of a FlexVol if the need arises. This command works hand-in-hand with the previous command; Data ONTAP will first try to delete Snapshots to free up space, then grow the FlexVol according to the autosize configuration options. Between these two options—Snapshot autodelete and volume autogrow—you can reduce the fractional reserve from the default of 100 and still make sure that you don’t run into problems taking Snapshots of your LUNs.
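Putting all of the commands above together, here is what the full sequence might look like for a hypothetical LUN volume named lun_vol; the autosize maximum (-m) and increment (-i) values are just examples:

```
vol options lun_vol fractional_reserve 0
snap autodelete lun_vol trigger snap_reserve
snap autodelete lun_vol defer_delete none
snap autodelete lun_vol target_free_space 10
snap autodelete lun_vol on
vol options lun_vol try_first snap_delete
vol autosize lun_vol -m 300g -i 10g on
```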

SnapMirror and Deduplication

February 16, 2012 Leave a comment

In a recent blog post, I talked about the interaction between deduplication and SnapVault.  In this post I’ll discuss SnapVault’s cousin – SnapMirror.

SnapVault was designed to make efficient D2D backup copies, but SnapMirror has a different purpose – making replication copies.  Using good old Snapshot technology, SnapMirror transfers snapshots from one storage system to another, usually from the data center to an offsite disaster recovery location.

SnapVault and SnapMirror have many similarities, but there is one important item that distinguishes these two cousins: unlike SnapVault, SnapMirror relationships are peer-based and can be reversed.  In fact, when we talk about SnapMirror pairs, we don’t use the terms primary and secondary as we do with SnapVault; instead we refer to source and destination systems.  Either of the SnapMirror systems can be a source or a destination; it just depends on the direction the snapshots are moving.  Take a look at the diagram below to get a better understanding of what I mean:

I’ve used this diagram in dozens of customer briefings, and I use it to point out the subtle differences between SnapVault and SnapMirror.  First of all, notice the arrows.  SnapVault’s go from left to right only, but SnapMirror’s arrows travel in both directions.  Normally, the SnapMirror source system (the one on the left) controls the flow of application data to servers and clients.  However, if the source system goes down for some reason, the SnapMirror destination system (on the right) takes control; we call this a “Failover” event.  When we bring the source system back up and revert control to it, we call this a “Failback”.  In either case, Snapshot copies are passed back and forth between the systems to ensure that both the source and destination systems are synchronized to the current point in time, using the most current SnapMirror copy.

Now, let’s talk about using deduplication with SnapMirror.  There are two types of SnapMirror replication, and deduplication behaves differently with each type.

The first type is called Qtree SnapMirror, or QSM.  As the name implies, QSM performs replication at the Qtree level.  What is a Qtree?  It’s a logical subset of a NetApp volume.  Storage admins like to use Qtrees in NAS systems when they need to administer quotas or set access permissions.  Much more info on the whys and hows of Qtrees can be found in the Data ONTAP System Administration Guide on the NOW Support site.

In the context of deduplication, QSM presents a bit of a problem.  Since replication is done at the logical level, any deduplication done at the source will be re-inflated at the destination and will need to be re-deduplicated.  This rather defeats the purpose of space reduction.  But there is one valuable use case: if you don’t want to dedupe the source and only want to deduplicate the destination, QSM makes perfect sense.  Refer to the following diagram:

As the diagram shows, with QSM, only the Qtree portion of the volume is replicated and it is only deduplicated at the DR site.  To configure QSM for deduplication, just enable it on the destination volume and set the deduplication schedule to “auto”.  The source volume will remain untouched and the destination volume will deduplicate automatically.  Failovers and Failbacks will work just fine, since any replication from the destination back to the source will be un-deduplicated.
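In 7-mode terms, the QSM destination-only setup described above comes down to two commands, run on the destination system (the volume name dst_vol is a placeholder):

```
dst> sis on /vol/dst_vol
dst> sis config -s auto /vol/dst_vol
```

The source volume is left alone, so no sis commands are needed there.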

The second type of replication is Volume SnapMirror, or VSM, which takes a different approach.  VSM replicates entire volumes (including Qtrees) at the physical level.  What this means for deduplication is that blocks are replicated once, and any deduplication pointers are sent along with the blocks.  Because replication happens at the physical level, the destination volume “inherits” deduplication automatically.  Here’s a diagram that shows how VSM works with deduplication:

To configure VSM for deduplication, enable it on both the source and destination volumes, but only set the deduplication schedule at the source.  The source volume will do all the work and the destination volume will get deduplication for free.  After a Failover/Failback event, you might want to run a deduplication scan on the source volume (sis start -s) to pick up any duplicate blocks that might have been written to the destination during the Failover; but then again, it’s probably a very small amount that won’t be worth the effort.  Your choice.
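A rough 7-mode sketch of the VSM setup just described, with placeholder volume names: enable deduplication on both ends, but schedule it only on the source:

```
src> sis on /vol/src_vol
src> sis config -s auto /vol/src_vol
dst> sis on /vol/dst_vol
src> sis start -s /vol/src_vol    # optional rescan after a Failover/Failback
```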

In a nutshell, that’s how deduplication and SnapMirror work together.  If you’d like to read a much more complete description, here is an excellent Technical Report that includes best practices.

Categories: netapp Tags: , ,

SANtricity E-Series

December 13, 2011 Leave a comment

To manage NetApp E-Series arrays, you use the SANtricity Storage Manager software. SANtricity is easy and intuitive to use. Figure 1 shows the first screen you see after selecting an array to manage.


The process of configuring storage on an E-Series array is, in my opinion, really not that different from doing it on a FAS storage system. Of course, the commands are different. However, conceptually, they are quite similar. I’m going to mention just a few very basic storage management tasks on an E-Series array and draw some comparisons between E-Series array and FAS storage systems.

Configure Host Access

This step basically establishes the path(s) between the array and host(s) so that the host can access the storage. It is similar to creating igroup(s) on a FAS storage system. Note that with E-Series arrays, you can configure host access manually or automatically. Auto-configuration involves a host discovery step by SANtricity. To ensure the path is working, SANtricity creates a tiny 20-MB volume on the array and presents it as a 20-MB disk to the host (see Fig. 2). So don’t be alarmed if you see this disk show up on the host; it’s actually a common practice among a number of storage vendors.


Create Volume Groups and Volumes

A Volume Group is a logical storage entity that aggregates a number of physical drives. When you create a Volume Group, you select the number of disks as well as a RAID level (e.g., 0, 1, 5 or 6) for that Volume Group. Think of it as an aggregate and a FlexVol combined on FAS, although they are not quite the same. Within a Volume Group, you can create one or more volumes, which are similar to LUNs on FAS. Figure 3 shows the relationship between Volume Groups and Volumes.


Create Host-to-Volume Mappings

This step maps host(s) to volumes so that the host can access the volumes, and thus the storage array. It is very similar to LUN mapping on FAS. Figure 4 captures the state after several volumes have been mapped to the host x3550-test. Note that if multi-pathing is used, the proper DSM should be installed and/or configured.


Configure Hot Spares

On E-Series arrays, hot spares should be configured so they can be used automatically in place of a failed drive in a volume group. When you configure hot spares, you can select which disks, and how many, should be hot spares. Note that hot spares are global, meaning they can be used by any volume group in the array. Figure 5 shows two configured hot spares. This step is different from FAS, since on FAS storage systems un-configured disks usually become hot spares automatically.


SANtricity Storage Manager is a powerful storage management tool. Here, I only touched on a few very basic tasks. Yet these simple configuration steps are enough to let a host access an E-Series array and perform I/O operations (read and write) between the host and the array.

Categories: netapp Tags: ,

Space Reclaimer for NetApp SnapDrive

October 18, 2011 Leave a comment

SnapDrive for Windows 6.3, which was released last year, introduced support for VMDKs on NFS and VMFS Datastores.


A couple of quick notes: you need Data ONTAP 7.3.4 or later to use block reclamation with SnapDrive for VMDKs.

You also need VSC 2.0.1 or later installed, with the Backup and Recovery feature, and SnapDrive (within the VM) must have network connectivity to the VSC (port 8043, by default) as well as to Virtual Center.

Also, SnapDrive cannot create VMDKs for you the way it can create RDM LUNs. Instead, you have to create VMDKs the old-fashioned way, but once they’re attached to the VM, SnapDrive will be able to manage them.

Okay, so I’ve got a VMDK (my C: Drive), which is in an NFS Datastore.


I copied 5GB worth of data into the C: drive, then deleted it. This left my VMDK at 10GB in size.


So, Windows took up about 5GB, and the data (which is now deleted) was about 5GB – so let’s kick off space reclamation and see how much space I get back.

Right click on the VMDK, and select “Start Space Reclaimer”.


It will do a quick check to see if it can actually reclaim any space for you.


The confirmation box reckons I can reclaim up to 3GB? Hmm, I was hoping for a bit more than that. Well, let’s run it anyway and see how well it does.


It’s worth noting the warning here – while the VSC requires VMs to be shut down in order to reclaim space, SnapDrive runs space reclamation while the VM is powered up. However, it will take a back seat to any actual I/O that’s going on, so you might want to run it during a low-usage period.


So, I clicked okay, and it kicked off space reclamation – and it even gives me a nice little progress bar.


In my lab, it took about 3 minutes, and when it was done, it had shrunk my VMDK down to 5.6 GB.



So it was just being modest earlier when it said I could free up to 3GB!

In total, it has reclaimed 5.2GB – which is actually a little more than the data I copied in and deleted to start with!

Categories: netapp Tags: , ,

NetApp SAN Boot with Windows

October 6, 2011 Leave a comment

Here are some thoughts and ideas concerning booting from SAN, gathered when we attempted this with our NetApp array.

  1. SAN Boot is fully supported by MSFT.  The first thing that happened is that we were told that SAN boot is not supported and that we could not get Microsoft support for this configuration.  It turns out that this is not correct.  SAN boot is fully supported by Microsoft, along with HW partners like NetApp.  This TechNet article fully outlines MSFT’s support for SAN Boot:
  2. Zoning is the #1 issue with SAN Boot on FC. In talking with the NetApp support team (who were a HUGE help on this issue), we learned that the most common issue in SAN Boot over Fibre Channel is zoning.  Because zoning can be complex, it is the most likely cause of error.  We strongly recommend you check and then double-check your zoning before opening a support ticket.  In our case, the zoning for the server was correct, but we did make a zoning error on another server that we were able to correct on our own.
  3. Windows MPIO support at install time is limited. Because WinPE is not MPIO-aware, there can be strange results when deploying against a LUN that is visible via multiple paths.  Keep in mind that at install time, Windows boots into boot.wim, which runs WinPE rather than a full Windows install.  After the bits are copied locally, Windows reboots to complete the install, and at that point Windows proper is actually running.  Because of this, the NetApp support team recommends having only one path to the LUN at install time, then adding paths later once Windows is up and running and you can enable Windows MPIO.
  4. AND YET…  MPIO is strongly recommended for SAN Boot.  Because a Windows host will blue-screen if its boot LUN dies, MPIO is strongly recommended for boot LUNs.  This is documented here:  This can seem contradictory at first, but the bottom line is that MPIO is good; just add it later, once Windows is up and running correctly.
  5. Yes, but what about Sysprep?  It turns out that MPIO is not supported for Sysprep-based deployments:  So, again, you need to configure MPIO post-deployment when you are deploying from sysprep’d images.  In the case of NetApp, we strongly recommend using Sysprep boot LUNs, which you can then clone for new machines.  This significantly shortens deployment time compared to doing a full Windows install for each new host.
  6. It’s all about BIOS. Actually installing Windows on a boot LUN requires that Windows Setup sees your target LUN as a valid install path.  This means the server must report this drive as a valid install target, or Setup will not let you select the disk.  For FC, you will need to enable the HBA’s boot BIOS setting and select your target LUN in the HBA setup screen.  This process varies by vendor.  Then you need to make this disk your #1 boot target in your server’s BIOS.  Again, this process varies by manufacturer.  As noted above, you should establish only one path.  This includes dual-port HBAs – configure only one of the ports.
  7. Where’s my disk? Once you do all of the above correctly, Setup may still refuse to show you the disk.  This could be because the correct driver is not present on the install media.  One way to fix this is to inject drivers into your Boot.WIM and Install.WIM.  This process is required if you are using WDS, but optional if you are hand-building a server from DVD or ISO.  In our case, we were building a single master image that we were going to Sysprep, so we simply inserted the media and added the drivers manually during Setup.
  8. OK, the disk is there, but I can’t install! One funny thing about Windows Setup is that if you are installing from DVD, that DVD must be present to install (duh).  This is fine, unless you used the process above to load the driver from disc, which required removing the install DVD.  If you then select the drive and click Install, Windows will fail to install with a fairly obtuse error.  You need to remove the driver DVD at this point and put the install DVD back in.  Seems obvious, but it took me a few minutes to figure out what was wrong the first time I tried it.

The Windows Host Utilities Installation and Setup Guide has a very detailed description of Windows support for FC LUNs, and there is a step-by-step process in that guide for configuring boot LUNs.

Categories: netapp Tags: , ,