
Posts Tagged ‘Snapmirror’

SnapMirror and Deduplication

February 16, 2012

In a recent blog post, I talked about the interaction between deduplication and SnapVault.  In this post, I'll discuss SnapVault's cousin, SnapMirror.

SnapVault was designed to make efficient D2D backup copies, but SnapMirror has a different purpose – making replication copies.  Using good old Snapshot technology, SnapMirror transfers snapshots from one storage system to another, usually from the data center to an offsite disaster recovery location.

SnapVault and SnapMirror have many similarities, but one important item distinguishes these two cousins: unlike SnapVault, SnapMirror relationships are peer-based and can be reversed.  In fact, when we talk about SnapMirror pairs, we don't use the terms primary and secondary as we do with SnapVault; instead, we refer to source and destination systems.  Either of the SnapMirror systems can be a source or a destination; it just depends on the direction the snapshots are moving.  Take a look at the diagram below to get a better understanding of what I mean:

https://i0.wp.com/media.netapp.com/images/blogs-6a00d8341ca27e53ef01348486ab8d970c-800wi.jpg

I've used this diagram in dozens of customer briefings to point out the subtle differences between SnapVault and SnapMirror.  First of all, notice the arrows.  SnapVault's go from left to right only, but SnapMirror's arrows travel in both directions.  Normally, the SnapMirror source system (the one on the left) controls the flow of application data to servers and clients.  However, if the source system goes down for some reason, the SnapMirror destination system (on the right) takes control, and we call this a "Failover" event.  When we bring the source system back up and revert control to it, we call this a "Failback".  In either case, Snapshot copies are passed back and forth between the systems to ensure that both the source and destination systems are synchronized to the current point in time, using the most recent SnapMirror copy.
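To make the Failover/Failback flow concrete, here is a minimal 7-Mode command sketch.  The names are hypothetical (filerA is the original source, filerB is the DR destination, vol_dr is the replicated volume), and the exact procedure will vary with your environment:

filerB> snapmirror break vol_dr
(Failover: the DR copy becomes writable and clients are redirected to filerB.)

filerA> snapmirror resync -S filerB:vol_dr vol_dr
(Failback, step 1: once filerA is healthy again, reverse the relationship so filerA picks up the changes made at the DR site.)

filerA> snapmirror quiesce vol_dr
filerA> snapmirror break vol_dr
(Failback, step 2: make filerA writable again and point clients back at it.)

filerB> snapmirror resync -S filerA:vol_dr vol_dr
(Failback, step 3: restore the original source-to-destination direction.)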

Now, let's talk about using deduplication with SnapMirror.  There are two types of SnapMirror replication, and deduplication behaves differently with each type.

The first type is called Qtree SnapMirror, or QSM.  As the name implies, QSM performs replication at the Qtree level.  What is a Qtree?  It's a logical subset of a NetApp volume.  Storage admins like to use Qtrees in NAS systems when they need to administer quotas or set access permissions.  Much more info on the whys and hows of Qtrees can be found in the Data ONTAP System Administration Guide on the NOW Support site.

In the context of deduplication, QSM presents a bit of a problem.  Since replication is done at the logical level, any deduplication done at the source will be re-inflated at the destination and will need to be re-deduplicated there, which largely defeats the purpose of space reduction.  But there is one valuable use case: if you don't want to deduplicate the source and only want to deduplicate the destination, QSM makes perfect sense.  Refer to the following diagram:

https://i0.wp.com/media.netapp.com/images/blogs-6a00d8341ca27e53ef0133f15b6b95970b-800wi.jpg

As the diagram shows, with QSM, only the Qtree portion of the volume is replicated and it is only deduplicated at the DR site.  To configure QSM for deduplication, just enable it on the destination volume and set the deduplication schedule to “auto”.  The source volume will remain untouched and the destination volume will deduplicate automatically.  Failovers and Failbacks will work just fine, since any replication from the destination back to the source will be un-deduplicated.
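As a rough example, assuming a hypothetical destination volume named dst_vol on the DR filer, the 7-Mode configuration would look something like this:

dr_filer> sis on /vol/dst_vol
(Enable deduplication on the destination volume.)

dr_filer> sis config -s auto /vol/dst_vol
(The "auto" schedule runs deduplication automatically as new data arrives.)

dr_filer> sis status /vol/dst_vol
(Verify the deduplication state and schedule.)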

The second type of replication is Volume SnapMirror, or VSM, which takes a different approach.  VSM replicates entire volumes (including Qtrees) at the physical level.  What this means for deduplication is that blocks are replicated only once, and any deduplication pointers are sent along with the blocks.  Because replication happens at the physical level, the destination volume "inherits" deduplication automatically.  Here's a diagram that shows how VSM works with deduplication:

https://i0.wp.com/media.netapp.com/images/blogs-6a00d8341ca27e53ef01348482dc7a970c-800wi.jpg

To configure VSM for deduplication, enable it on both the source and destination volumes but only set the deduplication schedule at the source.  The source volume will do all the work and the destination volume will get deduplication for free.  After a Failover/Failback event, you might want to run a deduplication scan on the source volume (sis start -s) to pick up any duplicate blocks that might have been written to the destination during the Failover, but then again it's probably a very small amount that won't be worth the effort.  Your choice.
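Following the same pattern, here is a rough sketch of the VSM case.  The volume name src_vol and the nightly schedule are assumptions; only the source-side commands are shown, since the destination inherits the savings:

src_filer> sis on /vol/src_vol
src_filer> sis config -s sun-sat@0 /vol/src_vol
(Deduplicate the source every night at midnight; the savings replicate to the destination along with the blocks.)

src_filer> sis start -s /vol/src_vol
(Optional, after a Failover/Failback: rescan the source to catch duplicate blocks written at the DR site.)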

In a nutshell, that’s how deduplication and SnapMirror work together.  If you’d like to read a much more complete description, here is an excellent Technical Report that includes best practices.


Firewall usage with SnapMirror

May 24, 2011

SnapMirror uses the typical socket/bind/listen/accept sequence on a TCP socket.

The SnapMirror source binds on port 10566.  The destination storage system contacts the SnapMirror source storage system on port 10566, using any available port assigned by the system, so the firewall must allow requests to port 10566 on the SnapMirror source storage system.  Synchronous SnapMirror requires additional TCP ports to be open: the source storage system listens on TCP ports 10566 and 10569, and the destination storage system listens on TCP ports 10565, 10567, and 10568.  Therefore, you should ensure that the firewall allows the range of TCP ports from 10565 to 10569.
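As an illustration only, on a Linux firewall sitting between the two sites the rules might look roughly like the following.  The addresses (10.0.1.10 for the source filer, 10.0.2.20 for the destination filer) are made up; adapt them to your own topology and firewall product:

# Asynchronous SnapMirror: the destination connects to the source on TCP 10566
iptables -A FORWARD -p tcp -s 10.0.2.20 -d 10.0.1.10 --dport 10566 -j ACCEPT

# Synchronous SnapMirror: allow the 10565-10569 range in both directions
iptables -A FORWARD -p tcp -s 10.0.2.20 -d 10.0.1.10 --dport 10565:10569 -j ACCEPT
iptables -A FORWARD -p tcp -s 10.0.1.10 -d 10.0.2.20 --dport 10565:10569 -j ACCEPT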

Removing broken SnapMirror relationships

May 24, 2011

If you have managed SnapMirror relationships on a NetApp SAN, you have no doubt encountered problems deleting them after they have been broken off. One command I have found that resolves this when FilerView will not work is:

snapmirror release source { filer:volume | filer:qtree }

This tells SnapMirror that a certain destination mirror is no longer going to request updates. If a destination is no longer going to request updates, you must tell SnapMirror so that it no longer retains a snapshot for that destination. The command removes snapshots that are no longer needed for replication, and it can be used to clean up SnapMirror-created snapshots after snapmirror break is issued on the destination side.
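For example (the names are hypothetical), if filerA:vol1 used to replicate to filerB:vol1_mirror and that relationship has been broken off for good, you would run the following on the source filer:

filerA> snapmirror release vol1 filerB:vol1_mirror

You can see which destinations the source is still retaining snapshots for with the snapmirror destinations command.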

I find I have to use this command every so often to clean up my snapmirror configs.

NetApp SnapMirror for Migrations

May 24, 2011

Volume migration using SnapMirror

Data ONTAP SnapMirror is designed to be a simple, reliable, and inexpensive tool for disaster recovery of business-critical applications. It comes with Data ONTAP by default but has to be licensed before use.
Apart from DR, SnapMirror is extremely useful in situations like these:

1. Aggregates or volumes have reached their maximum size limit.

2. You need to change the volume's disk type (tiering).

Prep work: Build a new aggregate from free disks

1. List the spares in the system

# vol status -s

Spare disks

RAID Disk       Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
---------       ------  --  ----- --- ---- ---- ----  ---  --------------    --------------
Spare disks for block or zoned checksum traditional volumes or aggregates
spare           7a.18   7a    1    2  FC:B  -   FCAL 10000 372000/761856000  560879/1148681096
spare           7a.19   7a    1    3  FC:B  -   FCAL 10000 372000/761856000  560879/1148681096
spare           7a.20   7a    1    4  FC:B  -   FCAL 10000 372000/761856000  560879/1148681096
spare           7a.21   7a    1    5  FC:B  -   FCAL 10000 372000/761856000  560879/1148681096
spare           7a.22   7a    1    6  FC:B  -   FCAL 10000 372000/761856000  560879/1148681096
spare           7a.23   7a    1    7  FC:B  -   FCAL 10000 372000/761856000  560879/1148681096
spare           7a.24   7a    1    8  FC:B  -   FCAL 10000 372000/761856000  560879/1148681096
spare           7a.25   7a    1    9  FC:B  -   FCAL 10000 372000/761856000  560879/1148681096
spare           7a.26   7a    1   10  FC:B  -   FCAL 10000 372000/761856000  560879/1148681096

2. Create the new aggregate

Add the new disks. Make sure you add enough disks to create complete RAID groups. Otherwise, when you later add disks to the aggregate, all new writes will go to the newly added disks until they fill up to the level of the other disks in the RAID group, which creates a disk bottleneck in the filer because all the writes are handled by a limited number of spindles.

# aggr create aggr_new -d 7a.18 7a.19 7a.20 7a.21 7a.22 7a.23 7a.24 7a.25 7a.26 7a.27

3. Verify the aggregate is online

# aggr status aggr_new

4. Create a new volume named vol_new with size 1500g on aggr_new

# vol create vol_new aggr_new 1500g

5. Verify the volume is online

# vol status vol_new

6. Set up SnapMirror between the old and new volumes

First, restrict the destination volume: # vol restrict vol_new

a. Initialize the relationship: # snapmirror initialize -S filername:volname filername:vol_new

b. Also make an entry in the /etc/snapmirror.conf file for this SnapMirror session:

filername:/vol/volume filername:/vol/vol_new kbs=1000 0 0-23 * *

Note: kbs=1000 throttles the SnapMirror transfer rate to 1,000 kilobytes per second.
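If you want the final cutover transfers to run at full speed, you could replace the kbs argument with a dash (the snapmirror.conf placeholder for no options), for example:

filername:/vol/volume filername:/vol/vol_new - 0 0-23 * *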

On the day of cutover

Update the SnapMirror session:

# snapmirror update vol_new
Transfer started.
Monitor progress with ‘snapmirror status’ or the snapmirror log.

# snapmirror status vol_new
Snapmirror is on.
Source Destination State Lag Status
filername:volume_name filername:vol_new   Snapmirrored 00:00:38 Idle

Quiesce the relationship. This finishes any in-progress transfers and then halts further updates from the SnapMirror source to the destination. Quiesce the destination:

# snapmirror quiesce vol_new
snapmirror quiesce: in progress
This can be a long-running operation. Use Control – C (^C) to interrupt.
snapmirror quiesce: vol_new : Successfully quiesced

Break the relationship. This causes the destination volume to become writable:

# snapmirror break vol_new
snapmirror break: Destination vol_new is now writable.
Volume size is being retained for potential snapmirror resync. If you would like to grow the volume and do not expect to resync, set vol option fs_size_fixed to off

Enable quotas: # quota on vol_new

Rename volumes

Once the SnapMirror session is terminated, we can rename the volumes:

# vol rename volume_name volume_name_temp

# vol rename vol_new volume_name

Remember, the shares follow the volume name: if the volume hosting a share is renamed, the path of the share changes accordingly. This requires us to delete the old share and recreate it with the correct volume name. The file cifsconfig_share.cfg under the etc$ share has a listing of the commands that were run to create the shares; use this file as a reference.

cifs shares -add "test_share$" "/vol/volume_name" -comment "Admin Share Server Admins"

cifs access "test_share$" S-1-5-32-544 "Full Control"

Use a -f at the end of the cifs shares -add line to eliminate the y or n prompt.

Start quotas on the new volume

# quota on volume_name

You are done. The shares and qtrees now refer to the new volume on the new aggregate. Test the shares by mapping them on a Windows host.
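For example, from a Windows command prompt (using the share name created above):

net use Z: \\filername\test_share$

If the drive maps and the data is browsable, the migration is complete.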


Resizing a Snapmirrored NetApp Flex Volume

May 2, 2011

So you have a bunch of volumes that you are currently SnapMirroring to an alternate NetApp SAN. After three months in production, your volume requirements grow. As a NetApp admin, you say: hey, no problem, just increase the size of the volume. It's so easy, I'll just do it:

site1> vol size demo4 +25m

There, all done. I just increased the size by 25 MB. Wow, that was easy. But wait, what's this? The SnapMirror updates are failing now. That is because the destination volume receiving the SnapMirror updates has to be equal to or larger than the source. Since we just increased the source, we caused the problem shown below:

site2>snapmirror update -S site1:demo4 site2:demo4

Mon Apr 12 00:39:45 GMT [replication.dst.err:error]: SnapMirror: destination transfer from site1:demo4 to demo4 : destination volume too small; it must be equal to or larger than the source volume. Transfer aborted: destination volume too small; it must be equal to or larger than the source volume.

So now what? Well, all you have to do is resize the destination volume to be the same size or larger. In fact, this is the first place to start whenever you need to increase the size of a source volume.

Five easy steps to increasing the snapmirrored volume on the destination filer:
On the destination filer, in our example "site2" and FlexVol "demo4":
Step 1: Temporarily break the mirror to make the volume read/writable
site2> snapmirror break demo4
Step 2: Change the "demo4" volume option fs_size_fixed to off
site2> vol options demo4 fs_size_fixed off
Step 3: Increase the size of the destination volume
site2> vol size demo4 +25m
Step 4: Change fs_size_fixed back to on before we resume the mirror
site2> vol options demo4 fs_size_fixed on
Step 5: Re-establish the mirror
site2> snapmirror resync -S site1:demo4 demo4
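Once the resync kicks off, it doesn't hurt to verify that the relationship is healthy and that the sizes now line up, for example:

site2> snapmirror status demo4
site2> vol size demo4
site1> vol size demo4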

A couple of things to consider with SnapMirrored volumes:
1. The destination filer must run a Data ONTAP version equal to or later than the source filer's.
2. If an A-SIS (deduplicated) volume is being replicated to a smaller filer, be mindful of the deduplication limits on the destination filer.