Archive for May, 2011

Firewall usage with SnapMirror

May 24, 2011

SnapMirror uses the typical socket/bind/listen/accept sequence on a TCP socket.

SnapMirror source binds on port 10566. The destination storage system contacts the SnapMirror source storage system at port 10566 from any available port assigned by the system, so the firewall must allow requests to port 10566 on the SnapMirror source storage system. Synchronous SnapMirror requires additional TCP ports to be open: the source storage system listens on TCP ports 10566 and 10569, and the destination storage system listens on TCP ports 10565, 10567, and 10568. Therefore, you should ensure that the firewall allows the range of TCP ports from 10565 to 10569.
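As an illustration only, assuming a Linux iptables firewall sits between the two systems (the src_filer_ip and dest_filer_ip placeholders are hypothetical), rules along these lines would open the full synchronous range in both directions:

# iptables -A FORWARD -p tcp -s dest_filer_ip -d src_filer_ip --dport 10565:10569 -j ACCEPT
# iptables -A FORWARD -p tcp -s src_filer_ip -d dest_filer_ip --dport 10565:10569 -j ACCEPT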
Categories: netapp

Removing broken SnapMirror relationships

May 24, 2011

If you have managed SnapMirror relationships on a NetApp SAN, you have no doubt encountered problems deleting them after they have been broken off. One command I have found that resolves this, if FilerView will not work, is:

snapmirror release source { filer:volume | filer:qtree }

Tell SnapMirror that a certain direct mirror is no longer going to request updates. If a certain destination is no longer going to request updates, you must tell SnapMirror so that it will no longer retain a snapshot for that destination. This command removes snapshots that are no longer needed for replication, and can be used to clean up SnapMirror-created snapshots after snapmirror break is issued on the destination side.

I find I have to use this command every so often to clean up my SnapMirror configs.
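For example (the filer and volume names here are hypothetical), after breaking off a mirror of vol1 that replicated from filer1 to filer2, you would run this on the source filer:

filer1> snapmirror release vol1 filer2:vol1_mirror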

NetApp SnapMirror for Migrations

May 24, 2011

Volume migration using SnapMirror

Data ONTAP SnapMirror is designed to be a simple, reliable and cheap tool to facilitate disaster recovery for business-critical applications. It comes with Data ONTAP by default but has to be licensed to use.
Apart from DR, SnapMirror is extremely useful in situations like:

1. Aggregates or volumes have reached their maximum size limit.

2. Need to change the volume disk type (tiering).

Prep work: build a new aggregate from free disks

1. List the spares in the system

# vol status -s

Spare disks

RAID Disk       Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)

---------       ------  ------------- ---- ---- ---- ----- --------------    --------------

Spare disks for block or zoned checksum traditional volumes or aggregates

spare           7a.18 7a    1   2   FC:B   -  FCAL 10000 372000/761856000  560879/1148681096

spare           7a.19 7a    1   3   FC:B   -  FCAL 10000 372000/761856000  560879/1148681096

spare           7a.20 7a    1   4   FC:B   -  FCAL 10000 372000/761856000  560879/1148681096

spare          7a.21 7a    1   5   FC:B   -  FCAL 10000 372000/761856000  560879/1148681096

spare          7a.22 7a    1   6   FC:B   -  FCAL 10000 372000/761856000  560879/1148681096

spare          7a.23 7a    1   7   FC:B   -  FCAL 10000 372000/761856000  560879/1148681096

spare          7a.24 7a    1   8   FC:B   -  FCAL 10000 372000/761856000  560879/1148681096

spare          7a.25 7a    1   9   FC:B   -  FCAL 10000 372000/761856000  560879/1148681096

spare          7a.26 7a    1   10  FC:B   -  FCAL 10000 372000/761856000  560879/1148681096

2. Create new aggregate

Add the new disks. Make sure you add sufficient disks to create complete RAID groups. Otherwise, when you later add new disks to the aggregate, all new writes will go to the newly added disks until they fill up to the level of the other disks in the RAID group. This creates a disk bottleneck in the filer, as all writes are then handled by a limited number of spindles.

# aggr create aggr_new -d 7a.18 7a.19 7a.20 7a.21 7a.22 7a.23 7a.24 7a.25 7a.26 7a.27

3. Verify the aggregate is online

# aggr status aggr_new

4. Create a new volume named vol_new, of size 1500g, on aggr_new

# vol create vol_new aggr_new 1500g

5. Verify the volume is online

# vol status vol_new

6. Set up SnapMirror between the old and new volumes

First you need to restrict the destination volume:

# vol restrict vol_new

Then:

a. snapmirror initialize -S filername:volname filername:vol_new

b. Also make an entry in the /etc/snapmirror.conf file for this SnapMirror session:

filername:/vol/volume filername:/vol/vol_new kbs=1000 0 0-23 * *

Note: kbs=1000 throttles the SnapMirror transfer speed to 1,000 kilobytes per second, and the 0 0-23 * * fields schedule an update at minute 0 of every hour.

On the day of cutover

Update the SnapMirror session

# snapmirror update vol_new
Transfer started.
Monitor progress with ‘snapmirror status’ or the snapmirror log.

# snapmirror status vol_new
Snapmirror is on.
Source Destination State Lag Status
filername:volume_name filername:vol_new   Snapmirrored 00:00:38 Idle

Quiesce the relationship. This finishes any in-progress transfers and then halts further updates from the SnapMirror source to the destination. Quiesce the destination:

# snapmirror quiesce vol_new
snapmirror quiesce: in progress
This can be a long-running operation. Use Control-C (^C) to interrupt.
snapmirror quiesce: dbdump_pb : Successfully quiesced

Break the relationship. This causes the destination volume to become writable:

# snapmirror break vol_new
snapmirror break: Destination vol_new is now writable.
Volume size is being retained for potential snapmirror resync. If you would like to grow the volume and do not expect to resync, set vol option fs_size_fixed to off

Enable quotas: quota on volname

Rename volumes

Once the snapmirror session is terminated, we can now rename the volumes

# vol rename volume_name volume_name_temp

# vol rename vol_new volume_name

Remember, the shares move with the volume name, i.e. if the volume hosting the share is renamed, the corresponding change is reflected in the path of the share. This requires us to delete the old share and recreate it with the correct volume name. The file cifsconfig_share.cfg under etc$ has a listing of the commands run to create the shares. Use this file as a reference.

cifs shares -add “test_share$” “/vol/volume_name” “Admin Share Server Admins”

cifs access "test_share$" S-1-5-32-544 "Full Control"

Use a -f at the end of the cifs shares -add line to eliminate the y or n prompt.

Start quotas on the new volume

# quota on volume_name

You are done. The shares and qtrees now refer to the new volume on a new aggregate. Test the shares by mapping them on a Windows host.

Categories: netapp

NetApp DataMotion for OnTap 8.0.1 7-Mode

May 17, 2011

NetApp DataMotion for Volumes. Not many people know about this feature, so I thought I would let the folks out there know: DataMotion lets you move volumes nondisruptively between aggregates on the same controller. With Data ONTAP 8.0.1 7-Mode, this is supported for volumes that contain only LUNs. This nondisruptive data movement is useful for many purposes: for example, to free up space in an aggregate, to load-balance disk operations, to move data to a different tier of storage, and to replace old disk drives with newer models. Application and user access is maintained during and after data movement, and data can be moved between aggregates that use different drive types; FC, SAS, SSD, and SATA drives are all supported.

NetApp DataMotion for volumes lets you nondisruptively migrate volumes containing LUNs.

Differences between DataMotion for Volumes and DataMotion for vFiler: DataMotion for vFiler allows you to migrate volumes between separate storage systems or HA pairs and is managed using NetApp Protection Manager. DataMotion for Volumes can only be invoked from the command line, using the vol move command.
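A minimal sketch of the sequence (the volume and aggregate names are hypothetical; run it on the controller that owns the volume):

# vol move start myvol aggr_sas
# vol move status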

Great pdf on this subject.

Categories: netapp

NetApp “Config” Command

May 12, 2011

I think it's very important to save a config of a good setup. Firstly, it's a great reference if you ever need to go back and refer to things; secondly, it's a great way to show that what you did was actually correct and that you configured things correctly from the start!

There is a handy tool provided within OnTap to do entire config dumps, compares and restores. This is limited to the filer's base configuration and doesn't necessarily include areas like volume setup.

filer01> config
Usage:
config clone <filer> <remote_user>:<remote_passwd>
config diff [-o <output_file>] <config_file1> [<config_file2>]
config dump [-f] [-v] <config_file>
config restore [-v] <config_file>

The command is very simple and straightforward. You start by dumping out the configuration from the filer; this automatically goes into /etc/configs. From here you can then clone the config if needed, or compare (diff) configs. Running diff is a very good way of comparing a config between 2 points in time if you aren't sure what has changed, or of comparing systems across a filer upgrade if you copy the config files between the 2 systems (see the NetApp file copy post below). And finally you can also use the restore feature, although this would probably require a reboot, and may have a knock-on effect on what may or may not be required in various other config files within /etc.
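For example (the file names here are hypothetical; dump files are written under /etc/configs):

filer01> config dump 20110512_baseline.cfg
filer01> config diff 20110512_baseline.cfg 20110601_current.cfg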

Overall a very useful command.  I use this most for taking backups of filer configs and comparing them between similar systems (for instance primary and DR), or even comparing configs over time.

Netapp Admin Pocket Guide

May 10, 2011

Here are a few Data Ontap CLI commands that I've put together for reference. I will continuously add to this list.

General Commands
setup (Re-Run initial setup)
halt (Reboots controller into bootrom)
reboot (Reboots controller back to Data Ontap)
sysconfig -a (System configuration and information)
java netapp.cmds.jsh (limited freebsd cli)
storage show disk (show physical information about disks)
passwd (Change password for logged in user)
sasadmin shelf (shows a graphical layout of your shelves with occupied disk slots)
options trusted.hosts x.x.x.x or x.x.x.x/nn (hosts that are allowed telnet, http, https and ssh admin access. x.x.x.x = ip address, /nn is network bits)
options trusted.hosts * (Allows all hosts to the above command)
sysstat -s 5 (Displays operating statistics every 5 seconds i.e. CPU, NFS, CIFS, NET, DISK, etc)

Diagnostics
Press DEL at boot up during memory test followed by boot_diags and select all (Diagnostic tests for a new install)
priv set diags (Enter diagnostics CLI mode from the Ontap CLI)
priv set (Return to normal CLI mode from diagnostics mode)

Software
software list (Lists software in the /etc/software directory)
software get http://x.x.x.x/8.0_e_image.zip 8.0_e_image.zip (Copy software from http to software directory)
software delete (Deletes software in the /etc/software directory)
software update 8.0_e_image.zip -r (Install software. The -r prevents it rebooting afterwards)

ACP (Alternate Control Path)
options acp.enabled on (Turns on ACP)
storage show acp -a (show ACP status)

Root Volume
The Root Volume can only be on a 32-bit aggregate; if you want to create a 64-bit aggregate you must create a separate aggregate.

Aggregates
aggr create aggregate_name (Creates an Aggregate)
aggr destroy aggregate_name (removes an Aggregate)
aggr offline aggregate_name (takes an Aggregate offline)
aggr online aggregate_name (bring an Aggregate online)
aggr options aggregate_name root (makes an Aggregate root|Only use if your Root Aggregate is damaged)
aggr status (shows status of all aggregates)
aggr status aggregate_name (show status of a specific Aggregate)
aggr show_space aggregate_name (shows specific aggregate space information)
aggr options aggregate_name nosnap=on (Disable snapshot autocreation)
aggr options aggregate_name raidsize=x (x being the number of drives in the RAID)
snap reserve -A aggregate_name 0 (Set Aggregate snap reserve to 0% or any number you enter)

Volumes
vol create volume_name (Creates a volume)
vol autosize volume_name (Shows autosize settings for a given volume)
vol autosize volume_name on|off (Turns Volume autosize on or off)
vol options volume_name (Lists volume options)
vol size volume_name + size k|m|g|t (increase volume size by KB, MB, GB or TB)
vol status -f (lists broken or failed disks)

Qtrees
qtree create /vol/volume_name/qtree_name (Create a qtree within a volume)
qtree security /vol/volume_name/qtree_name unix|ntfs|mixed (Change security settings of a qtree)
qtree stats qtree_name (Shows CIFS or NFS ops/sec for a given qtree)

Snapshots
snap create volume_name snapshot_name (create a snapshot)
snap list volume_name (List snapshots for a volume)
snap delete volume_name snapshot_name (delete a snapshot on a volume)
snap delete -a volume_name (Deletes all snapshots for a volume)
snap autodelete volume_name show (Shows snapshot autodelete settings for a volume)
snap restore -s snapshot_name volume_name (Restores a snapshot on the specified volume name)
snap sched volume_name weeks days hours@time (Creates a snapshot schedule on a volume i.e. snap sched volume 4 5 1@07)
snap delta volume_name (Shows delta changes between snapshots for a given volume)
snap reserve volume_name (Shows the snap reserve for a given volume)
snap reclaimable volume_name snapshot_name (Shows the amount of space reclaimable if you remove this snapshot from the volume)
options cifs.show_snapshot on (Allows snapshot directory to be browse-able via CIFS)
options nfs.hide_snapshot off (Allows snapshot directory to visible via NFS)

SnapMirror
options snapmirror.enable on (turns on SnapMirror. Replace on with off to toggle)
rdfile /etc/snapmirror.allow (Performed on the Source Filer. You should see your destination filers in this file.)
wrfile /etc/snapmirror.allow (Performed on the Source Filer. Overwrites the file with the specified destination filer name and ip address)
vol restrict volume_name (Performed on the Destination. Makes the destination volume read only which must be done for volume based replication. Don’t use for Qtree based replication)
snapmirror initialize -S srcfiler:source_volume dstfiler:destination_volume (Performed on the destination. This is for full volume mirror. For example snapmirror initialize -S filer1:vol1 filer2:vol2)
snapmirror initialize -S srcfiler:/vol/vol1/qtree dstfiler:/vol/vol1/qtree (Performed on the destination. Performs the same as the command above but for qtrees only)
snapmirror status (Shows the status of snapmirror and replicated volumes or qtree’s)
snapmirror quiesce volume_name (Performed on Destination. Pauses the SnapMirror Replication)
snapmirror break volume_name (Performed on Destination. Breaks or disengages the SnapMirror Replication)
snapmirror resync volume_name (Performed on Destination. When data is out of date, for example working off DR site and wanting to resync back to primary, only performed when SnapMirror relationship is broken)
snapmirror update -S srcfiler:volume_name dstfiler:volume_name (Performed on Destination. Forces a new snapshot on the source and performs a replication, only if an initial replication baseline has been already done)
snapmirror release volume_name dstfiler:volume_name (Performed on the Source. Removes a SnapMirror destination so its base snapshot is no longer retained)
/etc/snapmirror.conf (edit or wrfile this file to enter in a snapmirror schedule. i.e. srcfiler:vol1 dstfiler:vol1 - 0,15,30,45 * * * will replicate every 15 minutes. The four schedule fields, from left to right, are minute, hour, day of month and day of week; each field can be * or a comma-separated list of numbers)
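Putting those commands together, a minimal volume-replication setup run from the destination filer would look like this (all filer, aggregate and volume names, and the size, are hypothetical):

dstfiler> vol create vol1_mirror aggr1 100g
dstfiler> vol restrict vol1_mirror
dstfiler> snapmirror initialize -S srcfiler:vol1 dstfiler:vol1_mirror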

Cluster
cf enable (enable cluster)
cf disable (disable cluster)
cf takeover (take over resources from other controller)
cf giveback (give back controller resources after a take over)

vFiler – Multistore
vfiler status (Displays the status of the vfiler i.e. running or stopped)
vfiler run vfiler_name setup (Runs the vfiler setup wizard)
vfiler run vfiler_name cifs setup (Runs the cifs setup wizard for a vfiler)
vfiler create vfiler_name -i x.x.x.x /vol/volume_name or qtree_name (Creates a vfiler name with ip address x.x.x.x and assigns the volume or qtree to the vfiler)
vfiler add vfiler_name -i x.x.x.x /vol/volume_name (Adds an ip address and additional volume to an existing vfiler name)
vfiler remove vfiler_name -i x.x.x.x /vol/volume_name (Removes an IP address and volume from an existing vfiler)
vfiler rename vfiler_name_old vfiler_name_new (Renames a vfiler from old name to new name)
vfiler stop vfiler_name (Stops a vfiler instance)
vfiler start vfiler_name (Starts a vfiler instance)

Autosupport
options autosupport.support.enable on (Turns Autosupport on)
options autosupport.support.enable off (Turns Autosupport off)
options autosupport.doit "description" (creates an autosupport alert with a user-defined description)

Hot Spares
Any functioning disk that is not assigned to an aggregate but is assigned to a controller functions as a hot spare disk.
disk show
vol status -r (displays which disks are allocated as spare)

Disks
disk show (Show disk information)
disk show -n (Show unowned disks)
disk assign 0d.02.0 -s unowned (Changes ownership from owned to unowned or to other cluster member)
disk assign 0d.02.0 (assigns the disk to the controller you perform the command on)
options disk.auto_assign off (turns auto assign of unowned disks to controllers to off)
options disk.auto_assign on (turns auto assign of unowned disks to controllers to on)
storage show disk -p (displays primary, secondary port, shelf and bay in a metro cluster)

Luns
lun setup (runs the cli lun setup wizard)
lun offline lun_path (takes a lun offline)
lun online lun_path (brings a lun online)
lun show -v (Verbose listing of luns)
lun move /lun_path_source /lun_path_destination (Move lun from source to destination)
lun resize -f lun_path +|- new_size k|m|g|t (Resizes a lun by adding space (+) or subtracting space (-). Note: a lun can only ever grow to 10x its original size)

Fiber FCP
fcadmin config -t target 0a (Changes adapter from initiator to target)
fcadmin config (lists adapter state)
fcp start (Start the FCP service)
fcp stop (Stop the FCP service)
fcp show adapters (Displays adapter type, status, FC Nodename, FC Portname and slot number)
fcp nodename (Displays fiber channel nodename)
fcp show initiators (Show fiber channel initiators)
fcp wwpn-alias set alias_name wwpn (Set a fiber channel alias name for the given WWPN)
fcp wwpn-alias remove -a alias_name (Remove a fiber channel alias name for the controller)
igroup show (Displays initiator groups with WWN’s)

iSCSI
iscsi start (Start the iscsi service)
iscsi stop (Stop the iscsi service)
iscsi status (Show whether iscsi server is running or not running)
iscsi interface show (Show which interfaces are enabled or disabled for iscsi)
iscsi interface enable interface_name (Enable an interface for iscsi)
iscsi interface disable interface_name (Disable an interface for iscsi)
iscsi nodename (Display the controllers iscsi nodename)
igroup show (Displays iSCSI initiators)

Cifs
cifs setup (cifs setup wizard)
cifs terminate (terminate the cifs service)
cifs restart (restarts cifs)
cifs shares (displays cifs shares)
cifs status (show status of cifs)
cifs lookup SID|name (Either displays the SID if you type in the name or name if you type in the SID)
cifs sessions (Show you current cifs sessions)
cifs sessions -s username (Shows the current session for a user)
cifs broadcast -v volume_name "message" (Broadcast a message to all users connected to volume_name)
cifs shares -add share_name /vol/volume_name/qtree_name (Create a cifs share on a specific volume or qtree)
cifs shares -delete share_name (Deletes a share name)
cifs shares share_name (Displays full path and permissions of the share)
cifs access share_name -g user_rights (Grants specific user rights to the share)
cifs access share_name user_name permission (Grants a specific permission to a user for a share. Permissions = Full Control, Change, Read, No Access)
cifs domain info (Lists information about the filers connected Windows Domain)
cifs testdc ip_address (Test a specific Windows Domain Controller for connectivity)
cifs prefdc (Displays configured preferred Windows Domain Controllers)
cifs prefdc add domain address_list (Adds a preferred dc for a specific domain i.e. cifs prefdc add netapplab.local 10.10.10.1)
cifs prefdc delete domain (Delete a preferred Windows Domain Controllers)
cifs gpresult (Displays which Windows Group Policies apply to this filer)
cifs gpupdate (Forces an update of Windows Group Policy)
cifs top (Performance data for cifs. cifs.per_client_stats.enable option must be on to use this feature)
vscan on (Turns virus scanning on)
vscan off (Turns virus scanning off)
vscan reset (Resets virus scanning)

NFS
nfs setup (Runs the NFS setup wizard)
exportfs (Displays current exports)
exportfs -p path (Adds exports to the /etc/exports file)
exportfs -uav (Unexports all current exports)
exportfs -u path (Unexports a specific export from memory)
exportfs -z path (Unexports a specific export and also removes it from /etc/exports)
exportfs -a (Updates memory buffer with contents in /etc/exports)
nfsstat -d (Displays NFS statistics)

HTTP Admin
options httpd.admin.enable on (Turns on http web admin, na_admin)
options httpd.admin.access host=x.x.x.x,x.x.x.x (Allows admin access for specific hosts separated by a comma)

SIS (Deduplication)
sis status (Show SIS status)
sis config (Show SIS config)
sis on /vol/vol1 (Turn on deduplication on vol1)
sis config -s mon-fri@23 /vol/vol1 (Configure deduplication to run every Monday to Friday at 11pm on vol1)
sis start -s /vol/vol1 (Run deduplication manually on vol1)
sis status -l /vol/vol1 (Display deduplication status on vol1)
df -s vol1 (View space savings with deduplication)
sis stop /vol/vol1 (Stop deduplication on vol1)
sis off /vol/vol1 (Disables deduplication on vol1)

User Accounts
useradmin user add user_name -g group_name (Adds a user to a group)
useradmin user list (Lists current users)
useradmin user list user_name (List specific user information)
useradmin group list (Lists current groups)
useradmin group delete group_name (Deletes a specific group name)
useradmin group modify group_name -g new_group_name (Modify group name)
useradmin user delete user_name (Delete a specific user)
useradmin user modify user_name -g group_name (Adds a user to a group)
useradmin domain user add user_name -g group_name (Adds a Windows Domain user to a local group)
useradmin domain user list -g group_name (List Windows Domain users in a specific group)

DNS
dns flush (Flushes the DNS cache)

Reading and Writing Files
rdfile path/file (Reads a file)
wrfile path/file (Writes to a file. Warning this method overwrites the file. Make sure you copy out original contents if you wish to keep it. If you haven’t used this before try on the simulator.)
wrfile -a path/file (Writes to a file by appending the changes)

Logging
/etc/messages (All logging for the system is stored here)

Network
ifconfig vif0 x.x.x.x netmask x.x.x.x (Sets the IP address and netmask on the interface named vif0; the default route and DNS servers are set in /etc/rc and /etc/resolv.conf)

Windows Storage Viewer 1.1

May 9, 2011

Windows Storage Viewer (WSV) version 1.1 is now available. Major changes since the previous version:

  1. The main Server tab now also displays details on physical drive count and iSCSI objects
  2. Three new tabs have been added; iSCSI Portals, iSCSI Sessions & Drives
  3. iSCSI Portals lists the portals defined to the host.  Any can be right-clicked and deleted from the context menu
  4. iSCSI Sessions lists the active sessions and devices.
  5. Drives lists the physical drives, partitions and their associated drives.
  6. The Targets tab has been renamed to iSCSI Targets and now enables any target to be remotely logged on or off.
  7. A new menu option "Add Portal" allows a new portal to be added to a host remotely, similar to the way the Quick Connect option works on the iSCSI Initiator tool.

You can download Windows Storage Viewer 1.1.0 here. Credit goes to the Storage Architect.

WSV Screenshot 1
WSV Screenshot 2
WSV Screenshot 3
WSV Screenshot 4
WSV Screenshot 5
Categories: iSCSI

Windows Storage Server iSCSI Target software available for Win 2008 R2 now

May 9, 2011

This week, Microsoft released their iSCSI Target software for general availability.  Previously this had only been available for installations with Windows Storage Server via OEMs.  Now anyone with Windows 2008 R2 can install and use the software without restrictions.

Installation

Installation of the Microsoft Target is pretty simple; it can be downloaded here: Microsoft iSCSI Target, then follow the instructions.

Configuration

The Microsoft Target is configured through a MMC plugin that can be found under the Administrative Tools folder from the Start Menu (see screenshot 1).  As the management tool uses a vanilla MMC window, it’s rather basic in appearance but follows standard conventions of right-click options to select properties or using the Action menu item.  For instance, right-clicking on the Microsoft iSCSI Software Target displays a context menu and Properties option, leading to a two-tab dialog box.  This allows the IP and iSNS details to be specified.  In my example (screenshot 2) I’ve tied the Microsoft Target to a single IP address as all of the Target software products are deployed on the same server.  There doesn’t appear to be an option to change the listening port, which defaults to 3260.  iSNS server configuration is pretty simple, consisting of a list of either IP address or server names (screenshot 3).

Targets can be created by clicking on the iSCSI Targets tree item and selecting Action or right-clicking. The configuration wizard asks for basic details such as the target name and default security details; specifying "*" for the IQN provides open access. In my test environment I created two targets, target0 and target1. These are shown in screenshot 4. The properties for a target allow configuration of security/authentication, performance parameters and virtual disks. Virtual disks use the VHD format and can be either fully provisioned or differencing. Unfortunately thin provisioned VHDs are not supported, which is disappointing (see screenshot 5). Once created, virtual disks are associated with a target and exported for use. One or more LUNs can be associated with a target (as is standard with SCSI); these appear to the hosts as separate devices. The benefit of having multiple LUNs on the same target is that security is performed at the target rather than the LUN level, so access to one or more devices can be granted once on the target. Screenshot 6 shows the two targets configured and idle (no connected hosts), with screenshot 7 showing a single target login. There doesn't appear to be a way to find the logged-in initiator for a target, although this may be available via WMI (still under investigation).
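From the initiator side, a hedged sketch using the built-in Windows iscsicli tool (the portal IP and target IQN below are placeholders; the exact IQN is shown in the target's properties):

C:\> iscsicli QAddTargetPortal 10.0.0.10
C:\> iscsicli ListTargets
C:\> iscsicli QLoginTarget iqn.1991-05.com.microsoft:server1-target0-target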

Snapshots

A point in time (PIT) copy of a LUN can be created using snapshots.  Each snapshot represents a copy of the LUN at the time the snapshot was taken and can be used to return the LUN to a previous state.  Alternatively the snapshot can be exported via another target or mounted to the iSCSI Target host itself.  Either way, these LUNs are read-only copies and can’t be modified.  Screenshots 8 and 9 show the snapshot list and a schedule to create a daily snapshot of Virtual Disk 0.

Summary

The Microsoft iSCSI Target offers basic functionality with the ability to add snapshots. Not being able to use thin provisioned VHDs is disappointing; the underlying filesystem could be placed on thinly provisioned disks, but that may defeat the point of presenting storage using the iSCSI Target. Of course the iSCSI Target is free, and free is (almost) always good.

Categories: iSCSI

NetApp wrfile and rdfile commands

May 9, 2011

There are the standard "rdfile" and "wrfile" commands, to read and write a file respectively. Remember that wrfile is a complete file writer, not a file editor: as soon as you commit that command, you will have overwritten that file with a blank copy. You can use "wrfile -a" to append to a file, which can be useful for things like hosts files. Your best bet is to copy the output of "rdfile" into your favourite editor before pasting it back in after doing "wrfile".
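A safe pattern for appending a line (the host entry here is a hypothetical example):

filer01> rdfile /etc/hosts
filer01> wrfile -a /etc/hosts 10.0.0.50 newhost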

In “priv set advanced” you can use “ls” to look at what actually exists on your volumes (very useful in some cases, although no options available), but there is also a hidden java shell, “java netapp.cmds.jsh”. This gives you the ability to copy, move, and delete files (as well as a few other things). Use with caution and as a last resort as it’s totally unsupported, but can be useful if you’ve not got CIFS or NFS access but need to move things around.

There are other ways to manipulate and copy files around; check out NetApp File Copy.

Categories: netapp

How to copy files within NetApp

May 9, 2011

It always comes up: how can I copy single files, or large areas, directly from the NetApp console? Generally the answer comes back: you can't, use RoboCopy or rsync or another file migration tool. However, there are definitely ways of copying files around directly from the filer itself, and often this is the most efficient way of doing it! These just aren't the most intuitive or well-documented commands.

There may be other methods, and if you have something you have used in the past or you know of, please feel free to share! Not all methods are suitable for all tasks, but each has its own individual uses.

ndmpcopy

This is often overlooked as a file / folder copy command, and is often just used to migrate entire volumes around. In fact it can be used to copy individual folders or files around, and even better, can be used to copy data to other filers! Make sure NDMP is enabled first (ndmpd on). The syntax is quite simple…

ndmpcopy /vol/vol_source_name/folder/file /vol/vol_dest_name/file

Just to break this down, we are choosing to copy a file from "/vol/vol_source_name/folder" and we want to copy it into "/vol/vol_dest_name". This isn't too restrictive: we don't have to keep the same path, and we can even copy things about within the same volume (such as copying things into qtrees if you need). You can copy anything from an entire volume, to a single qtree, down to single folders way down in the directory tree. The only real restrictions are that you cannot use wildcards, and you cannot select multiple files to copy.

If you want to copy files from one filer to another, we simply extend this syntax…

ndmpcopy -sa username:password -da username:password source_filer:/vol/vol_source_name/folder/file destination_filer:/vol/vol_dest_name/file

Replace each username:password with the source filer (-sa) login and the destination filer (-da) login respectively. Here we copy a single file from one location on one filer to another location on another filer!

We can also define the incremental level of the transfer with the -l option. By default the system will do a level 0 transfer, but you can ask for a single level 1 or 2 incremental transfer. If the data has changed too much, or too much time has passed since the last copy, this may fail or may take longer than a clean level 0.
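For instance, a level 1 incremental on top of an earlier level 0 copy (paths as in the example above):

ndmpcopy -l 1 /vol/vol_source_name/folder /vol/vol_dest_name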

This can be very useful, and as the filer is doing this at block level, all ACLs are completely preserved. Take care to ensure that the security style is the same on the destination, however, to prevent ACLs from being converted.

mv

This is a "priv set advanced" command, and so apparently reserved for "Network Appliance personnel". "mv" is very straightforward: give it a source and destination, and a single file will get moved. Remember this is a move, so it is not technically a file copy at all.

mv <file1> <file2>

flex clone

This is a real cheat, but a great cheat! You clone an entire volume based on a snapshot, then you split this volume off from the snapshot. This is a great way of getting an entire volume copied with minimal disruption. The clone is created almost immediately, and can then be online and used live. The clone split operation happens in the background, so you can move things and be live at the new location in very little time at all.

vol clone create new_vol -s volume -b source_vol source_snap

Where “new_vol” is the new volume you want to create, “-s volume” is the space reservation, “-b source_vol” is the parent volume that the clone will be based on and “source_snap” is the snapshot you want to base the clone on.

vol clone split start new_vol

Will then start the split operation on the “new_vol”
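A concrete run, assuming vol1 already has a snapshot named nightly.0 (all names here are hypothetical):

# vol clone create vol1_copy -s volume -b vol1 nightly.0
# vol clone split start vol1_copy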

vol copy

Rather than a flex clone, if you haven't got that licensed, you can do a full vol copy. This is effectively the same as a vol clone, but you need to complete the entire operation before the volume is online and available. You need to create the destination volume first and then restrict it so that it is ready for the copy. Then you start the copy process.
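The preparation looks like this (the destination volume name and size are hypothetical); the copy command itself follows below:

# vol create dest_vol aggr1 100g
# vol restrict dest_vol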

vol copy start -s snap_name source_vol dest_vol

“-s snap_name” defines the snapshot you want to base the copy on, and “source_vol” and “dest_vol” define the source and destination for the copy. “-S” can also be used to copy across all the snapshots that are also included in the volume. This can be very useful if you need to copy all backups within a volume as well as just the volume data.

lun clone

If you need to copy an entire LUN, and again you haven’t got flex clone licensed, you can do a direct lun clone, and lun clone split. This is only really useful if you need a duplicate of the LUN in the same volume. It will create a clone based on a snapshot that already exists.

lun clone create clone_path -b parent_path parent_snap

"clone_path" being the new LUN you want to create, "parent_path" being the source LUN you want to clone from, and "parent_snap" being a snapshot of the parent LUN that already exists. Then you need to split the LUN so it becomes independent:

lun clone split start clone_path

SnapMirror / SnapVault

You can also use SnapMirror or SnapVault to copy data around. SnapMirror can be useful if you need to copy a large amount of data that will keep changing. You can set up a replication schedule, then, during a small window of downtime, do a final update and bring the new destination online.

dump and restore

This isn't really a good way of copying files around, but it is certainly a method. If you attach a tape device directly to the filer, you can do a dump, then a restore to a new location or filer. This can be the only method if you have a large amount of data to move to a new site, and no bandwidth or no way of having the 2 systems side by side temporarily.
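A very rough sketch, assuming a tape device named rst0a and hypothetical paths (check the dump and restore man pages on your release for the exact flags):

# dump 0f rst0a /vol/vol_source_name
# restore rfD rst0a /vol/vol_dest_name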
