
Archive for May, 2011

Firewall usage with SnapMirror

May 24, 2011

SnapMirror uses the typical socket/bind/listen/accept sequence on a TCP socket.

The SnapMirror source binds on port 10566. The destination storage system contacts the source on port 10566 from any available port assigned by the system, so the firewall must allow inbound requests to this port on the SnapMirror source storage system. Synchronous SnapMirror requires additional TCP ports to be open: the source storage system listens on TCP ports 10566 and 10569, and the destination storage system listens on TCP ports 10565, 10567, and 10568. Therefore, you should ensure that the firewall allows the full TCP port range 10565 to 10569.
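As a minimal illustration, assuming a Linux iptables firewall sits between the two controllers (the <source_filer_ip> placeholder is hypothetical; your firewall product will differ, the point is simply that TCP 10565-10569 must be reachable):

# async SnapMirror: the destination connects to the source on TCP 10566
iptables -A FORWARD -p tcp -d <source_filer_ip> --dport 10566 -j ACCEPT
# sync SnapMirror: allow the whole 10565-10569 range in both directions
iptables -A FORWARD -p tcp --dport 10565:10569 -j ACCEPT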
Categories: netapp

Removing broken SnapMirror relationships

May 24, 2011

If you have managed SnapMirror relationships on a NetApp SAN, you have no doubt encountered problems deleting them after they have been broken off. One command I have found that resolves this when FilerView will not work is:

snapmirror release source { filer:volume | filer:qtree }

Tell SnapMirror that a certain direct mirror is no longer going to request updates. If a certain destination is no longer going to request updates, you must tell SnapMirror so that it will no longer retain a snapshot for that destination. This command removes snapshots that are no longer needed for replication and can be used to clean up SnapMirror-created snapshots after snapmirror break is issued on the destination side.
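For example, run on the source filer (hypothetical filer and volume names):

snapmirror release vol1 filer2:vol1_mirror

This tells the source that filer2:vol1_mirror will no longer request updates, so the base snapshot being held for that destination can finally be deleted.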

I find I have to use this command every so often to clean up my snapmirror configs.

NetApp SnapMirror for Migrations

May 24, 2011

Volume migration using SnapMirror

Data ONTAP SnapMirror is designed to be a simple, reliable and inexpensive tool to facilitate disaster recovery for business-critical applications. It ships with Data ONTAP but has to be licensed before use.
Apart from DR, SnapMirror is extremely useful in situations like:

1. Aggregates or volumes have reached their maximum size limit.

2. Need to change the volume disk type (tiering).

Prep work: Build a new aggregate from free disks

1. List the spares in the system

# vol status -s

Spare disks

RAID Disk   Device  HA  SHELF  BAY  CHAN  Pool  Type  RPM    Used (MB/blks)      Phys (MB/blks)
---------   ------  --  -----  ---  ----  ----  ----  -----  ----------------    -----------------
Spare disks for block or zoned checksum traditional volumes or aggregates
spare       7a.18   7a  1      2    FC:B  -     FCAL  10000  372000/761856000    560879/1148681096
spare       7a.19   7a  1      3    FC:B  -     FCAL  10000  372000/761856000    560879/1148681096
spare       7a.20   7a  1      4    FC:B  -     FCAL  10000  372000/761856000    560879/1148681096
spare       7a.21   7a  1      5    FC:B  -     FCAL  10000  372000/761856000    560879/1148681096
spare       7a.22   7a  1      6    FC:B  -     FCAL  10000  372000/761856000    560879/1148681096
spare       7a.23   7a  1      7    FC:B  -     FCAL  10000  372000/761856000    560879/1148681096
spare       7a.24   7a  1      8    FC:B  -     FCAL  10000  372000/761856000    560879/1148681096
spare       7a.25   7a  1      9    FC:B  -     FCAL  10000  372000/761856000    560879/1148681096
spare       7a.26   7a  1      10   FC:B  -     FCAL  10000  372000/761856000    560879/1148681096

2. Create new aggregate

Add the new disks. Make sure you add sufficient disks to create complete raid groups. Otherwise, when you later add new disks to the aggregate, all new writes will go to the newly added disks until they fill up to the level of the other disks in the raid group. This creates a disk bottleneck in the filer, as all writes are then handled by a limited number of spindles.

# aggr create aggr_new -d 7a.18 7a.19 7a.20 7a.21 7a.22 7a.23 7a.24 7a.25 7a.26 7a.27

3. Verify the aggregate is online

# aggr status aggr_new
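It is also worth confirming the raid group layout at this point, to check that the new disks formed complete raid groups as discussed in step 2:

# aggr status -r aggr_new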

4. Create a new volume named vol_new, of size 1500g, on aggr_new

# vol create vol_new aggr_new 1500g

5. Verify the volume is online

# vol status vol_new

6. Set up snapmirror between the old and new volumes

First you need to restrict the destination volume:

# vol restrict vol_new

a. snapmirror initialize -S filername:volname filername:vol_new

b. Also make an entry in /etc/snapmirror.conf file for this snapmirror session

filername:/vol/volume filername:/vol/vol_new kbs=1000 0 0-23 * *

Note: kbs=1000 throttles the SnapMirror transfer speed to 1,000 kilobytes per second.
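If you later want to remove the throttle, replace kbs=1000 with a dash in the arguments field (a dash means no options), for example:

filername:/vol/volume filername:/vol/vol_new - 0 0-23 * *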

On the day of cutover

Update snapmirror session

# snapmirror update vol_new
Transfer started.
Monitor progress with ‘snapmirror status’ or the snapmirror log.

# snapmirror status vol_new
Snapmirror is on.
Source Destination State Lag Status
filername:volume_name filername:vol_new   Snapmirrored 00:00:38 Idle

Quiesce the relationship – this will finish any in-progress transfers and then halt further updates from the SnapMirror source to the SnapMirror destination. Quiesce the destination:

# snapmirror quiesce vol_new
snapmirror quiesce: in progress
This can be a long-running operation. Use Control-C (^C) to interrupt.
snapmirror quiesce: dbdump_pb : Successfully quiesced

Break the relationship – this will cause the destination volume to become writable

# snapmirror break vol_new
snapmirror break: Destination vol_new is now writable.
Volume size is being retained for potential snapmirror resync. If you would like to grow the volume and do not expect to resync, set vol option fs_size_fixed to off
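If you do not expect to resync, you can release the size restriction exactly as the message suggests:

# vol options vol_new fs_size_fixed off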

Enable quotas: # quota on volname

Rename volumes

Once the snapmirror session is terminated, we can now rename the volumes

# vol rename volume_name volume_name_temp

# vol rename vol_new volume_name

Remember, CIFS shares follow the volume name: if the volume hosting a share is renamed, the share path changes accordingly. This requires us to delete the old share and recreate it with the correct volume name. The file cifsconfig_share.cfg under the etc$ share has a listing of the commands run to create the shares. Use this file as a reference.

cifs shares -add "test_share$" "/vol/volume_name" "Admin Share Server Admins"

cifs access "test_share$" S-1-5-32-544 "Full Control"

Use -f at the end of the cifs shares -add line to suppress the y or n prompt.
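To see the exact share-creation commands that were originally run, you can read the config file straight from the filer:

# rdfile /etc/cifsconfig_share.cfg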

Start quotas on the new volume

# quota on volume_name

You are done. The shares and qtrees now refer to the new volume on a new aggregate. Test the shares by mapping them on a Windows host.

Categories: netapp

NetApp DataMotion for OnTap 8.0.1 7-Mode

May 17, 2011

NetApp DataMotion for Volumes. Not many people know about this feature, so I thought I would let the folks out there know: DataMotion lets you move volumes nondisruptively between aggregates on the same controller. With Data ONTAP 8.0.1 7-Mode, this is supported only for volumes that contain one or more LUNs. This nondisruptive data movement is useful for many purposes: for example, to free up space in an aggregate, to load-balance disk operations, to move data to a different tier of storage, and to replace old disk drives with newer models. Application and user access is maintained during and after data movement, and data can be moved between aggregates that use different drive types; FC, SAS, SSD, and SATA drives are all supported.

NetApp DataMotion for volumes lets you nondisruptively migrate volumes containing LUNs.

Differences between DataMotion for Volumes and DataMotion for vFiler: DataMotion for vFiler allows you to migrate volumes between separate storage systems or HA pairs and is managed using NetApp Protection Manager. DataMotion for Volumes can only be invoked from the command line, using the vol move command.
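A minimal sketch of a move, assuming a hypothetical volume vol_luns and target aggregate aggr_sas:

filer01> vol move start vol_luns aggr_sas
filer01> vol move status

vol move start kicks off the background copy, and vol move status lets you follow it through to cutover.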

Great pdf on this subject.

Categories: netapp

NetApp “Config” Command

May 12, 2011

I think it’s very important to save a config of a good setup. Firstly, it’s a great reference if you ever need to go back and check things; secondly, it’s a great way to show that you configured things correctly from the start!

There is a handy tool provided within OnTap to do entire config dumps, compares and restores. This is limited to the filer’s base configuration and doesn’t necessarily include areas like volume setup.

filer01> config
Usage:
config clone <filer> <remote_user>:<remote_passwd>
config diff [-o <output_file>] <config_file1> [<config_file2>]
config dump [-f] [-v] <config_file>
config restore [-v] <config_file>

The command is very simple and straightforward. You start by dumping out the configuration from the filer; this automatically goes into /etc/configs. From there you can clone the config if needed, or compare (diff) configs. Running diff is a very good way of comparing a config between two points in time if you aren’t sure what has changed, or of comparing filers before an upgrade once you copy the config files between the two systems (see my earlier netapp-file-copy post). Finally, you can also use the restore feature, although this would probably require a reboot and may have a knock-on effect on what may or may not be required in various other config files within /etc.
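A typical workflow, using a hypothetical config file name:

filer01> config dump 2011_05_12_baseline
filer01> config diff 2011_05_12_baseline

The dump lands in /etc/configs, and the diff compares the filer’s running configuration against the saved file.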

Overall a very useful command.  I use this most for taking backups of filer configs and comparing them between similar systems (for instance primary and DR), or even comparing configs over time.

Netapp Admin Pocket Guide

May 10, 2011

Here are a few Data Ontap CLI commands that I’ve put together for reference. I will continually add to this list.

General Commands
setup (Re-Run initial setup)
halt (Reboots controller into bootrom)
reboot (Reboots controller back to Data Ontap)
sysconfig -a (System configuration and information)
java netapp.cmds.jsh (limited freebsd cli)
storage show disk (show physical information about disks)
passwd (Change password for logged in user)
sasadmin shelf (shows a graphical layout of your shelves with occupied disk slots)
options trusted.hosts x.x.x.x or x.x.x.x/nn (hosts that are allowed telnet, http, https and ssh admin access. x.x.x.x = ip address, /nn is network bits)
options trusted.hosts * (Allows all hosts to the above command)
sysstat -s 5 (Displays operating statistics every 5 seconds i.e. CPU, NFS, CIFS, NET, DISK, etc)

Diagnostics
Press DEL at boot up during memory test followed by boot_diags and select all (Diagnostic tests for a new install)
priv set diags (Enter diagnostics CLI mode from the Ontap CLI)
priv set (Return to normal CLI mode from diagnostics mode)

Software
software list (Lists software in the /etc/software directory)
software get http://x.x.x.x/8.0_e_image.zip 8.0_e_image.zip (Copy software from http to software directory)
software delete (Deletes software in the /etc/software directory)
software update 8.0_e_image.zip -r (Install software. The -r prevents it rebooting afterwards)

ACP (Alternate Control Path)
options acp.enabled on (Turns on ACP)
storage show acp -a (show ACP status)

Root Volume
The root volume can only be on a 32-bit aggregate; if you want to create a 64-bit aggregate, you must create a separate aggregate.
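As a sketch, on Data ONTAP 8.x you choose the block format at creation time with the -B flag (disk names here are hypothetical):

aggr create aggr64 -B 64 -d 7a.18 7a.19 7a.20

On 8.0 the format cannot be changed after the aggregate is created.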

Aggregates
aggr create aggregate_name (Creates an Aggregate)
aggr destroy aggregate_name (removes an Aggregate)
aggr offline aggregate_name (takes an Aggregate offline)
aggr online aggregate_name (bring an Aggregate online)
aggr options aggregate_name root (makes an Aggregate root|Only use if your root Aggregate is damaged)
aggr status (shows status of all aggregates)
aggr status aggregate_name (show status of a specific Aggregate)
aggr show_space aggregate_name (shows specific aggregate space information)
aggr options aggregate_name nosnap=on (Disable snapshot autocreation)
aggr options aggregate_name raidsize=x (x being the number of drives in the RAID)
snap reserve -A aggregate_name 0 (Set Aggregate snap reserve to 0% or any number you enter)

Volumes
vol create volume_name (Creates a volume)
vol autosize volume_name (Shows autosize settings for a given volume)
vol autosize volume_name on|off (Turns Volume autosize on or off)
vol options volume_name (Lists volume options)
vol size volume_name + size k|m|g|t (increase volume size by KB, MB, GB or TB)
vol status -f (lists broken or failed disks)

Qtrees
qtree create /vol/volume_name/qtree_name (Create a qtree within a volume)
qtree security /vol/volume_name/qtree_name unix|ntfs|mixed (Change security settings of a qtree)
qtree stats qtree_name (Shows CIFS or NFS ops/sec for a given qtree)

Snapshots
snap create volume_name snapshot_name (create a snapshot)
snap list volume_name (List snapshots for a volume)
snap delete volume_name snapshot_name (delete a snapshot on a volume)
snap delete -a volume_name (Deletes all snapshots for a volume)
snap autodelete volume_name show (Shows snapshot autodelete settings for a volume)
snap restore -s snapshot_name volume_name (Restores a snapshot on the specified volume name)
snap sched volume_name weeks days hours@time (Creates a snapshot schedule on a volume i.e. snap sched volume 4 5 1@07)
snap delta volume_name (Shows delta changes between snapshots for a given volume)
snap reserve volume_name (Shows the snap reserve for a given volume)
snap reclaimable volume_name snapshot_name (Shows the amount of space reclaimable if you remove this snapshot from the volume)
options cifs.show_snapshot on (Allows snapshot directory to be browse-able via CIFS)
options nfs.hide_snapshot off (Allows snapshot directory to visible via NFS)

SnapMirror
options snapmirror.enable on (turns on SnapMirror. Replace on with off to toggle)
rdfile /etc/snapmirror.allow (Performed on the Source Filer. You should see your destination filers in this file.)
wrfile /etc/snapmirror.allow (Performed on the Source Filer. Overwrites the file with the specified destination filer name and ip address)
vol restrict volume_name (Performed on the Destination. Makes the destination volume read only which must be done for volume based replication. Don’t use for Qtree based replication)
snapmirror initialize -S srcfiler:source_volume dstfiler:destination_volume (Performed on the destination. This is for full volume mirror. For example snapmirror initialize -S filer1:vol1 filer2:vol2)
snapmirror initialize -S srcfiler:/vol/vol1/qtree dstfiler:/vol/vol1/qtree (Performed on the destination. Performs the same as the command above, but for qtrees only)
snapmirror status (Shows the status of snapmirror and replicated volumes or qtree’s)
snapmirror quiesce volume_name (Performed on Destination. Pauses the SnapMirror Replication)
snapmirror break volume_name (Performed on Destination. Breaks or disengages the SnapMirror Replication)
snapmirror resync volume_name (Performed on Destination. Resynchronizes when data is out of date, for example after working off the DR site and wanting to resync back to primary; only performed when the SnapMirror relationship is broken)
snapmirror update -S srcfiler:volume_name dstfiler:volume_name (Performed on Destination. Forces a new snapshot on the source and performs a replication, only if an initial replication baseline has been already done)
snapmirror release volume_name dstfiler:volume_name (Performed on Destination. Removes a snapmirror destination)
/etc/snapmirror.conf (edit or wrfile this file to enter a snapmirror schedule, e.g. srcfiler:vol1 dstfiler:vol1 - 0,15,30,45 * * * replicates every 15 minutes. The dash is the arguments field; a dash means no options such as kbs. The four schedule fields that follow are, from left to right, minute, hour, day of month and day of week, cron-style; each can be a number, a comma-separated list or *)
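A sketch of a complete /etc/snapmirror.conf showing both styles (hypothetical names):

srcfiler:vol1 dstfiler:vol1 - 0,15,30,45 * * *
srcfiler:vol2 dstfiler:vol2 kbs=2000 0 23 * *

The first entry replicates vol1 every 15 minutes with no throttle; the second replicates vol2 once a day at 23:00, throttled to 2,000 KB/s.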

Cluster
cf enable (enable cluster)
cf disable (disable cluster)
cf takeover (take over resources from other controller)
cf giveback (give back controller resources after a take over)

vFiler – Multistore
vfiler status (Displays the status of the vfiler i.e. running or stopped)
vfiler run vfiler_name setup (Runs the vfiler setup wizard)
vfiler run vfiler_name cifs setup (Runs the cifs setup wizard for a vfiler)
vfiler create vfiler_name -i x.x.x.x /vol/volume_name (Creates a vfiler with ip address x.x.x.x and assigns the volume, or a qtree path, to the vfiler)
vfiler add vfiler_name -i x.x.x.x /vol/volume_name (Adds an ip address and additional volume to an existing vfiler name)
vfiler remove vfiler_name -i x.x.x.x /vol/volume_name (Removes an IP address and volume from an existing vfiler)
vfiler rename vfiler_name_old vfiler_name_new (Renames a vfiler from old name to new name)
vfiler stop vfiler_name (Stops a vfiler instance)
vfiler start vfiler_name (Starts a vfiler instance)

Autosupport
options autosupport.support.enable on (Turns Autosupport on)
options autosupport.support.enable off (Turns Autosupport off)
options autosupport.doit "description" (creates an autosupport alert with a user-defined description)

Hot Spares
Any functioning disk that is not assigned to an aggregate but is assigned to a controller functions as a hot spare disk
disk show
vol status -r (displays which disks are allocated as spare)

Disks
disk show (Show disk information)
disk show -n (Show unowned disks)
disk assign 0d.02.0 -s unowned (Changes ownership from owned to unowned or to other cluster member)
disk assign 0d.02.0 (assigns the disk to the controller you perform the command on)
options disk.auto_assign off (turns auto assign of unowned disks to controllers to off)
options disk.auto_assign on (turns auto assign of unowned disks to controllers to on)
storage show disk -p (displays primary, secondary port, shelf and bay in a metro cluster)

Luns
lun setup (runs the cli lun setup wizard)
lun offline lun_path (takes a lun offline)
lun online lun_path (brings a lun online)
lun show -v (Verbose listing of luns)
lun move /lun_path_source /lun_path_destination (Move lun from source to destination)
lun resize -f lun_path +|- new_size k|m|g|t (Resizes a lun by adding space (+) or subtracting space (-). Note: a lun can only ever grow to 10x its original size)

Fiber FCP
fcadmin config -t target 0a (Changes adapter from initiator to target)
fcadmin config (lists adapter state)
fcp start (Start the FCP service)
fcp stop (Stop the FCP service)
fcp show adapters (Displays adapter type, status, FC Nodename, FC Portname and slot number)
fcp nodename (Displays fiber channel nodename)
fcp show initiators (Show fiber channel initiators)
fcp wwpn-alias set alias_name (Set a fiber channel alias name for the controller)
fcp wwpn-alias remove -a alias_name (Remove a fiber channel alias name for the controller)
igroup show (Displays initiator groups with WWN’s)

iSCSI
iscsi start (Start the iscsi service)
iscsi stop (Stop the iscsi service)
iscsi status (Show whether iscsi server is running or not running)
iscsi interface show (Show which interfaces are enabled or disabled for iscsi)
iscsi interface enable interface_name (Enable an interface for iscsi)
iscsi interface disable interface_name (Disable an interface for iscsi)
iscsi nodename (Display the controller's iscsi nodename)
igroup show (Displays iSCSI initiators)

Cifs
cifs setup (cifs setup wizard)
cifs terminate (terminate the cifs service)
cifs restart (restarts cifs)
cifs shares (displays cifs shares)
cifs status (show status of cifs)
cifs lookup SID|name (Either displays the SID if you type in the name or name if you type in the SID)
cifs sessions (Show you current cifs sessions)
cifs sessions -s username (Shows the current session for a user)
cifs broadcast -v volume_name "message" (Broadcast a message to all users connected to volume_name)
cifs shares -add share_name /vol/volume_name/qtree_name (Create a cifs share on a specific volume or qtree)
cifs shares -delete share_name (Deletes a share name)
cifs shares share_name (Displays full path and permissions of the share)
cifs access share_name -g user_rights (Grants specific user rights to the share)
cifs access share_name user_name permission (Grants a specific permission to a user for a share. Permissions = Full Control, Change, Read, No Access)
cifs domain info (Lists information about the filers connected Windows Domain)
cifs testdc ip_address (Test a specific Windows Domain Controller for connectivity)
cifs prefdc (Displays configured preferred Windows Domain Controllers)
cifs prefdc add domain address_list (Adds a preferred dc for a specific domain i.e. cifs prefdc add netapplab.local 10.10.10.1)
cifs prefdc delete domain (Deletes the preferred Windows Domain Controllers for a domain)
cifs gpresult (Displays which Windows Group Policies apply to this filer)
cifs gpupdate (Forces an update of Windows Group Policy)
cifs top (Performance data for cifs. cifs.per_client_stats.enable option must be on to use this feature)
vscan on (Turns virus scanning on)
vscan off (Turns virus scanning off)
vscan reset (Resets virus scanning)

NFS
nfs setup (Runs the NFS setup wizard)
exportfs (Displays current exports)
exportfs -p path (Adds exports to the /etc/exports file)
exportfs -uav (Unexports all current exports)
exportfs -u path (Unexports a specific export from memory)
exportfs -z path (Unexports a specific export and also removes it from /etc/exports)
exportfs -a (Updates memory buffer with contents in /etc/exports)
nfsstat -d (Displays NFS statistics)

HTTP Admin
options httpd.admin.enable on (Turns on http web admin, na_admin)
options httpd.admin.access host=x.x.x.x,x.x.x.x (Allows admin access for specific hosts separated by a comma)

SIS (Deduplication)
sis status (Show SIS status)
sis config (Show SIS config)
sis on /vol/vol1 (Turn on deduplication on vol1)
sis config -s mon-fri@23 /vol/vol1 (Configure deduplication to run every monday – Friday at 11pm on vol1)
sis start -s /vol/vol1 (Run deduplication manually on vol1)
sis status -l /vol/vol1 (Display deduplication status on vol1)
df -s vol1 (View space savings with deduplication)
sis stop /vol/vol1 (Stop deduplication on vol1)
sis off /vol/vol1 (Disables deduplication on vol1)

User Accounts
useradmin user add user_name -g group_name (Creates a user and assigns it to a group)
useradmin user list (Lists current users)
useradmin user list user_name (List specific user information)
useradmin group list (Lists current groups)
useradmin group delete group_name (Deletes a specific group name)
useradmin group modify group_name -g new_group_name (Modify group name)
useradmin user delete user_name (Delete a specific user)
useradmin user modify user_name -g group_name (Adds a user to a group)
useradmin domain user add user_name -g group_name (Adds a Windows Domain user to a local group)
useradmin domain user list -g group_name (List Windows Domain users in a specific group)

DNS
dns flush (Flushes the DNS cache)

Reading and Writing Files
rdfile path/file (Reads a file)
wrfile path/file (Writes to a file. Warning this method overwrites the file. Make sure you copy out original contents if you wish to keep it. If you haven’t used this before try on the simulator.)
wrfile -a path/file (Writes to a file by appending the changes)

Logging
/etc/messages (All system logging is stored here)

Network
ifconfig vif0 x.x.x.x netmask x.x.x.x (Sets IP information on the interface named vif0. Set the default gateway with route add default x.x.x.x 1, and DNS servers in /etc/resolv.conf)

Windows Storage Viewer 1.1

May 9, 2011 Leave a comment

Windows Storage Viewer (WSV) version 1.1 is now available. Major changes since the previous version:

  1. The main Server tab now also displays details on physical drive count and iSCSI objects
  2. Three new tabs have been added; iSCSI Portals, iSCSI Sessions & Drives
  3. iSCSI Portals lists the portals defined to the host.  Any can be right-clicked and deleted from the context menu
  4. iSCSI Sessions lists the active sessions and devices.
  5. Drives lists the physical drives, partitions and their associated drives.
  6. The Targets tab has been renamed to iSCSI Targets and now enables any target to be remotely logged on or off.
  7. A new menu option “Add Portal” allows a new portal to be added to a host remotely, similar to the way the Quick Connect option works in the iSCSI Initiator tool.

You can download Windows Storage Viewer 1.1.0 here. Credit goes to the Storage Architect.

[Screenshots: WSV Screenshot 1 through WSV Screenshot 5]
Categories: iSCSI