
Posts Tagged ‘Snapshots’

NetApp Cluster-Mode Snapshots

February 28, 2012

NetApp Snapshot technology is known for its uniqueness and ease of use. For example, unlike most competing snapshot technologies, you can create a volume snapshot in seconds, regardless of the size of the volume. It is very efficient in its use of storage space, and you can create hundreds of snapshot copies based on your needs. These excellent features are all available in Data ONTAP 8.1 operating in cluster-mode.

The familiar 7-mode commands, such as snap reserve, snap sched and snap list, are still operational in cluster-mode. But cluster-mode also has a new set of commands (see Fig. 1), which you can explore by simply typing the command (e.g., snap create) and hitting Return (see Fig. 2).


Figure 1: Cluster-mode snapshot commands


Figure 2: Cluster-mode snap create’s usage

One thing I did observe is that the cluster-mode snapshot policy seems to take precedence over the 7-mode snap sched command. The default snapshot policy in cluster-mode has the hourly, daily and weekly snapshot schedules enabled, retaining the following numbers of copies:

  • Hourly: 6
  • Daily: 2
  • Weekly: 2

If you try to set the snapshot schedule using the command snap sched 0 0 0, meaning do not take any scheduled snapshots, you will be surprised to find that this command is ignored: hourly, daily and weekly snapshot copies are still taken.
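To see which policy is actually driving those copies, you can check from the clustershell. This is only a sketch: the cluster prompt, Vserver and volume names are illustrative, and the exact output fields vary by release.

cluster1::> snap policy show
cluster1::> volume show -vserver vs1 -volume vol1 -fields snapshot-policy

The second command reports which snapshot policy the volume is using; unless you changed it, that will be the policy named default.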

There are several ways to change the default snapshot policy in cluster-mode. Here are some examples:

a) Use the snap policy modify command to disable the policy.

b) Under the scope of snapshot policy, use add-schedule, modify-schedule, or remove-schedule to change it to your liking (see Fig. 3).

c) You can also use snap policy create to create a new snapshot policy and assign it to your volumes (a sketch follows Figure 3).


Figure 3: Cluster-mode snapshot policy commands
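For example, the following sequence disables the default policy, changes one of its schedules, and then creates a brand-new policy and assigns it to a volume. Treat it as a hedged sketch: the policy, Vserver and volume names (nosnaps, vs1, vol1) are made up, and parameter names such as -schedule1/-count1 should be verified against the built-in help on your release (type the command and hit Return, as in Fig. 2).

cluster1::> snap policy modify -policy default -enabled false
cluster1::> snap policy modify-schedule -policy default -schedule hourly -count 2
cluster1::> snap policy create -policy nosnaps -enabled false -schedule1 hourly -count1 1
cluster1::> volume modify -vserver vs1 -volume vol1 -snapshot-policy nosnaps

The last command is what ties a policy to a particular volume; disabling or shrinking the default policy affects every volume that still uses it.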

In summary, the 7-mode commands, by and large, are still valid for snapshot management. But be aware of the new cluster-mode snapshot policy, which may take precedence.

Categories: netapp

NFS – Cannot see snapshots under .snapshot directory for NetApp

May 6, 2011

When using NFS, the snapshots under the .snapshot directory cannot be seen.

Running snap list on the filer will show the snapshots for the volume. However, when you cd to the .snapshot directory and run ls -l from an NFS client, none of the snapshots are listed.

Cause

This is because the option nfs.hide_snapshot was set to on. If this option is turned on, the .snapshot directory itself is visible, but the actual snapshots are hidden. Although the snapshots are hidden, a user who has permissions can still access them.
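You can confirm the current setting directly on the filer; in 7-mode, running options with just the option name prints its current value (the prompt name here is illustrative):

filer01> options nfs.hide_snapshot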

Solution

Set the nfs.hide_snapshot option to OFF.

options nfs.hide_snapshot off

You will then need to unmount and then remount the export.
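From the NFS client side, that typically looks something like the following; the filer name, export path and mount point are illustrative, and your client's mount options may differ:

client# umount /mnt/netapp
client# mount -t nfs filer01:/vol/vol_home /mnt/netapp
client# ls -l /mnt/netapp/.snapshot

After the remount, ls -l inside .snapshot should list the same snapshots that snap list reports on the filer.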

Categories: netapp

NetApp: basic commands for creating aggr-vol-lun

May 2, 2011

#0 – Overview

Here are some basic CLI commands to manage your NetApp filer. Below you can find help on creating aggregates, volumes and LUNs for your needs.
Let's start.

#1 – Creating a new aggregate

First, we have to create an aggregate, which will hold our volumes. Below you will find the basics:

filer01> aggr
The following commands are available; for more information
type "aggr help <command>"
add           mirror        rename        split
copy          offline       restrict      status
create        online        scrub         undestroy
destroy       options       show_space    verify
media_scrub

filer01> aggr status
 Aggr State Status Options
 aggr0 online raid_dp, aggr root, raidsize=27

filer01> aggr show_space
Aggregate 'aggr0'

Total space WAFL reserve Snap reserve Usable space BSR NVLOG A-SIS
1740275200KB 174027520KB 78312384KB 1487935296KB 0KB 0KB 

Space allocated to volumes in the aggregate

Volume Allocated Used Guarantee
vol0 56798496KB 4879700KB volume
vol_vmware_00 93001356KB 92518652KB none
vol_vmware_01 95498892KB 95016188KB none

.... snip ....

Aggregate Allocated Used Avail
Total space 1128496420KB 894217980KB 359330952KB 
Snap reserve 78312384KB 1715940KB 76596444KB 
WAFL reserve 174027520KB 17760352KB 156267168KB 

Now we are going to create the new aggregate:

filer01> aggr create
aggr create: No aggregate name supplied.
usage:
aggr create <aggr-name>
 [-f] [-l <language-code>] [-L [compliance | enterprise]]
 [-m] [-n] [-r <raid-group-size>] [-R <rpm>]
 [-T {ATA | BSAS | FCAL | LUN | SAS | SATA | XATA | XSAS}]
 [-t {raid4 | raid_dp}] [-v] <disk-list>
 - create a new aggregate using the disks in <disk-list>;
<disk-list> is either
 <ndisks>[@<disk-size>]
or
 -d <disk-name1> <disk-name2> ... <disk-nameN>
 [-d <disk-name1> <disk-name2> ... <disk-nameN>].
If a mirrored aggregate is desired, make sure to specify an
even number for <ndisks>, or to use two '-d' lists.

filer01> 
filer01> aggr create <aggr-name> -l de.UTF-8 -t raid_dp -d <disk-name1> <disk-name2> ... <disk-nameN>

The disk names are shown in the Device column of the output of sysconfig -r:

filer01> sysconfig -r
Aggregate aggr0 (online, raid_dp) (block checksums)
 Plex /aggr0/plex0 (online, normal, active)
 RAID group /aggr0/plex0/rg0 (normal)

 RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
 --------- ------ ------------- ---- ---- ---- ----- -------------- --------------
 dparity 0b.16 0b 1 0 FC:B - FCAL 10000 68000/139264000 69536/142410400 
 parity 0b.32 0b 2 0 FC:B - FCAL 10000 68000/139264000 69536/142410400 
 data 0b.17 0b 1 1 FC:B - FCAL 10000 68000/139264000 69536/142410400 
 data 0b.33 0b 2 1 FC:B - FCAL 10000 68000/139264000 69536/142410400 
 data 0b.18 0b 1 2 FC:B - FCAL 10000 68000/139264000 69536/142410400 
 data 0b.34 0b 2 2 FC:B - FCAL 10000 68000/139264000 69536/142410400 
 data 0b.19 0b 1 3 FC:B - FCAL 10000 68000/139264000 69536/142410400 
 data 0b.35 0b 2 3 FC:B - FCAL 10000 68000/139264000 69536/142410400 
 data 0b.20 0b 1 4 FC:B - FCAL 10000 68000/139264000 69536/142410400 
 data 0b.36 0b 2 4 FC:B - FCAL 10000 68000/139264000 69536/142410400 
 data 0b.21 0b 1 5 FC:B - FCAL 10000 68000/139264000 69536/142410400 
 data 0b.37 0b 2 5 FC:B - FCAL 10000 68000/139264000 69536/142410400 
 data 0b.22 0b 1 6 FC:B - FCAL 10000 68000/139264000 69536/142410400 
 data 0b.38 0b 2 6 FC:B - FCAL 10000 68000/139264000 69536/142410400 
 data 0b.23 0b 1 7 FC:B - FCAL 10000 68000/139264000 69536/142410400 
 data 0b.39 0b 2 7 FC:B - FCAL 10000 68000/139264000 69536/142410400 
 data 0b.24 0b 1 8 FC:B - FCAL 10000 68000/139264000 69536/142410400 
 data 0b.40 0b 2 8 FC:B - FCAL 10000 68000/139264000 69536/142410400 
 data 0b.25 0b 1 9 FC:B - FCAL 10000 68000/139264000 69536/142410400 
 data 0b.41 0b 2 9 FC:B - FCAL 10000 68000/139264000 69536/142410400 
 data 0b.26 0b 1 10 FC:B - FCAL 10000 68000/139264000 69536/142410400 
 data 0b.42 0b 2 10 FC:B - FCAL 10000 68000/139264000 69536/142410400 
 data 0b.27 0b 1 11 FC:B - FCAL 10000 68000/139264000 69536/142410400 
 data 0b.43 0b 2 11 FC:B - FCAL 10000 68000/139264000 69536/142410400 
 data 0b.28 0b 1 12 FC:B - FCAL 10000 68000/139264000 69536/142410400 
 data 0b.44 0b 2 12 FC:B - FCAL 10000 68000/139264000 69536/142410400 
 data 0b.29 0b 1 13 FC:B - FCAL 10000 68000/139264000 69536/142410400 


Spare disks

RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
Spare disks for block or zoned checksum traditional volumes or aggregates
spare 0b.45 0b 2 13 FC:B - FCAL 10000 68000/139264000 69536/142410400 
filer01> 
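Putting it all together, a concrete call might look like the following. This is only a sketch: aggr1 is a made-up name, and 14@68 requests 14 disks of the 68 GB size class shown above (the <ndisks>[@<disk-size>] form from the usage text); adjust the disk count and size to the spares your system actually has.

filer01> aggr create aggr1 -t raid_dp 14@68
filer01> aggr status aggr1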

#2 – Creating the new volume

After you have successfully created an aggregate, you can go on to create a volume on the aggregate with the following steps:

filer01> vol create
vol create: No volume name supplied.
usage:
vol create <vol-name>
 { [-l <language-code>] [-s {none | file | volume}]
 <hosting-aggr-name> <size>[k|m|g|t] }
 |
 { [-l <language-code>]
 [<size>[k|m|g|t]] -S <source-filer>:<source-volume> }
 |
 { [-f] [-l <language-code>] [-m] [-n]
 [-L [compliance | enterprise]]
 [-r <raid-group-size>] [-t {raid4 | raid_dp}]
 <disk-list> }
- create a new volume, either a flexible volume from an existing
aggregate, or a traditional volume from a disk list. A disk
list is either
 <ndisks>[@<disk-size>]
or
 -d <disk-name1> <disk-name2> ... <disk-nameN>
 [-d <disk-name1> <disk-name2> ... <disk-nameN>].

filer01>

filer01> vol create vol_foobar -l de.UTF-8 -s volume aggr0 5g
Creation of volume 'vol_foobar' with size 5g on containing aggregate
'aggr0' has completed.
filer01>

filer01> aggr show_space -h
Aggregate 'aggr0'

Total space WAFL reserve Snap reserve Usable space BSR NVLOG A-SIS
 1659GB 165GB 74GB 1419GB 0KB 0KB 

Space allocated to volumes in the aggregate

Volume Allocated Used Guarantee
vol0 54GB 4816MB volume
vol_vmware_00 88GB 88GB none
vol_vmware_01 94GB 94GB none

.... snip ....

vol_rdm_01 5302MB 51MB volume
vol_foobar 5148MB 476KB volume

Aggregate Allocated Used Avail
Total space 1083GB 855GB 335GB 
Snap reserve 74GB 25GB 48GB 
WAFL reserve 165GB 17GB 148GB 


filer01>

You have successfully created your new volume.
Now you have to do some tweaks for this volume.
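Before making those tweaks, you can quickly confirm that the volume is online; the exact columns of the output vary by Data ONTAP release:

filer01> vol status vol_foobar
filer01> df -h /vol/vol_foobar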

#3 – Removing snapshots on the new volume

If you don't want snapshots on the volume, you have to disable the default snapshot schedule:

.... snip from 'snap sched'

Volume vol_rdm_01: 0 0 0
Volume vol_foobar: 4 14 6@8,10,12,14,16,18

filer01> snap sched -V vol_foobar 0 0 0

filer01> snap sched 
Volume vol0: 4 14 6@8,10,12,14,16,18
Volume vol_vmware_00: 0 0 0
Volume vol_vmware_01: 0 0 0

.... snip ....

Volume vol_rdm_01: 0 0 0
Volume vol_foobar: 0 0 0
filer01>

filer01> vol options vol_foobar nosnap on
filer01> vol options vol_foobar nosnapdir on

Also set the snapshot reserve to 0 if snapshots are not taken or not needed.

.... snip from 'snap reserve' ....

Volume vol_rdm_01: current snapshot reserve is 0% or 0 k-bytes.
Volume vol_foobar: current snapshot reserve is 20% or 1048576 k-bytes.

filer01> snap reserve -V vol_foobar 0

filer01> snap reserve 
Volume vol0: current snapshot reserve is 20% or 11296936 k-bytes.
Volume vol_vmware_00: current snapshot reserve is 0% or 0 k-bytes.
Volume vol_vmware_01: current snapshot reserve is 0% or 0 k-bytes.
Volume vol_vmware_02: current snapshot reserve is 0% or 0 k-bytes.

.... snip ....

Volume vol_rdm_01: current snapshot reserve is 0% or 0 k-bytes.
Volume vol_foobar: current snapshot reserve is 0% or 0 k-bytes.
filer01>

Your volume is successfully created and all snapshot tasks are removed. We can now create a Logical Unit Number (LUN) for your data, VMFS datastores, or whatever else you need.

#4 – Creating a Logical Unit Number

With Raw Device Mapping (RDM), you simply attach a LUN to another host; that host then creates a file system on the disk presented to it and manages the file I/O itself.

n4t-nas-01> lun create 
lun create: exactly one of -s, -f and -b should be supplied
usage:
lun create -s <size> -t <ostype> [ -o noreserve ] [ -e space_alloc ] <lun_path>
lun create -f <file_path> -t <ostype> [ -o noreserve ] [ -e space_alloc ] <lun_path>
lun create -b <snapshot_lun_path> [ -o noreserve ] <lun_path>
 - create (writable) LUN storage at the LUN path specified.

NOTE: 'lun create -b' has been deprecated.
 Please consider using 'lun clone create' instead.

For more information, try 'lun help create' or 'man na_lun'
n4t-nas-01>


The mandatory ostype argument is one of the following:

  • aix (the LUN will be used to store AIX data)
  • hpux (the LUN will be used to store HP-UX data)
  • hyper_v (the LUN will be used to store Hyper-V data)
  • linux (the LUN will be used to store a Linux raw disk without any partition table)
  • netware (the LUN will be used to store NetWare data)
  • openvms (the LUN will be used to store OpenVMS data)
  • solaris (the LUN will be used to store a Solaris raw disk in a single-slice partition)
  • solaris_efi (the LUN will be used to store Solaris_EFI data)
  • vld (the LUN contains a SnapManager VLD)
  • vmware (the LUN will be used to store VMware data)
  • windows (the LUN will be used to store a raw disk device in a single-partition Windows disk using the MBR (Master Boot Record) partitioning style)
  • windows_gpt (the LUN will be used to store Windows data using the GPT (GUID Partition Table) partitioning style)
  • windows_2008 (the LUN will be used to store Windows data for Windows 2008 systems)
  • xen (the LUN will be used to store Xen data)
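
To close the loop, here is what creating a small LUN on the volume from step #2 might look like. The size, ostype and LUN path are illustrative; also note that the LUN still has to be mapped to an initiator group (igroup create / lun map) before any host can actually see it:

filer01> lun create -s 4g -t vmware /vol/vol_foobar/lun_foobar_00
filer01> lun show /vol/vol_foobar/lun_foobar_00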
Categories: netapp