NetApp Cluster-Mode Snapshots
NetApp Snapshot technology is known for its uniqueness and ease of use. For example, unlike most competing snapshot technologies, you can create a volume snapshot in seconds, regardless of the size of the volume. It is very efficient in its use of storage space, and you can create hundreds of snapshot copies based on your needs. These excellent features are all available in Data ONTAP 8.1 operating in cluster-mode.
The familiar 7-mode commands, such as snap reserve, snap sched and snap list, still work in cluster-mode. But cluster-mode also has a new set of commands (see Fig. 1), which you can explore by simply typing a command (e.g., snap create) and hitting return (see Fig. 2).
Figure 1: Cluster-mode snapshot commands
Figure 2: Cluster-mode snap create’s usage
One thing I did observe is that the cluster-mode snapshot policy seems to take precedence over the 7-mode snap sched command. The default snapshot policy in cluster-mode enables the hourly, daily and weekly snapshot schedules, with the following counts:
- Hourly: 6
- Daily: 2
- Weekly: 2
If you try to set the snapshot schedule using the command snap sched 0 0 0, meaning take no scheduled snapshots, you may be surprised to find that this command is ignored, and hourly, daily and weekly snapshot copies are still taken.
There are several ways to change the default snapshot policy in cluster-mode. Here are some examples:
a) Use the snap policy modify command to disable the policy
b) Under the scope of snapshot policy, use add-schedule, modify-schedule, or remove-schedule to change it to your liking (see Fig. 3)
c) You can also use snap policy create to create a new snapshot policy
Figure 3: Cluster-mode snapshot policy commands
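For example, the three options above might look like this at the clustershell. This is an illustrative sketch: the policy name "mypolicy" and the count values are placeholders, and the exact parameter names should be confirmed with the built-in command help (type the command and hit return, as shown earlier).

```
# (a) disable the default policy
snap policy modify -policy default -enabled false
# (b) trim the hourly schedule to keep only 2 copies
snap policy modify-schedule -policy default -schedule hourly -count 2
# (c) or create a brand-new policy instead
snap policy create -policy mypolicy -enabled true
```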
In summary, the 7-mode commands, by and large, are still valid for snapshot management. But be aware of the new cluster-mode snapshot policy which may take precedence.
NetApp PowerShell with Snaps & Cluster-Mode
Many PowerShell cmdlets have been developed for NetApp Data ONTAP, for both 7-mode and cluster-mode. Since the cluster-mode cmdlets are relatively new, we’ll take a close look at them here, using a couple of cluster-mode cmdlets to demonstrate how to create a volume snapshot and then restore it.
First, two prerequisites:
· PowerShell v2.0, which you can download and install from the Microsoft website here.
· Data ONTAP PowerShell Toolkit v1.7 (DataONTAP.zip), which you can download from the NetApp Community here (see Fig. 1). You need to log in with your NOW credentials to download it.
Figure 1: Download DataONTAP.zip from NetApp Community
Note: for PowerShell background information, there are a number of useful websites with good tutorials.
After you have downloaded the ONTAP PowerShell Toolkit v1.7, open a command prompt on your Windows host and create a directory C:\psontap. Unzip the DataONTAP.zip kit to C:\psontap\DataONTAP. Fig. 2 shows the contents after unzipping the toolkit.
Figure 2: Unzip DataONTAP.zip
Next, open a PowerShell prompt by clicking on the icon (see Fig. 3).
Figure 3: Click on the Powershell icon to open the Powershell command prompt
Then, initialize the ONTAP PowerShell Toolkit using import-module dataontap, as shown in Fig. 4.
Figure 4: Import the DataONTAP module
To distinguish cluster-mode cmdlets from 7-mode ones, the mnemonic ‘Nc’ is used. For example, to create a snapshot, you use New-NaSnapshot in 7-mode but New-NcSnapshot in cluster-mode. Therefore, to discover all the snapshot cmdlets in cluster-mode, you can simply do get-help *NcSnapshot*, as shown in Fig. 5. Note that wildcards are allowed in cmdlet names.
Figure 5: Cluster-mode snapshot cmdlets
In order to take a volume snapshot (or manage the FAS controller at all), you first need to use the cmdlet Connect-NcController to establish communication with the NetApp FAS controller (operating in cluster-mode) from your Windows box (see Fig. 6). When prompted, type in the admin password and hit OK. Note again that the cmdlet is cluster-mode because of the presence of ‘Nc’.
Figure 6: Establish connection to a FAS controller operating in cluster-mode
Create a Snapshot
Figure 7 shows how to use the PowerShell cmdlet New-NcSnapshot to create a volume snapshot called mysnap. Note that here we assume the FlexVol volume test_fv and the Vserver vc1 already exist on the controller. The parameter VserverContext is useful because it uniquely identifies the volume belonging to a specific Vserver, in case there are multiple volumes named test_fv belonging to different Vservers.
Figure 7: Create a snapshot in cluster-mode using Powershell cmdlet
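Putting the connection and create steps together, the sequence looks roughly like this. The management address 192.168.1.50 is a placeholder, and the parameter names are from Toolkit v1.7 as I remember them; verify the exact signature with get-help New-NcSnapshot -full if your version differs.

```powershell
Import-Module DataONTAP

# Connect to the cluster-mode controller; prompts for admin credentials
Connect-NcController 192.168.1.50 -Credential (Get-Credential)

# Create a snapshot named mysnap of volume test_fv, scoped to Vserver vc1
New-NcSnapshot -Volume test_fv -Snapshot mysnap -VserverContext vc1
```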
Restore a Snapshot
Suppose that after some time you want to restore the snapshot mysnap. You can do that using the cmdlet Restore-NcSnapshotVolume, as shown in Fig. 8. The parameter PreserveLunIds allows the LUNs within the volume being restored to stay mapped, with their identities preserved.
Figure 8: Restore a volume snapshot
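A matching restore sketch, using the same placeholder names; as above, confirm the exact parameter names with get-help Restore-NcSnapshotVolume before relying on this.

```powershell
# Revert volume test_fv on Vserver vc1 to the snapshot mysnap,
# keeping LUNs mapped and their IDs intact
Restore-NcSnapshotVolume -Volume test_fv -Snapshot mysnap -VserverContext vc1 -PreserveLunIds
```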
You can explore the other cluster-mode snapshot cmdlets by running get-help for each cmdlet shown in Fig. 5. I found these cmdlets quite straightforward to use, although a little verbose. And of course, if you have many volumes and snapshot copies, you can write your own scripts based on these cmdlets to streamline the operations.
VMware Command Cheat Sheet
Esxcfg-Commands | |
esxcfg-advcfg | Set/Get Advance Configuration Parameters (Stored in /etc/vmware/esx.conf) |
esxcfg-auth | Configure authentication (ADS, NIS, Kerberos) |
esxcfg-boot | Configure Boot-Options |
esxcfg-configcheck | Checks format of /etc/vmware/esx.conf (e.g. Used after esx-updates) |
esxcfg-dumppart | Configure partition for core-dumps after PSOD |
esxcfg-firewall | Configure ESX-server firewall |
esxcfg-hwiscsi | Configure hardware iSCSI initiators |
esxcfg-info | Get information about hardware, resources, storage, … of the ESX-Server |
esxcfg-init | Used Internally on boot |
esxcfg-linuxnet | Setup/Remove linux network devices (ethX) |
esxcfg-module | Enable/Disable/Add new/Query VMKernel modules and set/get parameters for them |
esxcfg-mpath | Configure multipathing for Fibre-Channel and iSCSI |
esxcfg-nas | Configure NFS-datastores (“NFS-client”) |
esxcfg-nics | Configure physical NICs (vmnicX) |
esxcfg-pciid | Recreate PCI-device list /etc/vmware/{pci.ids, pcitable, pcitable.linux, vmware-device.map } from the configuration files /etc/vmware/pciid/*.xml |
esxcfg-rescan | Rescan a SCSI/FC/iSCSI adapter. |
esxcfg-resgrp | Configure resource groups |
esxcfg-route | Configure the VMKernel default route |
esxcfg-swiscsi | Configure/Rescan software iSCSI initiator |
esxcfg-upgrade | Used for upgrades from ESX2.x to ESX3 |
esxcfg-vmhbadevs | Get information about attached LUNs with /dev/sdX mappings |
esxcfg-vmknic | Add/Remove/Configure VMKernel NICs |
esxcfg-vswif | Add/Remove/Configure ServiceConsole NICs |
esxcfg-vswitch | Add/Remove/Configure Virtual Switches |
esx-Commands | |
esxnet-support | Diagnostic information about Console NICs (Gives Errors in ESX-3.5.0) |
esxtop | Live Statistics of Virtual Machines (with VM-Names) |
esxupdate | Tool for updating ESX-3.x |
Vmware-Commands | |
vmware-authd | For internal use only (authentication) |
vmware-cmd | See vmware-cmd section |
vmware-configcheck | Check Virtual Machine configuration files (*.vmx) |
vmware-config.pl | Configure ESX-hostd port, recompile/install VMware VmPerl Scripting API |
vmware-hostd | Daemon for VI Client connections (should only be started by mgmt-vmware start-script) |
vmware-hostd-support | Creates /var/log/vmware/hostd-support.tgz |
vmware-mkinitrd | Creates initrd (initial ramdisk) |
vmware-vim-cmd | Please see vmware-vim-cmd section |
vmware-vimdump | Get information about ESX-Server configuration and Virtual Machines. |
vmware-vimsh | Interactive shell – comparable to vmware-vim-cmd with additional commands |
vmware-watchdog | Watchdog daemon to keep vmware-hostd running (should only be started by mgmt-vmware start-script) |
vmware-webAccess | WebAccess daemon for browser-based management (should only be started by vmware-webAccess start-script) |
Vm-Commands | |
vmfsqhtool | Prints UUID of a device header |
vmfsqueuetool | Formats all partitions in vmfs queue |
vmkchdev | Manage PCI devices (give control over the device to VMKernel or Service Console) |
vmkdump | Manage VMKernel dump partition |
vmkerrcode | Gives descriptions of VMKernel error codes based on a decimal or hex value |
vmkfstools | Create/Remove/Configure VMFS filesystems and Virtual Machine .vmdk files (Virtual Disk File) |
vmkiscsid | iSCSI daemon |
vmkiscsi-device | iSCSI device information |
vmkiscsi-ls | List iSCSI devices |
vmkiscsi-tool | Configure software iSCSI initiator |
vmkiscsi-util | Get information about iSCSI devices |
vmkloader | Loads and unloads the VMKernel |
vmkload_mod | Load/Unload VMKernel modules (e.g. device drivers) |
vmklogger | Create log messages (like logger, for VMKernel messages) |
vmkpcidivy | deprecated |
vmkping | Ping on VMKernel network |
vmkuptime.pl | Creates HTML output with Uptime/Downtime/Availability |
vmres.pl | deprecated |
vmsnap_all | Snapshot all Virtual Machines on an ESX-Server |
vmsnap.pl | deprecated |
vmstat | (this is a standard linux command – lists memory/disk access statistics) |
vm-support | Creates /etc/init.d/esx-<date>.tgz |
vmware | internal use – cannot be started manually |
Other Commands | |
vdf | Show free disk space of mounted partitions (like df with vmfs-support) |
Start-Scripts | |
Scripts inside /etc/init.d/ | |
mgmt-vmware | Start/Stop/Restart the daemon for the VI Client connections |
vmkhalt | internal use – cannot be started manually |
vmware | internal use – cannot be started manually |
vmware-functions | internal use – cannot be started manually |
vmware-late | internal use – cannot be started manually |
vmware-vmkauthd | internal use – cannot be started manually |
vmware-vpxa | Start/Stop/Restart the daemon for the Virtual Center connections |
vmware-webAccess | Start/Stop/Restart the daemon for the Web-Interface connections |
Running Processes | |
crond | Schedule jobs at specific intervals |
gpm | Mouse support in the text console |
init | First process which runs every other process |
klogd | Kernel log daemon |
logger | Logs messages to /var/log |
sshd | Provides secure shell access |
syslogd | Log/Filter daemon with a remote logging ability |
vmware-hostd | Daemon for VI Client connections |
vmkload_app | Loads vmware applications (internal use only) |
vmklogger | Logs VMKernel messages to /var/log/vmware |
wsmand | Web Services Management daemon |
vmware-vmkauthd | Daemon for user authentication |
vmware-vmx | Provides context for a Virtual Machine (internal use only) |
vmware-watchdog | Checks if vmware processes are running (no connection test → does not restart hung processes) |
vpxa | Virtual Center agent |
webAccess | Web-Interface (TomCat-Server) |
xinetd | Listens on network ports for other daemons and starts them on demand |
vmware-cmd Commands | |
Commands for a Virtual Machine (vmware-cmd -h). | |
getconnectedusers | List name and IP of connected users (non-working with esx3.5.0?) |
getstate | Show current state of VM (Off/On/…) |
start | Start a VM |
stop | Stop a VM |
reset | Reset a VM |
suspend | Suspend a VM |
setconfig | Set a variable in the vmx-configuration-file |
getconfig | Get a variable from the vmx-file |
setguestinfo | Set guest info variable |
getguestinfo | Get guest info variable |
getproductinfo | Get various product info |
connectdevice | Connect a device |
disconnectdevice | Disconnect a device |
getconfigfile | Get path/filename of config file |
getheartbeat | Get current heartbeat |
gettoolslastactive | Time since last notification from vmware-tools (in seconds) |
getresource | Get a VM resource |
setresource | Set a VM resource |
hassnapshot | Determine if VM has a snapshot |
createsnapshot | Create a snapshot |
revertsnapshot | Revert to last snapshot |
removesnapshots | Remove all snapshots |
answer | Answer a question (if VM requires input) |
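To tie the subcommands above together, a typical session might look like the sketch below. The .vmx path is hypothetical (vmware-cmd takes the full path to the VM’s configuration file), and the createsnapshot arguments follow the ESX 3.x order (name, description, quiesce, include-memory) as I recall it; run vmware-cmd -h to confirm on your build.

```
vmware-cmd -l                                          # list registered VMs
vmware-cmd /vmfs/volumes/store1/vm1/vm1.vmx getstate   # query power state
vmware-cmd /vmfs/volumes/store1/vm1/vm1.vmx createsnapshot pre-patch "before patching" 1 1
vmware-cmd /vmfs/volumes/store1/vm1/vm1.vmx removesnapshots
```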
vmware-vim-cmd Commands | |
hostsvc/ | ESX-Server commands |
internalsvc/ | ESX-Server internal commands |
proxysvc/ | Web-SDK proxy commands |
vimsvc/ | VirtualCenter commands |
vmsvc/ | VM commands |
Log Files | |
Logs are in /var/log/vmware/ (if no other path is specified) | |
/etc/syslog.conf | Configure logging behaviour |
esxcfg-boot.log | Boot messages |
esxcfg-firewall.log | List of executed firewall commands and log messages |
esxcfg-linuxnet.log | LinuxNet messages |
esxupdate.log | Debug messages for updates |
hostd.log | hostd messages |
vpx-iupgrade.log | Logs for package installations/removals by Virtual Center (e.g. output of rpm -ihv VMware-vpxa-2.5.0-64192.i386.rpm) |
vpx/vpxa.log | Virtual Center Agent messages |
vmfsqueuetool.log | VMFSQueueTool messages |
webAccess | Web-Access messages |
/proc/vmware/log | VMKernel messages |
/var/log/storage-Monitor | VMKernel storage monitor messages |
/var/log/vmkernel | VMKernel messages (info messages only) |
/var/log/vmkproxy | VMKernel userworld proxy messages |
/var/log/vmk-summary | VMKernel messages (notice and higher) |
/var/log/vmk-warning | VMKernel warning messages |
NetApp commands for Volume / LUN management
vol options <volname> fractional_reserve 0
This command sets the fractional reserve to zero percent, down from the default of 100 percent. Note that fractional reserve only applies to LUNs, not to NAS storage presented via CIFS or NFS.
snap autodelete <volname> trigger snap_reserve
This sets the trigger at which Data ONTAP will begin deleting Snapshots. In this case, Snapshots will start getting deleted when the snap reserve for the volume gets nearly full. The current size of the snap reserve can be viewed for a particular volume with the “snap reserve” command.
snap autodelete <volname> defer_delete none
This command instructs Data ONTAP not to exhibit any preference in the types of Snapshots that are deleted. Options for this command include “user_created” (delete user-created Snapshot copies last) or “prefix” (delete Snapshot copies with a specified prefix string last).
snap autodelete <volname> target_free_space 10
With this setting in place, Snapshots will be deleted until there is 10% free space in the volume.
snap autodelete <volname> on
Now that the Snapshot autodelete options have been configured, this command will actually turn the functionality on.
vol options <volname> try_first snap_delete
When a FlexVol runs into an issue with space, this option tells Data ONTAP to first try to delete Snapshots in order to free up space. This command works in conjunction with the next command:
vol autosize <volname> on
This enables Data ONTAP to automatically grow the size of a FlexVol if the need arises. This command works hand-in-hand with the previous command; Data ONTAP will first try to delete Snapshots to free up space, then grow the FlexVol according to the autosize configuration options. Between these two options—Snapshot autodelete and volume autogrow—you can reduce the fractional reserve from the default of 100 and still make sure that you don’t run into problems taking Snapshots of your LUNs.
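Putting the whole sequence together for a hypothetical FlexVol named myvol (a sketch of the 7-mode console session; the volume name and the 10% target are placeholders you would tune):

```
vol options myvol fractional_reserve 0
snap autodelete myvol trigger snap_reserve
snap autodelete myvol defer_delete none
snap autodelete myvol target_free_space 10
snap autodelete myvol on
vol options myvol try_first snap_delete
vol autosize myvol on
```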
SnapMirror and Deduplication
In a recent blog, I talked about the interaction between deduplication and SnapVault. In this post I’ll discuss SnapVault’s cousin – SnapMirror.
SnapVault was designed to make efficient D2D backup copies, but SnapMirror has a different purpose – making replication copies. Using good old Snapshot technology, SnapMirror transfers snapshots from one storage system to another, usually from the data center to an offsite disaster recovery location.
SnapVault and SnapMirror have many similarities, but there is one important item that distinguishes these two cousins: unlike SnapVault, SnapMirror relationships are peer-based and can be reversed. In fact, when we talk about SnapMirror pairs, we don’t use the terms primary and secondary as we do with SnapVault; instead we refer to source and destination systems. Either of the SnapMirror systems can be a source or a destination; it just depends on the direction the snapshots are moving. Take a look at the diagram below to get a better understanding of what I mean:
I’ve used this diagram in dozens of customer briefings, and I use it to point out the subtle differences between SnapVault and SnapMirror. First of all, notice the arrows. SnapVault’s go from left to right only, but SnapMirror’s arrows travel in both directions. Normally, the SnapMirror source system (the one on the left) controls the flow of application data to servers and clients. However, if the source system goes down for some reason, the SnapMirror destination system (on the right) takes control; we call this a “Failover” event. When we bring the source system back up and revert control to it, we call this a “Failback”. In either case, Snapshot copies are passed back and forth between the systems to ensure that both the source and destination systems are synchronized to the current point in time, using the most current SnapMirror copy.
Now, let’s talk about using deduplication with SnapMirror. There are two types of SnapMirror replication, and deduplication behaves differently with each type.
The first type is called Qtree SnapMirror, or QSM. As the name implies, QSM performs replication at the Qtree level. What is a Qtree? It’s a logical subset of a NetApp volume. Storage admins like to use Qtrees in NAS systems when they need to administer quotas or set access permissions. Much more info on the whys and hows of Qtrees can be found in the Data ONTAP System Administration Guide on the NOW support site.
In the context of deduplication, QSM presents a bit of a problem. Since replication is done at the logical level, any deduplication done at the source will be re-inflated at the destination and will need to be deduplicated all over again. This largely defeats the purpose of space reduction. But there is one valuable use case: if you don’t want to dedupe the source and only want to deduplicate the destination, QSM makes perfect sense. Refer to the following diagram:
As the diagram shows, with QSM, only the Qtree portion of the volume is replicated and it is only deduplicated at the DR site. To configure QSM for deduplication, just enable it on the destination volume and set the deduplication schedule to “auto”. The source volume will remain untouched and the destination volume will deduplicate automatically. Failovers and Failbacks will work just fine, since any replication from the destination back to the source will be un-deduplicated.
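In 7-mode CLI terms, the QSM arrangement amounts to enabling dedup on the destination volume only; a sketch, with a placeholder volume name (the “auto” schedule is the setting the text above describes):

```
# On the destination system only
sis on /vol/dst_vol
sis config -s auto /vol/dst_vol
```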
The second type of replication is Volume SnapMirror, or VSM, which takes a different approach. VSM replicates entire volumes (including Qtrees) at the physical level. What this means for deduplication is that blocks are replicated once, and any deduplication pointers are sent along with the blocks. By replicating at the physical level, the destination volume “inherits” deduplication automatically. Here’s a diagram that shows how VSM works with deduplication:
To configure VSM for deduplication, enable it on both the source and destination volumes, but only set the deduplication schedule at the source. The source volume will do all the work, and the destination volume will get deduplication for free. After a Failover/Failback event, you might want to run a deduplication scan on the source volume (sis start -s) to pick up any duplicate blocks that might have been written to the destination during the Failover – but then again, it’s probably a very small amount that won’t be worth the effort. Your choice.
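The VSM counterpart, sketched the same way: dedup enabled on both volumes but scheduled only at the source. Volume names and the schedule string are placeholders; check the sis man page for your release’s exact schedule syntax.

```
sis on /vol/src_vol                    # source: enabled and scheduled
sis config -s sun-sat@23 /vol/src_vol
sis on /vol/dst_vol                    # destination: enabled, no schedule
# optional cleanup scan on the source after a Failback:
sis start -s /vol/src_vol
```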
In a nutshell, that’s how deduplication and SnapMirror work together. If you’d like to read a much more complete description, here is an excellent Technical Report that includes best practices.
SANtricity E-Series
To manage NetApp E-Series arrays, you use the SANtricity Storage Manager software. SANtricity is easy and intuitive to use. Figure 1 shows the first GUI screen after you have selected an array to manage.
The process of configuring storage on an E-Series array is, in my opinion, really not that different from doing it on a FAS storage system. Of course, the commands are different. However, conceptually, they are quite similar. I’m going to mention just a few very basic storage management tasks on an E-Series array and draw some comparisons between E-Series arrays and FAS storage systems.
Configure Host Access
This step basically establishes the path(s) between the array and host(s) such that the host can access the storage. This is similar to creating igroup(s) on a FAS storage system. Note that with E-Series arrays, you can configure host access manually or automatically. Auto-config involves a host discovery step by SANtricity. To ensure the path is working, SANtricity creates a tiny 20-MB volume on the array and presents it as a 20-MB disk to the host (see Fig. 2). So, don’t be alarmed if you see this disk show up on the host; it’s actually a common practice among a number of storage vendors.
Create Volume Groups and Volumes
A Volume Group is a logical storage entity that aggregates a number of physical drives. When you create a Volume Group, you select the number of disks as well as a RAID level (e.g., 0, 1, 5 or 6) for that Volume Group. Think of it as an aggregate and FlexVol combined on FAS, although they are not quite the same. Within a Volume Group, you can create one or more volumes, which are similar to LUNs on FAS. Figure 3 shows the relationship between Volume Groups and volumes.
Create Host-to-Volume Mappings
This step maps host(s) to volumes so that the host can access the volumes, and thus the storage array. It is very similar to LUN mapping on FAS. Figure 4 captures the state after several volumes have been mapped to the host x3550-test. Note that if multipathing is used, the proper DSM should be installed and/or configured.
Configure Hot Spares
On E-Series arrays, hot spares should be configured so they can be used automatically in place of a failed drive in a volume group. When you configure hot spare disks, you can select which disks, and how many, should be hot spares. Note that hot spares are global, meaning they can be used by any volume group in the array. Figure 5 shows two hot spares that have been configured. This step differs from FAS, since on FAS storage systems unconfigured disks automatically act as hot spares.
SANtricity Storage Manager is a powerful storage management tool. Here, I only touched on a few very basic tasks. Yet these simple configuration steps are enough to let a host access an E-Series array and perform I/O operations (read and write) between the host and the array.
Space Reclaimer for NetApp SnapDrive
SnapDrive for Windows 6.3, which was released last year, introduced support for VMDKs on NFS & VMFS datastores.
A couple of quick notes: you need Data ONTAP 7.3.4 or later to use block reclamation with SnapDrive for VMDKs.
You need to have VSC 2.0.1 or later installed, with the Backup and Recovery feature, and also SnapDrive (within the VM) must have network connectivity to the VSC (port 8043, by default) as well as Virtual Center.
Also, SnapDrive cannot create VMDKs for you, in the way it can create RDM LUNs. Instead, you have to create VMDKs the old fashioned way, but once they’re attached to the VM, SnapDrive will be able to manage them.
Okay, so I’ve got a VMDK (my C: Drive), which is in an NFS Datastore.
I copied 5GB worth of data into the C: drive, then deleted it. This left my VMDK at 10GB in size.
So, Windows took up about 5GB, and the data (which is now deleted) was about 5GB – so let’s kick off space reclamation and see how much space I get back.
Right click on the VMDK, and select “Start Space Reclaimer”.
It will do a quick check, to see if it actually can reclaim any space for you.
The confirmation box reckons I can reclaim up to 3GB? Hmm, I was hoping for a bit more than that. Well, let’s run it anyway and see how well it does.
It’s worth noting the warning here – while the VSC requires VMs to be shut down in order to reclaim space, SnapDrive runs space reclamation while the VM is powered up – but it will take a backseat to any actual I/O that’s going on, so you might want to run it in a low-usage period.
So, I clicked okay, and it kicked off space reclamation – and it even gives me a nice little progress bar.
In my lab, it took about 3 minutes, and when it was done, it had shrunk my VMDK down to 5.6 GB.
So it was just being modest earlier when it said I could free up to 3GB!
In total, it has reclaimed 5.2GB – which is actually a little more than the data I copied in and deleted to start with!
NetApp SAN Boot with Windows
Thoughts and ideas concerning booting from SAN when we attempted this with our NetApp array.
- SAN Boot is fully supported by MSFT. The first thing that happened is that we were told that SAN boot is not supported and that we could not get Microsoft support for this configuration. It turns out that this is not correct. SAN boot is fully supported by Microsoft, along with HW partners like NetApp. This TechNet article fully outlines MSFT’s support for SAN Boot: http://support.microsoft.com/kb/305547
- Zoning is the #1 issue with SAN Boot on FC. In talking with the NetApp support team (who were a HUGE help on this issue), we learned that the most common issue with SAN Boot over Fibre Channel is zoning. Because zoning can be complex, it is the most likely cause of error. We strongly recommend you check and then double-check your zoning before opening a support ticket. In our case, the zoning for the server was correct, but we did make a zoning error on another server that we were able to correct on our own.
- Windows MPIO support at install time is limited. Because WinPE is not MPIO-aware, there can be strange results when deploying against a LUN that is visible via multiple paths. Keep in mind that at install time, Windows boots to boot.wim, which runs WinPE instead of a full Windows install. After the bits are copied locally, Windows reboots to complete the install, and at this point Windows is actually running. Because of this, the NetApp support team recommends having only one path to the LUN at install time and then adding paths later, once Windows is up and running and you can enable Windows MPIO.
- AND YET… MPIO is strongly recommended for SAN Boot. Because a Windows host will blue-screen if its boot LUN dies, MPIO is strongly recommended for boot LUNs. This is documented here: http://technet.microsoft.com/en-us/library/cc786214(WS.10).aspx. This can seem contradictory at first, but the bottom line is that MPIO is good; just add it later, once Windows is up and running correctly.
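Once Windows is up and the extra paths are cabled and zoned in, one way to add MPIO after the fact is the built-in mpclaim utility (Windows Server 2008 and later), run from an elevated command prompt. This is a sketch of the common catch-all invocation; check mpclaim’s built-in help before running it, since claiming devices triggers a reboot.

```
rem Install MPIO, claim all attached multipath storage devices, and reboot
mpclaim -r -i -a ""
```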
- Yes, but what about Sysprep? It turns out that MPIO is not supported for Sysprep-based deployments: http://support.microsoft.com/kb/2504934. So, again, you need to configure MPIO post-deployment when you are deploying against sysprep’d images. In the case of NetApp, we strongly recommend using Sysprep boot LUNs, which you can then clone for new machines. This significantly shortens deployment time compared to doing a full Windows install for each new host.
- It’s all about BIOS. Actually installing Windows on a boot LUN requires that Windows Setup sees your target LUN as a valid install path. This means that the server must report this drive as a valid install target or Setup will not let you select this disk. For FC, you will need to enable the BIOS setting and select your target LUN in the HBA setup screen. This process varies by vendor. Then you need to make this disk your #1 boot target in your server’s BIOS. Again, this process varies by manufacturer. As noted above, you should only establish one path; this includes dual-port HBAs – only configure one of the ports.
- Where’s my disk? Once you do all the above correctly, Setup may still refuse to show you the disk. This could be because the correct driver is not present on the install media. One way to fix this is to inject drivers into your Boot.WIM and Install.WIM. This process is required if you are using WDS but optional if you are hand building a server from DVD or ISO. In our case, we were building a single master image that we were going to Sysprep so we simply inserted the media and added the drivers manually during setup.
- OK, the disk is there, but I can’t install! One funny thing about Windows Setup is that if you are installing from DVD, that DVD must be present to install (duh). This is fine, unless you used the process above to add the driver: to do that, you had to remove the install disk, load the drivers, and click Install. Windows will then fail to install with a fairly obtuse error. You need to remove the driver DVD at this point and put the install DVD back in. Seems obvious, but it took me a few minutes to figure out what was wrong the first time I tried it.
The Windows Host Utilities Installation and Setup Guide (http://now.netapp.com/NOW/knowledge/docs/hba/win/relwinhu53/pdfs/setup.pdf) has a very detailed description of Windows support for FC LUNs, including a step-by-step process for configuring boot LUNs.
Avoid Server 2008 VMtools and VUM Woes with PowerCLI
Last week, after upgrading to vSphere 4.1 U1, I noticed a lot of our guests did not have the proper VMtools installed. After a quick look I realized they were all Windows Server 2008 or 2008 R2 guests. The initial update for all of the guests was done using VUM, but the tools install was completely hung on all those systems.
Prepare to wait.
Apparently VUM triggers the “Interactive Services Dialog Detection” in Windows, which looks like the message below.
Just log in and click this on all your guests; you’ll be done by update 2.
Luckily, there is an incredibly easy workaround. Using PowerCLI, you can chain two cmdlets to update your VMtools installs without triggering this nasty little message.
Get-VM | Update-Tools
If you don’t want the machines to reboot, just add -NoReboot to the end of the Update-Tools command. Here is the syntax for updating only Windows 2K8 guests in a cluster named QA.
Get-Cluster "QA" | Get-VM | where {$_.Guest -like "*Server 2008*"} | Update-Tools
Set NetApp NFS Export Permissions for vSphere NFS Mounts
One of the things missing from the NetApp VSC is the ability to set permissions on NFS exports when you add a host to an existing cluster. If you have a lot of NFS datastores and don’t feel like setting permissions across NetApp arrays every time you add a new host, this script should ease the pain. Here are a few other use cases:
- You change a VMkernel IP for NFS traffic on a host
- You add a VMkernel IP for NFS traffic on a host
- You add a new host to a cluster
- You remove a host from a cluster
$array1VIF = "10.1.1.40", "10.1.1.41", "10.1.1.42", "10.1.1.43"
$array2VIF = "10.1.1.44", "10.1.1.45", "10.1.1.46", "10.1.1.47"
$array1Name = "netapp1"
$array2Name = "netapp2"
$vCenters = "server1", "server2"
$vifLength = $array1VIF[0].Length
$volStart = $vifLength + 9

#Generated Form Function
function GenerateForm {
##############################################################
# Code Generated By: SAPIEN Technologies PrimalForms (Community Edition) v1.0.8.0
# Generated On: 10/24/2010 9:34 PM
# Generated By: theselights.com
##############################################################

#region Import the Assemblies
[reflection.assembly]::loadwithpartialname("System.Windows.Forms") | Out-Null
[reflection.assembly]::loadwithpartialname("System.Drawing") | Out-Null
#endregion

#region Generated Form Objects
$form1 = New-Object System.Windows.Forms.Form
$cancelButton = New-Object System.Windows.Forms.Button
$okButton = New-Object System.Windows.Forms.Button
$groupBox1 = New-Object System.Windows.Forms.GroupBox
$vcenter = New-Object System.Windows.Forms.ComboBox
$groupBox2 = New-Object System.Windows.Forms.GroupBox
$nfsDatastores = New-Object System.Windows.Forms.ListBox
$InitialFormWindowState = New-Object System.Windows.Forms.FormWindowState
#endregion Generated Form Objects

#----------------------------------------------
#Generated Event Script Blocks
#----------------------------------------------
#Provide Custom Code for events specified in PrimalForms.
$handler_vcenter_DropDownClosed= {
    Connect-VIServer $vcenter.SelectedItem
    $nfsDS = get-datastore | where {$_.Type -eq "NFS"} | get-view | select Name,@{n="url";e={$_.summary.url}}
    $nfsDS | % {$nfsDatastores.Items.Add($_.URL.substring($volStart)) | Out-Null }
}

$handler_vcenter_DropDown= {
    $nfsDS | % {$nfsDatastores.Items.Remove($_.url.substring($volStart)) | Out-Null }
    $nfsDatastores.Items.Remove("Select a Virtual Center to gather NFS mounts.") | Out-Null
}

$okButton_OnClick= {
    $esxNFSIP = Get-VMHostNetworkAdapter -VMKernel | where {$_.PortGroupName -like "*NFS*"} | select IP -Unique
    $esxNFSIP = $esxNFSIP | % {$_.IP}
    Foreach ($ds in $nfsDS) {
        $nfsVIF = $ds.url.substring(8,$vifLength)
        $nfsMount = $ds.url.substring($volStart)
        $nfsName = $ds.name
        #//// Set permissions on source NFS exports
        $array1VIF | % { If ($_ -eq $nfsVIF) { $storageArray = $array1Name } }
        $array2VIF | % { If ($_ -eq $nfsVIF) { $storageArray = $array2Name } }
        Connect-NaController $storageArray
        Set-NaNfsExport $nfsMount -Persistent -ReadWrite $esxNFSIP -Root $esxNFSIP
    }
}

$cancelButton_OnClick= {
    $form1.close()
}

$OnLoadForm_StateCorrection= {
    #Correct the initial state of the form to prevent the .Net maximized form issue
    $form1.WindowState = $InitialFormWindowState
}

#----------------------------------------------
#region Generated Form Code
$form1.Text = "Set VMware NFS Permissions"
$form1.Name = "form1"
$form1.DataBindings.DefaultDataSourceUpdateMode = 0
$System_Drawing_Size = New-Object System.Drawing.Size
$System_Drawing_Size.Width = 344
$System_Drawing_Size.Height = 379
$form1.ClientSize = $System_Drawing_Size
$cancelButton.TabIndex = 5
$cancelButton.Name = "cancelButton"
$System_Drawing_Size = New-Object System.Drawing.Size
$System_Drawing_Size.Width = 103
$System_Drawing_Size.Height = 23
$cancelButton.Size = $System_Drawing_Size
$cancelButton.UseVisualStyleBackColor = $True
$cancelButton.Text = "Cancel"
$System_Drawing_Point = New-Object System.Drawing.Point
$System_Drawing_Point.X = 204
$System_Drawing_Point.Y = 328
$cancelButton.Location = $System_Drawing_Point
$cancelButton.DataBindings.DefaultDataSourceUpdateMode = 0
$cancelButton.add_Click($cancelButton_OnClick)
$form1.Controls.Add($cancelButton)
$okButton.TabIndex = 4
$okButton.Name = "okButton"
$System_Drawing_Size = New-Object System.Drawing.Size
$System_Drawing_Size.Width = 103
$System_Drawing_Size.Height = 23
$okButton.Size = $System_Drawing_Size
$okButton.UseVisualStyleBackColor = $True
$okButton.Text = "Set Permissions"
$System_Drawing_Point = New-Object System.Drawing.Point
$System_Drawing_Point.X = 45
$System_Drawing_Point.Y = 328
$okButton.Location = $System_Drawing_Point
$okButton.DataBindings.DefaultDataSourceUpdateMode = 0
$okButton.add_Click($okButton_OnClick)
$form1.Controls.Add($okButton)
$groupBox1.Name = "groupBox1"
$groupBox1.Text = "Virtual Center"
$System_Drawing_Size = New-Object System.Drawing.Size
$System_Drawing_Size.Width = 265
$System_Drawing_Size.Height = 94
$groupBox1.Size = $System_Drawing_Size
$System_Drawing_Point = New-Object System.Drawing.Point
$System_Drawing_Point.X = 42
$System_Drawing_Point.Y = 26
$groupBox1.Location = $System_Drawing_Point
$groupBox1.TabStop = $False
$groupBox1.TabIndex = 2
$groupBox1.DataBindings.DefaultDataSourceUpdateMode = 0
$form1.Controls.Add($groupBox1)
$vcenter.FormattingEnabled = $True
$System_Drawing_Size = New-Object System.Drawing.Size
$System_Drawing_Size.Width = 226
$System_Drawing_Size.Height = 21
$vcenter.Size = $System_Drawing_Size
$vcenter.DataBindings.DefaultDataSourceUpdateMode = 0
$vcenter.Name = "vcenter"
$vCenters | % {$vcenter.Items.Add($_) | out-null}
$System_Drawing_Point = New-Object System.Drawing.Point
$System_Drawing_Point.X = 19
$System_Drawing_Point.Y = 35
$vcenter.Location = $System_Drawing_Point
$vcenter.TabIndex = 0
$vcenter.add_DropDownClosed($handler_vcenter_DropDownClosed)
$vcenter.add_DropDown($handler_vcenter_DropDown)
$groupBox1.Controls.Add($vcenter)
$groupBox2.Name = "groupBox2"
$groupBox2.Text
= "NFS Mounts" $System_Drawing_Size = New-Object System.Drawing.Size $System_Drawing_Size.Width = 262 $System_Drawing_Size.Height = 167 $groupBox2.Size = $System_Drawing_Size $System_Drawing_Point = New-Object System.Drawing.Point $System_Drawing_Point.X = 45 $System_Drawing_Point.Y = 141 $groupBox2.Location = $System_Drawing_Point $groupBox2.TabStop = $False $groupBox2.TabIndex = 3 $groupBox2.DataBindings.DefaultDataSourceUpdateMode = 0 $form1.Controls.Add($groupBox2) $nfsDatastores.FormattingEnabled = $True $System_Drawing_Size = New-Object System.Drawing.Size $System_Drawing_Size.Width = 226 $System_Drawing_Size.Height = 134 $nfsDatastores.Size = $System_Drawing_Size $nfsDatastores.DataBindings.DefaultDataSourceUpdateMode = 0 $nfsDatastores.Items.Add("Select a Virtual Center to gather NFS mounts.")|Out-Null $nfsDatastores.HorizontalScrollbar = $True $nfsDatastores.Name = "nfsDatastores" $System_Drawing_Point = New-Object System.Drawing.Point $System_Drawing_Point.X = 16 $System_Drawing_Point.Y = 24 $nfsDatastores.Location = $System_Drawing_Point $nfsDatastores.TabIndex = 0 $groupBox2.Controls.Add($nfsDatastores) #endregion Generated Form Code #Save the initial state of the form $InitialFormWindowState = $form1.WindowState #Init the OnLoad event to correct the initial state of the form $form1.add_Load($OnLoadForm_StateCorrection) #Show the Form $form1.ShowDialog()| Out-Null } #End Function #Call the Function GenerateForm