Archive for April, 2011

VMware ESX & ESXi Comparison

April 27, 2011
Capability-by-capability comparison of VMware ESX and VMware ESXi:

Service Console

VMware ESX: The Service Console is a standard Linux environment through which a user has privileged access to the VMware ESX kernel. This Linux-based privileged access allows you to highly customize your environment by installing agents and drivers and executing scripts and other Linux-environment code.

VMware ESXi: VMware ESXi is designed to make the server a computing appliance; accordingly, it behaves more like firmware than traditional software. To provide hardware-like security and reliability, VMware ESXi does not support a privileged access environment like the Service Console of VMware ESX. To enable interaction with agents, VMware has provisioned CIM Providers through which monitoring and management tasks – traditionally done through Service Console agents – can be performed. VMware has provisioned the RCLI to allow the execution of scripts.
Remote CLI

VMware ESX: The Service Console has a host CLI command set through which VMware ESX can be configured. ESX 3.5 Update 2 also supports the RCLI.

VMware ESXi: The Service Console CLI has been ported to a Remote CLI (RCLI) for VMware ESXi. The RCLI is a virtual appliance that interacts with VMware ESXi hosts to enable host configuration through scripts or specific commands (a short example follows the note below).

Note: RCLI is limited to read-only access for the free version of VMware ESXi. To enable full functionality of RCLI on a VMware ESXi host, the host must be licensed with VI Foundation, VI Standard, or VI Enterprise.

The following Service Console CLI commands have not been implemented in RCLI:

  • esxcfg-info
  • esxcfg-resgrp
  • esxcfg-swiscsi
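
To illustrate the RCLI model (a minimal sketch; the host name and credentials below are hypothetical, not from the original comparison), a command run locally on the ESX Service Console maps to a remote vicfg call from the RCLI appliance:

# Locally, on the ESX Service Console:
esxcfg-nics -l

# Remotely, from the RCLI appliance against an ESXi host:
vicfg-nics --server esx01.example.com --username root -l
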
Scriptable Installation

VMware ESX: VMware ESX supports scripted installations through utilities like Kickstart.

VMware ESXi: VMware ESXi Installable does not, at this time, support scripted installations in the manner ESX does. It does, however, support post-installation configuration through RCLI-based configuration scripts.

Boot from SAN

VMware ESX: VMware ESX supports boot from SAN. Booting from SAN requires one dedicated LUN per server.

VMware ESXi: VMware ESXi may be deployed as an embedded hypervisor or installed on a hard disk. In most enterprise settings, VMware ESXi is deployed as an embedded hypervisor directly on the server. This operational model requires no local storage, and no SAN booting is needed because the hypervisor image resides directly on the server. The installable version of VMware ESXi does not support booting from SAN.

Serial Cable Connectivity

VMware ESX: VMware ESX supports interaction through a direct-attached serial cable to the host.

VMware ESXi: VMware ESXi does not, at this time, support interaction through a direct-attached serial cable to the host.

SNMP

VMware ESX: VMware ESX supports SNMP.

VMware ESXi: VMware ESXi supports SNMP when licensed with a VI Foundation, VI Standard, or VI Enterprise edition. The free version of VMware ESXi does not support SNMP.

Active Directory Integration

VMware ESX: VMware ESX supports Active Directory integration through third-party agents installed on the Service Console.

VMware ESXi: With a Virtual Infrastructure license and in conjunction with VirtualCenter, VMware ESXi allows users to be authenticated via Active Directory. Users who log in directly to an ESXi host still authenticate with a local username and password. The free version of VMware ESXi does not support Active Directory integration at this time.

HW Instrumentation

VMware ESX: Service Console agents provide a range of hardware instrumentation on VMware ESX.

VMware ESXi: VMware ESXi provides hardware instrumentation through CIM Providers. Standards-based CIM Providers are distributed with all versions of VMware ESXi; VMware partners may add their own proprietary CIM Providers in customized versions of VMware ESXi. To obtain a customized version, you typically have to purchase a server with embedded VMware ESXi through a server vendor. At this time, HP also offers its customized VMware ESXi Installable on www.vmware.com; Dell and IBM will soon offer their customized versions there as well. Remote console applications like Dell DRAC, HP iLO, and IBM RSA are supported with ESXi.

Note: Service Console (COS) agents have a longer lineage than CIM Providers and are therefore more mature. VMware is actively working with its 250+ partners to close the gap between CIM Providers and Service Console agents.

Software Patches and Updates

VMware ESX: VMware ESX software patches and upgrades behave like traditional Linux-based patches and upgrades. Installing a patch or upgrade may require multiple reboots, as the patch or upgrade may have dependencies on previous patches or upgrades.

VMware ESXi: VMware ESXi patches and updates behave like firmware patches and updates. Any given patch or update is all-inclusive of previous patches and updates; that is, installing patch version “n” includes all updates from patch versions n-1, n-2, and so forth.

VI Web Access

VMware ESX: VMware ESX supports managing your virtual machines through VI Web Access. You can use VI Web Access to connect directly to the ESX host or to the VMware Infrastructure Client.

VMware ESXi: VMware ESXi does not support web access at this time.

Licensing

VMware ESX: VMware ESX hosts can be licensed as part of a VMware Infrastructure 3 Foundation, Standard, or Enterprise suite.

VMware ESXi: VMware ESXi hosts can be individually licensed (for free) or licensed as part of a VMware Infrastructure 3 Foundation, Standard, or Enterprise suite. Individually licensed ESXi hosts offer a subset of management capabilities (see SNMP and Remote CLI above).

 

Licensed features by edition (ESX is not available without VI; the VI Foundation, Standard, and Enterprise editions are available with either ESX or ESXi):

Feature                        ESXi Free License  VI Foundation  VI Standard  VI Enterprise
Core hypervisor functionality  Yes                Yes            Yes          Yes
Virtual SMP                    Yes                Yes            Yes          Yes
VMFS                           Yes                Yes            Yes          Yes
VirtualCenter Agent            –                  Yes            Yes          Yes
Update Manager                 –                  Yes            Yes          Yes
Consolidated Backup            –                  Yes            Yes          Yes
High Availability              –                  –              Yes          Yes
VMotion                        –                  –              –            Yes
Storage VMotion                –                  –              –            Yes
DRS                            –                  –              –            Yes
DPM                            –                  –              –            Yes

Categories: VMware

Top 6 VMware vSphere Design Questions

April 27, 2011

Should I use a distributed vSwitch or a standard vSwitch?

  • Do you need to delegate network configuration or the advanced functionality a dvSwitch offers?
  • You trade convenience for additional considerations:

–      Network configuration becomes dependent upon vCenter Server

–      This affects running vCenter Server as a VM

–      Requires Enterprise Plus licensing

  • A “hybrid” approach utilizing both vSwitch types, where possible, provides the best of both worlds

Should I run vCenter Server as a virtual machine?

  • Both options are fully supported by VMware
  • Virtual has advantages (can leverage HA, for example)
  • Physical has advantages (no dependencies on the infrastructure it manages)
  • Virtual introduces new considerations:

–      Need vCenter for dvSwitch control plane

–      What if vCenter is a VM and runs across the dvSwitch?

–      Creates circular dependency

–      Operational concerns with DRS, EVC, VUM

Should I use blades or rack mount servers?

  • From a compute perspective, it’s a wash
  • The impact falls primarily in high availability

–      Must consider HA cluster size and cluster members per chassis

–      Can’t use redundant cards in blades in many instances (no redundant NICs or HBAs)

  • Newer blades offer as much connectivity as many rack mount servers (12 NICs, dual HBAs)
  • More exotic connectivity (InfiniBand, FCoE, PCIe extenders) not as widespread

Should I choose VMware ESX or VMware ESXi?

  • A common but not long-lived question
  • vSphere 4.1 is the last version to contain VMware ESX; all future versions will use only ESXi
  • So, perhaps a better question is, “How can I transition to ESXi?”
  • One step is to familiarize yourself with the vSphere CLI and/or the vSphere Management Assistant
  • If you’re a CLI junkie, get used to “vicfg-” instead of “esxcfg-” (see the sketch below)
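For example (a minimal sketch; the host name and credentials are placeholders, not from the original notes), a local esxcfg command and its remote vicfg counterpart run through the vSphere CLI or the vSphere Management Assistant:

# Local ESX Service Console (going away with the move to ESXi):
esxcfg-vswitch -l

# Same operation run remotely against the host:
vicfg-vswitch --server esx02.example.com --username root -l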

Should I put all my hosts into a single large cluster?

  • There is no one right answer!
  • Are you using blades?

–      Keep <5 cluster members per chassis

–      Must scale number of chassis to scale cluster size

  • With vSphere 4.1 and VAAI, SCSI reservation conflicts are not a gating factor
  • Clusters are not vMotion boundaries; they are only the organizational unit for DRS/HA/FT

Should I lump all my VMs together in one big LUN?

  • LUN layout should be driven more by I/O profile than capacity or number of VMs
  • VAAI hardware-assisted locking eliminates SCSI reservation conflicts
  • There are potential performance benefits to multiple LUNs (each LUN gets its own I/O queue)
  • Less management overhead with fewer LUNs
  • The key is proper storage design to accommodate I/O requirements
Categories: VMware

Linksys E4200 wireless connection & https issues

April 27, 2011

Not too long ago I purchased the Linksys E4200 router for my home environment, and right off the bat I started noticing connection issues with my wireless network.  After reading every blog, email, and forum post you can name, I never really found any answers to help me out.  Being the person that I am, I started playing with every option within the E4200 management window, and after cloning my MAC address and then rebooting the router, my wireless connection is now stable: Setup > MAC Address Clone > Enabled > Clone My PC’s MAC > Save Settings > Reboot the router.  It sounds very simple, but it worked for me.

Another issue I was having was horrible performance in the management window itself (Administration > Management), with each menu option taking 30–60 seconds to open.  I didn’t see this performance issue until after changing the option to use “https”; after changing back to “http”, the issue cleared up.  Maybe the next firmware release will fix this.  I’m not holding my breath.

Categories: Linksys

NetApp: Configuring TWO VLANs for ONE VIF

April 25, 2011


When configuring two VLANs for one vif (using two NICs), you can use the script below in the /etc/rc file on the filer.

In the /etc/rc file:

# Bring the physical NICs up as members of the trunked vif
ifconfig e0a mediatype auto flowcontrol full netmask 255.255.255.0 trunked vif0
ifconfig e0b mediatype auto flowcontrol full netmask 255.255.255.0 trunked vif0

# Create a multi-mode vif with IP-based load balancing across both NICs
vif create multi vif0 -b ip e0a e0b

# Create VLAN interfaces 101 and 106 on top of the vif
vlan create vif0 101 106

# Assign an address to each VLAN interface ('hostname' is the filer's hostname)
ifconfig 'hostname'-vif0-101 Ipaddress1 netmask 0xffffff00 mtusize 1500 broadcast BcastIP partner vif0-101
ifconfig 'hostname'-vif0-106 Ipaddress2 netmask 0xffffff00 mtusize 1500 broadcast BcastIP partner vif0-106

and ensure the /etc/hosts file contains:

Ipaddress1  filername  filername-vif0-101

Ipaddress2  filername-vif0-106

In the partner’s /etc/rc:

ifconfig e0a mediatype auto flowcontrol full netmask 255.255.255.0 trunked vif0
ifconfig e0b mediatype auto flowcontrol full netmask 255.255.255.0 trunked vif0

vif create multi vif0 -b ip e0a e0b

vlan create vif0 101 106

ifconfig 'hostname'-vif0-101 Ipaddress3 netmask 0xffffff00 mtusize 1500 broadcast BcastIP partner vif0-101

ifconfig 'hostname'-vif0-106 Ipaddress4 netmask 0xffffff00 mtusize 1500 broadcast BcastIP partner vif0-106

In the partner’s /etc/hosts file:

Ipaddress3  filername  filername-vif0-101

Ipaddress4  filername-vif0-106
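After a reboot, the configuration can be sanity-checked from the filer console (a quick sketch; the exact output varies by Data ONTAP release):

# Confirm the vif is up and both links are active
vif status vif0

# Confirm the VLAN interfaces and their addresses
ifconfig -a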

Categories: netapp

NetApp Fractional Reserve

April 25, 2011

Fractional Space Reservation
I see a lot of good posts about FSR (fractional space reservation) for LUN-based volumes, and while they do a great job of explaining the concepts, it might be nice to see an example of how it works. That way, IT staff can test this capability in their own environments and see the behavior for themselves.

The first thing we need to do is create a 200MB volume to hold a LUN. Let’s call it voltest and set it up for holding LUNs:

fas> vol create voltest aggr1 200m
The new language mappings will be available after reboot.
Creation of volume 'voltest' with size 200m on containing aggregate 'aggr1' has completed.
fas> snap reserve voltest 0
fas> snap sched voltest 0
Now that we’ve configured the new volume (and set its snapshot reserve and snapshot schedule to 0), let’s set the FSR to 0%. This ensures that when snapshots are taken in a LUN-based volume, no additional space is set aside. The FSR default is 100%, meaning that if you take a snapshot, ONTAP will attempt to set aside 100% of the space used inside the LUNs in the volume, so there is room even if every block changes. (With a 100MB LUN, for example, the default reserve sets aside an additional 100MB at snapshot time.) This is a great safety net, but it definitely takes up a lot more space in your volumes:

fas> vol options voltest fractional_reserve 0
Okay, so now when we take a snapshot, no additional space will be reserved in the volume. But we aren’t done yet.

There are two additional options we should consider using with this volume, both of which can be immensely useful in letting ONTAP dynamically handle snapshots that use more space than anticipated. Sometimes users change more data and our snapshots require more space; sometimes we don’t delete manually created snapshots; and sometimes we just grow faster than expected.

The first option to configure is vol autosize. This option lets us automatically increase the size of a volume if we start to use more snapshot space:

fas> vol autosize voltest -i 20m -m 300m
vol autosize: Flexible volume 'voltest' autosize settings UPDATED.
This command tells ONTAP: you may increase the size of the volume 20MB at a time, but only up to 300MB. If for some reason we need more space than expected, ONTAP can grow the volume as needed.
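Running vol autosize with no other arguments prints the current settings, which makes for a quick sanity check (the exact wording of the output varies by ONTAP release):

fas> vol autosize voltest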

The second option to configure is snap autodelete. This feature tells ONTAP that it can start to delete snapshots if it finds that it needs even more space in the volume:

fas> snap autodelete voltest on
snap autodelete: snap autodelete enabled
fas> snap autodelete voltest
snapshot autodelete settings for voltest:
state : on
commitment : try
trigger : volume
target_free_space : 20%
delete_order : oldest_first
defer_delete : user_created
prefix : (not specified)
destroy_list : none
When we print out the default parameters for snap autodelete, we see there are a lot of tunable parameters to choose from. For now, let’s leave these at the default settings, although we can always change them later to test other behaviors. For example, if we change target_free_space from 20% to 10%, autodelete will remove fewer snapshots when it triggers, stopping once 10% of the volume is free instead of 20%. The key here is that ONTAP says: if space gets really tight, I’ll start deleting snapshots for you automatically to make sure the LUN stays online. Good stuff.
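If we did want to lower that threshold, the change is a one-liner (shown for illustration only; we leave the default in place for this walkthrough):

fas> snap autodelete voltest target_free_space 10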

Finally, let’s check the ordering of these two features. Both are great, but we want to make sure ONTAP tries to grow the volume before it starts deleting snapshots. That way we consume space up to our growth limit first, and only once we hit the volume’s maximum do snapshots start being deleted:

fas> vol options voltest
nosnap=off, nosnapdir=off, minra=off, no_atime_update=off, nvfail=off,
ignore_inconsistent=off, snapmirrored=off, create_ucode=on,
convert_ucode=on, maxdirsize=31457, schedsnapname=ordinal,
fs_size_fixed=off, compression=off, guarantee=volume, svo_enable=off,
svo_checksum=off, svo_allow_rman=off, svo_reject_errors=off,
no_i2p=off, fractional_reserve=0, extent=off, try_first=volume_grow,
read_realloc=off, snapshot_clone_dependency=off
You can see the try_first parameter is set to volume_grow (the default). This ensures ONTAP tries the volume-growth feature first, before autodeleting snapshots.
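For reference (not what we want in this walkthrough), the order could be flipped so snapshots are deleted before the volume grows:

fas> vol options voltest try_first snap_delete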

Now that we’ve created a volume, let’s create a LUN. I’m going to create the LUN manually, but it’s just as easy to do with SnapDrive (for UNIX or Windows) if you want. I’ll create the LUN in ONTAP and manually map it to a Windows server; from there you’ll have to rescan the disks, create a partition, and set a drive letter in Disk Management through the MMC, or you can just use the Create Disk feature in SnapDrive for Windows:

fas> lun create -s 100m -t windows /vol/voltest/luntest.lun
lun create: created a LUN of size: 102.0m (106928640)
fas> igroup create -i viaRPC.iqn.1991-05.com.microsoft:w2k3srvr.microsoft.com iqn.1991-05.com.microsoft:w2k3srvr.microsoft.com
fas> lun map /vol/voltest/luntest.lun viaRPC.iqn.1991-05.com.microsoft:w2k3srvr.microsoft.com

If you have SnapDrive, I highly recommend using it to create your LUNs instead of doing it manually, but I wanted to show that you can do this without SnapDrive if you really want to.

Now that we’ve created our volume, set up snapshot autodeletion and automatic volume growth, and created and mapped a LUN, let’s actually test how this functionality works.

First, we’ll manually create a snapshot for the voltest volume and review how much disk space is taken up:

fas> snap create voltest testsnap.1
fas> df -hr voltest
Filesystem total used avail reserved Mounted on
/vol/voltest/ 200MB 102MB 97MB 0MB /vol/voltest/
/vol/voltest/.snapshot 0GB 0GB 0GB 0GB /vol/voltest/.snapshot
fas> snap list voltest
Volume voltest
working…

%/used %/total date name
———- ———- ———— ——–
0% ( 0%) 0% ( 0%) Mar 23 10:16 testsnap.1

Okay, great. The snapshot is taken, no space is used (snapshots don’t really take up space until data changes in the volume), and there is no reserved space.

The next thing we do is get a copy of dd for Windows to create large random files quickly and painlessly. This is a very handy tool for the purposes of testing the behavior of ONTAP — of course, you can use anything you’d like, even your own files. In this example, we’ve set the drive letter for the new LUN as D:, so all of our new files will be written to that drive. Also note our input device is /dev/random so we’re writing lots of random data to the files:

C:\Temp> dd.exe of=d:80mbfile.txt bs=1M count=80 if=/dev/random
rawwrite dd for windows version 0.5.

80+0 records in
80+0 records out
C:\Temp> dir d:
Volume in drive D is New Volume
Volume Serial Number is 5261-7E0F
Directory of D:\
03/23/2009 11:23 AM 83,886,080 80mbfile.txt
1 File(s) 83,886,080 bytes
0 Dir(s) 19,428,864 bytes free
Okay, so we’ve written an 80MB file to a 100MB LUN. What happened to the volume? Let’s take a look:

fas> snap list voltest
Volume voltest
working…
%/used %/total date name
———- ———- ———— ——–
44% (44%) 37% (37%) Mar 23 10:16 testsnap.1
fas> vol size voltest
vol size: Flexible volume 'voltest' has size 220m.
fas> df -hr voltest
Filesystem total used avail reserved Mounted on
/vol/voltest/ 220MB 182MB 37MB 0MB /vol/voltest/
/vol/voltest/.snapshot 0MB 80MB 0MB 0MB /vol/voltest/.snapshot
So it looks like our volume grew by 20MB, and we’re holding a whole lot of space inside that snapshot (80MB, specifically). Remember, we took the first testsnap.1 snapshot with nothing in the volume or the LUN, and now that snapshot has to hold 80MB of data from the new file we created! So far, everything looks great. ONTAP even logged a system alert to tell us the volume grew:

Mon Mar 23 10:23:46 EST [wafl.vol.autoSize.done:info]: Automatic increase size of volume 'voltest' by 20480 kbytes done.

Now that we’ve seen the volume automatically grow, let’s make another 30MB file to grow it some more (to capacity):

C:\Temp> dd.exe of=d:30mbfile.txt bs=1M count=30 if=/dev/random
rawwrite dd for windows version 0.5.

Error writing file: 112 There is not enough space on the disk
19+0 records in
18+0 records out
C:\Temp> dir d:
Volume in drive D is New Volume
Volume Serial Number is 5261-7E0F
Directory of D:\
03/23/2009 01:40 PM 18,874,368 30mbfile.txt
03/23/2009 11:23 AM 83,886,080 80mbfile.txt
2 File(s) 102,760,448 bytes
0 Dir(s) 554,496 bytes free
As you can see, we’ve filled up the LUN at this point. Let’s see how ONTAP reacts:

Mon Mar 23 12:41:01 EST [wafl.vol.autoSize.done:info]: Automatic increase size of volume 'voltest' by 20480 kbytes done.

The volume has grown again by a bit more. Now let’s get a little aggressive: we’ll take another snapshot, delete all the files on the LUN, and then fill it up again. When we do this, we’ll see a number of things happen:

1. The snapshot growth will be more than the space in the volume, so the size of the volume will have to grow;
2. The volume will not be able to grow beyond its maximum (300MB), and so snapshots will start having to be deleted;
3. The number of snapshots to be deleted will depend on the target free space (20%), so we may end up losing more than one snapshot.

So let’s make a new snapshot in ONTAP for the volume voltest:

fas> snap create voltest testsnap.2
fas> snap list voltest
Volume voltest
working…
%/used %/total date name
———- ———- ———— ——–
0% ( 0%) 0% ( 0%) Mar 23 12:44 testsnap.2
49% (49%) 41% (41%) Mar 23 10:16 testsnap.1
Then we’ll delete the files on the Windows server, and make another really big file:

C:\Temp> del d:\*mbfile.txt
C:\Temp> dd.exe of=d:95mbfile.txt bs=1M count=95 if=/dev/random
rawwrite dd for windows version 0.5.

This program is covered by the GPL. See copying.txt for details
95+0 records in
95+0 records out
C:\Temp> dir d:
Volume in drive D is New Volume
Volume Serial Number is 5261-7E0F
Directory of D:\
03/23/2009 01:46 PM 99,614,720 95mbfile.txt
1 File(s) 99,614,720 bytes
0 Dir(s) 3,700,224 bytes free
Here are the logs that appear in ONTAP on the storage controller when we start making the final large file:

Mon Mar 23 12:46:19 EST [wafl.vol.full:notice]: file system on volume voltest is full
Mon Mar 23 12:46:22 EST [wafl.vol.autoSize.done:info]: Automatic increase size of volume 'voltest' by 20480 kbytes done.
Mon Mar 23 12:46:26 EST [wafl.vol.autoSize.done:info]: Automatic increase size of volume 'voltest' by 20480 kbytes done.
Mon Mar 23 12:46:32 EST [wafl.vol.autoSize.done:info]: Automatic increase size of volume 'voltest' by 20480 kbytes done.
Mon Mar 23 12:46:45 EST [wafl.vol.autoSize.fail:info]: Unable to grow volume 'voltest' to recover space: Volume cannot be grown beyond maximum growth limit
Mon Mar 23 12:46:46 EST [wafl.volume.snap.autoDelete:info]: Deleting snapshot 'testsnap.1' in volume 'voltest' to recover storage
Once we hit our maximum volume growth, we can no longer take space from the aggregate for the volume. At that point, ONTAP moves to the next option, which is to delete snapshots to ensure sufficient space in the volume. It looks at what it needs from a free-space standpoint and starts deleting from the oldest snapshot to recover storage space within the volume. As the log shows, the oldest snapshot, testsnap.1, is deleted, leaving testsnap.2:

fas> snap list voltest
Volume voltest
working…
%/used %/total date name
———- ———- ———— ——–
49% (49%) 32% (32%) Mar 23 12:44 testsnap.2
While this example is great, there are a lot of other things you can do with an FSR value of less than 100%. Here are a few other data points:

• In some Exchange environments, it may be ideal to set the FSR to a small percentage and use options similar to those above, but set the defer_delete option to prefix and set the prefix value to exchsnap, sqlsnap, etc., depending on the snapshot naming of the SnapManager product (if used). Check out TR-3578 as a starting point; it includes settings for base Exchange environments that you can then modify for your environment.
• It may be good to go extremely thin-provisioned by turning off LUN space reservations. This post doesn’t go into that detail, but again, test it in your environment by watching how space is consumed as you change data in the LUN on the server and how ONTAP reacts (see the sketch after this list).
• While it’s great to see these options in action, it’s always best to size your volumes according to the LUN size(s) and the snapshot growth you expect. That way you provision your volumes to match your expectations and only trigger volume autosize or snapshot autodelete if absolutely necessary. The goal is to maintain your SLAs for snapshot retention and never have to autodelete anything.
• Remember, you can always keep FSR at 100%; if you’re sizing your aggregates for spindle performance anyway, you may have plenty of capacity and no need to change it.
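As a minimal sketch of the thin-provisioning idea (assuming a 7-mode filer and the same voltest volume from this walkthrough; thinlun.lun is a made-up name), a reservation can be disabled on an existing LUN or skipped at creation time:

fas> lun set reservation /vol/voltest/luntest.lun disable
fas> lun create -s 100m -t windows -o noreserve /vol/voltest/thinlun.lun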

Categories: netapp

NetApp Java Issues

April 25, 2011

With the release of Data ONTAP 7.3 and above, you might run into some Java issues when trying to run FilerView. On Windows 7, the fix is to uncheck “Use TLS 1.0” in the Java properties.

Under Control Panel you will see “Java”

Double-click Java (or select Properties) and go to the Advanced tab.  Scroll down to the Security setting, expand it, and under General uncheck “Use TLS 1.0”, then click OK. If you have FilerView open, close it and open it back up.

Categories: netapp