Supplemental Data – for vSphere Optimize and Scale Class
This page is intended to be a single source of supplemental data for the material covered in the VMware vSphere Optimize and Scale class. It is based mostly on frequently asked questions from students who attend the class, and it provides additional details and URLs for items discussed in class.
Storage I/O Control – Injector Model: Version 5.1 of vSphere Storage I/O Control introduced an “Injector Model” that is discussed in the course. This article from Frank Denneman provides more detail on the Injector and why it may not play well with storage systems that provide Auto Tiering.
Hardware MMU Virtualization – Transparent Page Sharing (TPS) – Large Pages: The class addressed the impact of Large Pages (2 MB) and TPS. It also recommended using large pages rather than small pages (4 KB) when dealing with Java applications, to minimize Translation Lookaside Buffer (TLB) misses and improve the performance of Intel EPT and AMD RVI. This article on Boche.net provides more details, including an ESXi host setting (Mem.AllocGuestLargePage=0) to force small pages. It discusses the related design trade-off of performance versus VM consolidation ratio.
VMware has posted a performance paper on this topic that shows the results of tests measuring the impact of hardware-based MMU virtualization provided by Intel EPT (versus software-based MMU virtualization) and large pages (versus small pages) on specific memory-intensive workloads.
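If you want to experiment with the trade-off described above from the command line rather than the client, the large-page setting can be inspected and changed with esxcli. This is a sketch assuming the ESXi 5.x advanced-settings namespace; the option path mirrors the Mem.AllocGuestLargePage setting named above.

```
# Run in the ESXi Shell. Show the current value of the advanced option:
esxcli system settings advanced list -o /Mem/AllocGuestLargePage

# Force small (4 KB) pages for guest memory so TPS can collapse
# identical pages (0 = small pages, 1 = default large-page backing):
esxcli system settings advanced set -o /Mem/AllocGuestLargePage -i 0
```

As the Boche.net article notes, forcing small pages favors consolidation ratio over raw TLB performance, so test with your own workloads before changing it broadly.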
.dvsData folder – Distributed vSwitch Data
DVS information is stored on each ESXi host in /etc/vmware/dvsdata.db. This is a binary file (a database) that can be dumped with the net-dvs command and its “-f” switch. The data is also automatically stored on a shared VMFS volume in a folder named “.dvsData”. Here is an interesting KB article (1010913) on how DVS data stored in vCenter can get out of sync with a host and what action to take to correct the issue:
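The dump described above looks like this in practice (run in the ESXi Shell; the datastore name is a placeholder):

```
# Dump the binary DVS database to readable text:
net-dvs -f /etc/vmware/dvsdata.db

# The per-datastore copy lives in a hidden folder at the root of
# a shared VMFS volume:
ls /vmfs/volumes/<datastore>/.dvsData
```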
dvSwitch – Load Based Teaming
Distributed vSwitches offer an extra choice for NIC Teaming, “Route based on physical NIC load”. Here is a good explanation, plus a comparison to IP-hash based teaming.
dvSwitch Best Practices
Here is a white paper for best practices concerning distributed virtual switches:
Here are more details on Private VLANs.
- Simple Explanation: http://cauew.blogspot.com/2008/08/private-vlans-pvlans.html
- How to configure the dvSwitch and a Cisco switch for Private VLANs: http://www.ntpro.nl/blog/archives/1465-Online-Training-Configure-Private-VLAN-IDs.html
Cisco Nexus 1000V switch
Cisco created the world’s first third-party distributed vSwitch for use with vSphere. Here are some details:
DMZ virtualization using Cisco Nexus 1000v: http://www.vmware.com/resources/techresources/10035
IBM 5000V Distributed vSwitch
IBM now offers a distributed vSwitch. Here are some details: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2009685 and http://www-03.ibm.com/systems/networking/switches/virtual/dvs5000v/
EMC PowerPath VE
The native multi-pathing modules in vSphere do not provide true load balancing; however, vSphere provides the vStorage APIs and a pluggable architecture that permit partner storage vendors to produce their own multi-pathing modules. The primary example is EMC PowerPath/VE, which can be used to provide true load balancing in a vSphere environment.
- Main product page: http://www.emc.com/products/detail/software/powerpath-ve.htm
- Best practices and planning: http://www.emc.com/collateral/software/white-papers/h6340-powerpath-ve-for-vmware-vsphere-wp.pdf
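To see which multipathing plugin is actually claiming your devices (for example, after installing PowerPath/VE), the PSA can be queried with esxcli. This is a sketch assuming the ESXi 5.x storage namespaces:

```
# List multipathing plugins registered with the Pluggable Storage
# Architecture; with PowerPath/VE installed, it appears alongside
# the native "NMP":
esxcli storage core plugin list --plugin-class=MP

# Show the SATP and path-selection policy claiming each NMP device:
esxcli storage nmp device list
```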
TPGS and ALUA:
Explanation of Target Port Group Support (TPGS) and Asymmetric Logical Unit Access (ALUA): http://developers.sun.com/solaris/articles/tpgs_support.html
Memory Management in vSphere
ESXi provides a ballooning mechanism to reclaim RAM from a VM with memory to spare and give it to a VM under memory pressure. Here is a link to a good, detailed article that includes an explanation of ballooning and other memory-related information.
Memory compression – New Feature in vSphere 4.1
The vmkernel now attempts to compress memory into a per-VM compression cache. This step occurs just before the VM-swapping step:
- Memory compression summary: http://www.gabesvirtualworld.com/memory-management-and-compression-in-vsphere-4-1/
- Memory compression whitepaper: http://www.vmware.com/files/pdf/techpaper/vsp_41_perf_memory_mgmt.pdf
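The compression cache can be inspected from the ESXi Shell. This is a sketch assuming the ESXi 5.x advanced Mem options that the whitepaper above describes (MemZipEnable and MemZipMaxPct are the assumed option names):

```
# Is memory compression enabled? (1 = enabled, the default)
esxcli system settings advanced list -o /Mem/MemZipEnable

# Maximum size of the compression cache, as a percentage of the
# VM's configured memory:
esxcli system settings advanced list -o /Mem/MemZipMaxPct
```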
Here is a very detailed document on ESXTOP. It includes detailed descriptions of many counters, including formulas indicating how some counters are calculated from other counters.
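For offline analysis of the counters that document describes, esxtop can be run in batch mode and the output imported into a spreadsheet or perfmon:

```
# Run in the ESXi Shell: -b batch mode, -d seconds between samples,
# -n number of samples. Here: 60 samples at 5-second intervals.
esxtop -b -d 5 -n 60 > /tmp/esxtop-capture.csv
```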
Distributed Power Management (DPM):
- Whitepaper: http://www.vmware.com/files/pdf/distributed_power_management_vsphere.pdf
- Configure DPM for Dell servers: http://it-experts.dk/blogs/rsj/archive/2009/09/05/how-to-configure-dell-servers-for-use-with-vsphere-dpm.aspx
Power Savings in vSphere
- AMD PowerNow Whitepaper: http://www.amd.com/epd/processors/6.32bitproc/8.amdk6fami/x24404/24404a.pdf
The vscsiStats command can be run from the ESXi Shell (or the Service Console on classic ESX) to collect performance data on the activity of the virtual SCSI disks used by VMs.
To list all VMs, their world IDs, and virtual SCSI devices use this command:
Use the following command to collect SCSI related performance data from a specific VM and save the data in a CSV file.
vscsiStats -p all -c -w <worldID> > /tmp/vmstats-<vmname>.csv
When finished, be sure to stop the data collection with this command:
vscsiStats -x -w <worldID>
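Putting the steps above together, an end-to-end capture looks like the sketch below. The -l (list worlds) and -s (start collection) flags are assumed from the standard vscsiStats options; collection must be started before the print step shown above produces data.

```
# 1. List all VMs, their world IDs, and virtual SCSI devices:
vscsiStats -l

# 2. Start collecting for the chosen VM's world:
vscsiStats -s -w <worldID>

# 3. Let the workload run for a while, then print all histograms
#    in compact (CSV) form:
vscsiStats -p all -c -w <worldID> > /tmp/vmstats-<vmname>.csv

# 4. Stop the collection:
vscsiStats -x -w <worldID>
```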
Macro used to build histograms from a CSV file produced by vscsiStats:
Performance Best Practices for vSphere
Performance and Scalability for SQL Server on vSphere:
Running Exchange Server on vSphere using NFS, iSCSI, and Fibre Channel
Configuring Microsoft Network Load Balancer on VMware Infrastructure:
VMware ESX (ESXi) Monitor Modes
The vmkernel can use Binary Translation to run a VM’s virtual CPU, but newer physical CPUs allow the Virtual Machine Monitor (VMM) to be moved onto the pCPU (the VMM runs on the CPU beneath Ring 0). Likewise, newer pCPUs can virtualize the Memory Management Unit (MMU). Hardware-assisted CPU and MMU virtualization allow the VM to execute more efficiently, and they are used automatically whenever the hardware supports them. To determine whether a VM is actually using hardware-based virtualization, follow the advice in this link:
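One quick way to check (an assumption here, not necessarily the method the linked article uses) is that the VMM records its chosen execution and MMU modes in the VM’s vmware.log on the datastore:

```
# Run in the ESXi Shell; the datastore and VM folder are placeholders.
# Look for the monitor-mode lines written at VM power-on:
grep -i "monitor mode" /vmfs/volumes/<datastore>/<vmname>/vmware.log
```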
Host Cache Configuration
In ESXi 5, a new feature called Host Cache was introduced. It allows swapping to host cache, which means swapping to a solid-state drive. Here is a nice article:
Security Hardening and Compliance
Here is the latest released Official VMware Security Hardening Guide for vSphere
vCenter Configuration Manager is worth a look as a tool to help verify compliance with regulatory standards and industry best practices, such as the VMware Hardening Guidelines, PCI, FISMA, HIPAA, and SOX:
Deep Dive DRS
Impact of Memory Reservation
MSCS on vSphere 5
vSphere Log Files
Here is a link that explains the purpose of many of the log files located on ESXi servers:
ESXi log files are not persisted across reboots by default, so it is best to configure a datastore to hold the log files and/or configure central logging via syslog (using a syslog receiver) or via vilogger on the vSphere Management Assistant. Here are some details for syslog:
And more details, including descriptions of the log files:
And details for changing the scratch partition:
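Both of the changes above can be made from the ESXi Shell. This is a sketch assuming the ESXi 5.x esxcli namespaces; the syslog server name and datastore path are placeholders:

```
# Point the host at a remote syslog receiver, then reload the daemon:
esxcli system syslog config set --loghost='udp://<syslogserver>:514'
esxcli system syslog reload

# Set a persistent scratch location on a datastore (takes effect
# after the next reboot):
esxcli system settings advanced set \
    -o /ScratchConfig/ConfiguredScratchLocation \
    -s '/vmfs/volumes/<datastore>/.locker-<hostname>'
```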
Commands and scripting
VMware created a set of commands for the ESXi command shell. The legacy commands begin with the string “esxcfg-”, such as esxcfg-nics and esxcfg-vswitch. Today, VMware provides a single command for almost all purposes: esxcli, which takes a hierarchical set of namespaces and arguments. VMware prefers that such commands be run remotely using the vSphere Command-Line Interface (vCLI), which can be installed on an administrator’s desktop (Windows or Linux) or run from a deployed VMware vSphere Management Assistant (vMA) virtual appliance. Here is some information to get started:
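For a feel of how the legacy commands map onto esxcli, here are two common pairs (ESXi 5.x namespaces):

```
# List physical NICs:
esxcfg-nics -l                     # legacy
esxcli network nic list            # esxcli equivalent

# List standard vSwitches:
esxcfg-vswitch -l                  # legacy
esxcli network vswitch standard list
```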
Several automation tools are available for vSphere. The most powerful is probably PowerCLI. Generally, if a vCenter task requires automation beyond what the vSphere Client provides, or if the client does not adequately provide a desired function, the first place to turn may be PowerCLI. It provides a command set for building scripts that invoke operations on vCenter and the ESXi hosts.
- PowerShell Installation Guide: http://tinyurl.com/9suwbvo
- Blog and download for PowerCLI: http://blogs.vmware.com/vipowershell/
- Sample PowerCLI scripts: Waynes World and Virtu-al
- Running PowerCLI scripts automatically from Actions on triggered alarms: http://blogs.vmware.com/vipowershell/2009/09/how-to-run-powercli-scripts-from-vcenter-alarms.html
Potential Causes for Excessive Received Packet Loss
VMware has recently released the VMware vSphere 5.1 Security Hardening document. Although many parts of vSphere are already well secured by default, administrators of large and growing environments should certainly read and adhere to this guide.