I’ve spent a considerable amount of time over the last two weeks gathering performance characterisation statistics for a particular application that we run in our business.
The application in question is a proprietary in-memory financial database that can scale out to run on multiple hosts provided those hosts share a single common backing store – essentially the hosts need to see the same file system so they can peek into each other’s transaction logs to ensure consistency across all the running instances.
What makes this more complex is that the application only runs on Solaris 10 and that the application vendor doesn’t provide any guidance as to the optimum mechanism for providing that shared file system.
I engaged the help of the Dell solution centre in Limerick to assist us in trying to obtain some insight into how this application would function with different possible configurations for providing that shared file system.
Explain DRS / storage DRS affinity and anti-affinity rules
There are two types of DRS rules:
- VM to Host rules
- Allows you to specify rules for whether a particular VM will or won’t run on a particular host or set of hosts
- Allows for Must rules and Should rules
- Must rules will never be violated by DRS, DPM or HA – if the VM needs to power on and no host allowed by the rule is available, the VM will not power on.
- Should rules will be respected unless there is no other option – if the VM needs to power on and no host allowed by the rule is available, the VM will power on anyway, and DRS will seek to move it to an acceptable host as soon as one becomes available.
- VM to VM rules
- Used to keep VMs either on the same host or on separate hosts; the canonical use cases are respecting software licensing arrangements (keep together) and providing for High Availability (keep apart), respectively. A scripted example of an anti-affinity rule follows this list.
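As an illustration, here is a minimal pyVmomi sketch that creates a VM-to-VM anti-affinity rule on a cluster. The vCenter address, credentials, cluster name and VM names are all placeholders, and certificate verification is skipped for brevity – treat it as a sketch rather than production code.

```python
# Minimal sketch: create a VM-to-VM anti-affinity DRS rule with pyVmomi.
# All names and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

context = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='password', sslContext=context)
content = si.RetrieveContent()

def find_obj(vimtype, name):
    """Return the first inventory object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

cluster = find_obj(vim.ClusterComputeResource, 'Cluster01')
vms = [find_obj(vim.VirtualMachine, n) for n in ('web01', 'web02')]

# Anti-affinity rule: keep the two redundant VMs on separate hosts.
rule = vim.cluster.AntiAffinityRuleSpec(name='separate-web-vms', enabled=True, vm=vms)
spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(operation='add', info=rule)])
cluster.ReconfigureComputeResource_Task(spec, modify=True)
```

Swapping AntiAffinityRuleSpec for AffinityRuleSpec would give the keep-together variant used for licensing.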
There are three different types of Storage DRS rules:
- Inter-VM Anti-Affinity (Also known as VM Anti-Affinity)
- Prevents virtual machines from residing on the same datastore within a datastore cluster
- Maximises availability of a set of redundant VMs
- Intra-VM Anti-Affinity (Also known as VMDK Anti-Affinity)
- Prevents specific virtual disks associated with a VM from residing on the same datastore within a datastore cluster
- Canonical use case: separating the log and database disks of database VMs
- Intra-VM Affinity
- (The Default) – Keeps a particular virtual machine’s virtual disks together on the same datastore
- Maximises VM availability when all of its disks are needed in order for the VM to run (see the sketch after this list)
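To make the intra-VM affinity behaviour concrete, here is a hedged sketch (reusing the connection and find_obj helper from the previous example) that turns off the default keep-disks-together setting for one VM in a datastore cluster. The datastore cluster and VM names are placeholders, and the override is added as a new entry – use the 'edit' operation instead if the VM already has a Storage DRS override.

```python
# Hedged sketch: disable the default intra-VM affinity (keep VMDKs together)
# for a single VM in a datastore cluster. Names are placeholders.
pod = find_obj(vim.StoragePod, 'DatastoreCluster01')
vm = find_obj(vim.VirtualMachine, 'db01')

vm_override = vim.storageDrs.VmConfigInfo(vm=vm, intraVmAffinity=False)
sdrs_spec = vim.storageDrs.ConfigSpec(
    vmConfigSpec=[vim.storageDrs.VmConfigSpec(operation='add', info=vm_override)])

# Storage DRS settings for a pod are reconfigured through the StorageResourceManager.
content.storageResourceManager.ConfigureStorageDrsForPod_Task(pod, sdrs_spec, True)
```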
I wanted to secure our VMware View installation with two-factor authentication, and I figured out how to do this using only open-source tools.
I’ve put together a walkthrough detailing how to combine totpcgi, Google Authenticator and freeRADIUS in an Active Directory environment.
Link here: http://vcdxorbust.com/totpcgi-and-freeradius-with-vmware-view
Tune ESXi host memory configuration
ESXi 5.x has five memory management mechanisms:
- Page Sharing
- Ballooning
- Memory Compression
- Swap to Host Cache
- Regular Swapping
If an ESXi host is experiencing Regular Swapping, that is an indication that the VMs running on the host will be experiencing memory-related performance problems. To confirm whether a host is actively swapping memory, follow these steps:
- Navigate to the Hosts and Clusters view in vSphere client
- Select the host and click the performance tab
- Select the advanced view and click chart options
- Under chart options, select Memory / Real-time
- Select the Swap in rate and Swap out rate counters
- Click Apply, OK
A non-zero value for either of these counters indicates that the host is actively swapping.
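The same counters can also be pulled programmatically. Below is a hedged pyVmomi sketch (reusing the connection and find_obj helper from the DRS example above) that queries the real-time swap-in and swap-out rates for a host; the host name is a placeholder.

```python
# Hedged sketch: query real-time swap in/out rates for a host via pyVmomi.
host = find_obj(vim.HostSystem, 'esxi01.example.com')
perf = content.perfManager

# Map dotted counter names (e.g. "mem.swapinRate.average") to counter IDs.
counter_ids = {
    '%s.%s.%s' % (c.groupInfo.key, c.nameInfo.key, c.rollupType): c.key
    for c in perf.perfCounter
}
metric_ids = [vim.PerformanceManager.MetricId(counterId=counter_ids[name], instance='')
              for name in ('mem.swapinRate.average', 'mem.swapoutRate.average')]

query = vim.PerformanceManager.QuerySpec(
    entity=host,
    metricId=metric_ids,
    intervalId=20,   # 20-second real-time samples
    maxSample=15)    # roughly the last five minutes
result = perf.QueryPerf(querySpec=[query])

for series in result[0].value:
    # Values are reported in KBps; anything persistently non-zero means the
    # host is actively swapping.
    print(series.id.counterId, series.value)
```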
An ESXi host that is actively swapping can be examined using esxtop to discover which (if any) VM guests are experiencing performance problems related to this swapping. The %SWPWT counter indicates the percentage of time that the guest is waiting for swapped pages to be read back from disk.
Identify appropriate BIOS and firmware setting requirements for optimal ESXi host performance
The VMware vSphere 5.0 Performance Best Practices guide (http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.0.pdf) has some things to say about BIOS and firmware settings:
- Make sure you are running the latest version of the BIOS available for your system
- Enable “Turbo Boost” in the BIOS if your processors support it
- Enable hyper-threading support in the BIOS for processors that support it
- Some NUMA-capable systems provide an option in the BIOS to disable NUMA by enabling node interleaving; in most cases you will get the best performance by disabling node interleaving (i.e. leaving NUMA enabled).
- Make sure any hardware assisted virtualisation features (VT-x, AMD-V, EPT, RVI etc.) are enabled in the BIOS.
- Disable from the BIOS any devices you won’t be using.
- Cache prefetching mechanisms (sometimes called DPL prefetch, hardware prefetcher, L2 streaming prefetch or adjacent cache line prefetch) usually help performance, especially when memory access patterns are regular. When running applications that access memory randomly, however, disabling these mechanisms might result in improved performance.
- If the BIOS allows the memory scrubbing rate to be configured, VMware recommends leaving it at the manufacturer’s default setting
- In order to allow ESXi to control CPU power-saving features, set power management in the BIOS to “OS Controlled Mode” or equivalent. (A quick way to check some of these settings from the ESXi side is sketched below.)
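As a quick sanity check, the following pyVmomi sketch (again reusing the connection and find_obj helper from the DRS example) reads back how a couple of these BIOS-dependent settings look from the ESXi side: whether hyper-threading is available and active, and which CPU power management policy the host is using. The host name is a placeholder.

```python
# Hedged sketch: confirm hyper-threading and power management as seen by ESXi.
host = find_obj(vim.HostSystem, 'esxi01.example.com')

ht = host.config.hyperThread
print('Hyper-threading available:', ht.available)
print('Hyper-threading active:   ', ht.active)

pm = host.hardware.cpuPowerManagementInfo
print('CPU power management policy:', pm.currentPolicy)
print('Hardware power management support:', pm.hardwareSupport)
```

If hyper-threading shows as available but not active, or the power policy is not what you expect, it is usually worth revisiting the BIOS settings above before looking anywhere else.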