At the annual Microsoft Management Summit (MMS), visitors can “test drive” the latest Microsoft products in a lab setting. To reduce the number of computers required to host the MMS labs and improve the visitor experience, Microsoft put its Hyper-V virtualization technology to work at MMS 2011. By using Windows Server 2008 R2 with Service Pack 1, which includes Hyper-V with Dynamic Memory, and Microsoft System Center data center solutions, Microsoft created a private cloud. It trimmed servers by 21 percent and memory costs by U.S.$80,000. The Microsoft team used HP Cloud Foundation for Hyper-V to improve storage I/O performance by a stunning 23,600 percent and reduce storage needs by 80 percent. At one point, the MMS labs team was provisioning 530 virtual machines per minute, a level of agility that enabled more visitors to experience the labs.

Situation
Each year, Microsoft hosts the Microsoft Management Summit (MMS), its premier event for providing the IT community with deep technical information and training on the latest IT management solutions from Microsoft, partners, and industry experts. Arguably the most popular activities at MMS are the hands-on labs, where IT professionals have the opportunity to experiment with newly released Microsoft products and learn about new features, tips and tricks, and best practices. MMS labs are always busy and constantly booked.
In years past, Microsoft dispatched numerous 18-wheel semi-trucks to deliver hundreds of high-end workstations with multiple server-class processors and hard drives, along with miles of cabling and a large crew of technicians, to set up these labs. In 2009, MMS labs required 650 of these workstations, which consumed 114,000 watts of electricity.
Traditionally, the labs ran on individual workstations. Before the event, technicians had to copy the virtual hard disk (VHD) files for the labs to the client computers, a time-consuming and extensive pre-event configuration. To run the labs, each client computer not only had to meet stringent hardware requirements but also had to be physically identical to every other client computer. Scaling the hands-on labs (HOLs) to include more lab computers largely depended on the number of client computers available that met the hardware requirements, and lab complexity was limited by the physical capabilities of the computers.
In addition to being resource-intensive, physical workstations required continuous, manual, and time-consuming provisioning, and reprovisioning, as the MMS labs crew reconfigured servers to suit the constantly changing needs of visitors. Lengthy workstation reconfiguration meant longer lines at MMS labs and less chance that visitors would stay to learn about new Microsoft products.
When Microsoft introduced the Windows Server 2008 operating system with Hyper-V technology, it began to use Hyper-V to virtualize some aspects of MMS labs. By MMS 2010, Microsoft used Hyper-V to fully virtualize MMS labs and Microsoft System Center data center solutions to automate virtual machine provisioning. In 2010, the MMS labs team reduced the hardware needs to only three racks of 41 physical servers. On these 41 servers, the MMS team created a highly virtualized environment. While the MMS 2010 labs were successful, there was room for improvement: reducing the physical and power footprint, reducing cabling, and improving memory efficiency.
“The MMS 2010 labs went so smoothly that we were eager to go even further with Hyper-V virtualization at MMS 2011,” says Jeff Woolsey, Group Program Manager, Virtualization in the Windows Server and Cloud Division at Microsoft.

Solution
For MMS 2011—held in Las Vegas, Nevada, March 21–25—Microsoft set out to achieve even deeper virtual machine density and use System Center solutions in a private cloud environment to provision virtual machines at record-breaking speed.
The Microsoft team partnered with HP to create the MMS labs private cloud and used HP Cloud Foundation for Hyper-V. This solution, which is based on the Microsoft Hyper-V Fast Track reference architecture, utilizes HP BL460c G7 blade servers in HP c7000 enclosures, together with HP StorageWorks EVA 4400 storage systems, connected using HP Virtual Connect FlexFabric technology. Moving to HP Cloud Foundation for Hyper-V enabled the MMS team to pack more servers into less space while achieving greater core counts and symmetric multithreading. The team needed only 32 servers to achieve greater processing power than the 41 servers provided the year before. HP Cloud Foundation for Hyper-V also enabled the team to take advantage of HP FlexFabric and Virtual Connect for flexible networking and simplified cabling.
“The HP Cloud Foundation system was simply exceptional,” Woolsey says. “We didn’t fully max out the system in terms of logical processors, memory, or I/O acceleration, even at a peak load running 2,000-plus virtual machines. Furthermore, the fact that HP connects with System Center solutions provided cohesion between systems management software and the hardware.”
The MMS labs team populated each blade with 128 gigabytes (GB) of memory (though each could accommodate 384 GB). Dynamic Memory, an enhancement to Hyper-V in Windows Server 2008 R2 with Service Pack 1 (SP1), allows memory on a host server to be pooled and dynamically distributed to virtual machines based on workload demand, without service interruption. This provides more consistent system performance and simplifies management for administrators. By using fewer servers (32 rather than 41) and by using Dynamic Memory, the team reduced the physical-memory footprint by 1 terabyte while delivering more labs running more virtual machines.

Storage
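The core idea of demand-driven memory distribution can be sketched in a few lines. This is a hypothetical illustration of the concept, not Hyper-V's actual balancing algorithm; all names and numbers are invented.

```python
# Illustrative sketch of demand-based memory distribution: a host pools
# its physical RAM and shares it among running VMs in proportion to
# unmet demand, while guaranteeing each VM its configured minimum.
# (Hypothetical model, not the real Hyper-V Dynamic Memory algorithm.)

def distribute_memory(host_mb, vms):
    """vms: list of dicts with 'name', 'minimum_mb', and 'demand_mb'."""
    # Every VM first receives its guaranteed minimum.
    grants = {vm["name"]: vm["minimum_mb"] for vm in vms}
    spare = host_mb - sum(grants.values())
    if spare < 0:
        raise ValueError("host is overcommitted below VM minimums")
    # Remaining RAM is shared in proportion to each VM's unmet demand.
    unmet = {vm["name"]: max(vm["demand_mb"] - vm["minimum_mb"], 0) for vm in vms}
    total_unmet = sum(unmet.values())
    for name, need in unmet.items():
        if total_unmet:
            grants[name] += min(need, spare * need // total_unmet)
    return grants

vms = [
    {"name": "lab-vm-1", "minimum_mb": 512, "demand_mb": 2048},
    {"name": "lab-vm-2", "minimum_mb": 512, "demand_mb": 1024},
]
print(distribute_memory(4096, vms))
# {'lab-vm-1': 2048, 'lab-vm-2': 1024}
```

Because idle VMs release memory back to the pool, a host can safely run far more VMs than a static per-VM allocation would allow, which is what made the 1-terabyte reduction possible.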
In 2010, the MMS team used local disks in every server. In 2011, the team decided to change the storage strategy in the following ways:
IO accelerator card. Each blade included a 320-GB HP IO Accelerator mezzanine card. Differencing disks and all the running virtual machines resided on the IO Accelerator card, which provided a drastic improvement in I/O operations per second (IOPS): each card delivered more than 145,000 IOPS, with reads at 750 MB per second and writes at 550 MB per second.
New SAN. The team used two HP StorageWorks EVA 4400 Fibre Channel storage area networks (SANs) populated with 96 drives (15,000 RPM, 300 GB each). The SAN housed the virtual hard disks for all the labs. The SAN had 29 terabytes of raw capacity, though only 64 percent was used.
Other storage-related data of interest:
Disk queue length for each of the hosts largely remained at under 1.0, indicating that few, if any, system requests waited for disk access. When the team deployed 1,600 virtual machines simultaneously (to only half the blades), the disk queue peaked at 1.7.
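The split described above, with read-only master VHDs on the SAN and per-VM differencing disks on local flash, can be modeled as a simple copy-on-write overlay. The sketch below is illustrative only; the class names, lab names, and block layout are invented, not taken from the actual MMS configuration or the VHD format.

```python
# Sketch of the differencing-disk idea: one shared, read-only master
# image per lab, plus a small per-VM overlay that captures only that
# VM's writes. (Hypothetical model; real VHD differencing disks use
# the on-disk VHD format, not Python dicts.)

class MasterVHD:
    def __init__(self, name):
        self.name = name          # shared, read-only, lives on the SAN
        self.blocks = {}          # block number -> data

class DifferencingDisk:
    """Per-VM copy-on-write overlay stored on fast local flash."""
    def __init__(self, parent):
        self.parent = parent
        self.writes = {}          # only changed blocks are stored locally

    def write(self, block, data):
        self.writes[block] = data # never touches the shared master

    def read(self, block):
        # Local changes win; everything else falls through to the master.
        return self.writes.get(block, self.parent.blocks.get(block))

master = MasterVHD("SCCM-lab-master.vhd")
master.blocks[0] = "base OS image"

vm_a, vm_b = DifferencingDisk(master), DifferencingDisk(master)
vm_a.write(0, "vm A's changes")
print(vm_a.read(0))  # vm A's changes
print(vm_b.read(0))  # base OS image
```

This is why the combination worked so well: the hot, write-heavy overlays sit on the 145,000-IOPS local cards, while the large masters are stored once, centrally, on the SAN.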
All I/O was configured for high availability and redundancy. Network adapters were teamed, and Fibre Channel was configured with active-active multipath I/O. None of this redundancy was needed, but it was all configured, tested, and working perfectly.

Private Cloud Management
The MMS team used System Center solutions to manage the labs. It used the beta version of Microsoft System Center Operations Manager 2012 to monitor the health and performance of all the servers running both the Windows and Linux operating systems. To monitor health proactively, it used HP Insight Control together with the HP ProLiant and BladeSystem Management Packs for System Center Operations Manager. Insight Control and the HP Management Packs expose the native management capabilities, such as the ability to monitor, view, and receive alerts for HP servers and blade enclosures through the System Center Operations Manager console.
The team used System Center Virtual Machine Manager 2008 R2 to provision and manage the entire virtualized lab delivery infrastructure and monitor and report on all the virtual machines in the system. The team was able to automatically provision new virtual machines on an as-needed basis, with only seconds between the end of one lab and the beginning of the next. (Unfortunately, there was not enough time to use the beta version of Microsoft System Center Virtual Machine Manager 2012.)
The team also used System Center Configuration Manager 2007 R2 to ensure that all of the host systems were configured in a uniform, consistent manner using Desired Configuration Management (DCM). DCM is a feature in System Center Configuration Manager 2007 R2 that enables IT teams to assess the compliance of computers, such as determining whether the correct Windows operating system is installed and configured appropriately or whether all required applications are installed and configured correctly. Windows Embedded Device Manager was also used to help manage the 650 thin clients on the show floor.
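The compliance-assessment pattern that DCM applies can be sketched as a comparison of each host's reported settings against a desired baseline. The baseline keys and values below are invented for illustration; real DCM configuration baselines are far richer and are authored in Configuration Manager itself.

```python
# Hedged sketch of the Desired Configuration Management idea: compare
# each host's settings against a desired baseline and report drift.
# (The baseline contents here are hypothetical, not an actual DCM baseline.)

BASELINE = {
    "os": "Windows Server 2008 R2 SP1",
    "hyperv_role": True,
    "dynamic_memory": True,
}

def assess_compliance(host_settings, baseline=BASELINE):
    """Return a list of (setting, expected, actual) deviations; empty means compliant."""
    return [
        (key, expected, host_settings.get(key))
        for key, expected in baseline.items()
        if host_settings.get(key) != expected
    ]

drifted_host = {
    "os": "Windows Server 2008 R2 SP1",
    "hyperv_role": True,
    "dynamic_memory": False,
}
print(assess_compliance(drifted_host))  # [('dynamic_memory', True, False)]
```

Running the same assessment across every host is what gave the team confidence that all 32 blades were configured identically, the property that the physical-workstation labs of earlier years could only achieve by hand.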
The team used Microsoft System Center Service Manager to manage trouble tickets and incident routing. To streamline the attendee lab experience, one of the main logistical requirements was that attendees not be required to log on to use the labs. The show floor was divided into areas, and for each self-paced section, the MMS labs team created a generic area user to handle anonymous incidents. Any incident created in one of those areas was categorized as tier 1 and auto-assigned to the proctor dedicated to that area. From that initial incident, tier-2 proctors could promote issues according to custom templates and even escalate them to tier-3 proctors. Incidents were created using the System Center Service Manager Self-Service Portal and from proctor stations using the System Center Service Manager administrator console.
The System Center solutions ran in a separate virtualized infrastructure: a three-node cluster for high availability. As a precaution, the team used failover clustering with the Live Migration feature in Windows Server 2008 R2 so that workloads could move between nodes if necessary. For networking, the team used a 1-gigabit Ethernet (GbE) connection, teamed for redundancy. For storage, it used iSCSI over 1 GbE with multipath I/O. The SAN was provided by an HP Virtual SAN Appliance running within a Hyper-V virtual machine.

Impressive Visual Aid
To help visitors grasp the scale of the MMS lab cloud, the team wrote an eye-catching application called Hyper-V Mosaic. Hyper-V Mosaic displays thumbnails representing all the virtual machines running at any given moment (Figure 1). The page updated every few minutes, and the thumbnails grew and shrank depending on how many virtual machines were running.
Figure 1. The Hyper-V Mosaic application displays the virtual machines in use at 3:00 P.M. on Wednesday, March 23, 2011. At the time, 1,246 virtual machines were running on 32 servers.
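A mosaic view like Figure 1 has to resize its tiles as the VM count changes. The sketch below is a guess at that layout logic (the Hyper-V Mosaic source was not published): choose the smallest grid whose cells keep roughly the screen's aspect ratio and still fit every running VM.

```python
# Hypothetical sketch of a mosaic layout: pick a grid sized to the
# screen's aspect ratio that holds vm_count tiles, so thumbnails
# shrink automatically as more VMs come online.
import math

def mosaic_grid(vm_count, width=1920, height=1080):
    """Return (columns, rows, tile_width, tile_height) for vm_count tiles."""
    if vm_count == 0:
        return (0, 0, 0, 0)
    # Columns chosen so tiles stay close to the screen's aspect ratio.
    cols = math.ceil(math.sqrt(vm_count * width / height))
    rows = math.ceil(vm_count / cols)
    return (cols, rows, width // cols, height // rows)

print(mosaic_grid(1246))  # (48, 26, 40, 41)
```

At the 1,246-VM moment captured in Figure 1, this scheme would draw a 48-by-26 grid of 40-by-41-pixel thumbnails on a 1080p display.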
Upper Limits Tested
After a few days of successfully running thousands of virtual machines in hundreds of labs and seeing that the HP hardware was not being taxed, the MMS labs team was curious to see how many virtual machines the private cloud could accommodate. So, one night after the labs were closed, the team loaded up the host servers and achieved the following:
Across the board and using every possible metric, the MMS 2011 lab showed significant improvements over the environment used in 2010. It delivered lower server costs, less power consumption, dramatic performance improvements, greater flexibility, and a better experience for MMS attendees.
Hardware Trimmed 21 Percent, Memory Savings of $80,000
Over the course of the week, the MMS labs team provisioned approximately 40,000 virtual machines. This is the same number that was provisioned in 2010, but in 2011 it was done using nine fewer servers and more than a terabyte less RAM. “The Dynamic Memory feature within Hyper-V greatly improved virtual machine density, enabled consistent performance and scaling, and saved approximately [U.S.]$80,000 by using 1 terabyte less memory,” Woolsey says. “This is a very significant memory savings.”
The lab team was allocated 500 square feet for MMS 2011 but only needed 17 square feet due to the greatly reduced number of servers (the labs consumed 306 square feet in 2009). In 2010, the team needed 82 network cables; in 2011, it needed only 12 cables: eight for Ethernet and four for Fiber Channel.
Microsoft required 21 percent fewer servers and 85 percent less cabling at MMS 2011 to deliver 50 percent more processor cores and 134 percent more CPU threads. The following table shows a side-by-side comparison of the servers needed for MMS 2010 and MMS 2011 and the associated savings.
Massive Performance Improvement, More Flexibility
The storage configuration resulted in massive improvements for the MMS labs team. By using the HP I/O Accelerator cards, the team saw a 23,600 percent improvement in total IOPS performance. Average CPU utilization across the entire pool of servers during the labs hovered around 15 percent, and peaks were recorded at about 20 percent. Even with thousands of Hyper-V virtual machines running, lab users barely taxed the well-architected and balanced system.
By using the SAN, the team was able to reduce local storage by 80 percent and centrally manage and share master virtual machines; every blade server was a target for every lab from every seat at MMS. This strategy delivered an unprecedented amount of flexibility. If the team needed an extra 20 System Center Configuration Manager labs from 1:00 to 2:00 P.M. and then needed to switch them to System Center Virtual Machine Manager labs from 2:00 to 3:00 P.M. and to System Center Operations Manager labs from 3:00 to 4:00 P.M., it could.
Microsoft required 80 percent less local storage and 50 percent fewer disks at MMS 2011.

Better Experience for Show Visitors
The MMS labs team gained additional flexibility and agility from using the System Center solutions. Each System Center program integrated so seamlessly into the private cloud solution that the MMS labs team was able to provision more than 2,600 virtual machines in less than 10 minutes at one point. In another instance, the team provisioned 1,600 virtual machines in three minutes, or about 530 virtual machines per minute.
“Provisioning this number of virtual machines in such a short time is unheard of,” Woolsey says. “MMS 2011 was a perfect showcase for Hyper-V and System Center. With the changes in storage strategy and by using System Center tools, we could provision virtual machines faster and better tailor the labs to meet attendee demand. MMS 2011 was a resounding success for Hyper-V, System Center, and HP hardware.”

Microsoft Virtualization
Microsoft virtualization is an end-to-end strategy that can profoundly affect nearly every aspect of the IT infrastructure management lifecycle. It can drive greater efficiencies, flexibility, and cost effectiveness throughout your organization. From accelerating application deployments; to ensuring systems, applications, and data are always available; to taking the hassle out of rebuilding and shutting down servers and desktops for testing and development; to reducing risk, slashing costs, and improving the agility of your entire environment—virtualization has the power to transform your infrastructure, from the data center to the desktop.
For more information about Microsoft virtualization solutions, go to:

For More Information
For more information about Microsoft products and services, call the Microsoft Sales Information Center at (800) 426-9400. In Canada, call the Microsoft Canada Information Centre at (877) 568-2495. Customers in the United States and Canada who are deaf or hard-of-hearing can reach Microsoft text telephone (TTY/TDD) services at (800) 892-5234. Outside the 50 United States and Canada, please contact your local Microsoft subsidiary. To access information using the World Wide Web, go to:
For more information about HP products and services, visit the website at: