Researchers from fields as diverse as biology and urban planning want to use high-performance computing (HPC), but such systems are traditionally difficult to use and time-consuming to manage. The University of Minnesota is exploring the use of Windows HPC Server 2008, which supports its large computing challenges and extends the familiar Windows interface to non-UNIX users. Linux applications port to it easily, and maintenance time is minimal.
Advances in the three-dimensional reconstruction of MRI (magnetic resonance imaging) and CT (computed tomography) scans, and in computer simulators, allow surgeons-in-training to see the blood vessels and organs that they’ll encounter in a medical procedure. But such tools can’t easily help doctors train for and rehearse the experience of interacting with the human tissue of a specific patient.
Hakizumwami Birali Runesha and his colleagues at the University of Minnesota are exploring how to use the compute power of a high-performance computing (HPC) server to support medical applications that meet this need. Runesha is Director of Scientific Computing and Applications at the Supercomputing Institute for Advanced Computational Research at the University of Minnesota. The university is a pioneer in supercomputing, having been the first school in the United States to acquire a supercomputer, in 1981. The Minnesota Supercomputing Institute now has five supercomputers and seven computer laboratories with high-end workstations, software, and visualization equipment.
That expansion of HPC resources has been made possible, in part, by the increasing performance of low-cost processors. Clusters of low-cost computers can work together to quickly solve problems formerly left to single computers working for hours, days, or weeks. With that change has come an increase in the types of researchers who use, or want to use, HPC. Once used mostly for math, physics, and chemistry, HPC now appeals to researchers in fields as diverse as biology and urban planning.
But the software with which these researchers are familiar hasn’t traditionally scaled to support HPC platforms. Researchers who can benefit from HPC resources are not necessarily programmers and aren’t trained in the computer languages and operating systems on which HPC systems have been built. Some researchers don’t work in proximity to such systems, making it difficult to deliver the bandwidth and low latency needed when computing remotely.
Runesha and his colleagues are exploring these HPC issues, in part by using the Windows HPC Server 2008 operating system, the latest version of the Microsoft platform for HPC.
Windows HPC Server is not a traditional choice for academic research, a field generally dominated by the UNIX and Linux operating systems. But Runesha and his team are interested in Windows HPC Server because most academics already know and use the Windows operating system in other parts of their work. The team is studying whether Windows HPC Server can scale to support the needs of researchers, whether applications written for Linux can be easily ported to it, and whether the computing power can be accessed remotely.
The team uses two clusters of computers that run Windows HPC Server. One is a cluster of 10 nodes, each with two quad-core 2.83 GHz Intel Xeon processors, for a total of 80 processor-cores. The other is a 256-node cluster with eight 1.86 GHz Intel Xeon processor-cores per node, for a total of 2,048 processor-cores.
The study team has ported and run several astrophysics applications on the clusters, including one to model the effect of cosmic rays and magnetic fields on “galactic jets”—streams of electrons and positrons expelled by a massive black hole at the center of a galaxy.
Other experiments have tested virtual-reality medical simulator applications for training. A haptic device attached to a PC with force feedback controls an application on a remote server by using a network-transparent interface. The researchers ran the application over a network to simulate the use of the system in a doctor’s office or medical facility, with the computing power coming from a distant data center.
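The network-transparent interface described above can be sketched as a simple exchange: the local PC streams stylus positions to the remote server and reads back force-feedback vectors. The wire format below (three network-order doubles each way) and the spring response are assumptions for illustration, not the simulator’s actual protocol; a local socket pair stands in for the remote server.

```python
import socket
import struct

def send_position(sock, position):
    # One (x, y, z) stylus position, packed as three big-endian doubles.
    sock.sendall(struct.pack("!3d", *position))

def recv_force(sock):
    # Read exactly 24 bytes: one (fx, fy, fz) force-feedback vector.
    buf = b""
    while len(buf) < 24:
        chunk = sock.recv(24 - len(buf))
        if not chunk:
            raise ConnectionError("server closed the stream")
        buf += chunk
    return struct.unpack("!3d", buf)

# Loopback demonstration standing in for the remote data center; this
# stand-in server replies with a simple spring force f = -k * x.
client, server = socket.socketpair()
send_position(client, (0.125, -0.25, 0.5))
x = struct.unpack("!3d", server.recv(24))
server.sendall(struct.pack("!3d", *(-8.0 * c for c in x)))
print(recv_force(client))  # (-1.0, 2.0, -4.0)
client.close()
server.close()
```

In a real deployment the position/force loop would run at haptic rates over the network link, which is why the low latency mentioned earlier matters.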
The Minnesota Supercomputing Institute for Advanced Computational Research study team demonstrated the scalability of Windows HPC Server 2008, as well as the ease in managing and porting applications to it. Specifically, the team found that the operating system offers:
- Scalability. The study shows that Windows HPC Server 2008 is fully scalable, meeting or exceeding the performance of traditional high-performance computing systems on comparable hardware. The study team tested both weak scaling, in which the workload grows in proportion to the number of processors and the running time should hold constant; and strong scaling, in which the workload stays fixed and the running time should drop in proportion as processors are added. “On both the small and large clusters, we showed that the Microsoft software scales perfectly, making it ideal for complex computing problems,” says Runesha.
- Effectiveness over remote networks. The team successfully ran the computing-intensive applications via remote network. “Users can run parallel processing with Windows HPC Server from wherever they are,” says Runesha. “They can access massive, remote clusters as well as small, local clusters—without rewriting software.”
- Easy porting. Porting Linux applications to Windows HPC Server 2008 can be accomplished within days, according to David Porter, High Performance Computing Consultant for the University of Minnesota Supercomputing Institute. “The learning curve for porting applications from Linux to Windows is very easy,” he says.
- Streamlined management. The study team finds Windows HPC Server clusters easy to manage. Without any prior HPC knowledge, Eric Sanford, Windows and Macintosh Administrator at the University of Minnesota Supercomputing Institute, deployed a cluster and was running jobs in 45 minutes. While the university schedules four hours a month for cluster maintenance and updating on Windows and Linux systems, Sanford spends no more than 30 minutes a month on those tasks for the Windows-based cluster. “We spend less time maintaining the cluster than the Linux staff does,” says Sanford. “Our system maintains, updates, and reboots itself.”
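The weak- and strong-scaling tests in the first finding above reduce to a simple efficiency calculation; the sketch below uses invented timings purely for illustration, not the institute’s measurements.

```python
# Parallel efficiency under the two scaling models the study team tested.

def strong_scaling_efficiency(t1, tp, p):
    """Fixed total workload: the ideal runtime on p cores is t1 / p."""
    return t1 / (p * tp)

def weak_scaling_efficiency(t1, tp):
    """Per-core workload held fixed: the ideal runtime stays at t1."""
    return t1 / tp

# Strong scaling: the same job timed on more cores (hypothetical seconds).
t1 = 640.0
for p, tp in [(8, 82.0), (64, 11.5)]:
    print(f"strong scaling, {p:2d} cores: "
          f"efficiency {strong_scaling_efficiency(t1, tp, p):.2f}")

# Weak scaling: workload grows with the core count, so runtime should hold.
for p, tp in [(8, 655.0), (64, 690.0)]:
    print(f"weak scaling,   {p:2d} cores: "
          f"efficiency {weak_scaling_efficiency(t1, tp):.2f}")
```

An efficiency near 1.0 in both tests is what “scales perfectly” means in Runesha’s remark: runtime falls in step with added cores for a fixed job, and holds steady when the job grows with the cores.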
For more information about other Microsoft customer successes, please visit: