Published: 11/16/2009

Stork Thermeq Manufacturer Expands Research Capabilities with Switch to Windows HPC Server 2008

Stork Thermeq, a provider of products and services for industrial water-steam systems, used a Red Hat Linux high-performance computing (HPC) system to support its research. But the company found that the system took considerable expertise to run and maintain, pulling researchers away from their work, and would have required outside consultants to expand and maintain the system’s structure. The company migrated to the Windows® HPC Server 2008 operating system because it provided greater scalability and easier management. Stork Thermeq also found that the system delivers greater cluster efficiency and flexibility because of the integrated Job Scheduler and because it performs 30 percent faster per node than the company’s previous Linux system. Due to the accelerated performance, Stork Thermeq researchers can run more compute jobs and gain deeper insight for improved products and services.



Stork Thermeq provides products and services for industrial boilers for use in the power and petrochemical industries. The boilers are fired by burners that produce harmful emissions, which the company devotes considerable effort toward minimizing. Stork Thermeq conducts in-depth computational fluid dynamics (CFD) research as part of optimizing and validating the customized boilers and boiler solutions, with a keen eye to minimizing harmful emissions for its customers.
Stork Thermeq researchers used an eight-node high-performance computing (HPC) environment running Red Hat Enterprise Linux 5.2 to conduct CFD calculations using ANSYS Fluent software, but the system couldn’t keep pace with customers’ needs. “Not only was the number of jobs we were asked to take on rapidly increasing, but each job’s CFD calculations were growing more and more complex,” says Dr. Marco Derksen, Manager of Research and Development at Stork Thermeq. “Our HPC system couldn’t perform fast enough for us to keep up. We needed to remove the bottleneck.”

Another issue for Stork Thermeq was that setting up and maintaining jobs, which could last up to four weeks on the system, was not straightforward. “Linux was a major hassle in that respect,” says Derksen. “For example, we had no integrated, automated means of managing our jobs. Moreover, we lacked experience in setting up such a system in the Linux environment, so we had to handle that manually, even coming in on weekends to start up a new job if one ended during off hours.”

“We found that, node for node ... Windows HPC Server 2008 ran 30 percent more quickly than ... our old Linux environment.”
Dr. Marco Derksen
Manager of Research and Development, Stork Thermeq
In addition to scheduling woes, Stork Thermeq lacked a straightforward, cost-effective way of boosting its existing cluster’s capacity to accommodate the growth in demand. Derksen says, “As we went through the planning process for expanding our Linux environment, we realized that we didn’t possess the specialized skills necessary to make it work. We needed to establish an HPC environment that we could scale up easily so that we could accomplish more work in less time and better serve more customers.”


Until relatively recently, Stork Thermeq believed that Linux was the only viable option for HPC. All that changed when Derksen learned about the Windows® Compute Cluster Server 2003 operating system at an ANSYS user group meeting. “To tell you the truth, I hadn’t even known that a Windows-based high-performance computing solution existed,” admits Derksen. “My studies and subsequent work experience were all about Linux and UNIX—Windows was just not a part of the vocabulary when it came to HPC.” Impressed by what he saw, Derksen decided to pursue Windows-based high-performance computing to see if it could solve the scalability and management issues that Stork Thermeq had experienced with Linux.

After carrying out a series of tests in conjunction with Dell and using new Dell hardware, Stork Thermeq ultimately decided to configure a 40-core cluster that ran the newer Windows HPC Server 2008 operating system on Dell PowerEdge R610 computers. The operating system is built on the Windows Server® 2008 operating system and includes the Microsoft message passing interface (MPI). “It was surprisingly easy to set up once we had all the hardware and software onsite,” recalls Derksen. “We essentially clicked 'Next' a few times and the whole thing was up and running.”

The company’s researchers primarily use the Windows HPC Server 2008 cluster to run ANSYS 12.0 flow-modeling software, which they have found to perform better on the new system than the previous combination of ANSYS Fluent 6 and Linux. “The interactions between the Windows HPC Server 2008 Job Scheduler and FLUENT are flawless,” says Derksen. “The two are completely integrated, which I’ve not seen before.”

Stork Thermeq researchers and IT staff alike value the new set of management tools available to them. “I particularly like the HPC Cluster Manager,” says Derksen. “It’s unbelievable that such complexity can be made so simple to manage.” HPC Cluster Manager is the central tool for configuring, deploying, and administering Windows HPC Server 2008 clusters.

The cluster’s users also appreciate that Windows HPC Server 2008 offers seamless remote use through the Microsoft® Remote Desktop Protocol (RDP). “The combination of ANSYS 12.0 and the Remote Desktop Protocol represents a big advancement over Linux, whose remote-access protocol is really limited,” says Derksen. “The look and feel is so similar to working locally that I have to check the corner of the screen to confirm that I’m actually working on a remote desktop.”

In fact, Stork Thermeq researchers loaded a highly complex boiler geometry on the cluster’s graphics node using the remote desktop. They discovered that they could manipulate complex geometry in Autodesk Inventor 2009 faster with RDP on the graphics node than the company’s mechanical engineers could do on their computers locally.

The researchers’ initial use of the new operating environment met with such success that Stork Thermeq embraced Windows HPC Server 2008 as the right path for the company. “Our performance tests were so conclusive that we’re now converting our Linux server to run on Windows HPC Server 2008,” says Derksen. “We’re never going back to Linux.”


For Stork Thermeq, using Windows HPC Server 2008 will help support business growth through faster performance and the ability for deeper research. “We’re in a great position with Windows HPC Server 2008, and we can only grow from here,” says Derksen. “We can respond far better to customer demand than in the past—we don’t have to say ‘No’ anymore.”


We’re in a great position with Windows HPC Server 2008, and we can only grow from here. We can respond far better to customer demand than in the past—we don’t have to say ‘No’ anymore.

Dr. Marco Derksen
Manager of Research and Development, Stork Thermeq

Greater Flexibility and Efficiency

Stork Thermeq has improved the efficiency of its HPC environment, and it now can adjust the timing of compute jobs to accommodate business needs. “We’ve improved cluster efficiency dramatically, by 34 percent,” says Derksen. “Now we can stack jobs, use scripts to automatically search for available nodes, prioritize jobs, and monitor what’s going on. The Job Scheduler makes it a lot easier to ensure that the cluster is running optimally, 24 hours a day, 7 days a week, so we no longer have to come in on the weekends to start new jobs.”

If the company experiences a sudden flood of demands or needs to reprioritize jobs to focus on an urgent task, it can quickly and easily react. “Our hardware is no longer a bottleneck, and we can always buy temporary software licenses to address a spike in demand, without paying for extra hardware,” says Derksen. “We now have the ability to temporarily postpone a particular job, switch to take care of a higher priority, and then resume the original job without losing any ground. That flexibility is exactly what we need—we’re in a much better position to respond to customer demand than in the past. Also, expanding computational capacity is simply a matter of buying extra compute nodes. We don’t have to change the HPC infrastructure to accommodate greater capacity anymore.”

Faster Performance

Stork Thermeq experienced a significant performance increase with its new HPC cluster. “We found that, node for node, ANSYS 12.0 on Windows HPC Server 2008 ran 30 percent more quickly than ANSYS Fluent 6 on our old Linux environment,” says Derksen.

The company also sees performance improvements when it comes to transferring files. “With Linux, we used to have to use FTP [File Transfer Protocol] sites, but with Windows HPC Server 2008, thanks to the integration of RDP with the rest of Windows, we can copy and paste files directly, which makes file handling and data transfer far simpler,” says Derksen. “Plus, because we use the network, we benefit from its speed and are no longer hampered by FTP size limits.”

Enhanced Research

The company’s migration to Windows HPC Server 2008 is resulting in a higher-quality product for its customers. “Because we can run jobs more efficiently, we’re able to do more research and gain deeper insight into a particular challenge within the same amount of time,” says Derksen.

Researchers also spend their time almost exclusively on research now, as opposed to dealing with more mundane tasks, such as manually scheduling their jobs. For example, researchers now can spend more time on research that responds to customer requests to optimize the customized boilers and boiler components to reduce emissions. “The extra time also makes it possible to increase the number of subjects that we can investigate,” says Derksen.

Easier Management and Scalability

Stork Thermeq no longer requires expert knowledge to manage its HPC cluster. “System management is a whole different world now,” says Derksen. “We needed Linux expertise to take care of our previous environment. But working with our new HPC cluster is really straightforward. All you need is a little bit of Windows Server experience.”

Indeed, Stork Thermeq IT staff members now can step in to administer the new HPC cluster as needed. “I can go on holiday because there are others to maintain the system,” says Derksen. “Managing the HPC cluster has gone from a necessity to a hobby for me.”

In the near future, Stork Thermeq will expand its HPC research services to provide for Stork business units beyond Stork Thermeq. “We’re now ready for whatever requests are made of us, whether they come from within our division or from our parent company. We have a widely accessible, fully scalable structure in place—it’s just a matter of quickly adding compute nodes as needed, which we can do ourselves, with no outside experts necessary.”

Microsoft Server Product Portfolio
For more information about the Microsoft server product portfolio, go to:

For More Information

For more information about Microsoft products and services, call the Microsoft Sales Information Center at (800) 426-9400. In Canada, call the Microsoft Canada Information Centre at (877) 568-2495. Customers in the United States and Canada who are deaf or hard-of-hearing can reach Microsoft text telephone (TTY/TDD) services at (800) 892-5234. Outside the 50 United States and Canada, please contact your local Microsoft subsidiary. To access information using the World Wide Web, go to:

For more information about Stork Thermeq products and services, visit the Web site at:

The Solution in Brief

Organization Size
215 employees


Organization Profile
Based in Hengelo, Netherlands, Stork Thermeq has 215 employees and specializes in the customized design, production, construction, and maintenance of industrial water-steam systems.


Business Situation
Stork Thermeq used a Red Hat Linux high-performance computing (HPC) system but sought a more scalable environment that could be maintained without specialized expertise.


Solution
Stork Thermeq migrated to a 40-core cluster that runs the Windows® HPC Server 2008 operating system. Its researchers use the HPC cluster to run ANSYS 12.0 flow-modeling software.

Benefits
  • Greater flexibility and efficiency
  • Faster performance
  • Easier management and scalability

Hardware
  • Dell PowerEdge R610 rack chassis with quad-core Intel Xeon E5520 and X5550 processors and Cisco Catalyst 3560 switch

Software & Services
  • Windows HPC Server 2008
  • Microsoft HPC Pack 2008

Vertical Industries
  • Accounting & Consulting
  • Architecture, Engineering & Construction
  • Power & Utilities


  • Cloud & Server Platform
  • Business Productivity

  • High Performance Computing
  • Interoperability


Partner(s)
  • ANSYS
  • Dell Inc.