MyOrder, an internal Microsoft legacy application, processes roughly 220,000 purchase orders (POs) every year, which represent $45 billion in internal spending at Microsoft. Until recently, MyOrder was a massive, monolithic, on-premises application. It was costly to maintain, difficult to update, and couldn’t be accessed without a Microsoft-authorized VPN.
MyOrder struggled every May, when traffic could double or even triple, from 1,000 purchase orders per day to 3,000. When users submitted purchase orders through the ASP.NET-based website during these high-load periods, they frequently saw response times as high as 30 seconds, if the application didn’t outright crash or freeze.
Even when it worked as intended, MyOrder’s user experience could be frustrating.
“MyOrder was wizard-based, so users advanced through the app in a particular sequence,” says Vijay Bandi, a software engineer on the MyOrder team in Microsoft Digital. “If you advanced to a point where you didn’t have the information for a required field, you were stuck. It was an awful experience.”
Elsewhere at Microsoft, engineering teams are moving old, monolithic applications to the cloud for increased efficiency, scalability, and security—not to mention vastly improved user experiences. With MyOrder showing its age, the MyOrder team decided it was time to follow suit.
From server-based monolith to agile PaaS
MyOrder, which combined a front end, back end, and all related logic in one solution, was only one half of the aging, monolithic legacy purchase order system. The other half was the Procurement Services Platform (PSP), a huge middleware services layer comprising about 60 smaller projects and 500 validation paths.
Built on top of PSP, MyOrder collected data from PSP and housed it in one of the 35 servers required to run the application. The application was hosted on four separate virtual machines (VMs) to support the load, with a load balancer distributing traffic across them. Caches were built into the servers, but because they were distributed among the four different VMs, they were never in sync.
“Suppose a user creates a purchase order pointing to one server, and the request goes to the next server,” says Atanu Sarkar, also a software engineer on the Microsoft Digital MyOrder team. “In that case, the user could search for a PO but not find it if the cache isn’t updated.”
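The stale-read problem Sarkar describes can be sketched in a few lines. This is an illustrative toy model, not MyOrder code: two load-balanced web servers each keep a local cache, so a purchase order written through one server is invisible to a cache-first lookup on the other.

```python
# Illustrative sketch (hypothetical, not MyOrder code): why per-VM caches
# behind a load balancer can serve stale or missing results.

class VM:
    """A web server with its own local cache of purchase orders."""
    def __init__(self):
        self.cache = {}

    def create_po(self, po_id, data, db):
        db[po_id] = data
        self.cache[po_id] = data  # only THIS VM's cache is updated

    def find_po(self, po_id):
        # A cache-first lookup misses POs created on a sibling VM.
        return self.cache.get(po_id)

db = {}                      # shared database
vm_a, vm_b = VM(), VM()      # two of the four load-balanced VMs

vm_a.create_po("PO-123", {"amount": 500}, db)  # create lands on VM A
print(vm_b.find_po("PO-123"))                  # search lands on VM B -> None
```

A single shared cache, such as the Azure Cache for Redis instance the team later adopted, removes this class of bug because every VM reads and writes the same cache.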
Fewer resources, greater flexibility with Azure
According to MyOrder Engineering Manager Rajesh Vasan, the team considered several platforms for the new solution before landing on Microsoft Azure.
“We looked at a standalone, private cloud instance of Service Fabric and at Azure App Service,” Vasan says. “Azure was expanding, though. They were investing a lot of time in PaaS (platform as a service) offerings, which meant that we could offload all the networking, configurations, and deployments to Azure, and just concentrate on the application code.”
That would be a welcome change compared to the old monolith.
“A change to a single line of code used to take so much time, because you needed to build the whole solution from scratch with thousands of lines of code,” Vasan says. “Debugging presented similar challenges.”
The legacy app also integrated with external services like SAP (Microsoft’s financial system of record) and Microsoft Approvals, as well as some third-party systems.
“All that functionality, all those integrations in one monolith, that was a problem,” Vasan says.
By moving to Azure, they could convert each individual function and integration into a single microservice.
“Let’s say I want to change the tax code for a specific country,” Vasan says. “In Azure, I know there’s one microservice that does tax code validation. I go there, I change the code, I deploy. That’s it. It’ll hardly take a week.”
The same scenario in the old software, he says, would take a couple of months.
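Vasan's tax-code scenario can be sketched as a single-purpose validation service. The country rules and function names below are invented for illustration; the point is that each rule lives in one small, independently deployable module, so a change touches only that module.

```python
# Hypothetical sketch of a single-purpose tax-code validation microservice.
# The country formats shown are illustrative, not Microsoft's actual rules.

import re

# Each country's tax-ID format is one entry; changing a country's rule
# means editing and redeploying only this service.
TAX_ID_PATTERNS = {
    "US": re.compile(r"^\d{2}-\d{7}$"),    # EIN-style: 12-3456789
    "IN": re.compile(r"^[0-9A-Z]{15}$"),   # GSTIN-style: 15 alphanumerics
}

def validate_tax_code(country: str, tax_id: str) -> bool:
    """Return True if the tax ID matches the country's known format."""
    pattern = TAX_ID_PATTERNS.get(country.upper())
    return bool(pattern and pattern.match(tax_id))

print(validate_tax_code("US", "12-3456789"))  # True
print(validate_tax_code("US", "123456789"))   # False: missing hyphen
```

In the monolith, the same rule change would have required rebuilding and redeploying the entire solution.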
Migrating databases without downtime
Creating that experience required careful consideration as to how the team would maintain the legacy app while building the new one and migrating from one to the other.
“The first step was building a single source of truth,” Vasan says. “We wanted to put all that data in the cloud so we had a single source for all transactional purchase order data.”
After the team moved the data onto Azure, they built connectors for existing and new components.
“Both the legacy service, which was an Internet Information Services (IIS) web service, and the new service, which would be Azure API components and serverless components acting as individual microservices, would connect to a single source of truth,” Vasan says. “That was the first step.”
The team then needed to decide which microservices to build and which to start building first.
“It gets tricky here,” Vasan says. “Some users were accessing data from the old app, so we had to sync back onto the old one as well, up to the point that all users were no longer using the legacy service.”
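The transition period Vasan describes amounts to a dual-write arrangement: the new service owns the data, but every change is mirrored back to the legacy store until the last users move off it. The sketch below is a hypothetical simplification with invented names.

```python
# Illustrative dual-write sketch (hypothetical names): the new store is
# authoritative, and writes are mirrored to the legacy store until cutover.

class PurchaseOrderStore:
    """Minimal stand-in for a PO database."""
    def __init__(self):
        self.rows = {}

    def upsert(self, po_id, data):
        self.rows[po_id] = data

class DualWriter:
    """Writes to the new single source of truth, mirroring to legacy
    while legacy_enabled is True; flipping it off completes cutover."""
    def __init__(self, new_store, legacy_store):
        self.new_store = new_store
        self.legacy_store = legacy_store
        self.legacy_enabled = True

    def save(self, po_id, data):
        self.new_store.upsert(po_id, data)         # authoritative write
        if self.legacy_enabled:
            self.legacy_store.upsert(po_id, data)  # keep the old app in sync

new, legacy = PurchaseOrderStore(), PurchaseOrderStore()
writer = DualWriter(new, legacy)
writer.save("PO-1", {"amount": 100})  # mirrored to both stores
writer.legacy_enabled = False         # all users migrated: stop syncing
writer.save("PO-2", {"amount": 200})  # new store only
```

Once the flag is off, the legacy path can be retired without any user-visible change.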
The team built APIs to access data and key microservices such as search and the user interface (which they completely remodeled using Angular). Next, they focused on building microservices that were directly related to purchase order processing.
After the team built the core microservices, they started moving tenants to the new infrastructure. By this point, they had eliminated PSP and its database entirely.
“That was a big milestone for us because while we were migrating tenants, we were also working to move everything to the new database,” Vasan says.
At that point, there was no duplicate data.
“We had our single source of truth,” Vasan says. “The entire PO processing pipeline was in the cloud.”
The team then began one of the more challenging aspects of the project: they released one of the microservices with A/B testing in place.
“One of our microservices would call the other microservices and the old PSP in parallel,” Vasan says. “After the call went through both, we compared the results to make sure they were consistent. We flighted this in the production environment until we found and fixed all the issues. Then we went live.”
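The parallel "flighting" Vasan describes is a shadow-call pattern: invoke the legacy path and the new microservice side by side, serve the trusted legacy result, and record any divergence for the team to investigate. The validators and the deliberate behavioral difference below are invented for illustration.

```python
# Sketch of shadow-call flighting (hypothetical logic): call old and new
# implementations in parallel, compare results, serve the legacy answer.

mismatches = []  # in production, divergences would go to telemetry

def legacy_validate(po):
    """PSP-era rule: a PO must have a strictly positive amount."""
    return po["amount"] > 0

def new_validate(po):
    """New microservice's rule, with a subtle (buggy) difference."""
    return po["amount"] >= 0

def flighted_validate(po):
    old_result = legacy_validate(po)
    new_result = new_validate(po)
    if old_result != new_result:
        mismatches.append((po, old_result, new_result))
    return old_result  # users still see the trusted legacy behavior

flighted_validate({"amount": 50})  # results agree: nothing recorded
flighted_validate({"amount": 0})   # results diverge: mismatch recorded
print(len(mismatches))             # 1
```

Because users only ever see the legacy result, the team can keep flighting in production until the mismatch log stays empty, then cut over safely.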
The next step was designing administration and configuration.
“We completely rewrote all that into the new areas, plus another eight or nine microservices,” Vasan says.
By then, MyOrder was 100 percent Azure, with no legacy components at all.
The benefits of microservices
The MyOrder team leaned on several Azure offerings to create the new infrastructure, including Azure Data Factory, Azure Cache for Redis, Azure Cognitive Search, and Azure Key Vault. The new, modernized version of MyOrder consists of 28 Azure microservices that are “loosely coupled and follow the separation of concern principle,” Vasan says.
As in the tax code example, the microservices architecture makes modifying existing capabilities and adding new ones relatively easy. And because the system is built on Azure, it’s highly scalable, which makes adding new tenants much simpler.
The team is most thankful, though, for the ease with which they can maintain compliance. Because all code was housed within a single, monolithic application prior to the migration, and because some services within that monolith were financial in nature, the entire application was, in effect, subject to the requirements of the Sarbanes-Oxley (SOX) Act.
“With a monolith,” Vasan says, “the moment you deploy code to a server, the entire server has to be SOX compliant.”
Now that the team has migrated the system to Azure microservices, the services that are financial in nature are separated from those that aren’t.
“With monoliths, every change is a SOX change, so it has to go through multiple approvals before it can be deployed,” Vasan says.
Using microservices “means leaner, shorter audits because the audits only apply to the SOX components, not the entire platform,” he says.
Of the 28 new microservices, eight require SOX compliance, and 20 don’t.
“We used to have SOX issues. Now we don’t. We’re more compliant and audit-friendly because of moving to Azure,” Vasan says.
In the legacy app, SOX requirements also led to performance issues.
“Maintaining SOX compliance required some IP filtering in the old app,” MyOrder software engineer Ankit Rathod says. “When we deployed IP security policies, there were a lot of packet drops. We wanted a simpler, lightweight, reliable experience.”
Building for the future
One of the tenants the team migrated is Microsoft Real Estate and Security (RE&S), which is responsible for the construction of new datacenters and office buildings at Microsoft. RE&S purchase orders can represent hundreds of millions of dollars in costs. Now that those POs go through the modern MyOrder infrastructure, RE&S has cut costs by $1.75 million per year, thanks to the retirement of many now-unnecessary servers and lower operational overhead.
Next, the team is focusing on moving MyOrder data into a data lake.
“There’s an overall investment in the Microsoft organization around data lakes right now,” Vasan says. “Azure has a data lake offering, of course, and we’re creating this single source of truth that people are using to build insights around POs. If you want to create a purchase order automatically through an API, for example, you can do that now.”
Because the data now lives in a data lake, it’s also possible to build machine learning models on top of it.
Those capabilities are a far cry from those of the massive, monolithic legacy system that the reborn MyOrder has replaced.
Tags: Azure Networking