Life in the Digital Crosshairs

Across thousands of developers and millions of lines of code, one company learns to build secure software in an increasingly insecure world.

It was 2 a.m. on Saturday, July 13, 2001, when Microsoft’s then head of security response, Steve Lipner, awoke to a call from cybersecurity specialist Russ Cooper. Lipner was told a nasty piece of malware called “Code Red” was spreading at an astonishing rate. Code Red was a worm — a malicious computer program that spreads quickly by copying itself to other computers across the Internet. And it was vicious.

At the time, ABC News reported that, in just two weeks, more than 300,000 computers around the world were infected with Code Red — including some at the U.S. Department of Defense and Department of Justice.

Internet legend says researchers named Code Red after the flavor of Mountain Dew they were drinking at the time.

It wouldn’t be the last time Lipner was awakened unexpectedly. In those days, as a way to cause maximum havoc, a new worm was often timed to hit early in the weekend, leading to long days for the security team.

“There was no rest. I jumped into action along with our five-person security response team to address the situation,” Lipner recalls.


The news seemed more often bad than good. The idea of out-of-control software could have come straight out of a Tom Clancy novel, and it gained instant media attention. Headlines were casting doubt and concern about Microsoft and its flagship operating system, Windows, one of the most widely used software products in the world.

The company redoubled its efforts on security, but the cybercriminals weren’t going away. In 2003, the SQL Slammer worm launched, infecting tens of thousands of machines around the world within minutes — in spite of the fact that a patch that could prevent the infection had been available for months.

Whereas Code Red was mostly a nuisance, SQL Slammer shut down ATMs, financial systems and websites. This increase in seriousness demonstrated that the threats were changing as the criminals became more sophisticated.

The Blaster worm came shortly after that, followed by others. Nimda. Code Red II. MyDoom. Sasser. During this time Microsoft’s security team was living in a reactive mode, responding to an endless cycle of security issues and trying to stay ahead of the next attack. For a variety of reasons — from incomplete or unreliable patches, to a lack of tools for deploying updates — the company and its security efforts were losing the trust of customers.


Hacker Heyday

In the 1990s, Microsoft executives realized the connected Web was the Wild West. Like any frontier, a rush was on to test boundaries, survey the landscape and hopefully strike it rich. It was in this formative world that business and enterprise computing on the adolescent Internet was taking off, while the rules were still being sorted out.

"The Internet is indeed a hostile environment," is how Michael Howard and David LeBlanc, security veterans and authors of “Writing Secure Code,” described it back at the turn of the millennium.


Howard and LeBlanc were not writing for consumers, but for Redmond, Wash.-based Microsoft, a giant of the software world and already the preeminent target for cybercriminals.

In the earliest days, hacking was mostly just for sport. After Code Red made national headlines, cybercriminals took on a kind of swashbuckling romanticism. Recognizing the opportunity for fame, more and more people became fascinated with “breaking” the Web, just because they could. Another exploit. Another black eye for the de facto figurehead of the fertile new software industry. But as time went on, the attacks became more and more malicious.

“At some point, real, serious criminals began investing in ways to exploit software for financial gain,” says Matt Thomlinson, vice president of security for Microsoft. “And they’ve been working on it ever since. They’ve become a lot more sophisticated over the years, but so has the software industry.”

It didn’t happen right away, however. The company felt it had made great strides with Windows XP, for example, but with millions of lines of code, retroactive bug testing in operating systems can be like looking for the keys you accidentally buried at the beach.

Only a few months after Windows XP shipped in 2001, a vulnerability was found in the Universal Plug and Play feature that allowed Windows to connect to printers, modems, networks and the like, offering a wide-open gateway to cybercriminals. Though it was patched right away, this security weakness in Windows XP again made national news. Lipner, now partner director of Program Management at Microsoft, still has a Seattle Post-Intelligencer headline posted on the door outside his office as a reminder: "XP Vulnerable to Hacker Attacks; Microsoft offers a free fix to its newest Windows."

The negative news was another blow for a company that prided itself on working around the clock to build the best software in the world.

“Incidents like these were so significant that they strained our ability to provide support to customers,” Thomlinson says. “I remember at one point our local telephone network struggled to keep up with the volume of calls we were getting. We actually had to bus in engineers, many of whom were working on the next version of Windows, from their offices around campus to the call center. We needed every person available to talk to customers and walk them through how to get their systems cleaned.”

“Everyone knew that something had to change,” Thomlinson says, “and the sentiment went all the way to the top.”



Manual Reboot

Howard and LeBlanc’s just-published book offered a frank assessment of the situation. “We started the book back in 1999 basically to keep us from going to meetings and answering all the same questions,” jokes LeBlanc, still with Microsoft as a software development engineer for Windows. The book gained a lot more street cred when a copy made its way into the hands of Microsoft founder and then-CEO Bill Gates, who had been working on a manifesto of sorts in response to the recent events.

“I remember my friends coming out of their offices and asking me if I had seen Bill’s note,” recalls LeBlanc. “I said ‘What note?’ And they told me to go check my email.”

In LeBlanc's inbox was a landmark memo that had been sent to every full-time Microsoft employee: Bill Gates' deeply thought-out missive about software security and trust, wherein he coined the phrase “Trustworthy Computing.”

“… if we don’t do this, people simply won’t be willing — or able — to take advantage of all the other great work we do,” Gates said. “Trustworthy Computing is the highest priority.”

Not long after the memo, an early gathering of Microsoft’s fledgling in-house security team convened at a bucolic location in Bellevue, Wash. There an idea was born to stop development of Windows for a time and completely overhaul its code base with security in mind. The plan was bold and almost unthinkable to some.

“It was an approach that had been used with the .NET development platform — stop developing entirely, fix as many bugs as possible in a series of pushes, and then start back up,” says Glenn Pittaway, senior director of software security at Microsoft. “Could we do the same with Windows? It was initially seen as a ridiculous idea, because .NET had a few hundred developers, and Windows was this 9,000-person supertanker.”

In spite of its scale, the group decided that a stand-down on development had to be done. Lipner made the proposal to management. “I brought the stand-down idea to the manager of the Windows team, half expecting to get thrown out of his office,” Lipner says.

Instead of being thrown out, he was asked to come back with an updated proposal. This time, he found himself briefing the manager and his vice president. The next time it was the manager, the vice president, the senior vice president and some other executives. Then they briefed the business group vice president, Jim Allchin.


“They wanted details: when we’d do the stand-down, how we’d organize it, what the run-up would be, how we’d do the training,” Lipner says. “Stopping Windows was a radical thing to propose in 2001, but at some point it occurred to me that nobody was saying ‘no’ and that this was actually going to happen.”

Thus, in February 2002 the entire Windows division shut down and diverted all of its developers to security. Everyone was given training to outline expectations and priorities — threat modeling, code reviews, available tools, penetration testing — all designed to modify the default behavior of the system to make it more secure. The room at the Microsoft Briefing Center was filled to its 950-person capacity twice a day for five days as Lipner and his team worked their way through the training.

“After the decision was made, it was all about the logistics of getting all 8,500 rank-and-file developers up to speed on security,” Lipner says. “Remember, we were not renting out a dome somewhere to do this. The work was proprietary. The biggest room we could get was the Microsoft Briefing Center. There was no magic to it. We did the training 10 times.”

“It was a huge endeavor, sure, but at the same time the buzz around the hallways was absolutely amazing,” recalls Howard. “Everyone had this single-minded goal of doing whatever they could possibly do to raise the security in the product.”

Once the basics were explained, Microsoft’s developers went to work. But as one may expect, the early years of practical, secure software development were not always the smoothest. Coding all day and all night, these “bands of security brothers” were up against a huge challenge considering the size of the company and its flagship product.

“We would have a morning meeting with the bug triage team that would update us on the bugs that were fixed the previous day,” Lipner says. “There were no standards, so it had to be done by hand. By 6:30 I would go home, see my wife, have some dinner and then go back to my job responding to the 300 emails that came in during the day of meetings.”

Business leaders had originally pushed for a two-day stand-down, while Lipner’s team asked for a month. In the end, the Windows security push lasted two months. Hundreds of developers had devoted themselves to the task, and the result was a much more secure code base.

Though it was a success, it was just the beginning: Security could not continue to be a retroactive exercise. Microsoft had to build security into its code throughout the cycle of product development. The company needed to let engineers be engineers while also creating new processes to ensure the most secure products possible. They needed to continually improve.

Microsoft began reinventing itself as a more secure computing company, and right away there were good results. An updated version of Internet Information Services that shipped with Windows Server 2003 was notably solid. SQL Server had made great improvements. The list goes on.

And slowly, outsiders began to take notice: “Bill Gates realized that security was an area where Microsoft could win,” says Jeff Williams, founder of Aspect Security, a Columbia, Md.-based security firm with clients including the Federal Deposit Insurance Corporation, the U.S. Army and the Department of Justice. “It is an incredibly well-organized company that could commit an army of developers to make secure code.”


Evolving the Craft

After the security push, Microsoft had created a team capable of finding new classes of vulnerabilities and building tools to help eradicate them — the beginnings of the company’s “security science” capabilities that continue to this day. To measure progress, the security audit function was created, a group independent of the product teams to review and assess security.

With new tools, new processes, and a new understanding of the security landscape, integrating security into product development was now a primary focus across Microsoft. And by late 2003, early versions of Microsoft’s “Security Development Lifecycle,” or SDL, began to take shape.

It was 2004, though, when the security team took the official proposal for the SDL to senior leadership, and got approval to integrate the approach. It was to be mandatory, embedded into the development cycle, updated periodically, and applied to all products and online services that faced meaningful risk.

“In other words, everything went through the SDL,” says Lipner.


A Call to Arms

According to Tim Rains, director in Trustworthy Computing, the increased security of Microsoft’s own software has been dramatic, in part due to the emphasis on continuing to evolve over the past 10 years. Some significant changes to the SDL since its inception in 2004 include enhanced security mitigations, new privacy standards for development, online service requirements, automated processes to improve consistency, and adapting the SDL for agile development projects.

“As technology evolves, as criminals become more sophisticated, so will the SDL,” Rains says. “Security has become a pillar of our company today, and we will keep pushing, keep developing to ensure we stay ahead.”

It’s an effort, Rains says, that goes well beyond Microsoft, and indeed the SDL has had a major impact on the broader industry as well. Early on, Microsoft decided to make the SDL’s tools, processes and guidance available free of charge to any organization that wanted to adapt it to their own business. As a result, the SDL has not only led to measurable improvements in the security and privacy of Microsoft’s software and services, but also to a fundamental shift in software development at many other companies.

According to Rains, awareness has grown exponentially, and many organizations are now demanding a security development lifecycle when developing and adopting technologies.

“Organizations have recognized that security efforts during development are no longer optional,” Rains says. “Prescriptive security development practices are rapidly becoming competitive differentiators in the marketplace.”


Two of the world’s largest technology companies, Adobe and Cisco, have adopted it. Beyond the tech industry, governments and even some of the most critical economic organizations in the world follow the practice. Technology-policy group BITS successfully incorporated many of the key elements contained within Microsoft’s SDL in its guidance for secure development. The Financial Services Roundtable, of which BITS is a sub-group, represents 100 of the largest integrated financial services companies for consumers in the United States. Roundtable member companies provide fuel for the United States’ economic engine, accounting for almost $93 trillion in managed assets.

Another example, Itron, is one of the leading providers of energy and water resource management for nearly 8,000 utility providers around the world. The company also implemented Microsoft’s SDL, making it mandatory for the development of all of its software and hardware.

“I got tired of writing six-figure checks to these outside vendors,” says Itron engineering advisor Michael Garrison Stuber, who manages security for the company’s entire product line. “From a business standpoint it just made perfect sense to me that we need to be investing in how we do development so we’re thinking about security throughout the life cycle.”

Governments can also use this framework. For example, the government of India has included secure coding practices in its draft national economic five-year plan. India’s Computer Emergency Response Team, which leads the country’s response to cyberthreats, has already taken steps toward implementing the plan by using Microsoft’s SDL as one of its core tenets for application security. In addition, the National Informatics Centre, part of the Central Government Office of India, has required training in SDL principles, including the training of more than 10,000 of India’s cyberforensic investigators.

“Security is an integral part of the software development life cycle, not external to it. You can’t ignore it. If a piece of code is not secure, it simply can’t be claimed to work,” says Dr. Y.K. Sharma, the deputy director general for the Government of India’s National Informatics Centre. “The SDL is one of the approaches we use to maintain our software security. As professionals, we felt we had to deploy it.”

Since its inception in 2004, and the external release of SDL tools and resources in 2008, Microsoft's SDL guidance has been downloaded more than 1 million times and reached more than 150 countries. From small developer shops to large enterprises, many are seeing benefits from a "baking security in" approach.

The make-or-break effects of implementing an SDL became clear in 2010 when, 350 days after implementing Microsoft’s process, MidAmerican Energy was the only business unit inside its parent holding company, Berkshire Hathaway, that external auditors found to have no security vulnerabilities. The SDL has even influenced international standards that governments, developers and companies can take advantage of to help them demand consistent, more secure development in the technology they use.

With each organization and company that adopts more secure development practices, the impact extends to more and more members of one critical group: consumers. For Microsoft’s part, requirements drawn from the SDL are now mandatory by default in its apps marketplace — meaning developers cannot publish an app in Microsoft’s marketplace without testing it using tools that were built in support of the SDL. Consumers don’t even have to think about whether the apps they use are built more securely. By default, they already are.

Unfortunately, however, not everyone has recognized the importance of infusing security from the beginning. According to Thomlinson, there is still work to be done to spread secure development practices to all corners of the software world.

“While there is progress, it’s far from universal,” Thomlinson says. “Collectively we need to do more — especially in today’s landscape. Security needs to be a staple, from what we teach in schools to the culture we’re fostering in business.”


Basement to Boardroom

Today, the Internet is deeply woven into people’s lives, with mobile apps extending Web services to their fingertips, nearly anywhere. In 2013, for the first time, more people shopped online on Cyber Monday than shopped in retail stores on Black Friday.

Every day millions of people trust online retailers and other organizations with vital, personal information. Though, by design, these transactions seem simple to shoppers, they are actually complex processes that send personal information flying across systems.

“That flow of data represents a huge opportunity for crooks, a temptation that is too great to ignore,” Thomlinson says. “Today’s cybercriminals are often well-funded, underground organizations with advanced capabilities. They have everyone, not just businesses, in their sights.”

Cybercrime as a business model isn’t the only thing that’s evolving. The actual threats are evolving too. As companies like Microsoft and others have fortified their underlying architecture, from software to the actual processors inside the hardware, cybercriminals have moved their targets upstream to the application layer, looking for vulnerabilities in any of the thousands of apps on the market today.

According to Thomlinson, in the face of cybercrime’s growing risk, secure development should be the highest priority for software vendors, and organizations that rely on secure code should demand nothing less. Building in security from the beginning helps make it more difficult and less appealing for cybercriminals. That’s why the development, implementation and constant improvement of the SDL remains a strategic investment for Microsoft to this day.

“Security cannot be an afterthought,” says Thomlinson. “With both security technologies and cybercriminals constantly evolving, there’s no question secure development helps keep people safe.”

Journey Through the Crosshairs


With potent, savvy cybercriminals lurking behind keyboards around the globe, Pittaway — the man who has succeeded Lipner at the helm of the SDL — believes organizations and consumers alike should demand that software vendors build applications using a security development process to minimize security risks and keep their critical information safe.

“Windows today is more than an operating system, it’s an ecosystem,” says Pittaway. “People want to shop, play games, watch movies, chat with family and friends — and they need to be able to trust that they are safe and secure no matter what they are doing online. We are proud to be able to play a big role in enabling the best possible experiences across a range of devices, and a big part of that is not only securing our own software, but helping the broader industry understand how to build security into their code throughout the development process.”

Pittaway’s modern-day take echoes Bill Gates’ early vision, reinforcing the central need for, and importance of, security in technology. In his memo Gates predicted that “within 10 years, computing will be an integral and indispensable part of almost everything we do.” He was right, and with the threat of cybercrime not going away, we should all be asking: How securely built is the technology we’re using right now?

The SDL was built on the concept that security should not be an afterthought. Today that approach is as important as ever. With technology becoming more and more woven into the fabric of society, cybercriminals continue to probe for cracks in the system, whether it’s at a company or in someone’s smartphone. Bottom line — the industry must evolve and no longer treat computer security as an afterthought. There’s just too much at stake to do otherwise.

Demanding secure software starts with you.
7 Phases of SDL
Training: The basic concepts for building trusted software start with the education of developers.
Requirements: Defining a broad set of security and privacy standards from the start helps a team apply important guardrails throughout a project.
Design: To reduce the number of costly patches post-launch, security measures specific to the product are integrated into the software’s overall structure.
Implementation: Thorough testing and analysis of the software by product teams at this stage significantly reduce time-consuming fixes later.
Verification: Now in beta, the software undergoes rigorous checks on many levels, including security reviews stricter than those of the Implementation phase.
Release: A few months before its public release, and essentially written in full, the software goes through an independent final security review that checks all previous and current security issues.
Response: After the product is shipped, the focus shifts to responding to any reports of vulnerabilities that emerge. Teams track and respond to any incidents to help protect customers, and any findings are fed back into the SDL to help improve future products.
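The gated progression the phases above describe can be sketched as a minimal state machine: a product advances to the next phase only when the current phase’s exit criteria are met. This is an illustrative sketch only — the `Phase` names follow the list above, but `next_phase` and its `exit_criteria_met` parameter are assumptions for the example, not part of any actual Microsoft tooling.

```python
from enum import Enum

class Phase(Enum):
    """The seven SDL phases, in the order a product moves through them."""
    TRAINING = 1
    REQUIREMENTS = 2
    DESIGN = 3
    IMPLEMENTATION = 4
    VERIFICATION = 5
    RELEASE = 6
    RESPONSE = 7

def next_phase(current: Phase, exit_criteria_met: bool) -> Phase:
    """Advance to the next phase only when the current phase's exit
    criteria (reviews, tests, sign-offs) are satisfied."""
    if not exit_criteria_met:
        return current  # the gate holds: rework before moving on
    members = list(Phase)
    idx = members.index(current)
    # Response is ongoing after ship; there is no later phase to enter.
    return members[min(idx + 1, len(members) - 1)]
```

For example, a build stuck in Design with an incomplete threat model stays in Design (`next_phase(Phase.DESIGN, False)` returns `Phase.DESIGN`), which captures the central idea of the SDL: security work is a precondition for moving forward, not an afterthought.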