Security solutions for the modern workplace at Microsoft must meet the challenges of a constantly evolving threat landscape. We’re moving away from traditional perimeter-based network security and implementing software-defined security barriers and network segmentation. These solutions are scalable and flexible, and consistently provide programmatic security through controls on clients, apps, and devices—helping ensure that devices are healthy and that threats are detected and contained swiftly.


Core Services Engineering and Operations (CSEO) is responsible for implementing solutions that enhance employee productivity while securing the environment from the constantly evolving threat landscape. With the acceleration of cloud adoption, more personal devices used for work, the increase in virtualized server environments, and the explosive growth of the Internet of Things (IoT), security is a challenge. Mobile productivity is the foundation of the modern workplace, our perimeters are dissolving, and we need to move beyond traditional methods of hardening perimeter defenses.

At Microsoft, we’re on our way to becoming a cloud-only organization, where employees will connect to the intelligent cloud and work on any device, from anywhere. In the modern, mobile workplace, security controls need to be ubiquitous and portable: programmed into networks, hosts, virtual machines, and client machines. They need to be able to follow the source, no matter where the client moves.

As part of our defense-in-depth program, we need to automate our security controls and plan for a future that uses software-defined security to help protect assets against attack, prevent the spread of threat vectors from external networks and within internal ones, and give us enough time to remediate issues.

We are implementing a software-defined security approach, with compensating controls on clients, apps, and services rather than in the network itself, that will help secure our network boundaries in a predictable, controlled, and cost-effective manner. To make it more difficult for attackers to exploit our network, we are creating software-driven security barriers and segments that allow for detection and containment of threat vectors.

Evolving threat landscape

Cyberattacks are increasing and becoming more sophisticated. They are designed to probe for vulnerabilities, and if an attacker finds a way in, a threat can spread quickly. We must assume that some attacks are going to make their way in, whether through social engineering, human error, or some other undiscovered exploit.

In the past, it could take months to discover a breach and then investigate the scope of the attack—which means lost time, reputation, and revenue. We want to make it more difficult for attackers to exploit our network architecture by distributing security controls that allow for detection and containment of threat vectors rather than relying on centralized firewalls and perimeter defenses.

Securing an enterprise today can be challenging. While attackers are getting increasingly clever at finding new exploits, internal infrastructure changes make it even more challenging for security professionals. We are facing an increased rate of change, new software development paradigms, larger infrastructure scale, and new network patterns—leading to a more complex and often confusing environment.

Traditional tools are no longer effective

Our traditional network security focused controls on the edges of our networks, providing strong external security walls. We did not implement many internal controls because our strategy assumed that the network was protected at the perimeter. As the workplace became more mobile and modern, this traditional approach began to reveal its weakness in containing lateral malware movement.

Plus, traditional security architecture can become a bottleneck in a large-scale infrastructure, particularly in a hybrid on-premises/cloud solution. Security devices and policies are deployed manually by admins and engineers because of complex underlying systems and services. Manual management can introduce human errors; for example, by failing to change the default password on each edge device or not updating firmware in a consistent manner. We need to operationalize mundane tasks with automated security policies in firewalls, routers, switches, compute, data services, and applications. We quickly realized that scaling and securing enough new infrastructure to support business and network growth would not succeed in a traditional way.

Planning the framework for implementing software-defined security

We recognize the need to scale our security infrastructure, implement security design faster, and reduce human error. We aim to manage risk in our network infrastructure with a proactive security model that improves our visibility, data control, and management paths for our services, data sources, and other assets.

Control is not just prevention; it is about visibility: fully understanding what is flowing into our network and allowing only approved traffic patterns. Malicious traffic must be detected quickly and automatically, with a focus on rapid response. We must provide effective mitigation controls to limit threat movement and recover from the incident.

Table 1. Comparison of existing structure to future framework

| Current state | Future state | Benefit |
| --- | --- | --- |
| Private networks | Public networks | Lower cost, readily available, less lead time required to provision or acquire network bandwidth |
| Distributed control plane (management) | Centralized control plane (management) | Ease of control and manageability from a central location; faster deployment, changes, and updates |
| Slow speeds (10 Gbps) | High speeds (1,000 Gbps) | Improved availability |
| Static security policies | On-demand, dynamic, adaptive security policies | Ability to adapt to changing client and mobility requirements |
| Hardware and security appliances | Virtual appliances and security services | Lower cost, separation from physical hardware dependencies, and innovation; moving from separate appliances for separate services |
| Network and perimeter-centric controls | Security everywhere, with application and identity controls | Moving security from the boundaries to everywhere |
| Multiprotocol Label Switching (MPLS) and dedicated circuits | Overlays and virtual networks across shared networks | Lower cost and greater resiliency |
| — | Micro-segmentation | More control; more granular, refined policy implementation |
| Network latency | Application latency | End to end, instead of focusing on quality of service or static pathing |
| Reactive threat intelligence | Predictive threat intelligence (AI) | Proactive protection that evolves and self-learns (machine learning), versus responding to known threats |
| Independent security services | Orchestration of security and network services | Service optimization and a holistic view of the network security environment |

Security is complex, comprising many services and appliances that work together. Making a fundamental shift in the way that we secure our resources requires that we meet today’s needs while addressing the foreseeable needs of the future. During our planning efforts, we categorized software-defined security service areas, including:

  • Network segmentation
  • Orchestration and management
  • Abstraction of software from hardware
  • Deployment automation
  • Reporting, monitoring, analysis, and prediction

For each of these focus areas, we began documenting our requirements and dependencies and creating an implementation roadmap.

Orchestration and management

Security comprises several components that are based on the functionality of the devices or services, each requiring a series of actions to be taken. We need to orchestrate the process of managing security configurations. Configuration management is typically done through programmable interfaces (APIs) that are exposed to higher service layers. It is important that security vendors provide APIs that give access to their equipment or services. APIs play a significant role in managing and controlling multiple security providers that may use diverse platforms, and they enable a uniform and consistent way to access resources. Traditional protocol-level access like HTTP(S), SSH, or Telnet is not sufficient for large-scale deployments.
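As a sketch of what this orchestration looks like, the fragment below translates one vendor-neutral rule into per-vendor API payloads. The vendor names and payload fields are hypothetical; real security APIs differ, but the pattern of a uniform rule model fanned out through per-vendor translators is the point.

```python
def build_block_rule(src_cidr, dst_cidr, port, action="deny"):
    """Build a vendor-neutral rule that the orchestrator translates per vendor."""
    return {"src": src_cidr, "dst": dst_cidr, "port": port, "action": action}

class Orchestrator:
    """Translates one neutral rule into each (hypothetical) vendor's payload."""
    TRANSLATORS = {
        "vendor_a": lambda r: {"source": r["src"], "destination": r["dst"],
                               "service": f"tcp/{r['port']}", "policy": r["action"]},
        "vendor_b": lambda r: {"match": {"srcIp": r["src"], "dstIp": r["dst"],
                                         "dstPort": r["port"]},
                               "verdict": r["action"].upper()},
    }

    def payloads(self, rule):
        # In production, each payload would be POSTed to the vendor's REST API.
        return {vendor: translate(rule)
                for vendor, translate in self.TRANSLATORS.items()}

rule = build_block_rule("10.0.0.0/8", "192.168.5.10/32", 3389)
print(Orchestrator().payloads(rule)["vendor_b"]["verdict"])  # prints "DENY"
```

Because every rule is authored once in the neutral form, adding a new vendor means adding one translator, not re-authoring the policy set.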

Centralized asset management

Centralized asset management is key to securing the endpoints. Security configuration, policy, and control management all are visible in a centralized place that manages all machines and hosts. A centralized asset management service can be one or more main controllers, but all must sync, track device health, report device health, and determine access criteria for assets. The service should integrate with any exception handling and approval service, and central policy servers.

All Microsoft-provisioned client and server devices must be registered in a comprehensive asset management database. All security services must map client and server health from this central source and alert if there is drift from established standards or if malware is detected. Personal devices that are not included in the asset management database are subject to health checks and policy enforcement through Intune.

Noncompliant machines are restricted to minimum access until they are cleaned. Microsoft-provided hosts in non-trusted, unsecured, or unknown state must not be allowed to access the network until the machine returns to a healthy state. We need to be able to enforce a single, uniform policy to each client machine regardless of its host or physical location (coffee shop, office, or home).
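A minimal sketch of this health-based gating, with hypothetical field names standing in for the asset-database record:

```python
def access_tier(device):
    """Return the network access level for a device based on recorded health.
    Illustrative policy only; the field names are invented for this sketch."""
    if not device.get("registered"):
        # Unknown devices are not allowed on the network.
        return "quarantine"
    if device.get("malware_detected") or device.get("state") != "healthy":
        # Noncompliant machines get minimum access until they are cleaned.
        return "remediation"
    return "full"
```

The key property is that the decision depends only on the device's recorded health, not on where the device happens to be connecting from.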

Automated security policies

Traditional security tools no longer work at this scale, so we must automate, predict, and self-heal from attacks in real time. With the growing number of threats and the vast number of events per day, it simply isn’t feasible to manually create cases, analyze them, and take corrective actions. We need technology that can learn from observed risks and user behaviors and predict threats without being explicitly programmed.

We want to create logical policies rather than change physical configurations. Our strategy for securing the network is evolving into a service provider model that is virtualized, programmed, and service-oriented. We have been working with product groups and vendors to create the tools we need to fully realize an end-to-end, software-defined security strategy.

Uniform policies

We need the ability to push out uniform policies to all resources simultaneously in our multivendor environment. We are looking at using predefined labels or tags that can be linked to roles and access controls, with preconfigured policies. Being able to assign appropriate labels and associated policies as application flows are defined will help provide consistency and speed up security implementations across the environment.
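The label-to-policy idea can be sketched as a lookup against a catalog of preconfigured policies. The labels and policy fields below are invented for illustration:

```python
# Hypothetical catalog: each label maps to a preconfigured policy.
POLICY_CATALOG = {
    "pci-data":     {"encryption": "required", "inbound": ["app-tier:1433"]},
    "web-frontend": {"encryption": "required", "inbound": ["internet:443"]},
    "lab":          {"encryption": "optional", "inbound": []},
}

def policies_for(labels):
    """Resolve a resource's labels into its effective security policies.
    Unknown labels are an error: every label must map to an approved policy."""
    unknown = [label for label in labels if label not in POLICY_CATALOG]
    if unknown:
        raise ValueError(f"no preconfigured policy for labels: {unknown}")
    return {label: POLICY_CATALOG[label] for label in labels}
```

Tagging a new resource with `web-frontend` then yields the same inbound rules as every other web frontend, which is what makes the policies uniform.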

Software independent of hardware

Security controls and software need to be decoupled from hardware and be independent of the platform. Because we are not committed to a vendor’s hardware, we can virtualize our services and use them anywhere, on-premises or in the cloud. When security is independent of hardware, security policies can be applied across our network, allowing for consistent security and compliance in the cloud and on-premises.

Network function virtualization

Network function virtualization (NFV) speeds up deployment for network growth. (NFV is also sometimes referred to as virtual network functions, or VNF.) NFV decouples network functions like Network Address Translation (NAT), firewalling, intrusion detection, DNS, and caching from proprietary appliances so that they run in software. This accelerates time to market and reduces code dependency on the hardware or platform. Instead of writing millions of lines of code to specific hardware, functionality is decoupled and abstracted from the underlying infrastructure to a common layer.

Image contains side-by-side comparison diagrams of traditional network security and network function virtualization.

Figure 1. Traditional appliance vs. network function virtualization

It can be difficult to manage individual appliances with hardware dependencies. With NFV, it doesn’t matter what commercial, off-the-shelf hardware a security function resides on, or if the software code is written to the virtual machine monitor (VMM), or hypervisor. We can shift our focus to software development rather than writing code for the hardware, which can speed up the delivery of new patches and codes.

To accomplish this, we need a single programmatic way to manage and push standard approved policy definitions to any hardware. RESTful APIs are the desired way to programmatically manage hardware. They help in programming routers, switches, firewalls, and other devices, not only from a configuration and management perspective, but also to control and block traffic. If we can interface with any vendor’s hardware or technology through a common framework, avoiding vendor lock-in and direct user configuration of the hardware, we can provide strong access management from our authorized source.

Decoupling the data and application planes

We can manage all network and security devices from a separate administrative plane, with no human interaction required to access the data plane. All security functions, like intrusion detection systems (IDS), micro-segmentation, and firewalling, can be orchestrated from a single platform. This control plane will provide centralized visibility into our security compliance standards and streamline the process of adding new devices, increasing the security posture of our network.

Data plane. By decoupling the management plane from the data plane, all controls are pushed down to hardware from the control plane through a single view of our network.

Application plane. The application plane is integrated with the orchestration layer to give security analysts information about network devices, providing an API interface to see real-time infrastructure changes and to view and refine individual access policies.

Automating deployment

Hardware-based security appliances have long procurement and installation times in datacenters. It can take many months to have a full production system ready. As demand for the cloud increases, it further limits the use of hardware appliances. We are working to speed up the security deployment process by using virtual functions in the cloud, along with a similar setup in on-premises networks.

We are looking at ways to automate deployment, moving toward using API methods that can interface with any vendor, regardless of proprietary protocols. As design and flow diagrams are put together, network connectivity would be defined, validated, and submitted. Moving forward, all resources would then use standard, defined templates for configuration and service creation. We also want to ensure that all exceptions are automatically verified and deployed using an exception approval system.
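A minimal sketch of template-driven configuration, using Python's string templating in place of a real automation system (the template contents and segment names are hypothetical):

```python
from string import Template

# A standard, predefined configuration template; every segment is rendered
# from this approved text rather than hand-edited on the device.
FW_TEMPLATE = Template(
    "segment $name\n"
    " allow $allowed_flows\n"
    " deny any\n"
    " log all\n"
)

def render_segment_config(name, allowed_flows):
    """Validate inputs and render the approved template."""
    if not allowed_flows:
        raise ValueError("a segment must declare its approved flows")
    return FW_TEMPLATE.substitute(name=name,
                                  allowed_flows=", ".join(allowed_flows))
```

Because every configuration comes from the same validated template, two segments with the same inputs are guaranteed to get identical, default-deny configurations.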

Moving to a single automation system will also help us develop a central asset management service—useful for reporting and documenting compliance requirements.

Reporting, monitoring, analysis and prediction

Since securing the perimeter is not enough, we need to ensure that internal resources and networks are protected as well. Devices aren’t trusted unless they are healthy. We collect logs and events from every device, which are then analyzed and passed through machine learning models to help the system identify abnormalities and predict attacks. Deep learning helps us analyze traffic-pattern anomalies or DDoS activity and predict attacks. We need a single view of the health of all resources to help ensure that all devices are patched, only approved ports are open, and our encryption standard is implemented.
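As a stand-in for the machine-learning models described above, the sketch below flags a per-device metric (for example, an event count) that deviates sharply from its recent history. This is a simple z-score test, not a production detector, but it illustrates the shape of anomaly detection on collected telemetry:

```python
from statistics import mean, stdev

def is_anomalous(history, current, threshold=3.0):
    """Flag a value that is more than `threshold` standard deviations
    from the mean of its recent history."""
    if len(history) < 2:
        return False                    # not enough data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu            # flat history: any change is notable
    return abs(current - mu) / sigma > threshold
```

A real pipeline would learn per-device baselines and seasonality; the principle of comparing current behavior against a learned baseline is the same.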

Traditional monitoring is prone to latency; sequential Simple Network Management Protocol (SNMP) walks and polls can delay results. Separate, vendor-specific configuration management devices are accessed with vendor-proprietary protocols. We would like to implement a single view for security configuration management and reporting at the network level for all devices, including routers, switches, firewalls, and hosts, both in the cloud and on-premises.

Implementing network segmentation

We have begun our shift to software-defined security by implementing network segmentation at Microsoft. Segmentation of our network into functional areas of purpose is helping reduce risk by ensuring overall corporate security standards for accessing resources are not lowered to accommodate non-standard business scenarios, such as guest access or research, development, and testing. Using a network segmentation strategy, we can determine acceptable data and management traffic and block the rest with least privilege access to isolated environments.

Isolation and controls can be based on business and security purposes and must be placed at multiple layers within the network infrastructure. Segmentation at the host, application, OS, and network layers is independent at each layer, helping ensure that fault isolation domains are small and that there is a fallback at every layer, making it difficult for any incident to spread.

The objectives of our network segmentation strategy for security are:

  • Creating logical separations of devices based on purpose and security posture.
  • Ensuring that devices that travel on and off the network do not rely on infrastructure controls for their protection.
  • Providing security protections for devices that cannot defend themselves.
  • Creating requirements for activity-specific access, egress, and monitoring controls.
  • Protecting critical assets and supporting our next steps in identity-based access, which place more emphasis on enhanced identity controls.
  • Isolating network segments by device purpose. Some of the potential categories include:
    • Internet of Things (IoT) for general purposes
    • Industrial IoT for critical services
    • User access and traveling devices, including laptops and mobile devices
    • Critical systems and datacenters
    • Lab, development, and research environments and scenarios

Macro-segmentation and micro-segmentation

We implemented macro-segmentation to broadly separate groups by business and service function. This broader segmentation has network controls that allow communications and protect services inside the zone. For example, in a business service layer, only approved traffic can flow in and out, and only healthy clients or services can access it.

We have just begun to refine our segments into smaller isolation domains, which offer better control over internal communication. Micro-segmentation will help control and limit lateral traffic movement, while isolating traffic at the host level. With micro-segmentation in the business service layer, we can allow required data paths only, effectively blocking any lateral movement and allowing only required flows.

All clients, hosts, servers, and IoT devices need to be micro-segmented and put into contained environments to limit risk to other healthy devices, supporting our key security principles: containing threats by preventing them from spreading, and recovering from breaches as quickly as possible.
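Conceptually, micro-segmentation reduces to a default-deny allow-list evaluated for every flow at the host. The segment names and ports below are illustrative, not our actual segments:

```python
# Only explicitly approved (source, destination, port) paths are permitted;
# everything else, including lateral movement, is denied by default.
ALLOWED_FLOWS = {
    ("web-tier", "app-tier", 8443),
    ("app-tier", "data-tier", 1433),
}

def flow_permitted(src_segment, dst_segment, port):
    """Evaluate one flow against the allow-list."""
    return (src_segment, dst_segment, port) in ALLOWED_FLOWS
```

Note that a compromised web-tier host cannot reach the data tier directly: that path is simply not in the allow-list, so the breach is contained at the first hop.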

Network segmentation model

Security requirements must be built into the network from the inside out. As illustrated below, our models are designed to protect layers of the internal network. The segmentation model we are deploying on-premises and in the cloud (IaaS) should be the same.

Image contains a diagram that illustrates how our network segmentation model is designed to protect layers of the internal network.

Figure 2. Network segmentation model

The datacenter services that host applications, data, infrastructure services, management, and identity services are segmented from each other and are only published to external networks to enable required services. No full network stack will be allowed in.

Segmentation layers

We established controls for each segmentation layer, based on business requirements.

  • Secure access workstation (SAW) layer. All configuration changes and management for SAW devices come from within the SAW environment. Within this secure environment, we can quickly detect and mitigate any network traffic abnormalities.
  • Publishing service layer. There must not be full access to the network stack anywhere. Information workers, labs, test, and other environments are limited to the application service layer. The publishing service layer should allow encrypted communication to the service only, and applications should publish only the required service ports. No entity can scan, ping, or otherwise access this layer except secure access workstations from the identified management environment.
  • Application layer. This environment hosts only applications that serve business functions. Users cannot access applications directly without going through security boundary controls and the publishing layer. Each application within this layer is micro-segmented so that it cannot communicate laterally with other applications unless required. No entity can scan, ping, or otherwise access this layer except secure access workstations from the identified management environment.
  • Data tier layer. The data tier holds only databases and no other services. It should be separated by a security control boundary and accessed only on database service ports by the applications. No entity can scan, ping, or otherwise access this layer except secure access workstations from the identified management environment.
  • Infrastructure services layer. The infrastructure layer is critical because it provides the services that make the layers above it function. All shared services, such as identity, DNS, DHCP, and management, should be consolidated into a shared segment for use across all environments. The boundary between the shared services “hub” and the purpose-built segments must have controls in place. Some of the services in this layer include:
    • Identity service layer. All domain controllers must be in their own silos. This environment must have continuous monitoring to detect any abnormalities in traffic.
    • Security service layer. This layer hosts security tools and must not be mixed with the management layer.
    • Management service layer. This layer hosts only management tools for network, storage, wireless, and similar services. It must not have any internet connection or connect to any environment it does not require. All devices must be accessed through a management port that uses a processor separate from the main device processor.
    • IP services layer. This layer must be segregated into a separate area and further isolated by purpose. Separation must be determined by the criticality of the environment; for example, the DHCP service for the user tier should be separate from the one for the data tier. Domain Name System (DNS) should be separate from domain controllers, and DNS must not have access to the internet other than to authoritative servers. These services should be controlled by security boundaries for communication and micro-segmented among themselves.
  • Business partner service layer. Any internal or external private connection to the enterprise network must use strong encryption and be peered with policy-controlled routing protocols that use strong authentication and limited route exchange.
  • Internet of Things (IoT) service layer. Digitally connected devices of various shapes, forms, and capabilities join the enterprise network, functioning as critical, general, or performance-impacting services. Based on need and criticality, IoT devices must be segmented from each other and from the internal enterprise network, and treated as mini apps floating around the network.
  • Users service layer. This environment hosts enterprise employees or vendors connecting to the network over wired, wireless, or VPN connections via the internet. This gives all users a uniform experience: whether they are at home, at an airport kiosk, or inside the enterprise network, they can access the publishing layer internally or externally via virtual private network.
  • Business guest service layer. Guests accessing the enterprise network are segmented off into their own macro domain. Within this environment, all hosts are micro-segmented, with protected communication both inside the environment and out to the internet.
  • Labs, test, and dev. Labs and test environments should be in their own private zones and should adhere to a similar architecture. Interconnections with these services are treated as any internal business connection, as described earlier.
  • High-risk environments. These environments must be completely disconnected from all other services.
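The layer rules above can be captured declaratively, which is what makes them enforceable by automation rather than by hand. The structure below is a simplified illustration, not our actual policy schema:

```python
# Declarative sketch of segmentation layer rules (names and fields are
# illustrative): each layer declares its allowed ingress and who may manage it.
LAYER_RULES = {
    "publishing":  {"ingress": ["users:443"],                  "mgmt_from": ["saw"]},
    "application": {"ingress": ["publishing:service-ports"],   "mgmt_from": ["saw"]},
    "data":        {"ingress": ["application:db-ports"],       "mgmt_from": ["saw"]},
    "high-risk":   {"ingress": [],                             "mgmt_from": []},
}

def management_allowed(layer, source):
    """Only declared management sources (secure access workstations here)
    may manage a layer; high-risk environments allow none at all."""
    return source in LAYER_RULES[layer]["mgmt_from"]
```

Expressing the layers as data means a single automated check can verify that, for example, nothing but the publishing layer is reachable by users, and that high-risk environments remain fully disconnected.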


Network segmentation has helped us prioritize our responses and improve our incident response and mitigation times. Each detection is more valuable, and we are better able to determine if a detection is anomalous or malicious. It has helped minimize the number of false positives and reduced signal noise. With network segmentation, our control activities are more efficient. We can set limits and control movement inside the network, limiting exposure and preventing intruders from advancing further if there is a breach. Overall, every resource will be protected from each other within the corporate network, allowing us and our customers to secure services and drive agility in the cloud.

One major benefit of software-defined security is introducing consistency in the way we provide security and ensure that clients and the environment are healthy. As we move forward with implementing our software-defined security strategy, we are looking forward to realizing other benefits, including:

  • The ability to provide client security protection to all devices used for work, irrespective of location.
  • Centralized management and the ability to better orchestrate security controls.
  • Automatic provisioning of security functions, reducing the manual processes that can introduce human error.
  • Increased automation and virtualization that make us more agile and support faster deployment of security services.
  • Just-in-time security and other cloud identity and security capabilities.
  • Lower total cost of ownership and improved return on investment.
  • Less dependency on hardware and shorter time to respond to threats.
  • Improved security posture.
  • Reduced dependency on proprietary platforms and vendor lock-in. We can move away from platform dependencies on proprietary software, applications, hardware, or equipment that run exclusively or collaboratively with a limited set of third-party vendor partners.

What's next?

For Microsoft, the intelligent edge and the intelligent cloud are manifestations of our investments in Azure and, more specifically, current IoT trends and analytics capabilities. Now a new trend is starting to emerge: fog computing, which refers to the network connections between edge devices and the cloud. Fog includes edge computing, but it also incorporates the network needed to get processed data to its destination. It has become crucial that we adopt a software-defined security strategy that enables productivity and security and can adapt to the changing landscape of the modern workplace.

Note: Microsoft is one of the founding members of the OpenFog Consortium, which aims to develop reference architectures for fog and edge computing deployments.

Software-defined security offers the scalability and the flexibility to be agile in the face of change, helping us programmatically secure our resources in an automated, predictive, and cost-effective way. Engineers and architects can focus on planning and proactively work toward addressing future threats, rather than spending valuable time working on routine tasks.

At Microsoft, we have just begun our journey. To start thinking about implementing software-defined security in your environment, visit the links in the next section to read about the intelligent edge, the intelligent cloud, and how Azure can help secure modern work environments.

For more information

Microsoft IT Showcase

Creating security controls for IoT devices at Microsoft

Client security: shifting paradigms to prepare for a cloud-only future

The next evolution in computing: the intelligent edge

Securing the Intelligent Edge

Azure IoT Edge


© 2019 Microsoft Corporation. This document is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS OR IMPLIED, IN THIS SUMMARY. The names of actual companies and products mentioned herein may be the trademarks of their respective owners.