Update, June 5, 2020 – The recipients of the 2020 Microsoft Security AI RFP awards have been announced.
Microsoft is committed to pushing the boundaries of technology to empower every person and every organization on the planet to achieve more. The cornerstone of how Microsoft does this is by building systems that are secure, and by providing tools that enable customers to manage security, legal, and regulatory standards.
The goal of this Request for Proposals (RFP) is to spark new AI research that will expand our understanding of the enterprise, the threat landscape, and how to secure our customers’ assets in the face of increasingly sophisticated attacks.
Security is rapidly gaining importance in an ever-growing digital world with expanding heterogeneous systems, an explosion of data, and increasingly motivated and sophisticated adversaries. The asymmetric nature of security adds to this challenge. The defender must protect all assets, yet the attacker need only find one vulnerability. At the same time, availability of data and cloud capabilities create an opportunity for defenders to flip the balance in their favor. AI will enable us to increase awareness and actionable insights, and make our customers more agile than their adversaries when defending their enterprises.
Microsoft is launching a preliminary academic grants program. We will fund one or more projects (up to $300K in total funding for this RFP) in new collaborative research efforts with university partners so that we can invent the future of security together.
Research is an integral part of the innovation loop. Most of the exciting research is happening in universities around the world. The goal of the Microsoft Security AI (MSecAI) RFP is to develop new knowledge and capabilities that can provide a robust defense against future attacks. Through our grants program, we hope not only to support academic research, but also to develop long-term collaborations with researchers around the world who share the same goal of protecting private data from unauthorized access.
Proposals are invited on all areas of computing related to security and AI, particularly in the following areas of interest.
Understanding the enterprise
- An enterprise is a collection of entities (users, computers, files, documents, processes), relationships between entities, and behaviors over time. Assuming data representing an enterprise exists, what are the privacy risks and possible mitigation strategies upon sharing that data with third parties? Assuming data cannot be shared, how could a realistic representation of an enterprise be created to enable subsequent learning tasks?
- We seek approaches to privacy preservation of enterprise data that empower reasoning on this data, while at the same time providing privacy guarantees.
- We desire to understand new technologies as they emerge. The Internet of Things and supply chains are elements of the modern, growing ecosystem. We want to develop ways to discover and automatically understand these new technologies and the data they produce. What would be the relevant data to bring into scope?
- Automatic modeling to provide new insights is critical. How can we empower enterprises with little or no experience to bring powerful AI to bear on further understanding the ecosystem?
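The entity-and-relationship view of an enterprise described above could be sketched as a simple typed graph. A minimal sketch follows; the entity types and relation labels ("user", "logs_into", and so on) are illustrative assumptions, not a prescribed schema.

```python
from collections import defaultdict

class EnterpriseGraph:
    """Toy model of an enterprise: typed entities plus labeled relationships."""

    def __init__(self):
        self.entities = {}              # entity id -> type ("user", "host", ...)
        self.edges = defaultdict(list)  # entity id -> [(relation, target id)]

    def add_entity(self, entity_id, entity_type):
        self.entities[entity_id] = entity_type

    def relate(self, source, relation, target):
        self.edges[source].append((relation, target))

    def neighbors(self, entity_id, relation=None):
        """Entities one hop away, optionally filtered by relation label."""
        return [t for r, t in self.edges[entity_id]
                if relation is None or r == relation]

# Illustrative entities and relationships (assumed labels).
g = EnterpriseGraph()
g.add_entity("alice", "user")
g.add_entity("laptop-42", "host")
g.add_entity("report.docx", "file")
g.relate("alice", "logs_into", "laptop-42")
g.relate("laptop-42", "stores", "report.docx")
```

A representation like this is also the natural starting point for the privacy questions above: the graph, not any single record, is the sensitive artifact that sharing or synthesis would need to protect.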
Trustworthy machine learning for industry
The aim is for researchers to collaborate with software developers, machine learning (ML) engineers, and security engineers/incident responders to conduct joint research in the design, development, and deployment of secure ML systems.
- The reliability of machine learning systems in the presence of active adversaries has become especially important in recent years. As ML is used for more security-sensitive applications, and is trained with larger amounts of data, the ability of learning algorithms to tolerate worst-case noise is critical. How can we identify the risk to the confidentiality, integrity, and availability of ML models? Can we develop offline and online analysis-based tools to test ML failure modes against adversarial attacks, such as backdoor data poisoning, model inversion, perturbations, and many others, as described in the taxonomy? How can these test harnesses help us validate the trustworthiness and reliability of ML models? How can we design and train ML models to be robust against such attacks?
- As ML models are trained and deployed frequently to capture the latest data insights, how can we effectively perform input validation at scale on our data in order to identify and reject specially crafted adversarial queries, both during training and inference? How can we detect whether an attacker is injecting synthetic traffic to influence the model’s decision boundaries? How can we attribute changes in data distributions to adversaries and not to other factors involved with a production ML pipeline?
- Once a model compromise is detected, what should the data and model provenance look like? Can we effectively “patch” the compromised ML model/dataset or roll back to a previously known good model without compromising the existing model performance?
- Given the black-box nature of ML systems, how do we meaningfully interrogate ML systems under attack to ascertain the root cause of failure? How do we ascertain the blast radius and perform threat attribution? What steps can an incident responder take to respond to such threats?
- Models that are deployed on client machines are highly susceptible to model stealing and tampering attacks. How can we validate the authenticity and integrity of ML models and protect them against such adversarial tampering? What sort of guidelines can an ML engineer follow when designing and deploying such models?
- For additional research areas, please refer to this paper.
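One simple baseline for the input-validation question above is to flag queries that fall far outside the training distribution. The sketch below uses a per-feature z-score check; the threshold, feature layout, and sample data are illustrative assumptions, and a real defense would need far more than this.

```python
import statistics

def fit_feature_stats(training_rows):
    """Per-feature (mean, stdev) estimated from benign training data."""
    columns = list(zip(*training_rows))
    return [(statistics.mean(c), statistics.stdev(c)) for c in columns]

def is_suspicious(row, stats, z_threshold=3.0):
    """Flag a query if any feature lies more than z_threshold
    standard deviations from the training mean."""
    for value, (mu, sigma) in zip(row, stats):
        if sigma > 0 and abs(value - mu) / sigma > z_threshold:
            return True
    return False

# Illustrative benign training rows: two numeric features per query.
training = [[1.0, 10.0], [1.2, 9.5], [0.9, 10.5], [1.1, 10.2]]
stats = fit_feature_stats(training)
```

A check like this rejects only gross outliers; the harder research questions above concern adversarial queries crafted to sit inside the expected distribution.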
Understanding the threat landscape
- Can we devise methods for analysts to understand how an AI-based anomaly-, intrusion-, or malware-detection system came to its conclusions? If users are blocked from performing particular actions, AI must offer compelling reasons for its decisions, particularly if this interferes with productivity. On the other hand, some contexts involve irreducible computation that might not be amenable to simpler explanations. In such situations, how do we instill confidence in a decision being justified?
- Attackers are continuously innovating, and new techniques and campaigns are detected after they are hypothesized or observed in the wild. How can this cycle be broken to discover techniques and campaigns before they’re used? Can we build AI-powered defensive and offensive agents that can stay ahead of adversary innovation?
- Open Source Intelligence (OSINT) combined with proprietary telemetry serves as the basis of threat intelligence. Hunting followed by threat intelligence is the process analysts use to gain insights into tactics, techniques, and procedures (TTPs), assess enterprise risk, and prioritize defensive measures. How can this be done in real-time in a more scalable manner?
AI in supporting defenders
- How can AI-supported security decisions be effectively balanced with minimal impact on productivity? This applies to both end users and security operation teams.
- How can we use dynamic quantification of risk to create visibility for our customers into areas of potential concern, and how can we proactively reduce and mitigate risk as driven by AI predictions?
- How can we use human-in-the-loop AI to enable user feedback through advanced user experiences to update defenses automatically? How do we avoid bias in this process to ensure human input does not preclude the discovery of new malicious behavior? A balance between exploration and exploitation must be achieved.
- How can AI be used to increase the efficacy and agility of threat hunter teams? How do we present high quality actionable insight from AI to humans with an emphasis on recall (finding the breach) rather than precision? The focus should be on behaviors most likely to be related to attacks.
- Given current exponential progress in technology, what major disruptions might shake up the threat landscape, defensive arsenal, or both? During the march towards Artificial General Intelligence (AGI), intermediate discoveries and advancements may well provide both attackers and defenders with game-changing techniques. Quantum computing may offer new capabilities.
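The exploration-exploitation balance raised for human-in-the-loop feedback could be sketched with an epsilon-greedy policy over alert triage: usually surface the alert the model scores highest, but occasionally surface a random alert so analyst feedback can still discover new malicious behavior. The alert names, scores, and epsilon value below are illustrative assumptions.

```python
import random

def choose_alert_for_review(alerts, scores, epsilon=0.1, rng=random):
    """Epsilon-greedy triage: with probability epsilon pick a random
    alert (explore, so feedback is not biased toward what the model
    already flags); otherwise pick the highest-scored alert (exploit)."""
    if rng.random() < epsilon:
        return rng.choice(alerts)   # explore: keep discovering new behavior
    best = max(range(len(alerts)), key=lambda i: scores[i])
    return alerts[best]             # exploit: highest model score

# Illustrative alerts with model-assigned maliciousness scores.
alerts = ["odd-logon", "new-process", "rare-dns"]
scores = [0.2, 0.9, 0.4]
```

Choosing epsilon is exactly the balance described above: too low and analyst labels only ever confirm the model's existing beliefs; too high and scarce analyst time is spent on low-value alerts.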
Microsoft will provide up to $150,000 USD of funding for each approved proposal (maximum funding for this RFP, $300,000 USD). Microsoft will also consider an additional award of Azure cloud computing credits if warranted by the research and specified in the proposal. The selected winning team of this RFP shall receive funding in the form of an unrestricted gift. A second round of funding, pending initial progress and outcomes (see Timeline below), may be considered at some point during this collaboration. All funding decisions will be at the sole discretion of Microsoft. Proposals for this RFP should provide an initial budget and workplan for the research based on the Timeline section below.
Microsoft encourages potential university partners to consider using resources outlined in the RFP in the following manner:
- PhD scholarship stipends.
- Post-doctoral researcher funding.
- Software and hardware research engineer funding.
- Limited but essential hardware and software needed to conduct the research.
Proposal plans should include any of these, or other items, that directly support the proposed research.
Microsoft research collaborators, at no cost to the winning teams, may visit the university partners one or more times to foster collaborative planning and research. These visits will be agreed upon and scheduled after an award decision is made. Likewise, a cadence of meetings will be mutually agreed upon at the start of the collaboration. Proposals are welcome to include other suggestions about how to foster an effective collaborative research engagement.
This RFP is not restricted to any one discipline or tailored to any methodology. Universities are welcome to submit cross-disciplinary proposals if that contributes to answering the proposed research question(s).
To be eligible for this RFP, your institution and proposal must meet the following requirements:
- Institutions must have access to the knowledge, resources, and skills necessary to carry out the proposed research.
- Institutions must be either an accredited or otherwise degree-granting university with non-profit status, or a research organization with non-profit status.
- Proposals that are incomplete or that request funds exceeding the maximum award will be excluded from the selection process.
- The proposal budget must reflect your university’s policies toward receiving unrestricted gifts, and should emphasize allocation of funds toward completing the research proposed.
- Proposals should include a timeline (approximately 12-18 months) or workplan that begins in summer 2020 and ends in fall of 2021.
- To optimize the chances of receiving an award, we encourage researchers from the same university to consider submitting a single, joint proposal (rather than multiple individual proposals) that leverages their various skills and interests to create the strongest possible proposal.
- Multiple universities may submit a single joint proposal. Please clearly indicate in the budget section how the budget, not to exceed $150,000 USD, will be shared.
Selection process and criteria
All proposals received by the submission deadline and in compliance with the eligibility criteria will be evaluated by a panel of subject-matter experts chosen by Microsoft. Drawing from evaluations by the review panel, Microsoft will select which proposals will receive the awards. Microsoft reserves the right to fund the winning proposal at an amount greater or lower than the amount requested, up to the stated maximum amount. Note: Microsoft will not provide individual feedback on proposals that are not funded.
All proposals will be evaluated based on the following criteria:
- Addresses an important research area identified above and, if successful, has the potential for significant impact on that domain.
- Expected value and potential impact of the research on relevant information security fields.
- Potential for wide dissemination and use of knowledge, including specific plans for scholarly publications, public presentations, and white papers.
- Ability to complete the project based upon adequate available resources, reasonable timelines, and the identified contributors’ qualifications.
- Qualifications of the research team, including previous history of work in the area, successful completion of previous projects, research or teaching awards, and scholarly publications.
- Diversity is highly valued; research teams should strive to reflect a diversity of backgrounds, experiences, and talent.
- Evidence of university support contributed in-kind to directly support and supplement the research efforts.
- A budget strategically allocated to maximize the impact of the research.
- Additional information as requested by the review panel, possibly via a conference call.
Timeline
- May 1, 2020: Proposals due.
- May 31, 2020: Winners announced.
- Summer 2020: Awards made, and planning begins with regularly scheduled meetings, calls, and visit(s) by Microsoft to MSecAI winning university.
- Spring 2021: Review of progress for potential second round of funding (pending progress and availability of funds).
- Fall 2021: Report back.
- As a condition of accepting an award, principal investigators agree that Microsoft may use their name and likeness to publicize their proposals (including all proposal content except detailed budget information) in connection with the promotion of the research awards in all media now known or later developed.
- Researchers must be willing to engage with Microsoft about their project and experience, and provide updates via monthly or quarterly calls.
- The review process is internal, and no review feedback will be given to submitters.
- Microsoft encourages researchers to publish their work in scholarly venues such as journals and conferences. Researchers must provide Microsoft a copy of any work prior to publication. So long as accurate, such publications are not subject to Microsoft’s approval except that, at Microsoft’s request, researchers will delete any Microsoft Confidential Information identified or delay publication to enable Microsoft to file for appropriate intellectual property (IP) protection for any project IP disclosed in such work.
- All data sets and any new IP resulting from this effort will be made publicly available for any researcher, developer, or interested party to access, to help further the goals of this initiative.
- Funded researchers must seek approval of their institution’s review board for any work that involves human subjects.
- At the completion of the project, the funded researchers will be required to submit to Microsoft a report describing project learnings.
- Any security issues in Microsoft products or services discovered during this research must be reported to the Microsoft Security Response Center.