In Asia, as in other regions, adoption of artificial intelligence (AI) in the financial services industry is growing rapidly, with more than 60 percent of financial services organizations already accelerating their pace of digitalization in response to the pandemic.
Spending on AI by the financial services sector in the region is estimated to reach $4.9 billion in 2024. Currently, financial services AI spending in Asia Pacific (excluding Japan) represents 15 percent of worldwide spending on AI, indicating significant potential for growth ahead as more financial institutions develop and deploy AI solutions.
In this context, we are seeing growing interest from customers, regulators, and other stakeholders in how AI can be deployed responsibly. We’re working with customers and regulators directly, as well as through groups like the Monetary Authority of Singapore’s Veritas consortium. Here are five things we are learning from these engagements.
- Regulators are encouraging organizations to build on existing regulatory obligations, and have endorsed a principles-based, technology-neutral approach
Financial institutions operate in a highly regulated industry with robust requirements covering outsourced technology use, risk management, and governance. It’s clear that when it comes to the use of AI, financial institutions aren’t operating in a regulatory vacuum. Because of this, we’re seeing clear indications from regulators in Asia that they are not looking to introduce new financial services regulation specific to AI. Rather, they are looking to build on existing regulatory frameworks and associated guidance, such as data protection, confidentiality and bank secrecy, technology and risk management, and fair lending, among others, to ensure that AI is deployed responsibly.
Regulators across Asia have issued non-binding guidance or principles, and have encouraged collaboration among financial institutions and technology partners to build understanding of how existing controls and governance can be strengthened to implement and demonstrate good practices of responsible AI. This principles-based, technology-neutral regulatory approach can be seen most clearly in Singapore and Hong Kong, echoing the approach taken globally by jurisdictions like the UK.
One especially interesting development is the emergence of a co-creation approach, with regulators and industry working closely together to ensure the principles can be translated into practical actions. The Monetary Authority of Singapore (MAS) Veritas Consortium is a great example of this, with MAS actively engaging with industry participants to develop a fairness assessment methodology in the first phase of the Consortium. This non-binding methodology was developed through a ground-up approach, based on assessing fairness considerations raised across different financial services use cases. In the current phase of the Consortium’s work, Microsoft is facilitating discussions on regulatory considerations associated with implementing responsible AI in financial services. The feedback MAS receives from Veritas will then inform any future guidance.
This co-creation approach can also be seen globally. In the US, leading organizations across the financial services, technology, and academic sectors announced the formation of a new National Council for Artificial Intelligence (NCAI). In the UK, the Artificial Intelligence Public-Private Forum brings together the public and private sectors to encourage further constructive dialogue on the use and impact of AI/ML, including the potential benefits and constraints to deployment, as well as the associated risks.
Another development we are seeing is an increased value placed on coordination with other jurisdictions, given that many financial institutions in Asia operate across multiple markets regionally or globally. There are clear benefits from coordination between regulators across the region to strive for regulatory coherence. Although implementing AI responsibly and in line with regulator expectations means taking an approach tailored to local contexts (for example, an assessment of whether data inputs used for machine learning are appropriate for mitigating bias risks will vary from context to context), cross-border cooperation among regulators and with industry has great value for sharing perspectives and learnings.
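To make the bias-assessment example above concrete, here is a minimal, hypothetical sketch of one fairness check an institution might run on a credit-approval model’s outputs. The metric (a demographic parity gap) and any threshold applied to it are illustrative assumptions on our part, not the Veritas methodology or any regulator’s prescribed test; the point is that what counts as an acceptable gap is a context-dependent policy judgment.

```python
# Minimal, hypothetical sketch of one fairness check a financial
# institution might run over a credit-approval model's decisions.
# The metric and any acceptance threshold are illustrative only.
from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Approval rate per group, from (group, approved) pairs."""
    totals: dict[str, int] = defaultdict(int)
    approvals: dict[str, int] = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions: list[tuple[str, bool]]) -> float:
    """Largest difference in approval rates between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Whether a given gap is acceptable is a policy choice that varies by
# use case and local context -- the code only surfaces the number.
sample = [("A", True), ("A", True), ("A", False), ("B", True), ("B", False)]
print(demographic_parity_gap(sample))  # ~0.167 for this toy data
```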
- Building an understanding of the issues raised in specific financial services use cases is essential for translating principles to practice
Looking back two to three years, most discussions around the responsible use of AI in finance centered on defining ethical principles and raising awareness of the need for responsible AI. In Asia, as actual deployment of AI by financial institutions increases, we are seeing efforts shift increasingly to the practical considerations of translating principles into practice based on specific use cases. While the earlier discussions at the level of principles were important, it is only through application in real-life scenarios that principles can be translated into practice. Regulator guidelines and ethical principles can set a baseline and help ensure organizations are asking the right questions, but a use case-driven approach is then needed: the context in which a specific AI solution is deployed will shape which principles are most relevant and how any risks it presents can be mitigated.
The importance of taking this use case-based approach was clear during Microsoft’s collaboration with Standard Chartered, Deutsche Bank, Visa and Linklaters to test the application of Singapore’s AI ethics principles. Through this project we worked collectively to assess the responsible AI considerations involved in three different financial services use cases: predicting travel patterns based on previous spending; automating verification of wet ink signatures; and regulatory compliance related to Know-Your-Customer checks. A clear learning was that specific principles have different implications in different contexts, and some principles will be more or less relevant.
A similar, use case-based approach is being taken through the MAS Veritas consortium. By basing its work on specific use cases prepared by the participating financial institutions, together with technology company participants like Microsoft, the group is generating important learnings that strengthen our collective efforts to implement AI responsibly. As more institutions share use cases that show how they worked through the responsible AI considerations involved, the whole industry will benefit.
We are seeing similar benefits outside the financial services industry through efforts to gather and publish use cases for responsible AI in Asia. These include the pilot of Australia’s AI ethics principles (where Microsoft was one of five companies invited to participate) and Singapore’s two volumes of use cases, a number of which relate to financial services.
- A materiality-based approach can help focus efforts on more sensitive use cases
In Asia we are seeing important progress in applying governance frameworks and controls to the use of AI in financial services. Consistent with the global picture from a 2020 survey by the Institute of International Finance, we’re seeing in groups like the Veritas consortium that most financial institutions are using their existing model risk management or other risk management frameworks. A growing priority is identifying the most material use cases of AI so that efforts can focus on sensitive uses where there may be greater risk of harm.
For example, in a recent discussion one financial institution shared that, rather than conducting a regular stocktake and review of all possible AI use cases, they focus on identifying the material use cases that require greater scrutiny. This avoids stalling the deployment of lower-risk AI applications, while ensuring that potentially higher-risk applications receive sufficient attention. This is similar to the approach we are taking at Microsoft, where a baseline set of responsible AI criteria applies to any team building AI systems, with a more intensive review procedure in place for sensitive uses where there is greater risk of harm.
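As a rough illustration of this kind of triage, the sketch below routes each AI use case to a baseline or intensive review track based on a few risk signals. The signals, tiers, and decision rule are our own illustrative assumptions, not Microsoft’s or any regulator’s actual criteria.

```python
# Minimal sketch of a materiality triage, assuming an institution tags
# each AI use case with a few risk signals. Signals, tiers, and the
# rule below are illustrative assumptions, not real review criteria.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    affects_customer_outcomes: bool  # e.g. credit, pricing, fraud flags
    uses_personal_data: bool
    fully_automated: bool            # no human review of decisions

def review_tier(uc: UseCase) -> str:
    """Route a use case to a baseline or intensive review track."""
    signals = sum([uc.affects_customer_outcomes,
                   uc.uses_personal_data,
                   uc.fully_automated])
    return "intensive-review" if signals >= 2 else "baseline-review"

print(review_tier(UseCase("credit scoring", True, True, True)))         # intensive-review
print(review_tier(UseCase("internal doc search", False, False, False)))  # baseline-review
```

In practice the rubric would be richer and owned by risk and compliance functions, but even a simple, explicit rule like this makes the triage consistent and auditable.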
Looking ahead, an important area to explore is whether more cross-industry collaboration is needed to define shared approaches for assessing materiality. Groups like the Veritas consortium could be helpful in facilitating these discussions.
- It is important for financial institutions and technology partners to promote transparency and accountability
A growing area of interest is the complementary roles of financial institutions deploying AI technologies and software developers providing AI solutions that financial institutions might customize or deploy “off-the-shelf”. Even larger financial institutions in Asia with in-house development teams are likely to be building AI systems on top of AI/ML products from third-party vendors or on pretrained models downloaded from the internet. That is driving a discussion on how model developers can work most effectively with financial institutions to promote transparency and ensure a degree of accountability for the software being developed. Although the institutions deploying AI solutions will ultimately be accountable for ensuring that their use cases are implemented responsibly, we see an important role for developers of AI solutions in promoting the responsible use of their models.
Technology partners can help by sharing technical and non-technical information about the capabilities and limitations of the AI software being developed, to help a financial institution assess the best way of deploying that software responsibly. For example, Microsoft has published transparency notes to fill a gap between marketing and technical documentation, giving our customers the key information they need to apply our AI services responsibly.
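To illustrate the kind of information such documentation conveys, here is a hypothetical sketch of a transparency-note-style record that a deploying institution could review against its intended use case. The field names and example values are assumptions for illustration and do not reflect the actual schema or content of Microsoft’s transparency notes.

```python
# Hypothetical sketch of the kind of structured information a
# transparency note conveys to a deploying institution. Field names
# and values are illustrative, not an actual published schema.
from dataclasses import dataclass, field

@dataclass
class TransparencyNote:
    system_name: str
    intended_uses: list[str]
    known_limitations: list[str]
    evaluation_summary: str
    unsupported_uses: list[str] = field(default_factory=list)

note = TransparencyNote(
    system_name="signature-verification-service",
    intended_uses=["verify wet-ink signatures against a reference sample"],
    known_limitations=["accuracy degrades on low-resolution scans"],
    evaluation_summary="tested on scanned documents from three branches",
    unsupported_uses=["sole basis for rejecting a transaction"],
)
print(note.known_limitations)
```

A structured record like this lets a deploying institution check its planned use against the developer’s stated intended and unsupported uses before anything ships.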
- Diversity and culture are essential for responsible AI
An ongoing learning is the importance of bringing diverse perspectives into efforts to implement responsible AI. Diverse collaboration is of utmost importance: it adds healthy challenge, helping ensure that responsible AI principles are appropriately applied and that potential concerns are identified and addressed early.
There are many ways in which diverse perspectives can be included. Implementing AI responsibly requires involving a range of stakeholders, such as government, regulators, industry associations, technology developers, technology deployers, civil society groups, and representatives of the end-customer community. It is also important to draw on different perspectives within an organization itself: not only the teams that develop the AI applications, but also legal, compliance, audit, and risk (including operational, reputational, and conduct risk). In addition, those teams must themselves be diverse, with appropriate representation across gender, ethnicity, disability, economic circumstances, and other dimensions of diversity. This is especially true in Asia, given how diverse the region is.
Moving forward
It is critical to build on the work done so far, encourage greater sharing of use case examples to improve understanding of how financial institutions are implementing responsible AI, and examine the regulatory questions financial institutions need to address. Initiatives like Veritas are a powerful platform for financial institutions to co-develop solutions with regulators, technology companies, and other industry players. This will strengthen the overall understanding of the use of AI across the financial services industry, and instill confidence in regulators and consumers that these issues are being taken into account to ensure AI is used responsibly.
Learn More
MAS press release announcing the publication of the FEAT principles: https://www.mas.gov.sg/publications/monographs-or-information-paper/2018/FEAT
Microsoft’s press release announcing that we joined the Veritas consortium: “Microsoft joins Veritas consortium led by Monetary Authority of Singapore (MAS), in commitment to responsible Artificial Intelligence (AI) use in the Financial Services Industry” (Microsoft Stories Asia)
MAS press release announcing the conclusion of the 2020 Veritas projects: “Veritas Initiative Addresses Implementation Challenges in the Responsible Use of Artificial Intelligence and Data Analytics” (mas.gov.sg)
White paper from Veritas Phase 1: https://www.mas.gov.sg/-/media/MAS/News/Media-Releases/2021/Veritas-Document-1-FEAT-Fairness-Principles-Assessment-Methodology.pdf
2019 Principles to Practice publication: https://www.microsoft.com/cms/api/am/binary/RE487kh
Microsoft responsible AI principles: https://www.microsoft.com/en-us/ai/responsible-ai?activetab=pivot1%3aprimaryr6
The building blocks of Microsoft’s responsible AI program: https://blogs.microsoft.com/on-the-issues/2021/01/19/microsoft-responsible-ai-program/
Bank for International Settlements: “Humans keeping AI in check – emerging regulatory expectations in the financial sector”: https://www.bis.org/fsi/publ/insights35.pdf