How to implement an ethical framework in AI

AI is already showing its potential for good causes: predicting weather impact, optimising transport intelligently, projecting when operational machines in manufacturing will need maintenance and which parts will be required, revolutionising healthcare through genomics and microbiome R&D, and supporting small businesses with smart, fast access to capital. It’s even helping to prevent blindness, assisting deaf or hard-of-hearing students, and aiding cancer research. Incredibly, we’re also seeing AI used in the mission to save endangered species and understand climate change.

Yet it’s not without challenges. To ensure AI is a force for good, we must first understand the risks and the host of ethical issues involved in creating thinking machines and relying on them to make important decisions that affect humans and society in general. We must ask ourselves not what AI can do, but what it should do.

“‘Should’ companies have been shown to outperform ‘can’ companies by 9%.”

– Maximising the AI opportunity, Microsoft UK.

Ethics is key to the future success of AI

Future-proofing the success of AI depends on our ability to dissect the conclusions an AI system reaches, the predictability of such intelligent algorithms, the trust we build into creating and operationalising them, and a robust legal framework addressed adequately by legislation.

Satya Nadella rightly says, “Unfortunately the corpus of human data is full of biases”. At Build 2018 he also mentioned that, among other tasks, Microsoft’s internal AI ethics team’s job is to ensure that the company’s foray into cutting-edge techniques, like deep learning, doesn’t unintentionally perpetuate societal biases in its products.

AI hasn’t changed some of the fundamentals of computer science, such as “garbage in, garbage out”. Machine learning and deep learning – which power many AI systems – learn from large data sets, and in most situations more data means better predictions and higher-quality results. But if the data used to train the model carries bias, the outcome is likely to be biased too.
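As a minimal, self-contained illustration of this effect, the sketch below (Python with NumPy and scikit-learn; the data is entirely synthetic and deliberately skewed, not from any real system) trains a model on biased historical labels and shows the skew re-emerging in its predictions:

```python
# A minimal sketch: bias in the training data (synthetic, deliberately
# skewed) re-emerges in the model's predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# A binary "group" attribute and a skill score that is identical across groups.
group = rng.integers(0, 2, size=n)
skill = rng.normal(0, 1, size=n)

# Biased historical labels: group 1 was approved less often at the same skill.
label = (skill + rng.normal(0, 0.5, size=n) - 0.8 * group > 0).astype(int)

# Train on the biased labels, using group as a feature.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

# The model reproduces the historical skew: approval rates differ by group
# even though the underlying skill distribution is the same.
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
```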

So, how do we build an ethical framework for AI?

Let’s look at the key elements required for an ethical framework in AI. What should AI be allowed to do?

Fairness

You need to ensure AI is built and executed with a fairness lens by considering the following:

  • Understand how bias can be introduced and how it affects recommendations
  • Attract a diverse pool of talent when building and operationalising AI systems
  • Develop analytical techniques to detect and eliminate bias from the outset (see the sketch after this list)
  • Integrate human review and domain expertise throughout the life cycle, and continually collect post-execution feedback and iterate on development with fairness in mind
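As a minimal sketch of one such analytical technique, the snippet below (Python with pandas; the `group` and `prediction` column names are illustrative) computes the demographic parity difference, i.e. the gap in positive-prediction rates between groups:

```python
# Sketch: flag a fairness concern when positive-prediction rates
# differ too much across groups (demographic parity difference).
import pandas as pd

def demographic_parity_difference(df: pd.DataFrame,
                                  group_col: str = "group",
                                  pred_col: str = "prediction") -> float:
    """Largest gap in positive-prediction rate between any two groups."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

# Illustrative data; in practice this would be your model's scored output.
scored = pd.DataFrame({
    "group":      ["a", "a", "a", "b", "b", "b"],
    "prediction": [1,   1,   0,   1,   0,   0],
})

gap = demographic_parity_difference(scored)
print(f"demographic parity difference: {gap:.2f}")
if gap > 0.2:  # the threshold is a policy choice, not a universal constant
    print("warning: review this model for potential bias")
```

Note that the acceptable gap is a policy decision to agree with domain experts, not a constant to hard-code.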

Reliability & safety

Reliability of predictions and the safety of AI models are key for us to have trust in AI. Follow these steps to build these in from the very start:

  • Always explicitly evaluate training data with the intention of understanding reliability, safety, and bias
  • Test extensively and ensure that you enable a user feedback loop
  • Monitor ongoing performance, so that you can iterate and tune models on the fly when needed (a monitoring sketch follows this list)
  • Design to deal with unexpected circumstances, including nefarious attacks
  • Plan regular human audits and checkpoints from the beginning
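To make the monitoring point concrete, here is a minimal sketch (plain Python; the window size and accuracy threshold are illustrative choices) that tracks accuracy over a rolling window of user feedback and flags when it drifts below an agreed baseline:

```python
# Sketch: rolling-window monitor that flags when live accuracy drifts
# below an agreed baseline, prompting a human review or retraining.
from collections import deque

class AccuracyMonitor:
    def __init__(self, window: int = 500, min_accuracy: float = 0.9):
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect
        self.min_accuracy = min_accuracy

    def record(self, prediction, actual) -> None:
        self.outcomes.append(1 if prediction == actual else 0)

    def healthy(self) -> bool:
        # Don't alert until the window has enough feedback to be meaningful.
        if len(self.outcomes) < self.outcomes.maxlen:
            return True
        return sum(self.outcomes) / len(self.outcomes) >= self.min_accuracy

monitor = AccuracyMonitor(window=3, min_accuracy=0.9)
for pred, actual in [(1, 1), (0, 1), (1, 1)]:
    monitor.record(pred, actual)
if not monitor.healthy():
    print("accuracy below threshold: trigger human audit / retraining")
```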

Privacy & security

As AI becomes embedded in almost everything we do, it’s fundamentally important for developers and vendors to consider privacy and security requirements as the number one priority:

  • Existing privacy laws (e.g. the GDPR) apply to AI and the data used to train AI
  • Provide transparency about data collection and use, and good controls so people can make choices about their data
  • Design systems to protect against bad actors
  • Use de-identification techniques to promote both privacy and security (a minimal sketch follows this list)
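One widely used de-identification technique is pseudonymisation: replacing direct identifiers with keyed hashes before data enters the training pipeline. A minimal sketch using only the Python standard library (the field names and key handling are illustrative):

```python
# Sketch: pseudonymise direct identifiers with a keyed hash (HMAC)
# before records enter a training pipeline. The secret key must be
# stored separately from the data, e.g. in a key vault.
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-key-from-your-key-vault"  # illustrative only

def pseudonymise(value: str) -> str:
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "age_band": "30-39", "outcome": 1}

# Keep the analytically useful fields; replace the identifier.
safe_record = {**record, "email": pseudonymise(record["email"])}
print(safe_record)
```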

Inclusiveness

The problem with biased data is that it over-represents one kind of person or thing as the ground truth, and under-represents others. Being inclusive in your design, and ensuring your data represents everyone who will use or benefit from your AI solution, is key to its success. Here are some simple practices you must consider (a representation-audit sketch follows the list):

  • Inclusive design practices to address potential barriers that could unintentionally exclude people
  • Enhance opportunities for those with disabilities
  • Build trust through contextual interaction
  • Creators and operators must have EQ in addition to IQ
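A practical starting point for inclusive data is a representation audit: compare how each group is represented in your training data against the population your solution is meant to serve. A minimal sketch with entirely illustrative numbers:

```python
# Sketch: compare group shares in the training data against the
# population the system is meant to serve, and flag gaps.
training_counts = {"group_a": 8_000, "group_b": 1_500, "group_c": 500}
population_share = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

total = sum(training_counts.values())
for group, count in training_counts.items():
    data_share = count / total
    gap = population_share[group] - data_share
    status = "under-represented" if gap > 0.05 else "ok"  # threshold is a policy choice
    print(f"{group}: data {data_share:.0%} vs population {population_share[group]:.0%} -> {status}")
```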

Transparency

Transparency is key to building trust. To instil transparency when building or operating AI systems, consider the following:

  • People should understand how decisions were made. Just as developers are able to debug a system, end users must be able to understand its process; build systems with that transparency in mind
  • AI systems should be able to provide contextual explanations (a minimal sketch follows this list)
  • Make it easier to raise awareness of potential bias, errors, and unintended outcomes
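For linear models, a contextual explanation can be as simple as reporting each feature’s contribution to the decision. A minimal sketch with scikit-learn (the feature names and data are illustrative; non-linear models would need dedicated interpretability tooling):

```python
# Sketch: per-decision explanation for a linear model, reporting each
# feature's contribution to the log-odds of the prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "years_at_address"]  # illustrative names
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.5, -2.0, 0.5]) + rng.normal(size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain(x: np.ndarray) -> None:
    contributions = model.coef_[0] * x  # contribution to the log-odds
    print(f"prediction: {model.predict(x.reshape(1, -1))[0]}")
    for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
        print(f"  {name}: {c:+.2f}")

explain(X[0])
```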

“The big challenge for us, and anyone looking to use AI, is building trust. People who give their data to the kind of infrastructure that we are developing are inherently cautious about how that data is going to be used and who will have access to it” – Richard Tiffin, Chief Scientific Officer, Agrimetrics

Accountability

Ultimately, humans build AI systems, and we ourselves must be accountable for what we build. Here are some examples of what we must do:

  • People must be accountable for how their systems operate
  • Observe norms during system design and on an ongoing basis in operation
  • There is a role for internal review boards when designing and operating AI systems at scale

[Graph: 82% believe that humans need to take responsibility when AI systems behave in unforeseeable ways; 8% neither agree nor disagree; 3% disagree; 7% don’t know.]

The opportunity

Quoting Satya Nadella again: “AI isn’t just another piece of technology. It could be one of the world’s most fundamental pieces of technology the human race has ever created.” We are at a singularity, and it is up to us to build AI systems that augment what we do, and the manner in which we do it, in a positive way.

To learn more, read The Future Computed, our newly launched report to help you maximise the AI opportunity.

Find out more

The AI opportunity in healthcare

Join our AI for Good accelerator programme for startups

How to reskill your employees for the AI era


About the author

Pratim Das

Pratim is Head of Solutions Architecture, where he runs a team focused on Data & AI for the Customer Success Unit. Prior to that, he was a Specialist Solutions Architect for Big Data & Analytics at AWS, where he advised customers on big data architecture, migrating big data workloads to the cloud, and implementing best practices and guidelines for analytics. Pratim is particularly interested in operational excellence for petabyte-to-exabyte-scale operations, and in design patterns for “good” data architecture, including governance, catalog, and lineage. He’s also passionate about advanced analytics, planet-scale NoSQL databases like Cosmos DB, and using the right mix of technology, business, and pragmatism to make customers successful.