Project EmpowerMD: Medical conversations to medical intelligence

Trustworthy AI through user interface design

By Isaac Naveh-Benjamin, Software Engineer

The rise of artificial intelligence has led to a growing problem of transparency. It is increasingly difficult for the layperson, or even for the developer of an AI system, to understand how such systems work. Sometimes, this is due to the use of proprietary algorithms – but more often, it’s because modern AI systems are simply too complex for humans to understand. Such systems are “black boxes” whose functioning depends on millions of internal interdependent variables. 

As AI systems are deployed in the real world, questions of transparency and accountability have become increasingly important. With AI being used in applications like criminal sentencing or loan approval, it’s important to be able to answer questions like “Why did input A result in output B?” or “What should I do differently to get a favorable outcome?”  

When it comes to the medical domain, the stakes are even higher. Doctors can’t afford to trust the recommendations of an AI system whose workings they don’t understand. While they don’t need to understand the precise technical details of the algorithms, they do need an intuitive grasp of how the system works, so that they can understand its limitations. They also need an “audit trail” that explains how a machine came to a specific conclusion.

At EmpowerMD, a Microsoft incubation project, we’re developing an Intelligent Scribe Service (ISS) that listens to doctor-patient interactions and automatically generates a medical note. This note is then vetted for accuracy by the doctor, who can edit it as she sees fit. This design is known as human-in-the-loop, and it is one of the main ways of making AI systems more transparent. By incorporating human feedback into the process, we can be more confident in the overall integrity of the system. (Human review is also a legal imperative: by law, medical notes must be reviewed for accuracy and signed by a doctor.)
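To make the workflow concrete, here is a minimal sketch of what a human-in-the-loop review flow might look like in TypeScript. All of the type and function names here are hypothetical illustrations, not the actual EmpowerMD API; the point is simply that a machine-generated note stays a draft until a clinician has edited and signed it.

```typescript
// Hypothetical shapes for a human-in-the-loop note review flow.
type NoteStatus = "machine_draft" | "under_review" | "signed";

interface DraftNote {
  id: string;
  text: string;
  status: NoteStatus;
  signedBy?: string;
  signedAt?: Date;
}

// The clinician's edits replace the draft text; machine output is never
// committed to the record without passing through this step.
function applyClinicianEdits(note: DraftNote, editedText: string): DraftNote {
  return { ...note, text: editedText, status: "under_review" };
}

// Signing records who approved the note and when, so the audit trail
// always ends with a single accountable human reviewer.
function signNote(note: DraftNote, clinicianId: string): DraftNote {
  if (note.status !== "under_review") {
    throw new Error("Note must be reviewed before it can be signed.");
  }
  return { ...note, status: "signed", signedBy: clinicianId, signedAt: new Date() };
}
```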

In designing the ISS’s user interface, we added several features that help make it more transparent. Below, you can see a screenshot of the Intelligent Scribe, with the transcript of the conversation on the right-hand side and the machine-generated note on the left. Whenever the user clicks any part of the medical note, we give a visual indication of where in the conversation it came from by highlighting the corresponding parts of the transcript. This makes it easy for the user to determine the provenance of any part of the medical note.

[Screenshot: the Intelligent Scribe interface, with the conversation transcript on the right and the machine-generated note on the left]
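Under the hood, this kind of provenance link only requires that each note segment remember which transcript utterances it was derived from. Here is a hedged sketch in TypeScript; the data shapes are assumptions for illustration, not the actual ISS model.

```typescript
// Hypothetical data model linking note segments back to the transcript.
interface Utterance {
  id: string;
  speaker: "doctor" | "patient";
  text: string;
}

interface NoteSegment {
  text: string;
  sourceUtteranceIds: string[]; // provenance: where this sentence came from
}

// Given a clicked note segment, return the transcript utterances to highlight.
function utterancesFor(segment: NoteSegment, transcript: Utterance[]): Utterance[] {
  const ids = new Set(segment.sourceUtteranceIds);
  return transcript.filter((u) => ids.has(u.id));
}

// Example: clicking this note segment highlights the patient's original words.
const transcript: Utterance[] = [
  { id: "u1", speaker: "doctor", text: "What brings you in today?" },
  { id: "u2", speaker: "patient", text: "I've had chest pain for two days." },
];
const segment: NoteSegment = {
  text: "Patient reports chest pain for two days.",
  sourceUtteranceIds: ["u2"],
};
console.log(utterancesFor(segment, transcript)); // -> [{ id: "u2", ... }]
```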

Another feature of the ISS is the layer view. This feature divides the medical note into layers based on where the information came from: the electronic health record (EHR), the clinical conversation, or the doctor. This allows the user to tell, at a glance, which parts of the note are machine-generated and which came from a human.

[Screenshot: the layer view, with the note divided by information source]
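A simple way to support a view like this is to tag every segment of the note with its source layer and let the UI filter on those tags. A minimal sketch, again with hypothetical names:

```typescript
// Hypothetical source tags for the layer view.
type Layer = "ehr" | "conversation" | "doctor";

interface LayeredSegment {
  text: string;
  source: Layer;
}

// Return only the segments in the layers the user has toggled on, e.g.
// hiding EHR boilerplate to see just the machine-generated content.
function visibleSegments(note: LayeredSegment[], activeLayers: Set<Layer>): LayeredSegment[] {
  return note.filter((s) => activeLayers.has(s.source));
}

const note: LayeredSegment[] = [
  { text: "Allergies: penicillin.", source: "ehr" },
  { text: "Patient reports chest pain for two days.", source: "conversation" },
  { text: "Plan: order ECG.", source: "doctor" },
];
console.log(visibleSegments(note, new Set<Layer>(["conversation", "doctor"])));
```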

Finally, we use the typography of the note to give clues as to the origin of specific phrases. For example, when a phrase is italicized, that indicates that it is a near-verbatim quote from the patient (we call this the “patient’s voice”). And when a section of the note appears grayed out, that indicates that it’s a low-confidence suggestion from our ML engine and that it requires the doctor’s review.
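These typographic rules can be driven by per-segment metadata. The sketch below assumes a confidence score from the ML engine and an illustrative cutoff of 0.6; both the field names and the threshold are assumptions for this example, not EmpowerMD’s actual values.

```typescript
// Hypothetical per-segment metadata driving the note's typography.
interface StyledSegment {
  text: string;
  isPatientVoice: boolean; // near-verbatim quote from the patient
  confidence: number;      // ML engine's confidence, 0..1
}

const LOW_CONFIDENCE = 0.6; // assumed cutoff, for illustration only

// Map a segment to simple style hints for the note renderer:
// italics for the patient's voice, gray for low-confidence suggestions.
function styleFor(segment: StyledSegment): { fontStyle: string; color: string } {
  return {
    fontStyle: segment.isPatientVoice ? "italic" : "normal",
    color: segment.confidence < LOW_CONFIDENCE ? "gray" : "black",
  };
}

// A low-confidence patient quote renders italic and grayed out,
// signaling that it still needs the doctor's review.
console.log(styleFor({ text: "It hurts when I breathe.", isPatientVoice: true, confidence: 0.4 }));
// -> { fontStyle: "italic", color: "gray" }
```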

The EmpowerMD interface incorporates AI that’s transparent by design. To our minds, that has clear advantages over other types of explanatory UI, which are often designed separately from, or as an adjunct to, the systems they attempt to explain. At EmpowerMD, we built transparency into the system right from the start. It is integral to the user experience. This, in turn, allows doctors to have more confidence in the system’s suggestions and recommendations.