
This is why artificial intelligence will transform health


In today’s post-EHR health environment, the amount of data generated by digitization is staggering. Dozens of systems feed data across healthcare organizations daily, and IDC predicts that health data volumes will continue to grow at a rate of 48% annually.[1] Yet, despite advances toward becoming a data-rich and data-driven industry, medical errors are still the third-leading cause of death in the US alone.[2]

Though artificial intelligence (AI) is still in the early stages of adoption in healthcare, its exceptional ability to manage big data makes it a powerful weapon in the fight against medical errors. But don’t worry—robots aren’t about to replace clinicians anytime soon. Humans and machines are complementary: humans bring ingenuity and emotional intelligence, while machines are better at tackling repetitive, high-volume tasks where accuracy is vital. Where artificial intelligence can add the most value is through amplified intelligence—the idea of AI shouldering the big data burden and working in concert with human intellect, empathy and creativity to solve medical problems and find better ways to do things.

This human and machine companion model has the potential to significantly benefit healthcare providers, enabling them to reimagine processes and refocus clinicians’ efforts where they can deliver the most value. While there are numerous ways amplified intelligence can be applied across health organizations, here are three distinct looks at AI in action—from adding intelligence to everyday tasks like medical transcription to enabling new capabilities in medical imaging.

Delivering an intelligent transcription experience that surfaces insight at the point of care

The clinical workflow is complex, often requiring physicians to multitask while interacting with patients. Taking notes during consultations keeps physicians from focusing solely on the patient, and those notes are time-consuming to edit and approve later. Furthermore, the notes need to be annotated to provide clinical value for the patient’s extended care coordination team. In this scenario, AI has quickly emerged as an ideal intelligent assistant. Using cognitive services such as voice, speech, language understanding and more, AI can capture, transcribe, annotate, and learn from conversations to deliver powerful insights at the point of care.

An exciting example of this is the EmpowerMD project from Microsoft Research, an initiative focused on transforming medical conversations into medical intelligence. Built with custom speech and language understanding and tailored for the medical domain, EmpowerMD’s Intelligent Scribe captures and synthesizes patient-physician conversations,[3] noting clinically relevant phrases that naturally occur in dialogue. The phrases are mapped to common areas of an encounter note, and the physician can edit the content before approving it. The system also learns from encounters and physician input to adjust over time.
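To make the phrase-mapping idea concrete, here is a minimal sketch of how transcript sentences might be routed to encounter-note sections. The section names, keywords, and keyword-matching approach are illustrative assumptions for this post—EmpowerMD’s actual models are far more sophisticated than a lookup table.

```python
# Hypothetical sketch: route clinically relevant phrases from a
# transcribed conversation into common encounter-note sections.
# Section names and keywords are illustrative placeholders.

NOTE_SECTIONS = {
    "chief_complaint": ["pain", "cough", "headache"],
    "medications": ["ibuprofen", "metformin", "lisinopril"],
    "plan": ["follow up", "refer", "prescribe"],
}

def map_phrases_to_note(transcript: str) -> dict:
    """Assign each sentence of the transcript to note sections by keyword."""
    note = {section: [] for section in NOTE_SECTIONS}
    for sentence in transcript.lower().split("."):
        sentence = sentence.strip()
        if not sentence:
            continue
        for section, keywords in NOTE_SECTIONS.items():
            if any(keyword in sentence for keyword in keywords):
                note[section].append(sentence)
    return note
```

In a real scribe, a learned language-understanding model would replace the keyword table, and the physician would review and edit the populated note before approving it, as described above.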

Another example is intelligent agents—patient-facing chatbots (like the Microsoft Health Bot) that can have a conversation with patients online. The bots ask questions designed to guide patients to the right kind of care; based on the answers, the bot can assist them with next steps, such as making an appointment, talking to a care worker, or even calling an ambulance on the patient’s behalf. It also adds the conversation to the patient’s history, enabling clinicians to pick up right where the chatbot left off.
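The triage step described above can be sketched as a simple rule-based decision: reported symptoms map to a recommended next action. The symptom sets and action names below are hypothetical—a production bot like the Microsoft Health Bot uses clinically validated protocols, not a three-rule function.

```python
# Illustrative sketch of a chatbot triage step: map a patient's
# reported symptoms to a recommended next action. Rules and action
# names are hypothetical, not a clinical protocol.

def triage(symptoms: set) -> str:
    """Return a recommended next step for the given reported symptoms."""
    emergencies = {"chest pain", "severe bleeding", "trouble breathing"}
    urgent = {"high fever", "persistent vomiting"}
    if symptoms & emergencies:      # any emergency symptom escalates immediately
        return "call_ambulance"
    if symptoms & urgent:           # urgent but not life-threatening
        return "talk_to_care_worker"
    return "make_appointment"       # routine care by default
```

The conversation and the resulting recommendation would then be written back to the patient’s history, so clinicians can pick up where the bot left off.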

By using AI that builds on and learns from clinician expertise, physicians can focus on patients and minimize high-volume busywork, as well as optimize encounters and gain valuable intelligence for diagnosis and treatment planning.

Detecting risks and preventing deterioration with AI-enhanced video observation

In a typical inpatient setting, clinicians make incremental observations of patients at regular intervals, then use this data to guide treatment decisions. However, incremental observations only generate limited data points, increasing the risk a patient might deteriorate because something was missed.

In contrast, the combination of AI and video can provide a complete picture of the patient in real time. Applied to streaming video, AI can analyze second-by-second statistics and, by synthesizing them with physical movement and periodic observations, alert clinicians to check on patients and detect problems before they become critical.
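A minimal sketch of the alerting idea: compare each per-second reading from the video pipeline against a rolling baseline and flag sharp deviations. The signal (respiratory rate), window size, and threshold are illustrative assumptions, not a clinical model.

```python
# Hedged sketch: flag possible deterioration when a per-second
# video-derived vital sign deviates sharply from its rolling baseline.
# Signal, window, and threshold are illustrative assumptions.

from collections import deque

class DeteriorationMonitor:
    def __init__(self, window: int = 60, threshold: float = 0.2):
        self.history = deque(maxlen=window)  # recent per-second readings
        self.threshold = threshold           # relative change that triggers an alert

    def observe(self, respiratory_rate: float) -> bool:
        """Record one reading; return True if it deviates sharply from baseline."""
        alert = False
        if len(self.history) >= 10:          # wait for a stable baseline
            baseline = sum(self.history) / len(self.history)
            if abs(respiratory_rate - baseline) / baseline > self.threshold:
                alert = True
        self.history.append(respiratory_rate)
        return alert
```

A real system would fuse several signals (skin color, movement, breathing pattern) and route alerts into the clinical workflow rather than raising a boolean, but the baseline-and-deviation pattern is the core idea.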

For example, a large US-based children’s hospital is leveraging AI to analyze years of video of infants and using the insights to detect warning signs of deterioration in current patients. The AI solution analyzes everything from the color of the baby’s skin to their breathing and behavior, and alerts care workers when an intervention may be required.

Video AI supports the clinician workflow by enabling more thorough observation, freeing up clinician time and mitigating risks to improve the quality of care.

Driving speed, accuracy and outcomes with advanced medical imaging analysis

Given the proliferation and sophistication of images coming from modern medical imaging systems, clinicians must sift through large volumes of information to determine what’s clinically relevant. In addition, imaging data is often disconnected from other patient data, limiting the clinician’s ability to build a comprehensive picture.

AI has the capability to transform this time-intensive approach. By infusing AI into the imaging workflow, clinicians can surface relevant data from disparate image sources and conduct analysis in a clear, concise and easy-to-digest format.[4] AI can also connect imaging data with other data such as the patient’s medical history, pharmacy information, prior imaging, recent lab results or pathology reports.
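The data-linking idea above can be sketched as a join on a shared patient identifier: imaging studies and other records (here, lab results) are grouped into one per-patient view. The field names are hypothetical placeholders for whatever the source systems actually provide.

```python
# Minimal sketch of linking imaging data with other patient records
# by a shared patient ID. Field names are hypothetical placeholders.

def link_patient_data(imaging_studies: list, lab_results: list) -> dict:
    """Group imaging studies and lab results into one view per patient."""
    view = {}
    for study in imaging_studies:
        patient = view.setdefault(study["patient_id"], {"imaging": [], "labs": []})
        patient["imaging"].append(study)
    for result in lab_results:
        patient = view.setdefault(result["patient_id"], {"imaging": [], "labs": []})
        patient["labs"].append(result)
    return view
```

In practice this linking happens through interoperability standards and clinical data platforms rather than in-memory dictionaries, but the goal is the same: a comprehensive picture alongside the images.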

An excellent example of this is Project InnerEye, a Microsoft Research initiative. InnerEye uses computer vision and machine learning to build tools for automatically analyzing 3D radiology images. For example, in a common computed tomography (CT) scan, InnerEye enables clinicians to automatically detect organ locations, then select specific organs and zoom in for a more detailed analysis. The clinician remains in full control of the results, while their efforts are augmented by machine learning, enabling AI to become a true consultant to physicians.

By enhancing medical imaging analysis with AI, clinicians can harness the vast landscape of imaging data to accelerate diagnoses, improve accuracy and efficiency, and positively impact patient outcomes.

Partner with artificial intelligence to reimagine care delivery

Amplified intelligence, powered by AI, offers a compelling opportunity for healthcare organizations to achieve their long-term goals. Providers could reduce medical errors and improve quality across the care continuum by enhancing medical imaging, mitigating risks, detecting patient deterioration faster, and putting actionable intelligence in the hands of clinicians. Read our eBook to learn more about Microsoft’s investments in AI.