Searching the Human Body
In recent years, we have seen great advances in medical technology that have improved the accuracy with which medical professionals can diagnose and treat a myriad of ailments. With the ability to see inside a patient's body, using technologies such as magnetic resonance (MR), computed tomography (CT), and positron emission tomography (PET), doctors now have a wealth of image information to help them address patients' conditions. However, while the technology for image acquisition has improved enormously, deciphering the information buried in the pixels remains a time-consuming process that depends on the individual clinician's experience and skill.
Antonio Criminisi and his team at Microsoft Research Cambridge are addressing these challenges and opportunities in clinical routine. Through a project entitled InnerEye, Criminisi is combining medical expertise and machine-learning theory to design a system that makes computer-aided diagnoses from medical imagery. By working directly with clinicians, such as those at Addenbrooke's Hospital in Cambridge, the team has been able to design the system for practical use from the ground up.
Given a CT or MR scan, a clinician's key challenge is the identification of different body organs. Doctors typically view two-dimensional slices through the scan data, displayed in grey levels, and manually inspect the individual organs. By applying state-of-the-art machine-learning techniques, it is now possible for the computer to automatically identify dozens of key body organs, along with their size and location. In effect, the system extracts semantic information about the presence and position of organs from the image pixels. This information can be stored in conventional text files and is searchable, so a doctor can quickly retrieve, for example, all patients with an enlarged spleen, or all patients with kidney stones.
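The idea of a searchable, text-based semantic index can be illustrated with a minimal sketch. The record layout below (`patient_id`, `organ`, `volume_ml`, `findings`) and the file name are illustrative assumptions, not the actual schema used by InnerEye:

```python
import json

# Hypothetical organ-level findings extracted from scans. The fields here
# (patient_id, organ, volume_ml, findings) are assumptions for illustration.
RECORDS = [
    {"patient_id": "P001", "organ": "spleen", "volume_ml": 410, "findings": ["enlarged"]},
    {"patient_id": "P002", "organ": "kidney_left", "volume_ml": 150, "findings": ["stone"]},
    {"patient_id": "P003", "organ": "spleen", "volume_ml": 220, "findings": []},
]

def save_index(records, path):
    # Persist the semantic index as plain text (one JSON object per line),
    # so it can be searched without re-reading any image data.
    with open(path, "w") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")

def query(path, organ=None, finding=None):
    """Return patient IDs whose stored records match the given criteria."""
    hits = []
    with open(path) as f:
        for line in f:
            rec = json.loads(line)
            if organ and rec["organ"] != organ:
                continue
            if finding and finding not in rec["findings"]:
                continue
            hits.append(rec["patient_id"])
    return hits

save_index(RECORDS, "organ_index.jsonl")
print(query("organ_index.jsonl", organ="spleen", finding="enlarged"))  # ['P001']
```

Because the index is ordinary text, queries like "all patients with an enlarged spleen" reduce to simple filters over small records rather than any processing of the original pixel data.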
Criminisi’s team is also developing an efficient algorithm for the automatic detection and delineation of brain tumors and the identification of their constituent regions, such as actively proliferating cells or necrotic tissue. This capability helps clinicians better diagnose the tumor type and determine the best course of treatment.
One of the key algorithms used in this work is called decision forests. A decision forest is an ensemble of decision trees that “learns” discriminative features of human organs or tumors from vast amounts of example data. Interestingly, this technique is also the engine of the Microsoft Kinect skeletal-tracking capability, exemplifying how fundamental machine-learning research has uses in both medicine and entertainment. Many of the technologies in the InnerEye project have now been integrated into the Microsoft Amalga healthcare platform, enabling doctors to work with patients more effectively; by automating some of the most difficult tasks, the system gives doctors more time to focus on patient treatment.
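The core idea of a decision forest, bootstrap-sampled trees that vote on a label, can be sketched in a few lines. This is a drastically simplified illustration, not the InnerEye implementation: the trees are depth-one stumps, and the two-feature voxel data and class names are invented for the example:

```python
import random
from collections import Counter

# Toy voxel descriptors (intensity, gradient) with labels. Purely illustrative.
DATA = [((0.90, 0.10), "organ"), ((0.80, 0.20), "organ"), ((0.85, 0.15), "organ"),
        ((0.20, 0.70), "background"), ((0.10, 0.80), "background"), ((0.15, 0.90), "background")]

def train_stump(sample):
    """Pick the single feature/threshold split that best separates the sample."""
    best = None
    for feat in range(2):
        for (x, _) in sample:
            thr = x[feat]
            left = [lbl for (v, lbl) in sample if v[feat] <= thr]
            right = [lbl for (v, lbl) in sample if v[feat] > thr]
            # Score a split by how pure its two sides are (sum of majority counts).
            score = sum(Counter(side).most_common(1)[0][1] for side in (left, right) if side)
            if best is None or score > best[0]:
                maj = lambda side: Counter(side).most_common(1)[0][0] if side else None
                best = (score, feat, thr, maj(left), maj(right))
    return best[1:]

def train_forest(data, n_trees=25, seed=0):
    # Each tree is trained on a bootstrap resample, so the trees differ.
    rng = random.Random(seed)
    return [train_stump([rng.choice(data) for _ in data]) for _ in range(n_trees)]

def predict(forest, x):
    # Each tree votes; the forest returns the majority label.
    votes = Counter()
    for feat, thr, left_lbl, right_lbl in forest:
        lbl = left_lbl if x[feat] <= thr else right_lbl
        if lbl is not None:
            votes[lbl] += 1
    return votes.most_common(1)[0][0]

forest = train_forest(DATA)
print(predict(forest, (0.88, 0.12)))
print(predict(forest, (0.12, 0.85)))
```

Real decision forests grow much deeper trees over far richer image features, but the same two ingredients, randomized training sets and majority voting, are what make the ensemble robust to noise in individual trees.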