{"id":897876,"date":"2021-07-31T06:41:00","date_gmt":"2021-07-31T13:41:00","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-blog-post&#038;p=897876"},"modified":"2022-11-17T07:16:38","modified_gmt":"2022-11-17T15:16:38","slug":"supporting-clinicians-diagnose-and-assess-the-severity-of-covid-19-using-artificial-intelligence-and-chest-x-rays","status":"publish","type":"msr-blog-post","link":"https:\/\/www.microsoft.com\/en-us\/research\/articles\/supporting-clinicians-diagnose-and-assess-the-severity-of-covid-19-using-artificial-intelligence-and-chest-x-rays\/","title":{"rendered":"Supporting clinicians to diagnose and assess COVID-19 severity using AI and chest X-rays"},"content":{"rendered":"\n<p>Overview<\/p>\n\n\n\n<p>Microsoft Research\u2019s&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/medical-image-analysis\/\">Project InnerEye team<\/a>&nbsp;in Cambridge (UK) worked with University Hospitals Birmingham NHS Foundation Trust to develop deep learning models that analyze anonymized chest X-rays and chest computed tomography (CT) scans to assist clinicians in determining disease severity, aid decision making, and improve our understanding of the disease. 
This collaborative project was part of Microsoft\u2019s <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/collaboration\/studies-in-pandemic-preparedness\/\">Studies In Pandemic Preparedness program<\/a>, one of our COVID-19 response efforts in which researchers at Microsoft worked with teams around the world to address the pandemic and better prepare for future ones, supported by our <a href=\"https:\/\/www.microsoft.com\/en-gb\/ai\/ai-for-health\">AI for Health team<\/a>.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"862\" height=\"1005\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/11\/UHBFig1.jpg\" alt=\"Example chest x-rays from each category\" class=\"wp-image-899115\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/11\/UHBFig1.jpg 862w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/11\/UHBFig1-257x300.jpg 257w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/11\/UHBFig1-768x895.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/11\/UHBFig1-154x180.jpg 154w\" sizes=\"auto, (max-width: 862px) 100vw, 862px\" \/><\/figure>\n\n\n\n<p>Chest X-rays have been a recommended procedure for patient triage and resource management in intensive care units (ICUs) throughout the COVID-19 pandemic. Machine learning efforts to augment this workflow have, however, long been hampered by deficiencies in reporting, model evaluation, and failure-mode analysis. A <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.nature.com\/articles\/s42256-021-00307-0\">recent study<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> showed that 415 COVID-19 medical imaging projects had deficiencies that limited their use outside the research lab. 
To address some of those shortcomings, we worked closely with our clinical partners to model radiological features with a human-interpretable class hierarchy that aligns with the radiological decision process. A DenseNet-121 backbone was first pre-trained with BYOL self-supervision on the public <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/www.nih.gov\/news-events\/news-releases\/nih-clinical-center-provides-one-largest-publicly-available-chest-x-ray-datasets-scientific-community\">NIH-CXR dataset<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> \u2013 <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/github.com\/microsoft\/InnerEye-DeepLearning\/blob\/main\/docs\/self_supervised_models.md\">technical details on how to do this with InnerEye-DeepLearning are here<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>. The model was then fine-tuned with cross-validation using a private COVID-19 training dataset, collected from four hospitals in the University Hospitals Birmingham (UHB) group during the first COVID-19 wave (March to June 2020). The UHB team used the Azure Machine Learning DICOM image labelling tool to significantly speed up a labelling study involving several clinical annotators, who were asked to classify 400 chest X-rays. The collected labels were then used to benchmark the model&#8217;s performance.<\/p>\n\n\n\n<p>The resulting model outperforms the clinicians across all defined sub-tasks with respect to the reference labels. 
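The cross-validated fine-tuning described above follows the standard k-fold pattern: the labelled data is split into k folds, and the model is repeatedly trained on k-1 folds and scored on the held-out fold. The sketch below is a generic, plain-Python illustration of that pattern only; the `train` and `evaluate` helpers are hypothetical stand-ins, not InnerEye code.

```python
# Generic k-fold cross-validation sketch (illustrative only, not the
# InnerEye fine-tuning pipeline). 'train' and 'evaluate' are hypothetical
# stand-ins for fitting a network on k-1 folds and scoring the held-out fold.
import random

random.seed(0)
# Toy dataset of (feature, label) pairs standing in for labelled X-rays.
data = [(random.random(), random.random() > 0.5) for _ in range(100)]

def train(samples):
    # Hypothetical 'model': just the majority label of the training folds.
    return sum(label for _, label in samples) >= len(samples) / 2

def evaluate(model, samples):
    # Accuracy of predicting the majority label on the held-out samples.
    return sum(label == model for _, label in samples) / len(samples)

def cross_validate(data, k=5):
    shuffled = random.sample(data, len(data))   # shuffled copy of the data
    folds = [shuffled[i::k] for i in range(k)]  # k disjoint folds
    scores = []
    for i in range(k):
        held_out = folds[i]
        train_set = [s for j, fold in enumerate(folds) if j != i for s in fold]
        scores.append(evaluate(train(train_set), held_out))
    return scores

scores = cross_validate(data)
print('per-fold accuracy:', [round(s, 2) for s in scores])
```

Averaging the per-fold scores gives a less optimistic performance estimate than a single train/test split, which matters for small clinical datasets like the one used here.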
To better understand the model\u2019s failure patterns, we employed an <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/machine-learning\/concept-error-analysis\">error analysis tool in Azure Machine Learning<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>. This tool trains a decision tree to identify partitions of the data on which the model underperforms, according to the attributes that are most predictive of mistakes. It is based on work by our MSR colleagues Besmira Nushi, Ece Kamar, and Eric Horvitz: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/towards-accountable-ai-hybrid-human-machine-analyses-for-characterizing-system-failure\/\">Towards Accountable AI: Hybrid Human-Machine Analyses for Characterizing System Failure &#8211; Microsoft Research<\/a>.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"624\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/11\/UHBFig4-1024x624.png\" alt=\"Error analysis\" class=\"wp-image-899118\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/11\/UHBFig4-1024x624.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/11\/UHBFig4-300x183.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/11\/UHBFig4-768x468.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/11\/UHBFig4-240x146.png 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/11\/UHBFig4.png 1124w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>This kind of error analysis is not often found in healthcare-related ML studies, but we believe it is crucial for providing transparency and actionable insights about a model\u2019s 
behavior. The analysis may also be useful after deployment if presented as reliability information alongside the model\u2019s predictions.<\/p>\n\n\n\n<p>You can read the full paper, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/hierarchical-analysis-of-visual-covid-19-features-from-chest-radiographs\/\">\u201c<em>Hierarchical Analysis of Visual COVID-19 Features from Chest Radiographs<\/em><\/a>\u201d, which was presented at ICML\u2019s 1<sup>st<\/sup> workshop on \u201c<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/sites.google.com\/view\/imlh2021\/home\">Interpretable Machine Learning in Healthcare<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>\u201d.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Overview Microsoft Research\u2019s&nbsp;Project InnerEye team&nbsp;in Cambridge (UK) worked with University Hospitals Birmingham NHS Foundation Trust to develop deep learning models to analyze anonymized chest X-Rays and chest computed tomography (CT) scans to assist clinicians in determining disease severity, aid decision making, and improve our understanding of the disease. 
This collaborative project in Microsoft\u2019s Studies In [&hellip;]<\/p>\n","protected":false},"author":32522,"featured_media":0,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-content-parent":740356,"msr_hide_image_in_river":0,"footnotes":""},"research-area":[],"msr-locale":[268875],"msr-post-option":[],"class_list":["post-897876","msr-blog-post","type-msr-blog-post","status-publish","hentry","msr-locale-en_us"],"msr_assoc_parent":{"id":740356,"type":"project"},"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-blog-post\/897876","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-blog-post"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-blog-post"}],"author":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/users\/32522"}],"version-history":[{"count":8,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-blog-post\/897876\/revisions"}],"predecessor-version":[{"id":899244,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-blog-post\/897876\/revisions\/899244"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=897876"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=897876"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=897876"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=897876"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}