{"id":934299,"date":"2023-04-11T15:56:31","date_gmt":"2023-04-11T22:56:31","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/"},"modified":"2023-05-11T10:21:14","modified_gmt":"2023-05-11T17:21:14","slug":"automatic-segmentation-of-prostate-cancer-metastases-in-psma-pet-ct-images-using-deep-neural-networks-with-weighted-batch-wise-dice-loss","status":"publish","type":"msr-research-item","link":"https:\/\/www.microsoft.com\/en-us\/research\/publication\/automatic-segmentation-of-prostate-cancer-metastases-in-psma-pet-ct-images-using-deep-neural-networks-with-weighted-batch-wise-dice-loss\/","title":{"rendered":"Automatic segmentation of prostate cancer metastases in PSMA PET\/CT images using deep neural networks with weighted batch-wise dice loss"},"content":{"rendered":"<div id=\"abssec0010\">\n<h3 id=\"sectitle0015\" class=\"u-h4 u-margin-m-top u-margin-xs-bottom\">Purpose<\/h3>\n<p id=\"abspara0010\">Automatic and accurate segmentation of lesions in images of metastatic castration-resistant\u00a0<a class=\"topic-link\" title=\"Learn more about prostate cancer from ScienceDirect's AI-generated Topic Pages\" href=\"https:\/\/www.sciencedirect.com\/topics\/medicine-and-dentistry\/prostate-cancer\">prostate cancer<\/a>\u00a0has the potential to enable personalized\u00a0<a class=\"topic-link\" title=\"Learn more about radiopharmaceutical from ScienceDirect's AI-generated Topic Pages\" href=\"https:\/\/www.sciencedirect.com\/topics\/medicine-and-dentistry\/radiopharmaceutical-agent\">radiopharmaceutical<\/a>\u00a0therapy and advanced treatment response monitoring. 
The aim of this study is to develop a convolutional neural network-based framework for fully-automated detection and segmentation of metastatic prostate cancer lesions in whole-body PET\/CT images.<\/p>\n<\/div>\n<div id=\"abssec0015\">\n<h3 id=\"sectitle0020\" class=\"u-h4 u-margin-m-top u-margin-xs-bottom\">Methods<\/h3>\n<p id=\"abspara0015\">525 whole-body PET\/CT images of patients with metastatic prostate cancer were available for the study, acquired with the [<sup>18<\/sup>F]DCFPyL\u00a0<a class=\"topic-link\" title=\"Learn more about radiotracer from ScienceDirect's AI-generated Topic Pages\" href=\"https:\/\/www.sciencedirect.com\/topics\/medicine-and-dentistry\/radioactive-tracer\">radiotracer<\/a>\u00a0that targets prostate-specific membrane antigen (PSMA). U-Net (<em>1<\/em>)-based\u00a0<a class=\"topic-link\" title=\"Learn more about convolutional neural networks from ScienceDirect's AI-generated Topic Pages\" href=\"https:\/\/www.sciencedirect.com\/topics\/computer-science\/convolutional-neural-network\">convolutional neural networks<\/a>\u00a0(CNNs) were trained to identify lesions on paired axial PET\/CT slices.\u00a0<a class=\"topic-link\" title=\"Learn more about Baseline models from ScienceDirect's AI-generated Topic Pages\" href=\"https:\/\/www.sciencedirect.com\/topics\/computer-science\/baseline-model\">Baseline models<\/a>\u00a0were trained using batch-wise dice loss, as well as the proposed weighted batch-wise dice loss (wDice), and the lesion detection performance was quantified, with a particular emphasis on lesion size, intensity, and location. We used 418 images for model training, 30 for model validation, and 77 for model testing. In addition, we allowed our model to take n\u00a0=\u00a00, 2, \u2026, 12 neighboring axial slices to examine how incorporating greater amounts of 3D context influences model performance. 
We selected the optimal number of neighboring axial slices that maximized the detection rate on the 30 validation images, and trained five\u00a0<a class=\"topic-link\" title=\"Learn more about neural networks from ScienceDirect's AI-generated Topic Pages\" href=\"https:\/\/www.sciencedirect.com\/topics\/computer-science\/neural-network\">neural networks<\/a>\u00a0with different architectures.<\/p>\n<\/div>\n<div id=\"abssec0020\">\n<h3 id=\"sectitle0025\" class=\"u-h4 u-margin-m-top u-margin-xs-bottom\">Results<\/h3>\n<p id=\"abspara0020\">Model performance was evaluated using the detection rate, Dice similarity coefficient (DSC) and sensitivity. We found that the proposed wDice loss significantly improved the lesion detection rate, lesion-wise DSC and lesion-wise sensitivity compared to the baseline, with corresponding average increases of 0.07 (p-value\u00a0=\u00a00.01), 0.03 (p-value\u00a0=\u00a00.01) and 0.04 (p-value\u00a0=\u00a00.01), respectively. The inclusion of the first two neighboring axial slices in the input likewise increased the detection rate by 0.17, lesion-wise DSC by 0.05, and lesion-wise mean sensitivity by 0.16. However, there was a minimal effect from including more distant neighboring slices. We ultimately chose to use a number of neighboring slices equal to 2 and the wDice loss function to train our final model. To evaluate the model&#8217;s performance, we trained three models using identical hyperparameters on three different data splits. The results showed that, on average, the model was able to detect 80% of all testing lesions, with a detection rate of 93% for lesions with maximum\u00a0<a class=\"topic-link\" title=\"Learn more about standardized uptake values from ScienceDirect's AI-generated Topic Pages\" href=\"https:\/\/www.sciencedirect.com\/topics\/medicine-and-dentistry\/standardized-uptake-value\">standardized uptake values<\/a>\u00a0(SUVmax) greater than 5.0. 
In addition, the average median lesion-wise DSC was 0.51 and 0.60 for all the lesions and lesions with SUVmax>5.0, respectively, on the testing set. Four additional neural networks with different architectures were trained, and they all yielded stronger performance in segmenting lesions whose SUVmax>5.0 compared to the rest of the lesions.<\/p>\n<\/div>\n<div id=\"abssec0025\">\n<h3 id=\"sectitle0030\" class=\"u-h4 u-margin-m-top u-margin-xs-bottom\">Conclusion<\/h3>\n<p id=\"abspara0025\">Our results demonstrate that prostate cancer metastases in PSMA PET\/CT images can be detected and segmented using CNNs. The\u00a0<a class=\"topic-link\" title=\"Learn more about segmentation performance from ScienceDirect's AI-generated Topic Pages\" href=\"https:\/\/www.sciencedirect.com\/topics\/computer-science\/segmentation-performance\">segmentation performance<\/a>\u00a0strongly depends on the intensity, size, and location of lesions, and can be improved by using specialized loss functions. Specifically, the models performed best in detecting lesions with SUVmax>5.0. Another challenge was to accurately segment lesions close to the bladder. Future work will focus on improving the detection of lesions with lower SUV values by designing custom loss functions that take into account the lesion intensity, using additional data augmentation techniques, and reducing the number of falsely detected lesions by developing methods to better separate signal from noise.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Purpose Automatic and accurate segmentation of lesions in images of metastatic castration-resistant\u00a0prostate cancer\u00a0has the potential to enable personalized\u00a0radiopharmaceutical\u00a0therapy and advanced treatment response monitoring. The aim of this study is to develop a convolutional neural network-based framework for fully-automated detection and segmentation of metastatic prostate cancer lesions in whole-body PET\/CT images. 
Methods 525 whole-body PET\/CT images [&hellip;]<\/p>\n","protected":false},"featured_media":0,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-author-ordering":null,"msr_publishername":"","msr_publisher_other":"","msr_booktitle":"","msr_chapter":"","msr_edition":"","msr_editors":"","msr_how_published":"","msr_isbn":"","msr_issue":"","msr_journal":"Computers in Biology and Medicine","msr_number":"","msr_organization":"","msr_pages_string":"","msr_page_range_start":"","msr_page_range_end":"","msr_series":"","msr_volume":"","msr_copyright":"","msr_conference_name":"","msr_doi":"","msr_arxiv_id":"","msr_s2_paper_id":"","msr_mag_id":"","msr_pubmed_id":"","msr_other_authors":"","msr_other_contributors":"","msr_speaker":"","msr_award":"","msr_affiliation":"","msr_institution":"","msr_host":"","msr_version":"","msr_duration":"","msr_original_fields_of_study":"","msr_release_tracker_id":"","msr_s2_match_type":"","msr_citation_count_updated":"","msr_published_date":"2023-4-4","msr_highlight_text":"","msr_notes":"","msr_longbiography":"","msr_publicationurl":"","msr_external_url":"","msr_secondary_video_url":"","msr_conference_url":"","msr_journal_url":"","msr_s2_pdf_url":"","msr_year":0,"msr_citation_count":0,"msr_influential_citations":0,"msr_reference_count":0,"msr_s2_match_confidence":0,"msr_microsoftintellectualproperty":true,"msr_s2_open_access":false,"msr_s2_author_ids":[],"msr_pub_ids":[],"msr_hide_image_in_river":0,"footnotes":""},"msr-research-highlight":[],"research-area":[13556,13553],"msr-publication-type":[193715],"msr-publisher":[],"msr-focus-area":[],"msr-locale":[268875],"msr-post-option":[],"msr-field-of-study":[],"msr-conference":[],"msr-journal":[],"msr-impact-theme":[],"msr-pillar":[],"class_list":["post-934299","msr-research-item","type-msr-research-item","status-publish","hentry","msr-research-area-artificial-intelligence","msr-
research-area-medical-health-genomics","msr-locale-en_us"],"msr_publishername":"","msr_edition":"","msr_affiliation":"","msr_published_date":"2023-4-4","msr_host":"","msr_duration":"","msr_version":"","msr_speaker":"","msr_other_contributors":"","msr_booktitle":"","msr_pages_string":"","msr_chapter":"","msr_isbn":"","msr_journal":"Computers in Biology and Medicine","msr_volume":"","msr_number":"","msr_editors":"","msr_series":"","msr_issue":"","msr_organization":"","msr_how_published":"","msr_notes":"","msr_highlight_text":"","msr_release_tracker_id":"","msr_original_fields_of_study":"","msr_download_urls":"","msr_external_url":"","msr_secondary_video_url":"","msr_longbiography":"","msr_microsoftintellectualproperty":1,"msr_main_download":"","msr_publicationurl":"","msr_doi":"","msr_publication_uploader":[{"type":"url","viewUrl":"false","id":"false","title":"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0010482523003475","label_id":"243109","label":0},{"type":"doi","viewUrl":"false","id":"false","title":"https:\/\/doi.org\/10.1016\/j.compbiomed.2023.106882","label_id":"243106","label":0}],"msr_related_uploader":"","msr_citation_count":0,"msr_citation_count_updated":"","msr_s2_paper_id":"","msr_influential_citations":0,"msr_reference_count":0,"msr_arxiv_id":"","msr_s2_author_ids":[],"msr_s2_open_access":false,"msr_s2_pdf_url":null,"msr_attachments":[],"msr-author-ordering":[{"type":"user_nicename","value":"Yixi Xu","user_id":39775,"rest_url":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/microsoft-research\/v1\/researchers?person=Yixi Xu"},{"type":"text","value":"Ivan Klyuzhin","user_id":0,"rest_url":false},{"type":"text","value":"Sara Harsini","user_id":0,"rest_url":false},{"type":"user_nicename","value":"Anthony Ortiz","user_id":39715,"rest_url":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/microsoft-research\/v1\/researchers?person=Anthony Ortiz"},{"type":"text","value":"Shun 
Zhang","user_id":0,"rest_url":false},{"type":"text","value":"Fran\u00e7ois B\u00e9nard","user_id":0,"rest_url":false},{"type":"user_nicename","value":"Rahul Dodhia","user_id":41401,"rest_url":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/microsoft-research\/v1\/researchers?person=Rahul Dodhia"},{"type":"text","value":"Carlos F. Uribe","user_id":0,"rest_url":false},{"type":"text","value":"Arman Rahmim","user_id":0,"rest_url":false},{"type":"user_nicename","value":"Juan M. Lavista Ferres","user_id":39552,"rest_url":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/microsoft-research\/v1\/researchers?person=Juan M. Lavista Ferres"}],"msr_impact_theme":[],"msr_research_lab":[],"msr_event":[],"msr_group":[696544],"msr_project":[1017768,778522],"publication":[],"video":[],"msr-tool":[],"msr_publication_type":"article","related_content":{"projects":[{"ID":1017768,"post_title":"Expand Opportunity - AI for Good","post_name":"expand-opportunity-ai-for-good","post_type":"msr-project","post_date":"2024-04-02 09:04:26","post_modified":"2024-05-10 11:18:06","post_status":"publish","permalink":"https:\/\/www.microsoft.com\/en-us\/research\/project\/expand-opportunity-ai-for-good\/","post_excerpt":"Our commitment starts with ensuring everyone has the ability to thrive in a digital, AI-enabled economy, and extends to empowering other organizations to address society\u2019s biggest challenges.","_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/1017768"}]}},{"ID":778522,"post_title":"AI for Health","post_name":"ai-for-health","post_type":"msr-project","post_date":"2023-05-16 14:26:13","post_modified":"2024-10-14 15:42:21","post_status":"publish","permalink":"https:\/\/www.microsoft.com\/en-us\/research\/project\/ai-for-health\/","post_excerpt":"AI for Health is a philanthropic program launched by Microsoft, which aims to support nonprofits, researchers, and organizations working on global health challenges. 
The program provides access to artificial intelligence (AI) technology and expertise in three main areas: population health, imaging analytics, genomics &amp; proteomics.","_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/778522"}]}}]},"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-research-item\/934299","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-research-item"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-research-item"}],"version-history":[{"count":1,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-research-item\/934299\/revisions"}],"predecessor-version":[{"id":934302,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-research-item\/934299\/revisions\/934302"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=934299"}],"wp:term":[{"taxonomy":"msr-research-highlight","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-research-highlight?post=934299"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=934299"},{"taxonomy":"msr-publication-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-publication-type?post=934299"},{"taxonomy":"msr-publisher","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-publisher?post=934299"},{"taxonomy":"msr-focus-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-focus-area?post=934299"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=934299"},{"taxonomy":"msr-post-o
ption","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=934299"},{"taxonomy":"msr-field-of-study","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-field-of-study?post=934299"},{"taxonomy":"msr-conference","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-conference?post=934299"},{"taxonomy":"msr-journal","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-journal?post=934299"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=934299"},{"taxonomy":"msr-pillar","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-pillar?post=934299"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}