AI for Humanitarian Action projects
Learn about the nonprofits and humanitarian organizations working with the support of our grants program.
Programs making an impact
Using conversational AI to support victims of gender-based violence
Seguro Project has partnered with Paz para las Mujeres and Microsoft to build a chatbot-based solution that provides support and services for women facing violence.
Learn about Seguro Project
The global pandemic has compounded existing concerns around domestic abuse, leading to a significant increase in cases. Extended periods of lockdown under stressful conditions have highlighted the fact that not everyone can find safety at home. In 2021, the Governor of Puerto Rico declared a state of emergency due to gender-based violence.
Coordinadora Paz para las Mujeres and Seguro Project created a chatbot to provide a confidential and interactive interface for accessing support and guidance in a time of need. It helps victims of gender-based violence in Puerto Rico identify local resources and information relevant to their individual situation.
Partnering to tackle the shadow pandemic
Coordinadora Paz para las Mujeres works with a coalition of nonprofits in Puerto Rico to provide services and support to victims and those at risk of gender-based violence. The chatbot solution, built in partnership with Seguro Project and Microsoft, allows them to scale up their services across the island.
How it works
Drawing on local expertise from Coordinadora Paz para las Mujeres, the Power Virtual Agents chatbot template was adapted to cover local resources and services across Puerto Rico and deployed in Spanish on their website. Available 24/7, the chatbot reduces the workload on support staff, allowing them to assist more people in need, and empowers victims of gender-based violence to take back control.
Empowering women at risk of domestic abuse to take back control
Images are used courtesy of Seguro Project and Coordinadora Paz para las Mujeres.
Improving access to critical medical records at scale
Collecting public health records from millions of people requires processing handwritten paper records at scale. During a health crisis, quick access to written records is critical. The Indian Institute of Technology Delhi and Gram Vaani partnered to create an AI-powered system that quickly recognizes handwritten characters, enabling efficient access to crucial patient contact information and health histories.
Data collection for large-scale development programs through digital devices may not always be feasible. The challenge was to collect paper-based forms from an actual field project, scan, and label them, and use this dataset to train an Optical Character Recognition (OCR) model to digitize more paper forms.
In partnership with Gram Vaani, IIT developed and deployed Smartforms, a tool for automated digitization of paper forms through OCR and Optical Mark Recognition (OMR) techniques.
The team used Smartforms to identify and collect phone numbers from vaccination cards and then push almost 4 million voice messages for health and nutrition awareness to Self Help Group (SHG) members in two districts of India. The project is now being scaled to four more districts. Messaging was used during COVID-19 to send updates to people in the SHG program. The system can be used and scaled for many more uses across civic and health initiatives.
How it works
Community coordinators are guided by their social enterprise partner, Gram Vaani, on how to fill out the paper forms. Forms are completed with the phone numbers of individual SHG members and health histories for pregnant women and children under two years of age. Forms are collected and then scanned in bulk to be processed through the OCR/OMR system so that data can be indexed, tagged, and used efficiently.
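The Optical Mark Recognition part of this pipeline can be illustrated with a minimal sketch: given a binarized scan, a checkbox counts as filled when enough of its region is inked. All names, regions, and the threshold below are hypothetical illustrations, not the Smartforms implementation.

```python
# Minimal OMR sketch: decide which checkboxes on a binarized scan
# (1 = dark pixel) are filled by measuring ink density per box region.
# Field names and the 0.3 threshold are illustrative only.

def fill_ratio(image, top, left, height, width):
    """Fraction of dark pixels inside one checkbox region."""
    dark = sum(image[r][c]
               for r in range(top, top + height)
               for c in range(left, left + width))
    return dark / (height * width)

def read_marks(image, boxes, threshold=0.3):
    """Map each labeled box region to True if it appears filled."""
    return {label: fill_ratio(image, *region) >= threshold
            for label, region in boxes.items()}

# A tiny 4x8 "scan": the left box is inked, the right one blank.
scan = [
    [1, 1, 0, 0, 0, 0, 0, 0],
    [1, 1, 0, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
]
boxes = {"pregnant": (0, 0, 4, 4), "child_under_2": (0, 4, 4, 4)}
print(read_marks(scan, boxes))  # {'pregnant': True, 'child_under_2': False}
```

A production system would first deskew the scan and locate box regions from a form template before this density check.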
Making medical information accessible and understandable improves healthcare access
Advanced evidence analysis platform
UNITAD partnered with Microsoft to develop a system using Microsoft AI technology to rapidly scour files from devices retrieved from Islamic State combatants to identify potential evidence of human rights crimes.
Learn more about UNITAD
Investigation teams at UNITAD collect large amounts of evidence, such as images and text, from recovered devices. Typically, content moderators manually examine each file and tag it based on their interpretation of the content. This process was time-consuming, a particular challenge for UNITAD given the large number of digital devices and other evidence collected by the team on a regular basis.
UNITAD built a platform for analyzing evidence at scale, allowing for secure upload and preparation of raw data for processing. Azure Cognitive Services then quickly “look inside” the recovered files to identify relevant information and label the data with insights, enabling the investigative teams to process the evidence more quickly and effectively.
Investigators can now use the e-discovery tool to search and collate the files needed for their investigation. They can also use the Video Indexer portal to aid in the search and discovery of video files. For example, after video evidence is analyzed, the user can get searchable insights such as known places or objects, on-screen text, and a transcript of the audio.
AI scours video archives for evidence of human-rights abuses
The proliferation of phones means that nearly everything is filmed, including acts of war. Videos are found online or on recovered devices, in nearly endless supply. UN teams and human rights organizations need evidence from videos to document human rights abuses. But scouring video frames for actions, faces, and places is time-consuming—and can be re-traumatizing for the researchers. Multiple projects are now tackling the issue to bring evidence to light more quickly and accurately.
How it works
A Logic Apps workflow examines the raw data and triggers different Azure Cognitive Services and machine learning procedures such as Video Indexer, Computer Vision, Translator Text, and Text Analytics. The outputs of the AI services are exported to UNITAD’s systems, where investigators can access richer content such as image descriptions, locations, landmarks, and transcripts.
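The routing logic described above can be sketched as a simple dispatcher: each raw file's type determines which analysis step runs. The handler functions here are hypothetical stand-ins for calls to Video Indexer, Computer Vision, and Text Analytics, not UNITAD's actual Logic Apps definition.

```python
# Illustrative file-routing sketch: inspect each file's extension and
# dispatch it to the matching analysis handler. Handlers are toy
# placeholders for the real Azure service calls.
import os

def index_video(path):    return {"file": path, "insights": "video"}
def describe_image(path): return {"file": path, "insights": "image"}
def analyze_text(path):   return {"file": path, "insights": "text"}

ROUTES = {
    ".mp4": index_video, ".mov": index_video,
    ".jpg": describe_image, ".png": describe_image,
    ".txt": analyze_text, ".docx": analyze_text,
}

def process(files):
    """Run each file through the handler for its type; skip unknowns."""
    results = []
    for path in files:
        handler = ROUTES.get(os.path.splitext(path)[1].lower())
        if handler:
            results.append(handler(path))
    return results

print(process(["clip.mp4", "photo.jpg", "notes.txt", "misc.bin"]))
```

In the real system, each handler's output would be merged back into the evidence index so results remain searchable alongside the original files.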
Using AI to help reunite displaced families
Turkish Red Crescent partnered with Microsoft and their partner Mart Software to build a content management system to reunite families affected by the Syrian refugee crisis.
Learn about Turkish Red Crescent
Previous techniques to reunite family members separated during the Syrian conflict were hindered by incomplete or unknown information, resulting in a low rate of matches for missing people.
A facial recognition model was developed to match a photograph of a separated person with a recently registered profile in the Turkish Red Crescent system to provide a list of potential matches to the caseworker as additional input to the reunification process.
The Azure-based tool digitizes the Restoring Family Links (RFL) program and arms it with AI capabilities. The platform offers a chatbot in Turkish, Arabic, Farsi, and English, in addition to image matching through facial recognition. Combined with geographic information system (GIS) reporting, this greatly improves search efficiency and speed.
A gold standard cooperative model
The cooperation between TRC, Microsoft, and partner Mart Software is an inspiring example of capacity building, enhancing efficiency, and scaling AI-enabled solutions for humanitarian action. It also demonstrates how swiftly these steps can be taken, with the project completed in less than seven months.
How it works
The project used several hundred photos of volunteers to train a model that produces a compact representation of each face. For a photograph of a newly registered person, the system computes the Euclidean distance between that representation and those already on file in order to rank the top potential matches.
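The ranking step can be sketched with toy vectors: stored profile embeddings are sorted by Euclidean distance to the query embedding, smallest first. The 3-dimensional vectors and case identifiers below are illustrative only; real face embeddings typically have 128 or more dimensions, and this is not TRC's model.

```python
# Minimal sketch of embedding-based match ranking: smaller Euclidean
# distance between face embeddings means a more likely match.
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def rank_matches(query, profiles, top_k=3):
    """Return the top_k profile ids closest to the query embedding."""
    scored = sorted(profiles.items(), key=lambda kv: euclidean(query, kv[1]))
    return [pid for pid, _ in scored[:top_k]]

# Hypothetical stored embeddings and one query photograph's embedding.
profiles = {
    "case-001": [0.90, 0.10, 0.30],
    "case-002": [0.20, 0.80, 0.50],
    "case-003": [0.85, 0.15, 0.25],
}
query = [0.88, 0.12, 0.28]
print(rank_matches(query, profiles, top_k=2))  # ['case-001', 'case-003']
```

The ranked list is a shortlist for the caseworker, not an automatic decision: a human still confirms any match before families are contacted.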
Images of Turkish Red Crescent projects are used courtesy of TRC.
Acoustic recognition of illegal weapons
Benetech partnered with Microsoft to develop advanced sound-based systems for identifying the suspected use of internationally banned cluster bomb weapons, based on recordings of explosions in warzones.
Learn about Benetech
The use of cluster bombs is prohibited under international law due to the indiscriminate harm they cause to civilians. Their use can be difficult to confirm, despite often overwhelming anecdotal accounts alleging their presence.
Cluster bombs produce a distinctive popping sound when they explode. Benetech developed an acoustic-based weapons classification system to identify the likelihood of a cluster bomb being used, based on a recording of a given explosion.
Digital evidence to build a more complete picture
The United Nations’ International, Impartial and Independent Mechanism (IIIM) is tasked with collecting evidence and building international war crime cases. They will use acoustic analysis as part of Benetech’s JusticeAI video analytics platform. Structured digital evidence alongside eyewitness testimony can enable the IIIM to build a more complete picture of alleged uses of weapons prohibited under international law.
How it works
The project uses recordings of cluster bomb explosions to generate spectrograms, visual representations of audio that show noise frequencies on a graph. The spectrograms are used to train a machine learning algorithm to identify sound frequency patterns. By applying the model to recordings of explosions, investigators can more easily identify and document use of banned cluster bombs.
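The spectrogram step can be illustrated in miniature: slice the audio into frames and compute each frame's magnitude spectrum. The sketch below uses a naive discrete Fourier transform on a synthetic tone; production systems use an FFT with windowing and log scaling, and this is not Benetech's pipeline.

```python
# Simplified spectrogram sketch: frame the signal, then take each
# frame's magnitude spectrum with a naive DFT (O(n^2), fine for a demo).
import cmath
import math

def frame_spectrum(frame):
    """Magnitude of each DFT bin for one frame of samples."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def spectrogram(samples, frame_len=64, hop=32):
    """List of per-frame spectra: rows = time, columns = frequency."""
    return [frame_spectrum(samples[i:i + frame_len])
            for i in range(0, len(samples) - frame_len + 1, hop)]

# Toy signal: a pure 1000 Hz tone at an 8000 Hz sample rate, so its
# energy lands in DFT bin 1000 * 64 / 8000 = 8.
rate, freq = 8000, 1000
tone = [math.sin(2 * math.pi * freq * t / rate) for t in range(256)]
spec = spectrogram(tone)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
print(peak_bin)  # 8
```

It is these time-frequency images, rather than the raw waveforms, that the classifier is trained on, since the "popping" signature of a cluster munition shows up as a recognizable pattern across frequency bins over time.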
Aerial mapping for humanitarian response
Humanitarian OpenStreetMap Team (HOT) partnered with Microsoft and Bing to improve the mapping of areas vulnerable to natural disaster and poverty. By working with communities to document at-risk areas, HOT enables relief programs to respond more quickly and effectively after disasters.
Learn about OpenStreetMap
In the last ten years, 2 billion people were affected by disasters. In 2017, 18 million people were displaced due to weather-related disasters alone. Many of these disaster-affected areas are literally “missing” from existing maps, making it difficult for first responders to prepare and deliver relief programs.
The Humanitarian OpenStreetMap Team is using machine learning resources from Microsoft to generate datasets of buildings in Uganda and Tanzania to improve the mapping process for project managers and cartographers with AI assistance tools. By identifying buildings at a faster pace than manual mapping, the solution helps add underserved communities to the map.
Adding millions of buildings to critical maps
Bing and HOT were able to identify 7 million building footprints in Uganda and 11 million in Tanzania, using computer vision and “polygonization” to rapidly cover unmapped regions. Working with the footprints found by AI, local volunteer mappers were able to import 2,500–3,000 buildings per day, compared to a previous average of 1,500.
Mapping areas vulnerable to disaster
Volunteers are critical to the Missing Maps project. Starting with a satellite image, they can trace buildings and add levels of detail. Their manual work has greater impact when paired with AI-powered tools. When an earthquake struck Nepal in 2015, 600,000 homes were destroyed. HOT quickly responded with 4,000 volunteers to help map the affected area.
How AI-powered mapping works
Bing Maps trained a computer vision model using labeled data for Uganda and Tanzania. The model was then used to run inference on Bing Maps aerial imagery in the two countries. A two-step process with semantic segmentation followed by polygonization resulted in 18 million building footprints – 7 million in Uganda and 11 million in Tanzania.
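The segmentation-to-footprint step can be sketched in miniature: the segmentation model outputs a binary mask of building pixels, and polygonization groups connected pixels into one shape per building. The toy version below emits bounding rectangles instead of traced outlines to stay short; it is an illustration of the idea, not the Bing Maps pipeline.

```python
# Toy polygonization sketch: flood-fill connected building pixels in a
# segmentation mask (1 = building) and emit one bounding box per
# component as (min_row, min_col, max_row, max_col).
from collections import deque

def footprints(mask):
    rows, cols = len(mask), len(mask[0])
    seen, boxes = set(), []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and (r, c) not in seen:
                # Breadth-first flood fill of one connected component.
                queue, comp = deque([(r, c)]), []
                seen.add((r, c))
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            queue.append((ny, nx))
                ys = [p[0] for p in comp]
                xs = [p[1] for p in comp]
                boxes.append((min(ys), min(xs), max(ys), max(xs)))
    return boxes

# Two "buildings" in a 3x5 mask.
mask = [
    [1, 1, 0, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 0, 1, 1],
]
print(footprints(mask))  # [(0, 0, 1, 1), (1, 3, 2, 4)]
```

In the real workflow the resulting polygons are offered to volunteer mappers for review and import, not committed to OpenStreetMap automatically.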
Improving information for peacebuilding
The Carter Center tracks data in high-conflict areas to better inform peacebuilding efforts. In partnership with Microsoft, they created a machine learning tool to automate the analysis of complex conflict data.
Learn about the Carter Center
The ongoing violence in Syria continues to displace millions. To aid in understanding and reducing the conflict, The Carter Center compiles data on conflict incidents, largely from the ACLED database, and publishes weekly reports used by the UN, diplomatic actors, and conflict responders. However, the datasets require manual, time-consuming data cleaning to classify all the information.
Microsoft has partnered with The Carter Center to build a machine learning text analysis tool that will aid in automating the weekly repetitive data cleaning to more efficiently analyze and share information. By minimizing the human supervision cost, conflict researchers can dedicate more time to understanding the trends and key insights needed to support their peacebuilding and humanitarian efforts.
Identifying conflict more quickly
By classifying incident types with a high degree of accuracy and timeliness, the text analysis tool reduces the time spent on manual data transformations by 80% and can identify emerging trends from the context hidden in text data. This will save The Carter Center thousands of hours so they can focus on more crucial tasks toward peacebuilding and resolving conflict.
Accelerating Southern Syria data analysis
Microsoft natural language processing (NLP) simplifies cleaning the Syria Team’s data so they can spend more time analyzing it. With the time saved, their data analyst was able to quickly visualize two key dynamics in southern Syria, attacks against former rebels and estimated foreign presence in Dara’a Governorate, to support their work.
How it works
Using the initial datasets from The Carter Center, the data is preprocessed and fed into a pretrained BERT-Base Uncased model fine-tuned on several incident types. The output is then categorized with a multi-label text classifier and sent back to The Carter Center for further conflict data analysis.
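The multi-label decision at the end of that pipeline can be sketched on its own: the fine-tuned model emits one logit per incident type, and a sigmoid plus threshold turns the logits into a set of labels (an incident can carry several types at once). The label names, logit values, and 0.5 cutoff below are illustrative, not The Carter Center's configuration.

```python
# Sketch of multi-label classification from per-type logits: apply a
# sigmoid independently per label and keep every label whose
# probability clears the threshold.
import math

INCIDENT_TYPES = ["airstrike", "shelling", "arrest", "protest"]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def to_labels(logits, threshold=0.5):
    """Return all incident types whose probability >= threshold."""
    return [label for label, logit in zip(INCIDENT_TYPES, logits)
            if sigmoid(logit) >= threshold]

# Example logits for one incident description.
print(to_labels([2.3, 1.1, -3.0, -0.4]))  # ['airstrike', 'shelling']
```

This per-label sigmoid (rather than a softmax over all types) is what makes the classifier multi-label: each type is an independent yes/no decision.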
Streamlining casework for immigration
Centro Legal supports a large caseload of vulnerable families, youth, refugees, and immigrants. They worked with Microsoft to build a case management platform to scan, translate, and sort paper forms, freeing up staff to focus on clients, instead of paperwork.
Learn about Centro Legal
Previously, an attorney paired with a detained immigrant conducted an initial intake and entered data by hand. Due to the high volume of cases and limited staff time, only a small portion of collected information was entered electronically, delaying processing time and limiting operational intelligence.
A customized Optical Character Recognition (OCR) technology and case management platform was built that scans paper intake forms and sorts standardized fields of data into databases remotely for legal service providers in California.
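The sorting step after OCR can be illustrated with a minimal sketch: extracted "label: value" lines from an intake form are mapped onto standardized database fields. The form labels, field names, and sample values below are hypothetical, not Centro Legal's schema.

```python
# Sketch of the field-sorting step: map OCR'd "label: value" lines from
# one intake form into a structured, database-ready record. Labels not
# in the mapping are ignored rather than guessed.
FIELD_MAP = {
    "nombre": "name",
    "fecha de nacimiento": "date_of_birth",
    "centro de detencion": "detention_facility",
}

def parse_form(ocr_lines):
    """Build a record from recognized standardized fields."""
    record = {}
    for line in ocr_lines:
        if ":" not in line:
            continue
        label, value = line.split(":", 1)
        field = FIELD_MAP.get(label.strip().lower())
        if field:
            record[field] = value.strip()
    return record

ocr = ["Nombre: Ana Morales",
       "Fecha de Nacimiento: 1990-04-12",
       "Centro de Detencion: Adelanto",
       "(ilegible)"]
print(parse_form(ocr))
```

Dropping unrecognized labels instead of guessing keeps bad OCR output from silently corrupting the database; those forms can be flagged for human review.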
Faster analysis for time-critical legal cases
Data entry time has been reduced from two days per case to nearly instantaneous upload, enabling staff to focus on more valuable analytics roles. By digitizing 90% more of the information from intake forms, Centro Legal has been able to analyze cases in greater detail and identify trends not previously observable.
From data to better tracking and analytics
By capturing more data electronically from intake forms, Centro Legal has improved operational intelligence around tracking the whereabouts of detained clients, identifying hotspots of immigration enforcement, and detecting anomalies at detention facilities.
Preparing communities for natural disasters
SEEDS India worked with Microsoft Philanthropies to create a system to predict risks of natural disaster using local conditions, allowing vulnerable communities to be better prepared and develop custom response plans before a disaster strikes.
Learn about SEEDS India
Current disaster vulnerability assessments and hazard warnings for communities in India offer vague predictions with high variability because they do not consider individual, community, and regional attributes in their assessment.
By incorporating attributes unique to each community, such as building materials and topography, SEEDS India has created more accurate community-specific models to predict vulnerability to natural disasters, allowing communities to be better prepared in advance and develop custom response plans.
Scalable risk models to support communities
A trained machine learning model is used to score risk, giving communities an understanding of the risks they face and the preemptive actions (such as raising sanitation structures or creating evacuation plans) that can mitigate them. The model is currently being expanded to other regions of Puducherry and Chennai, which were recently impacted by Cyclone Nivar.
Applying risk assessment to new locations
In November 2020, this risk assessment model was deployed to an entirely different location from where it was trained, to identify highly vulnerable structures in the path of Cyclone Nivar, which made landfall in Southern India. Volunteers used this data to make targeted contact with local community groups in anticipation of a destructive storm.
How it works
Community-specific models are built using large volumes of aerial imagery to identify high-risk structures. Each house, school, and building is labeled with specific and unique attributes, such as construction materials estimated from data and images of similar communities.
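The idea of scoring risk from labeled building attributes can be sketched with a toy weighted sum: each attribute present contributes a weight, and buildings are ranked by total score. The attributes, weights, and additive formula are hypothetical illustrations; the SEEDS model is a trained machine learning model, not a fixed rule set.

```python
# Illustrative attribute-based risk scoring: each labeled attribute
# contributes a fixed weight, and buildings are ranked by total score.
# Attribute names and weights are invented for the sketch.
WEIGHTS = {
    "thatched_roof": 0.4,   # construction material
    "low_elevation": 0.35,  # topography / flood exposure
    "near_coast": 0.25,     # cyclone surge exposure
}

def risk_score(attributes):
    """Sum the weights of the attributes present on this building."""
    return sum(w for name, w in WEIGHTS.items() if attributes.get(name))

buildings = {
    "house-12": {"thatched_roof": True, "low_elevation": True},
    "school-3": {"near_coast": True},
}
ranked = sorted(buildings, key=lambda b: risk_score(buildings[b]),
                reverse=True)
print(ranked)  # ['house-12', 'school-3']
```

Ranked scores like these are what let volunteers prioritize which structures and community groups to contact first before a storm makes landfall.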