The need

Collecting imagery to train a custom model during disaster recovery or search and rescue operations is often costly and risky, and deploying custom vision AI models on edge devices without advanced processing resources has historically been challenging.

The idea

AirSim lets us create a 3D version of the real environment in which to gather training data for a custom vision model. A simulated drone takes pictures, and the Custom Vision service then trains a custom model to find objects or people in those images.

The solution

In one test, a simulated drone took photos of stuffed animals on a virtual soccer field. The Custom Vision service then trained a model to identify each animal, and a physical drone running that model sent an alert whenever it found one.

Technical details for AirSim - Drones

For this search and rescue scenario, we created a 3D environment in AirSim that simulates the soccer field on the Microsoft campus, and placed stuffed animals on the field.

We then wrote a Python script to fly the drone around the simulated environment and capture many pictures of the animals, as sketched below. We pushed those images into the Custom Vision service and trained a model to identify each type of animal on the field.
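The following sketch shows what such a script might look like, assuming the AirSim Python package and the Custom Vision training SDK (azure-cognitiveservices-vision-customvision). The endpoint, training key, waypoints, and tag name are placeholders, not values from the actual project.

```python
# Sketch only: fly a simulated AirSim drone over hypothetical waypoints,
# capture scene images, and upload them to a Custom Vision project.
import airsim
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from azure.cognitiveservices.vision.customvision.training.models import (
    ImageFileCreateBatch, ImageFileCreateEntry,
)
from msrest.authentication import ApiKeyCredentials

ENDPOINT = "https://<your-region>.api.cognitive.microsoft.com"  # placeholder
TRAINING_KEY = "<your-training-key>"                            # placeholder
WAYPOINTS = [(0, 0, -10), (20, 0, -10), (20, 20, -10)]          # hypothetical flight path

# Connect to the AirSim simulator and take off.
drone = airsim.MultirotorClient()
drone.confirmConnection()
drone.enableApiControl(True)
drone.armDisarm(True)
drone.takeoffAsync().join()

# Fly the waypoints, saving one compressed scene image (PNG) at each.
image_files = []
for i, (x, y, z) in enumerate(WAYPOINTS):
    drone.moveToPositionAsync(x, y, z, velocity=5).join()
    response = drone.simGetImages(
        [airsim.ImageRequest("0", airsim.ImageType.Scene, False, True)]
    )[0]
    filename = f"capture_{i}.png"
    airsim.write_file(filename, response.image_data_uint8)
    image_files.append(filename)

# Create a Custom Vision project with a compact (exportable) domain,
# upload the captures, and start training.
trainer = CustomVisionTrainingClient(
    ENDPOINT, ApiKeyCredentials(in_headers={"Training-key": TRAINING_KEY})
)
compact = next(
    d for d in trainer.get_domains() if d.exportable and d.type == "Classification"
)
project = trainer.create_project("airsim-animal-finder", domain_id=compact.id)
tag = trainer.create_tag(project.id, "stuffed-animal")  # one tag per animal in practice
entries = []
for filename in image_files:
    with open(filename, "rb") as f:
        entries.append(
            ImageFileCreateEntry(name=filename, contents=f.read(), tag_ids=[tag.id])
        )
# Batches are limited to 64 images per call.
trainer.create_images_from_files(project.id, ImageFileCreateBatch(images=entries))
iteration = trainer.train_project(project.id)  # returns with status "Training"
```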

From there, we exported the trained model to TensorFlow format and packaged it into Docker containers.
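The export can be requested through the same training SDK, assuming the project was created with a compact (exportable) domain as in the previous sketch, which this one reuses the trainer, project, and iteration objects from. The polling interval is arbitrary.

```python
# Sketch only: wait for training to finish, request a TensorFlow export,
# and download the resulting zip for the Docker image.
import time
import urllib.request

# Poll until the training iteration completes.
while iteration.status != "Completed":
    time.sleep(5)
    iteration = trainer.get_iteration(project.id, iteration.id)

# Request a TensorFlow export and poll until it is ready.
export = trainer.export_iteration(project.id, iteration.id, "TensorFlow")
while export.status == "Exporting":
    time.sleep(5)
    export = trainer.get_exports(project.id, iteration.id)[0]

# The zip contains the frozen graph (model.pb) and labels.txt,
# ready to be copied into the container for the edge device.
urllib.request.urlretrieve(export.download_uri, "tensorflow_model.zip")
```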

These containers were then deployed through Azure IoT Edge to a drone running a custom board with an NVIDIA GPU.

The drone can then fly around and send an alert to Azure IoT Hub every time it identifies an animal.
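On the device, the alerting loop might look roughly like the sketch below, using the azure-iot-device SDK inside an IoT Edge module. Here capture_frame() and detect_animals() are hypothetical helpers standing in for the drone's camera feed and the exported TensorFlow model, and the "alerts" output name is a placeholder for whatever route the deployment manifest defines.

```python
# Sketch only: an IoT Edge module loop that scores camera frames and
# raises an alert for each detected animal.
import json
import time
from azure.iot.device import IoTHubModuleClient, Message

# Connect using the environment the IoT Edge runtime injects into the container.
module = IoTHubModuleClient.create_from_edge_environment()
module.connect()

while True:
    frame = capture_frame()  # hypothetical helper: grab a frame from the drone camera
    # hypothetical helper: run the exported TensorFlow model on the frame
    for animal, confidence in detect_animals(frame):
        alert = Message(json.dumps({"animal": animal, "confidence": confidence}))
        alert.content_type = "application/json"
        # "alerts" is a placeholder output; a route in the deployment manifest
        # forwards these messages upstream to Azure IoT Hub.
        module.send_message_to_output(alert, "alerts")
    time.sleep(1)
```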

This is a great showcase of how real-time custom AI can run on edge devices such as drones.

Resources:

Projects related to AirSim Drones

Browse autonomous systems projects
