HAMS: Harnessing AutoMobiles for Safety






Road safety is a major public health issue, accounting for an estimated 1.35 million fatalities and many more injuries worldwide each year, placing it among the top 10 causes of death. Middle-income and particularly low-income countries bear a disproportionate burden of road accidents and fatalities. For instance, estimates of road fatalities in India range from one every 4 minutes (over 130,000 a year) to almost a quarter of a million a year, or 20% of the world’s total. Besides the heavy human cost, road accidents also impose a significant economic cost. So it is no surprise that the problem has attracted attention at the highest levels of government, including from Prime Minister Modi himself during a radio address in 2015.

The major factors impacting safety — vehicles, roads, and drivers — see little or no ongoing monitoring today, especially in countries such as India. It is our thesis that improving road conditions, vehicle health and, most importantly, driver discipline would help boost road safety. Indeed, among the leading causes of road accidents are such factors as speeding, drunk driving, and driver distractions, all of which can be mitigated through better driver discipline.

HAMS Overview

In the Harnessing AutoMobiles for Safety, or HAMS, project, we use low-cost sensing devices to construct a virtual harness for vehicles. The goal is to monitor the state of the driver and how the vehicle is being driven in the context of the road environment. We believe that effective monitoring leading to actionable feedback is key to promoting road safety.

Smartphone setup in HAMS

The sensing device employed in HAMS is an off-the-shelf smartphone. The smartphone is mounted on the windshield, with its front camera facing the driver and the rear camera looking out to the front. The key to the operation of HAMS is the use of multiple sensors simultaneously. For example, when a sharp braking event is detected (using the smartphone’s accelerometer), the distance to the vehicle in front is checked (using the rear camera), along with indications of driver distraction or fatigue (using the front camera). Such sensing and detection in tandem helps provide a holistic and accurate picture of how the vehicle is being driven, enabling appropriate feedback to then be generated.
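To make the multi-sensor triggering concrete, here is a minimal Python sketch of the first step, detecting a sharp braking event from the smartphone's accelerometer stream. The threshold and debounce values are illustrative assumptions, not the ones used in HAMS; in the real system, such an event would then trigger the camera-based checks described above.

```python
HARSH_BRAKE_THRESHOLD = -3.0   # m/s^2; assumed value for illustration
MIN_CONSECUTIVE_SAMPLES = 3    # debounce: require sustained deceleration

def detect_harsh_braking(longitudinal_accel):
    """Return the indices where a harsh-braking event starts."""
    events, run = [], 0
    for i, a in enumerate(longitudinal_accel):
        if a <= HARSH_BRAKE_THRESHOLD:
            run += 1
            if run == MIN_CONSECUTIVE_SAMPLES:
                events.append(i - MIN_CONSECUTIVE_SAMPLES + 1)
        else:
            run = 0
    return events

# A deceleration spike sustained from index 2 onwards:
trace = [0.1, -0.5, -3.5, -4.0, -3.8, -3.2, -0.2]
print(detect_harsh_braking(trace))  # → [2]
```

Requiring several consecutive samples below the threshold is a simple way to avoid flagging one-off accelerometer noise as a braking event.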

The research challenges we address in HAMS pertain to effective detection and monitoring in difficult settings. Some of these challenges arise because HAMS is retrofitted onto legacy vehicles, and so it must contend with variation in vehicle configuration, driver seating, and even the smartphone mounting. Other challenges arise from our goal of being broadly applicable, including in regions where we cannot count on well-marked, fixed-width lanes, for instance, to perform vehicle ranging. We also address the challenge of efficient operation on a smartphone with modest resources, for instance by combining accurate deep learning models with less expensive traditional computer vision techniques.

As part of the project, we have also explored several use cases for HAMS. One of the earliest we prototyped was a fleet management dashboard, which allowed a supervisor to view safety-related incidents of interest offline. We have also piloted HAMS in the context of driver training, in collaboration with the Institute of Driving and Traffic Research (IDTR), run by Maruti-Suzuki, the largest passenger car manufacturer in India.

More recently, we have been working with several State Transport Departments on using HAMS to automate the driver license test. See Automated License Testing for more details. 

We invite you to visit the links below to learn more about HAMS. 



News articles and related projects


We now outline the key research aspects of HAMS.

FarSight: Smartphone-based Vehicle Ranging

Student: Aditya Virmani (Research Fellow, 2017-18)

Researchers: Akshay Nambi, Venkat Padmanabhan

Maintaining adequate separation from the vehicle in front is key to safe driving. Indeed, the two-second rule calls for maintaining a separation of at least 2 seconds of travel distance at the vehicle's current speed. While technologies such as radar and lidar enable vehicle ranging, these are not available in legacy vehicles and would be expensive to retrofit.
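The two-second rule translates directly into a minimum following distance. A small sketch of the arithmetic:

```python
def min_safe_gap_m(speed_kmph, rule_seconds=2.0):
    """Distance (metres) the vehicle covers in `rule_seconds` at the given speed."""
    return speed_kmph * 1000.0 / 3600.0 * rule_seconds

# At 54 km/h (15 m/s), the rule implies a gap of at least 30 metres:
print(min_safe_gap_m(54))  # → 30.0
```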

In FarSight, vehicle ranging is performed using just the rear camera of a windshield-mounted smartphone. By identifying the class of vehicle in front (e.g., autorickshaw vs. sedan vs. bus) and a bounding box around it, FarSight uses simple trigonometry to estimate the range based on the approximate width for vehicles in the identified class.
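The trigonometry here is the standard pinhole-camera similar-triangles relation: a vehicle of known real-world width appears narrower in the image the farther away it is. The sketch below uses hypothetical class widths and focal length; the actual values in FarSight may differ.

```python
# Hypothetical typical widths per vehicle class (metres), for illustration only.
TYPICAL_WIDTH_M = {"autorickshaw": 1.4, "sedan": 1.8, "bus": 2.5}

def estimate_range_m(vehicle_class, bbox_width_px, focal_length_px):
    """Similar-triangles range estimate:
    range = focal_length_px * real_width_m / apparent_width_px."""
    return focal_length_px * TYPICAL_WIDTH_M[vehicle_class] / bbox_width_px

# A sedan whose bounding box spans 120 px, with an assumed 1000 px focal length:
print(estimate_range_m("sedan", 120, 1000))  # → 15.0 (metres)
```

This is why identifying the vehicle class matters: using a bus's width for an autorickshaw would substantially overestimate the range.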

Heterogeneous vehicle identification in FarSight

Identifying a tight bounding box around the vehicle in front is a key task. To ensure efficiency, while maintaining accuracy, FarSight switches, in an adaptive manner, between DNN-based detection, which is accurate, and key point tracking, which is computationally less expensive.
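One generic way to structure such adaptive switching is to run the expensive detector periodically, or whenever tracking confidence drops, and the cheap tracker on all other frames. The sketch below is an illustration of this pattern, not FarSight's actual switching policy; `detect` and `track` stand in for the DNN detector and key-point tracker.

```python
def process_frames(frames, detect, track, redetect_every=10, min_confidence=0.5):
    """Run the accurate-but-expensive detector periodically (or when tracking
    degrades) and the cheap tracker on all other frames."""
    boxes = []
    box, conf = None, 0.0
    for i, frame in enumerate(frames):
        if box is None or i % redetect_every == 0 or conf < min_confidence:
            box, conf = detect(frame)      # DNN-based detection
        else:
            box, conf = track(frame, box)  # key-point tracking from last box
        boxes.append(box)
    return boxes
```

With `redetect_every=10`, the detector runs on roughly one frame in ten as long as tracking stays confident, which is where the computational savings come from.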

For more information, please see the ACM Ubicomp 2019 paper on FarSight.

DeepLane: Computer Vision based Lane Detection

Student: Ravi Bhandari (PhD candidate at IIT Bombay; intern during summer 2016)

Researchers: Akshay Nambi, Venkat Padmanabhan, Bhaskaran Raman (IIT Bombay)

Current smartphone-based navigation applications fail to provide lane-level information due to poor GPS accuracy. Detecting and tracking a vehicle’s lane position on the road assists in lane-level navigation. For instance, it would be important to know whether a vehicle is in the correct lane for safely making a turn, perhaps even alerting the driver in advance if it is not, or whether the vehicle’s speed is compliant with a lane-specific speed limit.

DeepLane leverages the back camera of a windshield-mounted smartphone to provide an accurate estimate of the vehicle’s current lane. We employ a deep learning-based technique to classify the vehicle’s lane position. DeepLane does not depend on any infrastructure support such as lane markings and works even when there are no lane markings, a characteristic of many roads in developing regions. Our analysis shows that DeepLane has an accuracy of over 90% in determining the vehicle’s lane position.
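Per-frame classifier outputs can be noisy, so a common post-processing step (shown here as a generic illustration, not necessarily what DeepLane does) is to smooth the predicted lane label with a majority vote over a sliding window of recent frames:

```python
from collections import Counter, deque

def smooth_lane_predictions(per_frame_lanes, window=5):
    """Majority vote over a sliding window of per-frame lane labels."""
    recent, smoothed = deque(maxlen=window), []
    for lane in per_frame_lanes:
        recent.append(lane)
        smoothed.append(Counter(recent).most_common(1)[0][0])
    return smoothed

# A single spurious prediction (lane 3) is voted out:
print(smooth_lane_predictions([2, 2, 3, 2, 2]))  # → [2, 2, 2, 2, 2]
```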

For more information, please see the ACM BuildSys 2018 paper on DeepLane.

FullStop: Tracking Unsafe Stopping Behaviour of Buses

Student: Ravi Bhandari (PhD candidate at IIT Bombay; intern during summer 2016)

Researchers: Bhaskaran Raman (IIT Bombay), Venkat Padmanabhan

We focus on the stopping behaviour of buses, especially in the vicinity of bus stops, which often leads to accidents. For instance, buses could arrive at a bus stop but continue rolling forward instead of coming to a complete halt, or could stop some distance away from the bus stop, possibly even in the middle of a busy road. Each of these behaviours can result in injury or worse to people waiting at a bus stop as well as to passengers boarding or alighting from buses.

GPS is not accurate enough to detect such safety-related situations. Therefore, in FullStop, we use the view obtained from the rear camera of a windshield-mounted smartphone to detect safety-related situations such as a rolling stop or stopping at a location that is displaced laterally relative to the designated bus stop.
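As a simplified illustration of the rolling-stop idea (FullStop itself works from the camera view, not a speed trace), a rolling stop is one where the bus slows near the stop but never actually reaches zero speed:

```python
def classify_stop(speeds_mps, full_stop_thresh=0.2):
    """Label a speed trace near a bus stop as a 'full' or 'rolling' stop.
    The threshold is an assumed value for illustration."""
    return "full" if min(speeds_mps) <= full_stop_thresh else "rolling"

print(classify_stop([8.0, 5.0, 2.0, 0.0, 0.0, 3.0]))  # → full
print(classify_stop([8.0, 5.0, 2.5, 1.5, 2.0, 6.0]))  # → rolling
```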

For more information, please see the COMSNETS 2018 paper on FullStop.

AutoRate: Automatically Rating Driver Attentiveness

Student: Isha Dua (master’s candidate at IIIT Hyderabad; intern during summer 2018)

Researchers: Akshay Nambi, C. V. Jawahar (IIIT Hyderabad), Venkat Padmanabhan

Driver inattentiveness, whether due to fatigue or distraction, is a leading cause of road accidents. Prior work has evaluated fatigue and distraction independently. In AutoRate, we leverage the front camera of a windshield-mounted smartphone to monitor the driver’s attentiveness holistically. AutoRate derives a driver’s attention rating by fusing several spatio-temporal features pertaining to the driver’s state and actions, including head pose, eye gaze, eye closure, yawns, use of mobile phone, etc. Our analysis shows that AutoRate’s automatically generated rating has an overall agreement of 0.87 with the ratings provided by human annotators.

Driver attention score derived using AutoRate
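As a deliberately simplified sketch of feature fusion (AutoRate fuses spatio-temporal features with a learned model, not the fixed hand-picked weights used here), a rating can be formed as a weighted combination of per-feature scores:

```python
# Hypothetical weights for illustration only; each feature score is in [0, 1],
# where 1 means fully attentive behaviour on that dimension.
WEIGHTS = {"eyes_on_road": 0.4, "eyes_open": 0.3, "no_phone": 0.2, "no_yawn": 0.1}

def attention_rating(features):
    """Weighted combination of per-feature attentiveness scores."""
    return sum(WEIGHTS[k] * features[k] for k in WEIGHTS)

# A driver who is alert but using a phone loses the phone-related weight:
rating = attention_rating({"eyes_on_road": 1.0, "eyes_open": 1.0,
                           "no_phone": 0.0, "no_yawn": 1.0})
print(round(rating, 2))  # → 0.8
```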

For more information, please see the IEEE FG 2019 paper on AutoRate.

InSight: Driver State Monitoring in Low-light Conditions

Students: Ishani Janveja (BVCE college, intern during summer 2019), Shruthi Bannur (RVCE college, intern during 2017-18), Sanchit Gupta (IIIT Delhi, intern during 2017-18), Ishit Mehta (Research Fellow, 2018-19)

Researchers: Akshay Nambi, Venkat Padmanabhan

Road accidents are more common during nighttime than during daytime. However, poor lighting at night makes it challenging even to detect the driver’s face, let alone facial landmarks, using a standard smartphone RGB camera.

In InSight, we are developing a suite of techniques spanning special-purpose hardware and deep learning to enable effective face and facial-landmark detection in low-light conditions. For instance, we have developed a variant of the dlib library to accurately detect landmarks in facial images obtained with a FLIR thermal camera.

Low-light image captured using a smartphone and the corresponding thermal image using FLIR camera along with landmarks from our model.

Stay tuned for more details!

ALT: Automating Driver License Testing

Students: Anurag Ghosh (intern/Research Fellow, 2018 onwards), Vijay Lingam (intern, 2017-18), Ishit Mehta (Research Fellow, 2018-19)

Researchers: Akshay Nambi, Venkat Padmanabhan

Driver license testing is an important step in ensuring that only qualified drivers hit the road. However, testing is typically a manual process, which imposes a significant burden on the human evaluators and therefore leads to a less-than-thorough process. It also means that candidates must contend with the possibly subjective assessment made by the evaluators. The result of these constraints can be stark. For instance, a survey by the SaveLIFE Foundation in India reports that a whopping 59% of respondents did not take a test to obtain a driving license.

Auto calibration process to determine mirror scans in ALT.

The goal of ALT is to automate driver license testing using the standard HAMS setup — a windshield-mounted smartphone. The front camera of the smartphone is used for a range of inward-looking tasks, including ensuring that (a) the person taking the test is the same as the one who had registered for it, (b) the driver is wearing a seatbelt, and (c) the driver scans their mirrors before effecting a turn or a lane change. To accommodate variation in the vehicle geometry, driver seating, and smartphone (and hence camera) mounting, ALT employs a novel autocalibration step to automatically learn the direction of the driver’s gaze relative to the mirror positions, without requiring any manual calibration.
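As a deliberately simplified sketch of the idea behind mirror-scan detection (ALT's actual autocalibration works without the explicit, labeled calibration glances assumed here), one can learn a per-mirror head-yaw centre and then classify later glances by nearest centre:

```python
def calibrate_mirror_yaws(samples):
    """samples: (mirror_name, head_yaw_deg) pairs recorded while the driver
    glances at each mirror. Returns the mean yaw per mirror."""
    sums = {}
    for mirror, yaw in samples:
        s, n = sums.get(mirror, (0.0, 0))
        sums[mirror] = (s + yaw, n + 1)
    return {m: s / n for m, (s, n) in sums.items()}

def classify_glance(yaw, mirror_yaws, tolerance_deg=10.0):
    """Nearest calibrated mirror, or None if nothing is within tolerance."""
    mirror, centre = min(mirror_yaws.items(), key=lambda kv: abs(kv[1] - yaw))
    return mirror if abs(centre - yaw) <= tolerance_deg else None

cal = calibrate_mirror_yaws([("left", -42), ("left", -38),
                             ("right", 41), ("right", 39)])
print(classify_glance(-36, cal))  # → left
print(classify_glance(5, cal))    # → None (driver looking roughly ahead)
```

Learning these centres per test session is what absorbs the variation in vehicle geometry, driver seating, and camera mounting.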

Vehicle trajectory estimated in ALT.

The rear camera is used to track the trajectory of the vehicle as it is driven through various maneuvers such as parallel parking and circling a roundabout. This requires precise tracking to establish the driver’s skill or lack thereof; for instance, determining whether the vehicle strayed outside the designated track, whether the driver stopped for longer than permitted, or whether the driver tried to course-correct by rolling the vehicle forward and backward alternately more times than is allowed. While visual SLAM (Simultaneous Localization and Mapping) is an attractive option for such tracking, existing approaches suffer from either a lack of accuracy or the need for an extensive deployment of markers in the environment. In ALT, we develop a novel hybrid SLAM technique, which requires a minimal deployment of markers only at the points in the track where there is a significant scene change, for instance due to a sharp curve.
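Once a trajectory is recovered, the individual skill checks are simple to state. The sketch below illustrates two of them, counting forward/backward reversals and measuring the longest stop, using assumed thresholds and a speed/displacement trace rather than ALT's actual trajectory representation:

```python
def count_reversals(displacements):
    """Count sign changes in per-interval longitudinal displacement, i.e. how
    many times the driver switched between rolling forward and backward."""
    signs = [1 if d > 0 else -1 for d in displacements if d != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def max_stop_seconds(speeds_mps, dt=0.5, stopped_below=0.1):
    """Duration of the longest continuous stretch with near-zero speed."""
    best = run = 0
    for v in speeds_mps:
        run = run + 1 if v < stopped_below else 0
        best = max(best, run)
    return best * dt

# Forward, forward, back, forward, back: three direction changes.
print(count_reversals([0.5, 0.4, -0.2, 0.3, -0.1]))  # → 3
```

A test rule such as "at most N reversals during parallel parking" then reduces to a comparison against these counts.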

For more information, please see the ACM SenSys 2019 paper on ALT.

HAMS with ALT functionality enabled has been deployed for conducting driver license tests at Dehradun, Uttarakhand. See the public announcement of this project in collaboration with the Transport Department, Government of Uttarakhand and Institute of Driving and Traffic Research (IDTR). And here are videos introducing the project and showing automated license testing in action.

Application: Fleet monitoring

Students: Amod Agarwal (IIIT-D, intern during summer 2016), Ravi Bhandari (IITB, intern during summer 2016), Shibsankar Das (IISc, intern during summer 2016), Puneeth Meruva (MIT, intern during summer 2016), Deepak Mahendrakar, Abhishek V (PESIT, part-time intern during autumn 2016)

Researchers: Akshay Nambi, Venkat Padmanabhan

Monitoring the driver and their driving is crucial to ensuring safety. One of the earliest applications we prototyped was a fleet management dashboard, which allowed a supervisor to view safety-related incidents of interest offline.

Automated Driver License Testing

Inadequate driver skills and apathy towards, or lack of awareness of, safe driving practices are key contributing factors to poor road safety. The problem is exacerbated by the fact that the license issuing system is broken in India, with an estimated 59% of licenses issued without a test, making it a significant societal concern. The challenges arise from capacity and cost constraints, and the corruption that plagues the driver testing process. While there have been efforts aimed at creating instrumented tracks to automate the license test, these have been stymied by the high cost of the infrastructure (e.g., pole-mounted high-resolution cameras looking down on the tracks) and poor test coverage (e.g., inability to monitor the driver inside the vehicle).

HAMS-based testing offers a compelling alternative. It is a low-cost system based on a windshield-mounted smartphone, though for reasons of scalability (i.e., handling a large volume of tests), computation can be offloaded to an onsite server or to the cloud. The view inside the vehicle also helps expand the test coverage. For instance, the test can verify that the driver taking the test is the same as the one who had registered for it (essential for protecting against impersonation), verify that the driver is wearing their seat belt (an essential safety precaution), and check whether the driver scans their mirrors before effecting a maneuver such as a lane change (an example of multimodal sensing, with inertial sensing and camera-based monitoring employed in tandem).

HAMS-based testing allows the entire testing process to be performed without any human intervention. A test report, together with video evidence (to substantiate the test result in case of a dispute), is produced in an automated manner within minutes of the completion of the test. This manner of testing, with the test taken by the driver alone in the vehicle (i.e., no test inspector), has proved to be a boon in the context of the physical distancing norms arising from the COVID-19 pandemic.


To roll out HAMS-based driver testing, we first partnered with the Government of Uttarakhand and the Institute of Driving and Traffic Research (IDTR), run by Maruti-Suzuki. Testing is conducted on a track and includes a range of parameters including verification of driver identity, checking of the seat belt, fine-grained trajectory tracking during maneuvers such as negotiating a roundabout and performing parallel parking, and checking on mirror scanning during lane changing.

  1. HAMS-based driver license testing @ Dehradun, Uttarakhand: HAMS-based license testing went live at the Regional Transport Office (RTO) in Dehradun, the capital of Uttarakhand, in July 2019. Over 10,000 automated tests had been conducted as of 15 Feb 2021, with an accuracy of 98%. The objectivity and transparency of the automated testing process have won the praise of not just the RTO staff but also the majority of the candidates, including many who failed the test. The thoroughness of HAMS-based testing is underscored by the fact that the passing rate is now only 54%, compared to over 90% with the prior manual testing.
  2. Scaling HAMS deployments across India: The success in Dehradun has spurred interest in HAMS-based automated testing across India and also overseas. RFPs issued by several states have called for capabilities such as continuous driver identification, gaze tracking, and mirror-scan monitoring, none of which were available before HAMS. HAMS-based testing has been rolled out at IDTR Aurangabad, Bihar, and is in the process of being implemented at multiple RTOs across the country.


Microsoft CEO Satya Nadella showcased HAMS as part of his keynotes at the Future Decoded Mumbai CEO Summit and the Future Decoded Bengaluru Tech Summit (Feb 2020).


HAMS-based License Testing Launch Video at Dehradun, Uttarakhand:


HAMS-based License Testing Overview at Dehradun, Uttarakhand:


HAMS-based License Testing at Aurangabad, Bihar (the commentary is in Hindi, but HAMS is spelled out at 1:26 and 2:07 in the video):


  1. Microsoft Research AI project automates driver’s license tests in India, Microsoft News (Oct 2019).
  2. Microsoft’s AI-based ‘HAMS’ automates driver license tests in India, Hindustan Times (Oct 2019).
  3. Microsoft Made a Smartphone App That Can Administer Driving Tests Without an Instructor, GIZMODO (Oct 2019).
  4. Driving license tests just got smarter in India with Microsoft’s AI project, TechCrunch (Oct 2019).
  5. Microsoft provides Indian RTOs with AI software to take driving license tests, Indian Express (Nov 2019).
  6. Microsoft automates driving license tests, Times of India (Nov 2019).
  7. Maruti Suzuki, Microsoft collaborate to develop HAMS technology for driver training, Economic Times (Oct 2020).
  8. For safer roads, Maruti Suzuki and Microsoft join hands to introduce HAMS technology for driver training, Maruti Suzuki (Oct 2020).
  9. Transport minister opens automated driving test track in Aurangabad, Times of India (Dec 2020).