HAMS: Harnessing AutoMobiles for Safety

Road safety is a major public health issue: road accidents cause an estimated 1.35 million fatalities, and many more injuries, worldwide each year, placing them among the top 10 causes of death. Middle-income and, particularly, low-income countries bear a disproportionate burden of road accidents and fatalities. Estimates of road fatalities in India, for instance, range from one every 4 minutes to almost a quarter of a million a year, the latter being about 20% of the world’s total. Besides the heavy human cost, road accidents also impose a significant economic cost. So it is no surprise that the problem has attracted attention at the highest levels of government, including from Prime Minister Modi himself during a radio address in 2015.

The major factors impacting safety — vehicles, roads, and drivers — see little or no ongoing monitoring today, especially in countries such as India. It is our thesis that improving road conditions, vehicle health and, most importantly, driver discipline would help boost road safety. Indeed, among the leading causes of road accidents are such factors as speeding, drunk driving, and driver distractions, all of which can be mitigated through better driver discipline.

HAMS Overview

In the Harnessing AutoMobiles for Safety, or HAMS, project, we use low-cost sensing devices to construct a virtual harness for vehicles. The goal is to monitor the state of the driver and how the vehicle is being driven in the context of the surrounding road environment. We believe that effective monitoring, leading to actionable feedback, is key to promoting road safety.

Smartphone setup in HAMS

The sensing device employed in HAMS is an off-the-shelf smartphone. The smartphone is mounted on the windshield, with its front camera facing the driver and the rear camera looking out to the front. The key to the operation of HAMS is the use of multiple sensors simultaneously. For example, when a sharp braking event is detected (using the smartphone’s accelerometer), the distance to the vehicle in front is checked (using the rear camera), along with indications of driver distraction or fatigue (using the front camera). Such sensing and detection in tandem helps provide a holistic and accurate picture of how the vehicle is being driven, enabling appropriate feedback to then be generated.
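As a loose sketch of how such tandem sensing might combine the three signals (the function, labels, and thresholds below are our illustrative assumptions, not the actual HAMS pipeline):

```python
# Illustrative fusion of three HAMS-style signals: longitudinal deceleration
# (accelerometer), headway to the vehicle in front (rear camera), and a
# driver-distraction flag (front camera). Thresholds are placeholders.

BRAKE_DECEL_MPS2 = 3.0   # deceleration that counts as "sharp" braking
MIN_HEADWAY_S = 2.0      # two-second rule for safe following distance

def classify_braking_event(decel_mps2, headway_s, driver_distracted):
    """Label a braking event using all three sensor streams together."""
    if decel_mps2 < BRAKE_DECEL_MPS2:
        return "normal"
    if driver_distracted:
        return "sharp-brake-distracted"
    if headway_s < MIN_HEADWAY_S:
        return "sharp-brake-tailgating"
    return "sharp-brake-ok"
```

A sharp-braking event alone may be benign; it is the combination with tailgating or distraction that marks it as a safety incident.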

The research challenges we address in HAMS pertain to effective detection and monitoring in challenging settings. Some of these challenges arise because HAMS is retrofitted onto legacy vehicles, and so it must contend with variation in vehicle configuration, driver seating, and even the smartphone mounting. Other challenges arise because of our goal to be broadly applicable, including in regions where we cannot count on, say, well-marked, fixed-width lanes to perform vehicle ranging. We also address the challenge of efficient operation on a smartphone with modest resources, for instance, by combining accurate deep learning models with less expensive traditional computer vision techniques.

As part of the project, we have also explored several use cases for HAMS. One of the earliest we prototyped was a fleet management dashboard, which allowed a supervisor to view safety-related incidents of interest offline. We have also piloted HAMS in the context of driver training, in collaboration with the Institute of Driving and Traffic Research (IDTR), run by Maruti-Suzuki, the largest passenger car manufacturer in India.

More recently, we have been working with the Transport Department, Government of Uttarakhand and IDTR on using HAMS to automate the driver license test at the Regional Transport Office, Dehradun.

We invite you to visit the links below to learn more about HAMS. 



News articles and related projects


We now outline the key research aspects of HAMS.

FarSight: Smartphone-based Vehicle Ranging

Student: Aditya Virmani (Research Fellow, 2017-18)

Researchers: Akshay Nambi, Venkat Padmanabhan

Maintaining an adequate separation from the vehicle in front is key to safe driving. Indeed, the two-second rule calls for a separation of at least the distance the vehicle travels in 2 seconds at its current speed. While technologies such as radar and lidar enable vehicle ranging, these are not available in legacy vehicles and would be expensive to retrofit.
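In terms of simple arithmetic, the two-second rule converts the current speed into a minimum following distance; a minimal sketch (the function name is ours, for illustration):

```python
def two_second_gap_m(speed_kmph, gap_s=2.0):
    """Minimum following distance (in metres) implied by the two-second rule."""
    speed_mps = speed_kmph * 1000.0 / 3600.0  # km/h -> m/s
    return speed_mps * gap_s
```

At 60 km/h, for instance, this works out to roughly 33 m.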

In FarSight, vehicle ranging is performed using just the rear camera of a windshield-mounted smartphone. By identifying the class of vehicle in front (e.g., autorickshaw vs. sedan vs. bus) and a bounding box around it, FarSight uses simple trigonometry to estimate the range based on the approximate width for vehicles in the identified class.
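The underlying geometry is the standard pinhole-camera relation: range is approximately the focal length (in pixels) times the real-world width, divided by the width in pixels. A minimal sketch, where the focal length and per-class widths are rough illustrative values, not numbers from the paper:

```python
# Approximate real-world widths (in metres) per vehicle class; illustrative.
VEHICLE_WIDTH_M = {"autorickshaw": 1.4, "sedan": 1.8, "bus": 2.5}

def estimate_range_m(focal_px, vehicle_class, bbox_width_px):
    """Pinhole-camera ranging: distance = f * W_real / w_pixels."""
    return focal_px * VEHICLE_WIDTH_M[vehicle_class] / bbox_width_px
```

For example, a sedan whose bounding box spans 90 pixels under a 1000-pixel focal length would be estimated at about 20 m away.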

Heterogeneous vehicle identification in FarSight

Identifying a tight bounding box around the vehicle in front is a key task. To maintain accuracy while remaining efficient, FarSight switches adaptively between DNN-based detection, which is accurate but computationally expensive, and keypoint tracking, which is much cheaper.
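The switching policy can be pictured as a per-frame decision: rerun the expensive DNN when tracking has gone on too long or its confidence drops, and track otherwise. The thresholds and signals below are illustrative assumptions, not FarSight's exact policy:

```python
def choose_detector(frames_since_dnn, tracker_confidence,
                    max_track_frames=10, min_confidence=0.5):
    """Decide per frame whether to run the DNN or the cheap keypoint tracker."""
    if frames_since_dnn >= max_track_frames or tracker_confidence < min_confidence:
        return "dnn"    # refresh the bounding box with the accurate detector
    return "track"      # keep following keypoints from the last detection
```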

For more information, please see the ACM Ubicomp 2019 paper on FarSight.

DeepLane: Computer Vision based Lane Detection

Student: Ravi Bhandari (PhD candidate at IIT Bombay; intern during summer 2016)

Researchers: Akshay Nambi, Venkat Padmanabhan, Bhaskaran Raman (IIT Bombay)

Current smartphone-based navigation applications fail to provide lane-level information due to poor GPS accuracy. Detecting and tracking a vehicle’s lane position on the road assists in lane-level navigation. For instance, it would be important to know whether a vehicle is in the correct lane for safely making a turn, perhaps even alerting the driver in advance if it is not, or whether the vehicle’s speed is compliant with a lane-specific speed limit.

DeepLane leverages the back camera of a windshield-mounted smartphone to provide an accurate estimate of the vehicle’s current lane. We employ a deep learning-based technique to classify the vehicle’s lane position. DeepLane does not depend on any infrastructure support such as lane markings and works even when there are no lane markings, a characteristic of many roads in developing regions. Our analysis shows that DeepLane has an accuracy of over 90% in determining the vehicle’s lane position.

For more information, please see the ACM BuildSys 2018 paper on DeepLane.

FullStop: Tracking Unsafe Stopping Behaviour of Buses

Student: Ravi Bhandari (PhD candidate at IIT Bombay; intern during summer 2016)

Researchers: Bhaskaran Raman (IIT Bombay), Venkat Padmanabhan

We focus on the stopping behaviour of buses, especially in the vicinity of bus stops, which often leads to accidents. For instance, buses could arrive at a bus stop but continue rolling forward instead of coming to a complete halt, or could stop some distance away from the bus stop, possibly even in the middle of a busy road. Each of these behaviours can result in injury or worse to people waiting at a bus stop as well as to passengers boarding or alighting from buses.

GPS is not accurate enough to detect such safety-related situations. Therefore, in FullStop, we use the view obtained from the rear camera of a windshield-mounted smartphone to detect safety-related situations such as a rolling stop or stopping at a location that is displaced laterally relative to the designated bus stop.
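FullStop derives these judgments from the camera view rather than from speed alone, but the notion of a rolling stop can be illustrated with a toy classifier over a speed trace near a bus stop (the thresholds are our illustrative choices):

```python
def classify_stop(speeds_mps, stop_thresh=0.3, slow_thresh=2.0):
    """Label a speed trace near a bus stop: the bus either came to rest,
    only slowed down (a rolling stop), or did not slow down at all."""
    slowest = min(speeds_mps)
    if slowest <= stop_thresh:
        return "full-stop"
    if slowest <= slow_thresh:
        return "rolling-stop"
    return "no-stop"
```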

For more information, please see the COMSNETS 2018 paper on FullStop.

AutoRate: Automatically Rating Driver Attentiveness

Student: Isha Dua (master’s candidate at IIIT Hyderabad; intern during summer 2018)

Researchers: Akshay Nambi, C. V. Jawahar (IIIT Hyderabad), Venkat Padmanabhan

Driver inattentiveness, whether due to fatigue or distraction, is a leading cause of road accidents. Prior work has evaluated fatigue and distraction independently. In AutoRate, we leverage the front camera of a windshield-mounted smartphone to monitor the driver’s attentiveness holistically. AutoRate derives a driver’s attention rating by fusing several spatio-temporal features pertaining to the driver’s state and actions, including head pose, eye gaze, eye closure, yawns, use of mobile phone, etc. Our analysis shows that AutoRate’s automatically generated rating has an overall agreement of 0.87 with the ratings provided by human annotators.
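As a toy illustration of such fusion (not AutoRate's actual learned model), per-frame features could be combined with fixed weights into a single score; the feature names and weights here are our assumptions:

```python
# Hypothetical weights over per-frame driver-state features, each in [0, 1].
WEIGHTS = {"eyes_on_road": 0.4, "eyes_open": 0.3, "no_phone": 0.2, "no_yawn": 0.1}

def attention_score(features):
    """Weighted fusion of per-frame features into an attentiveness score."""
    return sum(WEIGHTS[name] * features[name] for name in WEIGHTS)
```

On this toy scale, a driver looking at the road with eyes open but holding a phone would score 0.8.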

Driver attention score derived using AutoRate

For more information, please see the IEEE FG 2019 paper on AutoRate.

InSight: Driver State Monitoring in Low-light Conditions

Students: Ishani Janveja (BVCE college, intern during summer 2019), Shruthi Bannur (RVCE college, intern during 2017-18), Sanchit Gupta (IIIT Delhi, intern during 2017-18), Ishit Mehta (Research Fellow, 2018-19)

Researchers: Akshay Nambi, Venkat Padmanabhan

Road accidents are more common at night than during the day. However, poor lighting at night makes it challenging to even detect the driver’s face, let alone facial landmarks, using a smartphone’s standard RGB camera.

In InSight, we are developing a suite of techniques spanning special-purpose hardware and deep learning to enable effective face and facial-landmark detection in low-light conditions. For instance, we have developed a variant of the dlib library to accurately detect landmarks in facial images obtained with a FLIR thermal camera.

Low-light image captured using a smartphone and the corresponding thermal image using FLIR camera along with landmarks from our model.

Stay tuned for more details!

ALT: Automating Driver License Testing

Students: Anurag Ghosh (intern/Research Fellow, 2018 onwards), Vijay Lingam (intern, 2017-18), Ishit Mehta (Research Fellow, 2018-19)

Researchers: Akshay Nambi, Venkat Padmanabhan

Driver license testing is an important step in ensuring that only qualified drivers hit the road. However, testing is typically a manual process, which imposes a significant burden on the human evaluators and therefore leads to a less-than-thorough process. It also means that candidates must contend with the possibly subjective assessment made by the evaluators. The result of these constraints can be stark. For instance, a survey by the SaveLIFE Foundation in India reports that a whopping 59% of respondents did not take a test to obtain a driving license.

Auto calibration process to determine mirror scans in ALT.

The goal of ALT is to automate driver license testing using the standard HAMS setup — a windshield-mounted smartphone. The front camera of the smartphone is used for a range of inward-looking tasks, including ensuring that (a) the person taking the test is the same as the one who had registered for it, (b) the driver is wearing a seatbelt, and (c) the driver scans their mirrors before effecting a turn or a lane change. To accommodate variation in the vehicle geometry, driver seating, and smartphone (and hence camera) mounting, ALT employs a novel autocalibration step to automatically learn the direction of the driver’s gaze relative to the mirror positions, without requiring any manual calibration.
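One way to picture such autocalibration, as a loose sketch rather than ALT's actual method, is unsupervised clustering of head-yaw samples gathered during a drive: glance directions such as road-ahead and mirror checks emerge as clusters without manual labeling. A tiny 1-D k-means over yaw angles (in degrees):

```python
def cluster_yaws(yaws, iters=20):
    """Tiny 1-D k-means (k=3) over head-yaw samples, e.g. to separate
    left-mirror glances, road-ahead gaze, and rear-view-mirror glances."""
    # Initialize the three centers at the min, mean, and max of the data.
    centers = [min(yaws), sum(yaws) / len(yaws), max(yaws)]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for y in yaws:
            nearest = min(range(len(centers)), key=lambda i: abs(y - centers[i]))
            groups[nearest].append(y)
        # Move each center to the mean of its group; keep it if the group is empty.
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return sorted(centers)
```

The learned cluster centers then serve as per-driver, per-mounting reference directions against which mirror scans can be checked.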

Vehicle trajectory estimated in ALT.

The rear camera is used to track the trajectory of the vehicle as it is driven through maneuvers such as parallel parking and circling a roundabout. This requires precise tracking to establish the driver’s skill, or lack thereof: for instance, determining whether the vehicle strayed outside the designated track, whether the driver stopped for longer than permitted, or whether they tried to course-correct by rolling the vehicle back and forth more times than is allowed. While visual SLAM (Simultaneous Localization and Mapping) is an attractive option for such tracking, existing approaches suffer from either a lack of accuracy or the need for extensive deployment of markers in the environment. In ALT, we develop a novel hybrid SLAM technique that requires markers only at the few points in the track where there is a significant scene change, say due to a sharp curve.

For more information, please see the ACM SenSys 2019 paper on ALT.

HAMS with ALT functionality enabled has been deployed for conducting driver license tests at Dehradun, Uttarakhand. See the public announcement of this project in collaboration with the Transport Department, Government of Uttarakhand and Institute of Driving and Traffic Research (IDTR). And here are videos introducing the project and showing automated license testing in action.

Application: Fleet monitoring

Students: Amod Agarwal (IIIT-D, intern during summer 2016), Ravi Bhandari (IITB, intern during summer 2016), Shibsankar Das (IISc, intern during summer 2016), Puneeth Meruva (MIT, intern during summer 2016), Deepak Mahendrakar, Abhishek V (PESIT, part-time intern during autumn 2016)

Researchers: Akshay Nambi, Venkat Padmanabhan

Monitoring the driver and their driving is crucial to ensuring safety. One of the earliest applications we prototyped was a fleet management dashboard, which allowed a supervisor to view safety-related incidents of interest offline.