Accurate indoor localization has the potential to transform the way people navigate indoors, much as GPS transformed the way people navigate outdoors. Over the last 15 years, several indoor localization technologies have been proposed and evaluated by both academia and industry, but we have yet to see large-scale deployments. This competition aims to bring together real-time or near real-time indoor location technologies and compare their performance in the same space.
The competition is on! We are happy to announce that we have received 36 submissions from 32 teams spanning academia, industry, and startups!
21 teams with 22 different approaches eventually showed up for the competition. The final ranking for all teams can be found in the table below. Teams in blue belong to the infrastructure-based category, while teams in green belong to the infrastructure-free category. Congratulations to all the participants! It was a blast!
An overview of the results, along with some pictures from the event, can also be found here.
* Submissions from Adler et al. and Li et al. achieved almost identical location errors (2.034m vs. 2.039m). Given the small difference, we considered this a tie, and according to the rules of the competition, the team that deployed the fewest anchor points wins. Li et al. deployed 5 LED lamps (out of the maximum of 10 they could deploy), and were therefore awarded second place in the infrastructure-based category.
All submissions have been assigned to one of two categories depending on the requirement to deploy custom hardware or not. The list of all submissions in each category can be seen below.
All teams highlighted in green have successfully registered for the competition and will be allowed to compete. The teams highlighted in red either withdrew from the competition or did not successfully register, and will not be allowed to compete.
The time slot assigned to each team corresponds to the exact time on the evaluation day (Monday, April 14th) at which that team will be evaluated. All teams are encouraged to be present at all times to learn about the competing systems and observe them in action. In any case, all teams should be in the evaluation area at least 1.5 hours prior to their assigned time slot.
Pirkl et al.
Indoor Localization Based on Resonant Oscillating Magnetic Fields
Li et al.
Indoor Localization with Multi-modalities (Light)
Ashok et al.
InfraRad: A Radio-Optical Beaconing Approach for Accurate Indoor Localization
Adler et al.
FUBLoc – Accurate Range-based Indoor Localization and Tracking
Selavo et al.
Localization Using Digitally Steerable Antennas
Bestmann et al.
EasyPoint – Indoor Localization and Navigation Low Cost, Reliable and Accurate
Schmid et al.
High-Resolution Indoor RF Ranging
Ehrig et al.
A 60GHz System for Simultaneous Time of Flight Ranging and High-Speed Wireless Data Communication
Sark et al.
A Software Defined Radio for Time of Flight Based Ranging and Localization
Taylor et al.
Low-cost Hybrid Indoor Localization with Light Fixtures
Kleunen et al.
Locus: Space-based Indoor Positioning
Lazik et al.
ALPS: An Ultrasonic Localization System
Sigg et al.
Passive Device-Free Indoor Localization from RSSI
Dentamaro et al.
Nextome – Indoor Positioning and Navigation System
Jiang et al.
HiLoc: A TDoA-Fingerprint Hybrid Indoor Localization System
Abrudan et al.
IMU-Aided Magneto-Inductive Localization
Burgess et al.
Indoo.rs (iPhone + BLE)
Nikodem et al.
Indoor Localization Based on Low-Power Chirp Transceivers
Sigg et al.
Device-Free Indoor Localization
Yang et al.
A step into mm-scale treatment! Multipath-Resistant Tracking for Mobile RFID Tags
Brucato et al.
Modeling and Prototyping a Personal Object Finder. The NEVERLOST Real Time Localization System Use Case
Ferraz et al.
Ubee.in – An Indoor Location Solution for Mobile Devices
Li et al.
Indoor Localization with Multi-modalities (WiFi + sensors)
Zhang et al.
MaWi: A Hybrid Magnetic and Wi-Fi System for Scalable Indoor Localization
Laoudias et al.
Accurate Multi-Sensor Localization on Android Devices
Klepal et al.
MapUme – WiFi Based Localization System
Jiang et al.
FreeLoc: Infrastructure-Free Indoor Localization
A Novel Hybrid/Geomagnetic Field Based Technology for Indoor Navigation
Zou et al.
WiFi Based Indoor Localization System by Using Weighted Path Loss and Extreme Machine Learning
Xiao et al.
Indoor Tracking Using Conditional Random Fields
Yun et al.
Vision-Based 3D Indoor localization
Marcaletti et al.
Tracking of Mobile Devices with WiFi Time-Of-Flight
Burgess et al.
Indoo.rs (Android + WiFi)
Ghose et al.
UnsupLoc – A System for Infrastructure-Friendly Unsupervised Indoor Localization
Quintas et al.
Indoor Localization and Tracking Using 802.11 networks and Smartphones
Ohrt et al.
Room-Based Indoor Localization using WiFi-Fingerprinting and Machine Learning
Evaluation area: The evaluation area will include two rooms and the small hallway surrounding them. The total evaluation area is approximately 2500 square feet. Teams can gather more information about the evaluation area during the setup day.
The evaluation area will contain furniture. There will also be people present in the rooms during evaluation. In addition, the furniture placement will change between the setup and evaluation days.
Setup Day (Sunday, April 13th): All teams will be given a 7-hour window (9am-4pm) to set up their systems. This is the time to deploy your custom hardware (if any), profile the space, and calibrate your systems as best you can. You won’t be allowed to make any changes to your systems after the setup day. Early on Sunday morning, we will share with you, and also clearly mark in the evaluation area, a specific point that will serve as the origin of the coordinate system used for the evaluation. All locations reported by the teams in the competition should be relative to this origin point. For instance, if a person is standing 0.5m from the origin along the X axis and 3m along the Y axis, your system is expected to report the location (0.5m, 3m).
In order to localize anchor nodes or points used during fingerprinting, each team will have to measure the distances along the X and Y axes from the indicated origin point. It is up to each team to decide how to do this (e.g., measuring tape, laser range finder, etc.). Since we expect a large number of participants, we recommend that each team come equipped with a laser range finder to simplify the process of measuring distances from the origin point.
Note that each team can choose its own way of showing the estimated location (e.g., webpage, text file, phone, laptop, etc.), as long as a member of the team can clearly point the estimated location out to the evaluator in real time. It is not enough for the evaluator to be told what the estimated location is; the evaluator must also be able to see that location on the system under test.
Each team should report the most accurate location possible. For instance, if the system under test can report mm-level positions, it should report the location (0.515m, 3.001m) instead of just (0.5m, 3m).
Evaluation Day (Monday, April 14th): After the teams have completed their setup on Sunday, the organizers will mark a number of points on the floor and manually measure the ground truth location of every point in the reference coordinate system. The contestants will not be aware of these points until Monday morning, when the actual evaluation will take place. During evaluation, the organizers will carry the device indicated by each team to each of the test points, and record the location reported by the device at that point. For every test point, we will compute the Euclidean distance between the location inferred by the system under test and the ground truth location of the test point. A team's final score is the average of these distances across all test points (the average localization error). The team that achieves the lowest average localization error wins.
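The scoring metric described above can be sketched in a few lines of Python. This is purely illustrative; the function name and the sample coordinates are made up, not part of the official evaluation tooling:

```python
import math

def average_localization_error(reported, ground_truth):
    """Mean Euclidean distance (in meters) between reported and
    ground-truth (x, y) locations across all test points."""
    errors = [
        math.hypot(rx - gx, ry - gy)
        for (rx, ry), (gx, gy) in zip(reported, ground_truth)
    ]
    return sum(errors) / len(errors)

# Hypothetical example: three test points, coordinates in meters
# relative to the marked origin point.
reported = [(0.515, 3.001), (4.90, 1.20), (2.00, 6.50)]
ground_truth = [(0.500, 3.000), (5.00, 1.00), (2.30, 6.10)]

print(round(average_localization_error(reported, ground_truth), 3))  # → 0.246
```

Lower is better: a system that is off by 0.5m at one point and spot-on at the rest is still penalized by that point's share of the average.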
Note that the system under test could be continuously computing its location. The evaluator will only record the location at the pre-specified test points. The evaluator carrying the device will stand at each of these points for a few seconds, and will record the reported location by the system under test.
Important: The system under test cannot assume that, when powered up, it is at a known location that is either predefined or manually entered. Any system that does this will be disqualified.
WiFi Access Points: Many of the submissions rely on generic WiFi access points. To avoid having each participating team deploy 10 WiFi access points, we will provide them: we will deploy 10 WiFi routers at different places within the evaluation area, named MSLocalizationX (X=1,…,10). Participants that rely on generic WiFi access points should use only these 10 routers. There are only 3 hotel access points in the area, and they may be used for connectivity but NOT for localization purposes. If your team relies on generic WiFi access points, there is no need to bring and deploy your own. Only teams that require specialized WiFi access points (i.e., special software/hardware) will be allowed to deploy their own; to do so, you first need to request permission from the organizers (email@example.com).
Hardware Deployment: The hotel is happy to support our hardware deployment requirements. We will be able to deploy hardware on the floor, on the walls, on top of metallic/plastic tripods, and, if really needed, on the ceiling (this is a complicated option; please contact me if your team really needs it!). Please make sure you bring the necessary equipment for deploying your custom hardware (e.g., tripods). The hotel should be able to accommodate some of our needs, but given the number of teams participating in the competition, it might be hard to accommodate every single request. Please try to be as self-sufficient as possible, and remember that the competition is taking place at a hotel, where there are always restrictions on what can be deployed and how.
Poster Session: Every participating team will have the chance to present its approach during the poster session on Tuesday, April 15th. The preferred poster size is DIN A1 in portrait orientation (594 mm x 841 mm, i.e., 23.39 in x 33.11 in – http://en.wikipedia.org/wiki/Paper_size).
Indoor Localization Panel: On Wednesday, April 16th, a panel on Indoor Localization will be held from 4pm to 5:15pm (http://ipsn.acm.org/2014/program.html ). All teams are encouraged to participate. The panel will consist of people from academia and industry. The panel’s main goal will be to discuss the current state-of-the-art in indoor localization, future directions, and of course the results and experiences of the competition.
Call for Contestants
Both academia and industry submissions are encouraged. All localization techniques, such as ranging, fingerprinting, infrastructure-based, or device-free, are welcome, except those that require manual measurements by end users. Contestants can deploy their own infrastructure of up to 10 devices. Normal RF interference is expected, but jamming other deployments is not allowed. The results must be shown on a portable device, such as a phone, tablet, or laptop that a person can easily carry around.
Demo submissions that do not meet one or more of the guidelines above will be included in the poster session and evaluated as regular submissions, but they will not be considered for prizes.
The competition will take place if at least 5 teams respond to this preliminary call for competition.
Depending on the nature and number of submissions, multiple categories might be defined based on the accuracy (e.g., point-based vs. area-based), the size, the cost, or the type (e.g., software vs. hardware) of the proposed solution. The final set of categories will be announced after the registration deadline.
A poster session dedicated to all competition participants will be organized during the conference. Participants will have the opportunity to explain their system to conference attendees.
Evaluation and Prize
Results are judged on both room/zone-level accuracy and absolute accuracy, and an award will be given to the top 2 teams in each category. In the event of an accuracy tie, infrastructure requirements will be used as the tie-breaker. The winning team in each category will be invited to present its approach at the conference and will receive a cash award.
Contestants must submit an abstract describing their approach and deployment requirements by the contest registration deadline. Submissions are treated as confidential until the competition. Submissions must be at most one (1) single-spaced 8.5″ x 11″ page, including figures, tables, and references, and should follow the exact same format as regular, full IPSN 2014 papers. Templates can be found here.