Microsoft Research Blog

(Th)Inking with data — tapping into the potential of the digital pen

May 2, 2019 | By Nathalie Henry Riche, Researcher

ActiveInk aims to capture the freedom of a traditional pen, making consuming information and externalizing thoughts about it a cohesive digital experience. The system’s active ink turns pen strokes into a means of executing an intended action, such as highlighting, hiding, or cutting specific information.

Whether at work or at home, people are regularly gathering and interpreting information to build their base of knowledge and gain a deeper understanding of the world around them. In this endeavor, they encounter data in many shapes and forms—lines of text in books, timelines and charts in magazines and newspapers, photos in print and online. They’re searching webpages and browsing interactive maps to identify good schools for their children or to find a great house to purchase; they’re monitoring their medical and financial data to make informed decisions in their lives.

When making sense of information on paper, many people engage in active reading, crossing out parts of data visualizations that are irrelevant to their current analysis, identifying outliers in a scatterplot, or marking passages of interest. They may also jot down their hypotheses and interpretations directly on top of the data, such as drawing a correlation line. As people conduct more of their lives electronically, it’s natural to offer them a way to engage in these active reading activities on their computers with a digital pen.

But it’s not enough to try to replicate the pen-and-paper experience. In fact, it might not be possible, as even with hardware and software advances in the past decade, writing on a piece of glass doesn’t feel the same as writing on paper. The digital experience doesn’t match the sensory aspect—the tactile, visual, and auditory feedback—of the analog experience. Affordability and reliability are also among the many reasons why people might choose pen and paper over the digital option. Instead of seeking to emulate the analog experience, our aspiration is to uncover the unique capabilities of the digital pen. Our research aims to give the digital pen superpowers for marking documents in ways that go beyond what is possible with physical pen and paper.

The power of active ink

With the web-based system ActiveInk, we’re enabling users to seamlessly transition between exploring data on screen and externalizing their thoughts on screen using pen and touch. ActiveInk, inspired by such previous work as InkSeine, allows the natural use of the pen for active reading behaviors while supporting analytic actions on the underlying data by activating any of the ink strokes laid on the screen. Marks serve not only as reminders of an action to be applied later, such as extracting, removing, enhancing, or ignoring a particular piece of information; they become the means to accomplish the action. It’s active ink. More than a set of strokes, it facilitates an interaction between users’ thoughts and the underlying content.

Let’s say, for example, you’re working with a map of the United States to identify those regions of the country where you might want to live and work. You decide to focus on only the East and West coasts. You can cross out or scribble across the Midwestern states—your indication that the information is unnecessary—and use the same markup to initiate the desired action, which in this case is to delete those states.

With one pen, users can mark things freely in a variety of colors and stroke styles and act on those marks later. Activating these annotations by tapping on them reveals a set of commands, such as highlight, remove, and label, and analytical functions, such as computing a regression. The digital pen can also be used as a precision input to cut off outliers in a cluttered visualization.
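To make the idea concrete, here is a minimal TypeScript sketch of how a stroke might carry both its ink and a deferred action on the data beneath it. The names and structure (ActiveStroke, DataMark, activateStroke, InkAction) are illustrative assumptions, not the ActiveInk implementation.

```typescript
// A minimal sketch of the "active ink" idea: a stroke is kept as ordinary ink
// until it is activated with an action that operates on the data marks it touches.
// All names here are illustrative assumptions, not the ActiveInk implementation.

type Point = { x: number; y: number };

// Actions mentioned in the post: highlight, remove, label, compute a regression.
type InkAction = "highlight" | "remove" | "label" | "regression";

interface DataMark {
  id: string;
  bounds: { x: number; y: number; width: number; height: number };
  highlighted: boolean;
  hidden: boolean;
}

interface ActiveStroke {
  points: Point[];     // the raw ink, which stays visible as an annotation
  action?: InkAction;  // unset until the user activates the stroke
}

// Activating a stroke turns the annotation into an operation on the data
// marks it covers; the ink remains on screen as a record of what was done.
function activateStroke(stroke: ActiveStroke, action: InkAction, marks: DataMark[]): void {
  stroke.action = action;
  const touched = marks.filter(m => stroke.points.some(p => hits(m, p)));
  for (const m of touched) {
    if (action === "highlight") m.highlighted = true;
    if (action === "remove") m.hidden = true;
    // "label" and "regression" would dispatch to richer handlers here.
  }
}

function hits(m: DataMark, p: Point): boolean {
  const b = m.bounds;
  return p.x >= b.x && p.x <= b.x + b.width && p.y >= b.y && p.y <= b.y + b.height;
}
```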

The driving force

Switching between digesting data on screen and externalizing resulting thoughts and analysis with pen and paper has physical and cognitive costs, including the time spent writing down enough notes to later remember or retrieve the chart or table you need. At the heart of ActiveInk, which we’re presenting at the ACM CHI Conference on Human Factors in Computing Systems (CHI), are three principles to minimize those costs:

  1. Provide space to think. Sensemaking is not linear: People need to cross-reference different pieces of information from different files. They may need to go back to previously seen charts, look for new ones, or skim through a long text to identify a missing piece of information. Thinking often requires capturing fleeting thoughts and revisiting them multiple times before finalizing an analysis. ActiveInk provides users with an infinite canvas to do that. They have free rein to drop information in and reorganize it at will. Marks made on an individual piece of data stay with it as users move it around the canvas, so they never run out of room to work.
  2. Empower the digital pen both to interact with data and to externalize thoughts. Two interface strategies, sketched in code after this list, were developed and evaluated to operate on the infinite canvas:
    • In the prefix method, users first select an action from a menu of options, either with a finger of their non-dominant hand or with the pen in their dominant hand, and the selected action is applied as the pen is used.
    • In the postfix method, everything done with the digital pen results in a pen stroke that can be activated after the fact, as described above; the marks that result in the action remain as a means to recall or undo the action.
  3. Avoid requiring memorization of gestures. The ways people annotate information vary, so instead of asking users to change the way they work, ActiveInk supports how they naturally use a pen.
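As a rough illustration of how the two orderings differ, the following self-contained TypeScript sketch shows prefix strokes executing immediately while postfix strokes wait to be activated. The names (Mode, onStrokeCompleted) are assumptions for illustration, not the ActiveInk API.

```typescript
// Prefix: the action is chosen first, so each new stroke executes it immediately.
// Postfix: strokes start as plain ink and are activated later.
// Names here are illustrative assumptions, not the ActiveInk API.

type Action = "highlight" | "remove" | "label" | "regression";
type Stroke = { points: { x: number; y: number }[]; action?: Action };

type Mode =
  | { kind: "prefix"; action: Action }  // an action was picked from a menu up front
  | { kind: "postfix" };                // plain ink now, action chosen later

function onStrokeCompleted(
  mode: Mode,
  stroke: Stroke,
  apply: (s: Stroke, a: Action) => void
): void {
  if (mode.kind === "prefix") {
    // The pen is "loaded": the selected action runs as soon as the stroke lands.
    stroke.action = mode.action;
    apply(stroke, mode.action);
  }
  // In postfix mode nothing happens yet: the stroke stays as ordinary ink until
  // the user taps it and picks an action; the mark then remains on screen as a
  // record of the action and a handle to recall or undo it.
}
```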

The best of both worlds

For the prefix method of interacting and externalizing to be worthwhile, the user needs some idea of what they want to do with the information, which isn’t always the case when interpreting data in the moment. The postfix method addresses that and encourages free thinking, as it isn’t necessary to choose a tool before making a mark. Whereas the prefix interface is similar to what users have seen in other programs, where editing tool options are visible throughout the experience, the postfix interface requires tapping on the ink to reveal the action options, dragging to preview an action, and finally lifting the finger to execute it. In a qualitative study involving eight participants, we found that users leveraged the prefix and postfix methods in different scenarios: the former for successive actions and the latter for tasks involving more note-taking. To offer the best of both worlds, we implemented a hybrid of the two methods.
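The tap, drag, and lift sequence can be pictured as a small state machine. The TypeScript sketch below uses assumed names and states and is only meant to trace the flow described above, not to reproduce ActiveInk’s implementation.

```typescript
// Postfix activation as a small state machine: tap ink to reveal candidate
// actions, drag over one to preview it, lift the finger to execute it.
// States and names are illustrative assumptions.

type Action = "highlight" | "remove" | "label" | "regression";

type GestureState =
  | { phase: "idle" }
  | { phase: "menuOpen"; candidates: Action[] }
  | { phase: "previewing"; action: Action };

function onTapInk(_state: GestureState): GestureState {
  // Tapping a stroke reveals the actions that could be applied to it.
  return { phase: "menuOpen", candidates: ["highlight", "remove", "label", "regression"] };
}

function onDragOver(state: GestureState, action: Action): GestureState {
  // Dragging over an option shows a live preview without committing to it.
  if (state.phase === "menuOpen" || state.phase === "previewing") {
    return { phase: "previewing", action };
  }
  return state;
}

function onLift(state: GestureState, execute: (a: Action) => void): GestureState {
  // Lifting the finger commits the previewed action; otherwise nothing happens.
  if (state.phase === "previewing") execute(state.action);
  return { phase: "idle" };
}
```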

ActiveInk has the potential to make the digital pen a power tool for both interacting and thinking with information, providing a single space for people to compare and contrast files from different sources using such functions as those offered by Excel and Power BI, as well as note-taking tools. ActiveInk enables people to (th)ink with data, realizing the thoughts they have in ways that are not possible with pen and paper.
