Getting good VIBEs from your computer with Dr. Mary Czerwinski

Dr. Mary Czerwinski. Photo courtesy of Maryatt Photography.

Episode 20, April 18, 2018

Emotions are fundamental to human interaction, but in a world where humans are increasingly interacting with AI systems, Dr. Mary Czerwinski, Principal Researcher and Research Manager of the Visualization and Interaction for Business and Entertainment group at Microsoft Research, believes emotions may be fundamental to our interactions with machines as well. And through her team’s work in affective computing, the quest to bring Artificial Emotional Intelligence – or AEI – to our computers may be closer than we think.

Today, Dr. Czerwinski tells us how a cognitive psychologist found her way into the research division of the world’s largest software company, suggests that rather than trying to be productive 24/7, we should aim for Emotional Homeostasis instead, and tells us how, if we do it right, our machines could become a sort of “emotional at-work DJ,” sensing and responding to our emotional states, and helping us to become happier and more productive at the same time.

Transcript

Mary Czerwinski: We were calling it like a DJ. It’s your emotional at-work DJ. So, you know, when you need everything really ramped up and going strong like a DJ knows how to do… but then when it’s time to calm the crowd down, maybe our software can learn, “Okay, Mary needs to take a break. Mary needs to slow down. Mary needs to go get a glass of water…”

Host: You’re listening to the Microsoft Research Podcast, a show that brings you closer to the cutting edge of technology research and the scientists behind it. I’m your host, Gretchen Huizinga.

Emotions are fundamental to human interaction, but in a world where humans are increasingly interacting with AI systems, Dr. Mary Czerwinski, Principal Researcher and Research Manager of the Visualization and Interaction for Business and Entertainment group at Microsoft Research, believes emotions may be fundamental to our interactions with machines as well. And through her team’s work in affective computing, the quest to bring Artificial Emotional Intelligence – or AEI – to our computers may be closer than we think.

Today, Dr. Czerwinski tells us how a cognitive psychologist found her way into the research division of the world’s largest software company, suggests that rather than trying to be productive 24/7, we should aim for Emotional Homeostasis instead, and tells us how, if we do it right, our machines could become a sort of “emotional at-work DJ,” sensing and responding to our emotional states, and helping us to become happier and more productive at the same time.

That and much more on this episode of the Microsoft Research Podcast.

Host: Mary Czerwinski, welcome to the podcast today. It’s great to have you with us.

Mary Czerwinski: Thank you, it’s an honor to be here.

Host: So, you work under the larger umbrella of Human Computer Interaction at Microsoft Research, but more specifically, you’re the Principal Researcher and the Research Manager of the VIBE Group.

Mary Czerwinski: Right.

Host: Tell us what VIBE stands for and what gets people in your group up in the morning.

Mary Czerwinski: VIBE stands for Visualization and Interaction for Business and Entertainment. Most of the people in the group are very deeply into various aspects of information visualization, so helping people work with big data better. We’re building tools for programmers, so programmers can deal with all the vast amounts of data we have coming in these days. Or in the area of affective computing, which is where I’m squarely putting my research these days.

Host: Microsoft Research has a number of social science researchers now.

Mary Czerwinski: Yes.

Host: But you actually were the first social science researcher. How did you end up here? Did they come looking for you? Um, who decided they needed a PhD in Cognitive Psychology to help with computer science research?

Mary Czerwinski: That’s a great question. I took the initiative to meet with Dan Ling who was managing all of research at that time and talked to him about social science and the benefits of having social scientists on the research teams. But it was actually Eric Horvitz and George Robertson who sought me out as a psychologist to partner with them in the research around attention and rendering on the screen and redesigning Windows in 3D, and so the two of them really kind of helped usher me over to Research. I was actually in Product at the time. So, I was in Microsoft, just wasn’t in the Research group. So, it was a great time to come out into the field as a cognitive psychologist because back then, we had one computer screen and it was all about users being able to look at the screen and make sense of it. Well, that’s what cognitive psychologists study: perception and cognition around information, learning, memory, making decisions, attending to various aspects of the displays. So, it was perfect timing for me to put what I did my PhD in to work for Microsoft.

Host: You know, I would not even have gone there in terms of that thinking, just given where we are today with the attention economy, as they’re calling it…

Mary Czerwinski: Yes. Yes.

Host: And there’s so many more demands on our attention. So, tell me how it’s evolved since when you started, to now.

Mary Czerwinski: Yeah. Yeah, well, I mean back then, the experiments we would do for instance, with the astronauts at Johnson Space Center, I would be looking at their layouts of their displays and they’d be running nine experiments at a time and they had to monitor them all the time, in addition to the various space station systems like thermal control, which is kind of important. And so, we looked at character-based user interfaces and how people can track that kind of information. That all changed when graphical user interfaces came along. And then it was more about how you grouped things, and how you arranged them on the screen, and it’s still about that today, that hasn’t gone away. But now, you know, everything is more social and now we have multiple devices and multiple displays we have to look at. So, it’s just fanned our attention out even more broadly. And harder to control, obviously.

Host: Yeah, so makes your job a little harder as well.

Mary Czerwinski: It does, yes.

Host: But that’s exciting.

Mary Czerwinski: Yeah, it’s really fun.

Host: Your research focuses on a couple of big ideas: emotion tracking, information worker task management, and healthcare and wellness for individuals and groups. And we’ll get to each of those in turn.

Mary Czerwinski: Okay.

Host: But for starters, let’s talk a little bit about this big idea of Affective Computing and the quest for Artificial Emotional Intelligence, or AEI. What is it and why is it important?

Mary Czerwinski: Well, I don’t think we’ll actually have quite natural user interfaces until we can actually talk to our systems the way you and I naturally talk to each other. And so, a large part of that is these really natural, almost automatic, social signals that we cue off of each other. So, the way you’re nodding your head gently in agreement with me is a way that I know that you’re mimicking me, basically, and my thoughts.

Host: Right.

Mary Czerwinski: So, we believe that systems will need to have that kind of emotional intelligence, EQ, in order for them to be natural enough for us to really engage with them and want to keep engaging with them, so there’s not that freaky uncanny valley where it just doesn’t feel natural, it doesn’t feel right. So, we’ve been pursuing this emotion tracking in order to track your emotion as a user, so that the system can respond more like a human would, um, emotionally, appropriately.

Host: So, do you think we’ll ever get real artificial emotional intelligence with a machine?

Mary Czerwinski: Well, that’s TBD. I would rather be on the edge exploring those things as they happen, as a scientist, and trying to make the most of it.

Host: Part of your research involves designing what you call “delightful systems” but it isn’t always easy to tell whether they are really delightful or not. How are you tackling this, Mary?

Mary Czerwinski: I like to say, “How can we say that we’re designing systems that delight our users if we don’t actually know?” So that is why we are using cameras and microphones to look at our users’ faces to see what their facial gestures are. Are they smiling, are they squinting their eyes, are they frowning? But also to listen to their voices, to hear how they are saying things about working with our systems. And if we can build these kinds of emotion-sensing platforms into all our products, then we’ll know from the get-go if we’re actually designing, uh, systems and tools that users are delighted by. Or are they frustrated? And then we should fix those areas that seem to bring on the frustration. So, I think if you build these tools into the platform of all of our systems that we build in the future, we’ll know better if we’re making, you know, systems and software that people love.

Host: You were part of a recorded presentation called, “And how does that make you feel?” Which I loved the name of…

Mary Czerwinski: Yes.

Host: You know it’s like the psychiatrist: how does that make you feel…

Mary Czerwinski: Yes, exactly.

Host: And at several points you got frustrated with the fact that your video wouldn’t play. And about the third time, you actually flashed a big smile and go, “That’s just great!” And I remember thinking, how would I, as a machine, know that you weren’t really happy and delighted with me?

Mary Czerwinski: That’s a great question. That’s a great question. Because we never think – now I’m channeling Daniel McDuff on my team – we never believe that a one- to ten-second slice of a user’s face is going to give us anything worthwhile. What we believe in is that you have to watch and listen to this user for a long time, longitudinally, so that we really understand the range of a person’s emotions. People are all over the place with how labile their emotional state is. I’m pretty up and down. I have a broad range, um, but some people aren’t. Some people are much, you know, flatter or just more controlled with their emotions. And so, you really do have to study an individual for a long time before you know. But then also, the machine needs context about what you’re doing. So, if the machine had been watching and seeing I couldn’t play my video, the machine would learn that that smile is probably a smile of complete frustration, right?

Host: Or a grimace.

Mary Czerwinski: Yeah, or a grimace, a grimacing smile it could have been. But yeah, so the machine needs the context and the longitudinal data about you, yourself, as a person, before it can have really good assuredness in that it’s making the right emotional classification.

Host: So that’s an interesting thread to drill in on. Because you want to build a system that will work for many people, not just one person. And if my emotional ups and downs are different than hers, then how are you going to bring the research together and make a machine that can tell the difference, if you need to track me for a long time and him for a long… you know, times a million.

Mary Czerwinski: Yeah, well, because you can do this now with a simple webcam and mic on any laptop or PC. It’s pretty easy, and the user doesn’t have to do anything to do this kind of tracking longitudinally. And we’re doing it right now here in Building 99. Many of us are getting our emotions classified while we’re using everyday software systems and doing our regular work. So now, we have thousands and thousands of hours of these emotions tracked and we can bring them to bear… Not only with generalized models, but also personalized models that could be more relevant just to me. So, we can do both scales quite well.

Host: Okay, so you could even employ machine-learning – AI – to actually customize in a particular product for a particular person…

Mary Czerwinski: Correct, yes. Yes, that’s the angle.

Host: That would be cool…

Mary Czerwinski: Mmm hmmm.

(music plays)

Host: You framed some of your research in the category of productivity tools. Tell us about the technology you’re working on to help mitigate negative interruptions, multitasking… or, I think you really would call it task switching, because there is no such thing…

Mary Czerwinski: Right.

Host: At least that is what I tell my daughter… “You’re not multitasking.” Um. And some of the general lack of focus that we experience at work and if we’re honest, in life.

Mary Czerwinski: Yes. You know, I do a lot of work with Shamsi Iqbal and Gloria Mark. And Gloria has really opened my eyes to the fact that everybody has a really different personality. Some people, it turns out, are quite adept at managing their workflow. So, they stay pretty concentrated and focused for longer periods of time than other people. And they take breaks when they know that they need to, when they need to refresh because no one can stay focused 24/7, that’s a myth. So, we like to refer to something called Emotional Homeostasis, which is, you stay focused, you work hard, maybe you are even a little stressed. And then you do something to balance that out, right? So maybe you use social media, if you’re one of those people. But other people have a harder time staying focused for longer periods of time. And actually, every time they hear a ping, or every time they get an email notification, they go to it. The little numbers on the icons are the worst thing for these kinds of people, because they see they have something new and they have to go check it. So…

Host: Me.

Mary Czerwinski: …for those kinds of people, we can actually use machine learning, and this personalized kind of software we’ve been talking about, to see when you need to be in the flow, to see when you actually do your best work, and to kind of help you stay there. So perhaps that’s the time – when you’re really starting to get into your work – perhaps that’s the time we turn off notifications or hold the less high-priority notifications away from you. Turn off that inbox, you know, dinger, and let you focus. We might even turn off social media, if we go that far and the user gives us permission to do that. We’ll do whatever the user wants us to do to keep them in the zone. And then, you know, when you tend to kind of come out of that flow – maybe it’s right before lunch – let it all flow back in and it will be just fine!

Host: And the machine can see that you’re moving out of the zone and it’s going to turn back on automatically?

Mary Czerwinski: This is current research we’re doing right now. This is our hypothesis, that we’ll be able to see when you prefer to have your hardcore work moments, and when you are okay with letting notifications through and possible work breaks through. Or we can see if you’re one of those people that stays really focused. Maybe we can see you’ve been focused a little too long for yourself. Maybe you are getting a little stressed out, and a little walk would be quite welcome at that point. But it’s all work that’s in progress. So, we still don’t know how these ebb and flow… we kind of, like, we were calling it like a DJ. It’s your emotional at-work DJ. So, you know, when you need everything really ramped up and going strong like a DJ knows how to do… but then when it’s time to calm the crowd down, maybe our software can learn, “Okay, Mary needs to take a break. Mary needs to slow down. Mary needs to go get a glass of water.” Bio break. Something, you know. So, we’ll see how welcoming users will be of such software, and we’ll design it until they do welcome it. We’ll iterate it and do it better.

Host: If we’re using these tools to get in the zone and stay there, should we call it artificial productivity?

Mary Czerwinski: No, it’s real productivity.

Host: I’m kidding.

Mary Czerwinski: It’s artificially assisted.

Host: Artificially assisted productivity.

Mary Czerwinski: Yeah. Yeah.

Host: Well, so let me ask you this, as a psychologist, what happened to good old-fashioned self-discipline? Have we just lost that in this digital world?

Mary Czerwinski: Well, I will tell you, in a study we ran two summers ago, for a couple of our participants, we turned off all their social media, all their notifications. They weren’t allowed to use their phone for eight hours a day. And a couple of our participants did make note of the fact that they had lost sight of how often they were going to social media and checking their phones, and they were shocked by how much more time they had. So, in some sense, the sad truth is yes, we have trained ourselves to do this task-switching on an almost constant basis. And on a great note, software can help. So, you know, maybe we can train it back!

Host: Let’s get a little meta on that topic and this whole concept of digital assistants and affective agents. And kind of on the same note, have we really reached the point where we need technology to save us from our technology?

Mary Czerwinski: I just did a press interview yesterday where I did actually make the claim that we need technology to save us from our technology, so I can stand by that. I’ve said that. But as I said, all of our task-switching “bad” behavior was trained. It was trained by the tools and the features that we asked software companies to build for us, or tech companies in general. So, we can use the technology to retrain ourselves to focus. I believe we can. I know we can. The research shows we can. And we’ll make better decisions. We’ll do better at our jobs. And we’ll be more productive. So yes, if we have to start using technology as training wheels to get rid of all the bad technology that’s, you know, bifurcating and dividing our attention, um, I don’t think that’s a bad thing. I think it’s a good thing. We’ve come around full circle.

Host: From a psychological point of view, aren’t we at the risk of outsourcing some basic human skill sets that are emotional… that are necessary for emotional growth and health?

Mary Czerwinski: Right. So, I wouldn’t say that we’ve gone that far, but it is nice that we can use the machine intelligence to kind of protect us again and get us back into focus. I think that’s going to be great. In terms of the emotional side of things, I really do think that the advent of robots taking care of children and the elderly is almost upon us. That is an outsourcing, in my personal opinion. Possibly a necessary one, but it’s still an outsourcing. And so, that’s why I’m really, really vehemently opposed to not studying what the effects of children using robots and personal assistants are on their own communication, growth, and behavior. But also, I’m really adamant that machines need to have EQ. Not because I don’t want you to know that you’re working with a machine or a robot. But because I want that conversation to have an emotional balance and to be emotionally mature so that kids don’t grow up with this imbalance, with lack of EQ, lack of real intelligence.

Host: Right.

Mary Czerwinski: So, I think it goes both ways and that’s why I want to study both of those aspects of it because it could be worrisome, right?

Host: Absolutely. So, on that thread, affective computing, as I’m looking at it, seems to have two sides to it. One is designing machines that can interpret human emotional states and adapt to them. And the other is designing machines that can help humans interpret their own emotional states. First, am I right? And second, if so, why is this two-sided approach necessary and happening?

Mary Czerwinski: Most people aren’t actually very aware of their emotional state. And so, what we find when we first start doing our experiments, is it actually takes people a day or two to start understanding how they actually feel. I’m usually in the positive quadrant of how we do these self-ratings. And it took me even a little while to realize, I’m not always, you know, happy and high energy, right? Sometimes I am kind of low energy. Sometimes I am sad. So, after you track yourself for a while, and you are honest with yourself, then you start to realize how the states move around. And so, in any experiment, we always know that at the very beginning, users might not be super good at it, but they’ll get better at it over time. And I think that it is very useful because we forget our emotional states very, very quickly. A quick little survey we did showed people forget how they were feeling in about a day. And if it’s a really big event that happened in your life, you’re probably going to remember how you felt. But it’s those little patterns of “micro-badness” and “micro-goodness” we don’t actually pay that much attention to, and it could really help us make better decisions. Like for instance, people were telling us after like a week of tracking themselves and whatnot that they wouldn’t remember these things a year from now. So, having a system that tracked it would be quite useful. Now, it has to track it accurately. So, there should be a way that you can correct the system when it’s wrong. But I think it’s kind of nice to go back and look at how I was feeling. And actually, I’m always surprised that I am happier most of the time, you know. And frustrated very little of the time, but there are moments you would forget otherwise, so it’s useful to you. It might be useful if you were taking care of a loved one, to be able to see those tracks.
Right now, most people go to a therapist’s office or a doctor’s office and fill out a paper form for how they’ve been feeling the last four weeks. Well, people don’t remember how they were feeling the last four weeks. So, um, these tools can be very useful if they could be shared with loved ones and caregivers.

Host: Yeah, so let me ask a question about that, if I’m going to be monitored, in all these situations, what’s watching me? What’s recording… What I want to know is, I’m in front of my computer right now.

Mary Czerwinski: Okay.

Host: And it’s a laptop and I might be working at a desktop at my desk. But out in the wild, I don’t have these devices.

Mary Czerwinski: Oh, right, no, you would have to have some device that has a camera and/or a mic, but you can have either one.

Host: Okay.

Mary Czerwinski: That’s constantly listening to you in those particular contexts that you might want it to. Right now, it’s very easy when people are just at their desk or in a meeting.

Host: Right.

Mary Czerwinski: So, actually, we’re tackling meetings next which will be fascinating.

Host: Oh, wow.

Mary Czerwinski: Yeah…But yeah, you would have to have some kind of recording device on you if you wanted to get it 24/7 all day long.

Host: Well, let’s talk about that for a second, and then we’ll swing back to some other topics. Increasingly, the mechanisms that are collecting data include video, audio, maybe wearables, ingestibles, implantables, lunchables… I don’t know. How do we reconcile the desire to have this knowledge, this quantified-self, this self-awareness, with our desire for privacy and our fear of big brother?

Mary Czerwinski: Right, no, that’s why we do everything we can to keep your signal private to you. There’s no identifiable information that goes into the Cloud at all. We, as researchers, can’t go in and look at your data if you’re running our system. We can’t.

Host: And so, is it encrypted? I mean, how would I be confident?

Mary Czerwinski: You are completely de-identified. We can’t find you unless you give us a code back, you know, your ID, and tell us you want your data, which of course we can give you, if you want it. But that’s the only way we know who you are.

Host: Well, I just interviewed Kristin Lauter, who works in cryptography.

Mary Czerwinski: Right.

Host: And it’s like all of the different teams coming together to make these tools both fantastic and trustworthy at the same time, which is super important.

Mary Czerwinski: And we will surely use her stuff. We work with Ronnie, on her team, so he talks to me about what they’re doing all the time.

Host: Oh, good.

Mary Czerwinski: It will be a good thing to have.

Host: Yeah. In fact, you used a funny phrase to frame the Human Computer Interaction Group’s relationship to other groups at Microsoft. Tell us what it is and why you say it.

Mary Czerwinski: Yeah, human computer interaction people often play glue between various technology teams because we’re usually the user-facing part of the experiment. And we can see when, you know, there’s a technology that’s very right to present to a user but it needs something. Like what we were just talking about. It needs encrypted privacy. So, we’ll bring the two teams that need to talk to each other together, because we go out and put our user interfaces on all of their technologies, so we kind of know what they’re doing. So that’s why I say that.

Host: Sure. So, tell me what is going on in the research stage and even in the product stage on things that can help me track my emotions or my state-of-being at work?

Mary Czerwinski: What we do know is that users really could use help from our technology to turn off notifications, to possibly turn off non-work-related websites like social media, and help the user focus. And maybe the user just wants to say something like, “I just want to turn all that off for a half hour. Then I’ll be fine.” And the system should help them do that. Eventually the goal would be to do that automatically for the user, so the user doesn’t have to remember to do something manually, on their own.

Host: Right. Hmmm. That’s an interesting thing… I don’t know if it’s better if I develop the skill to remember to turn it off or to have somebody do it for me.

Mary Czerwinski: Right, we have to do studies on all of this.

Host: Right.

Mary Czerwinski: Right.

(music plays)

Host: One of the biggest problems I have – and I know is I’m not alone – is wrangling and making sense of my data. How are you helping me?

Mary Czerwinski: Ahhh! Well, I know you did a podcast with Steven Drucker.

Host: I did. He was so cool.

Mary Czerwinski: So, he’s one of the people in our group who does wonderful work in information visualization. But we have many researchers that do work in information visualization at Microsoft Research. And they’re all fabulous. They all take different aspects of how you look at your data. Some of them like to work with networked data. Some of them like to work with time-based data. So, there are many, many tools in the “info vis” community – info vis stands for information visualization.

Host: Thank you.

Mary Czerwinski: You’re welcome… that can be thrown at these kinds of big data problems. Another thing that Danyel Fisher has done in our group, which I thought was really nice, is he starts to show you your data as it’s coming in. But if there is so much data that it can’t all be rendered at once, it just gives you a feel for what the data is going to look like, and you decide when you think you have enough to make a decision. So, they are all working with these kinds of interactive visualization tools to help you see patterns in your data that you might not have been able to see, you know, in an Excel spreadsheet, using standard bar graphs or line graphs.

Host: So, you said, um, some of this is still in the research stage, but some of it’s already incorporated into BI, and Excel, and things like that.

Mary Czerwinski: Yes. Power BI has been a great vehicle for taking visualization work and exposing it to end-users and I think it’s been very helpful. It’s very hard to come up with novel visualizations, so it takes a lot of creativity, a lot of work, to cook up something that people can use pretty quickly, because they’re usually pretty sophisticated, versus the standard bar chart or pie chart. So, it does take users a little while to learn how to use them, but then when they do they are very powerful and useful.

Host: Yeah, yeah. Let’s look to the future for a second. Your group is publishing prolifically. You’re going to CHI in a bit, with a lot of papers. What excites you most about what the field is doing right now and what might be exciting to the next generation of human computer interaction scholars?

Mary Czerwinski: Right. Well, I think the whole movement towards the gig economy is really new and exciting and it’s studied pretty hard at CHI. It’s represented pretty well at CHI. So, I tend to go into two tracks when I’m at CHI because there are like 14 parallel tracks, so you have to pick. And I tend to go into mental health and e-health. And then I try to go to the gig economy tracks because that world is moving so fast. So, micro-tasks are emerging… you know, even, just think of writing a Word document now. We have researchers in the building who are working on just taking little parts of writing that document and making it a task for somebody to do out there, right?

Host: Jamie Teevan and I did a whole podcast on this.

Mary Czerwinski: Jamie Teevan, OK, good. And Shamsi Iqbal is doing it as well, as some others. But that economy, whether you’re talking about taxi drivers or airplane designers, that economy is going gig. And so, it is just fascinating to listen, to hear the research they’re doing and the trends that are happening, and how those gig workers actually work with each other to make sure they each succeed and make a good wage. So, to me, I always follow those two tracks and I think the young guns will need to do a lot of work on the gig economy.

Host: Right.

Mary Czerwinski: So that’s exciting, but also health and mental health, I mean, are really moving fast, and using machine learning to look at, you know, precision psychology, for instance, is a huge topic. So, I’m hoping the young people look at those two areas, but there are many more very hot areas, those are just my two favorites.

Host: We talked about what gets you up in the morning. Is there anything, particularly, that keeps you up at night, Mary?

Mary Czerwinski: Yes, I do get worried about our systems that will eventually have good EQ, but they don’t yet. And how working with the systems today, when they are not truly intelligent, they don’t filter by age, gender, location, etcetera, in terms of what they say, pretty much, these days. I worry that it changes people’s communication patterns because they are going to pattern off the system. So, I really get worried, in particular, about young people. The elderly… I imagine their morals are already built in and they learned their communication style pretty well by now, but I really worry about the young generation, and the generation that maybe, as we said, maybe they’re going to be working more with a robot than they are with, you know, a teaching assistant in the future. So, I want to make sure that that modeling is what we, as a society, see as appropriate, moral, and positive. So, I really want to focus on generational issues like that. And also, what is the appropriate EQ for a machine, right? Do you want to know that it’s not human? I think most of us want to know that it’s not human. And so, how do you do that, and yet still make the conversation feel natural and make the conversation feel appropriate?

Host: That question is so profound because what much of the work happening here is about is making computers more human-like, and yet when you asked that question, “Do you want to know?” I’m nodding my head really hard. Yes, I want to know it’s not… I don’t want to be fooled by the Turing Test.

Mary Czerwinski: Right. Right. So, we have to, again, as a society come up with a plan, basically.

Host: Hmmm. You doing that?

Mary Czerwinski: Thinking about it hard.

Host: If you weren’t a researcher, what would you be?

Mary Czerwinski: I really wanted to go back and get my PhD in bio-psychology at one point. I… you know… I can’t not be a researcher. I’m sorry!

Host: I was just going to say, you answered that by saying, “I’d be a different kind of researcher.”

Mary Czerwinski: I mean if I would be a professional tennis player I’d do it, but I don’t have the talent, so it’s not going to happen.

Host: Last question. What advice would you give to your 25-year-old self?

Mary Czerwinski: Mmm…

Host: This is for the researchers out there who would be looking at, “What do I do? Where do I put my time, talent, treasure?”

Mary Czerwinski: Yeah, what I tend to tell young people that I mentor that are still in graduate school, for instance, because that’s about 25. I tell them not to think that life is just a straight line, right? It’s never a straight line. You don’t get your PhD and go from point A to point B to point C to point D. Sometimes, if you stay in academia, that can happen, but even then, I don’t think your career is going to go in a straight line. Too much happens. And in my particular case, technology changes way too fast, so you always have to be open to something. I thought for sure I was going to go into academia. I couldn’t, because of a two-body problem. I ended up in industry. Oh, my god, was that the best thing for me ever. That was just perfect. And then I jumped around, right? I went to Johnson Space Center. I went to Bell Communications Research. I kept my foot in academia, because I thought, you know, someday, I might want to teach again. Got this wonderful job at Compaq. Which led me to Microsoft. Which led me to Microsoft Research. You know, you just got to stay open, and you’ll know when it’s time to leave… things won’t feel right. And then that great opportunity might be out there and just be open to it.

(music plays)

Host: Mary Czerwinski, thanks for joining us today.

Mary Czerwinski: Thank you.

Host: And sharing so much about what you’re doing, and we can’t wait to see what is going to happen.

Mary Czerwinski: Thank you, it was my pleasure.

Host: To learn more about Dr. Mary Czerwinski, and how to have a better relationship with your computer, visit Microsoft.com/research.
