{"id":1009941,"date":"2024-02-29T06:00:00","date_gmt":"2024-02-29T14:00:00","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=1009941"},"modified":"2024-06-10T10:15:53","modified_gmt":"2024-06-10T17:15:53","slug":"abstracts-february-29-2024","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/podcast\/abstracts-february-29-2024\/","title":{"rendered":"Abstracts: February 29, 2024"},"content":{"rendered":"\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"576\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/Podcast_Abstracts_Hero_Feature_No_Text_1400x788-1024x576.png\" alt=\"MSR Podcast - Abstracts hero with a microphone icon\" class=\"wp-image-995001\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/Podcast_Abstracts_Hero_Feature_No_Text_1400x788-1024x576.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/Podcast_Abstracts_Hero_Feature_No_Text_1400x788-300x169.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/Podcast_Abstracts_Hero_Feature_No_Text_1400x788-768x432.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/Podcast_Abstracts_Hero_Feature_No_Text_1400x788-1066x600.png 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/Podcast_Abstracts_Hero_Feature_No_Text_1400x788-655x368.png 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/Podcast_Abstracts_Hero_Feature_No_Text_1400x788-343x193.png 343w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/Podcast_Abstracts_Hero_Feature_No_Text_1400x788-240x135.png 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/Podcast_Abstracts_Hero_Feature_No_Text_1400x788-640x360.png 640w, 
https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/Podcast_Abstracts_Hero_Feature_No_Text_1400x788-960x540.png 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/Podcast_Abstracts_Hero_Feature_No_Text_1400x788-1280x720.png 1280w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/Podcast_Abstracts_Hero_Feature_No_Text_1400x788.png 1400w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n<div class=\"wp-block-msr-podcast-container my-4\">\n\t<iframe loading=\"lazy\" src=\"https:\/\/player.blubrry.com\/?podcast_id=131187950&modern=1\" class=\"podcast-player\" frameborder=\"0\" height=\"164px\" width=\"100%\" scrolling=\"no\" title=\"Podcast Player\"><\/iframe>\n<\/div>\n\n\n\n<p>Members of the research community at Microsoft work continuously to advance their respective fields. <em>Abstracts<\/em> brings its audience to the cutting edge with them through short, compelling conversations about new and noteworthy achievements.&nbsp;<\/p>\n\n\n\n<p>In this episode, Senior Behavioral Science Researcher\u202f<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/t-levt\/\">Lev Tankelevitch<\/a> joins host Gretchen Huizinga to discuss \u201c<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/the-metacognitive-demands-and-opportunities-of-generative-ai\/\" target=\"_blank\" rel=\"noreferrer noopener\">The Metacognitive Demands and Opportunities of Generative AI<\/a>.\u201d In their paper, Tankelevitch and his coauthors propose using the scientific study of how people monitor, understand, and adapt their thinking to address common challenges of incorporating generative AI into life and work\u2014from crafting effective prompts to determining the value of AI-generated outputs.\u202f&nbsp;<\/p>\n\n\n\n<p>To learn more about the paper and related topics, register for\u202f<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab 
glyph-append-xsmall\" href=\"https:\/\/aka.ms\/researchforum\/\" target=\"_blank\" rel=\"noopener noreferrer\">Microsoft Research Forum<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, a series of panel discussions and lightning talks around science and technology research in the era of general AI.<\/p>\n\n\n\n<div class=\"wp-block-buttons is-layout-flex wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button\"><a data-bi-type=\"button\" class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/the-metacognitive-demands-and-opportunities-of-generative-ai\/\">Read the paper<\/a><\/div>\n<\/div>\n\n\n\n<section class=\"wp-block-msr-subscribe-to-podcast subscribe-to-podcast\">\n\t<div class=\"subscribe-to-podcast__inner border-top border-bottom border-width-2\">\n\t\t<h2 class=\"h5 subscribe-to-podcast__heading\">\n\t\t\tSubscribe to the <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/podcast\">Microsoft Research Podcast<\/a>:\t\t<\/h2>\n\t\t<ul class=\"subscribe-to-podcast__list list-unstyled\">\n\t\t\t\t\t\t\t<li class=\"subscribe-to-podcast__list-item\">\n\t\t\t\t\t<a class=\"subscribe-to-podcast__link\" href=\"https:\/\/itunes.apple.com\/us\/podcast\/microsoft-research-a-podcast\/id1318021537?mt=2\" target=\"_blank\" rel=\"noreferrer noopener\">\n\t\t\t\t\t\t<svg class=\"subscribe-to-podcast__svg\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" fill=\"black\" viewBox=\"0 0 32 32\">  <path d=\"M7.12 0c-3.937-0.011-7.131 3.183-7.12 7.12v17.76c-0.011 3.937 3.183 7.131 7.12 7.12h17.76c3.937 0.011 7.131-3.183 7.12-7.12v-17.76c0.011-3.937-3.183-7.131-7.12-7.12zM15.817 3.421c3.115 0 5.932 1.204 8.079 3.453 1.631 1.693 2.547 3.489 3.016 5.855 0.161 0.787 0.161 2.932 0.009 3.817-0.5 2.817-2.041 5.339-4.317 7.063-0.812 0.615-2.797 1.683-3.115 1.683-0.12 0-0.129-0.12-0.077-0.615 0.099-0.792 0.192-0.953 0.64-1.141 0.713-0.296 1.932-1.167 2.677-1.911 1.301-1.303 2.229-2.932 2.677-4.719 0.281-1.1 
0.244-3.543-0.063-4.672-0.969-3.595-3.907-6.385-7.5-7.136-1.041-0.213-2.943-0.213-4 0-3.636 0.751-6.647 3.683-7.563 7.371-0.245 1.004-0.245 3.448 0 4.448 0.609 2.443 2.188 4.681 4.255 6.015 0.407 0.271 0.896 0.547 1.1 0.631 0.447 0.192 0.547 0.355 0.629 1.14 0.052 0.485 0.041 0.62-0.072 0.62-0.073 0-0.62-0.235-1.199-0.511l-0.052-0.041c-3.297-1.62-5.407-4.364-6.177-8.016-0.187-0.943-0.224-3.187-0.036-4.052 0.479-2.323 1.396-4.135 2.921-5.739 2.199-2.319 5.027-3.543 8.172-3.543zM16 7.172c0.541 0.005 1.068 0.052 1.473 0.14 3.715 0.828 6.344 4.543 5.833 8.229-0.203 1.489-0.713 2.709-1.619 3.844-0.448 0.573-1.537 1.532-1.729 1.532-0.032 0-0.063-0.365-0.063-0.803v-0.808l0.552-0.661c2.093-2.505 1.943-6.005-0.339-8.296-0.885-0.896-1.912-1.423-3.235-1.661-0.853-0.161-1.031-0.161-1.927-0.011-1.364 0.219-2.417 0.744-3.355 1.672-2.291 2.271-2.443 5.791-0.348 8.296l0.552 0.661v0.813c0 0.448-0.037 0.807-0.084 0.807-0.036 0-0.349-0.213-0.683-0.479l-0.047-0.016c-1.109-0.885-2.088-2.453-2.495-3.995-0.244-0.932-0.244-2.697 0.011-3.625 0.672-2.505 2.521-4.448 5.079-5.359 0.547-0.193 1.509-0.297 2.416-0.281zM15.823 11.156c0.417 0 0.828 0.084 1.131 0.24 0.645 0.339 1.183 0.989 1.385 1.677 0.62 2.104-1.609 3.948-3.631 3.005h-0.015c-0.953-0.443-1.464-1.276-1.475-2.36 0-0.979 0.541-1.828 1.484-2.328 0.297-0.156 0.709-0.235 1.125-0.235zM15.812 17.464c1.319-0.005 2.271 0.463 2.625 1.291 0.265 0.62 0.167 2.573-0.292 5.735-0.307 2.208-0.479 2.765-0.905 3.141-0.589 0.52-1.417 0.667-2.209 0.385h-0.004c-0.953-0.344-1.157-0.808-1.553-3.527-0.452-3.161-0.552-5.115-0.285-5.735 0.348-0.823 1.296-1.285 2.624-1.291z\"\/><\/svg>\n\t\t\t\t\t\t<span class=\"subscribe-to-podcast__link-text\">Apple Podcasts<\/span>\n\t\t\t\t\t<\/a>\n\t\t\t\t<\/li>\n\t\t\t\n\t\t\t\t\t\t\t<li class=\"subscribe-to-podcast__list-item\">\n\t\t\t\t\t<a class=\"subscribe-to-podcast__link\" href=\"https:\/\/subscribebyemail.com\/www.blubrry.com\/feeds\/microsoftresearch.xml\" target=\"_blank\" rel=\"noreferrer 
noopener\">\n\t\t\t\t\t\t<svg class=\"subscribe-to-podcast__svg\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" fill=\"none\" viewBox=\"0 0 32 32\"><path fill=\"currentColor\" d=\"M6.4 6a2.392 2.392 0 00-2.372 2.119L16 15.6l11.972-7.481A2.392 2.392 0 0025.6 6H6.4zM4 10.502V22.8a2.4 2.4 0 002.4 2.4h19.2a2.4 2.4 0 002.4-2.4V10.502l-11.365 7.102a1.2 1.2 0 01-1.27 0L4 10.502z\"\/><\/svg>\n\t\t\t\t\t\t<span class=\"subscribe-to-podcast__link-text\">Email<\/span>\n\t\t\t\t\t<\/a>\n\t\t\t\t<\/li>\n\t\t\t\n\t\t\t\t\t\t\t<li class=\"subscribe-to-podcast__list-item\">\n\t\t\t\t\t<a class=\"subscribe-to-podcast__link\" href=\"https:\/\/subscribeonandroid.com\/www.blubrry.com\/feeds\/microsoftresearch.xml\" target=\"_blank\" rel=\"noreferrer noopener\">\n\t\t\t\t\t\t<svg class=\"subscribe-to-podcast__svg\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" fill=\"none\" viewBox=\"0 0 32 32\"><path fill=\"currentColor\" d=\"M12.414 4.02c-.062.012-.126.023-.18.06a.489.489 0 00-.12.675L13.149 6.3c-1.6.847-2.792 2.255-3.18 3.944h13.257c-.388-1.69-1.58-3.097-3.179-3.944l1.035-1.545a.489.489 0 00-.12-.675.492.492 0 00-.675.135l-1.14 1.68a7.423 7.423 0 00-2.55-.45c-.899 0-1.758.161-2.549.45l-1.14-1.68a.482.482 0 00-.494-.195zm1.545 3.824a.72.72 0 110 1.44.72.72 0 010-1.44zm5.278 0a.719.719 0 110 1.44.719.719 0 110-1.44zM8.44 11.204A1.44 1.44 0 007 12.644v6.718c0 .795.645 1.44 1.44 1.44.168 0 .33-.036.48-.09v-9.418a1.406 1.406 0 00-.48-.09zm1.44 0V21.76c0 .793.646 1.44 1.44 1.44h10.557c.793 0 1.44-.647 1.44-1.44V11.204H9.878zm14.876 0c-.169 0-.33.035-.48.09v9.418c.15.052.311.09.48.09a1.44 1.44 0 001.44-1.44v-6.719a1.44 1.44 0 00-1.44-1.44zM11.8 24.16v1.92a1.92 1.92 0 003.84 0v-1.92h-3.84zm5.759 0v1.92a1.92 1.92 0 003.84 0v-1.92h-3.84z\"\/><\/svg>\n\t\t\t\t\t\t<span class=\"subscribe-to-podcast__link-text\">Android<\/span>\n\t\t\t\t\t<\/a>\n\t\t\t\t<\/li>\n\t\t\t\n\t\t\t\t\t\t\t<li class=\"subscribe-to-podcast__list-item\">\n\t\t\t\t\t<a class=\"subscribe-to-podcast__link\" 
href=\"https:\/\/open.spotify.com\/show\/4ndjUXyL0hH1FXHgwIiTWU\" target=\"_blank\" rel=\"noreferrer noopener\">\n\t\t\t\t\t\t<svg class=\"subscribe-to-podcast__svg\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" fill=\"none\" viewBox=\"0 0 32 32\"><path fill=\"currentColor\" d=\"M16 4C9.383 4 4 9.383 4 16s5.383 12 12 12 12-5.383 12-12S22.617 4 16 4zm5.08 17.394a.781.781 0 01-1.086.217c-1.29-.86-3.477-1.434-5.303-1.434-1.937.002-3.389.477-3.403.482a.782.782 0 11-.494-1.484c.068-.023 1.71-.56 3.897-.562 1.826 0 4.365.492 6.171 1.696.36.24.457.725.217 1.085zm1.56-3.202a.895.895 0 01-1.234.286c-2.338-1.457-4.742-1.766-6.812-1.747-2.338.02-4.207.466-4.239.476a.895.895 0 11-.488-1.723c.145-.041 2.01-.5 4.564-.521 2.329-.02 5.23.318 7.923 1.995.419.26.547.814.286 1.234zm1.556-3.745a1.043 1.043 0 01-1.428.371c-2.725-1.6-6.039-1.94-8.339-1.942h-.033c-2.781 0-4.923.489-4.944.494a1.044 1.044 0 01-.474-2.031c.096-.023 2.385-.55 5.418-.55h.036c2.558.004 6.264.393 9.393 2.23.497.292.663.931.371 1.428z\"\/><\/svg>\n\t\t\t\t\t\t<span class=\"subscribe-to-podcast__link-text\">Spotify<\/span>\n\t\t\t\t\t<\/a>\n\t\t\t\t<\/li>\n\t\t\t\n\t\t\t\t\t\t\t<li class=\"subscribe-to-podcast__list-item\">\n\t\t\t\t\t<a class=\"subscribe-to-podcast__link\" href=\"https:\/\/www.blubrry.com\/feeds\/microsoftresearch.xml\" target=\"_blank\" rel=\"noreferrer noopener\">\n\t\t\t\t\t\t<svg class=\"subscribe-to-podcast__svg\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" fill=\"none\" viewBox=\"0 0 32 32\"><path fill=\"currentColor\" d=\"M6.667 4a2.676 2.676 0 00-2.612 2.13v.003c-.036.172-.055.35-.055.534v18.666c0 .183.019.362.055.534v.003a2.676 2.676 0 002.076 2.075h.002c.172.036.35.055.534.055h18.666A2.676 2.676 0 0028 25.333V6.667a2.676 2.676 0 00-2.13-2.612h-.003A2.623 2.623 0 0025.333 4H6.667zM8 8h1.333C17.42 8 24 14.58 24 22.667V24h-2.667v-1.333c0-6.618-5.382-12-12-12H8V8zm0 5.333h1.333c5.146 0 9.334 4.188 9.334 9.334V24H16v-1.333A6.674 6.674 0 009.333 16H8v-2.667zM10 20a2 2 0 11-.001 4.001A2 2 0 0110 
20z\"\/><\/svg>\n\t\t\t\t\t\t<span class=\"subscribe-to-podcast__link-text\">RSS Feed<\/span>\n\t\t\t\t\t<\/a>\n\t\t\t\t<\/li>\n\t\t\t\t\t<\/ul>\n\t<\/div>\n<\/section>\n\n\n<div class=\"wp-block-msr-show-more\">\n\t<div class=\"bg-neutral-100 p-5\">\n\t\t<div class=\"show-more-show-less\">\n\t\t\t<div>\n\t\t\t\t<span>\n\t\t\t\t\t\n\n<h2 class=\"wp-block-heading\" id=\"transcript\">Transcript<\/h2>\n\n\n\n<p>[MUSIC PLAYS]<\/p>\n\n\n\n<p><strong>GRETCHEN HUIZINGA:<\/strong> Welcome to <em>Abstracts<\/em>, a Microsoft Research Podcast that puts the spotlight on world-class research in brief. I\u2019m Dr. Gretchen Huizinga. In this series, members of the research community at Microsoft give us a quick snapshot\u2014or a <em>podcast abstract<\/em>\u2014of their new and noteworthy papers.&nbsp;&nbsp;<\/p>\n\n\n\n<p>[MUSIC FADES]&nbsp;<\/p>\n\n\n\n<p>Today, I\u2019m talking to Dr. Lev Tankelevitch, a senior behavioral science researcher from Microsoft Research. Dr. Tankelevitch is coauthor of a paper called \u201cThe Metacognitive Demands and Opportunities of Generative AI,\u201d and you can read this paper now on arXiv. Lev, thanks for joining us on <em>Abstracts<\/em>!&nbsp;<\/p>\n\n\n\n\t\t\t\t<\/span>\n\t\t\t\t<span id=\"show-more-show-less-toggle-1\" class=\"show-more-show-less-toggleable-content\">\n\t\t\t\t\t\n\n\n\n<p><strong>LEV TANKELEVITCH:<\/strong> Thanks for having me.&nbsp;<\/p>\n\n\n\n<p><strong>HUIZINGA:<\/strong> So in just a couple sentences\u2014a <em>metacognitive elevator<\/em> pitch, if you will\u2014tell us about the issue or problem your paper addresses and, more importantly, why we should care about it.&nbsp;<\/p>\n\n\n\n<p><strong>TANKELEVITCH:<\/strong> Sure. So as generative AI has, sort of, rolled out over the last year or two, we\u2019ve seen some user studies come out, and as we read these studies, we noticed there are a lot of challenges that people face with these tools. 
So people really struggle with, you know, writing prompts for systems like Copilot or ChatGPT. For example, they don\u2019t even know really where to start, or they don\u2019t know how to convert an idea they have in their head into, like, clear instructions for these systems. If they\u2019re, sort of, working in a field that maybe they\u2019re less familiar with, like a new programming language, and they get an output from these systems, they\u2019re not really sure if it\u2019s right or not. And then, sort of, more broadly, they don\u2019t really know how to fit these systems into their workflows. And so we\u2019ve noticed all these challenges, sort of, arise, and some of them relate to, sort of, the unique features of generative AI, and some relate to the design of these systems. But basically, we started to, sort of, look at these challenges, and try to understand what\u2019s going on\u2014how can we make sense of them in a more coherent way and actually build systems that really augment people and their capabilities rather than, sort of, posing these challenges?\u00a0<\/p>\n\n\n\n<p><strong>HUIZINGA:<\/strong> Right. So let\u2019s talk a little bit about the related research that you\u2019re building on here and what unique insights or directions your paper adds to the literature.&nbsp;<\/p>\n\n\n\n<p><strong>TANKELEVITCH:<\/strong> So as I mentioned, we were reading all these different user studies that were, sort of, testing different prototypes or existing systems like ChatGPT or GitHub Copilot, and we noticed different patterns emerging, and we noticed that the same kinds of challenges were cropping up. But there weren\u2019t any, sort of, clear coherent explanations that tied all these things together. And in general, I\u2019d say that human-computer interaction research, which is where a lot of these papers are coming out from, it\u2019s really about building prototypes, testing them quickly, exploring things in an open-ended way. 
And so we thought that there was an opportunity to step back and to try to see how we can understand these patterns from a more theory-driven perspective. And so, with that in mind, one perspective that became clearly relevant to this problem is that of metacognition, which is this idea of \u201cthinking <em>about<\/em> thinking\u201d or how we, sort of, monitor our cognition or our thinking and then control our cognition and thinking. And so we thought there was really an opportunity here to take this set of theories and research findings from psychology and cognitive science on metacognition and see how they can apply to understanding these usability challenges of generative AI systems.&nbsp;<\/p>\n\n\n\n<p><strong>HUIZINGA:<\/strong> Yeah. Well, this paper isn\u2019t a traditional report on empirical research as many of the papers on this podcast are. So how would you characterize the approach you chose and why?<\/p>\n\n\n\n<p><strong>TANKELEVITCH:<\/strong> So the way that we got into this, working on this project, it was quite organic. So we were looking at these user studies, and we noticed these challenges emerging, and we really tried to figure out how we can make sense of them. And so it occurred to us that metacognition is really quite relevant. And so what we did was we then dove into the metacognition research from psychology and cognitive science to really understand what are the latest theories, what are the latest research findings, how could we understand what\u2019s known about that from that perspective, from that, sort of, fundamental research, and then go back to the user studies that we saw in human-computer interaction and see how those ideas can apply there. And so we did this, sort of, in an iterative way until we realized that we really have something to work with here. 
We can really apply a somewhat coherent framework onto these, sort of, disparate set of findings not only to understand these usability challenges but then also to actually propose directions for new design and research explorations to build better systems that support people\u2019s metacognition.&nbsp;<\/p>\n\n\n\n<p><strong>HUIZINGA:<\/strong> So, Lev, given the purpose of your paper, what are the major takeaways for your readers, and how did you present them in the paper?&nbsp;<\/p>\n\n\n\n<p><strong>TANKELEVITCH:<\/strong> So I think the key, sort of, fundamental point is that the perspective of metacognition is really valuable for understanding the usability challenges of generative AI and potentially designing new systems that support metacognition. And so one analogy that we thought was really useful here is of a manager delegating tasks to a team. And so a manager has to determine, you know, what is their goal in their work? What are the different subgoals that that goal breaks down into? How can you communicate those goals clearly to a team, right? Then how do you assess your team\u2019s outputs? And then how do you actually adjust your strategy accordingly as the team works in an iterative fashion? And then at a higher level, you have to really know how to\u2014actually <em>what<\/em> <em>to<\/em> delegate to your team and how you might want to delegate that. And so we realized that working with generative AI really parallels these different aspects of what a manager does, right. So when people have to write a prompt initially, they really have to have self-awareness of their task goals. What are you actually trying to achieve? How does that translate into different subtasks? And how do you verbalize that to a system in a way that system understands? You might then get an output and you need to iterate on that output. So then you need to really think about, what is your level of confidence in your prompting ability? 
So is your prompting the main reason why the output isn\u2019t maybe as satisfactory as you want, or is it something to do with the system? Then you actually might get the output [you\u2019re] happy with, but you\u2019re not really sure if you should fully rely on it because maybe it\u2019s an area that is outside of your domain of expertise. And so then you need to maintain an appropriate level of confidence, right? Either to verify that output further or decide not to rely on it, for example. And then at a, sort of, broader level, this is about the question of task delegation. So this requires having self-awareness of the applicability of generative AI to your workflows and maintaining an appropriate level of confidence in completing tasks manually or relying on generative AI. For example, whether it\u2019s worth it for you to actually learn how to work with generative AI more effectively. And then finally, it requires, sort of, metacognitive flexibility to adapt your workflows as you work with these tools. So are there some tasks where the way that you\u2019re working with them is, sort of, slowing you down in specific ways? So being able to recognize that and then change your strategies as necessary really requires metacognitive flexibility. So that was, sort of, one key half of our findings.&nbsp;&nbsp;<\/p>\n\n\n\n<p>And then beyond that we really thought about how we can use this perspective of metacognition to design better systems. And so one, sort of, general direction is really about supporting people\u2019s metacognition. So we know from research from cognitive science and psychology that we can actually design interventions to improve people\u2019s metacognition in a lasting and effective way. And so similarly, we can design systems that support people\u2019s metacognition. For example, systems that support people in planning their tasks as they actually craft prompts. 
We can support people in actually reflecting on their confidence in their prompting ability or in assessing the output that they see. And so this relates a little bit to AI acting as a coach for you, which is an idea that the Microsoft Research New York City team came up with. So this is Jake Hofman, David Rothschild, and Dan Goldstein. And so, in this way, generative AI systems can really help you reflect <em>as a coach<\/em> and understand whether you have the right level of confidence in assessing output or crafting prompts and so on. And then similarly, at a higher level, they can help you manage your workflows, so helping you reflect on whether generative AI is really working for you in certain tasks or whether you can adapt your strategy in certain ways. And likewise, this relates also to explanations about AI, so how can you actually design systems that are explainable to users in a way that helps them achieve their goals? And explainability can be thought about as a way to actually reduce the metacognitive demand because you\u2019re, sort of, explaining things in a way to people that they don\u2019t have to keep in their mind and have to think about, and that, sort of, improves their confidence. It can <em>help<\/em> them improve their confidence or <em>calibrate<\/em> their confidence in their ability to assess outputs.\u00a0<\/p>\n\n\n\n<p><strong>HUIZINGA:<\/strong> Talk for a minute about the real-world impact of this research. And by that, I mean, who does it help most and how? Who\u2019s your main audience for this <em>right now<\/em>?<\/p>\n\n\n\n<p><strong>TANKELEVITCH:<\/strong> In a sense, this is very broadly applicable. It\u2019s really about designing systems that people can interact with in any domain and in any context. But I think, given how generative AI has rolled out in the world today, I mean, a lot of the focus has been on productivity and workflows. 
And so this is a really well-defined, clear area where there is an opportunity to actually help people achieve more and stay in control and actually be more intentional and be more aligned with their goals. And so this is an approach where not only can we go beyond, sort of, automating specific tasks but actually use these systems to help people clarify their goals and track with them in a more effective way. And so knowledge workers are an obvious, sort of, use case or an obvious area where this is really relevant because they work in a complex system where a lot of the work is, sort of, diffused and spread across collaborations and artifacts and software and different ways of working. And so a lot of things are, sort of, lost or made difficult by that complexity. And so systems that are flexible and help people actually reflect on what they want to achieve can really have a big impact here.\u00a0<\/p>\n\n\n\n<p><strong>HUIZINGA:<\/strong> Mm-hmm. Are you a little bit upstream of that even now in the sense that this is a \u201cresearch direction\u201d kind of paper? I noticed that as I read it, I felt like this was how researchers can begin to think about what they\u2019re doing and how that will help downstream from that.&nbsp;<\/p>\n\n\n\n<p><strong>TANKELEVITCH:<\/strong> Yes. That\u2019s exactly right. So this is really about, we hope, unlocking a new direction of research and design where we take this perspective of metacognition\u2014of how we can help people think more clearly and, sort of, monitor and control their own cognition\u2014and design systems to help them do that. And in the paper, there\u2019s a whole list of different questions, both fundamental research questions to understand in more depth how metacognition plays a role in human-AI interaction when people work with generative AI systems but also how we can then actually design new interventions or new systems that actually support people\u2019s metacognition. 
And so there\u2019s a lot of work to do in this, and we hope that, sort of, inspires a lot of further research, and we\u2019re certainly planning to do a lot more follow-up research.\u00a0<\/p>\n\n\n\n<p><strong>HUIZINGA:<\/strong> Yeah. So I always ask, if there was just one thing that you wanted our listeners to take away from this work, a sort of golden nugget, what would it be?&nbsp;<\/p>\n\n\n\n<p><strong>TANKELEVITCH:<\/strong> I mean, I\u2019d say that if we really want generative AI to be about augmenting human agency, then I think we need to focus on understanding how people think and behave in their real-world context and design for that. And so I think specifically, the real potential of generative AI here, as I was saying, is not just to automate a bunch of tasks but really to help people clarify their intentions and goals and act in line with them. And so, in a way, it\u2019s kind of about building tools for thought, which was the real vision of the early pioneers of computing. And so I hope that this, kind of, goes back to that original idea.<\/p>\n\n\n\n<p><strong>HUIZINGA:<\/strong> You mentioned this short list of open research questions in the field, along with a list of suggested interventions. You\u2019ve, sort of, curated that for your readers at the end of the paper. But give our audience a little overview of that and how those questions inform your own research agenda coming up next.\u00a0<\/p>\n\n\n\n<p><strong>TANKELEVITCH:<\/strong> Sure. So on the, sort of, fundamental research side of things, there are a lot of questions around how, for example, self-confidence that people have plays a role in their interactions with generative AI systems. So this could be self-confidence in their ability to prompt these systems. And so that is one interesting research question. What is the role of confidence and calibrating one\u2019s confidence in prompting? 
And then similarly, on the, sort of, output evaluation side, when you get an output from generative AI, how do you calibrate your confidence in assessing that output, right, especially if it\u2019s in an area that maybe you\u2019re less familiar with? And so there\u2019s these nuanced questions around self-confidence that are really interesting, and we\u2019re actually exploring this in a new study. This is part of the AI, Cognition, and [the] Economy pilot project. So this is a collaboration that we\u2019re running with Dr. Clara Colombatto, who\u2019s a researcher at the University of Waterloo and University College London, and we\u2019re essentially designing a study where we\u2019re trying to understand people\u2019s confidence in themselves, in their planning ability, and in working with AI systems to do planning together, and how that influences their reliance on the output of generative AI systems.\u00a0<\/p>\n\n\n\n<p>[MUSIC PLAYS]&nbsp;<\/p>\n\n\n\n<p><strong>HUIZINGA:<\/strong> Well, Lev Tankelevitch, thank you for joining us today, and to our listeners, thanks for tuning in. If you want to read the full paper on metacognition and generative AI, you can find a link at aka.ms\/abstracts, or you can read it on arXiv. Also, Lev will be speaking about this work at the upcoming Microsoft Research Forum, and you can register for this series of events at researchforum.microsoft.com. 
See you next time on <em>Abstracts<\/em>!&nbsp;<\/p>\n\n\n\n<p>[MUSIC FADES]<\/p>\n\n\t\t\t\t<\/span>\n\t\t\t<\/div>\n\t\t\t<button\n\t\t\t\tclass=\"action-trigger glyph-prepend mt-2 mb-0 show-more-show-less-toggle\"\n\t\t\t\taria-expanded=\"false\"\n\t\t\t\tdata-show-less-text=\"Show less\"\n\t\t\t\ttype=\"button\"\n\t\t\t\taria-controls=\"show-more-show-less-toggle-1\"\n\t\t\t\taria-label=\"Show more content\"\n\t\t\t\tdata-alternate-aria-label=\"Show less content\">\n\t\t\t\tShow more\t\t\t<\/button>\n\t\t<\/div>\n\t<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Can how we think about our thinking help us better incorporate generative AI in our lives & work? Explore metacognition\u2019s potential to improve the tech\u2019s usability on \u201cAbstracts,\u201d then sign up for Microsoft Research Forum for more on this & other AI work.<\/p>\n","protected":false},"author":42735,"featured_media":995001,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"msr-url-field":"https:\/\/player.blubrry.com\/id\/131187950","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-author-ordering":null,"msr_hide_image_in_river":0,"footnotes":""},"categories":[240054],"tags":[],"research-area":[13556,13554],"msr-region":[],"msr-event-type":[],"msr-locale":[268875],"msr-post-option":[243990],"msr-impact-theme":[],"msr-promo-type":[],"msr-podcast-series":[268128],"class_list":["post-1009941","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-msr-podcast","msr-research-area-artificial-intelligence","msr-research-area-human-computer-interaction","msr-locale-en_us","msr-post-option-podcast-featured","msr-podcast-series-abstracts"],"msr_event_details":{"start":"","end":"","location":""},"podcast_url":"https:\/\/player.blubrry.com\/id\/131187950","podcast_episode":"","msr_research_lab":[199561],"m
sr_impact_theme":[],"related-publications":[],"related-downloads":[],"related-videos":[],"related-academic-programs":[],"related-groups":[],"related-projects":[1053711,483294],"related-events":[],"related-researchers":[{"type":"user_nicename","value":"Lev Tankelevitch","user_id":43209,"display_name":"Lev Tankelevitch","author_link":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/levt\/\" aria-label=\"Visit the profile page for Lev Tankelevitch\">Lev Tankelevitch<\/a>","is_active":false,"last_first":"Tankelevitch, Lev","people_section":0,"alias":"levt"},{"type":"guest","value":"gretchen-huizinga-2","user_id":"444834","display_name":"Gretchen Huizinga","author_link":"<a href=\"https:\/\/www.linkedin.com\/in\/gretchen-huizinga-phd-3a4b2921?trk=people-guest_people_search-card\" aria-label=\"Visit the profile page for Gretchen Huizinga\">Gretchen Huizinga<\/a>","is_active":true,"last_first":"Huizinga, Gretchen","people_section":0,"alias":"gretchen-huizinga-2"}],"msr_type":"Post","featured_image_thumbnail":"<img width=\"960\" height=\"540\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/Podcast_Abstracts_Hero_Feature_No_Text_1400x788-960x540.png\" class=\"img-object-cover\" alt=\"Microsoft Research Podcast - Abstracts hero with a microphone icon\" decoding=\"async\" loading=\"lazy\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/Podcast_Abstracts_Hero_Feature_No_Text_1400x788-960x540.png 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/Podcast_Abstracts_Hero_Feature_No_Text_1400x788-300x169.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/Podcast_Abstracts_Hero_Feature_No_Text_1400x788-1024x576.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/Podcast_Abstracts_Hero_Feature_No_Text_1400x788-768x432.png 768w, 
https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/Podcast_Abstracts_Hero_Feature_No_Text_1400x788-1066x600.png 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/Podcast_Abstracts_Hero_Feature_No_Text_1400x788-655x368.png 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/Podcast_Abstracts_Hero_Feature_No_Text_1400x788-343x193.png 343w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/Podcast_Abstracts_Hero_Feature_No_Text_1400x788-240x135.png 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/Podcast_Abstracts_Hero_Feature_No_Text_1400x788-640x360.png 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/Podcast_Abstracts_Hero_Feature_No_Text_1400x788-1280x720.png 1280w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2023\/09\/Podcast_Abstracts_Hero_Feature_No_Text_1400x788.png 1400w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/>","byline":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/levt\/\" title=\"Go to researcher profile for Lev Tankelevitch\" aria-label=\"Go to researcher profile for Lev Tankelevitch\" data-bi-type=\"byline author\" data-bi-cN=\"Lev Tankelevitch\">Lev Tankelevitch<\/a> and <a href=\"https:\/\/www.linkedin.com\/in\/gretchen-huizinga-phd-3a4b2921?trk=people-guest_people_search-card\" title=\"Go to researcher profile for Gretchen Huizinga\" aria-label=\"Go to researcher profile for Gretchen Huizinga\" data-bi-type=\"byline author\" data-bi-cN=\"Gretchen Huizinga\">Gretchen Huizinga<\/a>","formattedDate":"February 29, 2024","formattedExcerpt":"Can how we think about our thinking help us better incorporate generative AI in our lives &amp; work? 
Explore metacognition\u2019s potential to improve the tech\u2019s usability on \u201cAbstracts,\u201d then sign up for Microsoft Research Forum for more on this &amp; other AI work.","locale":{"slug":"en_us","name":"English","native":"","english":"English"},"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/1009941","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/users\/42735"}],"replies":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/comments?post=1009941"}],"version-history":[{"count":16,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/1009941\/revisions"}],"predecessor-version":[{"id":1022829,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/1009941\/revisions\/1022829"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/995001"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=1009941"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/categories?post=1009941"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/tags?post=1009941"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=1009941"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=1009941"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.co
m\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=1009941"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=1009941"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=1009941"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=1009941"},{"taxonomy":"msr-promo-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-promo-type?post=1009941"},{"taxonomy":"msr-podcast-series","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-podcast-series?post=1009941"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}