S. Craig Watkins is the Ernest A. Sharpe Centennial Professor at the University of Texas at Austin, an internationally recognized expert in media, founding director of the Institute for Media Innovation, and the Lead Principal Investigator for UT's Good Systems Designing AI for Racial Justice research project. Before coming to Microsoft, Stephen Elkin served as CIO for the City of Austin, Texas. Though they work in different sectors, Craig and Stephen share guiding strategies and principles for approaching AI and its role in society, specifically how to use AI technology to solve health, humanitarian, and accessibility issues while keeping transparency, equity, and ethics at the forefront. Stephen and Craig recently sat down to discuss Craig's latest research on AI ethics and the future of equity in the AI landscape.
AI and Our Shared Obligation of Equity
In 2019, the University of Texas launched a grand design challenge to bring researchers together across disciplines to solve world issues. This resulted in Good Systems, a multidisciplinary, campus-wide project focused on understanding and building AI technologies that benefit society.
“As AI becomes more central in our lives, it’s critical to think about the processes that go into its design,” explained Craig. “Specifically, it’s a privilege to have your ideas, values, and views of the world guide AI design. The first iterations of these systems were largely designed by companies lacking significant diversity. That has serious consequences for the products being developed, the way AI is conceptualized, and how AI and machine learning models are built.”
At Good Systems, Craig leads a team examining how AI can help address racial inequality. They are investigating how bias and lack of transparency in technologies can affect different populations, and researching those concerns from ethical, design, development, and sociological perspectives. “The fundamental question is: ‘what makes a good system good?’ We have to examine it in terms of development, engineering, use of data, and outcomes, and identify who it’s good for,” said Craig.
One challenge both Craig and Microsoft identified is the need for increased transparency. “It’s no longer feasible or acceptable for only an exclusive community of people to understand AI systems,” Craig said. “Most people don’t know how they function, make predictions, and make assumptions about the world. It can’t only be engineers or data scientists developing a healthcare algorithm. There needs to be a greater diversity of people at the table throughout the life cycle of any AI/ML product.”
The Accelerate initiative is one way Microsoft is addressing that need. The effort, in collaboration with the Texas Education Agency, provides digital skills training designed to support economic recovery by skilling underserved communities and re-skilling Americans affected by COVID-19. The Future Ready A(i) Forum provides digital training for future workforce skills with AI workshops and seminars.
“Helping close the digital divide through educating the future and current workforce about AI is a critical step towards realizing broader equity,” said Stephen.
Reimagining AI to Improve Access to Mental Health Resources
One project Craig leads at the Institute for Media Innovation is the development of an AI-based healthcare app designed to make mental healthcare more accessible to youth. Over the past decade, Craig has studied how youth adopt and interact with different technologies, and he began noticing a significant increase in that population’s use of social media to address mental health issues.
“Especially with the pandemic, youth are increasingly expressing mental health concerns, and they’re using technology as they do for so many other aspects of their lives – to navigate daily life, access information, and facilitate conversations to better understand mental health issues.”
Craig’s team began developing an AI-based app that allows a mental health professional (a psychologist, therapist, or someone in a non-clinical role) to work more effectively and efficiently with a patient, monitoring moods and activities while the two are apart. Using the data gathered, the app helps professionals craft customized wellness plans and better identify and understand the triggers or conditions that lead to depression, anxiety, or more positive mood outcomes. The hope is to offer a platform that provides access, builds trust, and delivers social support.
“Your goals for this app are similar to those of our AI for Health initiative,” said Stephen. “We’ve awarded over 180 grants to partners who are helping accelerate research, increase global health insights, and address health equity and care for underserved populations.”
Microsoft’s AI for Accessibility program is focused on minimizing mental health care gaps around the globe. Mental Health America, Northwestern University, and the University of Toronto are working to build an adaptive text-messaging service that provides more accessible and engaging interventions.
Placing Equity at the Heart of AI
When discussing the application of AI across state and local initiatives, making equity a guidepost is critical.
“As a former government employee, one of the things I’ve always strongly emphasized is the need for government to use AI to increase efficiencies and modernize the way it connects, interacts, and provides services to its communities,” said Stephen. “Now, more than ever, it’s important to take that approach with an equity mindset.”
“Absolutely, equity needs to be at the center of the discussion,” noted Craig. “That’s true for all sectors – understanding unique equity challenges is critical to adopting and deploying these systems.”
To help address those concerns, Craig worked with the City of Austin’s Equity Office to explore a toolkit that allows departments to conduct internal assessments of their racial equity processes across budget, leadership, workforce, policy, and operations. “The toolkit facilitates an opportunity for departments to look at their AI use and other data-based systems through a racial equity lens, so they can begin to rethink and refine their strategic use of these systems.”
“That toolkit is very similar to what we advise among our government partners,” noted Stephen. “While governments are starting to leverage AI more, they’re still behind the curve in technology adoption and in ethics and governance strategies. Part of the dialogue we have with governments is around their responsibility to anticipate and mitigate unintended consequences of AI. Establishing governing practices to guide AI efforts is critical. Engaging in dialogue across both public and private partnerships can greatly advance responsible use of AI.”
As academic and technology stakeholders continue to expand and refine AI and machine learning, the underlying theme across all industries is to apply responsible and ethical AI strategies while growing the population of workers who understand these systems.
“To me, this is the challenge and opportunity,” said Craig. “It’s not something that only engineers or data scientists can solve. It must include ethicists, non-tech researchers, social scientists, and business. Being a part of cross-disciplinary teams is exciting because that’s where the real future of the work lies and where the real impact will happen.”
Microsoft is committed to advancing AI guided by principles that put people first. Learn more about Good Systems and its racial equity work by reading the following:
- Lone Stars on the Medical Frontier: S. Craig Watkins Explores Tech Tools to Improve Mental Health
- The Digital Edge