
Microsoft Research Lab – Asia

Where AI meets neuroscience: Yansen Wang’s pursuit of human-centered innovation


“Curiosity drives scientific breakthroughs, and the tools we create often reflect the human motivations behind that curiosity.”

For Yansen Wang, a senior researcher at Microsoft Research Asia, this philosophy has guided his work at the intersection of AI and neuroscience.

Wang’s interest in science began early. While his classmates spent hours searching for information to complete an assignment, he solved it in 10 minutes by writing a few lines of code. From then on, programming became his way of tackling complex challenges, and the satisfaction of creating practical solutions fueled his passion for computer science. As his studies advanced, his goals crystallized: he wanted to create technology that genuinely serves people.

Yansen Wang, senior researcher at Microsoft Research Asia

This people-centered approach has shaped Wang’s career. As an undergraduate at Tsinghua University, he witnessed AI defeat the world’s Go champion—a moment that sparked a realization: AI could help us understand how humans think. This insight led him to pursue research in multimodal AI for his master’s studies before joining Microsoft Research Asia – Shanghai. Today, in the AI/ML Group, Wang works to bridge AI and neuroscience.

A two-way journey between AI and neuroscience

Wang’s research focuses on two complementary directions: advancing neuroscience by decoding how the brain works (AI for Brain), and drawing inspiration from the brain to improve AI architectures and algorithms (Brain for AI).

For the AI for Brain project, Wang and his colleagues use noninvasive electroencephalography (EEG), which records the brain’s electrical signals without surgery, as the platform for building brain-computer interfaces. “Our understanding of the brain is still very limited,” he explains. “Its multitasking abilities and rapid adaptability are far beyond what AI can achieve, so we’re using AI to analyze EEG signals and uncover the links between perception, intention, and brain activity.”

The team has already made substantial progress. In terms of perception, they have decoded broad visual features, such as colors and simple moving scenes. Using diffusion models, they transformed neural signals into matching visual content and developed EEG2Video, a baseline model that reconstructs video clips from EEG signals. To improve generalization across different contexts, the team has built multiple datasets linking EEG signals to everyday behaviors. See the video below for an example of this work.

EEG2Video demo: The original video input is shown at the top and the reconstructed video at the bottom.

For command-based control, Wang and his team tackled the challenge of decoding letters from brain signals. They introduced an innovative codebook approach: instead of having participants imagine letter shapes, which are difficult for devices to recognize, they guided participants to imagine body movements, mathematical calculations, and other semantic information that devices can distinguish more easily. AI then mapped these signals to specific letters. With portable devices, this method has achieved 30%–40% accuracy across 36 options (26 letters and 10 digits).
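The codebook idea can be illustrated with a small sketch. Everything here is hypothetical: the action names, the pairing scheme (two imagined actions per symbol, 6 × 6 = 36 codes), and the nearest-centroid classifier are stand-ins for the team’s actual design, chosen only to show how easily separable imagined actions can be mapped onto letters and digits.

```python
# Hypothetical codebook decoder: classify a small set of distinguishable
# imagined actions, then map ordered action pairs to symbols.
import itertools
import math

ACTIONS = ["left_hand", "right_hand", "feet", "subtraction", "word_recall", "rest"]

# 36 symbols = 26 letters + 10 digits; each symbol is encoded as an ordered
# pair of actions (6 x 6 = 36 codes), forming the "codebook".
SYMBOLS = [chr(ord("A") + i) for i in range(26)] + [str(d) for d in range(10)]
CODEBOOK = dict(zip(itertools.product(ACTIONS, repeat=2), SYMBOLS))

# Toy one-hot feature centroids standing in for learned EEG templates.
CENTROIDS = {a: [float(i == j) for j in range(len(ACTIONS))]
             for i, a in enumerate(ACTIONS)}

def classify(feature_vec):
    """Nearest-centroid stand-in for the model mapping EEG features to an action."""
    return min(CENTROIDS, key=lambda a: math.dist(CENTROIDS[a], feature_vec))

def decode_symbol(feat1, feat2):
    """Decode two consecutive imagined-action epochs into one symbol."""
    return CODEBOOK[(classify(feat1), classify(feat2))]
```

The design point is that the classifier only needs to separate six action classes rather than 36 letter shapes; the codebook lookup does the rest.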

“We’re now working to extend this approach to controlling mobile phones and interacting on the web, exploring new interaction modes beyond letter input,” he says.

Still, the approach of using noninvasive brain-computer interfaces comes with many challenges. EEG signals have low signal-to-noise ratios and are easily disrupted by environmental and physiological interference, such as eye movements or muscle activity, making reliable readings difficult on portable devices. EEG data is scarce and must be collected in a controlled setting. And individual differences mean that systems often don’t generalize well across users and tasks.

“We’re advancing research on EEG foundation models and hope to make them more robust with more data and larger models, much like large language models,” Wang explains.

Learning from the brain to make AI more efficient

For the Brain for AI project, Wang and his colleagues are exploring how brain function can address AI’s energy demands.

“The brain can accomplish tremendous thinking and computation with just the energy supplied by a bowl of rice, while AI requires vast resources and electricity to achieve similar results,” he observes. “Even more remarkable is that the brain efficiently handles many tasks, such as fine motor control, without complex networks. People need only a few examples to learn new tasks, but current AI models often need massive amounts of data to relearn. There must be design principles at play here that AI can learn from.”

The key differences lie in how neurons are structured and operate. Neurons use a spiking mechanism, firing and transmitting signals only when they reach their activation threshold. This results in extremely low energy consumption when they are at rest. Artificial neural networks, by contrast, perform large-scale computations even when there is very little information to process, using far more energy than the brain requires for similar tasks.
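The spiking principle described above can be sketched with a minimal leaky integrate-and-fire (LIF) neuron, the textbook model of this mechanism. The constants and the simple reset rule below are illustrative and are not taken from the team’s SNN framework.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the neuron integrates input,
# fires only when its membrane potential crosses a threshold, then resets.
def lif_run(inputs, threshold=1.0, leak=0.9):
    """Simulate one LIF neuron over a sequence of input currents.

    Returns the binary spike train: 1 when the neuron fires, else 0.
    """
    v = 0.0                       # membrane potential
    spikes = []
    for x in inputs:
        v = leak * v + x          # leaky integration of the input current
        if v >= threshold:        # fire only at or above threshold...
            spikes.append(1)
            v = 0.0               # ...then reset the potential
        else:
            spikes.append(0)      # silent step: essentially no energy cost
    return spikes
```

With weak inputs the neuron stays silent for most steps, which is exactly where the energy savings come from: in hardware, silent steps cost almost nothing, whereas an artificial neuron performs a full multiply-accumulate at every step regardless.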

Using this insight, Wang and his team developed a more efficient spiking neural network (SNN) framework. In time-series prediction tasks, SNNs now perform comparably to traditional neural networks but can theoretically reduce energy consumption to a quarter of that required by traditional networks, offering a new path for low-power AI (Figure 1).

Figure 1. The SNN framework and workflow for time-series prediction

“The spiking neural network research is just one part of our work,” says Wang. “Neurons in the brain are sparsely connected—each one typically links to only a few nearby neurons, while in artificial neural networks, a single neuron connects to thousands of others. The brain’s sparse connectivity also helps reduce energy consumption. If we continue learning from how the brain operates, AI will be able to generalize better and become more energy-efficient.”
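A back-of-the-envelope calculation makes the sparse-connectivity point concrete. The layer sizes and fan-in below are assumptions chosen only for illustration, not figures from the team’s work.

```python
# Compare multiply-accumulate (MAC) counts for a densely connected layer
# versus one where each output neuron links to only a few inputs, echoing
# the brain's sparse wiring.
def dense_macs(n_in, n_out):
    """Fully connected: every output neuron connects to every input."""
    return n_in * n_out

def sparse_macs(n_in, n_out, fan_in):
    """Sparsely connected: each output neuron links to only `fan_in` inputs."""
    return fan_in * n_out

# Example: 1,000 inputs, 1,000 outputs, fan-in of 10 per neuron.
ratio = dense_macs(1000, 1000) / sparse_macs(1000, 1000, 10)
```

Under these assumed numbers, the sparse layer performs 100 times fewer operations, which is the intuition behind the quote above: fewer connections mean fewer computations and less energy.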

Yansen Wang gives a lecture at the TEDxBeijing Innovation Conference

Breaking boundaries: From outsider to domain expert

For Wang, cross-disciplinary research in AI and neuroscience was new territory. He had no formal training in neuroscience or EEG analysis, but through dedicated study, active collaboration, and strong team support, he developed deep expertise.

When he began EEG research, Wang studied medical textbooks and sought guidance from collaborating physicians. “I set a rule: temporarily set aside my AI perspective and approach this as a doctor would. I studied medical textbooks thoroughly and learned to read EEG signals. Only then did I consider how AI could help.” This approach helped him understand clinical problems rather than imposing familiar frameworks.

Cross-disciplinary collaboration is not just about combining knowledge; it’s about how different perspectives collide. For example, in research on epilepsy detection, Wang discovered that AI researchers and physicians approach problems differently. AI researchers often assume that with enough data, models can learn to identify features of epileptic seizures, such as spikes and slow waves. But physicians can quickly spot rare abnormalities in massive amounts of hard-to-interpret EEG signals based on experience. Models can miss these abnormalities, even when trained on vast amounts of data.

“This showed me that machine learning progress cannot rely solely on brute force. In data-scarce fields like medicine, we must incorporate domain expertise and build in the right assumptions to improve model performance.”

To help researchers build cross-domain knowledge, Microsoft Research Asia – Shanghai established a neuroscience study group, with weekly classes, homework, and discussions. After six months, Wang had learned the fundamentals of neuroscience and gained practical guidance from senior researchers. “This collective learning atmosphere means we’re no longer working in isolation but instead growing together as a community,” he says.

Microsoft Research Asia encourages open exploration and open exchange. At the Shanghai lab’s weekly “Grand Challenge” meetings, researchers rigorously challenge one another’s work. “At first, I wasn’t used to this style of questioning,” Wang admits. “But I realized that these challenges expose blind spots and allow research to improve through iterative refinement. The toughest questions often lead to the most important breakthroughs.”

Yansen Wang (third from right) discusses research questions with colleagues.

Research with purpose: Building AI that serves people

For Wang, technology should serve people. Whether developing brain-computer interfaces or creating explainable AI for Go, the focus of the work should be on making AI useful and accessible.

In 2022, Wang and his colleagues launched what they called a “human salvation project” for Go players. AI had surpassed top players, causing anxiety among professionals. Players could imitate AI moves but couldn’t understand the reasoning behind them; they memorized patterns without developing their own strategic thinking. “I thought, ‘If AI could explain its logic, players could truly understand the strategies behind the moves,’” Wang says. “We wanted to help people improve alongside AI.” Wang and the team are actively collaborating with Go enthusiasts and professional players to verify the usefulness of the explanations. “That is so impressive,” says one teacher at a Go learning institute, “and I see the future of teaching humans how to play Go.”

For Wang, this captures what drives his research: not the number of papers, but tangible impact. Perhaps it’s the moment when a player grasps a brilliant move, or when someone finds more convenient ways to interact with devices, or when researchers apply new approaches for energy-efficient AI.

“At Microsoft Research Asia, I can follow my interests and work with partners to solve meaningful problems for humanity,” he says.
