Microsoft Research Blog

RAVEN: Reducing Power Consumption of Mobile Games without Compromising User Experience

February 20, 2018 | By Miran Lee, Outreach Director of Microsoft Research; Yunxin Liu, Principal Research Manager

In the last decade, mobile gaming has grown into a huge industry. According to Newzoo, the global mobile games market reached $46.1 billion in 2017, a 19.4% increase over the year before.

Players can enjoy an amazing gaming experience on mobile devices thanks to the increasingly powerful processing capability of modern mobile GPUs. However, that experience comes at a big cost: power consumption. The power consumption of mobile GPUs increases linearly with the amount of graphics computation. As a result, high-end mobile games with rich graphics content are extremely power hungry and drain batteries very quickly.

To solve the above problem, researchers from Microsoft Research Asia (MSRA) and Korea Advanced Institute of Science & Technology (KAIST) have developed a new system, called RAVEN, to reduce the power consumption of mobile games without compromising user experience.

RAVEN is based on a key observation about mobile games: many frames continuously rendered in a game are either perceptually the same or very similar. The differences between those frames are too small to be perceptible to game players. However, mobile games always render frames at a high frame rate of 60 frames per second (FPS), no matter how similar the frames are. Based on a measurement study done by the researchers, those perceptually redundant frames may make up more than 50% of the total frames in many games. Clearly, eliminating the rendering of those perceptually redundant frames could significantly reduce power consumption.

RAVEN is a novel system that leverages human visual perception to scale the rate of rendering frames. To accomplish this, RAVEN introduces perception-aware scaling (PAS) of frame-rendering rates. This energy-saving methodology reduces a game’s frame-rendering rate whenever succeeding frames are predicted to be perceptually similar enough.

RAVEN works by setting up a side channel that tracks rendered frame sequences to model a user’s perception of graphics changes during gameplay. In this way, RAVEN opportunistically reduces GPU power consumption.

The RAVEN system consists of three major components that collectively scale the rate of game-frame rendering: Frame Difference Tracker (F-Tracker), Rate Regulator (R-Regulator), and Rate Injector (R-Injector). The system works in a pipelined fashion. First, F-Tracker measures the perceptual similarity between two recent frames. Then, R-Regulator predicts the level of similarity between the current and next frame(s), based on how similar the current frame and the previous frame(s) are. If the next frames are similar enough to the current one (determined by a threshold), R-Injector limits the frame-rendering rate by injecting a delay into the rendering loop and skipping graphics processing for the unnecessary frame(s). Presently, RAVEN can skip at most three consecutive frames, so the frame rate drops no lower than 15 FPS.
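The control flow of this pipeline can be sketched as follows. This is only an illustrative sketch, not the authors’ implementation; the skip policy, the 0.95 threshold, and all function names here are assumptions:

```python
import time

FRAME_INTERVAL = 1.0 / 60  # base rendering rate: 60 FPS
MAX_SKIP = 3               # RAVEN skips at most 3 frames (15 FPS floor)

def frames_to_skip(similarity, threshold=0.95):
    """Hypothetical R-Regulator policy: the more similar recent frames
    are, the more upcoming frames are predicted to be redundant."""
    if similarity < threshold:
        return 0
    # Map similarity in [threshold, 1.0] onto 1..MAX_SKIP skipped frames.
    step = (1.0 - threshold) / MAX_SKIP
    return min(MAX_SKIP, int((similarity - threshold) / step) + 1)

def render_loop(frames, measure_similarity, render):
    """Pipelined flow: F-Tracker measures, R-Regulator predicts,
    R-Injector delays the loop instead of rendering skipped frames."""
    prev, skip = None, 0
    for frame in frames:
        if skip > 0:                    # R-Injector: skip this frame,
            skip -= 1
            time.sleep(FRAME_INTERVAL)  # just wait out its time slot
            continue
        render(frame)
        if prev is not None:
            sim = measure_similarity(prev, frame)  # F-Tracker
            skip = frames_to_skip(sim)             # R-Regulator
        prev = frame
        time.sleep(FRAME_INTERVAL)
```

In the steady state with a similarity of 1.0, one frame is rendered and three are skipped, matching the 15 FPS floor described above.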

The key challenge RAVEN addresses is how to determine frame similarity at a low computational cost. The most direct method is to compute the frames’ structural similarity (SSIM) score. Determining an SSIM score is computationally intensive and therefore uses a lot of power, particularly for large frames. Today’s mobile devices, including smartphones, usually have a high display resolution of 1920×1080 pixels or greater, making per-frame SSIM computation infeasible for RAVEN to employ.

To address the challenge, the researchers employed two novel techniques. First, they developed an energy-efficient method to measure perceptual similarity based on the sensitivity of the human eye to color differences. This method leverages the difference in the luminance component (i.e., the Y component in the YUV color space) between frames. They extensively evaluated the method by comparing it with SSIM under various settings. The results showed that the luminance-based method reliably measured perceptual similarity at a low computational cost.
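As a rough illustration of the idea (the paper defines the actual metric; the similarity score below is an assumption for illustration only), a luminance-only comparison could look like this:

```python
import numpy as np

def rgb_to_luma(rgb):
    """BT.601 luma weights: the Y component of the YUV color space."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def luminance_similarity(y_a, y_b):
    """Illustrative similarity score in [0, 1] from the mean absolute
    difference of two Y (luminance) planes given as uint8 arrays."""
    diff = np.abs(y_a.astype(np.int16) - y_b.astype(np.int16))
    return 1.0 - diff.mean() / 255.0
```

Comparing a single luminance plane avoids the per-window statistics that SSIM requires, which is where the computational saving comes from.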

Demonstration at MobiCom 2017. Left: Chanyou Hwang; Middle: Yunxin Liu; Right: Saumay Pushp.

Second, the researchers built a virtual display, cloned from the mobile device’s main display but with a much lower resolution (e.g., 80 × 45 pixels). The system reads the graphical contents of the virtual display for the similarity measurement. Because the resolution of the virtual display is significantly smaller, the computational and energy overheads are also much smaller. Together, these two techniques effectively reduce the energy overhead of RAVEN.
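The effect of the low-resolution virtual display can be approximated with a simple box-filter downscale. The function below assumes a source whose dimensions are exact multiples of the 80 × 45 example resolution and is not RAVEN’s actual resampling code:

```python
import numpy as np

def downscale(y_plane, out_w=80, out_h=45):
    """Box-filter downscale of a luminance plane whose dimensions are
    assumed to be exact multiples of the target resolution."""
    H, W = y_plane.shape
    # Group pixels into (H/out_h) x (W/out_w) blocks and average each one.
    blocks = y_plane.reshape(out_h, H // out_h, out_w, W // out_w)
    return blocks.mean(axis=(1, 3))
```

An 80 × 45 frame has 3,600 pixels versus roughly 2.07 million at 1920 × 1080, so any per-pixel similarity computation becomes about 576 times cheaper.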

As a next step, the researchers implemented the RAVEN system on a Nexus 5X smartphone. In an 11-person user study, they conducted comprehensive experiments using various gaming applications to evaluate RAVEN’s performance. Results showed a 21.8% average reduction in energy per gaming session, up to a high of 34.7%, while maintaining a high-quality user experience.

RAVEN is the first system designed to achieve frame-rate scaling and energy savings based on perceptual similarity for mobile games. The paper describing the system, “RAVEN: Perception-aware Optimization of Power Consumption for Mobile Games,” was published and demonstrated at MobiCom 2017. The authors include Chanyou Hwang, Saumay Pushp, Changyoung Koh, Jungpil Yoon, Seungpyo Choi and Junehwa Song from KAIST, and Yunxin Liu from MSRA.
