Edge-Preserving Decomposition for Multi-Scale Tone and Detail Manipulation


August 7, 2008


Dani Lischinski and Zeev Farbman


The Hebrew University of Jerusalem


Many recent computational photography techniques decompose an image into a piecewise-smooth base layer, containing large-scale variations in intensity, and a residual detail layer capturing the smaller-scale details in the image. In many of these applications it is important to control the spatial scale of the extracted details, and it is often desirable to manipulate details at multiple scales while avoiding visual artifacts.

In this paper we introduce a new way to construct edge-preserving multi-scale image decompositions. We show that current base-detail decomposition techniques, based on the bilateral filter, are limited in their ability to extract detail at arbitrary scales. Instead, we advocate the use of an alternative edge-preserving smoothing operator, based on the weighted least squares optimization framework, which is particularly well suited for progressive coarsening of images and for multi-scale detail extraction. After describing this operator, we show how to use it to construct edge-preserving multi-scale decompositions, and compare it to the bilateral filter, as well as to other schemes. Finally, we demonstrate the effectiveness of our edge-preserving decompositions in the context of LDR and HDR tone mapping, detail enhancement, and other applications.
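The weighted least squares operator described above can be sketched in code. The sketch below is an illustrative NumPy/SciPy implementation, not the authors' released code: it minimizes a data term plus gradient-weighted smoothness terms, where the weights shrink across strong edges of the log-luminance so that smoothing does not cross them. The function name `wls_smooth` and the parameters `lam` (smoothness strength), `alpha` (edge-sensitivity exponent), and `eps` (regularizer) are assumptions for illustration.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def wls_smooth(img, lam=1.0, alpha=1.2, eps=1e-4):
    """Edge-preserving smoothing via weighted least squares (sketch).

    Minimizes sum_p (u_p - g_p)^2 + lam * (a_x (du/dx)_p^2 + a_y (du/dy)_p^2),
    where the smoothness weights a_x, a_y are small across large
    log-luminance gradients, leaving those edges sharp.
    """
    h, w = img.shape
    n = h * w
    log_l = np.log(img + eps)

    # Gradient-dependent smoothness weights (forward differences).
    gx = np.zeros((h, w)); gx[:, :-1] = np.diff(log_l, axis=1)
    gy = np.zeros((h, w)); gy[:-1, :] = np.diff(log_l, axis=0)
    ax = 1.0 / (np.abs(gx) ** alpha + eps); ax[:, -1] = 0.0
    ay = 1.0 / (np.abs(gy) ** alpha + eps); ay[-1, :] = 0.0

    # Assemble the weighted graph Laplacian L over the 4-neighbor grid.
    idx = np.arange(n).reshape(h, w)
    r1 = idx[:, :-1].ravel(); c1 = idx[:, 1:].ravel(); w1 = ax[:, :-1].ravel()
    r2 = idx[:-1, :].ravel(); c2 = idx[1:, :].ravel(); w2 = ay[:-1, :].ravel()
    rows = np.concatenate([r1, c1, r2, c2])
    cols = np.concatenate([c1, r1, c2, r2])
    vals = -np.concatenate([w1, w1, w2, w2])
    off = sp.coo_matrix((vals, (rows, cols)), shape=(n, n))
    L = off + sp.diags(-np.asarray(off.sum(axis=1)).ravel())

    # Solve the sparse symmetric positive-definite system (I + lam*L) u = g.
    A = sp.identity(n) + lam * L
    return spsolve(A.tocsr(), img.ravel()).reshape(h, w)

# Tiny demo: a noisy step edge. The base layer keeps the step;
# the residual detail layer captures the small-scale noise.
rng = np.random.default_rng(0)
step = np.full((16, 16), 0.2); step[:, 8:] = 1.0
noisy = np.clip(step + 0.02 * rng.standard_normal((16, 16)), 0.05, None)
base = wls_smooth(noisy, lam=1.0)
detail = noisy - base
```

A multi-scale decomposition in the spirit of the paper would then apply this operator repeatedly with increasing `lam` to obtain progressively coarser base layers u_1, u_2, ..., taking the differences between successive levels as the detail layers at each scale.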


Dani Lischinski is an associate professor at the School of Computer Science and Engineering at the Hebrew University of Jerusalem, Israel, where he runs the Computer Graphics Lab. He received his PhD from the Department of Computer Science and the Program of Computer Graphics at Cornell University in 1994. His areas of interest span a wide variety of topics in the fields of computer graphics, visualization, virtual reality, and image and video processing. In particular, he has worked on algorithms for photorealistic image synthesis, simulation of global illumination, robust triangulation and mesh generation, interactive visualization of complex virtual scenes, computer-generated illustration, facial animation, image-based modeling and rendering, texture synthesis, video compression, medical visualization, tone mapping, and physically-based animation.

Zeev Farbman is a PhD student at the School of Computer Science and Engineering at the Hebrew University of Jerusalem, Israel.