Recent technological advances have enabled a number of new applications in 3D video. One of the enabling technologies for many of these applications is multiview video coding, which has received significant attention in recent years. However, the fundamental requirements that multiview coding must meet for applications such as immersive tele-conferencing have not been addressed. In this paper we define the boundaries of the problem and show how a simple algorithm can reduce bitrate by up to 2x while maintaining similar PSNR in the synthesized view. Our algorithm uses an estimate of the viewer's position to compute the expected contribution of each pixel to the synthesized view, and encodes each macroblock of each camera view with quality proportional to the likelihood that its pixels will appear in the synthesized image.
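The core idea, mapping a per-macroblock usage likelihood to an encoding quality, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `assign_qp`, the QP range, and the linear likelihood-to-QP mapping are all assumptions, chosen only to show the shape of the technique (lower QP means finer quantization, i.e. higher quality).

```python
import numpy as np

def assign_qp(likelihood, qp_min=22, qp_max=40):
    """Map per-macroblock usage likelihoods in [0, 1] to quantization
    parameters. Macroblocks likely to contribute to the synthesized
    view get a low QP (high quality); unlikely ones get a high QP
    (coarse quantization, fewer bits). The linear mapping and the
    QP bounds are illustrative assumptions, not the paper's values."""
    likelihood = np.clip(np.asarray(likelihood, dtype=float), 0.0, 1.0)
    return np.round(qp_max - likelihood * (qp_max - qp_min)).astype(int)

# Hypothetical 2x3 grid of macroblock likelihoods for one camera view,
# e.g. derived from the estimated viewer position.
mb_likelihood = np.array([[1.0, 0.8, 0.1],
                          [0.5, 0.0, 0.9]])
qp_map = assign_qp(mb_likelihood)
```

A rate-distortion-optimized mapping (rather than the linear one above) would let the encoder trade bits between views more precisely, but the sketch captures the principle: bits follow expected visibility.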