Real-time voice applications typically produce uniformly spaced voice packets, and faithful reconstruction demands that these be played out at the same intervals. Best-effort packet networks, however, impose variable delays on different packets, so the receiver must buffer received packets before playout. Excessive buffering delay degrades system performance for interactive audio, so intelligent algorithms must be employed that keep this delay to a minimum while maintaining an acceptable packet loss rate. In this research, we develop a new "α-adaptive" algorithm that offers a considerable reduction in delay compared to existing algorithms, especially at low packet loss rates. A generic jitter control procedure is also proposed, which may be used with any buffering algorithm to improve its jitter performance without significantly affecting the delay-loss tradeoff. Further, an existing algorithm based on the Normalized Least Mean Squares (NLMS) filter is discussed, and modifications are proposed for its practical implementation. All suggestions are supported by simulations on Internet delay traces.
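To make the delay-loss tradeoff concrete, the sketch below shows a classic exponential-averaging playout-delay estimator of the kind used as a baseline in this literature. It is not the α-adaptive algorithm developed here; the smoothing weight `alpha`, the safety factor `beta`, and the toy delay trace are all illustrative assumptions.

```python
# Generic adaptive playout-delay estimator (exponential averaging).
# This is a baseline sketch, NOT the paper's alpha-adaptive algorithm;
# alpha, beta, and the trace values are assumptions for illustration.

def estimate_playout(delays, alpha=0.875, beta=4.0):
    """Return per-packet playout-delay estimates d_hat + beta * v_hat.

    delays : observed one-way network delays of successive packets (ms)
    alpha  : smoothing weight (assumed value)
    beta   : safety factor scaling the delay-variation estimate
    """
    d_hat = delays[0]  # smoothed delay estimate
    v_hat = 0.0        # smoothed delay-variation estimate
    playout = []
    for d in delays:
        d_hat = alpha * d_hat + (1 - alpha) * d
        v_hat = alpha * v_hat + (1 - alpha) * abs(d - d_hat)
        playout.append(d_hat + beta * v_hat)
    return playout

# Toy delay trace (ms): packets played out at d_hat + beta*v_hat.
# Increasing beta lowers loss at the cost of higher playout delay.
trace = [20.0, 22.0, 35.0, 21.0, 24.0, 60.0, 23.0, 22.0]
est = estimate_playout(trace)
print(est[-1])
```

A packet is lost (discarded) whenever its network delay exceeds the scheduled playout delay, so the estimator's goal is exactly the tradeoff discussed above: track the delay closely enough to keep buffering small, while leaving enough headroom that few packets arrive late.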