The authors investigate how earlier work of van den Berg and Resing and of D'Ambrosio and Melen (1993) can be combined with classical bounds for the GI/G/1 queue to give insight into the output buffer size required for a smooth playout of cells. The effects of both the number of stages across the network and the rate of the stream are considered. A possibly surprising conclusion is that as the rate of the stream decreases, the required output buffer size may actually increase: while the cell delay variation of low-rate streams may cause no problems for buffers within the network, the cell delay variation introduced by the network may cause problems for the customer on output.
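As an illustration of the kind of classical GI/G/1 bound such an analysis can draw on, the sketch below computes Kingman's well-known upper bound on the mean waiting time. The specific parameter values are purely illustrative and are not taken from the paper under review.

```python
# Kingman's upper bound for the mean waiting time in a GI/G/1 queue:
#   E[W] <= lam * (var_a + var_s) / (2 * (1 - rho)),  with rho = lam * E[S] < 1,
# where lam is the arrival rate, var_a and var_s are the variances of the
# interarrival and service times, and E[S] is the mean service time.

def kingman_bound(lam, var_a, var_s, mean_s):
    """Upper bound on the mean waiting time in a stable GI/G/1 queue."""
    rho = lam * mean_s          # traffic intensity
    if rho >= 1:
        raise ValueError("queue is unstable (rho >= 1)")
    return lam * (var_a + var_s) / (2 * (1 - rho))

# Sanity check against M/M/1 (exponential interarrival and service times),
# where the exact mean waiting time is rho / (mu - lam).
lam, mu = 0.8, 1.0
bound = kingman_bound(lam, var_a=1 / lam**2, var_s=1 / mu**2, mean_s=1 / mu)
exact = (lam / mu) / (mu - lam)
print(bound, exact)   # the bound (5.125) dominates the exact value (4.0)
```

Bounds of this form translate waiting-time (and hence delay-variation) estimates into buffer-size requirements once a target playout delay is fixed.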