We describe an algorithm based on acoustic clustering and acoustic adaptation that significantly improves speech recognition performance. The method is particularly useful when speech from multiple speakers must be recognized and the boundaries between speakers are not known. We assume that each test data segment is relatively homogeneous with respect to speaker and acoustic background. The segments are then grouped with an agglomerative acoustic clustering algorithm, so that acoustically similar segments fall into the same cluster. The speech recognition models are adapted separately to each cluster, and the adapted models are then used to recognize the data from that cluster. This algorithm was used in SRI's system for the 1996 DARPA Hub4 partitioned evaluation. Experimental results on the 1996 H4 development data set show that the algorithm yields an improvement of 9.5%.
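To make the clustering step concrete, the following is a minimal sketch of greedy agglomerative clustering of test segments; it is not the paper's implementation. It assumes each segment has already been summarized by a single feature vector (for instance, the mean cepstral vector of its frames), and it merges the closest pair of clusters (by Euclidean distance between centroids) until no pair is closer than a chosen threshold. The function name, the summary representation, and the stopping threshold are all illustrative assumptions.

```python
import numpy as np

def agglomerative_cluster(segment_vectors, max_dist):
    """Greedily merge segments into acoustically similar clusters.

    segment_vectors: one summary feature vector per test segment
                     (an illustrative stand-in for a real acoustic
                     similarity measure).
    max_dist:        stop merging once the closest pair of cluster
                     centroids is farther apart than this threshold.
    Returns a list of clusters, each a list of segment indices.
    """
    # Start with one cluster per segment: (member indices, centroid).
    clusters = [([i], np.asarray(v, dtype=float))
                for i, v in enumerate(segment_vectors)]
    while len(clusters) > 1:
        # Find the closest pair of cluster centroids.
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = np.linalg.norm(clusters[a][1] - clusters[b][1])
                if d <= max_dist and (best is None or d < best[0]):
                    best = (d, a, b)
        if best is None:          # no pair close enough: done
            break
        _, a, b = best
        ids_a, cen_a = clusters[a]
        ids_b, cen_b = clusters[b]
        na, nb = len(ids_a), len(ids_b)
        # Merge the pair; the new centroid is the size-weighted mean.
        merged = (ids_a + ids_b, (na * cen_a + nb * cen_b) / (na + nb))
        clusters = [c for i, c in enumerate(clusters) if i not in (a, b)]
        clusters.append(merged)
    return [members for members, _ in clusters]
```

In the full algorithm, each resulting cluster would then get its own adapted copy of the recognition models, which is used to decode only that cluster's segments.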