This paper presents an empirical study of the performance of different discriminative approaches to reranking the N-best hypotheses output by a large vocabulary continuous speech recognizer (LVCSR). Four algorithms, namely the perceptron, boosting, the ranking support vector machine (SVM), and minimum sample risk (MSR), are compared in terms of domain adaptation, generalization, and time efficiency. In our experiments on Mandarin dictation speech, we found that the perceptron performs best for domain adaptation, while boosting performs best for generalization. The best result on a domain-specific test set, a relative character error rate (CER) reduction of 11% over the baseline, is achieved by the perceptron algorithm. The best result on a general test set, a relative CER reduction of 3.4% over the baseline, is achieved by the boosting algorithm.
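To make the reranking setup concrete, the following is a minimal sketch of a perceptron-style N-best reranker. The feature representation, error measure, and update rule here are illustrative assumptions, not the paper's exact formulation (which may, for example, use an averaged perceptron and LVCSR-specific features):

```python
# Minimal sketch of perceptron-based N-best reranking.
# Each hypothesis is a (features, error) pair, where `features` is a
# sparse dict of feature values and `error` is e.g. a character error
# count against the reference transcript. All names are illustrative.
from collections import defaultdict

def score(weights, feats):
    # Dot product between the weight vector and a sparse feature dict.
    return sum(weights[f] * v for f, v in feats.items())

def perceptron_rerank_train(nbest_lists, epochs=5):
    """Learn reranking weights from a collection of N-best lists."""
    weights = defaultdict(float)
    for _ in range(epochs):
        for nbest in nbest_lists:
            # Oracle: the hypothesis with the lowest error in the list.
            oracle = min(nbest, key=lambda h: h[1])
            # The model's current top choice under the reranking score.
            pred = max(nbest, key=lambda h: score(weights, h[0]))
            if pred[1] > oracle[1]:  # update only on a ranking mistake
                for f, v in oracle[0].items():
                    weights[f] += v
                for f, v in pred[0].items():
                    weights[f] -= v
    return weights
```

At test time, the learned weights simply rescore each N-best list and the top-scoring hypothesis is selected; the other three algorithms compared in the paper plug different training criteria into this same rerank-and-select pipeline.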