Hoeffding racing algorithms achieve computational speedups in settings where the goal is to select the "best" option among a set of alternatives, but the dataset is so massive that scoring every candidate on every data point is too costly. The key idea is to construct confidence intervals for the candidates' scores, which are used to eliminate options sequentially as more samples are processed. We propose a tighter version of Hoeffding racing based on empirical Bernstein inequalities, in which a jackknife estimate replaces the unknown variance. We provide rigorous proofs of the accuracy of our confidence intervals in the case of U-statistics and entropy estimators, and demonstrate the efficacy of our racing algorithms in synthetic experiments.
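To make the elimination idea concrete, the following is a minimal sketch of a racing loop with an empirical-Bernstein-style confidence radius. All names (`race`, `eb_radius`), the particular radius formula (an Audibert-style bound for observations in a range of width `rng_range`), and the simple plug-in variance estimate are illustrative assumptions, not the paper's method; in particular, the paper's jackknife variance estimate is replaced here by the naive empirical variance for brevity.

```python
import math
import random

def eb_radius(var_hat, n, rng_range, delta):
    # Empirical-Bernstein-style confidence radius for the sample mean:
    # sqrt(2 * var_hat * log(3/delta) / n) + 3 * rng_range * log(3/delta) / n.
    # (Illustrative form; the paper uses a jackknife variance estimate instead.)
    log_term = math.log(3.0 / delta)
    return math.sqrt(2.0 * var_hat * log_term / n) + 3.0 * rng_range * log_term / n

def race(sample_fns, rng_range=1.0, delta=0.05, max_samples=20000):
    """Sequentially drop candidates whose upper confidence bound falls
    below the best lower confidence bound among the survivors."""
    k = len(sample_fns)
    alive = set(range(k))
    n = 0
    sums = [0.0] * k    # running sums of scores per candidate
    sumsq = [0.0] * k   # running sums of squared scores per candidate
    while len(alive) > 1 and n < max_samples:
        n += 1
        for i in alive:
            x = sample_fns[i]()  # draw one more score sample for candidate i
            sums[i] += x
            sumsq[i] += x * x
        if n < 2:
            continue  # need at least two samples for a variance estimate
        bounds = {}
        for i in alive:
            mean = sums[i] / n
            var_hat = max(0.0, sumsq[i] / n - mean * mean)  # plug-in variance
            r = eb_radius(var_hat, n, rng_range, delta)
            bounds[i] = (mean - r, mean + r)
        best_lower = max(lo for lo, _ in bounds.values())
        # Keep only candidates whose interval still overlaps the leader's.
        alive = {i for i in alive if bounds[i][1] >= best_lower}
    return max(alive, key=lambda i: sums[i] / n)

random.seed(0)
# Three bounded candidates; index 2 has the clearly highest mean score.
fns = [lambda: random.uniform(0.0, 0.4),
       lambda: random.uniform(0.1, 0.5),
       lambda: random.uniform(0.5, 0.9)]
winner = race(fns)
```

Because the confidence radius shrinks with the empirical variance rather than only the worst-case range, low-variance candidates are separated (and eliminated) after far fewer samples than a pure Hoeffding bound would require.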