Recently, there have been intensive studies of margin-based learning for automatic speech recognition (ASR). It is our belief that by securing a margin from the decision boundaries to the training samples, a correct decision can still be made as long as the mismatches between testing and training samples are within the tolerance region specified by the margin. This property should be effective for robust ASR, where the testing conditions differ from those seen in training. In this paper, we report experimental results with soft margin estimation (SME) on the Aurora2 task and show that SME is very effective under clean training, achieving more than 50% relative word error rate reductions in the clean, 20 dB, and 15 dB testing conditions, and still gives a slight improvement over conventional multi-condition training. This demonstrates that the margin in SME can equip recognizers with good generalization properties under adverse conditions.