The use of multiple distant microphones has been widely studied for meeting recognition. The two most common approaches are 1) combination at the signal level, via blind beamforming, followed by recognition of the single enhanced audio signal, and 2) independent, logically parallel recognition of the multiple audio channels followed by hypothesis-level combination. In this paper we investigate how these two approaches compare for state-of-the-art recognition systems applied to meeting data from the two most recent NIST Rich Transcription evaluations. Our results show that beamforming is the superior approach, giving more accurate results while being inherently less computationally demanding. We then propose a hybrid approach that leverages both beamforming and signal-level diversity for system combination, and show that it yields gains over either of the individual approaches.