In a recent paper, Cheng, Greiner, Kelly, Bell and Liu (Artificial Intelligence 137:43-90, 2002) describe an algorithm for learning Bayesian networks that—in a domain consisting of n variables—identifies the optimal solution using O(n^4) calls to a mutual-information oracle. This seemingly incredible result relies on (1) the standard assumption that the generative distribution is Markov and faithful to some directed acyclic graph (DAG), and (2) a new assumption about the generative distribution that the authors call monotone DAG faithfulness (MDF). The MDF assumption rests on an intuitive connection between active paths in a Bayesian-network structure and the mutual information among variables. The assumption states that the (conditional) mutual information between a pair of variables is a monotonic function of the set of active paths between those variables; the more active paths between the variables, the higher the mutual information. In this paper, we demonstrate the unfortunate result that, for any realistic learning scenario, the monotone DAG faithfulness assumption is incompatible with the faithfulness assumption. In fact, by assuming both MDF and faithfulness, we restrict the class of possible Bayesian-network structures to one for which the optimal solution can be identified with O(n^2) calls to an independence oracle.
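The intuition behind MDF can be illustrated on a toy example; the following sketch (with hypothetical conditional distributions chosen purely for illustration, not taken from the paper) uses a three-variable chain X -> Y -> Z, where the single path between X and Z is active unconditionally but blocked when conditioning on Y, so I(X;Z) > 0 while I(X;Z | Y) = 0:

```python
import itertools
import math

# Hypothetical chain X -> Y -> Z; CPDs are illustrative assumptions only.
p_x = {0: 0.5, 1: 0.5}
p_y_given_x = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
p_z_given_y = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.4, 1: 0.6}}

# Joint distribution P(x, y, z) factorized according to the chain structure.
joint = {}
for x, y, z in itertools.product([0, 1], repeat=3):
    joint[(x, y, z)] = p_x[x] * p_y_given_x[x][y] * p_z_given_y[y][z]

def marginal(positions):
    """Marginalize the joint onto the given variable positions (0=X, 1=Y, 2=Z)."""
    out = {}
    for assignment, p in joint.items():
        key = tuple(assignment[i] for i in positions)
        out[key] = out.get(key, 0.0) + p
    return out

def mutual_info_xz():
    """I(X; Z) = sum_{x,z} P(x,z) log[ P(x,z) / (P(x) P(z)) ]."""
    p_xz, p_xm, p_zm = marginal((0, 2)), marginal((0,)), marginal((2,))
    return sum(p * math.log(p / (p_xm[(x,)] * p_zm[(z,)]))
               for (x, z), p in p_xz.items() if p > 0)

def cond_mutual_info_xz_given_y():
    """I(X; Z | Y) = sum_{x,y,z} P(x,y,z) log[ P(x,y,z) P(y) / (P(x,y) P(y,z)) ]."""
    p_ym, p_xy, p_yz = marginal((1,)), marginal((0, 1)), marginal((1, 2))
    return sum(p * math.log(p * p_ym[(y,)] / (p_xy[(x, y)] * p_yz[(y, z)]))
               for (x, y, z), p in joint.items() if p > 0)
```

With no conditioning the X-Y-Z path is active and the mutual information is strictly positive; conditioning on Y blocks it and the conditional mutual information vanishes (up to floating-point error). MDF asserts that this qualitative pattern extends monotonically: more active paths can never mean less mutual information.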