Education is widely regarded as a key determinant of economic growth and prosperity [8,12]. While the issues in devising a high-quality educational system are multi-faceted and complex, textbooks are acknowledged to be the educational input most consistently associated with gains in student learning. They are the primary conduits for delivering content knowledge to students, and teachers base their lesson plans primarily on the material given in textbooks. With the emergence of abundant online content, cloud computing, and electronic reading devices, textbooks are poised for transformative changes. Notwithstanding understandable misgivings (e.g., the Gutenberg Elegies), textbooks cannot escape what Walter Ong calls 'the technologizing of the word'. The electronic format comes naturally to the current generation of 'digital natives'.

Inspired by the emergence of this new medium for "printing" and "distributing" textbooks, we present our early explorations into developing a data mining based approach for enhancing the quality of electronic textbooks. Specifically, we first describe a diagnostic tool for authors and educators to algorithmically identify deficiencies in textbooks. We then discuss techniques for algorithmically augmenting different sections of a book with links to selective content mined from the Web.

Our tool for diagnosing deficiencies consists of two components. Abstracting from the education literature, we identify the following properties of good textbooks: (1) Focus: each section explains few concepts; (2) Unity: for every concept, there is a unique section that best explains the concept; and (3) Sequentiality: concepts are discussed in a sequential fashion, so that a concept is explained prior to occurrences of that concept or any related concept. Further, a tie for precedence in presentation between two mutually related concepts is broken in favor of the more significant of the two.
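To make the three properties concrete, the following is a minimal sketch of how they might be checked, assuming a book is represented as an ordered list of sections, each annotated with the concepts it mentions and the concepts it best explains. The representation and function names are our own illustration, not the system described in this paper.

```python
from collections import defaultdict

def focus_scores(sections):
    """Focus: fewer concepts explained per section is better."""
    return {s["title"]: len(s["explains"]) for s in sections}

def unity_violations(sections):
    """Unity: a concept best explained in more than one section is a violation."""
    explainers = defaultdict(list)
    for s in sections:
        for c in s["explains"]:
            explainers[c].append(s["title"])
    return {c: secs for c, secs in explainers.items() if len(secs) > 1}

def sequentiality_violations(sections):
    """Sequentiality: a concept should be explained before it is mentioned."""
    explained = set()
    violations = []
    for s in sections:
        for c in s["mentions"]:
            if c not in explained and c not in s["explains"]:
                violations.append((s["title"], c))
        explained.update(s["explains"])
    return violations

# Toy book: section 1.3 re-explains "relation" (unity violation) and
# mentions "function" before any section explains it (sequentiality violation).
book = [
    {"title": "1.1", "mentions": ["set"], "explains": ["set"]},
    {"title": "1.2", "mentions": ["set", "relation"], "explains": ["relation"]},
    {"title": "1.3", "mentions": ["function", "set"], "explains": ["relation"]},
]
```

On this toy book, `unity_violations` flags "relation" and `sequentiality_violations` flags `("1.3", "function")`; a real diagnostic would, of course, first have to extract the concept annotations from the text itself.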
The first component provides an assessment of the extent to which these properties are followed in a textbook and quantifies the comprehension load that a textbook imposes on the reader due to non-sequential presentation of concepts [1,2]. The second component identifies sections that are not written well and can benefit from further exposition. We propose a probabilistic decision model for this purpose, based on the syntactic complexity of the writing and the notion of the dispersion of the key concepts mentioned in the section.

For augmenting a section of a textbook, we first identify the set of key concept phrases contained in the section. Using these phrases, we find web articles that represent the central concepts presented in the section and endow the section with links to them. We also describe techniques for finding images that are most relevant to a section of the textbook, while respecting the constraint that the same image is not repeated in different sections of the same chapter. We pose this problem of matching images to sections in a textbook chapter as an optimization problem and present an efficient algorithm for solving it.
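The intuition behind dispersion is that the key concepts of a well-written section hang together, so concepts that are mutually dissimilar may signal a section needing further exposition. The sketch below (our own illustration; the concept-similarity function is a placeholder assumption, and the paper's model additionally combines this with syntactic complexity) scores a section by the mean pairwise dissimilarity of its key concepts:

```python
from itertools import combinations

def dispersion(concepts, similarity):
    """Mean pairwise dissimilarity (1 - similarity) among a section's key concepts.

    `similarity` is any function mapping a pair of concepts to [0, 1];
    a real system might derive it from Wikipedia links or co-occurrence.
    """
    pairs = list(combinations(concepts, 2))
    if not pairs:
        return 0.0  # zero or one concept: nothing to disperse
    return sum(1.0 - similarity(a, b) for a, b in pairs) / len(pairs)
```

With a toy similarity that returns 1.0 for identical concepts and 0.5 otherwise, three distinct concepts yield a dispersion of 0.5; a threshold on this score could then feed the probabilistic decision on whether a section needs enrichment.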
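The image-matching problem above has the shape of a weighted one-to-one assignment: given relevance scores between images and sections, choose at most one image per section without reusing any image, maximizing total relevance. The following greedy sketch illustrates the problem shape only; it is not the paper's algorithm, and greedy selection is not guaranteed to find the optimal assignment (an exact solution would use, e.g., the Hungarian algorithm):

```python
def assign_images(relevance):
    """Greedy one-to-one assignment of images to sections.

    `relevance` maps (section, image) pairs to scores. Pairs are taken
    in decreasing score order; each section and image is used at most once.
    """
    used_sections, used_images, assignment = set(), set(), {}
    for (sec, img), score in sorted(relevance.items(), key=lambda kv: -kv[1]):
        if sec not in used_sections and img not in used_images:
            assignment[sec] = img
            used_sections.add(sec)
            used_images.add(img)
    return assignment
```

For instance, with scores {(s1,i1): 0.9, (s1,i2): 0.8, (s2,i1): 0.7, (s2,i2): 0.1}, the greedy pass assigns i1 to s1 and then must fall back to i2 for s2, illustrating how the no-repetition constraint couples the per-section choices.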