A Computational Network
- Li Deng
- Dong Yu
in Automatic Speech Recognition --- A Deep Learning Approach
Published by Springer | 2014
In previous chapters we have discussed various deep learning models for automatic speech recognition (ASR). In this chapter we introduce the computational network (CN), a unified framework for describing a wide range of learning machines, such as deep neural networks (DNNs), convolutional neural networks (CNNs), recurrent neural networks (RNNs) including their long short-term memory (LSTM) variant, logistic regression, and maximum entropy models. All these learning machines can be formulated and illustrated as a series of computational steps. A CN is a directed graph in which each leaf node represents an input value or a parameter and each non-leaf node represents a matrix operation acting upon its children. We describe algorithms to carry out forward computation and gradient calculation in a CN, and we introduce the most popular computation node types used in a typical CN. A minimal sketch of this graph view appears below.
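To make the graph view concrete, the following is a minimal sketch (not code from the chapter; the class names `Input`, `Times`, and `Sigmoid` and the helper functions are illustrative) of a CN in which leaf nodes hold inputs or parameters, non-leaf nodes apply matrix operations to their children, and forward and gradient passes walk the graph in topological order.

```python
import numpy as np

class Node:
    """A node in a computational network: leaves hold values, others compute."""
    def __init__(self, children=()):
        self.children = list(children)
        self.value = None      # filled in by forward()
        self.gradient = None   # filled in by backward()

class Input(Node):
    """Leaf node holding an input value or a learnable parameter."""
    def __init__(self, value):
        super().__init__()
        self.value = np.asarray(value, dtype=float)

class Times(Node):
    """Matrix product W @ x of its two children."""
    def forward(self):
        W, x = self.children
        self.value = W.value @ x.value
    def backward(self):
        W, x = self.children
        W.gradient += np.outer(self.gradient, x.value)
        x.gradient += W.value.T @ self.gradient

class Sigmoid(Node):
    """Element-wise logistic sigmoid of its single child."""
    def forward(self):
        (c,) = self.children
        self.value = 1.0 / (1.0 + np.exp(-c.value))
    def backward(self):
        (c,) = self.children
        c.gradient += self.gradient * self.value * (1.0 - self.value)

def topological_order(root):
    """Children before parents, so forward() always sees computed inputs."""
    order, seen = [], set()
    def visit(node):
        if id(node) in seen:
            return
        seen.add(id(node))
        for child in node.children:
            visit(child)
        order.append(node)
    visit(root)
    return order

def forward(root):
    order = topological_order(root)
    for node in order:
        if node.children:          # leaves already carry their value
            node.forward()
    return root.value, order

def backward(root, order):
    for node in order:
        node.gradient = np.zeros_like(node.value)
    root.gradient = np.ones_like(root.value)   # differentiate the sum of outputs
    for node in reversed(order):
        if node.children:
            node.backward()

# One-layer example: y = sigmoid(W x)
x = Input([1.0, 2.0])
W = Input([[0.1, 0.2], [0.3, 0.4]])
y = Sigmoid([Times([W, x])])

value, order = forward(y)
backward(y, order)
print(value)        # network output
print(W.gradient)   # gradient with respect to the parameter leaf W
```

Stacking more `Times`/`Sigmoid` nodes yields a DNN, while sharing parameter leaves across time steps yields an RNN, which is how a single graph formalism covers the model families listed above.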