# Towards Understandable Neural Networks for High Level AI Tasks – Part 5

## Date

November 16, 2015

## Speaker

Paul Smolensky

## Affiliation

Microsoft / Johns Hopkins University

## Overview

Overview of talk series: Current AI software relies increasingly on neural networks (NNs). The universal data structure of NNs is the numerical vector of activity levels of model neurons, typically with activity distributed widely over many neurons. Can NNs in principle achieve human-like performance in higher cognitive domains – such as inference, planning, and grammar – where theories in AI, cognitive science, and linguistics have long argued that abstract, structured symbolic representations are necessary? The work I will present seeks to determine whether, and precisely how, distributed vectors can be functionally isomorphic to symbol structures for computational purposes relevant to AI – at least in certain idealized limits, such as unbounded network size. This work – defining and exploring Gradient Symbolic Computation (GSC) – has produced a number of purely theoretical results. Current work at MSR is exploring the use of GSC to address large-scale practical problems using NNs that can be understood because they operate under the explanatory principles of GSC.

Earlier parts in the series:

- Part 1: http://resnet/fullvideo.aspx?id=36339
- Part 2: http://resnet/fullvideo.aspx?id=36370
- Part 3: http://resnet/fullvideo.aspx?id=36371
- Part 4: http://resnet/fullVideo.aspx?id=36402

Topics include:

- Review from Part 1 how recursive structures built of symbols can be compositionally encoded as distributed numerical vectors: tensor product representations (TPRs)
- Review and extend the discussion from Part 1 of how TPRs can be used to compute recursive symbolic functions with massive parallelism
- Argue for the crucial role of distributed representations in neural networks, and show how distributed TPRs (unlike localized representations) generalize based on similarity of different structural positions/roles
- Present a generative model that can in principle be used to reverse-engineer a trained network to determine whether that network has learned a TPR scheme
- Illustrate, through the example of wh-questions, how networks deploying TPRs go beyond the capabilities of symbol processing because their representations include TPRs not only of purely discrete structures, but also of structures in which discrete positions are occupied by blends of numerically weighted (‘gradient’) symbols – or, equivalently, structures built of discrete symbols that occupy numerically weighted positions (roles)
- Show how certain symbolic constraint-based grammars can be encoded as interconnection-weight matrices which asymptotically compute the TPRs of grammatical structures
- Show how certain symbolic MaxEnt models can be encoded as weight matrices of networks that produce the TPRs of alternative structures with a log-linear probability distribution
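To make the TPR idea concrete, here is a minimal NumPy sketch (not from the talk; all names and dimensions are illustrative) of the core operations: binding a filler vector to a role vector with an outer product, superposing the bindings into one matrix, unbinding a filler by contracting with its role, and forming a gradient blend of symbols at one position.

```python
import numpy as np

# Minimal TPR sketch, assuming orthonormal role vectors so that
# unbinding is exact. Symbols, dimensions, and names are illustrative.
rng = np.random.default_rng(0)

# Filler vectors: distributed encodings of the symbols A, B, C.
fillers = {s: rng.standard_normal(4) for s in "ABC"}

# Role vectors for three string positions; orthonormal (here, the
# standard basis), so each role has an exact unbinding vector.
roles = np.eye(3)  # roles[i] encodes "position i"

# Bind each filler to its positional role with an outer product,
# then superpose: the TPR of the string "CAB" is a single 4x3 matrix.
string = "CAB"
tpr = sum(np.outer(fillers[s], roles[i]) for i, s in enumerate(string))

# Unbinding: contracting the TPR with a role vector recovers the
# filler bound to that role (exactly, because roles are orthonormal).
recovered = tpr @ roles[1]
assert np.allclose(recovered, fillers["A"])

# Gradient Symbolic Computation allows a position to hold a
# numerically weighted blend of symbols rather than one discrete symbol:
blend = np.outer(0.7 * fillers["A"] + 0.3 * fillers["B"], roles[0])
```

With non-orthonormal but linearly independent roles, the same scheme works by unbinding with the dual (pseudo-inverse) role vectors instead of the roles themselves.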

## Speakers

### Paul Smolensky

Paul Smolensky is Krieger-Eisenhower Professor of Cognitive Science at Johns Hopkins University. His research addresses mathematical unification of the continuous and the discrete facets of cognition: principally, the development of grammar formalisms that are grounded in cognitive and neural computation. A member of the Parallel Distributed Processing (PDP) Research Group at UCSD (1986), he developed Harmony Theory, proposing what is now known as the ‘Restricted Boltzmann Machine’ architecture. He then developed Tensor Product Representations (1990), a compositional, recursive technique for encoding symbol structures as real-valued activation vectors. Combining these two theories, he co-developed Harmonic Grammar (1990) and Optimality Theory (1993), general grammatical formalisms now widely used in phonological theory. His publications include the books Mathematical perspectives on neural networks (1996, with M. Mozer, D. Rumelhart), Optimality Theory: Constraint interaction in generative grammar (1993/2004, with A. Prince), Learnability in Optimality Theory (2000, with B. Tesar), and The harmonic mind: From neural computation to optimality-theoretic grammar (2006, with G. Legendre). He was awarded the 2005 David E. Rumelhart Prize for Outstanding Contributions to the Formal Analysis of Human Cognition, a Blaise Pascal Chair in Paris (2008-9), and the 2015 Sapir Professorship of the Linguistic Society of America.

Webpage: http://cogsci.jhu.edu/people/smolensky.html