Uncertainty in Artificial Intelligence (UAI) ’97

UAI ’97 at Brown University

UAI ’97 was held in historic buildings on the Main Green at Brown University. (Photo © 1995 Brown University, all rights reserved; John Forasté, photographer.)

The UAI conference is organized under the auspices of the Association for Uncertainty in AI (AUAI). The Association home page contains information on several issues, including the UAI mailing list for email postings and discussions of topics related to the representation and management of uncertain information. UAI ’97 was the Thirteenth Conference on Uncertainty in Artificial Intelligence.

UAI ’97 Full-Day Course

A one-day intensive UAI course was given on Thursday, July 31, the day before the start of the main UAI ’97 conference. The course provided an immersive review of principles and applications of uncertain reasoning. Access the Full-Day Course Program.

Program Chairs

Dan Geiger and Prakash P. Shenoy

General Conference Chair

Program Committee

  • John Mark Agosta, SRI, Advanced Automation Technology Center
  • Bruce D’Ambrosio, Oregon State University
  • Fahiem Bacchus, University of Waterloo
  • J. F. Baldwin, University of Bristol
  • Riccardo Bellazzi, Università di Pavia
  • Salem Benferhat, IRIT Universite Paul Sabatier
  • Carlo Berzuini, Università di Pavia
  • Philippe Besnard, IRISA, Campus de Beaulieu
  • John Bigham, London University
  • Mark Boddy, Honeywell Technology Center
  • Remco Bouckaert, New Zealand
  • Craig Boutilier, University of British Columbia
  • Jack Breese, Microsoft Research
  • Luis M. de Campos, Universidad de Granada
  • Enrique Castillo, Universidad de Cantabria
  • Greg Cooper, University of Pittsburgh
  • Paul Dagum, Stanford University
  • Adnan Darwiche, American University of Beirut
  • Tom Dean, Brown University
  • Rina Dechter, Tel-Aviv University
  • Denise Draper, Rockwell Palo Alto Lab.
  • Marek Druzdzel, University of Pittsburgh
  • Didier Dubois, IRIT Universite Paul Sabatier
  • Nir Friedman, University of California, Berkeley
  • Robert Fung, Prevision
  • Linda van der Gaag, Utrecht University
  • Alex Gammerman, University of London
  • Hector Geffner, Universidad Simon Bolivar
  • Lluis Godo, Campus Universitat Autonoma de Barcelona
  • Robert Goldman, Honeywell Technology Center
  • Moises Goldszmidt, SRI International
  • Adam Grove, NEC Research Institute
  • Jerzy W. Grzymala-Busse, University of Kansas
  • Peter Haddawy, School of App. Math, NIDA, Thailand
  • Petr Hajek, Academy of Sciences, Czech Republic
  • Joseph Halpern, Cornell University
  • Steve Hanks, University of Washington
  • Othar Hansson, Thinkbank
  • David Heckerman, Microsoft Research
  • Eric Horvitz, Microsoft Research
  • Yen-Teh Hsia, Chung Yuan Christian University, Taiwan
  • Jean-Yves Jaffray, LAFORIA
  • Finn V. Jensen, Aalborg University
  • Frank Jensen, Hugin Expert A/S
  • Michael Jordan, MIT
  • Kazuo Ezawa, AT&T Laboratories
  • Uffe Kjaerulff, Aalborg University

  • Juerg Kohlas, University of Fribourg
  • Daphne Koller, Stanford University
  • Paul Krause, Philips Research Labs
  • Rudolf Kruse, Otto-von-Guericke-Universitaet
  • Henry Kyburg, University of Rochester
  • Jerome Lang, IRIT Universite Paul Sabatier
  • Kathryn Laskey, George Mason University
  • Steffen L. Lauritzen, Stanford University
  • John Lemmer, Rome Laboratory
  • Tod Levitt, IET
  • David Madigan, University of Washington
  • Franco Malvestuto, University “La Sapienza” of Rome
  • Ramon L. de Mantaras, Spanish Sci. Research Council, CSIC
  • Christopher Meek, Microsoft Research
  • Paul Andre Monney, University of Fribourg
  • Serafin Moral, Universidad de Granada
  • Richard Neapolitan, Northeastern Illinois University
  • Pierre Ndilikilikesha, Duke University
  • Eric Neufeld, University of Saskatchewan
  • Ann Nicholson, Monash University
  • Simon Parsons, Queen Mary and Westfield College
  • Ramesh Patil, Information Sciences Institute, USC
  • Judea Pearl, University of California, Los Angeles
  • Mark Peot, Knowledge Industries
  • Kim Leng Poh, National University of Singapore
  • David Poole, University of British Columbia
  • Henri Prade, IRIT Universite Paul Sabatier
  • Greg Provan, Rockwell
  • Silvana Quaglini, Università di Pavia
  • Radim Jirousek, University of Economics, Czech Republic
  • Enrique Ruspini, SRI International
  • Ross Shachter, Stanford University
  • Solomon Eyal Shimony, Ben Gurion University
  • Philippe Smets, IRIDIA Universite libre de Bruxelles
  • Peter Spirtes, Carnegie Mellon University
  • Milan Studeny, Academy of Sciences, Czech Republic
  • Jaap Suermondt, Hewlett Packard Laboratories
  • Marco Valtorta, University of South Carolina
  • Michael Wellman, University of Michigan
  • Michael Wong, University of Regina
  • Yang Xiang, University of Regina
  • John Yen, Texas A&M University
  • Nevin Lianwen Zhang, Hong Kong University of Science & Technology

Program

Thursday, July 31

8:00–8:30
Conference and Course Registration
8:30–6:00
Full-Day Course (see the Full-Day Course Program)

Friday, August 1

8:00–8:25
Main Conference Registration
8:25–8:30

Opening Remarks
Dan Geiger and Prakash P. Shenoy

8:30–9:30

Invited talk I: Local Computation Algorithms
Steffen L. Lauritzen

9:30–10:30
Invited talk II: Coding Theory and Probability Propagation in Loopy Bayesian Networks
Robert J. McEliece
10:30–11:00
Break
11:00–12:00

Plenary Session I: Modeling

  • Object-Oriented Bayesian Networks
    (winner of the best student paper award)
    Daphne Koller and Avi Pfeffer
  • Problem-Focused Incremental Elicitation of Multi-Attribute Utility Models
    Vu Ha and Peter Haddawy
  • Representing Aggregate Belief through the Competitive Equilibrium of a Securities Market
    David M. Pennock and Michael P. Wellman
12:00–1:30
Lunch
1:30–3:00

Plenary Session II: Learning & Clustering

  • A Bayesian Approach to Learning Bayesian Networks with Local Structure
    David Maxwell Chickering, David Heckerman, and Chris Meek
  • Batch and On-line Parameter Estimation in Bayesian Networks
    Eric Bauer, Daphne Koller, and Yoram Singer
  • Sequential Update of Bayesian Network Structure
    Nir Friedman and Moises Goldszmidt
  • An Information-Theoretic Analysis of Hard and Soft Assignment Methods for Clustering
    Michael Kearns, Yishay Mansour, and Andrew Ng
3:00–3:30
Poster Session I: Overview Presentations
3:30–5:30
Poster Session I

  • Algorithms for Learning Decomposable Models and Chordal Graphs
    Luis M. de Campos and Juan F. Huete
  • Defining Explanation in Probabilistic Systems
    Urszula Chajewska and Joseph Y. Halpern
  • Exploring Parallelism in Learning Belief Networks
    T. Chu and Yang Xiang
  • Efficient Induction of Finite State Automata
    Matthew S. Collins and Jonathon J. Oliver
  • A Scheme for Approximating Probabilistic Inference
    Rina Dechter and Irina Rish
  • Limitations of Skeptical Default Reasoning
    Jens Doerpmund
  • The Complexity of Plan Existence and Evaluation in Probabilistic Domains
    Judy Goldsmith, Michael L. Littman, and Martin Mundhenk
  • Learning Bayesian Nets that Perform Well
    Russell Greiner, Dale Schuurmans, and Adam Grove
  • Model Selection for Bayesian-Network Classifiers
    David Heckerman and Christopher Meek
  • Time-Critical Action: Representations and Application
    Eric Horvitz and Adam Seiver
  • Composition of Probability Measures on Finite Spaces
    Radim Jirousek
  • Computational Advantages of Relevance Reasoning in Bayesian Belief Networks
    Yan Lin and Marek J. Druzdzel
  • Support and Plausibility Degrees in Generalized Functional Models
    Paul-Andre Monney
  • On Stable Multi-Agent Behavior in Face of Uncertainty
    Moshe Tennenholtz
  • Cost-Sharing in Bayesian Knowledge Bases
    Solomon Eyal Shimony, Carmel Domshlak and Eugene Santos Jr.
  • Independence of Causal Influence and Clique Tree Propagation
    Nevin L. Zhang and Li Yan

Saturday, August 2

8:30–9:30

Invited talk III: Genetic Linkage Analysis
Alejandro A. Schaffer

9:30–10:30

Plenary Session III: Markov Decision Processes

  • Model Reduction Techniques for Computing Approximately Optimal Solutions for Markov Decision Processes
    Thomas Dean, Robert Givan and Sonia Leach
  • Incremental Pruning: A Simple, Fast, Exact Method for Partially Observable Markov Decision Processes
    Anthony Cassandra, Michael L. Littman and Nevin L. Zhang
  • Region-based Approximations for Planning in Stochastic Domains
    Nevin L. Zhang and Wenju Liu
10:30–11:00

Break

11:00–12:00

Panel Discussion

12:00–1:30

Lunch

1:30–3:00

Plenary Session IV: Foundations

  • Two Senses of Utility Independence
    Yoav Shoham
  • Probability Update: Conditioning vs. Cross-Entropy
    Adam J. Grove and Joseph Y. Halpern
  • Probabilistic Acceptance
    Henry E. Kyburg Jr.
3:00–3:30

Poster Session II: Overview Presentations

3:30–5:30

Poster Session II

  • Network Fragments: Representing Knowledge for Probabilistic Models
    Kathryn Blackmond Laskey and Suzanne M. Mahoney
  • Correlated Action Effects in Decision Theoretic Regression
    Craig Boutilier
  • A Standard Approach for Optimizing Belief-Network Inference
    Adnan Darwiche and Gregory Provan
  • Myopic Value of Information for Influence Diagrams
    Soren L. Dittmer and Finn V. Jensen
  • Algorithm Portfolio Design: Theory vs. Practice
    Carla P. Gomes and Bart Selman
  • Learning Belief Networks in Domains with Recursively Embedded Pseudo Independent Submodels
    J. Hu and Yang Xiang
  • Relational Bayesian Networks
    Manfred Jaeger
  • A Target Classification Decision Aid
    Todd Michael Mansell
  • Structure and Parameter Learning for Causal Independence and Causal Interactions Models
    Christopher Meek and David Heckerman
  • An Investigation into the Cognitive Processing of Causal Knowledge
    Richard E. Neapolitan, Scott B. Morris, and Doug Cork
  • Learning Bayesian Networks from Incomplete Databases
    Marco Ramoni and Paola Sebastiani
  • Incremental Map Generation by Low Cost Robots Based on Possibility/Necessity Grids
    M. Lopez Sanchez, R. Lopez de Mantaras, and C. Sierra
  • Sequential Thresholds: Evolving Context of Default Extensions
    Choh Man Teng
  • Score and Information for Recursive Exponential Models with Incomplete Data
    Bo Thiesson
  • Fast Value Iteration for Goal-Directed Markov Decision Processes
    Nevin L. Zhang and Weihong Zhang

7:15–9:30

UAI ’97 Dinner Banquet

How I Became Uncertain
Eugene Charniak

Sunday, August 3

8:20–9:20

Invited talk IV: Gaussian processes – a replacement for supervised neural networks?
David J.C. MacKay

9:20–10:40

Plenary Session V: Applications of Uncertain Reasoning

  • Bayes Networks for Sonar Sensor Fusion
    Ami Berler and Solomon Eyal Shimony
  • Image Segmentation in Video Sequences: A Probabilistic Approach
    Nir Friedman and Stuart Russell
  • Lexical Access for Speech Understanding using Minimum Message Length Encoding
    Ian Thomas, Ingrid Zukerman, Bhavani Raskutti, Jonathan Oliver, and David Albrecht
  • Perception, Attention, and Resources: A Decision-Theoretic Approach to Graphics Rendering
    Eric Horvitz and Jed Lengyel

10:40–11:00

Break

11:00–12:00

Panel Discussion

12:00–1:30

Lunch

1:30–3:00

Plenary Session VI: Developments in Belief and Possibility

  • Decision-making under Ordinal Preferences and Comparative Uncertainty
    D. Dubois, H. Fargier, and H. Prade
  • Inference with Idempotent Valuations
    Luis D. Hernandez and Serafin Moral
  • Corporate Evidential Decision Making in Performance Prediction Domains
    A.G. Buchner, W. Dubitzky, A. Schuster, P. Lopes, P.G. O’Donoghue, J.G. Hughes, D.A. Bell, K. Adamson, J.A. White, J. Anderson, and M.D. Mulvenna
  • Exploiting Uncertain and Temporal Information in Correlation
    John Bigham
3:00–3:30

Break

3:30–5:00

Plenary Session VII: Topics on Inference

  • Nonuniform Dynamic Discretization in Hybrid Networks
    Alexander V. Kozlov and Daphne Koller
  • Robustness Analysis of Bayesian Networks with Local Convex Sets of Distributions
    Fabio Cozman
  • Structured Arc Reversal and Simulation of Dynamic Probabilistic Networks
    Adrian Y. W. Cheuk and Craig Boutilier
  • Nested Junction Trees
    Uffe Kjaerulff

Abstracts

Invited talk I: Local Computation Algorithms

Speaker: Steffen L. Lauritzen

Inference in probabilistic expert systems has been made possible by the development of efficient algorithms that in one way or another involve message passing between local entities arranged to form a junction tree. Many of these algorithms share a common structure that can be partly formalized in abstract axioms with an algebraic flavor. However, the existing abstract frameworks do not fully capture all interesting cases of such local computation algorithms. The lecture described the basic elements of the algorithms, giving examples of interesting local computations that are covered by current abstract frameworks as well as examples that are not, with a view toward exploiting the full potential of these ideas.
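The message-passing pattern Lauritzen describes fits in a few lines. Below is a minimal sketch, assuming the smallest possible junction tree: two cliques {A,B} and {B,C} over binary variables, sharing the separator {B}. The potentials are illustrative numbers, not from any real system.

```python
# Minimal junction-tree message passing: cliques {A,B} and {B,C},
# separator {B}. All variables are binary; potentials are made up.
import numpy as np

# Clique potentials: phi_ab[a, b] and phi_bc[b, c].
phi_ab = np.array([[0.9, 0.1],
                   [0.4, 0.6]])   # e.g. P(A) * P(B | A) folded together
phi_bc = np.array([[0.7, 0.3],
                   [0.2, 0.8]])   # e.g. P(C | B)

# Message from clique {A,B} to clique {B,C}: sum A out of phi_ab,
# leaving a function of the separator variable B alone.
msg_b = phi_ab.sum(axis=0)        # shape (2,): one entry per state of B

# The receiving clique absorbs the message by pointwise multiplication.
belief_bc = phi_bc * msg_b[:, None]

# Any marginal is now a purely local computation on the clique belief.
p_c = belief_bc.sum(axis=0)
p_c /= p_c.sum()                  # normalize to obtain P(C)
print("P(C) =", p_c)
```

The same two steps, marginalizing onto the separator and multiplying into the neighboring clique, drive the general algorithm on junction trees of any size.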

Invited talk II: Coding Theory and Probability Propagation in Loopy Bayesian Networks

Speaker: Robert J. McEliece

In 1993 a group of coding researchers in France devised, as part of their astonishing “turbo code” breakthrough, a remarkable iterative decoding algorithm. This algorithm can be viewed as an inference algorithm on a Bayesian network, but (a) it is approximate, not exact, and (b) it violates a sacred assumption in Bayesian analysis, viz., that the network should have no loops. Indeed, it is accurate to say that the turbo decoding algorithm is functionally equivalent to Pearl’s algorithm applied to a certain directed bipartite graph in which the messages circulate indefinitely, either until convergence is reached or (more realistically) for a fixed number of cycles. With hindsight, it is possible to trace a continuous chain of “loopy” belief propagation algorithms within the coding community, beginning in 1962 with Gallager’s iterative decoding algorithm for low-density parity-check codes, continued in 1981 by Tanner, and much more recently (1995–1996) by Wiberg and by MacKay and Neal. This talk challenged the UAI community to reassess the conventional wisdom that probability propagation works only in trees, since the coding community has now accumulated considerable experimental evidence that, in some cases at least, “loopy” belief propagation works, at least approximately. McEliece brought the AI audience up to speed on the latest developments in coding, emphasizing convolutional codes, since they are the building blocks of turbo codes. Two of the most important (pre-turbo) decoding algorithms, viz. Viterbi (1967) and BCJR (1974), can be stated in orthodox Bayesian network terms. BCJR, for example, is an anticipation of Pearl’s algorithm on a special kind of tree, and Viterbi’s algorithm gives a solution to the “most probable explanation” problem on the same structure. Thus coding theorists and AI people have been working on, and solving, similar problems for a long time. It would be nice if they became more aware of each other’s work.
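As a toy illustration of this “loopy” regime (a generic pairwise sum-product sketch, not turbo decoding itself), the following Python fragment iterates Pearl-style messages on a three-variable cycle for a fixed number of sweeps, exactly the schedule the talk describes. All potentials are made-up numbers.

```python
# Sum-product belief propagation on a 3-cycle of binary variables,
# run for a fixed number of sweeps despite the loop.
import numpy as np

n = 3                                   # variables 0-1-2-0 form a cycle
psi_node = np.array([[0.6, 0.4],
                     [0.5, 0.5],
                     [0.3, 0.7]])       # unary potentials psi_i(x_i)
psi_edge = np.array([[0.8, 0.2],
                     [0.2, 0.8]])       # same pairwise potential on every edge

edges = [(0, 1), (1, 2), (2, 0)]
# messages[(i, j)] is the message from variable i to variable j
messages = {(i, j): np.ones(2)
            for (a, b) in edges for (i, j) in [(a, b), (b, a)]}

for sweep in range(20):                 # fixed number of cycles, as in turbo decoding
    new = {}
    for (i, j) in messages:
        # Product of i's unary potential and all incoming messages except j's.
        incoming = psi_node[i].copy()
        for (k, l) in messages:
            if l == i and k != j:
                incoming *= messages[(k, l)]
        m = psi_edge.T @ incoming       # sum over x_i against psi(x_i, x_j)
        new[(i, j)] = m / m.sum()       # normalize for numerical stability
    messages = new

# Approximate beliefs: unary potential times all incoming messages.
for i in range(n):
    b = psi_node[i].copy()
    for (k, l) in messages:
        if l == i:
            b *= messages[(k, l)]
    print(f"belief({i}) =", b / b.sum())
```

On a tree these updates converge to the exact marginals; on the cycle they produce the kind of approximate beliefs that the coding experiments found to work surprisingly well in practice.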

Invited talk III: Genetic Linkage Analysis

Speaker: Alejandro A. Schaffer

Genetic linkage analysis is a collection of statistical techniques used to infer the approximate chromosomal location of disease-susceptibility genes from family-tree data. Among the widely publicized linkage discoveries in 1996 were the approximate locations of genes conferring susceptibility to Parkinson’s disease, prostate cancer, Crohn’s disease, and adult-onset diabetes. Most linkage analysis methods are based on maximum likelihood estimation, and parametric linkage analysis in particular relies on probabilistic inference on Bayesian networks, the same machinery studied in the UAI community. Schaffer gave a self-contained overview of the genetics, statistics, algorithms, and software used in real linkage analysis studies.
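As one concrete anchor for the maximum-likelihood machinery mentioned above, the standard LOD score of parametric linkage analysis (a textbook definition, not specific to this talk) compares the likelihood of the pedigree data at a hypothesized recombination fraction θ between a marker and the putative disease locus against the no-linkage hypothesis θ = 1/2:

    LOD(θ) = log₁₀ [ L(θ) / L(θ = 1/2) ]

A LOD score of 3 or more, i.e., odds of at least 1000:1 in favor of linkage, is the conventional threshold for declaring linkage.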

Invited talk IV: Gaussian processes – a replacement for supervised neural networks?

Speaker: David J.C. MacKay

Feedforward neural networks such as multilayer perceptrons are popular tools for nonlinear regression and classification problems. From a Bayesian perspective, the choice of a neural network model can be viewed as defining a prior probability distribution over nonlinear functions, and the network’s learning process can be interpreted in terms of the posterior probability distribution over the unknown function. (Some learning algorithms search for the function with maximum posterior probability; others use Monte Carlo methods to draw samples from this posterior.) In the limit of large but otherwise standard networks, Neal (1996) has shown that the prior distribution over nonlinear functions implied by the Bayesian neural network falls in a class of probability distributions known as Gaussian processes. The hyperparameters of the neural network model determine the characteristic lengthscales of the Gaussian process. Neal’s observation motivates the idea of discarding parameterized networks and working directly with Gaussian processes. Computations in which the parameters of the network are optimized are then replaced by simple matrix operations using the covariance matrix of the Gaussian process. In this talk MacKay reviewed work on this idea by Neal, Williams, Rasmussen, Barber, Gibbs, and MacKay, and assessed whether, for supervised regression and classification tasks, the feedforward network has been superseded.
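The phrase “simple matrix operations using the covariance matrix” can be made concrete with a short sketch. The following Python/NumPy fragment performs Gaussian-process regression with a squared-exponential covariance; the kernel choice, lengthscale, noise level, and toy data are illustrative assumptions, not anything presented in the talk.

```python
# Gaussian-process regression: prediction is pure linear algebra
# on the covariance (kernel) matrix of the training inputs.
import numpy as np

def kernel(xa, xb, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance k(x, x')."""
    d = xa[:, None] - xb[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

# Toy training data: noisy observations of a smooth function.
rng = np.random.default_rng(0)
x_train = np.linspace(-3, 3, 10)
y_train = np.sin(x_train) + 0.1 * rng.standard_normal(10)
noise = 0.1 ** 2

# Posterior prediction at test inputs: no network parameters are
# optimized; everything reduces to solving linear systems.
x_test = np.linspace(-3, 3, 50)
K = kernel(x_train, x_train) + noise * np.eye(len(x_train))
K_star = kernel(x_test, x_train)

mean = K_star @ np.linalg.solve(K, y_train)            # posterior mean
cov = kernel(x_test, x_test) - K_star @ np.linalg.solve(K, K_star.T)
std = np.sqrt(np.diag(cov))                            # predictive uncertainty

print(mean[:5], std[:5])
```

Note that nothing here is “trained” in the neural-network sense: the predictive mean and variance come directly from the covariance matrix, which is what makes the Gaussian-process view a candidate replacement for parameterized networks.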