UAI ’97 Full-Day Course on Uncertain Reasoning

Program

Time         Session

8:25–8:30    Opening Remarks
             Dan Geiger and Prakash P. Shenoy

Part I: Foundations

8:30–9:20    Fundamental Principles of Representation and Inference
             Instructor: Ross Shachter, Stanford University
9:20–9:30    Discussion
9:30–10:20   Graphical Models in the Real World
             Instructors: Mark Peot and Michael Shwe, Knowledge Industries
10:20–10:30  Discussion
10:30–11:00  Coffee Break
11:00–11:50  A Unifying View on Inference
             Instructor: Rina Dechter, University of California–Irvine
11:50–12:00  Discussion
12:00–1:30   Lunch

Part II: Advanced Topics

1:30–2:20    Advances in Learning Bayesian Networks
             Instructor: David Heckerman, Microsoft Research
2:20–2:30    Discussion
2:30–3:20    Approximate Inference via Variational Techniques
             Instructor: Michael Jordan, M.I.T.
3:20–3:30    Discussion
3:30–4:00    Coffee Break
4:00–4:50    Causality: From Metaphysics to Inference and Reasoning I
             Instructor: Judea Pearl, UCLA
4:50–5:00    Discussion
5:00–5:50    Causality: From Metaphysics to Inference and Reasoning II
             Instructor: Judea Pearl, UCLA
5:50–6:00    Discussion

Abstracts

Advances in Learning Bayesian Networks

Instructor: David Heckerman, Microsoft Research

David Heckerman will discuss methods for learning with Bayesian networks, that is, methods for updating the parameters and structure of a Bayesian network given data. He will begin with a review of Bayesian statistics, touching on the concepts of subjective probability, objective probability, random sample, exponential family, sufficient statistics, and conjugate priors. He will then discuss how methods for model averaging and model selection from Bayesian statistics can be adapted to Bayesian-network learning. Topics will include criteria for model selection, techniques for assigning priors, and search methods. Time permitting, he will discuss methods for handling missing data, including Monte Carlo and Gaussian approximations. At least one real-world application will be presented.
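To make the conjugate-prior machinery mentioned above concrete, here is a minimal sketch (not course material) of the simplest case of Bayesian parameter learning: a Beta prior on a binary variable's probability, updated in closed form from data.

```python
# Illustrative sketch, not from the course: conjugate Beta-Bernoulli update,
# the simplest instance of updating a Bayesian-network parameter from data.

def beta_bernoulli_update(alpha, beta, data):
    """Update a Beta(alpha, beta) prior with a list of 0/1 observations."""
    successes = sum(data)
    failures = len(data) - successes
    # Conjugacy: the posterior is again a Beta, with counts added in.
    return alpha + successes, beta + failures

# Uniform prior Beta(1, 1); observe 7 successes in 10 trials.
a, b = beta_bernoulli_update(1, 1, [1] * 7 + [0] * 3)
posterior_mean = a / (a + b)  # (1 + 7) / (1 + 7 + 1 + 3)
```

The counts of successes and failures are the sufficient statistics here; the same pattern extends to each conditional distribution in a discrete network via Dirichlet priors.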

Approximate Inference via Variational Techniques

Instructor: Michael Jordan, M.I.T.

For many graphical models of practical interest, exact inferential calculations are intractable and approximations must be developed. In this tutorial Jordan will describe the principles behind the use of variational methods for approximate inference. These methods provide upper and lower bounds on probabilities in graphical models. They are complementary to the exact techniques in the sense that they tend to be more accurate for dense networks than for sparse networks; moreover, they can readily be combined with exact techniques. Jordan will describe the application of variational ideas in a number of settings, including the QMR database. (This is joint work with Zoubin Ghahramani, Tommi Jaakkola, and Lawrence Saul.)
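As a small illustration of the lower bounds such methods deliver (a sketch, not drawn from the tutorial itself), the evidence lower bound obtained from Jensen's inequality can be checked numerically on a tiny two-component mixture, where the exact marginal is still computable:

```python
import math

# Illustrative sketch: the variational lower bound for a mixture of two
# unit-variance Gaussians. For ANY distribution q over the latent z,
#   E_q[log p(x, z) - log q(z)] <= log p(x).

def log_joint(x, z, weights=(0.3, 0.7), means=(-1.0, 2.0)):
    """log p(x, z) for component z of the mixture (illustrative numbers)."""
    w, m = weights[z], means[z]
    return math.log(w) - 0.5 * math.log(2 * math.pi) - 0.5 * (x - m) ** 2

def log_marginal(x):
    """Exact log p(x) by summing out z -- feasible only in tiny models."""
    return math.log(sum(math.exp(log_joint(x, z)) for z in (0, 1)))

def elbo(x, q):
    """Variational lower bound E_q[log p(x, z) - log q(z)]."""
    return sum(q[z] * (log_joint(x, z) - math.log(q[z])) for z in (0, 1))

x = 0.5
bound = elbo(x, [0.5, 0.5])
exact = log_marginal(x)  # the bound never exceeds this
```

Tightening the bound over a tractable family of q is what turns this inequality into an approximate-inference algorithm for dense networks such as QMR.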

Causality: From Metaphysics to Inference and Reasoning I

Instructor: Judea Pearl, UCLA

The traditional conception of Bayesian networks as carriers of conditional independence information is rapidly giving way to a causal conception, based on mechanisms and interventions. The result is a more natural understanding of what the networks stand for, what judgments are used in constructing these networks and, most importantly, how actions and plans are to be handled within the framework of standard probability theory. Pearl’s aim in this tutorial is to explain the mathematical foundation and inferential capabilities of causal Bayesian networks, and to advocate their use as the standard tool of analysis. To this end, Pearl will focus on the non-controversial aspects of causation and on the basic tools and skills required for the solution of tangible causal problems. Philosophical speculations will be kept to a minimum. Starting with a functional description of physical mechanisms, we will derive the standard probabilistic properties of Bayesian networks and show, additionally:

  • How the effects of unanticipated actions can be predicted from the network topology
  • How qualitative judgments can be integrated with statistical data (with unobserved variables) to assess the strength of causal influences
  • How actions interact with observations
  • How counterfactual sentences can be interpreted and evaluated
  • What assumptions are needed for inferring causes from data, and what guarantees accompany such inference
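
The first point above can be sketched numerically. The following is an illustrative example with made-up numbers, not taken from the tutorial: in a network where Z causes both X and Y, the effect of setting X by intervention is predicted by deleting X's own mechanism and keeping the rest, which generally differs from ordinary conditioning on X.

```python
# Illustrative sketch (hypothetical numbers): intervention vs. observation in
# a causal network Z -> X, Z -> Y, X -> Y, via truncated factorization.

P_z = {0: 0.6, 1: 0.4}                    # P(Z)
P_x1_given_z = {0: 0.2, 1: 0.8}           # P(X=1 | Z)
P_y1_given_xz = {                         # P(Y=1 | X, Z)
    (0, 0): 0.1, (0, 1): 0.5,
    (1, 0): 0.4, (1, 1): 0.9,
}

def p_y1_do_x1():
    """P(Y=1 | do(X=1)): delete the mechanism for X, keep P(Z), P(Y|X,Z)."""
    return sum(P_z[z] * P_y1_given_xz[(1, z)] for z in (0, 1))

def p_y1_given_x1():
    """P(Y=1 | X=1): plain conditioning reweights Z, since Z predicts X."""
    num = sum(P_z[z] * P_x1_given_z[z] * P_y1_given_xz[(1, z)] for z in (0, 1))
    den = sum(P_z[z] * P_x1_given_z[z] for z in (0, 1))
    return num / den

# The two quantities differ because Z confounds X and Y.
```

The gap between the two numbers is exactly what a causal reading of the network adds: the topology alone tells us which factors survive the intervention.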