- Dr. Adnan Darwiche
Adnan Darwiche is a professor and former chairman of the computer science department at UCLA. He earned his PhD and MSc degrees in computer science from Stanford University in 1993 and 1989, respectively. His research focuses on the theory and practice of symbolic and probabilistic reasoning, with recent applications to machine learning. Professor Darwiche served as editor-in-chief of the Journal of Artificial Intelligence Research (JAIR) and is an AAAI Fellow. He is also the author of “Modeling and Reasoning with Bayesian Networks”, published by Cambridge University Press in 2009.
Tractable Learning in Structured Probability Spaces
Over the past few decades, various approaches have been introduced for learning probabilistic models, depending on whether the examples are labeled or unlabeled, and whether they are complete or incomplete. In this talk, I will introduce an orthogonal class of machine learning problems, which has not previously received the same systematic treatment. In these problems, one has access to Boolean constraints that characterize examples known to be impossible (e.g., due to known domain physics). The task is then to learn a tractable probabilistic model over the structured space defined by these constraints.
I will describe a new class of Arithmetic Circuits, the PSDD, for addressing this class of learning problems. The PSDD is based on advances from both machine learning and logical reasoning and can be learned under Boolean constraints. I will also provide a number of results on learning PSDDs. First, I will contrast PSDD learning with approaches that ignore known constraints, showing how it can learn more accurate models. Second, I will show that PSDDs can be utilized to learn, in a domain-independent manner, distributions over combinatorial objects, such as rankings, game traces and routes on a map. Third, I will show how PSDDs can be learned from a new type of dataset, in which examples are specified using arbitrary Boolean expressions. A number of case studies will be presented throughout the talk, including the unsupervised learning of preference rankings and the supervised learning of classifiers for routes and game traces.
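The learning setting itself is easy to illustrate on a toy scale. The sketch below (an illustration of the problem, not of the PSDD algorithm; the constraint and data are made up) fits a maximum-likelihood distribution restricted to the satisfying assignments of a Boolean constraint, so impossible examples receive probability exactly zero by construction:

```python
from itertools import product

# Three Boolean variables a, b, c.  Assumed constraint for illustration:
# "a OR b", i.e. any example with a = b = 0 is known to be impossible.
def constraint(a, b, c):
    return bool(a or b)

# The structured space = all satisfying assignments of the constraint.
space = [x for x in product([0, 1], repeat=3) if constraint(*x)]

# Complete, unlabeled training examples (each satisfies the constraint).
data = [(1, 0, 1), (1, 1, 0), (0, 1, 1), (1, 0, 1)]

# Maximum-likelihood estimate over the structured space only:
# examples ruled out by the constraint get probability 0 automatically.
counts = {x: 0 for x in space}
for x in data:
    counts[x] += 1
p = {x: counts[x] / len(data) for x in space}

print(len(space))       # 6 of the 8 assignments survive the constraint
print(p[(1, 0, 1)])     # 0.5
print((0, 0, 1) in p)   # False: excluded by the constraint
```

A model learned while ignoring the constraint would instead spread probability mass over all 8 assignments, including the two known to be impossible; a PSDD makes this constraint-respecting estimation tractable well beyond toy sizes.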
- Dr. Robert Holte
Professor Robert Holte of the Computing Science Department at the University of Alberta is a former editor-in-chief of the journal Machine Learning and co-founder and former director of the world-renowned Alberta Innovates Centre for Machine Learning (AICML, now known as Amii). His current research is on single-agent heuristic search, with seminal contributions on bidirectional search, methods for predicting the run-time of a search algorithm, and the use of machine learning to create search heuristics. Professor Holte was elected a Fellow of the AAAI in 2011.
Heuristic Search: Something Old and Something New
I begin this talk with a review of long-established results in heuristic search and the early history of bidirectional heuristic search. I then describe a recent breakthrough in bidirectional heuristic search (the MM algorithm), which challenges long-held assumptions and exposes exciting new research directions. Although the technical details in this talk are focused on heuristic search, the general lessons with which I conclude are relevant to researchers in all branches of A.I.
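The basic idea behind bidirectional search is simple to sketch: search forward from the start and backward from the goal, and stop when the two frontiers meet. The toy breadth-first version below (an illustration of that basic idea only, not the MM algorithm; the example graph is made up) shows the mechanics; guaranteeing optimality and a meeting point "in the middle" in general is exactly the kind of issue the talk's MM algorithm addresses:

```python
from collections import deque

def bidirectional_bfs(graph, start, goal):
    """Shortest-path length on an unweighted graph via two BFS frontiers."""
    if start == goal:
        return 0
    dist_f, dist_b = {start: 0}, {goal: 0}   # forward / backward distances
    q_f, q_b = deque([start]), deque([goal])
    while q_f and q_b:
        # Expand the smaller frontier (a common rule of thumb).
        if len(q_f) <= len(q_b):
            q, dist, other = q_f, dist_f, dist_b
        else:
            q, dist, other = q_b, dist_b, dist_f
        n = q.popleft()
        for m in graph[n]:
            if m in other:                   # the two frontiers have met
                return dist[n] + 1 + other[m]
            if m not in dist:
                dist[m] = dist[n] + 1
                q.append(m)
    return None  # no path exists

# Hypothetical example: a simple path graph 0-1-2-3-4-5.
graph = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 5] for i in range(6)}
print(bidirectional_bfs(graph, 0, 5))  # 5
```

Each frontier only needs to reach roughly half the solution depth, which is the classical motivation for searching from both ends at once.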
- Dr. Hugo Larochelle
Hugo Larochelle is a Research Scientist at Twitter and an Assistant Professor at the Université de Sherbrooke (UdeS). Before 2011, he spent two years in the machine learning group at the University of Toronto, as a postdoctoral fellow under the supervision of Geoffrey Hinton. He obtained his Ph.D. at Université de Montréal, under the supervision of Yoshua Bengio. He is the recipient of two Google Faculty Awards. His professional involvement includes associate editor for the IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), member of the editorial board of the Journal of Artificial Intelligence Research (JAIR), and program chair for the International Conference on Learning Representations (ICLR) in 2015, 2016 and 2017.
Autoregressive Generative Models with Deep Learning
In AI, the two dominant approaches to learning generative models of data have mostly been based on either directed graphical models or undirected graphical models. In this talk, I'll discuss a third approach, which has become popular only recently: autoregressive generative models. An appealing property of these models is that they can learn powerful yet tractable estimates of data distributions. Thanks to neural networks, this family of models has been shown to be very competitive for distributions as complex as that of natural images, both in terms of the realism of the data they can generate and the data representations they can learn. I'll discuss a variety of such neural autoregressive models and dissect the advantages and disadvantages of this approach.
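The tractability the abstract mentions comes from the chain rule of probability, p(x) = ∏ᵢ p(xᵢ | x₁, …, xᵢ₋₁): with each conditional parameterized directly, exact likelihood evaluation is a single pass over the conditionals and sampling is ancestral. The sketch below (a minimal illustration in the spirit of a fully visible sigmoid belief network, with random untrained weights purely for demonstration) shows both operations on binary vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4
# Strictly lower-triangular weights, so x_i depends only on x_1..x_{i-1}.
W = np.tril(rng.normal(size=(D, D)), k=-1)
b = rng.normal(size=D)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_prob(x):
    # Exact log-likelihood in one pass: thanks to the triangular W,
    # p[i] is p(x_i = 1 | x_<i) for every position i simultaneously.
    p = sigmoid(W @ x + b)
    return float(np.sum(x * np.log(p) + (1 - x) * np.log(1 - p)))

def sample():
    # Ancestral sampling: draw x_1, then x_2 given x_1, and so on.
    x = np.zeros(D)
    for i in range(D):
        x[i] = rng.random() < sigmoid(W[i] @ x + b[i])
    return x

x = sample()
print(x, log_prob(x))
```

Because the conditionals define a valid joint distribution by construction, the probabilities of all 2^D binary vectors sum to one exactly; neural autoregressive models such as NADE or PixelRNN/PixelCNN keep this structure while making each conditional far more expressive.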