Keynote Speakers
Dr. Doina Precup
McGill University
"Building AI Agents With Reinforcement Learning"
Friday May 28 at 14:00 - 15:00 EDT | 11:00 - 12:00 PDT
Webinar link: https://zoom.us/j/91650665548
Reinforcement learning allows autonomous agents to learn how to act in a stochastic, unknown environment with which they can interact. Deep reinforcement learning, in particular, has achieved great success in well-defined application domains, such as Go or chess, in which an agent has to learn how to act and there is a clear success criterion. In this talk, I will focus on the potential role of reinforcement learning as a tool for building general AI agents, which can continue to expand their knowledge over time and handle complex, evolving tasks. I will argue that learning from interaction and reward optimization can lead naturally to the emergence of different intelligence traits. I will also discuss how reinforcement learning agents can build abstract predictive and procedural knowledge. Finally, I will discuss the challenge of evaluating reinforcement learning agents whose goal is not just to control their environment, but also to build knowledge about their world.
Biography
Doina Precup splits her time between McGill University, where she co-directs the Reasoning and Learning Lab in the School of Computer Science, and DeepMind Montreal, where she has led the research team since its formation in October 2017. Her research interests are in the areas of reinforcement learning, deep learning, time series analysis, and diverse applications of machine learning in health care, automated control, and other fields. She became a senior member of the Association for the Advancement of Artificial Intelligence in 2015, Canada Research Chair in Machine Learning in 2016, Senior Fellow of the Canadian Institute for Advanced Research in 2017, and received a Canada CIFAR AI (CCAI) Chair in 2018. Dr. Precup is also involved in activities supporting the organization of Mila and the wider Montreal and Quebec AI ecosystem.
Dr. Graham Taylor
University of Guelph
"Advances in Conditional Generative Models"
Thursday May 27 at 13:00 - 14:00 EDT | 10:00 - 11:00 PDT
Webinar link: https://zoom.us/j/91650665548
In this talk, I will provide an overview of my group's recent work on conditional generative models. Conditional generative models take some context (the condition) and perform controlled synthesis of text, images, or some other kind of structured output. They are computationally intensive, unwieldy to train, and difficult to evaluate, given the subjective nature of their creations.
I'll start by showing that more data is not always better when it comes to generative models: a novel automatic data-selection process can make training easier and the resulting models more robust, with little loss of diversity. I will also describe a technique for iterative image generation, inspired by the way a sketch artist composes a scene. I'll show how to quantitatively evaluate generative models' outputs in a way that captures quality, diversity, and consistency with the instructions we provide. I'll point to different applications along the way, from hairstyle transfer to assembling LEGO.
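As a rough illustration of one quantity such an evaluation might capture (a generic diversity proxy sketched here for intuition, not the metric presented in the talk): the mean pairwise distance between embeddings of generated samples distinguishes a mode-collapsed generator from one with varied outputs.

```python
import numpy as np

# Illustrative sketch only: a crude diversity score for a set of
# generated samples, computed as the mean pairwise Euclidean distance
# between their feature embeddings. Identical outputs score 0;
# spread-out outputs score higher. Real evaluations must also capture
# quality and consistency with the conditioning instructions.

def mean_pairwise_distance(embeddings):
    X = np.asarray(embeddings, dtype=float)
    n = len(X)
    dists = [np.linalg.norm(X[i] - X[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))

collapsed = [[1.0, 0.0]] * 4                    # mode-collapsed generator
diverse = [[0, 0], [1, 0], [0, 1], [1, 1]]      # varied outputs
print(mean_pairwise_distance(collapsed))        # 0.0
print(mean_pairwise_distance(diverse))
```

A diversity score alone is easy to game (random noise is maximally diverse), which is why it is typically paired with a quality measure.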
Biography
Graham Taylor is a Canada Research Chair and Associate Professor of Engineering at the University of Guelph. He directs the University of Guelph Centre for Advancing Responsible and Ethical AI and is a member of the Vector Institute for AI. He has co-organized the annual CIFAR Deep Learning Summer School and trained more than 60 students and researchers on AI-related projects. In 2016 he was named one of 18 inaugural CIFAR Azrieli Global Scholars. In 2018 he was honoured as one of Canada's Top 40 Under 40. In 2019 he was named a Canada CIFAR AI Chair. He spent 2018-2019 as a Visiting Faculty member at Google Brain, Montreal. Graham co-founded Kindred, which was ranked number 29 on MIT Technology Review's 2017 list of the smartest companies in the world. He is the Academic Director of NextAI, a non-profit accelerator for AI-focused entrepreneurs.
Dr. Osmar R. Zaïane
University of Alberta
"From an Interpretable Predictive Model to a Model-Agnostic Explanation"
Wednesday May 26 at 12:30 - 13:30 EDT | 9:30 - 10:30 PDT
Webinar link: https://zoom.us/j/91650665548
Today, the limelight is on deep learning. With its huge success, other machine learning paradigms have had to take a back seat. Yet other models, particularly rule-based learning methods, are more readable and explainable, can even be competitive when labelled data is not abundant, and could therefore be more suitable for applications where transparency is a must. One such rule-based method is the lesser-known associative classifier. The power of associative classifiers is to discover patterns in the data and perform classification based on the features that are most indicative of the prediction. Early approaches suffer from cumbersome thresholds that require prior knowledge. We will present a new associative classifier approach that is more accurate while generating a smaller model. It can also be used in an explainable-AI pipeline to explain inferences from other classifiers, irrespective of the predictive model used inside the black box.
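To give a flavour of the general idea (this is a generic toy sketch of class-association-rule mining, not the new approach presented in the talk): mine rules of the form {feature items} -> class that exceed support and confidence thresholds, then classify a new instance with the best matching rule.

```python
from itertools import combinations
from collections import Counter

# Toy associative classifier: mine class-association rules
# {feature items} -> class above support/confidence thresholds,
# then classify with the highest-confidence matching rule.
# Thresholds and feature names are illustrative.

def mine_rules(data, labels, min_support=0.25, min_confidence=0.9, max_len=2):
    """data: list of sets of feature items; labels: parallel list of classes."""
    n = len(data)
    item_counts = Counter()    # how often each itemset occurs overall
    class_counts = Counter()   # how often it occurs with each class
    for items, label in zip(data, labels):
        for k in range(1, max_len + 1):
            for combo in combinations(sorted(items), k):
                item_counts[combo] += 1
                class_counts[(combo, label)] += 1
    rules = []
    for (combo, label), cnt in class_counts.items():
        support = cnt / n
        confidence = cnt / item_counts[combo]
        if support >= min_support and confidence >= min_confidence:
            rules.append((combo, label, confidence))
    # Prefer more confident, then more specific, rules.
    rules.sort(key=lambda r: (-r[2], -len(r[0])))
    return rules

def classify(rules, items, default=None):
    for combo, label, _ in rules:
        if set(combo) <= items:   # rule's antecedent matches the instance
            return label
    return default

# Toy example: predict whether to play from weather features.
data = [{"sunny", "hot"}, {"sunny", "mild"}, {"rainy", "mild"}, {"rainy", "hot"}]
labels = ["no", "yes", "yes", "no"]
rules = mine_rules(data, labels)
print(classify(rules, {"sunny", "mild"}))   # "yes"
```

The mined rules themselves are the explanation: each prediction can be traced to a human-readable rule, which is the transparency argument made above. Early approaches required hand-picking the support and confidence thresholds, the "cumbersome thresholds" the talk refers to.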
Biography
Osmar R. Zaïane is a Professor in Computing Science at the University of Alberta, Canada, Fellow of the Alberta Machine Intelligence Institute (Amii), and Canada CIFAR AI Chair. Dr. Zaïane obtained his Ph.D. from Simon Fraser University, Canada, in 1999. He has published more than 330 papers in refereed international conferences and journals. He is Associate Editor of many international journals on data mining and data analytics, and has served as program chair and general chair for scores of international conferences in the field of knowledge discovery and data mining. Dr. Zaïane has received numerous awards, including the 2010 ACM SIGKDD Service Award from the ACM Special Interest Group on Knowledge Discovery and Data Mining, which organizes the world's premier data mining conference.