Keynote Speakers and Tutorial
We are glad to announce that Canadian AI 2020 has the following confirmed keynote speakers:
- Dr. Giuseppe Carenini (University of British Columbia),
- Dr. Pascal Poupart (University of Waterloo), and
- Dr. Csaba Szepesvari (University of Alberta),
as well as a tutorial by Pierre-Luc Bacon from Mila.
Please see the program page for YouTube links to each presentation, as well as additional material.
"Taming Discourse Parsing and Text Planning: an integrated AI approach towards a data-driven theory of Discourse"
Discourse parsing is a fundamental NLP task aiming to uncover the structure of coherent multi-sentential documents. Not only has discourse parsing been shown to enhance many key downstream tasks, like text classification, summarization and sentiment prediction, but it also appears to complement powerful contextual embeddings, like BERT, in tasks where discourse information is critical, such as argumentation analysis.
Historically, the development of effective discourse parsers has been hampered by the lack of annotated data, which often leads to overfitting and prevents the adoption of deep-learning solutions. In this talk, we will describe a novel approach that uses distant supervision to automatically generate abundant data to train discourse parsers. This approach synergistically combines several key AI techniques, including multiple instance learning, optimal tree parsing strategies and heuristic search algorithms inspired by reinforcement learning.
Remarkably, experiments indicate that this automatically generated treebank is superior to human-annotated corpora for training a discourse parser on the challenging and useful task of inter-domain discourse parsing, where the parser is trained on one domain (e.g., news) and tested/applied on another one (e.g., instruction manuals).
We conclude by discussing the potential of our approach not only to further boost performance on downstream tasks like sentiment analysis and summarization, but also to provide a new framework for text planning and, more generally, to move towards a data-driven linguistic theory of discourse.
Giuseppe Carenini is a Professor of Computer Science at UBC (Vancouver, Canada). He has broad interdisciplinary interests: his work on natural language processing and information visualization to support decision making has been published in over 120 peer-reviewed papers (including best paper at UMAP-14 and ACM TiiS-14). Dr. Carenini was an area chair for ACL'09 in "Sentiment Analysis, Opinion Mining, and Text Classification"; for NAACL'12, EMNLP'19 and ACL'20 in "Summarization and Generation"; and for ACL'19 in "Discourse". He was also Program Co-Chair for IUI 2015 and for SigDial 2016. In 2011, he published a co-authored book, "Methods for Mining and Summarizing Text Conversations". Dr. Carenini has also collaborated extensively with industrial partners, including Microsoft and IBM. He was awarded a Google Research Award, an IBM CASCON Best Exhibit Award, and a Yahoo Faculty Research Award in 2007, 2010 and 2016, respectively.
"Are we Experiencing a Technological Singularity?"
Ever since the mathematician I. J. Good speculated about the conception of the first ultraintelligent machine in 1965, it has been hypothesized that a technological singularity will occur when the invention of artificial superintelligence abruptly triggers runaway technological growth. Increasing computational resources and network connectivity, coupled with the accumulation of large amounts of data and advances in machine learning, are fueling beliefs that unbounded self-evolving systems will soon emerge. In this talk, I will discuss recent advances in machine learning that are enabling increasingly adaptive systems, as well as important limitations that still need to be overcome.
Pascal Poupart is a Professor of Machine Learning in the David R. Cheriton School of Computer Science at the University of Waterloo and a Canada CIFAR AI Chair at the Vector Institute. He founded the RBC Borealis AI Research Lab in Waterloo and is a founding member of the Waterloo AI Institute. He serves as a scientific advisor for Huawei Technologies and ProNavigator. He received his B.Sc. in Mathematics and Computer Science from McGill University in 1998, his M.Sc. in Computer Science from the University of British Columbia in 2000, and his Ph.D. in Computer Science from the University of Toronto. His research focuses on the development of algorithms for machine learning with applications to natural language processing, health informatics, computational finance, telecommunication networks and sports analytics. He is best known for his contributions to the development of reinforcement learning algorithms. Notable projects his research team is currently working on include probabilistic deep learning, robust machine learning, data-efficient reinforcement learning, conversational agents, machine translation, adaptive satisfiability, sports analytics and knowledge graphs.
He received a Canada CIFAR AI Chair (2018-2021), a Cheriton Faculty Fellowship (2015-2018), a best student paper honourable mention (SAT-2017), an outstanding collaborator award from Huawei Noah's Ark (2016), a top reviewer award (ICML-2016), the best main track solver and best application solver awards (SAT-2016 competition), a best reviewer award (NIPS-2015), an Early Researcher Award from the Ontario Ministry of Research and Innovation (2008), two Google research awards (2007-2008), a best paper award runner-up (UAI-2008) and the IAPR best paper award (ICVS-2007). He has published more than 100 research articles in top-tier AI venues, including JAIR, JMLR, NIPS, ICML, AISTATS, ICLR, IJCAI, AAAI, UAI, AAMAS and SAT. He has served on the editorial board of the Journal of Machine Learning Research (JMLR) since 2009, and routinely serves as area chair or senior program committee member for NeurIPS, ICML, AISTATS, IJCAI, AAAI and UAI.
"Recent Progress in Model-based Reinforcement Learning"
Model-based reinforcement learning refers to reinforcement learning methods that explicitly construct and reason with models of the learning agent's environment. There are many reasons to believe that model-based reinforcement learning is crucial for increasing the flexibility and data efficiency of reinforcement learning methods. In particular, models can be used in a planning or inference process, which can widen the range of policies an agent can represent, and models can also retain crucial aspects of past experience. Equally importantly, principled approaches to exploration (optimism) are arguably a more natural fit for model-based RL than for model-free RL. Yet model-based RL methods rarely make it to the very top of leaderboards. In this talk, I will review why it is challenging to construct efficient model-based RL methods, and describe some novel developments that may hold the promise of changing the present not-so-great record of model-based RL methods.
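To make the idea of reusing past experience through a learned model concrete, here is a minimal tabular sketch in the spirit of Dyna-style planning (the talk does not prescribe a specific algorithm; the toy chain environment, hyperparameters, and Dyna-Q update scheme below are illustrative assumptions). After each real transition, the agent records it in a learned model and then performs extra value updates from transitions replayed out of that model, which is one simple way model-based methods improve data efficiency:

```python
import random

# Tiny deterministic chain MDP: states 0..4, action 0 = left, action 1 = right,
# reward 1 on reaching state 4. Purely illustrative.
N_STATES, ACTIONS, GOAL = 5, (0, 1), 4

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
    r = 1.0 if s2 == GOAL else 0.0
    return s2, r, s2 == GOAL

def dyna_q(episodes=200, planning_steps=10, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    model = {}  # learned model: (s, a) -> (s', r, done)
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = rng.choice(ACTIONS) if rng.random() < eps else max(ACTIONS, key=lambda b: Q[(s, b)])
            s2, r, done = step(s, a)
            # direct RL update from real experience
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) * (not done) - Q[(s, a)])
            model[(s, a)] = (s2, r, done)
            # planning: extra updates from transitions replayed out of the learned model
            for _ in range(planning_steps):
                (ps, pa), (ps2, pr, pdone) = rng.choice(list(model.items()))
                Q[(ps, pa)] += alpha * (pr + gamma * max(Q[(ps2, b)] for b in ACTIONS) * (not pdone) - Q[(ps, pa)])
            s = s2
    return Q

Q = dyna_q()
# Greedy policy recovered from Q; on this chain it should move right everywhere.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
```

The `planning_steps` loop is where the model pays off: each real transition is amortized over many simulated updates, so far fewer environment interactions are needed than with the purely model-free update alone.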
Csaba Szepesvari is a Canada CIFAR AI Chair, the lead of the "Foundations" team at DeepMind and a Professor of Computing Science at the University of Alberta. He earned his PhD in 1999 from Jozsef Attila University, Szeged, Hungary. He has authored three books and about 200 peer-reviewed journal and conference papers. He serves as an action editor of the Journal of Machine Learning Research and of Machine Learning, as well as on various program committees. Dr. Szepesvari's interest is artificial intelligence (AI) and, in particular, principled approaches to AI that use machine learning. He is the co-inventor of UCT, a widely successful Monte-Carlo tree search algorithm. UCT ignited much work in AI, such as DeepMind's AlphaGo, which defeated the top Go professional Lee Sedol in a landmark match. This work on UCT won the 2016 test-of-time award at ECML/PKDD.
"Reinforcement Learning and Optimal Control: A Perspective"
This tutorial provides an overview of the algorithmic foundations of reinforcement learning in relation to optimal control and decision science. Fundamental concepts of dynamic programming, stochastic approximation and derivative estimation techniques will first be introduced. Equipped with these tools, we will then embark on a tour of modern reinforcement learning concepts: from value-based temporal difference learning ideas to policy gradient methods. We will also see how optimal control, in both discrete and continuous time, can find its way into both supervised learning and reinforcement learning. This presentation is accompanied by a set of Google Colab notebooks.
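As a flavour of the value-based temporal difference ideas the tutorial covers, here is a minimal TD(0) sketch for policy evaluation (the random-walk environment and step size below are illustrative assumptions, not the contents of the tutorial's notebooks). Each update nudges the value estimate of the current state toward the bootstrapped target, the observed reward plus the discounted estimate of the next state:

```python
import random

# Classic 5-state random walk: non-terminal states 1..5, terminals 0 and 6,
# reward 1 only on exiting to the right. A fixed uniform-random policy is evaluated.
N, ALPHA, GAMMA = 5, 0.1, 1.0

def td0(episodes=5000, seed=0):
    rng = random.Random(seed)
    V = [0.0] * (N + 2)            # V[0] and V[N+1] stay 0 (terminal states)
    for _ in range(episodes):
        s = (N + 1) // 2           # start in the middle state
        while 0 < s <= N:
            s2 = s + rng.choice((-1, 1))
            r = 1.0 if s2 == N + 1 else 0.0
            # TD(0): move V(s) toward the bootstrapped target r + gamma * V(s')
            V[s] += ALPHA * (r + GAMMA * V[s2] - V[s])
            s = s2
    return V

V = td0()
# For this walk the true values are s/6 for s = 1..5, so the estimates
# should be increasing and roughly 1/6, 2/6, ..., 5/6.
```

Unlike Monte Carlo evaluation, which waits for the episode's return, the TD(0) update above learns from each transition immediately by bootstrapping on its own current estimate.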
Pierre-Luc Bacon is an assistant professor at the Université de Montréal's DIRO. He is also a member of Mila and of the Institute for Data Valorization (IVADO), and holds a Facebook CIFAR AI Chair. He obtained his PhD in computer science at McGill University and pursued a postdoc at Stanford University in 2018. His research is broadly concerned with the problem of learning to make decisions over long time spans and its ramifications in optimization and representation learning.