
Keynote Speakers

Canadian AI 2023 proudly presents this year’s keynote speakers as listed below.

Main Conference

  • Jimmy Lin
  • Alona Fyshe
  • Marwa El Halabi
  • Adriana Romero Soriano

Awards Talks

  • Richard Zemel
  • Sriram Ganapathi Subramanian
  • Puyuan Liu

Responsible AI

  • Emily Denton
  • Allison Marchildon
  • Natalie Mayerhofer
  • Nathalie de Marcellis-Warin
  • Samira Abbasgholizadeh-Rahimi
  • Benjamin Fung
  • Jacob Jaremko

Industry Track

  • David Beach

Panel "Future of AI: Trends, Challenges and Prospects"

  • Karim Ali
  • Alona Fyshe
  • James Elder
  • Anna Koop

    Main Conference Speakers

    6 June 2023

    Dr. Jimmy Lin

    David R. Cheriton School of Computer Science
    University of Waterloo
    Waterloo, Ontario, Canada

    Information Access in the Era of Large Language Models
    Abstract. — Information access – the challenge of connecting users to previously stored information that is relevant to their needs – dates back millennia. The technologies have changed – from clay tablets to books to digital documents – but the aims have not. With the advent of large language models such as ChatGPT, LLaMA, Sydney, Bard, and their ilk, we have been bombarded by tremendous noise and hype from every corner. In this talk, I'll share some perspectives on the future of information access in this light, discussing representation learning and different architectures for retrieval, reranking, and information synthesis. I'll argue that the vision of effortlessly connecting users to relevant information has not changed, but the tools at our disposal have very much improved, creating both tremendous opportunities and challenges. It's an exciting time for research!

    Professor Jimmy Lin holds the David R. Cheriton Chair in the David R. Cheriton School of Computer Science at the University of Waterloo. Lin received his PhD in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology in 2004. For a quarter of a century, Lin's research has been driven by the quest to develop methods and build tools that connect users to relevant information. His work mostly lies at the intersection of information retrieval and natural language processing, with a focus on two fundamental challenges: those of understanding and scale. He is a Fellow of the ACM.

    7 June 2023

    Dr. Alona Fyshe

    Departments of Computing Science and Psychology
    University of Alberta
    Alberta, Canada

    What can AI tell us about the brain and what can the brain tell us about AI?
    Abstract. — When you think through a problem, your mind constructs and manipulates representations of the relevant information. When a computer computes, it too must operate over some internal representation. We can study such internal representations from multiple angles in order to better understand how particular information processing systems (e.g., deep neural nets (DNNs) or the human brain) solve complex problems. In a DNN, representations are computed in the hidden layers and are leveraged to solve some prediction task (e.g., what object is in this image? what word comes next in this sentence?). The brain’s representations can be detected with brain imaging techniques that record the activation of groups of neurons in response to some stimuli. By studying how these representations change depending on the stimuli, we can begin to draw parallels between the algorithms used by DNNs and the brain. Tracing these breadcrumbs is a step towards better understanding deep learning models and unraveling the mysteries of the brain.

    Alona Fyshe is an Associate Professor in the Computing Science and Psychology Departments at the University of Alberta, a fellow at the Alberta Machine Intelligence Institute (Amii) and holds a Canada CIFAR AI Chair. Alona received her BSc and MSc in Computing Science from the University of Alberta, and a PhD in Machine Learning from Carnegie Mellon University. Alona uses machine learning to analyze brain images collected while people read or view images, which allows her to study how the human brain represents meaning. Alona also studies how computers learn to represent meaning when trained on text or images. There are interesting connections between meaning representations in computer models and those in the human brain. Those connections serve to advance both our understanding of the brain, and the state of the art in machine learning.

    Student Symposium Speakers

    5 June 2023

    Dr. Marwa El Halabi

    Research scientist
    Samsung SAIT AI Lab
    Montreal, Canada
    https://sites.google.com/view/marwaelhalabi/home

    Data-Efficient Structured Pruning via Submodular Optimization
    Abstract. — Structured pruning is an effective approach for compressing large pre-trained neural networks without significantly affecting their performance. However, most current structured pruning methods do not provide any performance guarantees, and often require fine-tuning, which makes them inapplicable in the limited-data regime. We propose a principled data-efficient structured pruning method based on submodular optimization. In particular, for a given layer, we select which neurons/channels to prune, together with corresponding new weights for the next layer, so as to minimize the change in the next layer’s input induced by pruning. We show that this selection problem is a weakly submodular maximization problem, so it can be provably approximated using an efficient greedy algorithm. Our method is guaranteed to have an exponentially decreasing error between the original and pruned model outputs with respect to the pruned size, under reasonable assumptions. It is also one of the few methods in the literature that uses only a limited number of training data points and no labels. Our experimental results demonstrate that our method outperforms state-of-the-art methods in the limited-data regime.

    Marwa El Halabi is a research scientist at the Samsung SAIT AI Lab Montreal, within Mila. Before that, she was a postdoc in the Machine Learning Group at MIT. She completed her PhD in 2018 in the Computer and Communication Sciences department at EPFL. Her main research interest is discrete and continuous optimization problems in machine learning. In particular, her research has focused on submodular optimization, convex optimization, neural network compression, and structured sparsity models.

    5 June 2023

    Dr. Adriana Romero Soriano

    Research scientist at Meta AI (FAIR),
    Adjunct professor at McGill University,
    and Core industry member at Mila
    https://sites.google.com/site/adriromsor/home

    Seeing the unseen: Visual content recovery and creation from limited sensory data
    Abstract. — As humans, we never fully observe the world around us, and yet we are able to build remarkably useful models of it from our limited sensory data. Machine learning systems are often required to operate in a similar setup, that is, inferring unobserved information from what has been observed. For example, when inferring 3D shape from a single-view image of an object, or when modelling a data distribution from a limited set of data points. These partial observations naturally induce data uncertainty, which may hinder the quality of model predictions. In this talk, I will present our recent work on content recovery and creation from limited sensory data, which leverages active acquisition strategies and user guidance to improve model predictions.

    Adriana Romero Soriano is currently a research scientist at Meta AI (FAIR), an adjunct professor at McGill University, and a core industry member of Mila. Her research focuses on developing models that are able to learn from multi-modal data, reason about conceptual relations, and leverage active and adaptive acquisition strategies. The playground of her research has been defined by problems which require inferring full observations from limited sensory data, building models of the world with the goal of responsibly improving impactful downstream applications. She received her Ph.D. from the University of Barcelona, where she worked with Dr. Carlo Gatta, and spent two years as a post-doctoral researcher at Mila working with Prof. Yoshua Bengio.

    Awards Talks Speakers

    8 June 2023

    Dr. Richard Zemel

    Professor, Department of Computer Science
    Columbia University
    Lifetime Achievement Award

    Gaining Trust in ML Systems via Distribution Shift Robustness and Flexible Performance Guarantees
    Abstract. — Learning-based predictive algorithms are widely used in real-world systems and have significantly impacted our daily lives. However, many algorithms are deployed without sufficient testing or a thorough understanding of likely failure modes. This is especially worrisome in high-stakes application areas such as healthcare, finance, and autonomous transportation. I will present two strands of work to address this critical challenge. The first aims to develop learning approaches that gracefully handle distribution shifts. This research links the areas of domain generalization, robust optimization and fairness. A second strand of work aims to provide flexible and rigorous guarantees of model performance. The focus is on societal consequences of the model, in particular the extent to which different members of a population experience unequal effects of decisions made based on a model's prediction.

    Richard Zemel is the Trianthe Dakolias Professor of Engineering and Applied Science in the Computer Science Department at Columbia University. He is the Director of the new AI Institute for Artificial and Natural Intelligence (ARNI). He was the Co-Founder and inaugural Research Director of the Vector Institute for Artificial Intelligence. He is an Associate Fellow of the Canadian Institute for Advanced Research and is on the Advisory Board of the Neural Information Processing Society. He is an Amazon Scholar and a CIFAR AI Chair. He has received an NVIDIA Pioneers of AI Award and an ONR Young Investigator Award. His research contributions include foundational work on systems that learn useful representations of data with little or no supervision; graph-based machine learning; and algorithms for fair and robust machine learning. His research has been supported by grants from NSERC, CIFAR, Google, Microsoft, Samsung, DARPA, IARPA, and ONR.

    8 June 2023

    Sriram Ganapathi Subramanian

    Postdoctoral Fellow, The Vector Institute, Toronto
    PhD, Department of Electrical and Computer Engineering,
    University of Waterloo
    PhD Dissertation Award

    Multi-Agent Reinforcement Learning in Large Complex Environments
    Abstract. — Multi-agent reinforcement learning (MARL) has seen much success in the past decade. However, these methods have yet to find wide application in large-scale real-world problems for two important reasons. First, MARL algorithms have poor sample efficiency: many data samples must be obtained through interactions with the environment to learn meaningful policies, even in small environments. Second, MARL algorithms do not scale to environments with many agents since, typically, these algorithms are exponential in the number of agents. In this talk, I will describe critical aspects of our research that address both challenges. Towards improving sample efficiency, we leverage existing knowledge in the form of advisors to help improve reinforcement learning in multi-agent domains. To this end, we propose a general framework for learning from external advisors in MARL and show that desirable theoretical properties, such as convergence to a unique solution concept, hold. Furthermore, extensive experiments illustrate that the proposed algorithms can be used in a variety of environments, perform favourably compared to related baselines, are applicable to environments with large state-action spaces, and are robust to poor advice from advisors. Towards scaling MARL, we explore the use of mean field theory, which abstracts the other agents in the environment into a single virtual agent. Prior work has applied mean field theory to MARL; however, these methods suffer from several stringent assumptions, such as requiring fully homogeneous agents, full observability of the environment, and centralized learning settings, which prevent their wide application in practical environments. Our research extends mean field methods to environments with heterogeneous agents, partially observable settings, and decentralized approaches.

    Sriram Ganapathi Subramanian is a Postdoctoral Fellow at the Vector Institute, Toronto. Previously, he completed a PhD in the Department of Electrical and Computer Engineering at the University of Waterloo. His primary research interest is in the area of multi-agent systems. In particular, he is interested in the issues of scale, non-stationarity, communication, and sample complexity in multi-agent learning systems. His long-term research vision is to make multi-agent learning algorithms applicable to a variety of large-scale real-world problems and to bridge the widening gap between the theoretical understanding and empirical advances of multi-agent reinforcement learning. Sriram has received several prestigious fellowships, including the MITACS Globalink Research Award, the MITACS Graduate Fellowship, the Pasupalak Fellowship in AI, and the Vector Postgraduate Research Award.

    8 June 2023

    Puyuan Liu

    Applied Machine Learning Scientist at OpenTable
    MSc, University of Alberta
    MSc Thesis Award

    Non-Autoregressive Unsupervised Summarization with Length-Control Algorithms
    Abstract. — Text summarization aims to generate a short summary for an input text and has extensive real-world applications such as headline generation. State-of-the-art summarization models are mainly supervised; they require large labeled training corpora and thus cannot be applied to less popular areas where paired data are rare, e.g., less spoken languages. In this talk, I will present a non-autoregressive unsupervised summarization model, which does not require parallel data for training. Our approach first performs edit-based search towards a heuristically defined score, and generates a summary as pseudo-ground truth. Then, we train an encoder-only non-autoregressive Transformer based on the search results. Further, we design two length-control algorithms for the model, which perform dynamic programming on the model output and are able to explicitly control the number of words and characters, respectively, in the generated summary. Such length control is important for the summarization task because the main evaluation metric for summarization systems, the ROUGE score, is sensitive to summary length, and because real-world applications generally involve length constraints. Experiments on two benchmark datasets show that our approach achieves state-of-the-art performance for unsupervised summarization while largely improving inference efficiency. Further, our length-control algorithms are able to perform length-transfer generation, i.e., generating summaries of lengths different from the training target.

    Puyuan Liu currently works as an Applied Machine Learning Scientist at OpenTable. He received his Bachelor's and Master's degrees in Computing Science from the University of Alberta in 2020 and 2022, respectively. His research interests include Natural Language Processing, Deep Learning, and Reinforcement Learning. While pursuing his Master's degree, he authored papers on unsupervised summarization at ACL 2022 and NeurIPS 2022. He has also served, and continues to serve, as a reviewer for ML conferences such as NeurIPS, ICML, and CIKM.

    Panelists, Tuesday 7 June, 13:00–14:15

    Dr. Karim Ali
    CEO, Invision AI
    École polytechnique fédérale de Lausanne
    Website
    Dr. Alona Fyshe
    Departments of Computing Science and Psychology
    University of Alberta
    Alberta, Canada
    Website
    Dr. James Elder
    Professor and York Research Chair in Human and Computer Vision
    Co-director, Centre for AI & Society (CAIS)
    York University
    Website
    Dr. Anna Koop
    Research Engineer
    Google DeepMind, Montreal
    Website

    Responsible AI Speakers

    Thursday, June 8, 10:45–11:45

    Emily Denton Keynote Address

    Opportunities and Challenges for Responsible Generative AI
    Recent advancements in generative modeling have led to the rapid development of text- and image-based generative AI applications with impressive capabilities. These emerging technologies are already impacting people, society, and culture in complex ways, foregrounding the importance of responsible development and governance frameworks. This talk will offer a broad overview of key ethical challenges associated with generative AI, and considerations for responsible development.

    Emily Denton (they/them) is a Staff Research Scientist at Google, within the Technology, AI, Society, and Culture team, where they study the sociocultural impacts of AI technologies and the conditions of AI development. Their recent research centers on emerging text- and image-based generative AI, with a focus on data considerations and representational harms. Prior to joining Google, Emily received their PhD in Computer Science from the Courant Institute of Mathematical Sciences at New York University, where they focused on unsupervised learning and generative modeling of images and video. Before that, they received their B.S. in Computer Science and Cognitive Science from the University of Toronto. Though trained formally as a computer scientist, Emily draws ideas and methods from multiple disciplines and gravitates towards highly interdisciplinary collaborations in order to examine AI systems from a sociotechnical perspective. They've published in multiple top-tier venues spanning social science and computing disciplines, including Big Data & Society, CSCW, FAccT, and NeurIPS.

    Thursday, June 8, 13:00–13:30

    Allison Marchildon talk

    Ethics and AI: From Discourse to Practice

    AI developers and companies seem increasingly open to talking about ethics and adopting ethical principles. But their discourses on the matter also face growing criticism, as they don't always translate into actual responsible practices. And even when they do, the extent to which these practices correspond to what the actors affected by them consider responsible remains questionable.
    The challenge then becomes fostering practices that are aligned with the values that matter not only to those who develop AI systems, but also, and above all, to the actors likely to be affected by these systems. In this talk, I will therefore discuss some of the mechanisms we are working on to help develop AI systems whose consequences and responsibilities are valued by the actors most likely to be impacted by them. Inspired by a pragmatist approach to ethics, these mechanisms are reflexive, collaborative, and solutions-oriented, in order to develop new practices and responsibilities adapted to the new issues raised by AI.

    Allison Marchildon is a full professor and Applied Ethics programs lead in the Department of Philosophy and Applied Ethics at Université de Sherbrooke. She is also co-leader of the Ethics, Governance and Democracy theme of the International Observatory on the Societal Impacts of Artificial Intelligence and Digital Technology (OBVIA) and sits on the steering committee of the journal Éthique publique. Her research focuses on ethics, governance, responsibility, and power in different fields of activity, including artificial intelligence and new technologies. She is currently advising the Quebec Department of Cybersecurity and Digital Technology on developing an ethical framework for public sector use of AI. Her publications include the books Quels lendemains pour la responsabilité? Perspectives multidisciplinaires (2018), co-edited with André Duhamel; Former à l’éthique en organisation : une approche pragmatiste (2017), co-authored with André Lacroix and Luc Bégin; and Corporate responsibility or corporate power? CSR and the shaping of the definitions and solutions to our public problems (2016), published in the Journal of Political Power.

    Thursday, June 8, 13:30–14:00

    Natalie Mayerhofer talk

    The Human Factor: Upskilling Healthcare Actors to Better Integrate AI for the Benefit of the Population
    How can healthcare organizations support the adoption of responsible AI? They must invest in developing their workforce, through relevant, innovative and inclusive learning strategies. After 5 years of testing and integrating AI at the CHUM (Quebec’s “Smartest Hospital”), this session will cover how its School of AI in Healthcare (SAIH) develops critical mindsets, skillsets and learning solutions to build the future of healthcare.

    Natalie Mayerhofer is Deputy Chief Learning Officer at the CHUM. During her international career, she has developed expertise in innovation policy, program evaluation, and strategic planning. She thrives on innovative projects, including her greatest challenge, the development of CHUM’s School of Artificial Intelligence in Healthcare.

    Thursday, June 8, 14:00–14:30

    Nathalie de Marcellis-Warin talk

    Deliberative Governance and Social Acceptability: Promoting Responsible Innovation in AI

    Deliberative governance and social acceptability are important for promoting responsible innovation in the field of artificial intelligence. Given the rapid advancements and potential impacts of AI, involving citizens and stakeholders in decision-making processes and promoting transparency and accountability in AI development are critical.

    Promoting transparency and accountability requires questioning the governmentality of AI systems not only at the technical level but also in their organizational and societal deployment. Moreover, there is an emerging consensus that AI is a global challenge that must be addressed for the public good.

    Ensuring that AI technologies are widely accepted by society and addressing concerns related to the ethical and societal implications of AI are also important for promoting responsible innovation and implementing deliberative governance.

    This talk addresses responsible AI innovation from the perspective of deliberative governance and its relation to social acceptability.

    Nathalie de Marcellis-Warin is President and Chief Executive Officer of CIRANO. She is a full professor in the Department of Mathematics and Industrial Engineering at Polytechnique Montréal and a Visiting Scientist at the Harvard T.H. Chan School of Public Health. In addition, she is a member of the Commission de l'éthique en science et en technologie (CEST) du Québec and a research fellow at the International Observatory for Societal Impacts of AI and Digital Transformations (OBVIA). She holds a PhD in Management Science (risk and insurance management) from the École Normale Supérieure de Cachan (France). Her research interests are risk management and decision-making theory in different contexts of risk and uncertainty, as well as public policy implementation. She collaborates on major research projects with public and private organizations on issues of emerging technology adoption and societal impacts.

    Thursday, June 8, 14:30–15:00

    Samira Abbasgholizadeh-Rahimi talk

    From Theory to Practice: The Responsible Implementation of AI in Primary Health Care
    In this presentation, we will discuss the opportunities and difficulties of integrating artificial intelligence (AI) into primary healthcare settings. We will delve into the current state of AI-driven primary healthcare solutions, talk about the ethical implications of their use, and provide practical guidance on how to responsibly integrate AI into primary care practices.

    Professor Samira A. Rahimi is an Assistant Professor in the Department of Family Medicine, an Associate Academic Professor at Mila-Quebec AI Institute, an Associate Member of the Faculty of Dentistry, and an affiliated scientist at the Lady Davis Institute for Medical Research of the Jewish General Hospital. She is an Associate Member of the College of Family Physicians of Canada, Vice President of the Canadian Operational Research Society (CORS), and Director of Artificial Intelligence in Family Medicine (AIFM). Professor Rahimi is a Fonds de recherche du Québec – Santé (FRQS) Junior 1 Research Scholar in human-centered AI in primary health care, and her work as Principal Investigator has been funded by the FRQS, the Natural Sciences and Engineering Research Council (NSERC), Roche Canada, the Brocher Foundation (Switzerland), and the Strategy for Patient-Oriented Research (SPOR) of the Canadian Institutes of Health Research (CIHR). In recognition of her outstanding work, Professor Rahimi has received numerous awards, including the prestigious 2022 New Investigator Primary Care Research Award from the North American Primary Care Research Group (NAPCRG).

    Friday, June 9, 9:00–9:45

    Benjamin Fung talk

    Machine Learning for Cybersecurity and Privacy
    This session will present three research directions in cybersecurity and privacy. The first is privacy-preserving data publishing, where the objective is to share large volumes of data for machine learning without compromising the privacy of individuals; we will discuss multiple data sharing scenarios. The second is authorship analysis, where the objective is to identify the author of a text, or infer the author's characteristics, based on his or her writing style. The third is malware analysis: assembly code analysis is one of the critical processes for mitigating the exponentially increasing threats from malicious software, yet it is a manually intensive and time-consuming process even for experienced reverse engineers. An effective and efficient assembly code clone search engine can greatly reduce the effort of this process, and I will briefly describe our award-winning assembly clone search engine.

    Dr. Benjamin Fung is a Canada Research Chair in Data Mining for Cybersecurity, a Full Professor in the School of Information Studies (SIS) at McGill University, and an Associate Editor of IEEE Transactions on Knowledge and Data Engineering (TKDE) and Elsevier Sustainable Cities and Society (SCS). He received a Ph.D. in computing science from Simon Fraser University in 2007. Collaborating closely with the national defense, law enforcement, transportation, and healthcare sectors, he has published over 140 refereed articles spanning data mining, machine learning, privacy protection, and cybersecurity, with over 14,000 citations. His data mining work on crime investigation and authorship analysis has been covered by media including the New York Times, the BBC, and the CBC. Dr. Fung is a licensed professional engineer in software engineering. See his research website http://dmas.lab.mcgill.ca/fung for more information.

    Friday, June 9, 9:45–10:30

    Jacob Jaremko talk

    Using AI Responsibly for Medical Imaging Analysis
    AI is increasingly being applied to assist (or, in some settings, potentially even replace) radiologists and other clinicians in the diagnostic interpretation of medical images. This talk will review how issues of data privacy, data ownership, consent, and liability apply to those wishing to use AI for medical image analysis in an ethically and legally appropriate and responsible way.

    Jacob Jaremko is a Professor of Radiology and Adjunct Professor of Computing Science at the University of Alberta, a practicing board-certified Pediatric and Musculoskeletal radiologist and partner at Medical Imaging Consultants, and co-founder of two startup companies, including MEDO.ai. He holds a PhD in Biomedical Engineering. He is a Canada CIFAR AI Chair and a Fellow of the Alberta Machine Intelligence Institute. His research has focused on developing objective imaging biomarkers of disease in ultrasound and MRI, and on implementing AI-augmented medical imaging diagnostic tools at the clinical point of care, building the 21st-century stethoscope.

    Conference chairs

    Farhana Zulkernine
    School of Computing, Queen’s University, Kingston, Ontario

    Amilcar Soares
    Department of Computer Science, Memorial University of Newfoundland

    Contact

    • Conference Chairs

    • Webmaster

    • www.caiac.ca

    How to become a sponsor

    Our thanks to:

    Our host society
    Our sponsors

    © 2023 Canadian Artificial Intelligence Association