Responsible AI

Following the successful second edition of the Responsible AI event in 2023, we are excited that the event will be held again in 2024. We strongly believe in the importance and urgency of the Responsible and Ethical Development of Artificial Intelligence for Social Good.

As outlined by the UNESCO Recommendation on the Ethics of Artificial Intelligence, AI technology may have unintended by-products that lead to discrimination, reinforce inequalities, infringe upon human rights, socially sort and disrupt democratic processes, limit access to services and intensify surveillance and unfair treatment of marginalized and minority groups. As such, we are committed to organizing a cohesive and dynamic program that embodies the paradigm of responsible development of AI so that AI researchers and practitioners can engage in critical analysis and integration of fairness, ethics, transparency, and algorithmic accountability in their work.

This year's program will consist of the following events and will be open to all participants of the Canadian AI conference:

  • Seven invited talks of 30 minutes each, given by speakers with practical and theoretical expertise at the intersection of Responsible AI and various other domains
  • A keynote talk by international leaders in Responsible AI
  • A hands-on tutorial offering practical training on the alignment problem in AI
  • A student 3-minute-thesis (3MT) competition
  • A student research poster session
  • A panel featuring leaders in Responsible AI
  • A live AI Ethics debate

Responsible AI Co-chairs

Ebrahim Bagheri

Professor, Electrical, Computer, and Biomedical Engineering, Ryerson University


Sébastien Gambs

Canada Research Chair in Privacy-preserving and Ethical Analysis of Big Data, Université du Québec à Montréal (UQAM)


Ulrich Aïvodji

Assistant Professor, Department of Software Engineering and IT, École de technologie supérieure


Nidhi Hegde

Associate Professor, Department of Computer Science, University of Alberta


Responsible AI Arrangements Chair

Calvin Hillis

PhD Student, Toronto Metropolitan University

Program Schedule

Times are approximate and subject to change by up to 30 minutes.

Day 1: Wednesday, 29 May 2024

08:30 — 09:00  Buddy Group Get-Together / Welcome and Introductions (Buddy Groups)
09:00 — 09:30  Welcome and Introductions (Ebrahim Bagheri, Sébastien Gambs, Ulrich Aïvodji, Nidhi Hegde)
09:30 — 10:30  Keynote Talk (Ishtiaque Ahmed and Shion Guha)
10:30 — 11:00  Coffee
11:00 — 11:30  Invited Speaker (Bhaskar Mitra)
11:30 — 12:00  Invited Speaker (Parvin Mousavi)
12:00 — 12:30  Invited Speaker (cancelled)
12:30 — 13:30  Lunch
13:30 — 14:00  Lunch / Poster Session (Students)
14:00 — 15:00  Poster Session (Students)
15:00 — 15:30  Coffee
15:30 — 16:00  Invited Speaker (Jude Kong)
16:00 — 17:00  Debate (Students)

Day 2: Thursday, 30 May 2024

08:30 — 09:00  Buddy Group Get-Together / Welcome and Introductions (Buddy Groups)
09:00 — 09:30  Invited Speaker (Maura Grossman)
09:30 — 10:00  Invited Speaker (Golnoosh Farnadi)
10:00 — 10:30  Invited Speaker (Julia Rubin)
10:30 — 11:00  Coffee
11:00 — 12:30  3MT Competition (TBD)
12:30 — 13:00  Lunch / 3MT Competition
13:00 — 14:00  Lunch
14:00 — 15:30  Panel: "The Future of Responsible AI and AI for Social Good in Canada" (Joanna Redden, Leila Kosseim, Elissa Strome, Eleni Stroulia, Geoffrey Rockwell)
15:30 — 15:45  Coffee
15:45 — 17:00  Hands-on Tutorial (Travis LaCroix)

Keynote Speakers

Ishtiaque Ahmed
University of Toronto
Toward Making AI Responsible: From Mixing Methods to Embracing Differences
With the burgeoning growth of AI technologies, concerns about the associated harms have surfaced all around us. The mandate of making AI responsible by offering safe, fair, responsible, accessible, and accountable outcomes is challenged by the stark differences between theoretical assumptions and real-life incidents. This makes it necessary to investigate the assumptions, implementations, and deployment of AI tools from the perspectives of community-based, ethnographic, and social justice scholarship. Based on our decade-long engagement with various communities, government programs, and industries, both in the Western world and in the Global South, we chart the challenges and opportunities toward making AI technologies responsible. We discuss how such an endeavor requires the careful and participatory interweaving of epistemologies, methodologies, and theories.

Syed Ishtiaque Ahmed is an Assistant Professor of Computer Science at the University of Toronto and the founding director of the 'Third Space' research group. His research focuses on the challenges of developing computing systems that incorporate the voices of marginalized populations. He received the International Fulbright Science and Technology Fellowship in 2011, the Intel Science and Technology Fellowship in 2014, the Fulbright Centennial Fellowship in 2019, the Schwartz Reisman Fellowship in 2021, the Massey Fellowship in 2021, and the Connaught Scholarship in 2023. He has also won the Microsoft AI & Society Fellowship, a Google Inclusion Research Award, and a Facebook Faculty Research Award.

Shion Guha
University of Toronto
Toward Making AI Responsible: From Mixing Methods to Embracing Differences
(Joint keynote with Syed Ishtiaque Ahmed; see the abstract above.)

Shion Guha is an Assistant Professor in the Faculty of Information and cross-appointed to the Department of Computer Science at the University of Toronto. His research interests include human-computer interaction, data science, and public policy. He has been involved in developing the field of Human-Centered Data Science, an intersectional research area that combines technical methodologies with interpretive inquiry to address biases and structural inequalities in socio-technical systems. He is the author of Human-Centered Data Science: An Introduction, an Amazon best-selling textbook published by MIT Press in 2022. Shion wants to understand how algorithmic decision-making processes are designed, implemented, and evaluated in public services. In doing so, he often works with marginalized and vulnerable populations, such as those involved with child welfare, homelessness, and healthcare systems. His work has been supported by grants from the Canadian Institute for Advanced Research, the Natural Sciences and Engineering Research Council, the National Science Foundation, and the American Political Science Association, and has been featured in the media (Newsweek, Associated Press, ACLU, ABC, NBC, Gizmodo, etc.). Shion was awarded a Way-Klingler Early Career Award in 2019, a Connaught New Researcher Award in 2021, and a Schwartz Reisman Institute for Technology and Society Faculty Fellowship (2023-25). Previously, he received an MS from the Indian Statistical Institute in 2010 and a PhD from Cornell University in 2016.

Speakers

Bhaskar Mitra
Microsoft
Search and Society: Reimagining Information Access for Radical Futures
The field of information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build, inspired by diverse, explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.

Bhaskar Mitra is a Principal Researcher at Microsoft Research based in Montréal, Canada. His research focuses on AI-mediated information and knowledge access and questions of fairness and ethics in the context of these sociotechnical systems. He is interested in evaluation and benchmarking, and co-organized the MS MARCO ranking leaderboards, the TREC Deep Learning Track (2019-2023), and the TREC Tip-of-the-Tongue Track (2023). Before joining Microsoft Research, he worked on search technologies at Bing for 15+ years. He received his Ph.D. in Computer Science from University College London.

Parvin Mousavi
Queen's University
Clinical Decision Making and Trustworthy AI: A Case Study in Computer-Assisted Surgical Interventions

Parvin Mousavi is a Professor of Computer Science, Medicine, Pathology, and Biomedical and Molecular Sciences at Queen's University, and a member of the Royal Society of Canada's College of New Scholars. She holds a Canada CIFAR AI Chair and a faculty position at the Vector AI Institute. She has previously held a Senior Scientist position at Brigham and Women's Hospital in Boston and visiting professorships at Harvard Medical School and the University of British Columbia. Her research focuses on developing and leveraging machine learning in computer-assisted medical interventions, contributing to the societal impact of AI on the global community. She is a co-founder of Women in MICCAI, the first society of women in medical image computing. She also leads the training of the next generation of AI talent through a national training program in Medical Informatics.

Golnoosh Farnadi
CIFAR, Mila, McGill University
Responsible AI in the age of Foundation Models
The emergence of foundation models presents significant opportunities for advancing AI, yet it also brings forth considerable challenges, particularly in relation to existing risks and inequalities. In this talk, we focus on the complexities surrounding responsible AI in the context of foundation models. We argue that disparities faced by marginalized communities – encompassing issues of performance, representation, privacy, robustness, interpretability, and safety – are not isolated concerns but interconnected elements contributing to a cascade of disparities. Through a comparative analysis with traditional ML models, we highlight the potential for exacerbating disparities against marginalized groups. By defining marginalized communities within the machine learning realm, we explore the multifaceted nature of disparities and examine their origins across the data creation, training, and deployment processes. We conclude the talk with future directions towards responsible development of AI.
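
To make one of these disparities concrete, here is a minimal sketch (not the speaker's methodology; the groups, labels, and error rates are invented) that measures two simple gaps a fairness audit might report: per-group accuracy and the demographic-parity gap between positive-prediction rates.

```python
# Minimal sketch (not the speaker's methodology): auditing toy predictions
# for two simple disparities. Groups, labels, and error rates are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
y_true = rng.integers(0, 2, n)           # ground-truth labels
group = rng.integers(0, 2, n)            # 0 = majority, 1 = marginalized (toy)

# A toy classifier: accurate on group 0, but on group 1 it misses 40% of
# true positives (a common pattern behind performance disparities).
y_pred = y_true.copy()
miss = (group == 1) & (y_true == 1) & (rng.random(n) < 0.4)
y_pred[miss] = 0

for g in (0, 1):
    m = group == g
    acc = (y_pred[m] == y_true[m]).mean()
    pos = y_pred[m].mean()
    print(f"group {g}: accuracy={acc:.3f}, positive-prediction rate={pos:.3f}")

# Demographic-parity gap: absolute difference in positive-prediction rates.
gap = abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())
print(f"demographic-parity gap: {gap:.3f}")
```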

Dr. Golnoosh Farnadi is an Assistant Professor at the School of Computer Science at McGill University and an Adjunct Professor at Université de Montréal. She is a visiting faculty researcher at Google, a core academic member at Mila (Quebec AI Institute), and holds a Canada CIFAR AI Chair. She is a co-director of McGill's Collaborative for AI & Society (McCAIS) and the founder of the EQUAL (EQuity & EQuality Using AI and Learning algorithms) lab at Mila/McGill University. Dr. Farnadi's contributions have been recognized with prestigious awards, including the Google Scholar Award and a Facebook Research Award in 2021, a Google award for inclusion research in 2023, and selection as a finalist for the WAI Responsible AI Leader of the Year award. Her commitment to advancing ethical AI practices has also earned her recognition as one of the 100 Brilliant Women in AI Ethics in 2023.

Maura Grossman
University of Waterloo
Is Responsible AI Possible Today?
Many individuals and organizations claim to be interested in developing and implementing “responsible AI” (“RAI”). While we do not yet have a consensus definition of what “RAI” is, we can probably agree on the minimum set of criteria necessary for AI to be considered “responsible” or “trustworthy.” Using those criteria and examples from the justice system, we will discuss whether RAI is possible today and if not, potential approaches that developers and organizations offering or using AI might want to consider.

Maura Robin Grossman is a research professor and former Director of Women in Computer Science in the David R. Cheriton School of Computer Science at the University of Waterloo.

Jude Kong
University of Toronto
Ensuring Responsible AI: From Concept to Deployment
Artificial intelligence solutions and data science approaches are increasingly being used across the globe to identify risks, conduct predictive modeling, and provide evidence-based recommendations for policy and action. Despite the promise of using these innovative tools to improve societal outcomes, there are important ethical, legal, and social implications that, if not appropriately managed and governed, can translate into significant risks to individuals and populations. Responsible AI entails intentional design to enhance equity and gender equality and avoid amplifying existing inequalities and biases. In this talk, I will discuss the concept of responsible artificial intelligence (AI) and provide a systematic approach to embedding "responsibility" throughout the AI lifecycle, starting from data gathering and cleaning, through algorithm development and training, to implementation and deployment. Moreover, I will introduce a framework for quantitatively assessing how responsible an AI solution is.
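
As a purely hypothetical illustration of what a quantitative responsibility assessment could look like (this is not Dr. Kong's framework; the dimensions, weights, and scores below are invented), one simple form is a weighted scorecard over stages of the AI lifecycle:

```python
# Hypothetical sketch of a weighted responsibility scorecard -- not
# Dr. Kong's framework. Dimensions, weights, and scores are invented.
lifecycle_scores = {           # each dimension scored 0.0-1.0 by auditors
    "data collection consent": 0.9,
    "bias audit of training data": 0.6,
    "model transparency/documentation": 0.7,
    "equity impact of deployment": 0.5,
}
weights = {                    # relative importance, summing to 1.0
    "data collection consent": 0.3,
    "bias audit of training data": 0.3,
    "model transparency/documentation": 0.2,
    "equity impact of deployment": 0.2,
}
overall = sum(weights[k] * lifecycle_scores[k] for k in weights)
print(f"overall responsibility score: {overall:.2f}")  # prints 0.69 here
```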

Dr. Kong is a professor in the Dalla Lana School of Public Health and the Department of Mathematics (cross-appointed) at the University of Toronto, where he serves as the director of the AI and Mathematical Modeling Lab. Additionally, he is the Director of the Africa-Canada AI and Data Innovation Consortium and of the Global South AI for Pandemic and Epidemic Preparedness and Response Network, and the Regional Node Liaison to the steering committee of the Canadian Black Scientist Network. He obtained his Ph.D. in Mathematics with a certificate in Artificial Intelligence from the University of Alberta, his M.Sc. in Mathematical Modelling from the University of Hamburg, Germany, and the University of L'Aquila, Italy, his B.Sc. in Mathematics and Computer Science from the University of Buea, Cameroon, and his B.Ed. in Mathematics from the University of Yaounde I, Cameroon. He completed a two-year postdoc at Princeton University. Dr. Kong is an expert in AI, data science, mathematical modelling, and mathematics education. His principal research program focuses on developing and deploying innovative AI, mathematical, and data science methodologies and technologies for decision-makers in communities, public health, government, and industry, providing important insights into local and global-scale socio-ecological challenges.

Julia Rubin
University of British Columbia
Effects of Data on Machine Learning Robustness Against Evasion Attacks
Recent advances in Machine Learning (ML) have led to the development of numerous accurate and scalable ML-based techniques. Yet, concerns related to the reliability of ML models could substantially impede their widespread adoption. ML model reliability largely depends on two factors: data, i.e., the sample and feature set used in training, and learning procedures applied. In this talk, we will focus on the first factor: the data. We will discuss the main properties of data that can lead to increased robustness against adversarial evasion attacks and techniques for improving adversarial robustness through data transformation and augmentation. We will then outline promising future research directions in this area.
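
As a minimal illustration of the kind of evasion attack the talk considers (a toy numpy sketch, not the talk's experiments; the task, model, and budgets are invented), the snippet below trains a logistic-regression model and shows its accuracy degrading as the budget of an FGSM-style perturbation grows:

```python
# Minimal numpy sketch of an adversarial evasion attack (FGSM-style) on a
# logistic-regression model; illustrative only, not the talk's methods.
import numpy as np

rng = np.random.default_rng(1)

def fit_logreg(X, y, lr=0.5, steps=1000):
    """Plain gradient-descent logistic regression (no bias term)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Toy binary task: two Gaussian blobs in 10 dimensions.
n, d = 1000, 10
y = rng.integers(0, 2, n)
X = rng.normal(0, 1, (n, d)) + (2 * y - 1)[:, None] * 0.5

w = fit_logreg(X, y)

# FGSM-style evasion: each point steps against its correct class along
# sign(w), under an l-infinity budget eps.
for eps in (0.0, 0.1, 0.2, 0.4):
    X_adv = X - eps * (2 * y - 1)[:, None] * np.sign(w)[None, :]
    acc = (((X_adv @ w) > 0).astype(int) == y).mean()
    print(f"eps={eps:.1f}: accuracy {acc:.2f}")
```

The data-side defences discussed in the talk (transformation and augmentation) would aim to keep that accuracy curve flat as eps grows.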

Julia Rubin is an Associate Professor in the Department of Electrical and Computer Engineering at the University of British Columbia, Canada. She is a Canada Research Chair in Trustworthy Software and the lead of the UBC Research Excellence Cluster on Trustworthy ML. Her research focuses on quality, security, and reliability of software and AI systems. Julia received her PhD in Computer Science from the University of Toronto and worked as a postdoctoral researcher in CSAIL at MIT. She also spent almost 10 years in industry, working for IBM Research, where she was a Research Staff Member and Research Group Manager.

Jake Okechukwu Effoduh
Toronto Metropolitan University
Tackling Algorithmic Bias in Health AI Systems
In healthcare, AI is advancing telemedicine and medical informatics and improving clinical operations, such as interpreting staining images and aiding the performance of high-risk surgeries. Many of these innovations are unprecedented. However, one of the biggest challenges in the use of AI for healthcare (and other service domains) is bias: instances when the application of an AI algorithm compounds existing inequities in ways that amplify discrimination or worsen disparities in health systems. One of the many ways bias can arise in health AI systems is when an algorithm produces results that are systemically prejudiced due to erroneous assumptions in the machine learning process. How do biases enter these AI systems? What are the harmful effects of such algorithmic bias in health AI? And what are the legal and regulatory responses to it? These are the questions this talk explores.
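
To illustrate one mechanism by which bias can enter a health AI system (an invented toy example, not from the talk): if a model is trained on past healthcare spending as a proxy for medical need, and one group has less access to care, the model reproduces that access gap by under-flagging sick members of the group.

```python
# Invented toy example of proxy-label bias in health AI (not from the talk):
# the model predicts "past healthcare spending" as a stand-in for "needs
# care". A group with less access to care spends less when equally sick,
# so the model under-flags sick members of that group.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
group = rng.integers(0, 2, n)            # group 1 has reduced access to care
sick = rng.random(n) < 0.2               # true need is identical across groups
access = np.where(group == 1, 0.5, 1.0)  # group 1 reaches care half as often
spent = sick & (rng.random(n) < access)  # the proxy label actually observed

# "Model": flag whoever the proxy label marks. Even a perfect predictor of
# the proxy -- the best case for the model -- inherits the access gap.
flagged = spent
for g in (0, 1):
    m = (group == g) & sick
    print(f"group {g}: share of truly sick patients flagged = {flagged[m].mean():.2f}")
```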

Jake Okechukwu Effoduh is an Assistant Professor at the Lincoln Alexander School of Law of Toronto Metropolitan University. He has gained significant expertise in international human rights advocacy at various levels of legal systems. As Chief Councillor of the Africa-Canada Artificial Intelligence and Data Innovation Consortium, he provided human rights compliance expertise on the use of AI and Big Data in Canada and across 20 African countries. Effoduh holds two master's degrees in international law, from the University of Oxford in the UK and Osgoode Hall Law School of York University in Canada.

Panelists

Elissa Strome
CIFAR

Elissa Strome is the Executive Director of the Pan-Canadian Artificial Intelligence Strategy at CIFAR. She works with leaders at Canada’s three national AI Institutes in Edmonton (Amii), Montreal (Mila), and Toronto (Vector Institute) and across the country to advance Canada’s leadership in AI research, training and innovation.

Teresa Scassa
University of Ottawa

Dr. Teresa Scassa is the Canada Research Chair in Information Law and Policy at the University of Ottawa, Faculty of Law. She has served on several national and provincial advisory panels on issues related to law and technology, and has written widely in the areas of privacy law, data governance, intellectual property law, law and technology, artificial intelligence, and smart cities. She is a co-editor of the books AI and the Law in Canada (2021), Law and the Sharing Economy (2017), and The Future of Open Data (2022), and co-author of Digital Commerce in Canada (2020) and Canadian Intellectual Property Law (2013, 2018, and 2022).

Joanna Redden
Western University

Joanna Redden is an Associate Professor in the Faculty of Information and Media Studies at Western University. She co-directs the Canadian-based Starling Centre and the UK-based Data Justice Lab. She is co-author of Understanding Media: Communication, Power and Social Change (2024), Data Justice (2022), the author of The Mediation of Poverty: The News, New Media and Politics (2014) and co-editor of Compromised Data: From Social Media to Big Data (2015). Her work focuses on the social justice implications of AI.

Leila Kosseim
Concordia University

Dr. Kosseim is a professor in the Computer Science & Software Engineering (CSSE) Department at Concordia University in Montreal, working in the area of Natural Language Processing. Together with Sabine Bergler, she co-directs the CLaC lab. She obtained her PhD from the University of Montreal in 1995 on the topic of Natural Language Generation. Between 1995 and 1997, she held an NSERC Industrial Postdoctoral Fellowship and worked on the development of the Antidote software at Druide informatique inc. From 1998 to 2001, she served as a lecturer and researcher at the University of Montreal within the RALI group. In 2001, she joined Concordia and co-founded the CLaC (Computational Linguistics @ Concordia) lab. Since then, she has graduated 8 PhD students and over 20 Master's students. Dr. Kosseim has had the honor of serving as Vice-President (2017-2019), President (2019-2021), and Past-President (2021-2023) of the Canadian AI Association (CAIAC). She currently serves as Graduate Program Director for the PhD programs in the CSSE Department.

Eleni Stroulia
University of Alberta

Dr. Eleni Stroulia is the Vice Dean of the Faculty of Science and a Professor in the Department of Computing Science at the University of Alberta. From 2019 to 2021, she served as the Director of the University of Alberta's AI4Society Signature Area. She is renowned for her industry-focused research program leveraging advances in the Internet of Things, artificial intelligence, and machine learning. Her research, notably in healthcare, centers on enhancing independence for individuals with chronic conditions, especially related to aging and frailty, through technologies like the Smart Condo and Virtual Gym. She has mentored over 60 graduate students and postdoctoral fellows who have moved on to stellar careers in academia and industry.

Geoffrey Rockwell
University of Alberta

Dr. Geoffrey Martin Rockwell is a Professor of Philosophy and Digital Humanities and a Canada CIFAR AI Chair at the University of Alberta, Canada. He has a Ph.D. in Philosophy from the University of Toronto and has published on subjects such as artificial intelligence and ethics, philosophical dialogue, textual visualization and analysis, digital humanities, instructional technology, computer games, and multimedia. His books include Defining Dialogue: From Socrates to the Internet (Humanity Books, 2003) and Hermeneutica, co-authored with Stéfan Sinclair (MIT Press, 2016); Hermeneutica is part of a hybrid text-and-tool project with Voyant Tools (voyant-tools.org), an award-winning suite of analytical tools. He recently co-edited Right Research: Modelling Sustainable Research Practices in the Anthropocene (Open Book Publishers, 2021) and On Making in the Digital Humanities (UCL Press, 2023).

Hands-on Tutorial

Travis LaCroix
Dalhousie University / Durham University
Artificial Intelligence and the Value Alignment Problem
The value alignment problem is the problem of ensuring that AI systems are aligned with the values of humanity. However, this standard definition raises more questions than it answers. In this tutorial, we will explore current conceptions of the value alignment problem for artificial intelligence, underscoring the shortcomings of the standard definition. We will then explore a novel, structural conceptualisation of the value alignment problem by analogy with the principal-agent framework from economics. This structural definition comprises three distinct axes that can give rise to value misalignment between human principals and AI systems: misspecified objectives, asymmetric information, and relative principals. The tutorial covers these three axes and their interaction effects. Specifically, we will explore several case studies illustrating value misalignment along these axes in real-world systems. We will see how understanding the contexts that give rise to value misalignment underscores that the value alignment problem is neither primarily technical nor even normative: it is fundamentally a social problem arising from the dynamics of multi-agent interactions. If time permits, we will explore how scaling hypotheses for present-day AI systems exacerbate misalignment and how these problems can be mitigated in practice.
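
To make the first axis, misspecified objectives, concrete (an invented toy example, not from the tutorial): an agent that optimizes a proxy reward can score well on the proxy while failing the principal's true goal.

```python
# Invented toy example of a misspecified objective (not from the tutorial):
# the principal wants a clean room, but the agent's reward is a proxy --
# "minimize visible dust plus effort" -- which is best satisfied by hiding
# the dust rather than cleaning.
actions = {
    "vacuum the floor":         {"visible_dust": 0, "effort": 3, "clean": True},
    "sweep dust under the rug": {"visible_dust": 0, "effort": 1, "clean": False},
    "do nothing":               {"visible_dust": 5, "effort": 0, "clean": False},
}

def proxy_reward(a):                 # what the agent is optimized for
    return -(actions[a]["visible_dust"] + actions[a]["effort"])

def true_utility(a):                 # what the principal actually wants
    return 1 if actions[a]["clean"] else 0

best = max(actions, key=proxy_reward)
print("agent picks:", best)          # "sweep dust under the rug"
print("proxy reward:", proxy_reward(best), "| true utility:", true_utility(best))
```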

Dr. Travis LaCroix is currently an assistant professor in the Department of Philosophy at Dalhousie University, where he also teaches in the Faculty of Computer Science. In Fall 2024, he will join the philosophy department at Durham University. He is the author of Artificial Intelligence and the Value Alignment Problem: A Philosophical Introduction (under contract with Broadview Press). In addition to the philosophy and ethics of artificial intelligence, Dr. LaCroix's research explores social dynamics, norms, and conventions, and the philosophy of autism (for which he recently received an Insight Development Grant from the Social Sciences and Humanities Research Council).

Program Chairs

Fattane Zarrinkalam
School of Engineering, University of Guelph

Randy Goebel
Department of Computing Science/Alberta Machine Intelligence Institute (Amii), University of Alberta

Contact

  • Conference Chairs

  • Webmaster

  • www.caiac.ca

© 2025 Canadian Artificial Intelligence Association