Responsible AI
Following the successful third edition of the Responsible AI event in 2025, we are excited that the event will be held again in 2026. We strongly believe in the importance and urgency of the Responsible and Ethical Development of Artificial Intelligence for Social Good.
As outlined by the UNESCO Recommendation on the Ethics of Artificial Intelligence, AI technology may have unintended by-products that lead to discrimination, reinforce inequalities, infringe upon human rights, socially sort and disrupt democratic processes, limit access to services and intensify surveillance and unfair treatment of marginalized and minority groups. As such, we are committed to organizing a cohesive and dynamic program that embodies the paradigm of responsible development of AI so that AI researchers and practitioners can engage in critical analysis and integration of fairness, ethics, transparency, and algorithmic accountability in their work.
This year's program will consist of the following events and will be open to all participants of the Canadian AI conference:
- Two keynote talks by international leaders in Responsible AI
- A student 3-minute-thesis (3MT) competition
- A student research poster session
- A panel featuring leaders in Responsible AI
- A live AI Ethics debate
Responsible AI Co-chairs
Ebrahim Bagheri
Professor
Faculty of Information, University of Toronto
Website
Sébastien Gambs
Canada Research Chair in Privacy-preserving and Ethical Analysis of Big Data
Université du Québec à Montréal
Website
Maite Taboada
Professor, Department of Linguistics, Simon Fraser University
Director, Discourse Processing Lab
Website
Nabilah Chowdhury
Director, Network Management, UBC AI and Health Network
Responsible AI arrangements chair
Calvin Hillis
PhD Student, Toronto Metropolitan University
Program Schedule
Forthcoming.
Keynote speakers
My research interests lie at the intersection of Natural Language Processing, Machine Learning and Artificial Intelligence. My long-term vision is transforming LLMs into responsible, reliable and trustworthy systems, which are available and fair across languages and different socio-demographic groups. In my research, I focus on three main threads: (1) Control and interpretation of models: understanding model behavior and controlling model generation; (2) Reliability, safety and fairness: making models more consistent and safe, and mitigating biases and risks; (3) Multilinguality: creating NLP tools that equitably serve speakers of as many languages as possible, as well as understanding the emergent property of cross-linguality in models. Additionally, I have recently developed an interest in employing NLP tools and methodologies in the health domain. Before joining UBC, I was a postdoctoral researcher at the Paul G. Allen School of Computer Science & Engineering at the University of Washington, at Meta AI, and at Amazon. Prior to that, I completed my Ph.D. in Computer Science at the NLP lab at Bar-Ilan University. I obtained my M.Sc. in Computer Science from the Hebrew University.
I am a Canada Research Chair in Natural Language Processing and Machine Learning, and Associate Professor in the School of Information and Department of Linguistics (Joint Appointment), and Computer Science (Associate Member), at The University of British Columbia. My research is in deep learning and natural language processing. My research program focuses on deep representation learning and natural language socio-pragmatics, with a goal to innovate more equitable, efficient, and 'social' machines for improved human health, safer social networking, and reduced information overload. Applications of my work currently span a wide range of speech and language understanding and generation tasks. For example, my group works on language models, automatic speech processing, machine translation, and computational socio-pragmatics in social media. I direct the UBC Deep Learning & NLP Group, co-direct the SSHRC-funded I Trust AI Partnership Grant, and co-lead the SSHRC Ensuring Full Literacy Partnership Grant. I am a founding member of the Center for Artificial Intelligence Decision making and Action and a member of the Institute for Computing, Information, and Cognitive Systems.
Panelists
Prof. Nick Vincent is an Assistant Professor in Computing Science at Simon Fraser University. He studies the content ecosystems and data supply chains that fuel data-dependent technologies like search engines, recommender systems, and generative AI. This involves exploring avenues for people to control how data flow and participate in the governance of AI systems. The overarching goal of this research is to work towards highly capable and widely beneficial AI technologies that mitigate, rather than exacerbate, inequalities in wealth and power.
Ife Adebara is an Assistant Professor at the University of Alberta, jointly appointed in Modern Languages and Cultural Studies and Media and Technology Studies, and a Fellow at the Alberta Machine Intelligence Institute (Amii). She holds a PhD in Linguistics (Cognitive Systems) from the University of British Columbia, where her dissertation, Towards Afrocentric Natural Language Processing, laid the foundation for her research on inclusive language technologies. She also holds master's degrees in Computer Science from Simon Fraser University and the University of Birmingham, and a BA in Linguistics from the University of Ibadan. Ife's research focuses on building inclusive and explainable language technologies for low-resource and underrepresented languages, with particular emphasis on African and Indigenous languages. Her work addresses challenges in ethical data curation, model development, language policy, and AI governance. She leads the development of large-scale multilingual systems such as AfroLID, Serengeti, and Cheetah, which support more than 500 African languages and language varieties. Her research has been published in leading venues including ACL, EMNLP, and COLING. She serves as an organizer of the Cross-Cultural Considerations in NLP workshop, and contributes as an ad-hoc member to UNESCO's International Decade of Indigenous Languages. She is the recipient of multiple international recognitions, including IRCAI's Global Top 100 Outstanding AI Solutions under UNESCO auspices and the AI for Good Diversity, Equity & Inclusion AI Leader of the Year 2023 Award. Outside academia, Ife is the co-founder and Chief Technology Officer of EqualyzAI, where she develops agentic AI systems grounded in Africa's most comprehensive language datasets.
Dr. Alissa Centivany is an Assistant Professor in the Faculty of Information and Media Studies at the University of Western Ontario working on technology policy, law, and ethics. She holds a PhD in Information and a JD specializing in intellectual property and technology law. Prior to joining Western, Dr. Centivany was a Microsoft Research Fellow at the Berkeley Center for Law and Technology, UC-Berkeley School of Law, and a researcher at the Centre for Innovation Law and Policy, University of Toronto Faculty of Law. She was also an instructor at the University of Toronto and University of Michigan iSchools. Dr. Centivany co-founded and co-directs the Starling Centre for Just Technologies & Just Societies, co-chairs and serves as a core expert on the CIFAR & Mila AI Insights for Policymakers Program (AIPP), is co-founder and executive director of the Canadian Repair Coalition, and is a member of the Rotman Institute of Philosophy. Dr. Centivany has provided expert testimony before the Canadian House of Commons and Senate, successfully advocating for reforms to Canada's Copyright Act. She is an active participant in Canadian policy consultations on a range of contemporary sociotechnical topics including AI, sustainable tech, durable and interoperable design, and approaches to participatory policymaking. Dr. Centivany's expertise is internationally recognized; she has been invited to speak before the G20 and United Nations and has participated in U.S. and EU policy consultations. Centivany regularly shares her work with public audiences through news media interviews with the CBC, Globe & Mail, Toronto Star, The National, Global News, The Agenda, and others. Her work is motivated by interdisciplinarity, curiosity, and care. In her spare time, she makes and enjoys art, tends to living things, plays pinball whenever possible, and occasionally (secretly) co-hosts a late-night college radio show.
Dr. Mo Chen is an Associate Professor in the School of Computing Science at Simon Fraser University, Burnaby, BC, Canada, where he directs the Multi-Agent Robotic Systems Lab. He holds a Canada CIFAR AI Chair and is an Amii Fellow. Dr. Chen completed his PhD in the Electrical Engineering and Computer Sciences Department at the University of California, Berkeley in 2017, and received his BASc in Engineering Physics from the University of British Columbia in 2011. From 2017 to 2018, he was a postdoctoral researcher in the Aeronautics and Astronautics Department at Stanford University. Dr. Chen's research interests include multi-agent systems, safety-critical systems, human-robot interactions, control theory, reinforcement learning, and their intersections.
Peter West is an assistant professor at the University of British Columbia, broadly working on the capabilities and limits of LLMs: for example, the divergence of AI from human intuitions of intelligence, unpredictability and creativity in models, and evaluation and benchmark design. Peter completed his PhD at the Paul G. Allen School of Computer Science & Engineering at the University of Washington, and a postdoc at the Stanford Institute for Human-Centered AI. His work has been recognized with best, outstanding, and spotlight papers at NLP and AI conferences.
Kevin Leyton-Brown is a professor of Computer Science and a Distinguished University Scholar at the University of British Columbia. He holds a Canada CIFAR AI Chair at the Alberta Machine Intelligence Institute and is an associate member of the Vancouver School of Economics. He received a PhD and an M.Sc. from Stanford University (2003; 2001) and a B.Sc. from McMaster University (1998). He studies artificial intelligence, mostly at the intersection of machine learning with either the design and operation of electronic markets or the design of heuristic algorithms. He has helped to design a government auction that reallocated North American radio spectrum; an electronic market that linked Ugandan farmers with buyers for surplus crops; and widely used open source software such as SATzilla (an algorithm portfolio for solving satisfiability problems), Mechanical TA (peer grading software used at universities around the world), and AutoWEKA (a machine learning tool that both selects a model family and optimizes its hyperparameters). He is increasingly interested in large language models, particularly as components of agent architectures. He believes we have both a moral obligation and a historical opportunity to leverage AI to benefit underserved communities, particularly in the developing world.