Responsible AI Track

Following the successful first edition of the Responsible AI event in 2022, we are excited to hold the event again in 2023. We strongly believe in the importance and urgency of the responsible and ethical development of artificial intelligence for social good.

As outlined by the UNESCO Recommendation on the Ethics of Artificial Intelligence, AI technology may have unintended by-products that lead to discrimination, reinforce inequalities, infringe upon human rights, socially sort and disrupt democratic processes, limit access to services and intensify surveillance and unfair treatment of marginalized and minority groups. As such, we are committed to organizing a cohesive and dynamic program that embodies the paradigm of responsible development of AI so that AI researchers and practitioners can engage in critical analysis and integration of fairness, ethics, transparency, and algorithmic accountability in their work.

This year's program will consist of the following events and will be open to all participants of the Canadian AI conference:

  • Six invited talks of 30 minutes each, featuring speakers with practical and theoretical expertise at the intersection of Responsible AI and various domains
  • A keynote address by an international leader in Responsible AI
  • A half-day hands-on tutorial offering practical introductory training on the use of fairness-aware AI tools

Detailed Program

The track will take place in the Strathcona Anatomy and Dentistry building.
Thursday: Room 2/36
Friday: Room M/1

Thursday, June 8, 10:45–11:45

Emily Denton Keynote Address

Opportunities and Challenges for Responsible Generative AI
Recent advancements in generative modeling have led to the rapid development of text- and image-based generative AI applications with impressive capabilities. These emerging technologies are already impacting people, society, and culture in complex ways, foregrounding the importance of responsible development and governance frameworks. This talk will offer a broad overview of key ethical challenges associated with generative AI and considerations for responsible development.

Emily Denton (they/them) is a Staff Research Scientist at Google, within the Technology, AI, Society, and Culture team, where they study the sociocultural impacts of AI technologies and conditions of AI development. Their recent research centers on emerging text- and image-based generative AI, with a focus on data considerations and representational harms. Prior to joining Google, Emily received their PhD in Computer Science from the Courant Institute of Mathematical Sciences at New York University, where they focused on unsupervised learning and generative modeling of images and video. Prior to that, they received their B.S. in Computer Science and Cognitive Science at the University of Toronto. Though trained formally as a computer scientist, Emily draws ideas and methods from multiple disciplines and is drawn towards highly interdisciplinary collaborations, in order to examine AI systems from a sociotechnical perspective. They've published in multiple top-tier venues spanning social science and computing disciplines, including Big Data & Society, CSCW, FAccT, and NeurIPS.

Thursday, June 8, 11:45–12:15

Student Poster Presentations

Thursday, June 8, 13:00–13:30

Allison Marchildon talk

Ethics and AI: From Discourse to Practice

AI developers and companies seem increasingly open to talking about ethics and adopting ethical principles. But their discourses on the matter also face growing criticism, as they do not always translate into actual responsible practices. And when they do, it is questionable to what extent these practices correspond to what the actors impacted by them consider responsible.

The challenge then becomes fostering practices that are aligned with the values that matter not only to those who develop AI systems, but also, and above all, to the actors likely to be affected by these systems. In this talk, I will therefore discuss some of the mechanisms we are working on to help develop AI systems whose consequences and responsibilities are valued by the actors most likely to be impacted by them. Inspired by a pragmatist approach to ethics, these mechanisms are reflexive, collaborative and solutions-oriented, in order to develop new practices and responsibilities adapted to the new issues raised by AI.

Allison Marchildon is a full professor and lead of the Applied Ethics programs in the Department of Philosophy and Applied Ethics at Université de Sherbrooke. She is also co-leader of the Ethics, Governance and Democracy theme of the International Observatory on the Societal Impacts of Artificial Intelligence and Digital Technology (OBVIA) and sits on the steering committee of the journal Éthique publique. Her research focuses on ethics, governance, responsibility and power in different fields of activity, including artificial intelligence and new technologies. She is currently advising the Quebec Department of Cybersecurity and Digital Technology on developing an ethical framework for public sector use of AI. Her publications include Quels lendemains pour la responsabilité? Perspectives multidisciplinaires (2018), co-edited with André Duhamel; Former à l’éthique en organisation : une approche pragmatiste (2017), co-authored with André Lacroix and Luc Bégin; and Corporate responsibility or corporate power? CSR and the shaping of the definitions and solutions to our public problems (2016), published in the Journal of Political Power.

Thursday, June 8, 13:30–14:00

Natalie Mayerhofer talk

The Human Factor: Upskilling Healthcare Actors to Better Integrate AI for the Benefit of the Population
How can healthcare organizations support the adoption of responsible AI? They must invest in developing their workforce through relevant, innovative and inclusive learning strategies. Drawing on five years of testing and integrating AI at the CHUM (Quebec’s “Smartest Hospital”), this session will cover how its School of AI in Healthcare (SAIH) develops the critical mindsets, skillsets and learning solutions needed to build the future of healthcare.

Natalie Mayerhofer is Deputy Chief Learning Officer at the CHUM. During her international career, she has developed expertise in innovation policy, program evaluation, and strategic planning. She thrives on innovative projects, including her greatest challenge, the development of CHUM’s School of Artificial Intelligence in Healthcare.

Thursday, June 8, 14:00–14:30

Nathalie de Marcellis-Warin talk

Deliberative Governance and Social Acceptability: Promoting Responsible Innovation in AI

Deliberative governance and social acceptability are important for promoting responsible innovation in the field of artificial intelligence. Given the rapid advancements and potential impacts of AI, involving citizens and stakeholders in decision-making processes and promoting transparency and accountability in AI development are critical.

Promoting transparency and accountability requires questioning the governmentality of AI systems not only at the technical level but also in their organizational and societal deployment. Moreover, there is an emerging consensus that AI is a global challenge that must be addressed for the public good.

Ensuring that AI technologies are widely accepted by society and addressing concerns related to the ethical and societal implications of AI are also important for promoting responsible innovation and implementing deliberative governance.

This talk addresses responsible AI innovation from the perspective of deliberative governance in relation to social acceptability in society.

Nathalie de Marcellis-Warin is President and Chief Executive Officer of CIRANO. She is a full professor in the Department of Mathematics and Industrial Engineering at Polytechnique Montréal and a Visiting Scientist at the Harvard T.H. Chan School of Public Health. In addition, she is a member of the Commission de l'éthique en science et en technologie (CEST) du Québec and a research fellow at the International Observatory for Societal Impacts of AI and Digital Transformations (OBVIA). She holds a PhD in Management Science (risk and insurance management) from the École Normale Supérieure de Cachan (France). Her research interests are risk management and decision-making theory in different contexts of risk and uncertainty, as well as public policy implementation. She collaborates on major research projects with public and private organizations on issues of emerging technology adoption and societal impacts.

Thursday, June 8, 14:30–15:00

Samira Abbasgholizadeh-Rahimi talk

From Theory to Practice: The Responsible Implementation of AI in Primary Health Care
In this presentation, we will discuss the opportunities and difficulties of integrating artificial intelligence (AI) into primary healthcare settings. We will delve into the current state of AI-driven primary healthcare solutions, talk about the ethical implications of their use, and provide practical guidance on how to responsibly integrate AI into primary care practices.

Professor Samira A. Rahimi is an Assistant Professor in the Department of Family Medicine, an Associate Academic Professor at the Mila-Quebec AI Institute, an Associate Member of the Faculty of Dentistry, and an Affiliated Scientist at the Lady Davis Institute for Medical Research of the Jewish General Hospital. She is an Associate Member of the College of Family Physicians of Canada, Vice-President of the Canadian Operational Research Society (CORS), and Director of Artificial Intelligence in Family Medicine (AIFM). Professor Rahimi is a Fonds de Recherche du Québec-Santé (FRQS) Junior 1 Research Scholar in human-centered AI in primary health care, and her work as Principal Investigator has been funded by the FRQS, the Natural Sciences and Engineering Research Council (NSERC), Roche Canada, the Brocher Foundation (Switzerland), and the Strategy for Patient-Oriented Research (SPOR) of the Canadian Institutes of Health Research (CIHR). In recognition of her outstanding work, Professor Rahimi has received numerous awards, including the prestigious 2022 New Investigator Primary Care Research Award from the North American Primary Care Research Group (NAPCRG).

Thursday, June 8, 15:15–17:00

Three Minute Thesis Event

Ahmed Haj Yahmed
Software Engineering for Artificial Intelligence
Towards Reliable, Production-Ready Self-Learning Systems
The field of artificial intelligence (AI) has seen significant development in recent years. The performance of Machine Learning (ML) models and the relative accessibility of powerful computers allow almost anyone with a little coding knowledge and internet access to piece together an accurate proof-of-concept solution, from computer vision to Natural Language Processing. However, the gap between a proof-of-concept solution and a production-ready ML system is substantial. ML code represents only a small proportion of real-world ML systems; the rest is a broad set of surrounding infrastructure and operations to support these systems’ evolution. The major challenges such systems encounter are now linked to their production rather than their development. In this thesis, we are interested in examining the production phase of self-learning systems. On one hand, we focus on testing the production readiness of these systems. On the other hand, we investigate techniques to repair these systems to maintain their intended behavior. The goal of this thesis is to offer a broad overview of testing and repairing self-learning systems in production.
Alor Ebubechukwu
Software Engineering for Artificial Intelligence
An Approach for Auto-Generation of Labeling Functions for Software Engineering Chatbots
Software engineering (SE) chatbots have become increasingly prevalent. Natural Language Understanding (NLU) platforms serve as the core components of chatbots, enabling them to interpret user queries. Before an NLU can be used, it needs to be trained with labelled data. Prior work shows that acquiring labelled data can be challenging due to its scarcity in the software engineering domain. Therefore, chatbot practitioners resort to manually annotating the queries posed to the chatbot. However, this tedious process is time-consuming and expensive in terms of resources. To address this issue, we propose a weak-supervision-based approach to automatically label users’ queries. We evaluate the effectiveness of our approach by applying it to the queries of three diverse SE datasets and measure the performance improvement gained from training the NLU with the labelled queries. Our findings demonstrate promising results for our approach in generating high-quality labels and its potential to enhance the NLU performance of SE chatbots. Our approach leads to average F1-Score improvements of up to 17.74%. We believe that our approach can save the time and resources spent labeling users’ queries. Furthermore, it allows practitioners to focus on core chatbot functionalities rather than on labeling queries.
Aude Marie Marcoux
Responsible Development of Artificial Intelligence
AI Ethics as Practice
The adoption of artificial intelligence (AI) technologies by organizations opens massive opportunities, but entails a variety of challenges in company restructuring, training needs, and cultural transformation. It also raises several serious ethical questions, such as risks of discrimination, invasion of privacy, lack of transparency and accountability, and unwanted biases (Pasquale, 2015; O’Neil, 2016; Mittelstadt et al., 2019). Many reports aiming to define the normative frameworks that should delineate these challenges have been published, but their integration into day-to-day business practice in organizations remains a challenge. The gap between ethical principles and practice is large, and there is an urgent need for a translation from the “what” of AI ethics to the “how” of applied AI ethics (Morley et al., 2020). In my doctoral thesis, which will take the form of three papers, I focus on the management of the ethical dimension of artificial intelligence. More specifically, I am interested in the process by which organizational actors make sense of the new ethical issues brought about by the adoption of AI technologies and in the way they tackle and deal with them concretely. The main goal of this thesis is to move the discussion from theory to practicality, to build the bridge between knowing and doing. The first paper is a scoping review of the relevant literature providing guidance or actionable quality criteria on AI business ethics strategies and practices in organizations. The goal is to answer the following question: what strategies, practices and practical solutions aimed at bridging the gap between principled and applied AI ethics in organizations are proposed in the literature? The second article is an empirical case study that aims to understand how the employees of an organization in the insurance sector make sense of the ethical issues related to the valuation of data, among other things through artificial intelligence techniques. It also seeks to understand how they make sense of initiatives aimed at institutionalizing AI ethics in the organization, and to make recommendations on how to integrate these issues within the organization. The third article is a theoretical contribution that, inspired by the “practice turn” in organizational strategy, aims to initiate a similar “practice turn” in AI ethics by conceptualizing it as a practice. This approach is concerned with what people do daily, the actions and routines which together “constitute” a strategy conceptualized as something that “is done” and not as something that “is”. Hence an interesting parallel with the process of sensemaking which, through the interactions between practitioners, “constitutes” what is seen as “ethical”. By keeping the focus on actionability and practicability, the three studies make several contributions, not only to the advancement of knowledge and practice in AI ethics, but also to increasing the likelihood that AI ethics will be embedded in practice, so as to bridge the gap between principled and applied AI ethics.
Fanny Rancourt
Responsible Development of Artificial Intelligence
Investigating Self-Rationalizing Models for Commonsense Reasoning
The rise of explainable NLP has spurred a large body of work on datasets augmented with human explanations, as well as technical approaches to leverage them. Notably, generative large language models (LLMs) offer new possibilities, as they can output the prediction as well as an explanation in natural language. This work investigates the capabilities of fine-tuned T5 models for commonsense reasoning and explanation generation. Our experiments suggest that, while self-rationalizing models achieve interesting results, a significant gap remains: classifiers consistently outperformed self-rationalizing models, and a substantial fraction of model-generated explanations are not valid. Furthermore, training with expressive free-text explanations substantially altered the model's inner representation, suggesting that the explanations supplied additional information and may bridge the knowledge gap.
Fazle Rabbi
Software Engineering for Artificial Intelligence
Adversarial Robustness of Large Language Code Generation Models
The versatility of Large Language Models (LLMs) has allowed them to be successfully applied in multiple areas of Natural Language Processing (NLP), where they perform remarkably well at summarizing, translating and generating textual data. More recently, LLMs have been used to generate code from human-written documentation, comments, or other code. Although much work has tested the robustness of NLP models using adversarial attacks, the robustness of LLMs on code generation tasks has yet to be explored. Here, we are working on large language models to find their vulnerabilities on code generation tasks using adversarial attacks.
Harsh Patel
Software Engineering for Artificial Intelligence
Post-Deployment Machine Learning Model Recycling
ML model selection is performed before model training, during the development phase. Once a model is deployed, it is retrained on a scheduled interval to make sure it reflects current data and environment trends and to prevent issues such as dataset shift. This study applies the idea of post-deployment techniques to model selection, proposing a post-deployment model recycling technique to address problems such as model retraining costs and dataset shift, which are often described as maintenance challenges of ML models. We examine the performance of the post-deployment model recycling approach on a bug prediction task, using a Just-In-Time defect prediction model trained on datasets from multiple Apache projects, and evaluate the proposed technique against the traditional approach on different model evaluation metrics, with a sensitivity analysis over different parameters.
Jaskirat Singh
Software Engineering for Artificial Intelligence
Deployment Strategies for Edge AI
The rise of AI use cases catered towards the Edge, where devices have limited computation power and storage capabilities, motivates the need for optimized AI Deployment strategies to provide faster inference, enhanced data privacy, and optimal accuracy. This study aims to empirically assess the impact of Edge AI deployment strategies on the inference time and accuracy for Computer Vision and Natural Language Processing tasks across the Edge Environment. In this paper, we conduct Inference experiments with 1) Monolithic Deployment Strategies and 2) Edge AI Deployment Strategies across the nodes of the Edge Environment for investigating the optimal inference approaches from the point of view of AI developers.
Junjie Li
Software Engineering for Artificial Intelligence
Improving the Code Generation of LLMs by Modifying the Prompt
With the rapid improvement in natural language processing (NLP), many large language models have been proposed to handle a variety of downstream NLP tasks, e.g., translation, semantic analysis, etc. One NLP task that has been gaining more attention is code generation. One commercial application, GitHub Copilot powered by Codex, shows promising results on code generation. Since then, many models have been released, such as CodeGen and GPT-J. However, the influence of the prompt on code generation is still unclear. In this study, we explore prompts and modify them to improve code generation.
Khaled Badran
Software Engineering for Artificial Intelligence
Using Large Language Models to Augment Software Engineering Chatbot Datasets
The rapid growth of natural language processing and artificial intelligence has enabled the development of sophisticated chatbots that assist in various domains, including software engineering. A crucial factor in the performance of these chatbots is the quality and relevance of their training examples. In this paper, we present a comprehensive evaluation of state-of-the-art large language models, such as GPT-4 and Codex, on their abilities to generate high-quality training examples for software engineering chatbots.
Marianne Ozkan
Responsible Development of Artificial Intelligence
Admissibility of Evidence Derived from Machine Learning Techniques in Canadian Criminal Law
This thesis examines one of the effects of the emergence of artificial intelligence tools on the practice of law. In particular, we address the admissibility of evidence derived from tools that use machine learning, a branch of artificial intelligence. We seek to establish the reliability of this technique for the purposes of its admissibility as evidence. We begin by defining the notion of reliability of scientific evidence in Canadian law. We then cover the components and workings of machine learning. We analyze the various aspects of its reliability by identifying its vulnerabilities, which allows us to establish the conditions conducive to the technique's reliability. We survey the legal instruments that impose or reinforce these conditions, and we conclude with a concrete illustration of expert testimony on such evidence, namely the case of a tool designed to identify a speaker. Our analysis leads us to question the role of the court in establishing the reliability of a machine learning tool, a task that disadvantages the accused.
Nanda Kishore Sreenivas
Responsible Development of Artificial Intelligence
Deliberation and Voting in Multi-Winner Elections
Citizen-focused democratic processes where participants deliberate on alternatives and then vote to make the final decision are increasingly popular today. While the computational social choice literature has extensively investigated voting rules, there is limited work that explicitly looks at the interplay of the deliberative process and voting. In this work, we build a deliberation model using established models from the opinion-dynamics literature and study the effect of different deliberation mechanisms on voting outcomes achieved when using well-studied voting rules. Our results show that deliberation generally improves welfare and representation guarantees, but the results are sensitive to how the deliberation process is organized. We also show, experimentally, that simple voting rules, such as approval voting, perform as well as more sophisticated rules such as proportional approval voting or method of equal shares if deliberation is properly supported.
Omid Shokrollahi
Responsible Development of Artificial Intelligence
Intersectionality and Quantum Theory: A Novel Approach to Fairness
This presentation highlights the pressing need to integrate intersectionality, the coexistence of multiple social identities, into fairness research. By critiquing the oversight of this crucial factor in most current studies, it emphasizes the importance of understanding the emergent nature of intersectional bias. The talk introduces ideas from quantum theory, particularly the concept of entanglement, as a promising approach to creating fairer language models, one capable of capturing the intricate correlations of such bias while simultaneously addressing the privacy and explainability concerns of responsible AI. It asserts that the future of ethical AI must focus on creating more equitable systems that honor the diversity of intersectional groups, which leads to a better understanding of the dynamics of power and oppression.
Owen Chambers
Responsible Development of Artificial Intelligence
User-Specific Explanations of AI Systems Attuned to Psychological Profiles
This presentation will discuss a model aimed at supporting user-specific explanations from AI systems and present the results of a user study conducted to determine whether the algorithms used to attune the output to the user match well with the user’s own preferences. This is achieved through a dedicated study of certain elements of a user model: levels of neuroticism and extroversion, and degree of anxiety towards AI. Our work provides insights into how to test AI theories of explainability with real users, including questionnaires to administer and hypotheses to pose. We also shed some light on the value of a model for generating explanations that reasons about different degrees and modes of explanation. We conclude with commentary on the continued merit of integrating user modeling into the development of AI explanation solutions, and on the challenges and next steps in balancing the design of theoretical models with empirical evaluation in this field of research.
Rached Bouchoucha
Software Engineering for Artificial Intelligence
Deep Reinforcement Learning Quality Assurance: From Debugging to Testing to Maintenance
Deep Reinforcement Learning (DRL) algorithms have been shown to be effective in sequential decision-making tasks that require determining the appropriate actions in any given state of a dynamic, unsupervised and complex environment. This behavior is developed by training a neural-network-based agent in interaction with its specific environment, learning its optimal policy through a trial-and-error mechanism. DRL approaches can be extremely effective in various industrial applications such as robotics, autonomous driving, and games [2]. Recently, we have witnessed a rapid development of DRL approaches, and many studies have proved their efficiency in a variety of domains and use cases. However, in contrast to other deep learning techniques, and despite the rapid development of DRL, there is little research on the debugging, testing, and maintenance of DRL algorithm behaviour. These steps are required for the development of trustworthy DRL models, as these models will be used in contexts where errors cannot be tolerated.
Sananda Sahoo
Responsible Development of Artificial Intelligence
Accountable AI for Responsible Elections of the Future
This presentation highlights specific cases of AI use in elections in Canada, the US, and the UK, and the discourses around that use, through a review of media coverage; it also expands the definition of Accountable AI to propose regulating political actors in their deployment of AI for election purposes. By political actors, I refer to politicians, campaign organizers, and volunteers, as well as the corporations that develop and deploy software for use in elections. The rationale behind these objectives is to remind us of the often-forgotten everyday use of AI in elections, which has ramifications for election outcomes, and to make political actors accountable.
Sara Salamat
Responsible Development of Artificial Intelligence
Learning to Retrieve Convincing Arguments
The Information Retrieval community has made strides in developing neural rankers, which have shown strong retrieval effectiveness on large-scale gold-standard datasets. The focus of existing neural rankers has primarily been on measuring the relevance of a document or passage to the user query. However, other considerations, such as the convincingness of the content, are not taken into account when retrieving content. We have collected a dataset for retrieving convincing content and trained neural rankers for this purpose. Through extensive experiments on this dataset, we report that there is a close association between convincingness and relevance that can have practical value in how convincing content is presented and retrieved in practice.
Scarlett Xu
Responsible Development of Artificial Intelligence
Exploring Social Network Alignment Using Network Embedding and Writing Style Representation
Network alignment is the task of aligning nodes that belong to the same entity across different networks. Our study focuses on popular social networks, such as Twitter and FourSquare, to map user accounts from two different social networks that belong to the same person. Because different social networks serve different content purposes, the topics of users’ posts vary greatly, but the consistency of users’ writing styles is worth exploring. In this study, we adopt a representation learning approach that exploits users’ social structures and writing styles to align users across different social platforms. Additionally, the study aims to augment public datasets by mining public user posts.
Sharon Ho
Software Engineering for Artificial Intelligence
From Development to Dissemination: Social and Ethical Issues with Text-to-Image AI-Generated Art
Text-to-image generative artificial intelligence (AI) has made global news headlines not only for its ability to generate high-fidelity artworks, but also for prompting increased discussion about the ethics of its impact on living artists, the automation and commodification of art production, the frequent non-consensual collection and use of sensitive and copyrighted images as training data, and the cultural and social biases routinely exhibited in its generated outputs. In addition, there are concerns that open-sourced text-to-image generative AI models, such as Stable Diffusion, and techniques like Textual Inversion allow technical restrictions on the content subject matter to be removed and generated images to be made subject-specific, which could be utilized as a new medium for disinformation and sexual or targeted abuse. Because ethical discussions on AI-generated art using text-to-image generative AI models only came to light in the last quarter of 2022, academic research has yet to thoroughly explore the social and ethical implications of this technology. Therefore, it is imperative that research be done on these implications with regard to the technological development, evaluation, perception, creation, and moderation of AI-generated artworks while text-to-image generative AI systems are still in the early stages of public dissemination and adoption.
Shirin Seyedsalehi
Responsible Development of Artificial Intelligence
Gender Biases in Information Retrieval Systems
While neural rankers continue to show notable performance improvements over a wide variety of information retrieval tasks, recent studies show that such rankers may intensify certain stereotypical biases. We investigate whether neural rankers introduce retrieval effectiveness (performance) disparities over queries related to different genders. We specifically study whether there are significant performance differences between male and female queries when retrieved by neural rankers. Through our empirical study over the MS MARCO collection, we find that such performance disparities are notable and may be due to differences in how queries and their relevance judgements are collected and distributed for different gendered queries. More specifically, we observe that male queries are more closely associated with their relevant documents than female queries, and hence neural rankers are able to more easily learn associations between male queries and their relevant documents.
Thomas Soulas
Responsible Development of Artificial Intelligence
Bringing Voice-to-Text Transcription Tools Across Various Scientific Communities
The goal of my research is to build an open-source audio-to-text pipeline accessible to non-technical users. Beyond that, this module would be used to collect textual resources for research on preemptive risk decisions in mental health. There are already many open-source approaches, NVIDIA NeMo and SpeechBrain to name a few; this pipeline would be based on one of these approaches and built on top of it.
Yu Shi
Software Engineering for Artificial Intelligence
Planning Task Executions in Distributed Machine Learning Systems
Distributed machine learning (DML) systems are becoming increasingly popular due to their ability to process large amounts of data and perform complex computations. However, managing the training process in such systems can be challenging, as it involves coordinating multiple and heterogeneous computing devices (i.e., CPU, GPU, TPU, mobiles, and IoT devices). This project proposes using automated planning techniques to manage the training process of distributed machine learning systems. Specifically, we use the Planning Domain Definition Language (PDDL) to formalize the training process and generate task plans that maximize training performance. We try to demonstrate that automated planning can effectively manage task executions in distributed machine learning systems.
Friday, June 9, 9:00–9:45

Benjamin Fung talk

Machine Learning for Cybersecurity and Privacy
Three research directions in cybersecurity and privacy will be presented in this session. The first research direction is privacy-preserving data publishing. The objective is to share large volumes of data for machine learning without compromising the privacy of individuals. We will discuss multiple data-sharing scenarios in privacy-preserving data publishing. The second research direction is authorship analysis. The objective is to identify the author or infer the author's characteristics based on his or her writing style. The third research direction is malware analysis. Assembly code analysis is one of the critical processes for mitigating the exponentially increasing threats from malicious software. However, it is a manually intensive and time-consuming process, even for experienced reverse engineers. An effective and efficient assembly code clone search engine can greatly reduce the effort of this process. I will briefly describe our award-winning assembly clone search engine.

Dr. Benjamin Fung is a Canada Research Chair in Data Mining for Cybersecurity, a Full Professor in the School of Information Studies (SIS) at McGill University, and an Associate Editor of IEEE Transactions on Knowledge and Data Engineering (TKDE) and Elsevier Sustainable Cities and Society (SCS). He received a Ph.D. degree in computing science from Simon Fraser University in 2007. Collaborating closely with the national defense, law enforcement, transportation, and healthcare sectors, he has published over 140 refereed articles spanning the research forums of data mining, machine learning, privacy protection, and cybersecurity, with over 14,000 citations. His data mining work in crime investigation and authorship analysis has been reported by media outlets including The New York Times, the BBC, and the CBC. Dr. Fung is a licensed professional engineer in software engineering. See his research website http://dmas.lab.mcgill.ca/fung for more information.

Friday, June 9, 9:45–10:30

Jacob Jaremko talk

Using AI Responsibly for Medical Imaging Analysis
AI is increasingly being applied to assist with (or in some settings potentially even replace) radiologists' and other clinicians' diagnostic interpretations of medical images. This talk will review how issues of data privacy, data ownership, consent and liability apply to those wishing to use AI for medical image analysis in an ethically and legally appropriate and responsible way.

Jacob Jaremko is a Professor of Radiology and Adjunct Professor of Computing Science at the University of Alberta, a practicing board-certified Pediatric and Musculoskeletal radiologist and partner at Medical Imaging Consultants, and co-founder of two startup companies, including MEDO.ai. He has a PhD in Biomedical Engineering. He is a Canada CIFAR AI Chair and Fellow of the Alberta Machine Intelligence Institute. His research has focused on developing objective imaging biomarkers of disease in ultrasound and MRI, and on implementing AI-augmented medical imaging diagnostic tools at the clinical point of care: building the 21st-century stethoscope.

Friday, June 9, 10:45–12:00

Responsible AI Panel

Panelists: Valentine Goddard, Alexander Scott, Gagan Gill, Sasha Luccioni, Allison Cohen, Richard Khoury
Moderator: Marina Sokolova

Valentine Goddard is a member of Canada's Advisory Council on AI and a United Nations expert in AI Policy and Governance. A lawyer, certified mediator and curator, Ms. Goddard is the founder and executive director of AI Impact Alliance, an independent non-profit organization whose mission is to facilitate the responsible implementation of AI and accelerate the achievement of the UN's 17 Sustainable Development Goals. AI Impact Alliance is a founding organizational member of the International Observatory on the Ethical and Social Impact of AI (OBVIA) and of the Responsible AI Consortium. She is the lead architect of the AI on a Social Mission Conference, a respected international conference on the ethical and social implications of AI, and of the Art Impact AI programs, which position the arts' critical role in the future of AI and democracy. Ms. Goddard provides expertise on emerging regulatory frameworks on AI and data, and on anticipatory foresight of their socioeconomic implications. She delivers programs that bridge civic engagement and knowledge mobilization with policy innovation. With collaborators from a global network, she delivers thought-provoking programming on the design and governance of AI systems (power dynamics, systemic change, political economy, geopolitics, human security, etc.). She leads international working groups on critical issues such as gender equality in digital economies and the environment, and supports organizations in their adoption of AI with a focus on the social and regulatory implications.

Alexander Scott is the Director of Group Risk Management at Borealis AI, where he is responsible for the delivery of AI projects across the risk portfolio at RBC. Prior to joining Borealis, Alex led data science teams at TD Bank and a number of management consulting firms. He has supported analytics transformations at banks and public sector institutions across North America. Alex holds a Master's in Management Analytics from the Smith School of Business at Queen's University.

Gagan Gill leads the AI & Society portfolio at CIFAR, a globally renowned research organization dedicated to driving breakthroughs in science and technology. In this role, Gagan oversees programs and initiatives focused on exploring the impact of artificial intelligence on society and responsible AI adoption. Gagan is passionate about ensuring that the development and deployment of AI is done in a responsible, ethical, and equitable way that benefits all members of society. Prior to joining CIFAR, Gagan held a number of positions in program and policy development and knowledge mobilization. Gagan holds an MSc in Neurophysiology, with a secondary field of study in Neuroscience from the University of Guelph.

Dr. Sasha Luccioni is a Research Scientist and Climate Lead at HuggingFace, a Board Member of Women in Machine Learning (WiML), and a founding member of Climate Change AI (CCAI). Over the last decade, her work has paved the way to a better understanding of the societal and environmental impacts of AI technologies.

Allison Cohen is the Senior Applied AI Projects Lead at Mila, the world's largest deep learning research center. In this role, Allison works closely with AI researchers, social science experts and external partners to professionalize and deploy socially beneficial AI projects. Her portfolio of work includes a misogyny detection and correction tool, an application that can identify online activity suspected of involving human trafficking victims, and an agricultural analytics tool to support sustainable practices among smallholder farmers in Rwanda. She was named to the InspiredMinds! list of the Top 50 Influential Women in AI and was the runner-up for the 2022 Women in AI "Leader of the Year" Award in the category of Equity, Diversity and Inclusion. She holds an MA in Global Affairs from the University of Toronto and a BA in International Development from McGill University.

Richard Khoury received his Bachelor's Degree and his Master's Degree in Electrical and Computer Engineering from Laval University (Québec City, QC) in 2002 and 2004 respectively, and his Doctorate in Electrical and Computer Engineering from the University of Waterloo (Waterloo, ON) in 2007. From 2008 to 2016, he worked as a faculty member in the Department of Software Engineering at Lakehead University. In 2016, he moved to Université Laval as an associate professor. Since 2021, he has also served as president of the Canadian Artificial Intelligence Association. Dr. Khoury's primary areas of research are data mining and natural language processing, and his additional interests include knowledge management, machine learning, and artificial intelligence.

Marina Sokolova works in Text Data Mining and Machine Learning. Her research focuses on Ethical AI: studies of personal health information, fairness in ML applications, and privacy protection. Her work on performance evaluation of ML classifiers, done with Guy Lapalme, received international recognition. Dr. Sokolova has been an active CAIAC member and Canadian AI contributor since 2004. In 2020, she received the Distinguished Service Award from CAIAC. Marina Sokolova received her M.Sc. in Systems Science and Ph.D. in Computer Science from the University of Ottawa. She is a member of the Institute for Big Data Analytics at Dalhousie University and an Adjunct Professor with the Faculty of Medicine and the Faculty of Engineering, University of Ottawa.

Friday, June 9, 13:00–15:00

Tutorial

Privacy as hypothesis testing: linking Differential Privacy, membership attacks, and privacy audits
Machine learning (ML) ecosystems run on vast amounts of personal information, which is digested into models used for understanding and prediction. However, these ML models have been shown to leak information about users. Differential Privacy enables privacy-preserving statistical analyses on sensitive datasets with provable privacy guarantees, and as such it is seeing increasing interest from both academia and industry. This tutorial covers important aspects of what Differential Privacy means in theory and in practice.

Specifically, we will explore an intuitive privacy definition based on hypothesis tests, and see that it helps understand the theoretical guarantees offered by Differential Privacy, as well as the practical attacks it can defend against. We will see how to use this understanding to perform privacy audits of ML models, and if time permits how it can be extended to enforce fairness in ML predictions or defend against adversarial examples.
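To make the tutorial's starting point concrete, here is a minimal sketch (not taken from the tutorial materials) of the Laplace mechanism, the canonical way to release a statistic under Differential Privacy; the dataset, clipping bounds, and epsilon value are illustrative placeholders, and only NumPy is assumed.

    import numpy as np

    def dp_mean(values, lower, upper, epsilon, rng=None):
        """Release the mean of `values` with epsilon-differential privacy."""
        rng = np.random.default_rng() if rng is None else rng
        # Clip each record so one person's data can change the mean by at
        # most (upper - lower) / n -- the sensitivity of the query.
        clipped = np.clip(values, lower, upper)
        sensitivity = (upper - lower) / len(clipped)
        # Laplace noise with scale sensitivity / epsilon yields epsilon-DP.
        return clipped.mean() + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

    # Toy example: ages of 1,000 hypothetical users, released with epsilon = 1.0.
    ages = np.random.default_rng(0).integers(18, 90, size=1000)
    print(dp_mean(ages, lower=18, upper=90, epsilon=1.0))

In the hypothesis-testing view the tutorial develops, epsilon bounds how confidently any statistical test, including a membership-inference attack, can decide whether a particular individual's record was included in the computation: the smaller the epsilon, the closer such a test is forced toward random guessing.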

Mathias Lécuyer is an assistant professor at the University of British Columbia in Vancouver. Prior to this, he was a PhD student at Columbia University with Roxana Geambasu, Augustin Chaintreau, and Daniel Hsu, and a postdoctoral researcher at Microsoft Research in New York. He is broadly interested in machine learning systems, with a specific focus on applications that provide rigorous guarantees of robustness, privacy, and security. His research focuses both on improving practical and theoretical ML tools (differential privacy, causal inference, reinforcement learning) and on enabling specific use cases and applications for them (ML attacks and defenses, privacy-preserving data management, and system decision optimization).

Friday, June 9, 15:15–16:00

Tutorial (continued)

Privacy as hypothesis testing: linking Differential Privacy, membership attacks, and privacy audits

Responsible AI Co-chairs

Ebrahim Bagheri
Professor
Electrical, Computer, and Biomedical Engineering, Ryerson University

Sébastien Gambs
Canada Research Chair in Privacy-preserving and Ethical Analysis of Big Data
Université du Québec à Montréal (UQAM)

Eleni Stroulia
Professor, Department of Computing Science
Acting Vice Dean, Faculty of Science
Director, AI4Society Signature Area
University of Alberta

Program Chairs

Farhana Zulkernine
School of Computing, Queen’s University, Kingston, Ontario

Amilcar Soares
Department of Computer Science, Memorial University of Newfoundland

Contact

  • Conference Chairs
  • Webmaster
  • www.caiac.ca


© 2025 Canadian Artificial Intelligence Association