CAN-ASC-6.2:2025 - Accessible and Equitable Artificial Intelligence Systems
9. Definitions
9.1 Definitions
The following definitions shall apply in this Standard:
Accountability (in AI) — The responsibility an organization holds for the decisions, outcomes, and impact of the AI systems it uses, including the clear identification of a party that can resolve issues related to this use.
AI literacy — Understanding what AI is, how it works at a basic level, and how it can affect individuals and communities. It includes awareness of:
- how AI systems make decisions or generate outputs; and
- the potential for both immediate and long-term negative impacts, such as biased decisions in services or policies, or future consequences from poor predictions.
AI model — The system or program that has been trained using data to recognize patterns, make predictions, or perform tasks. It applies what it has learned from training data to new, unseen inputs.
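Example (informative): the following is a minimal Python sketch of this definition, using a hypothetical nearest-centroid classifier and made-up numbers, showing what it means for a model to learn patterns from training data and then apply them to a new, unseen input. The classifier, data, and labels are illustrative assumptions, not part of this definition.

```python
# Minimal sketch: a nearest-centroid classifier stands in for an "AI model".
# All data, labels, and the choice of classifier are illustrative assumptions.

def train(examples: list[tuple[list[float], str]]) -> dict[str, list[float]]:
    """Learn one centroid (average feature vector) per label from training data."""
    sums: dict[str, list[float]] = {}
    counts: dict[str, int] = {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def predict(model: dict[str, list[float]], features: list[float]) -> str:
    """Apply the learned patterns to an input the model has never seen."""
    def distance(centroid: list[float]) -> float:
        return sum((c - f) ** 2 for c, f in zip(centroid, features))
    return min(model, key=lambda label: distance(model[label]))

model = train([([1.0, 1.2], "A"), ([0.9, 1.1], "A"), ([3.0, 3.1], "B")])
print(predict(model, [2.8, 3.0]))  # new, unseen input -> "B"
```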
AI systems — Technology that uses data about people or the environment to automatically or semi-automatically perform tasks, recognize patterns, make decisions, predict outcomes, or create content, often using methods such as machine learning or algorithms.
Assistive technology — Equipment, product system, hardware, software, or service that is used to increase, maintain, or improve capabilities of individuals.
Note 1: Assistive technology is an umbrella term that is broader than assistive products.
Note 2: Assistive technology can include assistive services and professional services needed for assessment, recommendation, and provision.
Sources: Adapted from ISO/IEC Guide 71:2014, Guide for addressing accessibility in standards, and CAN-ASC-EN 301 549:2024.
Benefit — A positive outcome or advantage resulting from an action or decision, which contributes to the well-being, happiness, or flourishing of individuals, groups, or society as a whole.
Note: Ethical decision-making evaluates benefits in terms of their fairness, distribution, and impact on all affected parties, often weighing them against potential harms to determine the most morally justifiable course of action.
Bias (in AI) — Systematic errors or unfairness in the outcomes, predictions, or decisions made by AI systems. These biases can arise at various stages of an AI system’s lifecycle, such as during data collection, model design, training, or deployment. AI bias often reflects or amplifies biases present in the data, processes, or assumptions used to build the system. It can lead to unequal or discriminatory outcomes, affecting the fairness, accuracy, and trustworthiness of an AI system.
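Example (informative): bias can enter at any lifecycle stage, but one measurable symptom is a systematic gap in outcomes between groups. The Python sketch below, using hypothetical groups and decisions, computes per-group selection rates and their difference (sometimes called a demographic parity gap); it illustrates only one facet of bias, not a complete test for it.

```python
# Minimal sketch: measuring one symptom of bias in an AI system's outcomes.
# Group names and decisions are hypothetical illustration data.

decisions = [  # (group, approved_by_model)
    ("group_1", True), ("group_1", True), ("group_1", False),
    ("group_2", False), ("group_2", False), ("group_2", True),
]

rates: dict[str, float] = {}
for group in sorted({g for g, _ in decisions}):
    outcomes = [approved for g, approved in decisions if g == group]
    rates[group] = sum(outcomes) / len(outcomes)  # selection rate per group

gap = max(rates.values()) - min(rates.values())
print(rates)
print(f"parity gap = {gap:.2f}")  # a persistent, large gap signals systematic unfairness
```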
Cumulative harm — The aggregate effect of multiple harms (including low and medium impact) that intersect and build over time.
Deploy (AI systems) — The real-world use of an AI system in order to generate content or make decisions, recommendations, or predictions.
Education and training (people, specific to AI) — Information and activities aimed at expanding knowledge, skills, or both.
Equality — Providing each individual or group of people with the same resources and treatment.
Equitable, equity — Refers to fairness, justice, and freedom from discrimination. Equity recognizes that each person has different circumstances and focuses on enabling all individuals to achieve equal outcomes.
In equitable systems, resources and opportunities are shaped to diverse individual needs, and the individual is engaged in determining those needs as well as the resources required to address them.
Equivalent alternative — An alternative of equivalent availability, timeliness, cost, and convenience to the person with a disability.
Governance (in AI) — A clearly defined framework of accountability in AI decision-making. This includes responsible governance bodies that make and explain decisions about data access and usage.
Harm — Any negative consequence that a product or service might create for people. These harms may be physical, psychological, social, economic, or cultural.
Harms include perpetuating stereotypes, reinforcing existing inequities, and creating barriers for people with disabilities. Accessibility-related harms may include creating inaccessible interfaces, excluding users with specific needs, or failing to consider diverse modes of interaction.
Note: Harms are not always obvious; they can show up subtly, embedded in the way an AI system is designed, developed, or deployed.
Informed consent — The consent of an individual where sufficient information has been provided for the individual to understand the nature, purpose, and potential consequences of the decision or action to which they are consenting.
Note: Meaningful consent requires that individuals have a genuine option to withhold consent, supported by access to an equally effective and timely alternative decision-making process, either with or without AI, that includes direct human oversight and confirmation of the decision.
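Example (informative): one way to read the note above is as a routing requirement. The Python sketch below shows a request handled by an AI system only when consent is given; otherwise it is routed to an equally effective alternative process with direct human oversight and confirmation. The Request type and the ai_decide, alternative_decide, and human_confirm functions are hypothetical placeholders, not prescribed by this Standard.

```python
# Minimal sketch of the note above: withholding consent routes the request to
# an equally effective and timely alternative with direct human oversight.
# All names below are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Request:
    applicant: str
    consents_to_ai: bool

def ai_decide(request: Request) -> str:
    return "approved"  # placeholder for an AI system's decision

def alternative_decide(request: Request) -> str:
    return "approved"  # placeholder for a human-led decision-making process

def human_confirm(decision: str) -> str:
    return decision    # placeholder for direct human oversight and confirmation

def decide(request: Request) -> str:
    if request.consents_to_ai:
        return ai_decide(request)
    # A genuine option to withhold consent: the alternative process includes
    # direct human oversight and confirmation of the decision.
    return human_confirm(alternative_decide(request))

print(decide(Request(applicant="example", consents_to_ai=False)))
```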
Management (in AI) — The ongoing, day-to-day activities and decisions involved in collecting, storing, using, sharing, and securely disposing of data used by AI systems.
Predictive policing — Determining the likelihood of criminal or suspicious behaviour or activity based on statistical models, algorithms, and data.
Statistical discrimination — The negative impact of statistical reasoning on individuals who are outliers or far from the statistical average in the data used to make statistical decisions. (Statistical reasoning is inaccurate or wrong for people who are not statistically average.) Statistical discrimination is distinct from bias in data in that statistical discrimination cannot be addressed by ensuring proportional representation in data.
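Example (informative): the Python sketch below, with made-up numbers, shows why statistical discrimination differs from bias in data. A predictor built on the population mean is reasonable for near-average individuals and wrong for an outlier, and enlarging the dataset while keeping the same proportions does not reduce the outlier's error.

```python
# Minimal sketch with hypothetical numbers: statistical reasoning that is
# accurate for the statistical average fails for outliers, and proportional
# representation in the data does not fix it.

from statistics import mean

needs = [10, 11, 9, 10, 12, 10, 40]   # one individual's need is far from average
prediction = mean(needs)              # statistical reasoning: predict the mean

for need in needs:
    print(f"actual need {need:>2} -> error {abs(need - prediction):.1f}")

# Doubling the dataset with identical proportions leaves the prediction, and
# therefore the outlier's large error, unchanged.
assert mean(needs * 2) == prediction
```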
Training AI — Refers to a process of using data to teach an AI model how to perform tasks, recognize patterns, make decisions, predict outcomes, or create content. This includes developing the model from the beginning or improving (refining) it over time.
Training data — Refers to the information used during the process of training AI. It consists of labelled or structured examples from which the AI model learns.
Transparency (in AI) — Providing accessible notice and information regarding the data, models, workings, decisions, outcomes and use of AI systems.