Standard

CAN-ASC-6.2:2025 - Accessible and Equitable Artificial Intelligence Systems


Designation number: CAN-ASC-6.2:2025
Priority area: Information and communication technologies
Status: Published
Developed by: Accessibility Standards Canada
Date posted: December 2025
Publication date: December 2025

1. Accessibility Standards Canada: About us

Accessibility Standards Canada, under whose auspices this Standard has been produced, is a Government of Canada departmental corporation mandated through the Accessible Canada Act. Accessibility Standards Canada’s standards contribute to the purpose of the Accessible Canada Act, which is to benefit all persons, especially persons with disabilities, through the realization of a Canada without barriers through the identification, removal, and prevention of accessibility barriers.

Disability, as defined by the Accessible Canada Act, means any impairment, including a physical, mental, intellectual, cognitive, learning, communication or sensory impairment — or a functional limitation — whether permanent, temporary, or episodic in nature, or evident or not, that, in interaction with a barrier, hinders a person’s full and equal participation in society.

All of Accessibility Standards Canada’s standards development work, including the work of our technical committees, is carried out in recognition of, and in accordance with, the following principles in the Accessible Canada Act:

• all persons must be treated with dignity regardless of their disabilities;
• all persons must have the same opportunity to make for themselves the lives that they are able and wish to have regardless of their disabilities;
• all persons must have barrier-free access to full and equal participation in society, regardless of their disabilities;
• all persons must have meaningful options and be free to make their own choices, with support if they desire, regardless of their disabilities;
• laws, policies, programs, services, and structures must take into account the disabilities of persons, the different ways that persons interact with their environments, and the multiple and intersecting forms of marginalization and discrimination faced by persons;
• persons with disabilities must be involved in the development and design of laws, policies, programs, services, and structures; and
• the development and revision of accessibility standards and the making of regulations must be done with the objective of achieving the highest level of accessibility for persons with disabilities.

These principles align with the principles of the United Nations’ Convention on the Rights of Persons with Disabilities, ratified by the Government of Canada in 2010 to recognize the importance of promoting, protecting, and upholding the human rights of persons with disabilities to participate fully in their communities. Standards developed by Accessibility Standards Canada align with Articles in the Convention.

Accessibility Standards Canada seeks to create standards that are aligned with its vision. This includes commitments to break down barriers to accessibility and abide by the principle of “nothing without us” in our standards development process, where everyone, including persons with disabilities, can expect a Canada without barriers.

As part of the “nothing without us” principle, Accessibility Standards Canada promotes that accessibility is good for everyone, as it can have society-wide benefits. As a result, standards developed by Accessibility Standards Canada are designed to achieve the highest levels of accessibility. This means that Accessibility Standards Canada standards create equity-based technical requirements while taking into consideration national and international best practices, as opposed to focusing on minimum technical requirements. This approach is meant to push innovation in standards and develop technical requirements that have broad positive impacts. This approach to innovation strives to improve the outcomes for all Canadians, including creating employment opportunities and solutions that contribute to Canada’s economic growth.

The standards development process used by Accessibility Standards Canada is the most accessible in Canada, if not the world. Accessibility Standards Canada provides accommodations to meet the needs of Technical Committee members with disabilities. Accessibility Standards Canada provides compensation for people with disabilities to encourage their active participation. Accessibility Standards Canada ensures an accessible public review process, including accessible permission forms and multiple formats of the standard, to encourage Canadians with disabilities to comment.

To facilitate an accessible experience for all, our standards are available for free on our website. This includes providing standards in multiple formats, including plain-language, American Sign Language (ASL) and langue des signes québécoise (LSQ) summaries. This allows the following groups to benefit from the technical content of our standards:

• people with disabilities;
• people without disabilities;
• the federal public sector;
• the private sector;
• non-government organizations;
• Indigenous communities; and
• society.

Accessibility Standards Canada applies an intersectional framework to capture the experiences of people with disabilities who also identify as 2SLGBTQI+, Indigenous Peoples, women, and visible minorities. Its standards development process requires that technical committees apply a cross-disability perspective to ensure that no new barriers to accessibility are unintentionally created. In addition, standards developed by Accessibility Standards Canada align with the United Nations Sustainable Development Goals, which were adopted by Canada in 2015 to promote partnership, peace and prosperity for all people and the planet by 2030.

Accessibility Standards Canada is engaged in the production of voluntary accessibility standards, which are developed by technical committees using a consensus-based approach. Each technical committee is composed of a balanced group of experts who develop the technical content of a standard. At least 30% of these technical experts are people with disabilities and lived experience, and 30% are from equity-seeking groups including 2SLGBTQI+, Indigenous Peoples, women, and visible minorities. These technical experts also include consumers and other users, government and authorities, labour and unions, other standards development organizations, businesses and industry, academic and research bodies, and non-governmental organizations.

All Accessibility Standards Canada standards also incorporate related findings from research reports conducted through Accessibility Standards Canada’s Advancing Accessibility Grants and Contributions program. This program involves persons with disabilities, experts, and organizations to advance accessibility standards research, and supports research projects that help with the identification, removal, and prevention of new barriers to accessibility.

Accessibility Standards Canada standards are subject to review and revision to ensure that they reflect current trends and best practices. Accessibility Standards Canada will initiate the review of this Standard within four years of the date of publication. Suggestions for improvement, which are always welcome, should be brought to the notice of the respective technical committee.

As a Standards Council of Canada Accredited Standards Development Organization, all Accessibility Standards Canada standards are developed through an accredited standards development process and follow the Standards Council of Canada’s Requirements and Guidance for Standards Development Organizations. These voluntary standards apply to federally regulated entities and can be recommended to the Minister responsible for the Accessible Canada Act.

In addition to its focus on developing accessibility standards, Accessibility Standards Canada has been a leader amongst Canadian federal organizations in promoting and adopting accessibility internal to government. Accessibility Standards Canada is the first organization in the federal government to have a Board of Directors majority-led by persons with disabilities. Accessibility Standards Canada has a state-of-the-art accessible office space for its employees, Board of Directors, and Technical Committee members. The carefully designed accessible workspace aligns with the organization’s belief in the importance of equitable design.

To obtain additional information on Accessibility Standards Canada, its standards or publications, please contact:

Website: https://accessible.canada.ca/
E-mail: ASC.Standards-Normes.ASC@asc-nac.gc.ca
Mail: Accessibility Standards Canada, 320 Saint-Joseph Boulevard, Suite 246, Gatineau, QC J8Y 3Y8

2. Standards Council of Canada statement

A National Standard of Canada is a standard developed by a Standards Council of Canada (SCC) accredited Standards Development Organization, in compliance with requirements and guidance set out by SCC. More information on National Standards of Canada can be found at www.scc.ca.

SCC is a Crown corporation within the portfolio of Innovation, Science and Economic Development (ISED) Canada. With the goal of enhancing Canada’s economic competitiveness and social well-being, SCC leads and facilitates the development and use of national and international standards. SCC also coordinates Canadian participation in standards development, and identifies strategies to advance Canadian standardization efforts.

Accreditation services are provided by SCC to various customers, including product certifiers, testing laboratories, and standards development organizations. A list of SCC programs and accredited bodies is publicly available at www.scc.ca.

3. ASC legal notice

Please read this Legal Notice before using the standard document.

3.1 Legal notice for standards

Standards of the Canadian Accessibility Standards Development Organization (operating as “Accessibility Standards Canada”) are developed through a consensus-based standards development process approved by the Standards Council of Canada. This process brings together volunteers representing varied viewpoints and interests to achieve consensus and develop standards.

3.2 Understanding this edition of the standard

Revisions may have been or may eventually be developed in relation to this edition of the standard. It is the responsibility of the users of this document to verify if any revisions exist.

3.3 Disclaimer and exclusion of liability

This document was developed as a reference document for voluntary use. It is the responsibility of the users to verify if laws or regulations make the application of this Standard mandatory or if trade regulations or market conditions stipulate its use, for example, in technical regulations, inspection plans originating from regulatory authorities, and certification programs. Although the primary application of this Standard is stated in its scope, it remains the responsibility of the users of this Standard to judge its suitability for their particular purpose. It is also the responsibility of the users to consider limitations and restrictions specified in the purpose and/or scope of this Standard. This document is provided without any representations, warranties, or conditions of any kind, expressed or implied, including without limitation, implied representations, warranties or conditions concerning this document’s fitness for a particular purpose or use, its merchantability, or its non-infringement of any third party’s intellectual property rights. Accessibility Standards Canada makes no representations or warranties in respect of the accuracy, completeness, or currency of any of the information published in this document. Accessibility Standards Canada makes no representations or warranties regarding this document’s compliance with any applicable statute, rule, regulation or combination thereof. Users of this document should consult applicable federal, provincial, and municipal laws and regulations. Accessibility Standards Canada does not, by the publication of its standards documents intend to urge action that is not in compliance with applicable laws, and this document may not be construed as doing so. 
In no event shall Accessibility Standards Canada, its contractors, agents, employees, directors, or officers, or His Majesty the King in Right of Canada, His employees, contractors, agents, directors, or officers be liable for any direct, indirect, or incidental damages, injury, loss, costs, or expenses, however caused, including but not limited to special or consequential damages, lost revenue, business interruption, lost or damaged data, or any other commercial or economic loss, whether based in contract, tort (including negligence), or any other theory of liability, arising out of or resulting from access to or possession or use of this document, even if Accessibility Standards Canada or any of them have been advised of the possibility of such damages, injury, loss, costs, or expenses. In publishing and making this document available, Accessibility Standards Canada is not undertaking to render professional or other services for or on behalf of any person or entity or to perform any duty owed by any person or entity to another person or entity. The information in this document is directed to those who have the appropriate degree of knowledge and experience to use and apply its contents, and Accessibility Standards Canada accepts no responsibility whatsoever arising in any way from any and all use of or reliance on the information contained in this document. Accessibility Standards Canada publishes voluntary standards and related documents. Accessibility Standards Canada has no power, nor does it undertake, to enforce conformance with the contents of the standards or other documents published by Accessibility Standards Canada.

3.4 Intellectual property and ownership

As between Accessibility Standards Canada and users of this document (whether it be printed, electronic or alternate form), Accessibility Standards Canada is the owner, or the authorized licensee, of all copyright and moral rights contained herein. Additionally, Accessibility Standards Canada is the owner of its official mark. Without limitation, the unauthorized use, modification, copying, or disclosure of this document may violate laws that protect Accessibility Standards Canada and/or others’ intellectual property and may give rise to a right in Accessibility Standards Canada and/or others to seek legal redress for such use, modification, copying, or disclosure. To the extent permitted by licence or by law, Accessibility Standards Canada reserves all intellectual property and other rights in this document.

3.5 Patent rights

Some elements of this Standard may be the subject of patent rights or pending patent applications. Accessibility Standards Canada shall not be held responsible for identifying any or all such patent rights. Users of this Standard are expressly informed that determination of the existence and/or validity of any such patent rights is entirely their own responsibility.

3.6 Licence to comments

In this Legal Notice, a “comment” refers to all written or orally provided information, including all suggestions, that a user provides to Accessibility Standards Canada in relation to a standard and/or a draft standard. By providing a comment to Accessibility Standards Canada in relation to a standard and/or draft standard, the commenter grants to Accessibility Standards Canada and the Government of Canada a non-exclusive, royalty-free, perpetual, worldwide, and irrevocable licence to use, translate, reproduce, disclose, distribute, publish, modify, authorize to reproduce, communicate to the public by telecommunication, record, perform, or sublicense the comment, in whole or in part and in any form or medium, for revising the standard and/or draft standard, and/or for non-commercial purposes. By providing the comment, the commenter, being the sole owner of the copyright or having the authority to license the copyright on behalf of their employer, confirms their ability to confer the licence, and the commenter waives all associated moral rights, including, without limitation, all rights of attribution in respect of the comment. Where the provider of the comment is not the comment’s author, the provider confirms that a waiver of moral rights by the author has been made in favour of the provider or the comment’s copyright owner. At the time of providing a comment, the commenter must declare and provide a citation for any and all intellectual property within the comment that is owned by a third party.

3.7 Authorized uses of this document

This document, in all formats including alternate formats, is being provided by Accessibility Standards Canada for informational, educational, and non-commercial use only. The users of this document are authorized to do only the following:

• Load this document onto a computer for the sole purpose of reviewing it.
• Search and browse this document.
• Print this document if it is in electronic format.
• Disseminate this document for informational, educational, and non-commercial purposes.

Users shall not and shall not permit others to:

• Alter this document in any way or remove this Legal Notice from the attached standard.
• Sell this document without authorization from Accessibility Standards Canada.
• Use this document to mislead any users of a product, process or service addressed by this Standard.
• Reproduce all of, or specific portions of, the standard within other publicly available standards documents or works, unless Accessibility Standards Canada grants, in writing, permission to do so and the following attribution is included by the user: “This material comes from [insert title of standards] and cannot be further reproduced without Accessibility Standards Canada’s authorization”.

If you do not agree with any of the terms and conditions contained in this Legal Notice, you must not load or use this document or make any copies of the contents hereof. Use of this document constitutes your acceptance of the terms and conditions of this Legal Notice.
National Standard of Canada
CAN-ASC-6.2:2025 - Accessible and Equitable Artificial Intelligence Systems
Published in December 2025 by Accessibility Standards Canada, a departmental corporation of the federal government
320 Saint-Joseph Boulevard, Suite 246, Gatineau, QC J8Y 3Y8
To access standards and related publications, visit accessible.canada.ca or call 1-833-854-7628.
This National Standard of Canada is available in English and French.
ICS code(s): 03.100, 35.020, 35.040, 35.080, 35.100, 35.240
ISBN 978-0-660-79223-1
Catalogue number AS4-34/1-2025E-PDF
© His Majesty the King in Right of Canada, as represented by the Minister responsible for the Accessible Canada Act, 2025.
No part of this publication may be reproduced in any form without the prior permission of the publisher.

4. Technical committee members

4.1 Consumer and public interest

Lisa Snider, Senior Digital Accessibility Consultant and Trainer, Access Changes Everything Inc.

Nancy McLaughlin, Senior Policy Advisor on Accessibility, Canadian Radio-television and Telecommunications Commission

John Willis, Senior Program Advisor, OPS Accessibility Office, Centre of Excellence for Human Rights

4.2 Academic and research bodies

Jutta Treviranus (Chairperson), Director of the Inclusive Design Research Centre and Professor, OCAD University

Alison Paprica, Professor (adjunct) and Senior Fellow, Institute for Health Policy, Management and Evaluation, University of Toronto

Gary Birch, Executive Director, Neil Squire Society

Lisa Liskovoi, Senior Inclusive Designer and Digital Accessibility Specialist, Inclusive Design Research Centre, OCAD University

Clayton Lewis, Professor, University of Colorado

Julia Stoyanovich, Associate Professor and Director, Tandon School of Engineering, New York University

4.3 Government bodies with authorities and jurisdiction

Anne Jackson, Professor, Seneca College

Kave Noori, Artificial Intelligence Policy Officer, European Disability Forum

Mia Ahlgren, Human Rights and Disability Policy Officer, Swedish Disability Rights Federation

4.4 Business and industry

Sambhavi Chandrashekar (Vice-Chairperson), Global Accessibility Lead, D2L Corporation

Julianna Rowsell, Senior Product Manager, Product Equity, Adobe

Kate Kalcevich, Head of Accessibility Innovation, Fable

Saeid Molladavoudi, Senior Data Science Advisor, Statistics Canada

Merve Hickok, Founder, President and Research Director, Alethicist.org, Center for AI and Digital Policy, University of Michigan

5. Preface

This is the first edition of CAN-ASC-6.2, Accessible and Equitable Artificial Intelligence Systems.

This Standard is intended to align with other relevant standards, such as:

• CAN-ASC-EN 301 549:2024 - Accessibility requirements for ICT products and services (EN 301 549:2021, IDT);
• CAN-ASC-1.1:2024 (REV-2025) - Employment;
• CSA ISO/IEC 42001:25 - Information technology - Artificial intelligence - Management system;
• CSA ISO/IEC 30071-1:20 - Information technology - Development of user interface accessibility - Part 1: Code of practice for creating accessible ICT products and services; and
• CSA ISO/IEC 29138-1:19 - Information technology - User interface accessibility - Part 1: User accessibility needs.

This Standard is intended to align with relevant acts, codes, regulations and statutes, such as:

• Accessible Canada Act;
• United Nations’ Convention on the Rights of Persons with Disabilities; and
• Canadian Human Rights Act.

This voluntary Standard can be used for conformity assessment.

Development of this Standard was undertaken by Accessibility Standards Canada (ASC). The content was prepared by the Technical Committee on Accessible and Equitable Artificial Intelligence Systems, selected by ASC, under the authority of ASC management, and has been formally approved by the Technical Committee.

5.1 International agreements

5.1.1 Convention on the Rights of Persons with Disabilities

The United Nations’ Convention on the Rights of Persons with Disabilities protects and promotes the rights and dignity of persons with disabilities without discrimination, and on an equal basis with others. Parties to the Convention are required to promote and ensure the full enjoyment of human rights of persons with disabilities, including full equality under the law. The Convention has served as the major catalyst in the global movement towards viewing persons with disabilities as full and equal members of society.

This Standard aligns with the following Articles in the Convention:

• Article 5 - Equality and non-discrimination
• Article 6 - Women with disabilities
• Article 7 - Children with disabilities
• Article 8 - Awareness-raising
• Article 9 - Accessibility
• Article 11 - Situations of risk and humanitarian emergencies
• Article 14 - Liberty and security of person
• Article 16 - Freedom from exploitation, violence and abuse
• Article 17 - Protecting the integrity of the person
• Article 22 - Respect for privacy
• Article 24 - Education
• Article 25 - Health
• Article 27 - Work and employment
• Article 28 - Adequate standard of living and social protection

5.1.2 Sustainable Development Goals

The United Nations 2030 Agenda for Sustainable Development and its 17 Sustainable Development Goals are a global call to action. They aim to leave no one behind and address social, economic, and environmental challenges. Canada and 192 other United Nations member states adopted the 2030 Agenda in 2015. Standards can provide concrete and actionable guidance towards the achievement of the Goals.

This Standard contributes to the following Goals:

• Goal 4 - Ensure inclusive and equitable quality education and promote lifelong learning opportunities for all
• Goal 5 - Achieve gender equality and empower all women and girls
• Goal 9 - Build resilient infrastructure, promote inclusive and sustainable industrialization and foster innovation
• Goal 17 - Strengthen the means of implementation and revitalize the Global Partnership for Sustainable Development

6. Introduction

This Standard is the framework of a multi-part standard. It is intended to create a high-level structure for future parts, as well as to address automated decision tools that preceded large language models and generative Artificial Intelligence (AI). It is the diversity of people with disabilities, in systems designed to favour statistical averages, that warrants the urgency of this Standard.

While compliance with standards is best supported by testable, technically precise specifications, the requirements, necessary protections, and contexts of use of AI by people with disabilities are variable and in flux. Technical precision would not address this diversity, nor would it keep pace with changes in the field of AI. This Standard is therefore designed to support contextual adaptability to the diversity of applications and requirements of people with disabilities. It emphasizes process and intended outcomes, and will be supported by more precise technical guidance.

AI has the potential to present both extreme benefits and extreme harms to people with disabilities. Upholding the principles of the Accessible Canada Act when deploying AI requires that people with disabilities:

• experience equitable benefits from AI systems;
• do not experience inequitable harms from AI systems;
• do not experience a loss of rights and freedoms due to the use of AI systems; and
• are given agency and are treated with respect in their interactions with AI systems, including the right to choose equitable alternatives.

Four clauses outline requirements to ensure that:

• AI is accessible to people with disabilities (Clause 10);
• AI systems are equitable to people with disabilities (Clause 11);
• organizations implement the processes needed to achieve accessible and equitable AI (Clause 12); and
• AI education supports the systemic change needed to achieve accessible and equitable AI (Clause 13).

7. Scope

7.1 Terminology

In this Standard, three terms are defined as follows:

• Shall: expresses a requirement, or a provision that the user is obliged to satisfy to comply with the standard.
• Should: expresses a recommendation, or that which is advised but not required.
• May: expresses an option, or that which is permissible within the limits of the standard.

Notes accompanying clauses do not include requirements or alternative requirements; the purpose of a note accompanying a clause is to separate explanatory or informative material. Notes to tables and figures are considered part of the table or figure and may be written as requirements. Annexes are designated normative (mandatory) or informative (non-mandatory) to define their application.

8. References

This Standard refers to the following publications, and where such reference is made, it shall be to the edition listed below:

• CAN-ASC-EN 301 549:2024 - Accessibility requirements for ICT products and services (EN 301 549:2021, IDT);
• Accessible Canada Act, 2019;
• Personal Information Protection and Electronic Documents Act, 2000; and
• Privacy Act, 1985.

9. Definitions

9.1 Definitions

The following definitions shall apply in this Standard:

Accountability (in AI) — The responsibility an organization holds for the decisions, outcomes, and impact of the AI systems it uses, and the clear identification of a party that can resolve issues related to this use.

AI literacy — Understanding what AI is, how it works at a basic level, and how it can affect individuals and communities. It includes awareness of:
- how AI systems make decisions or generate outputs; and
- the potential for both immediate and long-term negative impacts, such as biased decisions in services or policies, or future consequences from poor predictions.

AI model — The system or program that has been trained using data to recognize patterns, make predictions, or perform tasks. It applies what it has learned from training data to new, unseen inputs.

AI systems — Technology that uses data about people or the environment to automatically or semi-automatically perform tasks, recognize patterns, make decisions, predict outcomes, or create content, often using methods like machine learning or algorithms.

Assistive technology — Equipment, product system, hardware, software, or service that is used to increase, maintain, or improve capabilities of individuals.
Note 1: Assistive technology is an umbrella term that is broader than assistive products.
Note 2: Assistive technology can include assistive services and professional services needed for assessment, recommendation, and provision.
Sources: Adapted from ISO/IEC Guide 71:2014, Guide for addressing accessibility in standards, and CAN-ASC-EN 301 549:2024.

Benefit — A positive outcome or advantage resulting from an action or decision, which contributes to the well-being, happiness, or flourishing of individuals, groups, or society as a whole.
Note: Ethical decision-making evaluates benefits in terms of their fairness, distribution, and impact on all affected parties, often weighing them against potential harms to determine the most morally justifiable course of action.

Bias (in AI) — Systematic errors or unfairness in the outcomes, predictions, or decisions made by AI systems. These biases can arise at various stages of an AI system’s lifecycle, such as during data collection, model design, training, or deployment. AI bias often reflects or amplifies biases present in the data, processes, or assumptions used to build the system. It can lead to unequal or discriminatory outcomes, affecting the fairness, accuracy, and trustworthiness of an AI system.

Cumulative harm — The aggregate effect of multiple harms (including low and medium impact) that intersect and build over time.

Deploy (AI systems) — The real-world use of an AI system in order to generate content or make decisions, recommendations, or predictions.

Education and training (people, specific to AI) — Information and activities aimed at expanding knowledge or skills, or both.

Equality — Providing each individual or group of people with the same resources and treatment.

Equitable, equity — Refers to fairness, justice, and freedom from discrimination. Equity recognizes that each person has different circumstances, and focuses on enabling all individuals to achieve equal outcomes. In equitable systems, resources and opportunities are shaped to diverse individual needs, and the individual is engaged to determine those needs as well as the resources needed to address them.

Equivalent alternative — An alternative of equivalent availability, timeliness, cost, and convenience to the person with a disability.

Governance (in AI) — A clearly defined framework of accountability in AI decision-making. This includes responsible governance bodies that make and explain decisions about data access and usage.

Harm — Anything that a product or service might do to create a negative consequence for people in any way. These harms may be physical, psychological, social, economic, or cultural. Harms include perpetuating stereotypes, reinforcing existing inequities, and creating barriers for people with disabilities. Accessibility-related harms may include creating inaccessible interfaces, excluding users with specific needs, or failing to consider diverse modes of interaction.
Note: Harms are not always obvious; they can show up subtly, embedded in the way an AI system is designed, developed, or deployed.

Informed consent — The consent of an individual where sufficient information has been provided for the individual to understand the nature, purpose, and potential consequences of the decision or action to which they are consenting.
Note: Meaningful consent requires that individuals have a genuine option to withhold consent, supported by access to an equally effective and timely alternative decision-making process, either with or without AI, that includes direct human oversight and confirmation of the decision.

Management (in AI) — The ongoing, day-to-day activities and decisions involved in collecting, storing, using, sharing, and securely disposing of data used by AI systems.

Predictive policing — Determining the likelihood of criminal or suspicious behaviour or activity based on statistical models, algorithms, and data.

Statistical discrimination — The negative impact of statistical reasoning on individuals who are outliers or far from the statistical average in the data used to make statistical decisions. (Statistical reasoning is inaccurate or wrong for people who are not statistically average.) Statistical discrimination is distinct from bias in data in that statistical discrimination cannot be addressed by ensuring proportional representation in data.

Training AI — The process of using data to teach an AI model how to perform tasks, recognize patterns, make decisions, predict outcomes, or create content. This includes developing the model from the beginning or improving (refining) it over time.

Training data — The information used during the process of training AI. It consists of labeled or structured examples from which the AI model learns.

Transparency (in AI) — Providing accessible notice and information regarding the data, models, workings, decisions, outcomes, and use of AI systems.

10. Accessible AI

To support the principles of full and equitable participation by people with disabilities according to the Accessible Canada Act, as applied to AI systems, the following shall be accessible to people with disabilities:
- AI systems; and
- the processes, resources, services, and tools used to plan, procure, create, implement, govern, manage, maintain, monitor, and use AI systems.

It shall be possible for people with disabilities to be:
- active participants in all roles in the AI lifecycle, complying with Clause 10.1; and
- users of AI systems, complying with Clause 10.2.

10.1 People with disabilities as full participants in the AI lifecycle

It shall be possible for people with disabilities to participate fully in all roles in the AI lifecycle and in all parts of the AI lifecycle. This includes, but is not limited to:
- creation of datasets, AI systems, and components (design, coding, implementation, evaluation, refinement);
- procurement;
- consumption;
- governance;
- management; and
- monitoring.

10.1.1 People with disabilities engaged in the creation of AI systems

The tools and processes deployed by organizations to create (design, code, implement, evaluate, refine), procure, consume, govern, manage, and monitor AI systems as well as their components and outputs shall be accessible to people with disabilities.

10.1.1.1 Accessible AI systems

Where organizations engage in creating AI systems, the AI systems shall be accessible to people with disabilities, including meeting the relevant clauses of CAN-ASC-EN 301 549:2024.

10.1.1.2 Accessible AI tools

Where an organization is creating tools used to design and develop AI systems, the tools shall at a minimum meet the relevant clauses of CAN-ASC-EN 301 549:2024, specifically the:
- functional performance statements in clause 4;
- generic requirements in clause 5;
- web requirements in clause 9;
- non-web documents requirements in clause 10;
- software requirements in clause 11; and
- documentation and support services in clause 12.

10.1.1.3 Accessible AI outputs

Where an organization is creating AI systems, the outputs from these systems shall at a minimum meet the relevant clauses of CAN-ASC-EN 301 549:2024, specifically the:
- functional performance statements in clause 4;
- generic requirements in clause 5;
- web requirements in clause 9;
- non-web documents requirements in clause 10;
- software requirements in clause 11; and
- documentation and support services in clause 12.

10.1.1.4 Accessible tools and outputs created by AI systems

Where an organization is using AI systems to create (design, code, implement, evaluate, refine) tools, the tools created using AI systems and their outputs shall at a minimum meet the relevant clauses of CAN-ASC-EN 301 549:2024, specifically the:
- functional performance statements in clause 4;
- generic requirements in clause 5;
- web requirements in clause 9;
- non-web documents requirements in clause 10;
- software requirements in clause 11; and
- documentation and support services in clause 12.

10.1.2 People with disabilities engaged in deploying AI systems

Where AI systems are deployed, the tools and resources used to implement AI systems, including tools and resources used to customize pre-trained models, train models, and set up, maintain, govern, and manage AI systems, shall be accessible to people with disabilities, including at a minimum meeting the relevant clauses of CAN-ASC-EN 301 549:2024, specifically the:
- functional performance statements in clause 4;
- generic requirements in clause 5;
- web requirements in clause 9;
- non-web documents requirements in clause 10;
- software requirements in clause 11; and
- documentation and support services in clause 12.

10.1.3 People with disabilities engaged in oversight of AI systems

Tools and processes deployed by organizations to assess, monitor, report issues, evaluate, and improve AI systems shall be accessible to people with disabilities, at a minimum meeting the relevant clauses of CAN-ASC-EN 301 549:2024, specifically the:
- functional performance statements in clause 4;
- generic requirements in clause 5;
- web requirements in clause 9;
- non-web documents requirements in clause 10;
- software requirements in clause 11; and
- documentation and support services in clause 12.

10.2 People with disabilities as users of AI systems

People with disabilities shall be able to equitably use and benefit from AI systems. Where organizations deploy AI systems and tools that impact employees, contractors, partners, customers, or members of the public, the outputs of AI systems shall at a minimum meet the relevant clauses of CAN-ASC-EN 301 549:2024, specifically the:
- functional performance statements in clause 4;
- generic requirements in clause 5;
- web requirements in clause 9;
- non-web documents requirements in clause 10;
- software requirements in clause 11; and
- documentation and support services in clause 12.

10.2.1 Accessible transparency and explainability documentation

Information to address disclosure, notice, transparency, explainability, and contestability of AI systems, their function, decision-making mechanisms, potential risks, and implementation shall be published before deployment and kept up to date (reflecting the current state of the AI used), and shall be accessible to people with disabilities, at a minimum meeting CAN-ASC-3.1:2025 - Plain Language, and the relevant clauses of CAN-ASC-EN 301 549:2024, specifically the:
- functional performance statements in clause 4;
- generic requirements in clause 5;
- web requirements in clause 9;
- non-web documents requirements in clause 10;
- software requirements in clause 11; and
- documentation and support services in clause 12.

10.2.2 Accessible feedback mechanisms

Mechanisms to provide feedback (see Clause 12.12) shall be accessible to people with disabilities, at a minimum meeting the relevant clauses of CAN-ASC-3.1:2025, and the relevant clauses of CAN-ASC-EN 301 549:2024, specifically the:
- functional performance statements in clause 4;
- generic requirements in clause 5;
- web requirements in clause 9;
- non-web documents requirements in clause 10;
- software requirements in clause 11; and
- documentation and support services in clause 12.

10.2.3 Mitigating statistical discrimination in the use of AI-based assistive technology

Not all people with disabilities will benefit equally from AI-based accommodations if their interaction modes are not captured in the data used to train the AI. When AI-based technology is considered for an accommodation, organizations shall evaluate potential inequities in collaboration with the intended beneficiary.
Note: For example, sign language interpretation requires specialized training to ensure linguistic and cultural accuracy. Replacing human interpreters with AI or using AI-powered Sign Language Recognition (SLR) technologies can cause harm to ASL and LSQ signers, particularly in critical settings such as justice and healthcare. Any use of AI for sign language communication should be agreed upon with the intended beneficiary, with the option to opt for a human alternative at any time.

11. Equitable AI

When AI systems make decisions about, or otherwise impact, people with disabilities, those decisions and uses of AI systems shall result in equitable treatment of and outcomes for people with disabilities.
Note: Doing so will result in benefits to individuals and population groups, as well as to society at large, by helping ensure that all individuals are able to lead productive lives and contribute to society.
Principles: Equitable treatment requires that people with disabilities:
- experience equitable benefits from AI systems;
- do not experience inequitable harms from AI systems;
- do not experience a loss of rights and freedoms due to the use of AI systems; and
- are given agency and are treated with respect in their interactions with AI systems, including the right to choose equitable alternatives.

11.1 Equitable access to benefits

Building on the principle that people with disabilities should be able to benefit from AI systems at least comparably to others, organizations shall make efforts to ensure that people with disabilities experience equitable benefits from AI systems.

11.1.1 Preventing underrepresentation and misrepresentation in training data

Organizations shall ensure that people with disabilities are not underrepresented or misrepresented in the training data, including avoiding:
- biased proxies;
- biased data labels; or
- synthetic data that fails to reflect actual disability experiences, diversity, and variability.
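As an illustrative, non-normative sketch of one way an organization might check for underrepresentation, the following Python snippet compares each group's share of a training dataset against a reference share (e.g., population statistics). All group names, counts, and reference figures below are hypothetical and not drawn from this Standard.

```python
# Illustrative, non-normative sketch: audit training-data representation by
# comparing each group's share of the dataset against a reference share.
# All figures below are hypothetical.
def representation_gaps(group_counts, reference_shares):
    """Return each group's dataset share minus its reference share."""
    total = sum(group_counts.values())
    return {
        group: group_counts.get(group, 0) / total - share
        for group, share in reference_shares.items()
    }

counts = {"disability": 50, "no_disability": 950}        # hypothetical dataset
reference = {"disability": 0.27, "no_disability": 0.73}  # hypothetical shares
gaps = representation_gaps(counts, reference)
print(gaps)  # a strongly negative gap flags underrepresentation of that group
```

In this hypothetical example, the gap of roughly -0.22 for the "disability" group would indicate it holds about 22 percentage points less of the dataset than its reference share. Note that, as Clause 11.2.6 cautions, proportional representation alone does not remove statistical discrimination.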

11.1.2 Equitable performance across user groups

Organizations shall validate and tune AI systems to perform comparably for people with disabilities as well as for others along the dimensions of:
- accuracy;
- reliability; and
- robustness.

11.1.3 Disaggregated performance metrics

Organizations shall assess and report AI system performance using disaggregated results for people with disabilities.
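As an illustrative, non-normative sketch, disaggregated reporting can be as simple as computing a metric per group rather than a single average. The group labels and evaluation records below are hypothetical.

```python
# Illustrative, non-normative sketch: per-group (disaggregated) accuracy for a
# hypothetical binary classifier, so that performance for people with
# disabilities is reported separately rather than averaged away.
from collections import defaultdict

def disaggregated_accuracy(records):
    """records: iterable of (group, predicted, actual) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Hypothetical evaluation records: the overall average hides a gap
# that the disaggregated view reveals.
records = [
    ("no_disability", 1, 1), ("no_disability", 0, 0),
    ("no_disability", 1, 1), ("no_disability", 0, 0),
    ("disability", 1, 0), ("disability", 0, 0),
]
print(disaggregated_accuracy(records))
# → {'no_disability': 1.0, 'disability': 0.5}
```

Here the aggregate accuracy (5/6 ≈ 0.83) would mask the 0.5 accuracy experienced by the hypothetical "disability" group, which is exactly what disaggregated reporting is meant to surface.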

11.1.4 Monitoring of real-world impacts on people with disabilities

Organizations shall continuously monitor both the performance of AI systems according to validation criteria and the real-world impacts of AI-assisted decisions on people with disabilities, complying with Clause 12.8.

11.1.5 System improvement through feedback and refinement

Organizations shall use monitoring and performance data, including data collected in the public registry complying with Clause 12.8, to:
- improve system usability;
- gather additional data;
- enhance data quality; and
- refine validation criteria,
thereby creating positive feedback loops.

11.2 Assessment and mitigation of harms

Organizations shall assess and mitigate potential harms that AI systems may pose to people with disabilities throughout the AI lifecycle. This includes identifying, prioritizing, and addressing risks that may disproportionately affect people with disabilities due to their status as statistical minorities or outliers in data-driven systems.

Organizations shall ensure that harm assessments are not limited to high-impact decisions but also account for cumulative, indirect, or context-specific harms. These harms may include, but are not limited to:
- discrimination;
- loss of privacy;
- reputational damage;
- exclusion; or
- erosion of agency and autonomy.

11.2.1 Equitable risk assessments

Where risk assessment frameworks are employed to determine the risks and benefits of AI systems, risks for people with disabilities, who feel the greatest impact of harm, shall receive priority. The risk assessment shall not be based solely on the risks and benefits to the majority. 

11.2.2 Mitigating cumulative harms

Care must be taken to recognize that people with disabilities may experience higher levels of harm than others, caused by the aggregate effect of many cumulative harms that intersect or build up over time as the result of AI-assisted decisions that are not otherwise classified as high impact. Where there are threats of serious or irreversible harm, lack of quantifiable certainty (e.g. in risk assessment) shall not be used as a reason for postponing effective measures to prevent harmful impacts to people with disabilities.

11.2.3 Equitable accuracy assessment criteria

Organizations shall select accuracy assessment criteria in line with the risk assessment results, taking care to capture the actual or potential harms to people with disabilities who are outliers in the data. Accuracy assessment should:
- include disaggregated metrics for people with disabilities;
- consider the context of use; and
- consider the conditions relative to people with disabilities.

11.2.4 Information security for people with disabilities

Organizations shall develop plans to protect people with disabilities in the case of data breaches or malicious attacks on AI systems. The plans shall identify risks associated with disabilities as well as clear and swift actions to protect people with disabilities. People with disabilities should not have to consent to a lower level of information security to benefit from AI systems.
Note: Disability data is highly unique, making it easily identifiable. As a result, the data of people with disabilities requires a higher level of protection.

11.2.5 Fairness and non-discrimination in AI decision-making

Care must be taken to recognize that people who are discriminated against in AI-assisted decision-making are often people with disabilities. Organizations shall ensure that AI systems are not negatively biased against people with disabilities due to:
- biased choices during data modelling;
- misrepresentation in the training data;
- use of biased proxies;
- biased data labelling;
- unrepresentative synthetic data;
- biased design of the systems;
- tuning of the systems according to incorrect or incomplete criteria; or
- biases that arise in the context of use of the system.

11.2.6 Mitigating statistical discrimination

Even with full proportional representation in the data, people with disabilities will likely remain outliers or marginalized minorities in the context of AI-assisted decisions. For this reason, to mitigate statistical discrimination, organizations shall not subject people with disabilities to AI-assisted decisions without their informed consent, understanding and access to equivalent alternatives.
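The effect described above can be seen with a toy model. In this non-normative sketch, a rule that predicts the population mean is optimized for the average case: the outlier is fully represented in the data, yet the decision remains far less accurate for them. All values are hypothetical.

```python
# Illustrative, non-normative sketch: a toy "predict the mean" rule. Even with
# the outlier fully present in the data, the error for the outlier stays large.
# This is the statistical discrimination described above.
def mean_model(training_values):
    return sum(training_values) / len(training_values)

# Hypothetical data: most values cluster near 10; one person's true value is 40.
values = [9, 10, 10, 11, 40]
prediction = mean_model(values)                 # 16.0, applied to everyone
errors = [abs(v - prediction) for v in values]  # [7.0, 6.0, 6.0, 5.0, 24.0]
print(prediction, errors)
```

Adding more data with the same distribution does not change this picture, which is why the Clause requires informed consent and access to equivalent alternatives rather than better sampling alone.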

11.2.7 Reputational harms

Organizations shall ensure that AI systems do not repeat or distribute stereotypes or misinformation about people with disabilities.

11.3 Upholding of rights and freedoms

Organizations shall uphold the fundamental rights and freedoms of people with disabilities in all uses of AI systems. This includes ensuring that AI systems are not used in ways that compromise privacy, dignity, and autonomy.

11.3.1 Freedom from surveillance

Organizations shall refrain from discriminatory use of AI tools for surveillance. 

11.3.2 Freedom from discriminatory profiling

Organizations shall refrain from using AI tools for biometric categorization, emotion analysis, or predictive policing of people with disabilities.

11.4 Preservation of agency and respectful treatment

Organizations shall ensure that people with disabilities retain agency, autonomy, and dignity in all interactions with AI systems. This includes:
- meaningful participation in decision-making processes;
- access to accurate and understandable information; and
- the ability to choose equitable alternatives to AI-assisted decisions.

AI systems shall be designed and deployed in ways that respect the rights of people with disabilities to engage, understand, and influence the systems that affect them. Organizations shall provide mechanisms for informed consent, contestability, and human oversight. Organizations shall prevent the use of AI systems to misinform or manipulate.

11.4.1 Engagement and participation

Organizations shall solicit input from, and encourage the involvement of, individuals with disabilities during all stages of AI system:
- consideration;
- planning;
- design;
- development;
- use; and
- operational management, including continuous monitoring post-deployment.

11.4.2 Information and disclosure

Organizations shall inform people about their intended or actual use of AI systems that make decisions about, or otherwise impact, people with disabilities. This information shall be provided in a manner that is:
- accurate;
- accessible; and
- understandable,
complying with Clause 10.

11.4.3 Consent, choice, and recourse

a) When AI systems are used to make or assist in decisions, organizations shall offer a multi-level optionality mechanism for:
   - clients;
   - employees; and
   - other impacted individuals.
b) These mechanisms shall enable the people identified in 11.4.3 a) to request an equivalently full-featured and timely alternative decision-making process that is, at the individual’s choice, either:
   - performed without the use of AI; or
   - made using AI with direct human oversight and verification of the decision.
c) People shall be given information in accessible formats about ways to correct, contest, change, or reverse an AI-assisted decision or action that impacts them (see Clause 11.2).
Note: Accessible formats include formats that will be accessible to the requestor. Alternate formats that may be requested under the Accessible Canada Regulation include audio formats, braille, large print, and electronic format.

11.4.4 Freedom from misinformation and manipulation

Organizations shall ensure that AI systems are not used to specifically misinform or manipulate people with disabilities. 

11.4.5 Support of human control and oversight

Organizations shall ensure there is a traceable chain of human responsibility that makes it clear who is accountable for the accessibility and equity of decisions made by an AI system.

11.5 Supporting research and development of equitable AI

Where organizations support research and development of AI, they shall include support of research and development of accessible and equitable AI systems.

12. Organizational processes to support accessible and equitable AI

Where organizations implement or use AI systems, or both, organizational processes shall result in AI systems that are accessible to, and equitable for, people with disabilities (see Clauses 10 and 11 for definitions of “accessible” and “equitable” AI). Each process shall:
- include people with disabilities in governance and decision-making throughout the AI lifecycle; and
- be accessible to people with disabilities who are:
  - governance or management committee members;
  - employees;
  - contractors;
  - clients;
  - disability organization members; or
  - members of the public.

Where applicable, this Clause applies to organizational processes used to:
- support accessible and equitable AI governance;
- plan and justify the need for AI systems;
- notify the public of an intention to use AI systems;
- assess data for their appropriateness for use by AI systems;
- design and develop accessible and equitable AI systems;
- procure accessible and equitable AI systems;
- customize accessible and equitable AI systems;
- conduct ongoing impact assessments, ethics oversight, and monitoring of potential harms;
- train personnel in accessible and equitable AI;
- provide transparency, accountability, and consent mechanisms;
- provide access to equivalent alternative approaches;
- address accessibility and equity feedback, incidents, and complaints about AI systems;
- provide review, refinement, and termination mechanisms; and
- safely store and manage data throughout the AI lifecycle.

12.1 Support accessible and equitable AI governance

Organizations shall solicit input from, and encourage the involvement of, people with disabilities during all stages of:
- dataset, AI systems, and components creation (design, coding, implementation, evaluation, refinement);
- procurement;
- consumption;
- governance;
- management; and
- monitoring.

In addition to consulting people with disabilities who are external to the organization on specific AI-related decisions (see Clauses 12.4, 12.5, 12.6, 12.7, 12.9, 12.14), organizations should have ongoing and continuous involvement of people with disabilities by recruiting, onboarding, and supporting the involvement of people with disabilities. Involvement of people with disabilities should include:
- employees;
- contractors; and
- members of governance and management committees (e.g., board of directors, steering committees, advisory committees, and management teams).

12.2 Plan and justify the use of AI systems

Where an organization is proposing and planning to deploy an AI system, the impact on people with disabilities shall be considered prior to deployment and throughout the use of the AI system. 

12.2.1 Plan for the use of AI

Organizations shall create a plan for the use of AI that:
- includes information on:
  - how and where human control and oversight are embedded in the use of the AI system; and
  - how users can opt for a human alternative to the AI system; and
- outlines clear and swift actions to protect people with disabilities in the event of data breaches:
  - at any stage of the project lifecycle; or
  - when malicious attacks on AI systems occur after deployment.

12.2.2 Risk and impact determination

Effective measures to prevent harmful impacts to people with disabilities shall be taken regardless of the availability of quantitative or predictive risk determinations, or both. Impact and risk assessment processes shall:
- include people with disabilities who may be directly or indirectly impacted by the AI system as active participants in the decisions (see Clause 11.2.2);
- include determination of impact on the broadest possible range of people with disabilities; and
- account for aggregate impacts for people with disabilities of many cumulative harms that can intersect or build up over time as the result of AI-assisted decisions that are not classified as high impact.

12.2.3 Equitable risk assessment frameworks

Where risk assessment frameworks are employed to determine the risks and benefits of AI systems (see Clause 11.2.2), organizations shall:
- prioritize prevention or mitigation, or both, for risks for people with disabilities (who are minorities and outliers in statistical modelling), for example by ensuring that risk assessment models are not based solely on the risks and benefits to the majority (or “typical” case);
- select accuracy assessment criteria in line with the risk assessment results, taking care to capture the actual or potential harms to individuals with disabilities and other minority or disadvantaged groups; and
- include disaggregated metrics for people with disabilities in accuracy assessments, and consider the context of use and the conditions relative to people with disabilities who may be impacted by the use of the AI system.

12.3 Provide notice of intention to use an AI system

The intention to use an AI system shall be publicly disclosed in accessible formats as part of the organization’s accessibility plan and distributed to national disability organizations and interested parties.
Note: This Clause applies to all AI systems, whether or not it is determined that they directly affect people with disabilities.

12.3.1 Request for notice

Organizations shall establish a process whereby any interested party can request to be included in a distribution list for notices. 

12.3.2 Accessible input mechanisms in notices

Organizations shall include accessible methods to provide input in notices.

12.4 Assess data for their appropriateness for the use of AI systems

Organizations shall determine if potential datasets containing information about people with disabilities are appropriate, inappropriate, or conditionally appropriate for use as inputs to AI systems, involving relevant people with disabilities in making the determination of appropriateness.

There shall be an alignment between:
- the dataset that is used as an input to an AI system;
- the model or method used by the AI system; and
- the objective or task assigned to the AI system.

The appropriateness of each dataset shall be assessed before it is used as an input for every:
- objective or task; and
- AI system.

Note: A dataset may be an appropriate input for a specific objective or task in one AI system, but inappropriate for other tasks in the same AI system, or inappropriate when applied to individuals who are different from most people on multiple variables or labels. Refer to Annex A for examples.

12.4.1 Prevention of bias and misrepresentation in data

Organizations shall collect, use, and govern data, data models, and algorithms in a manner that prevents negative bias or unwarranted discrimination toward people with disabilities in the use and outputs of AI systems. Specifically, steps shall be taken to prevent:
- biased choices during data modelling;
- misrepresentation in the training data;
- the use of biased proxies;
- biased data labelling;
- biased design of the systems;
- tuning of the systems according to incorrect or incomplete criteria;
- biases that arise in the context of use of the system; and
- synthetic data that reflects insufficient scope and purpose of disability experiences relevant to the purpose of the AI system.

12.4.2 Prevention of harm and discrimination

Organizations shall ensure that:
- harms to people with disabilities due to data breaches (either accidental or deliberate) are prevented;
- AI systems do not repeat or distribute stereotypes or misinformation about people with disabilities; and
- planners, developers, operators, and governance bodies have expertise to assess and respond to cases where non-discrimination requires that some people, including some people with disabilities, should not be subjected to AI-assisted decisions, because they are statistical outliers in the statistical distributions that the AI systems use (see Clause 11.2).

12.5 Design and develop accessible and equitable AI systems

Organizations shall ensure that the design and development of AI systems proactively support accessibility and equity for people with disabilities. This shall include:
- embedding accessibility and equity requirements from the outset;
- engaging people with disabilities in meaningful and compensated ways; and
- applying inclusive design and monitoring practices throughout the AI lifecycle.

12.5.1 Accessibility and equity requirements in design and development

Organizations shall include accessibility and equity requirements in design and development of AI systems that comply with Clauses 10 and 11.

12.5.2 Engagement of people with disabilities in accessibility testing

Prior to implementation or use, or both, of an AI system that has a direct or indirect impact on people with disabilities, relevant people with disabilities or disability organizations shall be engaged to test the accessibility of the AI system, complying with Clause 10. This engagement shall be fairly compensated by the organization.

12.5.3 Feedback from people with disabilities

Feedback from people with disabilities or disability organizations shall be sought in all decisions relating to designing and developing AI systems, complying with Clause 11.4.
Note: This will ensure these systems provide people with disabilities with equitable benefits, preserve individual agency, and prevent or mitigate, or both, harmful impacts on people with disabilities.

12.5.4 Prohibit surveillance and profiling of people with disabilities

Organizations shall adopt planning and design frameworks as well as monitoring processes to ensure that any AI system they use, control, or govern is not used:
- specifically to surveil employees and clients with disabilities;
- for biometric categorization, emotion analysis, or predictive policing specifically of employees and clients with disabilities; and
- to misinform or manipulate people with disabilities.

12.5.5 Prevent misuse and manipulation through AI

Complying with Clauses 11.3 and 11.4, organizations shall adopt planning and design frameworks as well as monitoring processes to ensure that any AI system they use, control, or govern:
- is not used to:
  - misinform; or
  - manipulate people with disabilities; and
- does not produce one or more of the following:
  - discriminatory impacts;
  - harm or heightened risk of harm;
  - reproduction of stereotypes;
  - other forms of bias against people with disabilities where the AI system is used:
    - to surveil employees, clients, or other users of the system; or
    - for biometric categorization, emotion analysis, or predictive policing of employees, clients, or other users of the system; or
  - any combination of the above, including all of the above.

12.5.6 Establish accountability for AI system decisions

The design of any AI system shall include accountability and governance mechanisms that make clear who is accountable for decisions made by an AI system.

12.6 Procure accessible and equitable AI systems

Organizations shall ensure that procurement processes for AI systems are designed to uphold accessibility and equity for people with disabilities. This includes applying the requirements of Clauses 10 and 11 to engage people with disabilities in procurement decisions and verify conformance to accessibility and equity criteria before acquisition.

12.6.1 Apply accessibility and equity requirements in procurement

Organizations shall include accessibility and equity requirements in their procurement processes, complying with Clauses 10 and 11.

12.6.2 Feedback from people with disabilities during procurement

Feedback from people with disabilities or disability organizations shall be sought in all decisions relating to procuring AI systems to prevent or mitigate, or both, negative impacts on people with disabilities that:
- are disproportionate in scale;
- produce inequitable outcomes for people with disabilities; and
- produce cumulative harms to people with disabilities, regardless of the scale of the immediate impact experienced.

12.6.3 Engaging people with disabilities in accessibility testing and impact assessment

Prior to implementation or use, or both, of an AI system that has a direct or indirect impact on people with disabilities, people with disabilities or disability organizations shall be engaged to:
- test the accessibility of the AI system; and
- conduct an impact assessment.

This engagement shall be fairly compensated by the organization implementing or using the AI.

12.6.4 Verification of accessibility and equity conformance

Conformance to accessibility and equity criteria shall be verified by neutral experts in accessibility and disability equity before a procurement decision of an AI system is finalized.

Note: Neutral experts are individuals with recognized subject matter expertise who provide impartial advice or analysis, and do not have financial, personal, or professional stakes in the outcome.

12.6.5 Termination provisions in procurement contracts 

Organizations shall include provisions to halt or terminate procurement contracts of an AI system if:
- the system malfunctions; or
- performance, as measured against accessibility or equity criteria, degrades or those criteria are no longer met.

12.7 Customize accessible and equitable AI systems

Organizations shall customize AI systems in a manner that upholds accessibility and equity for people with disabilities. This includes:
- applying relevant accessibility and equity requirements;
- actively involving people with disabilities in customization decisions; and
- ensuring that customized systems are tested for accessibility prior to deployment.

12.7.1 Accessibility and equity requirements in customization

Customization of AI systems shall comply with the requirements of Clauses 10 and 11.

12.7.2 Feedback from people with disabilities during customization

Feedback from relevant people with disabilities or disability organizations shall be sought in all decisions relating to customizing AI systems to prevent or mitigate, or both, negative impacts on people with disabilities that:
- are disproportionate in scale;
- produce disparate outcomes for people with disabilities; and
- produce cumulative harms to people with disabilities, regardless of the scale of the immediate impact experienced.

12.7.3 Accessibility testing of customization

Prior to the implementation or use, or both, of an AI system that has a direct or indirect impact on people with disabilities, people with disabilities or disability organizations shall be engaged to test the accessibility of the AI system. This engagement shall be fairly compensated by the organization implementing or using the AI.

12.8 Conduct ongoing impact assessments, ethics oversight, and monitoring of potential harms

Organizations shall conduct ongoing data quality monitoring and impact assessments to identify:
- emerging or actual bias;
- discrimination toward people with disabilities; or
- both.

12.8.1 Monitoring and assessment of system outputs

The monitoring and assessment of system outputs shall consider multiple dimensions including, but not limited to, AI system outputs with respect to people with disabilities that:
- may produce:
  - discriminatory impacts;
  - harm or heightened risk of harm;
  - reproduction of stereotypes;
  - other forms of bias against people with disabilities; or
  - any combination of the above, including all of the above;
- may not produce equitable access to benefits;
- may produce inequitable outcomes for people with disabilities;
- may produce cumulative harms to people with disabilities regardless of the scale of the immediate impact experienced;
- may undermine the:
  - rights;
  - freedoms;
  - dignity;
  - individual agency of people with disabilities; or
  - any combination of the above, including all of the above; or
- any combination of the above, including all of the above.
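As an informal illustration of one dimension above (inequitable outcomes), an organization might compare favourable-outcome rates for people who report a disability against everyone else. The sketch below is not part of the Standard: the record fields, function names, and the four-fifths review threshold are assumptions for demonstration only; actual thresholds are governed by Clause 12.8.3.

```python
# Illustrative sketch only: monitoring one dimension of system outputs
# (inequitable outcome rates). Field names and the 0.8 threshold are
# assumptions, not requirements of this Standard.

def favourable_rate(decisions):
    """Fraction of decisions with a favourable outcome."""
    if not decisions:
        raise ValueError("no decisions to assess")
    return sum(1 for d in decisions if d["favourable"]) / len(decisions)

def outcome_rate_ratio(decisions, group_key="disability_reported"):
    """Ratio of favourable-outcome rates: flagged group vs. others."""
    group = [d for d in decisions if d[group_key]]
    others = [d for d in decisions if not d[group_key]]
    return favourable_rate(group) / favourable_rate(others)

def flag_for_review(decisions, threshold=0.8):
    """True when the ratio falls below an (assumed) review threshold."""
    return outcome_rate_ratio(decisions) < threshold

# Toy data: 6/10 favourable for the flagged group vs. 9/10 for others.
decisions = (
    [{"disability_reported": True, "favourable": i < 6} for i in range(10)]
    + [{"disability_reported": False, "favourable": i < 9} for i in range(10)]
)
print(outcome_rate_ratio(decisions))  # ratio of 0.6 to 0.9
print(flag_for_review(decisions))
```

A real monitoring process would look at far more than a single ratio (see the list above), but even this simple check makes cumulative drift visible over repeated runs.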

12.8.2 Public registry of harms, barriers and inequitable treatment

Organizations shall maintain a public registry of:
- harms;
- contested decisions;
- reported barriers to access; and
- reports of inequitable treatment of people with disabilities related to AI systems.

The public registry shall:
- comply with the Personal Information Protection and Electronic Documents Act (PIPEDA);
- comply with the Privacy Act;
- document the impact of low, medium and high impact decisions on people with disabilities; and
- be publicly available in an accessible format.

Once a publicly accessible monitoring system encompassing all organizations that employ AI systems is established and maintained to track the cumulative impact of low, medium and high impact decisions on people with disabilities, organizations shall submit this information into the system.
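As a rough sketch of what a single registry entry might capture, the data model below uses assumed field names and categories; nothing here is prescribed by the Standard, and any real implementation must de-identify entries in accordance with PIPEDA and the Privacy Act before publication.

```python
# Illustrative sketch of a public-registry entry. Entry types mirror the
# clause's list (harms, contested decisions, barriers, inequitable
# treatment); impact levels mirror the low/medium/high language.
from dataclasses import dataclass, asdict
from datetime import date

ENTRY_TYPES = {"harm", "contested_decision", "access_barrier", "inequitable_treatment"}
IMPACT_LEVELS = {"low", "medium", "high"}

@dataclass
class RegistryEntry:
    entry_type: str          # one of ENTRY_TYPES
    impact_level: str        # one of IMPACT_LEVELS
    ai_system: str           # which AI system the report relates to
    description: str         # anonymized summary; no personal information
    date_reported: date
    status: str = "open"     # e.g. open / under review / remediated

    def __post_init__(self):
        if self.entry_type not in ENTRY_TYPES:
            raise ValueError(f"unknown entry type: {self.entry_type}")
        if self.impact_level not in IMPACT_LEVELS:
            raise ValueError(f"unknown impact level: {self.impact_level}")

entry = RegistryEntry(
    entry_type="access_barrier",
    impact_level="medium",
    ai_system="document triage assistant",
    description="Screen reader users could not reach the appeal form.",
    date_reported=date(2025, 12, 1),
)
print(asdict(entry)["entry_type"])  # access_barrier
```

Publishing such records as structured data (rather than free text) also makes it straightforward to roll entries up into the cross-organization monitoring system the clause anticipates.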

12.8.3 Thresholds for unacceptable risk

Thresholds for unacceptable levels of risk and harm shall be established with national disability organizations and organizations with expertise in accessibility and disability equity. As outcomes are monitored, AI systems testing shall evolve to address oversights and harms, and the system shall be updated to mitigate these oversights and harms or provide an alternative that bypasses them.

12.9 Train personnel in accessible and equitable AI

All personnel responsible for any aspects of the AI lifecycle shall receive training in accessible and equitable AI, delivered in accessible formats. 

12.9.1 Updating training content

Organizations shall regularly update training content to reflect current issues, technologies and approaches. 

12.9.2 Training content

Training shall be made specific to the learner’s context. This training shall include:
- legal considerations for privacy;
- user interface accessibility considerations;
- harm and risk detection strategies;
- bias detection and mitigation strategies; and
- ways to involve people with disabilities in the AI lifecycle.

Training shall be developed and deployed in collaboration with people with disabilities or disability organizations, or both, in accordance with Clause 13.3.

12.10 Provide transparency, accountability and consent mechanisms

Organizations shall inform people about their intended or actual use of data and AI systems that make decisions about, or otherwise impact, people with disabilities. Transparency about AI includes providing information about:
- what data was used to pre-train, customize, or dynamically train an AI system;
- data labels and proxy data used in training;
- the decision(s) to be made by the AI system and the determinants of the decision(s);
- the availability of accessible, optional full-featured decision-making systems; and
- the names and contact information of individuals within the organization accountable for the AI system and resultant decisions.

12.10.1 Accessibility of transparency information

All transparency information shall be provided in accessible formats and in a non-technical, plain language format, such that the potential impact of the AI system is clear. Note: Accessible formats include formats that will be accessible to the requestor. Alternate formats that may be requested under the Accessible Canada Regulation include audio formats, braille, large print, and electronic format.

12.10.2 Consent to data use

Organizations shall implement appropriate consent and engagement mechanisms based on how data is collected for use in AI systems.

In cases where AI systems use data that is collected with the consent of the data subjects, people with disabilities shall be able to withdraw consent for any or all uses of their data, at any time, and without negative consequences.

In cases where AI systems use data that is collected without the consent of the data subjects (e.g. AI systems that use de-identified administrative data about publicly funded services), organizations shall engage and involve people with disabilities in decisions about how datasets are used.
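The "any or all uses, at any time" requirement implies per-use consent records rather than a single yes/no flag. The sketch below illustrates one such data model; the class, method names, and use labels are assumptions for demonstration, not a mechanism defined by this Standard.

```python
# Illustrative per-use consent record with withdrawal, as one way to
# support "withdraw consent for any or all uses ... at any time".
from datetime import datetime, timezone

class ConsentRecord:
    """Tracks which uses of a person's data are currently permitted."""

    def __init__(self, subject_id):
        self.subject_id = subject_id
        self._consented = {}   # use -> timestamp consent was granted
        self._withdrawn = {}   # use -> timestamp consent was withdrawn

    def grant(self, use):
        self._consented[use] = datetime.now(timezone.utc)
        self._withdrawn.pop(use, None)   # re-granting clears a withdrawal

    def withdraw(self, use=None):
        """Withdraw one use, or every granted use when no use is given."""
        uses = [use] if use is not None else list(self._consented)
        for u in uses:
            self._withdrawn[u] = datetime.now(timezone.utc)

    def permits(self, use):
        return use in self._consented and use not in self._withdrawn

record = ConsentRecord("subject-001")
record.grant("model_training")
record.grant("service_personalization")
record.withdraw("model_training")                 # withdraw a single use
print(record.permits("model_training"))           # False
print(record.permits("service_personalization"))  # True
```

Keeping timestamps for both grant and withdrawal also gives the audit trail that the accountability clauses (12.5.6, 12.10) call for.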

12.11 Provide access to equivalent alternative approaches

Organizations that deploy AI systems shall provide people with disabilities with alternative options that offer equivalent availability, reasonable timeliness, cost and convenience to the person with a disability. The alternatives shall include:
- the option to request human decisions by people with knowledge and expertise in the needs of people with disabilities in the domain of the decisions to be made; or
- the option to request human supervision or final determination of decisions to be made by a human with expertise in the needs of people with disabilities, guided by the AI.

The organization shall retain individuals that have the necessary expertise to make equitable human decisions regarding people with disabilities when AI systems are deployed to replace decisions previously made by humans.

Note: For example:
- When AI sign language systems are deployed, deaf end-users need to have the autonomy to accept or decline the use of AI interpretation. They must be given a real option to request a human interpreter instead, without any penalty.
- Where AI systems use a camera to proctor an exam, Blind end-users need to have the autonomy to accept or decline the use of AI proctoring, which may detect eye movements. They must be given a real option to request a human proctor instead, without any penalty.

12.12 Address accessibility and equity feedback, incidents, and complaints about AI systems

The organization shall address accessibility and equity-related feedback, incidents, and complaints, including providing details about redress, challenge, and appeals functions for people with disabilities, through processes that:
- provide easy-to-find, accessible and understandable information about how feedback, incidents and complaints are addressed;
- acknowledge receipt of and respond to feedback, incidents, and complaints in a timely manner;
- provide a timeline for addressing feedback, incidents, and complaints;
- offer a procedure for people with disabilities or their representatives to provide feedback or to contest decisions anonymously;
- provide details about the course of action to address feedback, incidents and complaints; and
- communicate the status of addressing feedback, incidents, and complaints to people with disabilities or their representatives, and offer opportunities to appeal or contest the proposed remediation.

Feedback related to harms shall be documented in the public registry of harms as long as it can be anonymized to protect privacy in accordance with the Privacy Act, the Personal Information Protection and Electronic Documents Act (PIPEDA) and other applicable federal and provincial privacy legislation, with consent from the individual submitting the feedback.

12.13 Provide review, refinement, halting and termination mechanisms

Organizations that deploy AI systems shall continuously review, refine and, if necessary, halt or terminate AI systems. 

12.13.1 Considering the full range of harms

Organizations shall consider the full range of harms during continuous review and refinement processes, including cumulative harms to people with disabilities from low and medium impact decisions.

12.13.2 Feedback from people with disabilities during review and refinement

Review and refinement processes shall involve people with disabilities.

12.13.3 Conditions for halting and termination

In the situation where the system degrades such that accessibility or equity criteria for people with disabilities are no longer met, the organization shall halt the use of the AI system until the accessibility barrier or inequitable treatment is addressed, or the system is terminated.

12.13.4 Learning from mistakes and failures

All AI systems shall be updated to mitigate risks if harms to people with disabilities are identified. AI systems that use machine learning shall be designed to learn from mistakes and failures, including those detailed in the public registry in Clauses 12.8 and 12.12.

12.14 Safely store and manage data throughout the AI lifecycle

Organizations shall ensure that disability data is used to create equitable AI systems in a safe and secure manner that does not result in harms for the individuals who provide their data.

12.14.1 Data storage and management

Data about people with disabilities shall be safely stored and managed in each phase of the data lifecycle, beginning with:
- data collection through to retention (storage);
- use;
- disclosure (sharing); and
- destruction.

Note: Misuse or breaches of disability-related data can lead to discrimination, exclusion, and other harms to people with disabilities.

Consistent with Article 22 of the United Nations Convention on the Rights of Persons with Disabilities, organizations shall protect the privacy of personal, health and rehabilitation information of people with disabilities on an equal basis with others. Data storage and management shall comply with the Privacy Act, the Personal Information Protection and Electronic Documents Act (PIPEDA) and other applicable federal and provincial privacy legislation.

12.14.2 De-identification

Organizations shall ensure that re-identification is not possible through strong anonymization techniques.
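"Strong anonymization" is not defined by this clause; one common screening test an organization might apply is k-anonymity over quasi-identifiers. The sketch below is illustrative only: k-anonymity by itself does not guarantee that re-identification is impossible, and the quasi-identifier choice and k value are assumptions rather than requirements of the Standard.

```python
# Illustrative k-anonymity check as one screening test for
# re-identification risk. Passing this test is necessary but not
# sufficient for "strong anonymization" (complementary techniques
# include l-diversity, differential privacy, and expert review).
from collections import Counter

def smallest_group_size(records, quasi_identifiers):
    """Size of the smallest group sharing the same quasi-identifier values."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

def is_k_anonymous(records, quasi_identifiers, k=5):
    """True when every quasi-identifier combination occurs at least k times."""
    return smallest_group_size(records, quasi_identifiers) >= k

# Toy dataset: one person is uniquely identifiable by (age_band, region).
records = [
    {"age_band": "30-39", "region": "QC"},
    {"age_band": "30-39", "region": "QC"},
    {"age_band": "30-39", "region": "QC"},
    {"age_band": "40-49", "region": "ON"},
]
print(is_k_anonymous(records, ["age_band", "region"], k=3))  # False
```

When such a check fails, typical responses are to generalize the quasi-identifiers further (coarser age bands, larger regions) or to suppress the small groups before release.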

13. Accessible education, training and literacy on AI

All interested parties of the AI ecosystem, including those involved in the creation, procurement, deployment, and oversight of AI systems, those who use AI in decision-making, and those impacted by AI systems directly or indirectly, need to be better informed about accessibility and equity. For this reason, education, training and support of people’s AI literacy shall:
- be accessible;
- include instruction about accessible and equitable AI;
- involve people with disabilities in creating and delivering such education and training; and
- contribute to literacy efforts to enable those impacted by AI-assisted decisions or actions to exercise agency and autonomy.

13.1 Accessible education, training and support of AI literacy

To ensure inclusive and equitable participation of people with disabilities in AI-related education, training, and literacy, organizations shall ensure that they comply with one or more of the following:
- Materials and methodologies, including technology, content, methods, and processes for instruction, assessment and certification, shall be accessible, complying with relevant requirements of CAN-ASC-EN 301 549:2024.
- Education and training on AI shall enable people with disabilities to participate in governance and management of AI systems, complying with the requirements of Clause 12.1.
- AI literacy shall enable people with disabilities to take an active role in the creation, procurement, deployment, and oversight of AI systems.

13.2 Accessible and equitable AI in education and training for those using AI professionally

Organizations shall ensure that their current workforce, including but not limited to those in technical roles, receives education and training on AI that integrates content on accessible and equitable AI covered in Clauses 10 and 11, to ensure that all AI systems are accessible and equitable. Training shall meet the requirements of Clause 12.9.

13.3 Participation of people with disabilities in the development and delivery of AI education, training and AI literacy support

Development of education, training and literacy materials and methodologies, as well as delivery of instruction, shall involve people with disabilities.

13.4 AI literacy

To promote informed and equitable engagement with AI systems, organizations shall implement AI literacy initiatives that empower individuals, especially people with disabilities, to understand and navigate the implications of these systems, including the following considerations:
- When organizations implement AI systems, they shall engage in AI literacy efforts to enable people who are impacted by the implemented AI systems, including people with disabilities, to understand the goals, benefits and risks of these systems, as well as any accessibility and equity concerns, specifically:
  - impacts of participation and non-participation; or
  - informed consent, and understanding alternative options and opportunities for recourse; or
  - both.
- AI literacy shall be available to those affected by implemented AI systems in order to exercise informed consent, evaluate the impacts of participation and non-participation, and understand alternative options and opportunities for recourse, as discussed in Clause 11.4.

14. Annex A (Informative)

This Annex provides a supporting use case for Clause 12.4. The example includes a description of a hypothetical dataset with examples of appropriate use, conditionally appropriate use and inappropriate use.

Dataset: Facial recognition data collected from 100,000 people, primarily without disabilities (>90%), using commercial facial recognition software in controlled lab settings.

Appropriate use: Data for an AI-based decision-support system that assists flight attendants with faster, more accurate validation of passport photos, but allows for human identification in real time, presenting no delays or difference in service levels for people with disabilities.

Conditionally appropriate use: Data for an AI-based decision-support system that assists in identifying individuals in transportation environments that are not controlled, provided that the system has been validated to account for varying camera angles, lighting conditions, and facial differences in these specific settings, and the software has been adapted to work in diverse environments. Alternative methods of validation must be readily available for use when travellers choose to opt out of facial recognition or if facial recognition does not work for them.

Inappropriate uses: Data used as an input for an AI-based tool that controls entry into buildings with no human oversight or methods to bypass the facial recognition system. Use of an AI-based tool or system to make or inform decisions about people who are dissimilar from the people represented in the training data, to the point that the AI-based tool or system would not be expected to provide meaningful outputs related to them.

15. Annex B: Bibliography (Informative)

Note: Research findings from Accessibility Standards Canada’s Advancing Accessibility Standards Research Grants and Contributions Program informed the background research and development of this Standard. Related research reports are listed in the Bibliography below.

This Standard refers to the following publications and their specific editions:

15.1 Acts

European Parliament. EU AI Act: First regulation on artificial intelligence. 2023.
White House Office of Science and Technology Policy. Blueprint for an AI Bill of Rights: Making automated systems work for the American people. 2022.

15.2 Online resources

Center for Responsible AI at New York University. airesponsibly.com
Hickok, M. AI Ethicist: Trustworthy AI, Responsible AI, AI Governance. www.aiethicist.org
Inclusive Design Research Centre. We Count: Inclusive Artificial Intelligence (AI) Initiatives. wecount.inclusivedesign.ca
Montreal AI Ethics Institute. Democratizing AI ethics literacy. montrealethics.ai
Treasury Board of Canada Secretariat, Government of Canada. Directive on Automated Decision-Making. https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=32592

15.3 Publications

Association des Personnes Intéressées à l’Aphasie et à l’Accident Vasculaire Cérébral. n.d. Towards Better Communication Accessibility: Identifying Perceived Barriers and Facilitators in Financial Institutions for People Living with Aphasia.
Canadian Association of the Deaf. 2023. Advancing Accessibility Standards for Deaf, DeafBlind and Hard of Hearing Canadians.
Canadian Association of the Deaf. 2025. Quiet Waves: Firsthand Experiences of Deaf, DeafBlind, and Hard of Hearing Individuals Reports of Barriers in Communication in the Built Environment.
Carleton University. 2023. Informing Standards for Acoustics and the Built Environment.
Centre for Equitable Library Access. n.d. Accessibility standards in commercial audiobooks.
CSA Group. 2021. Advancing Accessibility Standards Research: Review of CSA Group Standards for Accessibility Adaptation.
Neil Squire Society. 2023. Research and Inform Standards for Next Generation 911.
Ontario College of Art and Design University. 2021. Future of Work and Disability: Inclusion, artificial intelligence, machine learning and work.
Ontario College of Art and Design University. 2023. Equitable Digital Systems Research Report.
Ontario College of Art and Design University. 2025. Accessible Canada, Accessible World / Un Canada accessible, Un monde accessible.
PEACH Research Unit. 2023. Visualizing Accessibility Standards: A demonstration with CSA B651.
Queen's University. 2023. Accessible Communication.
Realize. n.d. INDEED (INvestigating the DEvelopment of Accessibility Standards in Canada and the Inclusion/Exclusion of Episodic Disabilities).
Regroupement des aveugles et amblyopes du Québec. 2022. Web accessibility of Canadian banking/financial services.
Réseau québécois pour l’inclusion des personnes sourdes et malentendantes. 2023. Accessible information for Deaf and hard of hearing Canadians: Perceptions of Deaf and Hard of hearing persons, interpreters and broadcasters.
Ryerson University. n.d. Understanding user perspective of the speed/accuracy/delay tradeoff for captioning fast-paced media.
Surrey Place. 2023. Usability of Digital Information and Technology with People with IDD.
The University of Ontario Institute of Technology. 2024. A Study of Accessible and Inclusive Virtual and Blended Information and Communication Technologies (ICTs) for the Federal Public Service and Federally Regulated Industries in Post-COVID-19 Canada.
University Health Network. 2023. Recommendations for the Inclusion of Wayfinding Technologies in Canadian Accessibility Standards.

National Standard of Canada
CAN-ASC-6.2:2025
Accessible and Equitable Artificial Intelligence Systems
Published in December 2025 by Accessibility Standards Canada
A departmental corporation of the federal government
320, Saint-Joseph Boulevard, Suite 246, Gatineau, QC, J8Y 3Y8
To access standards and related publications, visit accessible.canada.ca or call 1-833-854-7628.
This National Standard of Canada is available in French and English versions.
ICS code(s): 03.100, 35.020, 35.040, 35.080, 35.100, 35.240
ISBN 978-0-660-79223-1
Catalogue number AS4-34/1-2025E-PDF