Summary of CAN-ASC-6.2:2025 – Accessible and Equitable Artificial Intelligence Systems
Introduction to the CAN-ASC-6.2:2025 – Accessible and Equitable Artificial Intelligence Systems Standard
This is the first edition of the CAN-ASC-6.2:2025 – Accessible and Equitable Artificial Intelligence Systems Standard.
This Standard is intended to align with:
- Other relevant standards, including:
- CAN-ASC-EN 301 549:2024 – Accessibility requirements for ICT products and services (EN 301 549:2021, IDT)
- CAN/ASC-1.1:2024 (REV-2025) – Employment
- CSA ISO/IEC 42001:25 – Information technology – Artificial intelligence – Management system
- CSA ISO/IEC 30071-1:20 – Information technology – Development of user interface accessibility – Part 1: Code of practice for creating accessible ICT products and services
- CSA ISO/IEC 29138-1:19 – Information technology – User interface accessibility – Part 1: User accessibility needs
- Other relevant acts, codes, and regulations, including:
- Accessible Canada Act
- United Nations Convention on the Rights of Persons with Disabilities
- Canadian Human Rights Act
Goals and purpose
This Standard will help organizations create, acquire, and use AI that:
- gives equitable benefits to people with disabilities,
- avoids causing unfair harm,
- protects rights and freedoms of people with disabilities, and
- treats people with disabilities with respect and gives them choices, including the choice not to use AI.
Scope
The first 9 Clauses of this Standard cover details on the Standard Development Organization (SDO) and the technical committee responsible for developing the Standard, legal obligations, and how the Standard should be used.
Clauses 10 to 13 cover the following requirements:
- Accessible AI: Making sure AI can be used by people with disabilities.
- Equitable AI: Making sure AI applications treat people with disabilities fairly.
- Organization processes to support accessible and equitable AI: Helping organizations build and use AI in equitable and accessible ways.
- Education and training: Teaching people about AI to make AI systems more inclusive.
Accessible AI
AI systems must be easy for people with disabilities to use. People with disabilities should be involved in every step of creating, managing, and using AI.
Participation in the AI process
People with disabilities should be able to take part in every role and step of the AI process, including:
- designing,
- coding,
- testing,
- buying,
- using, and
- checking AI systems.
Organizations must make sure all tools used to build and manage AI meet accessibility standards (CAN-ASC-EN 301 549:2024). This includes:
- Making accessible AI systems and tools.
- Ensuring outputs from AI are accessible.
People with disabilities as AI users
AI systems must be equitable, usable, and beneficial for people with disabilities. If AI affects workers, customers, or the public, it must meet accessibility standards.
Organizations must:
- share clear, plain-language information about how AI systems work, what decisions they make, and how people can challenge those decisions,
- provide accessible ways for people to give feedback, and
- work with individuals to make sure AI tools used as accommodations are equitable.
Example: If AI is used for sign language interpretation, Deaf users must be able to choose a human interpreter instead, especially in important settings like hospitals or courts.
Equitable AI
AI systems must treat people with disabilities equitably. Equitable treatment helps people live full lives and take part in their communities.
Equitable access to benefits
Organizations must:
- include people with disabilities in training data,
- test and adjust AI tools so that they work well for everyone,
- share performance results separately for people with disabilities (see the sketch after this list),
- track real-world effects, and
- use feedback to improve systems.
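One way to act on sharing performance results separately is disaggregated evaluation: computing the same metrics for each group rather than a single overall score. The sketch below is a minimal illustration only, assuming a tabular set of predictions with hypothetical "label", "prediction", and group columns; the Standard itself prescribes no code or metrics.

    # Disaggregated evaluation sketch (illustrative; not from the Standard).
    # Reports accuracy and false-negative rate per group so that gaps
    # affecting people with disabilities are visible, not averaged away.
    import pandas as pd

    def disaggregated_report(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
        rows = []
        for group, sub in df.groupby(group_col):
            accuracy = (sub["label"] == sub["prediction"]).mean()
            positives = sub[sub["label"] == 1]
            # False-negative rate: positive cases the system wrongly rejects.
            fnr = (positives["prediction"] == 0).mean() if len(positives) else float("nan")
            rows.append({group_col: group, "n": len(sub),
                         "accuracy": accuracy, "false_negative_rate": fnr})
        return pd.DataFrame(rows)

    # Usage: disaggregated_report(predictions, "disability_status")

Reporting per-group counts ("n") alongside the metrics matters: a small group can hide a high error rate behind an acceptable overall average.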
Preventing harm
Organizations must find and reduce harm caused by AI. Harm can be big or small, and it can build up over time. Examples include discrimination, loss of privacy, exclusion, and loss of control.
Organizations must:
- focus on risks that affect people with disabilities,
- understand and minimize small harms that can add up,
- use equitable ways to check accuracy,
- protect personal information,
- avoid unfair treatment,
- make sure people understand and agree before AI is used, and
- avoid spreading false or harmful ideas about people with disabilities.
Respecting rights and freedoms
AI must not be used in ways that reduce privacy, dignity, or independence.
Organizations must avoid:
- using AI to inequitably track or monitor people with disabilities, or
- using AI to judge people based on their body, movement, facial expressions, or emotions.
Respecting choice and dignity
People must be able to make informed choices and be treated with respect.
Organizations must:
- include people with disabilities in all steps,
- share clear, accessible information,
- offer choices, like using a human instead of AI,
- avoid misleading or manipulating people, and
- make sure someone within the organization is responsible for AI decisions.
Supporting equitable and accessible AI research
Organizations that support AI research must also support research that helps make AI equitable and accessible for people with disabilities.
Organization processes to support accessible and equitable AI
Organizations must make sure that their AI is equitable and usable for people with disabilities. This means including people with disabilities in planning, building, using, and managing AI systems. This includes employees, contractors, clients, and members of the public.
Equitable and accessible AI governance
Organizations must:
- include people with disabilities in decision making, and
- hire and support people with disabilities in roles such as employee and board member.
Planning and explaining AI use
Before using AI, organizations must:
- make a clear plan that includes human options and emergency actions,
- understand risks and impacts, including small harms that build up, and
- use equitable ways to check for risk and accuracy.
Informing the public of AI use
Organizations must:
- tell the public when AI will be used,
- use accessible formats when telling the public,
- let people sign up for updates, and
- provide accessible ways to give feedback.
Checking data used in AI
Before using data about people with disabilities, organizations must:
- make sure the data fits the task (see the sketch after this list),
- include people with disabilities in the decision,
- avoid bias, misrepresentation, and unfair labels, and
- prevent harm and discrimination.
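Annex A describes a use case for assessing whether data matches an AI task. As a rough illustration of what a pre-use check might look like, the sketch below flags a dataset for human review when a group's share of records is too small to support reliable results. The threshold and column names are assumptions, not values from the Standard, and a check like this supports, rather than replaces, involving people with disabilities in the decision.

    # Pre-use data check sketch (threshold and columns are assumptions).
    # Flags a dataset for human review when a group's share of records
    # falls below a chosen cutoff, before the data is used for training.
    import pandas as pd

    REVIEW_THRESHOLD = 0.05  # assumed cutoff; the Standard sets no number

    def needs_representation_review(df: pd.DataFrame, group_col: str,
                                    group_value: str) -> bool:
        share = (df[group_col] == group_value).mean()
        return share < REVIEW_THRESHOLD

    # Usage: needs_representation_review(training_data, "disability_status", "yes")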
Designing and building equitable and accessible AI
Organizations must:
- include accessibility and equity from the start,
- test AI systems with people with disabilities,
- ask for feedback,
- avoid using AI to track, sort, or trick people, and
- make it clear who is responsible for AI decisions within the organization.
Buying AI systems
Organizations must:
- include accessibility and equity in buying decisions,
- ask for feedback from people with disabilities,
- test systems before use,
- have experts check AI systems for equity and accessibility, and
- include a way to cancel contracts if needed.
Customizing AI systems
When changing or adjusting AI systems, organizations must:
- follow accessibility and equity rules,
- get feedback from people with disabilities, and
- test the customized system before use.
Ongoing checks for harm
Organizations must:
- regularly check how AI affects people with disabilities,
- look for ways AI might harm people or treat them unfairly,
- keep a public record of harms, and
- work with disability groups to set limits for risk.
Training staff
Everyone working with AI must get training that is accessible, up to date, and specific to their role.
Training must cover:
- keeping personal information private,
- designing AI for everyone,
- finding and reducing risk and harm,
- making AI equitable, and
- including people with disabilities.
People with disabilities must help create and deliver this training.
Transparency and consent
Organizations must:
- share clear information about how AI and user data are used,
- make this information accessible and easy to understand,
- get permission to use user data,
- let people take back their consent without penalty, and
- include people with disabilities in decisions about user data collected without consent.
Offering other options
People with disabilities must be able to choose other options that are just as available, timely, and convenient.
These options include asking for:
- a human decision-maker, or
- a human to review AI decisions.
Organizations must hire people with the right expertise to make equitable decisions.
Examples:
- Deaf users must be able to choose a human interpreter.
- Blind users must be able to choose a human proctor during exams.
Handling feedback and complaints
Organizations must:
- provide clear, accessible ways to give feedback or complaints,
- respond to feedback quickly and tell people when they can expect a response,
- allow anonymous feedback,
- explain what will be done to fix the issue,
- keep people updated and offer ways to review and challenge a decision, and
- add feedback about harm to the public record, if it can be anonymized and the person agrees.
Reviewing and improving AI systems
Organizations must:
- keep checking how AI systems work and how they might be harming people with disabilities,
- improve AI systems when needed,
- stop using AI systems if they cause harm or no longer meet standards,
- include people with disabilities in reviews, and
- make sure AI systems learn from mistakes, including complaints of harm from people with disabilities.
Keeping disability data safe
Organizations must:
- keep disability-related data safe from collection to deletion,
- protect personal, health, and rehabilitation information,
- follow privacy laws such as the Privacy Act and the Personal Information Protection and Electronic Documents Act (PIPEDA), and
- make sure data cannot be re-identified (see the sketch below).
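One widely used safeguard against re-identification, though not sufficient on its own, is k-anonymity: every combination of quasi-identifying fields must be shared by at least k records, so no one is unique on those fields. The sketch below is a minimal illustration; k and the field names are assumptions, and passing such a check complements, rather than satisfies, the safeguards above.

    # k-anonymity check sketch (k and the columns are assumptions).
    # Verifies that no combination of quasi-identifier values is rare
    # enough to single out an individual in a released dataset.
    import pandas as pd

    def is_k_anonymous(df: pd.DataFrame, quasi_identifiers: list[str],
                       k: int = 5) -> bool:
        group_sizes = df.groupby(quasi_identifiers).size()
        return bool((group_sizes >= k).all())

    # Usage: is_k_anonymous(records, ["age_range", "postal_prefix"], k=5)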
Accessible education, training, and literacy on AI
Everyone needs to understand accessibility and equity in AI, whether they build it, buy it, use it, or are affected by it.
Education and training must:
- be accessible,
- teach about accessible and equitable AI,
- include people with disabilities in creating and delivering training, and
- help people understand how AI affects their choices and independence.
Accessible learning
Organizations must:
- use accessible tools and teaching methods,
- help people with disabilities take part in managing AI, and
- support people with disabilities in all roles related to AI.
Training for professionals
Organizations must train staff, including technical teams, on accessible and equitable AI. This training must follow the requirements set out in the Accessible AI, Equitable AI, and Accessible education, training, and literacy on AI clauses of this Standard.
Involving people with disabilities in creating training
People with disabilities must help design and deliver AI education and training.
Helping people understand AI
Organizations must help people understand how AI works and what it means for them. This includes:
- explaining goals, benefits, risks, and equity,
- helping people understand their choices, and
- showing how to give consent, choose other options, and challenge decisions.
Annexes
Annexes provide more information and context on concepts presented in this Standard. The Standard states general requirements, while the annexes provide detail. The list below identifies annexes to read in addition to the Standard.
- Annex A: Use case for assessing if data matches an AI task