CAN-ASC-6.2:2025 – Accessible and Equitable Artificial Intelligence Systems
12. Organizational processes to support accessible and equitable AI
Technical committee members
- Lisa Snider, Senior Digital Accessibility Consultant and Trainer, Access Changes Everything Inc.
- Nancy McLaughlin, Senior Policy Advisor on Accessibility, Canadian Radio-television and Telecommunications Commission
- John Willis, Senior Program Advisor, OPS Accessibility Office, Centre of Excellence for Human Rights
- Jutta Treviranus (Chairperson), Director of the Inclusive Design Research Centre and Professor, OCAD University
- Alison Paprica, Professor (adjunct) and senior fellow, Institute for Health Policy, Management and Evaluation, University of Toronto
- Gary Birch, Executive Director, Neil Squire Society
- Lisa Liskovoi, Senior Inclusive Designer and Digital Accessibility Specialist, Inclusive Design Research Centre, OCAD University
- Clayton Lewis, Professor, University of Colorado
- Julia Stoyanovich, Associate Professor and Director, Tandon School of Engineering, New York University
- Anne Jackson, Professor, Seneca College
- Kave Noori, Artificial Intelligence Policy Officer, European Disability Forum
- Mia Ahlgren, Human Rights and Disability Policy Officer, Swedish Disability Rights Federation
- Sambhavi Chandrashekar (Vice-Chairperson), Global Accessibility Lead, D2L Corporation
- Julianna Rowsell, Senior Product Manager, Product Equity, Adobe
- Kate Kalcevich, Head of Accessibility Innovation, Fable
- Saeid Molladavoudi, Senior Data Science Advisor, Statistics Canada
- Merve Hickok, Founder, President and Research Director, AIethicist.org, Center for AI and Digital Policy, University of Michigan
Where organizations implement or use AI systems, or both, organizational processes shall result in AI systems that are accessible to, and equitable for, people with disabilities (see Clauses 10 and 11 for definitions of “accessible” and “equitable” AI). Each process shall:
- include people with disabilities in governance and decision-making throughout the AI lifecycle; and
- be accessible to people with disabilities who are:
- governance or management committee members;
- employees;
- contractors;
- clients;
- disability organization members; or
- members of the public.
Where applicable, this Clause applies to organizational processes used to:
- support accessible and equitable AI governance;
- plan and justify the need for AI systems;
- notify the public of an intention to use AI systems;
- assess data for their appropriateness for use by AI systems;
- design and develop accessible and equitable AI systems;
- procure accessible and equitable AI systems;
- customize accessible and equitable AI systems;
- conduct ongoing impact assessments, ethics oversight, and monitoring of potential harms;
- train personnel in accessible and equitable AI;
- provide transparency, accountability, and consent mechanisms;
- provide access to equivalent alternative approaches;
- address accessibility and equity feedback, incidents, and complaints about AI systems;
- provide review, refinement, and termination mechanisms; and
- safely store and manage data throughout the AI lifecycle.
12.1 Support accessible and equitable AI governance
- Organizations shall solicit input from and encourage the involvement of people with disabilities during all stages of:
- dataset, AI system, and component creation (design, coding, implementation, evaluation, refinement);
- procurement;
- consumption;
- governance;
- management; and
- monitoring.
- In addition to consulting people with disabilities who are external to the organization on specific AI-related decisions (see Clauses 12.4, 12.5, 12.6, 12.7, 12.9, 12.14), organizations should ensure the ongoing involvement of people with disabilities by:
- recruiting;
- onboarding; and
- supporting their involvement.
- Involvement of people with disabilities should include:
- employees;
- contractors; and
- members of governance and management committees (e.g., board of directors, steering committees, advisory committees, and management teams).
12.2 Plan and justify the use of AI systems
Where an organization is proposing and planning to deploy an AI system, the impact on people with disabilities shall be considered prior to deployment and throughout the use of the AI system.
12.2.1 Plan for the use of AI
Organizations shall create a plan for the use of AI that:
- includes information on:
- how and where human control and oversight are embedded in the use of the AI system; and
- how users can opt for a human alternative to the AI system; and
- outlines clear and swift actions to protect people with disabilities in the event of data breaches:
- at any stage of the project lifecycle; or
- when malicious attacks on AI systems occur after deployment.
12.2.2 Risk and impact determination
- Effective measures to prevent harmful impacts on people with disabilities shall be taken regardless of the availability of quantitative or predictive risk determinations, or both.
- Impact and risk assessment processes shall:
- include people with disabilities who may be directly or indirectly impacted by the AI system as active participants in the decisions (see Clause 11.2.2);
- include determination of impact on the broadest possible range of people with disabilities; and
- account for the aggregate impact on people with disabilities of many cumulative harms that can intersect or build up over time as a result of AI-assisted decisions that are not individually classified as high impact.
12.2.3 Equitable risk assessment frameworks
Where risk assessment frameworks are employed to determine the risks and benefits of AI systems (see Clause 11.2.2), organizations shall:
- Prioritize prevention or mitigation, or both, of risks to people with disabilities (who are minorities and outliers in statistical modelling), for example, by ensuring that risk assessment models are not based solely on the risks and benefits to the majority (or “typical” case).
- Select accuracy assessment criteria in line with the risk assessment results, taking care to capture the actual or potential harms to individuals with disabilities and other minority or disadvantaged groups.
- Include disaggregated metrics for people with disabilities in accuracy assessments, and consider the context of use and the conditions relative to people with disabilities who may be impacted by the use of the AI system (a non-normative sketch follows this list).
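Note: The following is a minimal, non-normative Python sketch of what a disaggregated accuracy assessment can look like. The record fields and subgroup labels are illustrative assumptions; this standard does not prescribe any particular implementation, and subgroup data must be collected with consent (see Clause 12.10.2).

```python
# Non-normative sketch: accuracy computed both overall and
# disaggregated for disability-related subgroups. The record fields
# ("prediction", "actual", "group") are hypothetical.
from collections import defaultdict

def disaggregated_accuracy(records):
    """Return accuracy overall and per subgroup label."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for r in records:
        for key in ("overall", r["group"]):
            totals[key] += 1
            correct[key] += int(r["prediction"] == r["actual"])
    return {group: correct[group] / totals[group] for group in totals}

# A large gap between a subgroup's accuracy and the overall figure
# is a signal to investigate, not a pass/fail threshold on its own.
print(disaggregated_accuracy([
    {"prediction": 1, "actual": 1, "group": "screen-reader users"},
    {"prediction": 0, "actual": 1, "group": "screen-reader users"},
    {"prediction": 1, "actual": 1, "group": "unspecified"},
]))
```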
12.3 Provide notice of intention to use an AI system
The intention to use an AI system shall be publicly disclosed in accessible formats as part of the organization’s accessibility plan and distributed to national disability organizations and interested parties.
Note: This Clause applies to all AI systems whether or not it is determined that they directly affect people with disabilities.
12.3.1 Request for notice
Organizations shall establish a process whereby any interested party can request to be included in a distribution list for notices.
12.3.2 Accessible input mechanisms in notices
Organizations shall include accessible methods to provide input in notices.
12.4 Assess data for their appropriateness for the use of AI systems
Organizations shall determine if potential datasets containing information about people with disabilities are appropriate, inappropriate, or conditionally appropriate for use as inputs to AI systems, involving relevant people with disabilities in making the determination of appropriateness.
There shall be an alignment between:
- the dataset that is used as an input to an AI system;
- the model or method used by the AI system; and
- the objective or task assigned to the AI system.
The appropriateness of each dataset shall be assessed before it is used as an input for every:
- objective or task; and
- AI system.
Note: A dataset may be an appropriate input for a specific objective or task in one AI system, but inappropriate for other tasks in the same AI system, or inappropriate when applied to individuals who are different from most people on multiple variables or labels.
Refer to Annex A for examples.
12.4.1 Prevention of bias and misrepresentation in data
Organizations shall collect, use, and govern data, data models, and algorithms in a manner that prevents negative bias or unwarranted discrimination toward people with disabilities in the use and outputs of AI systems.
Specifically, steps shall be taken to prevent:
- biased choices during data modelling;
- misrepresentation in the training data;
- the use of biased proxies;
- biased data labelling;
- biased design of the systems;
- tuning of the systems according to incorrect or incomplete criteria;
- biases that arise in the context of use of the system; and
- synthetic data that insufficiently reflects the scope of disability experiences relevant to the purpose of the AI system (a non-normative sketch addressing one of these steps follows this list).
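Note: The following non-normative Python sketch illustrates one of the steps above: screening candidate features that may act as biased proxies. A feature whose distribution differs sharply between records with and without a consented, self-identified disability indicator warrants human review before use. All names and the review threshold are illustrative assumptions.

```python
# Non-normative sketch: screening candidate features for proxy risk.
# Assumes both groups are non-empty; names and threshold are
# illustrative, and a flag here is a prompt for review, not a verdict.
import statistics

def standardized_gap(values, flags):
    """Standardized difference in a numeric feature between records
    with and without the disability indicator."""
    with_flag = [v for v, f in zip(values, flags) if f]
    without_flag = [v for v, f in zip(values, flags) if not f]
    spread = statistics.pstdev(values) or 1.0
    return abs(statistics.mean(with_flag) - statistics.mean(without_flag)) / spread

def flag_candidate_proxies(features, flags, threshold=0.5):
    """Return feature names whose gap exceeds the threshold, so that
    humans can decide whether each is a proxy for disability status."""
    return [name for name, values in features.items()
            if standardized_gap(values, flags) > threshold]
```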
12.4.2 Prevention of harm and discrimination
Organizations shall ensure that:
- harms to people with disabilities due to data breaches (either accidental or deliberate) are prevented;
- AI systems do not repeat or distribute stereotypes or misinformation about people with disabilities; and
- planners, developers, operators, and governance bodies have the expertise to assess and respond to cases where non-discrimination requires that some people, including some people with disabilities, not be subjected to AI-assisted decisions, because they are outliers in the statistical distributions that the AI systems rely on (see Clause 11.2).
12.5 Design and develop accessible and equitable AI systems
Organizations shall ensure that the design and development of AI systems proactively support accessibility and equity for people with disabilities. This shall include:
- embedding accessibility and equity requirements from the outset;
- engaging people with disabilities in meaningful and compensated ways; and
- applying inclusive design and monitoring practices throughout the AI lifecycle.
12.5.1 Accessibility and equity requirements in design and development
Organizations shall include accessibility and equity requirements complying with Clauses 10 and 11 in the design and development of AI systems.
12.5.2 Engagement of people with disabilities in accessibility testing
- Prior to implementation or use, or both, of an AI system that has a direct or indirect impact on people with disabilities, relevant people with disabilities or disability organizations shall be engaged to test the accessibility of the AI system, complying with Clause 10.
- This engagement shall be fairly compensated by the organization.
12.5.3 Feedback from people with disabilities
Feedback from people with disabilities or disability organizations shall be sought in all decisions relating to designing and developing AI systems complying with Clause 11.4.
Note: This will help ensure these systems provide people with disabilities with equitable benefits, preserve individual agency, and prevent or mitigate, or both, harmful impacts on people with disabilities.
12.5.4 Prohibit surveillance and profiling of people with disabilities
Organizations shall adopt planning and design frameworks as well as monitoring processes to ensure that any AI system they use, control, or govern is not used:
- specifically to surveil employees and clients with disabilities;
- for biometric categorization, emotion analysis or predictive policing specifically of employees and clients with disabilities; and
- to misinform or manipulate people with disabilities.
12.5.5 Prevent misuse and manipulation through AI
Complying with Clauses 11.3 and 11.4, organizations shall adopt planning and design frameworks as well as monitoring processes to ensure that any AI system they use, control, or govern:
- is not used to:
- misinform; or
- manipulate people with disabilities; and
- does not produce one or more of the following:
- discriminatory impacts;
- harm or heightened risk of harm;
- reproduction of stereotypes;
- other forms of bias against people with disabilities where the AI system is used:
- to surveil employees, clients, or other users of the system; or
- for biometric categorization, emotion analysis or predictive policing of employees, clients, or other users of the system; or
- any combination of the above, including all of the above.
12.5.6 Establish accountability for AI system decisions
The design of any AI system shall include accountability and governance mechanisms that make clear who is accountable for decisions made by an AI system.
12.6 Procure accessible and equitable AI systems
Organizations shall ensure that procurement processes for AI systems are designed to uphold accessibility and equity for people with disabilities. This includes applying the requirements of Clauses 10 and 11 to engage people with disabilities in procurement decisions and verify conformance to accessibility and equity criteria before acquisition.
12.6.1 Apply accessibility and equity requirements in procurement
Organizations shall include accessibility and equity requirements, complying with Clauses 10 and 11, in procurement processes.
12.6.2 Feedback from people with disabilities during procurement
Feedback from people with disabilities or disability organizations shall be sought in all decisions relating to procuring AI systems to prevent or mitigate, or both, negative impacts on people with disabilities that:
- are disproportionate in scale;
- produce inequitable outcomes for people with disabilities; and
- produce cumulative harms to people with disabilities, regardless of the scale of the immediate impact experienced.
12.6.3 Engaging people with disabilities in accessibility testing and impact assessment
- Prior to implementation or use, or both, of an AI system that has a direct or indirect impact on people with disabilities, people with disabilities or disability organizations shall be engaged to:
- test the accessibility of the AI system; and
- conduct an impact assessment.
- This engagement shall be fairly compensated by the organization implementing or using the AI.
12.6.4 Verification of accessibility and equity conformance
Conformance to accessibility and equity criteria shall be verified by neutral experts in accessibility and disability equity before a procurement decision of an AI system is finalized.
Note: Neutral experts are individuals with recognized subject matter expertise who provide impartial advice or analysis, and do not have financial, personal, or professional stakes in the outcome.
12.6.5 Termination provisions in procurement contracts
Organizations shall include provisions to halt or terminate procurement contracts of an AI system if:
- the system malfunctions; or
- performance as measured against accessibility or equity criteria degrades, or those criteria are no longer met.
12.7 Customize accessible and equitable AI systems
Organizations shall customize AI systems in a manner that upholds accessibility and equity for people with disabilities. This includes:
- applying relevant accessibility and equity requirements;
- actively involving people with disabilities in customization decisions; and
- ensuring that customized systems are tested for accessibility prior to deployment.
12.7.1 Accessibility and equity requirements in customization
Customization of AI systems shall comply with the requirements of Clauses 10 and 11.
12.7.2 Feedback from people with disabilities during customization
Feedback from relevant people with disabilities or disability organizations shall be sought in all decisions relating to customizing AI systems to prevent or mitigate, or both, negative impacts on people with disabilities that:
- are disproportionate in scale;
- produce disparate outcomes for people with disabilities; and
- produce cumulative harms to people with disabilities, regardless of the scale of the immediate impact experienced.
12.7.3 Accessibility testing of customization
- Prior to the implementation or use, or both, of an AI system that has a direct or indirect impact on people with disabilities, people with disabilities or disability organizations shall be engaged to test the accessibility of the AI system.
- This engagement shall be fairly compensated by the organization implementing or using the AI.
12.8 Conduct ongoing impact assessments, ethics oversight, and monitoring of potential harms
Organizations shall conduct ongoing data quality monitoring and impact assessments to identify:
- emerging or actual bias; or
- discrimination toward people with disabilities, or both.
12.8.1 Monitoring and assessment of system outputs
The monitoring and assessment of system outputs shall consider multiple dimensions including, but not limited to, AI system outputs with respect to people with disabilities that:
- may produce:
- discriminatory impacts;
- harm or heightened risk of harm;
- reproduction of stereotypes;
- other forms of bias against people with disabilities; or
- any combination of the above, including all of the above;
- may not produce equitable access to benefits;
- may produce inequitable outcomes for people with disabilities;
- may produce cumulative harms to people with disabilities regardless of the scale of the immediate impact experienced;
- may undermine the:
- rights;
- freedoms;
- dignity;
- individual agency of people with disabilities; or
- any combination of the above, including all of the above; or
- any combination of the above, including all of the above.
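Note: The following non-normative Python sketch shows one instrument for this kind of monitoring: comparing the rate of favourable outcomes for people with disabilities against the overall rate in each monitoring window. The field names and alert margin are illustrative assumptions, and a rate check of this kind addresses only one of the dimensions listed above; cumulative harms in particular require their own instruments.

```python
# Non-normative sketch: one monitoring check over a window of
# AI-assisted decisions. Fields ("favourable", "disability_related")
# and the margin are hypothetical.

def favourable_rate(decisions):
    return sum(1 for d in decisions if d["favourable"]) / len(decisions)

def review_window(decisions, margin=0.05):
    """Flag the window for human review when the rate of favourable
    outcomes for the subgroup trails the overall rate by more than
    the margin."""
    subgroup = [d for d in decisions if d.get("disability_related")]
    if not decisions or not subgroup:
        return {"alert": False, "reason": "insufficient data in window"}
    overall = favourable_rate(decisions)
    subgroup_rate = favourable_rate(subgroup)
    return {"alert": subgroup_rate < overall - margin,
            "overall": overall, "subgroup": subgroup_rate}
```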
12.8.2 Public registry of harms, barriers and inequitable treatment
- Organizations shall maintain a public registry of:
- harms;
- contested decisions;
- reported barriers to access; and
- reports of inequitable treatment of people with disabilities related to AI systems.
- The public registry shall:
- comply with the Personal Information Protection and Electronic Documents Act (PIPEDA);
- comply with the Privacy Act;
- document the impact of low, medium, and high impact decisions on people with disabilities; and
- be publicly available in an accessible format.
- Once a publicly accessible monitoring system encompassing all organizations that employ AI systems is established and maintained to track the cumulative impact of low, medium and high impact decisions on people with disabilities, organizations shall submit this information into the system.
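Note: The following non-normative Python sketch shows one possible record layout for such a registry. The field names are illustrative assumptions; entries must be anonymized in accordance with PIPEDA and the Privacy Act before publication.

```python
# Non-normative sketch: one possible public-registry record.
# Anonymization happens before an entry reaches this layer.
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class RegistryEntry:
    entry_type: str        # "harm", "contested decision", "barrier", or "inequitable treatment"
    impact_level: str      # "low", "medium", or "high"
    ai_system: str         # public identifier of the AI system involved
    description: str       # anonymized account of what happened
    reported_on: date
    remediation_status: str = "open"

def publish(entry: RegistryEntry) -> str:
    """Serialize an entry for the accessible, public view of the registry."""
    record = asdict(entry)
    record["reported_on"] = entry.reported_on.isoformat()
    return json.dumps(record, indent=2)
```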
12.8.3 Thresholds for unacceptable risk
Thresholds for unacceptable levels of risk and harm shall be established with national disability organizations and organizations with expertise in accessibility and disability equity. As outcomes are monitored, AI system testing shall evolve to address oversights and harms, and the system shall be updated to mitigate them or provide an alternative that bypasses them.
12.9 Train personnel in accessible and equitable AI
All personnel responsible for any aspects of the AI lifecycle shall receive training in accessible and equitable AI, delivered in accessible formats.
12.9.1 Updating training content
Organizations shall regularly update training content to reflect current issues, technologies and approaches.
12.9.2 Training content
Training shall be made specific to the learner’s context.
This training shall include:
- legal considerations for privacy;
- user interface accessibility considerations;
- harm and risk detection strategies;
- bias detection and mitigation strategies; and
- ways to involve people with disabilities in the AI lifecycle.
Training shall be developed and deployed in collaboration with people with disabilities or disability organizations, or both, in accordance with Clause 13.3.
12.10 Provide transparency, accountability and consent mechanisms
Organizations shall inform people about their intended or actual use of data and AI systems that make decisions about, or otherwise impact, people with disabilities.
Transparency about AI includes providing information about:
- what data was used to pre-train, customize, or dynamically train an AI system;
- data labels and proxy data used in training;
- the decision(s) to be made by the AI system and the determinants of the decision(s);
- the availability of accessible, optional full-featured decision-making systems; and
- the names and contact information of individuals within the organization accountable for the AI system and resultant decisions.
12.10.1 Accessibility of transparency information
All transparency information shall be provided in accessible formats and in a non-technical, plain language format, such that the potential impact of the AI system is clear.
Note: Accessible formats include formats that will be accessible to the requestor. Alternate formats that may be requested under the Accessible Canada Regulations include audio formats, braille, large print, and electronic format.
12.10.2 Consent to data use
Organizations shall implement appropriate consent and engagement mechanisms based on how data is collected for use in AI systems.
- In cases where AI systems use data that is collected with the consent of the data subjects, people with disabilities shall be able to withdraw consent for any or all uses of their data, at any time, and without negative consequences.
- In cases where AI systems use data that is collected without the consent of the data subjects (e.g., AI systems that use de-identified administrative data about publicly funded services), organizations shall engage and involve people with disabilities in decisions about how datasets are used.
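Note: The following non-normative Python sketch illustrates a consent ledger in which withdrawal takes effect immediately, is recorded rather than overwritten, and requires no justification. Identifiers, purposes, and storage are illustrative assumptions.

```python
# Non-normative sketch: per-person, per-purpose consent tracking in
# which the most recent event decides whether data may be used.
from datetime import datetime, timezone

class ConsentLedger:
    def __init__(self):
        # (person_id, purpose) -> chronological list of (timestamp, state)
        self._events = {}

    def grant(self, person_id, purpose):
        self._record(person_id, purpose, "granted")

    def withdraw(self, person_id, purpose):
        # No justification is required and no penalty may follow.
        self._record(person_id, purpose, "withdrawn")

    def may_use(self, person_id, purpose):
        """Data pipelines call this before every use of the data."""
        events = self._events.get((person_id, purpose), [])
        return bool(events) and events[-1][1] == "granted"

    def _record(self, person_id, purpose, state):
        self._events.setdefault((person_id, purpose), []).append(
            (datetime.now(timezone.utc), state))
```

In a design like this, every pipeline that touches personal data checks may_use before each use, so a withdrawal stops all future use without negative consequences for the data subject.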
12.11 Provide access to equivalent alternative approaches
Organizations that deploy AI systems shall provide people with disabilities with alternative options that offer equivalent availability, reasonable timeliness, cost and convenience to the person with a disability.
These alternatives shall include:
- the option to request human decisions by people with knowledge and expertise in the needs of people with disabilities in the domain of the decisions to be made; or
- the option to request human supervision or final determination of decisions to be made by a human with expertise in the needs of people with disabilities, guided by the AI.
The organization shall retain individuals who have the necessary expertise to make equitable human decisions regarding people with disabilities when AI systems are deployed to replace decisions previously made by humans.
Note: For example:
- When AI sign language systems are deployed, deaf end-users need to have the autonomy to accept or decline the use of AI interpretation. They must be given a real option to request a human interpreter instead, without any penalty.
- Where AI systems use a camera to proctor an exam, blind end-users need to have the autonomy to accept or decline the use of AI proctoring, which may detect eye movements. They must be given a real option to request a human proctor instead, without any penalty.
12.12 Address accessibility and equity feedback, incidents, and complaints about AI systems
- The organization shall establish processes to address accessibility and equity-related feedback, incidents, and complaints, including redress, challenge, and appeal functions for people with disabilities. These processes shall:
- provide easy-to-find, accessible, and understandable information about how feedback, incidents, and complaints are addressed;
- acknowledge receipt of and respond to feedback, incidents, and complaints in a timely manner;
- provide a timeline for addressing feedback, incidents, and complaints;
- offer a procedure for people with disabilities or their representatives to provide feedback or to contest decisions anonymously;
- provide details about the course of action to address feedback, incidents, and complaints; and
- communicate the status of addressing feedback, incidents, and complaints to people with disabilities or their representatives, and offer opportunities to appeal or contest the proposed remediation.
- Feedback related to harms shall be documented in the public registry of harms as long as they can be anonymized to protect privacy in accordance with the Privacy Act, the Personal Information Protection and Electronic Documents Act (PIPEDA) and other applicable federal and provincial privacy legislation, with consent from the individual submitting the feedback.
12.13 Provide review, refinement, halting and termination mechanisms
Organizations that deploy AI systems shall continuously review, refine and, if necessary, halt or terminate AI systems.
12.13.1 Considering the full range of harms
Organizations shall consider the full range of harms during continuous review and refinement processes, including cumulative harms to people with disabilities from low and medium impact decisions.
12.13.2 Feedback from people with disabilities during review and refinement
Review and refinement processes shall involve people with disabilities.
12.13.3 Conditions for halting and termination
Where an AI system degrades such that accessibility or equity criteria for people with disabilities are no longer met, the organization shall halt the use of the AI system until the accessibility barrier or inequitable treatment is addressed or the system is terminated.
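Note: The following non-normative Python sketch shows one shape such a halting mechanism can take: a guard that halts the system as soon as any accessibility or equity check fails. The halt hook and the form of the checks are illustrative assumptions.

```python
# Non-normative sketch: halt on degradation. `system.halt` and the
# shape of `criteria_checks` are hypothetical.

def enforce_criteria(system, criteria_checks):
    """Run every accessibility and equity check; halt the system when
    any criterion is no longer met. It stays halted until the failure
    is addressed or the system is terminated."""
    failures = [name for name, check in criteria_checks.items()
                if not check(system)]
    if failures:
        system.halt(reason="criteria no longer met: " + ", ".join(failures))
    return failures
```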
12.13.4 Learning from mistakes and failures
12.14 Safely store and manage data throughout the AI lifecycle
Organizations shall ensure that disability data used to create equitable AI systems is handled in a safe and secure manner that does not result in harm to the individuals who provide their data.
12.14.1 Data storage and management
Data about people with disabilities shall be safely stored and managed in each phase of the data lifecycle, beginning with collection and continuing through to:
- retention (storage);
- use;
- disclosure (sharing); and
- destruction.
Note: Misuse or breaches of disability-related data can lead to discrimination, exclusion, and other harms to people with disabilities.
- Consistent with Article 22 of the United Nations Convention on the Rights of Persons with Disabilities, organizations shall protect the privacy of personal, health, and rehabilitation information of people with disabilities on an equal basis with others.
- Data storage and management shall comply with the Privacy Act, the Personal Information Protection and Electronic Documents Act (PIPEDA) and other applicable federal and provincial privacy legislation.
12.14.2 De-identification
Organizations shall use strong anonymization techniques to ensure that re-identification of individuals is not possible.
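Note: The following non-normative Python sketch shows a k-anonymity check, one common ingredient of strong anonymization. The quasi-identifier columns and the value of k are illustrative assumptions; k-anonymity alone does not make re-identification impossible and should be combined with other techniques (e.g., generalization, suppression, or differential privacy).

```python
# Non-normative sketch: k-anonymity check over tabular records.
# Quasi-identifiers are attributes that could be linked to external
# data (e.g., postal code, age band); the column names are hypothetical.
from collections import Counter

def satisfies_k_anonymity(rows, quasi_identifiers, k=5):
    """True when every combination of quasi-identifier values appears
    in at least k records."""
    combos = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return all(count >= k for count in combos.values())

# Rows whose quasi-identifier combination occurs fewer than k times
# should be generalized or suppressed before the dataset is released.
```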