Artificial intelligence (AI) has the potential to present both significant opportunities and significant harms to persons with disabilities. Upholding the principles of the Accessible Canada Act when deploying AI requires that persons with disabilities:
- benefit from AI systems, at least comparably to others;
- are not harmed by AI systems to a greater extent than others;
- do not lose rights and freedoms due to the use of AI systems; and
- are not denied agency and are treated with respect in their interactions with AI systems.

The four clauses outline the requirements to ensure that:
- AI is accessible to persons with disabilities (clause 5.1);
- AI systems are equitable for persons with disabilities (clause 5.2);
- organizations implement the processes needed to achieve accessible and equitable AI (clause 5.3); and
- AI education supports the systemic change needed to achieve accessible and equitable AI (clause 5.4).
5.1 Accessible AI
AI systems (clause 5.1.2), and the processes, resources, services, and tools used to plan, procure, create, implement, govern, manage, maintain, monitor, and use AI systems (clause 5.1.1), shall be accessible to persons with disabilities. It shall be possible for persons with disabilities to be active participants in all roles in the AI lifecycle, as well as users of AI systems.

5.1.1 Persons with disabilities as full participants in the AI lifecycle
It shall be possible for persons with disabilities to participate fully in all roles and in all parts of the AI lifecycle. This includes, but is not limited to, the creation of datasets, AI systems, and components (design, coding, implementation, evaluation, refinement), as well as procurement, consumption, governance, management, and monitoring.

5.1.1.1 Persons with disabilities engaged in the creation of AI systems
The tools and processes deployed by regulated entities to create (design, code, implement, evaluate, refine), procure, consume, govern, manage, and monitor AI systems and their components shall be accessible to persons with disabilities. Where regulated entities engage in creating AI systems, the tools and processes shall be accessible to persons with disabilities, including meeting the CAN/ASC-EN 301 549:2024 Accessibility requirements for ICT products and services standard (EN 301 549:2021, IDT).

Where a regulated entity is creating tools used to design and develop AI systems and their components, the tools used to create (design, code, implement, evaluate, refine), procure, consume, and use AI systems and their components shall at a minimum meet the CAN/ASC-EN 301 549:2024 Accessibility requirements for ICT products and services standard (EN 301 549:2021, IDT), specifically the:
- functional performance statements in clause 4;
- generic requirements in clause 5;
- web requirements in clause 9;
- non-web documents requirements in clause 10;
- software requirements in clause 11; and
- documentation and support services in clause 12.

Where a regulated entity is creating tools used to design and develop AI systems and their components, the outputs from tools used to create (design, code, implement, evaluate, refine), procure, consume, and use AI systems and their components shall at a minimum meet the CAN/ASC-EN 301 549:2024 Accessibility requirements for ICT products and services standard (EN 301 549:2021, IDT), specifically the:
- functional performance statements in clause 4;
- generic requirements in clause 5;
- web requirements in clause 9;
- non-web documents requirements in clause 10;
- software requirements in clause 11; and
- documentation and support services in clause 12.

Where a regulated entity is creating tools used to design and develop AI systems and their components, the tools used to create (design, code, implement, evaluate, refine), procure, and consume AI systems and their components shall support the design and development of AI tools that at a minimum meet the CAN/ASC-EN 301 549:2024 Accessibility requirements for ICT products and services standard (EN 301 549:2021, IDT), specifically the:
- functional performance statements in clause 4;
- generic requirements in clause 5;
- web requirements in clause 9;
- non-web documents requirements in clause 10;
- software requirements in clause 11; and
- documentation and support services in clause 12.

5.1.1.2 Persons with disabilities engaged in deploying AI systems
Where AI systems are deployed, the tools and resources used to implement AI systems, including tools and resources used to customize pre-trained models and to set up, maintain, govern, and manage AI systems, shall be accessible to persons with disabilities, including at a minimum meeting the CAN/ASC-EN 301 549:2024 Accessibility requirements for ICT products and services standard (EN 301 549:2021, IDT), specifically the:
- functional performance statements in clause 4;
- generic requirements in clause 5;
- web requirements in clause 9;
- non-web documents requirements in clause 10;
- software requirements in clause 11; and
- documentation and support services in clause 12.

5.1.1.3 Persons with disabilities engaged in oversight of AI systems
Tools and processes deployed by regulated entities to assess, monitor, report issues with, evaluate, and improve AI systems shall be accessible to persons with disabilities, at a minimum meeting the CAN/ASC-EN 301 549:2024 Accessibility requirements for ICT products and services standard (EN 301 549:2021, IDT), specifically the:
- functional performance statements in clause 4;
- generic requirements in clause 5;
- web requirements in clause 9;
- non-web documents requirements in clause 10;
- software requirements in clause 11; and
- documentation and support services in clause 12.

5.1.2 Persons with disabilities as users of AI systems
Persons with disabilities shall be able to use and benefit from AI systems. Where regulated entities deploy AI systems and tools that impact employees, contractors, partners, customers, or members of the public, the outputs of AI systems shall at a minimum meet the CAN/ASC-EN 301 549:2024 Accessibility requirements for ICT products and services standard (EN 301 549:2021, IDT), specifically the:
- functional performance statements in clause 4;
- generic requirements in clause 5;
- web requirements in clause 9;
- non-web documents requirements in clause 10;
- software requirements in clause 11; and
- documentation and support services in clause 12.

5.1.2.1 Accessible transparency and explainability documentation
Information to address disclosure, notice, transparency, explainability, and contestability of AI systems, their function, decision-making mechanisms, potential risks, and implementation shall be kept up to date (reflecting the current state of the AI used) and shall be accessible to persons with disabilities, at a minimum meeting the CAN/ASC-EN 301 549:2024 Accessibility requirements for ICT products and services standard (EN 301 549:2021, IDT), specifically the:
- functional performance statements in clause 4;
- generic requirements in clause 5;
- web requirements in clause 9;
- non-web documents requirements in clause 10;
- software requirements in clause 11; and
- documentation and support services in clause 12.

5.1.2.2 Accessible feedback mechanisms
Mechanisms to provide feedback (see clause 5.3.12) shall be accessible to persons with disabilities, at a minimum meeting the CAN/ASC-EN 301 549:2024 Accessibility requirements for ICT products and services standard (EN 301 549:2021, IDT), specifically the:
- functional performance statements in clause 4;
- generic requirements in clause 5;
- web requirements in clause 9;
- non-web documents requirements in clause 10;
- software requirements in clause 11; and
- documentation and support services in clause 12.

5.1.2.3 Address statistical discrimination in assistive technology implementing AI
Not all persons with disabilities will benefit equally from AI-based accommodations: persons whose interaction modes are not captured in the data used to train the AI may be poorly served by it. When AI-based technology is considered for an accommodation, regulated entities shall evaluate potential inequities in collaboration with the intended beneficiary.
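For illustration only, the evaluation described above might compare an AI tool's error rate for the intended beneficiary's own interaction mode against the tool's advertised average. Everything below (the metric, the ratio threshold, the example figures) is an assumption for the sketch, not a requirement of this standard.

```python
def accommodation_equity_check(user_error_rate, average_error_rate,
                               max_ratio=1.5):
    """Flag a potential inequity when the tool performs markedly worse
    for this user than its advertised average suggests.

    max_ratio is an illustrative threshold: how much worse than average
    the tool may perform for this user before flagging an inequity.
    """
    if average_error_rate == 0:
        return user_error_rate > 0
    return (user_error_rate / average_error_rate) > max_ratio

# Hypothetical example: a speech recognizer with an advertised 8% word
# error rate that reaches 30% for a user with dysarthric speech.
flagged = accommodation_equity_check(0.30, 0.08)
print(flagged)  # True: evaluate alternatives with the intended beneficiary
```

A check like this is only a starting point; the clause requires the evaluation itself to be done in collaboration with the intended beneficiary.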
5.2 Equitable AI
When AI systems make decisions about, or otherwise impact, persons with disabilities, those decisions and uses of AI systems shall result in equitable treatment of persons with disabilities. Doing so will benefit individuals and population groups, as well as society at large, by helping ensure that all individuals are able to lead productive lives and contribute to society.

Principles: Equitable treatment requires that persons with disabilities:
- benefit from AI systems at least comparably to others;
- are not harmed by AI systems to a greater extent than others;
- do not suffer a loss of rights and freedoms due to the use of AI systems; and
- are not denied agency and are treated with respect in their interactions with AI systems.

5.2.1 Equitable access to benefits
Building on the principle that persons with disabilities should be able to benefit from AI systems at least comparably to others, regulated entities that deploy these systems should:
- Make all interactive components of AI systems accessible by design.
- Ensure that persons with disabilities are not underrepresented or misrepresented in the training data, for example, due to the use of biased proxies, biased data labels, or synthetic data that does not represent actual disability experience.
- Validate and tune AI systems to perform comparably along the dimensions of accuracy, reliability, and robustness for persons with disabilities as well as for others.
- Assess and report AI system performance, including disaggregated results for persons with disabilities.
- Continuously monitor both the performance of AI systems against validation criteria and the real-world impacts on persons with disabilities of the decisions made with the help of these systems. Use this information to improve system performance by improving usability, gathering additional data, improving data quality, and refining validation criteria. Doing so creates positive feedback loops.

5.2.2 Assessment and mitigation of harms
Equitable risk assessment: Where risk assessment frameworks are employed to determine the risks and benefits of AI systems, risks to persons with disabilities, who are often minorities and outliers and who feel the greatest impact of harm, shall receive priority. The risk assessment shall not be based solely on the risks and benefits to the majority. Care must be taken to recognize that persons with disabilities may experience higher levels of harm than others, caused by the aggregate effect of many cumulative harms that intersect or build up over time as the result of AI-assisted decisions that are not otherwise classified as high impact. Where there are threats of serious or irreversible harm, lack of quantifiable certainty (e.g., in risk assessment) shall not be used as a reason for postponing effective measures to prevent harmful impacts to persons with disabilities.

Accuracy: Regulated entities shall select accuracy assessment criteria in line with the risk assessment results, taking care to capture the actual or potential harms to individuals with disabilities and other minority or disadvantaged groups. Accuracy assessments should include disaggregated metrics for persons with disabilities, and should consider the context of use and the conditions relevant to persons with disabilities.

Information security: Regulated entities shall develop plans to protect persons with disabilities in the case of data breaches or malicious attacks on AI systems. The plans shall identify risks associated with disabilities as well as clear and swift actions to protect persons with disabilities.

Fairness and non-discrimination: Care must be taken to recognize that people who are discriminated against in AI-assisted decision-making are often persons with disabilities.
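The disaggregated accuracy assessment described above can be sketched as follows. The group labels, metric, gap threshold, and example records are illustrative assumptions only; real assessments would use the criteria selected through the risk assessment.

```python
def disaggregated_accuracy(records):
    """Compute accuracy separately for each group label.

    records: iterable of (group, predicted, actual) tuples.
    Returns a dict mapping group -> accuracy in [0, 1].
    """
    totals, correct = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted == actual:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}


def comparable(metrics, reference_group, max_gap=0.05):
    """Flag groups whose accuracy trails the reference by more than max_gap."""
    ref = metrics[reference_group]
    return {g: (ref - acc) <= max_gap for g, acc in metrics.items()}


# Illustrative evaluation records: (group, predicted label, actual label).
records = [
    ("no_disability", 1, 1), ("no_disability", 0, 0),
    ("no_disability", 1, 1), ("no_disability", 1, 0),
    ("disability", 1, 1), ("disability", 0, 1),
]
metrics = disaggregated_accuracy(records)
print(metrics)  # {'no_disability': 0.75, 'disability': 0.5}
print(comparable(metrics, "no_disability"))  # disability group fails the gap check
```

Reporting the per-group numbers, rather than a single aggregate, is what makes the performance gap for persons with disabilities visible at all.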
Regulated entities shall ensure that AI systems are not negatively biased against persons with disabilities due to biased choices during data modelling, misrepresentation in the training data, use of biased proxies, biased data labelling, unrepresentative synthetic data, biased design of the systems, tuning of the systems according to incorrect or incomplete criteria, or biases that arise in the context of use of the system.

Even with full proportional representation in the data, persons with disabilities will likely remain outliers or marginalized minorities in the context of AI-assisted decisions. For this reason, to mitigate statistical discrimination, persons with disabilities should not be subjected to AI-assisted decisions without their consent and full understanding.

Prevention of reputational harms: Regulated entities shall ensure that AI systems do not repeat or distribute stereotypes or misinformation about persons with disabilities.

5.2.3 Upholding of rights and freedoms
Freedom from surveillance: Regulated entities shall refrain from using AI tools to surveil persons with disabilities.

Freedom from discriminatory profiling: Regulated entities shall refrain from using AI tools for biometric categorization, emotion analysis, or predictive policing of persons with disabilities.

5.2.4 Preservation of agency and respectful treatment
Engagement and participation: Regulated entities shall solicit input from, and encourage the involvement of, individuals with disabilities during all stages of AI system design, development, use, and operational management, including continuous monitoring post-deployment.

Information and disclosure: Regulated entities shall inform people about their intended or actual use of AI systems that make decisions about, or otherwise impact, persons with disabilities. This information shall be provided in a manner that is accurate, accessible, and comprehensible.

Consent, choice, and recourse: When AI systems are used to make or assist in decisions, regulated entities shall offer a multi-level optionality mechanism for clients, employees, and other impacted individuals to request an equivalently full-featured and timely alternative decision-making process that is, at the individual's choice, either performed without the use of AI, or made using AI with direct human oversight and verification of the decision. Whenever possible, people shall be given information about ways to correct, contest, change, or reverse an AI-assisted decision or action that impacts them (see clause 5.2.2 for ways to assess impact).

Freedom from misinformation and manipulation: Regulated entities shall ensure that AI systems are not used to misinform or manipulate persons with disabilities.

Support of human control and oversight: There shall be a traceable chain of human responsibility that makes it clear who is accountable for the accessibility and equity of decisions made by an AI system.
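One way to support the traceable chain of human responsibility described above is to attach an accountability record to every AI-assisted decision. The field names and example values below are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class DecisionRecord:
    """Audit record tying an AI-assisted decision to accountable humans."""
    decision_id: str
    system_name: str
    outcome: str
    accountable_owner: str           # person answerable for the system overall
    reviewing_human: Optional[str]   # person who verified this decision, if any
    ai_only: bool                    # True if no human verified the outcome
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def responsibility_chain(self) -> list:
        """Return the humans accountable for this decision, most direct first."""
        chain = []
        if self.reviewing_human:
            chain.append(self.reviewing_human)
        chain.append(self.accountable_owner)
        return chain


# Hypothetical decision in a hypothetical "benefits-triage" system.
record = DecisionRecord(
    decision_id="D-1042",
    system_name="benefits-triage",
    outcome="approved",
    accountable_owner="director.services@example.org",
    reviewing_human="case.worker@example.org",
    ai_only=False,
)
print(record.responsibility_chain())
```

A record like this makes it possible to answer, for any individual decision, who verified it and who is ultimately accountable, which is the substance of the oversight requirement.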
5.3 Organizational processes to support accessible and equitable AI
Where organizations implement and/or use AI systems, organizational processes shall result in AI systems that are accessible to, and equitable for, persons with disabilities (see clauses 5.1 and 5.2 for definitions of "accessible" and "equitable" AI). Each process shall include persons with disabilities in governance and decision-making throughout the AI lifecycle and shall be accessible to persons with disabilities who are governance or management committee members, employees, contractors, clients, disability organization members, or members of the public.

The organizational processes to which this clause applies include, but are not limited to, processes used to:
- support inclusive and accessible AI governance;
- plan and justify the need for AI systems;
- notify the public of an intention to use AI systems;
- assess data for their appropriateness for use by AI systems;
- design and develop accessible and equitable AI systems;
- procure accessible and equitable AI systems;
- customize accessible and equitable AI systems;
- conduct ongoing impact assessments, ethics oversight, and monitoring of potential harms;
- train personnel in accessible and equitable AI;
- provide transparency, accountability, and consent mechanisms;
- provide access to equivalent alternative approaches;
- address accessibility and equity feedback, incidents, and complaints about AI systems;
- provide review, refinement, and termination mechanisms; and
- safely store and manage data throughout the AI lifecycle.

5.3.1 Support inclusive and accessible AI governance
To ensure that AI systems are accessible and equitable, organizations shall implement the requirements of clauses 5.1 and 5.2. Organizations shall solicit input from, and encourage the involvement of, persons with disabilities during all stages of dataset, AI system, and component creation (design, coding, implementation, evaluation, refinement), procurement, consumption, governance, management, and monitoring.

In addition to consulting persons with disabilities who are external to the organization on specific AI-related decisions (see clauses 5.3.4, 5.3.5, 5.3.6, 5.3.7, 5.3.9, 5.3.14), organizations should have ongoing and continuous involvement of persons with disabilities in AI-related decisions through employee, contractor, management, and governance roles. This may include, but is not limited to, recruiting, onboarding, and supporting the involvement of persons with disabilities as employees, contractors, and members of governance and management committees (e.g., boards of directors, steering committees, advisory committees, and management teams).

5.3.2 Plan and justify the use of AI systems
Where an organization is proposing and planning to deploy an AI system, the impact on persons with disabilities shall be considered. Effective measures to prevent harmful impacts to persons with disabilities shall be taken regardless of the availability of quantitative and/or predictive risk determination(s).

Impact and risk assessment processes shall:
- include persons with disabilities who may be directly or indirectly impacted by the AI system as active participants in the decisions (see clause 5.2.2);
- include determination of impact on the broadest possible range of persons with disabilities; and
- account for the aggregate impacts on persons with disabilities of many cumulative harms that can intersect or build up over time as the result of AI-assisted decisions that are not classified as high impact.

Where risk assessment frameworks are employed to determine the risks and benefits of AI systems, organizations shall (see clause 5.2.2):
- prioritize prevention and/or mitigation of risks for persons with disabilities (who are minorities and outliers in statistical modelling), for example by ensuring that risk assessment models are not based solely on the risks and benefits to the majority (or "typical") case;
- select accuracy assessment criteria in line with the risk assessment results, taking care to capture the actual or potential harms to individuals with disabilities and other minority or disadvantaged groups; and
- include disaggregated metrics for persons with disabilities in accuracy assessments, and consider the context of use and the conditions relevant to persons with disabilities who may be impacted by the use of the AI system.

Where the AI system is intended to replace or augment an existing function, persons with disabilities who face the greatest barriers in accessing or benefiting from the existing function shall be included in the decision-making process. Plans shall be in place for clear and swift actions to protect persons with disabilities in the event of data breaches at any stage of the project lifecycle, or malicious attacks on AI systems after deployment.

5.3.3 Notice of intention to use an AI system
The intention to use an AI system shall be publicly disclosed in accessible formats as part of the organization's accessibility plan and distributed to national disability organizations and interested parties. A process shall be established whereby any interested party can request to be included in a distribution list for notices. The notice shall include accessible methods to provide input.

Note: This clause applies to all AI systems, whether or not it is determined that they directly affect persons with disabilities.

5.3.4 Assess data for their appropriateness for the use of AI systems
Organizations shall determine whether potential datasets containing information about persons with disabilities are appropriate, inappropriate, or conditionally appropriate for use as inputs to AI systems, involving persons with disabilities in making the determination of appropriateness. There shall be alignment between:
- the dataset that is used as an input to an AI system;
- the model or method used by the AI system; and
- the objective or task assigned to the AI system.

A dataset may be an appropriate input for a specific objective or task in one AI system, but inappropriate for other tasks in the same AI system, or inappropriate when applied to individuals who differ from the majority of people on multiple variables or labels. The appropriateness of each dataset shall be assessed for every objective or task and every AI system before it is used as an input.

Organizations shall collect, use, and govern data, data models, and algorithms in a manner that prevents negative bias or unwarranted discrimination toward persons with disabilities in the use and outputs of AI systems.
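A first, purely mechanical step in assessing a dataset's appropriateness is to compare each group's share of the dataset against its share of the relevant population. This sketch assumes illustrative counts and population shares (the 27% figure is an assumption loosely echoing recent Canadian survey estimates, not a normative value); the determination of appropriateness itself must involve persons with disabilities.

```python
def representation_gap(dataset_counts, population_shares):
    """Compare each group's share of the dataset to its population share.

    dataset_counts: dict group -> number of records in the dataset.
    population_shares: dict group -> expected share of the population.
    Returns dict group -> (dataset_share, population_share, underrepresented?).
    """
    total = sum(dataset_counts.values())
    report = {}
    for group, pop_share in population_shares.items():
        data_share = dataset_counts.get(group, 0) / total
        report[group] = (round(data_share, 3), pop_share, data_share < pop_share)
    return report


# Hypothetical dataset of 1,000 records, with an assumed population share.
counts = {"disability": 50, "no_disability": 950}
report = representation_gap(counts, {"disability": 0.27, "no_disability": 0.73})
print(report)  # disability group is flagged as underrepresented
```

A gap flagged here is only a signal: proportional representation alone does not make a dataset appropriate, as the clauses above make clear.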
Specifically, steps shall be taken to prevent:
- biased choices during data modelling;
- misrepresentation in the training data;
- the use of biased proxies;
- biased data labelling;
- biased design of the systems;
- tuning of the systems according to incorrect or incomplete criteria;
- biases that arise in the context of use of the system; and
- synthetic data that does not sufficiently represent the scope of disability experiences relevant to the purpose of the AI system.

Organizations shall ensure that:
- harm to persons with disabilities due to data breaches (either accidental or deliberate) is prevented;
- AI systems do not repeat or distribute stereotypes or misinformation about persons with disabilities; and
- planners, developers, operators, and governance bodies have the expertise to assess and respond to cases where non-discrimination requires that some people, including some persons with disabilities, not be subjected to AI-assisted decisions, because they are seen as outliers in the statistical distributions that the AI systems model and use (see clause 5.2.2).

Refer to Annex A for examples.

5.3.5 Design and develop accessible and equitable AI systems
Design and development of AI systems shall include the requirements of clauses 5.1 and 5.2.

Per the requirements of clause 5.1, prior to the implementation and/or use of an AI system that has a direct or indirect impact on persons with disabilities, persons with disabilities and disability organizations shall be engaged to test the accessibility of the AI system. This engagement shall be fairly compensated.

Per the requirements of clause 5.2.2, feedback from persons with disabilities and disability organizations shall be sought in all decisions relating to designing and developing AI systems, in order to ensure comparable benefits relative to other segments of the impacted population, preserve individual agency, and prevent and/or mitigate harmful impacts on persons with disabilities.

Per the requirements of clauses 5.2.3 and 5.2.4, organizations shall adopt planning and design frameworks as well as monitoring regimes to ensure that any AI system they use, control, or govern:
- is not used to surveil employees and clients with disabilities;
- is not used for biometric categorization, emotion analysis, or predictive policing of employees and clients with disabilities; and
- is not used to misinform or manipulate persons with disabilities.

The design of any AI system shall include accountability and governance mechanisms that make clear who is accountable for decisions made by the AI system.

5.3.6 Procure accessible and equitable AI systems
Procurement of AI systems shall include the requirements of clauses 5.1 and 5.2. Feedback from persons with disabilities and disability organizations shall be sought in all decisions relating to procuring AI systems, to prevent and/or mitigate impacts on persons with disabilities that:
- are disproportionate in scale, in comparison to individuals without disabilities;
- produce disparate outcomes for persons with disabilities, in comparison to individuals without disabilities; and
- produce cumulative harms to persons with disabilities, regardless of the scale of the immediate impact experienced.

Prior to the implementation and/or use of an AI system that has a direct or indirect impact on persons with disabilities, persons with disabilities and disability organizations shall be engaged to test the accessibility of the AI system and conduct an impact assessment. This engagement shall be fairly compensated.

Conformance to accessibility and equity criteria shall be verified by a third party with expertise in accessibility and disability equity before a procurement decision for an AI system is finalized. Procurement contracts should include the ability to halt or terminate an AI system if the system malfunctions, or if its performance as measured against equity criteria degrades or those criteria are no longer met.

5.3.7 Customize accessible and equitable AI systems
Customization of AI systems shall include the requirements of clauses 5.1 and 5.2. Feedback from persons with disabilities and disability organizations shall be sought in all decisions relating to customizing AI systems, to prevent and/or mitigate impacts on persons with disabilities that:
- are disproportionate in scale, in comparison to individuals without disabilities;
- produce disparate outcomes for persons with disabilities, in comparison to individuals without disabilities; and
- produce cumulative harms to persons with disabilities, regardless of the scale of the immediate impact experienced.

Prior to the implementation and/or use of an AI system that has a direct or indirect impact on persons with disabilities, persons with disabilities and disability organizations shall be engaged to test the accessibility of the AI system. This engagement shall be fairly compensated.

5.3.8 Conduct ongoing impact assessments, ethics oversight, and monitoring of potential harms
Organizations shall conduct ongoing data quality monitoring and impact assessments to identify emerging or actual bias and/or discrimination toward persons with disabilities. This monitoring and assessment should consider multiple dimensions including, but not limited to, AI system outputs with respect to persons with disabilities that:
- may produce impact disproportionate in scale, in comparison to individuals without disabilities;
- may not produce equitable access to benefits, in comparison to individuals without disabilities;
- may produce disparate outcomes for persons with disabilities, in comparison to individuals without disabilities;
- may produce cumulative harms to persons with disabilities, regardless of the scale of the immediate impact experienced; and
- may undermine the rights, freedoms, dignity, and/or individual agency of persons with disabilities.

Organizations shall also maintain a public registry of harms, contested decisions, reported barriers to access, and reports of inequitable treatment of persons with disabilities related to AI systems. The registry shall comply with the Personal Information Protection and Electronic Documents Act (PIPEDA).

All federally regulated organizations shall document the impact of low, medium, and high impact decisions on persons with disabilities in their registry. This information shall be publicly available in an accessible format. Once a publicly accessible monitoring system encompassing all federally regulated organizations that employ AI systems is established and maintained to track the cumulative impact of low, medium, and high impact decisions on persons with disabilities, federally regulated organizations shall submit this information into the system. Thresholds for unacceptable levels of risk and harm shall be established with national disability organizations and organizations with expertise in accessibility and disability equity.
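The cumulative-impact tracking described above can be sketched as a running tally of weighted harm events that escalates once an agreed threshold is passed. The weights and the threshold below are illustrative assumptions only; per this clause, real thresholds would be established with national disability organizations and equity experts.

```python
# Illustrative impact weights; real values would be negotiated with
# national disability organizations, per the clause above.
IMPACT_WEIGHTS = {"low": 1, "medium": 3, "high": 10}


def cumulative_impact(harm_events):
    """Sum weighted harms from a list of (impact_level, count) pairs."""
    return sum(IMPACT_WEIGHTS[level] * count for level, count in harm_events)


def exceeds_threshold(harm_events, threshold=25):
    """True when accumulated low/medium/high harms pass the agreed threshold."""
    return cumulative_impact(harm_events) >= threshold


# Hypothetical registry entries for one AI system over a reporting period.
events = [("low", 12), ("medium", 4), ("high", 1)]
print(cumulative_impact(events))   # 12*1 + 4*3 + 1*10 = 34
print(exceeds_threshold(events))   # True: many "low" harms still add up
```

The point of the weighting is exactly the one the clause makes: a stream of individually low-impact decisions can cross an unacceptable-harm threshold even when no single decision is classified as high impact.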
As outcomes are monitored, AI system testing must evolve to address oversights and harms, and the system must be updated to mitigate these oversights and harms or to provide an alternative that bypasses them.

5.3.9 Train personnel in accessible and equitable AI
All personnel responsible for any aspect of the AI lifecycle shall receive training in accessible and equitable AI. This training shall be regularly updated and provided in accessible formats as per clause 5.1. Training shall be created in collaboration with persons with disabilities and be made specific to the learner's context, not just generic accessibility and equity training. This training shall include:
- legal considerations for privacy;
- user interface accessibility considerations;
- harm and risk detection strategies;
- bias detection and mitigation strategies; and
- ways to involve persons with disabilities in the AI lifecycle.

5.3.10 Provide transparency, accountability, and consent mechanisms
Organizations shall inform people about their intended or actual use of AI systems that make decisions about, or otherwise impact, persons with disabilities. This information shall be provided in a manner that is accurate, accessible, and comprehensible. Transparency about AI includes providing information about:
- what data was used to pre-train, customize, or dynamically train an AI system;
- data labels and proxy data used in training;
- the decision(s) to be made by the AI system and the determinants of the decision(s);
- the availability of accessible, optional full-featured decision-making systems; and
- the names and contact information of individuals within the organization accountable for the AI system and resultant decisions.

All information shall be provided in accessible formats and in non-technical, plain language, such that the potential impact of the AI system is clear.

In cases where AI systems use data that is collected with the consent of the data subjects, it shall be possible for persons with disabilities to withdraw consent for any or all uses of their data, at any time, and without negative consequences. In cases where AI systems use data that is collected without the consent of the data subjects (e.g., AI systems that use de-identified administrative data about publicly funded services), organizations shall engage and involve persons with disabilities in decisions about how datasets are used.

5.3.11 Provide access to equivalent alternative approaches
Organizations that deploy AI systems shall provide persons with disabilities the following alternatives:
- the option to request human decisions by persons with knowledge and expertise in the needs of persons with disabilities in the domain of the decisions to be made; or
- the option to request human supervision or final determination of decisions by a human with expertise in the needs of persons with disabilities, guided by the AI.

These alternative options shall offer equivalent availability, reasonable timeliness, cost, and convenience. The organization shall retain individuals who have the necessary expertise to make equitable human decisions regarding persons with disabilities when AI systems are deployed to replace decisions previously made by humans.

5.3.12 Address accessibility and equity feedback, incidents, and complaints about AI systems
The organization shall address accessibility and equity-related feedback, incidents, and complaints, including providing details about redress, challenge, and appeal functions for persons with disabilities, that:
- are easy to find, accessible, and actionable;
- acknowledge receipt of, and provide a response to, feedback, incidents, and complaints in no more than 24 hours;
- provide a timeline for addressing feedback, incidents, and complaints;
- offer a procedure for persons with disabilities or their representatives to provide feedback on, or to contest, decisions anonymously; and
- communicate the status of addressing feedback, incidents, and complaints to persons with disabilities or their representatives, and offer opportunities to appeal or contest the proposed remediation.

Feedback related to harms shall be documented in the public registry of harms, as long as it can be anonymized to protect privacy in accordance with the Privacy Act, the Personal Information Protection and Electronic Documents Act (PIPEDA), and other applicable federal and provincial privacy legislation, and with consent from the individual submitting the feedback.

5.3.13 Provide review, refinement, halting, and termination mechanisms
Organizations that deploy AI systems shall continuously review, refine and, if necessary, halt or terminate AI systems. These continuous review and refinement processes shall consider the full range of harms, including cumulative harms to persons with disabilities from low- and medium-impact decisions. The review process shall involve persons with disabilities.

Where the system degrades such that accessibility or equity criteria for persons with disabilities are no longer met, the AI system shall be halted until the accessibility barrier or inequitable treatment is addressed, or the system is terminated.

All AI systems shall be updated to mitigate risks if harms to persons with disabilities are identified. AI systems that use machine learning shall be designed to learn from mistakes and failures.

5.3.14 Safely store and manage data throughout the AI lifecycle
Organizations shall ensure that disability data is used to create equitable AI systems in a safe and secure manner that does not result in harms for the individuals who provide their data.

Data about persons with disabilities must be safely stored and managed in each phase of the data lifecycle, beginning with data collection through to retention (storage), use, disclosure (sharing), and destruction. Misuse or breaches of disability-related data can lead to discrimination, exclusion, and other harms for persons with disabilities. Consistent with Article 22 of the United Nations Convention on the Rights of Persons with Disabilities, organizations shall protect the privacy of personal, health and rehabilitation information of persons with disabilities on an equal basis with others. Data storage and management shall comply with the Privacy Act, the Personal Information Protection and Electronic Documents Act (PIPEDA) and other applicable federal and provincial privacy legislation. As part of their compliance, organizations shall make efforts to ensure, through strong anonymization techniques, that re-identification is not possible.
5.4 Accessible education, training and literacy on AI
All interested parties of the AI ecosystem, including those involved in the creation, procurement, deployment, and oversight of AI systems, those who use AI in decision-making, and those impacted by AI systems directly or indirectly, need to be better informed about accessibility and equity. For this reason, education, training and support of people’s AI literacy shall be accessible. Further, education, training and support of literacy on AI shall include instruction about accessible and equitable AI. To foster the creation of accessible and equitable AI systems and tools, organizations shall involve persons with disabilities in creating and delivering such education and training. Finally, organizations shall contribute to literacy efforts to enable those impacted by AI-assisted decisions or actions to exercise agency and control over these impacts.

5.4.1 Education, training and support of literacy on AI shall be accessible
Materials and methodologies, including technology, content, methods, and processes for instruction, assessment and certification, shall be accessible, by implementing the requirements of the CAN/ASC - EN 301 549: 2024 - Accessibility requirements for ICT products and services standard (EN 301 549:2021, IDT).

Education and training on AI shall enable persons with disabilities to participate in staff roles, and as members of governance and management committees, by implementing the requirements of clause 5.3.1.

AI literacy shall enable persons with disabilities to take on an active role in the creation, procurement, deployment, and oversight of AI systems.

5.4.2 Education and training for those using AI professionally shall include accessible and equitable AI
All education and training on AI shall integrate content on accessible and equitable AI, to ensure that all AI systems are accessible and equitable.

Organizations shall ensure that their current workforce, including but not limited to those in technical roles, receives training on accessible and equitable AI, including the requirements of clause 5.3.9. Education and training shall teach participatory methods with and by persons with disabilities.

5.4.3 Participation of persons with disabilities in AI education, training and support of literacy
Development of educational, training and literacy materials and methodologies, as well as delivery of instruction, shall involve persons with disabilities.

5.4.4 AI literacy
Organizations shall engage in AI literacy efforts to enable people who are impacted by AI systems, including persons with disabilities, to understand the goals, benefits and risks of these systems, as well as any accessibility and equity concerns.

AI literacy should allow those affected by AI systems to exercise informed consent, evaluate the impacts of participation and non-participation, and understand alternative options and opportunities for recourse, as discussed in clause 5.2.4.