Overview of ASC-6.2 Accessible and Equitable Artificial Intelligence Systems
1.0 Introduction
Artificial intelligence can help people with disabilities by creating new tools and making things more inclusive. But it also comes with risks, like making unfair decisions or being hard to use. This Standard shows how to make AI systems fair and accessible for everyone.
Under the Accessible Canada Act, AI systems must:
- Help people with disabilities as much as they help others.
- Avoid causing more harm to people with disabilities than others.
- Protect the rights, freedoms, and choices of people with disabilities.
The sections that follow explain how to meet these requirements.
1.1 Accessible AI
AI systems, tools, and resources must be easy for people with disabilities to use. People with disabilities should be involved in every step of creating, managing, and using AI.
1.1.1 People with disabilities as part of the AI process
- People with disabilities must be included in all parts of AI development, from design to testing to daily use.
- All tools and processes for managing AI must meet accessibility standards like CAN/ASC-EN 301 549:2024 Accessibility requirements for ICT products and services (EN 301 549:2021, IDT).
- Including people with disabilities helps create better AI systems for everyone.
1.1.2 People with disabilities as AI users
- AI systems must be easy and helpful for people with disabilities to use.
- Organizations must provide clear and simple instructions and feedback options that everyone can understand.
- AI systems must work well with assistive technologies and avoid creating new problems for users with disabilities.
1.2 Fair AI
AI systems must treat people with disabilities fairly and provide the same benefits to everyone. They must also avoid causing harm or being biased.
1.2.1 Fair access to benefits
- AI systems must work just as well for people with disabilities as for anyone else.
- Organizations need to keep checking and improving their AI to ensure fairness.
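For readers who build or test these systems, one simple way to check whether an AI system "works just as well" for different groups is to compare its accuracy across them. The sketch below is an illustration only, not part of the Standard; the group names, record fields, and 0.05 tolerance are assumptions chosen for the example.

```python
# Hypothetical sketch, not part of the Standard: compare a model's
# accuracy across groups. Group names, record fields, and the 0.05
# tolerance are illustrative assumptions.

def accuracy_by_group(records, max_gap=0.05):
    """Return per-group accuracy and whether the largest accuracy gap
    between any two groups stays within max_gap."""
    correct, total = {}, {}
    for rec in records:
        g = rec["group"]
        total[g] = total.get(g, 0) + 1
        if rec["prediction"] == rec["label"]:
            correct[g] = correct.get(g, 0) + 1
    rates = {g: correct.get(g, 0) / n for g, n in total.items()}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap <= max_gap

# Toy evaluation records: the system is right half the time for one
# group and every time for the other, so the fairness check fails.
records = [
    {"group": "disclosed_disability", "prediction": 1, "label": 1},
    {"group": "disclosed_disability", "prediction": 0, "label": 1},
    {"group": "no_disclosed_disability", "prediction": 1, "label": 1},
    {"group": "no_disclosed_disability", "prediction": 0, "label": 0},
]
rates, within_tolerance = accuracy_by_group(records)
print(rates, within_tolerance)
```

A real evaluation would use many more records and several metrics, but the idea is the same: measure each group separately and treat a large gap as a problem to fix, not a footnote.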
1.2.2 Preventing harm
- Organizations must focus on risks that could harm people with disabilities the most.
- If there’s a chance of serious harm, organizations must act quickly.
- AI systems must be tested to ensure they’re accurate and fair for people with disabilities.
- Data about people with disabilities must be kept safe from hackers or leaks.
- AI must not be biased or treat people unfairly because of their disabilities.
- People with disabilities must not be subject to AI decisions without knowing it. They must understand and agree to how AI affects them.
- AI must not spread false or harmful ideas about people with disabilities.
1.2.3 Protecting rights and freedoms
- AI must not be used to spy on, categorize, or predict people’s behaviour in unfair ways.
- People must have clear options to understand and control AI decisions about them.
1.2.4 Respect and choices
- People with disabilities must be part of designing and managing AI systems.
- Organizations must provide easy-to-understand information about how AI makes decisions.
- If someone doesn’t like an AI decision, there must be a simple way to challenge it.
1.3 Making AI fair and accessible
Organizations must follow clear steps to ensure AI is fair and accessible for everyone. This includes good planning, testing, and involving people with disabilities.
1.3.1 Include people with disabilities in decision-making
- People with disabilities must help make important decisions about AI.
- Decision-making processes must be open and easy for everyone to follow.
1.3.2 Plan and explain AI use
- Organizations must study how AI might affect people with disabilities and fix any problems before using it.
- They must also make plans to prevent harm and keep AI fair.
1.3.3 Share AI plans
- Organizations must tell the public about their plans to use AI. They must provide this information in formats that are easy to access and understand.
- Anyone can ask to be added to an email list to get updates about AI plans.
1.3.4 Check data for fairness
- Organizations must check if the data they use for AI is fair and won’t harm people with disabilities.
- Data must match the purpose of the AI and the task it’s designed for.
- Organizations must:
  - Avoid using biased data.
  - Make sure training data represents people with disabilities accurately.
  - Keep data safe from hackers and leaks.
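One concrete check behind "training data represents people with disabilities accurately" is to compare how often a group appears in the data against how often it appears in the population the system will serve. The sketch below is an illustration only, not part of the Standard; the field name `disability`, the expected share of 0.27, and the tolerance are placeholders, not figures drawn from the Standard.

```python
# Hypothetical sketch, not part of the Standard: flag under-representation
# of a group in a training set. The field name "disability", the expected
# share of 0.27, and the 0.10 tolerance are illustrative placeholders.

def representation_check(records, field, value, expected_share, tolerance=0.10):
    """Compare the observed share of records where record[field] == value
    against an expected share; return (observed_share, within_tolerance)."""
    if not records:
        raise ValueError("no records to check")
    observed = sum(1 for r in records if r[field] == value) / len(records)
    return observed, abs(observed - expected_share) <= tolerance

# Toy data set: 3 of 100 records come from people with disabilities,
# far below the assumed expected share, so the check fails.
data = [{"disability": True}] * 3 + [{"disability": False}] * 97
share, ok = representation_check(data, "disability", True, expected_share=0.27)
print(f"observed share: {share:.2f}, within tolerance: {ok}")
```

Representation counts are only a starting point; data can be present in the right proportions and still be biased in how it was collected or labelled.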
1.3.5 Build AI that works for everyone
- Before using AI, organizations must test it with people with disabilities to ensure it’s accessible.
- Feedback from these tests must be used to improve the AI.
1.3.6 Buy fair and accessible AI
- When buying AI systems, organizations must make sure they meet fairness and accessibility standards.
- Independent experts must confirm that the AI meets these standards before it’s used.
1.3.7 Adjust AI systems fairly
- When organizations change AI systems, they must make sure they stay fair and accessible.
1.3.8 Keep checking AI
- Organizations must regularly check AI systems to ensure they remain fair and accessible.
- A public list of problems and fixes must be kept and updated.
1.3.9 Train staff
- Staff must learn how to create and manage AI in fair and accessible ways.
- Training should include:
  - Privacy rules.
  - How to make user interfaces accessible.
  - Ways to find and fix bias.
  - How to involve people with disabilities in AI projects.
1.3.10 Be clear and ask for consent
- Organizations must explain how their AI works in simple language.
- People must be allowed to say "no" to having their data used without facing problems.
1.3.11 Offer choices
- If someone doesn’t want AI to make decisions about them, they must be able to choose a human decision-maker instead.
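In code, this choice can be a simple routing rule: before the system decides anything, check whether the person has opted out and, if so, send the request to a human reviewer instead. The sketch below is an illustration only, not part of the Standard; the identifiers and the stand-in model are assumptions made for the example.

```python
# Hypothetical sketch, not part of the Standard: route a request to a
# human reviewer when the person has opted out of automated decisions.
# Identifiers and the stand-in model are illustrative assumptions.

def decide(request, opted_out_ids, model):
    """Send opted-out requests to human review; otherwise apply the model."""
    if request["person_id"] in opted_out_ids:
        return {"decision": None, "route": "human_review"}
    return {"decision": model(request), "route": "automated"}

opted_out = {"p-42"}

def always_approve(request):
    return "approved"  # stand-in for a real decision model

print(decide({"person_id": "p-42"}, opted_out, always_approve))
print(decide({"person_id": "p-7"}, opted_out, always_approve))
```

The key design point is that the opt-out check comes first, so an opted-out person's data never reaches the automated model at all.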
1.3.12 Handle complaints and fix problems
- Organizations must make it easy to report problems with AI.
- They must respond quickly and clearly to fix these problems.
1.3.13 Review and stop problematic AI
- Organizations must keep reviewing AI systems to ensure they remain fair.
- If a system fails, it must be fixed or stopped until it meets standards again.
1.3.14 Protect data
- Organizations must keep data about people with disabilities safe and private.
- Data storage must follow privacy laws like the Privacy Act and the Personal Information Protection and Electronic Documents Act (PIPEDA).
1.4 Teach and learn about AI
Education about AI must be accessible so everyone can understand how to use it fairly.
1.4.1 Accessible education
- Learning materials must be easy to use for everyone, including people with disabilities.
- Training must help people with disabilities take active roles in AI projects.
1.4.2 Professional training
- Training for AI creators must include lessons on fairness and accessibility.
- Courses should be designed for specific roles to make them useful and relevant.
1.4.3 Include people with disabilities in training
- People with disabilities must help create and teach AI training programs.
1.4.4 AI literacy
- Organizations must teach people about the benefits and risks of AI.
- These lessons should help everyone make informed decisions and know how to challenge unfair AI systems.