Canada, Accessibility & Equitable AI: Designing AI That Actually Includes People

In December 2025, Canada released CAN-ASC-6.2:2025: Accessible and Equitable Artificial Intelligence Systems, the world’s first national standard focused specifically on accessibility in Artificial Intelligence (AI).

Imagine AI not just working, but working for everyone: algorithms that don't lock people out or quietly build bias into everyday experiences. Instead of retrofitting accessibility after bias and barriers are encountered, this approach requires teams to design AI systems to be inclusive from day one.

Launched on International Day of Persons with Disabilities, this standard can be viewed as a blueprint for how creators, organizations, and AI developers can build systems that are truly accessible and usable for everyone from the very first line of code. And it isn't just about accessibility; it's about equity. The standard calls for:

  • the inclusion of people with disabilities in training data,
  • ongoing monitoring of real-world impacts,
  • open reporting of how systems perform for disabled users,
  • and continuous feedback loops to improve outcomes.

It also embeds a human-rights lens, stating that AI must avoid harm, protect autonomy, and give people meaningful choice, including the option to opt out of AI altogether.
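To make the "ongoing monitoring" and "open reporting" requirements above a little more concrete, here is a minimal, purely illustrative sketch of what disaggregated reporting could look like in practice: computing a decision system's selection rate separately for each group so that gaps affecting disabled users are visible rather than averaged away. The group labels, data, and function names are hypothetical examples, not anything defined by the standard itself.

```python
# Illustrative sketch only: disaggregating a model's outcomes by
# (self-disclosed, hypothetical) disability status so performance gaps
# are reported per group instead of hidden in an overall average.

def selection_rate(outcomes):
    """Fraction of positive decisions (e.g., 'hired', 'approved')."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def disaggregated_report(records):
    """records: list of (group_label, outcome) pairs, outcome in {0, 1}.

    Returns a dict mapping each group label to its selection rate.
    """
    by_group = {}
    for group, outcome in records:
        by_group.setdefault(group, []).append(outcome)
    return {group: selection_rate(vals) for group, vals in by_group.items()}

# Hypothetical decision log for demonstration purposes.
records = [
    ("disclosed_disability", 1), ("disclosed_disability", 0),
    ("disclosed_disability", 0), ("disclosed_disability", 0),
    ("no_disclosed_disability", 1), ("no_disclosed_disability", 1),
    ("no_disclosed_disability", 0), ("no_disclosed_disability", 1),
]

report = disaggregated_report(records)
for group, rate in sorted(report.items()):
    print(f"{group}: selection rate {rate:.2f}")
```

In this toy data the gap between groups is immediately visible in the per-group rates, which is exactly the kind of signal that continuous feedback loops and public reporting are meant to surface and correct.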

The Four Essential Foundations

CAN-ASC-6.2:2025 is organized around four foundational areas that guide the development of accessible and equitable artificial intelligence systems.

Note: Please read How to Implement CAN-ASC-6.2:2025 Accessibility Requirements for AI Systems for the full scope of these foundations.

  1. Accessible AI: Ensure people with disabilities can participate at every stage of the artificial intelligence lifecycle, from design and development to deployment, ongoing use, and monitoring.

  2. Equitable AI: All artificial intelligence systems must provide fair treatment for people with disabilities and avoid reinforcing bias, exclusion, or unfair outcomes.

  3. Organizational Processes to Support Accessible and Equitable AI: CAN-ASC-6.2:2025 defines specific organizational processes and practices that shape how accessible and equitable artificial intelligence systems are to be designed, built, and deployed.

  4. Accessible Education and AI Literacy: (IMHO, this one is huge! 🙌🏼)
    For artificial intelligence to be genuinely accessible, everyone needs the opportunity to understand how AI systems work, how they impact daily life, and how to advocate for accessibility and fairness. This applies whether someone is building, purchasing, using, or being affected by AI. Education and training programs must themselves be accessible, grounded in accessible and equitable principles, and developed with the active involvement of people with disabilities. They should also equip learners with a clear understanding of how AI can influence personal choice, autonomy, and independence.

Is This New Standard Actually Enforced?

Like WCAG, CAN-ASC-6.2:2025 is a standard, not a law, but that doesn't make it optional in practice.

It’s designed to align with existing legislation, such as the Accessible Canada Act and the Canadian Human Rights Act, allowing organizations to use it to demonstrate their compliance or identify deficiencies.

A Real Shift Left

This standard represents a major shift in how accessibility is approached. Rather than treating it as a last-minute addition or a nice-to-have design feature, it embeds accessibility throughout the entire AI lifecycle, including data collection, model training, system evaluation, and organizational governance.

And that’s a really big deal. It means bias, exclusion, and inaccessibility are no longer viewed as bugs in the system but as failures of the system itself.

For the disability community, the likely impact is equally significant, with the promise of fewer exclusionary AI-driven decisions in areas like hiring, lending, and healthcare, and more representative datasets that better reflect real-world diversity. It also opens the door to greater agency and choice in how AI systems are used, helping ensure people with disabilities have more control over the technologies that shape everyday life.

A human author creates the DubBlog posts. The AI tools Gemini and ChatGPT are sometimes used to brainstorm subject ideas, generate blog post outlines, and rephrase specific sections of content. Our marketing team carefully reviews all final drafts for accuracy and authenticity. The opinions and perspectives expressed remain the sole responsibility of the human author.

Maggie Vaughan, CPACC
Content Marketing Practitioner
DubBot