I recently attended Digital Colloquium's Accessibility Summit—hands down one of my favorite events of the year.
Niki Ramesh delivered this year's keynote, "Using AI to Improve User Experience," which was both timely and thought-provoking.
As someone genuinely excited about the promise of AI—especially in the accessibility space—I find the possibilities for good to be boundless: more innovative tools, better experiences, more equitable digital access.
But with all that promise comes real risk: bias and discrimination baked into training data, the spread of AI-generated misinformation, and data privacy trade-offs.
Ms. Ramesh thoughtfully addressed these very risks and concerns in her keynote. One slide in particular resonated with me and served as the inspiration for this blog post.
Are you asking the hard questions?
When developing or procuring AI services:
- Is the data used for AI fair, and does it avoid harming people with disabilities?
- Are there partnerships in place to enrich data with diverse inputs?
- How can human oversight and verification be a part of the AI's operation?
- What safeguards prevent the generation of harmful content?
~ Using AI to Improve User Experience, Niki Ramesh, Accessibility Summit 2025, Digital Colloquium
How do we begin to address these questions? Is there a system or process in place that can help ensure we are creating, providing, and using AI tools responsibly and ethically?
When we talk about AI, responsibility and ethics go hand in hand. Both are about looking closely at how AI systems are built and used—and spotting any ethical blind spots that could lead to unintended consequences. While many organizations are already navigating the practical side of responsible AI in their day-to-day operations, having a solid grasp of the bigger ethical picture can help lead to more strategic, future-focused choices.
Enter AI Governance
Artificial intelligence (AI) governance refers to the processes, standards, and guardrails that help ensure AI systems and tools are safe and ethical.
~ What is AI governance? - IBM
Without proper human oversight, AI can cause significant social and ethical harm. Microsoft's Tay chatbot, for instance, was an AI launched to learn from and converse with Twitter users. It quickly began repeating racist and offensive messages it picked up from the platform and was shut down within a day.
Similarly, the English tutoring company iTutor Group Inc. found itself in legal hot water after deploying an AI-powered hiring tool that automatically screened out older job candidates. The software was programmed to reject female applicants aged 55 or older and male applicants aged 60 or older, regardless of their skills, experience, or qualifications.
These examples underscore the importance of establishing, and adhering to, guidelines and frameworks that ensure AI systems do not infringe upon human dignity, rights, or safety.
At the core of responsible AI governance are four principles: empathy, bias control, transparency, and accountability.
Michael Impink, an instructor who teaches AI Ethics in Business at the Harvard Division of Continuing Education’s Professional and Executive Development, might suggest adding fairness, privacy, and security: three of the five principles for ethical AI that he teaches.
To build and deploy AI responsibly, organizations must look beyond technological innovation and bottom-line benefits to consider AI’s broader societal impact. This means anticipating how AI affects all stakeholders: not just users, but employees, communities, and society at large. A key part of this responsibility is human oversight, including rigorous scrutiny of training data to avoid embedding existing real-world biases into algorithms, where they can reinforce unfairness and inequality and erode trust. Transparency is also non-negotiable; organizations must be able to explain clearly how their AI systems make decisions, what logic underpins those outcomes, and how the underlying data is secured. Ultimately, responsible AI means setting high standards early, committing to them consistently, and staying accountable as AI technologies evolve and scale.
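To make that scrutiny concrete, here is a minimal sketch of the kind of disparity check a human reviewer might run over a screening model's decision log, in the spirit of the iTutor Group example above. The decision data, the age cutoff, and the four-fifths (80%) threshold are illustrative assumptions, not a real audit protocol.

```python
# A minimal bias-audit sketch: given a log of screening decisions,
# compare pass rates across age groups and flag large gaps.
# The data, group cutoff, and threshold below are illustrative
# assumptions, not part of any real hiring system.

from collections import defaultdict

# Hypothetical decision log: (age, passed_screening)
decisions = [
    (34, True), (29, True), (41, True), (58, False),
    (62, False), (45, True), (57, False), (38, True),
]

def pass_rates_by_group(records, cutoff=55):
    """Group candidates by an age cutoff and compute pass rates."""
    totals, passes = defaultdict(int), defaultdict(int)
    for age, passed in records:
        group = "55_and_over" if age >= cutoff else "under_55"
        totals[group] += 1
        passes[group] += int(passed)
    return {g: passes[g] / totals[g] for g in totals}

rates = pass_rates_by_group(decisions)
print(rates)  # e.g. {'under_55': 1.0, '55_and_over': 0.0}

# The "four-fifths rule" is a common adverse-impact heuristic:
# flag the model if any group's rate falls below 80% of the highest.
highest = max(rates.values())
for group, rate in rates.items():
    if highest > 0 and rate < 0.8 * highest:
        print(f"Flag for human review: {group} pass rate is "
              f"{rate:.0%} vs. {highest:.0%} for the best-off group.")
```

The four-fifths rule used here is a screening heuristic, not a legal standard on its own; a real audit would cover the organization's actual protected categories and involve legal guidance.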
Enforcement
Much like a sustainable, continuous accessibility monitoring strategy, AI governance requires constant oversight to make sure AI systems stay in line with evolving laws, ethical standards, and public expectations.
It also takes input from across the board: AI developers, users, policymakers, and ethicists all help ensure that AI systems are built in ways that reflect shared societal values.
No matter the structure of your AI governance team—whether it’s a formal committee, an internal task force, or a multi-stakeholder advisory group—it must be empowered to:
- Develop and enforce clear, actionable guidelines for AI development and deployment
- Create a consistent framework for navigating ethical gray areas
- Regularly review and revise those guidelines to keep up with fast-moving AI tech
- Assign clear ownership for each stage of the AI lifecycle, as sketched below
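As a minimal sketch of what "clear ownership" could look like in practice, the snippet below keeps a small registry that names an accountable owner for each lifecycle stage and flags guidelines that are overdue for review. The stage names, owners, and 90-day review cadence are all hypothetical assumptions, not a prescribed structure.

```python
# An illustrative lifecycle-ownership registry: map each AI lifecycle
# stage to an accountable owner and check that governance guidelines
# have been reviewed recently. All names and dates are hypothetical.

from datetime import date, timedelta

REVIEW_WINDOW = timedelta(days=90)  # assumed cadence for guideline review

lifecycle_owners = {
    "data_collection": {"owner": "Data Governance Lead", "last_review": date(2025, 1, 15)},
    "model_training":  {"owner": "ML Engineering Lead",  "last_review": date(2025, 3, 2)},
    "deployment":      {"owner": "Product Owner",        "last_review": date(2024, 11, 20)},
    "monitoring":      {"owner": "Responsible AI Committee", "last_review": date(2025, 2, 10)},
}

def stale_stages(registry, today=None):
    """Return (stage, owner) pairs whose guidelines are overdue for review."""
    today = today or date.today()
    return [
        (stage, info["owner"])
        for stage, info in registry.items()
        if today - info["last_review"] > REVIEW_WINDOW
    ]

for stage, owner in stale_stages(lifecycle_owners, today=date(2025, 4, 1)):
    print(f"'{stage}' guidelines are overdue for review; escalate to {owner}.")
```

A registry like this keeps the "regularly review and revise" responsibility from depending on any one person's memory; the same pattern scales up to a ticketing system or compliance dashboard.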
AI is everywhere, moving at lightning speed and bringing incredible opportunities. As we push the boundaries of what AI can do, we need to strike a careful balance between that momentum and a strong commitment to ethical and responsible governance. Innovation shouldn’t come at the cost of our values. That means building with fairness and transparency in mind, reducing environmental impact, and making sure these tools are accessible to the broadest range of users possible. Responsible AI isn’t just about what we can build; it’s about building in a way that benefits everyone.
~ Building a Responsible AI Framework: 5 Key Principles for Organizations, Lizzy Short
Resources
- What is AI governance?
- Responsible AI (YouTube series)
- Beyond the algorithm: AI’s societal impact
- AI and A11Y: Navigating the AI Revolution
- An Overview Of AI Governance Approaches
- How AI Is Impacting Society And Shaping The Future
- The Evolving Landscape Of AI: Responsibility, Accessibility, And Platforms
- Building a Responsible AI Framework: 5 Key Principles for Organizations
A human author creates the DubBlog posts. The AI tool Gemini is sometimes used to brainstorm subject ideas, generate blog post outlines, and rephrase certain portions of the content. Our marketing team carefully reviews all final drafts for accuracy and authenticity. The opinions and perspectives expressed remain the sole responsibility of the human author.