Interview with a Screen Reader User

Back in 2018, I met quite an amazing young man. His name is Tyler. He was attending the university where I work and was in the final semester of his last year. I had actually sought him out. I desperately needed an expert in the use of screen reader technology, not just to help test the university’s website, but to help me build test scenarios based on the experiences of a real user.

After Tyler graduated, we stayed in touch. I would ping him occasionally asking him to test a document or webpage. And he would text me about the HTML and JavaScript classes he was taking and how cool it was to finally know how to fix all the barriers. We have worked on several projects together and, as fate would have it, Tyler recently joined my department as a part-time employee...I am no longer an army of one in the fight for digital accessibility.

After discussing it with Tyler, I would like to share his insights with you on how people with visual disabilities use screen readers to navigate and consume content, the barriers they encounter, and what web designers and programmers could do better.

How long have you been a screen reader user?

I’ve been a screen reader user for something like 15 to 17 years.

Which screen reader do you prefer and why? What needs improvement and why?

I use NVDA by NV Access. I prefer NVDA for a number of reasons: It’s open source, it’s free, it’s easy to install, and it’s very easy to use.

In general, I’d like to see more uniformity in terminology and controls across screen readers, and I’d apply that to NVDA as well. I don’t think I have any specific complaints within my use case for NVDA, though.

While putting together this blog post, I came across several articles about screen reader detection scripts. Are you pro or con on the use of this technology? What do you see as the benefits and drawbacks?

I’ve been aware of these sorts of scripts for a while, but my main exposure to them so far has been when solving those dang CAPTCHA challenges we all know and love. It feels like winning the lottery when the buffering ends and you see that they just couldn’t be bothered to implement an audio version of the challenge.

As nice as that sounds, I think that false convenience is part of the problem. These scripts can offer what some developers believe is a superior, uniquely curated experience. But they can also be easily abused to capture private information, build detailed profiles of disabled users, and further ingrain the fallacy of the separate but equal doctrine.

In essence, I don’t think we should use these scripts until browsers include an opt-in/opt-out feature. Content should be built to be accessible in the first place so screen reader detection shouldn’t be necessary to serve users a seamless, accessible experience.

Until then, I don't personally think it’s worth the tradeoff of potentially being served outdated alternative content, giving developers the ability to build detailed profiles which uniquely identify users as disabled, and giving bad actors the ability to specifically target vulnerable populations.

When using your screen reader, what’s the most common barrier you run into when browsing a web page?

For me personally, the most common barrier I run into is unlabeled or incorrectly labeled content.

I did a lot of research in college and continue to conduct research both professionally and for hobbies. I would say between 25% and 30% of all the content I come across is unusable or inaccessible because 1) the page isn’t properly structured, 2) images, PDFs, videos, etc., aren’t labeled or are given silly cop-out labels like "image.jpg," or 3) the website itself is accessible but embedded content isn’t traversable.
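To make the first two of those concrete, here is a minimal HTML sketch of what properly structured, properly labeled content can look like (the headings, filenames, and alt text are invented for illustration):

```html
<!-- Real headings give screen reader users landmarks to jump between,
     instead of one undifferentiated wall of text -->
<h1>Quarterly Enrollment Report</h1>
<h2>Fall Enrollment Trends</h2>

<!-- Descriptive alt text, not a cop-out label like alt="image.jpg" -->
<img src="enrollment-chart.png"
     alt="Bar chart showing fall enrollment rising from 4,200 in 2019 to 5,100 in 2021">

<!-- Purely decorative images get an empty alt so screen readers skip them -->
<img src="divider.png" alt="">
```

Most screen readers expose shortcut keys that jump directly between headings and announce each image by its alt text, which is why a missing label or a flat, heading-free page makes content effectively unusable.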

If you were in a room full of web designers and programmers, what would be the 3 most important issues you would convey to them?

First, build accessibility into the system design phase. No developer should be able to end a sprint with a deployable piece of code that isn’t accessible to at least the bare minimum assistive technology. That sort of behavior perpetuates bad practices and demonstrates that disabled individuals are considered unnecessary or unimportant.

Second, consult users and use case testers on an ongoing basis to ensure that content starts out accessible and remains accessible.

Third, if you don’t understand accessibility, request that your team be given access to some sort of accessibility training. Some statistics place the number of disabled individuals in America alone at between one fifth and one quarter of the population. When a full one quarter of your target audience might not be able to access some portion of your product, you need to make a change for the better. It starts with developers. If you don’t know how to implement accessibility, you are either going to lose your job when accessibility becomes a necessity, or you are going to embrace change and make a convincing business case as to why your company needs accessibility.

Same question, but this time you’re in a room full of screen reader software developers...what are the top 3 issues you would discuss with them?

First, get on the same page. We really could use some standardized terminology and controls across screen reader software.

It’s like how half the video games these days use the X button for either attacking or jumping. Plenty of users make use of at least two different screen readers, depending on which mobile and desktop OS they can access. If I suddenly need to switch to Android because my iPhone screen shatters, I’d like to be able to quickly and seamlessly understand the basic information available on the screen. When a sighted person gets a new phone, they don’t need to take a quick refresher on how to read, right?

Second, once you’ve started working together, start working with companies. I realize some companies already do this, but it isn’t nearly as pervasive as it ought to be. I love that HTML5 is designed to provide a common platform for accessible web content, but the same doesn’t hold true for native apps. I shouldn’t need to switch into Google mode to access G Drive, then switch back to access an MS Word doc.

Third, collectively shame JAWS into actually working.

And lastly, what is the one thing everyone should understand about how people with visual impairments consume information on the web?

This is going to get a little dark, but I really think this is important to realize.

I think everybody needs to realize that overall, consuming information for a low vision individual is a frustrating and humiliating experience. Every time we go to a poorly built page, or get stuck in a tab trap, or find what we need only to learn that we either need to pay to access the accessible version or find a way to scan an image version, everyone from the product manager to the intern is saying "Hey, we don’t care about you. You aren't welcome here, you don’t matter, and we don’t accept you."

I understand that to a lot of people who don’t personally experience a disability, that can sound a bit dramatic. But when you’ve grown up with technology your entire life, when you code and know that there are existing, affordable, accessible options available, it really starts to feel like you’ve been written off as an inconvenience. Some IT departments are treated like an unwanted expense. For those of us who know you can and should do better, we feel like an unwanted expense.

One aspect of this that I don’t think many people realize is that the longer we spend without securing accessibility as a human right, the less access the disabled will have to basic services. Especially with an ongoing global pandemic, reliance on the internet has grown exponentially while major companies have lagged behind in ensuring accessibility upon release.

All we want is the same thing everybody else on the internet wants - access to information, access to basic services, and all the cat pictures our hard drives can handle.

Maggie Vaughan, CPACC
~ friend of DubBot, A11Y practitioner in higher ed