“67% of accessibility issues originate in design”
[First published here on April 21, 2022]
This statistic — referenced quite often in accessibility circles — is derived from a Deque case study.
What the case study actually showed was that the team had been able to reduce defects detected in automated testing by 67%, by baking accessibility into their design practices.
They created and rolled out training, guidance, checklists, an adjusted process, and a design system built to maximise accessibility from the start.
It’s really great to see such good practice and commitment to accessibility.
But for those of you looking to re-use the statistic, please be sure you know what it actually tells us.
It does not tell us that the firm reduced accessibility issues by 67%; it tells us that they reduced the accessibility issues detected in automated testing by 67%. Even using Deque’s recent and impressive performance figures for its automated testing tools (57% of issues identified), that works out to an overall measured issue reduction of about 38%.
In other words, through good design practice they were able to head off roughly 38% of all the accessibility issues they would otherwise have had: the 67% reduction, scaled by the share of issues automated testing can detect.
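To make that arithmetic explicit, here’s a quick back-of-the-envelope sketch in Python. The 57% detection rate and the 67% reduction are the figures quoted above; the assumption that the reduction applies only to the automated-detectable subset of issues is mine.

```python
# Back-of-the-envelope check of the 67% -> ~38% conversion.
# Assumes automated testing detects ~57% of all accessibility issues, and that
# the case study's 67% reduction applies only to that detectable subset.

automated_detection_rate = 0.57  # share of all issues automated tools can find
reduction_in_detected = 0.67     # reduction reported by the case study

overall_reduction = automated_detection_rate * reduction_in_detected
print(f"Overall measured reduction: {overall_reduction:.0%}")  # roughly 38%
```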
It doesn’t sound as impressive as 67%. A 38% reduction in issues is still huge, though, in the context of their findings that those problems would otherwise have cost them on average:
3x more to fix during automated testing, or
12x more during manual testing, or
95x more once it was live.
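To put those multipliers in perspective, here’s a purely illustrative sketch. The design-stage cost of one unit is a hypothetical baseline I’ve chosen, not a figure from the case study, and the real costs will depend entirely on the bank’s own processes.

```python
# Illustrative only: relative cost to fix one accessibility issue at each stage,
# using the multipliers reported in the case study. The 1-unit design-stage cost
# is a hypothetical baseline, not a figure from the source.

relative_cost = {
    "design": 1,
    "automated testing": 3,
    "manual testing": 12,
    "live": 95,
}

# Saving from catching an issue at design time rather than at a later stage.
for stage in ("automated testing", "manual testing", "live"):
    saving = relative_cost[stage] - relative_cost["design"]
    print(f"Caught in design rather than {stage}: saves {saving} cost units per issue")
```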
These cost profiles are specific to the processes that the firm, a bank, has in place. Presumably, like most financial institutions, they have large, tightly coupled legacy systems. They’ll be highly regulated, and have heavy security requirements. I’d expect making a change to something already integrated into the firm’s systems to be really time-consuming, and it might well be 95 times faster to make changes to prototypes than to production code.
But those constraints aren’t universal. Amazon says its devs are deploying new code every 11.7 seconds — clearly it’s not going to take them 95 times longer to fix their production code.
That’s why we shouldn’t be cutting and pasting figures like these into other contexts. You can easily see how someone might be tempted to claim that front-loading accessibility considerations in design can yield a 95-fold reduction in costs…
In reality, I suspect that the changes the case study bank made will yield improvements far beyond those their automated testing can measure. They’ve taken a holistic approach: providing tailored training, support and resources across disciplines, and testing throughout the product development lifecycle with real users who have a wide range of access needs, including users of assistive technology. The 38% improvement they’re able to evidence, I’m sure, doesn’t do justice to the impact their work will have had.
Because it’s often really hard to measure what matters. It can be tough to describe good counterfactuals, to show what would otherwise have been. Shoehorning in some contextually inapplicable statistics can feel better than nothing.
But we don’t have nothing. We have organisations like WebAIM doing great work auditing the state of accessibility on the web at large.
We know that, across 1 million home pages, 50 million distinct accessibility errors were detected. And that 96.5% of all errors detected fall into just six categories. Addressing just these few types of issues would significantly improve accessibility across the web.
Low contrast text
Missing alt-text for images
Empty links
Missing form input labels
Empty buttons
Missing document language
And all but the last one can be addressed, or meaningfully supported, by designers (e.g. with accessibility annotation/bluelines).
Assuming, of course, that accessible design has been made an explicit part of their job — and they’ve been given the resources, tools and support to do it.
Don’t succumb to spurious accuracy just because it feels robust. It is better to be approximately right than precisely wrong.