
How far can automated tools go towards ensuring accessibility?

August 26, 2024 by Nicholas Cook

Some accessibility tooling vendors estimate that their products can catch anywhere from 30% to 80% of “accessibility issues”. But what counts as an “accessibility issue”? How are these numbers calculated? Are they based on the percentage of WCAG Success Criteria (SCs) that the tool fully covers, or do they include partial coverage? Is it simply the percentage of issues typically caught during manual audits? Do they count each failure once, or every instance of the same failure wherever it shows up on the page?

Recent polling hints that answers to these questions are not well understood in our industry.

Why should you care? When setting your accessibility strategy, it’s essential to have a solid understanding of what coverage your tools provide—and what they don’t—so that you can trust them to save you time (and money!).

The last thing anybody wants is for accessibility issues to be slipping by undetected due to a misunderstanding of a tool’s capabilities.

In this article, we’ll help you understand what the most common class of automated accessibility tooling, Static Code Analysis (SCA), is capable of, and explore how new automation techniques can expand the universe of what’s even possible to detect.

Tech stack

First, it's helpful to understand the technology stack that accessibility testing has to cover. There are multiple layers of interaction within a website’s ecosystem (see the sketch after this list for one way to inspect them):

  1. Developers write source code for a website.
  2. A specific browser (e.g., Chrome, Firefox, Safari) renders that code into a DOM (Document Object Model) and an accessibility tree.
  3. Assistive technologies (ATs) like screen readers interact with the accessibility tree (with help from the layer below, and sometimes the DOM).
  4. The operating system manages the interaction between ATs and the accessibility tree through the OS accessibility layer.
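
To make the jump from layer 2 to layer 3 concrete, here is a minimal sketch that prints the accessibility tree the browser derives from the DOM, which is the structure ATs actually consume. It assumes Node with a recent version of Playwright (locator.ariaSnapshot() is a newer API), and the URL is a placeholder:

```ts
// A minimal sketch, assuming @playwright/test and a recent Playwright
// version (locator.ariaSnapshot() is a newer API); placeholder URL.
import { test } from '@playwright/test';

test('inspect the accessibility tree the browser builds', async ({ page }) => {
  await page.goto('https://example.com'); // placeholder URL

  // Serialize the ARIA (accessibility) tree the browser derives from
  // the DOM -- the structure assistive technologies actually consume.
  console.log(await page.locator('body').ariaSnapshot());
});
```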

For a screen reader user, all of these layers must come together to deliver the UX. Unless a tool is testing all layers, you cannot be 100% certain issues don’t exist.

And yes, that includes WCAG issues. Many success criteria require accessibility support, which essentially means the content has to work with the browsers and assistive technologies you support.

What can Static Code Analysis do?

Some of the most common automated accessibility tools include Deque’s axe-core/axe DevTools, WAVE, and the ESLint plugin eslint-plugin-jsx-a11y. These tools are examples of Static Code Analyzers (SCAs), which read and interpret a website at a single point in time. This could be the source code written by the developer (as with ESLint) or the rendered DOM (as with axe and WAVE).

SCAs are great at comparing code or the DOM against basic rules for making a website more accessible, such as missing alt attributes on images or incorrect heading structures (e.g., skipping from an <h1> to an <h4>). axe-core's rule descriptions are a great way to get familiar with these rules.
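
As an illustration, here is a minimal sketch of an SCA-style scan, assuming a Playwright test setup with the @axe-core/playwright package installed (the URL is a placeholder):

```ts
// A minimal sketch, assuming @playwright/test and @axe-core/playwright
// are installed; the URL is a placeholder.
import { test, expect } from '@playwright/test';
import { AxeBuilder } from '@axe-core/playwright';

test('page has no violations axe-core can detect', async ({ page }) => {
  await page.goto('https://example.com'); // placeholder URL

  // axe-core inspects the rendered DOM against its rule set
  // (missing alt text, heading order, unsupported ARIA, and so on).
  const results = await new AxeBuilder({ page }).analyze();

  // Each violation carries a rule id, description, and offending nodes.
  expect(results.violations).toEqual([]);
});
```

Keep in mind that a passing run here only means axe-core's rules found nothing; as we'll see, it says little about the criteria that require interaction.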

Looking back at the layers of the end-user experience we outlined before, these tools operate right around layers 1 and 2 (source code and DOM rendering).

What can’t Static Code Analysis do?

The most common SCAs in use today cannot test layer 3 or layer 4 because they have no access to assistive technologies like screen readers or the OS.

And while SCAs can detect some common mistakes that can lead to issues with interaction (like using unsupported ARIA attributes), they struggle with more complex interactions such as keyboard traps, focus falling somewhere it shouldn’t (like on the <body> element), or a screen reader announcement being invalid due to a race condition in interactive elements.

Remember that SCAs don’t actually use the website. Almost all websites include interaction, so these are crucial bugs that need to be identified but often slip through the cracks of automated tools.
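
To see the difference, here is a hedged sketch of the kind of focus-management check only interaction can perform, catching the focus-on-<body> bug mentioned above. It assumes a Playwright test setup; the URL and selectors are hypothetical:

```ts
// A sketch of a focus check an SCA cannot perform, assuming
// @playwright/test; the URL and selectors are hypothetical.
import { test, expect } from '@playwright/test';

test('focus does not fall to <body> after closing a dialog', async ({ page }) => {
  await page.goto('https://example.com'); // placeholder URL

  await page.click('#open-dialog');         // hypothetical dialog trigger
  await page.click('[aria-label="Close"]'); // hypothetical close button

  // Ask the live page where focus actually landed -- something a tool
  // that never interacts with the page has no way to observe.
  const focusedTag = await page.evaluate(() => document.activeElement?.tagName);
  expect(focusedTag).not.toBe('BODY');
});
```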

Because SCAs rely on a one-size-fits-all rule set, they also can't understand your website’s unique context. For example, they can’t tell you whether an <h1> is incorrectly being used purely for visual styling or whether it's correctly describing the page's topic or purpose.

Static Code Analysis WCAG Coverage

The Web Content Accessibility Guidelines (WCAG) serve as an international standard for ensuring web content is accessible to everyone. They are widely used as the benchmark for identifying accessibility issues.

Equal Entry did a thorough case study where they created a sample site littered with accessibility issues for 47 of the 50 WCAG 2.1 success criteria necessary for AA compliance. Then, they tested the site with some of the most common SCAs, and the best of those tools found only 10.6% of the total issues.

Their case study is a great demonstration that there are several critical SCs that traditional automated tools cannot cover effectively, such as:

  • 1.3.2 Meaningful Sequence
  • 2.1.2 No Keyboard Trap
  • 2.4.3 Focus Order
  • 2.5.3 Label in Name (although axe-core has an experimental feature that provides limited coverage)
  • 3.2.1 On Focus
  • 3.2.2 On Input
  • 3.3.1 Error Identification
  • 4.1.3 Status Messages
  • and others…

This is because SCAs don’t understand how to use the page; they’re just looking at a snapshot in time. All of these SCs require actual interaction with the site (e.g., tabbing to a button, filling out a form field), so they can’t be tested by just inspecting the code. These criteria are essential for ensuring an accessible user experience, yet they fall outside the scope of what SCAs can detect.
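
For example, a check for 3.2.2 On Input has to actually operate a form control. Here is a hedged sketch with Playwright; the URL and the '#country' selector are hypothetical:

```ts
// A sketch of testing SC 3.2.2 On Input, assuming @playwright/test;
// the URL and '#country' selector are hypothetical.
import { test, expect } from '@playwright/test';

test('changing a select does not cause a context change (SC 3.2.2)', async ({ page }) => {
  await page.goto('https://example.com/form'); // placeholder URL
  const urlBefore = page.url();

  // Interact the way a user would -- the step SCAs never take.
  await page.selectOption('#country', 'CA');

  // An unexpected context change (e.g., auto-submitting the form and
  // navigating away) would fail 3.2.2; the URL should be unchanged.
  expect(page.url()).toBe(urlBefore);
});
```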

How can we bridge the coverage gap? End-to-end testing!

While SCAs may fail to provide adequate coverage for complete WCAG compliance, they are still a powerful tool to keep in your toolbox.

But if your goal is to automate as much as possible, there are more techniques available.

End-to-End (E2E) test automation can replicate exactly what an end user would experience. Unfortunately, traditional E2E testing only considers mouse users by default. At Assistiv Labs, we've been working to change that with our accessibility-first end-to-end testing service. Accessibility-first E2E testing replicates exactly what an end user would experience using a screen reader, keyboard, and more. This allows accessibility-first E2E to cover all 4 layers of the end-user experience we outlined earlier, rather than only layers 1 and 2 like SCAs.

Accessibility-first E2E automation interacts with the page by tabbing, cursoring, clicking, and tapping to achieve an end goal. It crucially helps fill both the interaction and context gaps left by SCAs.
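
As a rough illustration of the keyboard-driven side of this (real screen reader coverage needs tooling with access to layers 3 and 4), here is a hedged sketch of completing a task without a mouse and checking for a status message. The URL, form structure, and Tab stops are all hypothetical:

```ts
// A sketch of a keyboard-only task flow, assuming @playwright/test;
// the URL, form structure, and Tab stops are hypothetical.
import { test, expect } from '@playwright/test';

test('form can be completed by keyboard and announces success', async ({ page }) => {
  await page.goto('https://example.com/signup'); // placeholder URL

  // Reach and operate the form with the keyboard alone.
  await page.keyboard.press('Tab');   // assumed: focuses the email field
  await page.keyboard.type('user@example.com');
  await page.keyboard.press('Tab');   // assumed: focuses the submit button
  await page.keyboard.press('Enter');

  // SC 4.1.3 Status Messages: confirmation should land in a live region
  // so assistive technologies can announce it without moving focus.
  await expect(page.locator('[role="status"]')).toContainText(/success/i);
});
```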

One way to look at the difference in coverage is as a Venn Diagram comparing different accessibility testing methods.

Venn diagram showing overlap between manual testing, various accessibility tools, and E2E testing, which covers areas missed by others.

This diagram introduces a few terms we haven’t covered yet—don’t worry, we’ll be writing more about them in future articles. The main takeaway is that different types of accessibility automation provide different types of coverage. Strategically combining multiple types can greatly increase your ability to trust and rely on automation.

Conclusion

Static Code Analysis (SCA) is great for identifying a number of often easy-to-fix bugs, reducing the workload for auditors and manual testers, and aiding accessibility efforts overall, but it can’t be the only solution you rely on to make sure all your bases are covered with regard to accessibility. There is no “silver bullet” for solving all of the accessibility issues on your site. It’s essential to take a multi-pronged approach, be aware of the coverage different tools provide, and—importantly—understand the areas they can’t cover.

If you’re curious to learn more about how we approach accessibility automation at Assistiv Labs, check out our end-to-end accessibility testing service, or reach out!