
How Continuous Accessibility contributes to accessibility maturity

June 12, 2024 by Andrew Hedges

Process gets a bad rap, as if through institutionalizing ways of doing things organizations are doomed to ossify. I’m a fan of process. Done right, it’s what enables organizations to do their best work in a repeatable way.

That’s why I’m high on accessibility maturity models. It’s fair to argue that the models we have to work with today aren’t perfect, that they need to…ahem…mature, but even in their current state, accessibility maturity models have the potential to spark conversations that can improve outcomes both for internal employees and external users of our products.

We’ve written before about Continuous Accessibility: what it is, how it’s helpful, and how end-to-end testing can get us closer to these ideals. So much of improving your organization’s accessibility maturity is about people and policy, but a slice of it can benefit from smartly delegating to the machines.

That’s what this article is about: where accessibility maturity and Continuous Accessibility intersect. Let’s get to it.

Accessibility Maturity Models 101

Accessibility maturity models are tools. In a nutshell, they describe a process that provides insight into the effectiveness and repeatability of an organization’s methods for achieving a range of accessibility-related outcomes.

In many cases, the conversations sparked by self-assessing against one of the several maturity models provide the most benefit, but the models also offer concrete metrics against which to track progress, as well as guidance on how to advance to the next of their several stages.

Most of the models go beyond just digital accessibility into areas such as procurement and personnel. They all share similar mechanics, as follows:

  1. They define a number of dimensions to be evaluated (e.g., procurement, personnel, software development lifecycle, etc.).
  2. Each dimension is evaluated against proof points that serve as evidence of its state.
  3. Each dimension is assigned to a stage of development (e.g., inactive, launch, integrate, optimize).
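To make those mechanics concrete, here is a tiny sketch of how a self-assessment might be modeled in code. The stage names and the two accessibility proof points come from this article; the data shape, the procurement proof points, and the `summarize` scoring rule are illustrative assumptions, not any model's specification.

```javascript
// Stages of development, in order (e.g., from the W3C Accessibility
// Maturity Model's terminology as described above).
const STAGES = ["inactive", "launch", "integrate", "optimize"];

// A hypothetical self-assessment: dimensions, each with proof points
// that either have supporting evidence (met) or don't.
const assessment = [
  {
    dimension: "procurement",
    stage: "launch",
    proofPoints: [
      { name: "Vendors asked for accessibility conformance reports", met: true },
      { name: "Accessibility requirements included in contracts", met: false },
    ],
  },
  {
    dimension: "software development lifecycle",
    stage: "integrate",
    proofPoints: [
      { name: "Testing process includes automated accessibility testing", met: true },
      { name: "Accessibility identified as product release gate", met: false },
    ],
  },
];

// One simple way to summarize progress: per dimension, report the
// assigned stage and the fraction of proof points with evidence.
function summarize(assessment) {
  return assessment.map(({ dimension, stage, proofPoints }) => ({
    dimension,
    stage,
    stageIndex: STAGES.indexOf(stage),
    evidence: proofPoints.filter((p) => p.met).length / proofPoints.length,
  }));
}
```

Modeling the assessment as data like this is what makes it trackable over time, which is the point of the concrete metrics mentioned above.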

The models organize what they measure hierarchically, as follows:

  • Dimension
    • Proof Point Category
      • Proof point

We’ll discuss Continuous Accessibility next, but here’s a preview of the W3C Accessibility Maturity Model proof points I think are relevant:

  • Consistent approach to implementing accessibility features across products
  • Consistent approach to testing and releasing products
  • Testing process includes automated accessibility testing
  • Accessibility identified as product release gate

Keep these in mind as we explore our definition of Continuous Accessibility!

Continuous Accessibility 101

Melanie Sumner defines Continuous Accessibility as “the approach to ensuring that code intended to be displayed in browsers can be continuously checked and monitored for digital accessibility requirements through a novel application of existing software engineering concepts and Web Content Accessibility Guidelines (WCAG).”

I think of it as asking the machines to do more in service of always understanding your product’s accessibility posture.

Why is this so important? The sooner accessibility bugs are discovered, the more likely they are to be fixed and, therefore, the less likely they are to impact disabled users. This actually applies to any type of defect and explains why tests in build pipelines (aka CI/CD) are so popular and valuable.

Stated another way, the longer it takes for an engineer to discover that a change they made broke something, the less likely they are to fix it. And, it’s not because they don’t care! It’s because the person will have moved on to new work, making it that much more difficult to come back to the context of the breaking change.

We call it attention decay. It’s what makes Continuous Accessibility so important.

Available tools for achieving Continuous Accessibility

Continuous Accessibility is best achieved through the use of automated tools. By that, I do not mean “tools that scan the page and produce a list of accessibility bugs,” though that is one type of automated tool. What I mean is any piece of software that can be run on demand to validate accessibility quality.

Before we talk in more depth about the automated tools, it’s helpful to review the universe of tools and techniques available to us to evaluate digital accessibility. The categories of such tools include the following:

  • Linters
  • Scanners
  • End-to-end tests
  • Manual QA
  • Periodic audits

The preceding list is ordered by the speed at which the tool or technique can operate. Speed contributes to decisions about when to include them in the software development process.

The following list groups the tools by when they’re typically used:

  • In dev
    • Linters
  • In CI/CD
    • Scanners
    • End-to-end tests
  • After development
    • Manual QA
  • After deployment
    • Periodic audits

No offense, humans, but Continuous Accessibility is about what the machines can do.

Humans are and will always be the source of truth for whether software can be used by humans. But, humans are slow and expensive compared to machines, which is why Melanie pushes us to broaden our thinking around Continuous Accessibility, in her words, the “novel application of existing software engineering concepts and Web Content Accessibility Guidelines (WCAG).”

So, while manual QA and periodic audits most faithfully evaluate the usability of software for people using assistive technologies, they don’t fit the spirit of Continuous Accessibility, which is to detect problems before attention decay can set in.

That leaves us with the following list of tools:

  • Linters
  • Scanners
  • End-to-end tests

Next, let’s define each of these and spell out their relative advantages and disadvantages.

What can the machines do for us?

Linters and scanners

Linters and scanners are similar in approach and outcomes. They evaluate either source code or UI for problems as measured against a defined set of heuristics.

Pros:

  • Fast
  • Cheap

Cons:

  • Limited context around user goals
  • Fairly hard ceiling on the types of issues this approach can surface
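As a deliberately tiny illustration of the heuristic approach, here is a sketch of a scanner-style rule in plain JavaScript. Real scanners such as axe-core run hundreds of rules against the live DOM; this hypothetical `checkImgAlt` works on an HTML string with a regex, which is exactly the kind of shortcut that gives the approach its hard ceiling.

```javascript
// A toy scanner rule: flag <img> elements that have no alt attribute.
// Real tools apply far more heuristics against the rendered page; this
// regex-based sketch is illustrative only.
function checkImgAlt(html) {
  const findings = [];
  const imgTags = html.match(/<img\b[^>]*>/gi) ?? [];
  for (const tag of imgTags) {
    if (!/\balt\s*=/i.test(tag)) {
      findings.push({ rule: "img-alt", element: tag });
    }
  }
  return findings;
}
```

Note the limitation: a rule like this can tell you an alt attribute is missing, but not whether the alt text that is present makes sense for the user's goal on that page.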

End-to-end tests

End-to-end tests haven’t traditionally been super helpful for accessibility testing because the existing tools were all built assuming mouse-first interactions. That’s starting to change as more tools integrate the ability to exercise something closer to keyboard interactions, though in many cases keyboard mode is still simulated.

Pros:

  • Can be written to take context & user goals into account
  • Fast-ish
  • Cheap-ish

Cons:

  • More expensive to run, so usually more limited in what gets covered
  • Existing tools are not accessibility-first

About that last point, “existing tools are not accessibility-first.” Most end-to-end testing frameworks assume mouse interactions. Some have been extended with limited keyboard support. We at Assistiv Labs have built a library on top of Playwright that is accessibility-first and that programmatically drives keyboard and screen reader interactions exactly as if a human is interacting with the page.

Our library enables us to automate testing of critical user flows at human-audit-level fidelity. We’re using it today with customers of our End-to-End Accessibility Testing service and plan to release it as a self-serve platform at some point in the future.
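To make the idea of accessibility-first end-to-end checks concrete, here is a browser-free sketch that simulates tabbing through a page's focusable elements and lets a test assert the focus order matches the intended user flow. The element list and `tabOrder` helper are hypothetical stand-ins for illustration only (not Assistiv Labs' library or Playwright's API); a real test would drive an actual browser or assistive technology.

```javascript
// Hypothetical model of a page as a flat list of focusable elements.
// A real end-to-end test would query the live page instead.
const page = [
  { id: "skip-link", tabindex: 0 },
  { id: "logo", tabindex: -1 }, // focusable only programmatically
  { id: "search", tabindex: 0 },
  { id: "nav-about", tabindex: 0 },
];

// Simulate repeatedly pressing Tab: visit elements with tabindex >= 0
// in document order (ignoring positive-tabindex reordering for brevity).
function tabOrder(elements) {
  return elements.filter((el) => el.tabindex >= 0).map((el) => el.id);
}
```

An assertion like `tabOrder(page)[0] === "skip-link"` encodes the expectation that keyboard users reach the skip link first, which is precisely the kind of user-goal context a page scanner can't express.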

Meet me at the intersection of Accessibility Maturity & Continuous Accessibility.

Again, here are the W3C Accessibility Maturity Model proof points I think can be at least partially achieved by applying Continuous Accessibility:

  • Consistent approach to implementing accessibility features across products
  • Consistent approach to testing and releasing products
  • Testing process includes automated accessibility testing
  • Accessibility identified as product release gate

Let’s take them one by one.

Consistent approach to implementing accessibility features across products

Meeting this proof point should involve shifting well left of the development phase into UX research and UI design, but to the extent that automated tools—from linters through to end-to-end tests—can validate code before it’s released, it’s possible to enforce a consistent baseline of functionality at all times, across a company’s products.

Consistent approach to testing and releasing products

This proof point is right in the wheelhouse for automated testing. Implementing continuous accessibility tooling should be considered essential (but not sufficient) for meeting this one.

Bigger organizations struggle with far-flung groups taking radically different approaches to accessibility testing. By implementing a standard regimen of automated testing, it’s possible to rationalize at least that piece of the puzzle.

I say “essential, but not sufficient” because there should still be manual validation by humans for areas where the machines don’t give us 100% confidence.

The canonical example of where machines try to be helpful but (currently) don’t get us to 100% confidence is image alternative (alt) text. To be fair to our robot friends, it’s a hard problem to solve. The same image could require different alt text depending on the context in which it’s used.
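A quick sketch shows both what a machine can and cannot do here. A rule can flag alt text that is almost certainly wrong (a raw filename, say), but it cannot decide whether plausible-looking alt text fits the surrounding context; `suspiciousAlt` below is a hypothetical heuristic, not any real tool's rule.

```javascript
// Flag alt text that is probably a mistake: generic placeholders or
// raw filenames. Judging whether valid-looking alt text actually fits
// the context in which the image is used still takes a human.
function suspiciousAlt(alt) {
  const trimmed = alt.trim().toLowerCase();
  if (trimmed === "image" || trimmed === "photo" || trimmed === "picture") {
    return true; // generic placeholder
  }
  if (/\.(png|jpe?g|gif|svg|webp)$/.test(trimmed)) {
    return true; // looks like a filename
  }
  return false;
}
```

This is the Swiss cheese dynamic in miniature: the heuristic closes off the obvious holes and leaves the judgment calls to manual validation.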

Unlike the other proof points, this one is not explicitly tied to accessibility. That’s because you need to test for accessibility whether or not the change being made was related to accessibility. Automated tooling (i.e., Continuous Accessibility) is the only realistic way to stay on top of the usability of a large, rapidly changing application.

Testing process includes automated accessibility testing

I couldn’t agree more. And, we need to expect more from the tools that vendors (including Assistiv Labs!) provide. Help us understand what’s holding you back from increasing the accessibility quality of your applications so we can build the tools you need!

It’s important here to talk about the Swiss cheese analogy. No one tool or technique is going to find every last accessibility problem. But, each of them is like a slice of Swiss cheese providing coverage with a few holes. Layer enough slices on top of each other and you’ll close off most of the holes through which bugs might slip.

Accessibility identified as product release gate

Some of our customers are very sophisticated with respect to their product accessibility programs, and I don’t know of a single one that would fail a build because an automated accessibility test failed. Yet.

What would need to be true for your organization to start failing the build when an automated accessibility test fails?
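One way to ease into such a gate is a violation budget that ratchets down over time: fail the build only when violations exceed a known baseline, then lower the baseline as existing bugs get fixed. The `gate` function below is a hypothetical sketch of that policy, not a feature of any CI product.

```javascript
// A ratcheting release gate: new violations fail the build, and the
// allowed baseline only shrinks as existing bugs are fixed.
function gate(violationCount, baseline) {
  if (violationCount > baseline) {
    return {
      pass: false,
      reason: `found ${violationCount} violations, budget is ${baseline}`,
    };
  }
  // Ratchet: when under budget, today's count becomes the new baseline.
  return { pass: true, newBaseline: violationCount };
}
```

In a pipeline, a failing result would set a nonzero exit code so the build stops, while passing runs persist the tightened baseline for the next run.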

Let’s Continue the conversation… #DadJokesAllDay

I first presented the concepts in this article as a talk at Accessibility Camp Bay Area 2024. At the end of my session, I posed the following questions to the group:

  • Has your company/organization worked towards an accessibility maturity assessment?
  • How do you measure progress against your accessibility goals?
  • Do you think about Continuous Accessibility in the course of your work?
  • In what ways do you leverage automation to achieve your accessibility goals?

If you have thoughts about any of the preceding, we’d love to talk with you. Find us on LinkedIn, Mastodon, or Twitter, or drop us an email at