When talking with customers about accessibility, I frequently get asked:
Why can’t automated accessibility testing tools find all errors?
This question is understandable, given we have accessibility guidelines such as the Web Content Accessibility Guidelines, technical standards such as HTML5 and WAI-ARIA, and a myriad of patterns and practices.
First, let’s address the common claim that automated accessibility testing tools can only catch between 30% and 50% of errors. Capability varies widely between the various automated testing tools, which we will cover in another post. So, independent of technology, let’s review the reasons automated testing tools cannot catch all accessibility errors and must be supplemented with manual testing using the same assistive technologies (AT) used by people with disabilities.
Reason #1: Judgment Calls
The inability to make judgment calls about the quality of an accessibility treatment is a substantial limitation of testing tools. For example, a tool can determine whether alternate text is missing from a graphic, but it cannot judge the quality of the alternate text or of the techniques used to deliberately hide an image from AT.
<img src="georgewashington.jpg">
This provides no information for AT and fails HTML validation, because the required alt attribute is missing. It is easily caught by testing tools.
<img src="georgewashington.jpg"
alt="George Washington crossing the Delaware">
This example is correctly formatted. AT such as a screen reader would announce an image of "George Washington crossing the Delaware".
<img src="georgewashington.jpg"
alt="My Dog Rex">
Testing tools cannot make the judgment call that this photo of George Washington crossing the Delaware has been improperly labeled as "My Dog Rex".
<img src="squigglyline.jpg"
alt="" role="presentation" >
If the image is purely decorative "eye candy" and provides no information, the developer can use empty alt text or another technique, such as the presentation role, to tell AT to ignore the image. Because this is a judgment call, most testing tools honor it without complaint. Many developers apply these techniques to meaningful images, unaware that they are creating issues that will never be caught by automated testing!
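For example, a developer might mark a meaningful image as decorative. A minimal sketch of the problem (the file name is illustrative):
<img src="salesgrowthchart.jpg" alt="" role="presentation">
Most tools treat the empty alt text and presentation role as an intentional choice and report no error, yet a screen reader user loses the chart’s information entirely.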
Reason #2: Complex Standards
A skip link is a bookmarked hypertext link that appears as the first focusable object on the page. It allows users to bypass blocks of content repeated on multiple web pages. For example, features such as a menu, a search field, and contact information generally appear on every page of a brochure website. The ability to skip these blocks is of tremendous value to keyboard users with mobility challenges, who would otherwise have to endure tabbing through the same navigation links every time they go to a new page. But this is not an absolute rule! The following exceptions make it difficult for automated tools to evaluate (see the sketch after the success criterion below):
- On the desktop, it is widely held that if three or fewer objects can take focus between the top of the page and the main section, then a skip link is not required. This situation is common in e-learning courses, where users may start directly in the main content.
- Mobile pages do not have the skip link requirement, as it is not relevant to a touch screen environment.
2.4.1 Bypass Blocks: A mechanism is available to bypass blocks of content that are repeated on multiple Web pages. (Level A)
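For reference, here is a minimal sketch of a common skip link pattern (the class name and target id are illustrative):
<body>
<a class="skip-link" href="#main-content">Skip to main content</a>
<nav><!-- menu, search, and contact links repeated on every page --></nav>
<main id="main-content">
<!-- page-specific content starts here -->
</main>
</body>
Because the link is the first focusable element on the page, a keyboard user can activate it immediately and jump past the repeated navigation.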
Reason #3: Semantic Code or Content Problems
Automated testing tools can identify structures that are likely to be missing. Form labels are an excellent example of an easily caught error. In the bad example below, the for attribute linking the label to the field is missing, which automated testing tools easily detect!
Bad:
<label>First Name:</label>
<input type="text" id="FirstName">
Good:
<label for="FirstName2">First Name:</label>
<input type="text" id="FirstName2">
However, the following accessibility errors are also content and semantic problems, and are frequently not caught by automated testing tools:
- Programmer misuse of techniques
- Lack of industry-standard accessibility treatments
- Gaps in tool testing ability
- Areas of disabilities outside the scope of the tool
Programmer misuse of techniques
In the following example, the developer applied tabindex="-1" to a form field, which prevents the field from receiving keyboard focus. This frequently happens when a developer mistakenly thinks a negative tabindex will alter the field’s order on the screen:
<label for="MiddleName">Middle Name:</label>
<input tabindex="-1" type="text" id="MiddleName">
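One straightforward fix, assuming the field should simply be reachable by keyboard, is to remove the negative tabindex so the field rejoins the natural tab order; if a different order is wanted, reorder the fields in the source markup instead:
<label for="MiddleName">Middle Name:</label>
<input type="text" id="MiddleName">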
Lack of industry-standard accessibility treatments
Headings are one way screen reader users can navigate from section to section. Frequently, a developer will style text to appear to be a heading but not use the correct markup. The web page will appear to have proper headings but actually lacks the markup a screen reader user needs to navigate the site:
Fake heading:
<div class="editor-indent" style="font-size: 16px !important;
line-height: 20px !important;"><b>Fake Heading</b></div>
Real heading:
<h5>Real Heading</h5>
Gaps in tool testing ability
Each testing tool vendor determines its own ruleset, so not all tools have implemented a check for every possible violation. For example, ARIA landmark regions are a structure similar to headings in that they give screen reader users contextual navigation to areas such as the header, main area, and footer. The rule for ARIA landmarks is that all content must reside in a landmark region; any content outside a region technically fails the specification. This rule is not widely implemented, and only the most technically capable accessibility testing tools catch the error.
Good: <aside role="complementary">In a landmark region</aside>
Bad: I am not inside a landmark region.
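To make the rule concrete, here is a minimal sketch of a page where every piece of content except one stray paragraph lives inside a landmark (the element choices are illustrative; top-level header, main, and footer elements map to the banner, main, and contentinfo landmarks):
<body>
<header>Site banner content</header>
<main>Primary page content</main>
<p>This paragraph sits outside every landmark region and fails the rule.</p>
<footer>Copyright and contact details</footer>
</body>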
Areas of disabilities outside the scope of the tool
Some tools target a particular disability. For example, some tools only analyze the contrast between the text foreground color and the page’s background color, and perform no other types of accessibility testing.
<p style="background-color:#000000;
color:#555555;"
>I have poor color contrast.</p>
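For comparison, a sketch of the same paragraph with a passing combination: white text on a black background has roughly a 21:1 contrast ratio, well above the WCAG AA minimum of 4.5:1 for normal-size text.
<p style="background-color:#000000;
color:#ffffff;"
>I have strong color contrast.</p>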
In summary, automated testing tools are an important part of any web accessibility testing process. However, organizations must supplement automated tests with manual testing using the same assistive technologies used by people with disabilities.
Recommended Resources
- The Strengths and Weaknesses of Automated (Accessibility) Checkers
- Ten Accessibility Errors Automated Tools Miss on PDF Documents
- Accessible Overlay Tools: Realities and Myths