Checking your website for accessibility? Great! But automated testing tools catch only 10-30% of errors. Learn where they excel, where they fall short, and how to use them.
Here’s How to Incorporate Automated Web Accessibility Checkers in a Robust Accessibility Compliance Program
This article on automated web accessibility testing tools is reprinted, with updates and changes, from the May 2017 issue of Mealey’s™ Litigation Report: Cyber Tech & E-Commerce. It was originally published as The Practical Role Of Automated Web Accessibility Testing Tools In A Robust Accessibility Compliance Program. Mealey’s is a subscription-based information provider and a division of LexisNexis. Copyright ©2017 by Hiram Kuykendall. Responses welcome.
Why Use Automated Accessibility Testing Tools?
Legal actions and complaints have exploded against organizations with inaccessible websites and web applications. In the crosshairs are websites and applications that are not perceivable and usable by someone with a vision, hearing, mobility, or cognitive disability. It is only natural that institutions would respond by seeking out automated accessibility testing tools that can scan an entire web-based experience and report accessibility issues.
At first, this strategy of targeting errors detected by automated scanning proved effective. After all, many of the entities bringing actions or complaints used the same tools to identify organizations to focus on. This led to a distinctive pattern in which organizations remediated only the defects flagged by automated tests. However, automated tests identify only ten to thirty percent of accessibility issues.
It was predicted that actions against organizations relying solely on automated testing would increase as the evaluation techniques of concerned parties extended beyond what automated tools can identify, and as individuals with assistive needs objected to inaccessible experiences even when the code passed automated tests. True to form, we are now seeing actions brought against public and private sector entities that show zero failures in the most popular automated testing tools. As such, organizations are struggling to understand the nature of the remaining seventy percent of issues and where automated testing fits in a robust accessibility compliance program. The rest of this article outlines some of the most contentious of these issues and how automated testing can be used in a robust accessibility compliance program.
NOTE: This article uses Web Content Accessibility Guidelines (WCAG) 2.0 Level A/AA as the accessibility compliance standard. This is important because some government legislation still cites the original Section 508 rules, which have a less thorough rule set, as the standard. However, the Section 508 Amendment to the Rehabilitation Act of 1973 adopts WCAG 2.0 Level A/AA by reference as the new federal standard, with compliance required by January 2018 (see “Making Technology Accessible To People With Disabilities: Section 508 Refresh Incorporates Internationally Recognized WCAG Standards”).
3 Areas Where Automated Web Accessibility Testing Tools Perform Unreliably
The following categories of issues represent trouble areas that automated web accessibility testing software traditionally cannot reliably evaluate.
1. Conditional Rules
Some WCAG rules have conditions under which a remediation technique is required. Since these rules are conditional, automated tools will, at best, provide a warning to test manually; they do not have the sophistication to determine when a failure has actually occurred.
Example: Skip Links
WCAG Success Criterion 2.4.1 – Bypass Blocks: A mechanism is available to bypass blocks of content that are repeated on multiple Web pages. (Level A)
People with mobility impairments rely heavily on keyboard-only interaction. The fine motor skills required to use a mouse are often lacking. To provide a streamlined user experience, WCAG specifies that there must be a way for the keyboard user to bypass blocks of repeated content, such as menus, that frequently reside between the start of a web page and the main content. One method for meeting this specific condition is adding a “skip link” as the first focusable item on a page. A keyboard-only user will use the Tab key to access this skip link, which often is invisible until the Tab key is pressed and it receives focus. Once activated, the user is able to bypass all objects between the top of the page and the main body. They effectively skip common header objects such as menus, links to social media, logos, etc. Without a skip link, this isn’t an option. In fact, we have seen poorly coded menus that require the user to tab 40 times just to get to the main content.
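For illustration, here is a minimal sketch of one common skip-link technique (the class name, target id, and offscreen offsets are illustrative): the link is the first focusable item on the page and stays offscreen until it receives keyboard focus.

<!DOCTYPE html>
<html lang="en">
<head>
  <title>Skip link sketch</title>
  <style>
    /* Keep the link offscreen until it receives keyboard focus */
    .skip-link { position: absolute; left: -10000px; }
    .skip-link:focus { left: 0; }
  </style>
</head>
<body>
  <a class="skip-link" href="#main-content">Skip to main content</a>
  <nav><!-- repeated menus, logos, social media links --></nav>
  <main id="main-content"><!-- main content starts here --></main>
</body>
</html>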
While this is a noble principle, there are exceptions. For example, it is widely accepted that if there are four or fewer objects between the top of the page and the main section, a skip link is not required. This is particularly tricky for responsive sites that change layouts between desktop, tablet, and mobile displays. In many instances, a skip link would be appropriate and required for a desktop layout but frowned upon in a mobile layout, since a mobile display usually condenses items to fewer than four objects. And because a touch-sensitive device usually doesn’t involve a keyboard, skip links are also inappropriate there. With these nuances to consider, automated testing tools frequently do not report an error on the lack of a skip link, but rather take the more conservative route of issuing a warning or note to evaluate this condition manually.
Example: Headings and Landmarks
WCAG Success Criterion 2.4.6 – Headings and Labels: Headings and labels describe topic or purpose. (Level AA)
WCAG principles are just that—principles. Many evolve through additional information provided in the WCAG supplemental sections “How to Meet” and “Understanding.” For example, headings are structures that provide content context. Just as this article uses headings to denote the start of sections, web-based technologies implement the same concept. Assistive technologies, such as screen readers, can quickly jump from heading to heading. Headings also provide a way for assistive technologies to “scan” a page for an impromptu outline, much as a sighted person might scan headings to get the gist of a page’s content.
The issue is that there have been advances in standards that also satisfy this condition. For example, newer structures called landmarks allow the developer to identify page regions; there are header, main, and footer sections, for instance. A designer may use headings alone, landmarks alone, or a combination of the two, as long as the result meets the condition of the rule above: describing the topic or purpose of the area indicated. The difficulty is that this contextual relationship is complicated and requires interpretation. At best, automated tools check to see if one or more of these structures exist, but they cannot interpret whether the structures actually meet the intent of the objective. Automated testing tools will often not report heading and landmark issues, since meeting the intent is as important as the technical coding.
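As a sketch of the two approaches working together (the region labels and headings are illustrative), a page might pair HTML5 landmark elements with descriptive headings. An automated tool can confirm these structures exist, but not whether they truly describe topic or purpose:

<header><!-- banner landmark: logo, site title --></header>
<nav aria-label="Primary"><!-- navigation landmark --></nav>
<main>
  <h1>Spring 2017 Enrollment</h1>
  <h2>Application Deadlines</h2>
  <!-- main landmark, with headings describing each section -->
</main>
<footer><!-- contentinfo landmark --></footer>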
These are just two instances where the conditional nature of WCAG principles and the wide array of technical implementations mean that automated testing cannot be the only approach to assessing compliance.
2. Complex Failures
Increasingly, web developers are implementing code based on standards and techniques that have direct implications for assistive technologies. Since these code elements have multiple valid uses, it is almost impossible for automated testing tools to detect whether developers have implemented them in a way that provides an accessible experience.
The examples below will not trigger an error in the majority of automated testing tools, but they can lead to unintended inaccessible experiences.
Example: Display:None
WCAG Success Criterion 2.4.6 – Headings and Labels: Headings and labels describe topic or purpose. (Level AA)
If a developer wanted to hide an object on a web page from both sighted and assistive technology users, they could use a style to make it effectively invisible to all. Then at some point, the developer can dynamically alter this style and make the object appear without refreshing the page. For reference, you can imagine the hamburger menu (described this way because the three stacked horizontal bars resemble the layers of a burger) on a mobile application. When you press the menu button, the menu appears. One technique to make the pop-up menu invisible is to use a “display:none” style that hides the object from everyone. By removing the display:none style, the object will magically appear.
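Here is a minimal sketch of that legitimate use (the element id and toggle function are illustrative, not from a specific library):

<button aria-expanded="false" onclick="toggleMenu(this)">Menu</button>
<nav id="menu" style="display:none">
  <a href="/home">Home</a>
  <a href="/contact">Contact</a>
</nav>
<script>
  // Removing display:none makes the menu visible to everyone,
  // including assistive technology users.
  function toggleMenu(button) {
    var menu = document.getElementById('menu');
    var open = menu.style.display !== 'none';
    menu.style.display = open ? 'none' : 'block';
    button.setAttribute('aria-expanded', String(!open));
  }
</script>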
Here’s a brief example of that technique hampering accessibility:
<label for="FirstName" style="display:none">First Name:</label>
First name: <input id="FirstName" type="text">
In the real-life example above, the developer accidentally included the “display:none” style on a correctly associated label for a first name input box on a form. The developer did not know why the label was not showing up, so they just wrote “First name:” on the screen as plain text, which is not programmatically associated with the input.
Without a properly associated and perceivable label, assistive technologies would perceive that a field exists, but could not provide an indication as to the field’s intent. In the example above, without an associated label the field could be asking for first name, social security number, date of birth, etc.
The majority of automated web accessibility testing tools lack the sophistication to identify that the object is not perceivable by anyone. Most will pass it since the majority of the tag is properly coded. An automated checker assumes the inclusion of “display:none” is a deliberate coding strategy rather than an error.
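A corrected sketch simply removes the stray style and the redundant screen text, restoring a visible, programmatically associated label:

<label for="FirstName">First Name:</label>
<input id="FirstName" type="text">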
In summary, there are many instances where serious secondary errors coexist with a primary condition that has been met. Because automated testing tools find the primary condition acceptable, they often fail to catch the secondary errors that cause serious accessibility failures.
3. Patterns
Good web development teams want to create new and better user experiences. That is the hallmark of innovation. To guide developers on the creation of user experiences, various standards groups are creating accessible interaction patterns.[1] These patterns are independent of the underlying technology and provide both the developer and quality assurance teams with test conditions that will ensure WCAG compliance.
Even with these robust patterns, creating an accessible experience frequently falls short. The primary reasons for these failings seem to fall into one of three categories. First, web developers simply do not know the design patterns exist. Second, a design pattern may not be a 100% fit for the interaction the developer wants, so the developer may have to adapt an existing pattern to fit the new interaction. And lastly, there is substantial groupthink in software development. There is a massive amount of underlying code created by reputable sources for all technologies, and developers use it widely and often. But much of that pre-created code has a host of accessibility issues and does not conform to accessibility patterns. In practice, the lack of emphasis on accessibility and the ready availability of inaccessible code give a new developer few indications that there is more to the coding specification than is readily observable.
From an automated accessibility testing perspective, the challenge is creating validation rules to gauge how code will interact with users with and without assistive technology. Since the patterns do not necessarily dictate the underlying code, the automated web accessibility testing tools must try to identify success and failure patterns without an absolute technical standard to rely on. As such, the only viable recourse for the automated tools is to issue warnings that some interactions must be manually evaluated. This leads to the next issue: warnings frequently do not carry the same weight as errors in the development process. There have been many instances where the automated tools yield zero errors and numerous warnings, but the metric the team is held to is zero reported errors. In effect, warnings are unverified errors that must be evaluated with the same rigor as errors.
Example: Menus
One such example of a common pattern is a menu. A menu appears on just about every web page and has test conditions for blindness, low vision, and mobility impairments. Furthermore, this pattern can be adapted for a wide variety of needs, such as elearning interactions and other experiences that require robust keyboard navigation. The following menu pattern is from the W3C WCAG working group;[2] a simplified implementation sketch follows the list.
Sample Keyboard Behavior:
Keyboard actions when focus is on the menubar
- Left arrow: Previous menubar item
- Right arrow: Next menubar item
- Up arrow: Open pull down menu and select first menu item
- Down arrow: Open pull down menu and select first menu item
- Enter: Open or close pull down menu. Select first menu item if opening
- Space: Open or close pull down menu. Select first menu item if opening
Keyboard actions when focus is on a menu item
- Left arrow: Open previous pull down menu and select first item
- Right arrow: Open next pull down menu and select first item
- Up arrow: Select previous menu item
- Down arrow: Select next menu item
- Enter: Invoke selected item and dismiss menu
- Space: Invoke selected item and dismiss menu
- Esc: Close menu and return focus to menubar
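As a heavily simplified sketch of just the Left and Right arrow rows above (the menu item names are illustrative, and pull down menu handling is omitted), a developer might implement the menubar with a roving tabindex. Note that no automated tool can verify these keystrokes behave as the pattern requires:

<ul role="menubar" id="mainmenu">
  <li role="none"><a role="menuitem" tabindex="0" href="/">Home</a></li>
  <li role="none"><a role="menuitem" tabindex="-1" href="/news">News</a></li>
  <li role="none"><a role="menuitem" tabindex="-1" href="/contact">Contact</a></li>
</ul>
<script>
  // Roving tabindex: exactly one menubar item is in the tab order at a time.
  var items = document.querySelectorAll('#mainmenu [role="menuitem"]');
  document.getElementById('mainmenu').addEventListener('keydown', function (e) {
    var index = Array.prototype.indexOf.call(items, document.activeElement);
    if (index === -1) return;
    var next = -1;
    if (e.key === 'ArrowRight') next = (index + 1) % items.length;               // next menubar item
    if (e.key === 'ArrowLeft') next = (index - 1 + items.length) % items.length; // previous menubar item
    if (next !== -1) {
      items[index].setAttribute('tabindex', '-1');
      items[next].setAttribute('tabindex', '0');
      items[next].focus();
      e.preventDefault();
    }
  });
</script>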
In summary, there are many accessibility patterns that attempt to describe success conditions from the end user’s perspective. Since these patterns can be achieved through several technical implementations, automated testing software frequently relies on warnings as an indication that manual testing is required. Since many development teams focus solely on errors, the equally valid warnings are ignored and the patterns are never truly evaluated.
3 Areas Where Accessibility Testing Tools Succeed
There are numerous errors that automated web accessibility testing cannot reliably catch. But the following categories represent some of the roughly thirty percent of accessibility issues that testing software can reliably detect.
1. Page Structure
Each web page must have certain information that either describes the page to the user or provides a technical understanding to assistive technology. WCAG 2.0 defines these structures.
Example: Page Structure
- Unique Page Titles – Each page should have a unique title that informs the user about the intent or content of the page. Accessibility scanning software can easily detect missing or duplicate page titles because the requirement is well documented and static. WCAG Success Criterion 2.4.2 – Page Titled: Web pages have titles that describe topic or purpose. (Level A)
- Page Language Indications – Assistive technology requires each page to declare its language so that content can be read with the proper pronunciation. Once again, language attributes are standard and thus easy to detect if missing or improperly applied. WCAG Success Criterion 3.1.1 – Language of Page: The default human language of each web page can be programmatically determined. (Level A)
- Improper Coding Techniques – Assistive technology requires a web page to follow the rules of the underlying specification. While it is true that there are many technical implementations that cannot be evaluated, there are many specifications where a well-defined standard exists. A common improper coding technique is duplicate identifiers, whereby two objects are accidentally given the same name (see the sketch after this list). WCAG Success Criterion 4.1.1 – Parsing: In content implemented using markup languages, elements have complete start and end tags, elements are nested according to their specifications, elements do not contain duplicate attributes, and any IDs are unique, except where the specifications allow these features. (Level A) WCAG Success Criterion 4.1.2 – Name, Role, Value: For all user interface components (including but not limited to: form elements, links and components generated by scripts), the name and role can be programmatically determined; states, properties, and values that can be set by the user can be programmatically set; and notification of changes to these items is available to user agents, including assistive technologies. (Level A)
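A minimal sketch of these detectable page structures (the title, language, and identifiers are illustrative):

<!DOCTYPE html>
<html lang="en"> <!-- Language of Page: programmatically determinable -->
<head>
  <!-- Unique, descriptive page title -->
  <title>Contact Us - Example University</title>
</head>
<body>
  <!-- A second element reusing id="search" elsewhere on this page
       would be a detectable duplicate-identifier (Parsing) failure -->
  <input id="search" type="text" aria-label="Search the site">
</body>
</html>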
2. Content
Another area where automated testing excels is in identifying user-created inaccessible content. Modern content management systems provide tools that allow non-developers to create and post content directly to the website. Due to the limited nature of these content additions, automated testing tools can generally identify content entry errors.
Example: Content
- Alternate Text for Images – All images must be tagged with a text description that conveys the image content, or marked as decorative using an accepted technique. Frequently, the underlying tools will allow content entry staff to add an image but do not enforce the alternate text rules; however, these basic failures by content entry users are based on easy-to-identify principles and are easily caught (see the markup sketch after this list). WCAG Success Criterion 1.1.1 – Non-text Content: All non-text content that is presented to the user has a text alternative that serves the equivalent purpose, except for the situations listed below. (Level A)
- Tables – Tables that contain data must include markup that distinguishes header rows, which identify the general contents of the associated columns, from data rows, which contain the actual values. Frequently, content entry staff will format table headings to give the appearance of a header, but forget to set the underlying properties relied on by assistive technologies. WCAG Success Criterion 1.3.1 – Info and Relationships: Information, structure, and relationships conveyed through presentation can be programmatically determined or are available in text. (Level A)
- Color Contrast – While there is no substitute for manual testing, automated testing software can evaluate the colors outlined in the style specifications of a website. As such, automated testing software can assess WCAG compliance by pulling the foreground and background color values from the associated Cascading Style Sheets (CSS) and applying the mathematical algorithm provided by the WCAG standard (see the sketch after this list). WCAG Success Criterion 1.4.3 – Contrast (Minimum): The visual presentation of text and images of text has a contrast ratio of at least 4.5:1, with limited exceptions. (Level AA)
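A brief sketch of accessible content markup (the image files and table data are illustrative):

<!-- Informative image: alt text conveys the content -->
<img src="enrollment-chart.png" alt="Bar chart of 2017 enrollment by campus">
<!-- Decorative image: empty alt tells assistive technology to skip it -->
<img src="divider.png" alt="">

<table>
  <tr>
    <!-- th markup, not just bold styling, marks the header row -->
    <th scope="col">Campus</th>
    <th scope="col">Enrollment</th>
  </tr>
  <tr>
    <td>North</td>
    <td>4,200</td>
  </tr>
</table>

And because the contrast calculation is fully specified, a checker can compute it directly. As a sketch of the WCAG 2.0 algorithm: the contrast ratio is (L1 + 0.05) / (L2 + 0.05), where L1 and L2 are the relative luminances of the lighter and darker colors.

<script>
  // Relative luminance per the WCAG 2.0 definition (sRGB channels linearized)
  function relativeLuminance(r, g, b) {
    var c = [r, g, b].map(function (v) {
      v = v / 255;
      return v <= 0.03928 ? v / 12.92 : Math.pow((v + 0.055) / 1.055, 2.4);
    });
    return 0.2126 * c[0] + 0.7152 * c[1] + 0.0722 * c[2];
  }

  function contrastRatio(fg, bg) {
    var l1 = Math.max(relativeLuminance.apply(null, fg), relativeLuminance.apply(null, bg));
    var l2 = Math.min(relativeLuminance.apply(null, fg), relativeLuminance.apply(null, bg));
    return (l1 + 0.05) / (l2 + 0.05);
  }

  // Example: #595959 text on a white background is roughly 7:1,
  // comfortably above the 4.5:1 minimum for normal-size text.
  console.log(contrastRatio([89, 89, 89], [255, 255, 255]).toFixed(1));
</script>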
3. Specialized Rules by Automated Testing Tools
Many automated web accessibility testing systems were created in response to particular needs. As such, individual systems may have more robust evaluation rules in specific areas. These specialized rules will attempt to detect and render a decision on the complex rules outlined in the previous section. We have seen tools that do an exemplary job of evaluating complex structures of one type, but then fall short in others.
Automated tests can reliably evaluate roughly thirty percent of accessibility issues. The types of errors readily detected tend to be page-level issues with well-defined requirements, or content-driven errors.
How to Use Automated Testing in a Robust Accessibility Program
Given the challenges associated with automated testing, where do these tools fall in the lifecycle of a web-based product?
1. Use Tools to Monitor Ongoing Changes in Content for Accessibly Built Environments
There is no substitute for up-front preparation. The majority of the errors mentioned above (conditional rules, complex failures, patterns, and page structure issues) will not be a problem if the website or web application is designed to be accessible from the start. The code for components such as menus, headings, etc., will largely remain static after development. This leaves the automated web accessibility testing software free to monitor what it does best: dynamic changes in content. While anyone with the capability to update content on the website should use an accessibility testing tool to proof their work, having an automated accessibility testing tool continuously monitoring for exceptions is highly valuable. As a simple example, many web platforms will allow graphics to be saved as part of a story without enforcing the alternate text rules. Automated accessibility programs do well at finding and reporting these errors.
2. Use for Risk Mitigation with Multi-Site, Multi-Author Organizations
Organizations such as universities and colleges will have a multitude of websites created by a wide variety of groups. While each group developing a site has the duty to develop an accessible web presence, a continuously running automated accessibility testing program can act as an early warning system for governing bodies. Detectable failures often signal more significant accessibility issues and can be used to prioritize secondary compliance reviews.
Available Tools
There is a wide variety of commercially available automated accessibility testing tools. Obtaining pricing for these tools generally requires the customer to talk with a sales representative and get a quote.
Recently the accessibility community has released free, open source automated testing tools. These tools generally wrap single-page evaluation tools in software that extracts a predetermined number of pages from a site and executes the tests against each page. Currently these open source tools lack the sophistication of their commercial counterparts. Features such as comparisons between scans, marking and preserving false positives, remediation workflows, and management interfaces for scheduling and managing scans are not present in the current iteration of open source software.
One such open source automated accessibility testing tool is TheA11yMachine. This free tool crawls web pages starting from a defined URL and executes a single-page open source accessibility checker, HTML_CodeSniffer, against each one. The results are then combined in a data file that can be displayed in a user-friendly report or imported into secondary systems.
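As an illustration only (the package name and command are taken from the project’s documentation at the time of writing; consult its README for current options), a basic run might look like this:

npm install -g the-a11y-machine
a11ym http://example.org/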
Conclusion
In summary, a robust accessibility compliance program can benefit from automated web accessibility testing programs as part of a risk mitigation strategy. While these programs are limited in their effectiveness at isolating errors within complex features, they are effective at monitoring content-related errors after launch. They also can play a strategic part in monitoring multiple sites in highly decentralized environments such as higher education. And finally, there are a multitude of commercial and open source testing packages with a wide range of features and prices. By understanding the limitations and costs associated with automated accessibility testing packages, organizations can strengthen their existing accessibility programs and create long-term, sustainable, accessible websites and web applications.
What Automated Web Accessibility Testing Tools Have You Used?
Let us know in the comments!
Endnotes
[1] Authoring Practices and Patterns
- WAI-ARIA Authoring Practices 1.1: https://www.w3.org/TR/wai-aria-practices-1.1/
- Web Content Accessibility Guidelines (WCAG) Working Group Web Accessibility Initiative (WAI) Wiki, main page: https://www.w3.org/WAI/GL/wiki/Main_Page
[2] W3C Pattern – Menu
- Web Content Accessibility Guidelines (WCAG) Working Group Using ARIA menus Wiki: https://www.w3.org/WAI/GL/wiki/Using_ARIA_menus
- WAI-ARIA Authoring Practices 1.1, 14 Menu or Menu bar: https://www.w3.org/TR/wai-aria-practices-1.1/#menu
Microassist Accessibility Audit Services
Automated accessibility testing tools catch less than half of known issues. At Microassist, we believe in quality manual testing against recognized standards such as WCAG 2.0/2.1 and Section 508 to ensure accurate audit results. By using the same tools used by individuals in the disability community, our accessibility audit will ensure a complete and thorough analysis of accessibility challenges. Learn more about our accessibility audit services.