@robdodson
Created January 4, 2018 02:44
Skimmed this thread, so perhaps much of this was mentioned in some form or another before.
I would say testing against screen reader output is subject to all of the same issues that interactive UI testing suffers from.
That doesn't mean we shouldn't do it, but in at least four instances where I've seen it tried, it never worked well. The tests were always flaky, needed constant re-baselining, and were generally a net negative.
AOM (the Accessibility Object Model) is still under active development, so it seems, at least for now, a poor target.
Chrome does indeed have an accessibility tree, though we don't make it especially easy to introspect. The easiest way currently is the chrome.automation extension API (which is only available on dev channel). It has eventing and is what ChromeVox is built on.
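To make the introspection idea concrete, here's a minimal sketch. The `dumpTree` helper is hypothetical and works on any plain object with `role`, `name`, and `children`, which is the shape chrome.automation nodes expose; the commented-out call shows roughly how you'd wire it up inside an extension (requires the `"automation"` manifest permission and, at the time of writing, dev channel).

```javascript
// Formats an accessibility-tree node and its descendants as indented
// "role: name" lines. Accepts any { role, name, children } object, the
// shape chrome.automation's AutomationNode exposes.
function dumpTree(node, depth = 0) {
  const line = '  '.repeat(depth) + node.role + ': ' + (node.name || '');
  const childLines = (node.children || []).map(c => dumpTree(c, depth + 1));
  return [line].concat(childLines).join('\n');
}

// Inside an extension with the "automation" permission (dev channel only),
// something like this would print the active tab's accessibility tree:
// chrome.automation.getTree(root => console.log(dumpTree(root)));
```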
Testing against this tree and making strong assertions against it for your given page makes the most sense to me.
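As a sketch of the kind of strong assertion I mean, the helpers below (both hypothetical, operating on plain `{ role, name, children }` node objects rather than live chrome.automation nodes) fail a test unless the page exposes a node with an exact role and accessible name:

```javascript
// Returns the first node in an accessibility tree matching the given role
// and (optionally) accessible name, or null if none exists.
function findNode(node, role, name) {
  if (node.role === role && (name === undefined || node.name === name)) {
    return node;
  }
  for (const child of node.children || []) {
    const match = findNode(child, role, name);
    if (match) return match;
  }
  return null;
}

// A strong assertion about how a page is exposed: e.g. the submit control
// must surface as a button named "Subscribe", not a nameless clickable div.
function assertExposed(root, role, name) {
  if (!findNode(root, role, name)) {
    throw new Error(`Expected a ${role} named "${name}" in the accessibility tree`);
  }
}
```

Because assertions like these target the tree rather than spoken output, they stay stable across screen reader verbosity settings and phrasing changes.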
You can see how a screen reader uses such a tree in reaction to events by looking at either ChromeVox or NVDA source. ChromeVox is likely more approachable.