@StommePoes
Last active September 7, 2019 10:11
(okay gists are hard for me to read. So I've got this on a web page with actual styles: https://stommepoes.nl/live_regions.html)
I've been having thoughts about issues I have with live regions and other automated, flattened text strings. First, my whines and complaints, then an idea I'd like to know if it could be worked out.
THE COMPLAINT
Live regions are messed up. So far, they appear to only be made available to SR (screen reader) users. Not Braille-only SR users though, nor screen magnification users. Nor anyone who needs to focus intently on the specific section of the page or app where they are performing tasks.
I similarly have issues with `aria-labelledby`, `aria-describedby` and `aria-label`. Getting these things to an SR user often seems to require keyboard focus on the element being named or described. It's not available to anyone else. The text is just read out: no way to go word-by-word, check spelling, etc (unless the text being referenced in the DOM can be reached with virtual cursors).
A React dev posted <a href="https://twitter.com/ProvablyFlarnie/status/1166141795652141059">a tweet (with thread and replies)</a> about live regions going silent if/when user focus moves, and it reminded me how often we tell developers to rely on live regions for screen reader users. In every ticket where I recommend this, I have to add the warning that Braille-only and screen magnification users will be left out. Sometimes, when hijacking focus is doable in the user flow, I recommend that instead: it brings the text to everyone's attention, and it can be consumed however the user wants. But in many cases the user is busy doing things, and hijacking the focus is not cool and cannot be recommended. Moving focus to a toast, like a success message, is almost never a good idea.
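For reference, here's how small the live-region setup we keep recommending actually is. A minimal sketch (the id, the helper name, and the wiring are mine, not from the thread); the comments note who gets left out:

```javascript
// Assumes the page contains something like:
//   <div id="status" role="status" aria-live="polite"></div>
// (all names here are illustrative, not from any particular codebase)
function announce(region, text) {
  // Screen readers tracking the live region speak the new text...
  region.textContent = text;
  // ...but Braille-only and magnification users get nothing, and the
  // announcement can be dropped if focus moves at the wrong moment.
}
```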
And tooltips! The whole idea of tooltips is broken for everyone but sighted, non-magnified desktop mouse users.
The tooltip pattern, for example, often suggests `aria-describedby` on the trigger element (pointing to the tip text). This means the screen reader user gets the blast of the flattened text (of course they can cancel it) the moment they reach the element, but... this goes against the Whole! Purpose! of a tooltip.
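For concreteness, the markup that pattern usually produces looks something like this (built as a string here so the attributes are easy to eyeball; the ids and copy are placeholders of mine):

```javascript
// The usual aria-describedby tooltip wiring, generated as a string
// (ids and text are placeholders, not from any spec example).
function tooltipMarkup(tipId, triggerLabel, tipText) {
  return [
    // Trigger: its accessible description is the flattened tip text,
    // announced the moment the user lands on it.
    `<button type="button" aria-describedby="${tipId}">${triggerLabel}</button>`,
    // The tip itself, revealed on hover/focus.
    `<span role="tooltip" id="${tipId}" hidden>${tipText}</span>`,
  ].join("\n");
}
```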
What's a tooltip? Some designer or someone wanted to have some additional information, but it's HIDDEN! Why? To make the interface more palatable for human coga (cognitive) reasons? Because a majority of the end-users don't need the extra info? And so the idea is, users can CHOOSE to see that additional info.
Now, if you're a keyboard-only user and you see the tooltip on focus, you didn't choose squat. You could cancel it with Esc (if built correctly, oh but what if we're in a modal dialog or similar? Now we have stacked Escape listeners in our JS! This is possible, but easy to screw up), but it's thrown in your face. What did you gain as a user, as opposed to that content simply sitting out in plain view in the first place?
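About those stacked Escape listeners: one way a page can keep them straight is a LIFO stack, so the innermost layer (the tooltip above a modal dialog) gets first claim on the key. A sketch, all names my own:

```javascript
// Each open layer (dialog first, then tooltip) pushes its close
// function; Escape only ever dismisses the topmost one.
const escapeStack = [];

function pushEscapeHandler(close) {
  escapeStack.push(close);
}

function handleEscape() {
  // Only the topmost layer closes; anything underneath stays open.
  const top = escapeStack.pop();
  if (top) top();
}

// In the page this would hang off a single keydown listener, e.g.:
// document.addEventListener("keydown", (e) => {
//   if (e.key === "Escape") handleEscape();
// });
```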
Meanwhile in your screen reader, maybe there's a word or name in that flattened text that makes little sense. But you can't spell live stuff, nor strings called by `aria-label/labelledby/describedby`, or go word-by-word, or any of that. Not unless you can find the actual text in the DOM. And anyway, if this wasn't a focusable control but just an icon with a tooltip, was `aria-describedby` even heard just because you floated by while reading with a virtual cursor?
Error message pattern: sometimes it's recommended to announce an input's error message via a live region on `blur` (sensibly, since only after the user has left can the input tell whether it's been given valid or invalid info), but that `blur` event means the user has Tabbed away. They are now focused on something else. If it's another field, they would and should expect to immediately hear the new label and other info of the next field.
So when do they hear the error of the field they just left? Some people put `tabindex="0"` on their errors so the error message itself is the next-Tabbed-to element, but the timing can be tricky. You don't want an empty focusable after every field, and the error container can't be filled (or the container itself added to the DOM) until AFTER we know the field is in error, so by the time the user's `blur` is registered, the browser's already put them on the next field or button.
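To make the timing problem concrete, here's a sketch (the field, messages, and helper names are all invented). The check itself is a pure function; the commented-out wiring shows where the race bites:

```javascript
// Invented validation for an email field; returns null when valid.
function emailError(value) {
  return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value)
    ? null
    : "Please enter an email address like name@example.com.";
}

// Illustrative wiring (not runnable here):
// emailInput.addEventListener("blur", () => {
//   const message = emailError(emailInput.value);
//   if (message) showError(message); // but focus has ALREADY moved on,
//   // so a tabindex="0" error container inserted now sits behind the user.
// });
```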
The React twitterer has used setTimeouts. It probably works most of the time, but it is kinda brittle.
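As I understand the workaround, it's roughly this shape (the helper name and the delay value are my guesses; the "right" delay varies by screen reader and browser pairing, which is exactly why it's brittle):

```javascript
// Empty the live region, then set the text a beat later so the
// screen reader registers it as a fresh change.
function announceLater(region, text, delay = 100) {
  region.textContent = "";          // clear synchronously
  setTimeout(() => {                // ...then re-populate after a pause
    region.textContent = text;
  }, delay);
}
```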
I think some of these automatic-reading-of-things are kinda broken. And when Christmas comes, I think I'd like something a bit different (note: my original idea was like the SYN/ACK back and forth, but I've changed it to ping/ask):
THE IDEA
First, the industry is really ignoring earcons (or system bells). The exception seems to be chat programs: they make little beeps and boops.
Suppose we had a (typed) character, and/or an event, coupled with an earcon. Suppose that instead of doing what we do now with accessible descriptions and errors and live regions, we sent users (nearly all users) a little notification (a browser setting could turn this off) consisting of a single event (on Braille, a single character?) coupled with a single sound or system bell.
Then, with this event (in my head a sort of ping), the user could choose to reply with an "ask" — something specific that means "the user is asking" and this "ask" exposes the message. I don't mean that we'd also be hiding visual messages like errors, tooltips, or toasts, but that there's some way for users to 1) be alerted there is new info and 2) allow them to request it at their leisure.
This idea could get around interruptions in general, and it specifically gets out of the way of Braille users, which in turn means Braille users could get live notifications.
It also could fix my biggest pet peeve about the tooltip design. The current design has no "ask" for SR or keyboard-only users (though this is why I feel tooltips should always be part of an activatable button/control and never JUST some static thing you can hover), but this ping-ask setup could offer one.
[Some element] has an accessible description. On focus or the appearance of the virtual cursor, the ping/event/earcon/short notification is sent. User can ask or ignore.
User blurs an input field. System sends a ping, not enough to interrupt the reading of the next input's label. User can ask or ignore.
User is typing in a field with a character limit. The remaining character count could send a single ping when the `onInput` event stops (ok I dunno if Dragon triggers `onInput` events). User can "ask" for the current remaining character count.
Additionally, wouldn't it be good if users could perform the "ask" as often as they want? So some dev throws in a damn complicated accessible description with a huge pile of info (think complex password rules!), and users could, without moving focus back and forth, use their "ask" key to get it reread to them as many times as they want, so long as it's the last-created message (meaning users do not have a history of messages they can navigate: no queue, they can only get the last message sent).
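The whole ping/ask flow fits in a tiny state machine. A sketch (`PingChannel`, `ping`, and `ask` are names I made up; nothing here is a real AT or browser API): one slot, no queue, and the ask is repeatable:

```javascript
// One retained message, no history: a new ping overwrites the old one.
class PingChannel {
  constructor(earcon = () => {}) {
    this.earcon = earcon;     // plays the short sound / system bell
    this.lastMessage = null;  // the single retained message
  }
  ping(message) {
    this.lastMessage = message; // overwrite; no queue to navigate
    this.earcon();              // notify without speaking the text
  }
  ask() {
    // The user explicitly requests the text, at their leisure,
    // as many times as they like.
    return this.lastMessage;
  }
}
```

A blur error, a toast, and a remaining-character count would all ping the same channel; asking twice rereads those complex password rules without moving focus anywhere.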
Is any of this feasible in any way?
I'd love to see thoughts from those involved with building AT (assistive technologies) and the nerds deep in the ARIA specs.