@patrickhlauke · Last active August 11, 2016
Mobile A11y TF - rough proposal for further input-related SCs

2.1 Keyboard Accessible [leave as is, perhaps tweaking/expanding some non-normative wording]

2.1.1 Keyboard

2.1.2 No Keyboard Trap

2.1.3 Keyboard (No Exception)

2.5 Pointer Accessible

2.5.1 Pointer: All functionality can be operated with pointer inputs. If functionality requires text entry, a mechanism is available for users to enter the text using pointer inputs (for instance, through an on-screen keyboard). (Level A)

Note 1: In most cases, the on-screen keyboard will already be provided by the operating system or user agent.

[Editorial comment: WCAG 2.0 currently makes the silent assumption that content will of course be designed/built to work with pointer inputs (generally, a mouse); this is not always a given, and for completeness it should be stated explicitly. Needs cautious wording to ensure we're not requiring authors to build their own on-screen keyboards etc. - probably through use of "a mechanism is available" language, which would then cover situations where the OS/UA itself provides an on-screen keyboard (see discussion https://lists.w3.org/Archives/Public/public-mobile-a11y-tf/2016Jul/0014.html)]
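
As a non-normative illustration, a minimal sketch of retrofitting pointer support onto a hypothetical keyboard-only custom control (the element IDs and widget are assumptions, not part of the proposal):

```js
// Hypothetical widget: a custom disclosure control that was originally
// keyboard-only. Wiring the same function to 'click' makes the
// functionality operable with pointer inputs as well.
const toggle = document.querySelector('#expand-toggle'); // hypothetical, e.g. <span role="button" tabindex="0">

function toggleSection() {
  const expanded = toggle.getAttribute('aria-expanded') === 'true';
  toggle.setAttribute('aria-expanded', String(!expanded));
  document.querySelector('#section').hidden = expanded;
}

// Existing keyboard support (Enter/Space on the focused control)...
toggle.addEventListener('keydown', (e) => {
  if (e.key === 'Enter' || e.key === ' ') {
    e.preventDefault();
    toggleSection();
  }
});

// ...plus pointer support. (A native <button> would give both for free
// via a single 'click' handler.)
toggle.addEventListener('click', toggleSection);
```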

2.5.2 Target Size: The size of the target in relation to the visible display at the default viewport size is at least: (Level AA)

[Editorial comment: as already written/finalised]

2.5.3 Pointer gestures: All functionality can be operated without requiring precise/timed pointer gestures or multi-pointer gestures (Level A)

Note 1: Examples of a multi-pointer gesture include a two-finger pinch/zoom.

Note 2: This requirement applies to web content which interprets pointer gestures (i.e. this does not apply to gestures that are required to operate the user agent or assistive technology).

Understanding: It may not always be possible for users to perform specific gestures (e.g. drawing a complex path with their finger on a touchscreen) in a precise and timely manner - they may lack the necessary precision, accuracy or speed. Further, it may not always be possible for users to perform multi-pointer gestures (e.g. a two-finger pinch/zoom, or a three-finger rotation).

How to meet/techniques: don't rely solely on pointer gestures that require high precision or specific timings; don't rely on multi-pointer gestures; provide alternatives that do not require gestures (e.g. additional visible controls that perform the same/similar action as a quick flick/swipe) and alternatives that only require a single pointer, rather than multi-pointer.
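
A rough sketch of the "provide alternatives" technique - the viewer object and its onPinch/zoom methods are hypothetical placeholders, not a real API:

```js
// Hypothetical image/map viewer: pinch-to-zoom kept as a convenience,
// with single-pointer controls providing the same functionality.
const viewer = createViewer('#map'); // hypothetical helper

// Multi-pointer gesture as an enhancement...
viewer.onPinch((scale) => viewer.zoom(scale)); // hypothetical method

// ...backed by plain, single-activation alternatives:
document.querySelector('#zoom-in').addEventListener('click', () => viewer.zoom(1.25));
document.querySelector('#zoom-out').addEventListener('click', () => viewer.zoom(0.8));
```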

2.5.4 (No) accidental activation: For pointer activation, at least one of the following is true: (Level A)

  • Activation is on the up-event, either explicitly or implicitly as a platform's generic activation/click event;
  • A mechanism is available that allows the user to choose the up-event as an option;
  • Confirmation is provided, which can dismiss activation;
  • Activation is reversible; or
  • Timing of activation is essential and waiting for the up-event would invalidate the activity.

[Editorial comment: previously 2.6.5, but conceptually fits more under the proposed 2.5 here - as already outlined, prefer listening to the up event, either explicitly or by using high-level events like click]
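
To illustrate the difference between down-event and up-event activation (non-normative; the control and deleteItem function are hypothetical):

```js
const button = document.querySelector('#delete-item'); // hypothetical control

// Avoid: activation on the down-event. The user cannot abort by sliding
// their pointer/finger off the target before releasing.
// button.addEventListener('pointerdown', deleteItem);

// Prefer: the platform's generic activation event. 'click' fires on the
// up-event, so moving off the target before release cancels activation.
button.addEventListener('click', deleteItem);

function deleteItem() { /* hypothetical action */ }
```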

2.6 Inputs with Assistive Technology

[Editorial comment: 2.6.1/2.6.2/2.6.3 say roughly the same thing, but for different inputs with AT - they could possibly be combined?]

2.6.1 Keyboard with AT (that remaps key input): All functionality tied to a keyboard input can be operated when assistive technologies that remap keyboard controls are present. (Level AA)

Understanding: It is not always possible for users to activate arbitrary keys on a keyboard, as these may be reserved for commands to operate an assistive technology. Assistive technologies may intercept key presses before they reach the content (e.g. JavaScript on a webpage), except in very specific circumstances (such as when a text entry field receives focus).

How to meet/techniques: content should not rely on the user's ability to arbitrarily press keys (e.g. don't attach a keydown/keyup/keypress handler to the <body> itself); ensure functionality hangs off focusable controls; rely on focus/blur/click events; key presses can be listened for, but only when focus is in a traditional input/text entry field (as there the AT doesn't interfere with key presses), OR use role="application" (where you indicate to the AT that you will handle all keyboard input directly, suppressing the AT's own reading of keys etc.).
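
A minimal sketch of the "only listen for keys in a text entry field" technique (the #search field and runSearch function are assumptions):

```js
// A global handler like the following is likely to be starved of events
// when AT remaps key input:
//
//   document.body.addEventListener('keydown', handleShortcut);
//
// Safer: hang functionality off focusable controls, and only listen for
// raw key presses inside a text entry field, where AT generally passes
// keys through to the content.
const search = document.querySelector('#search'); // hypothetical <input type="text">

search.addEventListener('keydown', (e) => {
  if (e.key === 'Enter') runSearch(search.value); // hypothetical function
});
```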

2.6.2 Touch with Assistive Technology: All functionality can be operated without requiring specific touchscreen gestures (Level A)

Understanding: Assistive technologies such as screen readers on a touchscreen device remap gestures as commands for the AT. It is not always possible for users to perform gestures that will be passed on to the content, as assistive technologies intercept gestures before they reach the content (e.g. JavaScript on a webpage).

[Editorial comment: potentially, this could be combined with 2.5.3 into a strong "don't rely on gestures of any kind" - however, keeping this split out into two SCs, both at Level A, may work out best to cover situations where a particular user agent/device does not have any AT, but still operates using gestures on a touchscreen, such as a point-of-sale or ATM device]

How to meet/techniques: content should not rely on the user's ability to perform specific gestures; provide controls that can be operated by simple focus/activation.
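
As a rough sketch (the swipe helper and removeItem function are hypothetical), a swipe action backed by an explicit focusable control:

```js
document.querySelectorAll('li.item').forEach((item) => {
  // Gesture path - may never fire while a screen reader is running,
  // as the AT intercepts the gesture first:
  attachSwipeHandler(item, () => removeItem(item)); // hypothetical helper

  // Focus/activation path - reachable by moving the accessibility focus
  // and activated by the AT's generic activation command:
  const btn = document.createElement('button');
  btn.textContent = 'Delete';
  btn.addEventListener('click', () => removeItem(item)); // hypothetical function
  item.appendChild(btn);
});
```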

2.6.3 Input-agnostic: All functionality can be operated without requiring any particular type of input mechanism (Level AA)

[Editorial comment: previously titled "Additional inputs reliant on assistive technologies [needs sexier title]"; this likely still needs consideration/rewording]

Understanding: There are many input scenarios where assistive technologies interpret user interactions and translate them into user agent directives, such as moving focus to an element or activating an element, without emulating any traditional type of input such as a keyboard or mouse (i.e. no "fake" key events or mouse events are being generated, so the interaction cannot be detected/covered by input-specific event handling like keydown/keyup/keypress or mousedown/mouseup/mousemove). Examples include speech control software, gesture recognition via a video camera, etc.

How to meet/techniques: can only rely on high-level, input-agnostic events like focus/blur/click. Reference IndieUI Events, though this seems inactive at this point.
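
An illustration of why the high-level event is the safe hook (the element ID and saveDocument function are assumptions):

```js
// Speech control ("click Save"), switch access, keyboard (Enter/Space)
// and pointer input all funnel into this one input-agnostic event -
// no keydown/mousedown-specific handling required.
const save = document.querySelector('#save'); // hypothetical <button>
save.addEventListener('click', () => saveDocument()); // hypothetical function
```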

2.6.4 Turn off shortcuts: If shortcut keys are used on the web page, a mechanism is available to turn them off (Level A)

Understanding: Speech users can inadvertently activate custom controls simply by dictating a phrase that contains a letter which has been assigned as a shortcut to a custom control.

[Editorial comment: see https://www.w3.org/WAI/GL/mobile-a11y-tf/wiki/Speech_Input_Accessibility_(Guideline_2.7) ]
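
A minimal sketch of such an opt-out mechanism, assuming a hypothetical settings checkbox, storage key and single-key shortcut:

```js
// User preference, persisted across visits:
let shortcutsEnabled = localStorage.getItem('shortcuts') !== 'off';

document.querySelector('#shortcut-toggle').addEventListener('change', (e) => {
  shortcutsEnabled = e.target.checked;
  localStorage.setItem('shortcuts', shortcutsEnabled ? 'on' : 'off');
});

document.addEventListener('keydown', (e) => {
  if (!shortcutsEnabled) return; // user has turned shortcuts off
  if (e.key === 's') markAsRead(); // hypothetical single-key shortcut
});
```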

2.6.5 Customized shortcut keys: A mechanism is available to customize keyboard shortcut keys, allowing for a string of up to 20 characters to be assigned to the shortcut (Level AA)

Understanding: Speech users can inadvertently activate custom controls simply by dictating a phrase that contains a letter which has been assigned as a shortcut to a custom control. This requirement allows them to assign a phrase to the shortcut instead, significantly reducing the possibility of accidental activation.
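
A rough sketch of such a mechanism (storage key, default value and archiveMessage are hypothetical): typed characters are buffered and compared against a user-assigned string, so a single stray letter no longer triggers the action:

```js
const MAX_LENGTH = 20;
let shortcut = (localStorage.getItem('archive-shortcut') || 'xz').slice(0, MAX_LENGTH);
let buffer = '';

document.addEventListener('keydown', (e) => {
  if (e.key.length !== 1) return;               // ignore modifier/navigation keys
  buffer = (buffer + e.key).slice(-MAX_LENGTH); // keep only the most recent characters
  if (buffer.endsWith(shortcut)) {
    buffer = '';
    archiveMessage();                           // hypothetical action
  }
});
```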

2.6.6 No Focus Trap (Level A)

[Editorial comment: a generalisation of 2.1.2 - the rationale/understanding is the same; it differs only in the techniques/how to meet, since plain 2.1.2 considers the keyboard alone; prime candidate for merging with 2.1.2 by the WG]

Rationale: this is the more generalised equivalent of 2.1.2, covering the scenario of inputs with AT. E.g. when using screen reader + keyboard + AT that remaps keys, there may be situations where the author provides a custom key to exit a particular dialog/input/widget, but that specific key is intercepted by the AT. In the touch + AT scenario, the author may have built something that reacts specifically to a custom key, or listens for a particular element receiving focus - but touch + AT doesn't allow arbitrary keys to be pressed AND doesn't always fire focus events (see for instance Android 6/Chrome/TalkBack https://patrickhlauke.github.io/touch/tests/results/#mobile-tablet-touchscreen-assistive-technology-events).

How to meet/techniques: avoid trapping focus and then requiring custom/non-standard methods to leave a particular widget (e.g. a custom key to be pressed); in the case of dialogs or similar, provide an explicit focusable control to return to normal focus operation (i.e. to close the dialog and return to the underlying page)
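
A minimal sketch of a dialog that offers an explicit way out (element IDs are assumptions; Esc is kept only as a convenience):

```js
const dialog = document.querySelector('#dialog');      // hypothetical dialog container
const opener = document.querySelector('#open-dialog'); // hypothetical triggering control

function closeDialog() {
  dialog.hidden = true;
  opener.focus(); // return focus to the control that opened the dialog
}

// A custom key can be offered as a convenience...
dialog.addEventListener('keydown', (e) => {
  if (e.key === 'Escape') closeDialog();
});

// ...but the explicit, focusable control is what guarantees a way out
// when key presses are intercepted by AT:
document.querySelector('#close-dialog').addEventListener('click', closeDialog);
```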

[Editorial comment: Detlev's additional point - still not sure there can be a genuine pointer trap that is not something that freezes the app entirely; at least the four-finger tap to move focus to the start or end, or reverse swiping, seem to work most of the time - would be good to document cases]

2.7 Additional sensor inputs

2.7.1 Pointer inputs with additional sensors: All functionality can be operated without requiring pointer information beyond screen coordinates (Level A)

Note 1: Additional sensor information includes pressure (for instance on pressure-sensitive touch screens), tilt or twist (for instance on advanced pen/stylus digitizers).

Understanding: Some pointer input devices provide sensors to detect - beyond simple x/y screen coordinates - additional values such as twist, tilt, pressure. Not all users may have these advanced pointer input devices (e.g. users may have a touch screen, but not a pressure-sensitive touch screen), OR they may have the device, but may be unable to operate the advanced functionality (at all, or precisely enough).

How to meet/techniques: functionality/content must not solely rely on advanced pointer input information (i.e. an alternative that does not require advanced sensors, but only plain x/y screen coordinate information, must be available).
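
For illustration, a sketch of a hypothetical drawing widget where pressure (exposed by the Pointer Events API as event.pressure) only enhances the interaction, while the core functionality needs nothing beyond x/y coordinates:

```js
const canvas = document.querySelector('#sketchpad'); // hypothetical element

canvas.addEventListener('pointermove', (e) => {
  if (e.buttons === 0) return; // only draw while the pointer is down
  // Hardware that cannot sense pressure reports e.pressure as 0.5,
  // so a usable default line width falls out automatically:
  const width = 1 + e.pressure * 4;
  drawSegment(e.offsetX, e.offsetY, width); // hypothetical function
});
```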

[Editorial comment: this may be seen as a deepening/clarification of 2.5.1; also, 2.1.1/2.1.3 are of course also still valid, so beyond dropping down from "advanced" to "non-advanced" pointer, functionality must also be operable with just the keyboard]

2.7.2 Device sensors: All functionality can be operated without requiring specific device sensor information (Level A)

Note 1: Device sensor information includes tilt, orientation and proximity.

Understanding: Devices may have additional sensors that act as inputs - e.g. tilt/orientation sensors on a mobile/tablet, allowing the user to control something by simply changing the orientation of the device in space. Not all devices have these sensors, OR the device may have the sensors but the user may be unable to operate them (at all, or precisely enough).

How to meet/techniques: functionality/content must not solely rely on device sensor inputs (i.e. an alternative that does not require the user to physically manipulate their device/use these sensor inputs must be available).
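
A rough sketch, assuming a hypothetical game with a steer() action: device tilt (via the standard deviceorientation event) acts as an enhancement, while on-screen buttons provide the sensor-free alternative:

```js
// Tilt control as an enhancement:
window.addEventListener('deviceorientation', (e) => {
  if (e.gamma !== null) steer(e.gamma / 90); // left/right tilt, roughly -90..90 degrees
});

// Alternative requiring no sensors at all:
document.querySelector('#left').addEventListener('click', () => steer(-0.5));
document.querySelector('#right').addEventListener('click', () => steer(0.5));

function steer(amount) {
  /* hypothetical - move the player left/right by the given amount */
}
```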

[Editorial comment: in light of 2.1.1/2.1.3/2.5.1/2.7.1, this then creates a whole series of required modes of operation for functionality]

@patrickhlauke (Author):
Need to add what we discussed with speech in last week's meeting

@patrickhlauke (Author):
Updated the gist - moved some of the proposed SCs around, actually expanded them to be more SC-like in their title, assigned initial Level (A/AA), added David's initial stab at Kim's speech-specific SCs relating to keyboard shortcuts
