Help the user by automating where there is an obvious need
This section is about automation issues, but not all about avoiding automation. In some cases, automation can be helpful. The following example is about one such case.
Example: Sorry, off route; you lose!
No matter how good your GPS system is, as a human driver you can still make mistakes and drive off course, deviating from the route planned by the system. The Garmin GPS units are very good at helping the driver recover and get back on route. It recalculates the route from the current position immediately and automatically, without missing a beat. Recovery is so smooth and easy that it hardly seems like an error.
Before this kind of GPS, in the early days of GPS map systems for travel navigation, there was another system developed by Microsoft, called Streets and Trips. It used a GPS receiver antenna plugged into a USB port in a laptop. The unit had one extremely bad trait. When the driver got off track, the screen displayed the error message, Off Route! in a large bright red font.
Somehow you just had to know that you had to press one of the F, or function, keys to request recalculation of the route in order to recover. When you are busy
contending with traffic and road signs, that is the time you would gladly have the system take control and share more of the responsibility, but you did not get that help. To be fair, this option probably was available in one of the preference settings or other menu choices, but the default behavior was not very usable and this option was not discovered very easily.
Designers of the Microsoft system may have decided to follow the design guideline to "keep the locus of control with the user." While user control is often the best thing, there are times when it is critical for the system to take charge and do what is needed. The work context of this UX problem includes:
• The user is busy with other tasks that cannot be automated.
• It is dangerous to distract the user/driver with additional workload.
• Getting off track can be stressful, detracting further from the focus.
• Having to intervene and tell the system to recalculate the route interferes with the user's most important task, that of driving.
Another way to interpret these twin guidelines about automation is to keep the user in control at higher task levels, where the user has done the initial planning and is driving to get somewhere. But take control from the user when the need is obvious and the user is busy.
This interpretation of the two guidelines means that, on one hand, the system does not insist on staying on this route regardless of driver actions, but quietly allows the driver to make impromptu detours. This interpretation also means that, on the other hand, the system should be expected to continue to recalculate the route to help the driver eventually reach his or her destination.
22.9 ASSESSMENT
Assessment guidelines support users in understanding information displays of outcome results and other feedback about outcomes, such as error indications. Assessment, along with translation, is one of the
places in the Interaction Cycle where cognitive affordances play a primary role.
22.9.1 System Response
A system response can contain:
• feedback, information about course of interaction so far
• information display, results of outcome computation
• feed-forward, information about what to do next.
As an example, consider this message: "The value you entered for your name was not accepted by the system. Please try again using only alphabetic characters."
• The first sentence, "The value you entered for your name was not accepted by the system," is feedback about a slight problem in the course of interaction and is an input to the assessment part of the Interaction Cycle.
• The second sentence, "Please try again using only alphabetic characters," is feed-forward, a cognitive affordance as input to the translation part of the next iteration within the Interaction Cycle.
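One way to keep these three roles distinct in an implementation is to model a system response as a small structure with a separate field for each. The following TypeScript sketch is only illustrative; the SystemResponse type and the buildNameError helper are our own invented names, not part of any framework.

```typescript
// A minimal sketch of a system response that separates feedback,
// information display, and feed-forward. Type and field names are
// hypothetical, chosen only to mirror the three parts listed above.
interface SystemResponse {
  feedback: string;        // course of interaction so far
  infoDisplay?: string;    // results of outcome computation, if any
  feedForward?: string;    // what the user can do next
}

function buildNameError(): SystemResponse {
  return {
    feedback: "The value you entered for your name was not accepted by the system.",
    feedForward: "Please try again using only alphabetic characters.",
  };
}

// Rendering keeps the assessment (feedback) and translation
// (feed-forward) roles visible to the designer.
const response = buildNameError();
console.log(
  [response.feedback, response.infoDisplay, response.feedForward]
    .filter(Boolean)
    .join(" ")
);
```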
Figure 22-64
The assessment part of the Interaction Cycle.
Figure 22-65
Existence of feedback, within assessment.
22.9.2 Assessment of System Feedback
Figure 22-64 highlights the assessment part of the Interaction Cycle.
Feedback about errors and interaction problems is essential in supporting users in understanding the course of their interactions. Feedback is the only way users will know if an error has occurred and why. There is a strong parallel between assessment issues about cognitive affordances as feedback and translation issues about cognitive affordances as feed-forward, including existence of feedback when it is needed, sensing feedback through effective presentation, and
understanding feedback through effective representation of content and meaning.
22.9.3 Existence of Feedback
In Figure 22-65 we highlight the "existence of feedback" portion of the assessment part of the Interaction Cycle.
The "existence of feedback" portion of the assessment part of the Interaction Cycle is about providing necessary feedback to support users' need to know whether the course of interaction is proceeding toward meeting their planning goals.
Provide feedback for all user actions
For most systems and applications, the existence of feedback is essential for users; feedback keeps users on track. One notable
exception is the Unix operating system, in which no news is always good news. No feedback in Unix means no errors. For expert users, this tacit positive feedback is efficient and keeps out of the way of high-powered interaction. For most users of most other systems, however, no news is just no news.
Provide progress feedback on long operations
For a system operation requiring significant processing time, it is essential to inform the user when the system is still computing. Keep users aware of function or operation progress with some kind of feedback as a progress report, such as a percent-done indicator.
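For example, a long-running operation can report a percent-done indicator through a progress callback so the user interface can keep the user informed. This is a minimal, hypothetical sketch (the packRecords name, record counts, and delays are invented for illustration):

```typescript
// Minimal sketch: a long operation reports percent-done through a
// callback so the UI layer can keep the user informed. Names and
// timings are hypothetical.
async function packRecords(
  totalRecords: number,
  onProgress: (percentDone: number) => void
): Promise<void> {
  for (let i = 0; i < totalRecords; i++) {
    // Simulate work on one record.
    await new Promise((resolve) => setTimeout(resolve, 5));
    if (i % 100 === 0 || i === totalRecords - 1) {
      onProgress(Math.round(((i + 1) / totalRecords) * 100));
    }
  }
}

// Usage: the user sees steady progress instead of an apparently hung system.
packRecords(1000, (p) => console.log(`Pack operation: ${p}% complete`));
```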
Example: Database system not helpful about progress in Pack operation
Consider the case of a user of a dBASE-family database application who had been deleting lots of records in a large database. He knew that, in dBASE applications, "deleted" records are really only marked for deletion and can still be undeleted until a Pack operation is performed, permanently removing all records marked for deletion.
At some point, he did the Pack operation, but it did not seem to work. After waiting what seemed like a long time (about 10 seconds), he pushed the Escape key to get back control of the computer and things just got more confusing about the state of the system.
It turns out that the Pack operation was working, but there was no indication to the user of its progress. By pushing the Escape key while the system was still performing the Pack function, the user may have left things in an indeterminate state. If the system had let him know it was, in fact, still doing the requested Pack operation, he would have waited for it to complete.
Request confirmation as a kind of intervening feedback
To prevent costly errors, it is wise to solicit user confirmation before proceeding with potentially destructive actions.
But do not overuse and annoy
When the upcoming action is reversible or not potentially destructive, the annoyance of having to deal with a confirmation may outweigh any possible protection for the user.
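These two guidelines can be reconciled by asking for confirmation only when an action is both destructive and irreversible. The sketch below is a hypothetical illustration; the Action shape and prompt wording are our own assumptions.

```typescript
// Sketch: ask for confirmation only when the action is both destructive
// and not reversible; otherwise just perform it. Names are hypothetical.
interface Action {
  label: string;
  destructive: boolean;
  reversible: boolean;
  run: () => void;
}

function performWithOptionalConfirm(
  action: Action,
  askUser: (prompt: string) => boolean
): void {
  const needsConfirm = action.destructive && !action.reversible;
  if (!needsConfirm || askUser(`${action.label} cannot be undone. Proceed?`)) {
    action.run();
  }
}

// Example: deleting a draft (reversible via undo) would run immediately,
// while emptying the trash asks first.
performWithOptionalConfirm(
  {
    label: "Empty trash",
    destructive: true,
    reversible: false,
    run: () => console.log("Trash emptied"),
  },
  () => true // stand-in for a real confirmation dialog
);
```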
22.9.4 Presentation of Feedback
Figure 22-66 highlights the "presentation of feedback" portion of the assessment part of the Interaction Cycle.
This portion of the assessment part of the Interaction Cycle is about supporting user sensing, such as seeing, hearing, or feeling, of feedback with effective design of feedback presentation and appearance.
Presentation of feedback is about how feedback appears to users, not how it conveys meaning. Users must be able to sense (e.g., see or hear) feedback before it can be useful to them.
Figure 22-66
Presentation of feedback, within assessment.
Support user with effective sensory affordances in presentation of feedback
Feedback visibility
Obviously feedback cannot be effective if it cannot be seen or heard when it is needed.
Make feedback visible
It is the designer's job to be sure each instance of feedback is visible when it is needed in the interaction.
Feedback noticeability
Make feedback noticeable
When needed feedback exists and is visible, the next consideration is its noticeability or likelihood of being noticed or sensed. Just putting feedback on the screen is not enough, especially if the user does not necessarily know it exists or is not necessarily looking for it.
These design issues are largely about supporting awareness. Relevant feedback should come to users' attention without users seeking it. The primary
design factor in this regard is location, putting feedback within the users' focus of attention. It is also about contrast, size, and layout complexity and their effect on separation of feedback from the background and from the clutter of other user interface objects.
Locate feedback within the user's focus of attention
A pop-up dialogue box that appears directly within the user's focus of attention in the middle of the screen is much more noticeable than a message or status line at the top or bottom of the screen.
Make feedback large enough to notice
Feedback legibility
Make text legible, readable
Text legibility is about being discernable, not about its content being understandable. Font size, font type, color, and contrast are the primary relevant design factors.
Feedback presentation complexity
Control feedback presentation complexity with effective layout, organization, and grouping
Support user needs to locate and be aware of feedback by controlling layout complexity of user interface objects. Screen clutter can obscure needed feedback and make it difficult for users to find.
Feedback timing
Support user needs to notice feedback with appropriate timing of appearance or display of feedback. Present feedback promptly and with adequate persistence, that is, avoid "flashing."
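One simple way to honor both parts of this guideline is to show transient feedback immediately but enforce a minimum display time so it cannot flash by unnoticed. A hedged sketch, with an arbitrary 2-second minimum and a stand-in display function:

```typescript
// Sketch: show transient feedback promptly, but keep it on screen for a
// minimum time so it cannot flash by unnoticed. The duration is an
// arbitrary assumption.
function showTransientFeedback(
  message: string,
  display: (text: string | null) => void,
  minVisibleMs = 2000
): void {
  display(message);                              // appear promptly
  setTimeout(() => display(null), minVisibleMs); // persist long enough to read
}

// Usage with a console-based display stand-in.
showTransientFeedback("File saved.", (text) =>
  console.log(text === null ? "(feedback dismissed)" : text)
);
```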
Help users detect error situations early
Example: Do not let them get into too much trouble
A local software company asked us to inspect one of their software tools. In this tool, users are restricted to certain subsets of functionality based on privileges, which in turn are based on various key work roles. A UX problem with
a large impact on users arose when users were not aware of which parts of the functionality they were allowed to use.
As the result of a designer assumption that each user would know their privilege-based limitations, users were allowed to navigate deeply into the structure of tasks that they were not supposed to be performing. They
could carry out all the steps of the corresponding transactions but when they tried to "commit" the transaction at the end, they were told they did not have the privileges to do that task and were blocked and their time and effort were wasted. It would have been easy in the design to help users realize much earlier that they were on a path to an error, thereby saving user productivity.
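In code, helping users detect this situation early amounts to checking privileges at the point of entry to a task rather than at the final commit. A hypothetical sketch (the roles, task names, and ranking are invented):

```typescript
// Sketch: block entry into a task the user cannot commit, instead of
// letting the error surface at the end. Roles and task names are
// hypothetical.
type Role = "viewer" | "editor" | "admin";

const requiredRole: Record<string, Role> = {
  viewTransaction: "viewer",
  editTransaction: "editor",
  commitTransaction: "admin",
};

const rank: Record<Role, number> = { viewer: 0, editor: 1, admin: 2 };

function canStartTask(userRole: Role, task: string): boolean {
  return rank[userRole] >= rank[requiredRole[task]];
}

// Checked before navigation, so the user never invests effort in a
// transaction they are not allowed to commit.
if (!canStartTask("editor", "commitTransaction")) {
  console.log("You do not have the privileges to commit transactions.");
}
```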
Feedback presentation consistency
Maintain a consistent appearance across similar kinds of feedback
Maintain a consistent location of feedback presentation on the screen to help users notice it quickly
Figure 22-67
Content/meaning of feedback, within assessment.
Feedback presentation medium
Consider appropriate alternatives for presenting feedback.
Use the most effective feedback presentation medium
Consider audio as alternative channel
Audio can be more effective than visual media to get users' attention in cases of a heavy task load or heavy sensory work load. Audio is also an excellent alternative for vision-impaired users.
22.9.5 Content and Meaning of Feedback
In Figure 22-67 we highlight the "content and meaning of feedback" portion of the assessment part of the Interaction Cycle.
The content and meaning of feedback represent the knowledge that must be conveyed to users to be effective in helping them understand action outcomes and interaction progress.
This understanding is conveyed through effective content and meaning in feedback, which is dependent on clarity, completeness, proper tone, usage centeredness, and consistency of feedback content.
Help users understand outcomes with effective content/meaning in feedback
Support user ability to determine the outcomes of their actions through understanding and comprehension of feedback content and meaning.
Clarity of feedback
Design feedback for clarity
Use precise wording and carefully chosen vocabulary to compose correct, complete, and sufficient expressions of content and meaning of feedback.
Support clear understanding of outcome (system state change) so users can assess effect of actions
Give clear indication of error conditions
Example: Unavailable?
Figure 22-68 contains an error message that occurred during a Save As file operation in an early version of Microsoft Word. This is a classic example that has generated considerable discussion among our students. The UX problems and design issues extend well beyond just the content of the error message.
In this Save As operation the user was attempting to save a file of unformatted data, calling it "data w/o format" for short. The resulting error message is confusing because it is about a folder not being accessible because of things like unavailable volumes or password
protection. This seems about as unclear and unrelated to the task as it could be.
In fact, the only way to understand this message is to understand something more fundamental about the Save As dialogue box. The design of the File Name: text field is overloaded. The usual input entered here is a file name, which is by default associated with the folder name in the Save in: field at the top.
Figure 22-68
A confusing and seemingly irrelevant error message.
But some designer must have said "That is fine for all the GUI wusses, but what about our legions of former DOS users, our heroic power users who want to enter the full command-style directory path name for the file?" So the design was overloaded to accept full path names of files as well, but no clue was added to the labeling to reveal this option. Because path names contain the slash (/) as a dedicated delimiter, a slash within a file name cannot be parsed unambiguously so it is not allowed.
In our class discussions of this example, it usually takes students a long time to realize that the design solution is to unload the overloading by the simple addition of a third text field at the bottom for Full file directory path name:. Slashes in file names still cannot be allowed because any file name can also appear in a path name, but at least now, when a slash does appear in a file name in the File Name: field, a simple message, "Slash is not allowed in file names," can be used to give a clear indication of the real error.
The most recent version of Word, as of this writing, halfway solves the problem by adding to the original error message: "or the file name contains a \ or /".
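The suggested redesign, a separately labeled and separately validated path field, is easy to sketch. The field names and messages below are our own illustration, not Microsoft's design:

```typescript
// Sketch: validate the file-name field and the full-path field
// separately so each can give a precise error message. Field names and
// wording are invented for illustration.
function validateFileName(name: string): string | null {
  if (name.includes("/") || name.includes("\\")) {
    return "Slash is not allowed in file names.";
  }
  return null; // no error
}

function validateFullPath(path: string): string | null {
  const parts = path.split(/[\\/]/);
  const fileName = parts[parts.length - 1];
  if (fileName.trim() === "") {
    return "The path must end with a file name.";
  }
  return null;
}

console.log(validateFileName("data w/o format")); // "Slash is not allowed in file names."
console.log(validateFullPath("C:/reports/data.txt")); // null (no error)
```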
Precise wording
Support user understanding of feedback content by precise expression of meaning through precise word choices. Do not allow wording of feedback to be treated as an unimportant part of interaction design.
Completeness of feedback
Support user understanding of feedback by providing complete information through sufficient expression of meaning, to disambiguate, make more precise, and clarify. For each feedback message, the designer should ask
"Is there enough information?" "Are there enough words used to distinguish cases?"
Be complete in your design of feedback; include enough information for users to fully understand outcomes and be either confident that their command worked or certain about why it did not
The expression of a cognitive affordance should be complete enough to allow users to fully understand the outcomes of their actions and the status of their course of interaction.
Prevent loss of productivity due to hesitation, pondering
Having to ponder over the meaning of feedback can lead to lost productivity. Help your users move on to the next step quickly, even if it is error recovery.
Add supplementary information, if necessary
Short feedback is not necessarily the most effective. If necessary, add additional information to make sure that your feedback information is complete and sufficient.
Give enough information for users to make confident decisions about the status of their course of interaction
Help users understand what the real error is
Give enough information about the possibilities or alternatives so users can make an informed response to a confirmation request
Example: Quick, what to do?
In Figure 22-69 is an exit message from the Microsoft Outlook email system that we used previously (Figure 22-44) as an example about completeness of cognitive affordances and giving enough information for users to
make confident decisions. This message is displayed when a user tries to exit the Outlook email system before all queued messages are sent. We also use it as an example here in the assessment section, even though technically the part of the system response that is at issue here is the lack of a cognitive affordance as feed-forward.
When users first encountered this message, they were often unsure about how to respond because it did not inform them of the consequences of either choice. What are the consequences of
"exiting anyway?" One would hope that the system could go ahead and send the messages, regardless, but why then did it give this message?
So maybe the user will lose those messages. What made it worse was the fact that control would be snatched away in 8 seconds, and
counting. How imperious!
Most users we tested, it seems, made the right choice by taking what they thought to be the conservative option: not exiting yet. Figure 22-70 is an updated version of this same message, only this time it gives a bit more information about the repercussions of exiting prematurely, but it still does not say whether exiting will cause messages to be lost or just queued for later.
Figure 22-69
Not enough information in this feedback, or feed- forward, message.
Figure 22-70
This is better, but still could be more helpful.
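A confirmation such as this becomes far more useful once each option states its consequence. The sketch below is a hypothetical illustration; the option labels and consequence wording are ours, not Outlook's:

```typescript
// Sketch: a confirmation whose options state their consequences, so the
// user can decide without guessing. Wording is hypothetical.
interface ConfirmOption {
  label: string;
  consequence: string;
}

function formatExitPrompt(unsentCount: number, options: ConfirmOption[]): string {
  const header = `You have ${unsentCount} unsent message(s).`;
  const body = options.map((o) => `- ${o.label}: ${o.consequence}`).join("\n");
  return `${header}\n${body}`;
}

console.log(
  formatExitPrompt(3, [
    {
      label: "Exit anyway",
      consequence: "unsent messages stay in the Outbox and are sent the next time you start the program",
    },
    {
      label: "Don't exit",
      consequence: "return to the program so the messages can be sent now",
    },
  ])
);
```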
Figure 22-71
Useless message shows poor designer attitude.
Figure 22-72
Gobbledygook email message.
Tone of feedback expression
When writing the content of a feedback message, it can be tempting to castigate the user for making a "stupid" mistake. As a professional interaction designer you must separate yourself from those
feelings and put yourself in the shoes of the user. You cannot know the conditions under which your error messages are received, but the occurrence of errors could well mean that the user is already in a stressful situation, so do not add to the user's distress with a caustic, sarcastic,
or scornful tone.
Design feedback wording, especially error messages, for positive psychological impact
Make the system take blame for errors
Be positive, to encourage
Provide helpful, informative error messages, not "cute" unhelpful messages
Example: Say what?
The almost certainly apocryphal message in Figure 22-71 is an extreme example of an unhelpful message.
Usage centeredness of feedback
Employ usage-centered wording, the language of the user and the work context, in displays, messages, and other feedback
We mentioned that user centeredness is a design concept that often seems unclear to students and some practitioners. Because it is mainly about using the vocabulary and concepts of the user's work context rather than the technical vocabulary and context of the system, we should probably call
it "work-context-centered" design. This section is about how usage
centeredness applies to feedback in the assessment part of the Interaction Cycle.
In Figure 22-72 we see a real email system feedback message received by one of us many years ago, clearly system centered, if anything, and not user or work context centered. Systems people
will argue correctly that the technical information in this message is valuable to them in tracing the source of the problem.
That is not the issue here; rather it is a question of the message audience. This message is sent to users, not the systems people, and it is clearly an unacceptable message to users. Designers must seek ways to get the right message to the right audience. One solution is to give a non-technical explanation here and add a button that says "Click here for a technical description of the problem for your systems representative." Then put this jargon in the next message box.
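The suggested fix, one message with two audiences, can be sketched as a layered structure: a plain work-context explanation up front, with the technical detail available on request. Field names and the sample diagnostic text below are invented for illustration:

```typescript
// Sketch: one message, two audiences. The user sees a plain explanation;
// the technical detail is shown only on request. Field names and the
// sample diagnostic text are made up.
interface LayeredMessage {
  userText: string;        // work-context vocabulary
  technicalDetail: string; // for the systems representative
}

function render(msg: LayeredMessage, showDetail: boolean): string {
  return showDetail
    ? `${msg.userText}\n\nTechnical description:\n${msg.technicalDetail}`
    : `${msg.userText}\n[Click here for a technical description of the problem for your systems representative.]`;
}

const undeliverable: LayeredMessage = {
  userText: "Your message could not be delivered. The recipient's address was not recognized.",
  technicalDetail: "(hypothetical diagnostic trace would go here)",
};

console.log(render(undeliverable, false));
```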
This message in the next example is similar in some ways, but is more interesting in other ways.
Example: Out of paper, again?
As an in-class exercise, we used to display the computer message in Figure 22-73 and ask the students to comment on it.
Some students, usually ones who were not engineering majors, would react negatively from the start. After a lot of the usual comments pro and con, we would ask the class whether they thought it was usage centered. This usually caused some confusion and much disagreement. Then we ask a very specific question: Do you think this message is really about an error? In truth, the correct answer to this depends on your viewpoint, a reply we never got from a student.
The system-centered answer is yes; technically an "error condition" arose in the operating system error-handling component when it got an interrupt from the printer, flagging a situation in which there is a need for action to fix a problem. The process used inside the operating system is carried out by what the software systems people call an error-handling routine. This answer is correct but not absolute.
From a user-, usage-, or work-context-centered view, it is definitely and 100% not an error. If you use the printer enough, it will run out of paper and you will have to replace the supply. So running out of paper is part of the normal workflow, a natural occurrence that signals a point where the human has a responsibility within the overall collaborative human-system task flow. From this perspective, we told our students we
had to conclude that this was not an acceptable message to send to a user; it was not usage centered.
We decided that this exercise was a definitive litmus test for determining whether students could think user-centrically. Some of our undergraduate CS students never got it. They stubbornly stuck to their judgment that there was an error and that it was perfectly appropriate to send this message to a user to get attention to the error.
Figure 22-73
Classic system-centered "error" message.
Each semester we told them that it was okay that they did not "get it"; that they could still live productive lives, just not in any UX development role. Not everyone is cut out to take on a UX role in a project.
Just to finish up the analysis of this message:
• Why is the message box titled Printers Folder? Does this refer to some system aspect that should be opaque to the user?
• The printer is out of paper. Add paper. Is the need to add paper when the printer is out of paper not obvious enough? If so, the Add paper imperative is redundant and even condescending.
• To continue printing, click retry. Why "click retry" if the objective is to continue printing? Why not Click continue printing and label the Retry button as Continue Printing?
• Windows will automatically retry after 5 seconds. First, it should be Windows will periodically try to continue printing. Beyond that, this may seem to be a useless and maybe intrusive "feature," but it could be helpful if the printer is remote; the user would not have to go back and forth to click the button and check whether the printer is printing. Also, the 5 seconds does seem a bit arbitrary and probably too short a time to get new paper loaded, but this is not harmful.
Consistency of feedback
Be consistent with feedback
In the context of feedback, the requirement for consistency is essentially the same as it was for the expression of cognitive affordances: choose one term for each concept and use it throughout the application.
Label outcome or destination screen or object consistently with starting point and action
This guideline is a special case of consistency that applies to a situation where a button or menu selection leads the user to a new screen or dialogue box, a common occurrence in interaction flow. This guideline requires that the name of the destination given in the departure button label or menu choice be the same as its name when you arrive at the new screen or dialogue box. The next example is typical of a common violation of this guideline.
Example: Am I in the right place?
In Figure 22-74 we see an overlay of two partial windows within a personal document system. In the bottom layer is a menu listing some possible operations within this document system. When you click on Add New Entry, you go to the window in the top layer, but the title of that window is not Add New Entry, it is Document Data Entry. To a user, this could mean the same thing, but the words used at the point of departure were Add New Entry.
Finding different words, Document Data Entry, at the destination can be confusing
and can raise doubts about the success of the user action. The explanation given us by the designer was that the destination window in the top layer is a destination shared by both the Add New Entry menu choice and the Modify/View Existing Entries menu choice. Because state variables are passed in the transition, the corresponding functionality is applied correctly, but the same window was used to do the processing.
Therefore, the designer had picked a name that sort of represented both menu choices. Our opinion was that the destination window name ended up representing neither choice well, and that it would take only a little more effort to use two separate windows.
Example: Title of destination does not match Simple Search Tab label
In this example, consider the Simple Search tab, displayed at the top of most screens in this digital library application and shown in Figure 22-75.
That tab leads to a screen that is labeled Search all bibliographic fields, as shown in Figure 22-76.
We had to conclude that the departure label on the Simple Search tab and the destination label, Search all bibliographic fields, do not match well enough because we observed users showing surprise upon arrival and not being sure about whether they had arrived at the right place. We suggested a slight change in the wording of the destination label for the Simple Search function to include the same name, Simple Search, used in the tab and not sacrifice the additional information in the destination label, Search all bibliographic fields, as shown in Figure 22-77.
Figure 22-74
Arrival label does not match departure label (screen image courtesy of Raphael Summers).
Figure 22-75
The Simple Search tab at the top of a digital library application screen.
Figure 22-76
However, it leads to Search all bibliographic fields, not a match.
Figure 22-77
Problem fixed by adding Simple Search: to the destination label.
User control over feedback detail
Organize feedback for ease of understanding
When a significant volume of feedback detail is available, it is best not to overwhelm the user by giving all the information at once. Rather, give the most important information, establishing the nature of the situation,
upfront and provide controls affording the user a way to ask for more details, as needed.
Provide user control over amount and detail of feedback
Give only most important information at first; more on demand
22.9.6 Assessment of Information Displays
Information organization for presentation
Organize information displays for ease of understanding
There are entire books available on the topics of information visualization and information display design, among which the work of Tufte (1983, 1990, 1997) is perhaps the best known. We do not attempt to duplicate that material here, but rather refer the interested reader to those sources to pursue these topics in detail. We can, however, offer a few simple guidelines to help with the routine presentation of information in your displays of results.
Eliminate unnecessary words
Group related information
Control density of displays; use white space to set off
Columns are easier to read than wide rows
This guideline is the reason that newspapers are printed in columns.
Use abstraction per Shneiderman's "mantra": Overview first; zoom and filter; details on demand
Ben Shneiderman has a "mantra" for controlling complexity in information display design (Shneiderman & Plaisant, 2005, p. 583):
• overview first
• zoom and filter
• details on demand
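Applied to a simple record display, the mantra maps onto three operations: summarize, filter, and show a single record in full. The sketch below uses made-up data and invented names purely to make the three steps concrete:

```typescript
// Sketch of the mantra over a list of records: overview first, then
// zoom and filter, then details on demand. Data and names are made up.
interface DataRecord {
  id: number;
  category: string;
  value: number;
}

const records: DataRecord[] = [
  { id: 1, category: "A", value: 42 },
  { id: 2, category: "B", value: 7 },
  { id: 3, category: "A", value: 19 },
];

// Overview first: one summary line, not the whole data set.
const categories = new Set(records.map((r) => r.category)).size;
console.log(`${records.length} records in ${categories} categories`);

// Zoom and filter: narrow to what the user cares about.
const filtered = records.filter((r) => r.category === "A");

// Details on demand: show a full record only when asked.
const showDetails = (id: number) =>
  console.log(records.find((r) => r.id === id));
showDetails(filtered[0].id);
```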
Example: The great train mystery
Train passengers in Europe will notice boarding passengers competing for seats that face the direction of travel. At first, it might seem that this is simply about what people were used to in cars and buses. But some people we interviewed had stronger feelings about it, saying they really were uncomfortable traveling backward and could not enjoy the scenery nearly as much that way.
Believing people in both seats see the same things out the window, we wondered if it really mattered, so we did a little psychological experiment and compared our own user experiences from both sides. We began to think about the view in the train window as an information display.
In terms of bandwidth, though, it did not seem to matter; the total amount of viewable information was the same. All passengers see the same things and they see each thing for the same amount of time. Then we recalled Ben Shneiderman's rules for controlling complexity in information display design (see earlier discussion).
Applying this guideline to the view from a train window, we realized that a passenger traveling forward is moving toward what is in the view. This traveler sees the overview in the distance first, selects aspects of interest, and, as the train goes by, zooms in on those aspects for details.
In contrast, a passenger traveling backward sees the close-up details first, which then zoom out and fade into an overview in the distance. But this close-up view is not very useful because it arrives too soon without a point of focus. By the time the passenger identifies something of interest, the chance to zoom in on it has passed; it is already getting further away. The result can be an unsatisfying user experience.
Figure 22-78
Limited horizontal visual bandwidth.
Visual bandwidth for information display
One of the factors that limit the ability of users to perceive and process displayed information is visual bandwidth of the display medium. If we are talking about the usual computer display, we must use a display monitor with a very small space for all our information presentation. This is tiny in comparison to, say, a newspaper.
When folded open, a newspaper has many times the area, and many times the capacity to display information, of the average computer screen. And a reader/user can scan or browse a newspaper much more rapidly. Reading devices such as Amazon's Kindle™ and Apple's iPad™ are pretty good for reading and thumbing through whole book pages, but lack the visual bandwidth for "fanning" through pages for perusal or scanning that a real paper book provides.
Designs that speed up scrolling and paging do help but it is difficult to beat the browsing bandwidth of paper. A reader can put a finger in one page of a newspaper, scan the major stories on another page, and flash back effortlessly to the "book-marked" page for detailed reading.
Example: Visual bandwidth
In our UX classes we used to have an in-class demonstration to illustrate this concept. We started with sheets of paper covered with printed text. Then we gave students a set of cardboard pieces the same size as the paper but each with a smaller cutout through which they must read the text.
One had a narrow vertical cutout that the reader must scan, or scroll, horizontally across the page. Another had a low horizontal cutout that the reader must scan vertically up and down the page. A third one had a small square in the middle that limited visual bandwidth in both vertical and horizontal directions and required the user to scroll in both directions.
You can achieve the same effect on a computer screen by resizing the window and adjusting the width and height accordingly. For example, in Figure 22-78, you can see limited horizontal visual bandwidth, requiring excessive horizontal scrolling to read. In Figure 22-79, you can see limited vertical visual bandwidth, requiring excessive
vertical scrolling to read. And in Figure 22-80, you can see limited horizontal and vertical visual bandwidth, requiring excessive scrolling in both directions. It was easy for students to conclude that any visual bandwidth limitation, plus the necessary scrolling, was a palpable barrier to the task of reading information displays.
22.10 OVERALL
This section concludes the litany of guidelines with a set of guidelines that apply globally and generally to an overall interaction design rather than being associated with a specific part of the Interaction Cycle.
22.10.1 Overall Simplicity
As Norman (2007a) points out, most people think of simplicity in terms of a product that has all the features but operates with a single button. His point is that people genuinely want features and only
say they want simplicity. At least for consumer appliances, it is all about marketing and marketing people know that features sell. And more features imply more controls.
Norman (2007a) says that even if a product design automates some features well enough so that fewer controls are necessary, people are willing to pay more for machines with more controls. Users do not want to give up control. Also, more controls give the appearance of more power, more functionality, and more features.
Figure 22-79
Limited vertical visual bandwidth.
Figure 22-80
Limited horizontal and vertical visual bandwidth.
But in the computer you use to get things done at work, complexity can be a barrier to productivity. The desire is for full functionality without sacrificing UX.
Do not try to achieve the appearance of simplicity by just reducing usefulness
A well-known Web-search service provider seeking improved ease of use "simplified" their search page. Unfortunately, they did it without a real understanding of what simplicity means. They just reduced their functionality but did nothing to improve the usability of the remaining functionality. The result was a less useful search function and the user is still left to figure out how to use it.
Organize complex systems to make the most frequent operations simple
Some systems cannot be entirely simple, but you can still design to keep some of the most frequently used operations or tasks as simple as possible.
Example: Oh, no, they changed the phone system!
Years ago our university began using a special digital phone system. It had, and still has, an enormous amount of functionality. Everyone in the university was asked to attend a one-day workshop on how to use the new phone system. Most employees rebelled and refused to attend a workshop to learn how to use a telephone, something they had been using all their lives.
They were issued a 50-page user's guide entitled "Excerpts from the PhoneMail System User Guide." Fifty pages and still an excerpt; who is going to read that? The answer is that almost everyone had to read at least parts of it because the designer's approach was to make all functions equally difficult to do. The 10% of the functionality that people had to use every day was just as mysterious as the other 90% that most people would never need. Decades later, people still do not like that phone system, but they were captive users.
22.10.2 Overall Consistency
Historically, "be consistent" is one of the earliest interaction design guidelines ever and probably the most often quoted. Things that work the same way in one place as they do in another just make logical sense.
But when HCI researchers have looked closely at the concept of consistency in interaction design over the years, many have concluded that it is often difficult to pin it down in specific designs. Grudin (1989) shows that the concept is
difficult to define (p. 1164) and hard to identify in a design, concluding that it is an issue without much real substance. The transfer effects that support ease of learning can conflict with ease of use (p. 1166). And blind adherence to the rule, without interpretation within the usage context, can lead to foolish or undesirable consistency, as shown in the next example.
Be consistent by doing similar things in similar ways
Example: And what country shall we send it to?
Suppose that the menu choices in all pull-down menus in an application are ordered alphabetically for fast searching. But one pull-down menu is in a form in which the user enters a mailing address. One of the fields in the form is for "country" and the pull-down list contains dozens of entries. Because the majority of customers for this Website are expected to live in the United States, ease of use will be better in a design with "United States" at the top of the pull-down list instead of near the bottom of an alphabetical list, even though that is inconsistent with all the other pull-down menus in the application.
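The trade-off can be expressed directly in the design of the list itself: keep the alphabetical ordering for the bulk of the entries but promote the most frequently chosen ones to the top. A minimal sketch with an illustrative subset of countries:

```typescript
// Sketch: alphabetical list with the most frequently chosen entries
// promoted to the top. The data here is a tiny illustrative subset.
function orderCountries(countries: string[], promoted: string[]): string[] {
  const rest = countries
    .filter((c) => !promoted.includes(c))
    .sort((a, b) => a.localeCompare(b));
  return [...promoted, ...rest];
}

console.log(
  orderCountries(["Canada", "United States", "Mexico", "France"], ["United States"])
);
// -> ["United States", "Canada", "France", "Mexico"]
```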
Use consistent layout/location for objects across screens
Maintain custom style guides to support consistency
Structural consistency
We think Reisner (1977) helped clarify the concept of consistency, in the context of database query languages, when she coined the term "structural consistency." In referring to the use of query languages, structural consistency simply required similar syntax (wording or user actions) to denote similar or related semantics. So, in our context, the expression of cognitive affordances for two similar functions should also be similar.
However, in some situations, consistency can work against distinguishability. For example, if a design contains two different kinds of delete functions, one of which is used routinely to delete objects within an application, but the other is dangerous because it applies to files and folders at a higher level, the need to distinguish these delete functions for safety may override this guideline for making them similar.
Use structurally similar names and labels for objects and functions that are structurally similar
Example: Next and previous
A simple example is seen in the common Next and Previous buttons that might appear, for example, for navigation among pictures in an online photo gallery. Although these two buttons are opposite in meaning, they both are a similar kind of thing; they are symmetric and structurally similar navigation controls. Therefore, they should be labeled in a similar way. For example, Go forward and Previous picture are not as symmetric and not as similar from a linguistic perspective.
Consistency is not absolute
Many design situations have more than one consistency issue and sometimes they trade-off against each other. We have a good example to illustrate.
Example: May I mix you a screwdriver?
Figure 22-81
Multipurpose screwdrivers.
Consider the case of multi-blade screwdrivers that are handy for dealing with different sizes and types of screws. In particular, they each have both flat-blade and Phillips-blade driver bits and each of these types comes in both small and large sizes.
Figure 22-81 illustrates two of these so-called "4-in-1" screwdrivers. As part of a discussion of consistency, we bring screwdrivers like these to class for an in-class exercise with students. We begin by showing the class the screwdrivers
and explain how the bits are interchangeable to get the needed combination of blade type and size.
Next we pick a volunteer to hold and study one of these tools and then speak to the class about consistency issues in its design. They pull it apart, as shown in Figure 22-82.
The conclusion always is that it is a consistent design. We have another volunteer study the other screwdriver, always reaching the same conclusion. Then we show the class that there are differences between the two designs, which become apparent when you compare the bits at the ends of each tool, as shown in Figure 22-83.
One tool is consistent by blade type, having both flat blades, large and small, on one insertable piece and both Phillips blades on the other piece. The other tool is consistent by size, having both large blades on one insertable piece and both small blades on the other piece.
We now ask them if they still think each design is consistent and they do. They are each consistent; they each have intra-product consistency. Neither is more consistent than the other, but each is consistent in a different way, and the two are not consistent with each other.
Consistency in design is supposed to aid predictability but, because there is more than one way to be consistent in this example, you still lack inter-product consistency and you do not necessarily get predictability. Such is one difficulty of interpreting and applying this seemingly simple design guideline.
Consistency can work against innovation
Final caveat: While a style guide works in favor of consistency and reuse, remember that it can also be a barrier to inventiveness and innovation (Kantrovich, 2004). Being the same all the time is not necessarily cool! When the need arises to break with consistency for the sake of innovation, throw off the constraints and barriers and dive through the wormhole to the creative side.
22.10.3 Humor
Avoid poor attempts at humor
Poor attempts at humor usually do not work. It is easy to do humor badly and it can easily be misinterpreted by users. You may be sitting in your office feeling good and want to write a cute error message, but users receiving it may be tired and stressed and the
last thing they need is the irritation of a bad joke.
22.10.4 Anthropomorphism
Simply put, anthropomorphism is the attribution of human characteristics to non-human objects. We do it every day; it is a form of humor. You say "my car is sick today" or "my computer does not like me" and everyone understands what you mean. In interaction design, however, the context is usually about getting work done and anthropomorphism can be less appreciated, especially if the user is already having difficulties.
Figure 22-82
Revealing the inner parts of the two screwdrivers.
Figure 22-83
The two sets of screwdriver bits.
Avoiding anthropomorphism
Avoid the use of anthropomorphism in interaction designs
Shneiderman and Plaisant (2005, pp. 80, 484) say that a model of computers that leads one to believe they can think, know, or understand in ways that humans do is erroneous and dishonest. When the deception is revealed, it undermines trust.
Avoid using first-person speech in system dialogue
"Sorry, but I cannot find the file you need" is less honest and no more informative than something such as "File not found" or "File does not exist." If attribution must be given to what it is that cannot find your file, you can reduce anthropomorphism by using the third person, referring to the software, as in "Windows is unable to find the application that created this file." This guideline urges us to especially eschew chatty and over-friendly use of first- person cuteness, as we see in the next example.
Example: Who is there?
Figure 22-84 contains a message from a database system after a search request had been submitted. Ignoring other obvious UX problems with this dialogue box and message, most users find this kind of first-person usage dishonest, demeaning, and unnecessary.
Avoid condescending offers to help
Figure 22-84
Message tries to make computer seem like a person.
Just when you think all hope is lost, along comes Clippy or Bob, your personal office assistant or helpful agent. How intrusive and ingratiating! Most users dislike this kind of pandering and insinuation into their affairs, offering blandishments of hope when real help is preferred.
People expect other humans to be able to solve problems better than a machine. If your interaction dialogue portrays the machine as a human, users will expect more. When you cannot deliver, however, it is overpromising. The example that follows is ridiculous and cute but it also makes our point.
Example: Come on, Clippy, you can do better
Clearly the pop-up "help" in Figure 22-85 is not a real example, but this kind of pop-up in general can be intrusive. In real usage situations, most users expect better.
The case in favor of anthropomorphism
On the affirmative side, Murano (2006) shows that, in some contexts, anthropomorphic feedback can be more effective than the equivalent non-anthropomorphic feedback. He also makes the case for why users sometimes prefer anthropomorphic
feedback, based on subconscious social behavior of humans toward computers.
In his first study, Murano (2006) explored user reactions to language- learning software with speech input and output using speech recognition. Users were given anthropomorphic feedback in the form of dynamically loaded and software-activated video clips of a real language tutor giving feedback.
In this kind of software usage situation, where the objectives and interaction are very similar to what would be expected from a human language tutor, "the statistical results suggested the anthropomorphic feedback to be more effective. Users were able to self-correct their pronunciation errors more effectively with the anthropomorphic feedback. Furthermore it was clear that users preferred the anthropomorphic feedback." The positive results are not surprising because this kind of application naturally uses human-computer interaction that is very close to natural human-to-human interaction.
In a second study, Murano (2006) looked at Unix for a rank beginner, again employing speech input and output with speech recognition and again employing anthropomorphic video clips of real humans as feedback. He found anthropomorphic feedback to be more effective and more desired by users than other feedback.
However, we cannot see natural language interaction with Unix as a viable long-term alternative. Unix is complex and difficult to learn, not intended for beginners. Anyone intending to use Unix for more than just an experiment will perforce not remain a beginner for long. Any expert Unix user we have ever seen would surely find speech interaction less convenient and less precise than the lightning-fast typed commands usually associated with Unix usage. Even if the interaction in this study was not anthropomorphic per se, speech input and output can convey the feeling of being the "equivalent" of anthropomorphic interaction.
Figure 22-85
Only too glad to help.
In his third study, Murano (2006) determined that, for direction-finding tasks, a map plus some guiding text was more effective than anthropomorphic feedback using video clips of a human giving directions verbally, with user preferences about evenly divided. The bottom line for Murano is that some application domains are more suited for anthropomorphic interaction
than others.
Well-known studies by Reeves and Nass (1996) attempted to answer the question of why anthropomorphic interaction might be better for some users in some kinds of applications. They concluded that people naturally tend to interact with a computer in the same social way they interact with other people, especially in cases where feedback is given as natural language speech (Nass, Steuer, & Tauber, 1994). People treat computers in a social manner if the output of computers treats them in a social manner.
While a social manner of interaction did seem to be effective and desired by users for tasks that have a human-to-human counterpart, including tasks such as natural language learning and tutoring in a teacher-student kind of interaction, it is unlikely that a mutually social style and anthropomorphic interaction would have a place in the thousands of other kinds of tasks that make up a large portion of real computer usage: installing driver software, creating a text document, updating a data spreadsheet, and so on.
The bottom line for us is that users may think they would prefer anthropomorphic user-computer dialogue because it is somehow friendlier or maybe they would prefer to interact with another human rather than having to interact with a computer. But the fact remains: a computer is not human. So eventually expectations will not be met. Especially for the use of computers to get things done in business and work environments, we expect users to tire quickly of anthropomorphic feedback, particularly if it soon becomes boring by a lack of variety over time.
22.10.5 Tone and Psychological Impact
Use a tone in dialogue that supports a positive psychological impact
Avoid violent, negative, demeaning terms
Avoid use of psychologically threatening terms, such as "illegal," "invalid," and "abort"
Avoid use of the term "hit"; instead use "press" or "click"
22.10.6 Use of Sound and Color
The use of color in displays is a topic that fills volumes of publications for research and practice. Read more about it in some of these references (Nowell, Schulman, & Hix, 2002; Rice, 1991a, 1991b). The use of color in interaction design, or any kind of design, is a complex topic, well beyond the scope of this book.
Avoid irritation with annoying sound and color in displays
Bright colors, blinking graphics, and harsh audio are not only annoying but can also have a negative effect on user productivity and the user experience over the long term.
Use color conservatively
Do not count on color to convey much information in your designs. It is good advice to render your design in black and white first so that you know it works without reliance on color. That will rule out usability problems some users may have with color perception due to different forms of color blindness, for example.
At the end of the day, color decisions are often out of the hands of interaction designers, anyway, being constrained by corporate or organizational standards and branding concerns.
Use pastels, not bright colors
Bright colors seem attractive at first, but soon lead to distraction, visual fatigue, and distaste.
Be aware of color conventions (e.g., avoid red, except for urgency)
Again, color conventions are beyond our scope. They are complicated and differ with international cultural conventions. One clear-cut convention in our Western culture is about the use of red. Beyond very limited use for emergency or urgent situations, red, especially blinking red, is alarming as well as irritating and distracting.
We heard a story at the Social Security Administration that they had an early design in which any required field in a form would blink in red if the user tried to save the form or go to the next form before filling in those fields. Later, someone on the team read that blinking red can trigger latent epilepsy in some people, and the design was changed.
Example: Help, am I at sea?
Figure 22-86 is a map of the Outer Banks in North Carolina. We have never been able to use this map easily because it violates deeply established color conventions used in maps. Blue is almost always used to denote water in maps, while gray, brown, green, or something similar is used for land. In this map, however, blue, even a deep blue, is used to represent land and because there is about as much land as sea in this map, users experience a cognitive disconnect that confuses and makes it difficult to get oriented.
Watch out for focusing problem with red and blue
Figure 22-86
Map of Outer Banks, but which is water and which is land?
Chromostereopsis is the phenomenon humans face when viewing an image containing significant amounts of pure red and pure blue. Because red and blue are at opposite ends of the visual light spectrum and occur at different frequencies, they focus at slightly different depths within the eye and can appear to be at different distances from the eye.
Adjacent red and blue in an image can cause the muscles used to focus the eye to oscillate, moving back and forth between the two colors, eventually leading to blurriness and fatigue.
Example: Roses are red; violets are blue
In Figure 22-87 we placed adjacent patches of blue and red. If the color reproduction in the book is good, some readers may experience chromostereopsis while viewing this figure.
22.10.7 Gratuitous Graphics
Jon Meads, our friend who runs Usability Architects, Inc., wants us to understand the difference between graphic design and usability (Meads, 1999):
As usability consultants, we're often asked by potential clients to bring in a portfolio of "screens" that we've designed. But we don't have any, because we don't design "screens"; we design interaction, the intended behavior by which people will use a product or a Website.
He points out that graphic design is good for attracting attention, getting a user to stop at your Website, but it takes good UX to get them to stay at your Website. He says that Web pages that dazzle can also distract and turn off users who just want to get something done. It is a question of balance of look and feel, and behavior.
Avoid fancy or cute design without a real purpose
To impress, all you need is a trebuchet and a piano.
- Chris Stevens, Northern Exposure
A fancy appearance to a software application or Website can be an asset, but while bling-bling makes for nice jewelry, the "flash and trash" approach to interaction design can detract from usability. As Jon Meads puts it, "Usability is not graphic design."
Aaron Marcus (2002) agrees, warning us that in the rush to provide aesthetics, fun, and pleasure in the UX, we may overdo it and move toward a commercialization of UX that will, in fact, dehumanize the user experience.
22.10.8 Text Legibility
It is obvious that text cannot convey the intended content if it is illegible.
Make presentation of text legible
Make font size large enough for all users
Use good contrast with background
• Use both color and intensity to provide contrast.
Figure 22-87
Chromostereopsis: humans focus at different depths in the eye for red and blue.
Use mixed case for extensive text
Avoid too many different fonts and sizes
Use legible fonts
• Try Arial, Verdana (sans serif), or Georgia for online reading.
Use color other than blue for text
• It is difficult for the human retina to focus on pure blue for reading.
Accommodate sensory disabilities and limitations
• Support visually challenged and color-blind users.
22.10.9 User Preferences
Allow user settings, preference options to control presentational parameters
Afford users control of sound levels, blinking, color, and so on. Vision-impaired users, especially, need preference settings or options to adjust the text size in application displays and possibly to hear an alternative audio version of the text.
22.10.10 Accommodation of User Differences
As we have said, a treatise on accessibility is outside our scope and is treated well in the literature. Nonetheless, all interaction designers should be aware of the requirement to accommodate users with special needs.
Accommodate different levels of expertise/experience with preferences
Most of us have seen this sign in our offices or on a bumper sticker: Lead, follow, or get out of the way. In interaction design, we might modify that slightly to: Lead, follow, and get out of the way.
• Lead novice users with adequate cognitive affordances
• Follow intermittent or intermediate users with lots of feedback to keep them on track
• Get out of the way of expert users; keep cognitive affordances from interfering with their physical actions
Constantine (1994b) has made the case to design for intermediate users, which he calls the most neglected user segment. He claims that there are more intermediate users than beginners or experts.
Don't let affordances for new users be performance barriers to experienced users
Although cognitive affordances provide essential scaffolding for inexperienced users, expert users interested in pure productivity need effective physical affordances and few cognitive affordances.
22.10.11 Helpful Help
Be helpful with Help
Do not send your users to Help in a handbasket. For those who share our warped sense of humor, we quote from the manual for Dirk Gently's (Adams, 1990, p. 101) electronic I Ching calculator as an example of perhaps not-so-helpful help. As the protagonist consults the calculator for help with a burning personal question,
The little book of instructions suggested that he should simply concentrate "soulfully" on the question which was "besieging" him, write it down, ponder on it, enjoy the silence, and then once he had achieved inner harmony and tranquility he should push the red button. There wasn't a red button, but there was a blue button marked 'Red' and this Dirk took to be the one.
Entertaining, yes; helpful, no. Note that it also makes reference at the end to an amusing little problem with cognitive affordance consistency.
22.11 CONCLUSIONS
Be cautious using guidelines.
Use careful thought and interpretation when using guidelines. In application, guidelines can conflict and overlap.
Guidelines do not guarantee usability.
Using guidelines does NOT eliminate the need for usability testing.
Design by guidelines, not by politics or personal opinion.
Connections with Software Engineering
Oh, East is East and West is West, and never the twain shall meet,
Till Earth and Sky stand presently at God's great Judgment Seat;
But there is neither East nor West, Border, nor Breed, nor Birth,
When two strong men stand face to face, tho' they come from the ends of the earth!
- Rudyard Kipling
23.1 INTRODUCTION
In Chapter 2 we showed how software systems with interactive components have two distinct logical parts: the functional core and the user interface. Although the separation of code into two clearly identifiable components is not always possible, the two parts are conceptually distinct and each must be developed on its own terms with its own roles within the project team (Pyla et al., 2003, 2005, 2007). Figure 23-1 is an abstraction of this separation and resulting connections.
The user-interface part, the focus of this book, often accounts for half or more of the total lines of code in the overall system (Myers & Rosson, 1992). It begins
with contextual inquiry, takes shape in design, gets refined in evaluation, and is ultimately implemented in user-interface software.
Figure 23-1
An abstract representation of the separation of, and communication between, the two components of system development.
Therefore, a practical objective of UX practitioners is to provide interaction design specifications, as we discussed in Chapter 9, which can be used by software engineers to build the user interface component of a system.
The functional part of a software system, sometimes called the functional core, is manifest as non-user-interface software. The design and development of this functional core require specialized software engineering knowledge, training, and experience in topics such as algorithms, data structures, software architectures, calling structures, and database management. The goal of SE is to create efficient and reliable software systems containing the specified functionality, as well as to integrate and implement the user-interface software.
To achieve the UX and SE goals for an interactive system, that is, to create an efficient and reliable system with required functionality and a quality user experience, effective development processes are required for both UX and SE
lifecycles. The Wheel UX lifecycle template in this book is a time-tested process for ensuring a quality user experience.
The SE development lifecycle, with its significantly longer history and tradition than that of UX, comes in many flavors. At one end of this spectrum is the rigid Waterfall Model (Royce, 1970): a sequence of stages for concept definition, requirements engineering, design (preliminary and detailed design), design review, implementation, integration and testing (I&T), and deployment. At the other end of this spectrum are the agile methods (Chapter 19), a test-driven, incremental approach whose focus is delivering periodic releases of software modules that add business value to the customer.
23.1.1 Similarities between Lifecycles
At a high level, UX and SE share the same objectives of understanding the customer's and users' wants and needs, translating these needs into system requirements, designing a system to satisfy these requirements, and testing to help ensure their realization in the final product. At the process level, both lifecycles have similar stages, such as identifying needs, designing, and evaluating, even though these stages entail different philosophies and practices, as discussed in the next section.
23.1.2 Differences between Lifecycles
As often mentioned in this book, UX practitioners iterate early and frequently with design scenarios, screen sketches, paper prototypes, and low-fidelity, roughly coded software prototypes before much, if any, software is committed to the user interface. Often this frequent and early iteration is done on a small scale and scope, primarily as a means to evaluate a part of an interaction design in the context of a small number of user tasks.
UX roles evaluate interaction designs in a number of ways, including early design walkthroughs, rapid evaluation techniques, and lab-based techniques. The primary goal is to find UX problems or flaws in the interaction design so that the design can be improved iteratively.
Even though there is iteration in traditional SE development lifecycles, more so in agile approaches than in the Waterfall approach, the iteration is still on a larger scale (coarser granularity) and scope. In the Waterfall approach, iteration takes place at the granularity of lifecycle stages, such as requirements or design. In agile approaches, while there is iteration at the code-module level, it is still coarser than most kinds of UX iteration because it includes both software code and interaction design.
Another difference between these two lifecycles has to do with terminology. Even though certain terms appear in both lifecycles, they often mean different things.
For example, scenarios in SE (called "use cases" in the object-oriented design paradigm) are used to "identify a thread of usage for the system to be constructed (and) provide a description of how the system will be used" (Pressman, 2009). In UX, by contrast, a design usage scenario is "a narrative or story that describes the activities of one or more persons, including information about goals, expectations, actions, and reactions (of persons)" (Rosson & Carroll, 2002).
Overall, software engineers concentrate on the system whereas usability engineers concentrate on the users.
23.2 LOCUS OF INFLUENCE IN AN ORGANIZATION
In our experience, we have seen three major roles in an organization that have a significant influence on the direction of product development: the business role, the design or "creative" role, and the software or development role. Each role brings a unique skillset, perspective, or bias to a project effort.
The business role is concerned with the subject matter of that work domain.
For example, if you are building a software application for helping civil engineers construct bridges, your "business" stakeholders will include structural engineers and other people who know the mechanics of construction and engineering. Sometimes marketing also plays a key role in formulating the product direction under this business role umbrella.
Gross generalizations notwithstanding, in our experience we have found that people in business roles usually care about feature coverage. They tend to think of a product's quality in terms of what it can do, how comprehensively it accounts for the business needs, how its features stack up against the competition, and all the nuances with which a particular business need is addressed.
The software or development role shares some of the business role's tendency to think of a product's quality in terms of features or "use-cases" supported.
Perhaps an even stronger tendency of this role is to think of quality in terms of code reliability, maintainability of code modules, speed of execution, and other software performance attributes. The underlying sentiment is that optimizing the functional core of a system is more important than all other concerns.
The UX or design role, however, tends to prioritize user needs and experience over all other factors. This often trades off with feature counts
because simplicity and ease of use correlate inversely with abundance of options and features on a user interface. For these roles, quality manifests itself in terms of usability, user satisfaction, usefulness, and emotional impact.
This locus-of-influence perspective is somewhat orthogonal to general project management concerns such as cost and resource allocation; each role tends to prioritize different aspects of the overall project effort given cost and resource constraints. As a thought experiment, if you were to think of a measure of the amount of influence or authority a given role has in an organization and average it across all the people who play that role, you would get what we call the locus-of-influence factor. The higher the value of this factor for a given role, the greater that role's influence on the product direction.
The locus-of-influence factors for the three roles color the personality of an organization that builds interactive systems. They become the DNA that permeates all aspects of the culture, including the everyday operations and priorities of that organization. When you hear people say "Google has an engineering culture," they are probably referring to a heavy weighting of this factor toward SE. Similarly, when people call Apple a design company, they are referring to a high value of this factor for the UX or design role there.
As an extension to this thought experiment, now assume you are somehow able to assess quantitatively the amount of influence each role exerts on the overall product direction, what we call the locus of influence for each role.
Suppose we are also able to combine those measures in a reasonable way to get what we call the locus of influence in an organization. This abstract measure represents an aggregate of the underlying forces, biases, aspirations, and direction that propel a product through the development lifecycle.
This locus of influence for a company is usually a by-product of the company's history, leadership, culture, expertise of roles, and the perception of value of each role's expertise. So what happens to the project effort
when you manipulate the locus of influence for each role? We discuss some generalizations for each of the interesting cases.
23.2.1 Scenario 1: SE as Primary Product Architects
In an organization with a predominantly high engineering or programming locus of influence, the project is biased toward code and technology concerns. The SE role, perhaps working with business, elicits requirements from customers and envisions the product design. These requirements tend to have a functional flavor rather than a user-centered one. The SE role translates the gathered requirements into functional design, which then gets implemented in code.
The emphasis of quality is on code and other software engineering concerns such as cohesion and coupling. Because an SE role's job performance is judged in light of these concerns, it is natural that they work toward building the best functional core they can.
The interaction design concerns are not a big priority in such an organization, and people in the SE role probably do not have much training or expertise in designing for user experience. We know of many companies where, even today, SE roles create the interaction designs for the system. Even if there are specialist UX roles in this scenario, they are often brought in near the end of the lifecycle for "fixing" the experience and "making things pretty." The UX role is a "priest in a parachute," brought in at the end to bless the product by suggesting some quick, and mostly cosmetic, changes because it is too late to change anything major.
UX roles in this kind of a culture are constrained by SE decisions and state of progress. Because SE roles ultimately implement the interaction designs, there is no "cultural" force to ensure that the designs by the UX roles are adopted.
Any change proposed by the UX role can require a difficult and often protracted negotiation between SE and UX roles. The UX role is required to "prove" that their suggestions are better and legitimate. This scenario can be more extreme in organizations with legacy software infrastructure.
We know of organizations where SE roles are valued higher than any other role, even to the extent of limiting career advancement options to other roles. We have seen frustrated colleagues leave organizations because their contributions were not considered an important part of the overall project effort.
In summary, the scenario of having SE roles as primary product architects suffers from an implicit conflict of interest, because something that is easy to use is almost always not easy to implement. Cooper (2004) succinctly sums up this scenario in the title of his popular book: The Inmates Are Running the Asylum.
23.2.2 Scenario 2: UX as Primary Product Architects
In an organization with a predominantly high design or user experience locus of influence, the project is biased toward users, usage, usability, and emotional impact. The UX role conducts contextual inquiry, analyzes and models the work practice, envisions an interaction design, and provides an
experience for the user. This emphasis on usage-in-context ensures grounding in user concerns, goals, and aspirations, which in turn leads to a system with better usability.
As an aside, why is this scenario likely to produce a system that fosters a better user experience? Why will the outcome be any different when the process is essentially similar to that in scenario 1, where SE roles conduct requirements engineering activities with users and customers? Is this not essentially the same activity conducted by different roles? Are UX roles better than SE roles when it comes to requirements? No. This is not about who is better. It is about each role's innate tendencies, allegiances, foci, and training.
UX roles are naturally interested in users because they design for usage. SE roles are interested in system functionality because they implement that functionality. UX role instincts tend to be about workflows, barriers, and
breakdowns in work practice, social aspects of work, and emotional impact of a system. SE role instincts tend to be about algorithms and data structures, separation of concerns among data and presentation layers, class hierarchies, and code reuse. UX roles, starting with their human-computer interaction (HCI) 101 classes, are trained in concepts such as contextual and task analysis. SE roles, starting with their SE 101 classes, are trained in requirements engineering via use-case modeling and functional decomposition.
Therefore, it is no surprise that, all other things being equal, a UX role will produce a more user-centered and user experience-oriented analysis of the work domain and what is needed in the envisioned system.
Conversely, the SE role will produce a more system-centered and functionality-oriented analysis of the work domain and how it can be supported by the system.
Getting back to the scenario, the UX role, after analyzing the work practice, designs the envisioned interaction and hands it off to the SE role for implementation. In an organization like this, the designers have free rein and tend to produce interaction designs that push the envelope with respect to innovation and complexity. This model puts pressure on the SE role to implement these sometimes blue-sky designs. This can become a coping scenario for SE if the technology of the target platform does not support what is needed in the UX designs or the SE role does not have the required skills or training to translate the UX designs into code.
There are two possible outcomes in this scenario: (1) the SE role works toward updating the underlying technology to support the new interaction design needs or (2) the SE role resorts to "hacking" the available infrastructure to implement the designs. Obviously the former is more advisable, but it requires significant effort, an unlikely option for systems with a considerable legacy code base. The latter delivers the envisioned user experience but results in a system with brittle code and maintenance challenges.
Another issue with this scenario has to do with the communication of constraints. The UX role does not know which aspects of its interaction designs are feasible and which are expensive or impossible to implement. This is because what is easy to envision in a prototyping platform may not be easy to implement on the actual target platform.
Emphasis on interaction in the prototype almost always leads to stubbing of computational functionality. The temptation is to stub the difficult parts of the computational design without first understanding their design requirements. Later, development of the stubbed functions can reveal basic problems that affect the system at many levels above the stub in question. The result is upheaval rather than a smooth progression toward an implementation.
In summary, the scenario of having UX roles as primary product architects tends to push the envelope when it comes to design, with SE playing a "support" role for the overall vision.
23.2.3 Scenario 3: SE and UX as Collaborators
It is not our intention in the previous two scenarios to take sides. We believe that both the SE team and the UX team are essential and complementary. This complementarity is the perspective of our third scenario, which occurs within organizations where the three factors of influence are about even. In an environment of collaboration between SE and UX roles, the two roles work as equal partners with each other and with the business role. Working together, they undertake early analysis activities. The UX roles conduct contextual inquiry and analysis while briefing the SE role periodically on findings and the emerging needs for the product. In other words, a UX role's concerns and analyses for the user interface imply requirements for the SE role, because the SE role has to implement the UI software component of the system. The two roles may also collaborate during this phase and conduct these activities together.
As the UX role undertakes ideation, sketching, and other early design activities, they keep the SE role updated. They ensure feasibility of their explorations and address potential constraints early on. The UX role prototypes the interaction and the SE role designs the backend. The UX role iteratively refines the interaction design via evaluation, while keeping the SE role informed of any surprises or findings with functional implications. The UX role delivers the final prototypes or other models as specification of the interaction design that the SE role implements along with the backend functionality.
This kind of an organizational environment plays to the strengths and expertise of the different roles. When there are discussions, debates, or
disagreements, all opinions are heard and the final decision is left to the role responsible for that area. For example, final interaction design decisions are left to the UX role and final technology decisions to SE roles.
The implicit requirement for this scenario to work is intimate communication and coordination between SE and UX roles. We discuss this further later.
Once this kind of synchronization is established, we have known such organizations to be very productive, with high throughput. These organizations tend to produce quality products (the best user experience within the technology constraints), even if they tend to be more evolutionary than revolutionary in terms of innovation.
In summary, the scenario of having UX and SE roles collaboratively driving a product direction tends to result in productive work environments, which generally produce optimal design solutions given technology constraints. However, there is no overt push to break out of existing constraints and innovate beyond normal progression of the product evolution.
23.3 WHICH SCENARIO IS RIGHT FOR YOU?
This is an important question and, like most things in HCI, the answer is "it depends." It depends on the nature of the product under development, available resources, company culture, expertise of people, and competition in that product area.
In our experience, we have found scenario 1, where SE roles lead the product strategy, to be almost never advisable when a quality user experience is a goal.
Interaction design concerns must take precedence if user experience is a product differentiator in the market.
Scenario 2, where UX roles lead the product strategy, is good for interactive systems trying to push the envelope, break into a market, or displace an existing market leader. This approach allows designers to flex their wings and create an interaction design that is unencumbered by constraints. Often such "pie in the sky" ideas require major changes on the SE side.
Scenario 3 is practical and probably appropriate for most situations.
Separation of concerns (each role concentrating on its own domain while being mindful of the other role's constraints) provides a work environment where things get done quickly without endless debates and
arguments. Because neither side pushes the other beyond "normal" expectations, the end product tends to be functional with a good user experience, but rarely a paradigm shifter.
23.4 FOUNDATIONS FOR SUCCESS IN SE-UX DEVELOPMENT
23.4.1 Communication
Although SE and UX roles can successfully do much of their work independently and in parallel, because of the tight coupling between the backend and the user interface, a successful project requires that the two roles communicate so that each knows generally what the other is doing and how that might affect its own activities and work products.
The two roles cannot collaborate without communication, and the longer they work without knowing about the other's progress and insights, the more their work is likely to diverge, and the harder it becomes to bring the two lifecycle products together at the end. Communication is important
between SE and UX roles so that each has activity awareness of how the other group's design is progressing, what process activity the other group is currently performing, what features are being focused on, what insights and concerns they have for the project, what directions they are taking, and so on.
Especially during the early requirements and design activities, each group needs to be "light on its feet" and able to inform and respond to events and activities occurring in the counterpart lifecycle. However, in many organizations, such necessary communication does not take place because the two lifecycles operate independently; that is, there is no structured development framework to facilitate communication between these two lifecycles, leaving communication, especially cross-domain communication, dependent on individual whim or chance.
Based on our experience, ad hoc communication processes have proven to be inadequate and often result in nasty surprises that are revealed only at the end when serious communication finally does occur. This usually happens too late in the overall process.
There is a need for a role or a system to ensure that the necessary information is being communicated to all relevant parties in the system development effort.
Usually, that role is a "project manager" who keeps track of the overall status of each role, work products, and bottlenecks or constraints. For larger organizations with more complex projects, there is a need for communication systems to automate and help the project manager manage some of these responsibilities.
23.4.2 Coordination
When the two lifecycle concepts are applied in isolation, the resulting lack of understanding between the two roles, combined with an urgency to get their own work done, often leads to working without collaboration and coordination. This often results in not getting the UX needs of the system represented in the software design.
Without coordination, the two roles duplicate their efforts in UX and SE activities when they could be working together. For example, both SE and UX roles conduct separate field visits and client interviews for systems analysis and requirements gathering during the early stages of the project. Without collaboration, each project group reports its results in
documentation not usually seen by people in the other lifecycle. Each uses those results to drive only their part of the system design and finally merge at the implementation stage. However, because these specifications were created without coordination and communication, when they are now considered together in detail, developers typically discover that the two design parts do not fit with one another because of large differences and incompatibilities.
Moreover, this lack of coordinated activities presents the appearance of a disjointed development team to the client. It is likely to cause confusion among clients: "Why are we being asked similar questions by two different groups from the same development team?"
Coordination helps in team building, in communication, in early agreement on goals and requirements, and in each lifecycle role recognizing the value, and the problems, of the other. Working together on early lifecycle activities also gives each role a chance to learn about the other's value, objectives, and problems.
23.4.3 Synchronization
Eventually the two lifecycle roles must synchronize the work products for implementation and testing. However, waiting until one absolutely must synchronize creates problems. Synchronization of the design work products of the two lifecycle roles is usually put off until the implementation and testing phases near the end of the development effort, which creates big surprises that are often too costly to address.
For example, it is not uncommon to find UX roles being brought into the project late in the development process, even after the SE implementation stage (scenario 1 above). They are asked to test and/or "fix" the usability of an already implemented system, and then, of course, many changes proposed by the UX
roles that require significant modifications must be ignored due to budget and time constraints. Those few changes that actually do get included require a significant investment in terms of time and effort because they must be retrofitted (Boehm, 1981).
Therefore, it is better to have many synchronization points, earlier and throughout the two project lifecycles. These timely synchronization points would allow earlier, more frequent, and less costly "calibration" to keep both design parts on track for a more harmonious final synchronization with fewer harmful surprises.
The idea is for each role to have timely readiness of work products when the other project role needs them. This prevents situations where one project role must wait for the other one to complete a particular work product. However, the more each team works without communication and collaboration, the less likely they will be able to schedule their project activities to arrive simultaneously at common checkpoints.
23.4.4 Dependency and Constraint Enforcement
Because each part of an interactive system must operate with the other, many system requirements have both SE and UX components. If an SE component or feature is first to be considered, the SE role should inform the UX role that an interaction design counterpart is needed, and vice versa.
When the two roles gather requirements separately and without communication, it is easy to capture requirements that are conflicting, incompatible, or one-sided. Even if there is some ad hoc form of communication between the two groups, it is inevitable that some parts of the requirements or design will be forgotten or will "fall through the cracks."
The lack of understanding of the constraints and dependencies between the two lifecycles' timelines and work products often creates serious problems, such as inconsistencies between the work products of the SE and UX designs. As an example, software engineers perform a detailed functional analysis from the requirements of the system to be built. Interaction designers perform a hierarchical task analysis, with usage scenarios to guide design for each task, based on their requirements. These requirements and designs are maintained separately and not necessarily shared. However, each view of the requirements and design has elements that reflect constraints or dependencies in elements of the counterpart view.
For example, each task in the task analysis on the UX side implies the need for corresponding functions in the SE specifications. Similarly, each function in the software design may reflect the need for access to this functionality through one
or more user tasks in the user interface. Without the knowledge of such dependencies, when tasks are missing in the user interface or functions are missing in the software because of changes on either lifecycle, the respective sets of designs have a high probability of becoming inconsistent.
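To make this task-to-function dependency concrete, here is a small, hypothetical consistency check of the kind such coordination implies. The task names, function names, and data structures are invented for illustration and are not from any particular project.

# Every UX task should be supported by at least one backend function, and every
# function should be reachable from at least one task. All data here is invented.
hierarchical_task_inventory = {
    "search catalog": ["query_catalog"],
    "place order": ["create_order", "charge_payment"],
    "track shipment": [],                       # no supporting function yet
}
functional_decomposition = {
    "query_catalog", "create_order", "charge_payment",
    "export_audit_log",                         # no task exposes this yet
}

def check_consistency(tasks, functions):
    referenced = {f for funcs in tasks.values() for f in funcs}
    unsupported_tasks = [t for t, funcs in tasks.items() if not funcs]
    missing_functions = sorted(referenced - functions)
    unreachable_functions = sorted(functions - referenced)
    return unsupported_tasks, missing_functions, unreachable_functions

unsupported, missing, unreachable = check_consistency(
    hierarchical_task_inventory, functional_decomposition)
print("Tasks with no backend support:", unsupported)       # ['track shipment']
print("Functions referenced but not designed:", missing)   # []
print("Functions no task can reach:", unreachable)         # ['export_audit_log']

Running such a check whenever either work product changes is one way to surface, early, the inconsistencies described above.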
In our experience, we often encounter situations that illustrate the fact that design choices made in one lifecycle constrain the design options in the other. For example, we see situations where user interfaces to software systems were designed from a functional point of view and the code was factored to minimize duplication on the backend core. The resulting systems had user interfaces that did not have proper interaction cues to help the user in a smooth task transition. Instead, a task-oriented approach would have supported users with screen transitions specific to each task, even though this would have resulted in a possibly "less efficient" composition for the backend.
Another case in our experience was about integrating a group of individually designed Web-based systems through a single portal. Each of these systems was designed for separate tasks and functionalities. These systems were integrated on the basis of functionality and not on the way the tasks would flow in the new system. The users of this new system had to go through awkward screen transitions when their tasks referenced functions from the different existing systems.
Constraints, dependencies, and relationships exist not only among activities and work products that cross over between the two lifecycles, but they also exist within each of the lifecycles. For example, on the UX side, a key task identified in task analysis should be considered and matched later for a design scenario and a benchmark task.
"We Cannot Change THAT!": Usability and Software Architecture
Len Bass, NICTA, Sydney, Australia
Bonnie E. John, IBM T. J. Watson Research Center and Carnegie Mellon University
Usability analyses or user test data are in; the development team is poised to respond. The software had been modularized carefully so that modifications to the user interfaces (UI) would be fast and easy. When the usability problems are presented, someone around the table exclaims, "Oh, no, we cannot change THAT!"
The requested modification or feature reaches too far into the architecture of the system to allow economically viable and timely changes to be made. Even when the functionality is right, even when the UI is separated from that functionality, architectural decisions made early in development have precluded the
implementation of a usable system. Members of the design team are frustrated and disappointed that despite their best efforts, despite following current best practice, they must ship a product that is far less usable than they know it could be.
This scenario need not be played out if important usability concerns are considered during the earliest design decisions of a system, that is, during design of the software architecture. Software architecture refers to the internal structure of the software: what pieces are going to make up the system and how they will interact. The relationships between architectural decisions and software quality attributes such as performance, availability, security, and modifiability are relatively well understood and taught routinely in software architecture courses. However, the prevailing wisdom in the last 25 years has been that usability had no architectural role except through modifiability; design the UI to be modified easily and usability will be realized through iterative design, analysis, and testing.
Software engineers developed "separation patterns" or generalized architecture designs that separated the user interface into components that could change independently from the core application functionality.
The Model-View-Controller (MVC) pattern, http://en.wikipedia.org/wiki/Model-view-controller, is an example of one of these. Separation of the user interface has been quite effective and is used commonly in practice, but it has problems: (1) there are many aspects of usability that require architectural support other than separation and (2) the later changes are made to the system, the more expensive they are to achieve. Forcing usability to be achieved through modification means that time and budget pressures are likely to cut off iterations on the user interface and result in a system that is not as usable as possible.
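For readers who have not seen a separation pattern in code, the following minimal sketch shows the MVC-style separation described above. The class names are invented, and real MVC frameworks differ in many details.

class TemperatureModel:
    """Functional-core state, independent of any presentation."""
    def __init__(self):
        self._celsius = 0.0
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def set_celsius(self, value):
        self._celsius = value
        for observer in self._observers:   # notify views of the change
            observer.model_changed(self)

    @property
    def celsius(self):
        return self._celsius

class TemperatureView:
    """Presentation only; it can be replaced without touching the model."""
    def model_changed(self, model):
        print(f"Current temperature: {model.celsius:.1f} C")

class TemperatureController:
    """Translates user input into model updates."""
    def __init__(self, model):
        self._model = model

    def user_entered(self, text):
        self._model.set_celsius(float(text))

model = TemperatureModel()
model.attach(TemperatureView())
controller = TemperatureController(model)
controller.user_entered("21.5")   # the view prints the updated value

The view and controller can be swapped out without touching the model, which is exactly the kind of independent change the separation patterns were designed to support.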
Consider, for example, giving the user the ability to cancel a long-running command. In order for the user to cancel a command, the system must first recognize that the particular operation will indeed be long enough that the user might want to cancel (as opposed to waiting for it to complete and then undoing it). Second, the system must display a dialogue box giving the user the ability to cancel. Third, the system must recognize when the user selects the "cancel" button regardless of what else it is doing and respond quickly (or the user will keep hitting the cancel button). Next, the system must terminate the active operation and, finally, restore itself to its state prior to the issuance of that command (having stored all the necessary information prior to the invocation of the command), informing the user if it fails to restore any of the state.
In order for cancel to be supported, aspects of the MVC must all cooperate in a systematic fashion. Early software architecture design will determine how difficult it is to implement this coordination. Difficulty translates into time and cost, which, in turn, reduce the likelihood that the cancel command will be implemented.
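The following sketch illustrates, in simplified form, the cooperation just described for cancel: snapshot state before the long-running operation, watch for a cancel request while working, stop promptly, and restore the prior state. All names are illustrative, and a real system would also handle the dialogue box and failure reporting.

import copy
import threading
import time

class Document:
    def __init__(self):
        self.state = {"rows": []}

class LongRunningCommand:
    def __init__(self, document):
        self.document = document
        self.cancel_requested = threading.Event()
        self._snapshot = None

    def request_cancel(self):
        """Called from the UI, for example by the Cancel button's handler."""
        self.cancel_requested.set()

    def run(self, n_steps=1000):
        # Store the information needed for restoration before doing any work.
        self._snapshot = copy.deepcopy(self.document.state)
        for i in range(n_steps):
            if self.cancel_requested.is_set():
                # Terminate promptly and restore the pre-command state.
                self.document.state = self._snapshot
                return
            self.document.state["rows"].append(i)   # the actual long-running work
            time.sleep(0.001)

doc = Document()
command = LongRunningCommand(doc)
worker = threading.Thread(target=command.run)
worker.start()
time.sleep(0.05)           # the user decides to cancel partway through
command.request_cancel()
worker.join()
print(doc.state["rows"])   # [] -- the pre-command state was restored

Even in this toy version, the command, its state snapshot, and the cancel signal must be designed to cooperate, which is the architectural point of the example.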
Cancel is one of two dozen or so usability operations that we have identified as having a significant impact on the usability of a system. These architecturally significant usability scenarios include undo, aggregating data, and allowing the user to personalize their view. For a more complete list of these operations, see Bass and John (2003).
After identifying the architecturally significant usability scenarios important for the end users of a system, the developers (software engineers) must know how to design the architecture and implement the command and all of the subtleties involved in delivering a usable product. For the most part, this information is not taught in standard computer science courses today. Consequently, most software developers will learn this only through painful experience. To help this situation, we have developed usability-supporting architectural patterns embodied in a checklist describing responsibilities of the software that architecture designers and developers should consider when implementing these operations (Adams et al., 2005; Golden, 2010). However, only some usability scenarios have been
embodied in responsibility checklists, and knowledge of the existence of these checklists among practicing developers is very limited.
Organizations that have used these materials, however, have found them valuable. NASA used our usability-supporting architectural patterns in the design of the Mars Exploration Rover Board (MERBoard), a wall-sized collaborative workspace intended to facilitate shoulder-to-shoulder collaboration by MER science teams. During a redesign of the MERBoard software architecture, 17 architecturally significant usability scenarios were identified as essential for MERBoard, and a majority of the architecture's components were modified in response to the issues raised by the usability-supporting architectural patterns (Adams et al., 2005). ABB considered usability-supporting architectural patterns in the design of a new product line architecture, finding 14 issues with their initial design and crediting this process with a 17:1 return on investment of their architects' time: 1 day's work by two people saved 5 weeks of work later (Stoll et al., 2009). For more information, see the Usability and Software Architecture Website at http://www.cs.cmu.edu/~bej/usa/index.html.
References
Adams, R. J., Bass, L., & John, B. E. (2005). Applying general usability scenarios to the design of the software architecture of a collaborative workspace. In A. Seffah, J. Gulliksen & M. Desmarais (Eds.), Human-Centered Software Engineering: Frameworks for HCI/HCD and Software Engineering Integration. Kluwer Academic Publishers.
Bass, L., & John, B. E. (2003). Linking usability to software architecture patterns through general scenarios. Journal of Systems and Software, 66(3), 187-197.
Golden, E. (2010). Early-Stage Software Design for Usability. Ph.D. dissertation in Human-Computer Interaction, Human-Computer Interaction Institute, School of Computer Science, Carnegie Mellon University.
Stoll, P., Bass, L., Golden, E., & John, B. E. (2009). Supporting usability in product line architectures. In Proceedings of the 13th International Software Product Line Conference, San Francisco, CA, August 24-28, 2009.
23.4.5 Anticipating Change within the Overall Project Effort
In the development of interactive systems, each phase and each iteration have a potential for change. In fact, at least the early part of the UX process is intended to change the design iteratively. This change can manifest itself during the requirements phase (growing and evolving understanding of the emerging system by project team members and users), the design stage (evaluation identifies that the interaction metaphor was not easily understood by users), and so on. Such changes often affect both lifecycles because of the various dependencies that exist between and within the two processes.
Therefore, change can be visualized conceptually as a design perturbation that has a ripple effect on all stages in which previous work has been done. For example, during the UX evaluation, the UX role may recognize the need for a new task to be supported by the system. This new task requires updating the previously
generated hierarchical task inventory (HTI) document and generation of new usage scenarios to reflect the new addition (along with the rationale).
Figure 23-2
User interaction design as input to UI software design.
On the SE side, this change to the HTI generates the need to change the functional decomposition (for example, by adding new functions to the functional core to support this task on the user interface). These new functions, in turn, mandate a change to the design, schedules, and, in some cases, even the architecture of the entire system.
Thus, one of the most important requirements for system development is to identify the possible implications and effects of each kind of change and to account for them in the design accordingly.
One particular kind of dependency between lifecycle parts represents a kind of "feed forward," giving insight into future lifecycle activities. For example, during the early design stages in the UX lifecycle, usage scenarios provide insights into what the layout and design of the user interface might look like. In other words, for project activities that are connected to one another (in this case, the initial screen design is dependent on, or connected to, the usage scenarios), there is a possibility that the designers can forecast or derive insights from a particular design activity.
Sometimes the feed-forward is in the form of a note: "when you get to screen design, do not forget to consider such and such." Therefore, when the project team member encounters such premonitions or ideas about potential effects on later stages (on the screen design in this example), there is a need to document them when the process is still in the initial stages (usage scenario phase). When
the team member reaches the initial screen design stage, previously documented insights are then readily available to aid the screen design activity.
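One lightweight way to capture such feed-forward notes is a simple store keyed by the future lifecycle stage, as in the hypothetical sketch below; the stage names and note text are invented.

from collections import defaultdict

feed_forward_notes = defaultdict(list)

def note_for_stage(stage, note):
    """Record an insight now, addressed to a future lifecycle stage."""
    feed_forward_notes[stage].append(note)

def begin_stage(stage):
    """When a stage starts, surface every note that was addressed to it."""
    print(f"Starting '{stage}'. Notes from earlier activities:")
    for note in feed_forward_notes.get(stage, []):
        print(" -", note)

# During the usage-scenario phase:
note_for_stage("initial screen design",
               "do not forget to keep the route summary visible during navigation")

# Later, when the team reaches initial screen design:
begin_stage("initial screen design")

Whether it lives in a tool or on index cards, the point is the same: the note is recorded when the insight occurs and resurfaces when the dependent activity begins.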
23.5 THE CHALLENGE OF CONNECTING SE AND UX
23.5.1 User Interaction Design, Software, and Implementation
In Figure 23-2 we show software design and implementation for just UI software (middle and bottom boxes). While this separation of UI software from
non-user-interface (functional core) software is an acceptable abstraction, it is actually an oversimplification.
The current state of the art in software engineering embodies a well-developed lifecycle concept and
a well-developed process for producing requirements and design specifications for the whole software system. But it does not include a process for developing UI software separately from the functional (non-UI) software.
Furthermore, there are currently no major software development lifecycle concepts that adequately support including the UX lifecycle as a serious part of the overall system development process. Most software engineering textbooks (Pressman, 2009; Sommerville, 2006) just mention the UI design without saying anything about how it happens. Most software engineering courses in colleges and universities describe a software development lifecycle without any reference to the UI. Students are taught about the different stages of developing interactive software and, as they finish the implementation stages in the lifecycle, the UI somehow appears automagically. Important questions about how the UI is designed, by whom, and how the two corresponding SE and UX lifecycles are connected are barely mentioned (Pyla et al., 2004).
So, in practice, most software requirements specifications include little about the interaction design. If they do get input from UX people, they include use cases and screen sketches as part of their requirements, or they might sketch these up themselves, but that is about the extent of it. However, in reality there is a need for UX people to produce interaction design specifications and for SE people to make a connection with them in their lifecycle. And this is best done in the long run within a broader, connected lifecycle model embracing both lifecycle processes and facilitating communication across and within both development domains.
23.5.2 The Promise of Agile Development
In Chapter 19, we attempted such an integrated model in an agile development context. Even though traditional agile methods (such as XP) do not explicitly mention UX processes, we believe that the underlying philosophy of these methodologies to be flexible, ready for change, and evaluation-centered has the potential to bridge the gap between SE and UX if they are extended to include UI components and techniques. As we mentioned in Chapter 19, this requires compromises and adjustments on both sides to respect the core tenets of each lifecycle.
23.5.3 The Pipedream of Completely Separate Lifecycles
Although we have separated out the UX lifecycle for discussion in most of this book, for the convenience of not having to worry too much about the SE counterpart, the two worlds of development cannot exist in isolation, so in this chapter we face our connection to the SE world.
Figure 23-3
UX and SE lifecycles in series.
23.5.4 How about Lifecycles in Series?
Consider the make-believe scenario, very similar to the one discussed earlier, in which timing means nothing and SE people sit around waiting for a complete and final interaction design to be ready. Then a series connection of the two lifecycles, as shown in Figure 23-3, might work.
The UX people work until they have achieved a stable interaction design and have decided (by whatever criterion) to stop iterating. Then they hand off that finished version of the interaction design and agree that it will not be changed by further iteration in this version of the system.
The output of the UX lifecycle used as input to the SE lifecycle is labeled "interaction design specifications as UI software requirements inputs" to emphasize that the interaction design specifications are not yet software
requirements but inputs to requirements because only SE people can make software requirements and those requirements are for the entire system.
We, the HCI folks, provide the inputs to only part of that requirements process.
There are, of course, at least two things very wrong about the assumptions behind this series connection of lifecycles. First, and most obvious, the timing just will not work. The absolute lack of parallelism leads to terrible inefficiencies, wasted time, and an unduly long overall product lifecycle.
Once the project is started, the SE people could and would, in fact, work in parallel on compiling their own software requirements, deferring interaction design requirements in anticipation of those to come from the UX people.
However, if they must wait until the UX people have gotten through their entire iterative lifecycle, they will not get the interaction design specifications to use in specifying UI software requirements until far into the project schedule.
The second fatal flaw of the series lifecycle connection is that the SE side cannot accommodate UI changes that inevitably will occur after the interaction design "handoff." There is never a time this early in the overall process when the UX people can declare their interaction design "done." UX people are constantly iterating and, even after the last usability testing session, design changes continue to occur for many reasons; for example, platform constraints may not allow certain UI features.
23.5.5 Can We Make an Iterative Version of the Serial Connection?
To get information about the evolving interaction design to SE people earlier and to accommodate changes due to iteration, perhaps we can change the configuration in Figure 23-3 slightly so that each iteration of the interaction design, instead of just the final interaction design, also goes through the software lifecycle; see Figure 23-4.
While this would help alleviate the timing problem by keeping SE people informed much earlier of what is going on in the UX cycle, it could be confusing and frustrating to have the UX requirements inputs changing so often. Each UX iteration feeds an SE iteration, but the existing SE lifecycle concepts are not equipped for iteration this finely grained; they cannot afford to keep starting over with changing requirements.
23.5.6 It Needs to Be More Collaborative and Parallel
So variations of a series lifecycle connection are fraught with practical challenges. We need parallelism between these two lifecycles. As shown in Figure 23-5, there is a need for something in-between to anchor this parallelism.
Figure 23-4
Iterating a serial connection.
Figure 23-5
Need for connections between the two lifecycles.
As we mentioned earlier, however, this parallel configuration has the strongest need for collaboration and coordination, represented by the connecting box with the question mark in Figure 23-5. Without such communication, parallel lifecycles cannot work. However, traditional SE and UX lifecycles do not have mechanisms for that kind of communication. So, in the interest of a realistic UX/SE development collaboration without undue
timing constraints, we propose some kind of parallel lifecycle connection, with a communication layer in-between, such as that of Figure 23-6.
Conceptually, the two lifecycles are used to develop two views of the same overall system. Therefore, the different activities within these two lifecycles have deep relationships among them. Consequently, it is important that the two development roles communicate after each major activity to ensure that they share the insights from their counterpart lifecycle and to maintain situational awareness about their progress.
The box in the middle of Figure 23-6 is a mechanism for the communication, collaboration, constraint checking, and change management discussed earlier. This communication mechanism allows (or forces) the two development domains to keep each other informed about activities, work products, and (especially) design changes. Each stage of each lifecycle engages in work product flow and communication potentially with each stage of the other lifecycle, but the connection is not one-to-one between corresponding stages.
Because SE people face many changes to their own requirements, change is certainly not a foreign concept to them, either. It is all about how you handle change. In an ideal world, SE people can just plug in the new interaction design, change the requirements that are affected, and move forward. In the practical world, they need situational awareness from constant feedback
from UX people to answer two important questions: First, can our current design accommodate the existing UX inputs? Second, based on the trajectory of UX design evolution, can we foresee any major problems?
Having the two lifecycles parallel has the advantage that it retains the two lifecycles as independent, thereby protecting their individual and inherent interests, foci, emphases, and philosophies. It also ensures that the influence and the expertise of each lifecycle are felt throughout the entire process, not just during the early parts of development.
Figure 23-6
More parallel connections between the two lifecycles.
This is especially important for the UX lifecycle because, if the interaction design were to be handed over to the SE role early on, any changes necessary due to constraints arising later in the process will be decided by the SE role alone without consultation with the UX role and without understanding of the original design rationale. Moreover, having the UX role as part of the overall team during the later parts of the development allows for catching
any misrepresentations or misinterpretations of UI specifications by the SE role.
23.5.7 Risk Management through Communication, Collaboration, Constraint Checking, and Change Management
Taking a risk management perspective, the box in the middle of Figure 23-6 allows each lifecycle to say to the other "show me your risks" so that they can anticipate the impact on their own risks and allocate resources accordingly. Identifying and understanding risks are legitimate arguments for getting project resources as an investment in reducing overall project risks.
If a challenge is encountered in a stage of one lifecycle, it can create a large overall risk for parallel but non-communicating lifecycles because of a lack
of timely awareness of the problem in the other lifecycle. Such risks are minimal in a series configuration, but that is unrealistic for other reasons. For example, a problem that stretches the timeline on the UX side can eventually skew the timeline on the SE side.
In Figure 23-6, the risk can be better contained because early awareness affords a more agile response in addressing it. In cases where the UX design is not compatible with the SE implementation constraints, Figure 23-3 represents a very high risk because neither group is aware of the incompatibility until late in the game. Figure 23-4 represents only a medium risk because
the feedback loop can help developers catch major problems. Figure 23-6, however, will minimize risk by fostering earlier communication throughout the two lifecycles; risks are more distributed.
23.6 THE RIPPLE MODEL TO CONNECT SE AND UX
To connect the SE and UX lifecycles, we developed "Ripple" (so named because of the ripple effect of a thread of communication), a communication-fostering framework (Pyla, 2009).
The Ripple model, shown in Figure 23-7, describes the specific environment, tool support, entities, and various components involved in a particular interactive system development project. The Ripple model is expressed at a level
of detail that is useful for developers to adopt and employ manually for a particular project context or as a framework on which to design an automated software system to manage the communication required between the two lifecycles.
As an example, a software implementation of the Ripple model would work as follows (using quotes to set off state-change indicators that could be used as communication triggers): A person in a UX role, John Doe, logs into the system, "starts" working on task analysis by selecting that activity in Ripple, which "creates" a hierarchical task inventory (HTI) document, which will be stored in a work product repository.
Figure 23-7
The Ripple model.
The Ripple implementation automatically detects the fact that John Doe started task analysis, and the work product repository automatically detects
the creation of this new work product. Upon creation, these two events are sent to the event queue component, which directs them to the appropriate parties. For example, if a dependency relationship exists between the UX role's task analysis and the SE role's functional analysis ("every task in the UX role's HTI must have one or more corresponding functions to support the task on the backend"), the system automatically sends a message to the functional analysis work activity in SE.
This message will be waiting when the SE role logs in through the developer interface and starts to work on the functional analysis activity. Similarly, when John Doe sends the insight about the need for a new task, the system automatically sends messages to all other developers who work on
task-related activities (e.g., usage scenarios) and this message will be delivered immediately.
23.6.1 The Ripple Project Definition Subsystem
Using a project manager interface, a project manager accesses the Ripple project definition subsystem to specify the component parts of a project, including SE and UX lifecycle types, work activities to be conducted as part of the two lifecycles, roles, and work products.
23.6.2 The Ripple Constraint Subsystem
The job of the constraint subsystem is to represent, monitor, and enforce the various dependency relationships among entities across the two lifecycles during development. Through these constraints, different time-based events in the development space can trigger other events that need to be performed to maintain stability in the design.
Using the Ripple Mappings Description Component, the project manager can declare the different relationships that exist among various entities within the development space. For example, consider the relationship between the SE role's functional decomposition work product and the UX role's hierarchical task list work product: a mapping element must be declared so that a change to one of these work products requires at least a consideration of change to the other.
A project manager declares a mapping between these two work activities in terms of the source work activity (e.g., the HTI, by the UX role), a trigger event that perturbs the design space (e.g., a new task description added to the HTI by the UX role), a related work activity elsewhere in the design space (e.g., functional analysis by the SE role), and the type of relationship (e.g., every task in the UX role's HTI must have one or more corresponding functions to support the task on the backend).
The Trigger Event Listener is a software agent that monitors the event queue for trigger events in order to enforce the declared relationships. For each event arriving at the event queue, the trigger event listener checks the mappings description to identify the corresponding relationship and delegates the enforcement of that relationship to the Relationship Enforcement Component by passing to it the event and its corresponding relationship. For example, when the UX role creates a new task in the HTI, the listener, upon verifying the existence of a relationship requiring the SE role to update its functional decomposition work product, informs the relationship enforcement component to notify the SE role about this change.
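The following sketch, which is not the actual Ripple implementation, shows how a mapping declaration, an event queue, a trigger event listener, and a relationship enforcement component could fit together. All class names, fields, and the sample mapping are invented for illustration.

from collections import deque
from dataclasses import dataclass

@dataclass
class Event:
    role: str            # e.g., "UX"
    work_activity: str   # e.g., "task analysis"
    kind: str            # e.g., "task added", "work product created"
    detail: str

@dataclass
class Mapping:
    source_activity: str    # activity whose events trigger the rule
    trigger_kind: str       # the kind of event that perturbs the design space
    related_activity: str   # the counterpart activity in the other lifecycle
    relationship: str       # human-readable statement of the dependency

class RelationshipEnforcement:
    def notify(self, mapping, event):
        print(f"Notify owners of '{mapping.related_activity}': "
              f"{event.role} role {event.kind} ({event.detail}); "
              f"rule: {mapping.relationship}")

class TriggerEventListener:
    def __init__(self, mappings, enforcement):
        self.mappings = mappings
        self.enforcement = enforcement

    def drain(self, event_queue):
        while event_queue:
            event = event_queue.popleft()
            for mapping in self.mappings:
                if (mapping.source_activity == event.work_activity
                        and mapping.trigger_kind == event.kind):
                    self.enforcement.notify(mapping, event)

event_queue = deque()
mappings = [Mapping("task analysis", "task added", "functional analysis",
                    "every task in the HTI needs one or more backend functions")]
listener = TriggerEventListener(mappings, RelationshipEnforcement())

# A UX role adds a new task to the HTI; the listener routes a message to SE.
event_queue.append(Event("UX", "task analysis", "task added",
                         "new task: bulk export of reports"))
listener.drain(event_queue)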
23.6.3 The Ripple Repository Subsystem
Various work products of the combined design process are stored as a shared design representation in a single repository called the Work Product Repository, with the SE and UX roles having two separate views into this dataset.
Developers are required to post new work products created at the end of each work activity here, creating "posting" trigger events.
The Ripple implementation of this repository has mechanisms to detect time-based events for work products, such as a work product being created or modified, as and when they occur. Once detected, these events are sent to the event queue to be acted upon by the trigger event listener.
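A sketch of how posting to the repository might generate these trigger events follows; the class name, method, and event fields are assumptions made for illustration only, not Ripple's actual design.

```python
from dataclasses import dataclass, field
from queue import Queue
import time

@dataclass
class WorkProductRepository:
    """Shared repository that emits a trigger event whenever a work product
    is posted or modified, so the trigger event listener can react."""
    event_queue: Queue = field(default_factory=Queue)
    products: dict = field(default_factory=dict)

    def post(self, source_activity: str, name: str, content: str):
        is_update = name in self.products
        self.products[name] = content
        self.event_queue.put({
            "source_activity": source_activity,
            "kind": "work-product-modified" if is_update else "work-product-posted",
            "work_product": name,
            "timestamp": time.time(),  # makes it a time-based event
        })

# Usage: the UX role posts a revised hierarchical task inventory.
repo = WorkProductRepository()
repo.post("UX:hierarchical-task-inventory", "HTI", "...task list...")
```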
23.7 CONCLUSIONS
23.7.1 You Need a True Separate Role for Interaction Design
Although we have seen remarkable exceptions where software engineering people are also very good at interaction design, we stand by our conclusion that the UX process generally should not be done by a software engineering person. We need a true, separate role for the interaction designer.
In past years, this role has blossomed into a major career niche, going under many appellations, including user experience specialist, usability engineer, UX practitioner, UX designer, and information architect. While people entered this field from human factors, psychology, computer science, or engineering, now there are academic programs tailored specifically to train people to meet the demand for these skills.
People in all roles in both domains must work together and share in the design and development not just of the user interface, but of the whole interactive system. While these roles correspond to distinguishable activities, they are mutually dependent aspects of the same whole effort. These roles represent essential ingredients in the development process, and trade-offs concerning any one of them must be considered with regard to its impact on the other roles. Ever since we started working in this field, we have believed that cooperating and complementary roles, coming from both software and UX domains, are essential to the design of high-quality user interfaces. The roles require a lot of communication during interaction design and development as they work across the software, interaction, and work domain boundaries within a team of closely coordinated peers.
23.7.2 Sometimes Team Members Need to Wear Multiple Hats
Small organizations or resource-constrained teams sometimes force a situation where both interaction design and software design are, in fact, done by the same person, but that person must be aware of taking on both roles, roles that differ in almost every way, requiring different skills, approaches, methods, and mind-sets. So one person can take on two or more roles, wearing multiple hats. As anyone who has had multiple roles living in one head under those hats will tell you, the key is maintaining the role distinction, keeping the roles separated, and being aware of which activity one is doing at any given time.
Failure to keep the roles separate will subject the hat wearer to a fundamental conflict of interest between the two roles. What is best for users is almost always not easiest for programmers. When push comes to shove, it is far too easy to resolve this conflict of interest in favor of easier programming, at the cost of the user experience. We have seen it far too often and far too blatantly. Cooper (2004, p. 16) puts it well:
The process of programming subverts the process of making easy-to-use products for the simple reason that the goals of the programmer and the goals of the user are dramatically different. The programmer wants the construction process to be smooth and easy. The user wants the interaction with the program to be smooth and easy. These two objectives almost never result in the same program. In the computer industry today, the programmers are given the responsibility for creating interaction that makes the user happy, but in the unrelenting grip of this conflict of interest, they simply cannot do so.
So, wearing multiple hats requires the wearer to be faithful to the needs and goals of each hat. While you are reading this book, you should be wearing your interaction designer hat.
23.7.3 Interaction Design Needs Specialized Expertise and Training
Significant training and educational prerequisites for the software engineer's role are obvious, but how hard can it be to make an interaction design? Do you really need a whole different role just to do interaction design? It definitely does not take a rocket scientist.1 Isn't it just common sense, something most anyone on the development team can do if they put their minds to it? If it were just common sense, we would have to wonder why good sense is not more common in the designs we use.
It is especially easy for software people to think that they can do interaction design with the best of them. Talk with many programmers about user interfaces and you will hear about widgets, interaction styles, callbacks, and everything you will need to build a user interface. You may sense a feeling of confidence-a feeling that helpful style guides and new interface software programming tools have catapulted this programmer into a real user interface expert.
Anyone, in fact, can produce some user interaction design. However, just because a programmer has an interface software toolkit does not mean that he or she can produce a highly usable interaction design, and it does not mean that he or she will necessarily know a good user interaction design when seeing one. In our experience, we have actually encountered junior software folks smiling broadly as they wave the standards or guidelines manual and proclaim that not only can they now create the interaction design, but there "will not be any need for UX testing if I just follow the guidelines."
As we now know, there are significant prerequisites for the interaction designer's role, too, including psychology, human factors, industrial design, systems engineering, and everything in this book! But computer science and software engineering are not among those prerequisites. Sure, design guidelines are important, but what is less well understood is the absolute necessity for a good UX lifecycle process, including lifecycle concepts, process activities, and techniques. Additionally, there is the song we played in the Preface: a requisite mind-set for truly appreciating the plight of the user.
1What is the big deal about comparing everyone to a rocket scientist? You really have to know only one thing to be a rocket scientist: rocket science.
23.7.4 Success Criteria for Developing Interactive Systems
Although we have talked much about processes, the bottom line is that the success of an interactive system development project is, at its base, all about the people. If team members in each role have respect for the other roles, and each team member has the requisite capabilities to carry out the assigned roles, the project will find a way to succeed.
Experienced developers already appreciate the importance of communication but, in the fog of battle, people get busy and consumed with their own responsibilities. So, the project needs to be infused with reminders to maintain communication about activities and progress, especially about problems and changes.
Finally, because resources are always limited, the team must act in ways that take utmost advantage of the resources it has. Among other things, this translates into staggering the two corresponding lifecycles so that one does not constrain the productivity of the other, and constantly maintaining situational awareness of the overall process and the design.
Making It Work in the Real World
Objectives
After reading this chapter, you will:
1. Understand what it takes to put the UX process to work as a practitioner in the field
2. Know how to be a smart UX practitioner
3. Know how to participate in UX professionalism
4. Understand the impact and limitations of cost-justifying a UX process
5. Appreciate the business and politics of UX within your organization
24.1 PUTTING IT TO WORK AS A NEW PRACTITIONER
Here is a little advice on getting started in applying the UX process within your organization. Some readers will already be engaged in the UX process and can ignore the points that no longer apply.
24.1.1 Professional Preparation
Find someone with whom you can apprentice
If possible, as you are getting started, find an established UX practitioner, in your organization or elsewhere, with whom you can work to "learn the trade." Just following an expert around for a while can give you a great deal of confidence to try some of the UX process activities on your own.
It is especially important for you to sit in on design sessions and observe UX evaluation sessions. In a small company, it may be hard to find a knowledgeable person with whom you can apprentice; you may be the resident expert! In most large companies, however, you should be able to find someone suitable.
Get training for project team members
Get appropriate training on these new techniques for members of the project team, especially those who are being given responsibility for the UX process. Even those who are not involved directly in the UX lifecycle process can benefit
from some formal training because these ideas may be dramatically different from those they encounter in their own domain. Having all members of the team share a common baseline of knowledge in these techniques helps make it all work.
Get consulting help when needed, especially during start-up
By having an expert around while you try these activities the first time or two, you will learn a great deal more about how to do them and how not to do them and you will gain skills and confidence that will allow you to continue with subsequent activities yourself. There are two sources of consultants that you can tap.
If your organization is large enough, there are probably already people somewhere in it whom you can bring in to help you get started. If not, if you are breaking entirely new ground in your organization by trying these ideas or if your organization is fairly small, then you may want an outside consultant to help you get started. While this may sound like an expensive proposition, remember what Red Adair, the famous Texas oil-well firefighter, said when someone confronted him about his costs for putting out oil-well fires: "If you think the experts are expensive, wait until you bring in the amateurs!"
Start a regularly scheduled brown-bag UX lunch bunch
Within your project, your organization, or your community start a regular get-together for people with a mutual interest in UX design. This kind of a support group can have many purposes, from serving as a critique group for emerging interaction designs; to getting advice on some particular process activity; to sharing experiences with the process; or to being an educational
forum for presenting and sharing relevant topics, showing videotapes of interest, and so on.
Perhaps most importantly, a special interest group for UX raises awareness of the UX process activities that are happening. Publicize it widely, on electronic bulletin boards and any other communication medium you have available.
Begin by meeting once a month, and then meet more often if interest and attendance warrant it. Alternatively, subgroups with an interest in some specific topic(s) may want to meet more often. Many places that have tried this idea have been amazed at how quickly their group has grown and how popular and effective it can be.
Start a small internal newsletter and/or electronic bulletin board specifically related to UX activities in your organization
A nice spin-off to the brown-bag lunch idea is a small newsletter to serve as another forum for exchanging ideas. This newsletter can be published electronically. In it, you can talk about actual evaluation sessions, suggest readings from new articles and books, give conference reports, and relate success stories-essentially the same kinds of things that you discuss during the brown-bag lunch groups.
An internal electronic bulletin board or a blog is also an excellent medium for exchanging information, asking questions, posting answers, making suggestions, and so on. This kind of communication will increase the visibility of human-computer interaction (HCI) and UX greatly in your organization.
Attend conferences related to human-computer interaction and UX
The Usability Professionals Association (UPA)1 has an annual conference that appeals to practitioners in the field. Sponsored by SIGCHI, a special interest group of the Association for Computing Machinery, the Conference on Human Factors in Computing Systems2 (known as the CHI, pronounced like the Greek letter χ) is the largest annual conference on HCI. CHI has a decidedly research flavor but features many activities and attractions oriented toward practitioners, too. CHI has a variety of activities, including the standard fare of paper presentations, panels, and poster sessions. It also has special-interest group meetings; impromptu birds-of-a-feather sessions; book exhibits; demonstrations of tools and other applications by both research and commercial groups; and exhibits of unusual, often futuristic, user interface technology.
For the new or aspiring UX practitioner, we especially recommend the UPA conferences and their Body of Knowledge project.3 The mission is to create "a living reference that represents the collective knowledge of the usability profession and provides an authoritative source of reference and define the scope of the profession."
In addition, the annual User Interface Software and Technology Symposium (UIST)4 is a smaller, single-track forum for exchanging state-of-the-art ideas and results, more on the software side of things. The Human Factors and
1http://www.upassoc.org/ 2http://www.sigchi.org/conferences
3http://www.upassoc.org/upa_projects/body_of_knowledge/bok.html 4www.acm.org/uist/
Ergonomics Society5 also has conferences with many sessions dedicated to user interface issues. HCI International,6 Interact,7 CSCW,8 DIS,9 and other conferences also abound.
We also recommend the Interaction Design Association (IxDA), a global network dedicated to serving the professional practice of interaction design and the professional needs of an international community of practitioners, teachers, and students of interaction design. The "IxDA network provides an on-line forum for the discussion of interaction design issues and provides other opportunities and platforms for people who are passionate about Interaction Design to gather and advance the discipline."10
Prepare a UX portfolio
Also, if you are looking at the job market, it is time to compile a portfolio of your existing UX work. Many companies interviewing for new UX professionals are asking for this now. Highlight the process you followed, the prototypes you created, the redesigns you made, and so on. Your portfolio must tell a story of each design project you undertook: the users affected, the challenges faced, and the innovation provided. Make it visual with design sketches, screen images, and other design artifacts, with appropriate annotations. Include surprises and unique insights. Use it as a conversational prop when you are presenting in person, for example, at a job interview.
24.1.2 Administrative Preparation
Get a commitment from management to try these new techniques
You cannot operate in a vacuum. In most organizations you need permission to try new things. Share what you know about UX and get your management committed to trying it. First, lay out your UX process plan, at least roughly. Then, have a one-on-one meeting with one or more key upper-level managers and convince them to let you try your plan.
If you are prepared and keep the plan pretty simple, chances are very good that you will get the support you want. Ask this manager to call a meeting to discuss the plan with the project team. Let the manager run the meeting, as if it is all the manager's idea. If this does not work, you run the meeting, but have the manager there to support you.
5hfes.org/
6http://www.hci-international.org/ 7http://interact2011.org/ 8http://www.cscw2012.org/ 9http://www.dis2012.org/ 10http://www.ixda.org/
Establish UX leadership
Get at least one person on the project team who can be the UX leader. Maybe this person is you! If it is not possible, for whatever reason, to get a full-time person, start with a part-time person. Find a way with management to give that person primary responsibility for design, evaluation, and iterative refinement of the interaction design.
Also give that person the authority to carry out the responsibilities of the job.
Later, as the importance of this role becomes more recognized and appreciated within your organization, you can add other people to your emerging UX team.
Get a commitment from project team members to try these new ideas
Those members of the team who are not responsible for developing the interaction design should be made aware of what those who are responsible for it will be doing and why. Get at least some level of commitment from these
non-user-interface people for the ideas you will be trying out so that they will know what to expect.
Generate a failure story and then a success story, no matter how small
Often, when managers and team members are asked "What will it take for you to get approval to begin trying some of these new ideas?" they respond, "Failure!" To convince people that these ideas will work, start by showing them failure when the right process is not used for developing the user interaction design. Set up some version of a system that needs a lot of UX improvement.
In your UX lab, make a 5-minute video of a user having a really terrible time trying to use the interface. Using the techniques presented in this book, revise the interaction design, or at least the worst part of it. Then make another
5-minute video of a user, the same one if it is feasible. Use the revised design to perform the same tasks as in the first video. Presumably, of course, the user will love-or at least like and be able to use-the revised design.
Show the two video clips to managers and explain to them the process that got you from the first version to the second one. If your video clips are different enough, they will make the point for you dramatically. What managers will usually want to know after such a presentation is "Why didn't we start using this
UX process before now?" This success story, demonstrating the effectiveness of the process in action, can do more to help sell these ideas than almost anything else you can do.
24.1.3 Technical Preparation
Start a blog about your UX activities
It will be a valuable and illuminating experience to maintain a record or journal, as you go, of how you applied various techniques in the UX process and how well they worked. Maintain it as an online blog and others can participate.
You will also impress your teammates with the ability to recall what you all decided earlier and it might help keep the team from going in circles and reinventing process ideas.
Get some practice doing contextual inquiry and analysis
On your next project, follow some of the steps of contextual inquiry and analysis and go out and interview and observe customers and users in the application domain. You will be surprised how easy and effective it soon becomes.
Personalize and actualize a process
Throughout this book we have encouraged you to personalize the process, taking from our process what works, what you can afford, and what meets your goals for a project. Now is the time to codify and document those process and technique selections and actualize them-put them into action.
Marc Rettig (1992), whose HCI and UX writing has resonated with us over the years, gave this advice back in the 1990s to software programmers who found themselves in a position where they had to do interaction design: Get a process. He offers this "catchy truism," "good management means doing the right things, and doing them right." Doing the right things is about having a process.
Guidance in doing them right is given in the techniques in the process-oriented chapters, the techniques that support the lifecycle process.
Set up a UX lab
Find an enclosed (or enclosable) corner, a broom closet, a vacant office, some space somewhere, and make it your official UX lab. This single activity, along with getting a UX practitioner on the project team, can have a huge impact on attitude toward these new ideas.
Put a big, bold sign on the door. People will wonder what is going on in there and will start asking questions about what a UX lab is and what it is to
be used for. This will begin raising awareness about the increasing importance of UX in your project and organization-good PR! Get in the minimal equipment recommended and then-starting small-use it to do some formative evaluations of your evolving user interaction designs.
24.1.4 Give It a Try
Start small
There is a lot of material in this book. The best way to get it under your belt in real projects is to start small and work up to the whole process. Choose an interaction design project that is small enough so that you will not be overwhelmed from the beginning as you apply these new techniques.
If you are required to work on a large project, choose some reasonable portion of it to focus on initially. Select, for example, a smallish subsystem of your large project or a few of its most important functions and features.
The project (or part of the project) you choose should be one that has some visibility, but that is not extremely high risk.
As Nielsen (1994c) said, "Anything is better than nothing." People often fear that they will not be successful the first time they try these techniques. These techniques are so effective that you almost cannot lose. Any data you collect from even a short session with a single user is invaluable input
that you can use to make improvements in the interaction design. Do not be afraid to try these techniques; you will become comfortable with
them quickly.
Prototype and evaluate only a core part of the interaction design the first few times you attempt to do formative evaluation
If you try to encompass too much of the interaction design in the initial prototype, you will probably spend too much time developing it, and you could become overwhelmed if you attempt to evaluate all parts of it. For your first few prototypes and subsequent formative evaluation cycles, incorporate a
core set of functions, those functions without which a user cannot perform useful work with the system being developed.
Keeping the prototype small will allow you to keep the formative evaluation process manageable until you become more knowledgeable and confident with it. Later prototypes can, and of course should, include much more of the system functionality.
Do some observations of users with a prototype of the interaction design
If you cannot get management to agree to let you try all these ideas at once, then at a minimum get them to let you either go off-site or bring in one or two participants, whichever is most appropriate for your situation, for a short period of time-2 hours, half a day, a day-to evaluate your interaction designs. Informally observe people using the system and give management a short report on your observations. Include in your report the major problems identified and the expected impact of making changes to the interaction design based on your observations.
Have developers and managers watch at least one participant from an evaluation session
Often, developers, even after training, and managers, even after realizing the need for UX, are still reluctant to believe in the UX lifecycle process. One of the best ways to convince both developers and managers, for example, that evaluation with users is critical to ensuring a quality user experience is to have them observe some participants.
Once you get your UX lab set up, this is easy to do. Schedule a specific time for them to come to the lab and watch at least one participant during an evaluation session. If you have a video hookup or a one-way mirror with which they can observe from a different room than where the participant is working, that is best for the participant. If developers or managers simply will not come and watch a participant live during an evaluation session, show them a few short, carefully selected video clips of some sessions. This will go a long way toward convincing skeptics about the value of these techniques.
24.2 BE A SMART UX PRACTITIONER
As you gain experience, you will learn "tricks of the trade" that will make you more valuable as a UX practitioner on each project. We have said many times in this book not to apply the process blindly but with judgment. Find the most economical level of commitment to the process and be flexible in its application.
In a 1993 workshop (Atwood, 1994), researchers and practitioners pooled their experience to compile a list of tricks of the trade. The results are still relevant today.
• Better be fast and mostly right than slow and perfect. This follows our engineering advice to make it good enough, but not perfect. Engineering means "satisficing" (Simon, 1956).
• Chase what gives the most bang for the buck. UX practice is a cost-benefit balancing act. When you get good at this, you have increased your cost-benefit to your organization. When you can prove it to management, ask for a raise.
• Distinguish between customers and users.
• Serve as a catalyst/lightning rod. The UX practitioner has the advantage over many other jobs in the organization by being responsible for talking with customers, users, and developers.
• Push what works.
• Know when to turn it over to product development. Do not get "married" to your designs and prototypes; cut them loose-discard them if they are not working or let them graduate when they are ready.
• Know the development environment and the developer's concerns. Make sure your designs are well received. If you have been communicating all along, there should be no surprises at this point.
• Know the customer's and the users' concerns: the goal of contextual inquiry and contextual analysis.
We have a few of our own to add.
• Make yourself a best-practices list by choosing from options in this book.
• Make your paper prototypes work economically for you. Before you go to the lab for UX testing, use low-fidelity prototypes to be sure of the concepts, language use, nonexistence of showstoppers, and effective task flow. The UX lab is not a cost-effective place to discover the right verbs for button labels.
• Evaluate continuously, throughout the lifecycle.
• Early evaluation is to find UX problems, not performance measurements.
• Use goal-directed choices for process and techniques. It is not about which process or method is best but which works best under a given constraint in a given context.
• Get your software developers to agree on the process and what it means. For example, do not let them take your low-fidelity prototype too soon and start designing screens to match it exactly before you have iterated and worked out the details.
24.3 UX PROFESSIONALISM
Beyond the preparation for project work we recommend professional career preparation, including membership and participation in professional UX societies, conferences, and workshops. Find out which HCI and UX publications
are most relevant to your interests and subscribe. Join the Usability Professionals' Association (UPA), the ACM Special Interest Group on CHI (SIGCHI)11-local and/or national-and/or any other professional organizations appropriate for your background and interests.
Get involved in a professional society and help steer it toward useful goals.
Morris (2005) makes the case for stronger representation of HCI or UX as a profession to business. We have been looking inward to how getting
organized within a professional group can help us all be better practitioners, which is good. But an effective professional organization succeeds by supporting business in areas related to the profession.
He cites the American Chemical Society, for example, as an organization that provides for chemists and their employers such services as employment registration and competitive analysis tools regarding salaries. Morris feels that we in HCI have not yet reached business with this kind of attention and are, therefore, often more or less invisible to management.
Stewart (2002), leader of System Concepts in the United Kingdom, gauges the HCI profession as finally becoming successful and tells us how to keep that from becoming a danger to us. He thinks that our growing acceptance as a profession will demand more from us as professionals, including improving our credentials and competence. He wants to see us more as a real profession and less as a black art and worries that otherwise acceptance of the importance of UX will unseat us because managers will see it as a function too important to leave to us.
24.4 COST-JUSTIFYING UX
One of the earliest articles stating concern over the cost of usability was by Mantei and Teorey (1988). Since then there have been many articles and a few books dedicated to the topic. Most notable is the Bias and Mayhew (1994) edited collection. The book starts by posing very important questions to which we need the answers when we propose doing usability or UX to our managers: How much usability or UX are we going to get and how much will it cost?
How will we know we are getting it, and how much more money will it make for us on our products? The book continues with a framework for answering these questions and a discussion of the business case for doing so, and it offers some different approaches and case studies.
11http://www.sigchi.org/
The second edition (Bias & Mayhew, 2005) extends the ideas to the Web and is reviewed by Sutton (2007). Among the pioneers in cost-benefit and the business case analysis of usability engineering is Karat (1990a, 1990b,
1991, 1993).
24.4.1 Cost Cutting Is Not Always the Best Idea
Sometimes cutting costs can save otherwise wasted resources, but sometimes cost cutting directly reduces what you get in return. Cooper (2004, p. xxiii) is leery about an obsessive appetite for slashing costs at every turn: "unfortunately, most executives have an almost irresistible desire to reduce the time and money invested in programming. They see, incorrectly, the obsolete advantage in reducing costs."
Similarly, going with the lowest bidder is not always the best idea. For example, there is a story about the IRS buying a system from the highest bidder, one that gave them the highest payback in increased productivity.
24.4.2 Cost-Benefit and Business Case Analysis of UX
As Siegel (2003) points out, we can be very impatient about having to prove our value in a business case to our organization. We see the value of usability and UX as self-evident or we would not be working in the area. So, when business decision makers do not see it as clearly, we get frustrated.
Casting a broad net
Sometimes managers require cost-benefit analyses to be convinced of anything. In the case of UX, as in many cases, the customer, the person with authority to purchase an application or sign a development contract, is not always the final user. The distance between cost and benefit can be great. As is often the case, UX usually has us paying the cost in one place and accruing the benefits in another: developers pay a cost to develop, customers pay a cost to own, and users pay a cost to use.
The concept of total cost of ownership (TCO) leads us to cast a broad net when looking for all the costs and benefits. As George Flanagan (1995) has told us, "the cost of end-user computing is greater than typically estimated and labor is the most significant component." He cited a survey of 500 business computer users in which usability was the characteristic identified most often with quality.
Weiss (2005) tells how the distance between usability development cost and benefit can be quite large in the telecom industry. Manufacturers bear the brunt of usability costs of producing mobile handsets, which they sell to carriers who retail them to their subscribers.
The fact that a manufacturer did, for example, usability testing is not an immediate selling point to carriers. The benefits of good quality in the phones and the penalties for bad design are felt by the end consumers. But, of course, the marketing impact can eventually trickle back up to the carriers and manufacturers so they must be concerned with usability and other quality factors to survive in the long run.
A rational argument about UX cost-benefit
Can we afford to include UX techniques in our system development process? Instead of the standard pat answer of "can we afford not to?," we think we can help shed some light on the question rather than the answer. Cost-benefit analyses have shown with dollar figures that usability or UX process costs are often quite low, especially in comparison to the benefits. But, to some, that is a counterintuitive finding because development costs always seem to be high.
"No-Risk" Usability Support
Randolph G. Bias, Ph.D., CHFP, Associate Professor, School of Information, The University of Texas at Austin12, and Principal, The Usability Team13
One Wednesday when I was a full-time consultant our team received a call from a U.S. rental car company. Well,
actually it was from the vendors who were responsible for designing the rental car company's Website. It seems they had a problem, they realized it was a usability problem, and could we run a usability study for them and make redesign recommendations? Sure, no problem. "By Friday?" Whoa. We were confident we could find representative users, business and leisure travelers, to serve as test participants. But we negotiated a delivery date of "this coming Monday" and got to work discussing their perceived problems, designing the usability study, and recruiting test participants.
We worked through the weekend, of course, and ended up grossing $14,000 for our 5-day gig, which ended with a deliverable of a usability test report complete with prioritized usability problems and recommended redesigns. The contracting team was pleased with the results and set about implementing some large subset of our recommendations.
In 1994, and again in 2005, Deborah Mayhew and I published edited volumes on Cost-Justifying Usability. I believe strongly that we usability professionals can help ensure our seat at the software development table so that we can
12http://www.ischool.utexas.edu/~rbias/website/ 13www.theusabilityteam.com
maximally, positively influence the users' experience by attending to and comparing the tangible costs and benefits of our work. One of the challenges with such an approach is that there are just about always confounds-at the same time as usability improvements are made, there are also changes made to the marketing message, increases in the sales force, or any number of other changes, making it difficult to attribute with confidence any certain fraction of new benefits to the usability effort alone.
The joy of this exercise with the rental car Website team was that the only changes they instituted, in this revision, were those motivated by usability testing. A few weeks later I called them and asked if they had any data on improvements in user performance on their Website. The team was thrilled to report that from the first day of the new design they had realized a $200,000 per day bump in revenue, and that even if everyone who had failed to secure a reservation on the online site before the redesign had subsequently called the toll-free number to reserve a car, the company still was realizing a $50,000/day increase in revenue. Thus, the payback period for the usability investment, for this particular engagement, was "before lunch on the first day."
As someone who is convinced that the state of the Web and other software design world is such that investments in professional, systematic usability engineering are just about always "worth it," this experience has led me and my current consulting partners (The Usability Team; www.theusabilityteam.com) to offer what we are calling "usability on spec" or "no-risk usability." We will work with you (Mr. or Ms. Web-or-Other-Software-Designer/Developer) to come up with a plan for some usability support, based on your site/product, your historical user data, the stage in your development process, and other variables. Then we will do the usability work for you, for free! But we will also negotiate some small percentage (say, 5%) of the measurable benefits realized after the usability improvements are implemented. [Note, in the example given earlier, using the conservative $50,000/day figure and assuming the same effect across a year, the company realized an $18 million benefit for their $14,000 investment, an approximately 1300:1 return on investment (ROI), and a 5% fee would have been over $900,000. Nothing ventured, nothing gained, eh.] While not every usability study will yield a four-figure ROI, and not every study will allow for such a crisp connection between usability costs and subsequent tangible benefits, we are eager to help all realize the importance and value of professional, systematic usability engineering of their customer-facing user interfaces.
However, usability or UX engineering, if done right, does not necessarily add greatly to overall development cost. The first reason is that most of the usability or UX costs are concentrated in early parts of the overall product lifecycle. Much of the UX engineering should be done before the system is implemented in software.
Out of the entire overall system development process, only a small cycle of analysis, design, prototyping, and evaluation represents the part associated with most UX engineering costs. Also, this mini-cycle is small and lightweight in comparison to other parts of the overall process, if it can be accomplished before a commitment to implementation in software.
Yes, this mini-cycle must necessarily be iterative, but it is only a small, lightweight part of the overall process. It is hoped that you can rearrange your project budget so that you do not use more resources for development overall, just different resources with a different distribution during the lifecycle.
Our second claim is that good usability saves on many other costs. As many of the writers on UX cost have said, "Pay me now or pay me more later." Poor usability is costly; good usability is all about saving costs. UX process costs are mostly one-time costs; operational costs can accrue for years. Downstream costs of poor usability can be substantial.
Usage costs, such as for lost user productivity, employee dissatisfaction, heavy user training, help desk operations, field support, or the cost of user errors, get more attention if those users are your employees. User errors are sources of costs that can keep cropping up over time if they cause other problems in your operation, such as database corruption.
Perhaps the cost of poor usability is the highest in the e-commerce world of the Web, where a bad design can mean lost revenue and losing a competitive edge in a fast-moving marketplace. The Internet is where you absolutely
must avoid releasing something that will embarrass you and the organization, despite the pressure to develop in "Internet time."
Costs of not having a good UX
According to a 2006 AP wire report (The Roanoke Times, 2006), a "Defense Department's computerized travel reservation system turned into a half-billion-dollar fiasco, so flawed that only 17 percent of the travelers are using it as intended, Senate investigators say." It was supposed to be the Pentagon's private version of an Internet travel site, but it took a half-hour to book a simple itinerary that a regular travel agent could have booked in 5 minutes, and it was missing flight and hotel information and did not always provide the least expensive options.
In a similar story, Reuters UK Edition news service reported that a Taiwan stock trader in 2005 bought over $200 million (value in US$) worth of shares with one mis-stroke on her computer keyboard, causing a panic reaction on the market and an immediate loss to her company of $12 million.
The cost of poor usability also weighs in heavily at the help desk. According to Flanagan (1995) in a 90-day study of incoming calls to help desks for 24 different software products, over 60% of all calls were determined to be about usability issues and for 11 of the applications, over 90% of the calls were related to usability.
The cost of correcting a usability problem in a design depends on the stage of the project in which you catch the problem. Naturally, the earlier it is detected, the less it costs to fix. Early on, Mantei and Teorey (1988) stated that as problems are found later and later in the lifecycle, costs associated with fixing them increase in a geometric progression: a problem that costs $1 to fix in early analysis can cost $10 to fix in design, $100 to fix in a prototype, and $1000 to fix after deployment. On the other side of that coin, it has been estimated that for every $1 we spend on usability, we get from $2 to $10 in return from the market (Kreitzberg, 2000).
Is return on investment (ROI) the right place to look?
Most practitioners and management would agree that good usability and good UX are about good business, not about "being nice." So, if good usability makes for good business, what measures can be used to prove it? Certainly cost savings within the process are one way, maybe by comparing the "with and without usability" cases. For example, back in 1993, we heard of a NYNEX project in which the company saved $1 billion by prototyping and iteratively refining the interaction design for a voice-activated telephony system (Thomas, 1993). Such success stories are impressive, but rare. Beyond those, not everyone believes in the power of ROI calculations.
Daniel Rosenberg (2004), who oversaw UX at Oracle, says that ROI is a phantom not worth chasing. In his 20 plus years of experience, he has never been asked to produce an ROI analysis. He thinks that the kind of ROI analyses in the HCI literature do not fit the real world he lives in. Rather, he defines his professional goal as adding value to products through an improved UX. He is part of the commercial software industry that produces large and complex software suites for use in companies all over the world. It is a world in which a single sale of a system can bring in millions of dollars for a system that will be used by thousands of concurrent users.
At the other end of the scale are the small internal IT projects where, he says, the argument is usually made for usability ROI. Unfortunately, he said, "case study" stories based on little data get spread in the literature as "myths." The promise of better data is at least partially blocked by corporate legal departments who consider development cost data to be confidential. He points out correctly that the economics of software production are complex and contain too many confounding factors to do a convincing "before and after" or "with and without" comparison.
For example, increased revenue from a product that UX practitioners believe has been improved through usability testing might instead be due to changes in prices, size of the sales force, emphasis in a marketing campaign, and so on.
Instead, Rosenberg proposes that we consider more strategic indicators, indicators of longer term effects of product value for executives and upper management, a key example of which is the customer relationship. Customers of large commercial software systems can have an ownership or usage lifecycle of a decade, including in-the-field fixes and upgrades to new releases.
Over that time, total ownership costs can add up to much more than the original purchase price. Whereas ROI can be an internal fascination about how to save development costs, total cost of ownership (TCO) is an external measure of how well your product is working for your customers, a measure of the
real value your product provides. He says that in business, saving money is tactical but making money is strategic.
Our friend Gitte Lindgaard (2004) balances Rosenberg's view by saying that, the absence of requests for ROI justification notwithstanding, "if we want our contribution to be taken seriously by other stakeholders, we absolutely must demonstrate the business value of HCI." We need to speak "clearly to business decision makers and target issues that are truly of concern to them." We must find the issues that will provide the most persuasive arguments and apply the most appropriate analysis techniques, and Lindgaard gives some compelling examples of doing just that.
Bloomer and Croft (1997) echo this sound advice, "start by finding the 'hot buttons' of the group," such as enhancing customer service, improving product quality, or reducing operational costs-and get data about problems in these areas. Even though most of us will never be asked to produce a usability justification analysis, we can and should seek actively to understand the broader business context of our work and find ways to take the initiative within that scope to define, address, and solve the organization's business problems.
Lund (1997a) looks at the problem of economic justification of usability and UX as more than just showing the difference between costs of usability and savings from usability in a project. That kind of data is going to be for a project that is in the past, but a company is interested in whether it is worth maintaining a permanent usability group in the future. What is the value of a UX group in the long-term, including projects that never get to market and, therefore, never generate revenue?
Lund takes a corporate bottom-line view in which the value of any activity or group in the company is assessed with respect to how it affects earnings. That translates directly into decreasing company costs and increasing product revenues, fundamental factors to which he says we must tie arguments for
our existence. This includes helping the company identify new business opportunities emerging from technology and ideation that keep UX designers
engaged in contributing to the business. In his company they keep track of the value of new product ideas generated by each department and the resulting revenues from those ideas maturing into marketed products.
Siegel (2003) gives us some advice on how to approach our own persuasive business case for UX. If you do use cost and savings figures, "show a conservative bias." Stretching the truth in your estimates can injure your credibility and can build false expectations. As Lindgaard also said, target your analysis to recognized company concerns. They will get the ear of management faster than new ideas from the "outside." It may be necessary to lay the groundwork for your arguments by establishing the right metrics for cost, for example, and taking enough data before making the justification case.
Do not promote your approach as a "new paradigm" that will "save the company." Suspicion that your proposal might be ideologically motivated can raise stiff resistance. Instead, bill it as an effective way to pursue established company goals and help the company do what it is already doing, such as understanding its users. If appropriate, invoke the company mission statement.
One of Siegel's points encountered often in our consulting and industry practice is about "incrementalism." The usual approach to cost-benefit analysis is to look at the value of a product before and after applying usability testing, for example, to improve the design. However, this "incremental" approach ignores the "order-of-magnitude" improvements a very bad design can get from a complete re-analysis and redesign and not just usability testing.
A small example of cost savings
For a large distributed system, a very large government organization we worked with had about 75,000 active users at any point in time. On average, we showed, for one particular task, that the number of transactions per user in a day was about 20. This added up to a daily frequency for this one transaction of 75,000 × 20 = 1,500,000.
The user time per transaction ranged from 5 to 20 minutes. We determined that the average time saved per transaction, due to one specific improvement in usability, was about 30 seconds. At that time, the average fully loaded hourly rate for these agents/clerks was $25.00, so the average annual savings for just this one task and this one modest usability improvement, not counting other savings, such as for user training or help desk costs, were
75,000 users × 20 transactions/user-day × 0.5 minute/transaction × 230 days/year × $25/hour × 1 hour/60 minutes = $71,875,000.00 per year.
For any reasonable usability engineering cost for this product, the payback is enormous. Managers will pay attention to this kind of cost analysis because they do similar analyses themselves for budgets. Also, you can remind them that long after schedules are forgotten, the user experience, good or bad, remains.
Mayhew (2010) offers a free downloadable cost-justification tool for these calculations.
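For readers who want to try this arithmetic with their own numbers, here is a small sketch that reproduces the calculation above; the figures are the ones given in the text, including the 230 working days per year.

```python
# Annual savings from one modest usability improvement, using the figures above.
users = 75_000
transactions_per_user_day = 20
minutes_saved_per_transaction = 0.5
working_days_per_year = 230
hourly_rate = 25.00  # fully loaded rate, in dollars

hours_saved_per_year = (users * transactions_per_user_day
                        * minutes_saved_per_transaction
                        * working_days_per_year) / 60
annual_savings = hours_saved_per_year * hourly_rate
print(f"${annual_savings:,.2f}")  # prints $71,875,000.00
```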
Strategic planning for better UX in the future
The natural time for arguments in favor of adequate budget and schedules to allow for quality UX design is at the beginning of a project, even before a project begins. That is when such resources are allocated. However, that is also when everyone, especially managers, is enthusiastic (and most unrealistic) about getting it done fast. Although this is the opportune time to make a pitch for additional resources, it is also when everyone is excited about getting going on a big sprint. In any case, during a project is not a good time to argue for resources because by then the allocations have already been made and everyone is obsessed with getting it done or getting a product out.
We offer an additional suggestion. Make your pitch for enough resources to make it better the first time (instead of having to fix it later) at the end of a project that did not go particularly well because there was not enough time to do it right. This is the time, even if only for a brief and forgettable moment before people move on, that everyone can see that it was not enough. Everyone can see ways that you could have done better. People will still have that feeling that, if only they had had more time, they could have made it much better. This is the time to point out in a non-emotional way that your team, and managers above the team, chronically do not allow enough time to get a design right the first time around.
24.5 UX WITHIN YOUR ORGANIZATION
24.5.1 Politics and Business of Selling UX
Your biggest challenge may not be technical; it may be selling the case to management (Trenner & Bawa, 1998). This selling requires workable techniques to convince managers that they should let you try these ideas out (Schaffer, 2004). The material presented in this book can form a basis for the controllability, accountability, and quantitative methods that are so important, and rightfully so, to managers.
Most managers are familiar with software engineering principles and paradigms and probably even encourage or enforce their use. If you were around in the days when structured programming and software engineering
were emerging as the accepted approaches to software development, you will remember that there was the inevitable opposition to it, largely because people claimed there was not time to do all those things in the development process.
Now managers are going through a similar encounter with new methods and techniques, only this time it is for usability and UX (Mayhew, 2008). They are hearing UX buzzwords such as "user-centered design," "iterative refinement," or "rapid prototyping," and today we are hearing the same kind of resistance that the software engineering people heard decades ago. But already the arguments about why proper UX engineering cannot be done are bearing less and less weight as people realize that this leads to the situation where, as the saying goes, there is never time to do it right but always time to do it over.
However, many managers will need to understand this relatively new UX methodology. What they may not realize is that, by necessity, the UX lifecycle process is not linear but is highly and continually iterative. An iterative lifecycle can impact much of what managers have to deal with, including scheduling, control, organizational roles, territoriality, project management, communication, test facilities, and tools.
So, it is up to us-up to you-to help sell the new concepts, which could take you out of your comfort zone as a UX practitioner. You might want to just do your job and not have to hassle with trying to convince the rest of your organization of the value of UX. Do you believe in UX so fervently that you feel it should not need any selling?
Bloomer and Croft (1997) warn against trying to "evangelize rather than sell usability." If you try to spread your enthusiasm for how neat all this usability and UX stuff is, you may find it is not as interesting to management as you expected. It is not about beliefs; we have to demonstrate the benefits in business terms and demonstrate a connection of UX to achieving key business goals.
Selling the process
If you are seeing much of this contextual inquiry and contextual analysis process for the first time, you might think it is just too much and can never get accepted in your organization. But, in fact, contextual inquiry has been finding acceptance in commercial software product and system development simply because it is effective and helps solve the problem of getting design requirements that represent real user needs in real work contexts. Yes, it is a big piece of process that was not there before, but you can start small, make some success stories, and sell its value to your organization.
Beyond the factors that trade off in making a wise choice of how much process to use, there always looms the prospect of criticism based merely on resistance to anything new, regardless of cost or benefit. Selling new or additional processes is always a challenge.
Selling UX as part of the business process
When you have difficulty in selling your vision of UX to management, maybe it is because you are still speaking the language of UX engineering. We all understand that language and are convinced of the value but that is preaching to the choir. And while we would love to see the whole organization revamped around a UX process, Rideout (1991) reminds us that it is unlikely your existing organizational structure will change to adapt to UX; "one of the most effective ways to bring UX engineering into an organization is to build it into existing processes."
Alton (2007) suggests that one way for the UX practitioner or UX leader to speak a language that business people understand is by making a connection to risk management. Worst-case analysis of risk means asking what is the
worst thing that can happen, how likely is that to happen, what will it cost if it happens, and how much will it cost to keep it from happening, or at least to reduce the probability to an acceptable level?
In this light, usability and UX are more like business insurance, just like data security and backing up of files. For a new commercial Website, for example, one of the worst things that could happen is a failure that results in no one wanting to use your site. Your investment and future sales are in jeopardy. Let management decide how much that will cost the company.
Your job is to propose strategies based on UX engineering and user involvement to minimize the probability of this kind of failure and loss. The more the loss in the case of failure, the more you can afford to spend on UX. Alton shows how a user-involvement questionnaire can be applied to analyze exposure to risk based on the kinds of users and kinds of usage you expect.
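To illustrate the shape of that worst-case reasoning, here is a minimal sketch with entirely made-up numbers; it is not Alton's questionnaire or method, just the underlying expected-loss comparison it rests on.

```python
# Worst-case risk framing for a proposed UX investment (illustrative numbers only).
probability_of_failure = 0.15   # chance the site fails badly without UX work
cost_of_failure = 2_000_000     # loss if that worst case happens, in dollars
reduced_probability = 0.03      # estimated failure probability with UX work
ux_investment = 80_000          # proposed UX engineering budget

expected_loss_without = probability_of_failure * cost_of_failure
expected_loss_with = reduced_probability * cost_of_failure
risk_reduction = expected_loss_without - expected_loss_with

# The UX work pays for itself, in these terms, when risk_reduction > ux_investment.
print(f"Expected loss avoided: ${risk_reduction:,.0f} "
      f"for a ${ux_investment:,.0f} investment")
```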
Selling an investment in UX
Just selling ideas may not be enough. Depending on the nature of the projects in your organization and your emphases in design, you might need to convince management to invest in a UX evaluation lab and all its equipment or in an ideation studio. Design and ideation cannot happen at desks in offices or cubicles. A dedicated design studio space is a place to post sketches and drawings and display other artifacts, the visual and tactile context for ideation.
Legal and Intellectual Property Issues
Brad A. Myers, Carnegie Mellon University
User interfaces are subject to a variety of legal and intellectual property issues of which a commercial user interface developer (and especially, a manager) must be aware. Property is something that a person or company can own, and intellectual property (IP) is generally property that is nonphysical. Examples of the kinds of things that can be intellectual property include ideas, designs, expressions, names, formulas, lists, and so on. Intellectual property can be protected by various means, and the rules vary by country. In the United States, IP is enshrined in the U.S. Constitution, where Article 1, Section 8, Clause 8 provides that Congress shall have the power "To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries."
Most people think of patents when they think of IP, but there are a variety of types.
• Trade secrets are intellectual property that a person or company keeps secret. People who are told the secret are generally required not to divulge it. For example, the formula used for Coca-Cola is a trade secret of the Coca-Cola Company. Employees generally sign an agreement not to divulge a company's secrets, and others who are told are often required to sign a "nondisclosure agreement" in which they agree to keep the secret. Anyone can have a trade secret just by not telling others the information. However, trade secrets are rarely useful for protecting a user interface, as the user must be able to see the user interface to use it. On the other hand, the implementation of user interface algorithms is often a trade secret. An example would be the algorithm that predicts words from the user's typing using an onscreen keyboard.
• Copyrights are a legal mechanism that protects a particular expression of an idea. Copyrights are the primary way that literature, music, and artwork are protected. A copyright does not cover ideas and information, only the form or manner in which they are expressed. For example, one could not copyright the idea of dragging items to a trashcan icon to delete them, but one could copyright a particular drawing of a trashcan. Software code can be copyrighted, but this protects only the exact expression; implementing the same algorithm in a different way would not be affected by a copyright. Copyrights are free and automatic; anyone can add a © symbol to any work to put the public on notice that the work is protected. Copyrights last for a certain amount of time, which varies based on various factors (e.g., in the United States, a copyright on a personally authored work such as a story lasts for the life of the author plus an additional 70 years. For a work made for hire, the copyright lasts for 95 years from the year of its first publication or 120 years from the year of its creation, whichever expires first; a simple sketch of these duration rules appears after this list. See http://www.copyright.gov/help/faq/faq-duration.html for the full rules for the United States). After a copyright has expired, the work is generally available to the public for use. For example, the works of Mozart are no longer covered by copyright, but a particular performance of them (that particular expression) can still be copyrighted. If desired, user interface designers can copyright their particular user interface expressions (icons, background designs, window decorations, etc.) as well as their software code.
• Trademarks are a legal mechanism to protect a distinctive phrase or indicator that uniquely identifies a particular commercial product. The goal of trademarks is to avoid confusion in the consumer's mind. There are a variety of kinds of trademarks depending on what is being protected and how. Marks that denote trademarks include ® for registered trademark, ™ for trademark, SM for service mark, etc. Logos and names of companies and products will almost always be trademarks. Trademarks are issued by a government
agency, such as the U.S. Patent and Trademark Office (USPTO), and are very expensive. Once issued, a trademark lasts for as long as the product or company is available commercially. If one uses a name that a company thinks is too similar to its trademark, then the company can sue to prevent that confusing name from being used.
• Patents are a legal mechanism to protect an invention. They give the inventor a monopoly to use the invention for a period of time in exchange for revealing how the invention works. The U.S. Supreme Court ruled only in 1981 that patents apply to software, and hence to user interfaces (before that, the rule was "for an idea to be patentable it must have first taken physical form"). Now, there are thousands of patents on user interface features and interaction techniques, with thousands more issued every year. A patent has a "specification" with its figures, which are the description of the invention, and the "claims," which describe what is actually protected by the patent. Patents must describe something new (that has never been described or seen in public before), useful for some purpose, nonobvious (it cannot be merely an improvement that would be obvious to a regular person), and disclosed properly (so someone could reproduce the invention using the specification). Patents are issued by a government agency, such as the USPTO, and may cost about
$20,000 to get. If one invents a new user interface, a patent can be written and filed, and then others will have to license the patent to create an interface that works the same way. Conversely, if one creates a user interface that does what someone else's patent describes, then one can be sued in federal court for patent infringement.
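To make the copyright-duration rules in the list above concrete, here is a minimal sketch in Python (hypothetical function and parameter names, U.S. rules only, ignoring the many special cases covered by the copyright.gov FAQ) of how the two duration formulas combine. It is an illustration, not legal advice.

# Simplified sketch of the U.S. copyright durations described above:
# personally authored works last for the life of the author plus 70 years;
# works made for hire last 95 years from first publication or 120 years from
# creation, whichever expires first. Hypothetical helper, not legal advice.
from typing import Optional

def us_copyright_expiry_year(is_work_for_hire: bool,
                             author_death_year: Optional[int] = None,
                             first_publication_year: Optional[int] = None,
                             creation_year: Optional[int] = None) -> int:
    if is_work_for_hire:
        if first_publication_year is None or creation_year is None:
            raise ValueError("work for hire needs publication and creation years")
        return min(first_publication_year + 95, creation_year + 120)
    if author_death_year is None:
        raise ValueError("personal work needs the author's year of death")
    return author_death_year + 70

# A story whose author died in 1990 is protected through 2060; a work for hire
# published in 2000 and created in 1998 is protected through 2095.
print(us_copyright_expiry_year(False, author_death_year=1990))                        # 2060
print(us_copyright_expiry_year(True, first_publication_year=2000, creation_year=1998))  # 2095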
Enlist a UX champion
Most of the many books and articles about getting UX to work within an organization (Billingsley, 1995; Butler & Ehrlich, 1994) advise, for example, recruiting a UX champion. Look to senior management people who fund development projects; they will have the power to include UX as a key component of the development process. It is even better if you can find a senior executive who, perhaps through his or her own reading or conferences, already believes in the value of UX to the organization.
Sell UX as important to marketing
It might be possible to sell the business case for UX based on its ability to help with branding in the look and feel. No company has done this better than Apple, and many others look to it as the model for connecting UX to branding on the marketing side.
Revise reward policies
It may seem obvious, but one way to help the UX culture flourish in your organization is to change the reward structure to favor product quality and the UX process. If people are rewarded on the basis of timeliness and meeting delivery schedules, UX and other product quality factors will be eclipsed by these schedule-driven concerns. If people in your team roles are rebelling against changes in the development process, you may need an adjustment at the corporate level in the culture of how people are rewarded, with more focus on how well people follow the process.
Inertial resistance to change
Sometimes UX practitioners will run into resistance that comes from inertia or just plain opposition to change, especially in large, established organizations where software people and management have been used to doing things the same way for years. Sometimes the resistance comes in the form of passive-aggressive behavior. They agree with you and say they will use the process, but they do not. Or they say, "We are already doing that process," when either they are not really doing it or they are not doing it correctly.
Anderson (2000) has tracked obstacles to adoption of processes for UX, including a lack of understanding of the process, fear of losing control, discomfort in moving from something familiar, competing ideologies, turf battles, and the feeling that UX is a marketing responsibility.
Alternatively, resistance to a UX process can stem from the impression that because the current way of development "is not broken," it does not need improving. If the design passes your UX testing, there is a tendency to think "the job is done and we should move on."
But beyond just getting users through benchmark tasks within target times, UX practitioners must constantly ask themselves how to make the design even better. Can we make a better conceptual design? Even though we have ironed out the surface user performance questions, are there still deeper UX issues?
Another way inertia shows up is in the lack of innovation in design. If your electronic forms on the Web are the same as the paper forms that preceded them because "that is the way we have always done it," it could be that you are just "paving the cow paths."
UX credibility
Selling any idea or technology to business management takes credibility, which first and foremost comes from delivering, from producing what is promised and more, and from doing it on time and within budget. That is what is required of everyone in a development environment, and UX practitioners are no exception.
Some established researchers and practitioners, such as Dennis Wixon, decry even the need to defend our credibility: "Why single out usability?" Why not ask software engineering to justify itself? Well, we are the relatively new kid on the block, and we have to earn organizational respect, just as the software engineering folks once did. Remember the state of structured programming back in the 1970s? Also, while you cannot build a software product without software engineering, you can build one without UX, as we well know.
Be a source of information about your profession. Be a resource of expertise and counsel others on their concerns about usability and UX. Give your own project team training in UX to ensure that they do not have gaps in their knowledge and skills.14 Also, importantly, give yourself some visibility. No one in management will give much credibility to a process they know nothing about or a process that is essentially invisible within the organization.
You can boost your visibility within the organization via presentations within departmental meetings and periodically scheduled seminars and workshops, spreading the word until everyone is UX literate. Showcase a failure story and follow it with a genuine success story. Show before and after video presentations of lost, confused, bewildered, and frustrated users followed by happy and productive users due to design improvements. Seeing for themselves is one
of the most effective ways to boost your credibility among people outside the UX team.
Be a UX evangelist and use "guerilla" tactics to insert UX into the corporate culture. For example, almost all companies have quality assurance people and activities. Visit the quality assurance people and convince them, over time, to incorporate usability and UX as part of their concept of quality. Having the support of a quality assurance group, long-established within the company, can only enhance your credibility.
24.5.2 Getting Away from the "Human Factors Pool" Model
In the early days, UX people were usually called human factors experts and, in many organizations, were kept corralled in a centralized "pool" of human factors consultants, often within a "service" arm of the organization lateral to the development groups, for example, in the quality assurance department or the documentation group.
Then they were split up and assigned to various business units doing development, much like orphans farmed out to working families, and with about as much clout in their new working environments. They were never really part of a development project and continued to report to their "home" departments while on loan to projects.
Once we had a couple of human factors people take us to lunch so they could unburden themselves about their plight. They came from a central human factors "pool" and were assigned to a project long after it had been designed and
14An example of a self-paced source for this kind of training is the Online User eXperience Institute (OUXI) at
http://www.ouxinstitute.com/
parts had been implemented. Their job was to give feedback on the design but, of course, not too much feedback, because by then it was impossible to make anything but the most cosmetic changes.
They had, however, identified large issues, including an enormous mismatch between the application's organizational structure and the users' workflow. Several very closely related parts of a task were located in different screens, and the flow of the design made it difficult to move among screens except by following the built-in logical "next screen" path. Clearly, the design would have a powerful negative impact on UX and user performance in the field, or at sea.
Although they protested vigorously about the huge design flaws, they were flatly ignored because they had no authority within the development project. It would be a long journey from this situation to where an organization has its own UX division reporting to the CEO and UX practitioners hold the power to enforce their design recommendations.
24.5.3 UX in the Organizational Structure
The question of UX ownership within an organization has been a popular discussion topic in articles and workshops. ACM interactions magazine covered it in a special section (Gabriel-Petit, 2005). Most of the articles began by saying that it was the wrong question to ask, and most of the authors got it right: no one "owns" UX within the organization or, better yet, everyone in the organization has a stake in "owning" the responsibility for UX.
Gabriel-Petit (2005, p. 17) said that the ownership of UX is best shared in a culture of collaboration and vision. Within that context a person with the most UX experience can function as a leader. Knemeyer (2005) says that business decision makers should own UX; it is the CEOs and administrators who set organizational goals and control budgets and even the HR people who hire the staff. The way to help shape UX in the organization is to influence thinking about it at these higher levels.
Aucella (1997) warns us that we should at least find some relatively permanent place for UX in the organizational scheme of things. Perhaps the UX approach was successful in one project but, when this project is over and team members disperse, UX can die off and not be pursued further if there is no "home" for UX in the organization. She recommends working toward buy-in within the project and beyond, before the end of the project.
Project meetings should include a focus on UX; try to develop a culture in which planning and budget negotiations include UX. Be sure you have at least one experienced UX practitioner to lead the effort and be sure to document the
results prominently so that the "history" is preserved and not buried soon after the project is done.
If UX practitioners are loaned out to development projects from a centralized "pool" of practitioners, a practitioner may serve on several development teams at once, moving from team to team at appropriate times in their respective development processes. This approach usually turns out to be undesirable because it tends to fragment the process for the UX practitioners and can stretch their usefulness too thin.
For example, one human factors engineer was assigned to rotate among
11 different projects with a total of 248 software engineers! This, of course, is an extreme case, so much so that this practitioner was relatively ineffective on any of the 11 projects. In addition to fragmentation, this on-loan-talent approach usually precludes participating in the feeling of team ownership of the product.
In the long run, the best day-to-day "home" for UX practitioners seems to be in carrying out UX roles as full-fledged members of project teams rather
than being centralized in talent pools. Even if a central pool were dignified by making it our own department with corresponding standing within the overall organization, we would still have to function as outsiders to development projects.
It is much better to assign each UX practitioner to a specific permanent role within the organization. Project teams are already composed of people with different skill sets, so why should the UX skill set be any different? Now we are all system developers, as it should be! You do not read articles anymore about who owns programming in a development organization; programmers work in development projects where they are needed, as part of a team.
Similarly, there are advantages to having UX practitioners co-located permanently with the rest of the project team and reporting administratively to the same managers. The risk of this arrangement, of course, is that the manager may not understand or appreciate the value of the UX practitioners, and that manager will be doing their annual performance evaluations for raises and promotions. There is no longer any larger UX group in the organizational structure to protect the UX practitioners, which highlights the imperative to sell management on the value of UX.
In one successful instantiation of the team approach, the entire project team was located in close physical proximity with each other. The team consisted of one or more software engineers, user interaction designers, marketing people, graphic artists, human factors engineers, technical writers, and trainers.
The usability lab was in the center of the physical space, with team members' offices located around the lab.
Some of the most interesting team interaction occurred when software engineers began attending usability evaluation sessions. At first, only one or two attended, but as the project progressed, there literally was standing room
only in the control room of the lab. In fact, all team members were told when usability test sessions were scheduled, and many attended regularly. Everyone was anxious to see how users would respond to the newest cycle of changes to the interface.
Being able to position ourselves as part of the team this way is not a technical issue, but depends on management. As Don Norman once put it, "Bad products often arise because of poor organizational structure." The structure of a product often reflects the structure of the company that built it. Parts of a company that build the various parts of a product or system may not talk to each other; they may even compete with each other.
If your development organization is organized hierarchically, it may be easy to communicate up and down within the hierarchy, but it can be difficult to communicate across the structure. Before a cross-disciplinary team can make a decision within this infamous "stovepipe" organization, each question must travel up and down all the respective pipes.
Ferrara (2005) says that UX practitioners are responsible for the ultimate UX in products and will be held accountable for it. So, if they do not control the UX process, they must find ways to influence those who do, but should do so with respect and as a team effort. Hawdale (2005) weighs in, supporting the opinion that it is about leadership with vision.
The one who takes the lead and pursues a vision is the one others will look up to as the UX person. Tognazzini (2005) says that we must work harder to define ourselves as UX professionals and take control of design back from the engineers.
Until that happens we will fight against the odds, playing catch-up instead of having a fair chance at the head start we need to lead the project lifecycle rather than follow it. Strategic approaches to UX within an organization mean influencing people and integrating the profession and its practice into the organization. UX is an organizational effort, not just a technical one.
Also, strategic approaches are organization-wide approaches. When usability gets to a strategic level in a corporation, usability data are used in corporate-wide decision making, including product priorities. Rosenbaum, Rohn, and Humburg (2000) report on a series of CHI Conference workshops about strategic UX planning, about how usability or UX groups can make themselves "more effective and influential in how corporations develop products." Their findings are detailed but a few conclusions stand out.
For example, it seems reasonable that small UX groups in large organizations will perceive more difficulty in creating a broadly felt influence, but the survey showed that "organization size did not affect what organizational approaches and usability methods were rated most effective in achieving strategic usability." Apparently organization size also
did not affect what factors were considered as obstacles to creating strategic impact. So we are all in the same boat, needing to build partnerships with marketing, engineering, and corporate management by educating about UX and selling its value.
As part of strategic thinking, Deborah J. Mayhew (1999a) asks how UX practitioners can position themselves as change agents. "Understanding what motivates organizations and causes them to change is key." To Mayhew, strategic establishment of usability or UX within a development organization occurs in three stages: promotion, implementation, and institutionalization. Promotion is selling, influencing others. Identify the obstacles to this kind of change in your organization and the right kind of motivation to overcome them.
Implementation means putting the process to work in real projects, which means getting the right people to manage and carry it out. To then institutionalize the process, you have to extrapolate your success and extend the influence of UX to be part of the development process at the organization level. The key is to be strategic in the implementation phase and plan for institutionalizing as you go. You have to get to know the people who document and enforce the organization process standards and help them integrate the UX process into their standard operating procedures.
24.5.4 Legacy Systems
A legacy system is a system with maintenance problems that date back possibly many years. Back in the 1990s, legacy systems were more of an issue than they are today. The question was what to do about large systems that had aged but were still working to provide important services (Schneidewind & Ebert, 1998). The classic and most extreme cases of legacy systems were the old mainframe hardware and software systems with terminals being converted to systems of networked desktop computers.
Such cases are becoming mercifully rare. When they do occur, however, they have far-reaching implications, and careful consideration must be given to whether to continue maintaining the old system, redesign it, or retire it altogether in favor of a replacement.
The legacy problem still exists in different forms; existing systems get old and it is difficult to decide when to abandon system maintenance and opt for developing or buying a new system with new technology. It is a matter of risk management: When is the cost of old system shortcomings and constant maintenance, including instability in the face of incremental functionality changes, more than the cost of starting over?
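As a rough illustration of that risk-management question, the sketch below (in Python, with made-up figures and a hypothetical function name) estimates the first year in which replacing a legacy system becomes cheaper than continuing to maintain it. A real analysis would of course also weigh risk, transition costs, retraining, and the UX benefits of a well-designed replacement.

# Hypothetical break-even sketch for the keep-versus-replace question above.
# All costs are illustrative only.
from typing import Optional

def breakeven_year(annual_maintenance_cost: float,
                   annual_shortcoming_cost: float,
                   replacement_cost: float,
                   new_system_annual_cost: float,
                   horizon_years: int = 20) -> Optional[int]:
    """First year in which cumulative replacement cost undercuts the cumulative
    cost of keeping the old system, or None within the horizon."""
    keep_total = 0.0
    replace_total = float(replacement_cost)
    for year in range(1, horizon_years + 1):
        keep_total += annual_maintenance_cost + annual_shortcoming_cost
        replace_total += new_system_annual_cost
        if replace_total < keep_total:
            return year
    return None

# Example: $400K/year to keep the old system limping along ($250K maintenance
# plus $150K in shortcomings) vs. a $1.5M replacement costing $100K/year to
# run; replacement pulls ahead in year 6.
print(breakeven_year(250_000, 150_000, 1_500_000, 100_000))  # 6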
Systems with better initial designs last longer. Almost always users want to keep the old system as long as possible, as that is what they are used to. However, they are often pleasantly surprised when they discover the improved UX in a well-designed new system.
Alternatively, realizations of older functionality and user interfaces in new technology are often clumsy cut-and-paste reincarnations, without good redesign to leverage the advantages, including UX advantages, of the new technology.
24.5.5 Transition to Production
We talked a bit about the transition from prototype design to the product. We cautioned not to hang onto the design too long; let production developers do their thing. However, we do advise keeping an eye on it even after you let it go.
Beltram (2005) describes a particularly heart-breaking scenario in which the design was changed after the UX lifecycle was done, the interaction design had stabilized, and the unevaluated design was passed on for production development. After all the hard work of a long UX lifecycle, including presentations to management about the high level of UX achieved, the UX practitioner releases the design for production engineers to package it up for distribution.
However, a year later, the UX practitioner is faced with angry ranting customers and sees, for the first time, that many unbelievable changes have been made to her designs. Labels that had been painstakingly worded were changed for the worse, carefully placed navigation links were missing, and the user's workflow had been badly damaged. How could this possibly have happened?
In some organizations, especially those that develop software for domain-complex systems, there are some "extra, unacknowledged phases of design" that can occur in the process of getting the software ready for deployment; it is not always just building and shipping. Many things can happen after the UX cycle is over, including changes to address non-UX quality issues, changes in code to fix bugs, or some last-minute customization at the request of the customer, all done by people who did not work on the original project. As Beltram puts it, "That's a lot of cooks in the kitchen, all fussing with Nellie's original recipe."
Looking to the Future
Dennis Wixon, PhD, Startup Business Group, Microsoft Corporation
In looking over the last 30 years of growth of the computer industry, one obvious conclusion is that "the user has won." No product team or business would begin a new venture without considering seriously how they would achieve an excellent user experience. Given the vagaries of the development process, the final product may or may not provide that excellent experience. Certainly suboptimal (from a UX perspective) products are created with surprising regularity. However, I do not think anyone will ever hear statements such as "people like that do not deserve to be our customers." (Yes, I really heard that in one meeting many years ago.)
It would be tempting to say that such success was due to the creativity, hard work, and determination of a community of UX researchers, practitioners, designers, and academics, led and inspired by a few geniuses (e.g., Doug Engelbart). Similarly, one could say that this progress was inevitable and driven by the inexorable economics of industry, that is, we had to broaden the market for technology beyond computer scientists, mathematicians, engineers, and hobbyists. To broaden that market we had to make computer technology approachable for novices, useful for workers, productive for businesses, and fun for gamers. We could also say that this progress was inevitable given the increase in performance and the reduction in price of technology, the growth of networking, and the proliferation of form factors such as PCs, cellphones, game consoles, and tablets. These three factors (and numerous others) worked in synergy to drive the progress in user experience for the last 30 years.
Taking stock and even congratulating ourselves are no doubt in order. But as we do so, I would recommend that we turn our attention to the future. What are the challenges that we face over the next 30 years in creating quality user experience? The following are a few candidates; the reader is encouraged to add his/her own.
First, the growth of agile methods represents both a challenge and an opportunity. The speed and overall approach of agile software methods challenge both research and design methodologies. Some have argued that it is impossible to do good research or design in the context of an agile team. However, this argument ignores several important considerations. First, in many cases we do not have a choice. We need to embrace these approaches or be left behind. Second, many agile methods promise a partially working system as part of every sprint. Surely that promise offers opportunities for testing or review. Third, a number of methods for working with agile teams are described in this book. Certainly more will be created. The history of all the UX disciplines is adaptability, integration, and creativity. In summary, while I would not minimize the challenge, I would not ignore the opportunity.
Second, the proliferation of platforms represents another challenge. Again, this challenge also creates opportunity. The rapid growth of the cellphone market challenges designers to create great designs for tiny screens. It challenges research to understand usage for a wide variety of users in every imaginable context from a shopping mall to a rural farm. These challenges are compounded by a need to write software once and have it run on all platforms. However, there are also great opportunities. Logging technology enables us to understand aggregate usage in ways that were previously unimagined. The need to run on all platforms challenges teams to create flexible development environments. Advances in touch technology and voice recognition offer promise. We have already seen products with innovative and excellent user experience dominate the marketplace and inspire a variety of new products.
Third, the rise of analytics represents an opportunity and a challenge. Analytics provides an unprecedented window into user behavior. It is possible now to look at the behavior of entire populations of users and see how they use systems. At the same time, extracting valid conclusions can be challenging given the complexity of the environment in which usage occurs. One way to think of this is that our study of whole populations as they behave provides unprecedented ecological validity. However, this same environment is almost completely uncontrolled. Seeing a rise or fall in usage could be due to anything happening at that time. For example, if we
observe a drop-off in game play for a previously popular Internet game, what do we conclude? We could conclude that the game has limited replayability, or we might know that a new version of a widely popular competitor has just been launched and thus conclude that an external factor is causing the drop. Ironically, the most effective approach to the problem of so much data is more data and more diverse data. The more we know, the more measured and confident we can be in our conclusions.
Fourth, we do run the risk of retarding our future progress with some self-inflicted wounds. One example would be the current manufactured controversy between research and design. It is unproductive to say research can be dangerous. Any activity can be dangerous. Creative design is fraught with risk. Launching a product or service is risky and dangerous. Lack of activity can be dangerous too. Markets move on, and playing it safe can lead one to be not a player at all. We do not advance progress in user experience by creating rhetorical controversies between disciplines that need to work together. We do advance the field when we look at successful products and product failures and make an honest attempt to understand them and apply their lessons. For example, many years ago Petroski wrote a book entitled To Engineer Is Human in which he conducted a brilliant analysis of engineering failures. This type of analysis does not lead to the intellectual cul-de-sac of suggesting we should not engineer new products. Instead it leads us to a deeper understanding of how to avoid the mistakes of the past and make true progress.
Finally, I see a major challenge ahead. While UX has made significant headway in creating better products and contributing to business success, by and large, UX is still on the periphery in far too many businesses. By that I mean that user experience experts do not play as full a role in product decision making as other disciplines do. The contribution of UX to business success is unique, and UX needs to be an unfiltered voice in the product development process. There are many reasons for perpetuation of this "glass ceiling." Some of them are historical and cultural.
But it is most important that UX professionals focus on those factors that they have some control over. They can focus on strategic work even if that means that detailed design and research may go undone. They can continue to innovate in methodology and design and integrate those methods with more traditional approaches. They can document their value and contribution to the success of products. They can shed some of the traditional values that have marginalized their work. One example is the belief that we need to have a complete design before we can collect data or offer an evaluation.
Overall, while there have been failures and setbacks, I see a past of accomplishment and a future of promise for UX.
24.6 PARTING WORDS
Congratulations! You made it through the book. May the UX force be with you.
References
ABC News Nightline, (1999). Deep Dive.
Abernethy, C. N. (1993). Expanding jurisdictions and other facets of human-machine interface IT standards. StandardView, 1(1), 9-21.
Accot, J., & Zhai, S. (1997). Beyond Fitts' law: Models for trajectory-based HCI tasks. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 295-302). Atlanta, GA.
Acohido, B. (1999, November 18). Did Similar Switches Confuse Pilots? Controls' Proximity Another Aspect of Crash Probe. Seattle Times Investigative Reporter, from http://community.seattletimes.nwsource.com/archive/?date=19991118&slug=2996058.
Adams, D. (1990). The Long Dark Tea-Time of the Soul. Pocket Books.
Alben, L., Faris, J., & Saddler, H. (1994). Making it Macintosh: Designing the message when the message is design. interactions, 1(1), 11-20.
Altom, T. (2007). Usability as risk management. interactions, 14(2), 16-17.
Anderson, R. I. (2000). Business: Making an e-business conceptualization and design process more "user"-centered. interactions, 7(4), 27-30.
Andersson, B.-E., & Nilsson, S.-G. (1964). Studies in the reliability and validity of the critical incident technique. Journal of Applied Psychology, 48(6), 398-403.
Andre, T. S., Hartson, R., Belz, S. M., & McCreary, F. A. (2001). The user action framework: A reliable foundation for usability engineering support tools. International Journal of Human-Computer Studies, 54(1), 107-136.
Ann, E. (2009). What's design got to do with the world financial crisis? interactions, 16(3), 27-20. Antle, A. N. (2009). Embodied child computer interaction: Why embodiment matters. interactions,
16(2), 27-30.
Apple Computer Inc, (1993). Making It Macintosh: The Macintosh Human Interface Guidelines Companion.
Addison-Wesley.
Arnheim, R. (1954). Art and Visual Perception: A Psychology of the Creative Eye. University of California Press.
Atwood, M. E. (1994). Advances derived from real-world experiences: An INTERCHI '93 workshop report. SIGCHI Bulletin, 26(1), 22-24.
Aucella, A. F. (1997). Ensuring success with usability engineering. interactions, 4(3), 19-22. August, J. H. (1991). Joint Application Design: The Group Session Approach to System Design. Yourdon Press.
Bailey, R. W. (1996). Human Performance Engineering: Designing High Quality Professional User Interfaces for Computer Products, Applications, and Systems (3rd ed.). Upper Saddle River, NJ: Prentice Hall. Bangor, A., Kortum, P. T., & Miller, J. T. (2008). An empirical evaluation of the System Usability
Scale. International Journal of Human-Computer Interaction, 24(6), 574-594.
Bannon, L. (2011). Reimagining HCI: Toward a more human-centered perspective. interactions, 18(4), 50-57.
Barnard, P. (1993). The contributions of applied cognitive psychology to the study of human-computer interaction. In R. M. Baecker, J. Grudin, B. Buxton & S. Greenberg (Eds.), Readings in Human Computer Interaction: Toward the Year 2000 (pp. 640-658). San Francisco, CA: Morgan Kaufmann.
Baskinger, M., & Gross, M. (2010). Tangible interaction = form + computing. interactions, 17(1), 6-11. Bastien, J. M. C., & Scapin, D. L. (1995). Evaluating a user interface with ergonomic criteria.
International Journal of Human-Computer Interaction, 7(2), 105-121. Beale, R. (2007). Slanty design. Communications of the ACM, 50(1), 21-24.
Beck, K. (1999). Embracing change with extreme programming. IEEE Computer, 32(10), 70-77. Beck, K. (2000). Extreme Programming Explained: Embrace Change. Addison-Wesley.
Becker, K. (2004). Log on, tune in, drop down: (and click "go" too!). interactions, 11(5), 30-35. Becker, S. A. (2005). E-government usability for older adults. Communications of the ACM, 48(2),
102-104.
Beltram, D. (2005). Too many cooks. interactions, 12(2), 66-67.
Bennett, J. L. (1984). Managing to meet usability requirements: Establishing and meeting software development goals. In J. Bennett, D. Case, J. Sandelin, & M. Smith (Eds.), Visual Display Terminals (pp. 161-184). Englewood Cliffs, NJ: Prentice-Hall.
Berger, N. (2006). The Excel story. interactions, 13(1), 14-17. Berry, R. E. (1988). Common user access: A consistent and usable human-computer interface for the
SAA environments. IBM Systems Journal, 27(3), 281-300.
Beyer, H., & Holtzblatt, K. (1998). Contextual Design: Defining Customer-Centered Systems. San Francisco, CA: Morgan-Kaufman.
Beyer, H., Holtzblatt, K., & Baker, L. (2004). An agile customer-centered method: Rapid contextual design. In Extreme Programming and Agile Methods (LNCS 3134) (pp. 50-59). Calgary, Canada: Springer Berlin/Heidelberg.
Bias, R. G. (1991). Walkthroughs: Efficient collaborative testing. IEEE Software, 8(5), 94-95. Bias, R. G., & Mayhew, D. J. (Eds.). (1994). Cost-Justifying Usability. Academic Press, Inc.
Bias, R. G., & Mayhew, D. J. (2005). Cost-Justifying Usability: An Update for the Internet Age (2nd ed.).
San Francisco, CA: Morgan Kaufmann.
Bier, E. A. (1990). Snap-dragging in three dimensions. In Proceedings of the Symposium on Interactive 3D Graphics (pp. 193-204), Snowbird, UT.
Bier, E. A., & Stone, M. C. (1986). Snap-dragging. In Proceedings of the Conference on Computer Graphics and Interactive Techniques (pp. 233-240).
Billingsley, P. A. (1993). Reflections on ISO 9241: Software usability may be more than the sum of its parts. StandardView, 1(1), 22-25.
Billingsley, P. A. (1995). Starting from scratch: Building a usability program at Union Pacific Railroad.
interactions, 2(4), 27-30.
Bittner, K., & Spence, I. (2003). Use Case Modeling. Addison-Wesley.
Bjerknes, G., Ehn, P., & Kyng, M. (Eds.), (1987). Computers and Democracy: A Scandinavian Challenge.
Aldershot, UK: Avebury.
Bloomer, S., & Croft, R. (1997). Pitching usability to your organization. interactions, 4(6), 18-26. BMW AG. (2010). BMW automobiles. http://www.bmw.com/com/en/insights/technology/joy/bmw_joy.html.
Last accessed 07/10/2011.
Bødker, S. (1989). A human activity approach to user interfaces. Human-Computer Interaction, 4(3), 171-195.
Bødker, S. (1991). Through the Interface: A Human Activity Approach to User Interface Design. Hillsdale, NJ: Lawrence Erlbaum.
Bødker, S., & Buur, J. (2002). The design collaboratorium: A place for usability design. ACM Transactions on Computer-Human Interaction, 9(2), 152-169.
Bødker, S., Ehn, P., Kammersgaard, J., Kyng, M., & Sundblad, Y. (1987). A utopian experience. In
G. Bjerknes, P. Ehn & M. Kyng (Eds.), Computers and Democracy: A Scandinavian Challenge
(pp. 251-278). Aldershot, UK: Avebury.
Boehm, B. W. (1981). Software Engineering Economics. Englewood Cliffs, NJ: Prentice-Hall, Inc. Boehm, B. W. (1988). A spiral model of software development and enhancement. IEEE Computer,
21(5), 61-72.
Boff, K. R., & Lincoln, J. E. (1988). Engineering Data Compendium: Human Perception and Performance. Dayton, OH: Wright-Patterson AFB, Harry G. Armstrong Aerospace Medical Research Laboratory.
Bolchini, D., Pulido, D., & Faiola, A. (2009). "Paper in screen" prototyping: An agile technique to anticipate the mobile experience. interactions, 16(4), 29-33.
Borchers, J. (2001). A Pattern Approach to Interaction Design. Wiley.
Borman, L., & Janda, A. (1986). The CHI conferences: A bibliographic history. SIGCHI Bulletin, 17(3), 51.
Borsci, S., Federici, S., & Lauriola, M. (2009). On the dimensionality of the System Usability Scale: A test of alternative measurement models. Cognitive Process, 10(3), 193-197.
Boucher, A., & Gaver, W. (2006). Developing the drift table. interactions, 13(1), 24-27.
Bradley, M. M., & Lang, P. J. (1994). Measuring emotion: The self-assessment manikin and the semantic differential. Journal of Behavior Therapy and Experimental Psychiatry, 25(1), 49-59.
Branscomb, L. M. (1981). The human side of computers. IBM Systems Journal, 20(2), 120-121. Brassard, M. (1989). The Memory Jogger Plus+. Goal/QPC Inc.
Brooke, J. (1996). SUS: A quick and dirty usability scale. In P. W. Jordan, B. Thomas, B. A.
Weerdmeester & I. L. McClelland (Eds.), Usability Evaluation in Industry (pp. 189-194). London, UK: Taylor & Francis.
Brown, C. M. (1988). Human-Computer Interface Design Guidelines. Norwood, NJ: Ablex Publishing. Brown, L. (1993). Human-computer interaction and standardization. StandardView, 1(1), 3-8.
Brown, T. (2008, June). Design thinking. Harvard Business Review, 84-92.
Buchenau, M., & Suri, J. F. (2000). Experience prototyping. In: Proceedings of the Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques (DIS) (pp. 424-433).
Butler, K. A. (1996). Usability engineering turns 10. interactions, 3(1), 58-75.
Butler, M. B., & Ehrlich, K. (1994). Usability engineering for Lotus 1-2-3 Release 4. In M. E. Wickland (Ed.), Usability in Practice: How Companies Develop User-Friendly Products (pp. 293-326). Boston, MA: Academic Press.
Buxton, W., & Sniderman, R. (1980). Iteration in the Design of the Human-Computer Interface.
Proceedings of the 13th Annual Meeting, Human Factors Association of Canada (pp. 72-81).
Buxton, W., Lamb, M. R., Sherman, D., & Smith, K. C. (1983). Towards a Comprehensive User Interface Management System. Computer Graphics, 17(3), 35-42.
Buxton, B. (1986). There's more to interaction than meets the eye: Some issues in manual input. In
A. D. Norman & S. W. Draper (Eds.), User Centered System Design: New Perspectives on Human- Computer Interaction (pp. 319-337). Hillsdale, NJ: Lawrence Erlbaum.
Buxton, B. (2007a). Sketching and Experience Design. In Stanford University Human-Computer Interaction Seminar (CS 547). http://www.youtube.com/watch?v=xx1WveKV7aE. Last accessed 7/14/2011.
Buxton, B. (2007b). Sketching User Experiences: Getting the Design Right and the Right Design. San Francisco, CA: Morgan Kaufmann.
Callahan, J., Hopkins, D., Weiser, M., & Shneiderman, B. (1988). An empirical comparison of pie vs. linear menus. In: Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 95-100), Washington, DC.
Capra, M. G. (2006). Usability Problem Description and the Evaluator Effect in Usability Testing. Ph.D. Dissertation, Blacksburg: Virginia Tech.
Card, S. K., English, W. K., & Burr, B. J. (1978). Evaluation of mouse, rate-controlled isometric joystick, step keys, and text keys for text selection on a CRT. Ergonomics, 21(8), 601-613.
Card, S. K., Moran, T. P., & Newell, A. (1980). The keystroke-level model for user performance time with interactive systems. Communications of the ACM, 23(7), 396-410.
Card, S. K., Moran, T. P., & Newell, A. (1983). The Psychology of Human-Computer Interaction. Hillsdale, NJ: Lawrence Erlbaum.
Carey, T. T., & Mason, R. E. A. (1989). Information system prototyping: Techniques, tools, and methodologies. In Software Risk Management (pp. 349-359). Piscataway, NJ: IEEE Press.
Carmel, E., Whitaker, R. D., & George, J. F. (1993). PD and joint application design: A transatlantic comparison. Communications of the ACM, 36(6), 40-48.
Carroll, J. M. (1984). Minimalist design for active users. In: Proceedings of the INTERACT Conference on Human-Computer Interaction (pp. 39-44). Amsterdam.
Carroll, J. M. (1990). Infinite detail and emulation in an ontologically minimized HCI. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 321-328). Seattle, WA.
Carroll, J. M., Kellogg, W. A., & Rosson, M. B. (1991). The task-artifact cycle. In J. M. Carroll (Ed.), Designing Interaction: Psychology at the Human-Computer Interface (pp. 74-102). New York: Cambridge University Press.
Carroll, J. M., Mack, R. L., & Kellogg, W. A. (1988). Interface metaphors and user interface design. In
M. Helander (Ed.), Handbook of Human-Computer Interaction (pp. 67-85). Holland: Elsevier Science.
Carroll, J. M., & Rosson, M. B. (1985). Usability specifications as a tool in iterative development. In H. R. Hartson (Ed.), Advances in Human-Computer InteractionVol. 1(pp. 1-28). Norwood, NJ: Ablex.
Carroll, J. M., & Rosson, M. B. (1992). Getting around the task-artifact cycle: How to make claims and design by scenario. ACM Transactions on Information Systems, 10, 181-212.
Carroll, J. M., Singley, M. K., & Rosson, M. B. (1992). Integrating theory development with design evaluation. Behaviour & Information Technology, 11(5), 247-255.
Carroll, J. M., & Thomas, J. C. (1982). Metaphor and the cognitive representation of computing systems. IEEE Transactions on Systems, Man and Cybernetics, 12(2), 107-116.
Carroll, J. M., & Thomas, J. C. (1988). Fun. SIGCHI Bulletin, 19(3), 21-24. Carter, P. (2007). Liberating usability testing. interactions, 14(2), 18-22.
Castillo, J. C., & Hartson, R. (2000). Critical incident data and their importance in remote usability evaluation. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (pp. 590-593).
Checkland, P., & Scholes, J. (1990). Soft Systems Methodology in Action. John Wiley.
Chin, J. P., Diehl, V. A., & Norman, K. L. (1988, May 15-19). Development of an instrument measuring user satisfaction of the human-computer interface. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 213-218). Washington, DC.
Chorianopoulos, K., & Spinellis, D. (2004). Affective usability evaluation for an interactive music television channel. ACM Computers in Entertainment, 2(3), 1-11.
Christensen, J. M., Topmiller, D. A., & Gill, R. T. (1988). Human factors definitions revisited. Human Factors Society Bulletin, 31(10), 7-8.
Churchill, E. F. (2009). Ps and Qs: On trusting your socks to find each other. interactions, 16(2), 32-36.
Clement, A., & Besselaar, P. V. D. (1993). A retrospective look at PD projects. Communications of the ACM, 36(6), 29-37.
Clubb, O. L. (2007). Human-to-computer-to-human interactions (HCHI) of the communications revolution. interactions, 14(2), 35-39.
Cobb, M. (1995). Unfinished Voyages. A follow-up to The CHAOS Report.
Cockton, G., & Woolrych, A. (2001). Understanding inspection methods: Lessons from an assessment of heuristic evaluation. In Proceedings of the International Conference on Human-Computer Interaction (HCI International) and IHM 2001 (pp. 171-192).
Cockton, G., & Woolrych, A. (2002). Sale must end: Should discount methods be cleared off HCI's shelves? interactions, 9(5), 13-18.
Cockton, G., Lavery, D., & Woolrych, A. (2003). Changing analysts' tunes: The surprising impact of a new instrument for usability inspection method assessment. In Proceedings of the International Conference on Human-Computer Interaction (HCI International) (pp. 145-162).
Cockton, G., Woolrych, A., Hall, L., & Hindmarch, H. (2003). Changing analysts' tunes: The surprising impact of a new instrument for usability inspection method assessment? In P. Johnson &
P. Palanque (Eds.), People and Computers (Vol. XVII). Springer-Verlag. Constantine, L. L. (1994a). Essentially speaking. Software Development, 2(11), 95-96. Constantine, L. L. (1994b). Interfaces for intermediates. IEEE Software, 11(4), 96-99.
Constantine, L. L. (1995). Essential modeling: Use cases for user interfaces. interactions, 2(2), 34-46. Constantine, L. L. (2001). Cutting corners: Shortcuts in model-driven web development. Beyond
Chaos: ACM, 177-184.
Constantine, L. L. (2002). Process agility and software usability: Toward lightweight usage-centered design. Information Age, 8(2).
Constantine, L. L., & Lockwood, L. A. D. (1999). Software for Use: A Practical Guide to the Models and Methods of Usage-Centered Design. Addison-Wesley Professional.
Constantine, L. L., & Lockwood, L. A. D. (2003). Card-based user and task modeling for agile usage-centered design. In Proceedings of the CHI Conference on Human Factors in Computing Systems (Tutorial).
Cooper, A. (2004). The Inmates Are Running the Asylum: Why High Tech Products Drive Us Crazy and How to Restore the Sanity. Indianapolis, IN: Sams-Pearson Education.
Cooper, A., Reimann, R., & Dubberly, H. (2003). About Face 2.0: The Essentials of Interaction Design.
John Wiley.
Cooper, G. (1998). Research into Cognitive Load Theory & Instructional Design at UNSW. http://paedpsych.jku.at:4711/LEHRTEXTE/Cooper98.html. Last accessed 2/2/2011.
Costabile, M. F., Ardito, C., & Lanzilotti, R. (2010). Enjoying cultural heritage thanks to mobile technology. interactions, 17(3), 30-33.
Cox, D., & Greenberg, S. (2000). Supporting collaborative interpretation in distributed Groupware. In Proceedings of the ACM Conference on Computer Supported Cooperative Work (pp. 289-298). Philadelphia, PA.
Cross, K., Warmack, A., & Myers, B. A. (1999). Lessons learned: Using Contextual Inquiry Analysis To Improve PDA Control of Presentations. Unpublished report. Carnegie Mellon University.
Cuomo, D. L., & Bowen, C. D. (1992). Stages of user activity model as a basis for user-system interface evaluations. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (pp. 1254-1258).
Curtis, B., & Hefley, B. (1992). Defining a place for interface engineering. IEEE Software, 9(2), 84-86. Curtis, P., Heiserman, T., Jobusch, D., Notess, M., & Webb, J. (1999). Customer-focused design data in a large, multi-site organization. In Proceedings of the CHI Conference on Human Factors in Com-
puting Systems (pp. 608-615), Pittsburgh, PA.
Dagstuhl, S. (2010). Demarcating User eXperience Seminar. In Dagstuhl Seminar. http://www.dagstuhl.de/10373. Last accessed 08/16/2010.
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1992). Extrinsic and intrinsic motivation to use computers in the workplace. Journal of Applied Psychology, 22(14), 1111-1132.
del Galdo, E. M., Williges, R. C., Williges, B. H., & Wixon, D. R. (1986). An evaluation of critical incidents for software documentation design. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (pp. 19-23).
Desmet, P. (2003). Measuring emotions: Development and application of an instrument to measure emotional responses to products. In M. A. Blythe, A. F. Monk, K. Overbeeke & P. C. Wright (Eds.), Funology: From Usability to Enjoyment (pp. 111-123). Dordrecht, The Netherlands: Kluwer Academic.
Diaper, D. (1989). Task Analysis for Knowledge Descriptions (TAKD): The method and an example. In D. Diaper (Ed.), Task Analysis for Human-Computer Interaction (pp. 108-159). Chichester, England: Ellis Horwood.
Dick, W., & Carey, L. (1978). The Systematic Design of Instruction. Glenview, IL: Scott, Foresman. Donohue, J. (1989). Fixing Fallingwater's flaws. Architecture, 99-101.
Dormann, C. (2003). Affective experiences in the home: Measuring emotion. In Proceedings of the Conference on Home Oriented Informatics and Telematics, the Networked Home of the Future (HOIT), Irvine, CA. Dourish, P. (2001). Where the Action Is: The Foundations of Embodied Interaction. Cambridge,
MA: MIT Press.
Draper, S. W., & Barton, S. B. (1993). Learning by exploration, and affordance bugs. In Proceedings of the CHI Conference on Human Factors in Computing Systems (INTERCHI Adjunct) (pp. 75-76), New York.
Dray, S., & Siegel, D. (2004). Remote possibilities? International usability testing at a distance. interactions, 11(2), 10-17.
Dray, S. M., & Siegel, D. A. (1999). Business: penny-wise, pound-wise: Making smart trade-offs in planning usability studies. interactions, 6(3), 25-30.
Dubberly, H., & Pangaro, P. (2009). What is conversation, and how can we design for it? interactions,
16(4), 22-28.
Dumas, J. S., Molich, R., & Jeffries, R. (2004). Describing usability problems: Are we sending the right message? interactions, 11(4), 24-29.
Dumas, J. S., & Redish, J. C. (1999). A Practical Guide to Usability Testing (Rev Sub ed.). Exeter, England: Intellect Ltd.
Dzida, W., Wiethoff, M., & Arnold, A. G. (1993). ERGOGuide: The Quality Assurance Guide to Ergonomic Software: Joint internal technical report of GMD (Germany) and Delft University of Technology (The Netherlands).
Ehn, P. (1988). Work-Oriented Design of Computer Artifacts. Stockholm, Sweden: Arbetslivscentrum. Ehn, P. (1990). Work-Oriented Design of Computer Artifacts (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum. Ekman, P., & Friesen, W. (1975). Unmasking the Face: A Guide to Recognizing Emotions from Facial Clues.
Englewood Cliffs, NJ: Prentice Hall.
Elgin, B. (1995). How can networked users provide their own usability feedback? Subjective usability feedback from the field over a network. SIGCHI Bulletin, 27(4), 43-44.
Engel, S. E., & Granda, R. E. (1975). Guidelines for Man/Display Interfaces. Report Number TR 00.2720.
Poughkeepsie, NY: IBM. Ferrara, J. C. (2005). Building positive team relationships for better usability. interactions, 12(3), 20-21. Fitts, P. M. (1954). The information capacity of the human motor system in controlling the ampli-
tude of movement. Journal of Experimental Psychology, 47(6), 381-391.
Fitts, P. M., & Jones, R. E. (1947). Psychological aspects of instrument display: Analysis of factors contributing to 460 "pilot error" experiences in operating aircraft controls. In H. W. Sinaiko (Ed.), Reprinted in Selected Papers on Human Factors in the Design and Use of Control Systems (1961) (pp. 332-358). New York: Dover.
Fitts, P. M., & Peterson, J. R. (1964). Information capacity of discrete motor responses. Journal of Experimental Psychology, 67(2), 103-112.
Flanagan, G. A. (1995). Usability management maturity, Tutorial, CHI '95. Unpublished CHI '95 Tutorial.
Flanagan, J. C. (1954). The critical incident technique. Psychological Bulletin, 51(4), 327-358. Foley, J. D., & Van Dam, A. (1982). Fundamentals of Interactive Computer Graphics. Addison-Wesley
Longman.
Foley, J. D., Van Dam, A., Feiner, S. K., & Hughes, J. F. (1990). Computer Graphics: Principles and Practice (2nd ed.). Addison-Wesley Longman Publishing Co., Inc.
Foley, J. D., & Wallace, V. L. (1974). The art of natural graphic man-machine conversation. Proceedings of the IEEE, 62(4), 462-471.
Forlizzi, J. (2005). Robotic products to assist the aging population. interactions, 12(2), 16-18. Frank, B. (2006). The science of segmentation. interactions, 13(3), 12-13.
Friedlander, N., Schlueter, K., & Mantei, M. (1998). Bullseye! when Fitts' law doesn't fit. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 257-264), Los Angeles, California. Frishberg, L. (2006). Presumptive design, or cutting the looking-glass cake. interactions, 13(1), 18-20.
Frishberg, N. (2006). Prototyping with junk. interactions, 13(1), 21-23.
Gabriel-Petit, P. (2005). Sharing ownership of UX (in Special Issue Whose profession is it anyway?).
interactions, 12(3), 16-18.
Gannon, J. D. (1979). Human factors in software engineering. IEEE Computer, 6-60. Gaver, W. W. (1991). Technology affordances. In Proceedings of the CHI Conference on Human Factors in
Computing Systems (pp. 79-84), New Orleans, Louisiana.
Gellersen, H. (2005). Smart-Its: Computers for artifacts in the physical world. Communications of the ACM, 48(3), 66.
Genov, A. (2005). Iterative usability testing as continuous feedback: A control systems perspective.
Journal of Usability Studies, 1(1), 18-27.
Gershman, A., & Fano, A. (2005). Examples of commercial applications of ubiquitous computing.
Communications of the ACM, 48(3), 71.
Gibson, J. J. (1977). The theory of affordances. In R. Shaw & J. Bransford (Eds.), Perceiving, Acting, and Knowing: Toward an Ecological Psychology (pp. 67-82). Hillsdale, NJ: Lawrence Erlbaum.
Gibson, J. J. (1979). The Ecological Approach to Visual Perception. Houghton Mifflin. Gilb, T. (1987). Design by objectives. SIGSOFT Software Engineering Notes, 12(2), 42-49.
Gillan, D. J., Holden, K., Adam, S., Rudisill, M., & Magee, L. (1990). How does Fitts' law fit pointing and dragging? In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 227-234), Seattle, WA.
Go, K., & Carroll, J. M. (2004). The blind men and the elephant: Views of scenario-based system design. interactions, 11(6), 44-53.
Good, M., Spine, T., Whiteside, J. A., & George, P. (1986). User derived impact analysis as a tool for usability engineering. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 241-246), New York.
Good, M. D., Whiteside, J. A., Wixon, D. R., & Jones, S. J. (1984). Building a user-derived interface.
Communications of the ACM, 27(10), 1032-1043.
Gould, J. D., Boies, S. J., Levy, S., Richards, J. T., & Schoonard, J. (1987). The 1984 Olympic Message System: A test of behavioral principles of system design. Communications of the ACM, 30(9), 758-769.
Gray, W. D., Atwood, M., Fisher, C., Nielsen, J., Carroll, J. M., & Long, J. (1995). Discount or disservice? Discount usability analysis-evaluation at a bargain price or simply damaged merchandise? In Proceedings of the CHI Conference on Human Factors in Computing Systems (Panel Session) (pp. 176-177), Denver, CO.
Gray, W. D., John, B. E., Stuart, R., Lawrence, D., & Atwood, M. E. (1990). GOMS meets the phone company: Analytic modeling applied to real-world problems. In Proceedings of the INTERACT Conference on Human-Computer Interaction (pp. 29-34).
Gray, W. D., & Salzman, M. C. (1998). Damaged merchandise? A review of experiments that compare usability evaluation methods. Human-Computer Interaction, 13(3), 203-261.
Greenbaum, J. M. & Kyng, M. (Eds.). (1991). Design at Work: Cooperative Design of Computer Systems.
Lawrence Erlbaum.
Greenberg, S., & Buxton, B. (2008). Usability evaluation considered harmful (some of the time). In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 111-120), Florence, Italy.
Grudin, J. (1989). The case against user interface consistency. Communications of the ACM, 32(10), 1164-1173.
Grudin, J. (2006). The GUI shock: Computer graphics and human-computer interaction. interactions,
13(2), 45-47, 55.
Gunn, C. (1995). An example of formal usability inspections at Hewlett-Packard Company. In: Pro- ceedings of the CHI Conference on Human Factors in Computing Systems (Conference Companion) (pp. 103-104), Denver, CO.
Gutierrez, O. (1989). Prototyping techniques for different problem contexts. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 259-264).
Hackman, G., & Biers, D. (1992). Team usability testing: Are two heads better than one. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (pp. 1205-1209).
Hafner, K. (2007). Inside Apple stores, a certain aura enchants the faithful. New York Times, from http://www.nytimes.com/2007/12/27/business/27apple.html?ei1/45124& en1/46b1c27bc8cec74b5&ex1/41356584400&partner1/4permalink&exprod1/4permalink&
pagewanted1/4all.
Hallno�s, L., & Redstro�m, J. (2002). From use to presence: On the expressions and aesthetics of everyday computational things. ACM Transactions on Computer-Human Interaction, 9(2), 106-124.
Hammond, N., Gardiner, M. M., & Christie, B. (1987). The role of cognitive psychology in user- interface design. In M. M. Gardiner & B. Christie (Eds.), Applying Cognitive Psychology to User- Interface Design (pp. 13-52). Wiley.
Hamner, E., Lotter, M., Nourbakhsh, I., & Shelly, S. (2005). Case study: Up close and personal from Mars. interactions, 12(2), 30-36.
Hanson, W. (1971). User engineering principles for interactive systems. In Proceedings of the Fall Joint Computer Conference (pp. 523-532). Montvale, NJ.
Harrison, M., & Thimbleby, H. (Eds.). (1990). Formal Methods in Human-Computer Interaction.
Cambridge University Press.
Hartson, H. R., & Hix, D. (1989). Toward empirically derived methodologies and tools for human- computer interface development. International Journal of Man-Machine Studies, 31, 477-494.
Hartson, R. (1998). Human-computer interaction: Interdisciplinary roots and trends. Journal of Sys- tems and Software, 43, 103-118.
Hartson, R. (2003). Cognitive, physical, sensory, and functional affordances in interaction design.
Behaviour & Information Technology, 22(5), 315-338.
Hartson, R., Andre, T. S., & Williges, R. C. (2003). Criteria for evaluating usability evaluation methods. International Journal of Human-Computer Interaction, 15(1), 145-181.
Hartson, R., & Castillo, J. C. (1998). Remote evaluation for post-deployment usability improvement. In Proceedings of the Conference on Advanced Visual Interfaces (AVI) (pp. 22-29), L'Aquila, Italy.
Hartson, R., & Smith, E. C. (1991). Rapid prototyping in human-computer interface development.
Interacting with Computers, 3(1), 51-91.
Hassenzahl, M. (2001). The effect of perceived hedonic quality on product appealingness. Interna- tional Journal of Human-Computer Interaction, 13(4), 48-499.
Hassenzahl, M., Beu, A., & Burmester, M. (2001). Engineering joy. IEEE Software, 18(1), 70-76. Hassenzahl, M., Burmester, M., & Koller, F. (2003). AttrakDiff: Ein Fragebogen zur Messung wahr-
genommener hedonischer und pragmatischer Qualita�t (AttrakDif: A questionnaire for the measurement of perceived hedonic and pragmatic quality). In Proceedings of Mensch & Com- puter 2003: Interaktion in Bewegung (pp. 187-196), Stuttgart.
Hassenzahl, M., Platz, A., Burmester, M., & Lehner, K. (2000). Hedonic and ergonomic quality aspects determine a software's appeal. In: Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 201-208), The Hague, The Netherlands.
Hassenzahl, M., & Roto, V. (2007). Being and doing: A perspective on user experience and its mea- surement. Interfaces, 72.
Hassenzahl, M., Scho� bel, M., & Trautmann, T. (2008). How motivational orientation influences the evaluation and choice of hedonic and pragmatic interactive products: The role of regulatory focus. Interacting with Computers, 20, 473-479.
Hawdale, D. (2005). The vision of good user experience. interactions, 12(3), 22-23.
Heidegger, M. (1962). Being and Time. (J. Macquarrie & E. Robinson, Trans., 1st US ed.). New York: Harper & Row.
Helms, J. W., Arthur, J. D., Hix, D., & Hartson, H. R. (2006). A field study of the wheel: A usability engineering process model. Journal of Systems and Software, 79(6), 841-858.
Hertzum, M., & Jacobsen, N. E. (2003). The evaluator effect: A chilling fact about usability evaluation methods. International Journal of Human-Computer Interaction, 15(1), 183-204.
Hertzum, M., Jacobsen, N. E., & Molich, R. (2002). Usability inspections by groups of specialists: Perceived agreement in spite of disparate observations. In: Proceedings of the CHI Conference on Human Factors in Computing Systems (Extended Abstracts) (pp. 662-663), Minne- apolis, MN.
Hewett, T. T. (1986). The role of iterative evaluation in designing systems for usability. In: Proceedings of the Conference of the British Computer Society, Human Computer Interaction Specialist Group on People and Computers (pp. 196-214), York, UK.
Hewett, T. T. (1999). Cognitive factors in design: Basic phenomena in human memory and problem solving. In Proceedings of the CHI Conference on Human Factors in Computing Systems (Extended Ab- stracts) (pp. 116-117).
Hinckley, K., Pausch, R., Goble, J. C., & Kassell, N. F. (1994). A survey of design issues in spatial input. In Proceedings of the ACM Symposium on User Interface Software and Technology (pp. 213-222). Marina del Rey, CA.
Hix, D., & Hartson, H. R. (1993). Developing User Interfaces: Ensuring Usability Through Product & Process.
New York: John Wiley.
Hix, D., & Hartson, R. (1993). Formative evaluation: Ensuring usability in user interfaces. In
L. Bass & P. Dewan (Eds.), Trends in Software: User Interface Software (pp. 1-30). New York: John Wiley & Sons.
Hix, D., & Schulman, R. S. (1991). Human-computer interface development tools: A methodology for their evaluation. Communications of the ACM, 34(3), 74-87.
Hochberg, J. (1964). Perception. Prentice-Hall.
Holtzblatt, K. (1999). Introduction to special section on contextual design. interactions, 6(1), 30-31. Holtzblatt, K., Wendell, J. B., & Wood, S. (2005). Rapid Contextual Design: A How-to Guide to Key Tech-
niques for User-Centered Design. San Francisco, CA: Morgan-Kaufman.
Hornb�k, K. (2006). Current practice in measuring usability: Challenges to usability studies and re- search. International Journal of Human-Computer Studies, 64(2), 79-102.
Hornb�k, K., & Fr0kj�r, E. (2005). Comparing usability problems and redesign proposals as input to practical systems development. In: Proceedings of the CHI Conference on Human Factors in Comput- ing Systems (pp. 391-400), Portland, OR.
Howarth, D. (2002). Custom cupholder a shoe-in. In Roundel (p. 10). BMW Car Club publication.
Howarth, J., Andre, T. S., & Hartson, R. (2007). A structured process for transforming usability data into usability information. Journal of Usability Studies, 3(1), 7-23.
Howarth, J., Smith-Jackson, T., & Hartson, H. R. (2009). Supporting novice usability practitioners with usability engineering tools. International Journal of Human-Computer Studies, 67(6), 533-549.
Hudson, W. (2001). How many users does it take to change a Web site? SIGCHI Bulletin, 6.
Huh, J., Ackerman, M. S., Erickson, T., Harrison, S., & Sengers, P. (2007). Beyond usability: Taking social, situational, cultural, and other contextual factors into account. In Proceedings of the CHI Conference on Human Factors in Computing Systems (Extended Abstracts) (pp. 2113-2116). San Jose, CA.
Human Factor Research Group. (1990). SUMI Questionnaire. http://www.ucc.ie/hfrg/questionnaires/ sumi/index.html. Last accessed 11/18/2010.
Human Factor Research Group (1996a). MUMMS Questionnaire. http://www.ucc.ie/hfrg/ questionnaires/mumms/index.html.
Human Factor Research Group. (1996b). WAMMI Questionnaire. http://www.ucc.ie/hfrg/ questionnaires/wammi/index.html. Last accessed 11/18/2010.
Husserl, E. (1962). Ideas: General Introduction to Pure Phenomenology. Collier Books.
Hutchins, E. L., Hollan, J. D., & Norman, D. A. (1986). Direct manipulation interfaces. In D. A. Norman & S. W. Draper (Eds.), User Centered System Design: New Perspectives on Human-Computer Interaction (pp. 87-125). Hillsdale, NJ: Lawrence Erlbaum.
Iannella, R. (1995). HyperSAM: A management tool for large user interface guideline sets. SIGCHI Bulletin, 27(2), 42-45.
Igbari, M., Schiffman, S. J., & Wieckowski, T. J. (1994). The respective roles of perceived usefulness and perceived fun in the acceptance of microcomputer technology. Behaviour & Information Technology, 13(6), 349-361.
Ishii, H., & Ullmer, B. (1997). Tangible bits: Towards seamless interfaces between people, bits and atoms. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 234-241). Atlanta, GA.
ISO 13407. (1999). Human-centred design processes for interactive systems. International Organization for Standardization.
ISO 9241-11. (1997). Ergonomic Requirements for Office Work with Visual Display Terminals (VDTs) Part 11: Guidance on Usability.
Jacob, R. J. K. (1993). Eye movement-based human-computer interaction techniques: Toward non- command interfaces. In R. Hartson & D. Hix (Eds.), Advances in Human-Computer Interaction (Vol. 4., pp. 151-190). Norwood, NJ: Ablex Publishing Corporation.
John, B. E., & Marks, S. J. (1997). Tracking the effectiveness of usability evaluation methods. Behav- iour & Information Technology, 16(4), 188-202.
John, B. E., & Mashyna, M. M. (1997). Evaluating a multimedia authoring tool with cognitive walkthrough and think-aloud user studies. Journal of the American Society for Information Science, 48(11), 1004-1022.
Johnson, J. (2000). Textual bloopers: An excerpt from GUI bloopers. interactions, 7(5), 28-48. Johnson, J., & Henderson, A. (2002). Conceptual models: Begin by designing what to design. inter-
actions, 9(1), 25-32.
Jokela, T. (2004). When good things happen to bad products: Where are the benefits of usability in the consumer appliance market? interactions, 11(6), 28-35.
Jones, B. D., Winegarden, C. R., & Rogers, W. A. (2009). Supporting healthy aging with new technol- ogies. interactions, 16(4), 48-51.
Jordan, P. W. (1996). Human factors in product use. Applied Ergonomics, 29, 25-33. Judge, T. K., Pyla, P. S., McCrickard, S., & Harrison, S. (2008). Affinity diagramming in multiple dis-
play environments. In Proceedings of CSCW 2008 Workshop on Beyond the Laboratory: Supporting Authentic Collaboration with Multiple Displays. San Diego, CA.
Kabbash, P., & Buxton, W. A. S. (1995). The "prince" technique: Fitts' law and selection using area cursors. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 273-279), Denver, CO.
Kameas, A., & Mavrommati, I. (2005). Extrovert gadgets. Communications of the ACM, 48(3), 69. Kane, D. (2003, June 25-28). Finding a place for discount usability engineering in agile development:
Throwing down the gauntlet. In Proceedings of the Agile Development Conference (ADC) (pp. 40-46). Kangas, E., & Kinnunen, T. (2005). Applying user-centered design to mobile application develop-
ment. Communications of the ACM, 48(7), 55-59.
Kantrovich, L. (2004). To innovate or not to innovate. interactions, 11(1), 24-31.
Kapoor, A., Picard, R. W., & Ivanov, Y. (2004). Probabilistic combination of multiple modalities to detect interest. In Proceedings of the International Conference on Pattern Recognition (ICPR) (pp. 969-972).
Kapor, M. (1991). A software design manifesto. Dr. Dobb's Journal, 16(1), 62-67.
Kapor, M. (1996). A software design manifesto. In T. Winograd (Ed.), Bringing Design to Software
(pp. 1-6). New York: ACM.
Karat, C.-M. (1990a). Cost-benefit analysis of iterative usability testing. In Proceedings of the INTERACT Conference on Human-Computer Interaction (pp. 351-356).
Karat, C.-M. (1990b). Cost-benefit analysis of usability engineering techniques. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (pp. 839-843).
Karat, C.-M. (1991). Cost-benefit and business case analysis of usability engineering, Tutorial, CHI '91.
Unpublished CHI '91 Tutorial.
Karat, C.-M. (1993). Usability engineering in dollars and cents. IEEE Software, 10(3), 88-89.
Karat, C.-M., Campbell, R., & Fiegel, T. (1992, May 3-7). Comparison of empirical testing and walk- through methods in user interface evaluation. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 397-404). New York.
Karn, K. S., Perry, T. J., & Krolczyk, M. J. (1997). Testing for power usability: A CHI 97 workshop.
SIGCHI Bulletin, 29(4).
Kaur, K., Maiden, N., & Sutcliffe, A. (1999). Interacting with virtual environments: An evaluation of a model of interaction. Interacting with Computers, 11(4), 403-426.
Kawakita, J. (1982). The Original KJ Method. Tokio: Kawakita Research Institute.
Kaye, J. J. (2004). Making scents: Aromatic output for HCI. interactions, 11(1), 48-61. Kennedy, S. (1989). Using video in the BNR usability lab. SIGCHI Bulletin, 21(2), 92-95.
Kennedy, T. C. S. (1974). The design of interactive procedures for man-machine communication.
International Journal of Man-Machine Studies, 6, 309-334.
Kensing, F., & Munk-Madsen, (1993). PD: Structure in the toolbox. Communications of the ACM, 36
(6), 78-85.
Kieras, D. E. (1988). Towards a practical GOMS model methodology for user interface design. In
M. Helander (Ed.), Handbook of Human-Computer Interaction (135-157). Elsevier Science. Kieras, D. E., & Polson, P. G. (1985). An approach to the formal analysis of user complexity. Interna-
tional Journal of Man-Machine Studies, 22, 365-394.
Killam, H. W. (1991). Rogerian psychology and human-computer interaction. Interacting with Com- puters, 3(1), 119-128.
Kim, J., & Moon, J. Y. (1998). Designing towards emotional usability in customer interfaces- Trustworthiness of cyber-banking system interfaces. Interacting with Computers, 10(1), 1-29.
Kim, J. H., Gunn, D. V., Schuh, E., Phillips, B. C., Pagulayan, R. J., & Wixon, D. (2008). Tracking real- time user experience (TRUE): A comprehensive instrumentation for complex systems. In Proceedings of CHI Conference on Human Factors in Computing Systems (pp. 443-451). Florence, Italy.
Kirakowski, J., & Murphy, R. (2009). A comparison of current approaches to usability measurement.
In Proceedings of the UPA International Conference. Portland, OR. Knemeyer, D. (2005). Who owns UX? Not us!. interactions, 12(3), 18-20.
Koenemann-Belliveau, J., Carroll, J. M., Rosson, M. B., & Singley, M. K. (1994). Comparative usability evaluation: critical incidents and critical threads. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 245-251), Boston, MA.
Koffka, K. (1935). Principles of Gestalt Psychology. Harcourt, Brace. Kreitzberg, C. B. (2000). Personal communication with Rex Hartson.
Kreitzberg, C. B. (2008). The LUCID framework: An introduction. http://www.leadersintheknow. biz/Portals/0/Publications/Lucid-Paper-v2.pdf. Last accessed 07/13/2011.
Kreitzburg, C. Technology and Chaos. http://www.digitalspaceart.com/projects/cogweb2002v2/ papers/charlie/charlie5.html. Last accessed 07/09/2011.
Kuniavsky, M. (2003). Observing the User Experience: A Practitioner's Guide to User Research. San Francisco, CA: Morgan Kaufmann.
Kwong, A. W., Healton, B., & Lancaster, R. (1998). State of siege: New thinking for the next decade of design. In Proceedings of the IEEE Aerospace Conference (pp. 85-93).
Kyng, M. (1994). Scandinavian design: Users in product development. In Proceedings of the CHI Con- ference on Human Factors in Computing Systems (pp. 3-9).
Lalis, S., Karypidis, A., & Savidis, A. (2005). Ad-hoc composition in wearable and mobile computing.
Communications of the ACM, 48(3), 67-68.
Landauer, T. K. (1995). The Trouble with Computers: Usefulness, Usability, and Productivity. Cambridge, MA: MIT Press.
Landay, J. A., & Myers, B. A. (1995). Interactive sketching for the early stages of user interface design. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 43-50), Denver, CO. Lathan, C., Brisben, A., & Safos, C. (2005). CosmoBot levels the playing field for disabled children.
interactions, 12(2), 14-16.
Lavery, D., & Cockton, G. (1997). Representing predicted and actual usability problems. In Proceedings of the International Workshop on Representations in Interactive Software Development (pp. 97-108). London.
Lavie, T., & Tractinsky, N. (2004). Assessing dimensions of perceived visual aesthetics of web sites.
International Journal of Human-Computer Studies, 60, 269-298.
Law, E. L.-C. (2006). Evaluating the downstream utility of user tests and examining the developer effect: A case study. International Journal of Human-Computer Interaction, 21(2), 147-172.
LeCompte, M. D., & Preissle, J. (1993). Ethnography and Qualitative Design in Educational Research
(2nd ed.). San Diego: Academic Press.
Lederer, A. L., & Prasad, J. (1992). Nine management guidelines for better cost estimating. Commu- nications of the ACM, 35(2), 51-59.
Lee, G. A., Kim, G. J., & Billinghurst, M (2005). Immersive authoring: What You eXperience Is What You Get (WYXIWYG). Communications of the ACM, 48(7), 76-81.
Lewis, C. (1982). Using the 'thinking-aloud' method in cognitive interface design. Report Number Research Report RC 9265. Yorktown Heights, NY: IBM T. T. Watson Research Center.
Lewis, C., Polson, P. G., Wharton, C., & Rieman, J. (1990). Testing a walkthrough methodology for theory-based design of walk-up-and-use interfaces. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 235-242), Seattle, WA.
Lewis, J. R. (1994). Sample sizes for usability studies: Additional considerations. Journal of the Human Factors and Ergonomics Society, 36, 368-378.
Lewis, J. R. (1995). IBM computer usability satisfaction questionnaires: Psychometric evaluation and instructions for use. International Journal of Human-Computer Interaction, 7, 57-78.
Lewis, J. R. (2002). Psychometric evaluation of the PSSUQ using data from five years of usability stud- ies. International Journal of Human-Computer Interaction, 14, 463-488.
Lewis, J. R., & Sauro, J. (2009). The factor structure of the System Usability Scale. In Proceedings of the International Conference on Human-Computer Interaction (HCI International). San Diego, CA.
Likert, R. (1932). A technique for the measurement of attitudes. Archives of Psychology, 140, 55. Lindgaard, G. (2004). Making the business our business: One path to value-added HCI. interactions,
11(3), 12-17.
Lindgaard, G., & Dudek, C. (2003). What is this evasive beast we call user satisfaction? Interacting with Computers, 15(3), 429-452.
Lindgaard, G., Fernandes, G. J., Dudek, C., & Brownet, J. (2006). Attention web designers: You have 50 milliseconds to make a good first impression!. Behaviour & Information Technology, 25(2), 115-126.
Logan, R. J. (1994). Behavioral and emotional usability: Thomson Consumer Electronics. In M. Wiklund (Ed.), Usability in Practice (pp. 59-82). San Diego, CA: Academic Press Professional.
Logan, R. J., Augaitis, S., & Renk, T. (1994). Design of simplified television remote controls: A case for behavioral and emotional usability. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (pp. 365-369). Santa Monica, CA.
Lohse, G. L., Biolsi, K., Walker, N., & Rueter, H. H. (1994). A classification of visual representations.
Communications of the ACM, 37(12), 36-49.
Lo�wgren, J. (2004). Animated use sketches: As design representations. interactions, 11(6), 23-27. Lund, A. M. (1997a). Another approach to justifying the cost of usability. interactions, 4(3), 48-56. Lund, A. M. (1997b). Expert ratings of usability maxims. Ergonomics in Design, 5(3), 15-20.
Lund, A. M. (2001). Measuring usability with the USE questionnaire. Usability & User Experience (the STC Usability SIG Newsletter), 8(2).
Lund, A. M. (2004). Measuring Usability with the USE Questionnaire. http://www.stcsig.org/usabil ity/newsletter/0110_measuring_with_use.html. Last accessed 7/15/2011.
Macdonald, N. (2004). Can HCI shape the future of mass communications? interactions, 11(2), 44-47. MacKenzie, I. S. (1992). Fitts' law as a research and design tool in human-computer interaction.
Human-Computer Interaction, 7, 91-139.
MacKenzie, I. S. (1992). Fitts' law as a research and design tool in human-computer interaction.
Human-Computer Interaction, 7, 91-139.
MacKenzie, I. S., & Buxton, W. (1992). Extending Fitts' law to two-dimensional tasks. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 219-226). Monterey, CA.
Macleod, M., Bowden, R., Bevan, N., & Curson, I. (1997). The MUSiC performance measurement method. Behaviour & Information Technology, 16(4), 279-293.
Macleod, M., & Rengger, R. (1993). The development of DRUM: A software tool for video-assisted usability evaluation. In Proceedings of the International Conference on Human-Computer Interaction (HCI International) (pp. 293-309).
Manning, H. (2002). Must the sale end? interactions, 9(6), 56, 55.
Mantei, M. M., & Teorey, T. J. (1988). Cost/benefit analysis for incorporating human factors in the software lifecycle. Communications of the ACM, 31(4), 428-439.
Marcus, A. (2002). The cult of cute: The challenge of user experience design. interactions, 9(6), 29-34. Marcus, A. (2007). Happy birthday! CHI at 25. interactions, 14(2), 42-43.
Marcus, A., & Gasperini, J. (2006). Almost dead on arrival: A case study of non-user-centered design for a police emergency-response system. interactions, 13(5), 12-18.
Marine, L. (1994). Common ground. The Newsletter of Usability Professionals, 4, 2.
Markopoulos, P., Ruyter, B.d., Privender, S., & Breemen, A. V. (2005). Case study: Bringing social intelligence into home dialogue systems. interactions, 12(4), 37-44.
Mason, J. G. (1968, October). How to be of two minds. Nation's Business, 94-97. May, L. J. (1998). Major causes of software project failures. Crosstalk, 9-12.
Mayhew, D. J. (1999). The Usability Engineering Lifecycle: A Practitioner's Handbook for User Interface Design (1st ed). San Francisco, CA: Morgan Kaufmann.
Mayhew, D. J. (1999a). Strategic development of the usability engineering function. interactions, 6(5), 27-33.
Mayhew, D. J. (1999b). The Usability Engineering Lifecycle: A Practitioner's Handbook for User Interface Design. San Francisco, CA: Morgan Kaufmann.
Mayhew, D. J. (2008). User experience design: The evolution of a multi-disciplinary approach. Journal of Usability Studies, 3(3), 99-102.
Mayhew, D. J. (2010). A spreadsheet-based tool for simple cost-benefit analyses of HSI contributions during software application development. In W. B. Rouse (Ed.), The Economics of Human Systems Integration (pp. 163-184). Hoboken, NJ: John Wiley & Sons.
McClelland, I., Taylor, B., & Hefley, B. (1996). CHI '96 workshop: User-centred design principles: How far have they been industrialized? SIGCHI Bulletin, 28(4), 23-25.
McCullough, M. (2004). Digital Ground: Architecture, Pervasive Computing, and Environmental Knowing.
MIT Press.
McGrenere, J., & Ho, W. (2000). Affordances: Clarifying and evolving a concept. In Proceedings of Graphics Interface (pp. 179-186).
McGuffin, M. J., & Balakrishnan, R. (2005). Fitts' law and expanding targets: Experimental studies and designs for user interfaces. ACM Transactions on Computer-Human Interaction, 12(4), 388-422.
McInerney, P., & Maurer, F. (2005). UCD in agile projects: Dream team or odd couple? interactions,
12(6), 19-23.
Meads, J. (1999). Usability Is Not Graphic Design. http://stuff.ratjed.com/UsabilityIsNotGraphicDesign
.htm. Last accessed 7/24/2011.
Meads, J. (2010). Personal communication with Rex Hartson.
Medlock, M. C., Wixon, D., McGee, M., & Welsh, D. (2005). The rapid iterative test and evalu- ation method: Better products in less time. In R. G. Bias & D. J. Mayhew (Eds.), Cost Jus- tifying Usability: An Update for an Internet Age (pp. 489-517). San Francisco, CA: Morgan Kaufmann.
Medlock, M. C., Wixon, D., Terrano, M., Romero, R., & Fulton, B. (2002). Using the RITE method to improve products: A definition and a case study. In Proceedings of the UPA International Confer- ence. Orlando, FL.
Meister, D. (1985). Behavioral Analysis and Measurement Methods. Wiley-Interscience. Memmel, T., Gundelsweiler, F., & Reiterer, H. (2007). Agile human-centered software engineering.
In Proceedings of the British HCI Group Annual Conference on People and Computers, (pp. 167-175) UK: University of Lancaster.
Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81-97.
Miller, L. (2010). Case study of customer input for a successful product. http://www. agileproductdesign.com/useful_papers/miller_customer_input_in_agile_projects.pdf. Last accessed 7/23/2011.
Miller, L., & Sy, D. (2009, April 4-9). Agile User Experience SIG. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 2751-2754). Boston.
Miller, R. B. (1953). A method for man-machine task analysis. Report Number 53-137. Dayton, OH: Wright Air Development Center, Wright-Patterson Air Force Base.
Moggridge, B. (2007). Designing Interactions. MIT Press.
Molich, R. (2011). Comparative Usability Evaluation Reports. http://www.dialogdesign.dk/CUE-9. htm. Last accessed 07/15/2011.
Molich, R., Bevan, N., Butler, S., Curson, I., Kindlund, E., & Kirakowski, J. (1998, June). Comparative evaluation of usability tests. In Proceedings of the UPA International Conference (pp. 189-200). Washington, DC.
Molich, R., & Dumas, J. S. (2008). Comparative Usability Evaluation (CUE-4). Behaviour & Information Technology, 27(3), 263-282.
Molich, R., Ede, M. R., Kaasgaard, K., & Karyukin, B. (2004). Comparative usability evaluation. Behav- iour & Information Technology, 23(1), 65-74.
Molich, R., Jeffries, R., & Dumas, J. S. (2007). Making usability recommendations useful and usable.
Journal of Usability Studies, 2(4), 162-179.
Molich, R., & Nielsen, J. (1990). Improving a human-computer dialogue. Communications of the ACM,
33(3), 338-348.
Molich, R., Thomsen, A. D., Karyukina, B., Schmidt, L., Ede, M., van Oel, W., et al. (1999). Compar- ative evaluation of usability tests. In Proceedings of the CHI Conference on Human Factors in Com- puting Systems (Extended Abstracts) (pp. 83-84), Pittsburgh, PA.
Monk, A., & Howard, S. (1998). The rich picture: A tool for reasoning about work context. interac- tions, 5(2), 21-30.
Moran, T. P. (1981a). The Command Language Grammar: A representation for the user interface of interactive computer systems. International Journal of Man-Machine Studies, 15(1), 3-50.
Moran, T. P. (1981b). Guest editor's introduction: An applied psychology of the user. ACM Computing Surveys, 13(1), 1-11.
Morris, J. S. (2005). Professional societies and business relevance. interactions, 12(3), 45-47. Mosier, J. N., & Smith, S. L. (1986). Application of guidelines for designing user interface software.
Behaviour & Information Technology, 5(1), 39-46.
Mowshowitz, A., & Turoff, M. (2005). Introduction to special issue: The digital society. Communica- tions of the ACM, 48(10), 32-35.
Muller, M. J. (1991). PICTIVE: An exploration in participatory design. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 225-231). New Orleans, LA.
Muller, M. J. (1992). Retrospective on a year of participatory design using the PICTIVE technique. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 455-462), Monterey, CA.
Muller, M. J. (2003). Participatory design: The third space in HCI. In J. A. Jacko & A. Sears (Eds.), The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applica- tions (pp. 1051-1058). Lawrence Erlbaum.
Muller, M. J., & Kuhn, S. (1993). Participatory design. Communications of the ACM, 36(4), 24-28. Muller, M. J., Matheson, L., Page, C., & Gallup, R. (1998). Participatory heuristic evaluation. interac-
tions, 5(5), 13-18.
Muller, M. J., Wildman, D. M., & White, E. A. (1993). 'Equal opportunity' PD using PICTIVE. Com- munications of the ACM, 36(6), 64.
Mumford, E. (1981). Participative systems design: Structure and method. Systems, Objectives, Solutions,
1(1), 5-19.
Mundorf, N., Westin, S., & Dholakia, N. (1993). Effects of hedonic (emotional) components and user's gender on the acceptance of screen-based information services. Behaviour & Information Technology, 12, 293-303.
Murano, P. (2006). Why anthropomorphic user interface feedback can be effective and preferred by users. In C.-S. Chen, J. Filipe, I. Seruca & J. Cordeiro (Eds.), Enterprise Information Systems (Vol. 7, pp. 241-248). Dordrecht, The Netherlands: Springer.
Murphy, R. R. (2005). Humans, robots, rubble, and research. interactions, 12(2), 37-39. Myers, B. A. (1989). User-interface tools: Introduction and survey. IEEE Software, 6(1), 15-23. Myers, B. A. (1992). State of the Art in User Interface Software Tools. Carnegie Mellon University.
Myers, B. A. (1993). State of the art in user interface software tools. In R. Hartson & D. Hix (Eds.),
Advances in Human-Computer Interaction (Vol. 4). Norwood, NJ: Ablex.
Myers, B. A. (1995). State of the art in user interface software tools. In R. M. Baecker, J. Grudin, W. A. S. Buxton & S. Greenberg (Eds.), Readings in Human-Computer Interaction: Toward the Year 2000 (pp. 323-343). San Francisco: Morgan-Kaufmann Publishers, Inc.
Myers, B. A., Hudson, S. E., & Pausch, R. (2000). Past, present, and future of user interface software tools. ACM Transactions on Computer-Human Interaction, 7(1), 3-28.
Myers, B. A., & Rosson, M. B. (1992). Survey on user interface programming. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 195-202), Monterey, CA.
Myers, I. B., McCaulley, M. H., Quenk, N. L., & Hammer, A. L. (1998). MBTI Manual (A Guide to the Development And Use of the Myers Briggs Type Indicator) (3rd ed.). Consulting Psychologists Press.
Nardi, B. A. (1995). Context and Consciousness: Activity Theory and Human Computer Interaction.
Cambridge, MA: MIT Press.
Nass, C., Steuer, J., & Tauber, E. R. (1994). Computers are social actors. In Proceedings of the CHI Con- ference on Human Factors in Computing Systems (pp. 72-78). Boston, MA.
Nayak, N. P., Mrazek, D., & Smith, D. R. (1995). Analyzing and communicating usability data: Now that you have the data what do you do? A CHI'94 workshop. SIGCHI Bulletin, 27(1), 22-30.
Newman, W. M. (1968). A system for interactive graphical programming. In Proceedings of the Spring Joint Computer Conference (pp. 47-54). Atlantic City, NJ.
Newman, W. M. (1998). On simulation, measurement, and piecewise usability evaluation. In G. M. Olson & T. P. Moran (Eds.), Commentary 10 on "Damaged Merchandise," Human-Computer Inter- action (Vol. 13, Issue 3, pp. 316-323). Lawrence Erlbaum.
Nielsen, J. (1987). Using scenarios to develop user friendly videotex systems. In: Proceedings of the NordDATA Joint Scandinavian Computer Conference (pp. 133-138), Trondheim, Norway.
Nielsen, J. (1989). Usability engineering at a discount. In G. Salvendy & M. J. Smith (Eds.), Designing and Using Human-Computer Interfaces and Knowledge-Based Systems (pp. 394-401). Amsterdam: Elsevier Science.
Nielsen, J. (1990). Traditional dialogue design applied to modern user interfaces. Communications of the ACM, 33(10), 109-118.
Nielsen, J. (1992a). Finding usability problems through heuristic evaluation. In: Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 373-380), Monterey, CA.
Nielsen, J. (1992b). The usability engineering lifecycle. IEEE Computer, 25(3), 12-22. Nielsen, J. (1993). Usability Engineering. Chestnut Hill, MA: Academic Press Professional.
Nielsen, J. (1994a). Enhancing the explanatory power of usability heuristics. In: Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 152-158), Boston, MA.
Nielsen, J. (1994b). Heuristic evaluation. In J. Nielsen & R. L. Mack (Eds.), Usability Inspection Methods.
New York: John Wiley.
Nielsen, J. (1994c). Guerrilla HCI: Using discount usability engineering to penetrate the intimida- tion barrier. In R. G. Bias & D. J. Mayhew (Eds.), Cost-Justifying Usability (pp. 245-272). Orlando, FL: Academic Press.
Nielsen, J. (2008). Agile development projects and usability. http://www.useit.com/alertbox/ agile-methods.html (useit.com Alertbox). Last accessed 07/23/2011.
Nielsen, J., Bush, R. M., Dayton, T., Mond, N. E., Muller, M. J., & Root, R. W. (1992). Teaching ex- perienced developers to design graphical user interfaces. In: Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 557-564), Monterey, CA.
Nielsen, J., & Landauer, T. K. (1993). A mathematical model of the finding of usability problems. In Proceedings of the INTERACT Conference on Human-Computer Interaction and CHI Conference on Human Factors in Computing Systems (INTERCHI) (pp. 206-213), Amsterdam, The Netherlands. Nielsen, J., & Molich, R. (1990). Heuristic evaluation of user interfaces. In Proceedings of the CHI Con-
ference on Human Factors in Computing Systems (pp. 249-256), Seattle, WA. Nieters, J. E., Ivaturi, S., & Ahmed, I. (2007). Making personas memorable. In: Proceedings of the CHI Con-
ference on Human Factors in Computing Systems (Extended Abstracts) (pp. 1817-1824), San Jose, CA. Nilsson, P., & Ottersten, I. (1998). Interaction design: Leaving the engineering perspective behind.
In L. E. Wood (Ed.), User Interface Design: Bridging the Gap from User Requirements to Design
(pp. 131-152).
Norman, D. A. (1986). Cognitive engineering. In D. A. Norman & S. W. Draper (Eds.), User Centered System Design: New Perspectives on Human-Computer Interaction (pp. 31-61). Hillsdale, NJ: Lawrence Erlbaum.
Norman, D. A. (1990). The Design of Everyday Things. New York: Basic Books.
Norman, D. A. (1998). The Invisible Computer-Why Good Products Can Fail, the Personal Computer Is So Complex, and Information Appliances Are the Solution. MIT Press.
Norman, D. A. (1999). Affordance, conventions, and design. interactions, 6(3), 38-43.
Norman, D. A. (2002). Emotion and design: Attractive things work better. interactions, 9(4), 36-42. Norman, D. A. (2004). Emotional Design: Why We Love (Or Hate) Everyday Things. New York: Basic Books. Norman, D. A. (2006). Logic versus usage: The case for activity-centered design. interactions, 13(6), 45 63. Norman, D. A. (2007a). Simplicity is highly overrated. interactions, 14(2), 40-41.
Norman, D. A. (2007b). The next UI breakthrough, part 2: Physicality. interactions, 14(4), 46-47. Norman, D. A. (2008). Simplicity is not the answer. interactions, 15(5), 45-46.
Norman, D. A. (2009a). Designing the infrastructure. interactions, 16(4), 66-69. Norman, D. A. (2009b). Systems thinking: A product is more than a product. interactions, 16(5), 52-54. Nowell, L., Schulman, R., & Hix, D. (2002). Graphical encoding for information visualization: An em-
pirical study. In Proceedings of the IEEE Symposium on Information Visualization (INFOVIS). (p. 43). Olsen, D. R., Jr. (1983). Automatic generation of interactive systems. Computer Graphics, 17(1), 53-57. O'Malley, C., Draper, S., & Riley, M. (1984, September 4-7). Constructive interaction: A method for studying human-computer-human interaction. In Proceedings of the INTERACT Conference on
Human-Computer Interaction (pp. 269-274), London, UK.
Open Software Foundation, (1990). OSF/Motif Style Guide: Revision 1.0. Prentice-Hall, Inc. Paradiso, J. A. (2005). Sensate media. Communications of the ACM, 48(3), 70.
Paradiso, J. A., Lifton, J., & Broxton, M. (2004). Sensate media-Multimodal electronic skins as dense sensor networks. BT Technology Journal, 22(4), 32-44.
Patton, J. (2002). Hitting the target: Adding interaction design to agile software development. In
Proceedings of OOPSLA 2002 Practitioners Reports (pp. 1-7). Seattle, WA.
Patton, J. (2008, June 27). Twelve emerging best practices for adding UX work to Agile development. http://agileproductdesign.com/blog/emerging_best_agile_ux_practice.html. Last accessed 11/29/2010.
Paulk, M. C., Curtis, B., Chrissis, M. B., & Weber, C. (1993). Capability Maturity Model for Software, Ver- sion 1.1. Report Number CMU/SEI-93-TR-24. Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon University.
Payne, S. J., & Green, T. R. G. (1986). Task-action grammars: A model of the mental representation of task languages. Human-Computer Interaction, 2, 93-133.
Payne, S. J., & Green, T. R. G. (1989). Task-action grammar: The model and its developments. In D. Diaper (Ed.), Task Analysis for Human-Computer Interaction (pp. 75-107). Chichester, England: Ellis Horwood.
Pering, C. (2002). Interaction design prototyping of communicator devices: Towards meeting the hardware-software challenge. interactions, 9(6), 36-46.
Petersen, M. G., Madsen, K. H., & Kjaer, A. (2002). The usability of everyday technology: Emerging and fading opportunities. ACM Transactions on Computer-Human Interaction, 9(2), 74-105.
Pew, R. N., & Rollins, A. M. (1975). Dialog Specification Procedure . Report Number 5129 (Rev. ed.).
Cambridge, MA: Bolt, Beranek, and Newman.
Pogue, D. Appeal of iPad 2 is a matter of emotions. http://www.nytimes.com/2011/03/10/ technology/personaltech/10pogue.html?_r1/42&hpw. Last accessed 7/11/2011.
Potosnak, K. (1987). Where human factors fits in the design process. IEEE Computer, 90-92. Potosnak, K. (1988). Getting the most out of design guidelines. IEEE Software, 5(1), 85-86. Pressman, R. (2009). Software Engineering: A Practitioner's Approach (7th ed.). McGraw-Hill.
Pyla, P. S. (2009). Connecting the Usability and Software Engineering Life Cycles: Using a Communication- Fostering Software Development Framework and Cross-Pollinated Computer Science Courses. Saarbru� cken, Germany: VDM Verlag.
Pyla, P. S., Hartson, H. R., Arthur, J. D., Smith-Jackson, T. L., Pe�rez-Quin�ones, M. A., & Hix, D. (2007). Evaluating ripple: Experiences from a cross pollinated SE-UE study. In Proceedings of CHI 2007 Workshop on Increasing the Impact of Usability Work in Software Development.
Pyla, P. S., Pe�rez-Quin�ones, M. A., Arthur, J. D., & Hartson, H. R. (2003). Towards a model-based framework for integrating usability and software engineering life cycles. In Proceedings of Inter- act 2003 Workshop on Closing the Gaps: Software Engineering and Human Computer Interaction (pp. 67-74).
Pyla, P. S., Pe�rez-Quin�ones, M. A., Arthur, J. D., & Hartson, H. R. (2004). What we should teach, but don't: Proposal for a cross pollinated HCI-SE curriculum. In Proceedings of Frontiers in Education (FIE) Conference (S1H17-S1H22), Savannah, Georgia.
Pyla, P. S., Pe�rez-Quin�ones, M. A., Arthur, J. D., & Hartson, H. R. (2005). Ripple: An event driven design representation framework for integrating usability and software engineering life cycles. In A. Seffah, J. Gulliksen & M. Desmarais (Eds.), Human-Centered Software Engineering: Integrating Usability in the Software Development Lifecycle (Vol. 8., pp. 245-265). Springer.
Quesenbery, W. (2005). Designing theatre, designing user experience. interactions, 12(2), 55-57.
Quesenbery, W. (2005). Usability standards: Connecting practice around the world. In Proceedings of the IEEE International Professional Communication Conference (IPCC) (pp. 451-457).
Quesenbery, W. (2009). Private communication with Rex Hartson.
Radoll, P. (2009). Reconstructing Australian aboriginal governance by systems design. interactions, 16 (3), 46-49.
Reeves, B., & Nass, C. I. (1996). The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Stanford, CA: CSLI Publications.
Reisner, P. (1977). Use of psychological experimentation as an aid to development of a Query lan- guage. IEEE Transactions on Software Engineering SE, 3(3), 218-229.
Rettig, M. (1992). Interface design when you don't know how. Communications of the ACM, 35(1), 29-34.
Rettig, M. (1994). Prototyping for tiny fingers. Communications of the ACM, 37(4), 21-27.
Rhee, Y., & Lee, J. (2009). A model of mobile community: Designing user interfaces to support group interaction. interactions, 16(6), 46-51.
Rice, J. F. (1991a). Display color coding: 10 rules of thumb. IEEE Software, 8(1), 86. Rice, J. F. (1991b). Ten rules for color coding. Information Display, 7(3), 12-14.
Rideout, T. (1991). Changing your methods from the inside. IEEE Software, 8(3), 99-100, 111. Rising, L., & Janoff, N. S. (2000). The scrum software development process for small teams. IEEE
Software, 17(4), 26-32.
Rogers, Y., Sharp, H., & Preece, J. (2011). Interaction Design: Beyond Human-Computer Interaction
(3rd ed.). Wiley.
Rosenbaum, S., Rohn, J. A., & Humburg, J. (2000). A toolkit for strategic usability: Results from work- shops, panels, and surveys. In: Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 337-344), The Hague, The Netherlands.
Rosenberg, D. (2004). The myths of usability ROI. interactions, 11(5), 22-29.
Rosson, M. B., & Carroll, J. M. (2002). Usability Engineering: Scenario-Based Development of Human- Computer Interaction. Morgan Kaufmann.
Royce, W. W. (1970, August 25-28). Managing the development of large scale software systems. In Proceedings of IEEE Western Electronic Show and Convention (WESCON) Technical Papers (pp. A/1 1-9). Los Angeles, CA. (Reprinted in Proceedings of the Ninth International Conference on Software Engineering, Pittsburgh, ACM Press, 1989, pp. 328-338).
Rudd, J., Stern, K., & Isensee, S. (1996). Low vs. high-fidelity prototyping debate. interactions, 3(1), 76-85.
Russell, D. M., Streitz, N. A., & Winograd, T. (2005). Building disappearing computers. Communica- tions of the ACM, 48(3), 42-48.
Salter, C. (2009, June). 100 most creative people in business. Fast Company, 60. Sauro, J. (2004). Premium usability: Getting the discount without paying the price. interactions, 11(4),
30-37.
Savio, N. (2010). Solving the world's problems through design. interactions, 17(3), 52-54.
Sawyer, P., Flanders, A., & Wixon, D. (1996). Making a difference-The impact of inspections. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 376-382). Vancouver, BC, Canada.
Schaffer, E. (2004). Institutionalization of Usability: A Step-by-Step Guide. Boston, MA: Addison-Wesley. Schmandt, C. (2011). Private communication with Rex Hartson.
Schneidewind, N. F., & Ebert, C. (1998). Preserve or redesign legacy systems? IEEE Software, 15(4), 14-42.
Scholtz, J. (2005). Have robots, need interaction with humans!. interactions, 12(2), 12-14.
Schrepp, M., Held, T., & Laugwitz, B. (2006). The influence of hedonic quality on the attractiveness of user interfaces of business management software. Interacting with Computers, 18(5), 1055-1069.
Scriven, M. (1967). The methodology of evaluation. In R. Tyler, R. Gagne & M. Scriven (Eds.),
Perspectives of Curriculum Evaluation (pp. 39-83). Chicago: Rand McNally.
Sears, A. (1997). Heuristic walkthroughs: Finding the problems without the noise. International Jour- nal of Human-Computer Interaction, 9(3), 213-234.
Sears, A., & Hess, D. J. (1999). Cognitive walkthroughs: Understanding the effect of task-description detail on evaluator performance. International Journal of Human-Computer Interaction, 11(3), 185-200.
Sellen, A., Eardley, R., Izadi, S., & Harper, R. (2006). The whereabouts clock: Early testing of a sit- uated awareness device. In Proceedings of the CHI Conference on Human Factors in Computing Sys- tems (Extended Abstracts).
Sellers, M. (1994). Designing for demanding users. interactions, 1(3), 54-64.
Shattuck, L. W., & Woods, D. D. (1994). The critical incident technique: 40 years later. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (pp. 1080-1084).
Shih, Y.-H., & Liu, M. (2007). The importance of emotional usability. Journal of Educational Technology Usability, 36(2), 203-218.
Shneiderman, B. (1980). Software Psychology: Human Factors in Computer and Information Systems.
Winthrop.
Shneiderman, B. (1982). The future of interactive systems and the emergence of direct manipula- tion. Behavior and Information Technology, 1(3), 237-256.
Shneiderman, B. (1983). Direct manipulation: A step beyond programming languages. IEEE Com- puter, 16(8), 57-69.
Shneiderman, B. (1998). Designing the User Interface: Strategies for Effective Human-Computer Interaction
(3rd ed.). Menlo Park, CA: Addison Wesley.
Shneiderman, B., & Plaisant, C. (2005). Designing the User Interface: Strategies for Effective Human- Computer Interaction (4th ed.). Reading, MA: Addison-Wesley.
Sidner, C., & Lee, C. (2005). Robots as laboratory hosts. interactions, 12(2), 24-26. Siegel, D. A. (2003). The business case for user-centered design: Increasing your power of persuasion.
interactions, 10(3), 30-36.
Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological Review, 63, 129-138.
Simon, H. A. (1974). How big is a chunk? Science, 183(4124), 482-488.
Slivka, E. (2009a, October 22). Apple Employee T-Shirt Unboxing Photos. In MacRumors: Page 2. http://www.macrumors.com/2009/10/22/apple-employee-t-shirt-unboxing-photos/. Last accessed 9/2/2010.
Slivka, E. (2009b, October 5). Apple Job Offer 'Unboxing' Pictures Posted. In MacRumors: Page 2. http://www.macrumors.com/2009/10/05/apple-job-offer-unboxing-pictures-posted/. Last accessed 09/02/2010.
Smith, D. C., Irby, C., Kimball, R., Verplank, B., & Harslem, E. (1989). Designing the Star user inter- face (1982). In Perspectives on the Computer Revolution (pp. 261-283). Ablex Publishing.
Smith, S. L., & Mosier, J. N. (1986). Guidelines for Designing User Interface Software. Report Number MTR-10090. Bedford, MA: Mitre Corp.
Snodgrass, A., & Coyne, R. (2006). Interpretation in Architecture: Design as a Way of Thinking. Routledge. Sodan, A. C. (1998). Yin and yang in computer science. Communications of the ACM, 41(4),
103-114.
Sommerville, I. (2006). Software Engineering (8th ed.). Harlow, England: Addison Wesley.
Souza, F. D., & Bevan, N. (1990). The use of guidelines in menu interface design: Evaluation of a draft standard. In Proceedings of the INTERACT Conference on Human-Computer Interaction (pp. 435-440).
Spolsky, J. (2007, August 29). Even the Office 2007 box has a learning curve. http://www.joelonsoftware. com/items/2007/08/18.html. Last accessed 10/20/2010.
Spool, J., & Schroeder, W. (2001). Testing web sites: Five users is nowhere near enough. In Proceedings of the CHI Conference on Human Factors in Computing Systems (Extended Abstracts) (pp. 285-286), Seattle, WA.
Stake, R. (2004). Standards-Based and Responsive Evaluation. Sage Publications. Stevens, W. P., Myers, G. J., & Constantine, L. L. (1974). Structured design. IBM Systems Journal, 13(2),
115-139.
Stewart, T. (2002). How to cope with success. interactions, 9(6), 17-21.
Strijland, P. (1993). Human interface standards: Can we do better? StandardView, 1(1), 26-30. Sutherland, I. E. (1963). Sketchpad: A Man-Machine Graphical Communication System. Dissertation,
Cambridge, MA: MIT.
Sutherland, I. E. (1964). Sketchpad: A Man-Machine Graphical Communication System. Cambridge, United Kingdom: University of Cambridge.
Sutton, S. (2007). Review of "Cost-Justifying Usability: An Update for the Internet Age (2nd ed.) by Randolph G. Bias and Deborah J. Mayhew, Editors." interactions, 14(5), 48-50.
Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12, 257-285.
Sweller, J. (1994). Cognitive load theory, learning difficulty, and instructional design. Learning and Instruction, 4(4), 295-312.
Tatar, D., Harrison, S., & Sengers, P. (2007). The three paradigms of HCI. In Proceedings of Alt.chi, CHI Conference on Human Factors in Computing Systems. San Jose, CA.
Taylor, F. W. (1911). The Principles of Scientific Management. New York: Harper & Brothers. The Open Group. Motif. http://www.opengroup.org/motif/. Last accessed 07/10/2011.
The Roanoke Times. (2006). Travel reservation system found to be costly flop. The Roanoke Times, (November 16).
The Standish Group. (1994). The CHAOS Report. The Standish Group. (2001). Extreme CHAOS.
Theofanos, M., & Quesenbery, W. (2005). Towards the design of effective formative test reports. Jour- nal of Usability Studies, 1(1), 27-45.
Theofanos, M., Quesenbery, W., Snyder, C., Dayton, D., & Lewis, J. (2005). Reporting on Formative Testing: A UPA 2005 Workshop Report. In Proceedings of the UPA International Conference. Montreal, Quebec.
Thibodeau, P. (2005, June 20). Large users hope for broader adoption of usability standard.
Computerworld.
Thomas, J. C. (1993). Personal communication with Rex Hartson. Thomas, J. C., & Kellogg, W. A. (1989). Minimizing ecological gaps in interface design. IEEE Software,
6(1), 78-86.
Thomas, P., & Macredie, R. D. (2002). Introduction to the new usability. ACM Transactions on Computer-Human Interaction, 9(2), 69-73.
Tognazzini, B. T. (2005). Why engineers own user experience design. interactions, 12(3), 32-34.
Tohidi, M., Buxton, W., Baecker, R. M., & Sellen, A. (2006). User sketches: A quick, inexpensive, and effective way to elicit more reflective user feedback. In Proceedings of the Nordic Conference on Human-Computer Interaction (pp. 105-114). Oslo, Norway.
Travis, A. T. (2009). Sketchy Wireframes: When you can't (or shouldn't) draw a straight line. http:// boxesandarrows.com/view/sketchy-wireframes. Last accessed 7/14/2011.
Trenner, L., & Bawa, J. (1998). The Politics of Usability: A Practical Guide to Designing Usable Systems in Industry. Secaucus, NJ: Springer-Verlag New York, Inc.
Truss, L. (2003). Eats, Shoots & Leaves: The Zero Tolerance Approach to Punctuation. United Kingdom: Profile Books.
Tscheligi, M. (2005). Ambient intelligence: The next generation of user centeredness. interactions, 12(4).
Tufte, E. R. (1983). The Visual Display of Quantitative Data. Cheshire, CT: Graphics Press. Tufte, E. R. (1990). Envisioning Information. Cheshire, CT: Graphics Press.
Tufte, E. R. (1997). Visual Explanations: Images and Quantities, Evidence and Narrative. Cheshire, CT: Graphics Press.
Tullis, T. S. (1990). High-fidelity prototyping throughout the design process. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (p. 266). Santa Monica, CA.
Tullis, T. S., & Albert, B. (2008). Measuring the User Experience. Burlington, MA: Morgan Kaufmann.
Tullis, T. S., & Stetson, J. N. (2004). A comparison of questionnaires for assessing website usability. In
Proceedings of the UPA International Conference (pp. 1-12).
Tungare, M., Pyla, P. S., Glina, V., Bafna, P., Balli, U., Zheng, W., et al. (2006). Embodied data objects: Tangible Interfaces to Information Appliances. In Proceedings of 44th ACM Southeast Conference (ACMSE) (pp. 359-364).
U.S. Department of Health and Human Services. (2006). Research-Based Web Design & Usability Guidelines .
Usability Net. (2006). Questionnaire resources. http://www.usabilitynet.org/tools/r_questionnaire. htm. Last accessed 7/15/2011.
Venkatesh, V., Ramesh, V., & Massey, A. P. (2003). Understanding usability in mobile commerce.
Communications of the ACM, 46(12), 53-56.
Vermeeren, A.P.O.S., Bouwmeester, K. D., Aasman, J., & de Ridder, H. (2002). DEVAN: A tool for detailed video analysis of user test data. Behaviour & Information Technology, 21(6), 403-423. Vermeeren, A.P.O.S., van Kesteren, I. E. H., & Bekker, M. M. (2003). Managing the evaluator effect in usertesting. In Proceedingsofthe INTERACTConferenceon Human-Computer Interaction (pp. 647-654).
Zurich, Switzerland.
Vertelney, L. (1989). Using video to prototype user interfaces. SIGCHI Bulletin, 21(2), 57-61. Virzi, R. A. (1990). Streamlining the design process: Running fewer subjects. In Proceedings of the
Human Factors and Ergonomics Society Annual Meeting (pp. 291-294).
Virzi, R. A. (1992). Refining the test phase of usability evaluation: How many subjects is enough? Jour- nal of the Human Factors and Ergonomics Society, 34(4), 457-468.
Virzi, R. A., Sokolov, J. L., & Karis, D. (1996). Usability problem identification using both low- and high- fidelity prototypes. In Proceedings of the CHI Conference on Human Factors in Computing Systems. (pp. 236-243). British Columbia, Canada: Vancouver.
Wasserman, A. I. (1973). The design of 'idiot-proof' interactive programs. In Proceedings of National Computer Conference (pp. M34-M38).
Wasserman, V., Rafaeli, A., & Kluger, A. N. (2000). Aesthetic symbols as emotional cues. In S. Fine- man (Ed.), Emotion in Organizations (pp. 140-165). London: SAGE.
Weiser, M. (1991). The computer for the 21st century. Scientific American, 265, 94-100.
Exercises
INTRODUCTION TO EXERCISES
Active Learning
The best way to learn the processes described in this book is by doing them! We have organized your participation in the process at three levels: examples for you to follow in the text, a more or less parallel set of exercises to do on your own, and a set of extensively specified team project assignments (on the book Website).
Pointers to the exercises appear within many of the chapters, often right after a similar example in the text. Those pointers refer to the exercise descriptions here. Each pointer marks the place where you should consider doing the exercise before moving on, but we have collected the exercise descriptions here so as not to interrupt the flow of the text in the chapters. Finally, a comprehensive set of team project assignments is included in the Instructor's Guide, available to instructors from the publisher. The exercises require medium-level engagement, somewhere between the in-text examples and the full project assignments.
Within the broader audience of this book, individual readers are encouraged to follow the examples and undertake the exercises on their own. Groups of readers, whether within classes taking the material as a course or within organizations that wish to acquire competency in these processes, will benefit even more from carrying out the exercises as a team. You should be able to figure out, from each exercise description, how to pursue the exercise either as an individual or as a team.
The exercises are for learning, not for producing a product, so you do not have to complete every detail once you feel you have gotten what you need out of each one. You should be able to get most of the value of each exercise in an hour or so. For a team in a classroom setting, this means you can do the exercises as in-class activities, breaking out into teams and working in parallel, and possibly finishing an exercise as homework before the next class. This has the advantages of working next to other teams with similar goals and problems and of having an instructor present who can move among teams as a consultant and mentor. We recommend that student team deliverables be prepared in summary form for presentation to the rest of the class so that each team can learn from the others.
Choosing a Target Application System
Your choice of a target application system should be guided by the goal of learning, not producing a product. That means choosing something of the right size. Avoid applications that are too large or complex; choose something whose semantics and functionality are relatively easy to understand.
However, avoid systems that are too small or too simple because they may not support the process activities very well. The bottom line: Choose something broad enough so that you can use the same system in all the exercises, each time building on your previous experience.
The criterion for selection here is that you will need to identify at least a half-dozen somewhat different kinds of user tasks. That usually means, for example, that a Website used only for information seeking is not a good candidate because information seeking is only a single type of task and often does not involve enough differences in the kinds of interaction. You should also choose a system that has more than one class of user. For example, an
e-commerce Website for ordering merchandise will have users from the public doing the ordering and employee users processing the orders.
For practitioner teams in a real development organization, we recommend against using a real development project for these exercises. There is no sense in introducing the pressure to produce a real design and the risk of failure into this learning process.
Because many parts of these processes are best learned by interacting with a "user," "customer," or "client," it helps to choose an application for which you can find (among friends, family, or fellow students or practitioners) or simulate these roles, for example, for contextual inquiry interviews.
CHAPTER 3 EXERCISES
Exercise 3-1: System Concept Statement for a System of Your Choice
Goal: Get practice in writing a concise system concept statement.
Activities:
■ Write a system concept statement for a system of your choice.
■ Iterate and polish it. The 150 or fewer words you write here will be among the most important words in the whole project; they should be highly polished, which means that you should spend a disproportionate amount of time and energy thinking about, writing, reading, editing, discussing, and rewriting this system concept statement.
Deliverables: Your "final" system concept statement.
Schedule: Given the simplicity of the domain, we expect you can get what you need from this exercise in about 30 minutes.
Exercise 3-2: Contextual Inquiry Data Gathering for the System of Your Choice
Goal: Get practice in performing contextual inquiry.
Activities:
■ The best conditions for this exercise are to work as a team and have a real client, as you would in a team project, for example, in a course.
■ If you are working with a team but do not have a real client, divide your team members into users and interviewers and do a role-playing exercise. If you are working alone, invite some friends over for one of your famous pizza-and-beer-and-contextual-inquiry parties and have them play a user role while you interview them. We have found that you get the best results if you follow this order: eat the pizza, do the exercise, drink the beer.
■ Do your best to suspend disbelief and pretend that you and your users are in a real situation within the context of your domain of investigation.
■ Each interviewer takes his or her own transcript of raw data notes while asking questions and listening to users talk about their work activities in this domain.
■ Preface each note with the user ID, for example, U3, of the user from whom the note is derived.
Deliverables: At least a few pages of raw contextual inquiry data transcript, handwritten or typed, for the investigations you conducted for your example system. Include a few interesting examples (something unexpected or unique) from your notes to share.
Schedule: Given the simplicity of the domain, we expect this exercise to take about 1 to 2 hours.
CHAPTER 4 EXERCISES
Exercise 4-1: Flow Model Sketch for Your System
Goal: Get practice in making an initial flow model sketch for the work practice of an organization.
Activities:
■ For your target system, sketch out a flow model diagram in the same style as our flow model sketch for MUTTS, shown in Figure 4-3, showing work roles, information flow, information repositories, transactions, etc. (a small data-structure sketch follows this exercise).
■ Draw on your raw work activity data to construct a representation of the flow of data, information, and work artifacts.
■ Even if there is no existing automated system, you should capture the flow of the manual work process.
■ Start by representing your work roles as nodes, then add any other nodes for databases and so on.
■ Label the communication and flow lines.
■ If you do not have enough contextual data from your limited data-gathering exercise, invent some plausible data to make this work.
Deliverables: A one-page diagram illustrating a high-level flow model for the existing work process of your target system.
Schedule: Given the simplicity of the domain, we expect this exercise to take about an hour.
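If it helps to think about the flow model in concrete terms before drawing it, the sketch below (our own illustration, not part of the book's process) represents one as a labeled, directed graph: work roles, repositories, and external entities are nodes, and each labeled arc is a flow line. The FlowModel class and the e-commerce roles are hypothetical examples.

```python
# A minimal sketch (illustrative only) of a flow model as a labeled, directed
# graph: nodes are work roles and repositories, arcs are labeled information
# flows. All names here are hypothetical.

from dataclasses import dataclass, field


@dataclass
class FlowModel:
    nodes: set = field(default_factory=set)    # work roles, databases, external entities
    flows: list = field(default_factory=list)  # (source, label, destination) triples

    def add_flow(self, source, label, destination):
        """Record a labeled flow line and make sure both endpoints exist as nodes."""
        self.nodes.update({source, destination})
        self.flows.append((source, label, destination))

    def dump(self):
        """Print a quick textual check of the diagram before you sketch it."""
        for source, label, destination in self.flows:
            print(f"{source} --[{label}]--> {destination}")


# Hypothetical flow model fragment for an e-commerce ordering example.
model = FlowModel()
model.add_flow("customer", "merchandise order", "order clerk")
model.add_flow("order clerk", "order record", "orders database")
model.add_flow("orders database", "orders to fill", "warehouse staff")
model.add_flow("warehouse staff", "shipment confirmation", "customer")
model.dump()
```

Listing the flows this way is just a cross-check; the deliverable is still the one-page diagram.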
Exercise 4-2: Work Activity Notes for Your System
Goal: Get practice in synthesizing work activity notes from your contextual data.
Activities:
■ If you are working alone, it is time for another pizza-and-beer-and-contextual-analysis party with your friends.
■ However you form your team, appoint a team leader and a person to act as note recorder.
■ The team leader leads the group through the raw data, synthesizing work activity notes on the fly.
■ Be sure to filter out all unnecessary verbiage, fluff, and noise.
■ As the work activity notes are called out, the recorder types them into a laptop (preferably with a screen projector so that the group can see the work in progress).
■ Everyone in the team should work together to make sure that the individual work activity notes are disambiguated from context dependencies, usually by adding explanatory text in italics (a minimal note-record sketch follows this exercise).
Deliverables: At least a few dozen work activity notes synthesized from your raw contextual inquiry data transcript for your system, handwritten or typed into a laptop. Highlight a few of your most interesting synthesized work activity notes for sharing.
Schedule: Based on our experience with these activities, we expect this to take you an hour or two.
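As a concrete illustration only (this is not the book's notation), a synthesized work activity note can be thought of as a small record that keeps the ID of the user it came from plus any explanatory context the team added during synthesis. The WorkActivityNote class and the sample note below are hypothetical.

```python
# A minimal sketch of a synthesized work activity note as a record that
# preserves its source user ID and any added, disambiguating context.

from dataclasses import dataclass


@dataclass
class WorkActivityNote:
    note_id: str            # e.g., "N-014"
    user_id: str            # e.g., "U3", the user the raw note came from
    text: str               # the synthesized, filtered note
    added_context: str = "" # explanatory text the team added to disambiguate

    def __str__(self):
        context = f" [{self.added_context}]" if self.added_context else ""
        return f"{self.note_id} ({self.user_id}): {self.text}{context}"


# Hypothetical example of a note disambiguated with added context.
note = WorkActivityNote(
    note_id="N-014",
    user_id="U3",
    text="I never know whether it went through.",
    added_context="referring to submitting an order on the website",
)
print(note)
```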
Exercise 4-3: WAAD Building for Your System
Goal: Get practice in building a work activity affinity diagram to sort and organize contextual data.
Activities:
■ If you are working alone, it is time for yet another pizza-and-beer-and-contextual-analysis party with your friends (the last time you have to buy pizza, at least in this chapter).
■ However you assemble your team, use all the work activity notes created from the contextual inquiry investigations you did in the previous exercise and do your best to follow the procedure we have described in this chapter for WAAD building (a small tree-structure sketch follows this exercise).
■ Take digital photographs of your work process and products, including the full WAAD, some medium-level details, and some close-ups of interesting parts.
■ Hang them on your fridge with magnets.
Deliverables: As much of the full WAAD for your system as you were able to produce. It is probably best to keep it rolled up into a bundle for safekeeping unless you have the luxury of being able to keep it taped to the wall. You should also have the digital photos you took of your WAAD. If you are working in a classroom environment, be prepared to share the photos in a narrated slide show and to discuss your WAAD and the process of building it with other teams in the class.
Schedule: This is one of the more time-consuming exercises; expect it to take 4 to 6 hours.
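If a data-structure view helps, the sketch below (purely illustrative; not part of the book's method) treats a WAAD as a tree whose internal nodes are affinity group labels and whose leaves hold work activity note IDs. The WaadNode class and the group names are hypothetical.

```python
# A minimal sketch of a WAAD as a tree: internal nodes are affinity group
# labels, and each node can hold the IDs of the work activity notes under it.

from dataclasses import dataclass, field


@dataclass
class WaadNode:
    label: str                                    # group label on the colored sticky
    notes: list = field(default_factory=list)     # work activity note IDs at this node
    children: list = field(default_factory=list)  # subgroups

    def print_tree(self, indent=0):
        """Print the hierarchy of labels, mirroring the structure on the wall."""
        print("  " * indent + self.label + (f"  {self.notes}" if self.notes else ""))
        for child in self.children:
            child.print_tree(indent + 1)


# Hypothetical WAAD fragment for the e-commerce ordering example.
waad = WaadNode("Shopping experience", children=[
    WaadNode("Finding merchandise", notes=["N-003", "N-014"]),
    WaadNode("Checking out", children=[
        WaadNode("Payment concerns", notes=["N-021"]),
        WaadNode("Order confirmation", notes=["N-007", "N-030"]),
    ]),
])
waad.print_tree()
```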
CHAPTER 5 EXERCISES
Exercise 5-1: Extracting Requirement Statements for Your System
Goal: Get some practice with requirements extraction.
Activities:
■ Assemble a team per the preparation guidelines in this chapter.
■ Choose a leader and a recorder.
■ Get together with your team where you have hung your WAAD for your system, or hang it back up again if you had to take it down before.
■ Number all the WAAD nodes and notes with a structured set of ID markers (a small traversal sketch follows this exercise).
■ Do a careful walkthrough, traversing the WAAD.
■ For each work activity note in the WAAD, work as a team to:
  □ Deduce user need(s) and interaction design requirements to support the need(s).
  □ As you go, have the recorder capture requirements in the format of Figures 5-4 and 5-5, including extrapolation requirements and rationale statements, where appropriate.
■ In the process, also make notes and lists about:
  □ Questions about missing data
  □ Software requirements inputs
  □ System support needs
  □ Marketing inputs
  □ Ways to enhance the overall user experience
  □ Information about design-informing models
  □ Future features and issues
■ To speed things up, have each person be responsible for writing the requirement statements extracted from a different subtree in the WAAD structure. Set aside any work activity notes that require additional thought or discussion to be dealt with at the end by the team as a whole.
■ If time permits, have the whole team read all requirement statements to ensure agreement.
Deliverables:
■ A requirements document covering at least one subtree of the WAAD for your system.
■ Notes and lists of the other kinds of information (see the bullets above) that come out of this process.
Schedule: We expect that this exercise could take at least a couple of hours. If you simply do not have that kind of time to devote to it, do as much as you can to at least get a flavor of how this exercise works.
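For readers who like to see the bookkeeping made concrete, here is a sketch of numbering a WAAD fragment with structured IDs via a depth-first traversal and recording one extracted requirement with its rationale. This is our own illustration under assumed names, not the book's required format; assign_ids and the Requirement record are hypothetical.

```python
# A minimal, self-contained sketch: assign structured IDs to a WAAD fragment by
# walking it depth-first, then capture an extracted requirement with rationale.

from dataclasses import dataclass

# A WAAD fragment as (label, notes, children) tuples; all content is hypothetical.
waad = ("Checking out", [], [
    ("Payment concerns", ["N-021"], []),
    ("Order confirmation", ["N-007", "N-030"], []),
])


def assign_ids(node, prefix="1"):
    """Yield (structured_id, content) pairs for every group label and note."""
    label, notes, children = node
    yield prefix, label
    for i, note in enumerate(notes, start=1):
        yield f"{prefix}.{i}", note
    for j, child in enumerate(children, start=len(notes) + 1):
        yield from assign_ids(child, f"{prefix}.{j}")


for node_id, content in assign_ids(waad):
    print(node_id, content)


@dataclass
class Requirement:
    source_id: str   # structured ID of the WAAD note it was extracted from
    statement: str
    rationale: str = ""


req = Requirement(
    source_id="1.2.1",
    statement="Shall confirm to the customer that an order was received.",
    rationale="Users reported uncertainty about whether orders went through.",
)
print(req)
```

The structured IDs give each requirement statement a traceable link back to the WAAD note it came from, which is the point of numbering the nodes before the walkthrough.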
Exercise 5-2: Constraints for Your System
Goal: Get a little experience in specifying constraints for system development.
Activities: Extract and deduce what you can about development and implementation constraints from contextual data for the system of your choice.
Deliverables: A short list of same.
Schedule: A half hour should do it.
CHAPTER 6 EXERCISES
Exercise 6-1: Identifying Work Roles for Your System
Goal: Get a little practice at identifying work roles from your contextual data.
Activities: By now you should be pretty certain about the work roles for your system.
■ Using your user-related contextual data notes, identify the major work roles for your system.
■ Write the major ones in a list.
■ For each role, add explanatory notes describing the role.
■ For each role, add a description of the major task set that people in that role would be expected to perform.
Deliverables: A written list of work roles you identified for your system, each with an explanation of the role and a description of the associated task set.
Schedule: A half hour should do it.
Exercise 6-2: User Class Definitions for Your System
Goal: Get practice in defining user classes for work roles.
Activities:
■ Using your user-related contextual data notes, create a few user class definitions to go with the work role definitions you created in the previous exercise.
■ For each of the work roles you identified in the previous exercise, define one or two corresponding user classes, describing the characteristics of each (a small record-style sketch follows this exercise).
Deliverables: A few user class definitions to go with the work roles identified for the system of your choice.
Schedule: A half hour to 45 minutes should be enough to get the most out of this assignment.
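As a purely illustrative aside (not part of the exercise deliverables), the sketch below shows one way the outputs of the last two exercises could be captured as simple records: each work role carries a description, a task set, and its user class definitions. The WorkRole and UserClass classes and the example content are hypothetical.

```python
# A minimal sketch of work roles and user classes as simple records.

from dataclasses import dataclass, field


@dataclass
class UserClass:
    name: str
    characteristics: list = field(default_factory=list)


@dataclass
class WorkRole:
    name: str
    description: str
    task_set: list = field(default_factory=list)
    user_classes: list = field(default_factory=list)


# Hypothetical example drawn from the e-commerce illustration above.
customer = WorkRole(
    name="Customer",
    description="Member of the public who browses and orders merchandise.",
    task_set=["search for items", "place an order", "track a shipment"],
    user_classes=[
        UserClass("Occasional shopper", ["infrequent use", "little patience for training"]),
        UserClass("Frequent shopper", ["knows the site well", "values shortcuts"]),
    ],
)
print(customer.name, [uc.name for uc in customer.user_classes])
```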
Exercise 6-3: A Social Model for Your System
Goal: Get a little practice in making a social model diagram.
Activities:
■ Identify active entities, such as work roles, and represent them as nodes in the diagram.
■ Include groups and subgroups of roles and external roles that interact with work roles.
■ Include system-related roles, such as a central database.
■ Include the workplace ambiance and its pressures and influences.
■ Identify concerns and perspectives and represent them as attributes of nodes.
■ Identify social relationships, such as influences between entities, and represent these as arcs between nodes in the diagram (a small graph sketch follows this exercise).
■ Identify barriers, or potential barriers, in relationships between entities and represent them as red lightning bolts.
Deliverables: One social model diagram for your system, with as much detail as feasible.
Schedule: This could take a couple of hours.
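If it helps to see the ingredients of the social model laid out as data, the sketch below (not the book's notation) treats it as a graph: entities with concerns as node attributes, and relationships as arcs, with barriers flagged so they can be drawn as red lightning bolts. The Entity and Relationship classes and the example content are hypothetical.

```python
# A minimal sketch of a social model as a graph: entities carry concerns as
# attributes; relationships are arcs, with barriers flagged for highlighting.

from dataclasses import dataclass, field


@dataclass
class Entity:
    name: str
    concerns: list = field(default_factory=list)  # concerns and perspectives


@dataclass
class Relationship:
    source: str
    target: str
    influence: str
    barrier: bool = False  # True = draw as a red lightning bolt


entities = [
    Entity("Customer", ["wants fast, reliable ordering"]),
    Entity("Order clerk", ["pressure to clear the order queue"]),
    Entity("Orders database", ["system-related role: single source of order status"]),
]

relationships = [
    Relationship("Customer", "Order clerk", "expects quick confirmation"),
    Relationship("Order clerk", "Orders database", "depends on up-to-date records"),
    Relationship("Customer", "Orders database", "cannot see order status directly", barrier=True),
]

for e in entities:
    print(f"{e.name}: {', '.join(e.concerns)}")
for r in relationships:
    marker = "==barrier==>" if r.barrier else "---------->"
    print(f"{r.source} {marker} {r.target}: {r.influence}")
```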
Exercise 6-4: A Social Model for a "Smartphone"
Sketch out an annotated social model for the use of an iPhone or similar smartphone by you and your friends.
Exercise 6-5: Creating a Flow Model for Your System