paradigm is the importance of emotional impact derived from design: the pure
joy of use, fun, and aesthetics felt in the user experience.
To put the paradigms in perspective, consider the concept of a new car design. In the first paradigm, the engineering view, a car is built on a frame that holds all the parts. The question of its utility is about how it all fits together and whether it makes sense as a machine for transportation. It is also about performance, horsepower, handling, and fuel mileage. The second paradigm sees the car design as an opportunity to develop ergonomic seating and perhaps new steering control concepts, as well as placement of controls so the driver can react quickly in emergency driving situations.
The design-thinking view of the third paradigm will also encompass many of the things necessary to produce a car that works, but will emphasize emotional
impact, "coolness" of the ride, and how to optimize the design to best appeal to the joy of driving and feelings of ownership pride. The design-thinking paradigm will also highlight the phenomenological aspects of how a car becomes more than just transportation from A to B, but how it becomes an integral part of one's life.
The third paradigm, our design-thinking paradigm, is about designing for
the user experience. Architects have long known that the physical building is
not really the target of the design; they are designing for the experience of
being in and using that building. Similarly, we are not designing products to
sell; we are selling the experience that the product engenders, encourages, and
supports.
Sometimes the design-thinking approach can be in opposition to what contextual inquiry and requirements might say a design should have. Frank Lloyd Wright was a master at envisioning a concept and an experience for his clients, often ignoring their inputs. You can see similarities in the design of the iPad. Popular criticism of the iPad cited the lack of so-called connection features, the ability to write free-form notes, and so on, making this a gadget that would not appeal to people. The argument was that this will be just another gadget without a clearly defined utility because it lacked the features to replace a laptop or a desktop computer.
However, the overwhelming success of this device is due to the fact that it is not about utility but the intimate experience of holding a beautiful device and accessing information in a special way. Before the iPad, there were email, digital newspapers such as CNN.com, book readers, and photo viewers, but this device introduced an unprecedented experience in doing these same things. With the design-thinking approach, often the outcome is an intangible something that evokes a deeper response in the user.
7.2.4 All Three Paradigms Have a Place
These paradigms are just frameworks within which to think about design. The paradigms are not necessarily mutually exclusive; they do overlap and can be complementary. In most real system or product development, there is room for more than one approach.
Reading some of the new literature on design thinking, you might conclude that the old engineering approach to interaction design is on its way out (Nilsson & Ottersten, 1998), but a utilitarian engineering approach is still effective for
systems with complex work domains. Just because a methodical and systematic approach to contextual inquiry, requirements, and modeling is a characteristic of the engineering paradigm does not mean that we do not pay attention to such things in the other paradigms.
Even the most innovative design thinking can benefit from being grounded in a real understanding of user work practice and user needs that comes from contextual inquiry and analysis. And even creative design thinking must still be directed and informed, and informing design can mean doing contextual inquiry and analysis, modeling, requirements extraction, prototyping, and so on. Further, there is no reason why the rich approach of design thinking, using ideation and sketching, should not be followed with iterative refinement.
Similarly, there is need for creativity and innovation in all three paradigms.
Just because we single out design thinking as the main place we discuss innovation and creativity does not mean there is no call for creativity in the other paradigms.
Further, even when the engineering paradigm or design-thinking paradigm is dominant in a project, designing from HIP-like inputs is still effective for leading to an interaction that is consistent with human cognitive capabilities and limitations. A consideration of ergonomics, human factors, and carefully studied workflow can still have a valid place in almost any kind of design.
7.3 DESIGN THINKING
The "design" box in the lifecycle template is usually a placeholder for an unknown or unspecified process. The usual UX lifecycle glosses over the whole subject of what is in the box labeled "design" (Buxton, 2007b). Design should be more than just a box within a larger lifecycle; it is a separate discipline on its own.
What some call design is applied only after functionality and interaction design are completed, when the product needs a shell or skin before going to market and everyone wants to know what color the device will be. This
might help make an existing product attractive and perhaps more marketable, but this is cosmetic design, not essential design built into the product from the start.
Fortunately, this emerging mind-set that we call design thinking turns that around and puts a focus on design up front. The design-thinking paradigm is an
approach to creating an experience that includes emotional impact, aesthetics,
and social- and value-oriented interaction. The design of the product concept
and design for emotional impact and the user experience comes first; it is a
design-driven process.
Designers are called upon to create a new vision, taking customers and users to a profound and satisfying user experience. After the design concept emerges, then engineers can follow up by providing the functionality and interaction design to make the vision a reality.
Design thinking is immersive; everything is about design. Design thinking
is integrative; you pull many different inputs, inspiration, and ideas together to
focus on a design problem. Design thinking is human centered, requiring a
thorough understanding of the needs, especially the emotional needs, of
human users.
Design thinking is market oriented, requiring a thorough understanding
of the market, trends in usage and technology, and the competition. As
such, design thinking is not just the world of dreamers and geeks; it has
become an essential business tool for decision making and marketing.
Design thinking is broadly attentive to the product, packaging, presentation,
and customer support. Design thinking is an eclectic blend of art, craft,
science, and invention.
In the traditional engineering view, we use terms such as plan, analyze, build, evaluate, and optimize. In the design-thinking perspective, you are more likely
to hear terms such as create, ideate, craft, envision, interpret, excite, provoke,
stimulate, and empathize.
The Apple iPod Touch is an example of a product resulting from design thinking. The device has superb usability; its soft buttons have precise and predictable labels. The physical device itself has a marvelous design with great emotional impact. Much design effort went into aspects that had nothing to do with performance or functionality.
The packaging, gift-wrapping, and engraving appeal to a personal and social desirability. It is attractive; it is delightful. The user experience is everything and everything is about design. In fact, the label on the device does not
say, "Made by Apple"; it says, "Designed by Apple!" "You buy it for what it can do, but you love it because it is so cool." Apple's senior vice president of industrial design, Jonathan Ive, says (Salter, 2009), "With technology, the function is much more abstract to users, so the product's meaning is almost entirely defined by the designer."
7.4 DESIGN PERSPECTIVES
We describe three design perspectives as filters through which we view design
and design representations to guide thinking, scoping, discussing, and doing
design. They are easy to understand and do not require much explanation.
7.4.1 Ecological Perspective
The ecological design perspective is about how the system or product works within its external environment. It is about how the system or product is used in its context and how the system or product interacts or communicates with its environment in the process. This is a work role and workflow view, which includes social interaction and long-term phenomenological aspects of usage as part of one's lifestyle.
System infrastructure (Norman, 2009a) plays an important role in the ecological perspective because the infrastructure of a system, the other systems and devices with which it interacts in the world, is a major part of its ecology. Infrastructure leads you to think of user activities, not just isolated usage.
Norman (2009b) states it in a way that designers should take to heart, "A product
is actually a service."
7.4.2 Interaction Perspective
The interaction design perspective is about how users operate the system or product. It is a task and intention view, where user and system come together. It is where users look at displays and manipulate controls, doing sensory, cognitive, and physical actions.
7.4.3 Emotional Perspective
The emotional design perspective is about emotional impact and value-sensitive aspects of design. It is about social and cultural implications, as well as the aesthetics and joy of use. System infrastructure (Norman, 2009b) can also play a role in the emotional perspective because the infrastructure of a system provides scaffolding for the phenomenological aspects of usage, which are about broader usage contexts over longer periods of time.
A product is not just a product; it is an experience (Buxton, 2007a). People do not usually use a product in isolation from other activities. People use products as part of an activity, which can include many different kinds of usage of many different things. And that starts with the out-of-the-box experience, which is not enhanced by hard-to-open plastic encasing, large user manuals, complex installation procedures, and having to consent to a legal agreement that you cannot possibly read.
The Delicate Balance among Visual Appeal, Emotion, and Usability
Gitte Lindgaard, Distinguished Research Professor, Carleton University, Ottawa, Canada; Professor, Neuroaffective Psychology, Swinburne University of Technology, Melbourne, Australia
"Yellow sox = nice guy!" We know that many snap decisions, such as assessing the suitability of a person to a
particular job, are often based on less than credible, if not entirely irrelevant, information. Still, whether we are sizing up another person or deciding to stay on a given Website, first impressions are instant, effortless, powerful, and based on affect, that is, on "what my body tells me to feel." Even decisions that should involve serious contemplation, additional information, and evidence from different sources are made instantly. Worse, once we have made a decision, we set out to "prove" to ourselves that our decision was "right."
Thus, when encountering an ugly, cluttered Website, we will be out of there on the next click, before gleaning the quality of the information, goods, or services it offers. However, if we have decided a priori to buy a given product from a certain vendor, we will persevere and complete our purchase, hating every step of the interaction. In our annoyed, even angry, state, we go out of our way to identify every trivial usability flaw simply to justify that initial decision.
Yet, we are much more likely to hang around and enjoy the ride on a pretty site even if its products are of a lower quality and the usability issues more serious and more numerous than on the ugly site so unceremoniously discarded. When given a choice, even the most unusable, but very pretty, site will typically be preferred over a less appealing, more usable site. In some studies, people, well aware of the site's poor usability, have vigorously defended and justified their choice.
Numerous other studies have shown that beauty matters and that the first impression "sets the scene" for further action, at least in a Web environment where the next site is but a click away; visual appeal is simply used as a screening device. Quality of content will only be evaluated on sites that pass that initial step.
This rather uncompromising instant approach to decide on staying or leaving a Website could suggest that being pretty is all that matters. Not so! When the Canadian government wanted to attract masses of new graduates, they designed a vibrantly colorful Website with lots of animation in the belief that "this is what young people like." They then took their beautiful Website around the country for feedback from the target audience. "Yeah, we like lots of bright color and movement" was the response, "but not when looking for a job in the Government!"
For an application to appeal to users, then, their judgmental criteria depend on the usage context. Even a neutral, relatively boring gray color may occasionally be very appealing, pleasant to use, and highly usable. Figure 1 shows a telecommunications network alarm management system. In earlier versions of the software it was almost impossible to identify the problem nodes, making the operator's job extremely stressful. If a blockage between two nodes is not detected and rectified within a few minutes, the problems spread so quickly that the entire network may break down, blocking all communication and making it almost impossible to fix.
The gray background on the left shows a map of a recognizable part of a certain city with a network problem. The rough-looking color indicates land surrounding a small (outlined) river, shown with a smooth surface and overlaid with the network nodes currently in alarm mode. This facilitates the geographical identification of the location. The red rectangles indicate the most serious problem nodes and the seriousness of these.
In the present example, there is no communication between the two red nodes; the yellow node is affected, but is still able to communicate. Callout balloons with the letters "C" (critical, circled in green), "M", and "m" (both
Figure 1: Example of an alarm management system relying on a simple visual language
medium) show where to start fixing the problem. Clicking on the red "C" takes the operator directly to the faulty equipment, shown on the right, where clicking again on the red C shows the affected equipment.
This example takes us back to Mark Weiser's notion of "calm computing" aiming to ensure that the user feels good and "in control" at all times. There are no design gimmicks, no fun or attempt to "jazz up" the displays with smart icons or pretty colors in this user interface; it just "feels right." This simple, very effective visual language presented on a consistent, bland background has removed most of the stress previously experienced by network operators. It has been adopted by the International Telecommunications Union as a standard for network management systems.
These examples contradict the currently sexy assumption in the human-computer interaction community that even serious tasks should be couched in a colorful gaming model. Apparently, appropriateness also features prominently when deciding how much to like a Website or an application. Judgments of appropriateness are based largely on culturally constructed expectations. The domain and the purpose of an interactive product determine our expectations and hence influence how we feel about it. This emotional effect underlies our situated judgment of appeal. Indeed, in our collective quest to create great user experiences, we must be careful not to lose sight of the traditional, often sneered at, utilitarian brand of usability.
The example in Figure 2 is from a high-pressure petrochemical plant-management system. The plant produces many types of plastic, from purified, highly compressed gas injected under high pressure into reactor vessels operating at 2000+ C. The gas is mixed with chemical catalysts, which eventually turn the mix into tiny plastic pellets. The left side of Figure 2 shows how the pressure (red pen) and temperature (green pen) were plotted automatically on a constantly scrolling paper roll before automation. The variation in each parameter is shown in rows, and time is given in columns, with each row representing 30 minutes in elapsed time. The range of movement of those two pens enabled the team leader to easily monitor four reactor vessels simultaneously.
Three minor changes in the management system are shown on the right: (1) time is now shown in rows, (2) each column represents 10 minutes (instead of 30) of elapsed time, and (3) the two indicators are shown on different screens. These apparently minor changes paralyzed production completely. The highly experienced team with over 20 years of practice was unable to achieve the required quality of product; they continually overadjusted either the pressure or the temperature.
Consequently, the company nearly lost its main customer who bought 60% of the products, and an engineer had to be on duty with the team 24/7 for the next 6 months. The screen display was just as visually appealing as the original paper roll, but relearning the system rendered the system unusable. Thus, aesthetics alone did not ensure usability; the
7.5 USER PERSONAS
For the Latin sticklers, we prefer the easy-going "personas" over the pedantic but probably more correct "personae." Personas are a powerful supplement to work
roles and user class definitions. Storytelling, role-playing, and scenarios go hand
in hand with personas.
We have leaned heavily on Cooper (2004) for our descriptions of personas with additional ideas on connecting to contextual data from Holtzblatt, Wendell, and Wood (2005, Chapter 9) and we gladly acknowledge their contributions here. Personas are an excellent way of supporting the design
thinking and design perspectives of this chapter.
7.5.1 What Are Personas?
A persona is not an actual user, but a pretend user or a "hypothetical archetype" (Cooper, 2004). A persona represents a specific person in a specific work role and sub-role, with specific user class characteristics. Built up from contextual data, a persona is a story and description of a specific individual who has a name,
a life, and a personality.
Personas are a popular and successful technique for making your users really
real. Personas have an instant appeal through their concreteness and personal engagement that makes them ideal for sharing a visualization of the design target across the whole UX team.
Stories Are at the Center of User Experience
Whitney Quesenbery, WQusability, Coauthor, Storytelling in User Experience: Crafting Stories for Better Design (Rosenfeld Media)
Perhaps you think that stories and storytelling are out of place in a book about methodology and process. Once, you
might have been right. As recently as 2004, a proposal for a talk about writing stories and personas as a way of understanding the people who use our systems was rejected out of hand with, "Personas? Stories!? We are engineers!"
They were wrong.
Stories have always been part of how human beings, including engineers, come up with new ideas and share those ideas with others. Stories may be even more important for innovative ideas. It is not very hard to explain an incremental change: "It is just like it is now, but with this one difference." But when you are trying to imagine an entirely new concept or a design that will change basic processes, you need a story to fill in the gaps and make the connections between how it is now and how it might be.
To see what I mean, try this experiment. Close your eyes and try to explain to your 1995 self why you might want to use Twitter, Yelp, or Foursquare. There are just too many steps between the world then and the world now.
Sometimes it is easy because the context is familiar. Yelp's story is like that: You are standing somewhere-the lobby of a building or a street corner-and you are hungry. Where can you go eat? Is it open right now? The idea is easy; the product is new because we could not pull off the technology, even just a few years ago.
Sometimes it is hard because the idea meets a need you did not know you even had. When Twitter first launched, people said "Why would I want to know that much about someone else's daily life?" CommonCraft's video, Twitter in Plain English² takes up this challenge by showing how the system works in 2 minutes and 23 seconds. Not in technical terms, but in the human actions and human relationships it is based on.
Could you have predicted that (for a few years) a FAX would be the easiest way to order lunch from the local deli?
It does not make sense until you think about the entire user experience.
One place to start an innovation story is with a frustrating situation. Tell a story that explains that point of pain. Maybe your story starts with how annoying it is to take sandwich orders from a room full of people. Include context and imagery and a realistic situation. Or it might be about the noise and craziness of lunch hour in a busy city deli, with people all yelling at once and at least three different languages in the kitchen.
² www.commoncraft.com/twitter
Now change that story to give it a better ending. That is your innovation story.
You have people, in a situation, with a problem, and a solution, along with what will make it work.
Before you decide that your story is ready to share, ask yourself, "Did it all seem too easy? Did the story seem a little too perfect?" If so, take a 10-minute timeout and start over. Back in the deli, did you decide that the solution would be a laptop on the deli counter? Did you think about the people standing behind a counter, wiping mustard off their hands? It is easy to fall into the trap of writing stories about the users we wish we had.
Stories in user experience are not made up fairy tales; they are grounded in good user research and other data. They are like personas in this way. Personas start with data, organized into user profiles. It is the stories that turn a good user profile into a persona, that is, adding the emotions, detailed personal characteristics, and specific background or goals that make a persona come alive. You cannot tell much of a story about a stick figure. However, if you imagine Jason, who is leaving high school, is interested in computers, and loves his local sports team, you can begin to think about what kind of experiences will work well for Jason and how he might interact with the product you are designing.
Similarly, you can start with a task or goal. Use your favorite method to model the task. That gives you the analysis. Put that together into a sequence of actions, and you have a scenario. Add characters into that narrative, with all their context and personal goals. Let their emotions be part of it; they are not robots. Are they frustrated, eager, happy, or sad? Now you are starting to craft a story.
Both personas and stories rely on data. They are the raw material. Scenarios and profiles are the skeleton-the basic shape and size of it. But it is when you add emotion and imagery that you have a story. If you understand the human and technical context, your stories will have believable characters and narratives.
The next time you want to help someone understand a design or how it will be used, try a story instead of a technical explanation. The really great thing about stories is that they make people want to tell more stories, which will get everyone engaged with the idea and its impact on our lives. All of a sudden, you are all talking about user experience.
7.5.2 What Are Personas Used For? Why Do We Need Them?
Common sense might dictate that a design for a broad user population should have the broadest possible range of functionality, with maximum flexibility in how users can pick the parts they like the most. But Cooper (2004, p. 124) tells us this thinking is wrong. He has shown that, because you simply cannot make a single design be the best for everyone, it is better to have a small percentage of the user population completely satisfied than the whole population half-satisfied.
Cooper extends this to say it can be even better to have an even smaller percentage be ecstatic. Ecstatic customers are loyal customers and effective marketing agents. The logical extreme, he says, is to design for one user. This is where a persona comes in, but you have to choose that one user very carefully.
It is not an abstract user with needs and characteristics averaged across many other kinds of users. Each persona is a single user with very concrete characteristics.
Edge cases and breadth
Personas are a tool for controlling the instinct to cover everything in a design,
including all the edge cases. This tool gives us ways to avoid all the unnecessary discussion that comes with being "edge-cased to death" in design discussions.
Personas are essential to help overcome the struggle to design for the conflicting needs and goals of too many different user classes or for user classes that are too broad or too vaguely defined. Where the users who take on a single work role come from different user classes, a persona lets us focus on designing literally for a single person and liberates us from having to sort through all the conflicting details of multiple user classes.
As Cooper (2004) put it, personas can help end feature debates. What if the user wants to do X? Can we afford to include X? Can we afford to not include X? How about putting it in the next version? With personas, you get something more like this: "Sorry, but Noah will not need feature X." Then someone says "But someone might." To which you reply, "Perhaps, but we are designing for Noah, not 'someone.'"
A specific persona makes clear what functionality or features must be included and what can be omitted. It is much easier to argue whether a person represented by a specific persona would like or use a given design feature.
Designers designing for themselves
Designing to "meet the needs of users" is a vague and ill-defined notion giving designers the slack to make it up as they go. One common way designers do stray from thinking about the user is when they design for themselves. In most project environments, it is almost impossible for designers to not think of the design in terms of how they would use it or react to it.
One of the strengths of personas is that they deflect this tendency of designers to design for themselves. Because of their very real and specific characteristics, personas hold designers' feet to the fire and help them think about designs for people other than themselves. Personas help designers look outward instead of inward. Personas help designers ask "How would Rachel use this feature?," forcing them to look at the design from Rachel's perspective. The description of a persona needs to make it so well defined as a real and living being that it is impossible for a designer or programmer to substitute themselves or their own characteristics when creating the design.
7.5.3 How Do We Make Them?
As in most other things we do in analysis and design, we create a separate set of personas for each work role. For any given work role, personas are defined by user goals arising from their sub-roles and user classes. Different sub-roles and associated user classes have different goals, which will lead to different designs.
Figure 7-2: Overview of the process of creating a persona for design.
Identifying candidate personas
Although personas are hypothetical, they are built from contextual data about real users. In fact, candidate personas are identified on the fly as you interview potential users. When you encounter a user whose persona would have characteristics different from those of any existing candidate, add a new persona to the list of candidates.
This means that you will create multiple candidate personas generally corresponding to a major sub-role or user class, as shown in the top part of Figure 7-2. How many candidate personas do you need? As many as it takes to cover all the users. It could be in the dozens.
Goal-based consolidation
The next step is to merge personas that have similar goals. For example, in the Ticket Kiosk System we have a persona of an undergraduate student ticket buyer sub-role who lives on campus and is interested in MU soccer tickets. Another persona in the same work role, this time a graduate student who lives off campus, is interested in MU tennis tickets.
These two personas have different backgrounds, defining characteristics, and perhaps personal interests. But in the context of designing the kiosk system, they are similar in their goals: get tickets for medium popularity athletic events at MU. This step reduces the number of personas that you must consider, as shown in the middle part of Figure 7-2. But you still cannot design for a whole group of
personas that you may have selected, so we choose one in the next section.
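The goal-based consolidation step is mechanical enough to sketch in code. The sketch below is purely illustrative (the book prescribes no tooling, and all field names are assumptions): candidate personas carrying a set of goals are grouped within a work role, and one representative is kept per distinct goal set.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CandidatePersona:
    # Hypothetical fields; real personas carry far richer narratives.
    name: str
    work_role: str
    goals: frozenset  # the user goals this persona is defined by

def consolidate(candidates):
    """Keep one representative per (work role, goal set) group."""
    groups = {}
    for p in candidates:
        groups.setdefault((p.work_role, p.goals), []).append(p)
    return [members[0] for members in groups.values()]

# The two Ticket Kiosk System students differ in background but share
# the same goal, so consolidation merges them into one persona.
goals = frozenset({"get tickets for medium-popularity MU athletic events"})
undergrad = CandidatePersona("on-campus undergrad", "ticket buyer", goals)
grad = CandidatePersona("off-campus grad student", "ticket buyer", goals)
merged = consolidate([undergrad, grad])
```

As in the kiosk example, the two student candidates collapse to a single persona because their goals coincide, mirroring the reduction shown in the middle of Figure 7-2.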
Selecting a primary persona
Choose one of the personas selected in the previous step as the one primary persona, the single best design target, the persona to which the design will be made specific.
Making this choice is the key to success in using the persona in design. The idea is to find common denominators among the selected personas. Sometimes one of the selected personas represents a common denominator among the others and, with a little adjusting, that becomes the primary persona.
The way you get the primary persona right is to consider what the design might look like for each of the selected personas. The design specifically for the right primary persona will at least work for the others, but a design specifically for any of the other selected personas may not work for the primary persona.
An example of the primary persona for the student sub-role in the Ticket Kiosk System could be that of Jane, a biology major who is a second-generation MU attendee and a serious MU sports fan with season tickets to MU football. This persona is a candidate to be primary because she is representative of most MU students when it comes to MU "school spirit."
Another persona, that of Jeff, a music major interested in the arts, is also an important one to consider in the design. But Jeff is not a good candidate as a primary persona because his lack of interest in MU athletics is not representative of a majority of MU students.
In constructing the primary persona, making it precise and specific is
paramount. Specificity is important because that is what lets you exclude other
cases when it comes to design. Accuracy (i.e., representing a particular real user) is not as important because personas are hypothetical.
Do not choose a mixture of users or an "average" user; that will be a poor choice and the resulting design will probably not work well for any of the personas. Averaging your users just makes your persona a Mr. Potato Head, a conglomeration that is not believable and not representative of a single user.
7.5.4 Mechanics of Creating Personas
Your persona should have a first and last name to make it personal and real. Always, of course, use fictitious names for personas to protect the anonymity of the real users upon which they may be based. Mock up a photo of this person. With permission, take one of a volunteer who is a visual match to the persona or use a photo from a noncopyrighted stock collection. Write some short textual
narratives about their work role, goals, main tasks, usage stories, problems
encountered in work practice, concerns, biggest barriers to their work, etc.
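The ingredients just listed can be collected in a simple record. This is only an illustrative sketch; the field names and example values are our assumptions, not something prescribed by the persona technique itself:

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """A hypothetical-but-specific user; the name is always fictitious."""
    name: str                                        # first and last name
    photo: str                                       # stand-in photo (volunteer or stock)
    work_role: str                                   # the work role this persona fills
    goals: list = field(default_factory=list)
    main_tasks: list = field(default_factory=list)
    usage_stories: list = field(default_factory=list)
    problems: list = field(default_factory=list)     # problems in work practice
    barriers: list = field(default_factory=list)     # biggest barriers to their work

# A fictitious example in the spirit of the Ticket Kiosk System personas:
jane = Persona(
    name="Jane Hartley",
    photo="personas/jane.jpg",
    work_role="Student",
    goals=["Buy MU football season tickets quickly"],
    usage_stories=["Buys playoff tickets at a kiosk between classes"],
)
```

Keeping the record this small is deliberate: the narratives themselves, not the fields, are what make a persona feel real.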
When a persona is developed for a work role and there is enough space in the flow and social model diagrams, you can show the association of your personas to work roles by adding the persona, represented as a "head shot" photo or drawing of a real person, attached with lines to the work role icon. Label each with the persona's name.
7.5.5 Characteristics of Effective Personas
Make your personas rich, relevant, believable, specific, and precise
A persona's details must form a rich part of a life story. The persona has to be specific and precise, which means lots of details that all fit together. Give your persona a personality and a life surrounded with detailed artifacts.
Personas must be relevant and believable. Every persona must be a complete and consistent picture of a believable person. Personas excel at bringing out descriptions of user skills.
Unlike aggregate categories (e.g., user classes), a persona can be a frequent user without being an expert (someone who uses the system often but still does not understand how it works).
Make your personas "sticky"
Some practitioners of the persona technique go far beyond the aforementioned minimal descriptions of their creations. The idea is to get everyone thinking in terms of the personas, their needs, and how they would use a given system.
Personas need to get lots of visibility, and their personalities need to be
memorable or "sticky" in the minds of those who encounter them (Nieters, Ivaturi, & Ahmed, 2007). To this end, UX teams have created posters, trading cards, coffee mugs, T-shirts, screen "wallpaper," and full-sized cardboard stand-up figures to bring their personas alive and give them exposure, visibility, and memorability to keep them on the minds of all stakeholders.
At Cisco in San Jose, designers have gone so far as to invent "action figures" (à la Spiderman), little dolls that could be dressed and posed in different ways and photographed (and sometimes further "developed" via Photoshop) in various work contexts to represent real usage roles (Nieters, Ivaturi, & Ahmed, 2007). To us, that may be going beyond what is necessary.
Where personas work best
When personas are used in designing commercial products or systems with relatively simple work domains (i.e., projects on the left-hand side of the system complexity space of Figure 2.5), they help account for the nuances and the activities in personal lives outside organizations. Social networking and other phenomenological behavior come into play.
For example, you may have the kind of person who always carries a phone but does not always carry a camera. This might help in design discussions about whether to combine a camera in a cellphone design.
As you move toward the right-hand side of the system complexity space of Figure 2.5, toward systems for more complex work domains, the work practice often becomes more firmly defined, with less variation in goals. Individual users in a given work role become more interchangeable because they have almost exactly the same goals. For example, the work goals of an astronaut are established by the mission, not by the person in the astronaut role, and usage is carefully scripted.
In this kind of project environment, personas do not offer the same advantages in informing design. Roles such as astronaut or air traffic controller are defined very restrictively with respect to background, knowledge, skills, and training, already narrowing the target for design considerably. People who take on that role face stiff user class specifications to meet and must work hard and train to join the user community defined by them. All users in the population will have similar characteristics and all personas for this kind of role will look pretty much alike.
7.5.6 Goals for Design
As Cooper (2004) tells us, the idea behind designing for a persona is that the design must make the primary persona very happy, while not making any of the selected personas unhappy. Buster will love it and it still works satisfactorily for the others.
7.5.7 Using Personas in Design
Team members tell "stories" about how Rachel would handle a given usage situation. As more and more of her stories are told, Rachel becomes more real and more useful as a medium for conveying requirements.
Start by making your design as though Rachel, your primary persona, is the only user. In Figure 7-3, let us assume that we have chosen persona P3 as the primary persona out of four selected personas.
Figure 7-3
Adjusting a design for the primary persona to work for all the selected personas
Because D(P3) is a design specific to just P3, D(P3) will work perfectly for P3. Now we have to make adjustments to D(P3) to make it suffice for P1.
Then, in turn, we adjust it to suffice for P2 and P4. The final resulting design will retain the essence of D(P3), plus it will include most of the attributes that make D(P1), D(P2), and D(P4) work for P1, P2, and P4, respectively.
As you converge on the final design, the nonprimary personas will be accounted for, but will defer to this primary persona design concerns in case of conflict. If there is a design trade-off, you will resolve the trade-off to benefit the primary persona and still make it work for the other selected personas.
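The selection-and-adjustment logic of Figure 7-3 can be sketched in code. Here `design_for`, `works_for`, and `adjust` are hypothetical stand-ins for the team's design judgment; nothing in the persona technique prescribes these functions:

```python
def choose_primary(personas, design_for, works_for):
    """Pick the persona whose specific design would also work for all the others."""
    for candidate in personas:
        d = design_for(candidate)                   # imagine D(candidate)
        if all(works_for(d, p) for p in personas):
            return candidate
    return None  # no single persona dominates; reconsider the selection

def converge_design(primary, others, design_for, adjust):
    """Start from the primary's design, then fold in what the others need,
    never breaking the design for the primary persona."""
    d = design_for(primary)                         # D(P3): perfect for P3
    for p in others:                                # adjust for P1, P2, P4 in turn
        d = adjust(d, p, keep_working_for=primary)
    return d
```

In a trade-off, `adjust` must resolve conflicts in favor of `keep_working_for`, mirroring the rule that the primary persona wins design conflicts.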
7.5.8 Example: Cooper's In-Flight Entertainment System
Cooper (2004, p. 138) describes the successful use of personas in a Sony design project for an in-flight entertainment system called P@ssport. In addition to the work roles for system maintenance and the flight attendants who set up and operate the entertainment system, the main users are passengers on flights. We call this main work role the Traveler.
The user population that takes on the role of Traveler is just about the broadest possible population you can imagine, including essentially all the people who travel by air, which is almost everyone. Like any general user population, users might represent dozens of different user classes with very diverse characteristics. Cooper showed how the use of personas helped mitigate the breadth, vagueness, and openness of specification of the various Traveler user classes and their characteristics.
You could come up with dozens or more personas to represent the Traveler, but in that project the team got it down to four personas, each very different from the others. Three were quite specialized to match the characteristics of
a particular type of traveler, while the fourth was an older guy who was not technology savvy and was not into exploring user interface structures or features, essentially the opposite of most of the characteristics of the other personas.
They considered designs for each of the first three personas, but because none of those designs would have worked for the fourth, they came up with an initial design for the fourth persona and then adapted it to work well for all the other personas, without sacrificing its effectiveness for the target persona.
Example: User Personas-Lana and Cory
Here is an example of a persona derived from the interviews of the couple, Lana and Cory, whom we treat as a single composite persona because they share
an approach to entertainment events. (NB: The interspersed comments in parentheses are not part of the persona, but are possible design-related observations about various aspects of it.)
Lana is a young 20-something manager and yoga instructor in the Dirk Gently Holistic Yoga Studio and enjoys using her laptop during off-work hours. Cory works as a graphic designer at Annals of Myth-information, a small-sized company of creative people.
Lana does not own a car, a smart option in Middleburg, so she takes the bus for distances beyond walking or biking. Cory has to drive to work but bikes or takes public transportation to other places on weekends. Lana and Cory work hard, play hard, and are ready for entertainment on the weekends. (Because they both spend time occasionally at bus stops, it would be a good place for them to peruse the entertainment possibilities and buy tickets while waiting for the bus.)
In addition to pursuing Middleburg entertainment, Lana and Cory have also been known to skip over to Washington, DC, or New York City to visit friends and take in some world-class entertainment. (Therefore, they would love to see information about events in other cities included in the kiosk.)
They occasionally take time out on weekday evenings to do something different, to get away from the routine, which can include seeing a movie, visiting a museum, going out with friends, or traveling in the immediate area. As a balance to the routine of their jobs, they both crave opportunities for learning and personal growth so they often seek entertainment that is sophisticated and interesting, entertainment that challenges intellectually.
However, there are some days they want to rest their minds and they seek something more like mindless entertainment, often something that will make them laugh. They hear about a lot of events and places to visit through word of mouth, but they wonder about how many other interesting events do not come to their attention.
Cory, being influenced by his work in designing social Websites, wonders if sources of entertainment information could also provide a special kind of social networking. He would like to see mediated discussions about events and entertainment-related issues or at least a way to post and read reviews and opinions of various movies and other performances.
Similarly, Lana would like a way to share event information. "Like maybe this weekend there is going to be a jazz festival at a certain sculpture garden and I want Cory to know about it. It would be nice to have a button to touch to cause some kind of link or download to my iPhone or iPod." It is easy to copy information from an entertainment Website and send it via email, but sharing is not as easy from a ticket office or kiosk.
To sum up the characteristics of their joint persona, they:
• lead busy lives with a need for cooling off once or twice a week
• are sophisticated, educated, and technology savvy
• are civic minded and care about sustainability and the environment
• like the outdoors
• have a good group of friends with whom they sometimes like to share entertainment
7.6 IDEATION
Ideation is an active, fast-moving collaborative group process for forming ideas
for design. It is an activity that goes with design thinking; you might say that ideation is a tool of design thinking; ideation is applied design thinking.
Ideation is where you start your conceptual design. This is a hugely creative and fun phase. Ideation is where you brainstorm to come up with ideas to solve design problems. Ideation is inseparable from sketching and evaluation aimed
at exploration of design ideas.
7.6.1 Essential Concepts
Iterate to explore
Ideation involves exploration and calls for extensive iteration (Buxton, 2007b). Be ready to try, try, try, and try again. Think about Thomas Edison and his more than 10,000 experiments to create a usable and useful light bulb. Make sketches
and physical mockups early and often, and expose customers and users to your
designs; involve them in their creation, exploration, and iteration.
The evaluation part of this kind of exploratory iteration is never formal; there are no established "methods." It is a fast, furious, and freewheeling comparison of many alternatives and inspiration for further alternatives. If you are starting out with only two or three alternatives, you are not doing this right.
Idea creation vs. critiquing
In the active give-and-take of design, there are two modes of thinking: idea
creation and critiquing. Idea creation is about the generation of new ideas and throwing them out for discussion and inspiration. Critiquing is review and judgment.
Although you will interweave idea creation and critiquing throughout the design process, you should know which mode you are in at any given time and
not mix the modes. That especially means not to mix critiquing into idea creation. Idea creation should result in a pure flow of ideas regardless of feasibility, in the classic tradition of brainstorming. Although we know that, at the end of the day, practical implementation constraints must be considered and allowed to carry weight in the final overall design, saying "Hey, wait a minute!" too early can stifle innovation.
Mason (1968) calls this separation of idea creation and critiquing "go-mode and stop-mode thinking."3 Sodan (1998) calls it the yin and yang of computer science. In idea-creation mode you adopt a freewheeling mental attitude that will permit ideas to flourish. In critiquing you revert to a cold-blooded, critical attitude that will bring your judgment into full play.
Idea creation gives a new creative idea time to blossom before it is cut at the stem and held up to the scale. Idea creation gives you permission to be radical; you get to play outside the safe zone and no one can shoot you down. Allowing early cries of "that will never work," "they have already tried that," "it will cost too much," "we do not have a widget for that," or "it will not work on our implementation platform" will unfairly hobble and frustrate this first step of creativity.
We once experienced an interesting example of this tension between innovation and implementation constraints with a consulting client, an example that we call the implementation know-it-all. The interaction designers in a cross- disciplinary team that included software folks were being particularly innovative in scenario and prototype sketching but the software team member was not going along.
He was doubtful whether their implementation platform could support the design ideas being discussed and he got his team to stop designing, start talking about technical feasibility, and explore implementation solutions. When we threw a "premature critiquing" penalty flag, he defended his position with the rationale that there was no sense spending time on an interaction design if you are only to discover that it cannot be implemented.
This might sound like a reasonable stance, but it is actually the other way around! You do not want to spend time working on technical solutions for an interaction design feature that can change easily as you evaluate and
iterate. That is the whole point of low-fidelity prototypes; they are inexpensive, fast, and easy to make without concerns about implementation platforms.
Wait and see how the design turns out before worrying about how to implement it.
3Thanks to Mark Ebersole, long ago, for this reference.
Beyond this, early stifling of a design idea prevents a chance to explore parts
of the idea that are practical. Even when the idea does turn out to be infeasible, the idea itself is a vehicle for exploring in a particular direction that can later be used to compare and contrast with more feasible ideas.
The design teams at IDEO (ABC News Nightline, 1999) signal premature critiquing by ringing a wrist-mounted bicycle bell to signal the foul of being judgmental too early in design discussions. To help engender an idea creation attitude in early design discussions, Cooper, Reimann, and Dubberly (2003,
p. 82) suggest that team members consider the user interface as all-powerfully magical, freeing it from implementation-bound concerns up front. When you do not have to consider the nuts and bolts of implementation, you might find you have much more creative freedom at the starting point.
7.6.2 Doing Ideation
If the roof doesn't leak, the architect hasn't been creative enough
-Frank Lloyd Wright (Donohue, 1989)
Figure 7-4
The Virginia Tech ideation studio, the "Kiva" (photo courtesy of Akshay Sharma, Virginia Tech Department of Industrial Design).
Set up work spaces
Set aside physical work spaces for ideation, individual work, and group work. Establish a place for design collaboration (Bødker & Buur, 2002).
If possible, arrange for dedicated ideation studio space that can be closed off from outside distractions, where sketches and props can be posted and displayed, and that will not be disturbed by time-sharing with other meetings and work groups.
In Figure 7-4 we show the collaborative ideation studio, called the Kiva, in the Virginia Tech Department of Industrial Design. The Kiva was originally designed and developed by Maya Design in Pittsburgh, Pennsylvania, and is used at Virginia Tech with their permission.
The Kiva is a cylindrical space in which designers can brainstorm and sketch in isolation from outside distractions. The space inside is large enough for seating and work tables. The inner surface of most of the space is a metallic skin.
It is painted so that it serves as an enveloping whiteboard, and the metallic skin can hold magnetic "push pins." The large-screen display on the outside can be used for announcements, including group scheduling for the work space.
In Figure 7-5 we show individual and group work spaces for designers.
Assemble a team
Why a team? The day of the lone genius inventor is long gone, as is the die-hard image of the disheveled inventor flailing about in a chaotic frenzy in a messy and cluttered laboratory (picture the professor in Back to the Future) (Brown, 2008).
Thomas Edison, famous not just for his inventions but for his processes of invention, broke with the lone genius inventor image and was one of the first to use a team-based approach to innovation. Edison "made it a profession that blended art, craft, science, business savvy, and an astute understanding of customers and markets" (Brown, 2008, p. 86). Today, design thinking is a direct descendant of Edison's tradition, and in design thinking, teamwork is essential for
bouncing ideas around, for collaborative brainstorming and sketching, and for potentiating each other's creativity.
So, gather a creative and open-minded team. You might think that only a talented few brilliant and inventive thinkers could make ideation work successfully. However, we all have the innate ability to think freely and creatively; we just have to allow ourselves to get into the mode, and the mood, for a free-thinking flow of ideas without inhibition and without concern that we will be criticized.
Try to include people with a breadth of knowledge and skills, cross-disciplinary people who have experience in more than one discipline or area. Include customer representatives and representative users. If you are going to be thinking visually, it helps to have a visual designer on the team to bring ideas from graphic design.
Figure 7-5
Individual and group designer work spaces (photos courtesy of Akshay Sharma, Virginia Tech Department of Industrial Design).
Use ideation bin ideas to get started
If you gathered ideation inputs into a "bin" of work activity notes back in contextual analysis, now is the time to use them. An ideation input bin is an unconstrained and loosely organized place to gather all the work activity notes and other ideas for sparking and inspiring design.
You should also include emotional impact factors in your ideation inputs because ideation is most likely where these factors will get considered for incorporation into the design. In your contextual data, look for work activity notes about places in the work practice that are dreaded, not fun, joy killing, or drudgery so you can invent fun ways to overcome these feelings.
Shuffle the notes around, form groups, and add labels. Use the notes as points of departure in brainstorming discussions.
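If you want to mirror the mechanics of the bin in code, a loose label-to-notes grouping is all it takes. This is only a sketch; the labels and notes below are invented examples, not data from any contextual study:

```python
from collections import defaultdict

# An ideation bin: work activity notes loosely grouped under emergent labels.
ideation_bin = defaultdict(list)   # label -> list of work activity notes

def file_note(label, note):
    """Drop a note into the bin under a (provisional) group label."""
    ideation_bin[label].append(note)

def relabel(old, new):
    """Merge one group into another as the team shuffles and regroups notes."""
    ideation_bin[new].extend(ideation_bin.pop(old, []))

# Hypothetical notes carried forward from contextual analysis:
file_note("waiting", "Dreads standing in long ticket lines before games")
file_note("sharing", "Wants to send event info to a friend's phone")
file_note("emotional impact", "Buying at the counter feels like drudgery")
relabel("waiting", "emotional impact")   # group labels evolve during discussion
```

The point of keeping the structure this loose is that groups and labels are expected to change constantly as brainstorming proceeds.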
Conceiving and Informing the Magitti Context Aware Leisure Guide
Dr. Victoria Bellotti, Principal Scientist, and Dr. Bo Begole, PARC, a Xerox Company
In the realm of new product and service innovation, it is rare that a business places such importance on the idea of utility that it is willing to invest heavily in user-centered research before investing in design and implementation of any kind. It is especially rare before even determining who the user should be or what the product or service should do. When this happened at PARC in 2003-2006, we were delighted to participate in an extraordinary collaboration with Dai Nippon Printing (DNP), the highest-revenue printing technologies and solutions company in the world. DNP executives wished to respond to the widespread transition from printed to electronic media. So they asked PARC, with its reputation for user-centered technology innovation, to discover a new rich media technology-based business opportunity and to develop an innovative solution for the Japanese market. They wanted the solution to be centered on leisure content, as that was most compatible with the bulk of the content in their traditional media printing business.
Initially the most important thing we needed to do was to search broadly for an ideal target user. A method we call "Opportunity Discovery" was developed to handle the situation where one wants to brainstorm and eliminate possible market opportunities in a systematic manner. Many different problem statements representing a demographic plus some activity, problem, or desire were compared side by side in terms of preagreed criteria, which represented the properties of an ideal opportunity for DNP. The most promising three were selected for further, deeper exploration.
Representatives of those target markets were interviewed about their receptiveness to new technology and finally the youth market was chosen as the most likely to adopt a novel technology solution.
Using surveys, interviews, and shadowing, we determined that the 19- to 25-year-old age group had the most leisure, as they were between cram school and a demanding career. These were therefore chosen as the ideal target for our leisure technology. After engaging in some persona explorations, we brainstormed about 500 ideas for possible technology solutions and subsequently clustered them into more coherent concepts. The concepts were evaluated by a team of PARC and DNP representatives for their intuitive appeal, their match to DNP's business competencies, and their potential to generate intellectual property, which could be used to protect any business endeavor built around the technology against competitors. The five best ideas were then sketched out in a deliberately rough scenario form to elicit criticism and improvement. They were then taken to Tokyo and exposed to representatives of the target market for feedback, refinement, and an indication as to which was the most compelling.
In the end, two scenarios were neck and neck-Magic Scope (a system for viewing virtual information) and Digital Graffiti (a system for creating virtual information). These scenarios were combined into the Magitti city leisure guide concept, which was then elaborated in a much more detailed format. We crystallized the idea of recommending venues where leisure activities could be pursued that became the heart of the final system. A mockup was built out of cardboard and plastic with switchable paper screens that matched the storyline in the scenario. This was taken back to Japan for in situ evaluation on the streets of Tokyo with target market representatives. We also held focus group evaluations using just the paper screens where more penetrating questions could be asked of large groups who outnumbered the researchers and were more confident in this context.
As Magitti was taking shape, we continued our field investigations, involving more interviews, observations, and a mobile phone diary, which led to useful insights that informed the system design. One phenomenon that we noticed was that people in the city tended to travel a long way to meet friends half-way between their widely dispersed homes. The half-way points were often unfamiliar and indeed most young people we interviewed on the street reported being moderately to extremely unfamiliar with the location they were in. A second phenomenon we noticed was that our young prospective users tended not to plan everything in advance, sometimes only the meeting place was preagreed. Both of these phenomena constituted good evidence of the receptivity toward or need for a leisure guide.
We surfaced a strong requirement for one-handed operation, as most Japanese people use public transit and carry bags with only one hand free in the context of use that Magitti was intended for. We also discovered a need for photos that convey ambiance inside a venue, as it is hard to see inside many Japanese businesses, even restaurants, because they are often above ground floor level. Finally, the fact that our target users trusted the opinions of people more than businesses and advertisers led us to believe that end user-generated content would be important.
Our extensive fieldwork and user-centered design activities allowed us to develop a well-grounded idea of what we needed to build and how it should work before we ever wrote a line of code for DNP. It is quite extraordinary that this happens so rarely, given that a lot of wasted development effort can be saved in technology innovation by good user-centered work. We can use observation to drive insights and focus our efforts on solving real problems, and we can elicit feedback from target users about simple scenarios and mockups early on to elicit crucial feedback. This approach was responsible for the fact that the Magitti system concept was very appealing to representatives of its target market. The working prototype we subsequently developed was also well received and
Brainstorm
Is it wrong to cry "Brainstorm!" in a crowded theater?
- Anonymous
Ideation is not just sketching, it is brainstorming. According to Dictionary.com, brainstorming is a "conference technique of solving specific problems, amassing information, stimulating creative thinking, developing new ideas, etc., by unrestrained and spontaneous participation in discussion." Ideation is classic brainstorming applied to design.
Setting the stage for ideation. Part of brainstorming involves the group deciding for itself how it will operate. For groups of any size, however, it is common to start with an overview discussion in the group as a whole.
The initial overview discussion establishes background, parameters, and agreement on the goals of the design exercise. Post major issues and concepts from your ideation bin (see the earlier discussion). The ideation team leader must be sure that everyone on the team is in tune with the same rules for behavior (see the subsection on rules of engagement later).
Next, divide up the team into pairs or small sub-teams and go to breakout groups to create and develop ideas. The goal of breakout groups is to have intense rapid interactions to spawn and accumulate large numbers of ideas about characteristics and features. Use marking pens on flip charts and/or write on whiteboards. Put one idea per sheet of paper so that you have maximum freedom to move each around independently.
Use sketches (imperative, not optional) annotated with short phrases to produce quick half-minute representations of ideas. You can include examples of other systems, conceptual ideas, considerations, design features, marketing ideas, and experience goals. Get all your wacky, creative, and off-the-wall ideas out there. The flow should be a mix of verbal and visual.
Reconvene when the sub-teams have listed all the ideas that they can think of or when the allotted time is up. In turn, each sub-team reports on their work
to the whole group. First posting their annotated sketches around the room, the sub-teams walk the group through their ideas and explain the concepts. The sub-teams then lead a group discussion to expand and elaborate the ideas, adding new sketches and annotations, but still going for essentials, not completeness of details.
When the font of new ideas seems to have run dry for the moment, the group can switch to critiquing mode. Even in critiquing, the focus is not to shoot down ideas but to take parts that can be changed or interpreted differently and use them in even better ways.
In Figure 7-6 we show an example of ideation brainstorming in mid-process within the Virginia Tech ideation studio.
The mechanics of ideation. Use outlining as verbal sketching. An outline is easier to scan for key ideas than bulk text. An outline is an efficient way to display ideation results on flip charts or in projected images.
Immerse your sketching and ideation within a design-support ecology, a "war room" of working artifacts as inputs and inspiration to ideation. Get it all
out there in front of you to point to, discuss, and critique. Fill your walls, shelves, and work tables with artifacts, representations of ideas, images, props, toys, notes, posters, and materials.
Make the outputs of your ideation as visual and tangible as possible; intersperse the outline text with sketches, sketches, and more sketches. Post and display everything all around the room as your visual working context. Where appropriate, build physical mockups as embodied sketches.
Use teamwork and play off of each other's ideas while "living the part of the user." Talk in scenarios, keeping customers and users in the middle, telling stories of their experience as your team weaves a fabric of new ideas for design solutions.
In IDEO's "deep dive" approach, a cross-disciplinary group works in total immersion without consideration of rank or job title. In their modus operandi of focused chaos (not organized chaos), "enlightened trial and error succeeds over the planning of lone genius." Their design process was illustrated in a well-known ABC News documentary with a new design for supermarket shopping carts, starting with a brief contextual inquiry in which team members visited different stores to understand the work domain of shopping and issues with existing shopping cart designs and use.
Figure 7-6
Ideation brainstorming within the Virginia Tech ideation studio, Kiva (photo courtesy of Akshay Sharma, Department of Industrial Design).
Then, in an abbreviated contextual analysis process, they regrouped and engaged in debriefing, synthesizing different themes that emerged in their contextual inquiry. This analysis fed parallel brainstorming sessions in which they captured all ideas, however unconventional. At the end of this stage they indulged in another debriefing session, combining the best ideas from brainstorming to assemble a design prototype. This alternation of brainstorming, prototyping, and review, driven by their "failing often to succeed sooner" philosophy, is a good approach for anyone wishing to create a good user experience.
Rules of engagement. The process should be democratic; this is not a time for pulling rank or getting personal. Every idea should be valued the same. Ideation should be ego free, with no ownership of ideas; all ideas belong to the group; all are equally open to critiquing (when the time comes). It is about the ideas, not the people. There is to be no "showboating" or agendas of individuals to showcase their talent.
The leader should be especially clear about enforcing "cognitive firewalling" to prevent incursions of judgment into the idea-creation mode. If the designers are saying they need a particular feature that requires an interstellar
ion-propulsion motor and someone says "wait, we cannot make that out of Tinkertoys," you will have to throw out a penalty flag.
Example: Ideation for the Ticket Kiosk System
We brainstormed with potential ticket buyers, students, MU representatives, and civic leaders. Here we show selected results of that ideation session with our Ticket Kiosk System design team as a consolidated list with related categories in the spirit of "verbal sketching." As in any ideation session, ideas were accompanied with sketches. We show the idea part of the session here separately to focus on the topic of this section.
Thought questions to get started:
What does "an event" mean? How do people treat events in real life? An event is more than something that happens and maybe you attend
An event can have emotional meanings, can be thought provoking, can have meaning that causes you to go out and do something
Ontological artifacts:
Tickets, events, event sponsors, MU student ID, kiosk
Things people might want to do with tickets:
People might want to email tickets to friends
Possible features and breadth of coverage:
We might want to feature customized tickets for keepsake editions
Homecoming events
Parents weekend events
Visiting speakers on current topics
Visitor's guide to what's happening in town and the university
Christmas tour of Middleburg
View Christmas decorations on historic homes
Walk Main Street to see decorations and festive shops
Types of events:
Action movies, comedy (plays, stand-up), concerts, athletic events, specials
Special themes and motifs:
Motif for the Ticket Kiosk System could be "Adventures in Entertainment," which would show up in the physical design (the shape, images and colors, the aesthetic appearance) of the kiosk itself and would carry through to the metaphor pervading the screen, dialogue, buttons, and so on in the interaction design
Complete theme package:
Football game theme: brunch, tailgating parties, game tickets, post-game celebrations over drinks at select places in town, followed by a good football movie
Date night theme: Dinner and a movie, restaurant ads with movie/event tickets, proximity information and driving/public transportation directions, romantic evening, flowers from D'Rose, dinner at Chateau Morrisette, tour some of the setting of the filming of Dirty Dancing, stroll down Draper Road under a full moon (calendar and weather driven), watch Dirty Dancing at The Lyric Theater, tickets for late-night wine tasting at The Vintage Cellar, wedding planner consultation (optional)
Business consideration:
Because it is a college town, if we make a good design, it can be reused in other college towns
Competition: Because we are up against ubiquitous Websites, we have to make the kiosk experience something way beyond what you can get on a Website
Emotional impact:
Emotional aspect about good times with good friends
Emphasize MU team spirit, logos, etc.
Entertainment event tickets are a gateway to fun and adventure
Combine social and civic participation
Indoor locations could have immersive themes with video and surround sound
Immersive experience: For example, an indoor kiosk (where security is less of a problem) at The University Mall could offer an experience "they cannot refuse," supported by surrounding immersive visuals and audio in an ATM-like installation with wrap-around display walls and surround sound; between ticket buyers, run a preview of the theme and its mood
Minority Report style UIs
Rock concerts for group euphoria
Monster trucks or racing: ambiance of power and noise, appeals to the more primal instincts and thrill-seeking
Other desired impact:
Part of university and community "family"
Ride on the emerging visibility of and talent at MU
Collective success and pride
Leverage different competencies of MU and community technologies
Patron-of-the-arts feeling: classiness, sophistication, erudition, feeling special
Community outreach:
Create public service arrangements with local government (e.g., could help advertise and sell T-shirts for annual street art fair)
Advertise adult education opportunities, martial arts classes, kids camps, art and welding courses
Ubiquitous locations:
Bus stops
Library
Major dorms
Student center
City Hall building
Shopping malls
Food courts
Inside buses
Major academic and administrative buildings
7.7 SKETCHING
We have already mentioned sketching several times. Sketching is the rapid
creation of freehand drawings expressing preliminary design ideas, focusing
on concepts rather than details. To start with, we credit Bill Buxton (2007b) as the champion for sketching; much of what we say about sketching can be credited to him.
7.7.1 Essential Concepts
Sketching is essential to ideation and design
Design is a process of creation and exploration, and sketching is a visual medium for that exploration. Sketching for design goes back at least to the Middle Ages. Consider da Vinci and all his famous sketch books. Nilsson and Ottersten (1998) describe sketching as an essential visual language for brainstorming and discussion.
By adding visualization to ideation, sketching adds cognitive supercharging, boosting creativity by bringing in more human senses to the task (Buxton, 2007a). Clearly sketching supports communication within ideation and, as Nilsson and Ottersten (1998) point out, sketches also serve as important longer-term design documentation. This helps other team members and designers retain understanding of the design and its details as they get into prototyping and implementation. The evolution of your sketches provides a history of your thinking.
What sketching is and is not
Sketching is not about putting pen to paper in the act of drawing. A sketch is not about making a drawing or picture of a product to document a design. A sketch is not just an artifact that you look at; a sketch is a conversation between the
sketcher or designer and the artifact. A sketch is a medium to support a
conversation among the design team members.
In a talk at Stanford, Buxton (2007a) challenges his audience to draw his mobile phone. But he does not mean a drawing of the phone as a product. He means something much harder: a sketch that reveals the interaction, the experience of using the phone in a situated context, where the product and its physical affordances encourage one type of behavior and experience over another.
Sketches are not the same as prototypes
Sketches are not prototypes, at least not in the usual UX process sense (Buxton, 2007b). Sketches are not used to refine a design that has been chosen. Sketches
are for exploring the possibilities for creating a design. Sketching is designing, whereas prototyping in the usual sense is implementation to build a concrete design representation for testing.
In Figure 7-7, based on Buxton's Figure 52 (2007b), we show how sketches and prototypes are different in almost every way.
Sketches evoke thinking and ideas to arrive at a design. Prototypes illustrate an instance of a design. While sketches suggest possibilities, prototypes describe designs already decided upon. Sketches are to explore and raise questions. Prototypes are to refine and provide answers.
Figure 7-7
Comparison between Buxton design exploration sketches and traditional low-fidelity refinement prototypes.
The lifecycle iteration of sketching is a divergence of discovery, an expansion of ideas and possibilities. In contrast, the lifecycle iteration of the HCI engineering process is intended to be a convergence, a closing-up of ideas and possibilities. Sketches are deliberately tentative, noncommittal, and ambiguous. Prototypes, however detailed, are depictions of specific designs.
Sketching is embodied cognition to aid invention
Sketching is not intended to be a tool for documenting designs that are first created in one's head and then transferred to paper. In fact, the sketch itself
is far less important than the process of making it. The process of sketching is a kind of cognitive scaffolding, a rich and efficient way to off-load part of
the cognition, especially the mental visualization, to physical artifacts in the world.
A sketch is not just a way to represent your thinking; the act of making the
sketch is part of the thinking. Sketching is a direct part, not an after-the-fact part, of the process of invention. Designers invent while sketching. Sketching embraces one's whole being: the hands, the mind, and all the senses.
The kinesthetics of sketching, pointing, holding, and touching bring the entire hand-eye-brain coordination feedback loop to bear on the problem solving. Your physical motor movements are coupled with visual and cognitive activity; the designer's mind and body potentiate each other in invention.
In Figure 7-8 you can see an example of a sketch to think about design.
7.7.2 Doing Sketching
Stock up on sketching and mockup supplies
Stock the ideation studio with sketching supplies such as whiteboards, blackboards, corkboards, flip chart easels, Post-its™ of all sizes, tape, and marking pens. Be sure to include supplies for constructing physical mockups, including scissors, hobby knives, cardboard, foam core board, duct tape, Scotch™ tape, wooden blocks, push pins, thumb tacks, staples, string, bits of cloth, rubber, other flexible materials, crayons, and spray paint.
Use the language of sketching
To be effective at sketching for design, you must use a particular vocabulary that has not changed much over the centuries. One of the most important language features is the vocabulary of lines, which are made as freehand "open" gestures. Instead of being mechanically correct and perfectly straight, lines in sketches are roughed in and not connected precisely.
In this language, lines overlap, often extending a bit beyond the corner. Sometimes they "miss" intersecting and leave the corner open a little bit. Further, the resolution and detail of a sketch should be low enough to suggest that it is a concept in the making, not a finished design. It needs to look disposable and inexpensive to make. Sketches are deliberately ambiguous and abstract, leaving "holes" for the imagination.
They can be interpreted in different ways, fostering new relationships to be seen within them, even by the person who drew them. In other words, avoid the appearance of precision; if everything is specified and the
design looks finished, then the message is that you are telling something, "this is the design," not proposing exploration, "let us play with this and see what comes up." You can see this unfinished look in the sketches of Figures 7-9 and 7-10.
Here are some defining characteristics of sketching (Buxton, 2007b; Tohidi et al., 2006):
• Everyone can sketch; you do not have to be artistic
• Most ideas are conveyed more effectively with a sketch than with words
• Sketches are quick and inexpensive to create; they do not inhibit early exploration
• Sketches are disposable; there is no real investment in the sketch itself
• Sketches are timely; they can be made just-in-time, done in-the-moment, provided when needed
Figure 7-8
A sketch to think about design (photo courtesy of Akshay Sharma, Virginia Tech Department of Industrial Design).
Figure 7-9
Freehand gestural sketches for the Ticket Kiosk System (sketches courtesy of Akshay Sharma, Virginia Tech Department of Industrial Design).
• Sketches should be plentiful; entertain a large number of ideas and make multiple sketches of each idea
• Textual annotations play an essential support role, explaining what is going on in each part of the sketch and how
In Figure 7-11, we show examples of designers doing sketching.
Example: Sketching for a Laptop/Projector Project
The following figures show sample sketches for the K-YAN project
(K-yan means "vehicle for knowledge"), an exploratory collaboration by the Virginia Tech Industrial Design Department and IL&FS.4 The objective
is to develop a combination laptop and projector in a single portable device for use in rural India. Thanks to Akshay Sharma of the Virginia Tech Industrial Design Department for these sketches. See Figures 7-12 through 7-15 for different kinds of exploratory sketches for this project.
4http://kyan.weebly.com
Figure 7-10
Ideation and design exploration sketches for the Ticket Kiosk System (sketches courtesy of Akshay Sharma, Virginia Tech Department of Industrial Design).
Figure 7-11
Designers doing sketching (photos courtesy of Akshay Sharma, Virginia Tech Department of Industrial Design).
Figure 7-12
Early ideation sketches of K-YAN (sketches courtesy of Akshay Sharma, Department of Industrial Design).
7.7.3 Physical Mockups as Embodied Sketches
Just as sketches are two-dimensional visual vehicles for invention, a physical mockup for ideation about a physical device or product is a three-dimensional sketch. Physical mockups as sketches, like all sketches, are made quickly, are highly disposable, and are built from at-hand materials to create tangible props for exploring design visions and alternatives.
A physical mockup is an embodied sketch because it is an even more physical manifestation of a design idea and it is a tangible artifact for touching, holding, and acting out usage (see Figures 7-16 and 7-17).
Where appropriate in your ideation, you can do the same. Build many different mockups, each as creative and different as possible. Tell stories about the mockup during ideation and stretch it as far as you can.
For later in the process, after design exploration is done and you want a 3D design representation to show clients, customers, and implementers, there are services to produce finished-looking, high-fidelity physical mockups.
7.8 MORE ABOUT PHENOMENOLOGY
7.8.1 The Nature of Phenomenology
Joy of use is an obvious emotional counterpart to ease of use in interaction. But there is a component of emotional impact that goes much deeper. Think of the kind of personal engagement and personal attachment that leads to a product being invited to become an integral part of the user's lifestyle. More than functionality or fun, this is a kind of companionship. This longer-term situated kind of emotional impact entails a phenomenological view of interaction (Russell, Streitz, & Winograd, 2005, p. 9).
Figure 7-13
Mid-fidelity exploration sketches of K-YAN (sketches courtesy of Akshay Sharma, Virginia Tech Department of Industrial Design).
Figure 7-14
Sketches to explore flip-open mechanism of K-YAN (sketches courtesy of Akshay Sharma, Virginia Tech Department of Industrial Design).
Figure 7-15
Sketches to explore emotional impact of form for K-YAN (sketches courtesy of Akshay Sharma, Virginia Tech Department of Industrial Design).
Figure 7-16
Examples of rough physical mockups (models courtesy of Akshay Sharma, Virginia Tech Department of Industrial Design).
Figure 7-17
Example of a more finished looking physical mockup (model courtesy of Akshay Sharma, Virginia Tech Department of Industrial Design).
Emerging from humanistic studies, phenomenology5 is the philosophical examination of the foundations of experience and action. It is about phenomena, things that happen and can be observed. But it is not about logical deduction or conscious reflection on observations of phenomena; it is about individual interpretation and intuitive understanding of human experience.
Phenomenology is part of the "modern school of philosophy founded by Edmund Husserl. Its influence extended throughout Europe and was particularly important to the early development of existentialism. Husserl
attempted to develop a universal philosophic method, devoid of presuppositions, by focusing purely on phenomena and describing them; anything that could not be seen, and thus was not immediately given to the consciousness, was excluded."6
"The phenomenological method is thus neither the deductive method of logic nor the empirical method of the natural sciences; instead it consists in realizing the presence of an object and elucidating its meaning through intuition. Husserl considered the object of the phenomenological method to be the immediate seizure, in an act of vision, of the ideal intelligible content of the phenomenon" (Husserl, 1962). His key and defining work from the early 20th century is now reprinted in an English translation.
However, it was Martin Heidegger who translated it into "the most thorough, penetrating, and radical analysis of everyday experience" (Winograd & Flores, 1986, p. 9). Heidegger, quoted often in human-computer interaction contexts, was actually a student of Professor Husserl and, although they had collaborated closely, they had a falling out during the 1930s over the social politics that led to World War II.7 "Writers like Heidegger challenge the dominant view of mind, declaring that cognition is not based on the systematic manipulation of representations" (Winograd & Flores, 1986, p. 10). This view is in opposition to the human-as-information-processor paradigm discussed earlier in this chapter.
5Dictionary.com says phenomenology is: 1. the movement founded by Husserl that concentrates on the detailed description of conscious experience, without recourse to explanation, metaphysical assumptions, and traditional philosophical questions; 2. the science of phenomena as opposed to the science of being.
6http://www.reference.com/browse/Phenomenology
7http://en.wikipedia.org/wiki/Edmund_Husserl
Because phenomenology is about observables, it enjoys a relationship with hermeneutics, the theory of interpretation (Winograd & Flores, 1986, p. 27), to fill the need to explain what is observed. Historically, hermeneutics was about interpretation of artistic and literary works, especially mythical and sacred texts and about how human understanding of those texts has changed over time. However, "one of the fundamental insights of phenomenology is that this activity of interpretation is not limited to such situations, but pervades our everyday life" (Winograd & Flores, 1986, p. 27).
7.8.2 The Phenomenological View in Human-Technology Interaction
When translated to human-computer interaction, phenomenological aspects of interaction represent a form of emotional impact, an affective state arising within the user. It is about emotional phenomena within the interaction experience and the broadest interpretation of the usage context. It is about a social role for a product in long-term relationships with human users. It is about a role within human life activities. In that regard, it is related to activity theory (Winograd & Flores, 1986) because activity theory also emphasizes that the context of use is central to understanding, explaining, and designing technology (Bødker, 1991).
7.8.3 The Phenomenological Concept of Presence
The phenomenological paradigm is central to Harrison, Tatar, and Sengers (2007), who make it clear that HCI is no longer just about usability and user performance, but that it is about presence of technology as part of our lives: "We argue that the coming ubiquity of computational artifacts drives a shift from efficient use to meaningful presence of information technology." This is all about moving from the desktop to ubiquitous, embedded, embodied, and situated interaction.
Hallnäs and Redström (2002) also describe the "new usability" as a shift from use to "presence." To them, a key characteristic of phenomenological concepts is that the product or system that is the target of design or evaluation is present in the user's life, not just being used for something. That certainly rules out almost all desktop software, for example, but calls to mind favorite portable devices, such as the iPhone and iPod, that have become a part of our daily lives.
Use or functional descriptions are about what you do with the product.
Presence is about what it means to you. A description of presence is an existential description, meaning that the user has given the product a place to exist in the
user's life; it is about being known within the user's human experience rather than a theoretical or analytical description.
So, presence is about a relationship we have with a device or product. It is no longer just a device for doing a task, but we feel emotional ties. In Chapter 8, the Garmin handheld GPS is described as a haven of comfort, coziness, familiarity, and companionship, like a familiar old pair of boots or your favorite fleece. The device has been invited into the user's emotional life, and that is presence.
As Hallnäs and Redström put it, "... 'presence' refers to existential definitions of a thing based on how we invite and accept it as part of our lifeworld." Winograd and Flores (1986, p. 31) allude to the same relationship, as expressed by Heidegger, "He [Heidegger] argues that the separation of subject and object denies the more fundamental unity of being-in-the-world." Here subject means the person having the user experience, and the object is everything they perceive and experience. You cannot separate the user, the context, and the experience.
Presence, or the potential for presence, cannot necessarily be detected directly in design or evaluation. Acceptance is usually accompanied by a "disappearance" (Weiser, 1991) of the object as a technological artifact. Hallnäs and Redström use, as a simple but effective example, a chair. If your description of the chair simply refers to the fact that you sit in it without reference to why or what you do while sitting in it, you have removed the user and the usage context; it is more or less just a functional description. However, if the user describes this chair as the place where she seeks comfort each evening in front of the fire after a long day's work, then the chair has an emotional presence in that user's life.
7.8.4 The Importance of Phenomenological Context over Time
From the discussion so far, it should be abundantly clear that the kind of emotional context found in the phenomenological paradigm is a context
that must unfold over time. Usage develops over time and takes on its own life,
often apart from what designers could envision. Users learn, adapt, and change
during usage, creating a dynamic force that gives shape to subsequent usage (Weiser, 1991).
Short-term studies will not see this important aspect of usage and interaction.
So, while users can experience snapshot episodes of good or bad usability, good or bad usefulness, and even good or bad emotional impact, the
phenomenological aspects of emotional impact are about a deeper and longer-
term concept. It is not just about a point in time within usage, but it speaks to a whole style and presence of the product over time. The realization of this fact is essential in both design and evaluation for emotional impact within the phenomenological context.
Mental Models and Conceptual Design
8.1 INTRODUCTION
8.1.1 You Are Here
We begin each process chapter with a "you are here" picture of the chapter topic in the context of the overall Wheel lifecycle template; see Figure 8-1. This chapter is a continuation of design, which we started in Chapter 7 and will conclude in Chapter 9, for designing the new work practice and the new system.
8.2 MENTAL MODELS
8.2.1 What Is a Mental Model?
According to Wikipedia.org, "a mental model is an explanation of someone's thought process about how something works in the real world." A designer's mental model is a vision of how a system works as held by the designer. A user's mental model is a description of how the system works, as held by the user. It is the job of conceptual design (coming up soon) to connect the two.
Figure 8-1
You are here; the second of three chapters on creating an interaction design in the context of the overall Wheel lifecycle template.
8.2.2 Designer's Mental Model
Sometimes called a conceptual model (Johnson & Henderson, 2002, p. 26), the designer's mental model is the designer's conceptualization of the envisioned system: what the system is, how it is organized, what it does, and how it works. If anyone should know these things, it is the designer who is creating the system. But it is not uncommon for designers to "design" a system without first forming and articulating a mental model.
The results can be a poorly focused design, not thought through from the start. Often such designs proceed in fits and starts and must be retraced and restarted when missing concepts are discovered along the way. The result of such a fuzzy start can be a fuzzy design that causes users to experience vagueness and misconceptions. It is difficult for users to establish a mental model of how the
system works if the designer has never done the same.
As shown in Figure 8-2, the designer's mental model is created from what is learned in contextual inquiry and analysis and is transformed into design by ideation and sketching.
Johnson and Henderson (2002, p. 26) include metaphors, analogies, ontological structure, and mappings between those concepts and the task domain or work practice the design is intended to support. The closer the designer's mental model orientation is to the user's work domain and work
practice, the more likely users will internalize the model as their own. To paraphrase Johnson and Henderson's rule for relating the designer's mental model to the final design: if it is not in the designer's mental model, the system should not require users to be aware of it.
Designer's mental model in the ecological perspective: Describing what the system is, what it does, and how it works within its ecology
Mental models of a system can be expressed in any of the design perspectives of Chapter 7. In the ecological perspective, a designer's mental model is about how
the system or product fits within its work context, in the flow of activities
involving it and other parts of the broader system. In Norman's famous book, The Design of Everyday Things, he describes the use of thermostats (Norman, 1990, pp. 38-39) and how they work. Let us expand the explanation of thermostats to a description of what the system is and what it does from the perspective of its ecological setting.
Figure 8-2
Mapping the designer's mental model to the user's mental model.
First, we describe what it is by saying that a thermostat is part of a larger system, a heating (and/or cooling) system consisting of three major parts: a heat source, a heat distribution network, and a control unit, the latter being the thermostat and some other hidden circuitry. The heat source could be gas, electric, or wood burning, for example. The heat distribution network might use fans or blowers to send heated or cooled air through ducts, or a pump to send heated or cooled water through subfloor pipes.
Next, we address what it does by noting that a thermostat is for controlling the temperature in a room or other space. It controls heating and cooling so that the temperature stays near a user-settable value, neither too hot nor too cold, keeping people at a comfortable temperature.
Designer's mental model in the interaction perspective: Describing how users operate it
In the interaction perspective, a designer's mental model is a different view of an
explanation of how things work; it is about how a user operates the system or
product. It is a task-oriented view, including user intentions and sensory, cognitive, and physical user actions, as well as device behavior in response to these user actions.
In the thermostat example, a user can see two numerical temperature
displays, either analog or digital. One value is for the current ambient temperature and the other is the setting for the target temperature. There will be a rotatable knob, slider, or other value-setting mechanism to set the desired target temperature. This covers the sensory and physical user actions for operating a thermostat. User cognition and proper formation of intentions with respect to user actions during thermostat operation, however, depend on understanding the usually hidden explanation of the behavior of a thermostat in response to the user's settings.
Most thermostats, as Norman explains (1990, pp. 38-39), are binary switches that are simply either on or off. When the sensed ambient temperature is below the target value, the thermostat turns the heat on. When the temperature then climbs to the target value, the thermostat turns the heat source off. It is, therefore, a false conceptualization, or false mental model, to believe that you can make a room warm up faster by turning the thermostat up higher.
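Norman's binary-switch model can be made concrete with a few lines of code. The sketch below is purely illustrative (the function name and temperature values are our own invention, not any real thermostat's firmware); it shows why the false mental model fails: raising the target does not increase the heating rate, it only moves the shutoff point.

```python
def heat_should_be_on(ambient_temp: float, target_temp: float) -> bool:
    """Norman's binary-switch model: the heater is either fully on or
    fully off. The target setting changes only the switching point,
    never the rate of heating."""
    return ambient_temp < target_temp

# In a cold room the heater runs at the same (full) output whether the
# target is 21 or cranked up to 30; only the shutoff point differs.
assert heat_should_be_on(18.0, 21.0) == heat_should_be_on(18.0, 30.0)
assert heat_should_be_on(22.0, 21.0) is False  # at/above target: heat off
```

Because the output is all-or-nothing, a room at 18 degrees warms at exactly the same rate under either setting, which is the behavioral fact the user's mental model so often gets wrong.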
The operator's manual for a particular furnace unit would probably say something to the effect that you turn it up and down to make it warmer or cooler, but would probably fall short of the full explanation of how a thermostat works. But the user is in the best position to form effective usage strategies,
connecting user actions with expected outcomes, if in possession of this knowledge of thermostat behavior.
There are at least two possible design approaches to thermostats, then. The first is the common design containing a display of the current temperature plus a knob to set the target temperature. A second design, which reveals the designer's mental model, might have a display unit that provides feedback messages such as "checking ambient temperature," "temperature lower than target; turning heat on," and "temperature at desired level; shutting off." This latter design might suffer from being more complex to produce and the added display might be a distraction to experienced users. However, this design approach does help project the designer's mental model through the system design to the user.
Designer's mental model in the emotional perspective: Describing intended emotional impact
In the emotional perspective, the mental model of a design is about the expected overarching emotional response. Regarding the thermostat example, it is difficult to get excited about the emotional aspects of thermostats, but perhaps the visual design, the physical design, how it fits in with the house décor, or the craftsmanship of its construction might offer a slight amount of passing pleasure.
8.2.3 User's Mental Model
A user's mental model is a conceptualization or internal explanation each user
has built about how a particular system works. As Norman says (1990), it is a natural human response to an unfamiliar situation to begin building an explanatory model a piece at a time. We look for cause-and-effect relationships and form theories to explain what we observe and why, which then helps guide our behavior and actions in task performance.
As shown in Figure 8-2, each user's mental model is a product of many different inputs including, as Norman has often said, knowledge in the head and knowledge in the world. Knowledge in the head comes from mental models of other systems, user expertise, and previous experience. Knowledge in the world comes from other users, work context, shared cultural conventions, documentation, and the conceptual design of the system itself. This latter source of user knowledge is the responsibility of the system designer.
Few, if any, thermostat designs themselves carry any knowledge in the world, such as a cognitive affordance that conveys anything like Norman's explanation of a thermostat as a binary switch. As a result, thermostat users depend on
knowledge in the head, mostly from previous experience and shared conventions. Once you have used a thermostat and understand how it works, you pretty much understand all thermostats.
But sometimes mental models adapted from previous encounters with similar systems can work against learning to use a new system with a different conceptual design. Norman's binary switch explanation is accurate for almost every thermostat on the planet, but not for one in the heater of a mid-1960s Cadillac. In a fascinating departure from the norm, you could, in fact, speed up the heating system in this car, both the amount of heat and the fan speed, by setting the thermostat to a temperature higher than what you wanted in steady state.
Since cars were beginning to have more sophisticated (in this case, read more failure prone) electronics, why not put them to use? And they did. The output heat and fan speed were proportional to the difference between the ambient temperature and the thermostat setting. So, on a cold day, the heater would run wide open to produce as much heat as possible, but it would taper off its output as it approached the desired setting.
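The Cadillac heater's proportional behavior contrasts neatly with the binary switch. Here is a minimal sketch assuming a simple linear controller; the gain and output scale are invented for illustration and do not describe the actual 1960s circuitry.

```python
def heater_output(ambient_temp: float, setting: float,
                  gain: float = 0.02, max_output: float = 1.0) -> float:
    """Proportional control in the spirit of the mid-1960s Cadillac
    heater: heat and fan output scale with the gap between the ambient
    temperature and the setting, tapering off as the cabin warms up."""
    error = setting - ambient_temp
    return max(0.0, min(max_output, gain * error))

# Unlike a binary thermostat, setting the target higher really does
# produce more heat while the cabin is cold ...
assert heater_output(5.0, 30.0) > heater_output(5.0, 20.0)
# ... and output tapers off as ambient temperature approaches the setting.
assert heater_output(18.0, 20.0) < heater_output(5.0, 20.0)
```

Under this design, the "turn it up higher to warm up faster" strategy, a false mental model for ordinary thermostats, actually becomes correct, which is exactly why a user's model carried over from one system can mislead on another.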
Lack of a correct user mental model can be the stuff of comedy curve balls, too. An example is the scene in the 1992 movie, My Cousin Vinny, where Marisa Tomei, as Vinny's fiancée Mona Lisa Vito, tries to make a simple phone call. This fish-out-of-water scene pits a brash young woman from New York against a rotary dial telephone. You cannot help but reflect on the mismatch in the mapping between her mental model of touch-tone operation and the reality of old-fashioned rotary dials as she pokes vigorously at the numbers through the finger holes.
But, lest you dismiss her as a ditzy blonde, we remind you that it was she who solved the case with her esoteric knowledge in the head, proving that the boys' 1964 Buick Skylark could not have left the two tire tracks found outside the convenience store because it did not have a limited-slip differential.
8.2.4 Mapping and the Role of Conceptual Design
The mapping in Figure 8-2 is an abstract and objective ideal transformation of the designer's mental model into the user's mental model (Norman, 1990,
p. 23). As such, the mapping is a yardstick against which to measure how closely the user's mental model matches the reality of the designer's mental model.
The conceptual design as it is manifest in the system is an implementation of this mapping and can be flawed or incomplete. A flawed conceptual design leads to a mismatch in the user's mental model. In reality, each user is likely to have a different mental model of the same system, and mental models can be incomplete and even incorrect in places.
8.3 CONCEPTUAL DESIGN
8.3.1 What Is a Conceptual Design?
A conceptual design is the part of an interaction design containing a theme, notion, or idea with the purpose of communicating a design vision about a system or product. A conceptual design is the manifestation of the designer's mental model within the system, as indicated in Figure 8-2. It is the part of the system design that brings the designer's mental model to life within the system. A conceptual design corresponds to what Norman calls the "system image" of the designer's mental model (Norman, 1990, pp. 16, 189-190), about which he makes the important point: this is the only way the designer and user can communicate.
Conceptual design is where you innovate and brainstorm to plant and first nurture the user experience seed. You can never iterate the design later to yield a good user experience if you do not get the conceptual part right up front.
Conceptual design is where you establish the metaphor or the theme of the
product: in a word, the concept.
8.3.2 Start with a Conceptual Design
Now that you have done your contextual inquiry and analysis, requirements, and modeling, as well as your ideation and sketching, how do you get started on design? Many designers start sketching out pretty screens, menu structures, and clever widgets.
But Johnson and Henderson (2002) will tell you to start with conceptual
design before sketching any screen or user interface objects. As they put it, screen sketches are designs of "how the system presents itself to users. It is better to start by designing what the system is to them." Screen designs and widgets will come, but time and effort spent on interaction details can be wasted without a
well-defined underlying conceptual structure. Norman (2008) puts it this way: "What people want is usable devices, which translates into understandable ones" (final emphasis ours).
To get started on conceptual design, gather the same team that did the ideation and sketching and synthesize all your ideation and sketching results
into a high-level conceptualization of what the system or product is, how it fits
within its ecology, and how it operates with users.
For most systems or products, especially domain-complex systems, the best way to start conceptual design is in the ecological perspective because that captures the system in its context. For product concepts where the emotional
impact is paramount, starting with that perspective is obvious. At other times the "invention" of an interaction technique like that of the iPod Classic scroll wheel might be the starting point for a solution looking for a problem and is best visualized in the interaction perspective.
8.3.3 Leverage Metaphors in Conceptual Design
One way to start formulating a conceptual design is by way of metaphors: analogies that communicate and explain the unfamiliar using familiar conventional knowledge. This familiarity becomes the foundation underlying and pervading the rest of the interaction design.
What users already know about an existing system or existing phenomena can be adapted in learning how to use a new system (Carroll & Thomas, 1982). Use metaphors to control complexity of an interaction design, making it easier to learn and easier to use instead of trying to reduce the overall complexity (Carroll, Mack, & Kellogg, 1988).
One of the simplest and oldest examples is the use of a typewriter metaphor in a word processing system. New users who are familiar with concepts such as margin setting and tab setting in the typewriter domain will already know much of what they need to know to use these features in the word processing domain.
Metaphors in the ecological perspective
Find a metaphor that can be used to describe the broader system structure. An example of a metaphor from the ecological perspective could be the description of iTunes as a mother ship for iPods, iPhones, and iPads. The intention is that all operations for adding, removing, or organizing media content, such as applications, music, or videos, are ultimately managed in iTunes and the results are synced to all devices through an umbilical connection.
Metaphors in the interaction perspective
An example of a metaphor in the interaction perspective is a calendar application in which user actions look and behave like writing on a real calendar. A more modern example is the metaphor of reading a book on an iPad. As the user moves a finger across the display to push the page aside, the display takes on the appearance of a real paper page turning. Most users find it comfortingly familiar.
Another great example of a metaphor in the interaction perspective can be found in the Time Machine feature of the Macintosh operating system. It is a backup feature in which the user can take a "time machine" back to older backups, flying through time as guided by the user interface, to retrieve lost or accidentally deleted files.
One other example is the now-pervasive desktop metaphor. When graphical user interfaces in personal computers became economically feasible, the designers at Xerox PARC were faced with an interesting interaction design challenge: how do you communicate how the interaction design works to users, most of whom would be seeing this kind of computer for the first time?
In response, they created the powerful "desktop" metaphor. The design leveraged the familiarity people had with how a desktop works: it has files, folders, a space where current work documents are placed, and a "trash can" where documents can be discarded (and later recovered, until the trash can itself is emptied). This analogy of a simple everyday desk was brilliant in its simplicity and made it possible to communicate the complexity of a brand new technology.
As critical components of a conceptual design, metaphors set the theme of how the design works, establishing an agreement between the designer's vision and the user's expectations. But metaphors, like any analogy, can break down when the existing knowledge and the new design do not match.
When a metaphor breaks down, it is a violation of this agreement. The famous criticism of the Macintosh platform's design of ejecting an external disk by dragging its icon onto the trash can is a well-known illustration of how a metaphor breakdown attracts attention. If Apple's designers were faithful to the desktop metaphor, the system should probably discard an external disk, or at least delete its contents, when it is dragged and dropped onto the trash can, instead of ejecting it.
Metaphors in the emotional perspective
An example of a metaphor from the emotional perspective is seen in an advertisement in Backpacker magazine for the Garmin handheld GPS as a hiking companion. In a play on words that ties the human value of self-identity with orienteering, Garmin uses the metaphor of companionship: "Find yourself, then get back." It highlights emotional qualities such as comfort, cozy familiarity, and companionship: "Like an old pair of boots and your favorite fleece, GPSMAP 62ST is the ideal hiking companion."
8.3.4 Conceptual Design from the Design Perspectives
Just as any other kind of design can be viewed from the three design perspectives of Chapter 7, so can conceptual design.
Conceptual design in the ecological perspective
The purpose of conceptual design from the ecological perspective is to communicate a design vision of how the system works as a black box within its environment. The ecological conceptual design perspective places your system or product in the role of interacting with other subsystems within a larger infrastructure.
As an example, Norman (2009) cites the Amazon Kindle™, a good example of a product designed to operate within an infrastructure. The product is for reading books, magazines, or any textual material. You do not need a computer to download content or to use the device; it can live as its own independent ecology.
Browsing, buying, and downloading books and more is a pleasurable flow of activity. The Kindle is mobile, self-sufficient, and works synergistically with an existing Amazon account to keep track of the books you have bought through Amazon.com. It connects to its ecology through the Internet for downloading and sharing books and other documents. Each Kindle has its own email address so that you and others can send lots of materials in lots of formats to it for later reading.
As discussed previously, the way that iPods and iTunes work together is another example of conceptual design in the ecological perspective. Norman calls this designing an infrastructure rather than designing just an application. Within this ecosystem, iTunes manages all your data. iTunes is the overall organizer through which you buy and download all content. It is also where you create all your playlists, categories, photo albums, and so on. Furthermore, it is in iTunes that you decide what parts of your data you want on your "peripherals," such as an iPod, iPad, or iPhone. When you connect your iDevice to the computer and synchronize it, iTunes will bring it up to date, including an installation of the latest version of the software as needed.
Many tasks are now possible on small devices: for example, one can do photo and video editing on an iPhone. The "cloud" ties all of these together, providing access to computing and information anytime, anywhere.
In this new environment, the biggest challenge for usability engineers is that all of these devices are used together to accomplish users' information needs and goals. Whereas before we had tools dedicated to particular tasks (e.g., email programs), now we have a set of devices, each with a set of tools to support the same tasks. The usability of these tasks must be evaluated as a collection of devices working together, not as the sum of the usability of individual tools. Some tasks, on the surface, can be done on any of our many devices. Take email, for example. You can read, reply, forward, and delete emails on your phone, tablet device, laptop, desktop, game console, or even TV or entertainment center. However, managing email sometimes entails more than that. Once you get to filing and refinding previous email messages, the task gets very complicated on some of these devices. And opening some attachments might not be possible on other devices. Also, even though you have the connectivity to talk to anyone in the world, you do not quite have enough connectivity to print an email remotely at home or at the office. The result is that not all devices support all the tasks required to accomplish our work, but the collection of devices together does, while allowing mobility and 24/7 access to information.
The challenge lies in how to evaluate a system of coordinated device usage that spans multiple manufacturers, multiple communication capabilities, and multiple types of activities. The experience of using (and configuring and managing) multiple devices together is very different from using only one device. As a matter of fact, the usability of just one device is barely a minimum requirement for it to work within the rest of the devices used in our day-to-day information management. Furthermore, the plethora of devices creates a combinatorial explosion of device choices that makes assessing the usability of the devices together practically impossible.
Part of the problem is that we lack a way to understand and study this collection of devices. To address this need, we have proposed a framework, called a personal information ecosystem (PIE) (Pérez-Quiñones et al., 2008), that at least helps us characterize the different ecologies that emerge for information management. The idea of ecosystems in information technology is not new, but our approach is most similar to Spinuzzi's (2001) ecologies of genre. Spinuzzi argues that usability is not an attribute of a single product or artifact, but instead is best studied across the entire ecosystem used in an activity. His approach borrows ideas from distributed cognition
and activity theory.
At the heart of the ecology of devices is an information flow that is at its optimum point (i.e., equilibrium) when the user exerts no extra effort to accomplish his or her tasks. At equilibrium, the user rarely needs to think of the devices, the data formats, or the commands to move information to and from devices. This equilibrium, however, is easily disrupted by many situations: introduction of a new device, disruption in service (Wi-Fi out of range), changes in infrastructure, incompatibility between programs, and so on. It is often quite a challenge to have all of your devices working together to reach this equilibrium. The usability of the ecosystem depends more on the equilibrium and ease of information flow than on the individual usability of each device.
However, having a terminology and understanding the relationships between devices are only the beginning.
I would claim that designing and assessing user experience within an ecology of devices is what Rittel (1972) calls a "wicked problem." A wicked problem, according to Rittel, is a problem that by its complexity and nature
cannot have a definitive formulation. He even states that a formulation of the problem itself corresponds to a particular solution of the problem. Often, wicked problems have no definitive solution; instead, we judge a solution as good or bad. We often cannot even test a solution to a wicked problem; we can only indicate the degree to which a given solution is good. Finally, in wicked problems, according to Rittel, there are many explanations for the same discrepancy and there is no way to test which of these explanations is the best one. In general, every wicked problem can be considered a symptom of another problem.
Why is designing and assessing the usability of an ecology a wicked problem? First, different devices are often designed by different companies. We do not really know which particular combination of devices a given user will own. Evaluating all combinations is prohibitively expensive, and expecting one company to provide all the devices is not ideal either, as monopolies tend to stifle innovation. As a result, the user is stuck in an environment that can at best provide a local optimum: "if you use this device with this other device, then your email will work OK."
Second, while some problems are addressed easily by careful design of system architecture, eventually new uses emerge that were not anticipated by the designers. For example, if a user uses IMAP as the server protocol for email, then all devices are "current" with each other because the information about the email is stored in a central location. But even this careful design of network protocols and systems architecture cannot account for all the uses that evolve over time. Email address autocompletion and the signature that appears at the bottom of your email are both attributes of the clients and are not in the IMAP protocol. Thus, a solution based on standards can only support agreed-upon common tasks from the past; it does not support emergent behavior.
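This division of state can be illustrated with a toy model (hypothetical classes, not real IMAP protocol code): flags such as read/unread live in a central store that every client sees identically, while signatures and autocompletion history are client-local and never pass through the server.

```python
class ImapLikeServer:
    """Toy central store: message flags live here, shared by all clients."""
    def __init__(self):
        self.flags = {}  # message id -> set of flags

    def store(self, msg_id, flag):
        self.flags.setdefault(msg_id, set()).add(flag)


class Client:
    """Toy client: signature and autocomplete history are local only."""
    def __init__(self, server, signature):
        self.server = server
        self.signature = signature  # client attribute, not part of "IMAP"
        self.autocomplete = set()   # likewise client-local

    def read(self, msg_id):
        self.server.store(msg_id, "Seen")  # read state syncs via the server


server = ImapLikeServer()
phone = Client(server, signature="Sent from my phone")
laptop = Client(server, signature="Dr. X, Dept. of CS")

phone.read("msg-1")
# The laptop sees the same read state through the shared server...
print("Seen" in server.flags["msg-1"])      # True
# ...but the signatures diverge, because they never reach the server.
print(phone.signature == laptop.signature)  # False
```

The point of the sketch is the asymmetry: standardized server-side state stays consistent across devices, while everything the standard never anticipated remains stranded on individual clients.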
Third, the adoption of a new device into the ecology often breaks other parts that were already working effectively. As a result, whatever effort has gone into solving a workflow problem is lost when a different combination of devices is present. For example, I use an Apple MacBook Pro as my main computer, an iPad for most of my home use, and an Android phone for my communication needs. At times, finding a good workflow for these three devices is a challenge. I have settled on using Gmail and Google Calendar because there is excellent support for them on all three devices. But other genres are not as well supported. Task management, for example, is one for which I currently do not have a good solution that works on my phone, the most recent addition to my PIE.
New devices upset the equilibrium of the ecosystem; the problem that I am addressing (task management) is a symptom of another problem I introduced.
Fourth, the impact of changes in an ecosystem is highly personalized. I know users whose email management and information practices improved when they obtained a smartphone. For them, most of their email traffic was short and for the purpose of coordinating meetings or upcoming events. Introduction of a smartphone allowed them to be more effective in their email communication. For me, the impact was the opposite. Like most knowledge workers, I do a lot of work over email with discussions and document exchanges. The result is that I tag and file my email messages extensively. But because my phone and tablet device provide poor support for filing messages, I now leave more messages in my inbox to be processed when I am back on my laptop. Before I added my smartphone to my ecosystem, my inbox regularly contained 20 messages. Now my inbox holds pending tasks from when I was mobile, and regularly contains 50 to 60 messages. Returning to my laptop now requires that I "catch up" on work that I did while mobile. The impact of adding a smartphone has been negative for me in some respects, whereas for other users it has had a positive effect.
Finally, a suitable solution to a personal ecosystem is one that depends on the user doing some work as a designer of his or her own information flow. Users have to be able to observe their use, identify their own inefficiencies, propose solutions, and design workflows that implement those solutions. Needless to say, not every user has the skills
to be a designer or even to self-assess where their information flow is disrupted. Spinuzzi (2001) discusses this point using Bødker's (1991) concept of breakdowns. Paraphrasing Spinuzzi, breakdowns are points at which a person realizes that his or her information flow is not working as expected and thus must devote attention to the tools/ecosystem instead of the work. Typically this is what a usability engineer would consider a usability problem, but in the context of a PIE, the problem is so deeply embedded in the particular combination of devices, user tasks, and user information flows that it is practically impossible for a usability engineer to identify the breakdown. We are left with the user as a designer as the only option for improving the usability of a PIE.
As usability engineers, we face a big challenge in how to study, design, and evaluate the user experience of the personal information ecosystems that have emerged in today's ubiquitous environments.
References
Bødker, S. (1991). Through the Interface: A Human Activity Approach to User Interface Design. Hillsdale, NJ: Erlbaum.
Pérez-Quiñones, M. A., Tungare, M., Pyla, P. S., & Harrison, S. (2008). Personal Information Ecosystems: Design Concerns for Net-Enabled Devices. In Proceedings of Latin American-WEB'2008 Conference (pp. 3-11). October 28-30, Vila Velha, Espírito Santo, Brazil.
Rittel, H. (1972). On the Planning Crisis: Systems Analysis of the 'First and Second Generations'. Bedriftsøkonomen, 8, 390-396.
Spinuzzi, C. (2001). Grappling with Distributed Usability: A Cultural-Historical Examination of Documentation Genres over Four Decades. Journal of Technical Writing and Communication, 31(1), 41-59.
Weiser, M. (1991). The Computer for the 21st Century. Scientific American, September, 94-100.
Conceptual design in the interaction perspective
The conceptual design from the interaction perspective is used to communicate a design vision of how the user operates the system. A good example of conceptual design from an interaction perspective is the Mac Time Machine backup feature discussed previously. Once that metaphor is established, the interaction design can be fleshed out to leverage it.
The designers of this feature use smooth animation through space to represent traveling through the different points in time where the user made backups. When the user selects a backup copy from a particular time in the past, the system lets the user browse through the files from that date. Any files from that backup can be selected and they "travel through time" to the present, thereby recovering the lost files.
As an example of designers leveraging the familiarity of conceptual designs from known applications to new ones, consider a well-known application such as Microsoft Outlook. People are familiar with the navigation bar on
the left-hand side, list view at the top right-hand side, and a preview of the
selected item below the list. When designers use that same idea in the conceptual design of a new application, the familiarity carries over.
Conceptual design in the emotional perspective
Conceptual design from the emotional perspective is used to communicate a vision of how the design elements will evoke emotional impact in users. Returning to the car example, the design concept could be about jaw-dropping performance and how your heart skips a beat when you see its aerodynamic form, or it could be about fun and being independent from the crowd. Ask any MINI driver what their MINI means to them.
In Figure 8-3 we summarize conceptual design in the three perspectives.
Figure 8-3
Designer workflow and connections among the three conceptual design perspectives.
Example: Conceptual Design for the Ticket Kiosk System
There is a strong commonly held perception of a ticket kiosk that includes a box on a pedestal and a touchscreen with colorful displays showing choices of events. If you give an assignment to a team of students, even most HCI students, to come up with a conceptual design of a ticket kiosk in 30 minutes, 9 times out of 10 you will get something like this.
But if you teach them to approach it with design thinking and ideation, they can come up with amazingly creative and varied results.
In our ideation about the Ticket Kiosk System, someone mentioned making it an immersive experience. That triggered more ideas and sketches on how to make it immersive, until we came up with a three-panel overall design. In Figure 8-4 we show this part of a conceptual design for the Ticket Kiosk System showing immersion in the emotional perspective.
Here is a brief description of the concept, in outline form.
• The center screen is the interaction area, where immersion and ticket-buying action occur.
• The left-hand screen contains available options or possible next steps; for example, this screen might provide a listing of all required steps to complete a transaction, letting the user access these steps out of sequence.
• The right-hand screen contains contextual support, such as interaction history and related actions; for example, this screen might provide a summary of the current transaction so far and related information such as reviews and ratings.
• The way that the three panels lay out context, as a memory support and for consistent use, is a kind of human-as-information-processor concept.
• Using the sequence of panels to represent the task flow is a kind of engineering concept.
• Each next-step selection from the left-hand panel puts the user in a new kind of immersion in the center screen, and the previous immersion situation becomes part of the interaction history on the right-hand panel.
Figure 8-4
Part of a conceptual design showing immersion in the emotional perspective (sketch courtesy of Akshay Sharma, Virginia
Tech Department of Industrial Design).
• Addressing privacy and enhancing the impression of immersion: when the ticket buyer steps in, rounded shields made of classy materials gently wrap around. An "Occupied" sign glows on the outside. The insides of the two rounded half-shells of the shield become the left-hand-side and right-hand-side interaction panels.
In Figure 8-5 we show ideas from an early conceptual design for the Ticket Kiosk System from the ecological perspective.
In Figure 8-6 we show ideas from an ecological conceptual design for the Ticket Kiosk System focusing on a feature for a smart ticket to guide users to seating.
In Figure 8-7 we show ecological conceptual design ideas for the Ticket Kiosk System focusing on a feature showing communication connection with a smartphone. You can have a virtual ticket sent from a kiosk to your mobile device and use that to enter the event.
In Figure 8-8 we show ecological conceptual design ideas for the Ticket Kiosk System focusing on the features for communicating and social networking.
Figure 8-5
Early conceptual design ideas from the ecological perspective (sketch courtesy of Akshay Sharma, Virginia Tech Department of Industrial Design).
Figure 8-6
Ecological conceptual design ideas focusing on a feature for a smart ticket to guide users to seating (sketch courtesy of Akshay Sharma, Virginia Tech Department of Industrial Design).
Figure 8-7
Ecological conceptual design ideas focusing on a feature showing communication connection with a smartphone (sketch courtesy of Akshay Sharma, Virginia Tech Department of Industrial Design).
Figure 8-8
Ecological conceptual design ideas focusing on the features for
communicating and social networking (sketch courtesy of Akshay Sharma, Virginia Tech Department of Industrial Design).
In Figure 8-9 we show part of a conceptual design for the Ticket Kiosk System in the interaction perspective.
8.4 STORYBOARDS
8.4.1 What Are Storyboards?
A storyboard is a sequence of visual "frames" illustrating the interplay between a
user and an envisioned system. Storyboards bring the design to life in graphical "clips," freeze-frame sketches of stories of how people will work with the system. This narrative description can come in many forms and at different levels.
Storyboards for representing interaction sequence designs are like visual scenario sketches, envisioned interaction design solutions. A storyboard might be thought of as a "comic-book" style illustration of a scenario, with actors, screens, interaction, and dialogue showing sequences of flow from frame
to frame.
8.4.2 Making Storyboards to Cover All Design Perspectives
From your ideation and sketches, select the most promising ideas for each of the
three perspectives. Create illustrated sequences that show each of these ideas in
a narrative style.
Include things like these in your storyboards:
• Hand-sketched pictures annotated with a few words
• All the work practice that is part of the task, not just interaction with the system; for example, include telephone conversations with agents or roles outside the system
• Sketches of devices and screens
• Any connections with system internals, for example, flow to and from a database
• Physical user actions
Figure 8-9
Part of a conceptual design in the interaction perspective (sketch courtesy of Akshay Sharma, Virginia Tech Department of Industrial Design).
• Cognitive user actions in "thought balloons"
• Extra-system activities, such as talking with a friend about what ticket to buy
Figure 8-10
Example of a sequence of sketches as a storyboard
in the ecological perspective (sketches courtesy of Akshay Sharma,
Virginia Tech Department of Industrial Design).
For the ecological perspective, illustrate high-level interplay among human
users, the system as a whole, and the surrounding context. Look at the envisioned flow model for how usage activities fit into the overall flow. Look in the envisioned social model for concerns and issues associated with the usage in context and show them as user "thought bubbles."
As always in the ecological perspective, view the system as a black box to illustrate the potential of the system in a context where it solves particular problems. To do this, you might show a device in the hands of a user and connect its usage to the context. As an example, you might show how a handheld device could be used while waiting for a flight in an airport.
In the interaction perspective, show screens, user actions, transitions, and
user reactions. You might still show the user, but now it is in the context of user thoughts, intentions, and actions upon user interface objects in operating the device. Here is where you get down to concrete task details. Select key tasks from the HTI, design scenarios, and task-related models to feature in your interaction perspective storyboards.
Use storyboards in the emotional perspective to illustrate deeper user
experience phenomena such as fun, joy, and aesthetics. Find ways to show the experience itself; remember the excitement of the mountain bike example from Buxton (Chapter 1).
Example: Ticket Kiosk System Storyboard Sketches in the Ecological Perspective
See Figure 8-10 for an example of a sequence of sketches as a storyboard depicting a sequence using a design in the ecological perspective.
Continued
Figure 8.10, cont'd
Example: More Ticket Kiosk System Storyboard Sketches in the Ecological Perspective
In Figure 8-11 we show part of a different Ticket Kiosk System storyboard in the ecological perspective.
Figure 8-11
Part of a different Ticket Kiosk System storyboard
in the ecological perspective (sketches courtesy of Akshay Sharma, Virginia Tech Department of Industrial Design).
Example: Ticket Kiosk System Storyboard Sketches in the Interaction Perspective
The following is one possible scenario that came out of an ideation session for an interaction sequence for a town resident buying a concert ticket from the Ticket Kiosk System. This example is a good illustration of the breadth we intend for the scope of the term "interaction," including a person walking up to the kiosk, radio-frequency identification at a distance, and audio sounds being made and heard. This scenario uses the three-screen kiosk design, where LS = left-hand screen, CS = center screen, RS = right-hand screen, and SS = surround sound.
• Ticket buyer walks up to the kiosk
• Sensor detects the buyer and starts the immersive protocol
• Provides "Occupied" sign on the wrap-around case
• Detects people with MU passports
• Greets buyer and asks for PIN
• [CS] Shows recommendations and the most popular current offerings based on the buyer's category
• [RS] Shows buyer's profile if one exists on the MU system
• [LS] Lists options such as browse events, buy tickets, and search
• [CS] Buyer selects "Boston Symphony at Burruss Hall" from the recommendations
• [RS] Shows "Boston Symphony at Burruss Hall" title, information, and images
• [SS] Plays music from that symphony
• [CS] Plays a simulated/animated/video Boston Symphony performance in a venue that looks like Burruss Hall. Shows "pick date and time"
• [LS] Choices: pick date and time, go back, exit
• [CS] Buyer selects "pick date and time" option
• [CS] Shows a calendar with "Boston Symphony at Burruss Hall" highlighted, along with other known events and activities with clickable dates
• [CS] Buyer selects a date from the month view of the calendar (can be changed to a week view)
• [RS] Shows the entire context selected so far, including the date
• [CS] Shows a day view with times, such as matinee or evening. The rest of the slots in the day show related events such as wine tastings or special dinner events
• [LS] Options for making reservations at these special events
• [CS] Buyer selects a time
• [RS] Shows the selected time
• [CS] Shows an available-seating chart with names for sections/categories and the aggregate number of available seats in each section
• [LS] Categories of tickets and prices
• [CS] Buyer selects a category/section
• [RS] Updates context
• [CS] Immerses the user in the perspective of that section. Expands that section to show individual available seats. Has a call to action, "Click on open seats to select," and an option to specify the number of seats
• [LS] Options to go back to see all sections or exit
• [CS] Buyer selects one or more seats by touching available slots. A message appears: "Touch another seat to add to selection or touch a selected seat to unselect."
� [CS] Clicks on "Seat selection completed"
� [RS] Updates context
� [CS] Shows payment options and a virtual representation of selected tickets
� [LS] Provides options with discounts, coupons, sign up for mailing lists, etc.
� [CS] Buyer selects a payment option
� [CS] Provided with a prompt to put credit card in slot
� [CS] Animates to show a representation of the card on screen
� [CS] Buyer completes payment
� [LS] Options for related events, happy hour dinner reservations, etc. These are contextualized to the event they just bought the tickets just now.
� [CS] Animates with tickets and CC coming back out of their respective slots
In Figure 8-12 we have shown sample sketches for a similar storyboard.
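The scenario above is essentially a script of (screen, content) steps. As a hypothetical sketch, not part of the book's design, such a script can be captured as data and "replayed" to inspect what each of the three screens shows at any point in the storyboard; the step texts below are abbreviated from the scenario:

```python
# Each step targets one screen (LS, CS, RS) or surround sound (SS).
STEPS = [
    ("CS", "Show recommendations for buyer's category"),
    ("RS", "Show buyer's profile"),
    ("LS", "List options: browse events, buy tickets, search"),
    ("CS", "Buyer selects 'Boston Symphony at Burruss Hall'"),
    ("SS", "Play symphony audio"),
]

def replay(steps):
    """Replay the storyboard, tracking the latest content on each screen."""
    screens = {}
    for screen, content in steps:
        screens[screen] = content  # later steps overwrite earlier ones
    return screens

state = replay(STEPS)
```

Walking a design team through such a replay, step by step, serves the same purpose as flipping through storyboard frames: at every step you can ask what each screen currently shows and whether the transition makes sense.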
8.4.3 Importance of Between-Frame Transitions
Storyboard frames show individual states as static screenshots. Through a series of such snapshots, storyboards are used to show the progression of interaction over time. However, the important part of cartoons (and, by the same token, storyboards) is the space between the frames (Buxton, 2007b). The frames do not reveal how the transitions are made.
For cartoons, it is part of the appeal that this is left to the imagination, but in storyboards for design, the dynamics of interaction in these transitions are where the user experience lives and the actions between frames should be part of what is sketched. The transitions are where the cognitive affordances in your design earn their keep, where most problems for users exist, and where the challenges lie for designers.
We can augment the value of our storyboards greatly to inform design by showing the circumstances that lead to and cause the transitions and the context,
situation, or location of those actions. These include user thoughts, phrasing, gestures, reactions, expressions, and other experiential aspects of interaction. Is the screen difficult to see? Is the user too busy with other things to pay attention to the screen? Does a phone call lead to a different interaction sequence?
In Figure 8-13 we show a transition frame with a user thought bubble explaining the change between the two adjacent state frames.
Figure 8-12
Sample sketches for a similar concert ticket purchase storyboard in the interaction perspective (sketches courtesy of Akshay Sharma, Virginia Tech Department of Industrial Design).
Continued
Figure 8-12, cont'd
Figure 8-12, cont'd
Figure 8-13
Storyboard transition frame with thought bubble explaining state change (sketches courtesy of Akshay Sharma, Virginia Tech Department of Industrial Design).
8.5 DESIGN INFLUENCING USER BEHAVIOR
Beale (2007) introduces the interesting concept of slanty design. "Slanty design is an approach that extends user-centered design by focusing on the things people should (and should not) be able to do with the product(s) behind the design." Design is a conversation between designers and users about both desired and undesired usage outcomes. But user-centered design, for example, using contextual inquiry and analysis, is grounded in the user's current
behavior, which is not always optimal. Sometimes, it is desirable to change, or even control, the user's behavior.
The idea is to make a design that works best for all users taken together and for the enterprise at large within the ecological perspective. This can work against what an individual user wants. In essence, it is about controlling user behavior through designs that deliberately attenuate usability in the individual user's interaction perspective, making it difficult to do things that are not in the interest of other users or the enterprise in the ecological perspective, while still allowing individual users to accomplish the necessary basic functionality and tasks.
One example is sloped reading desks in a library, which still allow reading but make it difficult to place food or drink on the desk or, worse, on the documents. Beale's similar example in the domain of airport baggage claims is marvelously simple and effective. People stand next to the baggage conveyor belt and many people even bring their carts with them.
This behavior increases usability of the system for them because the best ease of use occurs when you can just pluck the baggage from the belt directly onto the cart.
However, crowds of people and carts cause congestion, reducing accessibility and usability for other users with similar needs. Signs politely requesting users to stay away from the belt except at the moment of luggage retrieval are regrettably ineffective. A slanty design for the baggage carousel, however, solves the problem nicely. In this case, it involves something that
is physically slanty; the surrounding floor slopes down away from the baggage carousel.
This interferes with bringing carts close to the belt and significantly reduces the comfort of people standing near the belt, thus reducing individual usability by forcing people to remain away from the carousel and then make a dash for the bags when they arrive within grasping distance. But it works best overall for everyone in the ecological perspective. Slanty design includes evaluation to eliminate unforeseen and unwanted side effects.
There are other ways that interaction design can influence user behavior.
For example, a particular device might change reading habits. The Amazon Kindle device, because of its mobility and connectedness, makes it possible for users to access and read their favorite books in many different environments. As another example, interaction design can influence users to be "green" in their everyday activities. Imagine devices that detect the proximity of the user, shutting themselves down when the user is no longer there, to conserve power.
The Green Machine User-Experience Design: An Innovative Approach to Persuading People to Save Energy with a Mobile Device That Combines Smart Grid Information Design Plus Persuasion Design
Aaron Marcus, President, and Principal Designer/Analyst, Aaron Marcus and Associates, Inc. (AM+A)
In past decades, electric meters in homes and businesses were humble devices viewed primarily by utility company service technicians. Smart Grid developments to conserve energy catapult energy data into the forefront of high-technology innovation through information visualization, social media, education, search engines, and even games and entertainment. Many new techniques of social media are transforming society and might incorporate Smart Grid data. These techniques include the following:
• Communication: blogs, microblogging, social networking, social network aggregation, event logs/tracking
• Collaboration: wikis, social bookmarking (social tagging), social news, opinions, Yelp
• Multimedia: photo/video sharing, livecasting, audio/music sharing
• Reviews and opinions: product/business reviews, community Q&As
• Entertainment: platforms, virtual worlds, game sharing
Prototypes of what might arise are to be found in many places around the Internet. As good as these developments are, they do not go far enough. Just showing people information is good, but not sufficient. What seems to be missing is persuasion.
We believe that one of the most effective ways in which to reach people is to consider mobile devices, in use by more than three billion people worldwide. Our Green Machine mobile application prototype seeks to persuade people to save energy.
Research has shown that with feedback, people can achieve a 10% energy-consumption reduction without a significant lifestyle change. In the United States, this amount is significant, equal to the total energy provided by wind and solar resources, about 113.9 billion kWh/year. President Obama allocated more than $4 billion in 2010 Smart Grid funding to help change the context of energy monitoring and usage. Most Smart Grid software development has focused on desktop personal computer applications. Relatively few efforts have explored the use of mobile devices, although an increasing number are being deployed.
For our Green Machine project, we selected a home-consumer context to demonstrate in an easy-to-understand example how information design could be merged with persuasion design to change users' behavior. The same principles can be reapplied to the business context, to electric vehicle usage, and to many other contexts. For our use scenario, we assumed typical personas, or user profiles: mom, dad, and the children, who might wish to see their home energy use status and engage with the social and information options available on their mobile devices.
We incorporated a five-step behavior-change process: increasing frequency of use of sustainability tools, motivating people to reduce energy consumption, teaching them how to reduce energy consumption, persuading them to make short-term changes, and persuading them to make long-term changes in their behavior. This process included, for example, the following techniques: rewards, user-centered design, motivating people via views into the future, motivating them through games, providing tips to help people get started and learn new behaviors, providing visual feedback, and providing social interaction.
We tested the initial designs with about 20 people of varying ages (16-65): men and women, students, professionals, and general consumers. We found most were quite positive that the Green Machine would be effective in motivating them and changing their behavior in both the short and the long term. A somewhat surprising 35% felt a future view of the world in 100 years was effective even though the news was gloomy based on current trends. We made improvements in icon design, layout, and terminology based on user feedback.
The accompanying two figures show revised screen designs for comparison of energy use and tips for purchasing green products. The first image shows how the user compares energy use with a friend or colleague. Data charts can appear, sometimes with multiple tracks, to show recent time frames, all of which can be customized; for example, a longer-term view can show performance over a month or more. The second image shows data about a product purchase that might lead the user to choose one product/company over another because of their "green" attributes. A consumption meter at the top of each screen is a constant reminder of the user's performance. Other screens offer a view into the future 100 years from now to show an estimate of what the earth will be like if people behave as the user now does. Still other screens show social networking and other product evaluation screens to show how a user might use social networks and product/service data to make smarter choices about green behavior.
The Green Machine concept design proved sturdy in tests with potential users. The revised version stands ready for further testing with multicultural users. The mental model and navigation can be built out further to account for shopping, travel, and other energy-consuming activities outside the home. The Green Machine is ready to turn over to companies or governmental sponsors of commercial products and services based on near-term Smart Grid technology developments, including smart-home management and electric/hybrid vehicle management. Even more important, the philosophy, principles, and techniques are readily adapted to other use contexts, namely that of business, both enterprise and small-medium companies, and with contexts beyond ecological data, for example, healthcare. Our company has already developed a follow-on concept design modeled on the Green Machine called the Health Machine.
Coupled with business databases, business use contexts, and business users, the Green Machine for Business might provide another example of how to combine Smart Grid technology with information design and persuasion design for desktop, Web, and mobile applications that can more effectively lead people to changes in business,
home, vehicle, and social behavior in conserving energy and using the full potential of the information that the Smart Grid can deliver.
Acknowledgment
This article is based on previous publications (Jean and Marcus, 2009, 2010; Marcus 2010a,b); it includes additional/ newer text and newer, revised images.
References
Jean, J., & Marcus, A. (2009). The Green Machine: Going Green at Home. User Experience (UX), 8(4), 20-22ff.
Marcus, A. (2010a). Green Machine Project. DesignNet, 153(6), June 2010, 114-115 (in Korean).
Marcus, A. (2010b). The Green Machine. Metering International, (2), July 2010, South Africa, 90-91.
Marcus, A., & Jean, J. (2010). Going Green at Home: The Green Machine. Information Design Journal, 17(3), 233-243.
8.6 DESIGN FOR EMBODIED INTERACTION
Embodied interaction refers to the ability to involve one's physical body in interaction with technology in a natural way, such as by gestures. Antle (2009) defines embodiment as "how the nature of a living entity's cognition is shaped by the form of its physical manifestation in the world." As she points out, in contrast to the human as information processor view of cognition, humans are primarily active agents, not just "disembodied symbol processors." This means bringing interaction into the human's physical world to involve the human's own physical being in the world.
Embodied interaction, first identified by Malcolm McCullough in Digital Ground (2004) and further developed by Paul Dourish in Where the Action Is (2001), is central to the idea of phenomenological interaction. Dourish says that embodied interaction is about "how we understand the world, ourselves, and interaction comes from our location in a physical and social world of embodied factors." It has been
described as moving the interaction off the screen and into the real world. Embodied interaction is action situated in the world.
To make it a bit less abstract, think of a person who has just purchased something with "some assembly required." To sit with the instruction manual and just think about it pales in comparison to supplementing that thinking with physical actions in the working environment-holding the pieces and moving them around, trying to fit them this way and that, seeing and feeling the spatial relations and associations among the pieces, seeing the assembly take form, and feeling how each new piece fits.
This is just the reason that physical mockups give such a boost to invention and ideation. The involvement of the physical body, motor movements, visual connections, and potentiation of hand-eye-mind collaboration lead to an embodied cognition far more effective than just sitting and thinking.
Simply stated, embodiment means having a body. So, taken literally, embodied interaction occurs between one's physical body and surrounding technology. But, as Dourish (2001) explains, embodiment does not simply refer to physical reality but to "the way that physical and social phenomena unfold in real time and real space as a part of the world in which we are situated, right alongside and around us."
As a result, embodiment is not about people or systems per se. As Dourish puts it, "embodiment is not a property of systems, technologies, or artifacts; it is a property of interaction. Cartesian approaches separate mind, body, and thought from action, but embodied interaction emphasizes their duality."
Although tangible interaction (Ishii & Ullmer, 1997) seems to have a following of its own, it is very closely related to embodied interaction. You could say that they are complements to each other. Tangible design is about interactions between human users and physical objects. Industrial designers have been dealing with it for years, designing objects and products to be held, felt, and manipulated by humans. The difference now is that the object involves some kind of computation. Also, there is a strong emphasis on physicality, form, and tactile interaction (Baskinger & Gross, 2010).
More than ever before, tangible and embodied interaction calls for physical prototypes as sketches to inspire the ideation and design process. GUI interfaces emphasized seeing, hearing, and motor skills as separate, single-user, single-computer activities. The phenomenological paradigm emphasizes other senses, action-centered skills, and motor memory. Now we collaborate and communicate and make meaning through physically shared objects in the real world.
In designing for embodied interaction (Tungare et al., 2006), you must think about how to involve hands, eyes, and other physical aspects of the human body
Figure 8-14
The Scrabble Flash Cube game.
in the interaction. Supplement the pure cognitive actions that designers have considered in the past and take advantage of the user's mind and body as they potentiate each other in problem solving.
Design for embodied interaction by finding ways to shape and augment human cognition with the physical manifestations of motor movements, coupled with visual and other senses. Start by including the environment in the interaction design and understand how it can be structured and
physically manipulated to support construction of meaning within interaction.
Embodied interaction takes advantage of several things. One is that it leverages our innate human traits of being able to manipulate with our hands. It also takes advantage of humans' advanced spatial cognition abilities-laying things on the ground and using the relationships of things within the space to support design visually and tangibly.
If we were to try to make a digital version of a game such as Scrabble (example shown later), one way to do it is by creating a desktop application where people operate in their own window to type in letters or words. This makes it an interactive game but not embodied.
Another way to make Scrabble digital is the way Hasbro did it in Scrabble Flash Cubes
(see later). They made the game pieces into real physical objects with built-in technology.
Because you can hold these objects in your hands, it makes them very natural and tangible and contributes to emotional impact because there is something fundamentally natural about that.
Example: Embodied and Tangible Interaction in a Parlor Game
Hasbro Games, Inc. has used embedded
technology in producing an electronic version of the old parlor game Scrabble. The simple but fun new Scrabble Flash Cubes game is shown in Figure 8-14. The fact that players hold the cubes, SmartLink letter tiles, in their hands and manipulate and arrange them with their fingers makes this a good example of embodied and tangible interaction.
At the start of a player's turn, the tiles each generate their own letter for the turn. The tiles can read each other's letters as they touch as a player physically shuffles them around. When the string of between two and five letters makes up a word, the tiles light up and beep and the player can try for another word with the same tiles until time is up.
The tiles also work together to time each player's turn, flag duplicates, and display scores. And, of course, it has a built-in dictionary as an authority (however arbitrary it may be) on what comprises a real word.
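The core of the game's tile behavior is easy to describe in code. The following is a hypothetical sketch, not Hasbro's implementation: each tile contributes a letter, and an arrangement of two to five tiles "lights up" when the left-to-right letters form a dictionary word. The tiny dictionary here is a stand-in for the game's built-in one:

```python
# Stand-in for the built-in dictionary that judges what counts as a word.
DICTIONARY = {"cat", "act", "cart", "trace"}

def check_arrangement(tiles):
    """Return True if the left-to-right tile letters spell a valid word.

    `tiles` is the physical ordering of the SmartLink-style letter tiles,
    e.g. ["c", "a", "t"]. Words must be two to five letters, as in the game.
    """
    word = "".join(tiles)
    return 2 <= len(word) <= 5 and word in DICTIONARY
```

In the real product this check runs across the tiles themselves as they sense their neighbors; the point of the sketch is only that the embodied part, shuffling physical cubes by hand, feeds a very simple underlying rule.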
8.7 UBIQUITOUS AND SITUATED INTERACTION
8.7.1 Ubiquitous, Embedded, and Ambient Computing
The phenomenological paradigm is about ubiquitous computing (Weiser, 1991). Since the term "computing" can conjure a mental image of desktop
computers or laptops, perhaps the better term would be ubiquitous interaction with technology, which is more about interaction with ambient computer-like technology worn by people and embedded within appliances, homes, offices, stereos and entertainment systems, vehicles, and roads.
Kuniavsky (2003) concludes that ubiquitous computing requires extra careful attention to design for the user experience. He believes ubiquitous computing devices should be narrow and specifically targeted rather than multipurpose or general-purpose devices looking more like underpowered laptops. And he emphasizes the need to design complete systems and infrastructures instead of just devices.
The concept of embedded computing leans less toward computing in the living environment and more toward computing within objects in the environment. For example, you can attach or embed radio-frequency identification chips and possibly limited GPS capabilities in almost any physical object and connect it wirelessly to the Internet. An object can be queried about what it is and where it is. You can ask your lost possessions where they are (Churchill, 2009).
There are obvious applications to products on store or warehouse shelves and inventory management. More intelligence can be built into the objects, such as household appliances, giving them capabilities beyond self-identification to sensing their own environmental conditions and taking the initiative to communicate with humans and with other objects and devices. An example is ambient computing as manifested in the idea of an aware and proactive home.
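Churchill's idea of asking your lost possessions where they are can be sketched in a few lines. This is a purely illustrative model, with made-up class and field names, not any real RFID or GPS API; it shows only the interaction concept, an object that can answer identity and location queries:

```python
class TaggedObject:
    """A physical object with an embedded tag that can report on itself."""

    def __init__(self, name, location):
        self.name = name          # self-identification
        self.location = location  # e.g. from RFID reader zone or coarse GPS

    def query(self):
        """Answer the owner's 'where are you?' question."""
        return f"{self.name} is at {self.location}"

keys = TaggedObject("car keys", "kitchen counter")
```

A smarter embedded object would go beyond this, sensing its own environmental conditions and initiating communication rather than merely answering queries, but the query/answer pattern is the baseline capability.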
8.7.2 Situated Awareness and Situated Action
The phenomenological paradigm is also about situated awareness in which the technology and, by the same token, the user are aware of their context. This includes awareness of the presence of others in one's own activity space and their awareness of your virtual presence in their activity spaces. In a social interaction setting, this can help find other people and can help cultivate a feeling of community and belonging (Sellen et al., 2006).
Being situated is all about a sense of "place," the place of interaction within the broader usage context. An example of situated awareness (credit not ours) is a cellphone that "knows" it is in a movie theater or that the owner is in a nonphone conversation; that is, the device or product encompasses knowledge of the rules of human social politeness.
Design Production 9
Objectives
After reading this chapter, you will:
1. Know how to use requirements to drive design
2. Understand the macro view of lifecycle iteration for design
3. Be able to unpack conceptual designs and explore strategies for realization in intermediate design
4. Understand wireframes and how to make and use them
5. Be prepared to use annotated scenarios, prototypes, and wireframes to represent screens and navigation in detailed design
6. Know how to maintain a custom style guide in design
7. Understand the concept of interaction design specifications for software implementation
9.1 INTRODUCTION
9.1.1 You Are Here
We begin each process chapter with a "you are here" picture of the chapter topic in the context of the overall Wheel lifecycle template; see Figure 9-1. This chapter is a continuation of the previous one about designing the new work practice and the new system.
In Chapter 7 we did ideation and sketching and in Chapter 8 we conceptualized design alternatives. Now it is time to make sure that we account
for all the requirements and envisioned models in those designs. This is especially important for domain-complex systems where it is necessary to maintain connections to contextual data.
The translation from requirements to design is often regarded as the most
difficult step in the UX lifecycle process. We should expect it to be difficult because now that we have made the cognitive shift from analysis-mode thinking
to synthesis-mode thinking, there are so many possible choices for design to meet any one given requirement and following requirements does not guarantee an integrated overall solution.
Figure 9-1
You are here; the third of three chapters on creating an interaction design in the context of the overall Wheel lifecycle template.
Beyer, Holtzblatt, and Wood (2005, p. 218) remind us that "The design isn't explicit in the data." "The data guides, constrains, and suggests directions" that design "can respond to." The requirements, whether in a requirements document or as an interpretation of the work activity affinity diagram (WAAD), offer a large inventory of things to be supported in the design.
9.2 MACRO VIEW OF LIFECYCLE ITERATIONS FOR DESIGN
In Figure 9-2 we show a "blow up" of how lifecycle iteration plays out on a macroscopic scale for the various types of design. Each type of design has its own
iterative cycle with its own kind of prototype and evaluation. Among the very first to talk about iteration for interaction design were Buxton and Sniderman (1980).
The observant reader will note that the progressive series of iterative loops in Figure 9-2 can be thought of as a kind of spiral lifecycle concept. Each loop in turn addresses an increasing level of detail. For each different project context and each stage of progress within the project, you have to adjust the amount and kind of design, prototyping, and evaluation to fit the situation in each of these incarnations of that lifecycle template.
9.2.1 Ideation Iteration
At "A" in Figure 9-2, iteration for ideation and sketching (Chapter 7) is a lightning-fast, loosely structured iteration for the purpose of exploring design ideas. The role of prototype is played by sketches, and the role of evaluation is carried out by brainstorming, discussion, and critiquing. Output is possibly multiple alternatives for conceptual designs, mostly in the form of annotated rough sketches.
Figure 9-2
Macro view of lifecycle iterations in design.
9.2.2 Conceptual Design Iteration
At "B" in Figure 9-2, iteration for conceptual design is to evaluate and compare possibly multiple design concepts and weigh concept feasibility. The type of prototype evolves with each successive iteration, roughly from paper prototype to low-fidelity wireframes and storyboards. The type of evaluation here is usually
in the form of storytelling via storyboards to key stakeholders. The idea is to communicate how the broader design concepts help users in the envisioned
work domain.
Depending on the project context, one or more of the design perspectives may be emphasized in the storyboards. This is usually the stage where key
stakeholders such as users or their representatives, business, software
engineering, and marketing must be heavily involved. You are planting the seeds for what the entire design will be for the system going forward.
9.2.3 Intermediate Design Iteration
At "C" in Figure 9-2, the purpose of intermediate design (coming up soon) iteration is to sort out possible multiple conceptual design candidates and to
arrive at one intermediate design for layout and navigation. For example, for the Ticket Kiosk System, there are at least two conceptual design candidates in the interaction perspective. One is a traditional "drill-in" concept where users are shown available categories (e.g., movies, concerts, MU athletics) from which they choose one. Based on the choice on this first screen, the user is shown further options and details, navigating with a back button and/or "bread crumb" trail, if necessary, to come back to the category view. A second conceptual design is the one using the three-panel idea described in the previous chapter.
Intermediate prototypes might evolve from low-fidelity to high-fidelity wireframes. Fully interactive high-fidelity mockups can be used as a vehicle to demonstrate leading conceptual design candidates to upper management stakeholders if you need this kind of communication at this stage. Using such wireframes or other types of prototypes, the candidate design concepts are validated and a conceptual design forerunner is selected.
9.2.4 Detailed Design Iteration
At "D" in Figure 9-2, iteration for detailed design is to decide screen design
and layout details, including "visual comps" (coming up soon) of the "skin" for look and feel appearance. The prototypes might be detailed wireframes
and/or high-fidelity interactive mockups. At this stage, the design will be fully
specified with complete descriptions of behavior, look and feel, and information on how all workflows, exception cases, and settings will be handled.
9.2.5 Design Refinement Iteration
At "E" in Figure 9-2, a prototype for refinement evaluation and iteration is usually medium to high fidelity, and evaluation is either a rapid method (Chapter 13) or a full rigorous evaluation process (Chapters 12 and 14 through 18).
9.3 INTERMEDIATE DESIGN
For intermediate design, you will need the same team you have had since ideation and sketching, plus a visual designer if you do not already have one. Intermediate design starts with your conceptual design and moves forward with increasing detail and fidelity. The goal of intermediate design is to create a
logical flow of intermediate-level navigational structure and screen designs. Even though we use the term screen here for ease of discussion, this is also applicable to other product designs where there are no explicit screens.
9.3.1 Unpacking the Conceptual Design: Strategies for Realization
At "C" in Figure 9-2, you are taking the concepts created in conceptual design, decomposing them into logical units, and expanding each unit into different possible design strategies (corresponding to different conceptual design candidates) for concept realization. Eventually you will decide on a design strategy, from which springs an iterated and evaluated intermediate prototype.
9.3.2 Ground Your Design in Application Ontology with Information Objects
Per Johnson and Henderson (2002, p. 27), you should begin by thinking in terms of the ontological structure of the system, which will now be available in analyzed and structured contextual data. This starts with what we call information objects that we identified in modeling (Chapter 6).
As these information objects move within the envisioned flow model, they are accessed and manipulated by people in work roles. In a graphics-drawing application, for example, information objects might be rectangles, circles, and other graphical objects that are created, modified, and combined by users.
Identify relationships among the application objects-sometimes hierarchical, sometimes temporal, sometimes involving user workflow. With the
help of your physical model, cast your ontological net broadly enough to identify other kinds of related objects, for example, telephones and train tickets, and their physical manipulation as done in conjunction with system operation.
In design we also have to think about how users access information objects; from the user perspective, accessing usually means getting an object on the screen so that it can be operated on in some way. Then we have to think about what kinds of operations or manipulation will be performed.
For example, in the Ticket Kiosk System, events and tickets are important information objects. Start by thinking about how these can be represented in the design. What are the best design patterns to show an event? What are the design strategies to facilitate ways to manipulate them?
In your modeling you should have already identified information objects, their attributes, and relationships among them. In your conceptual design and later in intermediate design, you should already have decided how information objects will be represented in the user interaction design. Now you can decide how users get at, or access, these information objects.
Typically, systems are too large and complex to show all information objects on the screen at once, so how do your users call up a specific information object to operate on it? Think about information seeking, including browsing and searching.
Decide what operations users will carry out on your information objects. For example, a graphics package would have an operation to create a new rectangle object and operations to change its size, location, color, etc. Think about how users will invoke and perform those operations.
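The inventory of information objects, their attributes, and their operations can be captured concretely as you design. As a minimal sketch (all names here are hypothetical, not taken from any particular toolkit), the graphics-package example above might be recorded as:

```python
# A sketch of an information object and its operations, using the
# graphics-package example: a rectangle with operations to change
# its size, location, and color. Names and defaults are hypothetical.

from dataclasses import dataclass

@dataclass
class Rectangle:
    """An information object with the attributes identified in modeling."""
    x: int = 0
    y: int = 0
    width: int = 10
    height: int = 10
    color: str = "black"

    def move(self, x: int, y: int) -> None:
        self.x, self.y = x, y

    def resize(self, width: int, height: int) -> None:
        self.width, self.height = width, height

    def recolor(self, color: str) -> None:
        self.color = color

# Usage: create a new rectangle object, then apply operations to it
r = Rectangle()
r.resize(30, 20)
r.recolor("blue")
```

Writing the inventory down this way forces an early decision about which attributes and operations each information object must support.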
Add these new things to your storyboards. The design of information object operations goes hand in hand with design scenarios (Chapter 6), personas (Chapter 7), and storyboards (Chapter 8), which can add life to the static wireframe images of screens.
9.3.3 Illustrated Scenarios for Communicating Designs
One of the best ways to describe parts of your intermediate interaction design in a document is through illustrated scenarios, which combine the visual communication capability of storyboards and screen sketches with the capability of textual scenarios to communicate details. The result is an excellent vehicle for sharing and communicating designs to the rest of the team, and to management, marketing, and all other stakeholders.
Making illustrated scenarios is simple; just intersperse graphical storyboard frames and/or screen sketches as figures in the appropriate places to illustrate
the narrative text of a design scenario. The storyboards in initial illustrated scenarios can be sketches or early wireframes (coming up later).
9.3.4 Screen Layout and Navigational Structure
During this phase, all layout and navigation elements are fully fleshed out. Using sequences of wireframes, key workflows are represented while describing what happens when the user interacts with the different user interface objects in the design. It is not uncommon to have wireframe sets represent part of the workflow or each task sequence using click-through prototypes.
9.4 DETAILED DESIGN
At "D" in Figure 9-2, for detailed design you will need the same team you had for intermediate design, plus documentation and language experts, to make sure that the tone, vocabulary, and language are accurate, precise, and consistent, both internally and with terminology used in the domain.
9.4.1 Annotated Wireframes
To iterate and evaluate your detailed designs, refine your wireframes more completely by including all user interface objects and data elements, still represented abstractly but annotated with call-out text.
9.4.2 Visual Design and Visual Comps
As a parallel activity, a visual designer who has been involved in ideation, sketching, and conceptual design now produces what we call visual "comps," short variously for comprehensive or composite layout (a term originating in the printing industry). All user interface elements are represented, now with a very specific and detailed graphical look and feel.
A visual comp is a pixel-perfect mockup of the graphical "skin," including objects, colors, sizes, shapes, fonts, spacing, and location, plus visual "assets" for user interface elements. An asset is a visual element along with all of its defining characteristics as expressed in style definitions such as cascading style sheets for a Website. The visual designer casts all of this to be consistent with company branding, style guides, and best practices in visual design.
9.5 WIREFRAMES
In Figure 9-3 we show the path from ideation and sketching, task interaction models, and envisioned design scenarios to wireframes as representations of your designs for screen layout and navigational flow.
Along with ideation and sketching, task interaction models and design scenarios are the principal inputs to storytelling and communication of designs. As sequences of sketches, storyboards are a natural extension of sketching.
Storyboards, like scenarios, represent only selected task threads. Fortunately, it is a short and natural step from storyboards to wireframes.
To be sure, nothing beats pencil/pen and paper or a whiteboard for the sketching needed in ideation (Chapter 7), but, at some point, when the design concept emerges from ideation, it must be communicated to others who pursue the rest of the lifecycle process. Wireframes have long been the choice in the
field for documenting, communicating, and prototyping interaction designs.
9.5.1 What Are Wireframes?
Wireframes, a major bread-and-butter tool of interaction designers, are a form of prototype, popular in industry practice. Wireframes comprise lines and outlines (hence the name "wire frame") of boxes and other shapes to represent emerging interaction designs. They are schematic diagrams and "sketches" that define a Web page or screen content and navigational flow. They are used to illustrate high-level concepts, approximate visual layout, behavior, and sometimes even look and feel for an interaction design. Wireframes are embodiments of maps of screen or other state transitions during usage, depicting envisioned task flows in terms of user actions on user interface objects.
The drawing aspects of wireframes are often simple, offering mainly the use of rectangular objects that can be labeled, moved, and resized. Text and graphics
Figure 9-3
The path from ideation and sketching, task interaction models, and envisioned design scenarios to wireframes.
representing content and data in the design are placed in those objects. Drawing templates, or stencils, are used to provide quick means to represent the more common kinds of user interface objects (more on this in the following sections). Wireframes are often deliberately unfinished looking; during early stages of design they may not even be to scale. They usually do not contain much visual content, such as finished graphics, colors, or font choices. The idea is to create
design representations quickly and inexpensively by just drawing boxes, lines, and other shapes.
As an example of using wireframes to illustrate high-level conceptual designs, see Figure 9-4. The design concept depicted in this figure comprises a three-column pattern for a photo manipulation application. A primary navigation pane (the "nav bar") on the left-hand side is intended to show a list of all the user's photo collections. The center column is the main content display area for details: thumbnail images and individual photos from the collection selected in the left pane.
The column on the right in Figure 9-4 is envisioned to show related contextual information for the selected collection. Note how a simple wireframe using just boxes, lines, and a little text can be effective in describing a broad
Figure 9-4
An example wireframe illustrating a high-level conceptual design.
Figure 9-5
Further elaboration of the conceptual design and layout of Figure 9-4.
interaction conceptual design pattern. Often these kinds of patterns are explored during ideation and sketching, and selected sketches are translated into wireframes.
While wireframes can be used to illustrate high-level ideas, they are used more commonly to illustrate medium-fidelity interaction designs. For example, the idea of Figure 9-4 is elaborated further in Figure 9-5. The navigation bar in the left column now shows several picture collections and a default "work bench" where all uploaded images are collected. The selected item in this column, "Italy trip," is shown as the active collection using another box with the same label and a fill color of gray, for example, overlaid on the navigation bar. The center content area is also elaborated more using boxes and a few icons to show a scrollable grid of thumbnail images with some controls on the top right. Note how certain details pertaining to the different manipulation options are left incomplete while showing where they are located on the screen.
Wireframes can also be used to show behavior. For example, in Figure 9-6 we show what happens when a user clicks on the vertical "Related information" bar in Figure 9-5: a pane with contextual information for this collection (or individual photo) slides out. In Figure 9-7 we show a different view of the content
Figure 9-6
The display that results when a user clicks on the "Related information" bar.
Figure 9-7
The display that results when a user clicks on the "One-up" view button.
pane, this time as a result of a user clicking on the "One-up" view switcher button in Figure 9-5 to see a single photo in the context pane. Double-clicking a thumbnail image will also expand that image into a one-up view to fill the content pane.
9.5.2 How Are Wireframes Used?
Wireframes are used as conversational props to discuss designs and design
alternatives. They are effective tools to elicit feedback from potential users and other stakeholders. A designer can move through a deck of wireframes one slide at a time, simulating a potential scenario by pretending to click on interaction widgets on the screen. These page sequences can represent the flow of user activity within a scenario, but cannot show all possible navigational paths.
For example, if Figures 9-5, 9-6, and 9-7 are in a deck, a designer can narrate a design scenario where user actions cause the deck to progress through the corresponding images. Such wireframes can be used for rapid and early lab-based evaluation by printing and converting them into low-fidelity paper prototypes (Chapter 11). A rough low- to medium-fidelity prototype, using screens like the ones shown in Figures 9-5, 9-6, and 9-7, can also be used for design walkthroughs and expert evaluations. In the course of such an evaluation, the expert can extrapolate intermediate states between wireframes.
What we have described so far is easy to do with almost all wireframing tools. Most wireframing tools also provide hyperlinking capabilities to make the deck a click-through prototype. While this takes more effort to create, and even more to maintain as the deck changes, it provides a more realistic representation of the envisioned behavior of the design. However, the use of this kind of prototype in an evaluation might require elaborating all the states of the design in the workflow that is the focus of the evaluation.
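To make the hyperlinking idea concrete, a click-through deck can be thought of as a transition table: each (screen, widget) pair maps to the next wireframe to display. A minimal sketch in Python, with hypothetical screen and widget names loosely based on Figures 9-5 through 9-7:

```python
# A click-through wireframe deck as a transition table. Each entry says
# which wireframe to show when a given widget on a given screen is clicked.
# Screen and widget names are hypothetical.
transitions = {
    ("gallery", "related_info_bar"): "gallery_with_context_pane",  # cf. Figure 9-6
    ("gallery", "one_up_button"): "one_up_view",                   # cf. Figure 9-7
    ("one_up_view", "grid_button"): "gallery",
}

def click(screen: str, widget: str) -> str:
    """Return the next screen; stay on the same screen if the widget is not wired up."""
    return transitions.get((screen, widget), screen)
```

Enumerating the table this way also exposes which states of the workflow have not yet been elaborated, which is part of the maintenance cost noted above.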
Finally, after the design ideas are iterated and agreed upon by relevant stakeholders, wireframes can be used as interaction design specifications. When wireframes are used as inputs to design production, they are annotated with details to describe the different states of the design and widgets, including mouse-over states, keyboard inputs, and active focus states. Edge cases and transition effects are also described. The goal here is completeness, to enable a
developer to implement the designs without the need for any interpretation.
Such specifications are usually accompanied by high-fidelity visual comps, discussed previously in this chapter.
9.5.3 How to Build Wireframes?
Wireframes can be built using any drawing or word processing software package that supports creating and manipulating shapes, such as iWork Pages, Keynote, Microsoft PowerPoint, or Word. While such applications suffice for simple wireframing, we recommend tools designed specifically for this purpose, such as OmniGraffle (for Mac), Microsoft Visio (for PC), and Adobe InDesign.
Many tools and templates for making wireframes are used in combination, truly an invent-as-you-go approach serving the specific needs of prototyping. For example, some tools are available to combine the generic-looking placeholders in wireframes with more detailed mockups of some screens or parts of screens. In essence they allow you to add color, graphics, and real fonts, as well as representations of real content, to the wireframe scaffolding structure.
In early stages of design, during ideation and sketching, you started with thinking about the high-level conceptual design. It makes sense to start with that here, too, first by wireframing the design concept and then by going top down to address major parts of the concept. Identify the interaction conceptual design using boxes with labels, as shown in Figure 9-4.
Take each box and start fleshing out the design details. What are the different kinds of interaction needed to support each part of the design, and what kinds of widgets work best in each case? What are the best ways to lay them out? Think about relationships among the widgets and any data that need to go with them. Leverage design patterns, metaphors, and other ideas and concepts from the work domain ontology. Do not spend too much time with exact locations of these widgets or on their alignment yet. Such refinement will come in later iterations after all the key elements of the design are represented.
As you flesh out all the major areas in the design, be mindful of the information architecture on the screen. Make sure the wireframes convey that inherent information architecture. For example, do elements on the screen follow a logical information hierarchy? Are related elements on the screen positioned in such a way that those relationships are evident? Are content areas indented appropriately? Are margins and indents communicating the hierarchy of the content in the screen?
Next it is time to think about sequencing. If you are representing a workflow, start with the "wake-up" state for that workflow. Then make a wireframe representing the next state, for example, to show the result of a user action such as clicking on a button. In Figure 9-6 we showed what happens when a user clicks
on the "Related information" expander widget. In Figure 9-7 we showed what happens if the user clicks on the "One-up" view switcher button.
Once you create the key screens to depict the workflow, it is time to review and refine each screen. Start by specifying all the options that go on the screen (even those not related to this workflow). For example, if you have a toolbar, what are all the options that go into that toolbar? What are all the buttons, view switchers, window controllers (e.g., scrollbars), and so on that need to go on the screen? At this time you are looking at the scalability of your design. Are the design pattern and layout still working after you add all the widgets that need to go on this screen?
Think of cases when the windows or other container elements, such as navigation bars, are resized or when different data elements that need to be supported are larger than shown in the wireframe. For example, in Figures 9-5 and 9-6, what must happen if the number of photo collections is greater than what fits in the default size of that container? Should the entire page scroll or should new scrollbars appear on the left-hand navigation bar alone? How about situations where the number of people identified in a collection is large? Should we show the first few (perhaps those with the most associated photos) with a "more" option, should we use an independent scrollbar for that pane, or should we scroll the entire page? You may want to make wireframes for such edge cases; remember, they are less expensive and easier to do using boxes and lines than in code.
As you iterate your wireframes, refine them further, increasing the fidelity of the deck. Think about proportions, alignments, spacing, and so on for all the widgets. Refine the wording and language aspects of the design. Get the wireframe as close to the envisioned design as possible within the constraints of using boxes and lines.
9.5.4 Hints and Tips for Wireframing
Because the point of wireframing is to make quick prototypes for exploring design ideas, one of the most important things to remember about wireframing is modularity. Just as in paper prototyping, you want to be able to create multiple design representations quickly.
Being modular means not having too many concepts or details "hard coded" in any one wireframe. Build up concepts and details using "layers." Most good wireframing tools provide support for layers that can be used to abstract related design elements into reusable groups. Use a separate layer for each repeating set of widgets on the screen. For example, the container "window" of the
application with its different controls can be specified once as a layer and this layer can be reused in all subsequent screens that use that window control.
Similarly, if there is a navigation area that is not going to change in this wireframe deck, for example, the left-hand collections pane in Figure 9-5, use one shared layer for that. Layers can be stacked upon one another to construct a slide. This stacking also provides support for ordering in the Z axis to show overlapping widgets.
Selection highlights, for example, showing that "Italy trip" is the currently selected collection in Figure 9-5, can also be created using a separate "highlight" layer.
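The layering idea can be sketched abstractly as bottom-to-top composition. The layer contents below are hypothetical placeholders, not tied to any particular wireframing tool:

```python
# A wireframe slide as a stack of reusable layers, composed bottom-to-top.
# Shared layers (window chrome, nav pane) are defined once and reused across
# slides; slide-specific layers (a selection highlight) are stacked on top.
window_chrome = ["window frame", "title bar", "close button"]
nav_pane = ["collections list"]
highlight = ["'Italy trip' selection highlight"]

def compose(*layers: list) -> list:
    """Flatten layers into one slide; later layers draw on top (higher Z order)."""
    slide = []
    for layer in layers:
        slide.extend(layer)
    return slide

# Reuse the shared layers on every slide; add the highlight only where needed
slide_figure_9_5 = compose(window_chrome, nav_pane, highlight)
```

Because each layer is defined once, updating the shared window chrome changes every slide that stacks it, which is exactly the maintenance benefit layers provide.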
Another tip for efficient wireframing is to use stencils, templates, and libraries of widgets. Good wireframing tools often have a strong community following of users who share wireframing stencils and libraries for most popular domains (e.g., interaction design) and platforms (e.g., Web, Apple iOS, Google's Android, Microsoft's Windows, and Apple's Macintosh). Using these libraries, wireframing becomes as easy as dragging and dropping different widgets onto layers on a canvas.
Create your own stencil if your design is geared toward a proprietary platform or system. Start with your organization's style guide and build a library of all common design patterns and elements. Apart from efficiency, stencils and libraries afford consistency in wireframing.
Some advanced wireframing tools even provide support for shared objects in a library. When these objects are modified, it is possible to automatically update all instances of those objects in all linked wireframe decks. This makes maintenance and updates to wireframes easier.
Sketchy wireframes
Sometimes, when using wireframes to elicit feedback from users, if you want to convey the impression that the design is still amenable to changes, make wireframes look like sketches. We know from Buxton (2007a) that the style or "language" of a sketch should not convey the perception that it is more developed than it really is. Straight lines and coloring within the lines give the false impression that the design is almost finished and, therefore, constructive criticism and new ideas are no longer appropriate.
However, conventional drawing tools, such as Microsoft Visio, Adobe Illustrator, OmniGraffle, and Adobe InDesign, produce rigid, computer-drawn boxes, lines, and text. In response, "There is a growing popularity toward something in the middle: Computer-based sketchy wireframes. These allow computer wireframes to look more like quick, hand-drawn sketches while retaining the reusability and polish that we expect from digital artifacts" (Travis, 2009).
Fortunately, there are now a number of templates and tools, such as Balsamiq Mockups (http://balsamiq.com/products/mockups), that let you use standard drawing packages to draw user interface objects in a "sketchy" style, making lines and text look as if drawn by hand.
9.6 MAINTAIN A CUSTOM STYLE GUIDE
9.6.1 What Is a Custom Style Guide?
A custom style guide is a document that is fashioned and maintained by
designers to capture and describe details of visual and other general design
decisions that can be applied in multiple places. Its contents can be specific to one project or an umbrella guide across all projects on a given platform or over a whole organization.
A custom style guide is a kind of internal documentation integral to the design process. Every project needs one. Your custom style guide documents all the design decisions you make about style issues in your interaction design, especially your screen designs.
Because your design decisions continue to be made throughout the project and because you sometimes change your mind about design decisions, the custom style guide is a living document that grows and is refined along with the design. Typically this document is private to the project team and is used only internally within the development organization.
Although style guides and design guidelines (Chapter 22) both give guidance for design, they are otherwise almost exact opposites. Guidelines are usually suggestions to be interpreted; compliance with style guides is often required.
Guidelines are very general and broad in their applicability and usually independent of implementation platforms and interaction styles. Style guides are usually very specific to a platform and interaction style and even to a particular device.
9.6.2 Why Use a Custom Style Guide?
Among the reasons for designers to use a custom style guide within a project are:
• It helps with project control and communication. Without documentation of the large numbers of design decisions, projects, especially large projects, get out of control. Everyone invents and introduces his or her own design ideas, possibly different each day. The result almost inevitably is poor design and a maintenance nightmare.
• It is a reliable force toward design consistency. An effective custom style guide helps reduce variations of the details of widget design, layout, formatting, color choices, and so on, giving you consistency of details throughout a product and across product lines.
• A custom style guide is a productivity booster through reuse of well-considered design ideas. It helps avoid the waste of reinvention.
9.6.3 What to Put in a Custom Style Guide?
Your custom style guide should include all the kinds of user interface objects where your organization cares the most about consistency (Meads, 2010). Most style guides are very detailed, spelling out the parameters of graphic layouts and grids, including the size, location, and spacing of user interface elements. This includes widget (e.g., dialogue boxes, menus, message windows, toolbars) usage, position, and design. Also important are the layouts of forms, including the fields, their formatting, and their location on forms.
Your style guide is the appropriate place to standardize fonts, color schemes, background graphics, and other common design elements. Other elements of a style guide include interaction procedures, interaction styles, message and dialogue fonts, text styles and tone, labeling standards, vocabulary control for terminology and message wording, and schemes for deciding how to use defaults and what defaults to use. It should be worded very specifically, and you should spell out interpretations and conditions of applicability.
You should include as many sample design sketches and pictures taken from designs on screens as possible to make it communicate visually. Supplement with clear explanatory text. Incorporate lots of examples of good and bad design, including specific examples of UX problems found in evaluation related to style guide violations.
Your style guide is also an excellent place to catalog design "patterns" (Borchers, 2001), your "standard" ways of constructing menus, icons, dialogue boxes, and so on. Perhaps one of the most important parts of a style guide is the rules for organizational signature elements for branding.
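Many of these decisions can be captured as data rather than prose, which makes them easy to reuse and to check designs against. A minimal sketch, with all values hypothetical:

```python
# Style-guide entries captured as structured data: fonts, colors, spacing,
# and controlled vocabulary. Every value here is a hypothetical example,
# not a recommendation.
STYLE_GUIDE = {
    "fonts": {"dialogue": ("Segoe UI", 11), "message": ("Segoe UI", 10)},
    "colors": {"background": "#FFFFFF", "brand_accent": "#0063B1"},
    "spacing": {"form_field_gap_px": 8, "margin_px": 16},
    # Vocabulary control: both candidate terms resolve to one standard label
    "vocabulary": {"delete": "Delete", "remove": "Delete"},
}

def preferred_term(word: str) -> str:
    """Look up the standardized wording for a label; pass through unknown terms."""
    return STYLE_GUIDE["vocabulary"].get(word.lower(), word)
```

A lookup like preferred_term is one simple way to enforce the vocabulary control for terminology and message wording described above.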
Example: Make Up Your Minds
At the Social Security Administration (SSA), we encountered a design discussion about whether to put the client's name or the client's social security number first on a form used in telephone interviews. The current system had the social security number first, but some designers changed it because they thought it would be friendlier to ask the name first.
Later, another group of designers had to change it back to social security number first because the SSA's policy for contact with clients requires first
asking the social security number in order to retrieve a unique SSA record for that person. Then the record is used to verify all the other variables, such as name and address. This policy, in fact, was the reason it had been done this way in the beginning, but because that first design group did not document the design decision about field placement in this type of form or the rationale behind it in their custom style guide, others had to reinvent and redesign-twice.
9.7 INTERACTION DESIGN SPECIFICATIONS
9.7.1 What Is an Interaction Design Specification?
Interaction design specifications are descriptions of user interface look and feel
and behavior at a level of completeness that will allow a software programmer to
implement it precisely.
Discussions of "specifications" often lead to a diversity of strongly felt opinions. By definition, a specification is a complete and correct description of something. Specifications play an indispensable role in software engineering. However, because it is difficult or impossible to construct complete and correct descriptions of large complex systems, it is not uncommon to find incomplete and ambiguous specifications in the software development world. Also, there are no standards for interaction design specifications.
As a result, this connection between the two domains persists as one of the great mysteries in the trade, one of the things people on both sides seem to know the least about. In each organization, people in project roles on both sides figure out their own ways to handle this communication, to varying degrees of effectiveness, but there is no one general or broadly shared approach. See Chapter 23 for a more in-depth discussion about this communication problem.
In human-computer interaction (HCI), some argue that it is not practical to create a design specification because, as soon as designers invest the effort, the specification is more or less rendered useless by changes in the design due to our iterative lifecycle concept. However, there is no reason that a design specification cannot be just as dynamic as the design itself. In fact, a series of versions of a design specification can be valuable in tracking the trajectory of the evolving design and as a way to reflect on the process. In addition, by maintaining the interaction design specifications as the design progresses, it is possible to give the SE team periodic previews, avoiding surprises at the end.
9.7.2 Why Should We Care about Interaction Design Specifications?
Well, when we have devoted our resources to design and iterative refinement of the interaction part of a system, we would really like to get that design into the software of the system itself. To do that, we have to tell the SE people, the ones who will implement our designs, what to build for the interaction part. The user interaction design on the UX side becomes the user interface software requirements for the user interface software design on the SE side.
In simple terms, we UX folks need a design representation because the SE folks need a requirements specification for the user interface software. You want it to be a very specific specification so there is no room for the SE people to do interaction design on their own.
Without some kind of interaction design specification, the software result could be almost anything. However, in practice, it is prohibitively expensive to produce specifications that are "complete." Designers usually infuse enough graphical and textual detail for a programmer to understand design intent, and issues that are not clear in the specification are handled through social back channels. If programmers are part of the process early on, they will have a better understanding of the design as it evolved and therefore less need for explanations outside of the specification.
9.7.3 What about Using a Prototype as a Design Specification?
The case for prototypes as interaction design representations is built on the fact that prototypes already exist naturally as concrete, living design representations. Abstract textual design specifications do not lend themselves to visualization of the design, whereas a prototype can be "touched" and manipulated to examine the design in action. Plus, prototypes capture all that design detail in a way that no descriptive kind of representation can.
It is especially easy to view an iteratively refined and relatively complete high-fidelity prototype as a wonderfully rich and natural way to represent an interaction design. And it looks even better when compared to the enormous, tedious, and cumbersome additional task of writing a complete specification document describing the same design in text. For example, just one dialogue box in an interaction would typically require voluminous narrative text, including declarative definitions of all objects and their attributes. The resulting long litany of descriptor attributes and values, when read (or if read), would fail to convey the simple idea conveyed by seeing and "trying" the dialogue box itself.
However, while prototypes make for good demonstrations of the design, they
are not effective as reference documents. A prototype cannot be "searched" to find where a specific design point or requirement is addressed. A prototype does not have an "index" with which to look up specific concepts. A prototype cannot be treated as a list of features to be implemented. Some say there is no substitute for having a formal document that spells everything out and that can be used to resolve arguments and answer questions about the requirements.
Also, some prototypes are not complete or even 100% accurate in all details. Taken as a specification, this kind of prototype does not reveal which parts are incomplete or only representative.
A prototype requires interpretation as a specification. There is still a great deal about a dialogue box, for example, not necessarily conveyed by a picture. Is it every detail that you see, including the text on the labels, the font and colors, and so on? For example, is the font size of a particular button label within a complicated dialogue box the exact font style and size that shall be used, or just something used because the designers had to use some font? The prototype does not say. Of course, the higher its fidelity, the more literally it is to be taken, but the dividing line is not always explicit.
9.7.4 Multiple, Overlapping Representation Techniques as a Possible Solution
Because no single representation technique serves all purposes as an interaction design specification, we must do our best to compile sets of representations to include as much of the interaction design as possible. In the current state of the art this can mean coalescing descriptions in multiple and sometimes overlapping dimensions, each of which requires a different kind of representation technique.
These multiple descriptions come from the many work products that have evolved in parallel as we moved through the formulation of requirements and early design-informing models (Chapter 6), including hierarchical task inventory (HTI) diagrams, usage scenarios, screen designs, user interface object details (graphical user interface objects, not the OO software kind), wireframes, lists of pull-down menu options, commands, dialogue boxes, messages, and behaviors, and of course the prototype.
9.8 MORE ABOUT PARTICIPATORY DESIGN
Although we do not describe participatory design as a specific technique in the main part of this chapter, users certainly can and should participate in the entire design process, starting from ideation and sketching to refinement. Because the specific technique of participatory design is an important part of HCI history and literature, we touch on it here.
9.8.1 Basics of Participatory Design
At the very beginning of a design project, you often have the user and customers on one side and system designers on the other. Participatory design is a way to combine the knowledge of work practice of the users and customers with the process skills of system designers.
Interestingly, although participatory design has much in common with contextual inquiry and contextual analysis, including its origins, many applications of participatory design have occurred in the absence of upfront contextual inquiry or contextual analysis. Regardless of how it gets started, many design teams end up realizing that although participatory design is a good way to get at real user needs by involving users in design, it is not a substitute for involving users in defining requirements, the objective of contextual inquiry and contextual analysis.
A participatory design session usually starts with reciprocal learning in which the users and the designers learn about each other's roles: designers learn about work practices and users learn about technical constraints (Carmel, Whitaker, & George, 1993). The session itself is a democratic process. Rank or job title has no effect; anyone can post a new design idea or change an existing feature. Only positive and supportive attitudes are tolerated; no one may criticize or attack another person or their ideas. This leads to an atmosphere of freedom to express even the farthest-out ideas; creativity rules.
In our own experience, we have found participatory design very effective for specific kinds of interaction situations. For example, we think it could be a good approach, especially if used in conjunction with design scenarios, to sketching out the first few levels of screens of the Ticket Kiosk System interaction. These first screens are very important to the user experience; they are where users form first impressions and where we can least afford to have users get lost and customers turn away. However, in our experience, the technique sometimes does not scale up well to complete designs of large and complex systems.
9.8.2 PICTIVE2: An Example of an Approach to Participatory Design
Inspired by the mockup methods of the Scandinavian project called UTOPIA (Bødker et al., 1987), which provided opportunities for workers to give inputs to workplace technology and organizational work practices, PICTIVE (Muller, 1991; Muller, Wildman, & White, 1993) is an example of how participatory design has been operationalized in HCI. PICTIVE supports rapid group
2Plastic Interface for Collaborative Technology Initiatives through Video Exploration.
prototype design using paper and pencil and other "low technology" materials on a large table top in combination with video recording.
The objective is for the group to work together to find technological design solutions to support work practice and, sometimes, to redesign the work practice in the process. Video recording is used to chronicle and communicate the design process and to record walkthroughs used to summarize the designs.
PICTIVE is, as are most participatory design approaches, a hands-on design-by-doing technique using low-tech tools, such as those used for paper prototyping: blackboards, large sheets of paper, bulletin boards, push pins, Post-it notes, colored marking pens, index cards, scissors, and tape. PICTIVE deliberately uses these low-tech (noncomputer, nonsoftware) representations to level the playing field between users and technical design team members.
Otherwise, using even the most primitive programming tools to build prototypes on the fly can cast the users as outsiders and the design practitioners as intermediaries through whom all user ideas must flow. The session is then no longer a collaborative storytelling activity.
After the mutual introduction to each other's backgrounds and perspectives, the group typically discusses the task at hand and the design objectives to get on the same page for doing the design. Then they gather around a table on which there is a large paper representation of a generic computer "window." Anyone can step forward and "post" a design feature, for example, a button, icon, menu, dialogue box, or message, by writing or drawing it on a Post-it note or similar piece of paper, sticking it on the "window" working space, and explaining the rationale. The group can then discuss refinements and improvements. Someone else can edit the text on the object, for example, or change its location in the window.
The group works collaboratively to expand and modify the design: adding new objects, changing and moving objects to create new layouts and groupings, rewording labels and messages, and so on, all the while communicating the thinking and reasons behind each change. The results can be evaluated immediately as low-fidelity prototypes via walkthroughs (usually video recorded for further sharing and evaluation). In most project environments that use this kind of participatory design, it is used in the consultative design mode, where users participate in forming parts of the design but the professional design practitioners have final responsibility for the overall design.
PICTIVE has been evaluated informally in the context of several real product design projects (Muller, 1992). User participants report getting enjoyment from the process and great satisfaction in having a receptive audience for their own design ideas and, especially, in seeing those design ideas included in the group's output.
9.8.3 History and Origins
Participatory design entails user participation in design for work practice. Participatory design is a democratic process for design (social and technological) of systems involving human work, based on the argument that users should be involved in designs they will be using, and that all stakeholders, including and especially users, have equal inputs into interaction design (Muller & Kuhn, 1993).
The idea of user participation in system design harkens back (as does the work on contextual studies) at least to a body of effort called work activity theory (Bødker, 1991; Ehn, 1990). Originating in Russia and Germany, it flourished in Scandinavia in the 1980s, where it was closely related to the workplace democracy movement. These early versions of participatory design embraced a view of design based on work practice situated in a worker's own complete environment, but also espoused empowerment of workers to "codetermine the development of the information system and of their workplace" (Clement & Besselaar, 1993).
Going back to the 1980s and earlier, probably the most well-known participatory design project was the Scandinavian project called UTOPIA (Bødker et al., 1987). A main goal of Project UTOPIA was to overcome limitations on opportunities for workers to affect workplace technology and organizational work practices. UTOPIA was one of the first such projects intended to produce a commercial product at the end of the day.
Participatory design has been practiced in many different forms with different rules of engagement. In some projects, participatory design limits user power to creating only inputs for the professional designers to consider, an approach called consultative design by Mumford (1981). Other approaches give the users full power to share in the responsibility for the final outcome, in what Mumford calls consensus design.
Also beginning in the 1970s and 1980s, an approach to user involvement in design (but probably developed apart from the participatory design history in Scandinavia) called Joint Application Design was emerging from IBM in the United States and Canada (August 1991). Joint Application Design falls between consultative design and consensus design in the category of representative design (Mumford, 1981), a commonly used approach in industry in which user representatives become official members of the design teams, often for the duration of the project. In comparison with participatory design, Joint Application Design is often a bit more about group dynamics, brainstorming, and organized group meetings.
In the early 1990s, the Scandinavian approach to democratic design was adapted and extended within the HCI community in the form of participatory design. Muller's (1991) vision of participatory design as embodied in his PICTIVE approach is the most well-known adaptation of the general concept specifically to HCI. The first Participatory Design Conference met in 1990, and it has been held biennially ever since. Participatory design has since been codified for practice (Greenbaum & Kyng, 1991), reviewed (Clement & Besselaar, 1993), and summarized (Muller, 2003a,b).
Summary of the Flow of Activities in Chapters 3 through 9
UX Goals, Metrics, and Targets
10.1 INTRODUCTION
10.1.1 You Are Here
We are making splendid progress in moving through the Wheel UX lifecycle template. In this chapter we establish operational targets for user experience to
assess the level of success in your designs so that you know when you can move on
to the next iteration. UX goals, metrics, and targets help you plan for evaluation that will successfully reveal the user performance and emotional satisfaction bottlenecks. Because UX goals, metrics, and targets are used to guide much of the process from analysis through evaluation, we show it as an arc around the entire lifecycle template, as you can see in Figure 10-1.
10.1.2 Project Context for UX Metrics and Targets
In early stages, evaluation usually focuses on qualitative data for finding UX problems. In these early evaluations the absence of quantitative data precludes the use of UX metrics and targets. But you may still want to establish them at this point if you intend to use them in later evaluations.
However, there is another reason why you might forgo UX metrics and targets. In most practical contexts, specifying UX metrics and targets and following up with
Figure 10-1
You are here; the chapter on UX goals, metrics, and targets in the context of the overall Wheel lifecycle template.
them may be too expensive. This level of completeness is only possible in a few organizations where there are established UX resources.
In most places, one round of evaluation is all one gets. Also, as designers, we can know which parts of the design need further
investigation just by looking at the results of the first round of evaluation. In such cases, quantitative UX metrics and targets may not be useful
but benchmark tasks are still essential as vehicles for driving evaluation.
Regardless, the trend in the UX field is moving away from a focus on user performance and more toward user satisfaction and enjoyment. We include
the full treatment of UX goals, metrics, and targets here and quantitative
data collection and analysis in the later UX evaluation chapters for completeness and because some readers and practitioners still want coverage of the topic.
In any case, we find that this pivotal interaction design process activity of specifying UX goals, metrics, and targets is often overlooked, whether because of lack of knowledge or lack of time. This can be unfortunate because it diminishes the potential of what can be accomplished with the resources you will be putting into user experience evaluation. This chapter will help you avoid that pitfall by showing you how to specify UX goals, metrics, and targets.
Fortunately, creating UX metrics and targets, after a little practice, does not
take much time. You will then have specific quantified UX goals against which to test rather than just waiting to see what happens when you put users in front of your interaction design. Because UX metrics and targets provide feasible
objectives for formative evaluation efforts, the results can help you pinpoint
where to focus on redesign most profitably.
And, finally, UX goals, metrics, and targets offer a way to help manage the
lifecycle by defining a quantifiable end to what can otherwise seem like endless
iteration. Of course, designers and managers can run out of time, money, and
patience before they meet their UX targets, sometimes after just one round of evaluation, but at least then they know where things stand.
10.1.3 Roots for UX Metrics and Targets
The concept of formal UX measurement specifications in tabular form, with various metrics operationally defining success, was originally developed by Gilb (1987). The focus of Gilb's work was on using measurements in managing software development resources. Bennett (1984) adapted this approach to usability specifications as a technique for setting planned usability levels and managing the process to meet those levels.
These ideas were integrated into usability engineering practice by Good et al. (1986) and further refined by Whiteside, Bennett, and Holtzblatt (1988).
Usability engineering, as defined by Good et al. (1986), is a process through which quantitative usability characteristics are specified early and measured throughout the lifecycle process.
Carroll and Rosson (1985) also stressed the need for quantifiable usability specifications, associated with appropriate benchmark tasks, in iterative refinement of user interaction designs. And now we have extended the concept to UX targets. Without measurable targets, it is difficult to determine, at least
quantitatively, whether the interaction design for a system or product is meeting
your UX goals.
10.2 UX GOALS
UX goals are high-level objectives for an interaction design, stated in terms
of anticipated user experience. UX goals can be driven by business goals; they reflect real use of a product and identify what is important to an organization, its customers, and its users. They are expressed as desired effects to be experienced by users of features in the design, and they translate into a set of UX measures. A UX measure is a usage attribute to be assessed in evaluating a UX goal.
You will extract your UX goals from user concerns captured in work activity notes, the flow model, social models, and work objectives, some of which will be market driven, reflecting competitive imperatives for the product. User
experience goals can be stated for all users in general or in terms of a specific
work role or user class or for specific kinds of tasks.
Examples of user experience goals include ease-of-use, power performance for experts, avoiding errors for intermittent users, safety for life-critical systems, high customer satisfaction, walk-up-and-use learnability for new users, and so on.
Example: User Experience Goals for Ticket Kiosk System
We can define the primary high-level UX goals for the ticket buyer to include:
• Fast and easy walk-up-and-use user experience, with absolutely no user training
• Fast learning so new user performance (after limited experience) is on par with that of an experienced user [from AB-4-8]
• High customer satisfaction leading to a high rate of repeat customers [from BC-6-16]

Some other possibilities:

• High learnability for more advanced tasks [from BB-1-5]
• Draw, engagement, attraction
• Low error rate for completing transactions correctly, especially in the interaction for payment [from CG-13-17]
10.3 UX TARGET TABLES
Through years of working with real-world UX practitioners and doing our own user experience evaluations, we have refined the concept of a UX target table, in the form shown in Table 10-1, from the original conception of a usability specification table, as presented by Whiteside, Bennett, and Holtzblatt (1988). A spreadsheet is an obvious way to implement these tables.
For convenience, one row in the table is called a "UX target." The first three columns are for the work role and related user class to which this UX target applies, the associated UX goal, and the UX measure. The three go together because each UX measure is aimed at supporting a UX goal and is specified with respect to a work role and user class combination. Next, we will see where you get the information for these three columns.
As a running example to illustrate the use of each column in the UX target table, we will progressively set some UX targets for the Ticket Kiosk System.
Table 10-1
Our UX target table, as evolved from the Whiteside, Bennett, and Holtzblatt (1988) usability specification table
| Work Role: User Class | UX Goal | UX Measure | Measuring Instrument | UX Metric | Baseline Level | Target Level | Observed Results |
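Since Section 10.3 notes that a spreadsheet is an obvious way to implement these tables, the row structure of Table 10-1 can also be sketched as a simple record. Here is a minimal Python sketch; the field names are our own naming choices, and the sample values (notably the measuring instrument, metric, and levels) are hypothetical illustrations, not values taken from the book's tables:

```python
from dataclasses import dataclass

# Fields follow the columns of Table 10-1; names are our own choices.
@dataclass
class UXTarget:
    work_role_user_class: str
    ux_goal: str
    ux_measure: str
    measuring_instrument: str
    ux_metric: str
    baseline_level: str
    target_level: str
    observed_results: str = ""  # filled in after an evaluation session

# One illustrative row, loosely based on the Ticket Kiosk System example
target = UXTarget(
    work_role_user_class="Ticket buyer: Casual new user, for occasional personal use",
    ux_goal="Walk-up ease of use for new user",
    ux_measure="Initial user performance",
    measuring_instrument="Benchmark task: buy a ticket",  # hypothetical wording
    ux_metric="Average time on task",                     # hypothetical choice
    baseline_level="3 minutes",                           # hypothetical value
    target_level="2.5 minutes",                           # hypothetical value
)
print(target.ux_goal)
```

A list of such records, one per UX target, plays the same role as the rows of the spreadsheet.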
10.4 WORK ROLES, USER CLASSES, AND UX GOALS
Because UX targets are aimed at specific work roles, we label each UX target by work role. Recall that different work roles in the user models perform different
task sets.
So the key task sets for a given work role will have associated usage scenarios,
which will inform benchmark task descriptions we create as measuring
instruments to go with UX targets. Within a given work role, different user classes will generally be expected to perform to different standards, that is, at different target levels.
Example: A Work Role, User Class, and UX Goal for the Ticket Kiosk System
In Table 10-1, we see that the first values to enter for a UX target are work role, a corresponding user class, and related UX goal. As we saw earlier, user class
definitions can be based on, among other things, level of expertise, disabilities
and limitations, and other demographics.
For the Ticket Kiosk System, we are focusing primarily on the ticket buyer. For this work role, user classes include a casual town resident user from Middleburg and a student user from the Middleburg University. In this example, we feature the casual town user.
Translating the goal of "fast-and-easy walk-up-and-use user experience" into a UX target table entry is straightforward. This goal refers to the ability of a typical occasional user to do at least the basic tasks on the first try, certainly without training or manuals. Typing them in, we see the beginnings of a UX target in the first row of Table 10-2.
Table 10-2
Choosing a work role, user class, and UX goal for a UX target
| Work Role: User Class | UX Goal | UX Measure | Measuring Instrument | UX Metric | Baseline Level | Target Level | Observed Results |
| Ticket buyer: Casual new user, for occasional personal use | Walk-up ease of use for new user | | | | | | |
10.5 UX MEASURES
Within a UX target, the UX measure is the general user experience
characteristic to be measured with respect to usage of your interaction design. The choice of UX measure implies something about which types of measuring instruments and UX metrics are appropriate.
UX targets are based on quantitative data-both objective data, such as observable user performance, and subjective data, such as user opinion and satisfaction.
Some common UX measures that can be paired with quantitative metrics include:
• Objective UX measures (directly measurable by evaluators)
  • Initial performance
  • Long-term performance (longitudinal, experienced, steady state)
  • Learnability
  • Retainability
  • Advanced feature usage
• Subjective UX measures (based on user opinions)
  • First impression (initial opinion, initial satisfaction)
  • Long-term (longitudinal) user satisfaction
Initial performance refers to a user's performance during the very first use (somewhere between the first few minutes and the first few hours, depending on the complexity of the system). Long-term performance typically refers to performance during more constant use over a longer period of time (fairly regular use over several weeks, perhaps). Long-term usage usually implies a steady-state learning plateau by the user; the user has become familiar with the system and is no longer constantly in a learning state.
Initial performance is a key UX measure because any user of a system must, at some point, use it for the first time. Learnability and retainability refer, respectively, to how quickly and easily users can learn to use a system and how well they retain what they have learned over some period of time.
Advanced feature usage is a UX measure that helps determine user experience of more complicated functions of a system. The user's initial opinion
of the system can be captured by a first impression UX measure, whereas long-term user satisfaction refers, as the term implies, to the user's opinion after using the system for some greater period of time, after some allowance for learning.
Initial performance and first impression are appropriate UX measures for virtually every interaction design. Other UX measures often play support roles to address more specialized UX needs. Conflicts among UX measures are not unheard of. For example, you may need both good learnability and good expert performance. In the design, those requirements can work against each other. This, however, just reflects a normal kind of design trade-off. UX targets based on the two different UX measures imply user performance requirements pulling in two different directions, forcing the designers to stretch the design and face the trade-off honestly.
Example: UX Measures for the Ticket Kiosk System
For the walk-up ease-of-use goal of our casual new user, let us start simply with just two UX measures: initial performance and first impression. Each UX measure will appear in a separate UX target in the UX target table, with the user class of the work role and UX goal repeated, as in Table 10-3.
10.6 MEASURING INSTRUMENTS
Within a UX target, the measuring instrument is a description of the method for
providing values for the particular UX measure. The measuring instrument
is how data are generated; it is the vehicle through which values are measured for the UX measure.
Although you can get creative in choosing your measuring instruments, objective measures are commonly associated with a benchmark task-for example, a time-on-task measure as timed on a stopwatch, or an error rate measure made by counting user errors-and subjective measures are commonly associated with a user questionnaire-for example, the average user rating-scale scores for a specific set of questions.
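As a concrete illustration of how a measuring instrument yields metric values, consider stopwatch timings from a benchmark task and scores from a rating-scale questionnaire. This minimal Python sketch shows the arithmetic; all numbers (participant times, rating scores, and the target levels) are hypothetical, not values from this chapter:

```python
# Hypothetical session data: one value per participant
times_on_task = [125, 98, 143, 110, 132]   # seconds, from a stopwatch
question_scores = [4, 5, 3, 4, 4]          # 1-5 rating-scale answers

# UX metrics: mean time on task (objective), mean rating (subjective)
mean_time = sum(times_on_task) / len(times_on_task)      # 121.6 seconds
mean_score = sum(question_scores) / len(question_scores)  # 4.0

# Hypothetical target levels, as they might appear in a UX target table
time_target = 120.0   # seconds; lower is better
score_target = 4.0    # out of 5; higher is better

print(f"mean time on task: {mean_time:.1f}s -> "
      f"{'met' if mean_time <= time_target else 'not met'}")
print(f"mean rating: {mean_score:.1f} -> "
      f"{'met' if mean_score >= score_target else 'not met'}")
```

Here the objective target is just missed (121.6s against a 120s target) while the subjective one is met, which is exactly the kind of observed-versus-target comparison the last column of the UX target table records.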
Table 10-3
Choosing initial performance and first impression as UX measures
| Work Role: User Class | UX Goal | UX Measure | Measuring Instrument | UX Metric | Baseline Level | Target Level | Observed Results |
| Ticket buyer: Casual new user, for occasional personal use | Walk-up ease of use for new user | Initial user performance | | | | | |
| Ticket buyer: Casual new user, for occasional personal use | Initial customer satisfaction | First impression | | | | | |
For example, we will see that the objective "initial user performance" UX measure in the UX target table for the Ticket Kiosk System is associated with a benchmark task and the "first impression" UX measure is associated with a questionnaire. Both subjective and objective measures and data can be
important for establishing and evaluating user experience coming from a
design.
10.6.1 Benchmark Tasks
According to Reference.com, the term "benchmark" originates in surveying, referring to:
Chiseled horizontal marks that surveyors made in stone structures, into which an angle-iron could be placed to form a "bench" for a leveling rod, thus ensuring that a leveling rod could be accurately repositioned in the same place in future. These marks were usually indicated with a chiseled arrow below the horizontal line.
As a measuring instrument for an objective UX measure, a benchmark task is a representative task that you will have user participants perform in evaluation where you can observe their performance and behavior and take qualitative data (on observations of critical incidents and user experience problems) and quantitative data (user performance data to compare with UX targets). As such, a benchmark task is a "standardized" task that can be
used to compare (as an engineering comparison, not a rigorous scientific
comparison) performance among different users and across different design
versions.
Address designer questions with benchmark tasks and UX targets
As designers work on interaction designs, questions arise constantly. Sometimes the design team simply cannot decide an issue for themselves and they defer it to UX testing ("let the users decide"). Perhaps the team does not agree on a way to treat one design feature, but they have to pick something in order to move forward.
Maybe you do agree on the design for a feature but you are very curious about how it will play out with real users. Perchance you do not believe an input you got in your requirements from contextual analysis but you used it, anyway, and now you want to see if it pans out in the design.
We have suggested that you keep a list of design questions as they came up
in design activities. Now they play a role in setting benchmark tasks to get
feedback from users regarding these questions. Benchmark tasks based on designer issues are often the only way this kind of issue will get considered in evaluation.
Selecting benchmark tasks
In general, of course, the benchmark tasks you choose as measuring
instruments should closely represent tasks real users will perform in a real work
context. Pick tasks where you think or know the design has weaknesses. Avoiding such tasks violates the spirit of UX targets and user experience evaluation, which is about finding user experience problems so that you can fix them, not about proving you are the best designer. If you think of UX targets as a measure of how good you are as a designer, you will have a conflict of interest because you are setting your own evaluation criteria. That is not the point of UX targets at all.
Here are some guidelines for creating effective benchmark tasks.
Create benchmark tasks for a representative spectrum of user
tasks. Choose realistic tasks intended to be used by each user class of a work role across the system. To get the best coverage for your evaluation investment, your choices should represent the cross section of real tasks with respect to frequency of performance and criticality to goals of the users of the envisioned product. Benchmark tasks are also selected to evaluate new features, "edge cases" (usage at extreme conditions), and business-critical and mission-critical tasks. While some of these tasks may not be performed frequently, getting them wrong could cause serious consequences.
Start with short and easy tasks and then increase difficulty progressively. Because your benchmark tasks will be faced by participant users in a sequence, you should consider their presentation order. In most cases, start with relatively easy ones to get users accustomed to the design and feeling comfortable in their role as evaluators. After building user confidence and engagement, especially with the tasks for the "initial performance" UX measure, you can introduce more features, more breadth, variety, complexity, and higher levels of difficulty.
In some cases, you might have your user participants repeat a benchmark task, only using a different task path, to see how users get around in multiple ways.
The more advanced benchmark tasks are also a place to try your creativity
by introducing intervening circumstances. For example, you might lead the user
down a path and then say "At this point, you change your mind and want to do such and such, departing from where you are now."
For our Ticket Kiosk System, maybe start with finding a movie that is currently playing. Then follow with searching for and reserving tickets for a movie that will be showing 20 days from now, and then go on to more complex tasks, such as purchasing concert tickets with seat and ticket type selection.
Include some navigation where appropriate. In real usage, because users usually have to navigate to get to where they will do the operations specific to performing a task, you want to include the need for this navigation even in your earliest benchmark tasks. It tests their knowledge of the fact that they do need to go elsewhere, where they need to go, and how to get there.
Avoid large amounts of typing (unless typing skill is being evaluated). Avoid anything in your benchmark task descriptions that causes large user performance variation not related to user experience in the design. For example, large amounts of typing within a benchmark task can cause large variations in user performance, but the variations will be based on differences in typing skills and can obscure performance differences due to user experience or usability issues.
Match the benchmark task to the UX measure. Obviously, if the UX measure is "initial user performance," the task should be among those a first-time user realistically would face. If the UX measure is about advanced feature usage, then, of course, the task should involve use of that feature to match this requirement. If the UX measure is "long-term usage," then the benchmark task should be faced by the user after considerable practice with the system. For a UX measure of "learnability," a set of benchmark tasks of increasing complexity might be appropriate.
Adapt scenarios already developed for design. Design scenarios clearly represent important tasks to evaluate because they have already been selected as key tasks in the design. However, you must remember to remove information about how to perform the tasks, which is usually abundant in a scenario. See guideline "Tell the user what task to do, but not how to do it" in the next section for more discussion.
Use tasks in realistic combinations to evaluate task flow. To measure user performance related to task flow, use combinations of tasks such as those that will occur together frequently. In these cases, you should set UX targets
for such combinations because difficulties related to user experience that appear during performance of the combined tasks can be different than for the same tasks performed separately. For example, in the Ticket Kiosk System, you may wish to measure user performance on the task thread of searching for
an event and then buying tickets for that event.
As another example, a benchmark task might require users to buy four tickets for a concert under a total of $200 while showing tickets in this price range for the upcoming few days as sold out. This would force users to perform the task of searching through other future concert days, looking for the first available day with tickets in this price range.
Do not forget to evaluate with your power users. Often user experience for power users is addressed inadequately in product testing (Karn, Perry, & Krolczyk, 1997). Do your product business and UX goals include power use by a trained user population? Do they require support for rapid repetition of tasks, complex and possibly very long tasks? Does their need for productivity demand shortcuts and direct commands over interactive hand-holding?
If any of these are true, you must include benchmark tasks that match this kind of skilled and demanding power use. And, of course, these benchmark tasks must be used as the measuring instrument in UX targets that match up with the corresponding user classes and UX goals.
To evaluate error recovery, a benchmark task can begin in an error state. Effective error recovery is a kind of "feature" that designers and evaluators can easily forget to include. Yet no interaction design can guarantee error-free usage, and trying to recover from errors is something most users are familiar with and can relate to. A "forgiving" design will allow users to recover from errors relatively effortlessly. This ability is definitely an aspect of your design that should be evaluated by one or more benchmark tasks.
Consider tasks to evaluate performance in "degraded modes" due to partial equipment failure. In large interconnected, networked systems such as military systems or large commercial banking systems, especially involving multiple kinds of hardware, subsystems can go down. When this happens, will your part of the system give up and die or can it at least continue some of its intended functionality and give partial service in a "degraded mode?" If your application fits this description, you should include benchmark tasks to evaluate the user's perspective of this ability accordingly.
Do not try to make a benchmark task for everything. Evaluation driven by UX targets is only an engineering sampling process. It will not be possible to establish UX targets for all possible classes of users doing all possible tasks.
It is often stated that about 20% of the tasks in an interactive system account for 80% of the usage, and vice versa. Although these figures are obviously folkloric guesses, they carry a grain of truth to guide you in targeting users and tasks when establishing UX targets.
Constructing benchmark task content
Here we list a number of tips and hints to consider when creating benchmark task content.
Remove any ambiguities with clear, precise, specific, and repeatable instructions. Unless resolving ambiguity is what we want users to do as part of the task, we must make the instructions in benchmark task descriptions clear and not confusing. Unambiguous benchmark tasks are necessary for consistent results; we want differences in user performance to be due to differences in users or differences in designs but usually not due to different interpretations of the same benchmark task.
As a subtle example, consider this "add appointment" benchmark task for the "initial performance" UX measure of an interdepartmental event scheduling system: "Schedule a meeting with Dr. Ehrich for a month from today at 10:00 AM in 133 McBryde Hall concerning the HCI research project."
For some users, the phrase "a month from today" can be ambiguous. Why? It can mean, for example, the same date next month, or it can mean exactly four weeks from now, putting it on the same day of the week. If that difference in meaning can affect user task performance, you need to make the wording specific to the intended meaning.
You also want to make your benchmark tasks specific so that participants do not get sidetracked on irrelevant details during testing. If, for example, a "find event" benchmark task is stated simply as "Find an entertainment event for sometime next week," some participants might make it a long, elaborate task, searching around for some "best" combination of event type and date, whereas others would do the minimum and take the first event they see on the screen. To mitigate such differences, add specific information about event selection criteria.
Tell the user what task to do, but not how to do it. This guideline is very important; the success of user experience evaluation based on this task will depend on it. Sometimes we find students in early evaluation exercises
presenting users with task instructions that spell out a series of steps to perform. They should not be surprised when the evaluation session leads to uninteresting results.
The users are just giving a rote performance of the steps as they read them from the benchmark task description. If you wish to test whether your interaction design helps users discover how to do a given task on their own, you must avoid giving any information about how to do it. Just tell them what task to do and let them figure out how.
Example (to do): "Buy two student tickets for available adjacent seats as close to the stage as possible for the upcoming Ben King concert and pay with a credit card."
Example (not to do): "Click on the Special Events button on the home screen; then select More at the bottom of the screen. Select the Ben King concert and click on Seating Options ..."
Example (not to do): "Starting at the Main Menu, go to the Music Menu and set it as a Bookmark. Then go back to the Main Menu and use the Bookmark feature to jump back to the Music Menu."
Do not use words in benchmark tasks that appear specifically in the interaction design. In your benchmark task descriptions, you must avoid using any words that appear in menu headings, menu choices, button labels, icon pop-ups, or any place in the interaction design itself. For example, do not say "Find the first event (that has such and such a characteristic)" when there is
a button in the interaction design labeled "Find." Instead, you should use words such as "Look for ..." or "Locate ..."
Otherwise it is very convenient for your users to use a button labeled "Find" when they are told to "Find" something. It does not require them to think and, therefore, does not evaluate whether the design would have helped them
find the right button on their own in the course of real usage.
Use work context and usage-centered wording, not system-oriented wording. Because benchmark task descriptions are, in fact, descriptions of user tasks and not system functionality, you should use usage-centered words from the user's work context and not system-centered wording. For example, "Find information about xyz" is better than "Submit query about xyz." The former is task oriented; the latter is more about a system view of the task.
Have clear start and end points for timing. In your own mind, be sure that you have clearly observable and distinguishable start and end points for each benchmark task and make sure you word the benchmark task description
to use these end points effectively. These will ensure your ability to measure the time on task accurately, for example.
At evaluation time, not only must the evaluators know for sure when the task is completed, but the participant must know when the task is completed. For purposes of evaluation, the task cannot be considered completed until the user experiences closure.
The evaluator must also know when the user knows that the task has been completed. Do not depend on the user to say when the task is done, even
if you explicitly ask for that in the benchmark task description or user instructions. Therefore, rather than ending task performance with a mental or sensory state (i.e., the user knowing or seeing something), it is better to incorporate a user action confirming the end of the task, as in the (to do) examples that follow.
Example (not to do): "Find out how to set the orientation of the printer paper to 'landscape.'" Completion of this task depends on the user knowing something, and that is not a directly observable state. Instead, you could have the user actually set the paper orientation; this is something you can observe directly.
Example (not to do): "View next week's events." Completion of this task depends on the user seeing something, an action that you may not be able to confirm. Perhaps you could have the user view and read aloud the contents of the first music event next week. Then you know whether and when the user has seen the correct event.
Example (to do): "Find next week's music event featuring Rachel Snow and add it to the shopping cart."
Example (to do): Or, to include knowing or learning how to select seats: "Find the closest available seat to the stage and add it to the shopping cart."
Example (to do): "Find the local weather forecast for tomorrow and read it aloud."
Keep some mystery in it for the user. Do not always be too specific about what the users will see or the parameters they will encounter. Remember that real first-time users will approach your application without necessarily knowing how it works. Sometimes use benchmark tasks that give approximate values for some parameters to look for, letting the rest be up to the user. You can still create the prototype in such a way that there is only one possible "solution" to the task, if you want to avoid different users in the evaluation ending up in different states in the system.
Example (to do): "Purchase two movie tickets to Bee Movie within 1.5 hours of the current time and showing at a theatre within 5 miles of this kiosk location."
Annotate situations where evaluators must ensure pre-conditions for running benchmark tasks. Suppose you write this benchmark task: "Your dog, Mutt, has just eaten your favorite book and you have decided that he is not worth spending money on. Delete your appointment with the vet for Mutt's annual checkup from your calendar."
Every time a user performs this task during evaluation, the evaluator must be sure there is an existing appointment already in your prototype calendar so that each user can find it and delete it. You must attach a note in the form of a rubric (see the next point) to this benchmark task to that effect: a note that will be read and followed much later, in the evaluation activity.
Use "rubrics" for special instructions to evaluators. When necessary or useful, add a "rubrics" section to your benchmark task descriptions as special instructions to evaluators, not to be given to participants in evaluation sessions. Use these rubrics to communicate a heads-up about anything that needs to be done or set up in advance to establish task preconditions, such as an existing event in the kiosk system, work context for ecological validity, or a particular starting state for a task.
Benchmark tasks for addressing designer questions are especially good candidates for rubrics. In a note accompanying your benchmark task you can alert evaluators to watch for user performance or behavior that might shed light on these specific designer questions.
Put each benchmark task on a separate sheet of paper. Yes, we want to save trees but, in this case, it is necessary to present the benchmark tasks to the participant only one at a time. Otherwise, the participant will surely
read ahead, if only out of curiosity, and can become distracted from the task at hand.
If a task has a surprise step, such as a midtask change of intention, that step should be on a separate piece of paper, not shown to the participant initially. To save trees you can cut (with scissors) a list of benchmark tasks so that only one task appears on one piece of paper.
Write a "task script" for each benchmark task. You should write a "task script" describing the steps of a representative or typical way to do the task and include it in the benchmark task document "package." This is just for use by the evaluator and is definitely not given to the participant. The evaluator may not have been a member of the design team and initially may not be too familiar with how to perform the benchmark tasks, and it helps the evaluator to be able to
anticipate a possible task performance path. This is especially useful in cases where the participant cannot determine a way to do the task; then, the evaluation facilitator knows at least one way.
Example: Benchmark Tasks as Measuring Instruments for the Ticket Kiosk System
For the Ticket Kiosk System, the first UX target in Table 10-3 contains an objective UX measure for "Initial user performance." An obvious choice for the corresponding measuring instrument is a benchmark task. Here we need a simple and frequently used task that can be done in a short time by a casual new user in a walk-up ease-of-use situation. An appropriate benchmark task would involve buying tickets to an event. Here is a possible description to give the user participant:
"BT1: Go to the Ticket Kiosk System and buy three tickets for the Monster Truck Pull on February 28 at 7:00 PM. Get three seats together as close to the front as possible. Pay with a major credit card."
In Table 10-4 we add this to the table as the measuring instrument for the first UX target.
Let us say we want to add another UX target for the "initial performance" UX measure, but this time we want to add some variety and use a different benchmark task as the measuring instrument, namely the task of buying a movie ticket. In Table 10-5 we have entered this benchmark task in the second UX target, pushing the "first impression" UX target down by one.
Table 10-4: Choosing "buy special event ticket" benchmark task as measuring instrument for "initial performance" UX measure in first UX target

| Work Role: User Class | UX Goal | UX Measure | Measuring Instrument | UX Metric | Baseline Level | Target Level | Observed Results |
|---|---|---|---|---|---|---|---|
| Ticket buyer: Casual new user, for occasional personal use | Walk-up ease of use for new user | Initial user performance | BT1: Buy special event ticket | | | | |
| Ticket buyer: Casual new user, for occasional personal use | Initial customer satisfaction | First impression | | | | | |
Table 10-5: Choosing "buy movie ticket" benchmark task as measuring instrument for second initial performance UX measure

| Work Role: User Class | UX Goal | UX Measure | Measuring Instrument | UX Metric | Baseline Level | Target Level | Observed Results |
|---|---|---|---|---|---|---|---|
| Ticket buyer: Casual new user, for occasional personal use | Walk-up ease of use for new user | Initial user performance | BT1: Buy special event ticket | | | | |
| Ticket buyer: Casual new user, for occasional personal use | Walk-up ease of use for new user | Initial user performance | BT2: Buy movie ticket | | | | |
| Ticket buyer: Casual new user, for occasional personal use | Initial customer satisfaction | First impression | | | | | |
How many benchmark tasks and UX targets do you need?
As in most things regarding human-computer interaction, it depends. The size and complexity of the system should be reflected in the quantity and complexity of the benchmark tasks and UX targets. We cannot even give you an estimate of a typical number of benchmark tasks.
You have to use your engineering judgment and create enough benchmark tasks for reasonable, representative coverage without overburdening the evaluation process. If you are new to this, we can say that we have often seen a dozen UX targets, but 50 would probably be too many, not worth the cost to pursue in evaluation.
How long should your benchmark tasks be (in terms of time to perform)?
A typical benchmark task takes from a couple of minutes up to 10 or 15 minutes. A mix of short and long tasks is good. Longer sequences of related tasks are needed to evaluate transitions among tasks. Try to avoid really long benchmark tasks because they can be tiring to participants and evaluators during testing.
Ensure ecological validity
The extent to which your evaluation setup matches the user's real work context is called ecological validity (Thomas & Kellogg, 1989). One valid criticism of lab-based user experience testing is that a UX lab can be a sterile environment, not a realistic setting for the user and the tasks. But you can take steps to add ecological validity by asking yourself, as you write your benchmark task descriptions, how the setting can be made more realistic:

- What are the constraints in the user or work context?
- Does the task involve more than one person or role?
- Does the task require a telephone or other physical props?
- Does the task involve background noise?
- Does the task involve interference or interruption?
- Does the user have to deal with multiple simultaneous inputs, for example, multiple audio feeds through headsets?
As an example for a task that might be triggered by a telephone call, instead of writing your benchmark task description on a piece of paper, try calling the participant on a telephone with a request that will trigger the desired task. Rarely do task triggers arrive written on a piece of paper someone hands you. Of course, you will have to translate the usual boring imperative statements of the benchmark task description to a more lively and realistic dialogue: "Hi, I am Fred Ferbergen and I have an appointment with Dr. Strangeglove for a physical exam tomorrow, but I have to be out of town. Can you change my appointment to next week?"
Telephones can be used in other ways, too, to add realism to work context.
A second telephone ringing incessantly at the desk next door or someone talking loudly on the phone next door can add realistic task distraction that you would not get from a "pure" lab-based evaluation.
Example: Ecological Validity in Benchmark Tasks for the Ticket Kiosk System
To evaluate use of the Ticket Kiosk System to manage the work activity of ticket buying, you can make good use of physical prototypes and representative locations. By this we mean building a touchscreen display into a cardboard or wooden kiosk structure and placing it in the hallway of a relatively busy work area. Users will be subject to the gawking and questions of curiosity seekers. Having co-workers join the kiosk queue will add extra realism.
10.6.2 User Satisfaction Questionnaires
As a measuring instrument for a subjective UX measure, a questionnaire related
to various user interaction design features can be used to determine a user's
satisfaction with the interaction design. Measuring a user's satisfaction provides
a subjective, but still quantitative, UX metric for the related UX measure.
As an aside, we should point out that objective and subjective measures are not always orthogonal.
As an example of a way they can intertwine, user satisfaction can actually affect user performance over a long period of time. The better users like
the system, the more likely they are to experience good performance with it over the long term. In the following examples we use the QUIS questionnaire (description in Chapter 12), but there are other excellent choices, including the System Usability Scale or SUS (description in Chapter 12).
Example: Questionnaire as Measuring Instrument for the Ticket Kiosk System
If you think the first two benchmark tasks (buying tickets) make a good foundation for assessing the "first-impression" UX measure, then you can specify that a particular user satisfaction questionnaire or a specific subset thereof be administered following those two initial tasks, stipulating it as the measuring instrument in the third UX target of the growing UX target table, as we have done in Table 10-6.
Example: Goals, Measures, and Measuring Instruments
Before moving on to UX metrics, in Table 10-7 we show some examples of the close connections among UX goals, UX measures, and measuring instruments.
Table 10-6: Choosing questionnaire as measuring instrument for first-impression UX measure

| Work Role: User Class | UX Goal | UX Measure | Measuring Instrument | UX Metric | Baseline Level | Target Level | Observed Results |
|---|---|---|---|---|---|---|---|
| Ticket buyer: Casual new user, for occasional personal use | Walk-up ease of use for new user | Initial user performance | BT1: Buy special event ticket | | | | |
| Ticket buyer: Casual new user, for occasional personal use | Walk-up ease of use for new user | Initial user performance | BT2: Buy movie ticket | | | | |
| Ticket buyer: Casual new user, for occasional personal use | Initial customer satisfaction | First impression | Questions Q1-Q10 in the QUIS questionnaire | | | | |
Table 10-7: Close connections among UX goals, UX measures, and measuring instruments

| UX Goal | UX Measure | UX Metric |
|---|---|---|
| Ease of first-time use | Initial performance | Time on task |
| Ease of learning | Learnability | Time on task or error rate, after a given amount of use and compared with initial performance |
| High performance for experienced users | Long-term performance | Time and error rates |
| Low error rates | Error-related performance | Error rates |
| Error avoidance in safety-critical tasks | Task-specific error performance | Error count, with strict target levels (much more important than time on task) |
| Error recovery performance | Task-specific time performance | Time on recovery portion of the task |
| Overall user satisfaction | User satisfaction | Average score on questionnaire |
| User attraction to product | User opinion of attractiveness | Average score on questionnaire, with questions focused on the effectiveness of the "draw" factor |
| Quality of user experience | User opinion of overall experience | Average score on questionnaire, with questions focused on quality of the overall user experience, including specific points about your product that might be associated most closely with emotional impact factors |
| Overall user satisfaction | User satisfaction | Average score on questionnaire, with questions focusing on willingness to be a repeat customer and to recommend the product to others |
| Continuing ability of users to perform without relearning | Retainability | Time on task and error rates re-evaluated after a period of time off (e.g., a week) |
| Avoid having user walk away in dissatisfaction | User satisfaction, especially initial satisfaction | Average score on questionnaire, with questions focusing on initial impressions and satisfaction |
10.7 UX METRICS
A UX metric describes the kind of value to be obtained for a UX measure. It states
what is being measured. There can be more than one metric for a given measure. As an example from the software engineering world, software complexity is a
measure; one metric for the software complexity measure (one way to obtain values for the measure) is "counting lines of code."
Most commonly, UX metrics are objective, performance oriented, and taken while the participant is doing a benchmark task. Other UX metrics can be subjective, based on a rating or score computed from questionnaire results. Typical objective UX metrics include time to complete a task [1] and number of errors made by the user. Others include frequency of help or documentation use; time spent in errors and recovery; number of repetitions of failed commands (what are users trying to tell us by repeating an action that did not work before?); and the number of commands, mouse clicks, or other user actions needed to perform a task.
If you are feeling adventurous, you can use a count of the number of times the user expresses frustration or satisfaction (the "aha and cuss count") during his or her first session as an indicator of the initial impression of the interaction design. Because the number of remarks is directly related to the length of the session, plan your levels accordingly, or set your levels as a count per unit time, such as comments per minute, to factor out time differences. Admittedly, this measuring instrument is rather participant dependent (how demonstrative a participant feels during a session, whether a participant is generally a complainer, and so on), but this metric can produce some interesting results.
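As an illustration, the per-minute normalization is simple arithmetic; the session log and the "aha"/"cuss" coding scheme below are made-up assumptions for the sketch, not data from the text:

```python
# Hypothetical observer log of emotional remarks during one session.
# "cuss" marks a frustration remark, "aha" a satisfaction remark.
remarks = ["cuss", "aha", "cuss", "cuss", "aha"]
session_minutes = 25.0

frustration_count = sum(1 for r in remarks if r == "cuss")
satisfaction_count = sum(1 for r in remarks if r == "aha")

# Normalizing by session length factors out differences in session duration,
# so sessions of different lengths can be compared.
frustration_per_minute = frustration_count / session_minutes
satisfaction_per_minute = satisfaction_count / session_minutes
```

Either raw counts or per-minute rates can serve as the metric; the rate form is what lets you set one target level across sessions of varying length.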
Typically, subjective UX metrics will represent the kind of numeric outcome you want from a questionnaire, usually based on simple arithmetic statistical measures such as the numeric average. Remember that you are going only for an engineering indicator of user experience, not for statistical significance.
Interestingly, user perceptions of elapsed time, captured via a questionnaire or post-session interview, can sometimes be an important UX measure. We know of such a case that occurred during evaluation of a new software installation procedure. The old installation procedure required the user to perform repeated disk (CD-ROM) swaps during installation, while the new installation procedure required only one swap. Although the new procedure took less time, users thought it took them longer because they were not kept busy swapping disks.
And do not overlook a combination of measures for situations where you have performance trade-offs. If you specify your UX metric as some function, such as a sum or an average, of two other performance-related metrics, for example, time on task and error rate, you are saying that you are willing to give up some performance in one area if you get more in the other.

[1] Although time on task often makes a useful UX metric, it clearly is not appropriate in some cases. For example, if task performance time is affected by factors beyond the user's control, then time on task is not a good measure of user performance. This exception includes cases of long and/or unpredictable communication and response-time delays, such as might be experienced in some Website usage.
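Such a trade-off metric can be sketched as a weighted average of the two normalized measures. The weights and normalization constants below are illustrative assumptions, not values prescribed by the text:

```python
def combined_score(time_on_task_s, error_count,
                   time_weight=0.5, error_weight=0.5,
                   time_norm=300.0, error_norm=5.0):
    """Blend time on task and error count into one score (lower is better).

    Each measure is scaled against an assumed nominal value (time_norm
    seconds, error_norm errors) so the two are comparable before weighting.
    """
    return (time_weight * (time_on_task_s / time_norm)
            + error_weight * (error_count / error_norm))

# A slower-but-cleaner run can score as well as a faster-but-sloppier one,
# which is exactly the trade-off this kind of metric deliberately permits.
fast_but_sloppy = combined_score(150, 4)  # 0.25 + 0.40 = 0.65
slow_but_clean = combined_score(300, 1)   # 0.50 + 0.10 = 0.60
```

Adjusting the weights is how you state which side of the trade-off matters more to your UX goals.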
We hope you will explore many other possibilities for UX metrics, extending beyond what we have mentioned here, including:

- percentage of task completed in a given time
- ratio of successes to failures
- time spent moving the cursor (this would have to be measured using software instrumentation, but would give information about the efficiency of such physical actions, necessary for some specialized applications)
- for visibility and other issues, fixations on the screen, cognitive load as indicated by correlation with pupil diameter, and so on, using eye tracking
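The first two of these can be computed from simple session outcome data; the outcomes and subtask counts below are invented for illustration:

```python
# Made-up outcomes: whether each participant completed the task.
task_outcomes = [True, True, False, True, False, True]

successes = sum(task_outcomes)
failures = len(task_outcomes) - successes
# Ratio of successes to failures (guard against division by zero).
success_failure_ratio = successes / failures if failures else float("inf")

# Percentage of task completed in a given time, computed from the number
# of subtask checkpoints a participant reached before a time cutoff.
subtasks_done, subtasks_total = 7, 10
percent_complete = 100.0 * subtasks_done / subtasks_total
```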
Finally, be sure you match up your UX measures, measuring instruments, and metrics to make sense in a UX target. For example, if you plan to use a questionnaire in a UX target, do not call the UX measure "initial performance." A questionnaire does not measure performance; it measures user satisfaction or opinion.
Example: UX Metrics for the Ticket Kiosk System
For the initial performance UX measure in the first UX target of Table 10-6, as already discussed in the previous section, the length of time to buy a special event ticket is an appropriate value to measure. We specify this by adding "time on task" as the metric in the first UX target of Table 10-8.
Table 10-8: Choosing UX metrics for UX measures

| Work Role: User Class | UX Goal | UX Measure | Measuring Instrument | UX Metric | Baseline Level | Target Level | Observed Results |
|---|---|---|---|---|---|---|---|
| Ticket buyer: Casual new user, for occasional personal use | Walk-up ease of use for new user | Initial user performance | BT1: Buy special event ticket | Average time on task | | | |
| Ticket buyer: Casual new user, for occasional personal use | Walk-up ease of use for new user | Initial user performance | BT2: Buy movie ticket | Average number of errors | | | |
| Ticket buyer: Casual new user, for occasional personal use | Initial customer satisfaction | First impression | Questions Q1-Q10 in the QUIS questionnaire | Average rating across users and across questions | | | |
As a different objective performance measure, you might measure the number of errors a user makes while buying a movie ticket. This was chosen as the value to measure in the second UX target of Table 10-8. You will often want to measure both of these metrics during a participant's single performance of the same task. A participant does not, for example, need to perform one "buy ticket" task while you time performance and then a different (or repeated) "buy ticket" task while you count errors.
Finally, for the UX metric in the third UX target of Table 10-8, the subjective UX target for the first impression UX measure, let us use the simple average of the numeric ratings given across all users and across all the questions for which ratings were given (i.e., Q1 to Q10).
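The grand mean described here can be sketched over a small ratings matrix (one row per participant, one column per question); the scores below are invented for the sketch:

```python
# Hypothetical ratings for Q1-Q10 from three participants, on a 1-10 scale.
ratings = [
    [8, 7, 9, 6, 8, 7, 8, 9, 7, 8],
    [6, 7, 7, 8, 6, 7, 8, 7, 7, 7],
    [9, 8, 8, 9, 9, 8, 7, 8, 9, 8],
]

# Average across all users and all questions: flatten, then take the mean.
all_scores = [s for participant in ratings for s in participant]
average_rating = sum(all_scores) / len(all_scores)
```

This single number is the value compared against the baseline and target levels in the UX target table.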
10.8 BASELINE LEVEL
The baseline level is the benchmark level of the UX metric; it is the "talking point" level against which other levels are compared. It is often the level that has been measured for the current version of the system (automated or manual). For example, the Ticket Kiosk System might be replacing the ticket counter in the ticket office.
The baseline level for time on task can be an average of measured times to do the task in person over the ticket counter. That might be quite different from what you expect users to achieve with the new system, but it is a stake in the ground, something for comparison. Measuring a baseline level also helps ensure that the UX metric is, in fact, measurable.
10.9 TARGET LEVEL
A UX target is a quantitative statement of an aimed-at or hoped-for value for a UX metric. Thus, a UX target is an operationally defined criterion for success of user experience stemming from an interaction design, an engineering judgment about the quality of user experience expected from an interactive system.
The target level for a UX metric is the value indicating attainment of user experience success. It is a quantification of the UX goal for each specific
UX measure and UX metric. UX metrics for which you have not yet achieved the target levels in evaluation serve as focal points for improvement by designers.
Just barely meeting a target level is the minimum acceptable performance for any UX measure; it technically meets the UX goals, but only barely. In theory, you hope to do better than the target level on most UX measures; in reality, you are usually happy to pass regardless of the margin.
Because "passing" the user experience test means meeting all your target levels simultaneously, you have to ensure that the target levels for all UX measures in the entire table are, in fact, simultaneously attainable. That is, do not build in trade-offs of the kind where meeting one target level makes it much more difficult to meet another related target level.
So how do you come up with reasonable values for your target levels? As a general rule of thumb, a target level is usually set to be an improvement over the corresponding baseline level. Why build a new system if it is not going to be better? Of course, improved user performance is not the only motivation for building a new system; increased functionality or just meeting user needs at a higher level in the design can also be motivating factors. However, the focus here is on improving user experience, which often means improved user performance and satisfaction.
For initial performance measures, you should set target levels that allow enough time, for example, for unfamiliar users to read menus and labels, think a bit, and look around each screen to get their bearings. So do not use levels for initial performance measures that assume users are familiar with the design.
10.10 SETTING LEVELS
The baseline level and target level in the UX target table are key to quantifying
user experience metrics. But sometimes setting baseline and target levels can be a challenge. Setting them requires determining what level of user performance and user experience the system is to support.
Obviously, level values are often "best guesses," but with practice UX people become quite skilled at setting reasonable and credible baseline and target levels. This is not an exact science; it is an engineering endeavor, and you get better at it with experience.
Among the yardsticks you can use to set both baseline and target levels are:

- an existing system or a previous version of the new system being designed
- competing systems, such as those with a large market share or with a widely acclaimed user experience
What if there are no existing or competing systems? Be creative and use your problem-solving skills. Look at manual ways of doing things and adjust for automation. For example, if there were no calendar systems, use a paper calendar. Start with some good educated engineering estimates and improve with experience from there.
Although it may not always be explicitly indicated in a UX target table, the baseline and target levels shown are the mean over all participants of the corresponding measure. That is, the levels shown do not have to be achieved by every participant in the formative evaluation sessions. So, for example, if we specify a target level of four errors for benchmark task BT2 in the second UX target of Table 10-8 as the worst acceptable level of performance, there must be no more than an average of four errors, as averaged across all participants who perform the "buy movie ticket" task.
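The averaging rule means an individual participant may exceed the target as long as the group mean does not. A small sketch with invented error counts:

```python
# Made-up error counts for five participants on the "buy movie ticket" task.
errors_per_participant = [6, 2, 5, 1, 4]
target_level = 4  # worst acceptable *average* number of errors

mean_errors = sum(errors_per_participant) / len(errors_per_participant)
meets_target = mean_errors <= target_level
# One participant made 6 errors, yet the target is met: the mean is 3.6.
```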
Example: Baseline Level Values for the Ticket Kiosk System

To determine values for the first two UX target baseline levels for the Ticket Kiosk System, we can have someone perform the benchmark tasks for buying a special event ticket and a movie ticket using MUTTS. Suppose that buying a ticket for a special event takes about 3 minutes. If so, this value, 3 minutes, makes a plausible baseline level for the first UX target in Table 10-9. Because most people are already experienced with ticket offices, this value is not really for initial performance, but it gives some idea of that value.
To set a baseline value for the second UX target, for buying a movie ticket, we can assume that almost no one should make any errors doing this at a ticket counter, so let us set the baseline level at less than 1, as in Table 10-9.
To establish a baseline value for the first impression UX measure in the third UX target, we could administer the questionnaire to some users of MUTTS. Let us say we have done that and got an average score of 7.5 out of 10 for the first impression UX measure (a value we put in Table 10-9).

Table 10-9: Setting baseline levels for UX measures

| Key User Role: User Class | UX Goal | UX Measure | Measuring Instrument | UX Metric | Baseline Level | Target Level | Observed Results |
|---|---|---|---|---|---|---|---|
| Ticket buyer: Casual new user, for occasional personal use | Walk-up ease of use for new user | Initial user performance | BT1: Buy special event ticket | Average time on task | 3 minutes | | |
| Ticket buyer: Casual new user, for occasional personal use | Walk-up ease of use for new user | Initial user performance | BT2: Buy movie ticket | Average number of errors | < 1 | | |
| Ticket buyer: Casual new user, for occasional personal use | Initial customer satisfaction | First impression | Questions Q1-Q10 in questionnaire XYZ | Average rating across users and across questions | 7.5/10 | | |

Example: Target Level Values for the Ticket Kiosk System

In Table 10-10, for the first initial performance UX measure, let us set the target level to 2.5 minutes. In the absence of anything else to go on, this is a reasonable choice with respect to our baseline level of 3 minutes. We enter this value into the "Target Level" column for the first UX target of the UX target table in Table 10-10.
With a baseline level of less than one error for the "buy movie ticket" task, it would again be tempting to set the target level at zero, but that does not allow anyone ever to commit an error. So let us retain the existing level, < 1, as the target level for error rates, as entered in the second UX target of Table 10-10.
For the first impression UX measure, let us be somewhat conservative and set a target level of a mean score of 8 out of 10 on the questionnaire. Surely 80% is passing in most anyone's book or course. This goes in the third UX target of Table 10-10.
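The "average rating across users and across questions" metric for this UX target is just the grand mean of all individual questionnaire responses. A minimal sketch of that arithmetic, using made-up illustrative scores rather than real data from the book:

```python
# Hypothetical questionnaire responses: one list per user,
# one 0-10 score per question Q1-Q10.
responses = [
    [8, 7, 9, 8, 8, 7, 9, 8, 8, 9],  # user 1
    [7, 8, 8, 9, 7, 8, 8, 7, 9, 8],  # user 2
    [9, 8, 8, 8, 9, 8, 7, 8, 8, 9],  # user 3
]

# Grand mean: average across all users and all questions at once.
all_scores = [score for user in responses for score in user]
grand_mean = sum(all_scores) / len(all_scores)

# Compare against the target level of 8/10.
meets_target = grand_mean >= 8.0
print(f"mean = {grand_mean:.2f}, meets 8/10 target: {meets_target}")
# → mean = 8.07, meets 8/10 target: True
```

Averaging across questions and users in one step is the simplest reading of the metric; averaging per user first and then across users gives the same result when every user answers every question.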
Just for illustration purposes, we have added a few additional UX targets to Table 10-10. The UX target in the fourth row is for a regular music patron's task of buying a concert ticket using a frequent-customer discount coupon. The UX measure for this one is to measure experienced usage error rates using the "Buy concert ticket" benchmark task, with a target level of 0.5 (average).
Additional benchmark tasks used in the last two UX targets of the table are:
BT5: You want to buy a ticket for the movie Almost Famous for between 7:00 and 8:00 PM tonight at a theater within a 10-minute walk from the Metro station. First check to
be sure this movie is rated PG-13 because you will be with your 15-year-old son. Then
Table 10-10
Setting target levels for UX metrics

Work Role: User Class | UX Goal | UX Measure | Measuring Instrument | UX Metric | Baseline Level | Target Level | Observed Results
Ticket buyer: Casual new user, for occasional personal use | Walk-up ease of use for new user | Initial user performance | BT1: Buy special event ticket | Average time on task | 3 min, as measured at the MUTTS ticket counter | 2.5 min |
Ticket buyer: Casual new user, for occasional personal use | Walk-up ease of use for new user | Initial user performance | BT2: Buy movie ticket | Average number of errors | < 1 | < 1 |
Ticket buyer: Casual new user, for occasional personal use | Initial customer satisfaction | First impression | Questions Q1-Q10 in questionnaire XYZ | Average rating across users and across questions | 7.5/10 | 8/10 |
Ticket buyer: Frequent music patron | Accuracy | Experienced usage error rate | BT3: Buy concert ticket | Average number of errors | < 1 | < 1 |
Casual public ticket buyer | Walk-up ease of use for new user | Initial user performance | BT4: Buy Monster Truck Pull tickets | Average time on task | 5 min (online system) | 2.5 min |
Casual public ticket buyer | Walk-up ease of use for new user | Initial user performance | BT4: Buy Monster Truck Pull tickets | Average number of errors | < 1 | < 1 |
Casual public ticket buyer | Initial customer satisfaction | First impression | QUIS questions 4-7, 10, 13 | Average rating across users and across questions | 6/10 | 8/10 |
Casual public ticket buyer | Walk-up ease of use for user with a little experience | Just post-initial performance | BT5: Buy Almost Famous movie tickets | Average time on task | 5 min (including review) | 2 min |
Casual public ticket buyer | Walk-up ease of use for user with a little experience | Just post-initial performance | BT6: Buy Ben Harper concert tickets | Average number of errors | < 1 | < 1 |
go to the reviews for this movie (to show us you can find the reviews, but you do not have to spend time reading them now) and then buy two general admission tickets.
BT6: Buy three tickets to the Ben Harper concert on any of the nights on the weekend of September 29th-October 1st. Get the best seats you can for up to $50 per ticket. Print out the directions for taking the Metro to the concert.
10.11 OBSERVED RESULTS
The final column in Table 10-10 is for observed results, a space reserved for recording values measured while observing users performing the prescribed tasks during formative evaluation sessions. As part of the UX target table, this column
affords direct comparisons between specified levels and actual results of testing.
Because you typically will have observed results from more than one user, you can either record multiple values in a single observed results column or, if desired, add more columns for individual results and use this column for the average of the observed values. If you maintain your UX target tables in spreadsheets, as we recommend, managing observed data and results is easier (Chapter 16).
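The comparison the spreadsheet supports can be sketched in a few lines: average the participants' observed values and compare that average against the specified target level. The task times and target below are hypothetical, not taken from the tables in this chapter:

```python
# Hypothetical observed task times (minutes) for one benchmark
# task, one value per participant in the evaluation session.
observed_times = [2.8, 3.1, 2.2, 2.6]

target_level = 2.5  # minutes, from the UX target table's "Target level" column

# The "Observed results" cell holds the average across participants.
observed_average = sum(observed_times) / len(observed_times)

# Direct comparison between the specified level and the actual result.
if observed_average <= target_level:
    print(f"Target met: {observed_average:.2f} min <= {target_level} min")
else:
    print(f"Target not met: {observed_average:.2f} min > {target_level} min")
```

For a time-on-task or error-rate metric, lower is better, so the comparison is "observed average at or below target"; a satisfaction rating would flip the direction of the test.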
10.12 PRACTICAL TIPS AND CAUTIONS FOR CREATING UX TARGETS
Here we present some hints about filling out your UX target table, some of which were adapted from Whiteside, Bennett, and Holtzblatt (1988). These suggestions are not intended to be requirements, but rather to show the range of possibilities.
Are user classes for each work role specified clearly enough?
User class definitions are important in identifying representative users who will serve as participants in evaluation sessions (Chapter 15). As already mentioned, the characteristics of users playing a work role may affect the setting of UX targets, resulting in different measuring instruments and UX metrics for different user classes while performing the same task. If there are several user classes for which different UX targets are appropriate, you will address them with separate and different UX targets in the table.
Have you taken into account potential trade-offs among user groups? For example, you must consider the trade-offs between learnability for new users and the possibility that "help" for these new users might get in the way of power performance by experienced users.
Are the values for the various levels reasonable? This may be one of the hardest questions to answer. In fact, the first few times you create UX targets, you will probably be making a lot of guesses. You do get better at it with practice.
Be prepared to adjust your target level values based on initial observed results. Sometimes in evaluation you will observe that users perform dramatically differently than you expected when you set the levels. These cases can help you refine the target levels in your UX targets, too. While it is possible to set the levels too leniently, it is also possible to make your initial UX targets too demanding, especially in early cycles of iteration.
When your observed results are much worse than specified levels, there typically are two possibilities. In the first (and preferable) case, the process of evaluation and refinement is working just as it should; the UX targets are reasonable, and evaluation has shown that there are serious UX problems with the design. When these problems are solved, the design will meet the specified UX goals.
In the second case, the UX targets have been set for an unrealistically high level of expectation, and no matter how much you improve the design and its user experience, the UX goals might never be met. Sometimes, for example, a task simply takes longer than its designers first anticipated, even with a good design.
If you are not meeting your levels, especially after a few rounds of iteration, you will need to assess them to see whether they are simply too difficult to attain or whether the design just needs a great deal of work. Determining which of these cases you have is, of course, not always easy. You will have to rely on your knowledge of interaction design, experience, intuition, and ultimately your best judgment to decide where the problem lies-with the UX target levels or with the design.
Remember that the target level values are averages. So do not set impossible average goals such as zero errors.
How well do the UX measures capture the UX goals for the design? Again, this can be elusive. It is entirely possible to establish UX targets that have little or nothing to do with assessing the real user experience of a design. For example, a benchmark task might be very non-representative, leading to design improvements in parts of the application that will rarely be used.
It is equally easy to inadvertently omit UX targets that are critical to assessing user experience. Again, with experience, you will gain a better understanding of when you have established UX measures and levels that capture the user experience of the design.
What if the design is in its early stages and you know the design will change significantly in the next version, anyway? Will it be a waste of time to create benchmark tasks and UX targets if the system is expected to undergo major changes in the near future? A UX representative of one project team we worked with sent email saying "We spent 2 days evaluating the XXX tool
(first version) only to discover that the more recent version was significantly different and many of the issues we identified were no longer valid."
Our answer: As long as the tasks have not changed significantly, as long as users would still do those same tasks with the new design (even if they are now done in a different way), your work in creating benchmark tasks and UX targets should not have been wasted. Benchmark tasks and level settings are supposed to be independent of the design details.
What about UX goals, metrics, and targets for usefulness and emotional impact? Quantitative measures and metrics are more limited for UX goals about usefulness and emotional impact, including phenomenological aspects, social or cultural impact, and value-sensitive design. The principal measuring instruments for these measures are questionnaires and, possibly, post-session interviews.
And, of course, there are experimental data collection techniques for detecting and/or measuring emotional responses (Chapter 12). You can use the number of smiles per interaction as a UX metric if you can detect, and therefore count, smiles. Phenomenological aspects require longer-term measures (also in Chapter 12).
Questionnaires and interviews can also be used to assess branding issues. For example, you can ask if the user thinks this product is "on-brand" or you
can show two variations and ask which is better associated with the brand and why. Although this kind of data collection leans more toward qualitative, you can find ways to quantify it, if desired.
10.13 HOW UX TARGETS HELP MANAGE THE USER EXPERIENCE ENGINEERING PROCESS
First of all, the end of evaluation activity in each iteration of the lifecycle is a good time to evaluate your benchmark task descriptions and UX targets. How well did they work for you? If you think they should be improved, do it now.
Also, after each iteration of evaluation, we have to decide whether to continue iterating. But we cannot keep iterating forever. So how do we know when to stop? We tell how the project manager can use the evaluation results in conjunction with UX targets to decide when to stop iterating in Chapter 16.
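The underlying check behind that decision is simple enough to sketch: iteration continues while any observed result misses its target level (the full decision, with its cost and schedule considerations, is a management judgment, as Chapter 16 discusses). The field names and data below are our own illustrative assumptions, not from this book:

```python
# Hypothetical UX targets, each pairing a target level with the
# observed average; "lower_is_better" distinguishes time and error
# metrics from satisfaction-rating metrics.
ux_targets = [
    {"measure": "time on task (min)", "target": 2.5, "observed": 2.4, "lower_is_better": True},
    {"measure": "errors per task",    "target": 1.0, "observed": 0.7, "lower_is_better": True},
    {"measure": "first impression",   "target": 8.0, "observed": 7.6, "lower_is_better": False},
]

def target_met(t):
    # Times and error rates must come in at or below target;
    # ratings must come in at or above target.
    if t["lower_is_better"]:
        return t["observed"] <= t["target"]
    return t["observed"] >= t["target"]

# A purely mechanical stopping rule: keep iterating while any target is unmet.
unmet = [t["measure"] for t in ux_targets if not target_met(t)]
keep_iterating = bool(unmet)
print("Keep iterating:", keep_iterating, "- unmet:", unmet)
```

In practice this mechanical rule is only one input: as noted above, persistently unmet targets may mean the targets themselves need revising rather than more design iterations.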
10.14 AN ABRIDGED APPROACH TO UX GOALS, METRICS, AND TARGETS
As in most of the other process chapters, the process here can be abridged, trading completeness for speed and lower cost. Possible steps of increasing abridgement include:
- Eliminate objective UX measures and metrics, but retain UX goals and quantitative subjective measures. Metrics obtained with questionnaires are easier and far less costly than metrics requiring empirical testing, lab-based or in the field.
- Eliminate all UX measures and metrics and UX target tables. Retain benchmark tasks as a basis for user task performance and behavior to observe in limited empirical testing for gathering qualitative data (UX problem data).
- Ignore UX goals, metrics, and targets altogether and use only rapid evaluation methods later, producing only qualitative data.
Prototyping 11
Objectives
After reading this chapter, you will:
1. Be able to articulate what prototyping is and why it is needed
2. Understand how to choose the appropriate depth and breadth, level of fidelity, and amount of interactivity of prototypes
3. Understand special types of prototypes, such as physical mockups and Wizard of Oz prototypes
4. Understand the appropriate type of prototype for a given stage of design evolution
5. Understand the role of prototypes in the transition to a product
6. Know how to make effective paper prototypes
11.1 INTRODUCTION
11.1.1 You Are Here
We begin each process chapter with a "you are here" picture of the chapter topic in the context of the overall Wheel lifecycle template; see Figure 11-1. Although prototyping is a kind of implementation, design and prototyping in practice
often overlap and occur simultaneously. A prototype in that sense is a design
representation.
So, as you create the design and its representation, you are creating the prototype. Therefore, although in Figure 11-1 it might seem that prototyping is limited to a particular place within a cycle of other process activities, like all other activities, prototyping does not happen only at some point in a rigid sequence.
11.1.2 A Dilemma, and a Solution
Have you ever rushed to deliver a product version without enough time to check it out? Then realized the design needed fixing? Sorry, but that ship has already sailed. The sooner you fail and understand why, the sooner you can
succeed. As Frishberg (2006) tells us, "the faster you go, the sooner you know." If only you had made some early prototypes to work out the design changes before
Figure 11-1
You are here; the chapter on prototyping in the context of the overall Wheel lifecycle template.
releasing it! In this chapter we show you how to use prototyping as a hatching oven for partially baked designs within the overall UX lifecycle process.
Traditional development approaches such as the waterfall method were heavyweight processes that required enormous investment of time, money, and personnel. Those linear development processes have tended to force a commitment to significant amounts of design detail without any means for visualizing and evaluating the product until it was too late to make any major changes.
Construction and modification of software by ordinary programming techniques in the past have been
notoriously expensive and time-consuming activities. Little wonder there have been so many failed software development projects (Cobb, 1995; The Standish Group, 1994, 2001)-wrong requirements, not meeting requirements, imbalanced emphasis within functionality, poor user experience, and so much customer and user dissatisfaction.
In thinking about how to overcome these problems, we are faced with a dilemma. The only way to be sure that your system design is the right design and that your design is the best it can be is to evaluate it with real users. However, at the beginning you have a design but no system yet to evaluate. But after it is implemented, changes are much more difficult.
Enter the prototype. A prototype gives you something to evaluate before you
have to commit resources to build the real thing. Because a prototype provides an early version of the system that can be constructed much faster and at less expense, something to stand in for the real system so you can evaluate and inform refinement of the design, prototyping has become a principal technique of the iterative lifecycle.
Universality of prototyping
The idea of prototyping is timeless and universal. Automobile designers build and test mockups, architects and sculptors make models, circuit designers use "bread-boards," artists work with sketches, and aircraft designers build and fly
experimental designs. Even Leonardo da Vinci and Alexander Graham Bell made prototypes.
Thomas Edison sometimes made 10,000 prototypes before getting just the right design. In each case the concept of a prototype was the key to affording the design team and others an early ability to observe something about the final product-evaluating ideas, weighing alternatives, and seeing what works and what does not.
Alfred Hitchcock, master of dramatic dialogue design, is known for using prototyping to refine the plots of his movies. Hitchcock would tell variations of stories at cocktail parties and observe reactions of his listeners. He would experiment with various sequences and mechanisms for revealing the story line. Refinement of the story was based on listener reactions as an evaluation criterion. Psycho is a notable example of the results of this technique.
Scandinavian origins
Like a large number of other parts of this overall lifecycle process, the origins of prototyping, especially low-fidelity prototyping, go back to the Scandinavian work activity theory research and practice of Ehn, Kyng, and others (Bjerknes, Ehn, & Kyng, 1987; Ehn, 1988) and participatory design work (Kyng, 1994). These formative works emphasized the need to foster early and detailed communication about design and participation in understanding the requirements for that design.
11.2 DEPTH AND BREADTH OF A PROTOTYPE
The idea of prototypes is to provide a fast and easily changed early view of the envisioned interaction design. To be fast and easily changed, a prototype must be something less than the real system. The choices for your approach
to prototyping are about how to make it less. You can make it less by focusing on just the breadth or just the depth of the system or by focusing on less than full fidelity of details in the prototype (discussed later in this chapter).
11.2.1 Horizontal vs. Vertical Prototypes
Horizontal and vertical prototypes represent the difference between slicing the system by breadth and by depth in the features and functionality of a prototype (Hartson & Smith, 1991). Nielsen (1987) also describes types of prototypes based on how a target system is sliced in the prototype. In his usability
Figure 11-2
Horizontal and vertical prototyping concepts, from Nielsen (1993), with permission.
engineering book (1993), Nielsen illustrates the relative concepts of horizontal and vertical prototyping, which we show as Figure 11-2.
A horizontal prototype is very broad in the features it incorporates, but offers less depth in its coverage of functionality. A vertical prototype contains as much depth of functionality as possible in the current state of progress, but only for a narrow breadth of features.
A horizontal prototype is a good place to start with your prototyping,
as it provides an overview on which you can base a top-down approach.
A horizontal prototype is effective in demonstrating the product concept and
for conveying an early product overview to managers, customers, and users (Kensing & Munk-Madsen, 1993) but, because of the lack of details in depth, horizontal prototypes usually do not support complete workflows, and user experience evaluation with this kind of prototype is generally less realistic.
A horizontal prototype can also be used to explore how much of the proposed functionality will really be used by a certain class of users: you expose typical users to the breadth of that functionality and get feedback on which functions they would or would not use.
A vertical prototype allows testing a limited range of features but those