Let's delve into the concepts of interactions and interaction design, along with related areas, drawing from the provided sources.
Interactions and Interaction Design:
An interaction is broadly defined as a transaction between two entities, often involving an exchange of information, but can also include goods or services.
Interaction design is the practice of designing for the possibility of interaction between people, machines, and systems in various combinations. It is what makes technology usable, useful, and fun. Poor interaction design leads to frustration and difficulty in using products and services.
Bill Moggridge, a cofounder of IDEO, coined the term "interaction design" to describe this field that connects people through the products they use.
Interaction design is about behavior, making digital artifacts, environments, and systems behave in defined ways and provide feedback based on user actions.
There are three major schools of thought when defining interaction design:
A technology-centered view: Interaction designers make technology, especially digital technology, useful, usable, and pleasurable.
A behaviorist view: Interaction design focuses on defining the behavior of artifacts, environments, and systems, emphasizing functionality and feedback.
The Social Interaction Design view: This view considers interaction design as inherently social, facilitating communication between humans through products, with technology being less central.
Differences Between Related Design Disciplines:
User Experience (UX) Design: UX design is an umbrella discipline that looks at all aspects of the user's encounter with a product—visual design, interaction design, sound design, and more—ensuring they are in harmony. Interaction design falls at least partially under UX design.
Information Architecture (IA): IA is concerned with the structure of content: organizing and labeling content effectively so users can find what they need. Navigation is an area where interaction design and IA meet. The challenge of organizing and structuring content across numerous hyperlinked pages on the web led to the rise of information architecture.
Visual (or Graphic) Design: Visual design focuses on creating a visual language to communicate content, including fonts, colors, and layout of interfaces and printed materials. Interface design is where visual and interaction design converge.
Industrial Design (ID): Industrial design is about form—shaping physical objects in a way that communicates their use while making them functional. Examples include furniture and appliances. Interaction design existed as an activity within industrial design before being explicitly named.
Interaction Design: As discussed above, interaction design specifically focuses on the behavior of products and the interactions users have with them. While related to the other disciplines, it centers on how users manipulate and engage with the functionality of a product.
Products of Interaction Design:
The "products" of interaction design can be varied. While the outcome of the design process is often called a "product" in a general sense, it can take many forms:
Digital Products: This includes websites, desktop software, mobile apps. Wireframes are crucial documents produced by interaction designers for digital products, showing structure, information hierarchy, controls, and content. Prototypes of various fidelities are also created to test concepts.
Physical Products: Interaction designers also work on consumer electronics, robots, mobile and medical devices, and interactive environments. For physical products, designers consider physical controls like knobs, switches, and buttons, determining how they will be used.
Services: Increasingly, interaction designers are involved in designing services, which are chains of activities or events that form a process and have value for the end user. The "product" here is the designed service experience, considering touchpoints and interactions across different levels.
In essence, any system or artifact that requires user engagement and has a defined behavior is a product of interaction design.
Iterations and Their Importance:
The provided sources do not explicitly detail "iterations" in the design process or their critical importance, although the mention of ongoing research in competitive analysis suggests an iterative approach. Generally, iteration in interaction design refers to the cyclical process of designing, prototyping, testing, and refining a product or service based on user feedback and insights. This iterative nature is crucial because it allows designers to:
Identify and fix usability issues early on.
Incorporate user needs and feedback throughout the design process.
Explore different design solutions and converge on the most effective one.
Adapt to changing requirements and constraints.
While not directly mentioned, the emphasis on user research and understanding user behavior inherently implies that design is not a linear process but requires revisiting and refining based on what is learned.
Core Concepts of Interaction Design and Examples:
Affordances: How something appears gives cues about how it behaves and how we should interact with it.
Good Example: A button that has a raised appearance and a label like "Click Here" visually suggests that it can be pressed to initiate an action.
Bad Example: A flat image on a screen that looks like a button but doesn't react when tapped provides a misleading affordance, leading to user frustration.
Feedback: Every user action should be acknowledged in some way. Feedback should be early and often.
Good Example: When you click a button, it might change color momentarily or a loading indicator might appear to show that the action is being processed. Pressing a key on a mobile phone displays the number.
Bad Example: Clicking a button that performs an action with no visual or auditory confirmation can leave the user wondering if the action was successful.
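To make this concrete, here is a minimal sketch of early-and-often feedback in TypeScript; the "#submit" button, the "/api/save" endpoint, and the label text are assumptions for illustration, not anything prescribed by the sources.

```typescript
// Acknowledge the click immediately, then confirm the outcome when the
// (hypothetical) request finishes.
const submitButton = document.querySelector<HTMLButtonElement>("#submit")!;

submitButton.addEventListener("click", async () => {
  submitButton.disabled = true;            // the tap is registered right away
  submitButton.textContent = "Saving...";  // early feedback: work is in progress
  try {
    await fetch("/api/save", { method: "POST" }); // placeholder request
    submitButton.textContent = "Saved";           // confirm success
  } catch {
    submitButton.textContent = "Failed, try again"; // confirm failure as well
  } finally {
    submitButton.disabled = false;
  }
});
```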
Standards: Following established interface conventions and standards helps users because they bring pre-existing expectations.
Good Example: The company logo typically being located at the top left of a website. Using Ctrl+C or Command+C to copy text is a standard that users rely on. Bottom navigation on mobile apps with three to five clear, tappable icons for top-level destinations follows a common standard.
Bad Example: Placing the main navigation menu at the bottom of a desktop website or using unconventional icons without labels can make the site harder to use because it deviates from user expectations.
Direct and Indirect Manipulation: Interaction designers decide how digital objects can be manipulated. While not elaborated in the provided excerpts with examples, direct manipulation involves interacting with objects on screen as if they are physical, while indirect manipulation uses controls like buttons or menus to affect objects.
Tesler's Law (Law of Conservation of Complexity): This law holds that every system has an inherent amount of complexity that cannot be removed, only moved. Designers need to decide where to place this complexity: on the user or within the system.
Good Example: Instead of requiring users to manually enter their email address every time they send an email, the software manages and automatically fills this information.
Bad Example: Forcing users to perform repetitive and complex steps that the system could automate increases the burden of complexity on the user.
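As a rough sketch of shifting that complexity into the system, the compose form below remembers the sender's address after the first use; the storage key and the "#from" input are hypothetical names, not taken from the sources.

```typescript
// Tesler's Law in miniature: the "from" address has to exist somewhere;
// storing it once moves that burden from the user to the system.
const FROM_KEY = "compose.fromAddress";   // hypothetical storage key

function prefillFromField(input: HTMLInputElement): void {
  const saved = localStorage.getItem(FROM_KEY);
  if (saved) {
    input.value = saved;                  // the user no longer re-types it
  }
  input.addEventListener("change", () => {
    localStorage.setItem(FROM_KEY, input.value); // keep the stored value current
  });
}

prefillFromField(document.querySelector<HTMLInputElement>("#from")!);
```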
Let's explore these key concepts in interaction design, drawing from the sources and our previous discussion:
Direct and Indirect Manipulation:
Interaction designers decide how digital objects in their products can be manipulated: directly, indirectly, or (increasingly) in both ways.
Direct manipulation involves interacting with digital objects on the screen as if they were physical objects. You directly act upon them, for example, by dragging and dropping files or resizing windows with your mouse.
Indirect manipulation involves using controls such as buttons, menus, or commands to affect digital objects. For example, using a menu command to resize a picture or clicking a button to move an item in a list.
Differentiation: Direct manipulation feels more immediate and WYSIWYG ("What You See Is What You Get"), while indirect manipulation requires an intermediary control to perform an action on an object.
Feedback and Feedforward:
Feedback is some indication that something has happened. It should be early and often, accompanying every action by a user. Examples include a button changing color when pressed or a sound indicating a successful operation. Without feedback, users might repeat actions, leading to errors. Feedback can range from simple visual changes to complex indicators.
Feedforward is about knowing what will happen before you perform an action. It gives users confidence in their actions by providing a preview of the outcome. Examples include descriptive names for hypertext links or a message like "Pushing this button will submit your order".
Differentiation: Feedback occurs after an action, confirming that the action has been registered or completed. Feedforward occurs before an action, informing the user about the potential outcome of that action.
"Metaphors":
Physical controls have strong metaphors and history attached to them. For instance, knobs and sliders typically suggest adjusting something along a spectrum, like volume or temperature. Buttons and switches usually indicate making a choice, like turning something on or off.
Metaphors are important in interaction design because they can leverage users' existing real-world knowledge and understanding to make digital interfaces more intuitive and easier to learn. By drawing parallels between digital elements and familiar physical objects or concepts, designers can create affordances that are easier to grasp.
When to Follow Interface Standards and When to Break Them:
There are good reasons for having and using standards. Over time, users are trained to expect certain elements in specific locations (e.g., company logo top left of a website) and features to work in particular ways (e.g., Ctrl+Z for undo).
A design that ignores these conventions forces users to learn something different, potentially causing frustration and annoyance. Following standards can contribute to a clear and simple user experience.
However, there are situations where breaking standards is justified, for example to introduce a genuinely innovative interaction that significantly improves usability or to better fit the specific context or brand of a product. If your product is meant to be a differentiator, copying the conventions your competitors follow may work against that goal, so some intentional deviation from the norm can be warranted.
The decision should be made intentionally, with a clear understanding of the potential benefits and drawbacks of deviating from established norms.
What Predicts Fitts's Law? Design Implications:
Fitts's Law states that the time it takes to move from a starting position to a final target is determined by two things: the distance to the target and the size of the target. The larger the target and the closer it is, the faster it can be pointed to.
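The sources describe the law qualitatively; in its common Shannon formulation (added here for reference, not quoted from the sources) it reads:

MT = a + b \log_2\left(\frac{D}{W} + 1\right)

where MT is the movement time, D the distance to the target, W the target's width along the axis of motion, and a and b are empirically fitted constants.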
Design Implications:
Clickable objects like buttons need to be a reasonable size, especially on touchscreens or screens viewed from a distance.
The edges and corners of screens are excellent places to position things like menu bars and buttons because they act as large, "infinite" targets.
Controls that appear next to what the user is working on (like right-click menus) can usually be accessed more quickly than elements located farther away.
What Predicts Hick’s Law? Design Implications:
Hick's Law is not explicitly mentioned in the provided sources. However, it is a fundamental principle in interaction design.
Hick's Law states that the time it takes a user to make a decision increases with the number and complexity of choices available.
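Although no formula appears in the sources, Hick's Law is commonly written (again added here for reference) as:

T = b \log_2(n + 1)

where T is the decision time, n the number of equally likely choices, and b an empirically fitted constant; the +1 is usually explained as covering the decision of whether to respond at all.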
Design Implications:
Minimize the number of options presented to users at any given time, especially when the choices are not clearly differentiated or organized.
Break down complex tasks into smaller, more manageable steps with fewer choices at each step.
Use progressive disclosure to show only essential options initially and reveal more advanced features or choices as needed.
Categorize and organize options logically to help users quickly understand and navigate the available choices.
Identify Both the Basic Controls and Non-traditional Controls that Interaction Designers Can Use:
Basic Controls: These are common UI elements that users are generally familiar with:
Buttons: Used to initiate actions. They can be toggle buttons or self-resetting buttons.
Switches: Simple controls to toggle between two states (on/off).
Sliders: Allow users to select a value along a continuous range.
Knobs/Dials: Typically indicate adjustment along a spectrum.
Input Boxes (Text Fields): Allow users to enter text.
Radio Buttons: Allow selection of one option from a set.
Check Boxes: Allow selection of multiple options from a set.
Dropdown Menus: Present a list of options when clicked.
Hyperlinks: Allow navigation to other pages or content.
Non-traditional Controls: These go beyond the conventional GUI elements:
Gestural Interfaces: Utilize body movements (e.g., swipes, pinches) detected by sensors or cameras to interact with devices. Designers need to consider physiology and how to instruct users in their use.
Audible Interfaces: Rely on voice commands and spoken feedback, found in voice message systems and automated call centers. Navigation in these interfaces is critical.
Touchscreens: Allow direct interaction with the display through touch.
Soft Keys: Physical buttons with labels that change based on the context displayed on an adjacent screen.
Modeless Feedback: Rich visual feedback that doesn't interrupt the user's flow.
What is a Strong Design Problem Statement?
A strong design problem statement is human-centered, focusing on the needs of the user. It clearly identifies who the user is, what their need is, and why they have that need (the insight).
According to one format, a problem statement can be phrased as: _____ (user) needs to _____ (need) because _____ (insight).
A good problem statement should be broad enough to allow for creative freedom in generating solutions but narrow enough to be solvable by a design solution. It helps to reframe the design problem after user research, incorporating common pain points and user goals.
The following sections, drawing on the provided sources and the discussion above, cover the characteristics of a good design problem statement; the difference between qualitative and quantitative data; the importance of user research and stakeholder involvement; how to plan and analyze user interviews; how to conduct focus groups, contextual inquiry, and card sorting; the differences between contextual inquiry and ethnographic interviews; how to create an affinity diagram and a user journey map; and the distinction between user roles and personas:
Characteristics of a Good Design Problem Statement:
A good design problem statement should be human-centered, focusing on the needs of the user.
It should be broad enough to allow for creative freedom in generating solutions.
Yet, it should also be narrow enough to be solvable by a design solution.
As mentioned above and supported by the sources, a problem statement often articulates who the user is, what their need is, and the insight behind that need, sometimes following the format: "_____ (user) needs to _____ (need) because _____ (insight)".
The problem statement should be a living document that is revisited often as the problem becomes better understood. It might need to be reframed after user research to reflect user pain points and insights.
Differentiate Qualitative and Quantitative Data in Design Research:
Quantitative research typically yields numerical data that can answer questions about "how much" or "how many" along a few reductive axes.
Qualitative research, on the other hand, can tell you "what, how, and why" in rich detail that reflects the complexities of real human situations. It explores behaviors, attitudes, and aptitudes.
While quantitative data is often associated with science and objectivity, it is subject to interpretation and manipulation, just like textual data. Data about human activities is different from that of hard sciences.
Qualitative measures can be used to evaluate user experiences. Whether to present data qualitatively, quantitatively, or both depends on the type of data captured, the audience, and ease of understanding.
Both types of data play a role in design research.
Why User Research is of Critical Importance in Interaction Design:
User research is a critical part of the research process in UX Design. Understanding the landscape of solutions is crucial to the foundation of the solution being designed.
It helps in understanding the features, functions, flows, and feelings evoked by the design solutions of your competitors, allowing you to strategically design a superior product.
User research helps bridge business goals with user goals.
It is essential because you are NOT the USER! Users have different experiences, terminology, and ways of looking at the world.
User research helps improve usability (ease-of-use) and usefulness (relevance).
It helps understand what problems customers want solved and analyze the importance of these problems.
Understanding users and their goals through methods like ethnographic interviews is considered the most effective and efficient tool for gathering qualitative data.
Qualitative research helps understand a product's domain, context, and constraints and identify patterns of behavior among users.
It gives the design team credibility and authority, as design decisions can be traced to research results.
Understanding the user population deeply can provide valuable business insights not revealed through traditional market research.
A UX competitive analysis, which involves understanding competitors and their users (indirectly), is imperative, especially when new to a particular vertical, to grow understanding of basic features and functions. It also informs overall product strategy.
Why Involving Stakeholders is Important to Design? How are Stakeholder Interviews Conducted? Stakeholder Types:
Involving stakeholders is important to gather different perspectives on the product vision, as each business department might have a slightly different view. Harmonizing these perspectives with user and customer views is a key part of the design approach.
Discussions with stakeholders about budget and schedule provide a reality check on the design effort's scope.
Understanding technical constraints and opportunities from stakeholders helps determine design scope.
It is crucial to understand business drivers from stakeholders to ensure the design aligns with business goals.
Stakeholders who have relationships with users (e.g., customer support) may have important insights into users that can help formulate your user research plan.
Involving stakeholders helps develop a common language and understanding among design, management, and engineering teams.
As a designer, you aim to develop a vision the entire team believes in, and understanding everyone's perspective is crucial for this.
Stakeholder involvement is generally the best way to make user interviews happen, especially for business and technical products.
Stakeholder interviews are conducted early in a design project. It's recommended to conduct them before user interviews. Designers should ask about preliminary product vision, budget and schedule, technical constraints and opportunities, business drivers, and stakeholders' perceptions of their users. It's important to remember that stakeholder perspectives should not be accepted at face value, as they might propose solutions rather than underlying problems. The designer's role is to root out the real problems.
The sources mention stakeholders and subject matter experts (SMEs) as important groups to involve. SMEs are authorities on the domain and can provide valuable perspectives on the product and its users, especially in complex or technical domains. They can be expert users, trainers, managers, or consultants.
Analyze User Interviews: The Aim, in Which Development Phases Can Be Applied, Who is Involved, What to Prepare For, What Questions to Avoid, What Outcomes Can Be Obtained:
Aim: The primary aim of user interviews is to understand the user, their tasks (what they are trying to do), the environment (where, why, and how they do it), their problems (needs, behaviors, backgrounds, expectations, pain points), and the tools they use. Ethnographic interviews specifically aim to understand the why behind user behaviors and their goals. The goal is to gather qualitative data about users and their goals.
Development Phases: User interviews are valuable in the early research steps of the UX design process. They should be done prior to starting work on a new project to inform design decisions strategically. They are crucial for requirements gathering and design. Ethnographic interviews occur in three distinct, chronological phases: early (exploratory, broad focus), middle (pattern identification, clarifying questions), and late (pattern confirmation, fine adjustments).
Who is Involved: Typically involves designers as interviewers. Ethnographic interviews often involve a team of two designers: a moderator and a facilitator. Interview participants are representative users, potentially current users of similar systems. Stakeholders can help get in touch with users. For complex domains, involving more patient and articulate subjects in early interviews can be beneficial.
What to Prepare For: Identify candidates based on a persona hypothesis that considers potential roles and behavioral, demographic, and environmental variables. Create an interview plan. Be prepared to interview where the interaction happens. Have types of questions in mind, including goal-oriented, system-oriented, workflow-oriented, and attitude-oriented questions. Consider using interview recording technologies like notebooks and digital recorders.
What Questions to Avoid: Avoid a fixed set of questions to allow for flexibility. Don't ask leading questions that suggest answers. Avoid making the user a designer by asking for solutions rather than problems. Avoid discussing technology without understanding the underlying purpose. Avoid asking what they would do/like/want in hypothetical scenarios or what they think someone else might do. Avoid binary questions.
What Outcomes Can Be Obtained: User interviews can reveal behaviors, attitudes, and aptitudes of users. They help understand the vocabulary and social aspects of the domain. They uncover how existing products are used. They can identify goals and needs, pain points and challenges, feelings, task frequency and behaviors, priorities, mental models, and skills. Observed patterns and interesting stories can be identified. Interviews help discover what is missing in current solutions and explain unusual things, leading to insights and design requirements. They contribute to building personas.
Understand How to Conduct Focus Group, Contextual Inquiry, and Card Sorting:
Focus Group: Gather a diverse group of representative users to ask a structured set of questions and provide a structured set of choices. The discussion can evolve to areas not initially considered. Pros include easily understanding customer wants and serving as a starting point for future research. Cons include potential for skewed conclusions, facilitator bias, and group think.
Contextual Inquiry: Involves observing people in their natural context and asking questions to fill in the gaps of your observation. It's based on a master-apprentice model. Principles include context, partnership, interpretation, and focus. Participants take a more active role in leading the session. It can reveal information users might not be aware of, and observing in their natural environment increases the veracity of information.
Card Sorting: A method used to help decide the navigation of a website or app, how to label menus, and how to group content. In open card sorting, users organize topics into categories they create. In closed card sorting, the categories are fixed, and users fit content into an existing structure. Tools like OptimalSort can be used.
Tell the Differences Between Contextual Inquiry and Ethnographic Interviews:
Both are combinations of observation and interview techniques for gathering qualitative data.
Contextual inquiry, pioneered by Beyer and Holtzblatt, is based on a master-apprentice model.
Ethnographic interviews take the spirit of ethnographic research (immersive study of cultures) and apply it on a micro level, focusing on behaviors and rituals of people interacting with products.
Contextual inquiry often assumes full-day interviews, while ethnographic interviews can be shorter (e.g., one hour) with a sufficient number of interviews.
Contextual inquiry assumes a large design team conducting parallel interviews, while ethnographic interviews can be more effective with smaller teams conducting interviews sequentially.
Ethnographic interviews prioritize identifying user goals first, before tasks, while contextual inquiry is more task-focused.
The vocabulary of contextual inquiry often assumes a business product and corporate environment, while ethnographic interviews are also common in consumer domains.
Why is it Important to Involve "Extreme Users" in the Design Process?
The provided sources do not explicitly discuss the importance of involving "extreme users" in the design process.
Explain How to Do Affinity Diagram:
An affinity diagram is a design synthesis method used to organize, manipulate, prune, and filter gathered data into a cohesive structure. It's particularly useful for a large number of ideas.
Tools needed include sticky notes, markers, and a large, flat writing surface. Digital tools like Google Draw, Slides, Miro, and Mural can also be used.
The process typically involves:
Writing down each idea or observation from research on separate sticky notes.
Placing the notes on a large surface.
Silently grouping similar notes together without discussion initially.
Creating categories for these groups. If a note doesn't fit, think of a new category or place it in a "?" category.
Sorting each category into subcategories.
Summarizing the categories and presenting them.
Optionally, determining priorities, e.g., by voting.
What is User Journey Map? When to Use User Journey Map?
A user journey map visualizes the experience of a persona as they interact with a product or service to achieve a specific scenario.
It typically includes zones that describe:
Zone A (The Lens): Defines the persona ("who") and the scenario ("what") being examined.
Zone B (The Heart of the Map): Visualizes the experience across chunkable phases of the journey, including the user's actions, thoughts, and emotional experience, potentially supplemented with quotes or videos from research.
Zone C (The Output): Describes insights and pain points discovered, opportunities to focus on going forward, and potentially business goals supported by the map.
User journey maps are used to understand the user's experience from their perspective, identify pain points and opportunities for improvement, and align the team's understanding of the user's journey. They are useful during the design and synthesis phase after user research to make meaning out of data.
User Role vs. Persona:
A user role is a more abstract concept defined by the tasks a class of users performs and their related information needs. It describes a segment of a product's target user base, sometimes accounting for demographics and background.
A persona is a representation of a group of users, an archetype that reflects patterns based on behavior, goals, attitude, and other variables. Personas are depicted as specific, individual human beings synthesized from research. They convey broader human motivations and contexts through narratives and goals. Personas engage the empathy of the design team around user goals.
While user roles can be useful for business products where roles often map to job descriptions, personas provide a more holistic model of users and their contexts, especially in consumer domains where roles like "car buyer" are too broad. It is possible to create a persona that represents the needs of several user roles.
Based on the sources you provided:
Three Types of User Goals
The sources explain that user goals are the drivers behind user behaviors and serve as a lens through which designers must consider a product's functions. According to Don Norman's book Emotional Design, product design should address three different levels of cognitive and emotional processing, which relate to user goals:
Visceral: This is the most immediate level, relating to our initial reactions to a product's visual and sensory aspects before significant interaction. It helps us make rapid decisions about what is good, bad, safe, or dangerous.
Behavioral: This is the middle level, concerning the simple, everyday behaviors that constitute the majority of human activity. Historically, interaction design and usability practices have nearly exclusively addressed this level.
Reflective: This is the least immediate level, involving conscious consideration and reflection on past experiences. Through reflection, we can integrate our experiences with designed artifacts into our broader life experiences and associate meaning and value with them over time.
Different Sections of Design Requirements
Design requirements can be extracted by analyzing context scenarios and represent the personas' needs. These requirements can be thought of as consisting of objects, actions, and contexts. Alternatively, they can be separated into the following categories:
Data requirements: These are the objects and information that must be represented in the system, often described as objects and adjectives related to those objects. Common examples include accounts, people, and documents, along with their attributes like status and dates.
Functional requirements: These are the operations or actions that need to be performed on the system's objects and are typically translated into interface controls. They also define places or containers where objects or information must be displayed.
Contextual requirements: These describe relationships or dependencies between sets of objects in the system. This includes which objects need to be displayed together for workflow or to meet specific persona goals. They also consider the physical environment where the product will be used and the skills of the users.
Beyond these, there are other types of requirements to consider:
Business requirements: Include stakeholder priorities, timelines, budgets, regulations, pricing, and business models.
Brand and experience requirements: Reflect the attributes users should associate with the product and company.
Technical requirements: Can include physical aspects like weight and size, as well as software platform choices.
Customer and partner requirements: Include ease of installation, maintenance, support, and licensing.
Basic Layout Patterns for Phone Format Mobile Devices
There are basic layout patterns commonly used for phone format mobile devices:
Vertical stacks: This pattern arranges content in a list or grid, often with a top and/or bottom bar for navigating content and accessing functions. The tall and narrow form factor of smartphones dictates this list-like display for most content. Most iOS, Android, and Windows Phone apps follow this top-level pattern.
Screen carousels: This pattern provides a dashboard-like display with multiple instances or variants (screens or cards) that the user can quickly navigate between via a swipe gesture to the left or right. A classic example is the iOS Weather app, where users can swipe between weather information for different locations. Carousels may have a page marker widget to show the user's position.
Basic Layout Patterns for Tablet Format Mobile Devices
Tablet format apps also utilize layout patterns adapted to their larger screen size:
Stacks and index panes: Similar to phone layouts, tablets use the stack pattern with a primary area and navigation/action bars. However, the extra space allows for one or more supporting panes, typically an index pane that lists content items (like an email inbox) while the selected item is displayed in detail in the main content pane. In portrait mode, index panes are often launched by a button and temporarily overlap the main content area, while in landscape mode they may become permanently visible adjacent panes.
Pop-up control panels: Tablet screens are large enough to support pop-up panels that don't overlay the entire screen. These can replace navigation to a full-screen control panel screen found on handheld devices, improving task flow by retaining the context of the background screen. These pop-ups are often attached to a specific control or content object and may use a speech balloon caret to indicate this association.
Common Design Patterns on Mobile Devices to Browse and Select Content
Mobile apps use several design patterns optimized for browsing and selecting content:
Lists: The most frequently used pattern on handheld devices, organizing content into line items or blocks of text, often including controls and their labels, and image or video thumbnails. Tapping an item typically leads to more detailed content. Examples include lists of albums, artists, or songs in a music app. Lists can be finite or use infinite scrolling, presenting more items as the user reaches the bottom.
Grids (Gallery view): Used to organize content like apps, thumbnails, and function icons into regular rows and columns, often for presenting media objects like photos, videos, and music albums (with cover art). The iPhone home screen is a prime example of an app icon grid. Within apps, grid views can scroll vertically or horizontally. Sometimes, the bottom-most visible row is partly cut off to hint at vertical scrolling.
Content carousels: These involve a horizontal swipe gesture to navigate between similar full-screen layouts. They are suitable for dashboard-like displays or featured content. The Crackle app uses a carousel at the top of its "Featured" tab to showcase content. They may wrap around and often include a page marker widget.
Swimlanes: A combination of the carousel and a grid, presenting a vertical stack of carousels, each of which can be scrolled horizontally, independent of the others. This allows users to browse multiple categories of content with minimal vertical scrolling.
Cards: These are chunks of rich-media content that combine media, text, web links, and social actions. They are often displayed in a scrolling vertical list but can also be used in grids, carousels, and swimlanes. Facebook and LinkedIn apps use cards as a central idiom. Google Now uses cards to display contextual information.
Mechanisms for Navigating to Different Functional and Content Areas of Handheld Mobile Apps
Handheld mobile apps employ various mechanisms for navigation:
Tab bars: Located at the bottom of iOS screens and often at the top of Android and Windows Phone screens, tab bars contain a set of text and/or icon buttons. Tapping a tab button switches to a different list or grid view in the main content area, with each tab maintaining its own content hierarchy. Apple's Music app uses a bottom tab bar to navigate between lists of albums, artists, and songs.
Tab carousels: These combine tabs with horizontally swipable carousels. Tabs extend off the edges of the screen, with the selected tab centered. Swiping the tab bar selects adjacent tabs and slides the content into view. Spotify's iPhone app uses a tab carousel in its "Your Music" section.
Nav bars and action bars: Situated at the top of the screen, nav bars (called action bars in Android) typically contain a back button on the left, the title in the center, and sometimes function menus or buttons on the right. They help navigate a list or grid hierarchy.
"More…" controls: Due to limited screen space in bars, a "More…" control provides access to additional navigation options. In iOS, this is often a tab leading to a screen with more options, sometimes allowing customization of the main tab bar. In Android, it's usually a control on the right of the action menu that opens a pop-up menu.
Drawers (Hamburger Menu): Represented by an icon (three stacked lines), tapping this icon (or swiping) slides the main content area horizontally to reveal a vertical list of navigational elements hidden underneath. Tapping an item in the drawer swaps the content and closes the drawer. Google's Gmail app on the iPhone uses a drawer for navigation.
The Drawer Idiom and Its Usage
The drawer idiom (often represented by the hamburger menu icon) provides access to a vertical list of navigational elements similar to tabs while using minimal screen real estate. It hides in a panel that lives in a layer under the main content area. Users can reveal the drawer by tapping the hamburger menu icon or sometimes by swiping across the main content area. The current selection within the drawer is usually highlighted. Tapping an item in the drawer simultaneously swaps the displayed content and snaps the drawer back shut. Items in the drawer are typically textual but can include icons. Drawers can also be used for secondary actions, sometimes deploying from the right side of the screen to access features like a list of online friends. Some apps even use double drawers, with one on each side for different types of navigation or functions.
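A minimal sketch of that behavior in TypeScript follows; the element IDs, class names, and the loadSection stub are assumptions for illustration rather than any particular app's implementation.

```typescript
// Drawer idiom sketch: the hamburger button slides the content aside to
// reveal the drawer; choosing an item swaps the content and closes the drawer.
const hamburger = document.querySelector<HTMLButtonElement>("#hamburger")!;
const drawer = document.querySelector<HTMLElement>("#drawer")!;
const content = document.querySelector<HTMLElement>("#content")!;

function loadSection(name: string): void {
  content.dataset.section = name;         // placeholder for real content loading
}

hamburger.addEventListener("click", () => {
  content.classList.toggle("slid-aside"); // convention: the content slides away
  drawer.classList.toggle("open");
});

drawer.addEventListener("click", (event) => {
  const item = (event.target as HTMLElement).closest("[data-section]");
  if (!item) return;
  loadSection(item.getAttribute("data-section")!); // swap the displayed content
  content.classList.remove("slid-aside");          // and snap the drawer shut
  drawer.classList.remove("open");
});
```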
Drawer behaviors to avoid include:
Overloading animated transitions for accessing options within the drawer, where multiple panes pop and slide in different directions, causing confusion. The Gmail app's drawer implementation is cited as an example of this.
Breaking drawer convention by sliding the drawer open on top of the main content area instead of having the content slide away. The Google+ app is given as an example of this less appropriate behavior.
Some argue against using drawers because they hide functionality, potentially hampering user engagement. However, others believe they are beneficial for apps with complex feature sets or infrequently used but necessary functions, providing a cleaner main interface.
When to Show Welcome and Help Screens and Their Design Principles
Welcome and help screens are shown to help users learn a mobile interface, especially given the limitations of screen real estate, reliance on gestures without visible affordance, and the absence of hover states for tooltips.
Welcome screens typically appear on a user's first-time entry into an app after purchase and login. They provide guidance on important activities and how to perform them.
Help screens provide much the same information but are available on demand when the user requests it.
Common idioms for welcome and help screens include:
Guided tours: Usually a carousel of cards with text and images or video describing the use of specific functions. They are used at first use, after major releases, and as help. Users should be able to exit the tour at any point.
Overlays: A semitransparent layer covering the entire screen with instructions, often hand-drawn with arrows highlighting gestures or controls. Tapping anywhere dismisses the overlay. Overlays are used to explicate gestures.
ToolTip overlays: A variation that attempts to provide a ToolTip-like display of all primary functions on a single overlay screen. These are best used as help screens in more complex authoring apps, not as welcome screens.
Here is a comprehensive response to your query, drawing on the provided sources:
Most Frequently Used Multi-Touch Gestures
The most frequently used multi-touch gestures include:
Tap: Used to select objects and toggle the activation state of controls. Tapped items should receive a selection highlight or an indication of their activation/deactivation state or animation.
Drag to scroll: Can be used horizontally or vertically and is a fundamental direct manipulation gesture.
Vertical dragging can scroll lists or reorder objects in a list using drag handles. It can also initiate a refresh by dragging down at the top of a list or load more items by dragging up at the bottom. Vertical dragging can also access top and bottom drawers in some mobile OSs.
Horizontal dragging can scroll a carousel or swimlane or open/close left and right-hand drawers.
Drag to move: Used to move or copy an object between lists, panes, or containers, or to move an object freely within a canvas or grid.
Drag to control: Used to operate knobs, switches, sliders, virtual x-y control pads, and contextual touch controls, as well as palette tools like brushes.
Swipe up/down: Often synonymous with dragging up/down. Swiping a list or grid up/down can cause it to continue scrolling with simulated momentum.
Swipe left/right: Often synonymous with dragging left/right. Swiping a carousel or swimlane left/right can cause it to continue scrolling with momentum. Swiping left/right can also open/close drawers. Safari uses swipe left for forward navigation and swipe right for back navigation. Chrome uses swipe left/right to delete browser tabs in edit mode. (A minimal swipe-detection sketch appears after this list.)
Pinch in/out: Pinch-in shrinks or zooms out on objects physically (like a map) or performs a semantic zoom out one level in a hierarchy. Pinch-out expands or zooms in physically or performs a semantic zoom in one level.
Rotate: A gesture using the thumb and forefinger twisted clockwise or counterclockwise, used to actuate knob controls (though a horizontal or vertical drag is suggested as a more discoverable alternative) or to rotate objects.
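As a small illustration of the swipe idiom, here is a sketch that distinguishes a left or right swipe from a tap using pointer events; the 50 px threshold, the "#carousel" element, and the card functions are assumptions.

```typescript
// Detect a horizontal swipe on a carousel and advance or go back accordingly.
const carousel = document.querySelector<HTMLElement>("#carousel")!;
let startX = 0;

function showNextCard(): void { /* placeholder: slide the next card into view */ }
function showPreviousCard(): void { /* placeholder: slide the previous card back */ }

carousel.addEventListener("pointerdown", (e) => {
  startX = e.clientX;
});

carousel.addEventListener("pointerup", (e) => {
  const dx = e.clientX - startX;
  if (Math.abs(dx) < 50) return;   // too small to count as a swipe; treat as a tap
  if (dx < 0) showNextCard();      // swipe left: advance the carousel
  else showPreviousCard();         // swipe right: go back
});
```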
Primary, Secondary, and Content Navigation: Definitions and Differences
Primary Navigation: This refers to how the user gets to the major areas or sections of a website or application. Conventionally, these are persistent links often found along the top or left side of the interface. Top navigation is generally considered superior.
Secondary Navigation: This provides access to sub-levels or more specific areas within the primary sections. It often appears as left-hand menus or a second row of horizontal links. Fat navigation, where primary navigation items expand to reveal more choices, is also a type of secondary navigation.
Content Navigation: This refers to how users navigate through the actual content within a page or section, such as browsing photos in a gallery or moving between articles. This can include listings, featured content carousels, and different organizational schemes (by topic, author, date).
The key difference lies in their scope and purpose. Primary navigation guides users to the main destinations. Secondary navigation helps them delve deeper within those main areas. Content navigation allows them to explore and move through the specific information they have accessed.
Design Principles for Primary Navigation, Secondary Navigation, Searching, Scrolling, and Infinite Scroll
Primary Navigation:
Top navigation is generally a superior approach compared to side navigation.
Forcing designers to reduce the number of major areas and keep titles short and punchy usually results in a more comprehensible and useful experience.
Consider how well the navigation works on smaller (mobile) screens. A common approach is to reveal navigation via a menu or "hamburger icon".
Use persistent headers to maintain context when users scroll.
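One simple way to keep the header persistent is to dock the navigation once the user scrolls past the masthead, as in this sketch; the element ID, class name, and 200 px threshold are assumptions.

```typescript
// Dock the primary navigation after the user scrolls past the masthead.
const primaryNav = document.querySelector<HTMLElement>("#primary-nav")!;

window.addEventListener("scroll", () => {
  primaryNav.classList.toggle("docked", window.scrollY > 200); // "docked" applies fixed positioning via CSS
});
```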
Secondary Navigation:
Keep the navigation space as flat and compact as possible to aid the user's mental model of the application's organization. Aim to avoid burying information in hierarchies deeper than two levels.
Provide persistent feedback about the user's current location through visual cues in the navigation and breadcrumbs (a sequence of links showing the user's path).
Consider fat navigation to provide easily accessible links to sub-pages upon interaction with primary navigation.
Breadcrumbs with lateral links help speed navigation by allowing users to navigate to different parts of the site hierarchy more easily.
Searching:
An effective search pattern should help users go from their initial search term to a relevant page.
Utilize auto-complete (type ahead) to suggest complete search terms as the user types (a sketch of this pattern appears after this list).
Implement disambiguation (auto-suggest) to provide suggestions for misspelled or similarly spelled words.
Incorporate faceted search to allow users to specify attributes of what they are looking for.
Consider categorized suggestions when a search term applies to multiple categories.
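Here is a sketch of the auto-complete pattern with a debounce, so a request is only sent after the user pauses typing; the "/api/suggest" endpoint, element IDs, and 250 ms delay are assumptions.

```typescript
// Debounced auto-complete: query a (hypothetical) suggestion endpoint after a
// short pause in typing, then render the returned terms as a list.
const searchInput = document.querySelector<HTMLInputElement>("#search")!;
const suggestionList = document.querySelector<HTMLUListElement>("#suggestions")!;

let debounceTimer: number | undefined;

searchInput.addEventListener("input", () => {
  window.clearTimeout(debounceTimer);
  debounceTimer = window.setTimeout(async () => {
    const term = searchInput.value.trim();
    if (term.length < 2) {
      suggestionList.innerHTML = "";     // nothing useful to suggest yet
      return;
    }
    const response = await fetch(`/api/suggest?q=${encodeURIComponent(term)}`);
    const suggestions: string[] = await response.json();
    suggestionList.innerHTML = suggestions.map((s) => `<li>${s}</li>`).join("");
  }, 250);                               // wait 250 ms after the last keystroke
});
```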
Scrolling:
On the web, scrolling is prevalent, especially with touchscreen interactions and responsive design.
Make scrolling an engaging experience by using effective visual rhythm, white space, and a strong typography system.
Be generous with font and control sizes for touch users and improved scanability.
Help users stay oriented on long pages with cues like docking primary navigation and visual indicators of progress.
Pagination makes sense for very long lists of similar elements like search results or news articles, but can complicate finding and using finite content.
The header (top of the page) should include branding, primary navigation, sign-in status, and often search.
The footer (bottom of the page) is a good place for related content suggestions and persistent access to less frequently visited areas.
Infinite Scroll:
Can be a useful and natural-feeling interaction if latency is kept low.
Infinite scroll and site footers are mutually exclusive idioms; implementing one means the other is inaccessible.
Should be used judiciously due to potential usability challenges.
Keyboard and screen-reader navigation typically do not work well with infinite scrolling, causing accessibility issues.
May not retain its place in the list after using the browser back and forward buttons.
Makes it difficult to page directly and predictably to items far down the list.
Most appropriate for contexts like news feeds where older information loses relevance quickly and browsing recent items is the primary activity.
Should never be used for interfaces where users need to get to the end of the list quickly or return to a specific item after navigating elsewhere.
Good Practices for Implementing Infinite Scrolling
Ensure low latency when loading new content (a sketch of one way to do this appears after this list).
Avoid using infinite scrolling if a footer with important information or navigation is needed.
Carefully consider accessibility implications and provide alternative navigation methods if necessary.
Address the issue of losing the user's place after navigating away and returning (e.g., by preserving scroll position or providing a "back to top" function that remembers the last viewed item).
Do not use infinite scrolling if users need to reach the end of the list or specific items far down quickly.
Limit its use to scenarios where browsing recent or frequently updated content is the primary activity, and older content has less relevance.
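A minimal sketch of keeping latency low follows: an IntersectionObserver watches a sentinel element at the end of the list and loads the next page shortly before it scrolls into view. The endpoint, element IDs, and 400 px margin are assumptions.

```typescript
// Infinite scroll sketch: fetch and append the next page of items when the
// sentinel near the bottom of the list is about to become visible.
const list = document.querySelector<HTMLElement>("#results")!;
const sentinel = document.querySelector<HTMLElement>("#sentinel")!;

let nextPage = 1;
let loading = false;

const observer = new IntersectionObserver(async (entries) => {
  if (!entries[0].isIntersecting || loading) return;
  loading = true;
  const response = await fetch(`/api/items?page=${nextPage}`); // hypothetical endpoint
  const items: string[] = await response.json();
  for (const item of items) {
    const li = document.createElement("li");
    li.textContent = item;
    list.appendChild(li);
  }
  nextPage += 1;
  loading = false;
}, { rootMargin: "400px" }); // start loading ~400 px before the sentinel is on screen

observer.observe(sentinel);
```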
Rules of Brainstorming
The sources mention the following related to brainstorming:
Warm-up exercise: Start with a warm-up exercise to engage everyone's brains, hands, and mouths before generating ideas. The specific exercise doesn't matter as much as getting people engaged. Examples include word association, drawing, or discussing personal experiences related to the project.
Metaphors: Utilize metaphors to aid brainstorming by comparing the product to unlike objects. This can uncover new directions for the design. Ask: What is this product like? What is it not like?
Brainwriting: A technique where each person silently writes or sketches the beginning of an idea on paper and then passes it to a neighbor to continue, repeating the process.
Break the Rules: List the constraints of the project and then, one by one, figure out how to break them.
Memorable Design Principles: The best design principles are easily remembered, often using funny, witty, or provocative statements.
Cross-feature Design Principles: Principles should be applicable across the product. If a "principle" only applies to one feature, it's likely a requirement.
Specific Design Principles: Design principles should be specific enough to provide guidance when making design decisions. General statements like "Easy to Use" are not effective principles.
Simplify: Continuously simplify the design until it is simple enough, recognizing that great design often takes time but avoid spending too much time on overly complex approaches.
Design for the users: This is presented as the single unbreakable "law" in interaction design.
While the sources don't explicitly list "rules of brainstorming" in a numbered format, these points highlight key considerations and techniques for effective idea generation.
What is a Strong HMW Question?
The sources do not explicitly define what constitutes a "strong HMW question." However, the concept of brainstorming and generating design concepts is discussed, which often involves framing challenges as "How Might We..." questions. Based on general brainstorming principles and the emphasis on user-centered design in the sources, a strong HMW question would likely be:
User-focused: Directly relate to the needs, goals, or pain points of the users.
Broad enough to encourage a range of solutions: Avoid being too narrow or specific.
Actionable and solution-oriented: Frame the problem in a way that invites exploration of potential solutions.
Concise and easy to understand: Clearly articulate the challenge to be addressed.
What are the Two Brainstorming Techniques? Be Able to Identify the Best Technique to Use for a Given Scenario.
The sources explicitly mention two brainstorming techniques:
Brainwriting: Individuals silently generate initial ideas on paper and then build upon each other's ideas by passing the papers around.
Break the Rules: Identify and challenge existing constraints to explore unconventional solutions.
While "Metaphors" is also mentioned as a tool for brainstorming, it's not presented as a standalone technique in the same way. Standard verbal brainstorming (where ideas are shared aloud in a group) is implied as a general starting point, especially with the warm-up exercises.
Identifying the "best" technique for a given scenario depends on various factors, such as:
Group dynamics: If the group is large or has members who are hesitant to speak up, brainwriting can be beneficial as it allows everyone to contribute anonymously and reduces the influence of dominant personalities.
Stage of the design process: "Break the Rules" might be more useful later in the process when initial constraints are understood and the team wants to explore more radical or innovative solutions. Warm-up exercises and general verbal brainstorming are good for initial idea generation. Using metaphors can be helpful throughout the process to spark new perspectives.
Nature of the problem: Complex or abstract problems might benefit from the use of metaphors to create a more tangible framework. Problems with clear constraints can be effectively tackled with the "Break the Rules" technique.
The sources emphasize getting brains engaged and exploring different angles, suggesting that a mix of techniques might be most effective, and the "best" technique can vary depending on the specific context and goals of the brainstorming session.