In Just Enough Research, co-founder of Mule Design Erika Hall distills her experience into a brief cookbook of research methods. The book does a fantastic job of covering different aspects of design research along with helpful examples and resources. I find conducting user research and usability testing incredibly rewarding, and I’m sure I’ll refer back to it often. Here are my notes:
Introduction
- The myth of the creative genius makes it very difficult to say “I don’t know.”
- You can use the techniques and methods described to:
- Determine whether you’re solving the right problem.
- Figure out who in an organization is likely to tank your project.
- Discover your best competitive advantages.
- Learn how to convince your customers to care about the same things you do.
- Identify small changes with a huge potential influence.
- See where your own blind spots and biases are preventing you from doing your best work.
- Once you start getting answers, you’ll keep asking more questions. And that skeptical mind-set is more valuable than any specific methodology.
- Businesses and designers are keen on innovation, as well they should be. But the better you know the current state of things and why they’re like that, the better you will be positioned to innovate.
- Design research both inspires imagination and informs intuition through a variety of methods with related intents: to expose patterns underlying the rich reality of people’s behaviors and experiences, to explore reactions to probes and prototypes, and to shed light on the unknown through iterative hypothesis and experiment.
- Design research requires us to approach familiar people and things as though they are unknown to us to see them clearly. We need to peel away our assumptions.
- Asking your own questions and knowing how to find the answers is a critical part of being a designer.
- Discovering how and why people behave as they do and what opportunities that presents for your business or organization will open the way to more innovative and appropriate design solutions than asking how they feel or merely tweaking your current design based on analytics.
- You will find that when you ask the hard questions, your job gets much easier. You will have stronger arguments, clarity of purpose, and the freedom to innovate that only comes with truly knowing your constraints.
- “Like” is not a part of the critical thinker’s vocabulary.
- You may run into sample-size queens who dispute the validity or utility of applied qualitative research. These people are often pollsters and marketers who run a lot of surveys. Avoid arguments about statistical significance; you will not win. Instead, keep the focus on gathering useful insights.
The basics
- Ideally, everyone who is on the design team should also participate in the research.
- People who have a hand in collecting the insights will look for opportunities to apply them. Being the smart person is more fun than obeying the smart person, which is how the researcher/designer dynamic can feel if designers are merely the recipients of the analysis.
- The most important thing is that everyone involved knows the purpose or goal of the research, their role, and the process.
- To choose the best research tool for your project, you’ll need to know what decisions are in play (the purpose) and what you’re asking about (the topic). Then you can find the best ways to gather background information, determine the project’s goals and requirements, understand the project’s current context, and evaluate potential solutions.
- Generative or exploratory research: “What’s up with…?” This is the research you do before you even know what you’re doing. It leads to ideas and helps define the problem.
- Generative research can include interviews, field observation, and reviewing existing literature—plus feeling fancy about saying “generative research.”
- Descriptive and explanatory: “What and how?” Descriptive research involves observing and describing the characteristics of what you’re studying. This is what you do when you already have a design problem and you need to do your homework to fully understand the context to ensure that you design for the audience instead of yourself. While the activities can be very similar to generative research, descriptive research differs in the high-level question you’re asking. You’ve moved past “What is a good problem to solve?” to “What is the best way to solve the problem I’ve identified?”
- Evaluative research: “Are we getting close?” Once you have a very clear idea of the problem you’re trying to solve, you can begin to define potential solutions. And once you have ideas for potential solutions, you can test them to make sure they work and meet the requirements you’ve identified. This is research you can, and should, do in an ongoing and iterative way as you move through design and development. The most common type of evaluative research is usability testing, but any time you put a proposed design solution in front of your client, you really are doing some evaluative research.
- Causal research: “Why is this happening?” Causal research often includes looking at analytics and conducting multivariate testing. This means reviewing your site traffic to see how visitors are entering and moving around the site and what words they might be searching for, as well as trying design and language variations to see which ones are more effective.
- As long as you’re clear about your questions and your expectations, don’t fret too much about the classification of the research you want to undertake. Remain open to learning at every stage of the process. And share this love of learning with your team. Your research will benefit from a collaborative approach that includes assigning specific responsibilities to different people.
- Just as with design and coding, every time you complete some research, you’ll have ideas for how to do it better next time and you’ll have found new ways to incorporate learning into your work.
- Listen. Be interested. Ask questions. Write clearly. And practice. Whatever your day job is, adding research skills will make you better at it.
- The whole point of doing research is to have a stronger basis for decision-making, so if another level of decision-making, such as executive fiat, trumps the research, you will have wasted your time. Get ready to advocate for your research project—before you start it.
- This is applied research. You just need to have (or develop) a few qualities in common with a good scientist:
- Your desire to find out needs to be stronger than your desire to predict. Otherwise you’ll be a mess of confirmation bias, looking for answers that confirm what you already assume.
- You need to be able to depersonalize the work. There are no hurt feelings or bruised toes in research, only findings.
- You need to be a good communicator and a good analytical thinker. Otherwise questions and reports get muddy, and results will be worse. This is just a set of skills that most people can develop if they have the right attitude.
- Upfront research can provide a basis for decision-making that makes the rest of the work go much faster. Nothing slows down design and development projects as much as arguing over personal opinions or wasting effort solving the wrong problem. And you can start small. A couple of weeks can mean very little to your overall schedule while adding significantly to your potential for success.
- There are a lot of things you can find out in beta: what functionality is working, whether users have a hard time finding core features. But there are also a lot of things that are very helpful to know before you start designing or coding at all, and you can find those out pretty fast: what your target audience is doing right now to solve the problems your product or service purports to solve, whether people want this product at all, and what your organization has to do to support it.
- Familiarity breeds assumptions and blind spots.
- It’s better to adjust the scope intentionally at the start than be surprised when new information pops up down the road to amend your plans. Research is an excellent prophylactic against unexpected complexity.
- Relevance to the real world is what separates innovation from invention. Understanding why and how people do what they do today is essential to making new concepts fit into their lives tomorrow.
- Research needs to be integrated into process and workflow or it will get shoved in a corner. If your project has a project manager, talk with them about finding ways to make it work.
- The cult of the individual genius designer/developer/entrepreneur is strong. In certain “rockstar knows best” cultures, wanting to do research can come across as a sign of weakness or lack of confidence. Fight this. Information and iteration are the keys to a successful design. Research is just one set of inputs.
- Falling back on ignorance can be a position of strength. Asking naive questions can cut right to the heart of assumptions and open people up to thinking about problems in a new way.
- “How does that benefit the business?” and “Why do you do it that way?” are a couple of terrific questions that can be very tricky for someone on the inside to get away with.
- In addition to, or instead of, direct access to the customer service people, get hold of the inbound support requests. This will be a fantastic source of insights into the ways different types of customers think about their needs and the language they use to describe them.
- You don’t just want insights; you also want a way to put those insights back into the product.
- It’s very helpful to have a clear idea of how product and marketing decisions are made in your company.
- You do need to have some clarity around your audience and the business context you’re operating in. You’re trying to introduce something new into the world. Who needs it and what is important to those people? When you’re discussing the initial design and development of your product, discuss the role of research with the team. Document and review assumptions to identify the areas in which doing some research might be the most beneficial. Get some early agreement on how research will be involved, keep track of your assumptions, and adopt a skeptical point of view. The approach and biases of the founder and the investors might dominate, so if you aren’t one of those, you will have to be very clear about the value of research to your endeavor and savvy about how to make your case.
- From a user experience perspective, the primary problem with Agile is that it’s focused on the process, not the outcomes. It doesn’t offer guidance on what to build, only how. Perhaps your team is more efficient and happier making a lot of stuff together, but how do you know that stuff is the best it could be, meeting real user needs and fit to compete in the marketplace?
- If you’re always reacting without a framework, you need some guiding mandates. Which customers do you listen to and why? Which user stories do you prioritize? What are you ultimately building toward? Research is not antithetical to moving fast and shipping constantly. You’ll need to do some upfront work for background and strategy and the overall framework. Then, as the work progresses, do continual research.
- Aggressively prioritize the highest-value users. Analyze and model data quickly and collaboratively. Defer less urgent research and complete it while the software is being constructed.
- Recruiting and scheduling participants is the most difficult part, so always be recruiting. Set up windows of time with different participants every three weeks. When you have them, you can either conduct an ethnographic interview to understand their behavior before the next round of development or do some usability testing on the current state of the application.
- Use what you learn from the initial user research and analysis to create personas that inform high-level sketches and user stories. Then, when the team is working on a feature that has a lot more engineering complexity than interaction design complexity, you can fit in additional evaluative research.
- Throughout the development cycle, the designers can use research to function as a periscope, keeping an eye out for new insights about users and competitive opportunities while doing usability testing on whatever is ready.
- Wherever there is research there is bias. Your perspective is colored by your habits, beliefs, and attitudes. Any study you design, run, or analyze will have at least a little bit of bias.
- You can’t eliminate it completely—but the simple act of noting potential or obvious bias in your research process or results will allow you to weigh the results more appropriately.
- Below is a starter set of ethical concerns you should keep in mind whenever you are doing research. (More thorough guidelines can be found on the ICC/ESOMAR Code on Market and Social Research website.)
- Be a skeptic. Get in the habit of asking a lot of questions. Question all your assumptions and determine whether you need to check your facts. If you’re constantly on the lookout for threats and potential points of failure, you and your products will be stronger.
- You need to be aware of how much you don’t know and what that means.
- Discipline requires you to be ever-watchful for bad habits, shoddy thinking, and other human frailties that will undermine your efforts. Checklists substitute the experience of others for your own.
- Unless you know and can clearly state what you’re trying to find out and why, applied research is a pointless exercise.
- A successful study is preceded by expectation-setting for everyone involved, including the questions to be answered, the methods to be used, and the decisions to be informed by the findings.
- Allow sufficient time for analysis.
- Notes or it didn’t happen. Effective research requires effective reporting, and sharing your results and recommendations with others. A good report doesn’t have to be arduous to compile or read. It needs to be sufficiently informative and very clear to anyone who needs to make decisions based on the research.
- To make the best use of your time and truly do just enough research, try to identify your highest-priority questions—your assumptions that carry the biggest risk.
- Some assumptions are higher-risk than others.
- Ask this question: given our stated business goals, what potential costs do we incur—what bad thing will happen—if, six months from now, we realize:
- We are solving the wrong problem.
- We were wrong about how much organizational support we have for this project.
- We don’t have a particular competitive advantage we thought we had, or we didn’t see a particular competitive advantage before our competitor copied us.
- We were working on features that excited us but don’t actually matter that much to our most important customers.
- We failed to reflect what is actually most important to our users.
- Our users don’t really understand the labels we’re using.
- We missed a key aspect of our users’ environments.
- We were wrong about our prospective users’ habits and preferences.
- Better understanding of your target users mitigates the risk by validating the assumption and informing your design with real user priorities. In addition, you might uncover opportunities to provide something of even greater value to that same audience.
- No matter how much research you do, there will still be things you wish you’d known, and there are some things you can only learn once your design is out there in the world. Design is an iterative process. Questions will continue to crop up. Some of them you can answer with research and some you can only answer with design. Even with research, you’ll need to create a few iterations of the wrong thing to get to the right thing.
- If you don’t have enough information, or what you’re finding doesn’t quite hold together, the pieces will rattle around in your head. Ask a few more questions or talk to a few more people. Talk through the results. The pieces will fall into place. Learn to listen for that click.
The Process
- One way to know you’ve done enough research is to listen for the satisfying click. That’s the sound of the pieces falling into place when you have a clear idea of the problem you need to solve and enough information to start working on the solution.
- Whatever type of research you’re doing, and wherever it falls in your schedule, follow these six steps:
- Define the problem.
- Select the approach.
- Plan and prepare for the research.
- Collect the data.
- Analyze the data.
- Report the results.
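The six steps above can be sketched as a simple ordered checklist a team might track per study. The class and field names below are illustrative, not from the book:

```python
from dataclasses import dataclass, field

# The six research steps, in order
STEPS = [
    "Define the problem",
    "Select the approach",
    "Plan and prepare for the research",
    "Collect the data",
    "Analyze the data",
    "Report the results",
]

@dataclass
class Study:
    problem_statement: str
    completed: list = field(default_factory=list)

    def complete(self, step: str) -> None:
        # Enforce the order: each step must follow the previous one
        expected = STEPS[len(self.completed)]
        if step != expected:
            raise ValueError(f"next step should be {expected!r}")
        self.completed.append(step)

study = Study("Describe how parents plan weekend activities")
study.complete("Define the problem")
study.complete("Select the approach")
```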
- Just as you need a clearly articulated problem to create a solid design solution, a useful research study depends on a clear problem statement. In design, you’re solving for user needs and business goals. In research, you’re solving for a lack of information. A research problem statement describes your topic and your goal.
- You want to know when you’re finished, so base your statement on a verb that indicates an outcome, such as “describe,” “evaluate,” or “identify.” Avoid using open-ended words like “understand” or “explore.” You’ll know when you have described something. Exploration is potentially infinite.
- The topic and nature of your questions will guide your choice of research activities.
- If your question is about users themselves, you’ll be doing user research, or ethnography. If you want to assess an existing or potential design solution, you’ll be doing some sort of evaluative research.
- Once you’ve selected the approach, write a quick description of the study by incorporating the question. For example: “We will describe how parents of school-age children select and plan weekend activities by conducting telephone interviews and compiling the results.”
- In the beginning, don’t worry about getting everything right. If you don’t know, go with your best guess. Since research is about seeking out new information, you’re going to encounter new situations and unpredictable circumstances. Make friends with the unexpected. And prepare to change the plan you’ve made to adapt once you have facts.
- Your research plan should include your problem statement, the duration of the study, who will be performing which roles, how you will target and recruit your subjects, plus any incentives or necessary tools and materials.
- Good recruiting puts the quality in your qualitative research. Since you’ll probably be working with a small sample size, you need the individual participants to be as good as they can be. Participants are good to the extent they represent your target. If participants don’t match your target, your study will be useless.
- A good research participant:
- Shares the concerns and goals of your target users.
- Embodies key characteristics of your target users, such as age or role.
- Can articulate their thoughts clearly.
- Is as familiar with the relevant technology as your target users.
- If you need people in a certain geographic area, see whether there are local community sites or blogs that would announce it as a service. Referring to it as “design research” rather than “marketing research” goes a long way in the goodwill department.
- When recruiting, be vague about the contents of the actual test. If you recruit people from the site you are testing, then just refer to “an interview about this website.”
- The more organized you are in gathering and storing your data, the more effective and pleasant the analysis will be.
- Use a consistent naming convention for research documentation, such as “Study-Subject Name-Year-Month-Day.”
- Take a few moments between sessions to check the files and make sure they’re named correctly and saved in the right place, and note your initial impressions while you’re at it. A few quick thoughts while you’re fresh will give you a great place to get the analysis started.
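As a sketch of that naming convention, a small helper (function and argument names are assumed, not from the book) can build the filenames consistently:

```python
from datetime import date
import re

def session_filename(study: str, subject: str, session_date: date, ext: str = "txt") -> str:
    """Build a filename following the Study-Subject Name-Year-Month-Day convention."""
    # Collapse stray whitespace so each part stays a clean token
    clean = lambda s: re.sub(r"\s+", " ", s.strip())
    return f"{clean(study)}-{clean(subject)}-{session_date:%Y-%m-%d}.{ext}"

print(session_filename("Checkout Study", "Jane Doe", date(2024, 3, 15)))
# Checkout Study-Jane Doe-2024-03-15.txt
```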
- A simple interview remains the most effective way to get inside another person’s head and see the world as they do.
- Usability testing is simply the process of conducting a directed interview with a representative user while they use a prototype or actual product to attempt certain tasks. The goal is to determine to what extent the product or service as designed is usable—whether it allows users to perform the given tasks to a predetermined standard—and hopefully to uncover any serious, resolvable issues along the way. The sooner and more often you start doing it, and the more people on your team are familiar with the process, the more useful it is. You shouldn’t even think of it as a separate activity, just another type of review to ensure you’re meeting that set of needs. Business review. Design review. Technical review. Usability review.
- Usability testing can:
- Uncover significant problems with labeling, structure, mental model, and flow, which will prevent your product from succeeding no matter how well it functions.
- Let you know whether the interface language works for your audience.
- Reveal how users think about the problems you purport to solve with your design.
- Demonstrate to stakeholders whether the approved approach is likely to meet stated goals.
- Usability testing cannot:
- Provide you with a story, a vision, or a breakthrough design.
- Tell you whether your product will be successful in the marketplace.
- Tell you which user tasks are more important than others.
- Substitute for QA-testing the final product.
- The Pew Research Center’s Internet & American Life Project is a free and reputable source of data. As the name implies, the work focuses on Americans, but it’s a terrific place to start. Their work is typically survey-based, and good for thinking about trends. (Also, their reports are a good model for communicating clearly about research.)
- What does it all mean? Once you have collected the data, gather it all together and look for meaningful patterns. Turn the patterns into observations, and from those, recommendations will emerge.
- If you are working with a design team, get as many members as possible involved in the analysis. A group can generate more insights faster, and those insights will be shared and internalized far more effectively than if you simply circulated a report.
- Analysis is a fun group activity. You get into a room with your team, review all the notes together, make observations, and turn those into actionable insights.
- Here’s a good baseline structure that can be modified to suit your project’s needs:
- Summarize the goals and process of the research. (What did you want to find out? Who from your side participated and in which roles?)
- Describe who you spoke with and under which circumstances (number of people, on the phone or in person, etc.).
- Describe how you gathered the data.
- Describe the types of analysis you will be doing.
- Pull out quotes and observations.
- Group quotes and observations that typify a repeated pattern or idea into themes; for example, “participants rely on pen and paper to aid memory,” or “the opinions of other parents are trusted.”
- Summarize findings, including the patterns you noticed, the insights you gleaned from these patterns, and their implications for the design.
- Document the analysis in a shareable format.
- You are looking for quotes and observations that indicate:
- Goals (what the participant wants to accomplish that your product or service is intended to help them with or otherwise relates to).
- Priorities (what is most important to the participant).
- Tasks (actions the participant takes to meet their goal).
- Motivators (the situation or event that starts the participant down the task path).
- Barriers (the person, situation, or thing that prevents the participant from doing the task or accomplishing the goal).
- Habits (things the participant does on a regular basis).
- Relationships (the people the participant interacts with when doing the tasks).
- Tools (the objects the participant interacts with while fulfilling the goals).
- Environment (what else is present or going on that affects the participant’s desire or ability to do the tasks that help them meet their goals).
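Once quotes are coded with these categories, grouping them into themes is mechanical. A minimal sketch, with sample quotes invented purely for illustration:

```python
from collections import defaultdict

# Hypothetical coded notes: each observation is tagged with one of the
# categories from the list above (goals, priorities, barriers, habits, ...)
observations = [
    ("goals", "I just want the week planned by Thursday night."),
    ("barriers", "The school calendar is a PDF, so I retype everything."),
    ("habits", "Every Sunday I check the weather before anything else."),
    ("barriers", "My partner and I never see the same version of the plan."),
]

# Group quotes by category so repeated patterns become visible
themes = defaultdict(list)
for category, quote in observations:
    themes[category].append(quote)

for category, quotes in sorted(themes.items()):
    print(f"{category} ({len(quotes)}):")
    for q in quotes:
        print(f"  - {q}")
```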
- There will be some people who would never realistically use your product. Don’t try to accommodate them in your model just because you talked to them.
- Always write up a brief, well-organized summary that includes goals, methods, insights, and recommendations. When you’re moving fast, it can be tempting to talk through your observations and move straight to designing, but think of your future self. You’ll be happy you took the trouble when you want to refer to the results.
- The only way to design systems that succeed for imperfect humans in the messy real world is to get out and talk to people in the messy real world. Once you start researching, you won’t feel right designing without it.
Organizational Research
- Design doesn’t happen in a vacuum. Design happens in the proximity of people with a lot on their minds.
- It’s inescapable that the nature of an organization matters to the design process. Budgets, approvals, timing, and resource availability can all depend on successfully negotiating an organization. The ultimate success of a product or service depends on how well it fits into everything else the organization is doing and how well the organization can and will support it.
- Alternatively, to support “not failing at all, if we can avoid it,” identify the assumptions that pose the greatest risk and suggest activities to address those assumptions. Design Staff, an excellent product design and research blog, is written by the Google Ventures Design Studio team specifically for startups.
- Your research should include anyone without whose support your project will fail. This might include executives, managers, and subject matter experts, as well as staff in various roles. Be generous in your selection. A few additional hours in conversation will help ensure you’re both well informed and protected from an overlooked stakeholder popping up too late.
- Stakeholder interviews will help you understand the essential structure of the organization, how your work fits into the organization as a whole, and the approval process for various aspects of your project. They’ll also provide you with some less obvious opportunities to influence your project’s chances of success.
- “Interviews with project stakeholders offer a rich source of insights into the collective mind of an organization. They can help you uncover areas of misalignment between a company’s documented strategy and the attitudes and day-to-day decision-making of stakeholders. They can also highlight issues that deserve special consideration due to their strategic importance to a business.”
- A significant benefit of organizational research is political. You don’t want your hard work to get trampled in a turf war you didn’t know existed.
- You may find that someone in the organization is deeply opposed to your work. If you know why, you may be able to get them on your side. Talking with stakeholders is an excellent opportunity to sell people on the value of your work in terms that matter to them.
- You need to understand how your work might solve or create problems throughout the organization, and how the organization will prioritize those problems and solutions.
- It’s shocking how many projects get underway lacking clear, or even any, business requirements. How do you know whether your work has succeeded? If it’s fully functional? If the users are happy? If your work doesn’t support the business, you have failed, no matter how good the design.
- How important is the work to the organization, really? The answer might be surprising. It makes a big difference whether the project at hand is genuinely valued by the organization.
- For the definitive word on making influential people feel heard, read Paul Ford’s excellent essay “The Web Is a Customer Service Medium”. Here is the heart of it: “Why wasn’t I consulted,” which I abbreviate as WWIC, is the fundamental question of the web. It is the rule from which other rules are derived. Humans have a fundamental need to be consulted, engaged, to exercise their knowledge (and thus power).
- Asking someone for input before you get started is a peerless prophylactic against that person rearing up late in the game with insurmountable objections. Inquiry is flattery. Inviting people to participate empowers them.
- Never underestimate the ability of a single individual—no matter how seemingly unimportant or obscure—to really fuck things up for you once they set their mind to it.
- Your work will affect everyone in an organization, even those who don’t directly use the product, service, or system you’re designing on its behalf. Executives will have to defend it as a part of the overall strategy. Customer service will have to support it. Salespeople will have to sell it. Production staff will have to maintain it.
- The purported customers or audience members are not the only users of the product you’re building. Founders may be using it as proof of concept to raise more capital from investors. Salespeople may rely on prospects interacting with it before they can close the deal. Company representatives might expect to be asked questions about it when they’re out at conferences. You’ll benefit from gaining their perspectives and knowing their priorities in that regard.
- Don’t wait for people inside the organization to come to you, and don’t rely on a higher-up to tell you who to talk to. Based on the goals of this project, it’s up to you to determine whose input you need.
- Understanding how what you’re proposing to build relates to the organization responsible for it means that you can anticipate changes to workflow and minimize them where possible, or prepare people for changes when they’re necessary.
- Send an agenda and the key questions ahead—not all the questions, but the ones the participant will benefit from knowing in advance. More complex topics might require some forethought. It’s best to avoid making people feel ambushed or unprepared.
- The basic flow of a stakeholder interview is as follows:
- Introduce yourself and restate the purpose of the meeting. It should be something like: “We’re starting to work on a complete redesign of Company X’s website and we want to get your input. We’ll use your input to make sure that the design meets your needs as well as those of the visitors.”
- Explain to what extent the information will be shared, by role or business function. “Please feel free to be totally frank. Honest answers are essential to this process. We’re talking to people throughout the organization, and will group answers together rather than focusing on what one person said. If we use a direct quote, we will not attribute it to you personally.”
- Like a good journalist, don’t narc on your sources. Get something in writing from the person directing or approving this research, stating that people can speak freely without fear of reprisal.
- In addition to name and title, these are the basic questions you’ll want to ask:
- How long have you been in this role?
- What are your essential duties and responsibilities?
- What does a typical day look like?
- Who are the people and teams you work most closely with?
- How well is that relationship working?
- Regarding the project we’re working on, how would you define success?
- From your perspective, what will have changed for the better once it’s complete?
- Do you have any concerns about this project?
- What do you think the greatest challenges to success are?
- Internal and external?
- How do you expect your interactions with other people inside or outside this organization will change based on the outcome of this project?
- Then, there are the more specific questions that depend on the project. Stakeholders may themselves be users, often of back-end systems or administrative functions:
- What are your most common tasks with the system?
- What problems have you noticed?
- What kinds of work-arounds do you use?
- Do you have any concerns about this project?
- Is there anyone else I should talk to?
- The goal of gathering and documenting business requirements is to ensure that all the stakeholders agree on the purpose and limitations of what you’re doing. You want to increase your chance of success, connect what you’re doing to the goals of the business, increase collaboration, and save costs, particularly those associated with changes.
- Requirements must be:
- Cohesive. The requirements all refer to the same thing.
- Complete. No missing information. No secret requirements that didn’t make it onto the list.
- Consistent. The requirements don’t contradict each other.
- Current. Nothing obsolete.
- Unambiguous. No jargon. No acronyms. No opinions.
- Feasible. Within the realm of possibility on this project.
- Concise. Keeping them short and clear will increase the chances that they are read, understood, remembered, and used. Aim for no more than two to three pages.
- What to include in your documentation
- Problem statement and assumptions
- Goals
- Success metrics
- Completion criteria
- Scope
- Risks, concerns, and contingency plans
- Verbatim quotes
- Workflow diagrams
- A solid understanding and honest assessment of an organization and its business is necessary for the success of any significant design project.
- Even just the process of conducting research can be beneficial if only because it provides the motivation to open atrophied or nonexistent communication channels. Performed with tact and rigor, organizational research can neutralize politics, clarify requirements, and improve the odds that changes will be fully understood and take hold.
- Organizational habits and capabilities are just as relevant as target user behaviors and needs, although they’re less frequently included as fundamental topics of design research. And the true nature of workflow and interpersonal relationships is just as ripe for ethnographic exploration.
User Research
- You do user research to identify patterns and develop empathy. From a designer’s perspective, empathy is the most useful communicable condition: you get it from interacting with the people you’re designing for.
- When we talk about user research as distinguished from usability testing, we’re talking about ethnography, the study of humans in their culture. We want to learn about our target users as people existing in a cultural context. We want to understand how they behave and why. This is very different from gathering opinions. It isn’t just surveying or polling. And it’s definitely not focus groups.
- Ethnographic design research allows design teams to:
- Understand the true needs and priorities of your customers/readers/target audience/end users.
- Understand the context in which your users will interact with what you’re designing.
- Replace assumptions about what people need and why with actual insight.
- Create a mental model of how the users see the world.
- Create design targets (personas) to represent the needs of the user in all decision-making.
- Hear how real people use language to develop the voice of the site/application.
- For you to design and develop something that appeals to real people and reflects their priorities, you’ll need to talk with or observe representative users directly in their context—their regular environment. This reduces your risk of making bad assumptions based on your own experiences, hopes, or subjective preferences. That context includes the physical environment, mental model, habits, and relationships.
- Because it’s impossible to want what you can’t imagine, you risk the scope of your ideas being limited by the imaginations of others.
- The first rule of user research: never ask anyone what they want.
- Your challenge as a researcher is to figure out how to get the information you need by asking the right questions and observing the right details.
- Radically simplified, the fundamental question of ethnography is, “What do people do and why do they do it?” In the case of user research, we add “…and what are the implications for the success of what I am designing?”
- To do user research, you’ll need to make a slight mental shift to “how should what I’m designing interact with this person” and then do your best to be totally nonjudgmental. That’s all it takes to stoke the human data-gathering machine.
- Lively narratives help everyone on your team rally around and act on the same understanding of user behavior. From the mundane realities of real people, personas emerge—fictional protagonists with important goals—along with scenarios, the stories of how they use the product you’re designing to meet those goals. Personas will keep you honest. You design for them, not for you or for your boss.
- The goal of interviewing users is to learn about everything that might influence how they will use what you’re creating. Good interviewing is a skill you develop with practice. The great myth is that you need to be a good talker. Conducting a good interview is actually about shutting up. This can be very hard, especially when you’re enthusiastic about the topic. Remember, the people you’re interviewing want to be liked. They want to demonstrate their smarts. When you’re interviewing someone, you know nothing. You’re learning a completely new and fascinating subject: that person.
- An interview has three acts, like a play or a spin class: the introduction and warm-up, the body of the interview, and the conclusion.
- Once you have established who you want to talk to and what you want to find out, create your interview guide. This is a document you should have with you while you’re interviewing to ensure that you stay on topic and get all of the information you need. The interview guide should contain:
- The brief description and goal of the study. This is for you to share with the participant and use to remind yourself to stay close to the topic.
- The basic factual or demographic questions for putting the participant’s answers in context. These will vary depending on the purpose of the interview, but often include name, gender, age, location, and job title or role.
- A couple of icebreaker or warm-up questions to get the participant talking. Most people know this as “small talk.” Improvise these based on the demographic information.
- The questions or topics that are the primary focus of the interview.
- If the subject doesn’t offer enough information on a topic, ask a follow-up or probing question, such as “Tell me more about that.” Allow pauses to let the story through. Silence is uncomfortable. Get used to it and don’t rush to fill gaps in the flow of conversation. You want your subject to do that.
- Once you have the information you were looking for, and hopefully even more, make a gentle transition to the wrap-up. Say something like “That’s it for my questions. Is there anything else you’d like to tell me about what we discussed?”
- Once you’ve done your part to get the subject talking, get out of the way. You should strive to be a nearly invisible, neutral presence soaking up everything the other person has to say. Think of them as the world’s foremost expert on themselves, which is the all-absorbing matter at hand. Insert yourself only when necessary to redirect back on topic or get clarification. You will know when your interview is going particularly well because you won’t be able to get a word in, but you will be getting answers to all your questions.
- Keep an ear out for vague answers. You want details and specifics. Always be ready to bust out a probing question such as “Why is that?” or “Tell me more about that.”
- Handy checklist for effective user research:
- Create a welcoming atmosphere to make participants feel at ease.
- Always listen more than you speak.
- Take responsibility to accurately convey the thoughts and behaviors of the people you are studying.
- Conduct your research in the natural context of the topic you’re studying.
- Start each interview with a general description of the goal, but be careful not to focus responses too narrowly.
- Encourage participants to share their thoughts and go about their business.
- Avoid leading questions and closed yes/no questions. Ask follow-up questions.
- Prepare an outline of your interview questions in advance, but don’t be afraid to stray from it.
- Whenever possible, snap photos of interesting things and behaviors.
- Also note the exact phrases and vocabulary that participants use.
- Pay attention after you stop recording. You might get a valuable revelation.
- Here is a sample set of questions to modify to meet your needs:
- Tell me about your job.
- Walk me through a typical week in your life.
- How often are you online?
- What computers or devices do you use?
- When do you use each of them?
- Do you share any of them?
- What do you typically do online?
- What do you typically do on your days off?
- How do you decide what to do?
- Tell me about how your children use the internet. How do you decide what to do on your days off with your kids?
- What are your particular non-work interests?
- What do you read online besides the news?
- How frequently do you visit museums in your town? Which ones?
- What prompts you to go?
- The interview is the basic unit of ethnographic research. Once you’ve completed your interviews, analyze them all together to find themes, including user needs and priorities, behavior patterns, and mental models. Note the specific language and terms you heard so you can better reflect the way users think and talk in the actual interface.
- If you are doing generative research, look to the needs and behaviors you discover to point out problems that need solving. Turn the clusters around user types into personas that you can use for the life of the product or service you’re working on.
- Enter the participant’s actual environment and observe as they go about the specific activities you’re interested in studying. By doing this you will be able to see actual behaviors in action and learn about all of the small things you might not hear about in an interview, such as a janky work-around so unconscious and habitual the individual has completely forgotten it.
- Contextual inquiry is a deeper form of ethnographic interview and observation. It is particularly useful for developing accurate scenarios, the stories of how users might interact with potential features, as well as identifying aspects of the user’s environment that will affect how someone might use a particular product.
- Contextual inquiry can be very inspirational. You might observe problems and opportunities you had no idea existed and open the door to some innovative and interesting ideas. Be ready to learn that people don’t need what you thought they need at all, but that they do need something totally different. Joyfully release all of your preconceived plans and notions.
- Focus groups are the antithesis of ethnography. Unlike interviewing participants individually or observing people in their natural environment, the focus group creates an artificial environment that bears no resemblance to the context in which what you’re designing would actually be used.
- Recruiting and screening participants is the most time-consuming and least informative aspect of user research. If you are doing a focus group, one bad recruit in the group can tank the entire session. In one-on-one interviews, at least that recruit won’t taint the pool.
- Accept no substitute for listening to and observing real people who need to do the things you’re designing a thing to help people do.
- The information you gather will keep paying dividends as you continue to collect and examine it, grounding your design decisions in real human needs and behaviors.
- As you develop the skill of stepping out of yourself to become an effective design ethnographer you will develop powerful empathy that can inspire you to find creative, effective solutions.
- The hardest competitor to beat is the one your potential customers are using right now. If they have to stop using that one to start using yours, they may incur a switching cost. People are lazy, forgetful creatures of habit. Your target customers have to love you more than they hate change.
- You need to know not only who your competitors are from the perspective of the business (that’s generally obvious) but who competes for attention in the minds of your target users. Attention is the rarest resource and the one you need to survive. Unless your goal is to sell one very expensive item to a small number of people, you need to convert attention into habit.
- Competitive research begins with a broad perspective on the competition. You may be looking for things to steal, like approaches and customers. You need to see how other people are solving similar problems, and identify opportunities to offer something uniquely valuable. You need to do this frequently and quickly; get in the habit of constantly asking not only “What matters to our customers?” (the user question) but “How are we better at serving that need than any competitor?” (the product question) and “How can we show our target customers that our product is the superior choice?” (the marketing question).
- A SWOT analysis organized in a simple grid can help you grasp your competitive position.
- For each competitor and each site, product, service, or touchpoint, answer the following:
- How do they explicitly position themselves?
- What do they say they offer?
- Who do they appear to be targeting?
- How does this overlap or differ from your target audience or users?
- What are the key differentiators?
- The factors that make them uniquely valuable to their target market, if any
- To what extent do they embody each of your positive/negative attributes?
- How do the user needs or wants they’re serving overlap or differ from those that you’re serving or desire to serve?
- What do you notice that they’re doing particularly well or badly?
- Based on this assessment, where do you see emerging or established conventions in how they do things, opportunities to offer something clearly superior, or good practices you’ll need to adopt or take into consideration to compete with them?
- In addition to looking at how your competitors position and differentiate themselves, take a good, hard look at your own brand. Is it doing the work it needs to and setting the right expectations for the overall experience? Do you need to do some work on it?
- For many interactive products and services, there is no “brand” apart from the service itself. The brand experience is the user experience. The visual design of the interface is the brand identity. The brand personality is the voice of the interface language.
- Your brand is simply your reputation and those things that signify your identity and reputation to your current and potential customers. That reputation offers a promise of all the good things you do for your customers, most of which exist only in the customer’s mind. The stronger the brand, the more awesome associations pop up in more people’s minds.
- Here are the questions you need to ask about your brand:
- Attributes: which characteristics do you want people inside and outside the company to associate with the brand or product, and which do you want to avoid?
- Value proposition: what does your product or service offer that others do not and how does your brand communicate this?
- Customer perspective: when you conduct ethnographic interviews with existing or potential customers, what associations do they have with your brand?
- What makes a good name varies from market to market like everything else, but at a minimum, it needs to be unique, unambiguous, and easy to spell and say.
- Don’t just test your own product—test the competitor’s! You can use task-based usability testing to evaluate a competitor’s website or application. This allows you to understand their strengths and weaknesses directly from the user’s point of view, identify opportunities to develop your advantages, and gain insight into how target users conceptualize core tasks and key features.
- The competitive landscape and how what you’re designing fits into it may be the fastest moving target of all research topics. New options are appearing and product categories are collapsing every day. Just taking a user’s-eye view of how your company, product, and message measure up will give you some competitive advantage. The accurate, user-centered perspective of your comparative strengths and weaknesses will help you focus your message and hone your image.
Evaluative Research
- Evaluation is assessing the merit of your design. It’s the research you never stop doing. There are several ways to go about it, depending on where you are in the project.
- In the early stages, evaluation takes the form of heuristic analysis and usability testing. You can test an existing site or application before redesigning. If you have access to a competitor’s service or product, you can test that. You can test even the very earliest sketches.
- Once a site or application is live, even if it’s in private alpha, you can start looking at quantitative data and use site analytics to see how people are actually interacting with the system and whether that meets your expectations.
- The best way to assess a functional design is through a combination of quantitative and qualitative methods. The numbers will tell you what’s going on, and the individual people will help you understand why it’s happening.
- Heuristic inspection is not a substitute for usability testing, but it can be a good sanity check.
- Nielsen’s ten heuristics are a useful reference for heuristic inspection.
- Every internal design review is an opportunity for a mini heuristic evaluation. If you’re about to embark on a major redesign, it makes a tremendous amount of sense to identify key issues through usability testing.
- Usability is the absolute minimum standard for anything designed to be used by humans. If a design thwarts the intended users who attempt the intended use, that design is a failure from the standpoint of user-centered design.
- The easier it is for your customers to switch to an alternative, the more important usability is to the success of your product or service.
- The more complex a system is to design and build, the more work is required to ensure that it’s usable—but that work is always worth doing. (This is also an argument for keeping feature sets simple.) If the desire to rush to market trumps usability, you might see your first mover advantage dissolve as soon as a competitor copies all your functionality and leapfrogs your ease of use.
- Barriers to usability are barriers to sales. On the other hand, a more usable product will get better word of mouth and lower support costs.
- According to Nielsen, usability is a quality attribute defined by five components:
- Learnability: how easy is it for users to accomplish basic tasks the first time they come across the design?
- Efficiency: once users have learned the design, how quickly can they perform tasks?
- Memorability: when users return to the design after a period of not using it, how easily can they reestablish proficiency?
- Errors: how many errors do users make, how severe are these errors, and how easily can they recover from the errors?
- Satisfaction: how pleasant is it to use the design?
- Every aspect of a digital design that thwarts an intention it purported to fulfill might as well be a sharp ragged edge, a piece of broken glass, or a splinter. Would you offer a broken glass to a guest? All of your users are your guests. It is your job to make sure they don’t cut themselves on the stuff you make.
- Cheap tests first, expensive tests later
- Usability testing can be more or less expensive. Don’t use expensive testing—costly in money or time—to find out things you can find out with cheap tests. Find out everything you can with paper prototypes or quick sketches before you move to a prototype. Find out everything you can in the comfort of your own office before you move into the field. Test with a general audience before you test with specific audiences who take more time and effort to find.
- The second most expensive kind of usability testing is the kind that you put off until very late in the process, when you risk finding out that there are huge usability problems that will be very difficult to fix. The most expensive of all is the kind your customers do for you after launch by way of customer service.
- How often you test depends on how frequently significant design decisions are being made.
- Preparing for usability testing: the most difficult part of usability testing is determining how it fits into your process as a decision-making input. There is no one way, but there are a few essential principles:
- Build usability practices into your workflow from the start, the same way you account for internal reviews of work in progress.
- Create a testing process and checklist that includes all of the information and equipment you need.
- Always be recruiting. Maintain a database, even just a Google doc, of potential participants and their contact information.
- Decide who’s in charge of this stuff. A point person makes everything operate more smoothly.
- What you will need for usability testing:
- A plan.
- A prototype or sketch.
- Four to eight participants of each target user type based on personas (ideally) or marketing segments.
- A facilitator.
- An observer.
- One or more methods of documentation.
- A timer or watch.
- Not all tasks are created equal. When you go into a usability test, you should have a clear idea which failures are a bigger deal.
- A usability test revolves around tasks. Ideally you have personas that you have been using throughout the design process and you can use them and their core tasks as a jumping off point for usability. The features you want to test should likewise have associated scenarios and tasks. For each feature, write a very brief story that offers background on how the user arrived there and what they are trying to accomplish.
- Once you have your tasks, make a checklist test plan that you use to run and document each round of testing.
- The test plan lays out what you’re going to do, how you’re going to conduct the test, which metrics you’ll capture, the number of participants you’re going to test, and which scenarios you’ll use.
- The US Department of Health and Human Services maintains usability.gov, which is a resource for making useful and usable websites.
- This checklist can be used for both planning the usability test and writing a report. Modify it to fit your needs:
- Objectives.
- Subject of the test: what are you testing and what state is it in?
- Methodology.
- Participants and recruiting.
- Procedure.
- Tasks.
- Usability goals.
- Completion rate (the percentage of tasks the user was able to complete).
- Error-free rate (the percentage of tasks completed without errors or hiccups).
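The two usability-goal metrics above are simple arithmetic over per-task results. Here is a minimal sketch; the `TaskResult` shape and the sample data are illustrative assumptions, not from the book.

```python
# Hypothetical sketch: computing completion rate and error-free rate
# from per-task usability test results.
from dataclasses import dataclass

@dataclass
class TaskResult:
    completed: bool   # did the participant finish the task?
    error_free: bool  # finished without errors or hiccups?

def completion_rate(results):
    """Percentage of tasks the users were able to complete."""
    return 100 * sum(r.completed for r in results) / len(results)

def error_free_rate(results):
    """Percentage of tasks completed without errors or hiccups."""
    return 100 * sum(r.error_free for r in results) / len(results)

# Illustrative results from one round of testing (assumed data).
results = [
    TaskResult(completed=True, error_free=True),
    TaskResult(completed=True, error_free=False),
    TaskResult(completed=False, error_free=False),
    TaskResult(completed=True, error_free=True),
]
print(completion_rate(results))  # 75.0
print(error_free_rate(results))  # 50.0
```

Note that error-free rate can never exceed completion rate, since a task finished without errors is by definition completed.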
- As long as you have an open mind, nothing is more interesting and valuable than seeing your precious theories of how people will interact with a design crash against the rocky shoals of reality.
- It’s up to the facilitator to present the scenarios and tasks that are being tested. Unclear tasks can’t be tested. A good facilitator is personable and patient. A good facilitator can warm the participant up and then dispassionately observe as the participant flails about with no idea what to do next.
- Avoid leading the user and helping them when they get lost. Embrace the uncomfortable silences.
- Frequently, users who encounter a usability issue are quick to blame themselves rather than the system. This is how people have been conditioned by frequent exposure to less than usable products. If this happens, ask the participant to describe how they expected the system to work and why they had that expectation.
- Even if you are set up to record, it’s very important to have a second person observing the tests and taking notes. This allows the facilitator to be responsive and the observer to be as observant as possible, creating the smallest number of distractions.
- Audio recording is fantastic. Designers should be recording everything all the time. Video recording, by contrast, can be less valuable. The value of video is frequently a matter of good editing, and good editing takes vast amounts of time. Video also takes vast amounts of storage space.
- The observer will need to note the following:
- The participant’s reaction to the task.
- How long it takes to complete the task.
- If the user failed to complete the task.
- Any terminology that presented a stumbling block.
- The note-taker should work from a copy of the test script with space to insert annotations. The most important items to note are areas where the user exhibited nonverbal frustration, verbatim quotes, and any features that were particularly successful or unsuccessful. If the note-taker can manage an approximate time code, that will make analysis easier.
- The aim of usability testing is to identify specific significant problems in order to fix them. The outcome is essentially a ranked punch list with a rationale. Keep your source materials (e.g., session recordings or notes) organized so you can easily refer to them or provide more detail to anyone who is interested, or skeptical. Focus your written documentation on the issues, their severity, and recommended fixes.
- Rate each problem users encountered during the test on two scales: severity and frequency. You must look at both to ensure you’re prioritizing real obstacles rather than chasing a fluke.
- Severity:
- High: an issue that prevents the user from completing the task at all.
- Moderate: an issue that causes some difficulty, but the user can ultimately complete the task.
- Low: a minor problem that doesn’t affect the user’s ability to complete the task.
- Frequency:
- High: 30% or more of participants experience the problem.
- Moderate: 11–29% of participants experience the problem.
- Low: 10% or fewer of participants experience the problem.
- Once you’ve conducted the tests and rated the issues, sort them into three tiers. Each tier represents a combination of severity and frequency.
- Tier 1: high-impact problems that often prevent a user from completing a task. If you don’t resolve these you have a high risk to the success of your product.
- Tier 2: either moderate problems with low frequency or low problems with moderate frequency.
- Tier 3: low-impact problems that affect a small number of users. There is a low risk to not resolving these.
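The triage above can be sketched as a small function. The severity and frequency labels follow the book; the exact rule for combining them in mixed cases (e.g., high severity with low frequency) is my own reading of the three-tier scheme, so treat the scoring as an assumption.

```python
# Hypothetical sketch of the severity/frequency triage for usability issues.

def frequency_level(pct_affected):
    """Map the share of participants hitting a problem to a frequency label."""
    if pct_affected >= 30:
        return "high"
    if pct_affected >= 11:
        return "moderate"
    return "low"

def tier(severity, frequency):
    """Combine severity and frequency into a priority tier (1 = fix first).

    The score thresholds for mixed cases are an assumption, not from the book.
    """
    rank = {"low": 1, "moderate": 2, "high": 3}
    score = rank[severity] + rank[frequency]
    if score >= 5:
        return 1  # high-impact problems that often block a task
    if score >= 3:
        return 2  # moderate/low combinations
    return 3      # low-impact, rarely encountered

print(tier("high", "high"))                 # 1
print(tier("moderate", frequency_level(8))) # 2: moderate severity, low frequency
print(tier("low", "low"))                   # 3
```

The point of encoding it at all is consistency: every issue from every round of testing gets ranked the same way, so the punch list stays comparable over time.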
- Need to convince someone before you can make any changes? Watching actual users struggle with the system is more convincing than reading a report, and offers all the agitation of a suspense film. (Why doesn’t he see the button? It’s right there!)
- Verbatim quotes and video clips of failure presented in conjunction with a report can also be effective. Just make sure to connect the tasks you tested and the problems you found to high-priority business goals.
Analysis and Models
- This is where design truly starts. You take all this messy data and begin to organize it, and group it, and label the groupings.
- Through conversation, clarity will start to emerge. Clarity in the data analysis will translate to clarity of concept, content relationships, navigation, and interactive behaviors. And best of all, if you work collaboratively that clarity and deep understanding will be shared.
- The process is actually pretty simple:
- Closely review the notes.
- Look for interesting behaviors, emotions, actions, and verbatim quotes.
- Write what you observed on a sticky note (coded to the source, the actual user, so you can trace it back).
- Group the notes on the whiteboard.
- Watch the patterns emerge.
- Rearrange the notes as you continue to assess the patterns.
- You will end up with a visual representation of your research that you can apply toward your design work in a few different ways.
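The sticky-note bookkeeping in the steps above amounts to grouping coded observations by theme. A minimal sketch, assuming a simple tuple shape for each note (the participant IDs, theme names, and observations here are invented for illustration):

```python
# Hypothetical sketch of affinity-diagram bookkeeping: each sticky note is
# coded to its source participant so any insight can be traced back, then
# notes are clustered by an emerging theme.
from collections import defaultdict

def group_notes(notes):
    """Group (participant_id, theme, observation) tuples by theme."""
    clusters = defaultdict(list)
    for participant_id, theme, observation in notes:
        clusters[theme].append((participant_id, observation))
    return dict(clusters)

# Illustrative observations (assumed data, not from the book).
notes = [
    ("P1", "work-arounds", "exports to a spreadsheet before every report"),
    ("P2", "vocabulary", 'calls the dashboard "the board"'),
    ("P3", "work-arounds", "keeps a paper checklist next to the screen"),
]
clusters = group_notes(notes)
for theme, items in clusters.items():
    print(theme, len(items))
```

In practice the themes are not known up front; they emerge and get renamed as you rearrange the notes, which is exactly why the physical whiteboard version works so well.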
- An affinity diagram helps turn research into evidence-based design decisions.
- The act of creating an affinity diagram will allow you to distill the patterns and useful insights from the many individual quotes and data points you gather through interviews and observation.
- If you work collaboratively with your team on identifying and documenting these patterns, the value of that research will be multiplied rather than lost in translation.
- As you review the notes or recordings, write down anything interesting you observed on a sticky note. An observation is a direct quote or objective description of what the user did or said. Pull out all of the particularly interesting quotes. Flag those that seem to represent the particular needs of each user type. These will be useful for your personas. Also note the vocabulary that participants used to describe their goals and the elements of the tasks or systems you are working with, particularly if they differ from those used in your organization.
- The final step of the analysis is to identify the actionable design mandate or principle.
- In the usual course of product development, every interest other than the user has a say: business leaders will provide business goals and requirements, marketers will speak to marketing targets, engineers will speak to the technical constraints and level of effort required to develop particular features. Personas allow designers to advocate for users’ needs.
- A persona is a fictional user archetype—a composite model you create from the data you’ve gathered by talking to real people—that represents a group of needs and behaviors.
- Good personas might be the most useful and durable outcome of user research. Design, business strategy, marketing, and engineering can each benefit in their own way from a single set of personas. If you’re following an agile process, you can write your user stories based on a particular persona.
- A persona is a tool for maintaining an empathetic mind-set rather than designing something a certain way just because someone on the team likes it.
- Design targets are not marketing targets.
- Market segments do not translate into archetypes. And the user type with the highest value to your business may not be the one with the most value to the design process.
- How many personas do you need? As few as possible, while representing all relevant behavior patterns.
- A truly useful persona is the result of collaborative effort following firsthand user research.
- If you have interviewed some real people and worked collaboratively with your team to identify some patterns, you should be able to create some useful personas.
- The documentation doesn’t need to be lengthy or involved. You can create a vivid individual from a few key details
- A persona document should feel like the profile of a real individual while capturing the characteristics and behaviors most relevant to your design decisions
- If personas are your characters, scenarios are your plots. Each scenario is the story of how a persona interacts with your system to meet one (or more) of their goals. Running a persona through a scenario helps you think through your design from the user’s point of view. You can use scenarios at several points in your process: To flesh out requirements. To explore potential solutions. To validate proposed solutions. As the basics for a usability test script.
- You can write a scenario as a short text narrative, a step-by-step flow, or even a set of comic panels—whatever is easy for your team to create and use to keep each persona represented in design and technology decision-making.
- Scenarios are not themselves use cases or user stories, although they can influence each. A use case is a list of interactions between a system and a user, and is typically a way to capture functional requirements. Scenarios are from the perspective of the individual human user represented by the persona, not the perspective of the system or business process.
- You will know your personas are working when they become the first people you want to see any new idea.
- A mental model is an internal representation of something in the real world—the sum total of what a person believes about the situation or object at hand, how it functions, and how it’s organized.
- This representation is based on a combination of hearsay and accumulated experience.
- In design, “intuitive” is a synonym for “matches the user’s mental model.” The closer an interface fits that image, the easier it will be to learn, use, and navigate.
- You can use data from user research to diagram the (composite) mental model of each particular user type, and use that diagram to guide the design. This is, strictly speaking, a mental model model. However, particularly following consultant and author Indi Young’s work in this area (Mental Models: Aligning Design Strategy with Human Behavior), people in the business tend to use the one term as a catchall. So there are two types of mental models: the type each of us holds in our head to help us deal with the world, and the type designers sketch out to better create that world. For maximum success, be aware of the former and get to work on the latter.
- To design an application or a website, think about the mental models of the activities you want to support.
- As a designer, you have your own mental model of what you’re designing, and you have a mental model of the users themselves, your set of assumptions about what they know and how they will interact with your design. It’s easy to overestimate how well your view matches their reality.
- Documenting the user’s mental model allows you to not just get inside their head but get the inside of their head out of your head for everyone else to see.
- How to create a mental model:
- Do user research.
- Make an affinity diagram.
- Place affinity clusters in stacks representing the user’s cognitive space to create the model.
- These groups will include actions, beliefs, and feelings.
- Group the stacks around the tasks or goals they relate to.
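The grouping steps above can be sketched in code. This is a minimal illustration, not a method from the book: the interview quotes, cluster names, and task labels are invented for the example, and real affinity diagramming is a collaborative, physical or whiteboard activity rather than a script.

```python
from collections import defaultdict

# Hypothetical affinity notes from user interviews. Each observation has
# already been grouped into a cluster and related to a task (the manual,
# collaborative part of the method).
observations = [
    {"quote": "I compare prices on two sites first", "cluster": "compare options", "task": "choose tickets"},
    {"quote": "I worry the seats won't be together", "cluster": "seating anxiety", "task": "choose tickets"},
    {"quote": "I screenshot the confirmation page", "cluster": "keep proof", "task": "complete purchase"},
    {"quote": "I check reviews of the venue", "cluster": "compare options", "task": "choose tickets"},
]

def build_mental_model(observations):
    """Stack affinity clusters under the task or goal they relate to."""
    model = defaultdict(lambda: defaultdict(list))
    for obs in observations:
        model[obs["task"]][obs["cluster"]].append(obs["quote"])
    return {task: dict(clusters) for task, clusters in model.items()}

model = build_mental_model(observations)
for task, clusters in model.items():
    print(task)
    for cluster, quotes in clusters.items():
        print(f"  [{cluster}] {len(quotes)} observation(s)")
```

The resulting structure, tasks at the top and clusters of actions, beliefs, and feelings stacked beneath them, mirrors the diagram you would draw on a wall.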
- A conceptual model bridges the gap between mental model and system map.
- You can translate the mental model to a conceptual map that relates content and functionality according to the target user’s view. The model will form the application framework or the basis of the information architecture as you proceed into more detailed design.
- Task analysis is simply breaking one particular task into the discrete steps required to accomplish it.
- If you’re designing a site or application that addresses one or many complex tasks in helping users meet their goals, you can use task analysis. This method can be particularly helpful to map what people do in the real world to functionality you can offer on a site or in an application. For example, “purchasing tickets” sounds simple, but the online process is often a complex and stressful multistep flow with a lot of decision points.
- In addition to informing the feature set and flow of an application, task analysis will help you identify where specific content might support a user along their task path. Users might take very different paths than you anticipated, or be influenced by particular factors in the environment that you’ll need to consider in your designs.
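A task analysis can be captured as simple structured data. The sketch below breaks the book’s “purchasing tickets” example into discrete steps; the specific steps, decision points, and supporting-content ideas are illustrative assumptions, not from the book.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    action: str
    decision_points: list = field(default_factory=list)
    supporting_content: str = ""  # content that could support the user here

# Hypothetical breakdown of "purchasing tickets" into discrete steps.
purchase_tickets = [
    Step("Search for the event", ["Which date? Which city?"]),
    Step("Choose seats", ["Price vs. view trade-off"], "seat-map preview"),
    Step("Create account or check out as guest", ["Is signup worth it?"], "guest checkout explainer"),
    Step("Enter payment details", [], "accepted payment methods"),
    Step("Confirm and receive tickets", ["Print, mobile, or will-call?"], "delivery options FAQ"),
]

for i, step in enumerate(purchase_tickets, 1):
    print(f"{i}. {step.action}")
```

Even this rough listing makes the point from the notes concrete: a “simple” purchase hides a multistep flow with several decision points, each a place where design and content can reduce stress.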
- Communicating the meaning and value of research is a design activity itself. And the act of working together to synthesize individual observations will ensure that your team has a better shared understanding than a report could ever deliver.
Quantitative Research
- Optimizing a design is the chief aim of quantitative research and analysis.
- When you set out to optimize, you will run up against one of the oldest and thorniest philosophical problems, that of the Good. What is good? How do you know it’s good? What does it mean to be best? What are you optimizing for? How will you know when you have attained that optimal state and have reached the best of all possible worlds? What if, in optimizing for one thing, you cause a lot of other bad things to happen?
- You release your work into the world to see how right you were—and the fun begins. No matter how much research and smart design thinking you did up front, you won’t get everything right out of the gate, and that’s OK.
- Once you can measure your success in numerical terms, you can start tweaking. The elements of your design become so many knobs and levers you can manipulate to get to the level of success you’d envisioned, and beyond.
- Decision-makers love data, so being handy with the stats can be to your advantage in arguments.
- As soon as you have some data, you can start looking for trends and patterns. It might be a little overwhelming at first, but this sort of direct feedback gets addictive fast.
- Access to data is no longer the issue. The question is what to do with all of these numbers. If you don’t already have quantitative goals, define some.
- If you aren’t making your numbers, review the data and prioritize changes.
- The general outline of events for split testing is as follows:
- Select your goal.
- Create variations.
- Choose an appropriate start date.
- Run the experiment until you’ve reached a ninety-five percent confidence level.
- Review the data.
- Decide what to do next: stick with the control, switch to the variation, or run more tests.
- You will need a specific, quantifiable goal.
- You should have no room for interpretation on the goal. You have to know the current conversion rate (or other metric) and how much you want to change it.
- Small, incremental changes will have a more significant influence on a high-traffic site (one percent of one million versus one percent of one thousand) and tests will be faster and more reliable with a larger sample size.
- To rule out the effect of other variables, such as day of the week, you would ideally let the test run over a two-week holiday-free period, allowing you to make day-over-day comparisons.
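The ninety-five percent confidence check in the outline above is commonly done with a two-proportion z-test. This is a minimal sketch of that statistic, with invented traffic numbers; it shows why the same one-point lift that is inconclusive at low traffic is decisive at high traffic.

```python
import math

def z_test_conversions(control_conv, control_n, variant_conv, variant_n):
    """Two-proportion z-test for a split test.

    Returns the z statistic and whether the difference clears roughly
    95% confidence (two-sided, |z| > 1.96)."""
    p1 = control_conv / control_n
    p2 = variant_conv / variant_n
    pooled = (control_conv + variant_conv) / (control_n + variant_n)
    se = math.sqrt(pooled * (1 - pooled) * (1 / control_n + 1 / variant_n))
    z = (p2 - p1) / se
    return z, abs(z) > 1.96

# Same lift (4% -> 5% conversion), very different sample sizes.
print(z_test_conversions(40, 1_000, 50, 1_000))          # low traffic: not significant
print(z_test_conversions(40_000, 1_000_000, 50_000, 1_000_000))  # high traffic: significant
```

This echoes the sample-size point above: on the high-traffic site the test resolves quickly and reliably, while the low-traffic site would need to run much longer to detect the same effect.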
- The winner of a split test is often counterintuitive.
- If there’s agreement about which metric you’re optimizing for and the math is sound, it’s an opportunity to learn. After a number of tests you might see patterns begin to emerge that you can apply to your design work when solving for specific conversion goals.
- Focusing on small positive changes can lead to a culture of incrementalism and risk aversion. How will you ever make a great leap that might have short-term negative effects?
- You can only do so much optimizing within an existing design system. If you focus on optimizing what you have rather than also considering larger innovations, who knows what vastly greater heights you might miss.
- The best teams embrace data while encouraging and inspiring everyone working on a product to look beyond what can be measured to what might be valued. You can optimize everything and still fail, because you have to optimize for the right things. That’s where reflection and qualitative approaches come in. By asking why, we can see the opportunity for something better beyond the bounds of the current best. Even math has its limits.
- Be excited about asking questions. Questions are more powerful than answers. And asking often takes more courage than sticking with comfortable assumptions.
Conclusion
- Every time you find a product or service that’s a joy to use, meets a need maybe you didn’t even know you had, and fits seamlessly into your life, you know that someone on the other end asked hard questions. Why should this exist? Who benefits? How can we make this better?
- Your effort and craft also deserve to be put to use in a way that has real meaning. So, always make sure you inquire into the real-world context surrounding your work.
- When blue-sky thinking meets reality, reality always wins. Make friends with reality. Cultivate a desire to be proven wrong as quickly as possible and for the lowest cost.
- The right questions will keep you honest. They will help improve communication within your team. They will prevent you from wasting time and money. They will be your competitive advantage, guiding you toward workable solutions to real problems.
- Form questions. Gather data. Analyze. One sequence, many approaches.