To solve for this, I created a Notion database of the resources I’ve found useful in my work. And I’ve made it public.
Please note that my collection is not exhaustive—there are other great resources out there! Nor is every bookmark something I absolutely agree with. I tend not to bookmark generic guides or overviews (like those published by software companies), as these resources are abundant and easy enough to find in the wild.
If you have a trusted resource you think is missing from the collection, you’ll also find a linked form to suggest additions.
Happy learning!
You can watch the original video here.
Me talking about evangelizing research to senior leadership (video screenshot).
User research helps everyone make informed decisions, including your CEO and executive leadership team. And the instinct to evangelize research to a senior—and influential—audience is a good one.
But interrogate that instinct and ask yourself: are you being ambitious for the benefit of your organization, or are you being ambitious for yourself? There’s a thoughtful and strategic path toward getting research—and yourself—on your executive leadership team’s radar, and there’s a treacherous path that will lead to confusion, bad vibes, and hurt feelings.
Let’s start with the treacherous path. You might be tempted to approach your CEO directly—either in person or over email or Slack—and share some insights that you think are fascinating and worth pursuing.
Your CEO will ignore you, send a message to your manager asking WTF is going on, or call a meeting and ask a bunch of VPs why they’re not right on top of the shiny object you just put in front of them.
Ignoring you is actually the best outcome here, because if your manager gets a message from the CEO—or the CEO’s chief of staff—everyone will wonder why you went rogue and didn’t follow any chain of command. You’ll make your manager look not great.
And if your CEO starts asking everyone in their vicinity about your pet insight, your manager will be the least of your concerns. You’ll have a bunch of executives whose roadmaps and sprint plans are now being jeopardized because you just had to get on the CEO’s radar.
So let’s instead take the happy path and assume your ambition is for your team—that your intention is to evangelize the great work you and your peers are doing to a more senior audience.
One way to do this is to share recent impactful findings at a regularly scheduled meeting. A company all-hands might fit the bill, or an executive leadership meeting. If a presentation isn’t what you had in mind, maybe there’s a regular research update your executive team would appreciate. I’d suggest working with your manager and your CEO’s chief of staff on what this research update might look like, how detailed it should be, and in what format executives would prefer to receive it.
Something my teams have done in the past is send a monthly newsletter for senior leadership with updates on what we learned about our competitors and what impact that might have on our org.
You have the right idea if you’re asking how to get your research into the hands of senior decision makers. But take the time to do so collaboratively and thoughtfully in a way that your CEO, executive team, and manager will value.
For Learners, I answered the question, “Can research ever come to an end?”
You can watch the original video here.
Me talking about whether research can ever come to an end (video screenshot).
Research never ends. But research projects have to end.
Another way to say this is that you could keep researching a topic until the end of time—there are always new directions you can take a study. Every new user and everything they do is yet another data point for you to potentially examine. You can add every possible competitor to your competitive analysis, and analyze every new transaction for emerging trends.
But this is a good use of neither your time nor your talent. Your org hired you to help make design and product decisions that support the business goals of the organization. A research project that doesn’t end has an opportunity cost: all the other research projects you aren’t doing.
(Another reason to not research forever: you will likely reach theoretical saturation if you keep researching the same topic long enough. Theoretical saturation is the point at which further research will yield no new insights. In other words, you’ve covered it and it’s time to move on.)
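Saturation can even be tracked mechanically as you code interviews: stop once several consecutive sessions surface nothing you haven’t already seen. Here’s a minimal sketch in Python (the three-interview window and the sample codes are my own illustrative assumptions, not a standard threshold):

```python
def reached_saturation(coded_interviews, window=3):
    """Return True once `window` consecutive interviews
    introduce no codes we haven't already seen."""
    seen = set()
    consecutive_without_new = 0
    for codes in coded_interviews:
        new = set(codes) - seen
        if new:
            seen |= new
            consecutive_without_new = 0
        else:
            consecutive_without_new += 1
        if consecutive_without_new >= window:
            return True
    return False

# Each inner list holds the themes coded from one interview.
interviews = [
    ["pricing", "onboarding"],
    ["onboarding", "support"],
    ["pricing"],
    ["support"],
    ["onboarding", "pricing"],
]
print(reached_saturation(interviews))  # True: interviews 3-5 add nothing new
```

The window size is a judgment call; the point is to agree on a stopping rule with your team before the study starts, not after.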
So research doesn’t have to end, but it should for the sake of the business and to open opportunities to support other product and design decisions. When you scope a project, include a clear stopping point. Stopping points might be based on the date your partners need to make a decision (“We need to present findings at the end of the next sprint”), or upon successfully answering the key research questions you and your cross-functional partners agreed upon at a kickoff (“We’re scoping this project to these three research questions. Anything else is nice to know but not critical”).
So be curious. Be rigorous. But keep in mind that your research is in support of making decisions. Scope your projects to optimize for more decisions, not infinite research.
What keeps trail running interesting is that the same trail differs runner by runner, day by day, year by year. A strong rain exposes hidden roots. Heavy footfall loosens and displaces rocks and pebbles. Fallen trees block a path or destroy a bridge, forcing travelers to create competing alternative routes until one becomes permanent. Season over season, entire sections of creekside trail erode, only for tentative steps to lead to traversable new trails. Give it enough years and a trail remains the same in name only.
Depending on your social networks or your newsletter subscriptions, you may have seen posts bemoaning that user researchers are doing the wrong work with the wrong teams. That we’re either underutilized or unnecessary. That we should hold all the power or that powerful new advances in technology can replace us.
All of the above scenarios are true, because the world of user research is not a monolith. Another way to say this is that context matters; depending on your industry, your org, all the way down to your manager, your scenario is unique. The demands of research roles change researcher by researcher, quarter over quarter, fiscal year over fiscal year. Even the name of the role varies according to the naming conventions, or lack thereof, of our organizations. The responsibilities of a User Researcher and a Design Researcher might be identical in different contexts, just as a path and a trail describe the same passage in different communities.
The handwringing about our field doesn’t change the reality that we typically don’t get to write our own job descriptions. Jobs are in short supply, and we often land roles with descriptions that closely match what we’ve already done, not what we aspire to do. Thus we might find ourselves in a cycle of employment situations where our work is applicable beyond how it’s typically used—where our mandate is frustratingly shortsighted compared to our capabilities. We might get hired onto design teams yet yearn to advise executives. We might embed on tightly scoped product teams while bemoaning our lack of visibility into larger strategy decisions.
From the perspective of those around you—those who opened your role and those who benefit from its existence—everything is fine. And if you’re delighted with your particular research ecosystem, that is fine too. I’ll bet that’s the case for most user researchers.
But if the scenarios above apply to you—if you feel boxed in, misused, underleveraged, or like you’re doing the wrong projects—what are you doing about it? There is no one way to do user research—a feature of the profession, not a bug. It’s on us to make our respective cases for more by doing the work we were hired to do, understanding how decisions are made in our unique orgs, and then jostling to position our remarkable skills as an input into that particular decision-making process.
I said something similar at the Strive conference in 2019, and then repurposed it in my Research Practice book:
It’s a long journey from the place we’re hired to the place we think we should be, but we’re not without agency. We’re researchers—our superpower is to take stock of a complex scenario and spot the possible paths forward.
Big thanks to Behzod Sirjani and Danny Spitzberg for content feedback.
Me talking about moving from design to research (video screenshot).
I was a designer and design professor before I transitioned into UX research. While there is plenty to learn to successfully make the transition from design to research, one thing that worked for me was to think of every facet of the research process as a discrete experience to design. Here’s how you might do this:
To be sure, each one of the tasks I mentioned above is a lot to think about. But by breaking the research process into a series of smaller experiences to design for our participants and colleagues, we’re able to leverage our design training in service of our research goals.
So in short: think of your research process as yet another experience to design.
Here I’ll share the guidance I typically provide my colleagues when we work on a survey together. I’ve divided this post into four sections:
To be sure, I am not a survey scientist. Nor am I a statistician. The advice that follows comes from extensive experience, trial and error, and my prioritization of the survey experience for the respondent.
Surveys take practice. There’s no way around it. Writing crystal-clear question-and-answer combinations that build upon each other in a way that keeps your respondent engaged, while also yielding valuable insights, takes some trial and error. Your first few survey attempts will be terrible.
Surveys are useful if you need a signal or a direction. A survey alone is no substitute for engaging with users. But to establish the general contours of a problem space, a survey is a good start.
Surveys are better if you can tailor them to your target audience. Phrasing matters. The more familiar you are with the vocabulary of your respondents, the more accurate you can be in how you frame questions and answer choices.
Surveys are good for finding people to talk to; they’re not a substitute for talking to people. Recruit interviewees based on their answer choices, or follow up to get more detail about their open-ended responses.
Surveys are a cheat code for team collaboration. Does this phrasing make sense? Did we miss any possible responses? Can you go through the flow once more and test the logic? Surveys are a great forcing mechanism for teams to work together. You absolutely need multiple sets of eyes on a survey, especially as you write, design, and test it.
Never ask for information you already have. If you send a survey to a customer for whom you have data, don’t ask them to provide you with that same data. That’s just a waste of time for everyone.
Never ask for information you won’t use. You will be tempted to include questions that “might be nice to know the answer to.” Fight that temptation. Unless you’re incentivizing your respondents for their time, your extra questions are taking advantage of their goodwill.
Never require questions that you don’t absolutely need. Actually, never require questions, period. Unless you’re incentivizing your respondents for their time, be grateful someone is voluntarily providing you with any survey responses.
Never, ever require someone to provide a comment or an open-ended response. The previous guidance is doubly applicable here. You’re lucky your respondent is making selections and filling in checkboxes. Unless you’re incentivizing your respondents for their time, don’t you dare tell them they are “required” to write sentences for you. This isn’t school, and you’re not assigning homework.
What exactly am I being asked?
Never use ambiguous phrasing. This is where practice comes into play. Make sure that what you ask and what your respondents think you’re asking are the same. For example, look at the image above of a one question micro-survey I received in my Gmail app: “Is this useful?”
Is what useful, exactly? If the question is whether these specific ads are relevant to me, then no. If the question is whether it’s generally useful to see ads in my inbox, also no. The ambiguous phrasing that made me consider at least two possible interpretations of the question guarantees that Google will not get useful data. Be specific in what you ask.
Never be lengthy. If you’re not compensating your respondents, at least give them the gift of brevity.
Never use noisy scales when you need specifics. On a 10-point satisfaction scale, what’s the difference between a 6 and 7? Instead, ask “Are you satisfied?” followed by Yes or No, or use a tighter scale, like a scale from 1 to 3 or from 1 to 5. (Come to think of it, follow Jared Spool’s advice and use a better word than satisfied.)
The exception here is if you’re tracking survey responses over time, in which case the wider scale allows you to see historical changes. Further reading from me on survey scales.
Never ask people to predict their future behavior. You know we can’t see the future, right? Please don’t ask your respondents if they’re likely to recommend or purchase something. Instead, ask them if they did recommend or purchase something. Further reading from me on asking about the future.
Why would anyone want to complete this?
Never use matrix questions. Matrix questions, like the example provided above, require time and concentration. They look like a lot of work—so much so that respondents might drop out of your survey at the sight of one, especially on mobile. Avoid these if you can. If you can’t, remember to make it optional.
Never ask a question in a way that you won’t be able to analyze. Asking an open-ended question of 20 respondents: easy to analyze. Asking that same question of 12,000 respondents: not so much. That’s not to say it can’t be done, but make sure you have the staff or decent AI to do it (not to mention an agreed-upon taxonomy for coding answers!).
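If you do end up coding thousands of open-ended answers, a first keyword pass against your agreed-upon taxonomy can be automated. Here’s a rough sketch in Python (the categories and keywords are hypothetical, and simple keyword matching is only a starting point that a human coder should review):

```python
# Hypothetical taxonomy mapping each code to keywords that signal it.
TAXONOMY = {
    "pricing": ["price", "cost", "expensive", "cheap"],
    "performance": ["slow", "fast", "lag", "crash"],
    "support": ["help", "support", "response", "ticket"],
}

def code_response(text):
    """Assign zero or more taxonomy codes to a free-text answer."""
    lowered = text.lower()
    return sorted(
        code
        for code, keywords in TAXONOMY.items()
        if any(kw in lowered for kw in keywords)
    )

print(code_response("Too expensive, and the app crashes constantly"))
# ['performance', 'pricing']
```

A pass like this surfaces the obvious themes quickly; the uncategorized remainder is where your staff (or that decent AI) earns its keep.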
Always explain the rationale and value of the survey right off the bat. Don’t just expect people to know why they should give you information. Tell them why you’re asking for input, how you’ll use it, and how you’re protecting their privacy.
Always write like a human. Think of a survey as an asynchronous conversation. Phrase your questions clearly and conversationally.
Always allow for edge cases. Your answer choices might suit 99% of your respondents, but you still want to leave room for the unexpected. Whenever possible, add “Something else (please specify)” or “Other” answer choices. However: never, ever, ever use “Other” as a choice for a question related to gender or sexuality. See below.
Always provide inclusive choices. While it’s truly none of your business, if you must ask about gender identity, sex, race, ethnicity, or other demographics, do the work of providing inclusive answer choices. This post from Mei Ke is a great primer for building LGBTQ+-friendly surveys.
Always include a catch-all question.
This is the exception to asking unnecessary questions. If a respondent spends time answering all your questions, give them a chance to share what’s top of mind for them by concluding with one simple question: “Anything else you want to share with us?” More often than not, this is where your mind will be blown.
When it comes to surveys, I find these resources to be the gold standard:
Principles of effective survey design. Annie Steel’s Principles of effective survey design is so thorough, clear, and educational, yet so concise. It’s annoying how great it is.
Surveys That Work. For a brief time the link above was the only survey resource I endorsed without hesitation. And then Caroline Jarrett released Surveys That Work: A Practical Guide for Designing and Running Better Surveys in 2021 and the world had two amazing survey resources.
One of my favorite ways to spend a weekend is to rent a small cabin in the mountains or near a lake, usually in a state park. There I brew a pot of strong coffee, take in the views, and read or write for hours on end with breaks here and there to take a family walk and eat meals together.
Last year, my family bought a small parcel of undeveloped lakefront land near a state park we love. The land is in a remote spot one hour and ten minutes by car from where we live—close enough that we can go anytime, but far enough that the surroundings appear different than what we’re used to. We are now developing the land in order to place a small, née tiny, cottage on it.
A future room with a view.
Developing land is not for the faint of heart. Our lot is on a steep slope that leads down to the lake, which means we need to grade the land to create a flat spot for our cottage. There are neither water nor sewage lines nearby, so we’re drilling a well and installing a septic system.
The most cost- and time-effective way to construct a house is via modular construction. Modular homes are built in a factory, which means weather delays are not a factor. The materials, tools, and laborers are centralized, which mitigates logistics issues. Because each home follows a template, the manufacturer can order materials in bulk to increase economies of scale. At the end of the construction period, the finished home is delivered to the property and tied into the waiting plumbing and electrical hookups.
Construction is just one of many ongoing processes within the larger ecosystem of building a home. Securing financing for a home is a process. Land development is a process. Permits are a process. Any interruption in one part of the ecosystem impacts the others.
There are plenty of inefficient and nonsensical processes, born of ignorance or malice. Those are not what I’m talking about here. I’m thinking of processes that signal someone has married time and expertise in service of reaching a desired outcome. A thoughtful process is a signal of care. I picked up on these signals loud and clear recently.
In considering a builder for our project, we wanted to partner with a firm that understood and demystified the overlapping processes at play. The first builder we considered created beautiful—and beautifully crafted—homes. But when we engaged them for our project, the sales manager was slow to reply to emails and phone calls. We found ourselves repeating information we had previously shared with him, or following up in search of information that he had promised.
Zooming out, we noticed that the company’s website hadn’t been updated in nearly a year. That the materials and appliances they use for their homes are bought piecemeal and off-the-shelf from a big-box home improvement store, rather than in bulk from a supplier where the cost per unit is lower. Yvon Chouinard, founder of Patagonia, wrote in his memoir that “how you climb a mountain is more important than reaching the top.” Nothing in how this builder works felt intentional—it was all a series of ad hoc decisions en route to the top.
Then we engaged with a second builder who presented an experience altogether different from the first. The second builder, Wind River, had us complete an intake form. The form was a precursor to a phone call, during which a sales manager presented us with an outline of their end-to-end building process, talked through our specific needs, and provided an estimate of costs.
We visited Wind River’s factory in Chattanooga, Tennessee to get a sense of their models’ suitability for our needs. The homes are built with an impressive attention to detail, which is no surprise: every facet of their building process is intentional. Nowhere was this more evident than on the factory floor, where there is a dedicated space for every tool and every part. Their homes—and their process of building them—are the result of a thoughtful process. They care about the result and are intentional about how they get there.
I recently started holding user research office hours for my product colleagues. Every other Tuesday, my team is available for an hour to answer questions, review plans, or talk through ideas that don’t come up during other meetings.
I want everyone I work with to benefit from user research—after all, our goal as researchers is to help everyone make informed decisions. The easy thing would be to block time on my calendar, share a Zoom link, and give folks permission to drop in. But easy isn’t always effective.
My constant refrain since I started in my role a few months ago is that I’m putting infrastructure in place. You can call this operationalizing or templatizing. I’d prefer to say I’m being intentional—I want to design an experience that makes office hours productive for everyone. To do so, I need to know who plans to attend ahead of time and what’s on their minds so that my team and I come prepared to help.
The office hours signup form in Slack.
I didn’t want to put too much friction between potential attendees and office hours—our process needed to be lightweight yet fruitful. So I ended up building a Slack automation in our public user research channel. Anyone is welcome to attend office hours, but to do so they need to reserve a spot by completing a form that asks what they hope to learn, how this information will help, and when they plan to run a study. The form responses are forwarded to my research team, which gives us a chance to come prepared to office hours. The responses are also sent to a spreadsheet, which gives us an office hours topic tracker that we can use to evolve our research training materials and documentation.
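The spreadsheet half of that automation is simple to sketch. Here’s a hedged example in Python (the field names and file path are my own assumptions for illustration; the Slack form and message forwarding are out of scope here):

```python
import csv
from pathlib import Path

# Columns mirror the signup form's questions (hypothetical names).
FIELDS = ["attendee", "hopes_to_learn", "how_it_helps", "study_timing"]

def log_signup(response: dict, tracker: Path) -> None:
    """Append one office-hours signup to the topic tracker CSV,
    writing the header row the first time the file is created."""
    is_new = not tracker.exists()
    with tracker.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow(response)

log_signup(
    {
        "attendee": "pat@example.com",
        "hopes_to_learn": "How to recruit for a usability test",
        "how_it_helps": "Planning a checkout study",
        "study_timing": "Next sprint",
    },
    Path("office_hours_tracker.csv"),
)
```

Over time, a tracker like this becomes a record of recurring questions—exactly the signal we use to evolve our training materials and documentation.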
Companies are living ecosystems where people come and go, and tools and workflows change. In this environment, we must examine and evolve not just our work but how we work.
Office hours might work just fine without the signup form, but “might” is doing a lot of heavy lifting in that scenario. By being intentional about the entire office hours experience, we increase the chances that the session is more impactful and leads to better outcomes. This is the promise of user-centered design: that every experience, like making an appointment or building a home, can be made more thoughtful and intentional.
You can watch the original video here.
Me talking about the ideal number of direct reports for a UX research manager (video screenshot).
My short answer is that a manager who is also doing research projects should have no more than two to three direct reports. However, if your sole duty is to manage (meaning you’re not also conducting your own studies), then five to seven direct reports is sensible. Here’s how I landed on those numbers…
The bread and butter of competent management requires a great deal of time and attention. There’s no way around it. At a minimum, here’s where you can expect to spend your time as a manager:
That list doesn’t include the time you spend on tasks related to people management (time-off requests, goal setting, performance reviews), enablement (procurement, vendor calls, and budgeting), and culture building (team rituals and org events).
The business management term for the number of direct reports per manager is span of control. And there are a number of variables that impact the ideal span of control:
The work of UX research is complex, and oversight of that work is a commitment. As a research manager, you must afford time and space to talk through project plans with your directs, and you must also dedicate time to review the evolving plans and documentation created by everyone on your team. There’s no shortcut: the buck stops with you. You’re there to ensure sound methods, timelines, and deliverables.
What’s more, your direct reports are working cross-functionally with designers, product managers, and engineers, which requires that you pave the way for those cross-functional relationships to thrive. And you still have your own manager and peers to think about—the work your team prioritizes is an output of close collaboration with your fellow managers and your own manager. That close collaboration is a further investment of time on your part.
Those are just the table stakes. There are plenty of good resources on what constitutes competent management, but a good manager does everything above while also performing the emotional labor of creating a safe, inclusive, and thoughtful environment for their team.
With all that said, if you are doing both IC work and research management, two to three direct reports is sensible. You’ll be able to attend to your team and do your own work. More than that and you’re going to be context switching all the time, which serves neither you nor your team well.
If you are a full-time manager, five to seven direct reports should allow you enough time to fully invest in your team and their work while leaving time for the many other tasks on your plate.
Management is a privilege. There’s no better feeling than mentoring your team and being party to their excellent work. Be sensible about what you’re capable of and what you can reasonably offer to your team. They—and your organization—need you at your best.
You can watch the original video here.
Me talking about transitioning from journalism to user research (video screenshot).
I’ve worked in media with journalists pretty steadily since 2016, and there is a lot of overlap between journalism and user research. It’s a topic I’ve covered before on the Vox Product blog, but here I’ll answer the question with an eye toward getting a job.
Journalists identify stories worthy of the public interest. They conduct desk research and identify the right folks with whom to conduct primary research. And they analyze their findings with an eye toward educating, informing, or swaying public opinion.
User researchers, instead of working in the general public’s interest, advocate for change on behalf of their users and toward the goals of their organizations. And though the exact methods differ, the mindset is similar: be curious, do your homework, talk to the right people, analyze what you’ve learned, and present it back.
And that takes us back to the question at hand: how do I leverage my background to break into user research? My advice is to explicitly call out the similar approaches between journalism and user research in any cover letters or resumés you send out, and any job interviews you go on.
That extends to a portfolio, too. Revisit your publishing experience and see if you can present that same work as a user researcher would. Here’s what that might look like:
For this story, the question I was trying to answer was ____.
My methods for answering that question were ____.
For my secondary or desk research, I looked at ____.
The people I interviewed for this story were ____.
From my collected data, I concluded ____.
After my story was published, ____ happened.
If I had to do it over again, I would have changed ____.
That sounds like a user research project to me!
The curiosity inherent to journalism will get you most of the way there; put some time into making the similarities as clear as day to any hiring manager and you should be well-positioned to break into this field.
Good luck!
You can watch the original video here.
Me talking about disrespectful participants (video screenshot).
Off the bat, encountering an aggressive or disrespectful person is no doubt uncomfortable. I have some tips on how to avoid that situation, and what to do if it happens to you.
First, after a bad experience with a participant, my colleagues Claire and Anna smartly took it upon themselves to add language to our participant consent form* making clear that both participants and researchers are empowered to end the session at any time in the face of abusive or intolerant behavior. This way we establish the ground rules going in: abuse and intolerance are nonstarters. If a participant crosses that line, end the session.
*As with any agreements between an organization and a participant, run any changes you want to make by your legal team!
However, as with all things research, there’s nuance to unpack here. Is the aggression or disrespect about you, or is it about the topic you’re discussing? If we’re talking to someone who encountered a hardship, they will have strong feelings.
If we’re interviewing someone who feels angry about changes to a product we work on because we made their job harder, they are well within their rights to be pissed off. I think that’s within the bounds of acceptable—and expected—behavior. For what it’s worth, I start every interview by explaining that I’m looking for honest feedback and that the participant won’t hurt my feelings.
In sum, I think the right approach here is:
It can be fuzzy and uncomfortable, but as long as the participant is making it about their experience and how it made them feel, it’s not about you. If they venture into personal attacks or abusive or intolerant language, the session is over.