<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://gregg.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://gregg.io/" rel="alternate" type="text/html" /><updated>2026-01-17T16:08:56+00:00</updated><id>https://gregg.io/feed.xml</id><title type="html">Gregg Bernstein</title><subtitle>The personal site of Gregg Bernstein: researcher &amp; educator, author of Research Practice, and speaker.</subtitle><author><name>Gregg Bernstein</name></author><entry><title type="html">The only winning move is not to play</title><link href="https://gregg.io/the-only-winning-move" rel="alternate" type="text/html" title="The only winning move is not to play" /><published>2025-12-01T10:00:25+00:00</published><updated>2025-12-01T10:00:25+00:00</updated><id>https://gregg.io/the-only-winning-move</id><content type="html" xml:base="https://gregg.io/the-only-winning-move"><![CDATA[<h2 id="lets-not-debase-ourselves-as-user-researchers-further">Let’s not debase ourselves as user researchers further</h2>

<p>The premise and value of human-centered research is that subject matter experts apply their skills to the design of studies that uncover relevant information from appropriate research participants in service of organizational goals. <strong>Every concession we make in the name of efficiency or innovation that removes humanity from this process is debasement and a step toward irrelevance.</strong></p>

<p>What is the role of the user researcher if we offload both users and research to generative AI platforms and tools? Once you use prompts or mash buttons to generate an end-to-end research plan; or automate synthetic or AI interviews, surveys, or prototype tests; or generate personas, jobs-to-be-done, insights, product recommendations, or a marketing plan, <em>then what the fuck is your unique value?</em> What are you when you offload your craft and expertise to a simulation in the name of innovation or efficiency? What makes you anything other than redundant?</p>

<p>AI is fantastic for pattern recognition, with realized benefits in medical imaging analysis. It’s great at statistical modeling and multivariate analysis! But the very best possible outcome from outsourcing research expertise to LLMs is <a href="https://doi.org/10.1002/jocb.70077">an average result</a> (non-gated overview of that academic article <a href="https://www.psypost.org/a-mathematical-ceiling-limits-generative-ai-to-amateur-level-creativity/">here</a>). While it sounds pragmatic or “better than nothing” for researchers and the orgs that employ them to lean on AI for research, it also leads everyone to the exact same average quality results and removes the differentiation that leads to innovation or unique experiences. </p>

<p>If organizations want to stand out in crowded marketplaces, asking for run-of-the-mill research advice from a bot over trusting subject-matter experts sure does sound like self-sabotage. And the sabotage is doubly so for the researchers who embrace the tools that will serve as their replacements.</p>

<h3 id="but-gregg-this-is-how-the-profession-is-evolving">But Gregg, this is how the profession is evolving</h3>
<p>Says who? The executives who never cared about research in the first place? The PMs who never had the patience to wait for quality research results? The investors and tech companies that need (literal) buy-in? The tools and platforms with a vested interest in selling AI research as something that literally anyone in an org can do? Actually…</p>

<h3 id="but-this-is-how-research-is-done-today">But this is how research is done today</h3>
<p>Bullshit: this is how user research is <em>marketed</em> today. For the past 15 years, platforms like UserTesting and UserZoom (before they were both acquired by the same private equity company and merged to form a near-monopoly in the enterprise research space) positioned themselves as tools for design and product teams to “become customer-centric” and “listen to the voice of the user.” The value proposition was that orgs could use these platforms either as an add-on to existing research and design practices or before they had an in-house research expert.</p>

<p>Today tooling platforms see an opportunity to sell AI-assisted research tools to organizations <em>as an alternative to hiring research experts</em>. When 80% of the sponsors of a large user research conference are selling tools that <em>replace</em> user researchers with AI in the name of democratization, we’re not the customers; we’re marks. If your business model relies on seat licenses, it’s much more profitable to sell a tool that makes <em>everyone</em> a researcher rather than a tool that supports a dwindling number of researchers.</p>

<p>But marketing isn’t reality. Just because a handful of user research thought leaders who should know better were paid to run and promote studies using AI research tools <em>without disclosing the sponsorship in their breathless LinkedIn posts</em> doesn’t necessarily mean these are the tools your organization should adopt. In fact, an undisclosed sponsorship is a good way to create the illusion that a product is gaining widespread adoption by experts, <strong>which is why the Federal Trade Commission <a href="https://www.ftc.gov/business-guidance/resources/disclosures-101-social-media-influencers">regulates against it</a></strong>.</p>

<p>If I use a tool and tell you about it, that’s a recommendation. But if I am paid a fee to use a product and then tell you about it, that’s different—that’s a sponsorship. Then I’m no longer a researcher recommending a tool—I’m an influencer peddling sponsored content. If a product resorts to shady advertising practices that require a pliant thought leader’s complicity in constructing a Potemkin industry, <em>maybe the whole enterprise is rotten</em>.</p>

<p>This is also why <a href="https://gregg.io/ethics/">ethics statements</a> are important. Let’s uphold some professional standards lest we become grifters.</p>

<h3 id="but-regular-ie-rigorous-research-takes-too-long">But regular (i.e., rigorous) research takes too long</h3>
<p>For whom? What product decision is so important that planning and spending time with users is not viable? Better yet, what product decision wouldn’t <em>benefit</em> from time with flesh-and-blood humans to gain context, mitigate risks, and zero in on the right thing?</p>

<p>Every research planning decision is a tradeoff between time and confidence—a good researcher can always learn something within a given time period and budget. But frequently the problem is that neither time period nor budget factors into the arbitrary milestones and deadlines a group of people places on a calendar.</p>

<p>If that group of people repeatedly fails to include enough time for research, I’d argue that <em>they might not value research in the first place</em>. Shoehorning a half-assed generative AI research effort into an unreasonable project window isn’t going to make you look like a team player or make people see the value of research; it’s only going to validate that research should never require time (nor researchers).</p>

<p>Going further, for the founders and executives who never believed in user research, AI research is a way to skip doing research entirely while presenting the veneer of “listening” to their users. When user researchers adopt AI research tools, it not only debases their contributions to understanding users, it also reinforces the notion that you don’t really need to do user research to seem human-centric.</p>

<h3 id="but-ai-lets-us-10x-our-research-efficiency">But AI lets us 10x our research efficiency</h3>
<p>Are you listening to yourself? You sound like every bad AI-generated post on LinkedIn now. I said earlier that the work of research can be optimized to fit time and organizational constraints, but that’s not the “efficiency” I see being adopted now:</p>
<ul>
  <li><em>I fed Claude some survey results and asked it to create unique one-pagers for my executive, product, and design partners.</em> An expert might be able to get away with this one time because they can evaluate the validity and quality of the one-pager (though why you’d rather proofread the work of an LLM than create something original is beyond me). But once you cross this chasm, you’ve demonstrated that this is how research can be summarized and shared… by anyone with access to Claude. You’ve made yourself—and those with your job title—dispensable.</li>
  <li><em>We created a gem for designers to get started with their own research without having to work with a researcher.</em> Right—because the problem was never that asking designers to take on an entirely different job <em>in addition to design but without additional time</em> was too much to ask. The problem was having to collaborate with a living and breathing research expert.</li>
</ul>

<h3 id="but-theres-still-a-human-in-the-loop">But there’s still a human in the loop!</h3>
<p>Research is already a human-to-human loop, with meaning conveyed by participants and contextualized by researchers. Adding a human back to what was already a perfectly functional loop doesn’t enrich anything and only adds inefficiencies—even <a href="https://www.theguardian.com/technology/2025/nov/22/ai-workers-tell-family-stay-away">the people who review LLM answer quality</a> warn against using LLMs for accurate answers.</p>

<p>Personally, I transitioned from design and design education to user research because I was—and still am—blown away that I could learn from other humans <em>as my job</em>. A more religious person might say I’ve been blessed to earn a living by talking to writers, readers, editors, small business owners, designers, agencies, and more in support of organizations who build products for these groups. </p>

<p>But it’s not just that I enjoy practicing research—I’m good at it. User researchers are <em>experts</em> at it. Why would I reduce myself to quality control on a slop assembly line and then, with my whole chest, tell people I am the human in the loop? Why should we debase ourselves by implying that our expertise is replaceable?</p>

<h3 id="maybe-you-just-hate-or-dont-get-ai">Maybe you just hate or don’t get AI</h3>
<p>Au contraire! AI can be magical (especially in medical imaging and programming). I used Gemini to update a SQL query recently at the encouragement of a data science peer. I use a product called Granola (not a paid mention, fwiw) for call transcription, notes organization, and pulling up quotes. I work with designers who spin up prototypes with Figma Make that I then test with humans. I work with engineers who use AI for spam mitigation and trust and safety tasks. <a href="https://genpurpose.substack.com/p/research-slop">Jess Holbrook</a> smartly advocated for using AI to take a dissent pass on research artifacts and to challenge yourself and your findings.</p>

<p>What I don’t do is use generative AI or LLMs to spit out an entire research plan, synthesize hours of interviews, or conduct my interviews for me (?!). One reason why I don’t do any of these is that generative AI can’t replace the meaning-making that human researchers do. Why would we even <em>want</em> to use AI to replace the tasks that humans are uniquely good at, or the tasks that humans enjoy, or the tasks that connect us to other humans? To me the personal connection is the best part of being a user researcher or user-centered designer!</p>

<p>This is what gets my goat: AI has many useful applications. This moment in time really is akin to the start of the internet era, in that AI has broken containment and entered mainstream conversation (in no small part due to marketing hype centered on illogical use cases). However, the hype has created acolytes with an ill-fitting solution to the non-existent problem of <em>how to study humans better</em>.</p>

<h3 id="you-sound-like-a-luddite">You sound like a Luddite</h3>
<p>The Luddites were not anti-progress; they were pro-worker. Automation increased production but eliminated jobs, lowered wages, and reduced quality. Sound familiar? </p>

<p>Researchers already document findings at a faster velocity than orgs can act on them. It strains credulity that tech leaders are clamoring for even more <em>yet worse</em> findings.</p>

<p>The folks extolling the virtues of offloading critical research tasks to faulty tech are eroding not just the value of an entire professional class but of human curiosity and knowledge. Listen to <a href="https://youtu.be/VwP-5ac51Dw?si=4tF7H1Uq9VJ0hqOu">Billy Bragg</a>, support unions, and always stand on the side of workers… especially when replacing them with unreliable facsimiles helps no one but the people who stand to profit from such a move.</p>

<h3 id="so-what-do-we-do">So what do we do?</h3>
<p>This is a scary time! Thousands upon thousands of tech workers—including researchers—who kept their heads down, did quality work, and earned wonderful performance reviews have been laid off in the last couple of years in the name of progress. Going along just to get along didn’t earn anyone a reprieve. So it’s not like we have anything to lose by advocating for ourselves.</p>

<p>The people who stand to gain the most from the adoption of generative AI research platforms and practices are those who claim it makes research better and those whose job depends on that belief. These claims are self-promoting narratives, sponsored content, or both.</p>

<p>My move is not to play the game of debasing ourselves in the name of progress. Just because a bunch of smart people say that “this is the future” doesn’t mean they’re right, as we just saw with web3, crypto, and NFTs. No one can predict the future (despite what NPS proponents might say).</p>

<p>I didn’t enter this field and take this type of job only to <em>not do the job</em>. My red line is conceding the things I am—we are—uniquely good at to a product, platform, or bot. My red line is trading in the parts of the job I am both an expert in and enjoy for tasks that make the job something else entirely. </p>

<p><strong><em>What is your red line?</em></strong></p>

<h3 id="end-hits">End hits</h3>
<ul>
  <li>No part of this blog post used AI. I like writing—it helps me think; I like thinking—it helps me write.</li>
  <li>However, my human friends and fellow researchers Meghan Cetera, Joey Jakob, and Gabe Trionfi generously provided feedback and further reading recommendations, for which I am grateful.</li>
  <li>For more on humans in the loop, read Pavel Samsonov’s <em><a href="https://productpicnic.beehiiv.com/p/human-in-the-loop-is-a-thought-terminating-cliche">‘Human in the loop’ is a thought-terminating cliche</a></em>.</li>
  <li>The title of this post comes from the movie <em><a href="https://www.imdb.com/title/tt0086567/">WarGames</a></em>, in which a supercomputer learns about futility and no-win scenarios.</li>
</ul>]]></content><author><name>Gregg Bernstein</name></author><category term="AI" /><category term="POV" /><summary type="html"><![CDATA[Let’s not debase ourselves as user researchers further]]></summary></entry><entry><title type="html">In praise of redundant publishing practices</title><link href="https://gregg.io/in-praise-of-redundancy" rel="alternate" type="text/html" title="In praise of redundant publishing practices" /><published>2025-06-25T10:00:25+00:00</published><updated>2025-06-25T10:00:25+00:00</updated><id>https://gregg.io/in-praise-of-redundancy</id><content type="html" xml:base="https://gregg.io/in-praise-of-redundancy"><![CDATA[<p>Over the years I’ve shared my writing through Mailchimp (a former employer), Substack (regrettably), Buttondown, and Medium (my current employer). I’ve written guest posts for corporate blogs (Adobe, Shopify), employer blogs (Mailchimp again, Vox Media), and here on my own blog. I’ve posted on Twitter, Mastodon, and now Bluesky. I even achieved a professional milestone by publishing (about user research, natch) on <a href="https://www.theverge.com/2018/1/26/16933458/facebook-news-trust-survey-problems-editing">The Verge</a>!</p>

<p>While I’ve been fortunate to have my words appear in many places, the unfortunate truth is that posterity is promised to no one. Corporate blog posts tend to disappear with website redesigns or organizational restructures. Newsletter archives can disappear with a migration to a new platform. Platforms can outright disappear or become a Nazi bar from which you need to make a hasty retreat.</p>

<p>This is why I always recommend optimizing for redundancy. Share your work wherever your audience will find it, but make sure you also publish to one or more personal repositories. For me, this means:</p>
<ol>
  <li>I draft my posts in Markdown in Apple Notes on my laptop or phone, which backs up to iCloud</li>
  <li>I copy and paste completed posts to my Github repo</li>
  <li>Which I then push to Netlify, where my Jekyll blog runs</li>
  <li>New step: Now I copy and paste the post to <a href="https://medium.com/@greggcorp">Medium</a> (use the products you work on, people!)</li>
  <li>And every so often I pull down a copy of my repo to back it up locally and to Google Drive</li>
</ol>
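<p>The steps above can be sketched as a small shell script. This is a hypothetical illustration: the paths, filenames, and commented-out <code>git</code> commands are placeholders, not my actual setup.</p>

```shell
#!/bin/sh
# Hypothetical sketch of the redundancy workflow above.
# Paths and filenames are placeholders, not the author's actual setup.
set -eu

work="$(mktemp -d)"                 # sandbox so the sketch runs anywhere
post="new-post.md"                  # 1. draft written elsewhere (e.g. Apple Notes)
repo="$work/blog-repo"              # 2. local clone of the blog's Git repo
backup="$work/blog-backup"          # 5. second copy (local disk, Google Drive, etc.)

mkdir -p "$repo/_posts" "$backup"
printf '# A new post\n' > "$work/$post"

# 2. copy the finished draft into the Jekyll repo
cp "$work/$post" "$repo/_posts/$post"

# 3. commit and push; Netlify rebuilds the Jekyll site on push
# (commented out so the sketch runs without a real remote)
# git -C "$repo" add _posts && git -C "$repo" commit -m "New post" && git -C "$repo" push

# 5. mirror the repo to a second location every so often
cp -R "$repo/." "$backup/"

test -f "$backup/_posts/$post" && echo "backup ok"
```

<p>The point isn’t this particular script—any combination of synced folders, Git remotes, and periodic copies works, as long as no single platform holds the only copy.</p>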

<p>I write to help myself think. Losing old content is like losing a part of my memory. My backup process might sound like (and likely is!) overkill, but having seen my writing disappear from the internet through no fault of my own, and having struggled to find archived versions of my writing on the Internet Archive, it’s worth it to me. And it might be worth it to you, too.</p>]]></content><author><name>Gregg Bernstein</name></author><category term="writing" /><category term="process" /><summary type="html"><![CDATA[Publish for both your audience and your future self]]></summary></entry><entry><title type="html">How I write</title><link href="https://gregg.io/how-I-write" rel="alternate" type="text/html" title="How I write" /><published>2025-06-24T01:00:25+00:00</published><updated>2025-06-24T01:00:25+00:00</updated><id>https://gregg.io/how-I-write</id><content type="html" xml:base="https://gregg.io/how-I-write"><![CDATA[<h2 id="a-brief-reintroduction">A brief reintroduction</h2>

<p>I tend to write concisely.* If I can get an idea across with an economy of words, why belabor my point? I edit as I write, rephrasing and cutting until what’s left is my point and little more.</p>

<p>A <a href="http://aworkinglibrary.com/">mentor</a> once told me to think about the point I’m trying to get across, and determine whether it’s a talk, a blog post, or a tweet. What begins as a grand exposition shrinks to a blog post before reducing, like a stock, to nothing more than a sentence-length post shared on social media. Cut until there’s nothing left to cut.</p>

<p>This works for me. It doesn’t always work when I edit others. When writing is a rare activity, every word feels precious. But when you edit in real time, all the time — whether on the page or in conversation — no words are spared. “I’ve never been edited like this,” a self-described thought leader once complained to me when I returned my notes on his book’s introduction. His ideas had also never been clearly understood.</p>

<p>My love of writing is also a love for writers, and how I write differs from what I read. Charles Dickens, <a href="https://bookshop.org/beta-search?keywords=Emily+St+John+Mandel">Emily St. John Mandel</a>, <a href="https://www.joeposnanski.com/">Joe Posnanski</a>, and <a href="https://dansinker.com/">Dan Sinker</a> have little in common in writing style (and have likely never appeared together in the same sentence). Each has instructed me. Exposure to many voices — and many more ideas — tells me not just what ideas would be worth sharing, but how I might express them. It took time and practice to land on what works for me, but it starts with exposure to other writers.</p>

<p>I’ve been fortunate to work as a researcher in media and publishing since 2012, and I am even more fortunate to learn from writers and readers in my current role as staff researcher at <a href="http://medium.com/">Medium</a>. I plan to write about my work at Medium on Medium going forward, while also cross-posting to my blog. I’ll do so as briefly as possible.</p>

<p>*I considered revising this sentence to read, “I write concisely,” but decided against such a declaration. “I tend” signals that concision is a goal, not a rule.</p>]]></content><author><name>Gregg Bernstein</name></author><category term="writing" /><category term="process" /><summary type="html"><![CDATA[A brief reintroduction]]></summary></entry><entry><title type="html">Making research more durable</title><link href="https://gregg.io/durable-research" rel="alternate" type="text/html" title="Making research more durable" /><published>2024-11-29T01:00:25+00:00</published><updated>2024-11-29T01:00:25+00:00</updated><id>https://gregg.io/durable-research</id><content type="html" xml:base="https://gregg.io/durable-research"><![CDATA[<h2 id="if-we-answer-whats-asked-we-only-answer-known-unknowns-our-value-comes-from-excavating-deeper">If we answer what’s asked, we only answer known unknowns. Our value comes from excavating deeper.</h2>

<blockquote>
  <p>Always design a thing by considering it in its next larger context — a chair in a room, a room in a house, a house in an environment, an environment in a city plan.—Eliel Saarinen</p>
</blockquote>

<p><a href="https://bookshop.org/p/books/thinking-fast-and-slow-daniel-kahneman/943943?ean=9780374533557">Daniel Kahneman</a> described the bias of omitting critical context as “What you see is all there is.” This is what happens when we accept the provided information at face value without probing for context, verifying for accuracy, or questioning what might be missing.</p>

<p>Something I stress to my teams is that the questions asked of us as researchers are often tightly scoped to what’s top of mind for our cross-functional partners and stakeholders. Sure, answering these questions at face value will help us with <em>today’s</em> decisions and will likely earn us positive peer reviews, but at enormous opportunity cost. We instead should provide helpful answers while also illuminating the next larger context. This is how research findings shift from disposable to durable.</p>

<p>For example, a popular men’s magazine I worked with, in addition to publishing political opinions and fashion advice, also offers entertainment content like movie, television, book, and podcast reviews. When thinking about redesigning the entertainment section, the face-value questions touched on:</p>

<ol>
  <li>How do people navigate entertainment content?</li>
  <li>How can we promote additional content that our site visitors might be interested in?</li>
</ol>

<p>These questions lend themselves to rapid research—we could easily proceed with an iterative build —&gt; measure —&gt; learn approach and back into a new entertainment section design and article promotion method. But what would we really gain? We didn’t have previous research into entertainment content, so we’d never be able to contextualize the new findings.</p>

<p>Instead, we broadened our approach to better understand where entertainment reviews fit into a larger information ecosystem. Some of our (many) questions included:</p>

<ul>
  <li>How do people decide what to watch or listen to?
    <ul>
      <li>What’s the last thing they watched or listened to?</li>
      <li>How did it get on their radar?</li>
    </ul>
  </li>
  <li>What resources do people rely on?
    <ul>
      <li>How does a resource become trusted?</li>
      <li>How does a resource become a habit?</li>
    </ul>
  </li>
  <li>What media do people prefer for recommendations? (Website, SMS, newsletters…?)</li>
</ul>

<p>By combining the macro and micro questions, we were able to learn where entertainment content fits into a media diet, how trust is established, and how people prefer to consume entertainment news and reviews. And we were able to prototype a new entertainment experience while also illuminating additional pathways to connect people to content they find valuable.</p>

<p>This is how we ensure our research is durable. By providing additional context into attitudes and behaviors, we contribute to a more cohesive understanding of our users and limit the unknown unknowns. Your colleagues might not have asked for it, but they will certainly appreciate it now and in the future.</p>]]></content><author><name>Gregg Bernstein</name></author><category term="POV" /><category term="Management" /><category term="Process" /><summary type="html"><![CDATA[If we answer what’s asked, we only answer known unknowns. Our value comes from excavating deeper]]></summary></entry><entry><title type="html">I don’t care about AI</title><link href="https://gregg.io/i-dont-care-about-ai" rel="alternate" type="text/html" title="I don’t care about AI" /><published>2024-09-04T01:00:25+00:00</published><updated>2024-09-04T01:00:25+00:00</updated><id>https://gregg.io/i-dont-care-about-ai</id><content type="html" xml:base="https://gregg.io/i-dont-care-about-ai"><![CDATA[<h2 id="i-dont-care-about-ai-because-ai-is-neither-the-product-nor-the-solution">I don’t care about AI, because AI is neither the product nor the solution.</h2>

<p>I don’t care about AI. I don’t care that your product has it. I don’t care that you think we’re on the cusp of some huge developments—that the real AI breakthrough is just around the corner.</p>

<p>I’ll go further: <em>I don’t think most people care about AI</em>, because “AI” is not a thing to care about. AI is a broad term that covers a whole lot of territory, akin to how a previous generation sweatily appended the terms “net” and “cyber” to everything to signal alignment with a vague conception of “the internet.” Nothing furrows my brow and sets off my BS detector faster than seeing “AI-assisted” slapped onto your product’s marketing messaging, because that tells me you’re more focused on a buzzword than on a tangible benefit.</p>

<p>AI is a catchall term that describes a thing in service of another thing. It’s neither the product nor the goal. It’s like selling people on the capabilities of plumbing and paper manufacturing when <em>they just want to make a bathroom stop during a road trip after a poorly planned Taco Bell menu decision</em>. It overlooks the <strong>goals</strong> (functional restroom, stocked with toilet paper) and <strong>desired outcomes</strong> (gastrointestinal relief, resumption of travel) in favor of the infrastructure.</p>

<p>I don’t care about AI, but I do care about people:<br /> 
I care about their problems and how to help them solve them.<br />
I care about their challenges and how to overcome them.<br />
I care about their habits and how to support or change them.<br />
<strong>I care about human-centered goals and how to reach them.</strong></p>

<p>If the best way to help people includes AI, great! If not, also great. AI is one potential ingredient in service of a greater, more meaningful thing. The place of AI is in the middle ground between people and their goals. Focus on the humans.</p>

<hr />

<p><em>PS. My first draft of this post was an homage to Mazieres and Kohler’s brilliant academic paper, “<a href="https://www.scs.stanford.edu/~dm/home/papers/remove.pdf">Get me off Your Fucking Mailing List</a>.” It went something like this:</em><br />
I don’t care about AI. I don’t care about AI. I don’t care about AI. I don’t care about AI. I don’t care about AI. I don’t care about AI. I don’t care about AI. I don’t care about AI. I don’t care about AI. I don’t care about AI. I don’t care about AI. I don’t care about AI. I don’t care about AI. I don’t care about AI. I don’t care about AI. I don’t care about AI. I don’t care about AI. I don’t care about AI. I don’t care about AI. I don’t care about AI. I don’t care about AI. I don’t care about AI. I don’t care about AI. I don’t care about AI. I don’t care about AI. I don’t care about AI. I don’t care about AI. I don’t care about AI. I don’t care about AI. I don’t care about AI. I don’t care about AI.<br /></p>

<p><em>I’m still not sure which version of this post I like better.</em></p>]]></content><author><name>Gregg Bernstein</name></author><category term="POV" /><category term="AI" /><summary type="html"><![CDATA[I don't care about AI, because AI is neither the product nor the solution]]></summary></entry><entry><title type="html">Sharing my research bookmarks</title><link href="https://gregg.io/bookmarks" rel="alternate" type="text/html" title="Sharing my research bookmarks" /><published>2024-01-25T01:00:25+00:00</published><updated>2024-01-25T01:00:25+00:00</updated><id>https://gregg.io/bookmarks</id><content type="html" xml:base="https://gregg.io/bookmarks"><![CDATA[<p>I’ve collected a number of research-related bookmarks over the years. However, I often struggle to find the specific links I need at the exact moment I need them.</p>

<p>To solve for this, I created a Notion database of the resources I’ve found useful in my work. And I’ve made it public.</p>

<p><a href="https://greggcorp.notion.site/Research-Practice-Bookmarks-6e6ffb6c8cc0484b8dc4b0485e584afd?pvs=4">Research Practice | Bookmarks</a></p>

<p>Please note that my collection is not exhaustive—there are other great resources out there! Nor is every bookmark something I absolutely agree with. I tend not to bookmark generic guides or overviews (like those published by software companies), as these resources are abundant and easy enough to find in the wild.</p>

<p>If you have a trusted resource you think is missing from the collection, you’ll also find a linked form to suggest additions.</p>

<p>Happy learning!</p>]]></content><author><name>Gregg Bernstein</name></author><category term="Resources" /><summary type="html"><![CDATA[I've collected a number of research-related bookmarks over the years. I finally organized them in a public Notion database]]></summary></entry><entry><title type="html">Two ambitious paths</title><link href="https://gregg.io/two-paths" rel="alternate" type="text/html" title="Two ambitious paths" /><published>2023-08-14T01:00:25+00:00</published><updated>2023-08-14T01:00:25+00:00</updated><id>https://gregg.io/two-paths</id><content type="html" xml:base="https://gregg.io/two-paths"><![CDATA[<p>For Learners, I answered the question, <strong>How can I get my research on the radar of the CEO and other top level people at my company?</strong></p>

<p><img src="../images/learners_ceo.png" alt="Talking about evangelizing research to senior leadership" title="Learners screenshot" />
<em>Me talking about elevating research (screenshot only)</em></p>

<p>User research helps <strong>everyone</strong> make informed decisions, including your CEO and executive leadership team. And the instinct to evangelize research to a senior—and influential—audience is a good one.</p>

<p>But interrogate that instinct and ask yourself: are you being ambitious for the benefit of your organization, or are you being ambitious for yourself? There’s a thoughtful and strategic path toward getting research—and yourself—on your executive leadership team’s radar, and there’s a treacherous path that will lead to confusion, bad vibes, and hurt feelings.</p>

<p>Let’s start with the treacherous path. You might be tempted to approach your CEO directly—either in person or over email or Slack—and share some insights that you think are fascinating and worth pursuing.</p>

<p>Your CEO will either ignore you, send a message to your manager and ask WTF is going on, or call a meeting and ask a bunch of VPs why they’re not right on top of the shiny object you just put in front of your CEO.</p>

<p>Ignoring you is actually the best outcome here, because if your manager gets a message from the CEO—or the CEO’s chief of staff—everyone will wonder why you went rogue and didn’t follow any chain of command. You’ll make your manager look <em>not great</em>.</p>

<p>And if your CEO starts asking everyone in their vicinity about your pet insight, your manager will be the least of your concerns. You’ll have a bunch of executives whose roadmaps and sprint plans are now being jeopardized because <em>you just had to get on the CEO’s radar</em>.</p>

<p>So let’s instead take the happy path and assume your ambition is for your team—that your intention is to evangelize the great work you and your peers are doing to a more senior audience.</p>

<p>One way to do this is to share recent impactful findings at a regularly scheduled meeting. A company all-hands might fit the bill, or an executive leadership meeting. If a presentation isn’t what you had in mind, maybe there’s a regular research update your executive team would appreciate. I’d suggest working with <strong>your manager</strong> and <strong>your CEO’s chief of staff</strong> on what this research update might look like, how detailed it should be, and in what format executives would prefer to receive it.</p>

<p>Something my teams have done in the past is send a monthly newsletter for senior leadership with updates on what we learned about our competitors and what impact that might have on our org.</p>

<p>You have the right idea if you’re asking how to get your research into the hands of senior decision makers. But take the time to do so collaboratively and thoughtfully in a way that your CEO, executive team, and manager will value.</p>]]></content><author><name>Gregg Bernstein</name></author><category term="POV" /><category term="Learners" /><summary type="html"><![CDATA[For Learners, I differentiate the thoughtful and strategic path toward getting research on your executive leadership team’s radar from the treacherous path that will lead to confusion, bad vibes, and hurt feelings]]></summary></entry><entry><title type="html">Can research ever come to an end?</title><link href="https://gregg.io/can-research-end" rel="alternate" type="text/html" title="Can research ever come to an end?" /><published>2023-06-28T01:00:25+00:00</published><updated>2023-06-28T01:00:25+00:00</updated><id>https://gregg.io/can-research-end</id><content type="html" xml:base="https://gregg.io/can-research-end"><![CDATA[<h2 id="optimize-for-more-decisions-not-infinite-research">Optimize for more decisions, not infinite research</h2>

<p>For Learners, I answered the question, “Can research ever come to an end?”</p>

<p><img src="../images/learners_research_ends.png" alt="Talking about if research can ever come to an end" title="Learners screenshot" />
<em>Me talking about when research comes to an end (screenshot only)</em></p>

<p>Research never ends. But research <em>projects</em> have to end.</p>

<p>Another way to say this is that you <em>could</em> keep researching a topic until the end of time—there are always new directions you can take a study. Every new user and everything they do is yet another data point for you to potentially examine. You can add every possible competitor to your competitive analysis, and analyze every new transaction for emerging trends.</p>

<p>But this is a good use of neither your time nor your talent. Your org hired you to help make design and product decisions that support the business goals of the organization. A research project that doesn’t end has an opportunity cost: all the other research projects you aren’t doing.</p>

<p>(Another reason to not research forever: you will likely reach theoretical saturation if you keep researching the same topic long enough. Theoretical saturation is the point at which further research will yield no new insights. In other words, you’ve covered it and it’s time to move on.)</p>

<p>So research doesn’t have to end, but it should for the sake of the business and to open opportunities to support other product and design decisions. When you scope a project, <strong>include a clear stopping point</strong>. Stopping points might be based on the date your partners need to make a decision (“We need to present findings at the end of the next sprint”), or upon successfully answering the key research questions you and your cross-functional partners agreed upon at a kickoff (“We’re scoping this project to these three research questions. Anything else is nice to know but not critical”).</p>

<p>So be curious. Be rigorous. But keep in mind that your research is in support of making decisions. Scope your projects to optimize for more decisions, not infinite research.</p>]]></content><author><name>Gregg Bernstein</name></author><category term="POV" /><category term="Learners" /><summary type="html"><![CDATA[Optimize for more decisions, not infinite research]]></summary></entry><entry><title type="html">There is no one way to do UX research</title><link href="https://gregg.io/on-the-right-path" rel="alternate" type="text/html" title="There is no one way to do UX research" /><published>2023-06-12T01:00:25+00:00</published><updated>2023-06-12T01:00:25+00:00</updated><id>https://gregg.io/on-the-right-path</id><content type="html" xml:base="https://gregg.io/on-the-right-path"><![CDATA[<h2 id="were-all-navigating-our-own-paths">We’re all navigating our own paths</h2>

<p>What keeps trail running interesting is that the same trail differs runner by runner, day by day, year by year. A strong rain exposes hidden roots. Heavy footfall loosens and displaces rocks and pebbles. Fallen trees block a path or destroy a bridge, forcing travelers to create competing alternative routes until one becomes permanent. Season over season, entire sections of creekside trail erode, only for tentative steps to lead to traversable new trails. Give it enough years and a trail remains the same in name only.</p>

<p>Depending on your social networks or your newsletter subscriptions, you may have seen posts bemoaning that user researchers are doing the wrong work with the wrong teams. That we’re either underutilized or unnecessary. That we should hold all the power or that powerful new advances in technology can replace us.</p>

<p>All of the above scenarios are true, because the world of user research is not a monolith. Another way to say this is that context matters; depending on your industry, your org, all the way down to your manager, your scenario is unique. The demands of research roles change researcher by researcher, quarter over quarter, fiscal year over fiscal year. Even the name of the role varies according to the naming conventions, or lack thereof, of our organizations. The responsibilities of a <strong>User Researcher</strong> and a <strong>Design Researcher</strong> might be identical in different contexts, just as a <strong>path</strong> and a <strong>trail</strong> describe the same passage in different communities.</p>

<h3 id="we-apply-for-jobs-that-someone-else-scoped">We apply for jobs that someone else scoped</h3>

<p>The handwringing about our field doesn’t change the reality that <em>we typically don’t get to write our own job descriptions</em>. Jobs are in short supply, and we often land roles with descriptions that closely match what we’ve already done, not what we aspire to do. Thus we might find ourselves in a cycle of employment situations where our work is applicable beyond how it’s typically used—where our mandate is frustratingly shortsighted compared to our capabilities. We might get hired onto design teams yet yearn to advise executives. We might embed on tightly scoped product teams while bemoaning our lack of visibility into larger strategy decisions.</p>

<p>From the perspective of those around you—those who opened your role and those who benefit from its existence—<em>everything is fine</em>. And if you’re delighted with your particular research ecosystem, that is fine too. I’ll bet that’s the case for most user researchers.</p>

<h3 id="what-are-you-doing-to-change-your-situation">What are you doing to change your situation?</h3>

<p>But if the scenarios above apply to you—if you feel boxed in, misused, underleveraged, or like you’re doing the wrong projects, <em>what are you doing about it?</em> There is no one way to do user research—a feature of the profession, not a bug. It’s on us to make our <strong>respective</strong> cases for <em>more</em> by doing the work we were hired to do, understanding how decisions are made in our <strong>unique</strong> orgs, and then jostling to position our <strong>remarkable</strong> skills as an input into that <strong>particular</strong> decision-making process.</p>

<p>I said something similar at the <a href="https://youtu.be/f6h0nvL7xWA">Strive conference</a> in 2019, and then repurposed it in my <a href="http://researchpractice.co/">Research Practice</a> book:</p>

<blockquote>
  <p>It’s one thing to incorporate user research into your design or product team—proximity goes a long way. It’s another thing entirely to integrate research practices beyond the people you work with regularly. In my experience, you can measure user research’s impact across three organizational milestones: awareness, demand, and influence. When my colleagues know of my work, that is awareness. When my colleagues know of research and actively seek to incorporate it into their own work streams, that is demand. And when I can propose research projects that will inform organizational strategy, that’s influence.</p>
</blockquote>

<p>It’s a long journey from the place we’re hired to the place we think we should be, but we’re not without agency. We’re researchers—our superpower is to take stock of a complex scenario and spot the possible paths forward.</p>

<hr />

<p><em>Big thanks to <a href="https://behzod.com/">Behzod Sirjani</a> and <a href="https://www.linkedin.com/in/danielspitzberg/">Danny Spitzberg</a> for content feedback.</em></p>]]></content><author><name>Gregg Bernstein</name></author><category term="POV" /><category term="Design" /><category term="Hiring" /><summary type="html"><![CDATA[We're all navigating our own paths]]></summary></entry><entry><title type="html">On transitioning from design to research</title><link href="https://gregg.io/transition-design-to-research" rel="alternate" type="text/html" title="On transitioning from design to research" /><published>2023-03-29T01:00:25+00:00</published><updated>2023-03-29T01:00:25+00:00</updated><id>https://gregg.io/transition-design-to-research</id><content type="html" xml:base="https://gregg.io/transition-design-to-research"><![CDATA[<p>Over on Learners, I answered the question, “What’s your advice for designers transitioning into a research role?”</p>

<p><img src="../images/learners_transition.png" alt="Talking about moving from design to research" title="Learners screenshot" />
<em>Me talking about transitioning to design research (screenshot only)</em></p>

<p>I was a designer and design professor before I transitioned into UX research. While there is <em>plenty</em> to learn to successfully make the transition from design to research, one thing that worked for me was to <strong>think of every facet of the research process as a discrete experience to design</strong>. Here’s how you might do this:</p>

<ul>
  <li>In creating a research plan, your goal is to design the combination of methods and participants that will yield the right knowledge.</li>
  <li>In writing a discussion guide, you can design, test, and finalize a question flow that will yield informative answers.</li>
  <li>In planning a usability test, you have to design the right mix of tasks and questions to yield conclusive findings.</li>
  <li>You can design how your colleagues will <em>experience</em> your research; what will your workshops, research updates, or presentations be like?</li>
</ul>

<p>To be sure, each one of the tasks I mentioned above is a lot to think about. But by breaking the research process into a series of smaller experiences to design for our participants and colleagues, we’re able to leverage our design training in service of our research goals.</p>

<p>So in short: think of your research process as yet another experience to design.</p>]]></content><author><name>Gregg Bernstein</name></author><category term="POV" /><category term="Learners" /><category term="Design" /><summary type="html"><![CDATA[Break the research process into a series of smaller experiences to design for your participants and colleagues]]></summary></entry></feed>