The Art and Science of Good Decisions: Balancing Instinct and Data
Instincts (including domain expertise + taste) may be the "new" moat needed to win. (#94)
Data. Data. Data. I’m not a zealot, but I’m definitely a fan. I appreciate data as a communication tool. It has a way—when used correctly—of simplifying things and providing focus.
Without data, every decision is made on a whim, often by the loudest person in the room. No one is right all the time (or even most of the time) based solely on their opinions (although those who think they are will 100% argue with me on this point!).
For the last 15+ years, I’ve been preaching the “simple” framework: build, measure, learn. Collect data. Analyze it. Make rational, informed decisions. Repeat.
But if you operate solely off the data, you often miss where the magic actually happens. Instincts still matter—and I’d argue they matter now more than ever.
In fact, your instincts might be the only thing keeping you from drowning in a sea of dashboards, vanity metrics, and AI-generated noise.
Data Isn’t Always Enough
I once worked with a startup where the data told a clear story: one user segment was converting way better than the others. Naturally, we focused there. Double down on what’s working, right?
But something felt off. The usage patterns weren’t sticky. The behaviour was erratic. Retention was weak. Despite the metrics, my gut told me we were seeing a false positive.
Six weeks later, that segment evaporated. Turned out they were driven by a low-quality, third-party traffic source we hadn’t accounted for. The mistake? We trusted the data too blindly—and ignored the instinct that something wasn’t right.
Domain Expertise Is Instinct in Disguise
I recently heard someone say, “domain expertise is the new moat.” That stuck with me. AI is making everything absurdly easy (not a bad thing!) and levelling the playing field (which is good!). But then the question is, “How do you actually compete?”
➡️ Read Michelle Moon’s take, “In the Era of AI, What Is Your Moat?”
When everyone has access to the same tools and information, the differentiator becomes your understanding of how to use those tools in context. It’s about understanding a problem very deeply, perhaps in an industry or vertical that’s underserved because no one is paying attention. That’s domain expertise. It’s not just about knowing what to do—it’s about knowing why, when, and what not to do.
Vertical Venture Studios = Domain Expertise = Driving Better Returns
As a quick aside, one of the key reasons my company, Highline Beta, is focusing on vertical venture studios is that we see the power of domain expertise as a moat.
If you can understand problems better than anyone else and have a network of potential customers that you can reach more easily than anyone else, you increase the odds of building startups that matter, and by extension, generating better outcomes (and returns).
Domain expertise = knowing an industry very well. But it’s more than that: you also need to be known (have relevance in the industry) and have a network (access to potential partners, customers, co-investors, acquirers, and talent). Domain expertise in a vertical venture studio means being a magnet for everything in a specific industry.
In the context of data vs. instincts, domain expertise often shows up as instinct.
It’s the quiet voice in your head saying, “We’ve seen this before,” or “This is going to break in three months,” or “This user behaviour looks good on the surface, but it won’t last.” It’s not written down in a PRD (although I’m a fan of PRDs!). It’s not in the data yet. But it’s real—and often right.
That instinct is the result of hard-earned experience, repetition, failure, and time. Domain expertise is actually an operating system for intuition.
But Expertise Can Be a Trap
Here’s the caveat: expertise also creates blind spots.
You’ve seen patterns before, but that doesn’t mean they’re the same this time. You’ve built something that worked in one market, but it may not work in the next one. The danger is overfitting your past experiences to new, nuanced situations. That’s where instinct turns into bias.
I’ve made this mistake—more than once. I’ve relied too heavily on what worked before. I’ve dismissed ideas because they “didn’t work in the past,” without realizing the context had changed. I still struggle with anything in HR Tech because of how badly I failed with Standout Jobs (a company I co-founded in 2007). But a lot has changed since then—AI, for one, is on the scene and having a huge impact on recruiting. New startups are going to win in this market where I previously failed. The market still makes me sweat though! 😂
That’s the risk with domain expertise: you start to think you know, even when you don’t.
How to Use Instinct Without Being Blinded by It
The key is to treat instinct as a starting point, not the final verdict. In Lean Analytics, we wrote: “Instincts are experiments. Data is proof.”
Here are a few ways I’ve tried to balance instinct with open-mindedness:
Pair instinct with fresh eyes. Surround yourself with people who don’t have your experience, especially people who ask “why?” a lot. Their questions will poke holes in your assumptions. It may be annoying, but it’s a good thing.
Turn instinct into a hypothesis. If your gut tells you something’s wrong, try to articulate why. What are you expecting to see? What metric would confirm or contradict it?
Run small tests before scaling. If you’re leaning on instinct but don’t have clear data, find the smallest possible version of the idea to test. Prove it—or learn from it—quickly.
Document your predictions. When you go with your gut, write down what you expect and why. Revisit it. That feedback loop strengthens your instincts over time (there’s a rough sketch of what this can look like after this list).
Practice “strong opinions, weakly held.” Be confident in your experience. But be ready—eager, even—to change your mind when the evidence points somewhere new.
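To make the prediction journal concrete, here’s a minimal sketch in Python. Everything in it—the field names, the example entry—is illustrative, not a prescribed format:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class Prediction:
    """One gut call, written down so it can be scored later."""
    hypothesis: str               # what your instinct says will happen
    reasoning: str                # why you believe it
    metric: str                   # what would confirm or contradict it
    expected: str                 # the outcome you're betting on
    made_on: date = field(default_factory=date.today)
    actual: Optional[str] = None  # fill this in when results arrive

    def review(self) -> str:
        if self.actual is None:
            return f"[open] {self.hypothesis}"
        verdict = "HIT" if self.actual == self.expected else "MISS"
        return f"[{verdict}] expected {self.expected!r}, got {self.actual!r}"

# Example: log the hunch, then come back later and score it.
p = Prediction(
    hypothesis="The converting segment won't stick",
    reasoning="Usage is erratic and retention is weak",
    metric="week-4 retention for the segment",
    expected="week-4 retention under 10%",
)
p.actual = "week-4 retention under 10%"  # six weeks later...
print(p.review())
```

The point isn’t the tooling—a spreadsheet works just as well. What matters is that the prediction is written down before the result comes in, so you can’t retroactively convince yourself you knew it all along.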
Taste = Instinct, Too
I’ve been watching the rise of “taste” as a differentiator in product and startup circles.
Taste is another word for instinct. It’s the subtle, unteachable sense of what’s good. What’s right. What feels just a bit more elevated.
Sari Azout, in her article “What matters in the age of AI is taste,” puts it well:
“AI is powerful but taste-blind. It can make anything but it has no idea what’s actually worth making.”
Similarly, Aarron Walter, in “The Brief: Why taste is your most important design tool,” writes:
“Gen AI is like a chef cooking up an endless buffet, but there’s a lot on the menu that’s not great.”
In a world of generative everything, taste becomes the filter. Instinct becomes the guide. Remember: You still have to build things people want, figure out how to reach them, and create value. Nothing has changed about the fundamentals.
Instincts Guide You to the Right Questions. Data Helps You Test Them.
This is usually the approach I take:
Start with a hunch.
Articulate the hypothesis.
Run an experiment.
Collect the data.
Make a decision.
Reflect, learn, iterate.
It’s a loop—one that tightens and improves as you build muscle in both your instinct and your analytics toolkit. Most good product teams operate this way. In Lean Analytics, we expanded the concept of “build→measure→learn” into the Lean Analytics Cycle.
The goal was to provide more detail on how to successfully execute “build→measure→learn”. Note: Even in a book on data we said things like, “Sometimes you just ship a change to production and see what happens, instead of running a proper test.” It’s rare that you can run a perfectly designed experiment with all the variables controlled and get a clear, definitive, repeatable answer.
That’s when it’s important to ask, “What do I do when the data is inconclusive?”
When the Data’s Inconclusive—And You Ship Anyway
Sometimes the loop breaks. You run the experiment. You measure. You slice and dice. And the data just…doesn’t say much. There’s no statistically significant lift. No clear regression. Just ambiguity.
What do you do then?
On Lenny’s Podcast, Archie Abrams—Shopify’s VP of Product and Head of Growth—talks about exactly this. He explains that Shopify often runs rigorous experiments and tracks data carefully, but they don’t always kill ideas when the data is inconclusive.
Instead, they may extend the experiment. Or even ship something at full scale based on conviction—because they believe the value will compound over time, even if the short-term metrics don’t immediately reflect it.
That kind of decision isn’t irrational. It’s experience-driven. It’s intuition, powered by a long-term view. Abrams refers to this as taking a “systems-level” approach to product building—where the impact of changes may not surface right away but are still worth pursuing.
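To make “inconclusive” concrete, here’s a hedged sketch of how a result might be classified, using a standard two-proportion z-test. The alpha threshold and the extend-or-ship wording are illustrative—this isn’t Shopify’s actual process:

```python
import math

def ab_test_verdict(conv_a: int, n_a: int, conv_b: int, n_b: int,
                    alpha: float = 0.05) -> str:
    """Two-proportion z-test; returns a verdict, not a decision."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal approximation.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    if p_value < alpha:
        return f"significant (p={p_value:.3f}): let the data decide"
    # Inconclusive: the loop hands the decision back to you.
    return (f"inconclusive (p={p_value:.3f}): extend the test, or ship "
            "on conviction if you believe the value compounds")

# 2.0% vs 2.2% conversion with 5,000 users per arm: ambiguous.
print(ab_test_verdict(100, 5000, 110, 5000))
```

Note that the function deliberately returns a verdict, not a decision: the inconclusive branch hands the call back to a human.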
Incidentally, this is part of what Sean Ellis and I were exploring in our recent shared post on the One Metric That Matters and the North Star Metric.
Every experiment has an OMTM that you’re focused on.
But the goal isn’t just to improve the OMTM.
The goal is to improve the OMTM in service of the NSM, which is the metric that defines the overall health of the business.
So there are occasions where the OMTM may not move as much as you’d like, but you still have confidence that your experiment / change / feature / etc. will serve “the greater good” (of the North Star Metric).
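As a rough illustration of that OMTM-in-service-of-the-NSM logic, here’s a sketch in Python. The metric names, thresholds, and verdict strings are all made up:

```python
def evaluate_experiment(omtm_lift: float, nsm_trend: float) -> str:
    """Judge an experiment by its OMTM, in service of the NSM.

    omtm_lift: relative change in the experiment's focus metric
               (e.g., activation rate for this test).
    nsm_trend: directional movement of the North Star Metric
               (e.g., weekly active customers) over the same window.
    """
    if omtm_lift > 0.05 and nsm_trend >= 0:
        return "clear win: OMTM up, NSM healthy"
    if omtm_lift > 0.05 and nsm_trend < 0:
        return "caution: OMTM up but NSM slipping; check for local optimization"
    if nsm_trend > 0:
        return "OMTM flat but NSM improving: a conviction call on the greater good"
    return "no movement anywhere: rethink the hypothesis"

print(evaluate_experiment(omtm_lift=0.01, nsm_trend=0.03))
```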
Sometimes You Start With Data Too
As your startup matures and you gain more traction, you’ll start collecting (a lot of) data. Capturing data is pretty easy, which is great, but it can also lead to focusing on it too much.
Nevertheless, data can tell a story. As you’re watching numbers on a dashboard go up or down, you may realize:
Where the biggest problem is in your product usage / business (and by extension where to focus); and,
There are patterns amongst your user/customer base that require attention.
I’m a big fan of visualizing everything—mapping the business—so you can really understand what’s going on.
In this scenario, the data gives you a starting point for validating issues:
Start with the data and identify problem spots.
Talk to users/customers to validate the problems qualitatively.
Ideate potential solutions (which could leverage a hunch).
Articulate the hypothesis.
Run an experiment.
Collect the data.
Make a decision.
Reflect, learn, iterate.
This isn’t radically different from starting with a hunch, but it is different. Not surprisingly, this means sometimes you kick off a bunch of testing on a hunch, and sometimes you do it with data. And sometimes both play a role in helping you decide what to pursue and how.
Here’s an attempt to merge these two approaches together: a single loop with two entry points, one starting from a hunch and one starting from the data.
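As a sketch of what that merged loop might look like in code—the helper functions are illustrative stand-ins for real work (user interviews, actual experiments), not a real API:

```python
# Illustrative stand-ins for the real work (interviews, A/B tests, etc.).
def validate_with_users(problem: str) -> str:
    return f"validated: {problem}"          # qualitative check first

def articulate_hypothesis(problem: str) -> str:
    return f"hypothesis about '{problem}'"  # falsifiable, with a metric

def run_experiment(hypothesis: str) -> dict:
    return {"hypothesis": hypothesis, "result": "inconclusive"}

def decision_loop(trigger: str, signal: str) -> dict:
    """trigger is 'hunch' (instinct-first) or 'data' (dashboard-first)."""
    if trigger == "data":
        # Data-first: a dashboard surfaced a problem spot; validate it
        # qualitatively with users before acting on it.
        problem = validate_with_users(signal)
    else:
        # Instinct-first: the hunch itself is the starting point.
        problem = signal
    hypothesis = articulate_hypothesis(problem)  # testable, with a metric
    result = run_experiment(hypothesis)          # smallest viable test
    # Data is an input to the decision, not the decision itself:
    # a human decides, reflects, and feeds the learning back in.
    return result

print(decision_loop("hunch", "this segment won't retain"))
print(decision_loop("data", "drop-off at onboarding step 3"))
```

Either entry point converges on the same hypothesize–experiment–decide cycle; only the starting signal differs.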
Data doesn’t make decisions. It’s an input into decision-making. Ultimately you decide what to focus on, how to execute experiments, and when to “f*** it, ship it” versus following a more rigorous methodology. If everything were blindly data-driven, the decision-making power would be out of your hands, reducing or eliminating the impact of instincts. In some cases that’s a good thing, but often it’s not. Often instincts are the key to unlocking huge value, and somehow you have to figure out how to merge your instincts (and those of your team, leadership, etc.) with the data.
While AI pushes us towards automation, more data-driven activity, and in some cases eliminating the need for humans at all, instincts may be the key unlock, differentiator, and moat that you need, especially if you think of instincts as domain expertise and taste.