How to Apply the Scientific Method to Startups Without Being a Zealot
Otherwise known as: how to avoid risking everything on one insane bet. (#11)
Most people learn the scientific method in school. It’s a straightforward and rigorous way of testing things. Wikipedia does a good job of explaining it:
The scientific method is an empirical method for acquiring knowledge that has characterized the development of science since at least the 17th century. It involves careful observation, applying rigorous skepticism about what is observed, given that cognitive assumptions can distort how one interprets the observation. It involves formulating hypotheses, via induction, based on such observations; the testability of hypotheses, experimental and measurement-based statistical testing of deductions drawn from the hypotheses; and refinement (or elimination) of the hypotheses based on the experimental findings. These are principles of the scientific method, as distinguished from a definitive series of steps applicable to all scientific enterprises.
A scientific hypothesis must be falsifiable, implying that it is possible to identify a possible outcome of an experiment or observation that conflicts with predictions deduced from the hypothesis; otherwise, the hypothesis cannot be meaningfully tested.
Incidentally, this looks quite a bit like Lean Startup’s iterative Build → Measure → Learn cycle, doesn’t it?
In StartupLand, we don’t really want to talk about the scientific method. We’d rather focus on hero-worshipping God-like founders who had epiphany after epiphany (they shower a lot, I think) and led with infallible skill. I wish I were a genius too, with the power to rewrite history. But alas…
We also like to talk about Lean Startup (and other similar approaches such as Design Thinking and Jobs to be Done) as panaceas. As new frameworks or approaches emerge, we often espouse them as “the answer” … but that’s also rarely true.
Zealotry sucks. 🤮
I don’t want to be a process zealot kneeling at the altar of a “6 steps to winning” framework. I also don’t want to shower all day hoping I’m struck with an almighty “aha!”
But I definitely have strong beliefs about a few things. For example:
If you rely exclusively on your gut to guide everything you do, you’ll get indigestion. And your startup will also fail.
If you try to build the “perfect product” as v1, you’ll fail.
If you don’t understand the riskiest assumptions faced by your business, you’ll fail.
Understanding Assumptions
An assumption is something you believe to be true but haven’t yet proven. Writing your assumptions down allows you to figure out not only what you need to validate but in what order, so that you focus on learning the most important things first.
The next time you’re sitting in a meeting, see how many times someone spews “a fact” that’s really an assumption.
“Customers want this new feature.” — Oh ya? How do we know? Why do they want it?
“People will pay for this new widget.” — Oh ya? How do we know? Why will they pay? Who are these people, anyway? What are they going to do with that widget? Why?
“It takes people too long to [do that task]. We could make it faster by [doing some thing].” — Oh ya? How do we know it takes too long? What does too long even mean? How do we know the solution will speed it up? Will it speed it up enough?
You probably get the point.
Perhaps in the next meeting when people are making bold statements without sufficient evidence (read: assumptions) you can actually step up and say, “Bob, I really think that’s an assumption. How do we know?” Bob might lose it 😰, but you’re asking the right question.
Here’s the thing: pretty much everything is an assumption.
This is especially true early on when you’re trying to validate the problem.
But it’s also very true when you’re building the solution. And marketing your product. And developing your business model. Etc.
So the question is this: How many decisions are you going to make purely on your own beliefs? Or are you going to recognize that what you “know to be true” may in fact not be totally true, and requires actual testing?
How to Use Assumptions
Here’s how you can use assumptions to test the right things, in the right order:
Write down every assumption you have
Identify whether they’re Desirability, Viability, or Feasibility related
Rank the assumptions based on Criticality and Certainty
Describe how you’d test the assumptions (focusing on the most critical and least certain); include success criteria/metrics for each experiment and a time period for the test
1. Write down every assumption
Now that you know “everything is an assumption” you’ll probably have a long list. Here are a few tips:
Start every assumption with, “We believe…” and then write out what you believe to be true. Even if you don’t totally believe it, frame the assumption this way; it makes testing easier, because a statement of belief gives you something falsifiable to test against. Even if you’re very confident you already have proof of an assumption, write it down anyway. Teammates may disagree.
Aim for the most precise assumptions you can, but accept that some of them are going to be high level. When you first do this exercise, go for quantity, not quality—the more assumptions the better. Just know that if an assumption is too high level, it becomes hard to run a good experiment against it.
Ask “why” a lot. Have you ever heard of “5 Whys”? It’s a problem-solving methodology that suggests you dig into something by asking why 5 times, with the goal of getting to a root cause. If you ask “why?” a lot, you’ll generate a lot of assumptions. Does “5 Whys” get annoying? Yes. Which is why you need to get savvy about asking questions in different ways.
2. Categorize assumptions as desirability, viability or feasibility
Desirability, Viability, Feasibility (or DVF) is a concept from Design Thinking. I’ve found it to be very effective at getting people to focus on what actually matters.
In simple terms:
Desirability is whether users want “it” or not.
Viability is whether “it” is good for business / you can make the business model work.
Feasibility is whether “it” can be delivered. (This includes technical questions, but also compliance, regulatory, legal, etc.)
You almost always want to start with Desirability, because if you get that wrong, it doesn’t matter if you can build it or you think it’ll make money.
Too many founders (technical and non-technical) want to build stuff. They love building. I do too. But we often forget to build things that people want. Sure, we spoke to two of our friends and asked our moms what they thought (our moms still love us!) but we didn’t really dig deep.
Too many corporate innovators spend an enormous amount of time in Excel mapping out a complex P&L and business model, demonstrating how much money a new initiative will make. Here’s a secret: You can make Excel do almost anything you want—at least with numbers. Total revenue in 5 years is too small? Just change the market share from 1% to 2%! If you want a $100M business “on paper” I can 100% guarantee you that you can make that happen.
For every assumption you have, label it as Desirability, Viability or Feasibility. In some cases, an assumption can be more than one—that’s fine. It might make testing it more complicated, but c’est la vie.
Going through this exercise encourages a conversation around what really matters and helps us ask better questions. “Do we really know if people have that problem? Do we really know if they’ll pay $5/month? Do we really know what the MVP should include?” Etc.
Something as simple as DVF can affect product and strategy meetings. I’ve seen it work. It’s a tool that gives people a way of describing issues and digging in.
3. Rank the assumptions based on Criticality and Certainty
Now that you have a list of assumptions and they’re all labelled with DVF, you need to figure out which ones are the most important.
Which of your assumptions—if proven wrong—will completely end your business (or product, or whatever it is you’re working on?)
Ultimately that’s the question you have to ask. And you have to be honest too.
That’s how you define criticality. If something is insanely critical to the success of what you’re doing, that’s the assumption you want to go after first. It’s not the colour of your logo, btw. Or how fancy the snacks are in your office. Or that feature you’re itching to build.
Certainty is how sure (or unsure) you are about something. If you’ve already done the research and proven something with a degree of confidence, then you can define an assumption as “certain.” If you have no bloody clue, it’s an uncertain assumption.
When we run these types of collaborative exercises with teams (at Highline Beta) we often do them with a simple 2×2 matrix: Criticality on one axis, Certainty on the other. Put it on a whiteboard, or a big sheet of paper with sticky notes.
Then you can get everyone to write out their assumptions, group them around common themes, and “rank” them on the matrix.
Ideally only a few assumptions stand out as super critical and uncertain. Sometimes we see teams go through this and EVERYTHING is a top priority. That’s a recipe for disaster. It’ll lead to analysis paralysis. When doing this you have to be pretty strict about it, and remember the question I posed earlier:
What assumptions, if proven wrong, end this whole process? ☠️
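If your team prefers a spreadsheet or script to sticky notes, here’s a minimal sketch of the same ranking exercise in Python. Everything in it is illustrative: the 1–5 scales, the field names, and the sample assumptions are stand-ins, not a prescribed format.

```python
# A minimal sketch of the criticality/certainty ranking exercise.
# The 1-5 scales, field names, and sample assumptions are illustrative
# stand-ins, not a prescribed format.
from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str    # framed as "We believe..."
    dvf: str          # "Desirability", "Viability", or "Feasibility"
    criticality: int  # 1 (nice to know) .. 5 (if wrong, the business is dead)
    certainty: int    # 1 (no bloody clue) .. 5 (already proven)

assumptions = [
    Assumption("We believe freelancers struggle to track invoices", "Desirability", 5, 2),
    Assumption("We believe users will pay $5/month", "Viability", 4, 1),
    Assumption("We believe we can ship an MVP in 8 weeks", "Feasibility", 3, 4),
]

# Most critical and least certain first: these are the ones to test now.
for a in sorted(assumptions, key=lambda a: (-a.criticality, a.certainty)):
    print(f"[{a.dvf}] crit={a.criticality} cert={a.certainty}  {a.statement}")
```

Sorting by high criticality and low certainty surfaces the same “test these first” corner that the sticky notes on the 2×2 matrix would.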
4. Define tests for each of the most critical and uncertain assumptions
Once you’ve identified the riskiest and most critical assumptions, you need to test them. The ideal test (as per the scientific method) is one that is controlled and falsifiable. If you have too many variables, it’s hard to know what’s affecting things. And if you have no way of invalidating your hypothesis, because everything is designed only to prove your assumption, you’re cheating.
To be clear: It can be very difficult to run super controlled experiments. Don’t kill yourself trying to create the perfect experiment, especially in the early stages of your startup. It’ll lead to a lot of frustration.
When in doubt, do something. Try something. And see what happens.
It might mean you have to run multiple experiments to learn enough, but that’s better than trying to design something perfect.
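To make “controlled and falsifiable” concrete, here’s a hedged sketch of what writing down a test might look like, with the success criteria and time period from step 4 baked in. The fields and the 5% threshold are illustrative guesses, not benchmarks.

```python
# Sketch of a falsifiable test: the success criteria and time box are
# fixed BEFORE running the experiment, so the test can actually fail.
from dataclasses import dataclass

@dataclass
class Experiment:
    assumption: str           # the "We believe..." statement under test
    method: str               # e.g. landing page, interviews, ad campaign
    metric: str               # what you will measure
    success_threshold: float  # decided up front; no moving goalposts
    days_to_run: int          # time box for the test

    def verdict(self, observed: float) -> str:
        if observed >= self.success_threshold:
            return "validated (for now) - keep going"
        return "invalidated - revisit the assumption"

test = Experiment(
    assumption="We believe users will pay $5/month",
    method="Landing page with a 'Buy now' button",
    metric="visitor-to-email-signup conversion rate",
    success_threshold=0.05,  # 5% is an illustrative guess, not a benchmark
    days_to_run=14,
)
print(test.verdict(observed=0.021))  # invalidated - revisit the assumption
```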
Some experiments may be qualitative in nature. This is particularly true when you’re trying to validate a problem. For example: customer interviews are an experiment. They’re not perfect, and they’re not going to give you statistically significant data, but that’s OK.
Here’s a list of some common tests & experiments that you might run:
Shop Along: Accompany a user on their journey (through whatever it is you’re interested in exploring.)
Concept Statements: A simple 1-page document that describes a value proposition; share with users and see how they react.
Landing Page: Very common tool to test value propositions, conversion rates, CTAs, etc. Can help start to collect quantitative data. Tip: Get emails initially, and follow up to conduct customer interviews.
Digital Ad Campaign: Great for testing value propositions and target audiences, to see what resonates. Can lead to a landing page. Helps get a sense of what’s engaging (through conversion metrics; see the quick significance check after this list.)
Surveys: Create surveys that you can push online to users, or to user panels. Warning: Do not rush surveying. I see lots of people rely on surveys to get answers, but you can bias them so easily to get the answers you want.
Paper Prototype: Very simple way of exploring interaction design and early solution ideation with users. Get users to co-create solutions with you. Anyone can do this—you don’t need to be a designer.
Clickable Prototype: A more robust “solution” but can still be very basic. I still love using Balsamiq for quick prototyping because it’s so easy and takes out almost all design-related elements automatically.
Brochure or Sales Sheet: A good B2B tool. Create a brochure or sales sheet (physical or digital) to share with prospective customers. See how they react. May drive conversion to something like a Letter of Intent (which is a strong desirability signal.)
There are many more types of experiments, tools and techniques you can use. The key is to figure out the best experiment for the assumption you’re testing. And again, if you’re not sure, try something, learn & iterate.
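One caution for the quantitative tests above (landing pages, ad campaigns): small samples lie. Before declaring a winning variant, a quick two-proportion z-test can tell you whether a difference in conversion rates is likely real. A rough sketch, with invented numbers:

```python
# Rough sketch: compare conversion rates of two variants with a
# two-proportion z-test. All numbers below are invented for illustration.
from math import erf, sqrt

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-tailed p-value from the normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant A: 18 signups from 400 visitors. Variant B: 35 from 420.
z, p_value = two_proportion_z(18, 400, 35, 420)
print(f"z = {z:.2f}, p = {p_value:.3f}")  # p below ~0.05 suggests a real difference
```

If the p-value is large, don’t read too much into the gap; run the test longer, or move on to a cheaper signal.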
You’ll need something to track all of your assumptions & experiments. I’ve included a basic spreadsheet example here: https://bit.ly/3XtyU5Z
The template has a simple example in it with a problem statement, a list of a few assumptions, and rankings for criticality and certainty. Feel free to copy the template and use it.
Don’t be a Donkey
You know what happens when you assume something, right?
You make an ass out of u and me.
We’ve all heard that before, but it’s worth remembering. Assumptions are dangerous. If they’re taken as gospel, mistakes will be made.
Most people (whether they work in startups or big companies) do not follow a super rigorous approach to building products and validating ideas. Despite all the hype around Lean Startup, Design Thinking, JTBD, etc. people are still making the same mistakes over and over. I get it, and I’ve lived it.
I don’t believe in being insanely rigorous or trying to enforce a methodology that sucks the magic out of things. But a little more rigor would be nice. 😂
So here’s my suggestion:
Keep an eye out for assumptions and call them out. Just put your hand up a bit more often and ask questions.
Use DVF. I’ve seen this concept work by generating interesting questions and conversations. Start sneaking DVF into how you do things at work and it will help.
Test more. This again requires that you put your hand up and say something. “Hey, why don’t we run a quick experiment?” Stop arguing (I mean “debating”) in meetings about the “right way” to do something since none of you probably know. Instead just go out and test. Even small tests, repeated frequently enough, can improve the odds of building better stuff and create cultural change.
Good luck!
Comments

Hi Ben,
One would expect that Lean methods would be more rigorously followed by now. I come across founders every day who don't follow them, and who even refuse to consider them. I see accelerators that pay lip service to them and "graduate" ventures with half-baked ideas by the dozen. The worst is when they leave founders without a clear go/no-go decision criterion, leading them to believe that there is hope for their "idea", when frankly there is none. Investors usually have specific decision criteria, and apart from (in)validating their hypotheses, founders should focus on seeing if they can clearly demonstrate the value to investors along these decision axes. (This assumes that they are looking for funding, of course.) Ultimately, the goal of the hypothesis approach is to make the venture successful, and using these investment decision criteria as proxy goalposts may not be such a bad thing. I think the first thing that accelerators need to do is to educate founders about building a business, even before they explore users and product and market.
I have experienced first-hand how difficult it is to keep one's objectivity regarding one's product, idea, and business, and there are always other factors that affect decisions, some of which are beyond one's control. Willingness to keep an open mind and accept the outcomes of experiments is paramount. If you refuse to let decision making be guided by experiments, it's all moot anyway.
Great stuff. I wrote something much higher level that touches on the same issues: https://tempo.substack.com/p/dont-tell-me-your-strategy-budgeting (esp. your comment about fictional Excel)