When Product Usage is Actually a Bad Thing
Not all usage of your product is good or signals that a customer is happy. (#61)
After launching a product, the next big job is to measure usage and see whether the product is creating value.
Are people using the product?
Are they using it in the right way?
Are they using it frequently enough?
Usage is a proxy for value creation—if people are using your product, they must be getting value.
Generally it’s assumed that more usage = more value.
Until that’s not the case.
When is usage a bad thing?
Recently, I’ve worked with 3 startups that care about usage (and measure it), but aren’t looking to increase it the way you might think. In fact, they’re heading towards a goal of reducing usage as much as possible.
Let’s dig in…
1. Carbonhound Aims to Automate Data Entry
Carbonhound automates client and regulatory reporting on carbon emissions, so clients can take climate action with integrity. They bring together Scope 1, 2 and 3 data, making it super easy for companies to access their data, ingest it and generate reports.
Despite all the noise around reducing carbon emissions, we’re still in the early days. Businesses are trying to wrap their heads around how to get the data they need, especially with a murky regulatory environment that’s changing frequently.
No data = no reporting = no action.
So the key is getting access to the data, easily & quickly. Carbonhound has invested a lot in building out integrations and data ingestion capabilities to automate as much of the data entry process as possible. By doing so, they actually reduce the frequency of use in their app, because clients don’t have to go in and manually input data.
More usage of Carbonhound = (probably) manual data entry = likely frustration = churn
That’s a bad thing. But if Carbonhound customers rely on data automation, their frequency of use is lower and the patterns of “successful use” (i.e. use of the product that signals they’re getting value) are harder to assess. Is it accessing reports weekly? Monthly? Inviting others to access the reports? Requesting new data sources to be automatically ingested?
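To make that concrete, here's one way you might score "successful use" on value signals instead of logins. A minimal sketch; the event names and weights are hypothetical, not Carbonhound's actual schema:

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical event names and weights; treat these as placeholders,
# not Carbonhound's real analytics schema.
VALUE_SIGNALS = {
    "report_viewed": 1.0,
    "report_shared": 2.0,          # inviting others to access reports
    "data_source_requested": 3.0,  # asking for a new automated integration
}

def value_signal_score(events, window_days=30):
    """Score a customer on value-signal events instead of raw login counts."""
    cutoff = datetime.utcnow() - timedelta(days=window_days)
    recent = [e for e in events if e["timestamp"] >= cutoff]
    counts = Counter(e["name"] for e in recent)
    return sum(VALUE_SIGNALS.get(name, 0.0) * n for name, n in counts.items())
```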
As products automate more things for us, usage drops and the patterns of usage change.
Map this out in a systems diagram
I’ve talked about this before—I’m a huge believer in mapping your business as a systems diagram. Here’s a very rough, high-level version I created for Carbonhound (w/o their input!):
It shows that a customer can decide to pay for the automated data integration / ingestion service, get to the value (reports) faster, and then leave it be, while continuously getting value.
On the other hand, a customer that chooses to manually input data may or may not ever do it, or do it properly. They might get to reports (hopefully with complete data), but they’ll have to go back and do the data entry again (over and over, at some regular interval). Carbonhound can build features to remind customers to do the data entry, but that doesn’t guarantee that customers will. The company can proactively engage customers (through Customer Success), but that takes time and resources.
2. Moselle Enables Merchants to Auto-Buy / Replenish Supply
Moselle is another good example of a startup trying to automate a process that’s labor-intensive and fraught with errors.
Moselle is an inventory orchestration platform designed for fast-growing consumer brands. They automate buying/replenishing your stock (through more accurate forecasting/planning) so you don’t have to worry about it.
Today, Moselle doesn’t auto-buy inventory for a merchant without their approval, so the usage pattern is fairly clear:
Moselle decides what the merchant needs → Moselle tells the merchant → Merchant reviews the order and approves or changes it.
Moselle knows how often it needs to re-order supply for a merchant and can track the pattern; if a merchant doesn’t come into the app to review/approve/edit an order, they’ve likely abandoned the service.
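One way to operationalize that signal: compare the time since a merchant last reviewed an order against their expected re-order cadence. A rough sketch with made-up thresholds:

```python
from datetime import datetime, timedelta

# Illustrative only: assumes we know each merchant's typical re-order
# cadence and the timestamp of their last review/approval in the app.

def likely_abandoned(last_reviewed_at: datetime,
                     reorder_cadence_days: float,
                     grace_multiplier: float = 2.0) -> bool:
    """Flag a merchant as at-risk if they haven't reviewed an order
    within a couple of expected re-order cycles."""
    deadline = last_reviewed_at + timedelta(days=reorder_cadence_days * grace_multiplier)
    return datetime.utcnow() > deadline

# e.g. a merchant who re-orders every ~14 days but hasn't reviewed in 35:
print(likely_abandoned(datetime.utcnow() - timedelta(days=35), 14))  # True
```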
But in the future, Moselle could completely automate this process, and put inventory orchestration on auto-pilot. Once a customer trusts Moselle enough, they can tell the app to “handle all this inventory stuff,” saving the merchant more time.
How does Moselle measure usage when it’s completely automated? How does Moselle track value creation when a customer may only log in every couple of months?
As products deliver more automation, usage and value creation become more disconnected.
3. Price Optimization Startup Doesn’t Want Usage At All
This startup (which I can’t name publicly) provides price optimization capabilities (the industry doesn’t really matter). They help business owners optimize the price of their inventory. The startup makes recommendations (on a frequent basis) and the business owner allows the recommendations to go through or overrides them.
The startup learned something interesting: Customers using the app regularly are the ones that are overriding the startup’s recommendations the most. These are the customers that are in the product constantly, reviewing the recommendations, second guessing them, and ultimately going with what they think makes the most sense. Turns out they’re also the ones that reach out a lot, complain the most, and consume the customer success team’s time.
More usage = less value = more complaining = more expensive to serve = 😢
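If you were instrumenting this, a simple override rate could separate the customers who trust the recommendations from the ones burning customer success time. A minimal sketch; the field names and the 50% threshold are invented:

```python
def override_rate(accepted: int, overridden: int) -> float:
    """Share of recommendations the customer rejected."""
    total = accepted + overridden
    return overridden / total if total else 0.0

def flag_low_trust(customers, threshold=0.5):
    """Surface customers whose heavy in-app activity is actually a churn
    risk: they're constantly second-guessing the recommendations."""
    return [c["id"] for c in customers
            if override_rate(c["accepted"], c["overridden"]) > threshold]

customers = [
    {"id": "a", "accepted": 90, "overridden": 10},  # trusts the system
    {"id": "b", "accepted": 20, "overridden": 45},  # high usage, low trust
]
print(flag_low_trust(customers))  # ['b']
```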
This startup’s value proposition is compelling (and differentiated in the market), which helps them convert new customers quickly. But those customers don’t necessarily trust the company’s automated recommendations enough to implement them.
The more you automate for a customer, the more they need to trust that it’s working. And the more evidence you’ll need to provide of that.
Another systems diagram to map out the flow
This is very high level (because I can’t share all the details), but it walks through key scenarios:
Customer onboards (which is a big effort), gets price recommendations, reviews them, approves, sees the value and ultimately builds up enough trust in the system to let it automatically do its thing; or
Customer onboards (same effort), but doesn’t agree with the recommendations, they override them a few times, lose faith, engage customer success (costing the startup money) and eventually churn out (or possibly become convinced the system works and let it do its thing).
As Things Get More Automated, How Does that Impact What We Build and How We Sell?
Most startups are jumping on the AI bandwagon, but not fully appreciating the impact AI will have on their businesses. As implementing AI, Machine Learning (ML) and automation gets easier, we’re going to see entirely new products (and categories) emerge, but also a lot of changes to existing products & product categories.
Let’s dig in on the impact across the following:
Measuring good usage
Value propositions & brand
Product roadmaps
Business models
How do you Measure Good Usage?
I’m a big believer in identifying your “best users” and figuring out what makes them special/different. You need to figure out what makes them tick and find patterns in their profiles/demographics, usage, etc. If you can categorize your best users in some way, you can do two things:
Find more of them
Experiment with ways to level up your “OK users” into the “best users” category
Typically, you’d identify your “best users” based on usage patterns:
Who uses your product the most? What do they use the most?
Which customers add the most users (i.e. expand their usage)?
What are the usage patterns of customers that don’t churn?
In a world of increased automation, these patterns change. Customers will log in less frequently, and do less in your product, because your product is actually doing the work for them. They may add fewer users, because they don’t need as many people in your product doing stuff. Those users might be getting the value through the automations you’ve built, without ever having to have an account.
A few people I spoke with said they focus on the frequency that their product does something (automatically) on behalf of the user. For example, AI agents often run in the background, without even being triggered by a specific user request. So you measure the frequency of the AI agent doing something, not the actual human user.
Some products (think Zapier, IFTTT) measure the frequency that their automations are triggered (whether by a human or not). They can also see if a user adds more automations (that’s a good sign). “Not turning off the automation” is potentially a signal that the customer is happy, but I wouldn’t rely exclusively on that.
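A minimal sketch of what counting automation runs (rather than human sessions) might look like; the event shape is assumed for illustration, not any particular vendor's actual API:

```python
from collections import defaultdict

# Sketch: count automation runs per customer instead of human sessions.

def automation_activity(run_events):
    """Aggregate total runs and distinct active automations per customer."""
    runs = defaultdict(int)
    active = defaultdict(set)
    for e in run_events:
        runs[e["customer_id"]] += 1
        active[e["customer_id"]].add(e["automation_id"])
    return {cid: {"runs": runs[cid], "active_automations": len(active[cid])}
            for cid in runs}
```

Growth in active automations per customer is the “adds more automations” signal; a sustained drop in runs while the automations stay enabled is a prompt for a qualitative check-in, not proof of churn.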
What’s the new Stickiness for AI / Automation focused products?
In Lean Analytics, Alistair Croll and I defined five stages that startups go through: Empathy, Stickiness, Virality, Revenue and Scale.
The Stickiness stage is focused primarily on usage. Are people using your product frequently enough to give you confidence that you’ve solved the problem? If “yes” you move forward to Virality, where you focus on user/customer acquisition. If “no” you go back to figure out what’s going on. A lot of startups fail at this stage because they don’t actually solve a painful enough problem in the right way.
A lot of AI-focused software products will still measure frequency of usage (think DAU, WAU, MAU) and use that as a proxy of value creation. But not all of them. There will be some products that have to redefine Stickiness because of how they work (largely automated; not triggered by a user’s action or intervention) and the value they create.
In a “set it and forget it” world, measuring good usage is tough. The price optimization startup above doesn’t want users logging in and overriding its recommendations, because that diminishes its value proposition. So they track when people do this, but as a negative, not a positive.
The “absence of a behaviour” becomes the measurement of “good usage.”
Qualitative Feedback Becomes More Important
The importance of qualitative feedback is going to increase, as it gets harder to use quantitative data as the predominant measure of value creation. You may have a customer that doesn’t log in for months and think, “Uh oh, they’re at risk of churning!” only to realize that they’re perfectly happy because they set up the product to automate something, it does the job and they rarely have to make changes. You’d only know that via qualitative engagement with the customer.
Startups that are building “set it and forget it” products will need to increase the frequency and quality of qualitative feedback they collect in order to understand their users.
Qualitative feedback helps measure “customer outcomes”—did the customer actually get value from the product?
How do Value Propositions and Brands Change as Everything is Automated?
At the highest level, there are three value propositions (in a B2B context):
Make me money: By far the most valuable, no one says no to making more money.
Save me money: Second best value proposition. Can be hard to measure. Can make employees nervous (because usually involves operational efficiencies that may lower headcount).
Save me time: Third best value proposition. People want to do things more efficiently, but it’s hard to translate “save me time” to the bottom line—did the customer make or save money by saving time? What did they do with the time they saved?
I’ve seen AI / automation products pitch all three value propositions, but they tend to focus on “save time” more than anything else. “Save time” isn’t a bad value proposition, but I think a lot of AI / automation products will have to work very hard to connect time savings to a more important value proposition: make or save me money. This will need to translate into the value proposition, brand and product.
Trust Becomes More Important
Startups are always trying to build trust with customers. Every startup has a “social proof” section on their website, highlighting customers & testimonials. For AI / automation startups that won’t be enough, because trust is a key “feature” of their products.
If Moselle or the price optimization startup (mentioned earlier) can’t get people to trust their recommendations, people won’t use the products. If a user doesn’t trust the output of an AI agent, the user will verify the work every single time (and then the “save time” value proposition is moot).
Trust needs to be built into the core of the product & how it works, not just how the brand presents itself.
How do Products Change as they Deliver More Automation?
When your core value proposition is focused on automating things and reducing the frequency of use, how do you decide what to build? Maybe you don’t build much at all, once the core use case is in place. Maybe products become simpler with fewer “bells and whistles” that are typically used to increase usage frequency.
No matter what, you still have to prove value creation. If your product is out of sight, it may go out of mind. Or if your product is used very infrequently (because that’s the intended use), customers might question the value being created. You need to remain front and centre in some way, reminding customers of the value you’re providing, even if they’re not required to interact with your product.
Some automation products will get frequent usage and not have as big a problem. An AI agent that’s helping with daily tasks will proactively engage daily. That’s a good thing. But a lot of B2B products won’t have that level of usage.
In those cases, I believe reporting will become increasingly important. Infrequently used products that create value behind the scenes will need to report on the value that’s created:
“This month, you saved X hours because we automated Y and Z.”
“The AI agent ran X times, and was able to complete Y tasks for you last month.”
“We increased revenue by X%, because we automatically changed the pricing of your inventory Y times in the last 30 days.”
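Generating recaps like these can be as simple as filling templates from metrics the product already tracks. A minimal sketch; the metric names are placeholders, not any of these companies’ schemas:

```python
# Sketch of a monthly "value recap" generator. The metric names and
# numbers are placeholders, invented for illustration.

def monthly_value_recap(stats: dict) -> str:
    lines = []
    if "hours_saved" in stats:
        lines.append(f"This month you saved {stats['hours_saved']} hours "
                     f"through {stats.get('automations_run', 0)} automated runs.")
    if "revenue_lift_pct" in stats:
        lines.append(f"Automated pricing changes lifted revenue by "
                     f"{stats['revenue_lift_pct']}% over the last 30 days.")
    return "\n".join(lines)

print(monthly_value_recap({"hours_saved": 12, "automations_run": 340,
                           "revenue_lift_pct": 4.2}))
```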
Carbonhound can measure and report on the amount of data it’s ingested and formatted. Moselle can measure and report on the supplies/inventory it purchased on behalf of merchants. Both companies can do this fairly frequently, potentially in real-time, if it makes sense for the customer to get data that actively.
Feedback loops built into products will also be a priority. How do you determine if a customer actually achieved their goals? In a typical B2B SaaS product, usage is a reasonable way of measuring a customer outcome (although not exclusively). For an automation-focused product, outcomes are trickier.
Take Volley as an example. They help B2B companies talk to their prospects by scaling personalized and hyper-relevant outreach. They offer a managed service and a product. Volley can find and reach way more prospects than you can manually on your own, but that’s not enough; the outreach has to lead to qualified leads, which then has to drive sales. Volley can’t control sales, but it can measure qualified leads if customers let it know what’s working and what needs improving. AI / automation companies will need to build in super smart, proactive feedback loops to drive engagement from customers, which subsequently helps improve the product.
If your product’s core value proposition is automation, then your product doesn’t need to be as robust. You simply don’t need as many features. But you do need to surround the core product / value proposition with enough evidence of value creation. In a “what have you done for me lately” world, you’ll need to constantly remind customers that you’re creating a lot of value.
How do Business Models Change?
Perhaps the biggest change is with business models. Many startups are moving towards usage-based pricing of some kind: # of API calls; # of data points ingested; # of automations set up; etc.
It’s difficult to charge $X/month when usage (and the associated cost) can vary massively. You might have a “small customer” (1-5 users) that’s using your product a ton, whereas a bigger customer might use it a lot less.
Traditional B2B SaaS tools assume:
More users = More usage = More value = More revenue (but not more cost; the incremental cost of adding a user is negligible)
But that’s not necessarily the case with this new breed of AI / automation tools. Costs go up as these products are used, because there’s a cost to run the automation each time. Autonomous agents (running multiple LLM calls) can get expensive fast.
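Some toy arithmetic makes the point. With a flat monthly fee and a real per-run cost, a heavy user can erase the margin that seat-based SaaS takes for granted (all numbers are invented):

```python
# Toy numbers to show why per-run costs break the "incremental cost ~ $0"
# assumption of seat-based SaaS. All figures are made up.

def gross_margin(monthly_fee: float, runs: int, cost_per_run: float) -> float:
    cost = runs * cost_per_run
    return (monthly_fee - cost) / monthly_fee

# A $99/mo customer whose agent runs 500 times at ~$0.05/run in LLM calls:
print(f"{gross_margin(99, 500, 0.05):.0%}")   # ~75%
# The same plan with a power user triggering 2,000 runs:
print(f"{gross_margin(99, 2000, 0.05):.0%}")  # ~-1%, the margin is gone
```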
B2B SaaS tools have already been moving to usage-based pricing, but I expect this will accelerate significantly. They may still have a base monthly fee, but if the value is in the automated stuff that the product does, the pricing will need to reflect that. You want your pricing to be a reflection of the value proposition you offer and how that value proposition is delivered to customers.
New Usage Patterns = New Metrics
Products focused on automation (especially behind the scenes, running autonomously) will need to identify new usage patterns that signal good vs. bad customers. By doing so, they will need to identify new metrics to track, which will tell them who is a good user, who is at risk of churning, etc.
AI / automation tools don’t completely eliminate usage—people will still login and do stuff, but less frequently than before, and less so than traditional B2B SaaS products.
Yohei Nakajima, GP at Untapped Capital (and a leader in the AI ecosystem) summed it up nicely (while I was discussing this with him), “What startups measure is an important question and it needs to be different for AI startups than SaaS ones.”
Some key things these startups will need to figure out how to measure:
Good Usage: DAU/WAU/MAU may not be as relevant as previously thought. You can measure the frequency that the AI/automations are triggered, but if this is done autonomously, you can’t assume activity = value.
Trust: How often does a customer accept or reject a recommendation? How often does a customer override what the AI/automation did? Trust isn’t simply about social proof or the customer logos you have; it’ll be a core value proposition of the product, and as a result you’ll need to measure it.
Outcomes: What did a user get as a result of your product doing something for them? Every company has to figure out if the value it creates leads to meaningful outcomes; but it’s even more important in a world of automation because so much of the work is done behind-the-scenes.
Churn Probability: As companies mature they start looking at how to reduce churn. Before you can declare PMF and scale aggressively, you need churn to be at a reasonable level. If your product is designed for customers not to use it, how do you know if they’re happy enough to stick with you (and not churn)?
Cost: Everyone looks at Customer Lifetime Value (CLV) compared with Customer Acquisition Cost (CAC) to see if they’ve hit a good benchmark (say 3:1 CLV/CAC). But what about the ongoing cost to deliver the service and support a customer? When you’re paying something for every action your product takes (e.g. using an LLM), the cost of delivering the product may not be negligible; and the more the product is used, the higher the cost. On the support side, if customers don’t fully trust the product, they may engage more proactively with customer success; suddenly your worst customers are consuming most of your team’s time, which costs more money. I don’t think I’ve ever seen a startup track “Customer Lifetime Cost” (h/t) but it might be necessary; see the sketch after this list.
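A sketch of what a fully loaded version of that check might look like, using the 3:1 benchmark mentioned above; the formula and all numbers are invented for illustration:

```python
# Hypothetical "Customer Lifetime Cost" alongside the usual CLV:CAC check.
# The 3:1 benchmark comes from the post; everything else is made up.

def clv_to_fully_loaded_cost(clv, cac, monthly_serve_cost, lifetime_months):
    """Compare CLV against acquisition cost PLUS the ongoing cost to serve
    (per-action LLM/compute spend + customer success time)."""
    lifetime_cost = cac + monthly_serve_cost * lifetime_months
    return clv / lifetime_cost

# Looks healthy on CLV:CAC alone (3:1)...
print(round(3000 / 1000, 1))                                   # 3.0
# ...but a needy, low-trust customer erodes the ratio to ~1:1:
print(round(clv_to_fully_loaded_cost(3000, 1000, 80, 24), 1))  # 1.0
```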
What’s evident is that not all usage is good usage. That’s always been true (e.g. someone spending 20 minutes in your product perusing help files!) but it’s exacerbated by a new category of product that’s emerging, one focused on automating tasks, operating autonomously and proactively. Suddenly, (more) usage is a bad thing, which radically changes how we think about solving problems, building products, selling and measuring progress.
(Note: Carbonhound, Moselle and Volley are all Highline Beta portfolio companies.)
(Thank you to Alistair Croll, Yana Welinder and Yohei Nakajima for their input on this.)