Monday, October 6, 2008

When NOT to listen to your users; when NOT to rely on split-tests

There are three legs to the lean startup concept: agile product development, low-cost (fast to market) platforms, and rapid-iteration customer development. When I have the opportunity to meet startups, they usually have one of these aspects down, and need help with one or two of the others. The most common need is becoming more customer-centric. They need to incorporate customer feedback into the product development and business planning process. I usually recommend two things: try to get the whole team to start talking to customers ("just go meet a few") and get them to use split-testing in their feature release process ("try it, you'll like it").
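For readers unfamiliar with the mechanics, "split-testing in your feature release process" usually starts with deterministically bucketing each user into a variant, so that the same user always sees the same experience. The function and variant names below are illustrative assumptions, not anything from this post — just a minimal sketch of the common hashing approach:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user into an experiment variant.

    Hashing the (experiment, user) pair means assignment is stable
    across sessions and needs no stored state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]
```

Because assignment is a pure function of the user and experiment name, every service that calls it agrees on which experience a given user should see.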

However, that can't be the end of the story. If all we do is mechanically embrace these tactics, we can wind up with a disaster. Here are two specific ways it can go horribly wrong. Both are related to a common brain defect we engineers and entrepreneurs seem to be especially prone to. I call it "if some is good, more is better" and it can cause us to swing wildly from one extreme of belief to another.

What's needed is a disciplined methodology for understanding the needs of customers and how they combine to form a viable business model. In this post, I'll discuss two particular examples, but for a full treatment, I recommend Steve Blank's The Four Steps to the Epiphany.

Let's start with the "do whatever customers say, no matter what" problem. I'll borrow this example from randomwalker's journal - Lessons from the failure of Livejournal: when NOT to listen to your users.
The opportunity was just mind-bogglingly huge. But none of that happened. The site hung on to its design philosophy of being an island cut off from the rest of the Web, and paid the price. ... The site is now a sad footnote in the history of Social Networking Services. How did they do it? By listening to their users.
randomwalker identifies four specific ways in which LJ's listening caused them problems, and they are all variations on a theme: listening to the wrong users. The early adopters of LiveJournal didn't want to see the site become mainstream, and the team didn't find a way to stand up for their business or vision.

I remember having this problem when I first got the "listening to customers" religion. I felt we should just talk to as many customers as possible, and do whatever they say. But that is a bad idea. It confuses the tactic, which is listening, with the strategy, which is learning. Talking to customers is important because it helps us deal in facts about the world as it is today. If we're going to build a product, we need to have a sense of who will use it. If we're going to change a feature, we need to know how our existing customers will react. If we're working on positioning for our product, we need to know what is in the mind of our prospects today.

If your team is struggling with customer feedback, you may find this mantra helpful. Seek out a synthesis that incorporates both the feedback you are hearing plus your own vision. Any path that leaves out one aspect or the other is probably wrong. Have faith that this synthesis is greater than the sum of its parts. If you can't find a synthesis position that works for your customers and for your business, it either means you're not trying hard enough or your business is in trouble. Figure out which one it is, have a heart-to-heart with your team, and make some serious changes.

Especially for us introverted engineering types, there is one major drawback to talking to customers: it's messy. Customers are living, breathing, complex people, with their own drama and issues. When they talk to you, it can be overwhelming to sort through all that irrelevant data to capture the nuggets of wisdom that are key to learning. In a perfect world, we'd all have the courage and stamina to persevere, and implement a complete Ideas-Code-Data rapid learning loop. But in reality, we sometimes fall back on inadequate shortcuts. One of those is an over-emphasis on split-testing.

Split-testing provides objective facts about our product and customers, and this has strong appeal to the science-oriented among us. But the thing to remember about split-testing is that it is always retrospective - it can only give you facts about the past. Split-testing is completely useless in telling you what to do next. Now, to make good decisions, it's helpful to have historical data about what has and hasn't worked in the past. If you take it too far, though, you can lose the creative spark that is also key to learning.

For example, I have often fallen into the trap of wanting to optimize the heck out of one single variable in our business. One time, I became completely enamored with Influence: The Psychology of Persuasion (which is a great book, but that's for another post). I managed to convince myself that the solution to all of our company's problems was contained in that book, and that if we just faithfully executed a marketing campaign around the principles therein, we'd solve everything. I convinced a team to give this a try, and they ran dozens of split-test experiments, each around a different principle or combination of principles. We tried and tried to boost our conversion numbers, each time analyzing what worked and what didn't, and iterating. We were excited by each new discovery, and each iteration we managed to move the conversion needle a little bit more. Here was the problem: the total impact we were having was minuscule. It turns out that we were not really addressing the core problem (which had nothing to do with persuasion). So although we felt we were making progress, and even though we were moving numbers on a spreadsheet, it was all for nothing. Only when someone hit me over the head and said "this isn't working, let's try a radically new direction" did I realize what had happened. We'd forgotten to use all the tools in our toolbox, and lost sight of our overarching goal.

It's important to be open to hearing new ideas, especially when the ideas you're working on are split-testing poorly. That's not to say you should give up right away, but always take a moment to step back and ask yourself if your current path is making progress. It might be time to reshuffle the deck and try again.

Just don't forget to subject the radical new idea to split-testing too. It might be even worse than what you're doing right now.
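To make "subject the new idea to split-testing" concrete: the standard way to decide whether a variant genuinely outperforms the control is a two-proportion z-test on conversion counts. The function name and the 1.96 threshold (roughly 95% confidence) are my illustrative assumptions, not anything prescribed by this post:

```python
import math

def split_test_result(control_conv, control_n, variant_conv, variant_n,
                      z_crit=1.96):
    """Compare conversion rates of control vs. variant.

    Returns (lift, significant): the absolute difference in conversion
    rate, and whether it clears a two-proportion z-test at z_crit.
    """
    p1 = control_conv / control_n
    p2 = variant_conv / variant_n
    # Pooled rate under the null hypothesis that both arms convert equally
    pooled = (control_conv + variant_conv) / (control_n + variant_n)
    se = math.sqrt(pooled * (1 - pooled) * (1 / control_n + 1 / variant_n))
    z = (p2 - p1) / se
    return p2 - p1, abs(z) > z_crit
```

For example, 50 conversions out of 1,000 in control versus 80 out of 1,000 in the variant is a 3-point lift that clears the threshold, while a 50-versus-52 result does not; the latter is exactly the kind of noise that can masquerade as "moving the needle."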

So, both split-testing and customer feedback have their drawbacks. What can you do about it? There are a few ideas I have found generally helpful:
  • Identify where the "learning block" is. For example, think of the phases of the synthesis framework: collecting feedback, processing and understanding it, choosing a new course of action. If you're not getting the results you want, it's probably because one of those phases is blocked. For example, I've had the opportunity to work with a brilliant product person who had an incredible talent for rationalization. Once he got the "customer feedback" religion, I noticed this pattern: "Guys! I've just conducted three customer focus groups, and, incredibly, the customers really want us to build the feature I've been telling you about for a month." No matter what the input, he'd come around to the same conclusion as before.

    Or maybe you have someone on your team who's just not processing: "Customers say they want X, so that's what we're building." Each new customer that walks in the door wants a different X, so we keep changing direction.

    Or consider my favorite of all: the "we have no choice but to stay the course" pessimist. For this person, there's always some reason why what we're learning about customers can't help. We're doomed! For example, we simply cannot make the changes we need because we've already promised something to partners. Or the press. Or to some passionate customers. Or to our team. Whoever it is, we just can't go back on our promise, it'd be too painful. So we have to roll the dice with what we're working on now, even if we all agree it's not our best shot at success.

    Wherever the blockage is happening, by identifying it you can work on fixing it.

  • Focus on "minimum feature set" whenever processing feedback. It's all too easy to put together a spec that contains every feature that every customer has ever asked for. That's not a challenge. The hard part is to figure out the fewest possible features that could possibly accomplish your company's goals. If you ever have the opportunity to remove a feature without impacting the customer experience or business metrics - do it. If you need help determining what features are truly essential, pay special attention to the Customer Validation phase of Customer Development.

  • Consider whether the company is experiencing a phase-change that might make what's made you successful in the past obsolete. The most famous of these phase-change theories is Crossing the Chasm, which gives very clear guidance about what to do in a situation where you can't seem to make any more progress with the early-adopter customers you have. That's a good time to change course. One possibility: try segmenting your customers into a few archetypes, and see if any of them sounds more promising than the others. Even if one archetype currently dominates your customer base, would it be more promising to pursue a different one?

As much as we try to incorporate scientific product development into our work, the fact remains that business is not a science. I think Drucker said it best. It's pretty easy to deliver results in the short term or the long term. It's pretty easy to optimize our business to serve one of employees, customers, or shareholders. But it's incredibly hard to balance the needs of all three stakeholders over both the short- and long-term time horizon. That's what business is designed to do. By learning to find a synthesis between our customers and our vision, we can make a meaningful contribution to that goal.


  1. I don't see any contradiction between a scientific mindset and your conclusions.

    In fact, it more-or-less mirrors how scientific breakthroughs happen. People spend years looking at the trees but every so often someone comes along and points out the forest.

    And qualitative feedback certainly has its place in this system, too, with the caveat that users are often wrong, confused, and will mislead you. Nothing the users tell you should contradict your empirical findings, after all, unless you're doing something wrong.

    Any time I'm focusing on small optimizations I think to myself, "Is there any way I can move the dial 500% rather than 50%?"

    That's not pushing aside science -- that's doing it better.

  2. @jesse - I completely agree. Those who claim that science is non-creative, static, or just numbers don't really understand science at all.

    Have you ever had the problem where people with those misconceptions resist subjecting their product insights to quantitative analysis? I seem to come up against it quite often.

  3. Eric,

    Yeah, it has to be part of the culture or it leads to tedious arguments about process. Engineers just want to build, designers just want to create, product managers just want to spec, etc.

    It's hard to get buy-in. It means some amount of additional work for the people involved (engineers, designers, PMs), with the additional risk that their opinion will be invalidated.

    The end result is a better business and product, but each individual feels like they're being more actively scrutinized.

    You have to be in a culture that takes away the stigma from this scrutiny and embraces scientific experimentation as part of the day-to-day job, or it causes too many bruised egos.

    Anyhow, it's clear you've been thinking about the general issue for some time. There are lots of people out there talking about metrics and measurement, but not a lot of people talking about this from a scientific perspective.

    I'm trying to distill this perspective, which lots of people I know share, into something you can hang your hat on. Any thoughts?

  4. Eric,

    Fellow PBwiki shareholder Chris Yeh here. I thought that this was a great post, and I've quoted it extensively to folks via email. Here is the quick summary I've sent...let me know if you think I'm oversimplifying:

    1) When trying to be customer-centric, emphasize the strategy (learning) rather than the tactic (listening to customer feedback).

    2) Split-testing has a strong appeal to our desire for data but can't substitute for judgment.

    3) Try to specify the fewest number of features required to meet the company's goals.

    4) When crossing the chasm, you may need to focus away from the archetypes that currently dominate your customer base.

  5. @Chris - thanks for the kind words (and for apparently boosting my subscriber count!). I think you nailed the summary. In fact, when you lay it out so clearly, it almost sounds easy.

    Why do we find this so easy to say and yet so hard to do? And what can we do to mitigate these difficulties, even as we scale?

  6. For a quantitative "methodology" on identifying opportunities to deliver customer value, have a look at "What Customers Want" by Anthony Ulwick. I have found it an excellent place to begin when focusing in on the minimum priority features that customers are after. The process is empirical which suited my system-oriented background.

    As with any "methodology" don't simply follow the process blindly but take the principles and develop your own practices.