Sunday, September 28, 2008

The lean startup comes to Stanford

I'm going to be talking about lean startups (and the IMVU case in particular) three times in the next two weeks at Stanford. It's exciting to see the theory and methodology being discussed in an academic context. The entrepreneurship programs of the business, engineering, and undergraduate schools are all tackling the subject this semester, and I'm honored to be part of it. Even better, my friend Mike Maples, one of the pioneers of micro-cap investing in startups, is teaching a unit in Stanford's E145 on "The New Era of Lean Startups."

It's a real challenge to communicate honestly in these classes. I struggle to make the students actually experience how confusing and frustrating startup environments are. When we do the IMVU case, we generally get complete consensus in the class that several of the zany things we did were 100% right. Complete consensus? We didn't even think they were 100% right. And we still argue about whether our success came from those decisions or from some exogenous factor.

It's one of the hard things about learning just from hindsight, and it matters in the board room every bit as much as in the classroom. You can only learn from being wrong, but our brains are excellent rationalizers. When something works, it's too easy to invent a story about how that was your intention all along. If you don't make predictions ahead of time, there's no way to call you on it.

In fact, in the early days, when IMVU would experience unexpected surges of revenue or traffic, it was inevitable that every person in the company was convinced that their project was responsible. Those stories would be retold and repeated, and eventually achieved mythological status as "facts" that guided future action. But making decisions on the basis of myths is dangerous territory.

How did we combat this tendency? I don't pretend that we did it well. But many of the tools of lean startups are designed for just this purpose:
  • Regularly checking in and talking with customers surfaces bogus theories pretty fast
  • Split-tests make it harder to take credit for success that was really caused by some external factor
  • Cross-functional teams tend to examine their assumptions harder and with more skepticism than purely single-function teams
  • Working in small batches makes it less likely that you'll attribute big results to small changes (the fact that small changes sometimes do lead to big results is counter-intuitive)
  • Rapid iteration makes it easy to test and re-test your assumptions, giving you many opportunities to drive out superstition
  • Open source code invites criticism and active questioning
Still, it's hard to make the case that these solutions are needed, because the problems seem so obvious. I hear some variation of this pretty often: "I mean, sure those guys were rationalizing and kidding themselves. But our team would never do that, right? We'll just be more vigilant." Good luck.

Let me end with a challenge: see if you can find and kill just one myth in your development team. My suggestion: take a much-loved feature and split-test it with some new customers to see if it really makes a difference. If you try, share your story here. I'm especially interested in how you shared the idea with your colleagues. What language should we use? What arguments are persuasive? What works and what doesn't?
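For the curious, here's a minimal sketch of what that kind of split-test might look like in code. All the names here are hypothetical, and this is Python purely for illustration: the two ingredients are deterministic bucketing (so a returning customer always sees the same variant) and a simple significance check (so nobody can spin the gap between variants into a myth).

```python
import hashlib
import math

def bucket(user_id: str, experiment: str = "feature_x") -> str:
    """Deterministically assign a user to 'control' or 'treatment'
    by hashing their id, so repeat visits see the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 2 else "control"

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test: did the variants convert at genuinely
    different rates, or is the gap plausibly just noise?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Made-up numbers: control converts 120/2400, treatment 138/2350.
z, p = two_proportion_z(120, 2400, 138, 2350)
# Only if p is small (say, < 0.05) does the feature get credit.
```

The point isn't the statistics; it's that the assignment and the verdict are both mechanical, so the most convincing storyteller in the room doesn't get to decide what worked.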


  1. When will you be speaking? Can I go to one of them?

  2. @hitchens - they're not open to the public as far as I know, but if you have an affiliation with Stanford engineering, undergrad, or GSB, it's possible you could come audit. Drop me an email if you're interested and I'll see what I can do.

  3. wow, dead on. without the tools and desire to test, in a controlled fashion, the influence of site components, you're leaving the story of success in the hands of those with the most convincing narrative. of course, the most convincing narrative usually belongs to the hippo (the highest paid person's opinion):

    and when you remove objectivity from your startup, you are going to have a hard time sustaining successful decision making.

    godspeed with your proselytizing. i've found it's hard to get people to truly buy into this approach. they'll use it when they have the luxury, but when it's crunch time or it's a particularly critical decision, they throw objectivity out the window. sometimes it requires losing an argument over a site feature to start to see the value of measurement.

  4. Jeff, thanks so much for the kind words. Crunch time is when you need objectivity most, which is why it's so important to make the tools that enforce it as automatic and simple as possible. It's really hard.

  5. It's kinda funny that you brought up hippos, Jeff. Eric used to use that exact word at IMVU :)