Monday, October 26, 2009

A real Customer Advisory Board

A reader recently asked on a previous post about the technique of having customers periodically produce a “state of the company” progress report. I consider this an advanced technique, and it is emphatically not for everyone.

Many companies seek to involve customers directly in the creation of their products. This is a lot harder than it sounds. Hearing occasional input is one thing, but building an institutional commitment to acting on this feedback is hard. For one, there are all the usual objections to customer feedback: it is skewed in favor of the loud people, customers don’t know what they want, and it is fundamentally our job to figure out what to build. All of those objections are valid, but that can’t be the end of the story. Just because we don’t blindly obey what our customers say doesn’t absolve us of the responsibility of hearing them out.

The key to successful integration of customer feedback is to make each kind of feedback collection part of the regular company discipline of building and releasing products. In previous posts, I’ve mentioned quite a few of these techniques, including these most important ones:
  • having engineers post on the forums in their own name when they make a change
  • routinely split-testing new changes
  • routinely conducting in-person usability tests and interviews
  • Net Promoter Score
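To make the last item concrete: Net Promoter Score reduces a one-question survey (“how likely are you to recommend us to a friend?”) to a single number, the percentage of promoters (those answering 9–10) minus the percentage of detractors (0–6). A minimal sketch, with made-up ratings:

    # Net Promoter Score from 0-10 survey responses.
    # Promoters answer 9-10, detractors 0-6; passives (7-8) are ignored.
    def net_promoter_score(ratings):
        if not ratings:
            raise ValueError("need at least one rating")
        promoters = sum(1 for r in ratings if r >= 9)
        detractors = sum(1 for r in ratings if r <= 6)
        return 100.0 * (promoters - detractors) / len(ratings)

    # Example: 5 promoters, 3 detractors, 2 passives out of 10 -> score of 20.
    print(net_promoter_score([10, 9, 9, 8, 7, 6, 3, 10, 9, 2]))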
Each of these techniques is fundamentally bottom-up. They assume that each person on the team is genuinely interested in testing their work and ideas against the reality of what customers want. Anyone who has worked in a real-world product development team can tell you how utopian that sounds. In real life, teams are under tremendous time pressure, they are trying to balance the needs of many stakeholders, and they are human. They make mistakes. And when they do, they are prone to all the normal human failings when it comes to bad news: the desire to cover it up, rationalize the failure away, or redefine success.

To counteract those tendencies, it helps to supplement with top-down process as well. One example is having a real Customer Advisory Board. Here’s what it looks like. In a previous company, we put together a group of passionate early adopters. They had their own private forum, and a company founder (aka me) personally ran the group in its early days. Every two months, the company would have a big end-of-milestone meeting, with our Board of Directors, Business Advisory Board, and all employees present. At this meeting, we’d present a big package of our progress over the course of the cycle. And at each meeting, we’d also include an unedited, uncensored report direct from the Customer Advisory Board.

I wish I could say that these reports were always positive. In fact, we often got a failing grade. And, as you can see in my previous post on “The cardinal sin of community management,” the feedback could be all over the map. But we had some super-active customers who would act as editors, collecting feedback from all over the community and synthesizing it into a report of the top issues. It was a labor of love, and it meant we always had a real voice of the customer right there in the meeting with us. It was absolutely worth it.

Passionate online communities are real societies. What we call “community management” is actually governance. It is our obligation to govern well, but – as history has repeatedly shown – this is incredibly hard. The decisions that a company makes with regard to its community are absolute. We aspire to be benevolent dictators. And unlike in many real-world societies, our decisions are not rendered as law but as code. (For more on this idea, see Lawrence Lessig’s excellent Code is Law.) The people who create that code are notoriously bad communicators, even when they are allowed to communicate directly to their customers.

A customer advisory board that has the ear of the company’s directors acts as a kind of appeals process for company decisions. As I mentioned in “The cardinal sin of community management,” many early adopters will accept difficult decisions as long as they feel listened to. As a policy matter, this is easy to say and very hard to implement. That’s why the CAB is so valuable. They provide a forum for dissenting voices to be heard. The members of the CAB have a stake in providing constructive feedback, since they will tend to be ignored if they pass on vitriol. In turn, they become company-sanctioned listeners. By leveraging them, the company is able to make many more customers feel heard.

The CAB report acts as a BS detector for top management. It’s a lot harder to claim everything is going smoothly, and that customers are dying for Random New Feature X when the report clearly articulates another point of view. Sometimes the right thing to do is to ignore the report. After all, listening to customers is not intrinsically good. As always, the key is to synthesize the customer feedback with the company’s unique vision. But that’s often used as an excuse to ignore customers outright. I know I was guilty of this many times. It’s all-too-easy to convince yourself that customers will want whatever your latest brainstorm is. And it’s so much more pleasant to just go build it, foist it on the community, and cross your fingers. It sure beats confronting reality, right?

Let me give one small example. Early in IMVU’s life, IM was a core part of the experience. Yet we were very worried about having to re-implement every last feature that modern IM clients had developed: away messages, file transfer, voice and video, etc. As a result, we tried many different stratagems to avoid giving the impression that we were a fully-featured IM system, going so far as to build our initial product as an add-on to existing IM programs. (You can read how well that went in another post here.)

This strategy was simply not working. Customers kept demanding that we add this or that IM feature, and we were routinely refusing. Eventually, the CAB decided to weigh in on the matter in their board-level report. I remember it so clearly, because their requests were actually very simple. They asked us to implement five – and only five – key IM features. For weeks we debated whether to do what they asked. We were afraid that this was just the tip of the iceberg, and that once we “gave in” to these five demands there would be five more, ad infinitum. It actually took courage to do what they wanted – as it does for all visionaries. Every time you listen to customers, you fear diluting your vision. That’s natural. But you have to push through the fear, at least on occasion, to make sure you’re not crazy.

In this particular example, it turned out they were right. Just those few IM features made the product dramatically better. And, most importantly, that was the end of IM feature creep. Nobody even mentioned it as an issue in subsequent board meetings. That felt good – but it also gave our Board tremendous confidence that we could change the kind of feedback we were getting by improving the product.

This technique is not for everybody. It gets much harder as the company – and the community – scales, and, in fact, IMVU uses a different system of gathering community feedback today. But, if your community is giving you a headache, give this a try. Either way, I hope you’ll share your experiences, too.

Friday, October 23, 2009

Case Study: Using an LOI to get customer feedback on a minimum viable product

How much work should you do on a new product before involving customers? If you subscribe to the theory of the minimum viable product, the answer is: only enough to get meaningful feedback from early adopters. Sometimes the best way to do this is to put up a public beta and drive a limited amount of traffic to it. But other times, the right way to learn is actually to show a product prototype to customers one-on-one. This is especially useful in situations, like most B2B businesses, where the total number of customers is likely to be small.

This case study illustrates one company’s attempt to do customer development by testing their vision with customers before writing a single line of code. In the process, they learned a lot by asking initial prospects to sign a non-binding letter of intent to buy the software. As you’ll see, this quickly separated the serious early adopters from everyone else. Mainstream customers don’t have enough motivation to buy an early product, and so building in response to their feedback is futile.

Along the way, this case study raises interesting ethical issues. The lean startup methodology is based on enlisting customers as allies, which requires honesty and integrity. If you deceive customers by showing them screenshots of a product that is “in-development” but for which you have written no code, are you lying to them? And, if so, will that deception come back to haunt you later? Read on and judge for yourself.

The following was written by an actual lean startup practitioner. It was originally posted anonymously to the Lean Startup Circle mailing list, and then further developed on the Lean Startup Wiki’s Case Studies section. If you’re interested in writing a future case study, or commenting/contributing to one, please join the mailing list or head on over to the wiki. What follows is a brief introduction by me, the case study itself, and then some Q&A led by LSC creator Rich Collins. Disclaimer: claims and opinions expressed by the authors of case studies are theirs alone; I can’t take credit or responsibility. – Eric Ries

In April of 2009 my partner and I had an idea for a web app, a B2B platform that we are selling as SaaS [software-as-a-service]. We decided from the get-go that, while we clearly saw the benefits and necessity of our concept, we would remain fiercely skeptical of our own ideas and implement the customer development process to vet the idea, market, customers, etc., before writing a single line of code.

My partner was especially adamant about this as he had spent the last 6 months in a cave writing a monster, feature-rich web app for the financial sector that a potential client had promised to buy, but backed out at the last second.  They then tried to shop the app around, and found no takers.  Thousands of lines of code, all for naught -- as is usually the case without a customer development process. (See Throwing away working code  for more on this unfortunate phenomenon. -Eric)

We made a few pencil drawings of what the app would look like, which we then gave to a graphic designer. With that, the graphic designer created a Photoshop image. We had him create what we called our "screenshots" (which suggests that an app actually existed at the time) and had him wrap them in one of these freely available PS Browser Templates. Now, armed with 4 "screenshots" and a story, we approached our target market, some through warm introductions and some, very literally, through simple cold-calling.

Once we secured a meeting, we told our potential customers that we were actively developing our web app (implying that code was being written) and wanted to get potential user input into the development process early on. Looking at paper print-outs of our "screenshots", no one could tell that this was simply a printout of a PSD, and not a live app sitting on a server somewhere. We walked them through what we thought would be the major application of our product. Most people were quite receptive and encouraging. What proved to be very interesting was that we quickly observed a bimodal distribution with regard to understanding the problem and our proposed solution:

  • people either became very excited and started telling us what we should do, what features it needed and how to run with this, or
  • they didn't think there was a real problem here, much less a needed solution.
We ruminated on this for a while. The vehemence of those that didn't get it surprised us.  Perhaps we had a super-duper-hyper-ultra-cool idea  --- but not enough customers existed to make it worth the effort. We visited each potential customer a minimum of twice, if not three times.  Each time we would come back with a few more "screenshots" and tell them that development was progressing nicely and ask them for more input. We also solicited information as to how they were currently solving the problem and how much they paid for their solution.

On the third visit, we pressed those who saw merit in the idea to sign a legally non-binding Letter of Intent.  Namely, that they agree to use it free of charge if we deliver it to them and it is capable of X, Y and Z.  And not only do they agree to use it, but that they intend to purchase it by Y date at X price if it meets their needs.

By the way, this LOI was not written in legalese.  Three quarters of it was simple everyday English.  In fact, we customer dev-ed the LOI itself.  The first time, we asked a client to sign it before we had even written it.  When they agreed to sign it, we quickly whipped it up while sitting in a coffee shop and emailed it off to them.  This would help us separate the wheat from the chaff when it came to determining interest and commercial viability.  Once we had two LOIs signed and in-hand, we actually began to write code.

We also implicitly used the LOIs for price structure and price discovery - which we are still working on.  We backed into prices from all sorts of angles, estimating the time-cost of equivalent functionality, competitive offerings, other tools we were potentially displacing -- but in the end, we lobbed a few numbers at them and waited to see if they flinched.

Customer A got X price, Customer B got X + Y price, and so on.  So far, our customers have never mentioned price as an objection, which suggests to me that at this point we are very much underpriced. The LOI was also useful as we leveraged it by approaching the competitor of one of those who signed by simply letting them know that their competitor will be using our app.  They returned our cold intro email within 8 mins.

We have two customers that have balked at signing LOIs, but want to use our product.  This has been somewhat of a quandary for us.  When we decided to go the LOI route, we thought that we would not bend and that we would only service those customers who would sign the LOI.  In the end, we decided that these two customers were large enough to help us with exposure, provide good usage data and worth the risk of them wasting our time.  Time will tell if this theory proves correct.

Right now, the app itself is pretty ugly, a bit buggy and slow -- and doesn't even do a lot.  It is borderline embarrassing.  Don't get me wrong, it does the few necessary things.  BUT it definitely does NOT have the super-duper-hyper-ultra-cool Web 2.0 spit and polish about it. Interestingly enough, our ratio of positive comments to negative comments from actual users is about 10 to 1.  One of our first customers had a disastrous launch with it, yet has signed on to try it again (granted, they did get it for free and we did offer it for free for this next time). But they didn't hesitate to try it again.  I thought we would have to plead, beg and beseech.  But for them, it was a no-brainer.  So, we have to be doing something right.

Our feature set is very limited and being developed almost strictly from user input.  While I personally have all sorts of super-duper-hyper-ultra-cool Web 2.0 ideas --- we are holding ourselves back, and forcing ourselves to wait for multiple, explicit and overlapping user requests.  We have seen competitors whose feature sets are very rich, to say the least, but we think that in some cases they are as over-engineered as they are feature-rich.

Only time and the market will tell if they are innovative and we are slow, lazy pigs or they have gotten ahead of themselves/the market and our minimalist solution will be better received.

Rich Collins, founder of the Lean Startup Circle, responded to the poster with some Q&A.
LSC: What is your response to some of the people on Hacker News that questioned the ethics of taking this approach?

Some of the commenters have some good points.  It definitely explores ethical boundaries.  However, I don't think we indulged in any zero-sum game type deception.  By that, I mean our intentional fuzziness about the state of development did not cause harm in any manner to our prospective clients.  In fact, just us showing up at their offices and talking about our screenshots benefited our prospective clients tremendously as:

  1. Those clients who had never even entertained the functionality we were proposing gained significant knowledge.
  2. With that knowledge, they could (and did) Google our competition and start exploring the space and current offerings. 
We did, in fact, tell one of our prospects in the beginning that our screenshots were simply mock-ups.  However, that makes the prospect feel as if you are wasting their time and they then are unlikely to provide input.

"Oh, this is just a Photoshop file?  Well, come back to us when you are further along." which defeats the whole purpose of getting face time for Customer Development!

When you tell them the app is in development (and it was: even before coding, we were spending a lot of time on what we wanted and didn't want, how it would look, use cases, etc.), the prospects are interested in providing input and shaping the product.  They need to feel and see some momentum.

LSC: Your use of a non-binding letter of intent was another interesting tactic.  Did the customers that signed it end up paying for your product?

Yes and no.  We had a dispute with one signee and couldn't convert them.  However, we successfully converted others.  I should also mention that there was one client who refused to sign an LOI, but we are in the process of converting them.

The LOI was designed to give us hard, non-bullshit-able feedback instantly.  Too often people will affirm your idea so that you (or they) can save face, which BTW is a form of well-intentioned and socially acceptable deception.  This is why, IMHO, friends, wives, and significant others are probably not good people to talk to about your idea.  At the end of the day, no one knows if the idea is any good.  The market will tell you.

LSC: Would you respond to a few selected Hacker News comments?
"If I were one of your prospects, I would never sign a letter of intent based on drawings only. I'd make you come back later with something, anything I could play with ... Come back when you have something real to show. Until then you're no different from any other poser."

I myself probably would never sign an LOI on screenshots only.  However, our customers did a lot of stuff that I would never do.  Lesson learned:  I am not my customer.  We think differently.  We solve our problems differently.  We have different needs and wants.  Repeat after me:  You are not your customer.

LSC: And one more: "Except the LOIs in this case are utterly meaningless. I've been on the customer side of LOIs that were signed on request, knowing that it obligated us to nothing."

Wrong.  We got instantaneous feedback on the validity of the idea and started our sales process concurrently.  While legally non-binding, customers who have signed an LOI are a lot less likely to disappear or make themselves hard to get a hold of.  LOIs, while clearly not as good as a signed sales contract, do have meaning and are valuable.  I encourage B2B startups to keep them in their customer development arsenal.

Special thanks to Rich Collins, the Lean Startup Circle practitioners, and to everyone who has contributed to the Case Studies on the wiki. And thanks to these entrepreneurs for sharing their story. Have a case study you’d like to share? Head on over to the Lean Startup Wiki.

Monday, October 19, 2009

Myth: Entrepreneurship Will Make You Rich

I have a new guest post on GigaOm today, called Myth: Entrepreneurship Will Make You Rich. Here's an excerpt:
One of the unfortunate side effects of all the publicity and hype surrounding startups is the idea that entrepreneurship is a guaranteed path to fame and riches. It isn’t. Building a startup is incredibly hard, stressful, chaotic and – more often than not – results in failure. That doesn’t mean it’s not a worthwhile thing to do, just that it’s not a good way to make money.

A more rational career path for money-making is one that rewards effort, in the form of promotions, increased security, salary and status. Startups, unfortunately, punish effort that doesn’t yield results. In fact, the biggest source of waste in a startup is building something nobody wants. While in an academic R&D lab, creation for creation’s sake will often get you praise, in a startup, it will often put you out of business.

So why become an entrepreneur instead of developing technology in an R&D lab? Three reasons: change the world, make customers’ lives better and create an organization of lasting value. If you only want to do one of these things, there are better options. But only startups combine all three.

Take this fictional example of a Seedcamp attendee (actually a composite), which I will refer to as Hairbrush 2.0...

Read the rest of Myth: Entrepreneurship Will Make You Rich

Also take a look at the great Hacker News discussion of this essay. It includes several gems, including this comment from davidu:
1) Being an entrepreneur, for me, isn't about being wealthy, it's about being successful.
2) Rich is a variable term, and intended to be so.
Entrepreneurship may not make you wealthy, but it can certainly make you rich.
I enjoy the freedom and independence afforded by starting EveryDNS and OpenDNS. Both contain a passion for a system I love, the DNS, and both have let me help millions of consumers around the world. I even like knowing I control the DNS for millions and millions of Internet users. That's an awesome responsibility and it certainly makes me feel rich about everything I do.
And when it comes to money, Eric is only somewhat right. He says you should get a job that rewards and promotes effort. But lots of lawyers and finance kids in New York thought they had stable jobs that would make them rich. Ask them today and most will tell you a different story altogether. Now they hate their jobs and have no job security or path to becoming really wealthy.
So like I said, being entrepreneur, for me, isn't about being wealthy, it's about being successful. That's a measuring stick that's far more important.
and this one from gits_tokyo:
People that I've spoken with in the past more often than not associated the idea of me doing a startup in the tech industry with gaining massive wealth. While I may entertain this, deep down I find it lacking as there's so much more than wealth to be had.
How about, living in a world... some distant future from the everyday-everyday where day-by-day you toil piecing together a vision, one day injecting it into the present, in order to influence a whole new set of social behaviors while also unfolding valuable opportunities. How about, the day of flipping that proverbial switch, releasing this vision out in the wild. How about, the potential of millions interacting with your vision, it becoming a staple part of a users online experiences. There's something undeniably provoking about all this, rush of my life.
Wealth, although a welcomed aside pales in comparison. Hell I would even go so far as to say, in a world where sex is constantly peddled as a cure all, let me say it, sex pales in comparison to the feeling I get from being an entrepreneur.

Inc Magazine on Minimum Viable Product (and a response)

Inc Magazine has a great new piece up about the increasing use of the Minimum Viable Product by businesses (and not just startups). Here's an excerpt; some of my comments are below:

One of the most gut-wrenching moments for a company is the rollout of a new product. A significant swing and miss can break a company's momentum -- and maybe its bank account. Unfortunately, after months or even years of development, many companies discover that customers aren't willing to buy their new wares. That's why some entrepreneurs are trying another approach to product launches: marketing a product online before spending much on research and development or inventory.

Consider the method used by TPGTEX Label Solutions, a Houston-based software company that specializes in bar codes and labels for manufacturers and chemical companies. Like many companies, TPGTEX rolls out new products several times a year. But instead of spending the time and money to develop products on spec, TPGTEX creates mocked-up webpages that list the features of a potential new product -- such as a system for making radio-frequency identification, or RFID, labels -- along with its price. Then, the company spends no more than a few hundred dollars marketing the product through search engines and to the contacts in its sales database and LinkedIn. It isn't until a customer actually clicks or calls to place an order that TPGTEX's developers will build the software. "We do not develop a product until we get a paying customer," says Orit Pennington, who co-founded the six-employee company with her husband in 2002. Development time is typically no more than two to three weeks, and it generally takes just a few orders to cover development costs.

TPGTEX's approach is an example of a trend in business that has been dubbed minimum viable product or microtesting. The idea is to develop something with the minimum amount of features or information needed to gauge the marketability of a product online. That might mean mocking up a website with potential features and seeing how many visitors click on the item. It might also involve buying pay-per-click ads to see how easy it is to gain potential customers. Or it might mean selling a few products on a site like eBay to see how well they perform before ordering in bulk from a wholesaler.
What sets this approach apart from practices like using focus groups is that companies base product development decisions not just on what customers say they want but on how they vote with their wallets.

Read the rest...

This article is part of a trend that has taken me a bit by surprise: the adoption of lean startup techniques outside the traditional domain of high-tech startups. The theory predicts this, of course, because the definition of a startup as “a human institution creating a new product or service under conditions of extreme uncertainty” says nothing about sector, size of company, or industry. Still, it’s always a relief to see practice and theory converge.

Of course, as more people attempt to use the Minimum Viable Product as a tactic, there are a lot of misconceptions possible. The biggest is the confusion over why this tactic is useful. The Inc story, and many others, does a good job emphasizing its lean-ness. By allowing customers to “pull” value from the company in small batches, you reduce the risk of building a product that nobody wants. Like all lean transformations, this is powerful – it increases the value of every dollar invested in new product creation.
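The microtesting mechanics described in the Inc excerpt are simple enough to sketch. Here is a minimal version (assuming Flask; the page, price, and URLs are invented for illustration): the product page is real, the product is not, and an attempted order is the signal that it is worth building.

    # Microtesting sketch: a mocked-up product page that only measures demand.
    from flask import Flask

    app = Flask(__name__)
    interest = {"views": 0, "orders": 0}

    @app.route("/rfid-labels")
    def product_page():
        interest["views"] += 1  # every visit is a data point
        return """<h1>RFID Label System - $499</h1>
                  <form action="/order" method="post">
                    <button>Order now</button>
                  </form>"""

    @app.route("/order", methods=["POST"])
    def order():
        interest["orders"] += 1  # an attempted order is the real vote
        # No product exists yet: thank the customer, capture contact details,
        # and build only if the order rate justifies the development cost.
        return "Thanks! We'll be in touch shortly about your order."

    if __name__ == "__main__":
        app.run()

The point is not the code but the order of operations: measurement first, development second.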

But MVP is most powerful when it is used as part of an overall strategy of learning and discovery. And this is the most confusing aspect, because under this strategy, MVP does not pay off if all we are attempting to build is a minimal product. For that, release early, release often will suffice. But if our aspiration is to change the world, we need something more.

The key ideas are customer development, the pivot, MVP, and root cause analysis. Each is described in separate essays on this blog, but let me say a few words about how they work together – especially for companies with big ambitions. Big visions take a long time to develop, and require an exceptionally high degree of product/market fit. That’s just a fancy way of saying: customers have to really, really like your product. To be specific, it means that their behavior powers one of the three fundamental drivers of growth with a large coefficient. But if big products and big visions take a long time to develop, it’s exceptionally risky to build them based on vision alone. That’s because for a big product to take off, it needs to be right in many key respects. Miss just one, and you can find yourself just a few degrees off – and moving with too much momentum to change course. Think Friendster, the “achieving a failure” startup I’ve written about, Apple’s Newton, Webvan, etc. In each of these, the failure of the initial idea led to the failure of the company (or division).
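To see why a “large coefficient” matters so much, consider a toy model (all numbers assumed) of one of those drivers, the viral loop, in which each cohort of new customers recruits the next:

    # Toy viral-loop model: with coefficient k, a cohort of n customers
    # recruits k*n new ones in the next period.
    def viral_growth(initial_customers, k, periods):
        cohort = total = initial_customers
        for _ in range(periods):
            cohort *= k          # each period's signups come from the last cohort
            total += cohort
        return round(total)

    print(viral_growth(1000, 1.1, 12))  # k > 1: growth compounds (~24,500)
    print(viral_growth(1000, 0.9, 12))  # k < 1: the loop fizzles out (~7,500)

A few points of coefficient in either direction is the difference between compounding and stalling, which is why a big product that is just a few degrees off can be fatal.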

Building an MVP can help mitigate that risk. But it’s not enough. What if customers hate the MVP? Does that mean your product vision is fundamentally flawed, or just that your initial product sucks? There is no way to know for sure. That’s why entrepreneurship in a lean startup is really a series of MVPs, each designed to answer a specific question (hypothesis). Being systematic about these hypotheses is what customer development is all about. Each failed hypothesis leads to a new pivot, where we change just one element of the business plan (customer segment, feature set, positioning) – but don’t abandon everything we’ve learned. In order to work, these pivots have to be heading in a coherent direction, which is why vision is still such a critical part of entrepreneurship, even in a data-based decision making environment. (See “It’s a startup, not a spreadsheet” for more.)
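In code-shaped terms, a pivot is a disciplined, single-element change rather than a restart. A minimal sketch (the names and hypotheses here are hypothetical, not Ries's formal notation):

    # Each MVP tests one hypothesis; a failed test pivots exactly one
    # element of the plan, preserving everything else that was learned.
    plan = {
        "segment": "small retailers",
        "feature_set": "inventory alerts",
        "positioning": "save hours on reordering",
    }

    def mvp_confirms(hypothesis):
        """Stub: ship the smallest product that can test `hypothesis`,
        then compare the measured result to a target set in advance."""
        print("testing:", hypothesis)
        return False  # a real test returns the measured outcome

    if not mvp_confirms("small retailers will pay $50/mo for alerts"):
        plan["segment"] = "mid-market retailers"  # pivot: one element only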

And yet, even that is not enough. The more visionary the entrepreneur, the more difficult it is to really pivot, really seek out what’s in customers’ heads, and really create a minimum viable product. And so startups – great and terrible alike – are prone to give these ideas lip service, but fail to really take maximum advantage. That’s why a process of rigorous root cause analysis is so critical. After every major milestone, the company has to ask: what did we learn? Why didn’t we learn more? And, most importantly, make incremental investments to do better next time. This is the ultimate startup discipline, the hardest to master, and the one that pays the biggest dividends. If you can embrace continuous improvement from day one, you can actually speed up as you scale. It’s an awesome thing to watch.

Sunday, October 11, 2009

Innovation inside the box

I was recently privy to a product prioritization meeting in a relatively large company. It was fascinating. The team spent an hour trying to decide on a new pricing strategy for their main product line. One of the divisions, responsible for the company’s large accounts, was requesting data about a recent experiment that had been conducted by another division. They were upset because this other team had changed the prices for small accounts to make the product more affordable. The larger-account division wanted to move the pricing in just the other direction – making the low-end products more expensive, so their large customers would have an increased incentive to upgrade.

Almost the entire meeting was taken up with interpreting data. The problem was that nobody could quite agree what the data meant. Many custom reports had been created for this meeting, and the data warehouse team was in the meeting, too. The more they were asked to explain the details of each row on the spreadsheet, the more evident it became that nobody understood how those numbers had been derived.

Worse, nobody was quite sure exactly which customers had been exposed to the experiment. Different teams had been responsible for implementing different parts of it, and so different parts of the product had been updated at different times. The whole process had taken months. And by now, the people who had originally conceived the experiment were in a separate division from the people who had executed it.

Listening in, I assumed this would be the end of the meeting. With no agreed-upon facts to help make the decision, I assumed nobody would have any basis for making the case for any particular action. Boy was I wrong. The meeting was just getting started. Each team simply took whatever interpretation of the data supported their position best, and started advocating. Other teams would chime in with alternate interpretations that supported their positions, and so on. In the end, decisions were made – but not based on any actual data. Instead, the executive running the meeting was forced to make decisions based on the best arguments.

The funny thing to me was how much of the meeting had been spent debating the data, when in the end, the arguments that carried the day could have been made right at the start of the meeting. It was as if each advocate sensed that they were about to be ambushed; if another team had managed to bring clarity to the situation, that might have benefited them – so the rational response was to obfuscate as much as possible. What a waste.

Ironically, meetings like this had given data and experimentation a bad name inside this company. And who can blame them? The data warehousing team was producing classic waste – reports that nobody read (or understood). The project teams felt these experiments were a waste of time, since they involved building features halfway, which meant they were never quite any good. And since nobody could agree on each outcome, it seemed like “running an experiment” was just code for postponing a hard decision. Worst of all, the executive team was getting chronic headaches. Their old product prioritization meetings may have been a battle of opinions, but at least they understood what was going on. Now they first had to sit through a ritual that involved complex math and reached no definite outcome, and then have the battle of opinions anyway!

When a company gets wedged like this, the solution is often surprisingly simple. In fact, I call this class of solutions “too simple to possibly work” because the people inside the situation can’t conceive that their complex problem could have a simple solution. When I’m asked to work with companies like this as a consultant, 99% of my job is to find a way to get the team to get started with a simple – but correct – solution.

Here was my prescription for this situation. I asked the team to consider creating what I call a sandbox for experimentation. The sandbox is an area of the product where the following rules are strictly enforced:
  1. Any team can create a true split-test experiment that affects only the sandboxed parts of the product, however:
  2. One team must see the whole experiment through end-to-end.
  3. No experiment can run longer than a specified amount of time (usually a few weeks).
  4. No experiment can affect more than a specified number of customers (usually expressed as a % of total).
  5. Every experiment has to be evaluated based on a single standard report of 5-10 (no more) key metrics.
  6. Any team that creates an experiment must monitor the metrics and customer reactions (support calls, forum threads, etc) while the experiment is in-progress, and abort if something catastrophic happens.
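Here is a minimal sketch of what enforcing rules 1 and 3–5 might look like in code for an online service (all names and thresholds are assumed for illustration):

    # Sandbox guard: scoped, capped, time-limited split tests with one
    # standard report for every experiment.
    import hashlib
    from datetime import datetime, timedelta

    STANDARD_METRICS = ["signup_rate", "activation_rate", "revenue_per_visitor",
                        "support_contacts", "retention_7d"]  # same report every time

    class SandboxExperiment:
        MAX_DURATION = timedelta(weeks=3)
        MAX_AUDIENCE_PCT = 5.0  # percent of all customers

        def __init__(self, name, audience_pct, owner_team):
            if audience_pct > self.MAX_AUDIENCE_PCT:
                raise ValueError("experiment exceeds the audience cap")
            self.name, self.audience_pct = name, audience_pct
            self.owner_team = owner_team  # one team sees it through end-to-end
            self.started = datetime.utcnow()

        def assign(self, customer_id):
            """Deterministic split: a customer always sees the same side."""
            if datetime.utcnow() - self.started > self.MAX_DURATION:
                return "control"  # expired experiments stop affecting anyone
            bucket = int(hashlib.sha1(f"{self.name}:{customer_id}".encode())
                         .hexdigest(), 16) % 10000
            return "treatment" if bucket < self.audience_pct * 100 else "control"

    # Usage: a pricing test owned end-to-end by one team, capped at 2%.
    pricing_test = SandboxExperiment("pricing_v2", audience_pct=2.0,
                                     owner_team="pricing")
    print(pricing_test.assign("customer-42"))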
Putting a system like this in place is relatively easy, especially for any kind of online service. I advocate starting small; usually, the parts of the product that start inside the sandbox are low-effort, high-impact aspects like pricing, initial landing pages, or registration flows. These may not sound very exciting, but because they control the product’s positioning for new customers, they often allow minor changes to have a big impact.

Over time, additional parts of the product can be added to the sandbox, until eventually it becomes routine for the company to conduct these rigorous split-tests for even very large new features. But that’s getting ahead of ourselves. The benefits of this approach are manifest immediately. Right from the beginning, the sandbox achieves three key goals simultaneously:

  1. It forces teams to work cross-functionally. The first few changes, like a price change, may not require a lot of engineering effort. But they require coordination across departments – engineering, marketing, customer service. Teams that work this way are more productive, as long as productivity is measured by their ability to create customer value (and not just stay busy).
       
  2. Everyone understands the results. True split-test experiments are easy to classify as successes or failures, because top-level metrics either move or they don’t. Either way, the team learns immediately whether their assumptions about how customers would behave were correct. By using the same metrics each time, the team builds literacy across the whole company about those key metrics.
     
  3. It promotes rapid iteration. When people have a chance to see a project through end-to-end, the work is done in small batches, and a clear verdict is delivered quickly, they benefit from the power of feedback. Each time they fail to move the numbers, they have a real opportunity for introspection. And, even more importantly, to act on their findings immediately. Thus, these teams tend to converge on optimal solutions rapidly, even if they start out with really bad ideas.
Putting it all together, let me illustrate with an example from another company. This team had been working for many months in a standard agile configuration: a disciplined engineering team taking direction from a product owner who would prioritize the features they should work on. The team was adept at responding to changes in direction from the product owner, and always delivered quality code.

But there was a problem. The team rarely received any feedback about whether the features they were building actually mattered to customers. Whatever learning took place happened at the level of the product owner; the rest of the team was just heads-down implementing features.

This led to a tremendous amount of waste, of the worst kind: building features nobody wants. We discovered this reality when the team started working inside a sandbox like the one I described above.

When new customers would try this product, they weren’t required to register at first. They could simply come to the website and start using it. Only after they started to have some success would the system prompt them to register – and after that, start to offer them premium features to pay for. It was a slick example of lazy registration and a freemium model. The underlying assumption was that making it seamless for customers to ease into the product was optimal. In order to support that assumption, the team had written a lot of very clever code to create this “tri-mode” experience (every part of the product had to treat guests, registered users and paying users somewhat differently).

One day, the team decided to put that assumption to the test. The experiment was easy to build (although hard to decide to do): simply remove the “guest” experience, and make everyone register right at the start.  To their surprise, the metrics didn’t move at all. Customers who were given the guest experience were not any more likely to register, and they were actually less likely to pay. In other words, all that tri-mode code was complete waste.
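The verdict came down to a comparison like this (the numbers here are invented for illustration, but the shape matches what the team saw):

    # Guest-first vs. register-first cohorts: did lazy registration help?
    cohorts = {
        "guest_first":    {"visitors": 5000, "registered": 1210, "paid": 118},
        "register_first": {"visitors": 5000, "registered": 1235, "paid": 141},
    }

    for name, c in cohorts.items():
        print(f"{name}: registration {c['registered'] / c['visitors']:.1%}, "
              f"paying {c['paid'] / c['registered']:.1%}")

    # guest_first:    registration 24.2%, paying 9.8%
    # register_first: registration 24.7%, paying 11.4%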

By discovering this unpleasant fact, the team had an opportunity to learn. They discovered, as is true of many freemium and lazy registration systems, that easy is not always optimal. When registration is too easy, customers can get confused about what they are registering for. (This is similar to the problem that viral loop companies have with the engagement loop: by making it too easy to join, they actually give away the positioning that allows for longer-term engagement.) More importantly, the experience led to some soul-searching. Why was a team this smart, this disciplined, and this committed to waste-free product development creating so much waste?

That’s the power of the sandbox approach.

Tuesday, October 6, 2009

A large batch of videos, slides, and audio

I've been trying very hard to avoid turning this blog into a travelogue. Normally, I try to make my post-event writeups more than just a transcript, by including reactions and comments. On this speaking tour, that's been simply impossible, so I've decided to let the following collection of videos, podcasts, and slides batch up for a little while. If you're interested in more real-time updates during my speaking tour, please tune into my twitter feed.

In the meantime, I hope you enjoy all this multimedia content. In addition to some of my recent talks, you can learn more about the Startup Visa movement and enjoy two really interesting lean startup case studies.

My Stanford Entrepreneurial Thought Leader Seminar courtesy of Stanford Ecorner (audio podcast only for now, video coming soon):


If you'd like to follow along with slides, they are here:



From high atop the BT Tower in London, this brief BT Tradespace interview:


Why do we need a Startup Visa? A Tale of 2 Erics:


Also in London, I took up a lot of airtime during day two of Seedcamp. You can read highlights on their blog, or watch this short video:


Seedcamp - Day 2 Highlights from Seedcamp on Vimeo.


Or watch my full #leanstartup presentation at Seedcamp in London:


And two bonus videos that are well worth watching (weally):

Timothy Fitz, who worked for me at IMVU, giving an in-depth presentation on the details of the continuous deployment system that we built there.


With accompanying slides:


pbWorks (formerly pbWiki) was one of the first companies that ever invited me to join their advisory board. I like to think that had some small part in causing their subsequent success. Judge for yourself by watching David Weekly's #leanstartup case study (pbWorks):


Thanks to everyone who has helped plan, organize, record and attend these many events!

Monday, October 5, 2009

The curse of prevention

Beware! I have detected a secret virus in your CPU. Due to an interaction effect between your hardware, solar flares, and quantum flux, this virus will crash your computer and erase your hard drive sometime soon. There is only one way to prevent disaster: you must click the subscribe button over on the right there. Go ahead, I’ll wait.

Did you do it? Good. Now you’re safe from that dastardly virus. How do you know my solution worked? Just wait. See, no crashing. You should really say thank you.

Now, I know some of you didn’t believe my urgent virus warning, and therefore didn’t take my proposed solution. But you’re not safe. That virus is still out there, lurking. It could strike at any minute. And when your computer eventually crashes, you should feel bad that you didn’t listen to me.

OK, I admit it. There is no virus. I did my best to exaggerate this claim without saying anything disprovable, in order to illustrate the curse of prevention. Imagine for a moment that you believed my claim about the dangerous virus. After investing in my proposed solution, you probably would be grateful that I “prevented” the problem from happening. In an example this ludicrous, that hopefully sounds funny. But companies make this mistake repeatedly.

Let’s take a common real-world example. It’s important to invest in good architecture so that your website will scale once customers arrive. If you make that investment, and then customers arrive, and the site stays up, most companies will reward the people who built the architecture and, thus, prevented the scaling problems. That’s every bit as crazy as the bogus claim I made earlier. How do you know the problem was actually prevented? Isn’t it just as possible that it never would have occurred in the first place? Or, if it really was prevented, what was the opportunity cost of choosing to prevent it ahead of time?

In other words, there is a formula for evaluating the success of any proposed prevention:

IF
cost of prevention < (probability of problem occurring) * (cost of problem)
THEN
do it
ELSE
ignore it

The killer thing about this formula is that every single term in it is unknown. And in most situations, there is significant cost involved in negotiating over the right estimates to plug in.
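One way to see why those negotiations get so heated: run the formula across the whole range of plausible estimates and notice how often the verdict flips. A sketch with assumed numbers:

    # The prevention formula, evaluated over ranges rather than point guesses.
    def worth_preventing(cost_prevention, p_problem, cost_problem):
        return cost_prevention < p_problem * cost_problem

    verdicts = {worth_preventing(cost_prevention=10_000, p_problem=p, cost_problem=c)
                for p in (0.05, 0.2, 0.5)      # plausible probabilities
                for c in (20_000, 100_000)}    # plausible costs if it happens

    # If every plausible estimate gives the same answer, the call is easy.
    print("clear call" if len(verdicts) == 1 else "estimate-dependent: argue away")

Whenever the answer is estimate-dependent, the decision is really being made by whoever controls the estimates.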

I have been present for these kinds of negotiations many times in my career. They are usually among the most heated arguments a company has. Like other situations that I’ve written about, they tend to devolve into competing all-or-nothing camps. One side insists that we should build things the right way, and that failure to anticipate problems is an abdication of responsibility. But the other side wants to get things done, and doing things right somehow, always, every time seems to involve postponing useful work. Both sides suspect that deep down, secretly, the other side is using their arguments over architecture (or planning, or roadmaps, or specifications) to advance a secret agenda. Ever notice how people’s pet projects seem to be exempt?

Why do they harbor that paranoia? It’s easy to see. Say you want to derail someone else’s project. Just start enumerating corner cases. Imagine everything that might go wrong, and insist that those things be prevented before the project is launched. It’s a win-win: you either dramatically increase the proposed cost of the project, making it easier to get cancelled, or you can rely on some “I-told-you-so’s” when the project does launch and encounters inevitable problems, which gives you credibility in future such arguments. On the other side, if you want a project to go forward, you can suddenly "discover" all kinds of extra efficiencies that make this particular project an especially good deal. In the past, we invested in brilliant architecture, code reuse, refactoring, modular design, etc. that now makes it a simple matter to add this feature without much risk of corner cases. Right.

Managing these situations is hard for any company, but potentially lethal for a startup. There are just so many ways for a startup to fail. I’ve lived through the over-architecture failure – where attempting to prevent all kinds of problems wound up delaying the company from putting out any product at all. And I’ve seen companies fail the other way – the so-called Friendster effect: having a high-profile technical failure just when customer adoption is going wild.

Most of the advice I’ve heard on this topic has been a kind of split-the-difference approach. The theory is that there is some truth in both camps, and the right way to manage the disagreement is to sprinkle a little bit of both into our plans. A little planning, but not too much. Prevent some corner cases, but not others. The problem with this advice, as I’ve experienced it, is that it’s pretty hard to give a rationale for why we should anticipate this problem but ignore another one. To the people being managed that way, it feels like the boss is being capricious or arbitrary. And that feeds the conspiracy feeling that decisions have an ulterior motive.

So I’d like to lay out a systematic way to avoid death-by-corner-case without sacrificing the company’s ability to grow. In other words, a principled way to combine agility with stability.

The first shift required is a change in orientation from prevention to fast response. Many problems are catastrophic only if allowed to fester. Imagine you hear from an engineer that they are worried that a certain payment subsystem is unreliable, and will therefore double-charge some customers. One way to evaluate this fear is to spend time on analysis: how many customers will be affected? What is the maximum amount of overcharging that will happen? How upset will those customers be? How much will it cost to solve this problem now? In this framework, we’ll tend to either invest in the proposed prevention or do nothing.

But there is another way. Imagine we asked the following question: if this problem does materialize in the future, how will we know? In a lot of systems, it might take days or weeks to uncover a problem-in-action. Maybe we already have a mechanism for customers to report this kind of problem, or maybe we could invest in a simple alert counter that increments whenever the problem happens, and sends a notification if it happens often. Then, we’d know immediately if the problem ever manifests, and get a simultaneous report on its severity.
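As a sketch, such an alert counter can be just a few lines (names invented, continuing the double-charge example above):

    # Instrument the feared failure so it reports itself immediately.
    import logging

    class ProblemAlert:
        """Count occurrences of a feared problem; page someone at a threshold."""
        def __init__(self, name, threshold, notify):
            self.name, self.threshold, self.notify = name, threshold, notify
            self.count = 0

        def record(self, details=""):
            self.count += 1
            logging.warning("%s occurred (#%d): %s", self.name, self.count, details)
            if self.count == self.threshold:  # a real version would use a time window
                self.notify(f"{self.name} hit {self.count} occurrences; investigate")

    double_charge = ProblemAlert("double_charge", threshold=5,
                                 notify=lambda msg: print("PAGE ON-CALL:", msg))

    # At the suspected failure point in the payment code:
    # if customer_was_charged_twice(order):
    #     double_charge.record(f"order={order.id}")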

We can also ask: how would we fix the problem if it does occur? If we’re practicing continuous deployment, we can be confident that we’ll be able to rush an emergency fix into production without risking introducing further problems. If not, maybe an investment in that direction would be more warranted. In other words, you can always invest in process, batch size reduction, and agility as an alternative to preventing a specific problem.

There are two principal reasons why this second approach is better. The first is that it allows us to make variable-sized investments in response to a feared corner case. Instead of “do the fix” or “hack it up” we can choose increments of investment anywhere in-between. That gives teams a lot more flexibility in the face of the numerous corner cases that come up. Second, investing in fast response is a more resilient strategy. If we’re wrong about the corner case, the investments we’ve made in fast response will allow us to respond faster to whatever problems do appear. By contrast, most investments in traditional prevention are designed to anticipate and fix a specific problem.

But investing in fast response doesn’t solve the whole problem. That’s because there’s still a lot of judgment involved in choosing the right level of investment to make in any given case. It can feel incongruous to people who are used to the traditional model because it has a built-in paradox: you will encounter a lot of cases where you know a problem exists, you know how to solve that problem, and you are investing time related to that problem – but you are not investing in the solution. To a lot of smart engineers, that sounds crazy.

That’s why it’s essential to pair the fast response aspect of this approach with a disciplined commitment to root cause analysis. Regular readers of this blog will know the specific methodology I recommend, called Five Whys. But regardless of the technique you use, it’s essential that you get regular feedback about how your prevention decisions are turning out in practice. When you’re heavily investing in prevention, you need to evaluate whether that’s causing your team to go faster. If it’s not, then you’re investing too much in one-off solutions and not enough in process. And if you’re having a lot of problems, you need to have a mechanism for ramping up your investment in prevention to avoid having your whole team dragged down into firefighting. Systems like Five Whys create a natural feedback loop: when you're going too fast, causing a lot of new problems, it slows you down to invest in prevention. As those preventative efforts pay off, the team naturally speeds up.

The most dangerous situation you can find yourself in is investing in prevention and also firefighting all the time. That’s why there is a third essential component to this approach. You need to have a long-term vision of where you’re headed. That’s because not all investments are created equal. In most real-world situations, any particular problem (or proposed problem) will have multiple kinds of solutions that you could invest in. Take your typical scalability bottleneck. It could be fixed by refactoring the code itself, or by partitioning the data horizontally or vertically, or by adding additional capacity at the point of the bottleneck, or by shaping end-user demand, or even by removing the feature itself. At any given point in time, which is the right solution? Here’s my belief: the right solution is always the one that moves you closest to your vision while simultaneously solving the problem. Thus it is unacceptable to choose a solution that solves the problem but makes no progress towards the end-state, just as it is unacceptable to invest in a solution that builds a beautiful vision but doesn’t solve today’s problem. Finding such a solution is sometimes challenging, but that’s the moment when it really pays to spend some time thinking through alternative approaches. In my experience, where there is a will to find a synthesis solution, there is always a way.