Tuesday, June 23, 2009

Join the lean startup discussion at Facebook on Thursday

I recently agreed to join Facebook's fbFund incubator as a mentor. Part of that includes giving a presentation for the fbFund companies on the lean startup methodology. I just got word that this is happening this coming Thursday, June 25 in Palo Alto (in the former Facebook HQ). Best of all, they are specially opening this event up to you, dear readers. Priority will be given to employees of companies participating in the fbFund REV program, but I am confident that there will be room for members of the general public who want to attend. If you'd like to reserve a spot, please RSVP.
Host: fbFund
Network: Global
Date: Thursday, June 25, 2009
Time: 1:30pm - 3:30pm
Location: fbFund REV Garage, 164 Hamilton, Palo Alto, CA
RSVP Here
So, if you are in the Bay Area and want to swing by Palo Alto on Thursday afternoon, come join the conversation. As always, if you're a reader, please do say hello.

Monday, June 22, 2009

Pivot, don't jump to a new vision

In a lean startup, instead of being organized around traditional functional departments, we use a cross-functional problem team and solution team. Each has its own iterative process: customer development and agile development respectively. And the two teams are joined together into a company-wide feedback loop that allows the whole company to be built to learn. This combination allows startups to increase their odds of success by having more major iterations before they run out of resources. It increases the runway without additional cash.

Increasing iterations is a good thing - unless we're going in a circle. The hardest part of entrepreneurship is to develop the judgment to know when it's time to change direction and when it's time to stay the course. That's why so many lean startup practices are focused on learning to tell the difference between progress and wasted effort. One such practice is to pivot from one vision to the next.

Changing the vision is a hard thing to do, and should not be undertaken lightly. Some startups avoid getting customer feedback for precisely this reason: they are afraid that if early reactions are negative, they'll be "forced" to abandon their vision. That's not the goal of a lean startup. We collect feedback for one reason only: to find out whether our vision is compatible with reality or is a delusion. As Steve Blank writes in The Four Steps to the Epiphany, we always seek to find a market for the product as currently specified, not conduct a focus group to tell us what the spec should be. If and only if we can't find any market for our current vision is it appropriate to change it.

So how do you know it's time to change direction? And how do you pick a new direction? These are challenging questions, among the hardest that an early startup team will have to grapple with. Some startups fail because the founders can't have this conversation - they either blow up when they try, or they fail to change because they are afraid of conflict. Both are lethal outcomes.

I want to introduce the concept of the pivot, the idea that successful startups change directions but stay grounded in what they've learned. They keep one foot in the past and place one foot in a new possible future. Over time, this pivoting may lead them far afield from their original vision, but if you look carefully, you'll be able to detect common threads that link each iteration. By contrast, many unsuccessful startups simply jump outright from one vision to something completely different. These jumps are extremely risky, because they don't leverage the validated learning about customers that came before.

I've spoken in some detail about a specific pivot that we went through at IMVU, when we decided to abandon the instant messaging add-on concept, and switch to a standalone instant messaging network. We went through another pivot when we switched again from instant messaging to social networking. Although I wish I could take credit for these pivots, the reality is that they were not caused by my singular insight or that of my other co-founders. Instead, they were made possible by a process-oriented approach that stimulated our thinking and encouraged us to take prudent risks. More than anything, it forced us to take advantage of necessity, the mother of invention. Here's what it looked like.

IMVU had a roughly two-month-long development cycle. Each cycle was punctuated by a meeting of our Business Advisory Board (BAB). At this meeting, we would present our goals for the cycle, all the raw results we'd managed to collect, and our conclusions about what was next. This created a forum for deep thinking and conflict over the direction of the company - a classic problem team activity. It gave the whole company license to go heads-down building product as fast as possible during the development cycle, acting as a solution team should. We knew we'd have the opportunity to think strategically at least once per cycle, so we could stay focused tactically in the meantime.

When it was time to pivot, there were usually certain signs that we'd look to. The most important one actually came from solution team activities. When your fundamental product hypothesis is wrong, the solution team is going to be chronically frustrated. You can try every kind of experiment, add new features, innovate like crazy, optimize the funnel - and get only modest results. One or two cycles of that kind of frustration and you might be able to blame the solution team for insufficient creativity. But eventually, as the company fails to find traction, you start to ask problem team questions: are we really solving an important problem for customers? Are our early adopters really adopting? And does our product really solve the problem we've promised them?

Ironically, although it's the solution team that is the early-warning system ("canary in the coal mine") for pivots, it's actually hard for the solution team to make the decision to pivot. That's why it's so essential to have a co-equal problem team. The more work you've sunk into a product or vision, the harder it is to let it go. As the CTO/VP Engineering, I was the worst offender. It was incredibly hard for me to throw out working code, especially when it was well-factored, unit tested, and generally brilliant (if I do say so myself). I was stuck between a rock and a hard place. In each cycle leading up to a pivot, despite our best efforts, the metrics weren't good enough. We didn't believe the problem was that we weren't trying hard enough. But we also didn't want to believe that the work we'd expended so far was a waste. It was painful.

The problem team/solution team combined with the concept of the pivot provides a way out. First of all, remember that each team is cross-functional. That means that I (and other engineers) were able to participate in the problem team discussions. Just wearing a different hat made it easier to consider abandoning our work. Such discussions would have been impossible in our execution-oriented engineering team meetings. Context matters. Providing a full view of all the raw data helped, too. It allowed our advisors to help us see patterns we had missed, zooming out the viewpoint from the trees back to the forest. From that angle, it was easier to accept that our micro problems had macro causes.

The pivot helped even more. The hard part about abandoning work is the feeling of wasted effort, that we'd have been just as well-off if we had spent the past few months on vacation instead of working incredibly hard. By pivoting, we honor all the effort by recognizing that learning would have been impossible without the work of the solution team. And rather than just abandoning all that work, we look for ways to take advantage of it in our new direction.

That's the pattern we see in so many successful startups. They did everything they could to take advantage of what they'd built so far. Most engineers naturally think about repurposing the technology platform, and this is a common pattern. But there are a lot of other possibilities. I'd like to call out three in particular: pivot on customer segment, pivot on customer problem, or pivot on a specific feature.

In a segment pivot, we try to take our existing product and use it to solve a similar problem for a different set of customers. This happens commonly when consumer products get unexpectedly adopted in enterprise, as happened to my friends at PBworks. In those cases, the product may stay mostly the same, but the positioning, marketing, and - most importantly - prioritization of features changes dramatically.

In a customer problem pivot, we try to solve a different problem for the same customer segment. This is usually an exciting kind of change. When doing intense customer development, the problem team can attain a high level of empathy with potential customers. If the result of that exercise is a realization that customers have a problem our solution doesn't address, and that problem is more promising - it's time to pivot. Starbucks famously did this pivot when they went from selling coffee beans and espresso makers to brewing drinks in-house. They were still serving high-end coffee aficionados, but in a more convenient form. This paved the way for their crossing-the-chasm type breakthrough with mainstream customers.

In a feature pivot, we single out a specific feature from our current product and reorient the whole company around it. A good example is PayPal realizing that their customers were gravitating to the email-payments part of their original solution, and ignoring the complex PDA-based cryptography solution. In order to do this kind of pivot, you need to pay close attention to what customers are really doing, not what you think they should do. It also requires abandoning the extra features that make it hard for new customers to discover what's really valuable about the new, simplified solution.

Without the tools to pivot well, startups get stuck between two extremes: the living dead, still expending energy but not really making progress, always hoping the next new feature will cause traction to magically materialize, and the compulsive jumper, never picking a single direction long enough to find out if there's anything there. Instead of these dead-ends, use the problem and solution team framework and then: pivot, don't jump.

Monday, June 15, 2009

Why Continuous Deployment?

Of all the tactics I have advocated as part of the lean startup, none has provoked as many extreme reactions as continuous deployment, a process that allows companies to release software in minutes instead of days, weeks, or months. My previous startup, IMVU, has used this process to deploy new code an average of fifty times a day. This has stirred up some controversy, with some claiming that this rapid release process contributes to low-quality software or prevents the company from innovating. If we accept the verdict of customers instead of pundits, I think these claims are easy to dismiss. Far more common, and far more difficult, is the range of questions from people who simply wonder if it's possible to apply continuous deployment to their business, industry, or team.

The particulars of IMVU’s history give rise to a lot of these concerns. As a consumer internet company with millions of customers, it may seem to have little relevance for an enterprise software company with only a handful of potential customers, or a computer security company whose customers demand a rigorous audit before accepting a new release. I think these objections really miss the point of continuous deployment, because they focus on the specific implementations instead of general principles. So, while most of the writing on continuous deployment so far focuses on the how of it, I want to focus today on the why. (If you're looking for resources on getting started, see "Continuous deployment in 5 easy steps")

The goal of continuous deployment is to help development teams drive waste out of their process by simultaneously reducing the batch size and increasing the tempo of their work. This makes it possible for teams to get – and stay – in a condition of flow for sustained periods. This condition makes it much easier for teams to innovate, experiment, and achieve sustained productivity. And it nicely complements other continuous improvement systems, such as Five Whys.

One large source of waste in development is “double-checking.” For example, imagine a team operating in a traditional waterfall development system, without continuous deployment, test-driven development, or continuous integration. When a developer wants to check in code, this is a very scary moment. He or she has a choice: check in now, or double-check to make sure everything still works and looks good. Both options have some attraction. If they check in now, they can claim the rewards of being done sooner. On the other hand, if they cause a problem, their previous speed will be counted against them. Why didn't they spend just another five minutes making sure they didn't cause that problem? In practice, how developers respond to this dilemma is determined by their incentives, which are driven by the culture of their team. How severely is failure punished? Who will ultimately bear the cost of their mistakes? How important are schedules? Does the team value finishing early?

But the thing to notice in this situation is that there is really no right answer. People who agonize over the choice reap the worst of both worlds. As a result, developers will tend towards two extremes: those who believe in getting things done as fast as possible, and those who believe that work should be carefully checked. Any intermediate position is untenable over the long-term. When things go wrong, any nuanced explanation of the trade-offs involved is going to sound unsatisfying. After all, you could have acted a little sooner or been a little more careful – if only you’d known what the problem was going to be in advance. Viewed through the lens of hindsight, most of those judgments look bad. On the other hand, an extreme position is much easier to defend. Both have built-in excuses: “sure there were a few bugs, but I consistently over-deliver on an intense schedule, and it’s well worth it” or “I know you wanted this done sooner, but you know I only ever deliver when it’s absolutely ready, and it’s well worth it.”

These two extreme positions lead to factional strife in development teams, which is extremely unpleasant. Managers start to make a note of who’s in which faction, and then assign projects accordingly. Got a crazy last-minute feature? Get the Cowboys to take care of it – and then let the Quality Defenders clean it up in the next release. Both sides start to think of their point of view in moralistic terms: “those guys don’t see the economic value of fast action, they only care about their precious architecture diagrams” or “those guys are sloppy and have no professional pride.” Having been called upon to mediate these disagreements many times in my career, I can attest to just how wasteful they are.

However, they are completely logical outgrowths of a large-batch-size development process that forces developers to make trade-offs between time and quality, using the old “time-quality-money, pick two” fallacy. Because feedback is slow in coming, the damage caused by a mistake is felt long after the decisions that caused the mistake were made, making learning difficult. Because everyone gets ready to integrate with the release batch around the same time (there being no incentive to integrate early), conflicts are resolved under extreme time pressure. Features are chronically on the bubble, about to get deferred to the next release. But when they do get deferred, they tend to have their scope increased (“after all, we have a whole release cycle, and it’s almost done…”), which leads to yet another time crunch, and so on. And, of course, the code rarely performs in production the way it does in the testing or staging environment, which leads to a series of hot-fixes immediately following each release. These come at the expense of the next release batch, meaning that each release cycle starts off behind.

Many times when I interview a development team caught in the pincers of this situation, they want my help "fixing people." Thanks to a phenomenon called the Fundamental Attribution Error in psychology, humans tend to become convinced that other people’s behavior is due to their fundamental attributes, like their character, ethics, or morality – even while we excuse our own actions as being influenced by circumstances. So developers stuck in this world tend to think the other developers on their team are either, deep in their souls, plodding pedants or sloppy coders. Neither is true – they just have their incentives all messed up.

You can’t change the underlying incentives of this situation by getting better at any one activity. Better release planning, estimating, architecting, or integrating will only mitigate the symptoms. The only traditional technique for solving this problem is to add in massive queues in the form of schedule padding, extra time for integration, code freezes and the like. In fact, most organizations don’t realize just how much of this padding is already going on in the estimates that individual developers learn to generate. But padding doesn’t help, because it serves to slow down the whole process. And as all development teams will tell you – time is always short. In fact, excess time pressure is exactly why they think they have these problems in the first place.

So we need to find solutions that operate at the systems level to break teams out of this pincer action. The agile software movement has made numerous contributions: continuous integration, which helps accelerate feedback about defects; story cards and kanban that reduce batch size; a daily stand-up that increases tempo. Continuous deployment is another such technique, one with a unique power to change development team dynamics for the better.

Why does it work?

First, continuous deployment separates out two different definitions of the term “release.” One is used by engineers to refer to the process of getting code fully integrated into production. Another is used by marketing to refer to what customers see. In traditional batch-and-queue development, these two concepts are linked. All customers will see the new software as soon as it’s deployed. This requires that all of the testing of the release happen before it is deployed to production, in special staging or testing environments. And this leaves the release vulnerable to unanticipated problems during this window of time: after the code is written but before it's running in production. On top of that overhead, by conflating the marketing release with the technical release, the amount of coordination overhead required to ship something is also dramatically increased.

Under continuous deployment, as soon as code is written, it’s on its way to production. That means we are often deploying just 1% of a feature – long before customers would want to see it. In fact, most of the work involved with a new feature is not the user-visible parts of the feature itself. Instead, it’s the millions of tiny touch points that integrate the feature with all the other features that were built before. Think of the dozens of little API changes that are required when we want to pass new values through the system. These changes are generally supposed to be “side effect free” meaning they don’t affect the behavior of the system at the point of insertion – emphasis on supposed. In fact, many bugs are caused by unusual or unnoticed side effects of these deep changes. The same is true of small changes that only conflict with configuration parameters in the production environment. It’s much better to get this feedback as soon as possible, which continuous deployment offers.
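One common way to keep deployed-but-unfinished work invisible to customers is a feature flag. The following is a minimal sketch of that idea under my own assumptions; the flag store, flag names, and checkout functions are hypothetical illustrations, not IMVU's actual system:

```python
# Minimal sketch: code ships to production continuously, but an unfinished
# feature stays hidden behind a flag until it is "released" to customers.
# The flag store and names here are hypothetical.

FEATURE_FLAGS = {
    # Deployed to production, but visible only to a few internal test accounts.
    "new_checkout_flow": {"enabled": False, "allowed_user_ids": {42, 99}},
}

def feature_enabled(flag_name, user_id):
    """Return True if this user should see the feature."""
    flag = FEATURE_FLAGS.get(flag_name)
    if flag is None:
        return False
    return flag["enabled"] or user_id in flag["allowed_user_ids"]

def render_new_checkout(user_id):
    return f"new checkout for user {user_id}"   # the 1%-done feature

def render_old_checkout(user_id):
    return f"old checkout for user {user_id}"   # what everyone else still sees

def render_checkout(user_id):
    if feature_enabled("new_checkout_flow", user_id):
        return render_new_checkout(user_id)
    return render_old_checkout(user_id)

print(render_checkout(42))   # internal tester sees the new flow
print(render_checkout(7))    # regular customer still gets the old flow
```

The point of the sketch is only that the engineering release (code in production) and the marketing release (flipping the flag on for everyone) become two separate, independently scheduled events.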

Continuous deployment also acts as a speed regulator. Every time the deployment process encounters a problem, a human being needs to get involved to diagnose it. During this time, it’s intentionally impossible for anyone else to deploy. When teams are ready to deploy, but the process is locked, they become immediately available to help diagnose and fix the deployment problem (the alternative, that they continue to generate, but not deploy, new code just serves to increase batch sizes to everyone’s detriment). This speed regulation is a tricky adjustment for teams that are accustomed to measuring their progress via individual efficiency. In such a system, the primary goal of each engineer is to stay busy, using as close to 100% of his or her time for coding as possible. Unfortunately, this view ignores the overall throughput of the team. Even if you don’t adopt a radical definition of progress, like the “validated learning about customers” that I advocate, it’s still sub-optimal to keep everyone busy. When you’re in the midst of integration problems, any code that someone is writing is likely to have to be revised as a result of conflicts. Same with configuration mismatches or multiple teams stepping on each other’s toes. In such circumstances, it’s much better for overall productivity for people to stop coding and start talking. Once they figure out how to coordinate their actions so that the work they are doing doesn’t have to be reworked, it’s productive to start coding again.
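To make the mechanism concrete, here is a minimal sketch of what such a deploy lock might look like. The file-based lock, names, and smoke-test hook are illustrative assumptions, not a description of a real pipeline:

```python
# Sketch of the "speed regulator": a shared lock that a failing deploy leaves
# in place, so the next person who wants to ship has to help fix things first.
import os

LOCK_FILE = "/tmp/deploys.lock"  # hypothetical shared lock location

def acquire_deploy_lock(owner):
    """Refuse to deploy while someone else's failed deploy is being diagnosed."""
    if os.path.exists(LOCK_FILE):
        with open(LOCK_FILE) as f:
            holder = f.read().strip()
        raise RuntimeError(
            f"Deploys are locked by {holder}. Go help diagnose the problem "
            "instead of piling up more undeployed code."
        )
    with open(LOCK_FILE, "w") as f:
        f.write(owner)

def release_deploy_lock():
    os.remove(LOCK_FILE)

def deploy(owner, run_smoke_tests):
    """Deploy, then keep the pipeline locked if post-deploy checks fail."""
    acquire_deploy_lock(owner)
    if run_smoke_tests():
        release_deploy_lock()  # healthy deploy; the next person can ship
    else:
        # Leave the lock in place: no one else can deploy until a human
        # diagnoses and fixes the failure.
        raise RuntimeError("Post-deploy checks failed; pipeline stays locked.")
```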

Returning to our development team divided into Cowboy and Quality factions, let’s take a look at how continuous deployment can change the calculus of their situation. For one, continuous deployment fosters learning and professional development – on both sides of the divide. Instead of having to argue with each other about the right way to code, each individual has an opportunity to learn directly from the production environment. This is the meaning of the axiom to “let your defects be your teacher.”

If an engineer has a tendency to ship too soon, they will tend to find themselves grappling with the cluster immune system, continuous integration server, and five whys master more often. These encounters, far from being the high-stakes arguments inherent in traditional teams, are actually low-risk, mostly private or small-group affairs. Because the feedback is rapid, Cowboys will start to learn what kinds of testing, preparation and checking really do let them work faster. They’ll be learning the key truth that there is such a thing as “too fast” – many quality problems actually slow you down.
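Here is a rough sketch of the kind of check a cluster immune system performs: compare key business metrics to their pre-deploy baseline and automatically revert if anything regresses. The metric names, threshold, and revert hook are illustrative only, not production code:

```python
# Sketch of an immune-system check: auto-revert a deploy when key business
# metrics fall past a threshold. Metrics and numbers are invented.

REGRESSION_THRESHOLD = 0.10  # flag any metric that drops more than 10%

def regressed_metrics(baseline, current, threshold=REGRESSION_THRESHOLD):
    """Return the names of metrics that fell more than `threshold` vs. baseline."""
    bad = []
    for name, before in baseline.items():
        after = current.get(name, 0.0)
        if before > 0 and (before - after) / before > threshold:
            bad.append(name)
    return bad

def post_deploy_check(baseline, current, revert):
    """Automatically roll back the last change set if business metrics regressed."""
    bad = regressed_metrics(baseline, current)
    if bad:
        revert()
        raise RuntimeError(f"Deploy reverted; regressed metrics: {bad}")

# Example: signups per minute fell sharply right after a deploy, so it reverts.
try:
    post_deploy_check(
        baseline={"signups_per_min": 50.0, "payments_per_min": 5.0},
        current={"signups_per_min": 30.0, "payments_per_min": 5.2},
        revert=lambda: print("reverting last change set"),
    )
except RuntimeError as err:
    print(err)
```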

Engineers who have the tendency to wait too long before shipping have lessons to learn, too. For one, the larger the batch size of their work, the harder it will be to get it integrated. At IMVU, we would occasionally hire someone from a more traditional organization who had a hard time letting go of their “best practices” and habits. Sometimes they’d advocate for doing their work on a separate branch, and only integrating at the end. Although I’d always do my best to convince them otherwise, if they were insistent I would encourage them to give it a try. Inevitably, a week or two later, I’d enjoy the spectacle of watching them engage in something I called “code bouncing.” It's like throwing a rubber ball against the wall. In a code bounce, someone tries to check in a huge batch. First they have integration conflicts, which require talking to various people on the team to know how to resolve them properly. Of course, while they are resolving, new changes are being checked in. So new conflicts appear. This cycle repeats for a while, until the engineer either catches up to all the conflicts or just asks the rest of the team for a general check-in freeze. Then the fun part begins. Getting a large batch through the continuous integration server, incremental deploy system, and real-time monitoring system almost never works on the first try. Thus the large batch gets reverted. While the problems are being fixed, more changes are being checked in. Unless we freeze the work of the whole team, this can go on for days. But if we do engage in a general check-in freeze, then we’re driving up the batch size of everyone else – which will lead to future episodes of code bouncing. In my experience, just one or two episodes are enough to cure anyone of their desire to work in large batches.

Because continuous deployment encourages learning, teams that practice it are able to get faster over time. That’s because each individual’s incentives are aligned with the goals of the whole team. Each person works to drive down waste in their own work, and this true efficiency gain more than offsets the incremental overhead of having to build and maintain the infrastructure required to do continuous deployment. In fact, if you practice Five Whys too, you can build all of this infrastructure in a completely incremental fashion. It’s really a lot of fun.

One last benefit: morale. At a recent talk, an audience member asked me about the impact of continuous deployment on morale. This manager was worried that moving their engineers to a more-rapid release cycle would stress them out, making them feel like they were always fire fighting and releasing, and never had time for “real work.” As luck would have it, one of IMVU’s engineers happened to be in the audience at the time. They provided a better answer than I ever could. They explained that by reducing the overhead of doing a release, each engineer gets to work to their own release schedule. That means, as soon as they are ready to deploy, they can. So even if it’s midnight, if your feature is ready to go, you can check-in, deploy, and start talking to customers about it right away. No extra approvals, meetings, or coordination required. Just you, your code, and your customers. It’s pretty satisfying.

Friday, June 12, 2009

Lean Startup Workshop scholarship program

I haven't had much time to write lately, and so haven't been able to share much about the Lean Startup Workshop series I have been producing with O'Reilly in their Master Class division. We had the first event last month, and the next is coming up on June 18th. The May workshop was a huge success, with much better turnout and feedback than I had any right to expect. I'm incredibly grateful to the early adopters who were able to be there. Thank you all.

Part of what made the workshop so successful was the caliber of the participants in the room. These were serious entrepreneurs who have the vision to see how the lean startup concept can help them increase their odds of success. Thus, the quality of the questions I was asked, the discussion in the room during exercises, and the overall intensity were higher than anything I've experienced in any other venue. That's why I'm so excited about this format. (For more on what it was like, you can read an in-depth review from the first workshop here).

So far, we have limited registration at the workshops to people who can afford the admittedly high cost of entry. I believe this is part of what made the first workshop such a success - the people in the room were visionary customers who therefore came ready to work, not just to be entertained (by contrast with what I see at some of my speaking engagements). However, this financial bar has had the effect of excluding one segment of potential customers that I'd really like to see there - early stage entrepreneurs who have all the intelligence and vision of their later stage counterparts, but simply cannot afford the cash to attend.

Thus, I'm excited to announce that we're trying an experiment in providing scholarships for worthy entrepreneurs. For starters, we've reserved a few seats in the June 18th workshop (next week), and O'Reilly has agreed to jointly sponsor these scholarships with me. We're also soliciting additional sponsors for future workshops; if you or your company are interested, please let me know. The next two workshops aren't scheduled until the fall (Oct 30 in SF, Dec 10 in NYC) - look for a separate announcement about those.

These scholarships will provide extremely discounted pricing for entrepreneurs who demonstrate that they would contribute positively to the discussion in the room. They are not for people who are casually interested - we're looking for more early adopters. If you think you meet that description, and want to come join us on June 18th, please send me an email with a brief less-than-one-page (please, no more) explanation of why you want to be there. We'll reply by email if you are selected to attend.

Once again, thanks to all of you for your passion and support. For more on the workshop series, you can read about them here.

Tuesday, June 9, 2009

The Lean Startup Tokyo edition

I had a blast speaking at Startonomics Tokyo, which was organized to foster ties between the startup cultures in Japan and Silicon Valley. It was an eye-opening day, and a great crowd to present to. As usual, I'll post the slides and then check in with the live commentary and feedback, and offer some additional comments. Without further ado, the slides:



And now, the feedback:
Slansing97: #leanstartup @ericries Stumbled into your live talk, and it's very relevant to me! I'm watching as EA clings onto the waterfall model. Thx!

adamjacksonSF: #GoaP #leanstartup - notes and slides from Eric's preso - doesn't do it justice - go see him live if you can - http://bit.ly/Uherx

yongfook: @ericries is a rock star. Very concise presentation and a great speaker. I am now decompressing with a guinness. Mmm. #goap
Thanks!

ericnakagawa: Show of hands how many in startup here? 40%, How many think they could iterate faster? Same #. #leanstartup #goap
It was great to be in an audience of entrepreneurs who recognized the value of iteration and speed. Even though they may not know how to improve, they were eager to learn. It meant the questions and discussion were very practical.
benjaminjoffe: early adopters of buggy product are visionary customers, sometimes smarter than founders! #goap #ericries

InvisibleGaijin: #goap #leanstartup Eric Ries talks about importance of "visionary customers" in startup success. Brilliant insight.
Many founders don't like to hear that visionary customers are as smart as they are - maybe even smarter. Startups need to spend time with these customers. In fact, early stage companies shouldn't be able to get time from anyone else - who else would be crazy enough to try a truly innovative new product? Incidentally, I can't take credit for this idea - it appears in The Four Steps to the Epiphany, Crossing the Chasm, and many others.

christinelu: if you're building a disruptive innovation ...the only people who you want to talk to are early adopters. not investors says @ericries #goap
heysanford: Nay, recipe for chasm crossing fail. RT @christinelu: building a disruptive innovation? ... only talk to early adopters. @ericries #goap
Continuing on the theme of early adopters, I thought this exchange was really interesting. First of all, let me emphasize how much more important it is to talk to customers than to talk to investors, journalists, and the people who hang around at industry trade shows. I've previously recounted the story of the IMVU "IM add-on" feature, a feature that sounds so good on paper I wound up building it twice. Yet it's one of those features that's only ever requested by investors and engineers - never by customers.

Books like Crossing the Chasm are excellent, but they can be misleading. Getting to the chasm is actually quite difficult; most truly early-stage startups never even get that far. The most important thing is to realize that all strategies and tactics are context-sensitive. It's never "always correct" to do a certain thing, and therefore there really aren't any universal "best practices." Instead, we need to focus on tuning our practices to our real situation. Thus, even something as general as "listening to customers" can actually be lethally bad advice.
davetroy: "Think about how *hard* it would be to get a big company to steal your idea. That paranoia is totally ridiculous." - @ericries at #goap
People who work in big companies often laugh out loud when they hear startup founders acting paranoid about having their great ideas stolen. That's not to say that there are no situations where patent or trade secret protection is important. Rather, it shouldn't be considered obvious. Most startup ideas are actually completely worthless without learning and iteration to back them up.
davetroy: "Fanatical empathy for your customer's pain point is the key to designing great products." - @ericries at #goap Tokyo
I get a particular type of question quite often - I call it the "Steve Jobs defense." The idea is that great product visionaries don't need to listen to customers or test their ideas against reality. They just call forth amazing products from the ether. That's how the iPhone was made, right? I really don't buy this account of product visionaries. For one, it doesn't match my experience having worked with some true visionaries at all. It also doesn't seem to line up with the documentary record. Read Founders at Work or take a look at this video of Jobs himself, and see if you see anything at odds with that story.

My belief is that what makes product visionaries awesome is their ability to have radical empathy for their customers, and then to rigorously hold teams accountable for building solutions that match that standard.


Most surprising of all, to me at least, were the questions I got about how to reconcile the lean startup with "the Japanese way of doing business." Since I learned much of what I know about lean from studying Toyota, you can imagine how great a shock this was. After some discussion, it seemed like what I was hearing was that Japanese companies like Toyota have been so successful that many people have forgotten the entrepreneurial roots of those same companies. For anyone interested in this topic, I highly recommend reading Toyota Production System: Beyond Large-Scale Production.


I want to thank everybody who helped organize the Startonomics Japan event and the whole Geeks on a Plane trip, especially Dave McClure and Founders Fund, who arranged for me to speak in Tokyo. I had a great time, and learned a great deal.

Monday, June 8, 2009

Datablindness

Most of us are swimming in a sea of data about our products, companies, and teams. Too much of this data is non-actionable. That’s because many of our reports feed us vanity metrics: numbers that make us look good but don’t really help make decisions.

Yet even among those who have access to good actionable metrics, I’ve noticed a phenomenon that prevents taking maximum advantage of data. It’s a condition I call datablindness, and it's a painful affliction.

Imagine you are crossing the street. You constantly assess the situation, looking for hazards and timing your movements carefully to get across safely. Now imagine the herculean task that faces those who are blind. That they can function so well in our inhospitable modern life is impressive. But imagine if a blind person had to navigate the street as follows: whenever they wanted to know about their surroundings, they had to ask for a report. Sometime later, a guide would rattle off useful information, like the density of cars in the immediate vicinity, how that density compares to historical averages, the average mass and velocity of recent cars. Is that a good substitute for vision? I think we can all agree that it wouldn’t get most people across the street.

That’s what most startup decisions are like. Because of the extreme unknowns inherent in startup situations, we are all blind – to the realities of what customers want, market dynamics, and competitive threats. In order to use data effectively, we have to find ways to overcome this blindness. Periodic or on-demand reports are one possibility, but we can do much better. We can achieve a level of insight about our surroundings that is much more like vision. We can learn to see.

I got a powerful taste of datablindness recently, as I’ve started to work with various large companies as partners in setting up events, speeches, and other products to sell around the Lean Startup concept. Yet, whenever I find myself transitioning responsibility for one of these events to these third parties, I have this sudden sensation of loss. I suddenly lose my ability to judge if our marketing programs are being effective. I start to get very fuzzy on questions like “are we making progress towards our goals?” In other words, I’m experiencing datablindness.

What’s happening? Mostly, I’m no longer being hit over the head with data.

For example, a recent event I held started with a customer validation exercise (actually, this example is fictionalized for clarity). I had it all set up as a jury-rigged SurveyMonkey-PayPal minimum viable product. It was pretty ugly, the marketing and design sucked, and I was embarrassed by it. Yet it had one huge advantage. Whenever someone decided to buy a ticket, I got an email immediately letting me know. So throughout the process of taking deposits and then selling seats, I was getting constant impossible-to-ignore feedback about how I was doing. For example, I quickly learned that when I twittered about the event, more often than not I would make a sale. Yet, when I tried other forms of promotion, I’d have to accept their failures when the emails failed to come. True, this wasn’t nearly as good as a true split-testing environment, but it was powerful nonetheless.

Now that I put on events with official hosts and sponsors, my experience is different. Of course, I can still get access to the data about who’s signing up and when – and a lot more analytics, to boot – but I have to ask. Asking imposes overhead. When I get a response, when someone tells me “hey, we had 3 more signups” I’m never quite sure if those are the same three signups I heard about yesterday, and this person just has somewhat stale information, or if we had three new ones. And of course, if I twitter about the workshop on a Friday afternoon, I won’t know if that had any impact until Monday – unless I want to be a pain and bother someone on their weekend. There are lots of good reasons why I can’t have instantaneous access to this data, and each partner has their own. I wonder if their internal marketing folks are as datablind as I feel. It’s not a pleasant sensation.

Let me give another example (as usual, a lightly fictionalized composite) drawn from my consulting practice. This startup has been busy transforming their culture and process to incorporate split-testing. I remember a period where they were suffering from acute datablindness. The creators of split-tests were disconnected from the results. So the product development team was busy creating lots of split-tests for lots of hypotheses. Each day, the analytics team would share a report with them that had the details of how each test was doing. But for a variety of reasons, nobody was reading these reports. The number of active experiments was constantly growing, and individual tests were never getting completed. This had bad effects on the user experience, but much worse was the fact that the company was expending energy measuring but not really learning.

The solution turned out to be surprisingly simple. It required two things. First, we had to revise the way the reports were presented. Instead of a giant table that packed in a lot of information about the ever-growing list of experiments, we gave each experiment its own report, complete with basic visualizations of which variation was currently more successful. (This is one of the three A’s of metrics: Accessible). Second, we changed the process for creating a split-test to integrate it with the team’s story prioritization process. The Product Owner would not mark an experiment-related story as “done” until the team had collected enough data to make a decision about the outcome of the experiment relative to their expectations. Since only a certain number of stories can be in-progress at any one time, these experiment stories threaten to clog up the pipeline and prevent new work from starting. That’s causing the Product Owner and team to spend more time with each other reviewing the results of experiments, which is allowing them to learn and iterate much faster. Within a few weeks, they have already discovered that huge parts of their product, which cause a lot of extra work for the product development team due to their complexity, are not affecting customer behavior at all. They’ve learned to see this waste.
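To make the reporting change concrete, here is a minimal sketch of the kind of per-experiment summary I mean: one small report per test, with the currently leading variation called out, rather than one giant table. The data shape and numbers are invented:

```python
# Sketch of a per-experiment report: each test gets its own small summary
# with the currently leading variation, instead of one row in a giant table.

def summarize_experiment(name, variations):
    """variations: {variation_name: {"visitors": int, "conversions": int}}"""
    lines = [f"Experiment: {name}"]
    best, best_rate = None, -1.0
    for vname, counts in variations.items():
        rate = counts["conversions"] / max(counts["visitors"], 1)
        lines.append(f"  {vname}: {counts['conversions']}/{counts['visitors']} ({rate:.1%})")
        if rate > best_rate:
            best, best_rate = vname, rate
    lines.append(f"  Currently leading: {best}")
    return "\n".join(lines)

print(summarize_experiment("signup_button_color", {
    "control": {"visitors": 1200, "conversions": 60},
    "green_button": {"visitors": 1180, "conversions": 82},
}))
```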

Curing datablindness isn’t easy, because unlike real blindness, datablindness is a disability that many people find refreshingly comfortable. When we only have selective access to data, it’s much easier to be reassured that we’re making progress, or even to fall back on judging progress by how busy our team is. For a lean startup, this lack of discipline is anathema. So how do we reduce datablindness?
  1. Have data cause interrupts. We have to invent process mechanisms that force decision makers to regularly confront the results of their decisions. This has to happen with regularity, and without too much time elapsing, or else we might forget what decisions we made. When the incidence rate is small, emails or text messages are a great mechanism. That’s why we have operations alerts trigger a page, but it can also work for other customer events. I’ve often wanted to wire up a bell to sales data, so that when we make a sale, we literally hear the cash register ring.

    When the volume is too high for these kinds of tricks, we can still create effective interrupts. Imagine if the creator of a new split-test received a daily email with the results of that test, including the computer’s judgment of which branch was winning. Or imagine an automatic system that caused the creator of a new feature to get daily updates on its usage for the first three weeks of it being live. Certainly our marketing team should be getting real-time alerts about the impact of a new promotion or ad blitz. (There's a minimal sketch of this kind of alert after this list.)

  2. Require data to justify decisions. Whenever you see someone making a decision, ask them what data they looked at. Remember that data can come in qualitative as well as quantitative forms. Just the act of asking can have powerful effects. It serves as a regular reminder that it’s possible to make data-based decisions, even if it’s not easy. When you hear someone say that they think it would have been impossible to use data to influence their decision, that might be a signal to investigate via root cause analysis.

    My experience is that companies that ask questions about how decisions get made are much more meritocratic than those that don’t. Any human organization is vulnerable to politics and cults of personality. Curing datablindness is not a complete antidote, but it can provide an alternative route for well-intentioned people to advocate for what they think is right.

  3. Use pilot programs. Another variation on this theme is to consistently pilot new initiatives before rolling them out to full-scale release. This is true for split-testing features, but it’s also true for marketing programs or even operations changes. In general, avoid making big all-at-once changes. Insist on showing that the idea works in micro-scale, and then proceed to roll it out on a larger scale. There are a lot of advantages to piloting, but the one that bears on datablindness is this: it’s extremely difficult to argue that your pilot program is a success without referring back to the expectations that got it funded in the first place. At a minimum, the pilot team will have to consult a bunch of data right before their final “success” presentation. As people get more and more used to piloting, they will start to ask themselves “why wait until the last minute?” (See Management Challenges for the 21st Century by Peter Drucker for more on this thesis.)
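Below is a minimal sketch of the first idea on this list: pushing an email the moment a sale happens, instead of waiting for someone to pull a report. The SMTP settings, addresses, and the on_sale hook are placeholders, not a real integration:

```python
# Sketch of "have data cause interrupts": email the founder the instant a sale
# is recorded, so feedback is impossible to ignore. Addresses are hypothetical.

import smtplib
from email.message import EmailMessage

ALERT_RECIPIENT = "founder@example.com"   # hypothetical address

def send_sale_alert(item, amount, smtp_host="localhost"):
    msg = EmailMessage()
    msg["Subject"] = f"Cha-ching: sold {item} for ${amount:.2f}"
    msg["From"] = "alerts@example.com"
    msg["To"] = ALERT_RECIPIENT
    msg.set_content(f"A customer just bought {item} for ${amount:.2f}.")
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)

def on_sale(item, amount):
    # Call this from wherever the purchase is recorded, so the interruption
    # happens at the moment of the event rather than in next week's report.
    send_sale_alert(item, amount)
```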
Luckily, datablindness is not an incurable condition. Have stories of how you’ve seen it cured? Share them in the comments, so we can all learn how to eradicate it.


Friday, June 5, 2009

It’s a startup, not a spreadsheet

Some people, when they start to realize the power of using data to inform their decisions, become obsessed with optimization. I think this idea is particularly appealing to those of us from an engineering background. By reducing the decisions we have to make to a series of quantitative questions, we can avoid a lot of real-life messiness. Unfortunately, most decisions that confront startups lack a definitive right answer. Sometimes an early negative result from an experiment is a harbinger of doom for that product, and means it should be abandoned. Other times, it’s just an indicator that further iteration is needed. The only way to get good at these decisions is to practice making them, pay attention to what happens, compare it to what you thought would happen, and learn, learn, learn.

This has given rise to another school of thought, one that sees quantitative analysis, models, and anything involving spreadsheets as inherently anti-innovative and, therefore, anti-startup. But this is wrong, too. Spreadsheets, and predictive modeling in particular, have an important role to play in startups. It just looks very different than it does in other contexts.

Let’s first take a look at what happens when spreadsheets go horribly wrong. For a change of pace, I’ll take an example from a startup inside a large enterprise. Imagine a general manager that has read The Innovator’s Dilemma and related books, and is therefore trying hard to help her organization make a transition to a new product category via disruptive innovation. She knows the internal politics are tricky, but she’s navigated them well. She has a separate team, with its own culture and office, and a mandate straight from top management to innovate without regard to the company’s historic products, channels, or supply chain. So far, so good.

Still, this manager is going to spend the company’s money, and needs to be held accountable. So somebody from the CFO’s organization prepares an ROI-justification spreadsheet for this new team. Because this is a new skunkworks-type project, everyone involved is savvy enough to understand that the initial ROI is likely to be low, much lower than projects that are powered by sustaining innovation. And so the spreadsheet is built with conservative assumptions, including a final revenue target.

Everything that’s happened so far seems reasonable. And yet we’re now headed for trouble. No matter how low we make the revenue projections for this new product, it’s extremely unlikely that they are achievable. That’s because the model is based on assumptions about customers that are totally unproven. If we already knew who the customer was, how they would behave, how much they would pay, and how to reach them, this wouldn’t be a disruptive innovation. When the project winds up getting cancelled for failing to meet its ROI justification, it’s natural for the entrepreneur to feel like it was the CFO – and their innovation-sucking spreadsheet – that is the real cause.

And yet, it’s not really fair to ask that the company’s money be spent without anyone bothering to build a financial model that can be used to judge success. Certainly venture-backed startups don’t have this luxury – every business plan has a model in it. Just because entrepreneurs tend to forget about these models doesn’t mean their investors do. Companies that reliably fail to make their forecasted numbers are exceptionally prone to “management retooling.”

I think the problem with this approach is not the presence of the spreadsheet, but how it’s used. In a startup context, numbers like gross revenue are actually vanity metrics, not actionable metrics. It’s entirely possible for the startup to be a massive success without having large aggregate numbers, because the startup has succeeded in finding a passionate, but small, early adopter base whose per-customer behavior (engagement, retention, and revenue) is tremendous. Similarly, it’s easy to generate large aggregate numbers by simply falling back to non-disruptive or non-sustainable tactics (see Validated learning about customers for one example). And in a corporate context, a result in which the startup proves that a particular innovation is non-viable is actually very valuable learning.

The challenge is to find a way to use spreadsheets that can reward all of these positive outcomes, while still holding the team accountable if they fail to deliver. In other words, we want to use the spreadsheet to quantify our progress using the most important unit: validated learning about customers.

The solution is to change our focus from outputs to inputs. One way to conceive of our goal in an early-stage venture is to incrementally “fill in the blanks” for the business model that we think will one day power our startup. For example, say that your business model calls for a 4% conversion rate – as ours did initially at IMVU.

After a few months of early beta at IMVU, we discovered that our actual conversion rate was about 0.4%. That’s not too surprising, because our product was pretty bad in those days. But after a few more iterations, it became clear that improvements in the product were going to drive the conversion rate up – but probably not by a factor of 10. As the product got better, we could see the rate getting closer and closer to the mythical “one percent rule.” Even that early, it became clear that 4% was not an achievable goal. Luckily, we also discovered that certain other metrics, like LTV (customer lifetime value) and CPA (cost per acquisition), were much better than we initially projected. Running the revised business model with these new numbers was great news – we still had a shot at a viable business.

That’s hardly the end of the story, since there is still a long way to go between validating a business model in micro-scale and actually building a mainstream business. But proving your assumptions with early adopters is an essential first step. It provides a baseline against which you can start to assess your long-term assumptions. If it costs $0.10 to acquire an early adopter, how much should it cost to acquire a mainstream customer? $0.50? $1.00? Maybe. But $10.00? Unlikely.
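To illustrate the kind of back-of-the-envelope arithmetic involved in "filling in the blanks," here is a crude sketch in code. All of the numbers are hypothetical stand-ins, not our actual economics; the point is swapping assumed values for measured ones and re-running the model:

```python
# Crude unit-economics sketch: profit contribution = new customers * (LTV - CPA).
# All figures below are hypothetical.

def monthly_contribution(visitors, conversion_rate, ltv, cpa):
    """Contribution from one month of traffic under the given assumptions."""
    new_customers = visitors * conversion_rate
    return new_customers * (ltv - cpa)

# Original plan: 4% conversion with modest assumptions for LTV and CPA.
print(monthly_contribution(visitors=100_000, conversion_rate=0.04, ltv=10.0, cpa=2.00))

# Measured reality: conversion is ~0.4%, but LTV is higher and CPA far lower
# than assumed, so the revised model can still point toward a viable business.
print(monthly_contribution(visitors=100_000, conversion_rate=0.004, ltv=25.0, cpa=0.10))
```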

Think back to the conflict between our Innovator’s Dilemma general manager and her nemesis, the CFO. The resolution I am suggesting is that they jointly conceive of their project as filling-in the missing parts of the spreadsheet, replacing assumptions and guesses with facts and informed hypotheses. As the model becomes clear, then – and only then – does it make sense to start trying to set milestones in terms of overall revenue. And as long as the startup is in learning and discovery mode – which means at least until the manager is ready to study Crossing the Chasm – these milestones will always have to be hybrids, with some validation components and some gross revenue components.

This model of joint accountability is at the heart of the lean startup, and is just as applicable to venture-backed, bootstrapped, and enterprise startups. As with most startup practices, it requires us to do a constant balancing act between execution and learning – both of which require tremendous discipline. The payoff is worth the effort.
