Saturday, November 29, 2008

The ABCDEF's of conducting a technical interview

I am incredibly proud of the people I have hired over the course of my career. Finding great engineers is hard; figuring out who's good is even harder. The most important step in evaluating a candidate is conducting a good technical interview. If done right, a programming interview serves two purposes simultaneously. On the one hand, it gives you insight into what kind of employee the candidate might be. But it also is your first exercise in impressing them with the values your company holds. This second objective plays no small part in allowing you to hire the best.

Balancing competing objectives is a recurring theme on this blog - it's the central challenge of all management decisions. Hiring decisions are among the most difficult, and the most critical. The technical interview is at the heart of these challenges when building a product development team, and so I thought it deserved an entire post on its own.

In this post I'll follow what seems to be a pattern for me: lay out a theory of what characterizes a good interview, and then talk practically about how to conduct one.

When I train someone to participate in a technical interview, the primary topic is what we're looking for in a good candidate. I have spent so much time trying to explain these attributes that I even have a gimmicky mnemonic for remembering them. The six key attributes spell ABCDEF:
  • Agility. By far the most important thing you want to hire for in a startup is the ability to handle the unexpected. Most normal people have a fairly narrow comfort zone, where they excel in their trained specialty. Those people also tend to go crazy in a startup. Now, we're not looking for people who thrive on chaos or, worse, cause chaos. We want someone who is a strong lateral thinker, who can apply what they've learned to new situations, and who can un-learn skills that were useful in a different context but are lethal in a new one. When talking about their past experience, candidates with agility will know why they did what they did in a given situation. Beware anyone who talks too much about "best practices" - if they believe that there are practices that are ideally suited to all situations, they may lack adaptability.

    To probe for agility, you have to ask the candidate questions involving something that they know little about.

  • Brains. There's no getting around the fact that at least part of what you should screen for is raw intelligence. Smart people tend to want to work with smart people, so it's become almost a cliche that you want to keep the bar as high as you can for as long as you can. Microsoft famously uses brainteasers and puzzles as a sort of quasi-IQ test, but I find this technique difficult to train people in and apply consistently. I much prefer a hands-on problem-solving exercise in a discipline related to the job they are applying for. For software engineers, I think this absolutely has to be a programming problem solved on a whiteboard. You learn so much about how someone thinks by looking at code you know they've written that it's worth all the inconvenience of having to write, analyze and debug it by hand.

    I prefer to test this with a question about the fundamentals. The best candidates have managed to teach me something about a topic I thought I already knew a lot about.

  • Communication. The "lone wolf" superstar is usually a disaster in a team context, and startups are all about teams. We have to find candidates that can engage in dialog, learning from the people around them and helping find solutions to tricky problems.

    Everything you do in an interview will tell you something about how the candidate communicates. To probe this deeply, ask them a question in their area of expertise. See if they can explain complex concepts to a novice. If they can't, how is the company going to benefit from their brilliance?

  • Drive. I have been burned most often by hiring candidates who had incredible talents but lacked the passion to actually bring them to work every day. You need to ask: 1) does the person care about what they work on? and 2) can they get excited about what your company does? For a marketing job, for example, it's reasonable to expect that a candidate will have done their homework and used your product (maybe even talked to your customers) before coming in. I have found this quite rare in engineers. At IMVU, most of them thought our product was ridiculous at best, hopeless at worst. That's fine for the start of their interview process. But if we haven't managed to get them fired up about our company mission by the end of the day, it's unlikely they are going to make a meaningful contribution.

    To test for drive, ask about something extreme, like a past failure or a peak experience. They should be able to tell a good story about what went wrong and why.

    Alternately, ask about something controversial. I remember once being asked in a Microsoft group interview (and dinner) about the ActiveX security model. At the time, I was a die-hard Java zealot. I remember answering "What security model?" and going into a long diatribe about how insecure the ActiveX architecture was compared to Java's pristine sandbox. At first, I thought I was doing well. Later, the other candidates at the table were aghast - didn't I know who I was talking to?! Turns out, I had been lecturing the creator of the ActiveX security model. He was perfectly polite, not defensive at all, which was why I had no idea what was going on. Then I thought I was toast. Later, I got the job. Turns out, he didn't care that I disagreed with him, only that I had an opinion and wasn't afraid to defend it. Much later, I realized another thing. He wasn't defensive because, as it turns out, he was right and I was completely wrong (Java's sandbox model looked good on paper but its restrictions greatly retarded its adoption by actual developers).

  • Empathy. Just as you need to know a candidate's IQ, you also have to know their EQ. Many of us engineers are strong introverts, without fantastic people skills. That's OK, we're not trying to hire a therapist. Still, a startup product development team is a service organization. We're there to serve customers directly, as well as all of the other functions of the company. This is impossible if our technologists consider the other types of people in the company idiots, and treat them that way. I have sometimes seen technical teams that have their own "cave" that others are afraid to enter. That makes cross-functional teamwork nearly impossible.

    To test for empathy, I always make sure that engineers have one or two interviews with people from wildly different backgrounds, like a member of our production art department. If they can treat them with respect, it's that much less likely we'll wind up with a silo'd organization.

  • Fit. The last and most elusive quality is how well the candidate fits in with the team you're hiring them into. I hear a lot of talk about fit, but also a lot of misunderstandings. Fit can wind up being an excuse for homogeneity, which is lethal. When everyone in the room thinks the same way and has the same background, teams tend to drink the proverbial Kool-Aid. The best teams have just the right balance of common background and diverse opinions, which I have found true in my experience and repeatedly validated in social science research (you can read a decent summary in The Wisdom of Crowds).

    This responsibility falls squarely to the hiring manager. You need to have a point of view about how to put together a coherent team, and how a potential candidate fits into that plan. Does the candidate have enough of a common language with the existing team (and with you) that you'll be able to learn from each other? Do they have a background that provides some novel approaches? Does their personality bring something new?
It's nearly impossible to get a good read on all six attributes in a single interview, so it's important to design an interview process that will give you a good sampling of data to look at. Exactly how to structure that process is a topic for another day, however, because I want to focus on the interview itself.

My technique is to structure a technical interview around an in-depth programming and problem-solving exercise. If it doesn't require a whiteboard, it doesn't count. You can use a new question each time, but I prefer to stick with a small number of questions that you can really get to know well. Over time, it becomes easier to calibrate a good answer if you've seen many people attempt it.

For the past couple of years I've used a question that I once was asked in an interview, in which you have the candidate produce an algorithm for drawing a circle on a pixel grid. As they optimize their solution, they eventually wind up deriving Bresenham's circle algorithm. I don't mind revealing that this is the question I ask, because knowing that ahead of time, or knowing the algorithm itself, confers no advantage to potential candidates.

That's because I'm not interviewing for the right answer to the questions I ask. Instead, I want to see how the candidate thinks on their feet, and whether they can engage in collaborative problem solving with me. So I always frame interview questions as if we were solving a real-life problem, even if the rules are a little far-fetched. For circle-drawing, I'll sometimes ask candidates to imagine that we are building a portable circle-drawing device with a black and white screen and low-power CPU. Then I'll act as their "product manager" who can answer questions about what customers think, as well as their combined compiler, interactive debugger, and QA tester.
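For the curious, here is a minimal sketch in Python of the kind of integer-only answer candidates tend to converge on: the midpoint (Bresenham-style) circle algorithm. The function name and the set_pixel callback are my own illustrative placeholders, not anything from an actual interview.

    def draw_circle(cx, cy, r, set_pixel):
        """Plot a circle of radius r centered at (cx, cy) on a pixel grid.

        Integer-only midpoint (Bresenham-style) algorithm: walk one octant
        and mirror each plotted point into the other seven.
        """
        x, y = 0, r
        d = 1 - r  # decision variable: is the midpoint inside or outside the circle?
        while x <= y:
            # Mirror (x, y) into all eight octants.
            for px, py in ((x, y), (y, x), (-x, y), (-y, x),
                           (x, -y), (y, -x), (-x, -y), (-y, -x)):
                set_pixel(cx + px, cy + py)
            if d < 0:
                d += 2 * x + 3          # midpoint was inside: step east
            else:
                d += 2 * (x - y) + 5    # midpoint was outside: step south-east
                y -= 1
            x += 1

    # Example: "draw" a radius-3 circle by collecting the pixels it would set.
    pixels = set()
    draw_circle(0, 0, 3, lambda x, y: pixels.add((x, y)))
    print(sorted(pixels))

Getting from a naive sqrt-per-pixel or sin/cos loop to something like this, under gentle prodding about the low-power CPU, is the whole point of the exercise.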

You learn a lot from how interested a candidate is in why they are being asked to solve a particular problem. How do they know when they're done? What kind of solution is good enough? Do they get regular feedback as they go, or do they prefer to think, think, think and then dazzle with the big reveal?

My experience is that candidates who "know" the right answer do substantially worse than candidates who know nothing of the field. That's because they spend so much time trying to remember the final solution, instead of working on the problem together. Those candidates have a tendency to tell others that they know the answer when they only suspect that they do. In a real-world situation, they tend to wind up without credibility or forced to resort to bullying.

No matter what question you're asking, make sure it has sufficient depth that you can ask a lot of follow-ups, but that it has a first iteration that's very simple. An amazing number of candidates cannot follow the instruction to Do the Simplest Thing That Could Possibly Work. Some questions have a natural escalation path (like working through the standard operations on a linked-list) and others require some more creativity.
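To make "simplest thing first" concrete, here is a sketch (mine, not from the post) of the kind of first iteration I'd want to see for the linked-list question: a bare node and a prepend operation, with the follow-ups (append, delete, reverse, cycle detection) left for the interviewer to escalate into.

    class Node:
        """Minimal singly linked list node -- the simplest thing that could possibly work."""
        def __init__(self, value, next=None):
            self.value = value
            self.next = next

    def push_front(head, value):
        """First iteration: prepend in O(1) and return the new head."""
        return Node(value, head)

    def to_list(head):
        """Walk the list so we can inspect it."""
        out = []
        while head is not None:
            out.append(head.value)
            head = head.next
        return out

    head = None
    for v in (3, 2, 1):
        head = push_front(head, v)
    print(to_list(head))  # [1, 2, 3]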

For example, I would often ask a candidate to explain to me how the C code they are writing on the whiteboard would be rendered into assembly by the compiler. There is almost no earthly reason that someone should know about this already, so candidates answer in a wide variety of ways: some have no idea, others make something up; some have the insight to ask questions like "what kind of processor does this run on?" or "what compiler are we using?" And some just write the assembly down like it's a perfectly normal question. Any of these answers can work, and depending on what they choose, it usually makes sense to keep probing along these lines: which operations are the most expensive? what happens if we have a pipelined architecture?

Eventually, either the candidate just doesn't know, or they wind up teaching you something new. Either way, you'll learn something important. There are varying degrees of not-knowing, too.
  1. Doesn't know, but can figure it out. When you start to probe the edges of someone's real skills, they will start to say "I don't know" and then proceed to reason out the answer, if you give them time. This is usually what you get when you ask about big-O notation, for instance. They learned about it some time ago, don't remember all the specifics, but have a decent intuition that n-squared is worse than log-n (a quick sketch of that intuition follows this list).

  2. Doesn't know, but can deduce it given the key principles. Most people, for example, don't know exactly how your typical C++ compiler lays out objects in memory. But that's usually because most people don't know anything about how compilers work, or how objects work in C++. If you fill them in on the basic rules, can they reason with them? Can those insights change the code you're trying to get them to write?

  3. Doesn't understand the question. Most questions require a surprising amount of context to answer. It doesn't do you any good to beat someone up by forcing them through terrain that's too far afield from their actual area of expertise. For example, I would often work the circle-drawing question with candidates who had only ever programmed in a web-based scripting language like PHP. Some of them could roll with the punches and still figure out the algorithmic aspects of the answer. But it was normally useless to probe into the inner workings of the CPU, because it wasn't something they knew about, and it can't really be taught in less than a few hours. You might decide that this knowledge is critical for the job you're hiring for, and that's fine. But it's disrespectful and inefficient to waste the candidate's time. Move on.
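To make box one concrete, here is a tiny back-of-the-envelope sketch (my illustration, not the post's) of the intuition a candidate can rebuild about why log-n beats n-squared, even if they've forgotten the formal definitions:

    import math

    # Rough operation counts for a few input sizes -- enough to recover the
    # intuition that n-squared blows up while log-n barely moves.
    for n in (10, 1_000, 1_000_000):
        print(f"n={n:>9,}  n^2={n * n:>15,}  "
              f"n*log2(n)={int(n * math.log2(n)):>12,}  log2(n)={math.log2(n):5.1f}")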
My purpose in elaborating these degrees of not-knowingness is to emphasize this essential point: you want to keep as much of the interview as possible in boxes one and two. In other words, you want to keep asking questions on the boundaries of what they know. That's the only way to probe for agility and brains, and the best way to probe for communication. In the real world, the vast majority of time (especially in startups) is spent encountering novel situations without a clear answer. What matters is how good your thinking is at times like those, and how well you can communicate it. (It's kind of like playing Fischer Random Chess, where memorizing openings is useless.)

Let me return to my topic at the top of the post: using the interview to emphasize values as well as evaluate. The best interviews involve both the interviewer and the candidate learning something they didn't know before. Making clear that your startup doesn't have all the answers, but that your whole team pushes their abilities to their limits to find them, is a pretty compelling pitch. Best of all, it's something you just can't fake. If you go into an interview with the intention of lording your knowledge over a candidate, showing them how smart you are, they can tell. And if you ask questions but don't really listen to the answers, it's all too obvious. Instead, dive deep into a problem and, together, wrestle the solution to the ground.



Saturday, November 22, 2008

Net Promoter Score: an operational tool to measure customer satisfaction

I've mentioned Net Promoter Score (NPS) in a few previous posts, but haven't had a chance to describe it in detail yet. It is an essential lean startup tool that combines seemingly irreconcilable attributes: it provides operational, actionable, real-time feedback that is truly representative of your customers' experience as a whole. It does it all by asking your customers just one magic question.

In this post I'll talk about why NPS is needed, how it works, and show you how to get started with it. I'll also reveal the Net Promoter Score for this blog, based on the data you've given me so far.

How can you measure customer satisfaction?
Other methods for collecting data about customers have obvious drawbacks. Doing in-depth customer research, with long questionnaires and detailed demographic and psychographic breakdowns, is very helpful for long-range planning, interaction design and, most importantly, creating customer archetypes. But it's not immediately actionable, and it's far too slow to be a regular part of your decision loop.

At the other extreme, there's the classic A/B split-test, which provides nearly instantaneous feedback on customer adoption of any given feature. If your process for creating split-tests is extremely light (for example, it requires only one line of code), you can build a culture of lightweight experimentation that allows you to audition many different ideas, and see what works. But split-tests also have their drawbacks. They can't give you a holistic view, because they only tell you how your customers reacted to that specific test.

You could conduct an in-person usability test, which is very useful for getting a view of how actual people perceive the totality of your product. But that, too, is limited, because you are relying on a very small sample, from which you can only extrapolate broad trends. A major usability problem is probably experienced similarly by all people, but the absence of such a defect doesn't tell you much about how well you are doing.

Net Promoter Score
NPS is a methodology that comes out of the service industry. It involves using a simple tracking survey to constantly get feedback from active customers. It is described in detail by Fred Reichheld in his book The Ultimate Question: Driving Good Profits and True Growth. The tracking survey asks one simple question: How likely are you to recommend Product X to a friend or colleague? The answer is then put through a formula to give you a single overall score that tells you how well you are doing at satisfying your customers. Both the question and formula are the results of a lot of research that claims that this methodology can predict the success of companies over the long-term.

There's a lot of controversy surrounding NPS in the customer research community, and I don't want to recapitulate it here. I think it's important to acknowledge, though, that lots of smart people don't agree with the specific question that NPS asks, or the specific formula used to calculate the score. For most startups, I think these objections can safely be ignored, because there is absolutely no controversy about the core idea that a regular and simple tracking survey can give you customer insight.

Don't let the perfect be the enemy of the good. If you don't like the NPS question or scoring system, feel free to use your own. I think any reasonably neutral approach will give you valuable data. Still, if you're open to it, I recommend you give NPS a try. It's certainly worked for me.

How to get started with NPS
For those that want to follow the NPS methodology, I will walk you through how to integrate it into your company, including how to design the survey, how to collect the answers, and how to calculate your score. Because the book is chock-full of examples of how to do this in older industries, I will focus on my experience integrating NPS into an online service, although it should be noted that it works equally well if your primary contact with customers is through a different channel, such as the telephone.

Designing the survey
The NPS question itself (again, "How likely are you to recommend X to a friend or colleague?") is usually asked on a 0-10 point scale. It's important to let people know that 10 represents "most likely" and 0 represents "least likely," but it's also important not to use words like promoter or detractor anywhere in the survey itself.

The hardest part about creating an NPS survey is to resist the urge to load it up with lots of questions. The more questions you ask, the lower your response rate, and the more you bias your results towards more-engaged customers. The whole goal of NPS is to get your promoters and your detractors alike to answer the question, and this requires that you not ask for too much of their time. Limit yourself to two questions: the official NPS question, and exactly one follow-up. Options for the follow-up could be a different question on a 10-point scale, or just an open ended question asking why they chose the rating that they did. Another possibility is to ask "If you are open to answering some follow-up questions, would you leave your phone number?" or other contact info. That would let you talk to some actual detractors, and get a qualitative sense of what they are thinking, for example.

For an online service, just host the survey on a webpage with as little branding or decoration as possible. Because you want to be able to produce real-time graphs and results, this is one circumstance where I recommend you build the survey yourself, versus using an off-the-shelf hosted survey tool. Just dump the results in a database as you get them, and let your reports calculate scores in real-time.

Collecting the answers
Once you have the survey up and running, you need to design a program to have customers take it on a regular basis. Here's how I've set it up in the past. Pick a target number of customers to take the survey every day. Even if you have a very large community, I don't think this number needs to be higher than 100. Even just 10 might be enough. Build a batch process (using GearMan, cron, or whatever you use for offline processing) whose job is to send out invites to the survey.

Use whatever communication channel you normally rely on for notifying your customers. Email is great; of course, at IMVU, we had our own internal notification system. Either way, have the process gradually ramp up the number of outstanding invitations throughout the day, stopping when it's achieved 100 responses. This way, no matter what the response rate, you'll get a consistent amount of data. I also recommend that you give each invitation a unique code, so that you don't get random people taking the survey and biasing the results. I'd also recommend you let each invite expire, for the same reason.

Choose the people to invite to the survey according to a consistent formula every day. I recommend a simple lottery among people who have used your product that same day. You want to catch people when their impression of your product is fresh - even a few days can be enough to invalidate their reactions. Don't worry about surveying churned customers; you need to use a different methodology to reach them. I also normally exclude anyone from being invited to take the survey more than once in any given time period (you can use a month, six months, anything you think is appropriate).
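Here is a minimal sketch of what that daily invite job might look like. The db and notifier objects and all of their methods are hypothetical placeholders (as is the survey URL); the constants simply restate the targets described above. A scheduler such as cron would run this every few minutes.

    import secrets
    from datetime import datetime, timedelta, timezone

    DAILY_TARGET = 100                       # survey responses we want each day
    BATCH_SIZE = 10                          # invites per run, so invitations ramp up gradually
    EXCLUSION_WINDOW = timedelta(days=180)   # don't survey the same person twice in this window
    INVITE_TTL = timedelta(days=2)           # unused invites expire

    def send_invite_batch(db, notifier):
        """Hypothetical batch job: top up the pool of outstanding invites a little
        on each run, stopping once DAILY_TARGET responses have come in."""
        today = datetime.now(timezone.utc).date()
        if db.count_responses(day=today) >= DAILY_TARGET:
            return  # we already have enough data for today
        # Simple lottery among people who used the product today and haven't
        # been surveyed recently.
        candidates = db.active_users(day=today, exclude_surveyed_within=EXCLUSION_WINDOW)
        sample = secrets.SystemRandom().sample(candidates, min(BATCH_SIZE, len(candidates)))
        for user in sample:
            code = secrets.token_urlsafe(16)  # unique, unguessable code keeps random visitors out
            db.record_invite(user_id=user.id, code=code,
                             expires_at=datetime.now(timezone.utc) + INVITE_TTL)
            notifier.send(user, survey_url=f"https://example.com/nps?code={code}")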

Calculate your score
Your NPS score is derived in three steps (a short code sketch follows the list):
  1. Divide all responses into three buckets: promoters, detractors, and others. Promoters are anyone who chose 9 or 10 on the "likely to recommend scale" and detractors are those who chose any number from 0-6.
  2. Figure out the percentage of respondents that fall into the promoter and detractor buckets.
  3. Subtract your detractor percentage from your promoter percentage. The result is your score. Thus, NPS = P% - D%.
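In code, the whole calculation fits in a few lines. A minimal sketch using the bucket definitions above, checked against the numbers this post reports for the blog itself further down (47 promoters and 22 detractors out of 100 responses):

    def net_promoter_score(ratings):
        """Compute NPS from a list of 0-10 'likely to recommend' answers.

        Promoters answered 9 or 10, detractors answered 0-6, and everyone
        else (7 or 8) is passive. NPS = %promoters - %detractors.
        """
        if not ratings:
            raise ValueError("need at least one response")
        promoters = sum(1 for r in ratings if r >= 9)
        detractors = sum(1 for r in ratings if r <= 6)
        return 100.0 * (promoters - detractors) / len(ratings)

    # 47 promoters, 22 detractors, 31 passives out of 100 responses -> 25.0
    sample = [10] * 47 + [3] * 22 + [8] * 31
    print(net_promoter_score(sample))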
You can then compare your score to people in other industries. Any positive score is good news, and a score higher than +50 is considered exceptional. Here are a few example scores taken from the official Net Promoter website:

Apple: 79
Adobe: 46
Google: 73
Barnes & Noble online: 74
American Express: 47
Verizon: 10
DIRECTV: 20

Of course, the most important thing to do with your NPS score is to track it on a regular basis. I used to look at two NPS-related graphs on a regular basis: the NPS score itself, and the response rate to the survey request. These numbers were remarkably stable over time, which, naturally, we didn't want to believe. In fact, there were some definite skeptics about whether they measured anything of value at all, since it is always dismaying to get data that says the changes you're making to your product are not affecting customer satisfaction one way or the other.

However, at IMVU one summer, we had a major catastrophe. We made some changes to our service that wound up alienating a large number of customers. Even worse, the way we chose to respond to this event was terrible, too. We clumsily gave our community the idea that we didn't take them seriously, and weren't interested in listening to their complaints. In other words, we committed the one cardinal sin of community management. Yikes.

It took us months to realize what we had done, and to eventually apologize and win back the trust of those customers we'd alienated. The whole episode cost us hundreds of thousands of dollars in lost revenue. In fact, it was the revenue trends that eventually alerted us to the magnitude of the problem. Unfortunately, revenue is a trailing indicator. Our response time to the crisis was much too slow, and as part of the post-mortem analysis of why, I took a look at the various metrics that all took a precipitous turn for the worse during that summer. Of everything we measured, it was Net Promoter Score that plunged first. It dropped down to an all-time low, and stayed there for the entire duration of the crisis, while other metrics gradually came down over time.

After that, we stopped being skeptical and started to pay very serious attention to changes in our NPS. In fact, I didn't consider the crisis resolved until our NPS peaked above our previous highs.

Calculating the NPS of Lessons Learned
I promised that I would reveal the NPS of this blog, which I recently took a snapshot of by offering a survey in a previous post. Here's how the responses break down, based on the first 100 people who answered the question:
  • Number of promoters: 47
  • Number of detractors: 22
  • NPS: 25
Now, I don't have any other blogs to compare this score to. Plus, the methodology was deeply flawed: I just put a link in a single post, I didn't target specific people to take the survey, and the invitation was impersonal. Still, all things considered, I'm pretty happy with the result. Of course, now that I've described the methodology in detail, I've probably poisoned the well for taking future unbiased samples. But that's a small price to pay for having the opportunity to share the magic of NPS.

I hope you'll find it useful. If you do, come on back and post a comment letting us all know how it turned out.



Wednesday, November 19, 2008

Lo, my 1032 subscribers, who are you?

When I first wrote about the advantages of having a pathetically small number of customers, I only had 5 subscribers. When I checked my little badge on the sidebar today, I was shocked to see it read 1032. As it turns out, it was much harder to get those first five subscribers than the next thousand, thanks to great bloggers like Andrew Chen, Dave McClure, and the fine folks over at VentureHacks. Thank you all for stopping by.

Of course, 1000 customers is pretty pathetically small too. When startups achieve that milestone, it's a mixed blessing. On the one hand, having a little traction is a good thing. But on the other hand, figuring out what's going on starts to get more difficult. You can't quite talk to everyone on the phone. You have to start filtering and sorting, deciding which feedback to listen to and which loud people to ignore. It's also time to start thinking about customer segments. Do you have a particular set of early adopters that share some common traits? If so, they might be pointing the way towards a much bigger set of people who share those traits, but are not early adopters.

Let's take an example of a startup I was advising a few years ago. Of their early customers, about 1/3 of them turned out to be high school or middle school teachers. This wasn't an education product - it was a pretty surprising group to find using it. What all these teachers had in common were two things: they were technology early adopters that were willing to take a chance on a new software product, and they all had similar problems organizing their classes and students. At that early stage, it was the company's first glimpse of what a crossing the chasm strategy might look like: use these early adopters to build a whole product for the education market. Then sell it to mainstream educators, schools, and school districts, who shared the same problem of organizing classes, but were not themselves early adopters.

So how do you get started with customer segmentation? If you've already been talking to customers one-on-one, don't stop now (and if you haven't, this is still a good time to start). Those conversations are the best way to look for patterns in the noise. As you start to see them, collect your hypotheses and start using broader-reach tools to find out how they break down. I would recommend periodic surveys, along with some kind of forum or other community tool where the most passionate customers can congregate. You can also use Twitter, your blog (with comments), or even a more structured tool like uservoice.

I'd start with a simple survey (I use SurveyMonkey), combining the NPS question with a handful of more in-depth optional questions. In fact, I feel like I should eat my own dogfood, take my own medicine, or whatnot. Here's my survey for Lessons Learned:
As a loyal subscriber, I'd like to invite you to take the first Lessons Learned customer survey: Click Here to take survey
I put this together using the free version of SurveyMonkey, to show just how easy it is. If you're serious about this, you probably want to use their premium version, which will let you do things like add logic to let people easily skip the second page if they choose to, and send them to a "thank you page" afterward. Be sure to make the thank you page have a call to action (like a link to subscribe, for example) - after all, you're dealing with a customer passionate enough to talk to you.

So, to those of you who take the time to fill out the survey: thanks for the feedback! And to everyone who's taken the time to read, comment, or subscribe: thank you.

Tuesday, November 18, 2008

ScienceDaily: Corporate culture is most important factor in driving innovation

Some recent research into what makes innovation happen inside companies:
Corporate Culture Is Most Important Factor In Driving Innovation: "Looking at data from 759 firms across 17 countries the researchers found that location is not the determining factor in the degree to which any given firm is innovative; but rather, the innovative firms themselves share key internal cultural traits. Innovation appears to be a function of the degree to which a company fosters a supportive internal structure headed by product champions and bolstered by incentives and the extent to which that organization is able to change quickly"
The concept of a strong product champion is a recurring theme in successful product development organizations, large and small. It's even more critical in lean startups when they need to manage growth.

I believe it's important that product teams be cross-functional, no matter what other job function the product champion does. At IMVU, we called this person a Producer (revealing our games background); in Scrum, they are called the Product Owner. At Toyota, they are called Chief Engineer:
Toyota realizes that the Chief Engineer job is probably the most important one in the company because the Chief Engineer listens to the customer and then determines what the functions need to do to address the customer’s desires. Thus the power of the Chief Engineer is very large even though he (and they are all men so far) has no direct reports other than a secretary and a few assistants who are themselves being trained to be chief engineers.

The job of the Chief Engineer is to determine the needs of the product and then to negotiate with the heads of body engineering, drive train engineering, manufacturing engineering, production, purchasing, etc., about what their function needs to do to fully support the product. Once an agreement is reached, the Chief Engineer continually watches to make sure that the functions are following through. In the event there is an irreconcilable difference between Chief Engineer and function head, the issue can be elevated to a very high level, but apparently this doesn’t happen.

Great companies build highly adaptable teams, empower leaders to run them, and hold them to high standards of accountability. I will share some further thoughts on how to build strong cross-functional teams in part three of The four kinds of work, and how to get them done.


Monday, November 17, 2008

The four kinds of work, and how to get them done: part two

In part one, I talked about four different kinds of work that every company has to do: innovation/R&D, strategy, growth, and maintenance/scalability. When startups grow, they tend to have problems handling the inevitable conflicts that emerge from having to do multiple kinds of work all at once. In order to grow effectively, it's important to have a technique that mitigates these problems.

I ended part one with two questions: Why do these different kinds of work cause problems? And why do those problems seem to get worse as the company grows? Let's get to the answers.
  1. Apples-and-oranges trade-offs. It's extremely difficult to make intelligent trade-offs between things that are not at all alike. For example, should we invest a week of engineering time into making our website more failure-proof (which we're pretty sure will pay off right away) or into experimenting with a new technology (that might pay off in months, years, or never)? If I have some budget for outside help, should I hire a vendor to help us drive down our payment fraud rates by 1% (ROI easy to predict), or hire a market research firm to give us insights into potential customers (ROI hard to predict)? It's much easier to make trade-offs within a single kind of work than across types of work.

  2. People have a natural affinity for some kinds of work. Even worse, in my opinion, is that I know there are at least a few readers out there who read the previous paragraph and thought, "those aren't hard choices to make; it's obvious what you should choose..." That's because most people have a natural affinity for certain kinds of work. Have you met that prickly operations guy who seems to love servers more than people (but would never let them fail on his watch)? Or the zany innovator who just can't comprehend schedules but always has a new trick up her sleeve? Those are the natural leaders of the kind of work they were born to do. But they are often counter-productive when placed in a management role for other kinds of work. Sometimes, just having other kinds of work being done nearby is enough to drive them crazy. This can lead to a lot of needless politics and suffering if it is not proactively managed.

  3. People get trapped doing the wrong kind of work. Successful products and features have a natural lifecycle. They are born in R&D, become part of the company's DNA in Strategy, delight zillions of customers in Growth, and eventually become just another box on a Maintenance checklist somewhere. The problem is that people who were essential to the product in a previous phase can get carried along and find themselves stuck downstream. For example, the original innovator from R&D can find himself the leader of a team tasked with executing incremental growth, because he understands the feature better than anyone. Or a critical engineer, who wrote the breakthrough code that first helped a feature achieve scale, is considered "too essential" to be relieved of responsibility for maintaining it. This has two bad consequences: it puts people in jobs that they are not ideally suited for, and it reduces degrees of freedom for management to make optimal resource allocation decisions. If your top performers are all stuck in Growth and Maintenance, who do you have left in R&D and Strategy?
To mitigate these problems, we need a process that recognizes the different kinds of work a company does, and creates teams to get them done. It has to balance competing goals of establishing clear ownership, while avoiding talented employees getting stuck.

In part three, I'll lay out the criteria for such a process, and describe the techniques I've used to make it work.

The four kinds of work, and how to get them done: part one

I've written before about some of the advantages startups have when they are very small, like the benefits of having a pathetically small number of customers. Another advantage of the early stages is that most startups don't have to juggle too many competing priorities. If you don't have customers, a product, investors, or a board of directors, you can pretty much stay focused on just one thing at a time.

As companies grow, it becomes increasingly important to build an organization that can execute in multiple areas simultaneously. I'd like to talk about a technique I've used to help manage this growth without slowing down.

This technique rests on three things: identifying the kinds of work that need to get done, creating the right type of teams for each kind, and steering the company by allocating resources among them. For this analysis, I am heavily indebted to Geoff Moore, who laid out the theoretical underpinnings of this approach (and describes how to use it for companies of all sizes and scales) in Dealing with Darwin: How Great Companies Innovate at Every Phase of Their Evolution.

Four kinds of work
  1. Innovation / R&D - this is what all startups do in their earliest stages. Seeing what's possible. Playing with new technologies. Building and testing prototypes. Talking to potential customers and competitors' customers. In this kind of work, it's hard to predict when things will be done, what impact they will have, and whether you're making progress. Managers in this area have to take a portfolio approach, promoting ideas that work and might make good candidates for further investment. The ideal R&D team is a small skunkworks that is off the radar of most people in the company. A "startup within the startup" feeling is a good thing.

  2. Strategy - startups first encounter this when they have the beginnings of a product, and they've achieved some amount of product/market fit. Now it's time to start to think seriously about how to find a repeatable and scalable sales process, how to position and market the product, and how to build a product development team that can turn an early product into a Whole Product. As the company grows, this kind of work generalizes into "executing the company's current strategy." Usually, that will be about finding new segments of customers that the company can profitably serve. It's decidedly not about making incremental improvements for current customers - that's a different kind of work altogether. This kind of work requires the most cross-functional of teams, because it draws on the resources of the whole company. And although schedules and prediction are difficult here, they are critical. It's essential to know if the strategy is fundamentally working or failing, so the company can chart its course accordingly.

    Your strategy might be wrong; might take a long time to pay off; might even pay off in completely unexpected ways, which is why it is unwise to devote 100% of your resources to your current strategy. If you invest in strategy at the expense of innovation, you risk being unprepared for the next strategy (or of achieving tunnel-vision in which everyone drinks the Kool-Aid). If you invest in strategy at the expense of growth, you can starve yourself of the resources you need to implement the strategy. And if you neglect maintenance, you may not have a business left at all.

  3. Growth - when you have existing customers, the pressure is on to grow your key metrics day-in day-out. If you're making revenue, you should be finding ways to grow it predictably month-over-month; if you're focused on customer engagement, your product should be getting more sticky, and so on. Some companies and founders refuse to serve existing customers, and are always lurching from one great idea to the next. Others focus exclusively on incremental growth, and can never find the time or resources for strategy. Either extreme can be fatal. This kind of work is where schedules, milestones, and accurate estimates thrive. Since the work is building on knowledge and systems built in the past, it's much more likely to get done on-time, on-budget, and to have a predictable effect on the business. Growth work calls for relentless executors, who know how to get things done.

  4. Maintenance and scalability - "keeping the lights on" gets harder and harder as companies grow. Yet the great companies manage to handle growth while keeping the resources dedicated to maintenance and scalability mostly fixed. That means they are continuously getting better and better at automating and driving out waste. Continuous improvement here frees up time and energy for the parts of the company that find new ways to make money. Often a company's unsung heroes are doing this kind of work: invisible when doing a good job, all-too-visible when something goes wrong. These teams tend to be incredibly schedule and process-centric, with detailed procedures for anything that might happen.
Companies of any size can do all these kinds of work, and do them well. You don't need any special process to make it happen, just good people who are committed to making the company successful. So why do these different kinds of work cause problems? And why do those problems seem to get worse as the company grows?

We'll talk about those problems in detail in part two.

Thursday, November 13, 2008

Five Whys

Taiichi Ohno was one of the inventors of the Toyota Production System. His book Toyota Production System: Beyond Large-Scale Production is a fascinating read, even though it's decidedly non-practical. After reading it, you might not even realize that there are cars involved in Toyota's business. Yet there is one specific technique that I learned most clearly from this book: asking why five times.

When something goes wrong, we tend to see it as a crisis and seek to blame. A better way is to see it as a learning opportunity. Not in the existential sense of general self-improvement. Instead, we can use the technique of asking why five times to get to the root cause of the problem.

Here's how it works. Let's say you notice that your website is down. Obviously, your first priority is to get it back up. But as soon as the crisis is past, you have the discipline to have a post-mortem in which you start asking why:
  1. why was the website down? The CPU utilization on all our front-end servers went to 100%
  2. why did the CPU usage spike? A new bit of code contained an infinite loop!
  3. why did that code get written? So-and-so made a mistake
  4. why did his mistake get checked in? He didn't write a unit test for the feature
  5. why didn't he write a unit test? He's a new employee, and he was not properly trained in TDD
So far, this isn't much different from the kind of analysis any competent operations team would conduct for a site outage. The next step is this: you have to commit to make a proportional investment in corrective action at every level of the analysis. So, in the example above, we'd have to take five corrective actions:
  1. bring the site back up
  2. remove the bad code
  3. help so-and-so understand why his code doesn't work as written
  4. train so-and-so in the principles of TDD
  5. change the new engineer orientation to include TDD
I have come to believe that this technique should be used for all kinds of defects, not just site outages. Each time, we use the defect as an opportunity to find out what's wrong with our process, and make a small adjustment. By continuously adjusting, we eventually build up a robust series of defenses that prevent problems from happening. This approach is at the heart of breaking down the "time/quality/cost pick two" paradox, because these small investments cause the team to go faster over time.

I'd like to point out something else about the example above. What started as a technical problem actually turned out to be a human and process problem. This is completely typical. Our bias as technologists is to over-focus on the product part of the problem, and five whys tends to counteract that tendency. It's why, at my previous job, we were able to get a new engineer completely productive on their first day. We had a great on-boarding process, complete with a mentoring program and a syllabus of key ideas to be covered. Most engineers would ship code to production on their first day. We didn't start with a great program like that, nor did we spend a lot of time all at once investing in it. Instead, five whys kept leading to problems caused by an improperly trained new employee, and we'd make a small adjustment. Before we knew it, we stopped having those kinds of problems altogether.

It's important to remember the proportional investment part of the rule above. It's easy to decide that when something goes wrong, a complete ground-up rewrite is needed. It's part of our tendency to over-focus on the technical and to over-react to problems. Five whys helps us keep our cool. If you have a severe problem, like a site outage, that costs your company tons of money or causes lots of person-hours of debugging, go ahead and allocate about that same number of person-hours or dollars to the solution. But always have a maximum, and always have a minimum. For small problems, just move the ball forward a little bit. Don't over-invest. If the problem recurs, that will give you a little more budget to move the ball forward some more.

How do you get started with five whys? I recommend that you start with a specific team and a specific class of problems. For my first time, it was scalability problems and our operations team. But there is no right answer - I've run this process for many different teams. Start by having a single person be the five whys master. This person will run the post-mortem whenever anyone on the team identifies a problem. Don't let them do it by themselves; it's important to get everyone who was involved with the problem (including those who diagnosed or debugged it) into a room together. Have the five whys master lead the discussion, but they should have the power to assign responsibility for the solution to anyone in the room.

Once that responsibility has been assigned, have that new person email the whole company with the results of the analysis. This last step is difficult, but I think it's very helpful. Five whys should read like plain English. If they don't, you're probably obfuscating the real problem. The advantage of sharing this information widely is that it gives everyone insight into the kinds of problems the team is facing, but also insight into how those problems are being tackled. And if the analysis is airtight, it makes it pretty easy for everyone to understand why the team is taking some time out to invest in problem prevention instead of new features. If, on the other hand, it ignites a firestorm - that's good news too. Now you know you have a problem: either the analysis is not airtight, and you need to do it over again, or your company doesn't understand why what you're doing is important. Figure out which of these situations you're in, and fix it.

Over time, here's my experience with what happens. People get used to the rhythm of five whys, and it becomes completely normal to make incremental investments. Most of the time, you invest in things that otherwise would have taken tons of meetings to decide to do. And you'll start to see people from all over the company chime in with interesting suggestions for how you could make things better. Now, everyone is learning together - about your product, process, and team. Each five whys email is a teaching document.

Let me show you what this looked like after a few years of practicing five whys in the operations and engineering teams at IMVU. We had made so many improvements to our tools and processes for deployment, that it was pretty hard to take the site down. We had five strong levels of defense:
  1. Each engineer had his/her own sandbox which mimicked production as closely as possible (whenever it diverged, we'd inevitably find out in a five whys shortly thereafter).
  2. We had a comprehensive set of unit, acceptance, functional, and performance tests, and practiced TDD across the whole team. Our engineers built a series of test tags, so you could quickly run a subset of tests in your sandbox that you thought were relevant to your current project or feature.
  3. 100% of those tests ran, via a continuous integration cluster, after every checkin. When a test failed, it would prevent that revision from being deployed.
  4. When someone wanted to do a deployment, we had a completely automated system that we called the cluster immune system. This would deploy the change incrementally, one machine at a time. That process would continually monitor the health of those machines, as well as the cluster as a whole, to see if the change was causing problems. If it didn't like what was going on, it would reject the change, do a fast revert, and lock deployments until someone investigated what went wrong (a sketch of this loop appears after the list).
  5. We had a comprehensive set of Nagios alerts that would trigger a pager in operations if anything went wrong. Because five whys kept turning up a few key metrics that were hard to set static thresholds for, we even had a dynamic prediction algorithm that would make forecasts based on past data, and fire alerts if the metric ever went out of its normal bounds. (You can even read a cool paper one of our engineers wrote on this approach).
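For concreteness, here is a sketch of what the fourth layer's deploy loop might look like. The post doesn't include IMVU's actual code, so every function name below is a made-up placeholder standing in for the behavior described in the list.

    import time

    def immune_deploy(revision, machines, deploy_to, machine_healthy, cluster_healthy,
                      revert_all, lock_deploys, alert_team, soak_seconds=60):
        """Roll a change out one machine at a time, watching health as we go.

        If the newly deployed machine, or the cluster as a whole, looks unhealthy,
        revert everything touched so far, lock further deployments, and page the
        team. All callbacks are hypothetical stand-ins for real deploy and
        monitoring hooks.
        """
        deployed = []
        for machine in machines:
            deploy_to(machine, revision)
            deployed.append(machine)
            time.sleep(soak_seconds)  # give the metrics a moment to settle before judging
            if not machine_healthy(machine) or not cluster_healthy():
                revert_all(deployed, revision)  # fast revert of everything touched so far
                lock_deploys(reason=f"revision {revision} failed health checks on {machine}")
                alert_team(revision=revision, failed_on=machine)
                return False
        return True  # the change was accepted on every machine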
So if you had been able to sneak into the desk of any of our engineers, log into their machine, and secretly check in an infinite loop on some highly-trafficked page, here's what would have happened. Somewhere between 10 and 20 minutes later, they would have received an email with a message more-or-less like this: "Dear so-and-so, thank you so much for attempting to check in revision 1234. Unfortunately, that is a terrible idea, and your change has been reverted. We've also alerted the whole team to what's happened, and look forward to you figuring out what went wrong. Best of luck, Your Software." (OK, that's not exactly what it said. But you get the idea)

Having this series of defenses was helpful for doing five whys. If a bad change got to production, we'd have a built-in set of questions to ask: why didn't the automated tests catch it? why didn't the cluster immune system reject it? why didn't operations get paged? and so forth. And each and every time, we'd make a few more improvements to each layer of defense. Eventually, this let us do deployments to production dozens of times every day, without significant downtime or bug regressions.

One last comment. When I tell this story to entrepreneurs and big-company types alike, I sometimes get this response: "well, sure, if you start out with all those great tools, processes and TDD from the beginning, that's easy! But my team is saddled with zillions of lines of legacy code and ... and ..." So let me say for the record: we didn't start with any of this at IMVU. We didn't even practice TDD across our whole team. We'd never heard of five whys, and we had plenty of "agile skeptics" on the team. By the time we started doing continuous integration, we had tens of thousands of lines of code, none of it under test coverage. But the great thing about five whys is that it has a Pareto principle built right in. Because the most common problems keep recurring, your prevention efforts are automatically focused on the 20% of your product that needs the most help. That's also the same 20% that causes you to waste the most time. So five whys pays for itself awfully fast, and it makes life noticeably better almost right away. All you have to do is get started.

So thank you, Taiichi Ohno. I think you would have liked seeing all the waste we've been able to drive out of our systems and processes, all in an industry that didn't exist when you started your journey at Toyota. And I especially thank you for proving that this technique can work in one of the most difficult and slow-moving industries on earth: automobiles. You've made it hard for any of us to use the most pathetic excuse of all: surely, that can't work in my business, right? If it can work for cars, it can work for you.

What are you waiting for?

Tuesday, November 11, 2008

Where did Silicon Valley come from?

Those of us who have had the privilege of working in the premier startup hub in the world often take its advantages for granted. Among those: plentiful financing and nerds, a culture that celebrates both failure and success, and an ethos of openness and sharing. It's useful to look back to understand how we got those advantages. It's not a side effect of some secret mineral in the water: those advantages were painstakingly crafted by people who came before us. And what may surprise you is how many of those people were part of the military-industrial complex.

I think the absolute best reading on this subject is a book called Regional Advantage: Culture and Competition in Silicon Valley and Route 128 by AnnaLee Saxenian. It's an academic treatise that tries to answer a seemingly straightforward question: after World War II, why did Silicon Valley become the undisputed leader of the technology world, while Boston's Route 128 corridor did not? To an early observer, it would have seemed obvious that Route 128 had all the advantages: a head start, more government and military funding, and far more established companies. And although both regions had outstanding research universities, MIT was way ahead of Stanford by every relevant measure. However...
While both Stanford and MIT encouraged commercially oriented research and courted federal research contracts in the postwar years, MIT's leadership focused on building relations with government agencies and seeking financial support from established electronics producers. In contrast, Stanford's leaders, lacking corporate or government ties or even easy proximity to Washington, actively promoted the formation of new technology enterprises and forums for cooperation with local industry.

This contrast — between MIT's orientation toward Washington and large, established producers and Stanford's promotion of collaborative relationships among small firms — would fundamentally shape the industrial systems emerging in the two regions.
The book is really fun to read (how often do you see an academic tome crossed with a real whodunit?). It's important not just for historical reasons, but because we are often called upon to take sides in current debates that impact the way our region and industry will develop. Just to pick one: will software patents, NDAs, and trade-secret laws make it harder for people to share knowledge outside of big companies? We need to work hard, as previous generations did, to balance the needs of everyone in our ecosystem. Otherwise, we risk sub-optimizing by focusing only on one set of players.

However, even that fascinating history is not the whole story. You might be wondering: who were those brilliant people who made the key decisions to mold Silicon Valley? And what were they doing beforehand? Steve Blank, who I've written about recently in a totally different context, has attempted to answer these questions in a talk called "Hidden in Plain Sight: The Secret History of Silicon Valley." If you're in the Bay Area, you have the opportunity to see it live: he's giving the talk at the Computer History Museum next Thursday, November 20:
Hear the story of how two major events – WWII and the Cold War – and one Stanford professor set the stage for the creation and explosive growth of entrepreneurship in Silicon Valley. In true startup form, the world was forever changed when the CIA and the National Security Agency acted as venture capitalists for this first wave of entrepreneurship. Learn about the key players and the series of events that contributed to this dramatic and important piece of the emergence of this world renowned technology mecca.
If you can't make it, you can take a look at this sneak peek of the slides, courtesy of the author:



In addition to learning who to thank (Frederick Terman and William Shockley), you'll get a behind-the-scenes look at World War II and the Cold War from an electronics perspective. Fans of Cryptonomicon will have a blast.


Saturday, November 8, 2008

What is customer development?

When we build products, we use a methodology. For software, we have many - you can enjoy a nice long list on Wikipedia. But too often when it's time to think about customers, marketing, positioning, or PR, we delegate it to "marketroids" or "suits." Many of us are not accustomed to thinking about markets or customers in a disciplined way. We know some products succeed and others fail, but the reasons are complex and unpredictable. We're easily convinced by the argument that all we need to do is "build it and they will come." And when they don't come, well, we just try, try again.

What's wrong with this picture?

Steve Blank has devoted many years now to trying to answer that question, with a theory he calls Customer Development. This theory has become so influential that I have called it one of the three pillars of the lean startup - every bit as important as the changes in technology or the advent of agile development.

You can learn about customer development, and quite a bit more, in Steve's book The Four Steps to the Epiphany. I highly recommend this book for all entrepreneurs, in startups as well as in big companies. Here's the catch. This is a self-published book, originally designed as a companion to Steve's class at Berkeley's Haas School of Business. And Steve is the first to admit that it's a "turgid" read, without a great deal of narrative flow. It's part workbook, part war story compendium, part theoretical treatise, and part manifesto. It's trying to do way too many things at once. On the plus side, that means it's a great deal. On the minus side, that has made it a wee bit hard to understand.

Some notable bloggers have made efforts to overcome these obstacles. VentureHacks did a great summary, which includes slides and video. Marc Andreessen also took a stab, calling it "a very practical how-to manual for startups ... a roadmap for how to get to Product/Market Fit." The theory of Product/Market Fit is one key component of customer development, and I highly recommend Marc's essay on that topic.

Still, I feel the need to add my two cents. There's so much crammed into The Four Steps to the Epiphany that I want to distill out what I see as the key points:
  1. Get out of the building. Very few startups fail for lack of technology. They almost always fail for lack of customers. Yet surprisingly few companies take the basic step of attempting to learn about their customers (or potential customers) until it is too late. I've been guilty of this many times in my career - it's just so easy to focus on product and technology instead. True, there are the rare products that have literally no market risk; they are all about technology risk ("cure for cancer"). For the rest of us, we need to get some facts to inform and qualify our hypotheses (a fancy word for guesses) about what kind of product customers will ultimately buy.

    And this is where we find Steve's maxim that “In a startup no facts exist inside the building, only opinions.” Most likely, your business plan is loaded with opinions and guesses, sprinkled with a dash of vision and hope. Customer development is a parallel process to product development, which means that you don't have to give up on your dream. We just want you to get out of the building, and start finding out whether your dream is a vision or a delusion. Surprisingly early, you can start to get a sense for who the customer of your product might be, how you'll reach them, and what they will ultimately need. Customer development is emphatically not an excuse to slow down or change the plan every day. It's an attempt to minimize the risk of total failure by checking your theories against reality.

  2. Theory of market types. Layered on top of all of this is a theory that helps explain why different startups face wildly different challenges and time horizons. There are three fundamental situations that change what your company needs to do: creating a new market (the original Palm), bringing a new product to an existing market (Handspring), and resegmenting an existing market (niche, like In-n-Out Burger; or low-cost, like Southwest Airlines). If you're entering an existing market, be prepared for fast and furious competition from the incumbent players, but enjoy the ability to fail (or succeed) fast. When creating a new market, expect to spend as long as two years before you manage to get traction with early customers, but enjoy the utter lack of competition. What kind of market are you in? The Four Steps to the Epiphany contains a detailed approach to help you find out.

  3. Finding a market for the product as specified. When I first got the "listening to customers" religion, my plan was to talk to as many customers as possible, and build as many of the features they asked for as possible. This is a common mistake. Our goal in product development is to find the minimum feature set required to get early customers. In order to do this, we have our customer development team work hard to find a market, any market, for the product as currently specified. We don't just abandon the vision of the company at every turn. Instead, we do everything possible to validate the founders' belief.

    The nice thing about this paradigm is it sets the company up for a rational discussion when the task of finding customers fails. You can start to think through the consequences of this information before it's too late. You might still decide to press ahead building the original product, but you can do so with eyes open, knowing that it's going to be a tough, uphill battle. Or, you might start to iterate the concept, each time testing it against the set of facts that you've been collecting about potential customers. You don't have to wait to iterate until after the splashy high-burn launch.

  4. Phases of product & company growth. The book takes its name from Steve's theory of the four stages of growth any startup goes through. He calls these steps Customer Discovery (when you're just trying to figure out if there are any customers who might want your product), Customer Validation (when you make your first revenue by selling your early product), Customer Creation (akin to a traditional startup launch, only with strategy involved), and Company Building (where you gear up to Cross the Chasm). Having lived through a startup that went through all four phases, I can attest to how useful it is to have a roadmap that can orient you to what's going on as your job and company changes.

    As an aside, here's my experience: you don't get a memo that tells you that things have changed. If you did, it would read something like this: "Dear Eric, thank you for your service to this company. Unfortunately, the job you have been doing is no longer available, and the company you used to work for no longer exists. However, we are pleased to offer you a new job at an entirely new company, that happens to contain all the same people as before. This new job began months ago, and you are already failing at it. Luckily, all the strategies you've developed that made you successful at the old company are entirely obsolete. Best of luck!"

  5. Learning and iterating vs. linear execution. I won't go through all four steps in detail (buy the book already). I'll just focus on the paradigm shift represented by the first two steps and the last two steps. In the beginning, startups are focused on figuring out which way is up. They really don't have a clue what they should be doing, and everything is guesses. In the old model, they would probably launch during this phase, failing or succeeding spectacularly. Only after a major, public, and expensive failure would they try a new iteration. Most people can't sustain more than a few of these iterations, and the founders rarely get to be involved in the later tries.

    The root of that mistake is premature execution. The major insight of The Four Steps to the Epiphany is that startups need time spent in a mindset of learning and iterating, before they try to launch. During that time, they can collect facts and change direction in private, without dramatic and public embarrassment for their founders and investors. The book lays out a disciplined approach to make sure this period doesn't last forever, and clear criteria for when you know it's time to move to an execution footing: when you have a repeatable and scalable sales process, as evidenced by early customers paying you money for your early product.
It slices, it dices. It's also a great introduction to selling and positioning a product for non-marketeers, a workbook for developing product hypotheses, and a compendium of incredibly useful tactics for startups young and old.

When I first encountered this book, my impulse was as follows. I bought a bunch of copies, gave them out to my co-founders and early employees, and then expected the whole company's behavior would radically change the next day. That doesn't work (you can stop laughing now). This is not a book for everyone. I've only had luck sharing it with other entrepreneurs who are actually struggling with their product or company. If you already know all the answers, you can skip this one. But if you find some aspect of the situation you're in confusing, maybe this will provide some clarity. Or at least some techniques for finding clarity soon.

My final suggestion is that you buy the book and skim it. Try and find sections that apply to the startup you're in (or are thinking of building). Make a note of the stuff that doesn't seem to make sense. Then put it on your shelf and forget about it. If your experience is anything like mine, here's what will happen. One day, you'll be banging your head against the wall, trying to make progress on some seemingly intractable problem (like, how the hell do I know if this random customer is an early adopter who I should spend time listening to, or a mainstream customer who won't buy my product for years). That's when you'll get that light bulb moment: this problem sounds familiar. Go to your shelf. Get down the book, and be amazed that you are not the first person to tackle this problem in the history of the world.

I have been continually surprised at how many times I could go back to that same well for wisdom and advice. I hope you will be too.

Friday, November 7, 2008

Using AdWords to assess demand for your new online service, step-by-step

If you want to build an online service, and you don't test it with a fake AdWords campaign ahead of time, you're crazy. That's the conclusion I've come to after watching tons of online products fail for a complete lack of customers. So I thought I would walk you through exactly how to run a "fake landing page" test using cheap tools that require no technical skills whatsoever.

Our goal is to find out whether customers are interested in your product by offering to give (or even sell) it to them, and then failing to deliver on that promise. If you're worried about disappointing some potential customers - don't be. Most of the time, the experiments you run will have a zero percent conversion rate - meaning no customers were harmed during the making of this experiment. And if you do get a handful of people taking you up on the offer, you'll be able to send them a nice personal apology. And if you get tons of people trying to take you up on your offer - congratulations. You probably have a business. Hopefully that will take some of the sting out of the fact that you had to engage in a little trickery.

To motivate you to give this a try, let me tell you a story from the early days of IMVU. It was fall 2004, and the presidential election was in full swing. One day, we became convinced that a killer app for IMVU would be to sell a presidential debate bundle, where our customers could put on a Bush or Kerry avatar, and then engage in mock debates with each other. It was one of those brilliant startup brainstorms that comes to the team in a flash, with a giant thunderclap. We spent weeks working on this new product, racing the clock so it would be done in time for the real presidential debates. We had endless arguments internally about what features it should include, how the avatars should look, and how much it should cost. We finally settled on a $1.99 price point, figuring that we wouldn't make much money, but at least we wouldn't get in the way of achieving scale. Finally the day came, we unleashed the landing page, emailed our existing customers, and started advertising online.

The net result: we sold exactly zero presidential debate avatars. None. Nada. We tried different price points, different ad copy, different landing pages. Nothing made any difference. Turns out, there was absolutely no demand whatsoever for that particular product. And we could have found that out quite easily, if we'd used the simple five-step process below. Oops - there went several precious weeks of development effort down the drain.

So, if you're interested in avoiding mistakes like that, here are the steps:
  1. Get a domain name. It doesn't have to be the world's catchiest name, just pick something reasonably descriptive. If you're concerned about sullying your eventual brand name, don't use your "really good" name, pick a code name. Make sure your domain registrar offers free "website forwarding" if you don't use a hosting service that lets you use a custom domain name in step 2.

  2. Set up a simple website. I recommend using a hosted service like SnapPages. You basically want to create two pages: a landing page that says what your product does, and a signup page that people can use to register for it. If you're feeling charitable, you can add a third page that lets people know that the product isn't available right now, and that you'll get back to them when it is.

  3. Enable Google Analytics tracking. The nice thing about services like SnapPages is that they offer this built-in. You just have to sign up for Google Analytics, get your account number, and plug it into your site.

  4. Start an AdWords campaign. Google AdWords has no minimum buy required, so you can easily run a campaign for five dollars a day, or even less. Just put in your credit card. I recommend using their Keyword Tool to set up your initial list of ad targets. Don't worry about selecting particularly good keywords if you're new to SEM. Just load them all in and choose a low cost-per-click. I used to use $.05, but you might want to go as high as $.25 or $.50. Just make sure you choose a maximum daily budget that you can afford to sustain for a few weeks. I would aim to get no more than 100 clicks per day - over the course of a week or two, you'll get pretty good conversion data.

  5. Measure conversion rates. Use Google's built-in Analytics/AdWords integration to track the effectiveness of each ad you run. Then set up "goal tracking" in Analytics to see how many people actually sign up using your registration page. Here are the stats you want to pay particular attention to: the overall conversion rate from landing page to completed registration, the click-through rate for your ads on different keywords, and the bounce rate of your landing page for different keywords.
Armed with that data, you will know a lot about what your business will look like when you finally do build the product you're imagining. At the very least, you can plug those assumptions into your financial model, now that you have a sense for what the cost of acquiring new customers might look like.
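
To make this concrete, here's a rough sketch (my own illustration, with made-up numbers) of how those raw counts combine into the metrics above, and into a back-of-the-envelope customer acquisition cost for your financial model:

    # Hypothetical campaign numbers - substitute the counts that Analytics
    # and AdWords report for your own test.
    impressions = 40000      # times your ads were shown
    clicks = 800             # visitors who clicked through to the landing page
    bounces = 560            # visitors who left without going any further
    signups = 24             # visitors who completed the registration goal
    cost_per_click = 0.25    # dollars; the max CPC suggested in step 4

    ctr = clicks / impressions            # click-through rate on your ads
    bounce_rate = bounces / clicks        # landing-page bounce rate
    conversion_rate = signups / clicks    # landing page -> completed registration
    spend = clicks * cost_per_click       # upper bound on total ad spend
    cac = spend / signups                 # rough cost to acquire one customer

    print(f"CTR: {ctr:.2%}, bounce: {bounce_rate:.2%}, conversion: {conversion_rate:.2%}")
    print(f"Spend: ${spend:.2f}, estimated cost per customer: ${cac:.2f}")

With these made-up numbers, that works out to a 2% CTR, a 70% bounce rate, a 3% conversion rate, and roughly $8.33 to acquire each customer - exactly the kind of assumption you can now plug into your model.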

Even more importantly, you can start to experiment with feature set, positioning, and marketing - all without building a product. Use Google Optimizer to try different landing pages (even radically different landing pages) to see if any particular way of talking about your product makes a difference to your conversion rates. And if you're getting conversion rates that you feel good about, try asking for a credit card or other method of payment. If that takes your conversion rate to zero, that doesn't necessarily mean you don't have a business, but I'd give it some serious thought. Products that truly solve a severe pain for early adopters can usually find some visionary customers who will pre-order, just based on the vision that you're selling. If you can't find any, maybe that means you haven't figured out who your customer is yet.
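
If you want a quick sanity check on whether a difference between two landing pages is real or just noise (Google Optimizer will do this for you; the sketch below is just my own illustration of the underlying idea, on hypothetical numbers), a simple two-proportion z-test looks like this:

    from math import sqrt, erf

    def compare_conversion(signups_a, visitors_a, signups_b, visitors_b):
        """Two-proportion z-test: is variant B's conversion rate really different from A's?"""
        p_a = signups_a / visitors_a
        p_b = signups_b / visitors_b
        pooled = (signups_a + signups_b) / (visitors_a + visitors_b)
        se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
        return p_a, p_b, p_value

    # hypothetical: variant A converts 2.5%, variant B converts 4.0%
    p_a, p_b, p_value = compare_conversion(25, 1000, 40, 1000)
    print(f"A: {p_a:.1%}  B: {p_b:.1%}  p-value: {p_value:.3f}")

In this made-up example the p-value comes out around 0.06 - suggestive, but you'd want a bit more traffic before betting the positioning on variant B.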

And if you don't know who your customer is, perhaps some customer development is in order?

Thursday, November 6, 2008

Stevey's Blog Rants: Good Agile, Bad Agile

I thought I'd share an interesting post from someone with a decidedly anti-agile point of view.
Stevey's Blog Rants: Good Agile, Bad Agile: "Google is an exceptionally disciplined company, from a software-engineering perspective. They take things like unit testing, design documents and code reviews more seriously than any other company I've even heard about. They work hard to keep their house in order at all times, and there are strict rules and guidelines in place that prevent engineers and teams from doing things their own way. The result: the whole code base looks the same, so switching teams and sharing code are both far easier than they are at other places."
I think you can safely ignore the rantings about "bad agile" and the bad people who promote it. But it's helpful to take a detailed look inside the highly agile process used by Google to ship software. Three concepts I found particularly helpful:

  1. Process = discipline. Agile is not an excuse for random execution or lack of standards. They have an extreme focus on unit tests and code standards, which I highly recommend.

  2. Dates are irrelevant. Use a priority work queue instead of scheduling and estimating (a minimal sketch of this idea appears after this list). As I've written previously:
    I think agile team-building practices make scheduling per se much less important. In many startup situations, ask yourself "Do I really need to accurately know when this project will be done?" When the answer is no, we can cancel all the effort that goes into building schedules and focus on making progress evident.
  3. Focus on launching. All of the incentives described in the article focus on making it easy and highly desirable to launch your product:
    [He] claimed that launching projects is the natural state that Google's internal ecosystem tends towards, and it's because they pump so much energy into pointing people in that direction. ...

    So launches become an emergent property of the system.

    This eliminates the need for a bunch of standard project management ideas and methods: all the ones concerned with dealing with slackers, calling bluffs on estimates, forcing people to come to consensus on shared design issues, and so on. You don't need "war team meetings," and you don't need status reports
    I even believe in doing launches on a continuous basis (see continuous ship).
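
Here's the priority-work-queue sketch promised in point 2 - my own illustration of the idea, not Google's actual tooling. Instead of building a schedule, you keep a ranked backlog and always pull the highest-priority item next:

    import heapq

    class WorkQueue:
        """A toy prioritized backlog: no dates, just ranked work items."""
        def __init__(self):
            self._heap = []
            self._counter = 0   # tie-breaker keeps equal priorities in insertion order

        def add(self, task, priority):
            # lower number = more important
            heapq.heappush(self._heap, (priority, self._counter, task))
            self._counter += 1

        def next_task(self):
            return heapq.heappop(self._heap)[2] if self._heap else None

    backlog = WorkQueue()
    backlog.add("fix signup bug", priority=1)
    backlog.add("new reporting dashboard", priority=3)
    backlog.add("polish landing page copy", priority=2)
    print(backlog.next_task())   # -> fix signup bug

The point isn't the data structure; it's that the team's only scheduling artifact is the ranking itself, which can be re-sorted at any time without re-estimating anything.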
Anyway, thanks Stevey for your thoughtful post. And sorry about those horrible Bad Agile charlatans that have apparently been torturing you with their salesmanship and dumb ideas.




Wednesday, November 5, 2008

Learning from Obama: maneuver warfare on the campaign trail

I had the privilege of volunteering for the Obama campaign in swing states for a few weekends during the final push towards victory. I'm quite confident I got more out of the experience than the campaign got from me. Zillions of stories are being written about why Barack Obama won, and I will try and avoid repeating the obvious. But nothing I've read so far really does justice to what I witnessed in Colorado and Nevada.

I believe that part of the reason for Barack Obama's victory was his superior understanding of how to build an organization that could learn, discover, and execute at a speed so fast that it looked like a blur to the McCain camp. In other words, he better applied the principles of maneuver warfare to his campaign. Startups and larger companies alike can learn a lot from the organization I observed.

Here's some of what I learned from the experience. Obama understood the two concepts that are essential for building a high-performance, highly adaptable, agile organization: 1) rapid iteration and 2) clear values-based objectives.

Following this approach, an organization can act with incredible speed, constantly updating its strategy before an opponent can even figure out what's happening. But instead of looking erratic, the many units of your organization act in concert to produce a coherent whole. In other words: you look like a blur to your enemies. The military strategist John Boyd called this concept the OODA loop, for Observe, Orient, Decide, Act. He believed that the speed at which you can move through the loop determines victory. I believe John Boyd would have been impressed by Barack Obama's campaign.

Speed of iteration
I was impressed by the speed at which the campaign executed its OODA loop, at many levels of the organization. In the small field office that I volunteered at in Colorado, here was the rhythm of our daily existence.

Observe
At the end of each day, we'd laboriously enter data, updating the campaign's voter database with information about every voter contact we accomplished that day. That voter database was accessible to staff at every level of the campaign.

Orient
And then the voter data would be crunched by someone at statewide HQ, and each night we'd start the process of creating new packets of instructions for the next day. The packets were created from targeted lists of voters, based on all the data the campaign was able to gather from its multi-pronged collection efforts.

Decide
Each day we'd be directed to use specific lists with specific scripts, all created by the campaign. We'd also learn what our overall goals were, and we'd report on how well we'd accomplished those goals at the end of each day.

Act
Each morning, volunteers would arrive and be handed packets with instructions. We'd train them on-demand as they came in, and send them into the world (or onto the phones) with written instructions and voter contact data. Over the course of the day, we'd take their feedback about how it was going into account, revising the script occasionally (back to the Orient step) to try and maximize the goal for that day (voter contacts, persuasion, get out the vote, signing up new volunteers, etc).

The whole loop took only one day.

I have to assume that this structure allowed the campaign to experiment freely and rapidly on their data mining and script-building techniques. It also must have allowed them to assess the effectiveness of each field organizer, team leader, and even each volunteer day-by-day. I didn't witness this first-hand, but they must have been able to diagnose problems pretty quickly and take corrective action whenever necessary. This allowed them very wide discretion when it came to decentralizing the whole organization. The risk of incorporating a bunch of brand-new volunteers is much lower if you have good analytics about their performance.


Mission synchronization
If an organization is changing on a daily basis, how is it that it doesn't look "erratic" yet feels "agile"? The key is mission synchronization. Every day, up and down the organization, we all shared consistent values and a sense of common purpose. I want to quote a little bit from the "Organizing Principles" section of the briefing packet I was issued by the Colorado Border States Team when I agreed to come volunteer:
"Respect. Empower. Include" guides everything we do. [it] is the mantra for our campaign and our organizing. Our army of volunteers has been our core advantage on the ground. This army will serve as the foudnation of our general election organization. Our campaign must maximize this strength.

To do so, we must live this mantra on a daily basis. We must be respectful of our coworkers and our supporters; of our own daily projects; of the voters in the state we work; of our opponent and his supporters. We must go beyond engaging volunteers with tasks. Respecting, empowering, and including supporters in our campaign in a meaningful way requires a commitment to volunteer leadership, development, training, and accountability. ...

In exchange for that ownership, we will hold them, each other, and ourselves accountable to shared goals and expectations.
Now, I had never volunteered in a campaign before. In fact, my political philosophy is considered pretty conservative by many of my friends, and I'd never engaged with the Democratic Party in any way before. So I was pretty nervous about how I'd be treated, and pretty skeptical of the words written in that briefing packet. My experience totally blew me away. Every worker - volunteer and paid staffer alike - that I interacted with from the campaign lived these values every day. Everyone understood the campaign's values, as well as its high-level strategy. And I was always given the opportunity to do meaningful work for the campaign, as long as I was willing to be held accountable for accomplishing its goals.

I think modern companies have a lot they can learn from that experience. In today's world, knowledge workers (and especially those who thrive in startups!) are basically volunteers. They don't have to work for you - they can always get another job. They aren't primarily motivated by money, anyway. Instead, they seek meaningful work where their abilities can make a difference. If you give them that opportunity, and hold them accountable for the results of their efforts, they will move mountains for you. But if you make the mistake of telling them what to do, you'll probably be disappointed.

The benefit to the campaign of having everyone understand its mission and its strategy was immense. As the days wound down to election day, the polls showed Obama with a clear and decisive lead. It would have been pretty easy for the volunteers and supporters to slow down, confident in victory. But everyone understood that the campaign's goal was not just to win an election, but to build a movement. We were building a community, open to anyone who shared its values, and that mission inspired volunteers and staff to try and reach out to as many people as possible. And so the organization ran full-force across the finish line, delivering a healthy mandate for the President-Elect.


Putting it together: maneuver warfare
Let me try and show how all that theory came together in a concrete example. It was three days before the election, and we got word that the McCain campaign was about to unleash its vaunted 72 hour strategy in our county, making it the centerpiece of their get-out-the-vote efforts. We got that news on a conference call at midnight the night before. The statewide office had crunched the numbers and realized we'd be well short of McCain's total in the final days. So we were instructed to rip up the packets for the next day, and create an entirely new set, focused on the objective of signing up new volunteers. Although it was late, we had no problem accomplishing this goal. We knew that as long as we showed up at 9AM the next day with new instructions, scripts, and voter lists, our volunteers would be able to execute. The next day, the campaign used our new data to figure out how many additional volunteers and staff it still needed to send. The whole loop took less than 24 hours. And the volunteers took up the call that morning with the kind of passion and zeal that comes with truly understanding the situation and what the mission requires.

Our opponents seemed to be fighting an old style of ground war, while we were engaged in maneuver warfare. Their strategy was static, and there was no opportunity, as far as I could tell, for them to react to what we were doing. On the other hand, we could match them strength-for-strength as soon as we knew where to go.

What I found particularly interesting, looking back, is that the McCain campaign had superior technology, voter lists, and numbers on their side. Our campaign's software tools were pretty poor (don't let the fancy website fool you, the volunteers are working from an antiquated system). Democrats didn't have a great voter database to start with, because Colorado had only recently become part of their electoral strategy. And the "72 hour strategy" had previously proven decisive in many key counties, blitzkrieg style. Yet, I believe it was Obama's superior agility that led to a decisive 12-point margin in Arapahoe County, CO:




I want to say thanks to everyone on the campaign, from the candidate on down to the paid staff and volunteers that I got to work with. Not only did they create a superb organization and win a decisive victory in a critical election, they also taught many of us some incredible lessons. I'm truly grateful.

The Marine Corps adopted Boyd's ideas as the basis of their maneuver warfare doctrine. The irony is that McCain, a true war hero, might have been undone by the first pristine execution of maneuver warfare in a political campaign.

