Guest post by Lisa Regan, writer for The Lean Startup Conference.
Last week, we hosted a webcast conversation, Lean Startup for Growing Companies, with Eric Ries, Wyatt Jenkins of Shutterstock, and Ari Gesher of Palantir. The discussion focused on companies that have hit product-market fit and are growing fast—a topic for advanced entrepreneurs. But the information is essential for any early-stage company that hopes to reach that critical point and wants to be prepared when it comes. We’d like to share some highlights from the webcast and invite you to watch it in its entirety. There’s great information here about hiring, team structure, and best practices that will make you smarter.
About Ari and Wyatt: Wyatt Jenkins is VP of Product at Shutterstock, a stock photo site founded in 2003 that now encompasses twelve cross-functional teams and is one of the world’s largest two-sided marketplaces. Ari Gesher is a senior engineer at Palantir Technologies, creating data-mining software for government and financial clients. Palantir, founded in 2004, had 15 employees when Ari started there; it now has over 1,000 employees and undisclosed revenues reported to be approaching $1B. Wyatt and Ari will both be speaking at The Lean Startup Conference in December.
The first topic that came up is one that people in younger companies will want to know about—what’s hyper-growth actually like? What would it help to know about it before it happens?
Ari: “Having been through hyper-growth or exponential growth, you hear about these other organizations that have been through that and you look at your Googles and your Facebooks and you sort of knew them when they were smaller, and you see them as these behemoths. And what you don’t realize is that there’s almost no graceful way to go through that kind of growth. It’s painful no matter what. We had a year where we doubled in size from around 400 to 800, and so you end the year with half the company having been there for less than a year. And all the old ways of doing things are bursting at the seams…. It’s a good problem to have. It means that hiring’s working, the business is working – but when you’re going at that speed and that growth, I think it’s something humans just weren’t even built for…. It’s going to be painful. And if there’s one lesson to take away, I guess it’s, ‘Know that it’s going to be painful, and don’t be afraid that that means you’re doing something wrong.’”
Wyatt: “There’s a certain point at which all the things you didn’t want to have to do when you started a company, you now not only have to do, but you have to do them well. You have to really know how to run a meeting, to keep it efficient and keep people wanting to go and be productive. You have to get really good at onboarding practices and the things that when you started a company you thought, I don’t want to do any of that, I just want to build stuff. But now suddenly all those soft skills become the key to your organization.”
Another portion of the conversation centered on the role that Lean Startup techniques usually associated with smaller companies can play in a scaling business—specifically, Five Whys (a technique for discovering the root causes of a failure, which Eric explains in detail in the webcast) and testing.
Ari: “We started doing Five Whys when we were I’d say probably around 100 people. And at that point I might argue you maybe don’t even need it. But it’s important to get it to start being part of the culture, because the point at which you need it is when you’re bigger, when you have a lot of complexity in the way the organization interacts, and the whole point of asking ‘why’ five times is that you’re going to come up with some really surprising results that have to do with everybody doing what they thought was right, but because of the way information doesn’t really flow or process interlocks… the problem is actually four or five layers deeper than where you thought it was. And that only happens at scale.”
Wyatt: “I think testing culture is one of the most important parts of keeping yourself Lean as you scale. And the reason is that people have a direct connection to results without having to go up and down the chain of command. When I meet companies that are struggling a lot with hierarchy or struggling with bureaucracy, a lot of the time the data and results about things are trapped in pockets of the organization and other parts of the organization have to fight to get at it. But if you have a true testing culture, whenever somebody says something in a meeting like, ‘I think X,’ and someone else goes, ‘That’s a nice hypothesis. Let’s go try that out,’ I think that healthy level of testing keeps you lean, it keeps you close to the customer, and that’s one of the things that I think helps us a lot – testing.”
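Wyatt’s “that’s a nice hypothesis, let’s go try that out” reflex usually boils down to comparing a control against a variant. As a hedged illustration (not anything Shutterstock has described), here is a minimal two-proportion z-test in Python, with invented conversion numbers:

```python
# Minimal sketch of comparing conversion rates between a control (A)
# and a variant (B). All numbers below are invented for illustration.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# 4.0% vs. 5.0% conversion on 5,000 visitors each
z, p = two_proportion_z(conv_a=200, n_a=5000, conv_b=250, n_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

The point is less the statistics than the habit: any claim made in a meeting can be restated as two numbers to compare.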
Hiring, recruiting, and training (or perhaps fostering – the correct term to use was hard to settle on) played a big role in this conversation. Having more employees doesn’t mean that each hire is less important – it means that the processes around hiring need to develop to meet the company’s needs. But those are constantly changing. So what goes into acquiring and supporting the best employees?
Wyatt: “One thing I try to avoid is dogmatism. If somebody’s really into a process, like really, really into it, to where they’re inflexible, they’re probably not ready for a hyper-growth organization, because whatever it is you’re dogmatic about, it ain’t gonna work in another six months. So when I see that dogmatism I immediately recognize that, wow, this person’s going to have trouble when we’re a completely different company in a year. I always like to look back at my own job and say, you know, I’m doing a completely different job today than I was a year ago, and the year before that, and the year before that. That’s hyper-growth. And in hyper-growth, I promise you whatever you hold near and dear will be incorrect – soon.”
Ari: “You can’t train people to have a different mindset…. The important thing to do as leaders is bring in priming, to give people permission to be uncomfortable. To say, hey, we’re gonna go through this, and some stuff’s gonna be broken, don’t freak out. It’s when they’re not ready for it, when they’re not aware that that doesn’t mean that there’s actually anything existentially wrong, [that you have a problem]…. The psychological effect of priming is really important. If you give people a framework on which to hang their experiences before they encounter them, it makes it much easier for them to digest them and understand them as they encounter [them].”
Though a company may expand from two to 2,000, Eric, Wyatt and Ari all agreed on the importance of maintaining a structure of small, cross-functional teams, rather than siloed divisions (for more on that, see our last webcast on Lean Startup in the Enterprise).
Wyatt: “We’re still in love with the ‘two-pizza team’…just in general if it takes more than two pizzas to feed the team, the team’s too big. We like our teams to be small, relatively autonomous, very autonomous in some cases, depending on the kind of work they’re doing. We treat the teams like startups, we like that ‘us against the world’ mentality of small, autonomous teams.” And, later: “I don’t think we can say that enough: Let the product team figure out what they’re building. If you’re trying to micro-manage that on a high level, across lots of teams, you’re not smart enough [to pull it off], I promise. You really have to point in a direction, have a few high-level metrics, and let your teams fill in the gaps, let them be autonomous. That was a big lesson for me, at least.”
Ari, on creating community while maintaining multiple teams: “We foster all kinds of extra-curricular activities, everything from people doing tabletop games to sponsoring a team in a basketball league to having video game rooms. A lot of these things exist here and they may look like perks…but they’re actually about building the non-obvious links, the non-formal links between teams to really start to create a community. And I think everything you can do to invest in making that place – a business – actually a community, where people live their lives and meet each other, and have a lot of trust – that goes a long way toward making the company feel smaller. And then you get people to be able to lean on those relationships. So maybe you have a team of five people that work close together, and you need something from another team, and one person, well they play Halo together after dinner. And so it’s easy to have that conversation, to break through that ‘stranger barrier’ you get at scale.”
Finally, some last words of wisdom from Eric on the basics of creating a culture of experimentation:
Eric: "People listening in, you’re hearing a lot of cultural and practical tips that are applicable to the stage of company that these guys are at now, and I’m trying to throw in my two cents every once in a while based on companies that I’ve seen. But if you just go and you say, ‘Ok, I’ve learned that we should have a culture of experimentation,’ and you put up posters in your office saying, ‘Ok, everybody, starting today we’re going to have a culture of experimentation!’ You’ll have absolutely no impact whatsoever. One of the things I really believe in is something called the Startup Way, which is just a diagram that helps me remember how to invest in change from the bottom up rather than mandating it from the top down. And it goes like this: Accountability; Process; Culture; People – in that order. It’s the foundation of how we hold people accountable; determines what kind of [experiments] we can and can’t use, what kind of process and infrastructure we will or won’t invest in – obviously if you hold people accountable only for quick, short-term results, then if there’s no long-term philosophy then there’s no point ever in investing in long-term infrastructure, for example. But if you don’t make those process investments, if you don’t have a system for testing hypotheses, you’re never going to get a culture of experimentation and hypothesis-driven development. And if you have an old, Dilbert-styled culture, you’re never really going to be able to retain the best people for the long term.
"So when people say, ‘The solution to having a high-growth company is to hire good people,’ that’s true. When people say, ‘You have to have a culture of experimentation,’ also true. ‘You need to really invest in infrastructure and tools,’ yup, that’s correct. And when they say, ‘You need to hold people accountable, not to vanity metrics, but to learning milestones,’ yup, that’s true. All four of those things are the one thing you have to do to have a high-growth, successful company. It’s just that there’s more than one number-one high-priority thing, because each of those is an interlocking part of the system. You can’t really do one without the other, or if you try, God help you."
--
Watch the rest of the webcast—and register for The Lean Startup Conference—for more specific information on all these topics. We sell conference tickets in blocks; when one block sells out, the price goes up. Register today for the best price possible.
Tuesday, October 29, 2013
Wednesday, October 23, 2013
Lean Analytics: The Best Numbers for Non-Tech Companies
Guest post by Lisa Regan, writer for The Lean Startup Conference.
Analytics spark more questions and discussion than almost any other aspect of the Lean Startup method. If you’re coming to them from outside the tech sector, the language around analytics can be particularly confusing. Alistair and Ben, co-authors of the book Lean Analytics, will help you sort it out in our next webcast, Lean Analytics for Non-tech Companies. The webcast is this Friday, October 25, at 10a PT and includes live Q&A with participants. Registration is free.
For those new to analytics, Alistair and Ben have a free Udemy course well worth checking out. It provides a basic introduction to analytics as they apply to Lean Startup, including sections on which metrics to use and how to interpret them. It’s also a great starting point for learning the basic vocabulary and methods of analytics, especially for anyone in a non-tech startup, where this kind of language is less prevalent. For instance, Ben lists the worst of the “vanity metrics,” a term for appealing but meaningless or misleading numbers. Alistair carefully breaks down cohort analysis, a method of grouping users according to a shared criterion (all the users who joined in a given month, for instance, or during a particular campaign), and then demonstrates how you can test with those cohorts to yield actionable information. And Ben goes over the difference between “leading” and “lagging” indicators, explaining how the former can tell you where to create growth by making effective changes.
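To make cohort analysis concrete, here is a toy sketch in Python along the lines Alistair describes: group users by the month they signed up, then compare retention across cohorts. All the data here is invented for illustration.

```python
# Toy cohort analysis: bucket users by signup month, then compare
# how many from each cohort were still active 30 days later.
from collections import defaultdict

users = [
    # (signup_month, active_after_30_days) -- invented data
    ("2013-08", True), ("2013-08", False), ("2013-08", True),
    ("2013-09", True), ("2013-09", True), ("2013-09", True),
    ("2013-09", False), ("2013-10", False), ("2013-10", True),
]

cohorts = defaultdict(lambda: {"total": 0, "retained": 0})
for month, active in users:
    cohorts[month]["total"] += 1
    cohorts[month]["retained"] += active  # True counts as 1

for month in sorted(cohorts):
    c = cohorts[month]
    print(f"{month}: {c['retained']}/{c['total']} retained "
          f"({c['retained'] / c['total']:.0%})")
```

Comparing cohorts side by side is what turns a single vanity number (“total users”) into an actionable question: did the users we acquired this month stick around better than last month’s?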
In the Udemy course, Alistair and Ben expand these basics into a description of how to create empathy, stickiness, virality, revenue, and scale. Stickiness, they say, is the stage people tend to move past too quickly: they don’t make sure they really have a product with the right features and functionality to meet their customers’ needs. It’s here that analytics are important for checking your own, or your investors’, natural impulse to jump ahead to the next phase.
To help turn the conversation specifically to non-tech companies—the topic of our webcast this week—we asked Alistair to answer a few questions.
LSC: Tell us about the customer development you did for your book:
Alistair: We've been thrilled at how Lean Analytics seemed to resonate with founders. As operators of an accelerator—and founders in our own right—Ben and I had constantly struggled with what the “right” numbers are for a business. We decided to find out, and talked with around 130 founders, entrepreneurs, investors and analysts. The results were revealing: most people didn't know what “normal” was, but there were clear patterns that stood out.
While many of the organizations were technical, we also spoke to big non-tech companies, and smaller businesses like restaurant owners. Nearly all of the ones who'd been successful went through a natural process of customer development—what we call the “empathy” stage—followed by a tight focus on stickiness, then virality, then paid acquisition, and finally scaling.
LSC: What's an example of one metric, other than revenue, that you might look at for a non-tech product?
Alistair: There are plenty. The Net Promoter Score is an obvious one for an established product—how likely are you to tell someone else about the product or service. It's a good measurement because it captures both satisfaction and virality. Customer support numbers, trouble-tickets, returns and complaints are good too. But they're all lagging indicators. In other words, they show you the horse left the barn.
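The Net Promoter Score Alistair mentions has a standard formula: the percentage of promoters (scores of 9–10 on the “how likely are you to recommend us?” question) minus the percentage of detractors (scores of 0–6). A minimal sketch, with invented survey responses:

```python
# Standard Net Promoter Score: % promoters (9-10) minus % detractors (0-6).
# Scores of 7-8 ("passives") count toward the total but neither group.
def net_promoter_score(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

survey = [10, 9, 9, 8, 7, 7, 6, 5, 10, 3]  # invented responses
print(net_promoter_score(survey))           # 4 promoters, 3 detractors -> 10.0
```

Note the score can range from −100 (all detractors) to +100 (all promoters), which is why a single raw average of the 0–10 responses hides what NPS is designed to expose.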
Consider a restaurant. Revenue is a good, obvious metric; but maybe the number of people who don't leave a tip is a leading indicator of revenue. If you could find a way to measure that, and then you understood that there was a strong correlation between tipping rates or amounts and revenue, then you could experiment with things more cleanly. You could try different menus to different tables, and then look at tip amounts, and figure out earlier in the process whether the new menu was better or worse.
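Alistair’s tipping example hinges on first checking whether the candidate leading indicator actually tracks revenue. A toy sketch of that check, with invented nightly figures (a Pearson correlation computed by hand, so nothing beyond the standard library is needed):

```python
# Does a candidate leading indicator (share of tables tipping) move with
# revenue? Pearson correlation on invented nightly figures.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

tip_rate = [0.82, 0.91, 0.75, 0.88, 0.95, 0.70]   # share of tables tipping
revenue  = [3100, 3600, 2800, 3400, 3900, 2600]   # nightly revenue, $
print(f"r = {pearson_r(tip_rate, revenue):.2f}")
```

Only once that correlation holds up does the tip rate become useful as the faster feedback loop for menu experiments; correlation alone doesn’t prove the tips cause the revenue, but it tells you the signal is worth watching.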
The reality, though, is that every company today is a tech company. The dominant channel by which we reach customers is the Internet, whether you're a small local restaurant on Yelp or a global maker of tissue paper. And the dominant tool we use to measure back-office operations is technology, from inventory to supply chain management to procurement to human resources.
The beautiful thing about this, to someone who's analytically minded, is that while humans are awful at recording things, software has no choice but to do so. As a result, we're awash in a sea of data that might yield good insights about the business. The challenge is to know what the biggest problem in the business is right now, then to find a metric that shows you, as early as possible in the customer lifecycle, whether that problem is getting better or worse.
LSC: Here's a common problem: you start measuring something, and you assume that the results will be clear enough to help you make additional decisions about your product (for example, to pivot, persevere, or kill an idea), but then the results turn out hazy. What's a good step to take when your measurement Magic 8-Ball says, "Ask again later"?
Alistair: This is why it's so important to draw a line in the sand beforehand. Scientists know this: you formulate a hypothesis, and then you devise an experiment that will reveal the results. Unfortunately, as founders, we're so enthusiastic, so governed by our reality distortion field, that we often run the experiment and then find the results we want. This is confirmation bias, and it kills.
We often tell founders that a business plan is nonsense. A business model, on the other hand, is a snapshot of your business assumptions at this moment in time. Once you've stated those assumptions clearly, you run experiments to see if they're valid. We spoke with the head of innovation at one Fortune 500 company who told us his only metric for early-stage innovation is “how many assumptions have you tested this week?”
The confusion isn't that the results are hazy. It's that the business model is complex. If I think I can sell 100 widgets at $10 apiece, and they cost me $5 to build and market, that's a business model. But if my measurements show me that people will only pay $8 a widget, is that a failure? No—it means I now need to revise my assumptions and test whether people will buy 125 widgets instead, so I can generate the same revenue (and adjust my margins accordingly).
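Alistair's widget arithmetic is worth working through explicitly: at the lower price, revenue stays flat at 125 units, but the margin does not come along for free.

```python
# Working through the widget example: $10 price, $5 cost, 100 units.
price, cost, units = 10, 5, 100
revenue = price * units                       # $1,000
profit = (price - cost) * units               # $500

# Measurements say people will only pay $8. Revised assumption to test:
new_price = 8
units_for_same_revenue = revenue / new_price  # 125 units
new_profit = (new_price - cost) * units_for_same_revenue

print(units_for_same_revenue)  # 125.0 units keeps revenue at $1,000
print(new_profit)              # but profit falls from $500 to $375.0
```

So the hazy result isn't a verdict; it's a revised model ("will 125 people buy at $8, and can I live with a $3 margin?") that the next experiment can test.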
The Magic 8-Ball seldom says “Ask again later.” What it often says is “Revise your assumptions and test something else.” That's why the most critical attribute of early-stage product development is the ability to learn quickly.
LSC: For companies that aren't used to thinking in terms of metrics, any tips for getting a team on board?
Alistair: As we say in the book, once, the leader was someone who could convince others to act in the absence of information. Today, the leader is someone who can ask the right questions. Data-driven business is here today; it's just not evenly distributed. That's changing, slowly. But there are things you can do to hasten it along.
The first is to use a small data victory to create an appetite for a bigger one. Take, for example, David Boyle at EMI. The company had billions of transactions locked away that might reveal how and why people bought music. But there was little support for analyzing it. So David started his own analysis project, surveying a million people about their music. This was brand new data, and he evangelized it within the organization. Everyone wanted some. Once there was a demand for this data, he earned the political capital to dig into the vast troves of historical information.
The second is to treat everything as a study. Many companies like certainty. We've joked that if a startup is an organization in search of a sustainable, repeatable business model, then a big company is an organization designed to perpetuate such a model. That's in direct conflict with disruption and innovation. So how do you deal with a boss who wants certainty? When we spoke with DHL, they told us that they consider every new initiative a learning exercise that might just happen to produce a new product or service. They've launched new business ideas that failed—but that failure taught them valuable things about a particular market, which they then shared with customers and used for strategic planning.
The simple reality is that with cloud computing, prototyping, social media, and other recent tools, the cost of trying something out is now vanishingly small. In fact, it's often cheaper than the old cost of a big study or research project. Companies need to learn that trying something out is how you conduct the study. Let's say you want to know about the burgeoning market for mobile widgets. So you create a mobile widget MVP. If it fails, you've successfully studied it. If it succeeds, you've successfully studied it, and built a new venture along the way.
The third is, when in doubt, collect and analyze data. We've done some work with the folks at Code for America. In one case, a group was trying to improve the Failure to Appear rate for people accused of a crime. This is a big deal: if you don't show up for court, it triggers a downward spiral of arrests and incarceration. But there were a lot of challenges to tackling the problem directly, so they took a different approach: they created tools to visualize the criminal justice system as a supply chain, making it easier to identify bottlenecks that showed where the system needed work most urgently.
If you're an intrapreneur tilting at corporate windmills, you need to embrace these kinds of tactics. Use small data victories to give management a taste of what's possible. Frame your work as a study that will be useful even if it fails. And when you run into roadblocks, grab data and analyze it in new ways to find where you'll get the most leverage.
--
Our webcast with Alistair and Ben, Lean Analytics for Non-tech Companies, is this Friday; register today and come ready with your questions. Alistair will also be giving a workshop at The Lean Startup Conference, December 9 – 11 in San Francisco. Join us there.
Monday, October 21, 2013
Speaker Lineup for the 2013 Lean Startup Conference
Guest post by Lisa Regan, writer for The Lean Startup Conference.
Between webcasts and interviews, we’ve been gradually introducing some of the speakers who are appearing at this year’s Lean Startup Conference. Now we’re ready to announce the full lineup, along with a special deal, explained below. There are some speakers on this year’s roster whom you've heard of before and who deliver great talks every time out—people like Marc Andreessen, Steve Blank, Reid Hoffman, Chris Dixon and Kent Beck. But we’ve also put a big emphasis on finding terrific speakers who are new to the conference—people we’ve been posting about, like Steven Hodas, Mariya Yao, Khalid Smith and Nicole Tucker-Smith.
Also among the new speakers is Keya Dannenbaum, founder and CEO of ElectNext. She’ll talk in practical terms about how her organization is learning to be one that pivots. Pivoting means accepting hard truths—and to get a sense of Keya’s experience in that department, we asked her to give us an example of something she built that, in retrospect, she could have built more quickly to have learned the same thing. Here’s what she said:
It’s well-known that publishers, particularly those of the traditional news variety, are facing challenging financial times, and we always knew that our means of making money wouldn’t be to charge them. So we spent some time iterating--leanly, we thought--on the revenue model.
This past spring we stumbled on a genius idea – in order to make our publisher products work, we were scanning millions of articles and categorizing them by politicians, issues and popularity, among other dimensions. We could easily (easily! ha) take all that data and package it up for politicians, in a product to help them monitor their earned media. We visited political offices, and saw them painfully cutting articles out of physical newspapers and pasting them in binders.
We saw “sophisticated” operations using Google Alerts. We asked why they weren’t using BGov or Meltwater, and we heard they were priced out of those markets. We pitched a product we hadn’t built yet and had 80% signup rates for a free trial. BOOM. Lean validation. So we spent a couple months building the thing. We put together a small salesforce to sell it…. And no one bought it.
There were a number of reasons: the users (whose net promoter scores averaged 9.5! Irrelevant, it turns out) weren't the buyers; the buyers wanted a feature set no one would have used; the buyers only made purchases once a year…the list goes on. But the real reason for that product's failure was our mistake in not charging up front. We would have learned everything we needed to know in the third week of assessing the opportunity. And that's why we now pre-sell all our paid products.
Now’s a good time to remind you that many of our 2013 speakers are participating in free webcasts this fall—which include live Q&A with participants. Past webcasts have featured returning Lean Startup experts, like Patrick Vlaskovits and Brant Cooper, who talked with Eric Ries about Lean Startup in enterprise companies. New webcasts include one on Friday, October 25, Lean Analytics for Non-tech Companies with popular speakers Alistair Croll and Ben Yoskovitz, and one on November 5, Lean Impact--Implementing Lean Startup in Mission-driven Organizations. Tomorrow, October 22, we’ve got a webcast on Lean Startup for Growing Companies with Wyatt Jenkins, VP of product at Shutterstock, Ari Gesher, senior engineer at Palantir, and Eric.
This is Wyatt’s first time speaking at the conference, and he’ll be looking at some truly advanced techniques for A/B testing. Meantime, we’ve asked him to talk about how Shutterstock has scaled Lean Startup techniques as the company has grown into one of the biggest two-sided marketplaces on the web. He gave us a quick rundown on two major growth challenges he’s faced:
Challenge #1: Vanity Metrics. The problem of vanity metrics seemed to get worse as we added more employees. Everyone wanted to be more accountable and measure their progress (a great problem to have), but there weren't enough meaningful metrics for everyone to rally around, so people latched onto other metrics that may or may not be helpful. Product team velocity is a great example. It's a helpful metric when trying to figure out the efficiency of a team, but at the end of the day, it has no bearing on whether what the team is building is effective or not. You can have a team with great velocity building crap that no one wants or vice versa.
Another vanity metric I've seen is visits—a stat that I've rarely seen anyone take action on. Usage of a new feature was a useless stat for us until we changed it to repeat usage. Lastly, we used to track deploys to production because we were so proud of our continuous deployment practice, but past a certain threshold (we are somewhere north of 200 deploys a month) this is a vanity metric.
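Wyatt's distinction between raw visits and repeat usage is easy to make concrete once you log per-user events. Here's a minimal sketch in Python (the event log, user ids, and the "search" feature name are all invented for illustration):

```python
from collections import Counter

def repeat_usage_rate(events, feature="search"):
    """Share of a feature's users who used it more than once.

    `events` is a list of (user_id, feature_name) tuples -- a deliberately
    simplified stand-in for a real analytics event log.
    """
    uses = Counter(user for user, name in events if name == feature)
    if not uses:
        return 0.0
    repeaters = sum(1 for count in uses.values() if count > 1)
    return repeaters / len(uses)

# Invented data: "a" and "c" came back; "b" tried the feature once and left.
events = [
    ("a", "search"), ("a", "search"),
    ("b", "search"),
    ("c", "search"), ("c", "search"), ("c", "search"),
]
print(repeat_usage_rate(events))  # -> 0.6666666666666666
```

A raw visit count would score this feature the same whether or not anyone ever returned; the repeat rate is what tells you whether the feature is actually sticky.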
Challenge #2: Technical Debt. This might be one of our biggest challenges, and one we actively work on every day. As with most evolving systems that are over 10 years old, ours has increased in complexity, making it difficult to move quickly. There are some amazing efforts going on internally to simplify our system and modernize our stack, but hiring teams of developers for this effort is still something new to our organization. Oftentimes decisions are made without an understanding of the technical ramifications, not because someone is vindictive, but because they have a pre-conceived notion of how complex something is and they don't want to call a meeting about it (part of keeping our process streamlined). Working to keep systems simple is an important part of our product strategy. We try to delete less-used features often in order to have a simple, scalable system. Still, we have a ways to go.
Other new speakers this year include Kimberly Bryant of Black Girls Code, Robin Chase of Zipcar and Buzzcar, Matt Mullenweg of Automattic/WordPress, Mureen Allen of Optum, Alexis Ringwald of LearnUp, Catherine Bracy of Code for America, John Goulah of Etsy and many, many more.
Among the speakers we’re bringing back this year are those whose talks generated a lot of interest in the past and who have more we can learn from. So, for example, you’ll see Andres Glusman of Meetup, who last year talked engagingly about the myths he confronted in implementing Lean Startup methods at his company, Malkovich Bias among them:
Dan Milstein gave a popular 2012 talk on how to run a Five Whys and deal with failure in a profitable way. On a webcast this summer, he shared direct advice for the engineering crowd, and he’ll have more ideas-you-use-right-now for us in December.
Justin Wilcox, who joined us earlier this year for a lively webcast on applying Lean Startup ideas beyond Silicon Valley, will be returning to follow up on his surprising 2012 talk on pricing and MVPs. You’ll remember Justin for his helpful distinction between a business and a hobby, and the tendency of startups to accidentally wind up in the second category.
Steph Hay opened our eyes last year with a talk on testing content strategies. She’ll be back with deeper advice in December, which she previewed in an interview earlier this year. Here’s her 2012 talk.
You’ll also see Diane Tavenner with an update on Summit Charter Schools, where last year rapid iteration was both raising math scores and revealing the weakness of lecture as a knowledge delivery method. Her 2012 talk—which suggested the possibility of truly disruptive innovation to address entrenched issues in education--was a huge discussion starter.
--
Now that you know some of the people who’ll be speaking at the 2013 Lean Startup Conference, you’re probably wishing you already had tickets. The good news is that in a way, you can get them. Until tomorrow, October 22 at 11:59 PT, we’re rolling back prices to our mid-August level—that’s three price breaks back, and more than 40% off the standard rate. Register now, as this price won’t be available after tomorrow.
Saturday, October 19, 2013
Rapid Iteration for Mobile App Design
Guest post by Lisa Regan, writer for The Lean Startup Conference.
As we’ve mentioned before, this year’s Lean Startup Conference features a lot of speakers who have incredible expertise to share but are new to our event. Mariya Yao is one such speaker. She’s the founder and Creative Director at Xanadu, a mobile strategy and design consultancy helping to guide app developers to success in a rapidly-changing, often chaotic mobile ecosystem.
We asked her a few questions about how mobile developers can measure and address their product’s performance in an environment that is both incredibly competitive and rapidly changing. She provided some basic answers for us here and will go into more depth at the conference.
LSC: You've spoken before about strategic failures--where people build the wrong product--versus tactical fails, where people build the product wrong. This is a great distinction; so how can a mobile app developer know which of these is their particular problem? In other words, are there dead giveaways that the problem with an app is strategic rather than tactical?
Mariya: A strategic failure occurs when--as Paul Graham is fond of saying--you build a product no one wants. This means that you can't easily get users through the door despite solid marketing efforts, they aren't proactively inviting their friends and colleagues, or no one is paying for your product. A tactical failure occurs when you do grow quickly or easily attract passionate users, but see major drop-offs at key points in product usage due to poor implementation and user experience.
When you build a product that is clearly performing poorly from the get-go and you've ruled out basic technical, marketing, or executive issues, it's very likely the product is a strategic fail. However, what often happens is a startup builds a product people like but don't love. They'll typically appear to do well early on, but won't have enough of a passionate following to achieve meaningful growth or revenues.
There are two questions that I recommend startups use to differentiate between being liked versus being loved. First is the question Sean Ellis popularized, where you ask your users, "How disappointed would you be if you could no longer use our product?" and have them answer with either, "Very Disappointed," "Somewhat Disappointed," "Not Disappointed," or "I no longer use the product." Sean did research across hundreds of startups and discovered that companies that had fewer than 40% of their users answer "Very Disappointed" tended to struggle with building a successful and sustainable business.
The second question is known as the Net Promoter Score, where you ask your users, "On a scale from 0-10, how likely are you to recommend us to your friends?" You mark those who answer 0-6 as Detractors, 9-10 as Promoters, and 7-8 as Neutral. Your Net Promoter score is the percent of Promoters minus your percentage of Detractors, which should be a number between -100 and +100. The world's most successful companies typically score around +50, and top performing tech companies like Apple, Google, and Amazon regularly score over +70.
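Both survey scores come down to simple percentages. A quick sketch of the arithmetic (the sample responses below are invented):

```python
def sean_ellis_score(answers):
    """Percent of respondents who answer 'Very Disappointed'."""
    return 100 * answers.count("Very Disappointed") / len(answers)

def net_promoter_score(ratings):
    """NPS: percent promoters (9-10) minus percent detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

ratings = [10, 9, 9, 8, 7, 6, 3, 10, 9, 2]  # 5 promoters, 3 detractors
print(net_promoter_score(ratings))  # -> 20.0

answers = ["Very Disappointed", "Very Disappointed", "Somewhat Disappointed",
           "Not Disappointed", "Very Disappointed"]
print(sean_ellis_score(answers))  # -> 60.0, above the 40% bar Sean Ellis found
```

Note that the 7-8 "Neutral" answers count in the denominator but cancel out of the numerator, which is why NPS can range from -100 (all detractors) to +100 (all promoters).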
LSC: You've also spoken before about the fact that mobile apps suffer a major drop-off in engagement between opening the app and registering. When that happens, what has a developer typically failed to validate before this step? How can they test for this during app development?
Mariya: The drop-off between opening the app and registering tends to occur because an app developer doesn't clearly communicate the value of their app before demanding that a user put in work to register an account. This is a violation of the "give before you take" principle that governs social interactions.
For example, you'll often see apps where the very first screen is a Facebook-only login screen. Most of the time, all you see here is the title of the app, some vague background image or tagline, and this big Facebook Connect button. While social registration can be easier than regular registration, you're also asking users to give you access to their social data before you've clearly shown them WHAT your app does and communicated clearly WHY they should hand over sensitive information.
Imagine if a random stranger, someone you know nothing about, comes up to you and immediately demands to know your birthday, your relationship status, and all your friends' email addresses. Obviously that'd be wildly off-putting and you'd refuse the request. That behavior is socially awkward for people AND socially awkward for apps, and the numbers show this. The typical drop-off rate at these kinds of Facebook-only login screens is about 30% and I've even seen cases where it is over 50%.
My advice for developers who want to combat this immediate drop-off is to test different kinds of onboarding flows for brand new users and try to delay registration until user data is absolutely needed. There are many apps that deliver plenty of utility and value without mandating that a user create an account up front. Great examples include Yelp and Flipboard. Others like Airbnb allow you to browse listings to your heart's content and only require registration when you are at the last step of completing a booking. That said, there will always be categories of apps — such as social networks or messaging apps — that require a user's identity in order to deliver value. In those cases, I'd recommend testing very short "Learn more" overviews prior to registration and optimizing your social invite flows, as they will often be the most compelling ways to get new users over the registration hurdle.
If a developer has a live product with sufficient usage already in the market, I'd recommend running several split tests with delayed registration if he or she hasn't already. For developers who are still in early ideation phases and are building utility apps that don't require user identification, one quick way to get early feedback is to create a multitude of paper prototypes on index cards that test different opening flows and show them to potential users in the app's intended context. For apps that are social or require a user's identity to be useful, a prototype needs to be more fully fleshed out to give meaningful test results. Here I'd recommend developers build as minimal as possible of an HTML5 app, hook up all the requisite analytics, and test as early as possible for retention on the core action loop they want their users to take. For less technical developers, I'll be covering some methods and tools to get functional prototypes built with less dependency on engineering know-how.
LSC: You do a lot of work in helping app developers create long-term engagement. Do you have examples of app-specific measures that developers really should pay attention to (and maybe generally don't) in order to validate customers' engagement?
Mariya: Compared to desktop usage patterns, mobile apps tend to see more frequent sessions but significantly lower session lengths. For example, a product that has both a desktop and a mobile presence might see desktop users visit 10-20 times a month for session lengths of over 10 minutes on average, whereas on mobile they might see users visit 30-50 times a month for less than 60 seconds at a time.
Another difference you'll see is that people will visit hundreds of websites in a month on desktop, but their bandwidth for apps is much more limited. On mobile, despite the fact that there are millions of offerings in the app stores, the average consumer only uses about 15-20 different apps per week on a regular basis. There's a limit on both the real estate on a mobile user's home screen and their capacity for adopting new apps for habitual use.
Thus for many types of mobile apps, the holy grail is to become a daily habit for users. For your app category, you want to be the "go-to" app that users depend on. Aim to get your users to come back every day, maybe even multiple times a day, in order to have a shot at broad long-term retention. A popular metric for measuring retention in the mobile games industry is DAU / MAU, or daily active users divided by monthly active users, and I highly recommend that consumer-facing mobile app developers keep track of that metric as well.
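The DAU/MAU ratio itself is just average daily actives over the window divided by unique monthly actives. A minimal Python sketch with invented data:

```python
from datetime import date, timedelta

def dau_mau(daily_actives):
    """daily_actives maps each date in the window to a set of active user ids.
    Returns average DAU divided by MAU (unique users across the window)."""
    mau = len(set().union(*daily_actives.values()))
    avg_dau = sum(len(users) for users in daily_actives.values()) / len(daily_actives)
    return avg_dau / mau

# Invented month: 3 habitual users show up every day; 27 others appear once.
start = date(2013, 10, 1)
month = {start + timedelta(days=d): {"u1", "u2", "u3"} for d in range(30)}
for day, n in zip(month, range(27)):
    month[day].add(f"casual{n}")

print(round(dau_mau(month), 2))  # -> 0.13
```

An app that every monthly user opened daily would score 1.0; the one-time users in this sample drag the ratio down, which is exactly the habit gap the metric is designed to expose.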
LSC: How can app developers, particularly those working in a cross-platform environment, quickly test and validate new features and processes?
Mariya: Moving quickly across multiple platforms is tough because development and testing are both so much slower and more bug-prone than on desktop or a single platform. Generally speaking, I'd advise developers to focus on nailing the product experience on a single platform first before becoming too ambitious on the cross-platform front, but occasionally you come across apps whose value comes from being ubiquitous.
Regardless of what app or feature you want to test, I'd recommend you first follow Eric's advice in The Lean Startup and clearly identify your hypotheses and unanswered questions. Then you should decide effective ways to test your assumptions and pre-determine what your metrics of success should be in order for you to make a go or no-go decision to build. Much of this is the same whether you are building for mobile or web, though on mobile there are some specific tactics and tools you can use to prototype aspects of your new products or features quickly that I'll share in my talk at the Lean Startup Conference. I shamelessly encourage all of you to attend my session on "Rapid Iteration on Mobile" if you'd like to learn more.
LSC: Let's say an app has 2,000 monthly active users and a simple function those people like—but the developer has done some testing and thinks there's a much bigger market in a related but different product. How would you recommend that the developer pivot to the new idea without losing all of the existing customers?
Mariya: My advice would heavily depend on the resources--time, money, and engineering prowess--that the app developer has available and what the growth metrics and business model look like for this existing app with 2,000 MAU. For the vast majority of social games or consumer-facing mobile products, 2,000 MAU is probably too low of a user base to sustain a real business model as typically only 1%-5% of your users will convert to paying customers and advertisers aren't usually enticed into partnerships unless your numbers are well into the millions. If there aren't real drivers of long-term growth behind this app, it may be the right (albeit incredibly tough) strategic decision to pursue a higher potential market even if it means abandoning some early wins.
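The back-of-envelope math behind that judgment is worth spelling out. Using the 1%-5% free-to-paid conversion range Mariya cites:

```python
mau = 2000
for conversion in (0.01, 0.05):
    paying = round(mau * conversion)
    print(f"{conversion:.0%} of {mau} MAU -> {paying} paying customers")
# 1% of 2000 MAU -> 20 paying customers
# 5% of 2000 MAU -> 100 paying customers
```

Even at the optimistic end of the range, 100 paying customers rarely covers a team's costs, which is why abandoning those early wins can be the right strategic call.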
That said, there are many ways to test new products and markets relatively cheaply so any major pivoting decision can and should be vetted thoroughly. If the new app idea is closely related to the existing one, the app developer should try cross-promoting the new product to the existing user base. 2,000 MAU is a ripe field for recruiting potential users and conducting user research and usability studies. He or she may even choose to launch the product in parallel with the existing one if the company can manage to do this without sacrificing too much momentum or morale. By comparing the live performance of both products in the market, you'll get the most accurate data to inform your strategic product decisions.
For an existing product on mobile, there are many ways to segment your audience to test new features. One of the most popular is to release an app in a limited number of countries, such as Canada or New Zealand, prior to a global launch. Another is to "white-label" your app and release parallel apps in the same market that test different value propositions. Yet another is to test with mobile web apps or Android apps first prior to officially launching. For example, pushing new changes out on Android is typically much faster than with iOS so it's popular, especially with mobile game developers, to fine-tune apps on Android rather than starting with iOS.
--
Learn more at The Lean Startup Conference, December 9 - 11 in San Francisco. Register today.
As we’ve mentioned before, this year’s Lean Startup Conference features a lot of speakers who have incredible expertise to share but are new to our event. Mariya Yao is one such speaker. She’s the founder and Creative Director at Xanadu, a mobile strategy and design consultancy helping to guide app developers to success in a rapidly-changing, often chaotic mobile ecosystem.
We asked her a few questions about how mobile developers can measure and address their product’s performance in an environment that is both incredibly competitive and rapidly changing. She provided some basic answers for us here and will go into more depth at the conference.
LSC: You've spoken before about strategic failures--where people build the wrong product--versus tactical fails, where people build the product wrong. This is a great distinction; so how can a mobile app developer know which of these is their particular problem? In other words, are there dead giveaways that the problem with an app is strategic rather than tactical?
Mariya: A strategic failure occurs when--as Paul Graham is fond of saying--you build a product no one wants. This means that you can't easily get users through the door despite solid marketing efforts, they aren't proactively inviting their friends and colleagues, or no one is paying for your product. A tactical failure occurs when you do grow quickly or easily attract passionate users, but see major drop-offs at key points in product usage due to poor implementation and user experience.
When you build a product that is clearly performing poorly from the get-go and you've ruled out basic technical, marketing, or executive issues, it's very likely the product is a strategic fail. However, what often happens is a startup builds a product people like but don't love. They'll typically appear to do well early on, but won't have enough of a passionate following to achieve meaningful growth or revenues.
There are two questions that I recommend startups use to differentiate between being liked versus being loved. First is the question Sean Ellis popularized, where you ask your users, "How disappointed would you be if you could no longer use our product?" and have them answer with either, "Very Disappointed," "Somewhat Disappointed," "Not Disappointed," or "I no longer use the product." Sean did research across hundreds of startups and discovered that companies that had fewer than 40% of their users answer "Very Disappointed" tended to struggle with building a successful and sustainable business.
The second question is known as the Net Promoter Score, where you ask your users, "On a scale from 0-10, how likely are you to recommend us to your friends?" You mark those who answer 0-6 as Detractors, 9-10 as Promoters, and 7-8 as Neutral. Your Net Promoter score is the percent of Promoters minus your percentage of Detractors, which should be a number between -100 and +100. The world's most successful companies typically score around +50, and top performing tech companies like Apple, Google, and Amazon regularly score over +70.
LSC: You've also spoken before about the fact that mobile apps suffer a major dropoff in engagement between opening the app and registering it. When that happens, what has a developer typically failed to validate before this step? How can they test for this in the app development?
Mariya: The drop-off between opening the app and registering tends to occur because an app developer doesn't clearly communicate the value of their app before demanding that a user put in work to register an account. This is a violation of the "give before you take" principle that governs social interactions.
For example, you'll often see apps where the very first screen is a Facebook-only login screen. Most of the time, all you see here is the title of the app, some vague background image or tagline, and this big Facebook Connect button. While social registration can be easier than regular registration, you're also asking users to give you access to their social data before you've clearly shown them WHAT your app does and communicated clearly WHY they should hand over sensitive information.
Imagine if a random stranger comes up to, someone you know nothing about, and immediately demands to know your birthday, your relationship status, and all your friend's email addresses. Obviously that'd be wildly off-putting and you'd refuse his request. That behavior is socially awkward for people AND socially awkward for apps, and the numbers show this. The typical drop-off rate at these kinds of Facebook-only login screens is about 30% and I've even seen cases where it is over 50%.
My advice for developers who want to combat this immediate drop-off is to test different kinds of onboarding flows for brand new users and try to delay registration until user data is absolutely needed. There are many apps that deliver plenty of utility and value without mandating that a user create an account up front. Great examples include Yelp and Flipboard. Others like Airbnb allow you to browse listings to your heart's content and only require registration when you are at the last step of completing a booking. That said, there will always be categories of apps — such as social networks or messaging apps — that require a user's identity in order to deliver value. In those cases, I'd recommend testing very short "Learn more" overviews prior to registration and optimizing your social invite flows, as they will often be the most compelling ways to get new users over the registration hurdle.
If a developer has a live product with sufficient usage already in the market, I'd recommend running several split tests with delayed registration if he or she hasn't already. For developers who are still in early ideation phases and are building utility apps that don't require user identification, one quick way to get early feedback is to create a multitude of paper prototypes on index cards that test different opening flows and show them to potential users in the app's intended context. For apps that are social or require a user's identity to be useful, a prototype needs to be more fully fleshed out to give meaningful test results. Here I'd recommend developers build as minimal as possible of an HTML5 app, hook up all the requisite analytics, and test as early as possible for retention on the core action loop they want their users to take. For less technical developers, I'll be covering some methods and tools to get functional prototypes built with less dependency on engineering know-how.
LSC: You do a lot of work in helping app developers create longterm engagement. Do you have examples of app-specific measures that developers really should pay attention to (and maybe generally don't) in order to validate customers' engagement?
Mariya: Compared to desktop usage patterns, mobile apps tend to see more frequent sessions but significantly lower session lengths. For example, a product that has both a desktop and a mobile presence might see desktop users visit 10-20 times a month for session lengths of over 10 minutes on average, whereas on mobile they might see users visit 30-50 times a month for less than 60 seconds at a time.
Another difference you'll see is that people will visit hundreds of websites in a month on desktop, but their bandwidth for apps is much more limited. On mobile, despite the fact that there are millions of offerings in the app stores, the average consumer only uses about 15-20 different apps per week on a regular basis. There's a limit on both the real estate on a mobile user's home screen and their capacity for adopting new apps for habitual use.
Thus for many types of mobile apps, the holy grail is to become a daily habit for users. For your app category, you want to be the "go-to" app that users depend on. Aim to get your users to come back every day, maybe even multiple times a day, in order to have a shot at broad long-term retention. A popular metric for measuring retention in the mobile games industry is DAU / MAU, or daily active users divided by monthly active users, and I highly recommend that consumer-facing mobile app developers keep track of that metric as well.
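The DAU/MAU ratio Mariya recommends is straightforward to track. A quick Python sketch, with made-up numbers for illustration:

```python
def stickiness(daily_active_users, monthly_active_users):
    # DAU / MAU: the share of the monthly audience active on a given
    # day; 1.0 would mean every monthly user opens the app daily.
    if monthly_active_users == 0:
        return 0.0
    return daily_active_users / monthly_active_users

# A hypothetical app with 600 daily actives out of 2,000 monthly actives:
assert stickiness(600, 2000) == 0.3
```

A daily-habit app pushes this ratio toward 1.0; an occasionally-used utility sits far lower.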
LSC: How can app developers, particularly those working in a cross-platform environment, quickly test and validate new features and processes?
Mariya: Moving quickly across multiple platforms is tough because development and testing are both so much slower and more bug-prone than on desktop or a single platform. Generally speaking, I'd advise developers to focus on nailing the product experience on a single platform first before becoming too ambitious on the cross-platform front, but occasionally you come across apps whose value comes from being ubiquitous.
Regardless of what app or feature you want to test, I'd recommend you first follow Eric's advice in The Lean Startup and clearly identify your hypotheses and unanswered questions. Then you should decide effective ways to test your assumptions and pre-determine what your metrics of success should be in order for you to make a go or no-go decision to build. Much of this is the same whether you are building for mobile or web, though on mobile there are some specific tactics and tools you can use to prototype aspects of your new products or features quickly that I'll share in my talk at the Lean Startup Conference. I shamelessly encourage all of you to attend my session on "Rapid Iteration on Mobile" if you'd like to learn more.
LSC: Let's say an app has 2,000 monthly active users and a simple function those people like—but the developer has done some testing and thinks there's a much bigger market in a related but different product. How would you recommend that the developer pivot to the new idea without losing all of the existing customers?
Mariya: My advice would heavily depend on the resources--time, money, and engineering prowess--that the app developer has available and what the growth metrics and business model look like for this existing app with 2,000 MAU. For the vast majority of social games or consumer-facing mobile products, 2,000 MAU is probably too low of a user base to sustain a real business model as typically only 1%-5% of your users will convert to paying customers and advertisers aren't usually enticed into partnerships unless your numbers are well into the millions. If there aren't real drivers of long-term growth behind this app, it may be the right (albeit incredibly tough) strategic decision to pursue a higher potential market even if it means abandoning some early wins.
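Mariya's arithmetic is easy to check: at the 1%-5% conversion rates she cites, 2,000 MAU supports very few paying customers. A quick sketch, using only the figures from her answer:

```python
def expected_payers(mau, conversion_rate):
    # Expected number of paying users at a given free-to-paid rate.
    return int(mau * conversion_rate)

# 2,000 MAU at the 1%-5% conversion range Mariya mentions:
assert expected_payers(2000, 0.01) == 20
assert expected_payers(2000, 0.05) == 100
```

Twenty to a hundred paying users rarely covers even a small team, which is why she treats 2,000 MAU as below a sustainable base.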
That said, there are many ways to test new products and markets relatively cheaply so any major pivoting decision can and should be vetted thoroughly. If the new app idea is closely related to the existing one, the app developer should try cross-promoting the new product to his or her existing user base. 2,000 MAU is a ripe field for recruiting potential users and conducting user research and usability studies. He or she may even choose to launch the product in parallel with the existing one if the company can manage to do this without sacrificing too much momentum or morale. By comparing the live performance of both products in the market, you'll get the most accurate data to inform your strategic product decisions.
For an existing product on mobile, there are many ways to segment your audience to test new features. One of the most popular is to release an app in a limited number of countries, such as Canada or New Zealand, prior to a global launch. Another is to "white-label" your app and release parallel apps in the same market that test different value propositions. Yet another is to test with mobile web apps or Android apps first prior to officially launching. For example, pushing new changes out on Android is typically much faster than with iOS so it's popular, especially with mobile game developers, to fine-tune apps on Android rather than starting with iOS.
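A limited-country release like the Canada/New Zealand tactic is often just a server-side gate on the user's storefront country. A hypothetical sketch; the country list and function name are invented for illustration:

```python
TEST_MARKET_COUNTRIES = {"CA", "NZ"}  # staged-rollout markets before global launch

def feature_enabled(user_country, globally_launched=False):
    # Expose the new feature only in test markets until the
    # global-launch flag is flipped.
    return globally_launched or user_country in TEST_MARKET_COUNTRIES
```

The same gate works for white-labeled parallel apps: each build simply checks a different flag or country set.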
--
Learn more at The Lean Startup Conference, December 9 - 11 in San Francisco. Register today.
Wednesday, October 16, 2013
Lean Startup at Scale
Guest post by Lisa Regan, writer for The Lean Startup Conference.
As Lean Startup methods have been used now for a number of years, we’ve become increasingly interested in how companies use them to sustain growth. That is, once you’re no longer a small company and you have some success, how do you execute and continue to grow through innovation? Next Tuesday, October 22 at 10a PT, we’ll take a look at this advanced entrepreneurship question. In a webcast conversation, Lean Startup for Growing Companies, Eric Ries will talk with two Lean Startup Conference speakers: Wyatt Jenkins, VP of Product at Shutterstock, a stock photo site that has become one of the world’s largest two-sided marketplaces, and has expanded since its founding in 2003 from a single development team to 12 cross-functional teams spanning all phases of product development; and Ari Gesher, a senior engineer at Palantir Technologies, which specializes in data-mining software for a diverse set of problems across verticals, including disaster recovery work recognized by the Clinton Global Initiative, and has grown since its founding in 2004 to over 1,000 employees and undisclosed revenues reported to be approaching $1B today. The webcast is free with registration and will include a live Q&A with attendees.
Below are excerpts from conversations we had with Ari and Wyatt about their growing companies and about some of their suggestions for staying with Lean Startup as you expand:
When we asked Ari to talk about how he addressed some of the challenges of growth at Palantir, he gave the example of developing iterative cycles that could accommodate a scaling company:
Ari: At Palantir, we've had to tailor our software development process over time to deal with the scale of the team and the scale of our core software products. Each plateau of scale has required adjustments--changing or dropping an old process that wasn't working or creating a new process to deal with novel challenges. One good example is the way in which we've adjusted the length of different phases of our agile sprints. We don't follow a set agile methodology, but rather follow a more home-grown, minimal version of various approaches. We work in prototypically four-week iterations, with quality engineers and software developers working in close collaboration.
It wasn’t always this way. Palantir is a deep technical play and we had a lot of code to write just to fill out the product vision that we had already validated with potential customers; it took us two straight years of development to go from early prototypes to software that could be used in production. In those halcyon days, we weren't using iterative cycles as there was often nothing that could be tested for long periods of time (aside from unit testing, of course). During this period, the Palantir Gotham team grew from five developers to around 35.
This finally bit us after a four month stint of development blew through its testing schedule by a factor of four: two scheduled weeks turned into two months before the product reached stability.
So what was going on? Software development is all about managing complexity and the bigger and more mature the codebase gets, the more complex it gets. The interconnectedness had come to a head, such that new code was much more likely to disrupt existing code than ever before. As we started looking at bug counts something else became clear: we could now create bugs faster than we could find them.
We had finally hit the point where we needed to impose clear process - process whose main goal was to ensure that software stayed stable. But we couldn't have identified this without having clear metrics (that high bug count) to assess our development process. The result was a new process of four-week iterative cycles all about throttling new code. Here's the simplest form of that cycle:
- Week -1 - Planning/End-of-Cycle - Software engineers are planning: writing specifications, doing light prototyping, and experimentation. During this week, quality engineers are attending planning meetings but also doing end-of-cycle testing of the previous iteration - the final testing on an internal release before it's deployed to our dog-fooding systems.
- Week 0 - New Feature 1 - Software engineers are busy building brand new features. Quality engineers are writing up test plans based on the specifications and creating test data (if necessary). By mid-week, the software engineers have to deliver something testable so the quality engineers can start testing and (of course) filing bugs.
- Week 1 - New Feature 2 - Development and testing continue together. By the end of the week, the new code should be stable, with all filed bugs fixed.
- Week 2 - Regression - Software engineers start paying off technical debt by attacking a queue of existing bugs. Quality engineers make full regression passes through the software, looking for old functionality that may have been impacted by the new changes this iteration.
- Week 3 - Planning/End-of-Cycle - See Week -1.
The new process let us keep the velocity of change in the codebase low, keeping the resistance manageable.
More important, the iteration framework gave us something like a meta-process: we could try new ideas about how to manage development process and measure them against historical data to see what could further optimize the process.
Note that the iteration template above is our prototypical iteration. What started as a four-week cycle has since expanded to a five-week cycle, adding a second week of regression to pay down technical debt. For the final iteration that turns into an external release, we push out to a six-week cycle, adding an additional week of testing: end-of-release testing.
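For concreteness, the three iteration templates can be written down as simple week schedules. The labels below paraphrase Ari's description and are not Palantir's internal terminology:

```python
# Four-week base cycle, as listed above.
BASE_CYCLE = [
    "planning / end-of-cycle",
    "new feature 1",
    "new feature 2",
    "regression",
]

# Five-week cycle: a second regression week to pay down technical debt.
FIVE_WEEK_CYCLE = BASE_CYCLE + ["regression"]

# Six-week release cycle: an extra week of end-of-release testing.
RELEASE_CYCLE = FIVE_WEEK_CYCLE + ["end-of-release testing"]

assert (len(BASE_CYCLE), len(FIVE_WEEK_CYCLE), len(RELEASE_CYCLE)) == (4, 5, 6)
```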
We asked Wyatt about his experience with testing at Shutterstock, and the best ways to approach it in a growing company:
Wyatt: The concept that your "big idea" is nothing but a hypothesis until you test it is a part of the Shutterstock culture. We learned some hard lessons via A/B testing, but realized quickly that the pace at which we could perform tests and the systems for reading tests had to be both excellent and easily accessible by many different groups in the organization. Because of this (and because we believe our data is a competitive advantage that we don't want to share with third parties), we chose to build many tools ourselves. At this point, we have our own open-sourced click tracking "lil brother," an internal data visualization tool built on Rickshaw, as well as an A/B testing platform called Absinthe. We pride ourselves on the speed of testing hypotheses and reading those tests. Shutterstock is in a competitive market, but we have the most traffic and the most usage, meaning that we can run more tests (achieve significance sooner) and learn faster than our competitors.
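Wyatt's point about traffic and significance is a sample-size effect: the more users flow through a test each day, the sooner a real difference in conversion clears a significance threshold. Absinthe's internals aren't public, so here is a generic two-proportion z-test sketch using only the standard library:

```python
import math

def two_proportion_z_test(conversions_a, n_a, conversions_b, n_b):
    # Tests whether variant B's conversion rate differs from A's.
    # Returns (z statistic, two-sided p-value).
    p_a = conversions_a / n_a
    p_b = conversions_b / n_b
    pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 0.0, 1.0
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF (erf form).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# 10% vs. 15% conversion on 1,000 users per arm is clearly significant:
z, p = two_proportion_z_test(100, 1000, 150, 1000)
assert z > 3 and p < 0.01
```

Halve the traffic per arm and the same observed lift may no longer reach significance, which is exactly the advantage Wyatt describes.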
The upshot of this is that speed matters if you hope to build, measure and learn faster than your competition. As Shutterstock has grown, there are a few key elements to our continued development speed:
- Small, autonomous teams: The more a team can do on their own, the faster they can go. The hand-offs between teams are (mostly) eliminated, and close-working autonomy creates a good startup vibe as well.
- Continuous deployment: A key component of speed is to keep pushing out work. This has kept us lean as well—we don't have release trains, and code generally goes live to the site every day of the week.
- Don't get religious about process – just continuously improve. You need to strike a balance between process and problem-solving. You don’t want to get so committed to a particular process that you can’t adapt to problems as they actually present themselves. So, depending on which team you are talking to at Shutterstock, we may be Lean Startup or Agile or Kanban or some other method depending on the type of problem that team is designed to solve. But that doesn’t mean that we don’t take these ideas seriously. You want to be flexible enough to change your process across teams as you scale--but as a rule of thumb we question every new bit of process someone tries to add because process is easy to add and very difficult to remove once it's in place. For that reason, it’s important to go with methods, like Lean Startup, that have proven results for the kind of problem that team is trying to address.
Wyatt: We've employed a number of systems in the organization that keep all of us close to the customer. As we've grown, we now have a great qualitative research team dedicated to helping us stay close. There are 5-10 customers in our office (or remote) per week for developers, product owners and marketers to speak to and validate learning. The important aspect of scaling customer development is to build it into the process and make it so easy to put your idea in front of a customer that everybody does it. Skip the focus groups--they don't work and they take too long to set up. Create a steady stream of customer input that anyone can dip into.
Ari also talked about the changes in the information flow as a company grows, in his case thinking of it moving in a variety of directions--between the company and its customers, certainly, but also between the company and its own internal teams, or between parts of the company itself:
Ari: One of the biggest effects of scale has to do with internal information flows. For example, a small team of four people starting work on a product has it easy. By just sitting in one room, they can have amazing shared situational awareness. There's two things at work here: as new information comes into that room, it's as easy as an offhand comment or a lunch conversation to share. The second thing is that it has no history - everyone on the team is starting from a place of zero knowledge and accumulating context as it arrives.
I joined Palantir when it was one room and fifteen people--the above model was still pretty functional. We all knew a lot about what was going on. Things like shared meals helped us stay in sync such that the knowledge about everything from the state of the product to the outcome of our last meeting with potential customers was pervasive.
Another interesting feature of those early years: most of what we needed to know was outside the company.
Growth changed all that. We've been building the company for almost ten years now and we have three major locations in the United States, as well as about half-a-dozen overseas offices. Over a thousand people work here now.
Most of the information that most of the people need to do their jobs is actually generated inside the company now. We have a long history and so new employees have to spend a long time getting up to speed on the why, what, and how of everything that we do. As a result, we've had to design processes, protocols, and infrastructure to make sure that critical information flows to the right people in a timely manner. We've had to design and implement training programs to help on-board people to our culture and technology. We have a blog that's internal-only to capture important stories for posterity. We have a dizzying array of email lists and careful protocols about which lists get copied to make sure we can maintain a shared situational awareness that can only hope to approach what we had when we were in one room. We have an internal directory that lets people tag themselves with the things they know about--often learning is about finding who knows the answer and can explain the full why (not just the what) of something.
And none of that addresses exactly how the company as a whole learns from the world. There is now an immense flow of information coming in from the world, about how our product is working (and not working), about how our software is solving customer problems.
There are two teams that handle the bulk of the learning that comes in from the field: Support and Product Navigation. Both of these teams are collating, distilling, and turning into knowledge the information flowing from the field.
Unlike most support teams, the [Palantir] Support Team is not contacted by end users but instead by our people in the field. Our model is to place Forward Deployed Engineers (FDEs) on customer sites to aid with the integration of data, customization of our platforms and applications to a specific task, and training of the customer's analysts who will be the end-users of the system. If an FDE runs into trouble with the product or a novel situation that we did not anticipate, they contact the Support Team, which handles resolving the situation, communicating with the product team, and documenting the knowledge gained so the situation can be avoided in the future.
The Product Navigators are responsible for understanding the use cases that are covered by our products--the gaps, what new features we need, what's working, what's not. This is not bug tracking but more along the lines of customer development and guidance to the product team on what to build next. They collate, distill, and prioritize information coming in from the FDEs and product instrumentation about how well the product is actually solving customer problems and what features are in use (and which ones users don't understand). This is a part of what many other organizations would call product management, but we decouple the learning portion of that discipline from the design portion, which is handled by product designers and engineers based on the knowledge created by the Product Navigation team.
Both of these teams were created to handle the sheer scale of information coming in from the field as our customer base has grown from zero to where it is today.
--
Register today for our free webcast to join Eric, Ari, and Wyatt on October 22 at 10a PT. For even more Lean Startup learning, register for The Lean Startup Conference, December 9 – 11 in San Francisco.
Friday, October 11, 2013
The Entrepreneurial Enterprise
Guest post by Lisa Regan, writer for The Lean Startup Conference.
How can established companies benefit from implementing Lean Startup? To answer that question, we hosted a webcast conversation earlier this week with Eric Ries, Brant Cooper, and Patrick Vlaskovits. We’d like to share a few highlights and invite you to join the ongoing conversation by posing questions for Eric, Patrick, and Brant in the comments to this post. They’ll jump in to answer in the coming days. (Of course, you can also continue the conversation by attending The Lean Startup Conference. Our latest batch of discounted tickets is about to sell out, so register today.)
A lot of people think the “startup” part of “Lean Startup” means the ideas apply only in young companies. But, in fact, they can be crucial in large, established companies that need to find growth in new products and new markets. As Eric puts it in the webcast, “I’ve met now the CEOs of some of the biggest companies in the world, and I still spend time with the CEOs of high-growth Silicon Valley companies from the garage on up, and what all those people have in common is that they are seeking out sources of sustainable growth…. What we care about in the innovation community is growth that is driven by customers and creating value for them. And we just so happen to think that the best way to create sustainable growth in our highly disruptive world is through continuous innovation…. And that challenge I have seen to be identical no matter the size of the company.”
Patrick and Brant understand these problems intimately. They are the co-authors of The Lean Entrepreneur, and the co-founders of the Moves the Needle Group, which advises the innovation practices of Fortune 100 companies. Their webcast conversation with Eric began with a discussion of the ways that large companies come around to realizing they need to implement Lean Startup. Among other themes they covered: the conditions under which established companies implement Lean Startup; the hindrances and incentives they face; what individual employees can do to implement the methodology; and how to protect a company’s core business from the potential impact of experimentation. Though we’re posting a few highlights below, we encourage you to watch the video in its entirety, as Eric, Patrick and Brant dig into technical details and case studies that will be helpful for people engaged with this topic.
--
The webcast conversation examined corporate versus
startup structure, and the limitations on innovation that corporations create
for themselves. Problems arise when companies isolate areas of the business in
what Eric refers to as “functional silos”—independently operating teams (design,
engineering, marketing, legal, etc.) that discretely address specific areas of
the production chain. Each silo may be innovative in itself, but at some point
it considers its work complete and hands off to the next silo, with which it
has had little to no collaboration and for whose tradeoffs it has not accounted.
What can Lean Startup do about this?
Eric: I’ve now worked with a number of companies
where once they adopt Lean Startup, it gives all the different teams a common
vocabulary, the same business-oriented, results-focused set of concepts to wrap
all these techniques in…. We knock the silos
down and get everybody on a single, cross-functional team and say to everybody,
look, you are a startup, you are not a set of different functions, so you,
team…go experiment and learn how to make this happen. When teams are organized
that way they’re so much more productive, so much more energized, the creativity
you unlock is incredible.
Brant: What we’re still taught in MBA school is these
silos, right? And if people can just imagine, if the way we motivate people,
the way we do performance reviews, and incentivize people within silos—those
measures cannot be drawn in a direct line to corporate objectives. You cannot
draw a direct line to reducing waste or improving revenues or cutting costs…whereas
the cross-functional teams, you can tie it to a performance metric that has a
direct result in the corporate objectives.
Patrick, with a bit of
highly practical advice: One of the things Brant and I counsel large
organizations to do, particularly ones that have already embraced Agile, is to extend
the Agile metaphor into the funnel, for example, and show the benefit of Lean
Startup for the sales and marketing teams, and then show the benefits of Lean
Startup to the HR folks…. With sales and marketing experiments, often you pick
low-hanging fruit in the millions of dollars very, very quickly. We’ve seen
this time and time and time again, and that’s how you get other parts of the
organization and other functional roles excited about adopting Lean Startup methods.
--
A variety of audience questions came in effectively asking
how an established company, with an existing product, customer, and brand
image, can make use of a minimum viable product (MVP) for experimentation
purposes without damaging the larger company’s standing.
Brant: The core
business has to be protected from the startup. We separate out these startups.
They need to exist in a different place for the time being, so that the startup
is protected from the questions around return on investment, and the core business
is protected from the startup…. So you’re not launching a minimum viable
product to all of your core business customers--that’s a no-no. You have to
keep these separate. And then the Lean Startup acts like a new startup. You
have to go find your own customers to experiment with, you have your own brand,
so you have to think like a Lean Startup.
You’ve got your validated learning with the customer development you’ve
been doing with your own market segment. But you’re not going to your core
business.
Patrick: The
startup can’t run to the core
business and have all its problems solved for it. Because then you get
something like the children of helicopter parents, who can’t fend for
themselves. And ultimately if it’s a real innovation, that has real value and
creates real value for customers, it has to fend for itself. It has to be a
stand-alone business case that makes sense.
Eric, on making
lemonade out of an MVP lemon: The great thing about an MVP is that if
customers don’t like it, but they care enough to complain, that’s actually
great news. Most of the time when you do an MVP, no one even notices. Zero
customers show up. You have “launch day” and then nothing happens, because your
value proposition is so wrong that no one cares. So that doesn’t harm the brand. If people complain but you kept the
scale of the product small--and an MVP is about containing the scope of the
experimentation so that the cost of failure is low--then you can make it right
for those customers who complain. Like, you can send them a hand-engraved
letterpress apology. Every single one. Personally delivered to their house. By
you. If that ever happens.
--
So how do you start a Lean Startup practice where you
actually work? Patrick, Eric, and Brant all talked, in response to audience
questions, about the importance of individual actors within corporations
deciding to, as Patrick put it, “be subversive.”
Brant: Changing
the culture is what’s needed and it’s hard and it takes time. And so I think if
you’re a person inside of an organization that is ready for this type of stuff,
the first thing you should do is go find like-minded people, and that’s actually
how you start the process…. I’m not saying go do something that’s going to get
you in trouble, but you can’t wait for some magic to happen, for some external
force that says ok, now this company is prepared for Lean Startup culture. You
actually have to go and make the change yourself.
Patrick: Actually, let me go as far as to say you should go get in trouble, you should actually be subversive. I’m
half-joking here, but I think we’ve all seen this, that where Lean Startup has
managed to take root and flower is where initially you had some aggressive
early-adopters act a little subversively. If you have a mortgage and a family,
I’m not telling you to risk your career on adopting Lean Startup, but if you’re
passionate about making change, you can’t wait for permission to do this stuff,
you’ve got to start doing it and you’ve got to start doing it intelligently,
and part of that is hacking the actual internal political system. That’s very
difficult, obviously, but it’s part of that journey.
Eric: Think of all the managers who worked at Kodak,
or Nokia, or Blackberry--pick your favorite company that has had a total collapse
in living memory. And say, “How would it feel to be the manager who was there,
who saw the disruption coming and did nothing about it?” First of all, is that
actually a good path to having a
long-term career? If you have to feed a family and make your mortgage, and your
company collapses while you’re there, is that really going to help you? I feel
like it’s almost irresponsible to put your head down and say, “Whatever, I work
for a great institution, and I’m going to let it crumble under my
stewardship….” The future leaders of companies I think are going to be the
people who today started learning with these techniques, because what we’re
talking about here is nothing less than a full-scale paradigm change in the
management culture and management philosophy of modern companies. So would you
rather be an early adopter of that, or have someone else take that lead?
--
Interested in more in-depth discussion of Lean Startup in
the enterprise? We invite you to check out the entire video, including a
fascinating conversation about why revenue is not a good growth metric. For even more insight, register today to join us at The Lean Startup Conference in December.
If you bring eight or more of your employees, we’ll give you a substantial break
on the price. For more info on the benefits of team registration, see our
post on fostering innovation in established companies. For pricing details
just email our executive producer Melissa
Tinitigan and use the subject header “group discount.”
Wednesday, October 2, 2013
Key Questions for Bringing Lean Startup to Established Companies
Guest post by Lisa Regan, writer for The Lean Startup Conference.
Our next webcast tackles a particularly challenging topic. On October 8 at 10a PT, we’ll be talking about bringing Lean Startup methods to established companies, precisely the kinds of businesses that are most resistant to experimentation, even as they desperately need it. Eric Ries will be joined by Patrick Vlaskovits and Brant Cooper, co-authors of the NYT-bestselling book “The Lean Entrepreneur” and frequent advisers to Fortune 100 companies like Hewlett-Packard, Qualcomm and Pitney Bowes. The webcast is free with advance registration and — the best part — features a live audience Q&A where you can submit your own questions for Brant, Patrick and Eric.
To set up the themes of the webcast, we asked both Brant and Patrick a few questions about their experience with enterprise companies. They didn’t pull any punches in telling us how difficult — and yet important — it can be to get leadership on board.
LSC: What do you see as the most common reasons that large companies give for why they don't apply Lean Startup?
Brant: In my experience, the conversations don’t ever go that way. Large companies don’t have to, and in fact don’t, justify choosing not to shift to a different methodology. In fact, most of our work is inbound, meaning that companies come to us already seeking to do Lean Startup; they’re already on board. I’m not currently in the business of – nor sure I ever want to be in the business of – convincing organizations to do Lean Startup. That's hard. That’s Eric Ries’s job.
Patrick: Last week I had an interesting conversation with the CIO of a major European company operating in a mature, very profitable industry. His team was quite well-versed in Lean Startup, and asked hard and insightful questions. One theme that emerged a few times amongst the comments was essentially, “Yes, we think that Lean Startup could be hugely beneficial, but we’re worried about the changes in culture, organization and process required to pull it off.” Essentially, they have a “the cure might kill the patient” anxiety.
It would be too easy to shrug off their thinking as short-sighted, but I think their concern is valid. At the end of the day, it will be how they view the future of their industry, and their role in it, that will determine whether they see the coming disruption as something they choose to participate in, or remain willfully ignorant of.
LSC: What opportunities do large or established companies typically fail to see or take advantage of when they do attempt Lean Startup methods?
Brant: Companies take on Lean Startup for reasons that are all over the map, and they have experiences with it that cover a similar range. This is mostly because they’re not implementing the methodology in its entirety; rather, they have some concrete set of outcomes in mind. So I’ll see internal development groups trying to apply Lean Startup methods. And, companies that are modifying their design thinking processes are similarly interested. I also see employees attending Lean Startup Machine in essentially ad hoc efforts to try something new. Overall, this is part of a tendency to see Lean Startup as simply an evolution of Agile, and to mistakenly think that it thus “belongs” in Engineering, or that it’s only useful for new product endeavors. That kind of compartmentalization ignores existing products and processes, and encourages people to think that the methodology doesn’t apply to “their stage” of product development.
In general, I think organizations are prone to seeing Lean Startup as a “tack-on,” a benefit to supplement the usual way of doing things. And while Lean Startup can certainly be useful from that perspective, the real missed opportunity for the company as a whole is in organizational transformation. In other words, why only make changes to a particular product or product line, when you could teach the entire organization, at all levels, to move faster, get closer to customers, and learn to continuously innovate?
Patrick: There is one area of low-hanging fruit that I do see for companies that institute Lean Startup as a “tack-on,” as Brant put it. That’s when the sales and marketing functions adopt Lean Startup for customer acquisition. By executing best practices, they can often realize huge gains in conversion from hitherto invisible opportunities around the funnel and the product.
LSC: Lastly, how can reform-minded individuals or teams address these resistances from within — that is, how can people interested in the methodology get buy-in up the hierarchy?
Brant: In my experience this cannot be only a grass-roots push. Somewhere in the C-Suite, there has to be an executive who has realized that a change in philosophy and in practices must occur if the company is to take “innovation” out of the realm of being just a marketing buzzword. This may be a maverick CEO, or a new CEO brought in for change, or one who simply has had an epiphany and is out looking for a new approach. We’re still in early-adopter territory here.
From that perspective, I’d say that the best path for reform-minded individuals is not to seek buy-in, but rather to attempt to foster an epiphany. That isn’t going to happen, at this stage, by seeking approval from your boss. But hey, we’re approaching the holidays. Buy the CEO or Chief Innovation Officer a copy of Eric’s book. With any luck, he or she will read it.
Patrick: I share Brant’s emphasis on leadership, but I think you gotta try the sandwich approach -- grassroots from the bottom and an aggressive leader from the top. Ideally, this kind of organizational change covers the entire spectrum of the business.
--
Our webcast with Brant, Patrick and Eric is October 8 at 10a PT. Register today to join hundreds of people attending, and bring your questions for the live Q&A.
If you’re thinking of bringing Lean Startup to your organization, we encourage you to take advantage of our team discount for The Lean Startup Conference, Dec 9 – 11 in SF: send eight or more of your employees and get a substantial break on the price. For more info on the benefits of team registration, see our post on fostering innovation in established companies. For pricing details just email our executive producer Melissa Tinitigan, and use the subject header “group discount.”