Thursday, April 29, 2010

Video update on the Startup Visa Act

The Startup Visa Act continues to gain momentum on Capitol Hill, thanks to the grassroots support of all of you. Without lobbyists or PACs, we're getting the word out in DC and nationwide that we have an opportunity to act - this year - to create jobs right here in America by supporting entrepreneurship and innovation. As bills in both chambers of Congress pick up supporters and co-sponsors, it's more important than ever for citizens who care about this issue to call, write, and tweet their representatives.

On our most recent trip to DC, the Startup Visa team produced two new videos to encapsulate our work to date, and hopefully inspire future action. They feature two of the rock stars of the Startup Visa team, Shervin and Brad, looking like, well, rock stars. Please take a look and, if you're as inspired as I am, please take a moment to help spread the word, embed these videos, or take another action outlined below. Thanks!

Shervin Pishevar, activism at 30,000 feet.
"My big belief in the Startup Visa Act: Entrepreneurship is very much representative and symbolic of what America's all about."




Brad Feld, the Startup Visa Act.
"When you think bout the economic stress and economic crisis we've been going through in this country, and you think about future economic growth, there's no question that entrepreneurship and innovation is a huge driver of future success."



Want to get involved? We have a list of ways on the StartupVisa website:
  1. Go to our campaign page & tweet your support NOW! (<30 sec)
  2. Write your local newspaper & tell them you support the Startup Visa Act
  3. Call your senators & let them know you support Startup Visa legislation
  4. Add the Startup Visa Widget to your blog or website
  5. Follow the #startupvisa hash tag on Twitter and voice your support
  6. Contribute to Startup Visa so we can spread the word!

In particular, we are also working on a letter of support from university presidents and entrepreneurship professors across academia. If you know someone who might be willing to sign on to that letter, please contact Brad Feld, who is organizing the letter.

Monday, April 26, 2010

Lean Enterprise Institute webinar, April 28

I can barely write, as I'm still recovering from the amazing but overwhelming Startup Lessons Learned conference last Friday (great summary here). I'll follow up with a more detailed post later, but for now let me just say: thank you to everyone who participated, spoke, sponsored, or helped organize. It totally exceeded my expectations.

Want to learn more about lean startups? Want to talk about applying the lessons beyond software, internet, and small companies? The Lean Enterprise Institute, the official keepers of lean, is hosting a free webinar on Wednesday, April 28. Details are below. This will be a unique cross-cultural meeting between entrepreneurs and traditional lean experts. I believe we have much to learn from each other.

920 people from over sixty countries have already signed up to attend - help us break 1000 by registering here.
Lean Startups
Lean mindsets and methods for innovation in any company
a free webinar featuring:
Eric Ries
April 28, 2010 at 2:00 PM EDT

The "Lean Startup" is the application of lean thinking to the process of innovation in startup companies — defined as a type of business where both the problem (customer need) and the solution (product) are unknown. Traditional product development efforts often invest millions of dollars and years of time into one fixed product concept that is assumed to meet known customer needs — creating a high level of risk that the "waste of overproduction" occurs and creating a product that customers reject. The "Lean Startup" methodology, instead, tests new ideas early and cheaply, with early and frequent customer feedback. Critical to product success is creating a learning feedback loop that's company-wide, continuously testing new ideas so that idea failure doesn't have to equal company failure. Iterating more quickly is the key to success rather than having the one initial perfect concept.

Since successful startups grow into larger mature companies, how do the lessons from "Lean Startups" apply? How can businesses of all shapes and sizes use lean methods to be innovative and disruptive? Whereas a startup is a highly uncertain opportunity in a highly uncertain business, how can more stable businesses use these methods for their own uncertain products and ideas? How do familiar lean manufacturing mindsets and philosophies for quality management, training, and problem solving contribute to innovation?

Specifically, you will learn:
  • How to define "value" in an innovation setting
  • What is a "minimum viable product" and why is that preferable to big batch
    development?
  • How to create a blame-free development culture that encourages learning, root
    cause problem identification, and improvement
  • How a process focus can lead to discipline, not bureaucracy
  • How to apply the right learnings from "Lean Startups" to a non-startup environment
Who Should Attend:
This webinar is designed for a broad audience: everyone who is interested in learning how companies with unknown problems and solutions can use rapid P-D-C-A cycles to better understand and meet customer needs. This is intended for anybody who is working on new innovations, whether that means continuous improvement at the front lines, or new product development in a mature manufacturing company.
I hope you'll join us. More information is available here.

Sunday, April 18, 2010

Four myths about the Lean Startup

Myth: Lean means cheap. Lean startups try to spend as little money as possible.

Truth: The Lean Startup method is not about cost; it is about speed. Lean Startups waste less money because they use a disciplined approach to testing new products and ideas. "Lean," in the context of the Lean Startup, refers to a process of building companies and products by applying lean manufacturing principles to innovation. That process involves rapid hypothesis testing, validated learning about customers, and a disciplined approach to product development.

Myth: The Lean Startup methodology is only for Web 2.0/internet/consumer software companies.

Truth: The Lean Startup methodology applies to all companies that face uncertainty about what customers will want. This is true regardless of industry or even scale of company: many large companies depend on their ability to create disruptive innovation. Those general managers are entrepreneurs, too. And they can benefit from the speed and discipline of starting with a minimum viable product and then learning and iterating continuously.

Myth: Lean Startups are small bootstrapped startups.

Truth: There’s nothing wrong with raising venture capital. Many lean startups are ambitious and are able to deploy large amounts of capital. What differentiates them is their disciplined approach to determining when to spend money: after the fundamental elements of the business model have been empirically validated. Because lean startups focus on validating their riskiest assumptions first, they sometimes charge money for their product from day one – but not always.

Myth: Lean Startups replace vision with data or customer feedback.

Truth: Lean Startups are driven by a compelling vision, and they are rigorous about testing each element of this vision against reality. They use customer development, split-testing, and actionable analytics as vehicles for learning about how to make their vision successful. But they do not blindly do what customers tell them, nor do they mechanically attempt to optimize numbers. Along the way, they pivot away from the elements of the vision that are delusional and double-down on the elements that show promise.

Saturday, April 17, 2010

Sneak preview, KISSmetrics (and more)

Hear the CEO of KISSmetrics give a sneak preview of what he'll be presenting at the Startup Lessons Learned conference on April 23 (we're less than a week away!):




Conference updates continue to pour in:
  • Want to see more preview videos? Take a look at our new sneak-preview site, sponsored by KISSmetrics.
  • We've updated our list of simulcast locations (see below). Thanks to volunteer organizers around the world, the conference is now available on every continent (well, except Antarctica) and in almost fifty cities. Most of these events are free, but they do require that you RSVP. An up-to-date list of simulcast locations is always available at http://sllconf.com/streaming.
  • Our sponsors continue to support the conference as well as deserving entrepreneurs. Three premier early-stage venture investors have joined forces to sponsor the event: Baseline, Floodgate, and First Round Capital. In addition to supporting their portfolio companies in coming to the event, they also have taken the additional step of underwriting a limited number of discounted tickets that are available to the general public. The latest batch, sponsored by First Round, is available here.
Thanks again to all of our amazing sponsors, volunteers and staff who are working very hard to make this event a reality.

Simulcast locations, organized by continent (Africa, Asia, Europe, North America, Oceania, and South America), are listed at http://sllconf.com/streaming.

Wednesday, April 14, 2010

Sneak preview, Grockit

Hear the CEO of Grockit give a sneak preview of what he'll be presenting at the Startup Lessons Learned conference on April 23:

Monday, April 12, 2010

The Lean Startup Intensive at Web 2.0 Expo SF (May 3, 2010)

I'm discovering the truth of the old saying, "when it rains, it pours." I keep waiting for the tide of interesting people, opportunities, and ideas to ebb - but so far it has done nothing but accelerate. Thank you all so much. Just one year ago, I gave my first big conference talk at the 2009 Web 2.0 Expo in San Francisco. I had no idea what to expect, and the response was truly humbling. So I am particularly excited that the Lean Startup is a big part of this year's Web 2.0 Expo. Steve Blank and I are both giving keynotes in the main conference track. And for those who want more than just the overview, we're offering the Lean Startup Intensive on the first day of the conference: May 3, 2010.

We've built the Intensive into an all-star program designed to give a comprehensive overview of the methodology, taught by its leading practitioners. Unlike the conference on April 23, the Intensive does not assume any prior knowledge of lean startups, and is designed for a wide audience. Anyone who's thinking of attending the Expo will get something out of it. I believe it will be the first time each of the following speakers will be presenting a full session back-to-back: Steve Blank, Dave McClure, Sean Ellis, Hiten Shah, Dan Martell, as well as an investing panel which we'll announce soon. Here's an excerpt from the official program:
“A startup is a human institution designed to deliver a new product or service under conditions of extreme uncertainty.”
All entrepreneurs face the same fundamental challenges:
  • How do we know if we’re making progress?
  • How do we know if customers will want the product we’re building?
  • And, if they do, how do we know what kind of value we can create with it?
But because every startup also strives to become an institution, answering these questions requires more than just disciplined thinking at the whiteboard. It requires the coordination of many different people, working in concert to answer them. In other words, it requires management.
[...]

This event brings together the leading thinkers and practitioners of the Lean Startup movement. The goal is to provide a complete introduction to the theory as well as a grounding in advanced techniques that you can put to immediate use.

This program is designed for people who have a stake in creating great products: engineers, designers, product managers, marketers and businesspeople—from companies of any size. And, of course, for present or future entrepreneurs who are hoping to do more than punch a lottery ticket.

Read the rest here...

I'm incredibly excited about the lineup, and I think it'll provide the world's first comprehensive introduction to these ideas. If you're thinking of attending the Web 2.0 Expo, I hope you'll consider spending your first day with us.

To sweeten the deal, we also have a special 25% discount code which you can use for either the Intensive itself or for a whole Expo pass. The code is websf10ls25 and can be redeemed here. And there are still a (very) few application spots open for a complete conference pass scholarship; details are available here.

So yes, there are two major lean startup events coming up in San Francisco in the next month. Both are going to be amazing, so take your pick. And, as always, if you do decide to stop by, please say hello and let me know you're a reader. I'm looking forward to meeting you.

Thursday, April 8, 2010

Conference streaming, sponsors, discounted tickets

Simulcasting in twenty cities worldwide leads today's news for the Startup Lessons Learned conference on April 23. Thanks to the generosity of new sponsors, we also have a handful of heavily-discounted tickets available.

Discounted Hotel Rate
Traveling to SF for the conference? We have a special hotel rate of $189 a night at the Westin where the event is being held. However, this discount expires this Monday, April 12: be sure to reserve your room before then.

Streaming
Our simulcast program has expanded to five continents (so far). We have a new section of the website devoted to remote viewing, which is located here. More locations are being added as we can confirm them. If you'd like to host a simulcast, please sign up here. A complete list of locations is below. Most events are free, but they all require that you RSVP in advance. Special thanks to our streaming sponsor Justin.tv for handling the actual broadcasts.

Simulcast locations, organized by continent (Africa, Asia, Europe, North America, and South America), are listed on the streaming page.
Sponsors
We are enormously grateful to our latest conference sponsors: KISSmetrics, Microsoft BizSpark and Baseline Ventures. Microsoft will be making scholarships available to companies in their BizSpark program; stay tuned for details.

Discounted Tickets
As part of their sponsorship, Baseline is offering discounted tickets to their portfolio companies. However, in an additional act of generosity, Baseline's Steve Anderson has asked us to make discounted tickets available to the general public as well. He's footing the bill for a block of half-price tickets which are available now on a first come, first served basis. You can get those tickets here, while they last. (And, if you feel so moved, you can leave Steve a thank-you message; he's @standers on Twitter.)

And, don't forget, full ticket scholarships are also available from IMVU. The application form is here.

Wednesday, April 7, 2010

Learning is better than optimization (the local maximum problem)

Lean startups don’t optimize. At least, not in the traditional sense of trying to squeeze every tenth of a point out of a conversion metric or landing page. Instead, we try to accelerate our rate of validated learning about customers.

For example, I’m a big believer in split-testing. Many optimizers are in favor of split-testing, too: direct marketers, landing page and SEO experts, heck, even the Google Website Optimizer team. But our interest in the tactic of split-testing is only superficially similar to theirs.

Take the infamous “41 shades of blue” split-test. I understand and respect why optimizers want to do tests like that. There are often counter-intuitive changes in customer behavior that depend on little details. In fact, the curse of product development is that sometimes small things make a huge difference and sometimes huge things make no difference. Split-testing is great for figuring out which is which.

But what do you learn from the “41 shades of blue” test? You only learn which specific shade of blue customers are more likely to click on. And, in most such tests, the differences are quite small, which is why sample sizes have to be very large. In Google’s case, often in the millions of people. When people (ok, engineers) who have been trained in this model enter most startups, they quickly get confused. How can we do split-testing when we have only a pathetically small number of customers? What’s the point when the tests aren’t going to be statistically significant?

And they’re not the only ones. Some designers also hate optimizing (which is why the “41 shades of blue” test is so notorious: a famous designer claims to have quit over it). I understand and respect that feeling, too. After you’ve spent months on a painstaking new design, who wants to be told what color blue to use? Split-testing a single element of an overall coherent design seems ludicrous. Even if it shows improvement in some micro metric, does that invalidate the overall design? After all, most coherent designs have a gestalt that is more than the sum of the parts – at least, that’s the theory. Split-testing seems fundamentally at odds with that approach.

But I’m not done with the complaints yet. Optimizing sounds bad for visionary thinking. That’s why you hear so many people proclaim proudly that they never listen to customers. Customers can only tell you what they think they want, and they tend to have a very near-term perspective. If you just build what they tell you, you generally wind up with a giant, incoherent mess. Our job as entrepreneurs is to invent the future, and any optimization technique – including split-testing, many design techniques, or even usability testing – can lead us astray. Sure, customers think they want something, but how do they know what they will want in the future?

You can always tell who has a math background in a startup, because they call this the local maximum problem. Those of us with a computer science background call it the hill-climbing algorithm. I’m sure other disciplines have their own names for it; even protozoans exhibit this behavior (it's called taxis). It goes like this: whenever you’re not sure what to do, try something small, at random, and see if that makes things a little bit better. If it does, keep doing more of that, and if it doesn’t, try something else random and start over. Imagine climbing a hill this way; it’d work with your eyes closed. Just keep seeking higher and higher terrain, and rotate a bit whenever you feel yourself going down. But what if you’re climbing a hill that is in front of a mountain? When you get to the top of the hill, there’s no small step you can take that will get you on the right path up the mountain. That’s the local maximum. All optimization techniques get stuck in this position.
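
To make the hill-and-mountain picture concrete, here is a minimal sketch in Python. It is my own toy illustration (the landscape function and numbers are made up, not taken from any real system): a small hill sits next to a much taller mountain, and a climber that only accepts small random steps that improve the score reliably converges to the hilltop and never discovers the mountain.

```python
import random

def landscape(x):
    """A toy 'metric' with a small hill near x = 1 and a mountain near x = 6."""
    return max(0.0, 1 - (x - 1) ** 2) + max(0.0, 5 - (x - 6) ** 2)

def hill_climb(x=0.0, steps=5000, step_size=0.1):
    """Accept only small random changes that improve the metric."""
    best = landscape(x)
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        score = landscape(candidate)
        if score > best:
            x, best = candidate, score
    return x, best

if __name__ == "__main__":
    x, best = hill_climb()
    # Typically stops at x ~= 1.0 with value ~= 1.0; the mountain's peak at
    # x = 6 (value 5.0) is never reached, because every path to it goes downhill.
    print(f"stopped at x={x:.2f}, value={best:.2f}")
```

The valley of zero payoff between the two peaks is exactly the trap: no single small improvement ever leads across it.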

Because this causes a lot of confusion, let me state this as unequivocally as I can. The Lean Startup methodology does not advocate using optimization techniques to make startup decisions. That’s right. You don’t have to listen to customers, you don’t have to split-test, and you are free to ignore any data you want. This isn’t kindergarten. You don’t get a gold star for listening to what customers say. You only get a gold star for achieving results.

What should you do instead? The general pattern is: have a strong vision, test that vision against reality, and then decide whether to pivot or persevere. Each part of that answer is complicated, and I’ve written extensively on the details of how to do each. What I want to convey here is how to respond to the objections I mentioned at the start. Each of those objections is wise, in its own way, and the common reaction – to just reject that thinking outright – is a bad idea. Instead, the Lean Startup offers ways to incorporate those people into an overall feedback loop of learning and discovery.

So when should we split-test? There’s nothing wrong with using split-testing, as part of the solution team, to do optimization. But that is not a substitute for testing big hypotheses. The right split-tests to run are the ones that put big ideas to the test. For example, we could split-test what color to make the “Register Now” button. But how much do we learn from that? Say customers prefer one color over the other. Then what? Instead, how about a test where we completely change the value proposition on the landing page?
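
Mechanically, such a macro-level test can be wired up very simply. Here is a minimal sketch (the function, experiment name, and assignment scheme are mine for illustration, not IMVU’s actual system) that deterministically shows each visitor one of two genuinely different value propositions:

```python
import hashlib

# Two genuinely different value propositions, not two shades of a button.
VARIANTS = ("avatar chat", "3D instant messaging")

def assign_variant(visitor_id: str, experiment: str = "landing_value_prop") -> str:
    """Hash the visitor id so the same person always sees the same page."""
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

if __name__ == "__main__":
    for visitor in ("alice", "bob", "carol"):
        print(visitor, "->", assign_variant(visitor))
```

Hashing on the visitor id keeps each person’s experience stable without having to store assignments anywhere.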

I remember the first time we changed the landing page at IMVU from offering “avatar chat” to “3D instant messaging.” We didn’t expect much of a difference, but it dramatically changed customer behavior. That was evident in the metrics and in the in-person usability tests. It taught us some important things about our customers: that they had no idea what an avatar was, they had no idea why they would want one, and they thought “avatar chat” was something weird people would do. When we started using “3D instant messaging,” we validated our hypothesis that IM was an activity our customers understood and were interested in “doing better.” But we also invalidated a hypothesis that customers wanted an avatar; we had to learn a whole new way of explaining the benefits of avatar-mediated communication because our audience didn’t know what that word meant.

However, that is not the end of the story. If you go to IMVU’s website today, you won’t find any mention of “3D instant messaging.” That’s because those hypotheses were replaced by yet more, each of which was subject to this kind of macro-level testing. Over many years, we’ve learned a lot about what customers want. And we’ve validated that learning by being able to demonstrate that when we change the product as a result of that learning, the key macro metrics improve.

A good rule of thumb for split-testing is that even when we’re doing micro-level split-tests, we should always measure the macro. So even if you want to test a new button color, don’t measure the click-through rate on that button! Instead, ask yourself: “why do we care that customers click that button?” If it’s a “Register Now” button, it’s because we want customers to sign up and try the product. So let’s measure the percentage of customers who try the product. If the button color change doesn’t have an impact there, it’s too small and should be reverted. Over time, this discipline helps us ignore the minor stuff and focus our energies on learning what will make a significant impact. (It also just so happens that this style of reporting is easier to implement; you can read more here.)
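
Here is a minimal sketch of that reporting discipline (the event log format and field names are hypothetical, chosen just for illustration): whatever the variant changes, the number reported per variant is the macro metric (the share of visitors who go on to try the product), not the button’s click-through rate.

```python
from collections import defaultdict

def macro_report(events):
    """events: iterable of dicts like
    {"visitor": "v1", "variant": "blue", "tried_product": True}"""
    visitors = defaultdict(set)
    converted = defaultdict(set)
    for e in events:
        visitors[e["variant"]].add(e["visitor"])
        if e["tried_product"]:
            converted[e["variant"]].add(e["visitor"])
    # Share of unique visitors per variant who went on to try the product.
    return {
        variant: len(converted[variant]) / len(visitors[variant])
        for variant in visitors
    }

if __name__ == "__main__":
    sample = [
        {"visitor": "v1", "variant": "blue",  "tried_product": True},
        {"visitor": "v2", "variant": "blue",  "tried_product": False},
        {"visitor": "v3", "variant": "green", "tried_product": True},
        {"visitor": "v4", "variant": "green", "tried_product": True},
    ]
    # If the rates are indistinguishable, the change was too small: revert it.
    print(macro_report(sample))
```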

Next, let’s take on the sample-size issue. Most of us learn about sample sizes from things like political polling. In a large country, to figure out who will win an election with any kind of accuracy, you need to sample a large number of people. What most of us forget is that statistical significance is a function of both sample size and the magnitude of the underlying signal. Presidential elections are often decided by a few percentage points or less. When product development teams are optimizing, they encounter similar situations. But when we’re learning, that’s the rare exception. Recall that the biggest source of waste in product development is building something nobody wants. In that case, you don’t need a very large sample.
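
To put rough numbers on that, here is a back-of-the-envelope sketch using Lehr’s rule of thumb (roughly 16 * p * (1 - p) / lift^2 subjects per group for about 80% power at a 5% significance level). The rates below are made-up illustrations, not data from any real test:

```python
def approx_sample_size(baseline_rate, absolute_lift):
    """Lehr's rough rule: n per group ~ 16 * p * (1 - p) / lift^2."""
    p = baseline_rate + absolute_lift / 2  # rough pooled rate
    return int(round(16 * p * (1 - p) / absolute_lift ** 2))

if __name__ == "__main__":
    # Polling-sized signal: detect a 1-point swing around 50%.
    print(approx_sample_size(0.50, 0.01))  # roughly 40,000 per group
    # "Nobody wants this" signal: 5% baseline vs. a 35% take rate.
    print(approx_sample_size(0.05, 0.30))  # roughly 30 per group
```

Tiny effects need enormous samples; the "nobody wants this" effect shows up with a few dozen people.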

Let me illustrate. I’ve previously documented that early on in IMVU’s life, we made the mistake of building an IM add-on product instead of a standalone network. Believe me, I had to be dragged kicking and screaming to the realization that we’d made a mistake. Here’s how it went down. We would bring customers in for a usability test and ask them to use the IM add-on functionality. The first one flat-out refused. I mean, here we are, paying them to be there, and they won’t use the product! (For now, I won’t go into the reasons why – if you want that level of detail, you can watch this interview.) I was the head of product development, so can you guess what my reaction was? It certainly wasn’t “ooh, let’s listen to this customer.” Hell no; “fire that customer! Get me a new one” was closer. After all, what is a sample size of one customer? Too small. Second customer: same result. Third, fourth, fifth: same. Now, what are the odds that five customers in a row refuse to use my product, and it’s just a matter of chance or small sample size? Essentially no chance. The product sucks – and that is a statistically significant result.
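
A hedged back-of-the-envelope version of that last claim (the 50% "would accept the product" null rate is an assumption I’m adding for illustration, not a figure from IMVU):

```python
def prob_all_refuse(n_customers, acceptance_rate=0.5):
    """Chance that every customer refuses, if each would accept independently."""
    return (1 - acceptance_rate) ** n_customers

if __name__ == "__main__":
    print(prob_all_refuse(5))        # 0.03125: about 3%, below the usual 5% bar
    print(prob_all_refuse(5, 0.8))   # 0.00032 if you expected 80% to accept
```

Even under a generous null hypothesis, a five-for-five refusal streak is very unlikely to be noise.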

When we switch from an optimization mindset to a learning mindset, design gets more fun, too. It takes some getting used to for most designers, though. They are not generally used to having their designs evaluated by their real-world impact. Remember that plenty of design organizations and design schools give out awards for designing products that never get built. So don’t hold it against a classically trained designer if they find split-testing a little off-putting at first. The key is to get new designers integrated into a split-testing regimen as soon as possible. It’s a good deal: by testing to make sure (I often say “double check”) that each design actually improves customers’ lives, startups can free designers to take much bigger risks. Want to try out a wacky, radical, highly simplified design? In a non-data-driven environment, this is usually impossible. There’s always that engineer in the back of the room with all the corner cases: “but how will customers find Feature X? What happens if we don’t explain in graphic detail how to use Feature Y?” Now these questions have an easy answer: we’ll measure and see. If the new design performs worse than the current design, we’ll iterate and try again. But if it performs better, we don’t need to keep arguing. We just keep iterating and learning. This kind of setup leads to a much less political and much less arbitrary design culture.

This same approach can also lead us out of the big incoherent mess problem. Teams that focus on optimizing can get stuck bolting on feature upon feature until the product becomes unusable. No one feature is to blame. I've made this mistake many times in my career, especially early on when I first began to understand the power of metrics. When that happens, the solution is to do a whole product pivot. "Whole product" is a term I learned from Bill Davidow's classic Marketing High Technology. A whole product is one that works for mainstream customers. Sometimes, a whole product is much bigger than a simple device - witness Apple's mastery of creating a whole ecosystem around each of its devices that makes them much more useful than competitors' products. But sometimes a whole product is much less - it requires removing unnecessary features and focusing on a single overriding value proposition. And these kinds of pivots are great opportunities for learning-style tests. It only requires the courage to test the new, beautiful whole product design against the old crufty one head-to-head.

By now, I hope you’re already anticipating how to answer the visionary’s objections. We don’t split-test or talk to customers to decide whether we should abandon our vision. Instead, we test to find out how to achieve the vision in the best possible way. Startup success requires getting many things right all at once: building a product that solves a customer problem, having that problem be an important one to a sufficient number of customers, having those customers be willing to pay for it (in one of the four customer currencies), being able to reach those customers through one of the fundamental growth strategies, and so on. When you read stories of successful startups in the popular and business press, you usually hear about how the founders anticipated several of these challenges in their initial vision. Unfortunately, startup success requires getting them all right. What the PR stories tend to leave out is that we can get attached to every part of our vision, even the dumb parts. Testing the parts simply gives us information that can help us refine the vision – like a sculptor removing just the right pieces of marble. There is tremendous art in knowing which pieces of the vision to test first. It is highly context-dependent, which is why different startups take dramatically different paths to success. Should you charge from day one, testing the revenue model first? Or should you focus on user engagement or virality? What about companies, like Siebel, that started with partner distribution first? There are no universally right answers to such questions. (For more on how to figure out which question applies in which context, see Business ecology and the four customer currencies.)

Systematically testing the assumptions that support the vision is called customer development, and it’s a parallel process to product development. And therein lies the most common source of confusion about whether startups should listen to customers. Even if a startup is doing user-centered design, or optimizing their product through split-testing, or conducting tons of surveys and usability tests, that’s no substitute for also doing customer development. It’s the difference between asking “how should we best solve this problem for these customers?” and “what problem should we be solving? and for which customer?” These two activities have to happen in parallel, forming a company-wide feedback loop. We call such companies built to learn. Their speed should be measured in validated learning about customers, not milestones, features, revenue, or even beautiful design. Again, not because those things aren’t important, but because their role in a startup is subservient to the company’s fundamental purpose: piercing the veil of extreme uncertainty that accompanies any disruptive innovation.

The Lean Startup methodology can’t guarantee you won’t find yourself in a local maximum. But it can guarantee that you’ll know about it when it happens. Even better, when it is time to pivot, you’ll have actual data that can help inform where you want to head next. The data doesn’t tell you what to do – that’s your job. The bad news: entrepreneurship requires judgment. The good news: when you make data-based decisions, you are training your judgment to get better over time.

Sunday, April 4, 2010

Kent Beck keynote, "To Agility, and Beyond"

Kent Beck will give the opening keynote at the Startup Lessons Learned conference on April 23. Our mystery keynote is now revealed, and I couldn't be more excited. Kent is a significant figure in the field of software development. To his credit are Extreme Programming, JUnit, patterns, TDD; the list goes on. His keynote will kick off the day as well as our module on the build phase of the fundamental feedback loop that powers all startups.

In Kent's honor, we've extended earlybird pricing for conference tickets (one last time) through Monday, April 5.

  • Test Driven Development: By Example
  • Extreme Programming Explained: Embrace Change (2nd Edition)
  • Implementation Patterns

Saturday, April 3, 2010

Six streaming locations

Can't travel to San Francisco for the Startup Lessons Learned Conference on April 23, but still want to participate? Thanks to the support of organizers around the world, you can join in remotely - in most cases, free of charge. I'm excited to announce the first six confirmed simulcast locations:
Please RSVP so our organizers can plan capacity accordingly. We're offering this simulcast service free of charge to organizers; if you'd like to host a simulcast in your city, you can sign up to do so here. We'll continue to post links to new events to the conference website as we get them confirmed. If you're curious about whether there are others in your area who'd like to get together to watch, feel free to contact a local meetup or post a comment below.

I want to thank everyone who has volunteered to host, and especially thank our simulcast organizer Erin Turner. If you have any questions, please feel free to leave a comment. Hope to see you on April 23.

Friday, April 2, 2010

Interviews

Robert Scoble came by my office to learn about the Lean Startup and the Startup Lessons Learned Conference on April 23. By all accounts, the conversation was a success. I certainly had a great time, and the early Twitter reactions seem positive. Of course, you can judge for yourself. I've embedded parts one and two below:

Part One:


Part Two:


And, as a special bonus, I've included a recent podcast episode of The Web 2.0 Show, in which we discuss Lean Startups and the upcoming Lean Startup Intensive at the Web 2.0 Expo. Enjoy: