Monday, September 29, 2008

Q&A with an actual reader

One of my favorite things about having a blog is the feedback I get in comments and by email. Today, I thought I'd answer a few questions that came in via a very thoughtful comment from Andrew Meyer. (He's also a blogger, at Inquiries Into Alignment.)

Question 1:

When you're adding features to a product used by an existing user base, do you still do split testing to determine usage patterns?

Absolutely, yes. Testing with existing customers is sometimes more complicated than testing with new customers. Existing customers already have expectations about how your product works, and it's important to take this into account when adding or changing features. For example, a new layout or UI will almost always confuse some customers, even if it's a lot better than the old one. You have to be prepared for that effect so it doesn't discourage you prematurely. If you're worried about it, either run the test against new customers only or run it for longer than usual. We usually give changes like this a few extra days to see whether customers eventually recover their enthusiasm for the product.
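
Segmenting the analysis this way is easy to build in from the start. Here is a minimal sketch in Python (the function names, the seven-day cutoff for "new," and the logging format are all my assumptions, not a description of IMVU's system) of deterministic bucket assignment that tags each exposure as coming from a new or an existing customer:

```python
import hashlib
from datetime import date

def assign_bucket(user_id: str, experiment: str) -> str:
    """Deterministically hash user + experiment name into a variant."""
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 2 == 0 else "control"

def record_exposure(user_id: str, signup: date, experiment: str, log: list) -> str:
    """Log the assignment with a new/existing tag so segments can be read separately."""
    segment = "new" if (date.today() - signup).days < 7 else "existing"  # cutoff is arbitrary
    bucket = assign_bucket(user_id, experiment)
    log.append({"user": user_id, "experiment": experiment,
                "bucket": bucket, "segment": segment})
    return bucket
```

Read by segment, an early dip among existing customers can be given those extra few days to recover, while the new-customer numbers give a cleaner read on the change itself.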

On the other hand, existing customers can be an asset for testing. For example, let's say you are adding a new feature in response to customer feedback. Here, you expect that customers will find the feature a logical or natural extension, and so they should immediately gravitate to it. If they don't, it probably means you misunderstood their feedback. I have made this mistake many times. At IMVU, for example, we used to hear the feedback that people wanted to "try IMVU by themselves" before inviting their friends to use it. Because many on our team came from a games background, we just assumed this meant they were asking for a "single-player mode" where they could dress their avatar and try controlling it on their own.

It turns out that shipping that feature didn't make much impact when we looked at the data. What customers really meant was "let me use IMVU with somebody I don't know," so they could get a feel for the social features of the product without incurring the social risk of recommending it to a friend. Luckily, the metrics helped us figure out the difference.

Question 2:

If your product has areas where people read and then different areas where people interact, are there ways to do metrics to determine where people spend their time? Could this be done on mouse focus, commenting amounts, answer percentages, download percentages, etc?

There are ways to measure customer behavior in tremendous detail, and in some situations those metrics are important. But lately I have been recommending, in stronger and stronger terms, that we not get too caught up in detailed metrics, especially when we are split-testing. Let's run a thought experiment. Imagine you have incredibly precise metrics about every minute that every customer spends with your product, every mouse click, every movement - everything. So you run a split-test, and you discover that Feature X causes people to spend 20% more time on a given part of your product, say a particular web page.

Is that a success? I would argue that you really don't know. It might be that the extra time they are spending there is awesome, because they are highly engaged, watching a video or reading comments. Or it could be that they are endlessly pecking through menus, totally confused about what to do next. Either way, you would have been better off focusing your split-test on high-level metrics that measure how much customers like your product as a whole. Revenue is always my preferred measure, but you can use anything that is important to your business: retention, activation, viral invites, or even customer satisfaction in the form of something like Net Promoter Score. If an optimization has an effect at the micro level that doesn't translate into the macro level, who cares?
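
To make the thought experiment concrete, here is a hedged sketch (Python, with an invented event format) of the two ways you could score the same test. The micro metric below might declare Feature X a 20% win; the macro metric is the one worth deciding on:

```python
def time_on_page(events: list, bucket: str) -> float:
    """Micro metric: average seconds per page view - ambiguous on its own."""
    secs = [e["seconds"] for e in events
            if e["bucket"] == bucket and e["type"] == "page_view"]
    return sum(secs) / len(secs) if secs else 0.0

def revenue_per_user(events: list, bucket: str) -> float:
    """Macro metric: total revenue over distinct exposed users."""
    users = {e["user"] for e in events if e["bucket"] == bucket}
    revenue = sum(e["amount"] for e in events
                  if e["bucket"] == bucket and e["type"] == "purchase")
    return revenue / len(users) if users else 0.0
```

If time_on_page is up 20% in the treatment but revenue_per_user is flat, you have learned nothing worth shipping.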

For more on the details of how to do simple and repeatable split-testing, take a look at The one line split-test, or how to A/B all the time.
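
For flavor, the core idea of that post reduces to something like the following Python sketch (the helper name and hashing scheme here are my own stand-ins, not necessarily what the post uses): once assignment is a deterministic function of the user and the experiment name, the test itself is a single line at the point where the code paths diverge.

```python
import hashlib

def in_experiment(user_id: str, name: str) -> bool:
    """Stable fifty-fifty assignment from a hash of experiment name + user id."""
    return int(hashlib.md5(f"{name}:{user_id}".encode()).hexdigest(), 16) % 2 == 0

def landing_page(user_id: str) -> str:
    # The entire split-test is this one line; everything else is ordinary code.
    return "new_layout.html" if in_experiment(user_id, "landing_v2") else "old_layout.html"
```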

1 comment:

  1. Eric,

    Thanks, your answer was very helpful. It may interest you that what we are doing is a little different from a game or a public web page. We built, and are refining, a communications and productivity Web 2.0 tool for a large public utility. It solves the problem of communicating between a small group of managers, say twenty, and a large group of users, say ten thousand. Of course there are trust, political, geographic, incentive, and status issues to overcome, but these actually work in our favor.

    We created a user interface that requires no training to figure out and that incents accuracy over political correctness. We're looking to metrics to determine how users interact with our product; more importantly, the product must not become a time sink. On the other hand, we provide a management dashboard showing people's expectations of success for different projects, along with an explanation of why something will or won't work.

    The metrics I was asking about are meant to help refine the user interface. The goal is about five to ten minutes of engagement, but not more than fifteen minutes a week. More important is getting the correct insights and predictions passed up to the dashboard.

    The beauty is that this leads nicely to split-testing, and also to measuring customer (user) satisfaction (engagement) and predictions. Doing so will both make our product better for our current customer and provide compelling statistical evidence of its effectiveness when it comes to marketing. Very exciting.

    Thanks again and if you have any other suggestions or books that I should read, I would be forever in your debt.

    Good luck with IMVU and thanks for a great blog,

    Andy
