When you're adding features to a product used by an existing user base, do you still do split testing to determine usage patterns?

Absolutely, yes. Sometimes, testing with existing customers is more complicated than testing with new customers. Existing customers already have expectations about how your product works, and it's important to take this into consideration when adding or changing features. For example, it's almost always the case that a new layout or UI will confuse some customers, even if it's a lot better than the old one. You have to be prepared for that effect so that it doesn't discourage you prematurely. If you're worried about it, either run the test against new customers or run it for longer than usual. We would usually give changes like this a few extra days to see if customers eventually recover their enthusiasm for the product.
On the other hand, existing customers can be an advantage when testing. For example, let's say you are adding a new feature in response to customer feedback. Here, you expect that customers will find the feature a logical or natural extension, and so they should immediately gravitate to it. If they don't, it probably means you misunderstood their feedback. I have made this mistake many times. At IMVU, for example, we used to hear the feedback that people wanted to "try IMVU by themselves" before inviting their friends to use it. Because many on our team came from a games background, we just assumed this meant they were asking for a "single-player mode" where they could dress their avatar and try controlling it on their own.
It turns out that shipping that feature didn't make much impact when we looked at the data. What customers really meant was "let me use IMVU with somebody I don't know," so they could get a feel for the social features of the product without incurring the social risk of recommending it to a friend. Luckily, the metrics helped us figure out the difference.
If your product has areas where people read and then different areas where people interact, are there ways to do metrics to determine where people spend their time? Could this be done on mouse focus, commenting amounts, answer percentages, download percentages, etc?
There are ways to measure customer behavior in tremendous detail, and in some situations these metrics are important. But lately I have been recommending in stronger and stronger terms that we not get too caught up in detailed metrics, especially when we are split-testing. Let's run a thought experiment. Imagine you have incredibly precise metrics about every minute that every customer spends with your product, every mouse click, movement - everything. So you do a split-test, and you discover that Feature X causes people to spend 20% more time on a given part of your product, say a particular web page.
Is that a success? I would argue that you really don't know. It might be that the extra time they are spending there is awesome, because they are highly engaged, watching a video or reading comments. Or it could be that they are endlessly pecking through menus, totally confused about what to do next. Either way, you would have been better off focusing your split-test on high-level metrics that measure how much customers like your product as a whole. Revenue is always my preferred measure, but you can use anything that is important to your business: retention, activation, viral invites, or even customer satisfaction in the form of something like Net Promoter Score. If an optimization has an effect at the micro level that doesn't translate into the macro level, who cares?
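The idea of judging a split-test by a macro metric can be sketched in a few lines of Python. This is a minimal illustration, not any particular company's tooling: the function names (`variant`, `report`) and the bucket names are made up for this example. Users are assigned deterministically by hashing their ID, so assignment is stable across sessions without storing any state, and results are rolled up to average revenue per bucket rather than page-level micro metrics.

```python
import hashlib


def variant(user_id: str, experiment: str = "feature_x") -> str:
    """Deterministically assign a user to a test bucket.

    Hashing (experiment, user_id) keeps the assignment stable for a
    given user without any stored state. Names here are illustrative.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "feature_x" if int(digest, 16) % 2 == 0 else "control"


def report(revenue_by_user: dict) -> dict:
    """Roll per-user revenue up to average revenue per bucket,
    a macro metric, instead of time-on-page or click counts."""
    buckets = {"control": [], "feature_x": []}
    for user_id, revenue in revenue_by_user.items():
        buckets[variant(user_id)].append(revenue)
    return {
        name: (sum(vals) / len(vals) if vals else 0.0)
        for name, vals in buckets.items()
    }
```

The same `report` shape works for any of the macro metrics above: swap the per-user revenue figures for retention flags, invite counts, or survey scores, and compare the bucket averages.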
For more on the details of how to do simple and repeatable split-testing, take a look at The one line split-test, or how to A/B all the time.