There are three stages:
- Ideas: we start with ideas about what our product could be.
- Code: we're a software company, so what we do every day is turn ideas into code.
- Data: hopefully, we find out what happens when people use that code, creating data.

And three activities that move us from one stage to the next:
- Implement (programming!), where we turn ideas into code the best way possible.
- Measure what happened, as quickly as possible.
- Learn from the data, letting it influence our ideas for the next iteration through the loop.
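The loop above can be sketched in a few lines of code. The function names here (`implement`, `measure`, `learn`) are hypothetical stand-ins for whatever your team actually does at each stage:

```python
def product_loop(ideas):
    """One pass through the ideas -> code -> data loop."""
    code = implement(ideas)    # turn ideas into working software
    data = measure(code)       # observe what actually happened
    return learn(data, ideas)  # let the data shape the next round of ideas

# Minimal stand-ins so the loop runs end to end (purely illustrative):
def implement(ideas):
    return {"feature": ideas[0]}

def measure(code):
    return {"signups": 12, "feature": code["feature"]}

def learn(data, ideas):
    return ideas + [f"improve {data['feature']}"]

ideas = ["onboarding flow"]
for _ in range(3):             # the goal: make each full cycle as fast as possible
    ideas = product_loop(ideas)
```

The point of the sketch is that it is a single cycle: speeding up only one function while the others stall does not speed up the loop.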
Optimize speed through the whole loop. This sometimes steps on the favored opinions of functional specialists in any organization.
My personal favorite: "Code without data collection? Faster, but..." Ever heard a programmer argue for ripping out all that pesky data-monitoring code? It's slowing down the system, wasting resources, and creating ugly code and uglier scaling problems. If we just stopped measuring, we could write code a hell of a lot faster.
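In practice, instrumentation doesn't have to be heavy. Here is a minimal sketch (all names are my own, not from any particular library) of an in-process counter: incrementing is just a dictionary update on the hot path, and the counts get handed off in a batch later:

```python
import threading
from collections import Counter

class Metrics:
    """Hypothetical in-process metrics counter.

    Incrementing is a cheap in-memory update; a real system would have
    a background thread ship the flushed counts somewhere durable.
    """
    def __init__(self):
        self._counts = Counter()
        self._lock = threading.Lock()

    def incr(self, name, n=1):
        # Cheap enough to sprinkle through production code.
        with self._lock:
            self._counts[name] += n

    def flush(self):
        # Take a snapshot and reset; here we just return the snapshot.
        with self._lock:
            snapshot = dict(self._counts)
            self._counts.clear()
        return snapshot

metrics = Metrics()
metrics.incr("signup")
metrics.incr("signup")
metrics.incr("checkout_error")
print(metrics.flush())
```

The design choice worth noticing: the cost the programmer objects to lives in `flush`, off the hot path, not in `incr`.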
If you have worked with a Professional Data Warehouse Expert, you might have seen: "Measure 10,000 things? Comprehensive, but..." No human being can learn from 10,000 graphs. It's overwhelming. To turn data into learning, you have to focus on the few key pieces of data that everyone agrees are important. And you have to get the decision makers and implementers to look at (and believe!) the data on a regular basis.
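One way to make "focus on the few key metrics" concrete is to maintain an explicit, team-agreed whitelist and report only what is on it. A minimal sketch, with invented metric names:

```python
# Hypothetical example: the short list the whole team has agreed to watch.
KEY_METRICS = {"signups", "activation_rate", "weekly_revenue"}

def key_report(all_measurements):
    """Reduce everything we could measure to the agreed-upon few."""
    return {name: value
            for name, value in all_measurements.items()
            if name in KEY_METRICS}

# Out of 10,000 measurements, only the key ones survive into the report.
everything = {f"metric_{i}": i for i in range(10_000)}
everything.update({"signups": 412, "activation_rate": 0.31})
print(key_report(everything))
```

Keeping the whitelist small and explicit is the point: adding a metric to it should be a team decision, not a one-line code change buried in a dashboard.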
How about documentation that nobody reads? Reports that go unnoticed? Alerts that go off so often that they get ignored? Split-test experiments that go on forever? All of these are true waste, and they generally happen because somebody is optimizing for their particular part of the puzzle, not for the team as a whole.