Kill Your Darlings - Part 2
- Published: September 6, 2016
- Written by Max Young
“It is amazing that there are so many stars in the sky and that the sky itself is so vast: but it’s wonderful, bordering on mystical, that man has actually measured it.”
This is the second in a three-part article about the process of choosing, quantifying, and starting your first process transformation. Part 1 can be found here.
So, you’ve chosen your project: you’re in love with it. You know it’s the right fit. You’ve done your homework, you’ve got funding, you’ve convinced your rivals & your peers, and you’re ready to execute. The rest is just a matter of technology, right? We won? We can rest now?
Of course not. This is where the real work starts. This is where we have to challenge ourselves to be brutally honest, soul-crushingly humble, and drive towards the right answer, not just the one that validates our preconceived ideas.
Avoiding Vanity Statistics: don’t measure your head.
Here’s the tl;dr for the following section. You’ve got to call your shot, and it can’t be the eight ball.
In the 1800s, there was a popular pseudo-medicine, Craniometry, which claimed that intelligence could be measured by quantifying the size of the head. The theory was that the bigger the head, the more intelligent the person. There was even a very precise instrument, the Craniometer, to facilitate the process.
Figure : Craniometer
There was nothing wrong with the instrument. As a matter of fact, it was well crafted, and extremely precise. The problem was in the underlying assumption that there was a strong relationship between individual skull size and actual intelligence.
You see, the entire premise was flawed, and so the instrument of measure told the researchers what they wanted to hear. Personally, I have no doubt they all had very large heads, and that this was a way to validate exactly that.
So, what does this mean to you?
It means don’t be that guy. It means making sure you’re measuring the right things, and that you’re willing to take the risk of being proven wrong by the data. It means measuring the dollars and cents and errors and quality and throughput of the results of your process, projecting the improvement you expect to see, and deciding beforehand what you’ll do if the results don’t go as you expect.
How do I actually do that?
Glad you asked. Once you’ve narrowed your deliverables to one or two actual processes, use the following table to rank their values.
- Take the KPIs you defined in the first part of this article and project out what’s realistic in the way of improvement. Work with your SMEs, your business partners, your technology partners, and your historical data, and put together a chart like the following.
- Project out what you think achievable improvements are. Invite your critics to the conversation. Invite your boss and partners and the key stakeholders. Make sure you all agree about what would be a win. I know it’s hard to do this. I know there are political forces at work. But I can promise you this: those same forces will be waiting for you in the tall grass if you don’t do this, and you won’t see them coming, because you won’t be a part of the conversation. This is a way to get ahead of that.
- Be prepared to be wrong. Have a plan and a confidence interval for your projections. Align on the best answer, not just the answer that you had projected. Be prepared to be as hard on your own projections as on anything else. The difference between what we do and quackery is the science and discipline we apply to it. Otherwise, it’s all just voodoo. This is where the soul-crushing humility comes in. If you were wrong, be the first person to draw attention to it, improve, and move on.
- Keep revisiting the projections during the lifecycle of your project. I usually like to align my re-assessment to Sprint playbacks. There’s nothing wrong with making better judgments as you gather more data during the process. There’s everything wrong with refusing to do so.
- Don’t be the voodoo guy. See #3.
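The checklist above can be sketched in code. This is a minimal, hypothetical illustration (the KPI, the numbers, and the 10% tolerance band are all made up for the example): record the baseline and projection before the project starts, agree on an acceptable band with your stakeholders, then check actuals against that band at each sprint playback.

```python
def call_shot(baseline, projected, tolerance=0.10):
    """Record a projection before the work starts, with an agreed-upon
    acceptable band of +/- tolerance around the projected improvement."""
    band = abs(projected - baseline) * tolerance
    return projected, band

def assess(actual, projected, band, lower_is_better=True):
    """At each sprint playback, decide whether the actual result landed
    within the pre-agreed band -- before anyone can move the goalposts."""
    if lower_is_better:
        return actual <= projected + band
    return actual >= projected - band

# Hypothetical KPI: average errors per 1,000 transactions.
baseline = 40.0
projected, band = call_shot(baseline, projected=25.0)  # we predict 40 -> 25

print(assess(24.0, projected, band))  # hit the shot: True
print(assess(32.0, projected, band))  # missed: False -- time for humility
```

The point of writing the band down up front is that it removes the voodoo: a miss is a miss by a definition everyone agreed to beforehand, not a judgment call made after the fact.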
Figure : Calling your shot
The next article in this three-part series will focus on quantifying the end result: meshing what you expected with what actually happened, and moving forward from there.