Behind the Metrics
- Focus on reaching the real goal, not improving the metrics
- Don't judge how (un)successful your people are by their performance metrics
- Metrics are a catalyst for conversation
We live in a data-driven world, where we rely on tools like Google Analytics or Tableau to make key business decisions. But can you rely too heavily on data?
I'm all too familiar with the struggles of a growing startup. Somewhere around the 150-employee mark, leadership typically begins collecting performance metrics to see how effective and efficient a department, team, or individual is, and to make well-informed resourcing decisions. But when does it become a problem?
When are metrics a problem?
Data is never the problem. How you use data can be the problem. Performance metrics should never be the sole basis for deciding whether someone is succeeding or failing. Let me share a story to illustrate why.
In a previous life I worked in sales making cold calls all day. Don't hate me, I still have nightmares. In this role they tracked performance metrics such as 120+ dials per day, 15+ conversations over 1 minute long, the ratio of dials to conversations, and the amount of time it took to make a sale. After my first month I felt like a failure. Of the small percentage of leads who actually answered their phone, virtually none would stay on for longer than 1 minute. I closed a total of zero sales. I talked with my manager to see what could be done. His answer? Dial more. Just increase your number of tries and your odds go up, right? Except 0 * 0 = 0.
But I tried his suggestion and I made more dials. Halfway through month two I was stuck with the same results. More dials, but fewer conversations. Inexplicable. I went to him again. His solution? More dials. He even suggested I skip my lunch to get in more calls a day. In fact, he wanted to help me so badly that he decided to personally track my phone and monitor my number of dials per day, to ensure I was calling enough people. Eventually I started to break. I wasn't succeeding, but I couldn't quit because I needed the paycheck, so I started to game the system. I began dialing answering machines I knew didn't have a time limit and leaving messages long enough to log as conversations. Dials down, conversations up. Metric success, but it actually got me much farther from the only real goal: making sales.
By month three, I had totally given up. Morally I couldn't game the system any longer, but I still couldn't quit. So I stopped playing their game. I ignored the metrics. I set aside the leads they gave me and started calling other people's rejects, the ones I knew had at least picked up the phone before. My goal was no longer numbers. I had resigned myself to the fact that I was just bad at sales, so when people picked up I decided to just be myself and talk to them, not sell to them. I started by asking about their day. I asked about their business, about their family, and I told them about mine. I just made conversation because I was bored and wanted to pass the time. My dial-to-conversation ratio skyrocketed to almost 1:1. Most of my leads forgot to ask why I was even calling until at least minute 15. By minute 45 they were asking what I was selling and if they could buy it. By the end of the month I had sold more than any other person in the company. I had somehow found the magic sales potion. Only it wasn't magic; I had simply given up on the metrics and resorted to being myself.
My manager and our Director of Sales scheduled a meeting to personally congratulate me on my success. However, they spent the second half of that same meeting trying to figure out how to get my average talk-time-to-sale ratio down, because it was significantly higher than anyone else's. By their math, if I could make $1,000 in one 60-minute call, and they could cut my talk time in half, then I could make $2,000 in two 30-minute calls, and eventually $4,000 in four 15-minute calls. At this, I gave my two weeks' notice.
They never bothered to ask why I was so successful; they were too obsessed with the numbers, the data, the performance metrics. To them, I was a machine, and all they had to do was pull some levers to squeeze more juice out. This is a prime example of what happens when you define success by performance metrics. You risk ignoring other approaches that might also work, because your head is stuck so far down the black hole of numbers and statistics that you forget there might be a human element behind it all.
The other risk, especially when you tie reward to performance metrics, is that people will do whatever it takes to be successful in the eyes of the metrics. They game the system. I called answering machines. I've seen software developers write obscenely long code for simple features because their bonus was tied to lines-of-code-per-day. I've seen testers log one defect 10 different ways because they were rated based on the number of defects they filed. I've seen Project Managers cut safety precautions to meet arbitrary deadlines that have no actual value, because that's what they needed to do to get their bonus. The examples are almost endless.
These aren't bad people. They're people forced to make bad decisions because their livelihood lies in arbitrary numbers that don't in themselves equate to success. It's a system problem, not a people problem.
When are metrics useful?
Collecting performance metrics is one of the most important things you can do as a business. It shines light into dark areas of your organization, it can help foster teamwork and collaboration, and it can make all the difference in resourcing correctly. Collecting performance metrics is one of the first things I advise companies to do when they've lost direction, because you need to know where you are to know where to go. It's vital. However, it's important to distinguish how you use the data.
Metrics are a catalyst for conversation.
I collect a lot of data on teams that I work with. Velocities, capacities, hours worked, progress towards deliverables and commitments, etc. But for the amount of data I collect, I act on a very small percentage of it. I use metrics mainly to reveal both positive and negative deviations from the standard. When data reveals a negative deviation, it spurs a conversation with the team and enables them to self-assess and self-solve, if there's a problem. What I don't do is see a negative deviation and jump straight to correction. That skips a step. Let me give you an example.
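To make the idea of "deviation from the standard" concrete, here's a minimal sketch of how that flagging step might work. The numbers, thresholds, and function name are all hypothetical, not my actual tooling; the point is that the output is a label that starts a conversation, never a verdict.

```python
from statistics import mean, stdev

def flag_deviations(history, current, threshold=2.0):
    """Flag a metric only when it deviates notably from the team's own baseline.

    history: past values of a metric (e.g. sprint velocity)
    current: the latest observation
    Returns a label that prompts a conversation, not a judgment.
    """
    baseline = mean(history)
    spread = stdev(history)
    if spread == 0:
        return "normal" if current == baseline else "deviation"
    z = (current - baseline) / spread
    if z > threshold:
        return "positive deviation"   # ask what went right; is it repeatable?
    if z < -threshold:
        return "negative deviation"   # ask what happened; let the team self-solve
    return "normal"

# Hypothetical sprint velocities for one team
velocities = [30, 32, 29, 31, 30]
print(flag_deviations(velocities, 31))  # normal
print(flag_deviations(velocities, 12))  # negative deviation -> schedule a conversation
```

Note that the baseline is the team's own history, not a company-wide target; the same raw number can be normal for one team and a deviation for another.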
I was working with a software development team and I tracked their progress towards our goal via a Burndown Chart (amount of remaining work plotted over time). It's a very common performance metric. Here is an example:
Highlighted in red is a spike in the amount of work we had left to do. Following the spike, the amount of work remaining burns down relatively steadily for the rest of the period. So what happened?
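For readers who haven't built one, a burndown chart is just the remaining work plotted per day, and a spike is simply a day where the remaining total went up instead of down. A toy sketch with made-up point values (including a mid-sprint spike like the one described) could detect it like this:

```python
# Toy burndown: remaining work (in points) per day of a sprint.
# Day 5 includes a hypothetical spike where extra work was logged.
remaining = [40, 36, 33, 30, 38, 34, 29, 24, 18, 11, 5, 0]

# Day-over-day change; a positive delta means work was *added*, not burned.
deltas = [b - a for a, b in zip(remaining, remaining[1:])]
spikes = [day + 1 for day, d in enumerate(deltas) if d > 0]
print(spikes)  # days where remaining work jumped -> worth a conversation
```

The detection itself is trivial; the valuable part is what you do next, which is ask the team about it rather than assume something went wrong.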
I check this chart every day to monitor for deviations like this. When I saw this spike I scheduled a meeting with the team. I started the meeting by presenting this chart and simply asking what had happened to cause the spike. One of the team members explained that he had been pulled by the CTO to work on a side project and had logged this work in the team's backlog so it would be visible, causing the spike. The team discussed their priorities and commitments, decided to accept the added work, and agreed they would have to reallocate some other work items to accommodate it and still meet their commitments.
At no point was I upset that the metric was off. At no point did I reprimand the developer for introducing something that could derail our project or increase scope. I used the data to raise a red flag, and allow the team to have a discussion so that they could self-solve, if there was a problem. In this case there wasn't a problem. The work fit in and everything was fine. The metric didn't reflect the success or failure of the team.
Let's circle back to my first story. By the end of my third month in sales, one specific performance metric showed a negative deviation from the standard: my talk time per sale was far higher than anyone else's. But by skipping straight from deviation to correction, without having the conversation, my manager and the Director of Sales never learned why. If they had talked with me, they would have discovered that it was actually a positive deviation in my case, because it enabled me to meet the real goal. It was only negative according to their metrics.
When data reveals a positive deviation, it should spur a conversation with the team or individual to determine if they have discovered an alternate solution that might be repeatable for others. In my sales example, I was selling more than I ever had, and more than anyone else, not in spite of my talk time per sale, but because of it. If they had correctly identified the deviation as positive, and engaged me in conversation, they would have discovered that many of my struggling colleagues could have benefited from the same strategy adjustment. If those with personalities similar to mine were allowed to engage in a longer sales-cycle, they would be able to sell far more than in the enforced shorter one. It would have repointed the whole organization to what is important: the goal not the metric.
Identify a deviation > have a conversation to learn the story behind the metric > adjust your strategy. Too many leaders skip the middle step. They see a deviation and jump straight to correction. Stop using performance metrics to judge the success or failure of your people. Use them to spark a conversation to determine if there's a problem, then discuss how to solve it if one exists. The point is to reach the goal, not to improve the metrics.