Much has been said and written about how you can measure being “Agile” in software development. I, for one, had my own share, writing an article about it in an online Agile magazine back in 2011. In retrospect, it makes me laugh at how bad it is. Bad in terms of ridiculous complexity. And bad in terms of missing a clear context for what the whole practice is for. All that circus was supposed to be a show by the development team, for the development team, and no one else. I presented tools for them to try and gauge whether they were improving and becoming as agile as they could be. I can see how it could easily be picked up by people outside the development team (yes, I’m looking at management) and misused. I like how someone on an Agile forum pointed out that it’s good, but ultimately just a placebo.

Almost a year ago, eBay posted an article about how they use a plethora of metrics to build their performance feedback system. All was well and good until I hit a few statements I do not necessarily agree with.

“The peer feedback results not only help the management team get much more insight into each individual’s performance, but also help identify and fix team-level issues that have more profound and meaningful impact on our ability to improve our work.”

I believe monthly surveys targeting individual members won’t give you insight into how to fix team-level issues. Why? Because it’s trying to fix the whole by fixing the parts (or at least understanding the moving bits). This is the typical reductionist point of view that is prevalent in most top-down mandated metrics. People are way more complex than this. The levers and switches for fine-tuning team-level performance do not lie solely (if at all) in any individual performance metric.

“People self-organize and share the team’s performance. But how about the individual’s performance within the team? I’m not supposed to micro-manage each person, but it seems the Scrum team becomes a ‘black hole’ to me, and I lose sight of each team member’s performance behind the ‘event horizon’”

So they are not supposed to micro-manage and yet need to keep track of each team member’s performance. A bit of a conflicting statement. If they are not to micro-manage, then what is the individual monitoring for? Perhaps it’s an irrevocable company policy, and the person making the statement above is forced to do something that conflicts with their philosophy. And now you have competing philosophies between the company and its personnel. Metrics can easily mask an underlying conflict within the system.

In summary, I would like to think the right metric is one that is contextually correct. That is, the people in the right context monitor and fix their own situation, regardless of how simple or complex their way of doing so is. The metric should serve only one or a very limited set of goals (e.g., for the team to understand where they are headed, not for performance appraisals). Otherwise, gaming the process is inevitable, with possibly unexpected effects. Much care must be taken in deciding at what level these decisions are made. For all we know, we are blinding ourselves by masking faulty assumptions with the intricacies of the metrics we set.

Mike Mallete


Agile Coaching in the Asia Pacific

Thoughts and ideas of Mike Mallete - Agile Coach, Trainer, and Software Developer
