Evaluating performance isn’t easy. One of the simplest methods is to consider a single piece of accounting information, such as profit for profit center managers or production for production managers. But any one of these lone figures will tell only part of the story. That’s why many companies produce reports that present relevant financial data in the context of other metrics. (These metrics often reflect strategic aims and can range from internal, non-financial measures like quality to external information about customers or suppliers.)
The goal of these reports is to arm employees with additional information that will help them make the best decisions—the decisions that their supervisors would most want them to make. But does this actually work? In experiments, the results have been mixed.
I worked with colleagues Joan Luft and Michael D. Shields of Michigan State University to understand why accounting reports with additional information can at times stand in the way of good decision making.
We designed an experiment that involved 94 MBA students. One group of MBA students acted as managers of a fictional restaurant chain and were charged with making a management decision for the firm. Their goal was to maximize income with a strategy of “delighting the customer.” Some of these students made their decisions with accounting reports that included only profit, while others had reports of profit plus additional information on customer delight.
Meanwhile, a second group of MBA students, acting as supervisors, evaluated the performance of the decision-makers. Each supervisor had access to the same reports as his or her subordinate, whether these were profit only or “profit plus.”
We found that the additional information in the “profit plus” reports imposed hidden costs on organizational performance in two ways: It increased the likelihood that subordinates would make decisions their supervisors disagreed with, and it also increased the uncertainty of their performance evaluations, resulting in negative performance evaluation surprises.
The problem stems from the fact that complex reports with many types of information can be interpreted in many plausible ways. This allows individuals to arrive more easily at interpretations supporting what they already believe. As a result, initial differences in opinion between subordinates and superiors will only be reinforced.
When subordinates fail to realize that their own interpretations differ from those of their supervisors, their decisions may also differ from those that their supervisors would want them to make. This in turn increases the likelihood that they will be surprised by unexpectedly poor performance evaluations. The uncertainty that this creates undermines the effectiveness of these performance evaluations as a management tool.
Does this mean that simpler is always better? On the one hand, our experiment does suggest that evaluations based on single pieces of information, such as profit or production, put supervisors and their subordinates on the same page and eliminate uncertainty in performance reviews—which helps explain why some companies focus on single aggregate measures for evaluations. On the other hand, more complex reports and evaluations offer richer information that can, under the right circumstances, lead to good decisions and effective evaluations; users must simply be careful in selecting and interpreting this information. Further research is necessary to explore how companies can mitigate the costs and extend the benefits of using complex reports for subjective performance evaluation.