Too much visibility?

There is a balance to be struck in the role of anyone who is responsible for metrics and measurement. How should you approach the perennial question: "can you make me a dashboard/report so I can see X?"

Working on questions like this can feel tiring, boring and often pointless. The results of the work sometimes strike gold, but often end up in the bin. Perhaps surprisingly to some (although not to the battle-hardened analysts out there), this question doesn't go away in a world where lots of dashboards and reports already exist - it can even become more prevalent in those circumstances.
This feels counterintuitive, but I believe there are four factors at play here which create this dynamic.

First, and most obviously: a data vacuum.

Starting from a team with no metrics, very quickly there will be an ask for some visibility. Likely that initial visibility will be well received, well used, and for a time things will get better. Once the initial hurdle of basic visibility is cleared, there is significant value to be gained from using performance information to make progress, and significant incentive for consumers of data to access it in the few ways they are able. Life is good.

Second: Clarity only exists on the far side of complexity.

Beyond the simplest reporting needs, the ask is usually not a direct translation of the user's real need. It would probably be unreasonable to ask most people in the market for a car to sketch out the engine performance figures and body shape precisely before they were allowed to purchase. In analytics, however, we often ask the equivalent of our partners - and then are annoyed when they are disappointed that we've built precisely what they asked for.

The request for insight often arrives disguised as a request for visibility and reporting, because that's how the end result is perceived - it's just a chart, right? The request framed as a report is then further muddied because the user doesn't yet know exactly which chart is going to give them the insight they need to move forward - so they do their best to be helpful and guess. If their initial guess turns out not to be spot on, the natural response is to guess again.

Third: Bernoulli's hypothesis (diminishing marginal utility).

In economics, there is a concept of utility: the satisfaction derived from a gain in wealth is inversely proportional to the quantity of wealth already possessed. Put simply, £100 means more to someone on minimum wage than to a millionaire.
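
For those who like the maths, Bernoulli's formulation can be sketched in a couple of lines (the wealth figures below are purely illustrative):

```latex
\frac{du}{dw} = \frac{k}{w}
\quad\Rightarrow\quad
u(w) = k \ln w + C
\quad\Rightarrow\quad
\Delta u \approx \frac{k \, \Delta w}{w}
```

Plugging in illustrative numbers: £100 against £20,000 of existing wealth gives a felt gain proportional to 100/20,000 = 0.005, while the same £100 against £1,000,000 gives 0.0001 - roughly fifty times less.
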
The same is true of business visibility. Each additional metric, dashboard or report is proportionally "worth" less, because there already exists a wealth of data to use in decision making. Contrary to what you might expect, this doesn't reduce the demand for visibility - it dilutes it. The number of desired views multiplies, and each one individually is cared about less. Visibility comes to be viewed as cheap, and so, like the millionaire's £100, less thought is given to how it is spent.
Together, these principles make a very clear case for diminishing returns, but there is a fourth effect at play which is more devious.

Fourth: The filtering effect, or the paradox of choice.

It is well documented that humans are bad at choices with lots of options. This is the fundamental principle behind why recommendations, whether human or algorithmic, carry so much value in decision-making. The implication in a reporting context is that every additional metric which could be used increases the difficulty of choosing which metric should be used.
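
If you want a formal handle on this, Hick's law from the human-computer interaction literature is one common model (borrowed here as an illustration - the argument doesn't depend on the exact form): the time to choose among n equally likely options grows logarithmically with n.

```latex
T = b \cdot \log_{2}(n + 1)
```

Here b is an empirically fitted constant, and the logarithmic growth is the optimistic case: it assumes the options are familiar and well organised. A sprawl of overlapping dashboards is neither, so the real cost of each addition is plausibly higher.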

The practical effect is that each additional way to observe things paradoxically reduces the degree to which consumers of that data feel they are able to effectively observe the system. That can easily drive demands for more or new visibility - which in turn makes the problem worse, and the situation can quickly spiral out of control.

So are you saying we should stop building new reports after a certain point?

While that would provide a blunt (and in some scenarios potentially very effective) solution, it is unlikely to be popular - and I think there are a few much better options on the table.

  1. The first and foremost of these is to aggressively deprecate old reporting. Data professionals are often hoarders, because they know that the more data you have, the easier it is to reach conclusions and the more ways of looking at a problem you have. They also dislike building reports they've built before, so they like to keep old reports around "just in case". The cost of this is often invisible, but beyond the pure maintenance burden, the bigger cost is that old reports muddy the water and contribute to the filtering problem. Once you have a new way of looking at something, delete the old way (or at least archive it to a place the majority of users cannot find it) - even a simple check of usage logs can surface candidates, as the sketch after this list shows. Some people will be grumpy, because change is always uncomfortable - but the cost of not removing it from circulation is worse.
  2. Listen, and understand what your user needs - not just the chart they want. Building exactly what your users ask for may actually be the fastest way to encourage them back to your door to ask again, if what you build doesn't help with their underlying need. Remember that their initial ask is their best attempt to frame what they need, and part of the role of a great analyst is to help refine the question before diving straight into the answer. Beyond that, not all decisions require ongoing reporting - some are best served by a one-off or ad hoc analysis. Identifying these scenarios early also ensures that you don't create things you're just going to have to deprecate later.
  3. Prefer brownfield development to greenfield. Improve and extend existing reporting and visibility wherever possible before building new. Data professionals often like a "clean start", but that desire often comes from a place of avoiding tough questions and the unpicking of old work. Additionally, this approach builds habits and an expectation that reporting is a dynamic thing that can be improved and changed. That naturally helps redirect the blunt desire for "more visibility" (which, as discussed, is a symptom of too much visibility) into a slightly more nuanced desire for "better visibility", which is much closer to the true solution to the problem.
  4. Plan, prioritise and schedule reporting work alongside everything else the team does, using the same criteria. While we've explored why the perceived value of reporting decreases and the ease of asking increases, the actual effort and time required to do it well stays the same. It may even be greater, given the need to think through everything we've discussed here. In the early days of a team we can be lazy in our scoping, because any visibility is better than no visibility (mostly); but while we can't necessarily fault our colleagues for desiring more visibility (which, as we've seen, is only natural), we can fault ourselves for being drawn into that spiral. We should know better. It shouldn't matter whether it's the 2nd or the 100th report: we should always be clear on who's going to use it, and how we think it delivers the kind of value that makes it worth building.
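
On the deprecation point, here's a minimal sketch of what surfacing stale reporting might look like. The dashboard names and the usage-log structure are entirely hypothetical - most BI tools expose some equivalent of a last-viewed timestamp:

```python
from datetime import datetime, timedelta

# Hypothetical inventory: dashboard name -> when anyone last viewed it.
# In practice this would come from your BI tool's usage or audit log.
dashboard_last_viewed = {
    "weekly_sales_overview": datetime(2024, 1, 5),
    "legacy_ops_report": datetime(2023, 6, 12),
    "churn_deep_dive": datetime(2024, 2, 20),
}

STALE_AFTER = timedelta(days=90)


def stale_dashboards(last_viewed: dict[str, datetime], now: datetime) -> list[str]:
    """Return dashboards nobody has viewed within the staleness window."""
    return [name for name, seen in last_viewed.items() if now - seen > STALE_AFTER]


if __name__ == "__main__":
    for name in stale_dashboards(dashboard_last_viewed, datetime(2024, 3, 1)):
        print(f"Candidate for archiving: {name}")
```

The point isn't the code - it's that "nobody has looked at this in three months" is a cheap, objective trigger for the archiving conversation.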