There is a balance to be struck in the role of anyone responsible for metrics and measurement: how to approach the perennial question, "can you make me a dashboard/report so I can see X?"

Working on questions like this can feel tiring, boring and often pointless. The results sometimes strike gold, but often end up in the bin. Perhaps more surprisingly to some (although not to the battle-hardened analysts out there), this question doesn't go away in a world where lots of dashboards and reports already exist - it can even become more prevalent in those circumstances.
This feels counterintuitive, but I believe there are three factors at play here which create this dynamic.

First, and most obviously, a data vacuum.

Start from a team with no metrics, and very quickly there will be an ask for some visibility. That initial visibility will likely be well received and well used, and for a time things will get better. Once the initial hurdle of basic visibility is cleared, there is significant value to be gained from using performance information to make progress, and significant incentive for consumers of data to access it in the few ways they are able. Life is good.

Second, Bernoulli's hypothesis.

In economics, there is a concept of utility: the satisfaction derived from a gain in wealth is inversely proportional to the wealth already possessed. Put simply, £100 means more to someone on minimum wage than to a millionaire.
The same is true of business visibility. Each additional metric, dashboard or report is proportionally "worth" less, because there already exists a wealth of data to use in decision making. Contrary to what you might expect, this doesn't reduce the demand for visibility - it dilutes it. The number of desired views multiplies, and each one individually is cared about less. Visibility comes to be viewed as cheap, and so, like the £100, less thought is given to how it is spent.
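Bernoulli's classic formulation models utility as the logarithm of wealth, which makes the diminishing effect easy to see in a few lines. A minimal sketch (the wealth figures are illustrative assumptions, not figures from this piece):

```python
import math

def utility_gain(wealth: float, windfall: float = 100.0) -> float:
    """Extra utility from a windfall, under Bernoulli's log-utility model."""
    return math.log(wealth + windfall) - math.log(wealth)

# Hypothetical annual wealth levels, in pounds
minimum_wage = 20_000
millionaire = 1_000_000

# The same £100 delivers far more utility at the lower wealth level
print(f"{utility_gain(minimum_wage):.6f}")  # roughly 50x the gain below
print(f"{utility_gain(millionaire):.6f}")
```

The analogy to reporting: each new dashboard is a windfall landing on an ever-larger pile of existing visibility, so its marginal value keeps shrinking.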
Together, these two principles make a very clear case for diminishing returns, but there is a third effect at play which is more devious.

Third, the filtering effect, or the paradox of choice.

It is well documented that humans are bad at choices with lots of options. This is the fundamental principle behind why recommendations, whether human or algorithmic, are so valuable in decision-making. The implication in a reporting context is that every additional metric which could be used increases the difficulty of choosing which metric should be used.

The practical effect is that each additional way to observe things paradoxically reduces the degree to which consumers of that data feel able to effectively observe the system. That can easily drive demands for more or new visibility - which in turn makes the problem worse, and the situation can easily spiral out of control.

So are you saying we should stop building new reports after a certain point?

While that would provide a blunt (and in some scenarios potentially very effective) solution, it is unlikely to be popular - and I think there are a few much better options on the table.

  1. The first and foremost of these is to aggressively deprecate old reporting. Data professionals are often hoarders, because they know that the more data you have, the easier it is to reach conclusions and the more ways you have of looking at a problem. They also dislike rebuilding reports they've built before, so they like to keep old reports around "just in case". The cost of this is often invisible, but beyond the pure maintenance burden, the bigger cost is that old reports muddy the water and contribute to the filtering problem. Once you have a new way of looking at something, delete the old way (or at least archive it somewhere the majority of users cannot find it). Some people will be grumpy, because change is always uncomfortable - but the cost of leaving it in circulation is worse.
  2. Brownfield development is much better than greenfield. An extension of the first point is to improve and extend existing reporting and visibility wherever possible. Data professionals often like a "clean start", but this often comes from a desire to avoid the tough questions and the unpicking of old work. Beyond being another way of avoiding more and more reports, building the habit and expectation that reporting is a dynamic thing which can be improved and changed helps redirect the blunt desire for "more visibility" (which, as discussed above, is a symptom of too much visibility) into a more nuanced desire for "better visibility" - which is much closer to the true solution to the problem.
  3. Plan, prioritise and schedule reporting work alongside everything else the team does, using the same criteria. While we've explored why the perceived value of reporting and the ease of asking for it decrease, the actual effort and time required to do it well stay the same - or arguably grow, given the need to think through everything discussed here. In the early days of a team we can be lazy in our scoping, because any visibility is better than no visibility (mostly). But while we can't necessarily fault our colleagues for desiring more visibility (which, as we've seen, is only natural), we can fault ourselves for being drawn into the spiral. We should know better. Whether it's the 2nd report or the 100th, we should always be clear on who is going to use it and how we think it delivers the kind of value that makes it worth building.