The world’s data is growing at an astonishing rate. Increasingly, statisticians are turning to data generated as the by-products of our daily, digitised lives, as well as larger and more fine-grained sets of regulatory data.
Good data, combined with good statistical analysis, has long helped economists overturn weak theories and throw light on dark areas of our knowledge. Learning to channel the deluge of data should only improve our understanding.
But with bigger data come larger and deadlier pitfalls. Messy, unstructured data sets and vast volumes may introduce biases that humans struggle to identify, and those biases must not find their way into policy. The need to handle such large volumes raises questions of processing power, storage, efficient coding and how to organise the information within an institution so that the right people can make use of it – but no one else.
The aim of this focus report is to offer assistance to central bankers in executing this demanding transition. Our survey of central banks highlights how the move to better data governance is proving a challenge for many, while our Q&A with the St Louis Fed, an established leader in data management, may offer some answers and guidance. The European Central Bank’s Aurel Schubert outlines a major European project to harness data for better regulatory outcomes, and our online forum panellists discuss the potential for big data to better shape policy and regulation.
Central banks are uniquely placed to deepen our collective understanding of data. They concentrate within their ranks not only experts in economics and statistics, but also a growing number of computer scientists, mathematicians and physicists. Central banks are also well versed in communication, giving them a sturdy platform from which to encourage a deeper public understanding of statistics and to resist the rise of lazy or misguided interpretations.
There is much work to do. We hope our modest contribution can make the way ahead a little clearer.