Much can change in a year – especially where technology is concerned. Central Banking conducted its first survey of big data use by central banks in 2016, when thinking on the topic was in flux. This year’s survey updates the original findings, highlighting the field’s rapid evolution and its coalescence into concrete applications.
One of the most striking results of this year’s survey is the shift of big data analysis into the mainstream. In 2016, 22% of central banks described big data as a “core input” into policymaking; in the 2017 survey, 36% said this was the case, and 58% of respondents reported using big data as at least an auxiliary input to policy.
The past year has been marked by growing collaboration among central bank data users, notably in forums such as the Irving Fisher Committee. Central banks have also devoted considerable effort to the question of how data – big and small – should be gathered, stored and organised within their organisations to generate the most powerful results.
Big data techniques such as machine learning are becoming much more widely used in economics, as our feature on the topic finds. Applications ranging from early warning systems to trend forecasting and pattern detection in unstructured data are being adopted across the central banking community. Deep learning methods bring the potential for dramatic change in the years ahead.
Our Q&A with the European Central Bank’s Per Nymand-Andersen reveals the ways in which central banks are harnessing big data, providing insights from one of the leading institutions in the field. Participants in our online forum held in September also revealed the diversity of applications and flagged several issues for central banks to be aware of.
As big data enters the mainstream, it remains critical to understand the ways in which it can transform our thinking, but also to be realistic about its shortcomings and biases. This report is designed as a small step in that direction.