Data as a critical factor for central banks

Data remains a critical factor for central banks. The financial crisis revealed that some of the deepest fissures were caused by gaps in data, and exposed the need for high-quality, comparable and timely data on the global financial network. Since then, policy-makers, supervisory authorities and standard-setters across the globe have been collaborating to better harmonise and standardise regulatory data in financial services. According to a recent BearingPoint Institute paper, urgent debate is still needed on how the world’s financial services industry could be better and less onerously supervised via a smarter approach to regulatory reporting and data exchange.1

Financial supervision and central banks’ monetary statistics and financial stability functions are largely driven by data. In the aftermath of the financial crisis, a ‘regulatory tsunami’ flooded the financial services industry. Particularly after the adoption of the Basel III framework, regulatory requirements have increased significantly, and new regulations such as AnaCredit, the Basel Committee on Banking Supervision (BCBS) 239, Solvency II, Dodd–Frank and International Financial Reporting Standard (IFRS) 9 have posed new challenges to the banking and insurance sectors at global, regional and local levels. Moreover, regulations such as the European Market Infrastructure Regulation (Emir), money market statistical reporting, the Markets in Financial Instruments Regulation (Mifir) and the Securities Financing Transaction Regulation (SFTR) oblige the major monetary financial institutions to report derivatives or money market data on a daily basis.

However, in the central banking area, while no single agreed definition exists, big data has already been heralded as offering a wide range of applications: from ‘nowcasting’ to modelling, to early warning systems and systemic risk indicators. For some it opens a new chapter in policy-making. A number of central banks are currently rethinking their data infrastructures, which today are rather siloed, reflecting the legacy of past decades and the absence of a central approach to data handling.
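
To give a concrete, if stylised, flavour of such applications, the sketch below computes a toy early-warning signal: the deviation of a made-up credit-to-GDP ratio from its rolling trend, with an alert when the gap exceeds an illustrative threshold. It is a minimal illustration, not any central bank’s production indicator; the series, window length and threshold are assumptions.

```python
# Toy early-warning indicator: deviation of credit-to-GDP from a rolling trend.
# Purely illustrative; real systemic risk indicators (e.g. the Basel
# credit-to-GDP gap) use more refined trend estimation and calibrated thresholds.
import pandas as pd

# Hypothetical quarterly credit-to-GDP ratio (percent) -- invented figures.
ratio = pd.Series(
    [120, 122, 125, 129, 134, 140, 147, 155, 158, 154, 148, 143],
    index=pd.period_range("2006Q1", periods=12, freq="Q"),
    name="credit_to_gdp",
)

trend = ratio.rolling(window=8, min_periods=4).mean()  # crude trend proxy
gap = ratio - trend                                    # deviation above trend
ALERT_THRESHOLD = 6.0                                  # illustrative, in pp

signals = pd.DataFrame({"ratio": ratio, "trend": trend, "gap": gap})
signals["alert"] = signals["gap"] > ALERT_THRESHOLD    # flag build-up periods

print(signals)
```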

Challenges for central banks

Notwithstanding the huge potential of big data, decision-making is now even harder than before, and businesses need adequate solutions to analyse this data.2 A crucial point is how to mine all this information from the different sources exhaustively and at reasonable cost. Despite innovative tools and technologies such as blockchain, cloud computing and machine learning, even today plans often fail because the required processing power outweighs the potential returns or computing time is too long.3 

The specific challenge for central banks, in the sense of effective 360-degree risk-based supervision, is to rapidly access, effectively manage, process and analyse in a timely manner the increasing amounts of supervisory, statistical and market (big) data. Near- or real-time access and efficient processing are regarded as especially critical factors owing to limitations in human and IT resources.4 According to a report by the Institute of International Finance (IIF), some regulators still use outdated portal solutions and methods that are inefficient and increase the chances of introducing errors.5 The IIF recommends automated, secure data transfer mechanisms based on standards such as XBRL. But even with the use of standards such as XBRL or SDMX, central banks must abandon a paper- or document-oriented world and think of data in an integrated and interlinked way.
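
As a simplified illustration of what standards-based, machine-readable reporting makes possible, the sketch below parses a minimal, hypothetical XBRL-style instance using only Python’s standard library and turns the reported facts into structured records. Real XBRL or SDMX processing involves taxonomies, validation rules and dedicated processors; the namespace, concepts and figures here are invented for illustration.

```python
# Schematic parsing of a minimal, hypothetical XBRL-style instance document.
# Real XBRL processing validates facts against a taxonomy; this sketch only
# shows the shift from documents to structured, machine-readable facts.
import xml.etree.ElementTree as ET

INSTANCE = """
<xbrl xmlns:rep="http://example.org/reporting">
  <context id="c2016Q4"><period><instant>2016-12-31</instant></period></context>
  <rep:TotalAssets contextRef="c2016Q4" unitRef="EUR" decimals="0">125000000</rep:TotalAssets>
  <rep:Tier1Capital contextRef="c2016Q4" unitRef="EUR" decimals="0">9800000</rep:Tier1Capital>
</xbrl>
"""

root = ET.fromstring(INSTANCE)
NS = "{http://example.org/reporting}"

facts = []
for element in root:
    if element.tag.startswith(NS):  # reported facts, not contexts or units
        facts.append({
            "concept": element.tag.replace(NS, ""),
            "context": element.get("contextRef"),
            "unit": element.get("unitRef"),
            "value": float(element.text),
        })

for fact in facts:
    print(fact)
```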

Current systems do not meet today’s requirements when regulators have to deal with large amounts of data of various kinds – collected from supervised entities for statistical, prudential or stability purposes, provided by information providers, or obtained from internal research and analysis. Such data ranges from granular micro information on single mortgage loans, securities traded and counterparties affected, to macroeconomic analyses of countries or regions, to form-based collections of financial and risk data and ad hoc supervisory exercises.

Some of this data will remain only within the perimeter of the central bank, while some will be remitted to other stakeholders such as the European supervisory authorities, national governments, the International Monetary Fund and the Bank for International Settlements, and some will be disseminated to the wider public or research community.

Therefore, it is mission-critical for regulators to:

Effectively handle the large amounts of increasingly granular data from various sources: that is, rethink existing IT system architectures and landscapes.

Gain transparency on the status of the reporting agents in the collection and dissemination process.

Consider interlinkages between micro and macro data sets in ‘going beyond the aggregates’ from macro and financial stability perspectives.

Have a timely overview of relevant micro and macro developments in the financial markets.

Execute reliable trend analyses on key performance indicators and key risk indicators based on validated collected data (see the sketch after this list).
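
A minimal sketch of the last two points – linking granular micro data to a macro aggregate and running a simple validation and trend check on a key risk indicator – is shown below. All column names, figures and thresholds are illustrative assumptions, not any actual reporting framework.

```python
# Minimal sketch: link granular (micro) loan records to a macro aggregate and
# run a simple validation and trend check on a key risk indicator (KRI).
# Column names, thresholds and figures are illustrative assumptions only.
import pandas as pd

# Hypothetical granular loan-level submissions from reporting agents.
loans = pd.DataFrame({
    "reporting_agent": ["BankA", "BankA", "BankB", "BankB", "BankC"],
    "period":          ["2016Q3", "2016Q4", "2016Q3", "2016Q4", "2016Q4"],
    "exposure_eur":    [1_200_000, 1_350_000, 800_000, 950_000, 400_000],
    "non_performing":  [False, True, False, False, True],
})

# Basic plausibility validation before any aggregation.
assert (loans["exposure_eur"] > 0).all(), "non-positive exposures reported"

# 'Beyond the aggregates': the same granular data yields the macro series ...
npl_ratio = (
    loans.groupby("period")
    .apply(lambda g: g.loc[g["non_performing"], "exposure_eur"].sum()
           / g["exposure_eur"].sum())
    .rename("npl_ratio")
)

# ... while the micro breakdown by reporting agent stays available for drill-down.
by_agent = loans.groupby(["period", "reporting_agent"])["exposure_eur"].sum()

# Simple KRI trend check: flag a quarter-on-quarter deterioration above 5 pp.
deterioration = npl_ratio.diff() > 0.05

print(npl_ratio.round(3), by_agent, deterioration, sep="\n\n")
```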

In view of the developments described above, it is indisputable that central banks must reshape their data management and further automate and industrialise the processes of handling data. Automation helps to minimise risk, reduce errors and increase transparency, and thereby delivers a better basis for decision-making.

According to a BearingPoint Institute article,6 a new information value chain is needed for reporting – one that helps to increase the efficiency of supervisory processes, minimise risk, allocate resources effectively and improve the basis for decision-making through higher transparency and faster availability of data. We further notice a trend towards shared utilities – a kind of ‘regulatory-as-a-service’.

A prominent example is the Austrian solution, where the National Bank of Austria and the supervised banks joined forces to replace the template-driven model step by step and to use innovative technologies to create a new regulatory value chain. The initiative is based on greater harmonisation and integration of data within banks, as well as greater integration of the IT systems of the supervisory authority and the supervised entities. It works through a common data model, developed by the central bank in co-operation with Austrian banks, and a shared utility, Austrian Reporting Services, which is co-owned by the largest Austrian banking groups. This model allows the cost of compliance to be shared, as well as the standardisation of data collection.
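
To convey the intuition behind such a ‘templates to cubes’ approach, the sketch below shows how a single granular dataset (the ‘cube’) can be pivoted into more than one template-style view, so data is collected once and reused. The attributes and figures are simplified assumptions and do not represent the actual Austrian common data model.

```python
# Illustrative 'templates to cubes' sketch: one granular dataset (the 'cube')
# feeds several template-style views instead of separate form submissions.
# The attributes and figures are simplified assumptions, not the Austrian model.
import pandas as pd

cube = pd.DataFrame({
    "counterparty_sector": ["households", "households", "corporates", "corporates"],
    "country":             ["AT", "DE", "AT", "DE"],
    "instrument":          ["loan", "loan", "loan", "deposit"],
    "amount_eur":          [500_000, 200_000, 900_000, 300_000],
})

# Template-style view 1: exposures by counterparty sector.
by_sector = cube.pivot_table(index="counterparty_sector",
                             values="amount_eur", aggfunc="sum")

# Template-style view 2: exposures by country and instrument.
by_country_instrument = cube.pivot_table(index="country", columns="instrument",
                                         values="amount_eur", aggfunc="sum")

print(by_sector, by_country_instrument, sep="\n\n")
```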

To summarise, it is clear that data is and will remain a critical factor for central banks across the globe. But only by combining data with the right people, technology, processes – and also collaboration models – will central banks be able to leverage it for their missions and objectives.

Notes

1. Maciej Piechocki and Tim Dabringhausen, Reforming regulatory reporting – from templates to cubes (Bank for International Settlements, 2016).

2. BearingPoint Institute Issue 002, Seeing beyond the big (data) picture, pp. 3–4.

3. Ibid., p. 6.

4. Irving Fisher Committee, Central banks’ use of and interest in ‘big data’ (Bank for International Settlements, October 2015), p. 11.

5. Institute of International Finance, RegTech in financial services: technology solutions for compliance and reporting (March 2016), pp. 22–23.

6. BearingPoint Institute, Reforming regulatory reporting: are we headed toward real-time? (2015).

About the author

For the past 10 years, Maciej Piechocki, financial services partner at BearingPoint, has specialised in the regulatory value chain, designing and implementing digital solutions for regulatory clients, including central banks and supervisory authorities. He also supports supervised entities in implementing regulatory mandates, particularly regulatory reporting requirements. Piechocki’s main focus is on analytics, XBRL and oversight, and he has been particularly involved in CRD IV, Solvency II and IFRS. A member of the IFRS Taxonomy Consultative Group of the IASB and a board member of XBRL Germany, Piechocki has written several books, including XBRL for Interactive Data: Engineering the Information Value Chain. He is a frequent contributor to independent journals on the subject of regulatory reporting.

This article is part of the Central Banking focus report, Big data in central banks, published in association with BearingPoint. 
