The changing data landscape

Central Banking speaks to six policy-makers for their thoughts on the evolving data landscape and what central banks must do to adapt to a data-driven policy world.

The panel

  • Philip Abradu-Otoo, Director of Research, Bank of Ghana
  • Ramūnas Baravykas, Head of Digitalisation and Advanced Analytics, Bank of Lithuania
  • Howard Chang, Vice-president of Global Affairs, TCSA
  • Wanpracha Chaovalitwongse, Senior Director, Data Management and Analytics Department, Bank of Thailand
  • Juan José Ospina, Chief Officer for Monetary Policy and Economic Information, Central Bank of Colombia
  • Eyal Rozen, Director, Information and Statistics Department, Bank of Israel

The way central banks think about data is changing. As the world becomes more digital, there is an ever-growing pool of data from which central banks and regulators can draw information about the economy. But legacy systems are preventing this information from being drawn into decision-making processes. In many countries, firms are still using outdated reporting tools to provide central banks with critical information about their liquidity and solvency positions. Meanwhile, data around everyday economic activities – including retail transactions – is not being used to its full advantage.

However, some progress is being made. Artificial intelligence (AI) and machine learning are being deployed by central banks to scan alternative forms of data – social media and mainstream newspapers being two examples. Elsewhere, central banks are collaborating with big tech firms to source data about consumer behaviour. But there is still work to be done.

 

Central Banking: What are the biggest data challenges your institution faces when making economic policy decisions, and what are the current solutions?

Eyal Rozen, Bank of Israel

Eyal Rozen, Bank of Israel: Data that supports decision-making must be rapid, up to date and integrative. The main challenge is that the data the central bank collects comes from a wide variety of sources, at varying levels of quality and frequency. Some are aggregate and some are granular. Real economy data, for instance, arrives at a relatively low frequency – monthly or quarterly – and fails to give policy-makers a good picture of the economic situation in real time.

To address this challenge, the central bank’s Information and Statistics Department initiated a project aimed at broadening the sources of information and enriching data gathering on economic activity, with rapid data indicators – commercial information, internet information, and so forth – in addition to the information obtained from administrative sources.

Ramūnas Baravykas, Bank of Lithuania: It is the diversity of data sources that poses challenges to data integration. Data comes from many different sources – financial institutions, survey results and findings, macroeconomic statistics, and dashboards and scorecards created by researchers. It can be extremely difficult to combine and reconcile all of this data so it can be used for reporting, and to derive insight from it. Data validation is closely related to the data integration issue. Central banks receive similar pieces of data from different sources for different purposes and store them in different systems, which is why the data is not always consistent. The data governance process – as well as ensuring the records are accurate, usable and secure – is paramount and needs appropriate focus from the board.

Solving data governance challenges is complex and requires a combination of policy changes, organisational transformation and new technology. To this end, the Bank of Lithuania is currently reviewing its data governance processes, and is seeking to increase data management maturity, not only by shifting to new technology solutions, but also by reviewing its organisational structure and set of policies to avoid data duplication, fragmentation, incomplete data, and so on. 

However, the main reason behind the need for full buy-in from the board is assurance of smooth organisational change and commitment to creating a data-driven culture. To address the organisational resistance and improve decision-making capabilities, strong leaders should be appointed who would understand the potential of data, challenge the existing practice of data silos and know what actions should be taken to remain competitive in the growing data-driven economy.

Wanpracha Chaovalitwongse, Bank of Thailand: Obtaining accurate, granular data in a timely fashion is one of the biggest challenges. Economic data is often aggregate in nature and lacks the resolution – in time, location, entity and sector – needed to integrate data across different sources from different domains. Certain types of data also lack common standards for acquisition and processing.

Juan José Ospina, Central Bank of Colombia: The main challenge is how to collect data in a formal, systematic and timely manner that also allows the central bank to better track the economy in real time. Assessing it in real time allows a central bank to better understand the shocks and how they affect the economy. I think the Covid-19 crisis has highlighted the need to look into different data. When the pandemic broke out, the Central Bank of Colombia looked at its traditional indicators and also those of statistical agencies that publish real-time indicators of how the economy is doing – but we didn’t know everything.

For example, commercial banks have real-time information on the transactions people make using credit and debit cards, which allows them to track real-time spending trends within each sector and region. That is something we cannot do, but would like to. If a central bank does not have a good sense of how the economy is doing, then the policy recommendations made may not be appropriate. Thankfully, when we asked the commercial banks to share this information with us, they did, but there is nothing in the legislation that mandates them to do this; we could get to a stage when sharing data is not a priority. When receiving data from different sources, there are additional formatting challenges. Each institution publishes its data in a different way; there is no single standard, which makes it difficult to compare and transmit information quickly.
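As a rough illustration of the kind of tracking Ospina describes, anonymised card-transaction records can be aggregated into daily spending by sector and region with a few lines of analysis code. The sketch below uses invented column names and data, not any actual bank's reporting schema.

```python
import pandas as pd

# Hypothetical anonymised card-transaction records as a commercial bank
# might share them; column names and values are illustrative only.
transactions = pd.DataFrame({
    "date":   pd.to_datetime(["2021-03-01", "2021-03-01", "2021-03-02", "2021-03-02"]),
    "region": ["Bogotá", "Antioquia", "Bogotá", "Antioquia"],
    "sector": ["retail", "restaurants", "retail", "transport"],
    "amount": [120_000, 45_000, 98_000, 30_000],  # Colombian pesos
})

# Daily spending by region and sector: the real-time view of demand
# that monthly aggregate statistics cannot provide.
daily_spend = (
    transactions
    .groupby([pd.Grouper(key="date", freq="D"), "region", "sector"])["amount"]
    .sum()
    .unstack("sector", fill_value=0)
)
print(daily_spend)
```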

Philip Abradu-Otoo, Bank of Ghana: Data fragmentation is one of the biggest challenges for us, along with merging the various data types received by the central bank into a large, high-quality and reliable database that is easily accessible on various platforms to facilitate research and decision-making. Other challenges concern the ability to rapidly access, effectively manage, and process and analyse growing volumes of economic, financial, supervisory and statistical data in a timely manner. It is also worth noting the limitations in human and IT resources needed for ideal real-time access and efficient processing of data.

Occasionally, technical hitches during data transmission are challenging. Problems with file formats sent to the central bank also remain an issue. Where providers of this data rely on individual staff rather than institutional processes, risks to data transmission arise: when these individuals become unavailable for any reason, data submission gaps emerge and can delay policy decision-making.

Howard Chang, TCSA

Howard Chang, TCSA: There are shared data challenges and common predicaments for the entire central bank community, including limited data sources, timeliness, inefficient transmission, varied formats, data fragmentation, granularity, colossal volume, accessibility, confidentiality and regulations. 

Numerous novel approaches have been deployed to address these issues. They include the introduction of additional data sources, simplifying reporting procedures with the help of regulatory technology – known as regtech – solutions, and building unified data lakes. 

However, despite these cutting-edge advancements, policy-makers still find it difficult to garner a clear understanding of the state of the economy – not to mention the more granular forces at play in the national economy. The tools and methods mentioned above are indeed admirable, yet fall short of an effective solution. 

TCSA has developed an innovative methodology to resolve these data challenges once and for all by tracking real-time transactions of the entire society via nationwide payment terminals. Akin to “the blood in the bloodstream for an economy”, this granular data will yield a bird’s-eye view of national economic dynamics with the highest resolution. It is worth noting that this precise picture of the macroeconomic landscape will not be built on unnecessarily complex IT or data technologies, but on a very simple concept: data standardisation at a national level.

 

Central Banking: Data plays a key role in informing central banks’ decisions. How does your institution collect data? What kind of data does it collect? And how does it organise these datasets with other sources of financial data? 

Eyal Rozen: The Information and Statistics Department collects aggregate and granular data, transactions and positions from a variety of information sources: capital market data from the stock exchange, banking data, data on the assets of institutional investors and mutual funds, sectoral activity in the foreign exchange and derivatives markets, corporate financial statements, government debt, data on real economy activity in Israel, household credit data and more. 

Most of the data is gathered from periodic transmissions by the reporting entities and organisations, and is recorded by the IT department in the bank’s systems. The data is put through quality control validations and used to calculate aggregates. The department processes the data and produces the main economic data products: the balance of payments, the public’s asset portfolio, the economy’s debt, international investment position, sectoral exposures to FX, and so forth. 

To organise the information in an integrative fashion, the Information and Statistics Department has established a single time-series database for all types of data to make it accessible in one location, with uniform metadata according to the international Statistical Data and Metadata eXchange (SDMX) standard. This project makes it possible to better organise the data, integrate it and reduce the bank’s existing data silos. The data is accessed through an internal portal, and some of it is also published via the Bank of Israel’s website. There are databases of personal granular data as well. In addition to producing aggregates from these databases, there are dashboards to query the data, access is provided through analytical tools such as R or Python, and the bank cross-references data from its various databases, all for research purposes.
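By way of illustration, a time-series database exposed through an SDMX-compliant web service can be queried programmatically. The sketch below assumes a hypothetical endpoint and dataflow key; the SDMX-CSV content type used for content negotiation is part of the SDMX 2.1 standard.

```python
from io import StringIO

import pandas as pd
import requests

# Hypothetical SDMX REST endpoint; real deployments expose dataflows at
# URLs of the form https://<host>/data/<dataflow>/<key>.
BASE_URL = "https://sdmx.example-centralbank.org/data"

def fetch_series(dataflow: str, key: str) -> pd.DataFrame:
    """Fetch one SDMX dataflow slice as CSV and return it as a DataFrame."""
    resp = requests.get(
        f"{BASE_URL}/{dataflow}/{key}",
        # SDMX 2.1 web services can return CSV via content negotiation.
        headers={"Accept": "application/vnd.sdmx.data+csv;version=1.0.0"},
        timeout=30,
    )
    resp.raise_for_status()
    return pd.read_csv(StringIO(resp.text))

# e.g. monthly economy-wide debt aggregates (illustrative flow and key).
df = fetch_series("DEBT", "M.TOTAL.ALL")
```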

Ramūnas Baravykas, Bank of Lithuania

Ramūnas Baravykas: The Bank of Lithuania collects most of its information via the usual means: surveys, macroeconomic indicators, aggregate reports provided by financial institutions and data from state registers. Traditional methods and common standards are used: XML and XBRL forms collect template-based aggregated data from financial institutions; system-to-system file exchange solutions gather structured information from registers; and comma-separated values or other formats handle semi-structured or unstructured information collected from surveys or other ad hoc exercises.
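For illustration, template-based XML submissions of this kind reduce to straightforward parsing on the central bank's side. The sketch below uses an invented report structure and item codes, not an actual Bank of Lithuania template or XBRL taxonomy.

```python
import xml.etree.ElementTree as ET

# A toy template-based XML report of the kind a supervised institution
# might submit; tags, attributes and codes are invented for illustration.
REPORT = """
<report institution="LT-BANK-001" period="2021-Q1">
  <item code="TOTAL_ASSETS" value="1250000000"/>
  <item code="TIER1_CAPITAL" value="145000000"/>
</report>
"""

root = ET.fromstring(REPORT)
records = [
    {
        "institution": root.get("institution"),
        "period": root.get("period"),
        "code": item.get("code"),
        "value": float(item.get("value")),
    }
    for item in root.findall("item")
]
print(records)  # flat records ready to load into a central database
```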

However, for research purposes, the traditional data sources are being increasingly enriched by additional collection of social media data via direct web scraping or application programming interface (API) technology. 

Currently, the Bank of Lithuania is fundamentally reviewing its data management practices and seeks to outline a long-term perspective for increasing the maturity of data management. To optimise its data management, the bank is reviewing its data management models, tools for collecting reporting data from financial institutions, and data integration, storage and analysis technologies. The central bank also aims to be an innovative partner in the financial market and has therefore launched a pilot project to develop an intelligent regtech solution to simplify reporting procedures and reduce the administrative burden and reporting costs for financial institutions.

The solution prototype uses API technology to pull structured micro-level data from financial institutions and automatically transfer it into the required reports, including the ability to access data in a specified format or manner. As a result, all financial reports could be generated automatically, avoiding differing interpretations of legislation, delays and inaccuracies.
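A minimal sketch of the idea follows, assuming invented record fields and report cells rather than any actual regulatory template: micro-level loan data pulled from an institution's API is mapped automatically onto report aggregates.

```python
from dataclasses import dataclass

# Hypothetical micro-level records pulled from a financial institution's
# API; in the prototype Baravykas describes, the central bank would map
# such records onto regulatory report cells automatically.
@dataclass
class LoanRecord:
    borrower_sector: str   # e.g. "household" or "non-financial corporation"
    outstanding: float     # outstanding amount, EUR
    non_performing: bool

def build_report(loans: list[LoanRecord]) -> dict[str, float]:
    """Aggregate micro data into illustrative report cells (made-up codes)."""
    report = {"TOTAL_LOANS": 0.0, "NPL_RATIO": 0.0}
    npl = 0.0
    for loan in loans:
        report["TOTAL_LOANS"] += loan.outstanding
        if loan.non_performing:
            npl += loan.outstanding
    if report["TOTAL_LOANS"]:
        report["NPL_RATIO"] = npl / report["TOTAL_LOANS"]
    return report

loans = [
    LoanRecord("household", 250_000, False),
    LoanRecord("non-financial corporation", 1_000_000, True),
]
print(build_report(loans))
```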

It will also allow the central bank not only to generate timely and accurate reports, but also to conduct in-depth analysis that was previously only possible through on-site inspections, as well as to gain insights into potential market risks and share them with financial market participants. This would not only improve the efficiency of supervisory practices, but would also have a positive impact on the stability and soundness of the financial system.

Wanpracha Chaovalitwongse: The Bank of Thailand currently collects data from financial institutions, government entities, utilities companies, the private sector and third-party data providers, which include social media, internet and telecoms data. The central bank organises this data into two main groups: financial and economic. Financial data includes all information collected from financial institutions, whereas economic data includes all data from government entities, organisations with which the central bank has memorandums of understanding, and alternative data that the central bank purchases.

Juan José Ospina: We collect data in very different ways, including directly from analysts, businesses and financial institutions. But different datasets are likely to be collected by different departments of the central bank. For example, the financial stability department might conduct a survey, while economists looking at inflation might ask for different data. Data is also collected through regulatory reporting. In Colombia, we require all FX transactions to be reported to us so we know how many US dollars are going in and out of the country; it is also helpful to know what types of transactions are being made in foreign currencies. And, finally, we collect data through co-ordination with other government agencies. In these instances, we tend to share information to a schedule – daily or monthly.

It really depends on what data agreement we have in place. For example, the Consumer Confidence Index is created by a private think-tank, but we have an agreement with the think-tank whereby we basically pay them to produce the survey. The majority of the data we collect is microdata from financial institutions, aggregated by sector. We also collect data through surveys, which are similar to the US Federal Reserve’s Beige Book. And we collect data from other companies such as Datastream and Bloomberg, which also aggregate financial statistics. Aggregating them is where it gets complicated. We have several systems that capture all of this information, and they do not always talk to one another – so we have a system that communicates with them and extracts information. 

We also have systems that organise our datasets according to the needs of the end-user. For example, a system might collect an entire set of data, but an analyst might only want data from a certain institution or over a specific time period. There are also certain permissions for accessing certain datasets. From this data we can produce economic analysis as well as our own statistics and indicators.

Philip Abradu-Otoo: The Bank of Ghana has well-established structures for collecting primary and secondary data. These are executed through surveys, submission of prudential reports by banks and other international institutions, and other secondary data sources. Regarding surveys, questionnaires are sent to respondents via email. This has at times proven to be an ineffective data-gathering tool, as emailed responses are often incomplete, meaning staff have often needed to visit respondents in person to ensure full completion of the questionnaires before processing.

We also have a straight-through process via which the central bank receives data. For example, banking supervision reports on the deposit money banks and other supervised non-bank financial institutions would be submitted in this way. Organisation of fiscal data is very important given the role of government in the economy, and the unique role of the central bank as the banker to the government. The behaviour of government is measured through its revenue mobilisation efforts and spending behaviour. As a result, gathering and organising information on governmental operations is important to inform the entire policy formulation process on the monetary and fiscal policy sides.

Howard Chang: The common approach of data collection by central banks today still lies in the gathering of traditional data sources through manual surveys across various entities, and quarterly or annual reports from financial institutions. Some have begun digging into secondary data sources across governmental departments or purchasing additional internet-based data from the private sector. 

In the words of Ousmène Mandeng, a visiting fellow at the London School of Economics: “It surprises me that, with so much data, central banks are not using the data at their doorsteps.” In fact, central banks already have direct and unfettered access to the most logical and ubiquitous sources of information regarding all human economic activities – monetary transactional data. When aggregated, this primary data provides a real-time snapshot of the entire economy.

Central banks are already halfway there in terms of solving the issue of inadequate available data sources. Therefore, the fastest, easiest and cheapest upgrade of national-level data collection and data processing has to start from, and be based on, existing central bank data systems. It is the most viable path and the lowest-hanging fruit. With a simple algorithmic software add-on grafted onto all existing payment terminals, this data can be captured into a single standardised metadata package that encompasses all dimensions of socioeconomic activities. This means pieces of data will no longer be scattered into different silos, removing the risk of fragmentation and the trouble of data cleansing further down the road.
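The source does not specify a schema for this "standardised metadata package", but one possible shape is sketched below; every field name and value is an illustrative assumption, not TCSA's actual design.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# A speculative shape for the standardised record every payment terminal
# would emit under the approach Chang describes; fields are assumptions.
@dataclass(frozen=True)
class TransactionMeta:
    timestamp: str      # ISO 8601, UTC
    region_code: str    # administrative region of the terminal
    sector_code: str    # merchant's industry classification
    amount: float       # transaction value in local currency
    channel: str        # e.g. "pos", "online", "transfer"

meta = TransactionMeta(
    timestamp=datetime.now(timezone.utc).isoformat(),
    region_code="TH-10",
    sector_code="G47",   # ISIC code for retail trade
    amount=1450.0,
    channel="pos",
)
# Because every record shares this structure, national aggregation
# reduces to grouping and summing, with no downstream data cleansing.
print(asdict(meta))
```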

 

Central Banking: How should central banks approach data fragmentation?

Eyal Rozen: This fragmented data is kept in different types of databases: of aggregate data, granular data, identified and anonymous data, including big data, and so on. The bank also conducts data scraping from external public databases and imports data from commercial suppliers. 

To deal with the issues of data fragmentation, the technological solution the bank has implemented in the past was building data warehouses – which are still operative at the bank (such as for capital market data). Currently, the bank intends to expand the assimilation of cloud technology, which has already been applied in a number of databases.

The bank plans on installing a dual cloud environment – one for internal use on a private cloud, and one for combined internal and external use on a public cloud. The cloud environment is necessary not only because of the volume and form of the data, but also because of the innovative tools IT vendors have developed in recent years, which exist only in a cloud environment. Migrating most of the bank’s databases to a private cloud environment at the end of this evolution will provide good technological support for efforts to integrate information on the business side, and will enable strong querying and analysis capabilities alongside privacy protection and information security. In my view, this is the recommended approach to solving issues of data fragmentation at central banks.

Ramūnas Baravykas: A combination of measures should be introduced to address the issue of data fragmentation. First, the organisation must be well aware of what data it holds, as well as appoint a chief data officer who would be responsible for institution-wide data governance and utilisation of information as an asset. 

Second, a comprehensive inventory of the information should be conducted to identify all the places where data is held, carry out data classification and determine policies on storage management, data retention and information protection.

Third, it is important to ensure adequate technological solutions allow those responsible to effectively catalogue the organisation’s data and manage access rights, thus enabling implementation of a hub-and-spoke architecture.

Fourth, it is essential to ensure everyone involved in the data governance process at all organisational levels understands their roles and responsibilities. Therefore, a training programme should be developed to make users aware of the policies, but also explain the logic behind them, so they can act responsibly when faced with a new situation not covered by an existing policy.
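A minimal sketch of a data-catalogue entry supporting the inventory, classification and access-rights steps above might look as follows; the field names, roles and values are illustrative assumptions, not the Bank of Lithuania's schema.

```python
from dataclasses import dataclass, field

# Illustrative catalogue entry: one row per dataset in the inventory.
@dataclass
class CatalogueEntry:
    dataset: str
    owner: str                     # accountable business unit
    classification: str            # e.g. "public", "internal", "confidential"
    retention_years: int
    systems: list[str] = field(default_factory=list)       # where copies live
    allowed_roles: list[str] = field(default_factory=list)

catalogue = [
    CatalogueEntry("household_credit", "Statistics", "confidential", 10,
                   systems=["warehouse", "research-lab"],
                   allowed_roles=["statistician", "researcher"]),
]

def can_access(role: str, entry: CatalogueEntry) -> bool:
    """Access check: public data is open, otherwise the role must be listed."""
    return entry.classification == "public" or role in entry.allowed_roles

print(can_access("researcher", catalogue[0]))  # True
```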

Wanpracha Chaovalitwongse, Bank of Thailand

Wanpracha Chaovalitwongse: The government should develop a national data strategy and let the central bank be the lead for overseeing all financial and economic data. This will enable the development of common data standards, interoperability of data and seamless integration.

Juan José Ospina: At the moment, if you ask banks for certain datasets, they give them to you in whatever format is easiest for them. Most of these firms use different platforms and software to aggregate and collect data; it is never one size fits all. In some cases, they do not even organise it. So pulling information from a number of institutions and then comparing it is a major challenge.

In this instance, we would look to standardise data at the source. This gives you control of what is then handed over. But there is also the issue of data repetition – where numerous institutions ask for the same data, sometimes with different departments asking for the same set of data. You end up asking consumers or firms for the same information over and over again, and this is not sustainable. For me, solving fragmentation has to start with solving issues within the central bank first.

I would ask institutions to send as much data as they can, and then draw it together into a single central database where it can be standardised and accessed by different departments. There will be challenges that remain – a centralised system is going to need to be interoperable with those of the firms reporting the data, and vice versa. Some departments may also need certain datasets in different formats to other departments, but it would be a start. You would at least have the primary data and can then apply any kind of aggregation or computation needed.
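As a rough sketch of standardising at ingestion, per-source mappings can translate each institution's layout into one canonical schema before the data enters the central database. The column names and figures below are invented for illustration.

```python
import pandas as pd

# Two banks reporting the same concept under different layouts, as
# Ospina describes; the column names are invented for illustration.
bank_a = pd.DataFrame({"fecha": ["2021-03-01"], "saldo_cop": [5_000_000]})
bank_b = pd.DataFrame({"report_date": ["01/03/2021"], "balance": [3_200_000]})

def standardise(df: pd.DataFrame, colmap: dict[str, str], dayfirst: bool) -> pd.DataFrame:
    """Map one source's layout onto the canonical (date, balance) schema."""
    out = df.rename(columns=colmap)
    out["date"] = pd.to_datetime(out["date"], dayfirst=dayfirst)
    return out[["date", "balance"]]

# Standardise once at ingestion; every department then queries the same table.
central = pd.concat([
    standardise(bank_a, {"fecha": "date", "saldo_cop": "balance"}, dayfirst=False),
    standardise(bank_b, {"report_date": "date"}, dayfirst=True),
], ignore_index=True)
print(central)
```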

Philip Abradu-Otoo: To keep up with the evolving data demand, the Bank of Ghana has developed a shared internal platform to enable different departments to access data resources within the institution. 

The statistics office within the research department collates, processes and manages data from both primary and secondary sources, and then sends the information to another unit for consistency checking. This unit receives data from the various sources, checks it for consistency with other key macroeconomic variables and initiates an approval process within the institution before the data is consumed by policy-makers and subsequently published and disseminated via the bank’s main communication channels.

Within the bank, there are also clear internal processes that exist to ensure data interrogation before such data is presented to management for policy discourse and decision-making. Such processes of intense data interrogation (structured and unstructured) unearth inconsistencies in the data, allowing fragmented data to be coherently repaired before being used for analytical and policy work.

Howard Chang: To address data fragmentation, most central banks choose to extract data from various silos into a unified data lake, which ultimately forms what has become known as the hub-and-spoke architecture. It is a powerful approach, but an alternative and more efficient path can be found by fundamentally reconstructing current data infrastructure. 

Rather than following the conventional logic – attempting, at huge cost, to bridge data from various silos into one centralised data lake after the silos have formed – our new proposition capitalises on the fact that all pieces of monetary transactional data can be synchronised and organised into a standardised metastructure at the point of collection. This allows for real-time data integration at large scale, while operating at one-hundredth the cost of traditional data infrastructure.

Furthermore, under a homogeneous format, these metadata packages are ready to be accessed and computed by various departments for various usages. They act as the basic building block for all data architecture, laying the foundation for an extremely cost-effective computing mechanism for data on the national scale. They can be added, subtracted and calculated quickly and cheaply, allowing policy-makers to know at a glance how much water is consumed in each province, how many roads are needed in each city or how much of what resource is consumed for which purposes. 

 

Central Banking: As the world becomes more digital, a larger volume of consumer data will be collected by firms and, in some instances, by regulators. What are the data protection implications of using consumer data to better inform central bank policy-making?

Eyal Rozen: The global trend in data analytics is promoting the gathering of granular microdata on firms and households. As a result, issues of privacy protection and protection of commercial secrets are amplified. The Bank of Israel is dealing with data protection issues through integrated, statistical, physical and administrative means, including anonymisation, output checking, physical controls, authorisations, separate and secured IT infrastructure environments, work procedures, and so on. These protective means sometimes reduce researchers’ ability to cross-reference data, analyse it for decision-making purposes and make necessary segmentations for integrative data products. There is ongoing tension between the need to protect privacy and to conduct research to support policy decisions. The anonymisation methodology is therefore set specifically for each database in view of the business requirements and of the exposure scenarios.
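To illustrate two of the protective means Rozen lists – anonymisation and output checking – the sketch below pseudonymises identifiers with a salted hash and suppresses aggregate cells built from too few records. The records, salt and threshold are invented for illustration, not the Bank of Israel's methodology.

```python
import hashlib

import pandas as pd

# Illustrative household credit records; all values are made up.
records = pd.DataFrame({
    "national_id": ["123", "456", "789", "012"],
    "city": ["Haifa", "Haifa", "Haifa", "Eilat"],
    "credit": [40_000, 55_000, 30_000, 70_000],
})

# Pseudonymise direct identifiers with a salted hash so rows can still be
# cross-referenced across databases without exposing the identity itself.
SALT = "rotate-this-secret"
records["pid"] = records["national_id"].apply(
    lambda s: hashlib.sha256((SALT + s).encode()).hexdigest()[:12]
)
records = records.drop(columns="national_id")

# Output checking: suppress any aggregate cell built from too few
# underlying records before it leaves the secure environment.
MIN_CELL_COUNT = 3
cells = records.groupby("city")["credit"].agg(["count", "mean"])
safe_output = cells[cells["count"] >= MIN_CELL_COUNT].drop(columns="count")
print(safe_output)  # Eilat (1 record) is suppressed; Haifa (3) is released
```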

Wanpracha Chaovalitwongse: The Personal Data Protection Act allows the Bank of Thailand access to most financial and economic data as its mandates also include maintaining the nation’s financial stability for the public good. The ability to holistically integrate citizen financial data – from debts to equities to payments – will allow the central bank and the government to more precisely identify vulnerable groups of citizens and create targeted policies.

Juan José Ospina, Central Bank of Colombia

Juan José Ospina: The main challenge is actually ensuring each institution you are collecting data from is compliant with data privacy regulation. Sometimes consumers provide data on the understanding that it is not going to be shared, even if the institution asking for the data has its own policy in place or a good reputation.

Once you have the microdata, there is also the added issue of storing it in a secure manner. The volume of data financial institutions gather has never been greater, and so new technology has been developed to aggregate and store this data. With new technology comes new risk, and preventing a data breach is a priority for central banks. One solution for the private sector is cloud technology. It is harder for central banks to use these systems because they are often owned by third parties, which introduces an additional layer of risk; some central banks have developed their own cloud technology, but this is not common. So getting the information is hard – storing it is also difficult. One could argue that, while the digital age has revolutionised financial services, it has also made managing data more challenging.

Ramūnas Baravykas: Such challenges can be overcome. In general, it used to be easier for central banks to conduct investigations using consumer data. Currently, researchers use anonymised data, which essentially allows them to obtain the same results, yet it requires a greater focus on the preparatory phase by processing data before handing it over to researchers. 

Philip Abradu-Otoo: The central bank is increasingly developing requirements to ensure data protection. I think your question is basically trying to assess how we strike a fine balance between our regulatory functions – which require gathering larger volumes of data – and respecting data protection and data rights.

Already, there are rules and regulations that protect the data we collect from depositors, lenders, borrowers and other counterparties, and that govern how this data is used for policy. As regulators, we have all signed up to some form of confidentiality agreement with our partner bodies; we follow confidentiality rules in line with International Organization for Standardization requirements; there are restrictions on the sensitivity of data staff can access; and, in instances where banks have to provide consumer data to parent companies, they are required to seek clearance from the Bank of Ghana and data protection agencies.

These rules and regulations have evolved over time and, as the economy undergoes a digitisation phase, they are changing with the exigencies of the time. Looking ahead, I expect the rules and regulations governing consumer data and other forms of big data to keep evolving, with constant fine-tuning to help better inform policy. Put together, central banks must remain watchful of these evolving trends while building the necessary mechanisms to protect the data received from consumers.

Howard Chang: The heated debate on data protection could be examined from several angles. First, data sharing and privacy are not in binary opposition because privacy can be ensured through technology: for example, anonymisation, output checking, physical controls, authorisation and cloud technology. User data can be fully collected, but the rights to access and use it should be clearly defined within systemic sets of policies and regulations. 

Second, the tension and balance between technological breakthroughs and individual freedom can be revisited from a different perspective where data-empowered informed decisions actually bring citizens the truest sense of freedom. For example, in Sweden, citizens opt to use electronic means over traditional payment methods. As a result, the authorities have collected massive amounts of data for analysis, policy-making and research purposes. 

Third, there is an element of data supervision. To prevent the potential abuse of power and violations of data confidentiality, it enables government entities to share with the public the rationale and logic underlying their decision-making.

The status quo is that private data has never been genuinely owned by citizens, but instead largely controlled by tech giants that have been monopolising and monetising consumer data. Yet whether market-oriented business behaviours can ensure data protection begs a number of questions. We believe establishing a centralised data powerhouse is the safest way to return that data ownership to individuals and prevent data breaches. 

 

Central Banking: How do you see the role of central banks evolving in a data-driven policy-making space?

Eyal Rozen: The central bank is active in a number of areas in evaluating developments in the digital world. Examples include faster payments, central bank digital currency, open banking and APIs, and financial stability and banking supervision. The Information and Statistics Department has established a unit specialising in data science. The unit promotes and implements new machine learning methods in models that support decision-making. For instance, the unit is currently working on models for nowcasting price index components (forecasting fruit and vegetable prices on the basis of the retail database, and web-scraping online price data for clothing and electronics), sentiment indexes, a nowcasting model based on textual analysis, a nowcasting model similar to GDPNow, network analysis of various databases, and more. In my view, central banks need to establish leading internal capabilities in the field of data science, including a teamwork convention that integrates statistical methodology know-how, economic content expertise and advanced data engineering infrastructure.
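To give a flavour of the web-scraped price work Rozen mentions, a simple elementary price index can be computed from scraped daily prices. The sketch below uses invented data and an unweighted Jevons (geometric mean) index, a common choice for elementary aggregates built from web-scraped prices; it is not the Bank of Israel's model.

```python
import numpy as np
import pandas as pd

# Hypothetical daily prices scraped from retailers' websites; in
# practice these would come from a scraping pipeline, not a literal.
scraped = pd.DataFrame({
    "date":  pd.to_datetime(["2021-03-01"] * 3 + ["2021-03-02"] * 3),
    "item":  ["shirt", "laptop", "tv"] * 2,
    "price": [89.0, 3200.0, 1500.0, 92.0, 3150.0, 1500.0],
})

# Base-day prices, indexed by item, against which relatives are computed.
base = scraped[scraped["date"] == scraped["date"].min()].set_index("item")["price"]

def jevons(day: pd.DataFrame) -> float:
    """Unweighted geometric mean of price relatives, base day = 100."""
    rel = day.set_index("item")["price"] / base
    return float(np.exp(np.log(rel).mean()) * 100)

index = scraped.groupby("date").apply(jevons)
print(index)  # a daily index tracking online price movements
```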

Ramūnas Baravykas: The Bank of Lithuania operates according to data- and research-based decisions, thus data availability, reliability and efficiency play an important role in its monetary or macroprudential policy, supervision of financial market participants, economic analysis, forecasting, financial stability risk monitoring and other functions. The Bank of Lithuania also compiles and publishes comprehensive statistics that meet international standards and are comparable between European Union member states.

Financial and economic crises have led central bankers to become true leaders, paving the way for debate based on insights drawn from the data. The competencies central banks have developed – quickly collecting and analysing the necessary information and delivering it in a timely manner to decision-makers – have made them key players in policy-making. I believe this will become especially important with the digital transformation of finance and ever-growing data volumes.

Wanpracha Chaovalitwongse: The digital economy will play an increasingly predominant role in assessing the nation’s economic growth and wellbeing. Thus, being able to measure and monitor the digital economy will allow the central bank to nowcast the current economic condition and perhaps provide more preventive measures to maintain financial stability and promote economic growth.

Juan José Ospina: At the moment, there is no need to centralise data collection. Every company has its own agenda when it comes to data, and they should be free to collect, aggregate and store it whichever way they see fit. It is not a central bank’s role to govern these things. 

But we also have to move with the times. There are now more sources of data available to central banks – more high-frequency indicators that can give us a real-time snapshot of how the economy is doing. At the moment, many central banks operate in a very ‘constrained’ environment, which means, realistically, all of the data has to be analysed ‘yesterday’. I think the ability to use new real-time data is out there – we know it is because some firms are already using it. I do see a future where central banks are trying to grab more of that information, and not only to make more timely policy decisions. During the Covid-19 pandemic, central banks have been able to respond quickly and appropriately, but it would be prudent to regularly monitor certain metrics to better understand the dynamics of the economy. It is important to understand how shocks impact different regions, income groups, social groups, and more. Currently, doing this type of research takes a lot of time, but for policy-making purposes that time may not be available. We need to ensure we have the data so we can make decisions quicker.

Philip Abradu-Otoo, Bank of Ghana

Philip Abradu-Otoo: As the digital space evolves, the Bank of Ghana is working to provide timely economic and financial data that can be accessed with ease by all stakeholders to facilitate enhanced and quick decision-making processes. 

Banks now submit data online, which has helped greatly during the pandemic; ultimately, the central bank’s supervisory role has been enhanced through this approach to data submission. The Bank of Ghana is also developing requirements for financial institutions to improve data management and to reduce the risks inherent in technological innovation.

Howard Chang: TCSA agrees central bankers should be the leaders of a data-empowered future. That is why we built an open-access data platform, which can be built upon central banks’ existing cloud architecture for data-sharing and maximised accessibility. Granular data captured through the lens of monetary transactions will project the clearest view of the economy with supply and demand, inputs and outputs of each industry for decision-making. Furthermore, the highly compatible data can power machine-learning forecasting models and numerous other toolkits, such as AI and natural language processing, for in-depth analysis. The management of data in this way takes the mystery out of monetary policy-making and enables central banks to make precise adjustments based on accurate information. 

Furthermore, the centralisation of national economic data will lay the solid groundwork for a hard value anchor for national currencies by realigning currency prices with real economic values. Currencies can be pegged to each country’s actual economic strength – as quantified by real-time data. Inflation could be better managed; internationally, the credibility and status of a national currency can be improved for global trade.

Finally, the platform also makes data easily accessible for the private sector to spur further innovation. In this sense, the platform is similar to the dual cloud environment piloted by leading central banks with dashboards to query data. Data resources and services could then be transformed into a steady source of fiscal revenues or income for central banks, for example, in the form of data tax.

 

This feature forms part of the Central Banking focus report, Data-driven policy-making for central banks 2021
