Why do you believe that the longer-term natural interest rate is not a constant as is assumed in many economic models?
Some of my colleagues at the Federal Reserve and I started research on this back in the 1990s. The context of the work was the tech boom, the invention of the internet and more rapid productivity growth. This was relevant when one was thinking about the use of the neutral interest rate for a Taylor rule, or some kind of notion of the normal setting for monetary policy. The question was: “If the economy’s growing faster and productivity growth is faster in the ‘new economy’, would that mean the neutral interest rate should be higher?”
Economic theory suggested that the neutral rate depends on factors like demographics, productivity growth and other things. That was why our interest – not only mine, but others’ as well – was high. The Taylor rule itself, which came out in the early 1990s with John Taylor’s famous paper, gave a very prominent role to the notion of an equilibrium, or neutral, real interest rate as the anchor for interest rates. Given the value of thinking about Taylor or other policy rules, it garnered attention.
Policymakers and senior economists asked: “Well, what is that number? Is it 2%? Is it 2.5%? Is it something else? And does it change over time?” Thomas Laubach and I wrote a paper on that, as did others over the years. Interest rekindled following the financial crisis, when the US and other economies were growing quite slowly despite historically low interest rates. By chance, Laubach and I had been updating our model every quarter – the framework we developed around 2001 was sitting on the shelf, running on computers – so we had the ability to ask: “What are our estimates looking like?”
That was when I realised these estimates had shifted dramatically again, but in the opposite direction – down, instead of up, as in the late 1990s and early 2000s. It was around 2011 that I noticed this downshift in the natural rate of interest had occurred in our estimates, and had stayed at these low levels, as opposed to moving back to more normal levels. Since then, there’s been a lot of research around this issue. What’s interesting is that economists have highlighted a number of factors that were probably driving down the natural rate, including demographic factors and productivity. It’s partly a global savings glut story – there are various factors. But what economists are consistently finding is evidence that the natural or equilibrium interest rate is much lower than we believed, say, in the 1990s, or even before the crisis.
Using your model, what is the main factor driving it down?
We wrote down a very simple model. We allowed for two factors that would drive the natural rate of interest, R*, up and down. One was the trend growth rate of the economy, which we were also estimating – kind of the underlying trend of GDP. The other was what we called ‘other stuff’. At the time, we didn’t have a really clear view of which variables we should or should not be including, so we adopted a flexible approach that allowed for other things to be happening that we weren’t specifically modelling. We found in the US that roughly half of the decline in the natural rate of interest is because of a slowdown in trend growth – productivity growth slowing, partly slower labour force growth, things like that. The other half is explained by other factors that affect the supply of and demand for savings. That could be a global savings glut or a greater risk premium in the markets after the financial crisis. There are also different demographic factors that economists have highlighted. We are all getting older, especially in the advanced economies. It’s not just that we have lower birth rates, it’s that we’re living longer, which tends to cause people to save more, preparing for a longer period of retirement. We didn’t include that in our model originally. But roughly half is slower growth, and half is other things to do with saving and investment beyond the slower growth.
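The two-factor decomposition described here can be summarised in a single equation – a stylised sketch of the published Laubach–Williams specification, not the full state-space model used for estimation:

```latex
r^{*}_{t} = c \, g_{t} + z_{t}
```

Here, $r^{*}_{t}$ is the natural rate, $g_{t}$ is estimated trend growth, $z_{t}$ is the catch-all ‘other stuff’ term capturing shifts in the supply of and demand for savings, and $c$ is an estimated coefficient. The finding that roughly half the decline in the natural rate reflects slower trend growth corresponds to movements in $c\,g_{t}$, with the remainder absorbed by $z_{t}$.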
What about concerns some have expressed that current statistical techniques fail to adequately measure productivity, especially productivity linked to the use of empowering ‘free’ technology? Some Silicon Valley firms put the understatement at around half a percentage point.
If you are going to tell me that the true underlying productivity growth is understated by a quarter to half a percentage point, I can’t really argue with that. That’s probably a reasonable perspective. But I would emphasise that that was probably true 10, 20, 30 years ago as well. My colleague John Fernald and his co-authors did a very careful study where they looked at all these explanations about productivity growth being mismeasured. Fascinatingly, they found the measurement issues are real – there are a lot of things not captured by the official statistics – but these measurement issues were as big, or probably even bigger, in the past. So you might say productivity growth has always been faster than officially reported, but it’s hard to explain the slowdown, the sharp decline.
Some people have brought up some very specific issues, like the fact companies are now moving more of their value-added production to other domiciles for tax reasons. There have been papers that looked at that. First of all, that’s true, and that affects the counting of productivity. But that’s also been going on for quite some time. At most, you’re talking about a tenth or two one way or the other, in terms of a productivity slowdown. So there are arguments for a tenth here, a tenth there, but they don’t really explain how productivity growth has gone from 2.5–3.0% to 1.0–1.5% – it has fallen by half.
What about tech innovation?
The bigger story is that a lot of the innovation in terms of new ideas and products is often consumer-focused. That’s in contrast to the 1990s, early 2000s, when a lot of the innovation was really business either buying or taking advantage of more computing power. So companies like Intel, HP, Dell, Microsoft – all the ‘old tech’ – were coming up with products that allowed businesses computationally or otherwise to do things with computers that they couldn’t do before.
Companies took advantage of this with better inventory control, inventory management, much more efficient operations in a lot of areas. So businesses invested heavily. They took those investments and they turned them into higher profits or lower costs, and that’s why it showed up in the productivity statistics as strong as it did – there was greater output per hour in the business sector.
Today, first of all, the US doesn’t make many computers. A lot of these physical computers are made abroad, so that affects some of this. But the other factor is that a lot of the newer apps and so on are oriented to the consumer, which may not have a knock-on effect of faster productivity growth in GDP. Probably it’s showing up in, maybe, the better use of your leisure time. People take advantage of apps in finding what restaurant to go to and getting a car to drive them from point A to point B. But that’s probably an improvement in our leisure time, and probably doesn’t show up in GDP – and it never was intended to. So one of the interesting areas of research today is how much of that is not in GDP, but actually in what economists call ‘household production’.
Do you believe the Fed – even with all its very capable economists – can pick up on these different elements of what is a complex dynamic?
We do have the economists, not just my colleague here John Fernald, but also colleagues around the system who have focused on productivity for at least the 23 years I have been at the Fed and even longer. Plus, we’re in constant contact with the academic community and with the economists and others in the tech sector. The good news is that some of these companies have really amazing economists who think about these issues in a very careful way, so we definitely have those discussions and debates about these issues. But there is a bit of a puzzle out there.
There are also B2B software companies, such as Salesforce (whose headquarters you will have seen being built on the way, as it will be San Francisco’s tallest building), Oracle and others, which are creating customer relationship management and other software products designed to make business more efficient and more productive. They have obviously been very successful at selling their products, but the big advances in software in the business community – cloud computing, software as a service and so on – should be showing up in the productivity numbers if it really were the case that they led to improvements in productivity. We know that with new technological innovations, it often takes years – or maybe even decades – to fully take advantage of new systems and new approaches. Maybe the software has become very popular and is being used a lot, but businesses haven’t really taken full advantage of it yet. So that’s something we need to keep watching. Do we see productivity growth picking up as companies really utilise a lot of these investments they’re making in software and the cloud?
So there are some big questions about the productive value of all this technology?
To fully take advantage of a lot of these changes, you have to transform how you do business. The classic example in retail was companies like Walmart and others using sophisticated inventory management systems to control their inventory costs, manage their distribution across their stores around the country and with their suppliers around the world. If you just use computers to do the same thing you’ve always done, you get a little bump in productivity. When you totally change how you do things, that’s when you get the bigger shift.
Similarly, what are the biggest companies around San Francisco? Google, Facebook, etc, and they sell to businesses, right? Most people don’t think about this, but they’re selling advertising – they’re knocking out newspapers, magazines, TV and radio by selling advertising, generally to businesses. The fact that we receive their services is just like how we got to read the newspaper or listen to the radio: it’s a way to get our ‘eyeballs’ on the advertising. If this technology were so much better at providing advertising services to businesses, you would expect it to show up as greater productivity in the private sector. I believe all that’s really happened is companies have moved a big chunk of their advertising dollars from TV, radio, newspapers and magazines to the internet, and they’re still trying to figure out how to exploit that. So far, we’re not seeing a big surge in the productivity of marketing departments. What we’re seeing is just that the money has moved from one area to another. I’m not saying it is good or bad, but we’re not seeing the big surge in productivity that you might expect, because people really haven’t changed that much of what they’re doing.
Research by Charles Goodhart and Manoj Pradhan indicates that changing demographics will force the real rate of interest to rise in the future, as the labour supply shrinks and demand rises for workers in roles caring for the elderly, with desired levels of savings falling faster than investment. They stress that the role of growth in determining the equilibrium real interest rate “is exaggerated”. What do you think?
This issue of how demographics play out in terms of the R*, natural rate of interest, has been recognised as complicated, even in the earliest papers. We are living longer and then we have these demographic booms and waves. Their effect on asset prices, the economy and the natural rate of interest depends on where you are in those cycles. So I understand the argument, because even the models I referred to earlier have that dynamic. As people move into retirement, they’re dissaving, right? And some of these other factors come into play. I think this is an open question. The analysis I’ve seen by some of my colleagues at the Fed and research by other economists argues that, at least for the next decade or so, the primary influence of demographics will be to push that natural rate of interest down. But I agree that may change as these demographic waves change.
Another key question as we live longer is: “Do we work longer?” If people are living on average to 100 but retiring at 65, that means people save a lot while they are working and then dissave over the rest of their lives. But if, instead, we think of a world where we spend roughly the same proportion of our lives working, so retirement ages move out, that also could influence this calculation and reduce some of these demographic effects. The working assumption people have today is that we are going to see a continuation of what we’ve seen: even though life expectancy gets longer, people still retire basically around the same age on average, although some people will work a lot longer. If you look at Japan, which is on the leading edge of this demographic shift, they’re living longer, but people on average are not working much longer.
But the Japanese also have much higher levels of savings, don’t they?
Yes, they do. So this is an open question. We do not know whether, 10 years from now, these demographic waves will move in the opposite direction – in the analysis, there are competing forces of people saving a lot for a while and then dissaving. But as we proceed over the next decade, the working assumption that the natural rate has declined – at least for now – makes sense. Over the next 10 years, we’re going to see how saving rates and people’s behaviour change. Productivity growth may also pick back up. The whole point of the research programme was that we know the natural rate changes over time, and we have to keep track of it.
What are the implications for central banks?
The key implication is the likelihood of more unconventional monetary policy due to low normal interest rates. On average, during normal economic conditions when inflation is 2%, normal short-term interest rates would be 2.5–3.0%. My own view is about 2.5% – a 0.5 percentage point real interest rate plus 2 percentage points of inflation. That is low. And if, historically, the Fed usually cuts interest rates by 4–5 percentage points during typical recessions, that is no longer possible if you are starting at 2.5–3.0%. So unconventional policy becomes conventional. Quantitative easing, forward guidance or some other way of committing to keep interest rates low for a long time become more normal policy.
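The arithmetic behind those numbers is simply the neutral real rate plus the inflation target:

```latex
i^{\text{neutral}} = r^{*} + \pi^{*} \approx 0.5\% + 2\% = 2.5\%
```

Starting there, a typical 4–5 percentage point easing cycle would hit the zero lower bound well before it was complete.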
Importantly, this isn’t just happening in the US. It’s happening, according to our research, in Europe and Canada. Others have done research showing Japan’s natural rate of interest has also fallen to very low levels. If everybody is pursuing QE-type policies, this creates complications. We know from recent experience that it’s a very challenging environment in which to stimulate the economy and get inflation back up. If you want to see my gloomy view of the future, just open up your computer and look at the last five years. I’m not saying that we’re going to have another financial crisis; it’s just that using interest rates to keep the economy on track and inflation stable is going to be very difficult.
Are the 2% inflation targets that most central banks in the developed world have adopted suitable for this sort of environment? Would price-level targeting be better?
This is a discussion we need to be having on a regular and very serious basis at the Fed and other central banks. The Bank of Canada has a framework approach in place where it conducts a very serious, in-depth, careful analysis of its policy framework every five years. We and other central banks should follow that example. The discussion should include whether a 2% inflation target, a price-level target or some other framework is the right one, and how we should think about unconventional policies and the size of balance sheets in the future. These are important discussions to have, especially in times like today, when the US economy is doing well – unemployment’s low, and inflation’s a little low relative to our target, but not that low. We have gotten out of the crisis and the recession. Having come through the recovery, it makes sense for us to be preparing for that future.
When people rail against a higher inflation target, I always respond: “Well if you don’t like a higher inflation target, like a 3% or 4% inflation target to try to deal with a lower natural rate of interest, what’s your other option?” Price-level targeting does have some advantages. One is that you can maintain a 2% inflation objective or maybe a little bit higher than that. At least, in theory, it has some strong advantages over an inflation target that treats bygones as bygones. Price-level targets have the advantage during a recession in that one can promise to keep interest rates lower for longer, which we know is a good strategy. It’s the strategy that we and other central banks have followed – ‘lower-for-longer’ relative to a Taylor rule, keeping interest rates somewhat lower to help the economy to recover fully after interest rates were constrained for so long.
Could you have rules-based price-level targeting?
Yes. There’s been a lot of research on this, and I’ve worked on it over the years. In a speech I gave to the Shadow Open Market Committee this year, I laid out a very simple example using the Taylor rule, but instead of an inflation gap – prevailing inflation minus the target – I put in a measure of a price-level gap. So you’re still responding to the economy and keeping a very simple framework. There are two basic points about price-level targeting. First, it’s not radical. I tracked what it would have told the Fed to do relative to the Taylor rule over the last 15 years or so, and it’s very similar, because it’s responding to fundamentally the same thing. Second, it actually protects you from missing the target on both sides. Traditionally, it’s been viewed as kind of a hawkish thing, because you’re putting a big weight on the price level. But Athanasios Orphanides and I showed that this kind of approach would have been very beneficial during the 1960s and 1970s in the US. It’s really just about making sure you’re maintaining the anchor of inflation expectations, whether inflation’s going high or low. I view it as a middle ground. It doesn’t require you to have an inflation target that is so high people are uncomfortable with it. At the same time, it maintains the benefits of the Fed’s dual mandate, where we care about both employment and inflation – in this case, as represented by the price level.
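The contrast between the two rules can be sketched numerically. The coefficients, numbers and functional form below are purely illustrative assumptions for exposition, not the specification from the speech:

```python
def taylor_rule(r_star, inflation, target, output_gap, a=0.5, b=0.5):
    """Standard Taylor-type rule: respond to the current inflation gap."""
    return r_star + inflation + a * (inflation - target) + b * output_gap

def price_level_rule(r_star, inflation, price_gap, output_gap, a=0.5, b=0.5):
    """Illustrative variant: replace the inflation gap with a price-level
    gap (the percentage deviation of the price level from a path growing
    at the target rate), so past misses must be made up rather than
    treated as bygones."""
    return r_star + inflation + a * price_gap + b * output_gap

# Illustrative numbers: inflation has run 0.5 point below a 2% target
# long enough that the price level sits about 1% below its target path.
i_taylor = taylor_rule(r_star=0.5, inflation=1.5, target=2.0, output_gap=0.0)
i_plt = price_level_rule(r_star=0.5, inflation=1.5, price_gap=-1.0,
                         output_gap=0.0)
# Because the accumulated price-level shortfall exceeds the current
# inflation gap, the price-level rule prescribes the lower policy rate
# here - the 'lower for longer' property described in the interview.
```

Run after a period of above-target inflation, the same comparison flips sign, which is the symmetry Williams stresses later in the interview.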
Would you favour implementing a price-level target now?
Any central bank should reflect deeply before coming to any conclusion on changing its framework. We do not want to be changing it frequently, or for it to work only in the current economic environment. It’s really important to do a lot of homework, have a lot of discussion and then come to a conclusion that we could stick with for five, 10 and 20 years. We have had changes in our framework over the last 100 years, but we don’t want to make them too frequently. In terms of options, I do think price-level targeting is better than inflation targeting, both to tackle the zero lower bound issue and in recognition that our understanding of the economy is incomplete. We don’t really know what potential output is, or the natural rates of interest or unemployment. A price-level targeting framework does protect you from such mistakes. That was the point of our paper about the ’60s and ’70s. The Fed was run by people who desired lower inflation and struggled in that environment to achieve those goals – in our view, because they had mistaken beliefs about the natural rate of unemployment and how the economy worked. Price-level targeting ensures that the nominal anchor is maintained, and assures people that if they are going to buy a car or a house, they can be confident in what prices will be over a five-, 10- or 20-year period.
Should these reviews be conducted internally or externally?
It would be healthy for both to happen. I’ve been advocating for this for some time, and I believe outside parties – whether they are academics or think-tanks – should be actively engaged in these policy issues. It’s very healthy to have outside views, especially from people who may not agree with us. That way we will get better decisions and conclusions. But in the end, the reality is that the Federal Open Market Committee has to come to a decision and make that happen. Our statement of long-run strategy in 2012 – when we formally adopted the 2% target – was very carefully worked out. There was lengthy discussion within the FOMC with the goal of securing essentially universal support for the strategy, so every participant, president and governor could support it now and in the future. That takes a while. It is also why we need to be doing this now. I worry that some people say: “Well, things are fine now. Inflation’s a little low, but why are you debating this today?” And I respond: “Because that is how the Fed works.” It takes us a long time from the point that we start a conversation to when we’re comfortable with a decision.
Presumably, you also do not want to be seen as reacting to events?
That was the problem I had during the early stage of the recovery, when growth was faltering and inflation was low, and people were advocating this or that strategy. First, we know that if it’s not credible, it doesn’t work. Second, if we are really thinking about adopting a price-level targeting strategy or a higher inflation target, there has to be a commitment to the strategy for the foreseeable future. And it also has to be symmetric. The painful side of a price-level targeting strategy is that if inflation runs higher than your target for a while, you then have to act to get the price level back to its target. The strategy cannot be opportunistic or reactive. It has to work in the long term. The last thing we want to do is lose, in any way, the credibility the Fed does have [regarding] keeping inflation low and stable. That must be maintained while seeking a better operational strategy to achieve these goals.
Realistically, will that happen any time soon?
Currently, there is some uncertainty about the future of the Fed, so it’s hard to predict what will happen. Obviously, we’ll be seeing what happens in terms of leadership [chair Janet Yellen’s term expires in 2018, and vice-chair Stanley Fischer plans to step down this year] and other openings on the Board of Governors over the next year or so. If we were to adopt a five-year discussion of our framework, that would be a big undertaking that the chair and leadership of the committee would have to decide to do and take the lead on. So that’s a little up in the air right now. But I think it’s a good idea.
You have said the US economy has “fully recovered” from recession. Are you concerned that heightened levels of leverage (encouraged by low rates) and a reliance on asset price appreciation (partly a result of central bank asset purchases) are the real driving forces behind economic growth?
This is close to Lawrence Summers’ secular stagnation hypothesis. Certainly, low interest rates do boost asset prices – that’s one of the ways that monetary policy works. But as we normalise monetary policy – both in terms of interest rates and our balance sheet – that will take some of the pressure off asset prices. I would say the US situation is actually pretty good. We’ve raised interest rates a number of times. We’ve announced we are going to normalise the balance sheet. Everyone knows, down to the specific security, when we’re going to do the normalisation. So, assuming the economy continues to behave the way I expect, we will have an economy that supports sustainable growth and doesn’t need to be fuelled by asset prices on a sustained basis. It will probably only grow by 1.5% in terms of real GDP, and labour force growth will be less than we are used to. But we can manage with this kind of slow growth.
The secular stagnation view holds that the only way one can keep the economy at full employment is by juicing up asset bubbles. I don’t think that’s necessary. Growth will be slower than people have seen throughout their lifetimes, but there is a middle path of managing economic growth without creating excessive leverage or bubbles. It won’t be easy to keep on track, due to issues with the lower bound, and it does require us to normalise our monetary policy over the next couple of years, which we’re in the process of doing. I fully support that, because if we kept interest rates too low for too long, we would be fuelling further growth via unsustainable factors like asset price bubbles, or other activity that involves trying to have the economy grow beyond a sustainable pace – and eventually, that’s going to bite you. But I don’t think we’re at that place now. Growth is running around 2%, plus or minus. Job growth is still very strong, and unemployment continues to be very low. There is a Goldilocks path, where we keep unemployment around where it is, GDP growth slows to a sustainable level and inflation moves to 2%.
What are the more esoteric inputs you look at to inform your FOMC decisions?
At the Fed, we have a ‘mother ship’ – the Board of Governors – which produces lots of analysis and forecast materials. I study that very carefully. We also have our colleagues around the Federal Reserve System, not just the staff in this building, and we share and read each other’s work. I like the Atlanta Fed’s wage tracker – it is one of the innovative ideas that’s out there. The San Francisco Fed was involved in thinking through some of the issues around composition effects in wages – the fact that the skill and experience composition of the labour pool changes during the cycle, which is one of the factors explaining slow wage growth these days. These kinds of things are really valuable. My 2017 mantra is: “We always have to look under the hood.” You can’t just look at the aggregate number, and wage growth is a great example of that.
What is your explanation for wages not growing as fast as expected?
Nominal wage growth is not that surprising, given slow productivity growth and lower inflation. Going deeper than that, one can also look at some of the work on composition effects in the labour market. My colleagues here and in Richmond have been working on a non-employment index. It looks at everybody who’s not working, and weights them according to the probability of that group getting a job. The long-term unemployed have a pretty low probability of getting a job in any given month, while the short-term unemployed have a very high rate. Some people out of the labour force are actually a little more like the long-term unemployed. People over the age of 65 or 70 have a very low probability of getting a job because they’re not really looking. Instead of looking at who is in or out of the labour force, the index takes everybody who’s not working and weights them by the probability of that narrowly defined group getting a job. And so you get this very broad measure that includes both the unemployed and those out of the labour force, by thinking of them as being on the edge of the labour force and bringing them in. The analysis shows that when you look at the labour market from this point of view, some of the anomalies in job-finding and job-matching rates aren’t as anomalous as you thought. We have all these very engaged economists digging into the micro data – looking ‘under the hood’ – as opposed to looking at time series relating consumption to GDP and the top-line stuff, which I grew up with.
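The weighting idea can be sketched in a few lines. The groups, head counts and monthly job-finding probabilities below are invented for illustration, not actual Richmond or San Francisco Fed estimates:

```python
# Hypothetical non-working groups (counts in millions) with illustrative
# monthly job-finding probabilities.
groups = {
    "short-term unemployed":     {"count": 3.0,  "p_find": 0.30},
    "long-term unemployed":      {"count": 1.5,  "p_find": 0.10},
    "want a job, not searching": {"count": 5.0,  "p_find": 0.08},
    "retired, not searching":    {"count": 40.0, "p_find": 0.01},
}

# Weight each group's head count by its job-finding probability relative
# to the short-term unemployed, so people near the edge of the labour
# market count almost fully and those far from it count very little.
benchmark = groups["short-term unemployed"]["p_find"]
effective_slack = sum(g["count"] * g["p_find"] / benchmark
                      for g in groups.values())
raw_headcount = sum(g["count"] for g in groups.values())
```

With these made-up numbers, 49.5 million people are not working, but the probability-weighted measure of effective slack is only about 6.2 million – which is the sense in which the index brings everybody in without treating a retiree like an active job-seeker.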
That said, my favourite labour indicator is still the Consumer Confidence Survey question: “Is it easy or hard to find a job?” This to me is my favourite measure of slack, because it doesn’t get into the flatness of the Phillips curve and so on. Right now, it is telling us the labour market is about as strong as it ever has been on the basis of how easy it is to find a job.
Philip Turner, formerly of the Bank for International Settlements, says it might be better if some central banks conducted off-market swaps transactions with the Treasury to reduce their longer-dated bond exposures in return for shorter-term bills. Any thoughts?
I cannot speak for other central banks about how they would manage normalisation. But for us, it’s pretty straightforward. The Cusip [number] of every Treasury security we hold is publicly known, so everybody knows the dates they will mature. We have announced a phase-in of the normalisation of the balance sheet, and that will take about four years, plus or minus a little. Markets have fully absorbed this, and there has been no disruption. If anything, the markets seem pretty complacent nowadays. My own view is that the balance sheet actions primarily work through affecting the net supply of longer-term securities, so whether we hold more shorter-term Treasuries is not the relevant parameter. What really mattered were the yields that affected the economy: buying mortgage-backed securities affected mortgage costs, and buying 10-year Treasuries affected the cost of mortgages and other types of borrowing for investment. As we get out of those, I expect the yield curve to steepen gradually over the next few years.
It can be argued that the San Francisco Fed is underrepresented in FOMC decision-making, given the size of the economy (20% of the US) and landmass (36%) represented. Does that bother you? Some say a solution would be for San Francisco to rotate with New York, and sit on the FOMC every other year. What is your view?
It is up to Congress to design aspects of the Federal Reserve, but the FOMC voting works fine. I have been very fortunate to work on committees led by chairs where I always knew that the arguments that won the day were the arguments that made sense and were supported by the evidence. I’ve never gone into a meeting where I felt that I wasn’t listened to because I wasn’t a voter. Look at my predecessor, Janet Yellen. As president of the San Francisco Fed, she clearly was very influential because of the power of her arguments, her knowledge and her expertise. So I always use that as the role model. It doesn’t matter if you vote or not. What matters is that you are contributing, bringing ideas and helping to make better decisions.
The other thing is that monetary policy is a national policy. We do not have different interest rates in different parts of the country. Importantly, none of our discussion is about what’s good for the twelfth district or the eighth district or the second district or whatever, it’s really about what’s good for the whole US economy. All my colleagues go in there with that same approach – although we do share the anecdotes, experiences and what we hear from people in our districts. A strong FOMC is not so much about the voting rotation, or this or that. It is to have a committee that is fully engaged and debating, discussing and sharing ideas – and that’s what we’ve had in all my years on the committee. That is what helps make us successful.
You’ve said that central banks should do more to tap into greater diversity in their staffing. Why do you feel that’s so important, and can you give a good example where diversity has really aided decision-making?
This applies to the financial services sector as a whole – it is not just central banks, but also banking and Wall Street that haven't done as well as they should in terms of diversity. That's been recognised by many leaders in many organisations. The hard thing is: what do you do about it? My view is that we have a very diverse workforce and community in San Francisco and the western United States. You look at our workforce, you look at the labour force, and if you don't tap into the diversity of the population – whether it's gender or any of the other dimensions of diversity in our region – then you're just losing out in terms of being competitive and in terms of getting the very best talent, the best people for the organisation. So that's the business case. The moral case also matters. We are a public organisation, we represent the public and we should do our very best to be representative of the community that we serve.
When you get a diversity of views, you realise that your own life experience is a lot narrower than you probably thought. Let me give you an example: getting rid of currency – a topic that some academics have promoted. But a lot of Americans don’t have bank accounts, they don’t have cell phones. They might have very limited access to financial services, or the financial services they can access are very expensive. So a lot of the population does use cash or currency to carry out transactions. Having people with different backgrounds, who understand what it’s like to live in different parts of our society and experience different things, helps you to get away from ‘groupthink’, or maybe even the mindset, “well, I’ve not personally experienced that, so it doesn’t happen”. There is usually some kind of epiphany when you talk to somebody who tells you their personal story or talks about their experience growing up. It’s so much more powerful and real than just reading an article in the Harvard Business Review.
The San Francisco Fed holds a lot of dialogue with Pacific nations. Why is this important?
The US dollar is still the world’s predominant reserve currency, so monetary policy in the US affects everybody around the world. That’s not going to change. So our colleagues and peers around the world want to understand what we’re doing and why we’re doing it, ask questions and have a dialogue. Our chair and my colleagues around the Fed play a very important role in official channels like the BIS. But for me and some of my other colleagues, it’s very good to have informal meetings where people get together either at conferences or in small meetings to have conversations about what is worrying people. It is also reassuring for our counterparts to be able to ask how the Fed is operating and its thinking on certain issues.
Obviously, when we were doing QE, many people around the world were worried. They also wanted to know when we would end QE3, and how this would affect global markets. As we've normalised interest rates, they've been wanting to understand how this will affect them. Now, questions come up about the normalisation of the balance sheet. It is very helpful for us to hear what people are concerned about and thinking about, and also to explain why we're doing things, what we're planning to do and to answer questions. In the end, monetary policy in the US is driven by US economic conditions. Monetary policy in other countries is driven by their conditions. We all understand that. But we have to understand the Federal Reserve is a central part of the global financial system, and we don't want to inadvertently or excessively create volatility or confusion through our actions. We want our actions to be well understood. It's better to be more transparent.
The Fed seems more cognisant of the impact of its policy spillovers and spillbacks these days. Why is that?
The taper tantrum was a wake-up call about how global markets can overreact or react strongly to symbols that mean one thing maybe to us and mean something different to them. Nobody wanted to create a huge amount of market volatility, so we try to avoid creating unnecessary confusion or volatility. It doesn’t mean we are going to change what we do; it’s really more about communicating.
How do Asian nations react to the ‘savings glut’ hypothesis and accusations they are hoarding savings and keeping exchange rates low to bolster exports?
Many of these countries are either pegged or quasi-pegged to the US dollar, which creates a lot of other issues for their economies, given their sensitivity to US monetary policy. But when I look at the global savings glut issue today, it's not really so much about what you term "hoarding". Obviously, China is now on the opposite side of that. It's really more about some kind of secular shift in savings versus investment. So much of the earlier story was about the build-up of savings after the Asian financial crisis. That happened, and that's continued. But there are other dynamics at play, too. It's not just about governments responding to what they saw as the risks of being exposed to a crisis. There are broader productivity, demographic and risk-appetite issues globally that are affecting interest rates more than just the 'savings glut'.
In a world with a lower natural rate of interest, there could be more frequent episodes of extraordinary monetary policy, meaning more spillovers in the future. Should there be greater policy co-ordination?
There are three different words that come up: co-ordination, co-operation and consultation. Consultation is the right thing but co-ordination is probably a step too far. Ultimately, we have domestic mandates. Where we did co-ordinate during the crisis – before I was on the FOMC – was a joint policy announcement by a number of the world’s major central banks on swaps and monetary loosening. That was a very powerful message under those circumstances. The message was that central banks were working together with the same common interests, the same goals to bolster the global economy. So, in certain circumstances, that is powerful. It was appropriate then. It could be appropriate in the future.
The challenge moving forward – if we don't change our frameworks and Charles Goodhart is not right – is that we will need to use QE more frequently. And QE is viewed – fairly or unfairly (I would say unfairly) – as a 'beggar-thy-neighbour' policy. Research has indicated that if several countries are at the lower bound, these policies, while primarily working by boosting demand, also shift demand from one country to another. So maybe co-operation and co-ordination should be happening in the discussion about frameworks. If we all stick with the current low inflation targets, when we have to use stimulus in one country via QE, a lot of the effect will happen through demand shifting, or beggaring thy neighbour. The scary scenario is a repeat of what we've seen, where everybody's at the lower bound – not because of a crisis, but due to the reasons recessions happen – and we're all stuck with uncomfortable choices. There could be a mutual benefit for countries to change their price-stability policies together. That is a true policy strategy debate, rather than just a tactics debate, because co-ordination is almost impossible tactically in most circumstances.
John C Williams took office as president and chief executive of the Federal Reserve Bank of San Francisco on March 1, 2011. In this role, he serves on the Federal Open Market Committee, bringing the Fed’s Twelfth District’s perspective to monetary policy discussions in Washington, DC.
Williams was previously the executive vice-president and director of research for the San Francisco bank, which he joined in 2002. He started his career in 1994 as an economist at the Board of Governors of the Federal Reserve System, following the completion of his PhD in economics at Stanford University. Prior to that, he earned a master’s of science from the London School of Economics and a bachelor of arts from the University of California, Berkeley.
Williams’ research focuses on topics that include monetary policy under uncertainty, innovation and business cycles. Additionally, he has served as senior economist at the White House Council of Economic Advisers and as a lecturer at the Stanford Graduate School of Business.
This interview took place on September 5, 2017.
1. Laubach, Thomas and John C Williams, 'Measuring the Natural Rate of Interest', The Review of Economics and Statistics, 85(4), November 2003, pages 1063–1070.
Copyright Infopro Digital Limited. All rights reserved.