Monday, October 24, 2016

Time for a Sterling Crisis?

The UK pound plunged again earlier this month:
two dates are marked with vertical lines: the Brexit vote (red line) and the Prime Minister's speech signalling that the most likely outcome was a "hard Brexit" (green line), in which the UK leaves the European single market (i.e., it won't become part of the European Economic Area, like Norway, or negotiate an arrangement like Switzerland's).

Departure from the EU and the single market makes the UK a less attractive location for foreign investment (see, e.g., comments from Nissan's CEO about its Sunderland plant).  A decrease in demand for UK assets implies a drop in the pound.  The UK is now a less attractive location for domestic investment as well, and the situation is probably not a good one for consumer confidence - a reduction in demand due to lower desired consumption and investment would imply lower interest rates, which also would cause the pound to fall.

Is this yet another "sterling crisis"?  Not in the usual sense - as Gavyn Davies notes, there is no fixed exchange rate to defend this time, and most of the UK's external debt is denominated in pounds.

As Paul Krugman explains, a fall in the pound is a part of the adjustment process.  He writes:
But it’s important to be aware that not everyone in Britain is equally affected. Pre-Brexit, Britain was obviously experiencing a version of the so-called Dutch disease. In its traditional form, this referred to the way natural resource exports crowd out manufacturing by keeping the currency strong. In the UK case, the City’s financial exports play the same role. So their weakening helps British manufacturing – and, maybe, the incomes of people who live far from the City and still depend directly or indirectly on manufacturing for their incomes.
However, a rebalancing of the UK economy in favor of manufacturing exports will not come quickly, according to Barry Eichengreen (Robert Skidelsky goes further and argues for helping the process along through "import substitution" policies).

One likely consequence is inflation, as Ambrose Evans-Pritchard writes.  Prices of imported goods will rise significantly (though the process of "exchange rate pass-through" generally occurs with a lag - the "Marmite row" may have been a harbinger of things to come).  The inflation will hit lower-income families especially hard, according to Evans-Pritchard's column, because the government has frozen some benefit payments, so inflation will cause their real value to fall.

Rising costs for imports don't only impact consumers - they also affect producers.  On the one hand, domestic producers benefit from increases in the relative prices of imported substitutes.  On the other - and this is becoming more and more relevant in an age of global supply chains - prices of imported inputs (intermediate goods) will rise, increasing production costs.

With the rise in cost of intermediate inputs and the greater costs of selling to its main trading partners, the impact of Brexit looks like a negative supply shock.  Supply shocks create a nasty dilemma for monetary policy.  Policy can "accommodate" the shock by allowing inflation to rise - doing so minimizes the increase in unemployment and helps keep output near its (diminished) potential.  Or the Bank of England could tighten policy to keep inflation in check, with negative consequences for output and employment.

The risk with accommodation is not just inflation itself, but a potential increase in inflation expectations and loss of the central bank's credibility. Part of the standard interpretation of 1970s stagflation is that the Fed was too accommodating after the 1973 oil shock, which contributed to inflation expectations getting out of control.

The Bank of England has a formal 2% inflation target; right now inflation is running below target, but that will change.
I personally think it's to the Bank of England's credit that it has allowed inflation to go above target at a couple of points during the turmoil of recent years.  If its policy is credible, an occasional miss doesn't cause inflation expectations to rise.  But the point of inflation targeting is to achieve credibility by meeting a stated target, so the BofE may be putting that at risk if it's always seen to be accommodating shocks.

Sunday, September 4, 2016

Revisiting Lehman

The eighth anniversary of the bankruptcy of the Lehman Brothers investment bank is coming up later this month.  It marked a point where the financial crisis, which had been simmering since summer 2007, seemed to go from bad to catastrophic.

One indicator of financial stress that we were all watching closely at the time was the "TED Spread" - the difference between the 3-month LIBOR (a rate on interbank loans) and the yield on 3-month US Treasuries - essentially a measure of the risk premium paid by financial institutions, which is normally quite low.
The blue line is drawn at Sept. 15, 2008, the date of the Lehman bankruptcy.
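The spread itself is just a subtraction; a minimal sketch, with rate values that are hypothetical (chosen only to contrast calm times with the post-Lehman spike):

```python
# The TED spread described above: 3-month LIBOR minus the 3-month
# Treasury yield, a rough measure of the risk premium banks pay.
# Both rates below are hypothetical, in percentage points.
def ted_spread(libor_3m, tbill_3m):
    """Risk premium on interbank lending: LIBOR minus the T-bill yield."""
    return libor_3m - tbill_3m

calm = ted_spread(libor_3m=5.30, tbill_3m=4.95)   # a few tenths of a point
panic = ted_spread(libor_3m=4.60, tbill_3m=0.20)  # several full points
```

In normal times the spread sits in the tens of basis points; during the crisis it blew out to multiples of that.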

The Fed creatively expanded its "lender of last resort" toolkit during the crisis, creating a number of new lending programs.  In March 2008, it helped arrange the takeover of Bear Stearns by JP Morgan Chase.

Monday-morning quarterbacking of the government's actions (both the Fed's and the Treasury's) began immediately and has never really stopped (academic macroeconomists consider it part of our jobs, after all).  One of the biggest questions has been: why didn't Lehman Brothers get the same treatment as Bear Stearns?

The Fed did try to arrange a takeover by a healthier firm - a potential deal with Barclays was reportedly scuppered by British regulators - but no deal was finalized in time.  At the time, "moral hazard" concerns were prominent, and people felt the US government wanted to show that it was willing to let a financial institution fail.  Lehman's troubles were well-known, so it was hoped that the financial disruption would be modest since everyone had time to prepare for its demise.  More recently, officials have claimed that the Fed lacked the legal authority to rescue Lehman because it was truly insolvent - the intention of lender of last resort is to lend to illiquid banks, not insolvent ones (the Fed's loans are supposed to be backed by good collateral).

Laurence Ball of Johns Hopkins has dug into the records and is questioning the argument that the Fed did all it could (and should) have.  In a summary at VoxEU, he writes:
I conclude that officials’ explanation for the non-rescue of Lehman is incorrect, in two senses.
First, a perceived lack of legal authority was not the reason for the Fed’s inaction; and
Second, the Fed did in fact have the authority to rescue Lehman.
I base these broad conclusions on several findings, given below.
  • First, before the bankruptcy, Fed staff extensively analysed Lehman’s liquidity risk and how the Fed might assist the firm. In the record of these discussions, there is little concern about the adequacy of Lehman’s collateral, and nobody suggests that legal issues might preclude a Fed loan.
  • Second, arguments about legal authority made by policymakers since the bankruptcy are unpersuasive. These arguments involve flawed economic reasoning, such as confusion between the concepts of illiquidity and insolvency. They also include factual claims that are not supported by evidence. The Financial Crisis Inquiry Commission repeatedly asked Ben Bernanke for details about Lehman’s collateral problem, but Bernanke was unresponsive.
Further, from a de novo examination of Lehman’s finances, it is clear that the firm had ample collateral for a loan to meet its liquidity needs. In particular, I estimate that Lehman could have survived with $88 billion of overnight lending from the Fed’s Primary Dealer Credit Facility (PDCF), and the firm had at least $131 billion of assets that were acceptable as PDCF collateral.
Ball's argument was also discussed in a column by James Stewart in the New York Times.

In a Bloomberg View column, Barry Ritholtz disagrees with Ball's argument that Lehman was solvent, but, nonetheless, he thinks the Fed could have rescued Lehman:
As subsequent events have shown, most especially with the Fed-led bailout of insurance giant American International Group, if there was a will, there most certainly was a way. Given all of the various bailouts of dubious legality, the Fed, Treasury and Congress most certainly could have devised a rescue plan for the 158-year old bank.
Even though he thinks Lehman could have been rescued, on the question of whether it should have, Ritholtz disagrees with Ball and believes that the Fed was correct to let it go:
No, Lehman Brothers did not “precipitate” the financial crisis. The better metaphor is that Lehman was the first trailer in the park to be destroyed by the tornado. Whether it lived or died was not going to stop the financial forces that had been decades in the making and unleashed when the credit bubble popped.

I agree with Ann Rutledge, a principal with New York-based R&R Consulting, and co-author of two books on structured finance. She noted “It wasn’t a mistake to let Lehman fail, it was a mistake to let it live so long.”
I haven't yet tackled Ball's monograph - perhaps that would be a good project for my Money and Banking students in block 5...

Sunday, July 17, 2016

Lies, Damned Lies and Ireland's GDP

Being an academic economist can be humbling - while it involves learning lots of esoteric stuff, it also makes one much more aware of how much one doesn't know.  But I know this much is true: the total amount of goods and services produced in Ireland did not increase by 26.3% in 2015.

But that's what Ireland's Central Statistics Office has reported.  It's not entirely clear how it came to such a (literally) incredible figure for real GDP growth.  The expenditure approach to calculating GDP adds up purchases of new final goods and services in four categories - consumption (C), investment (I), government purchases (G) and net exports (NX).  I took the data from table 6 of their release and normalized each component to 100 in 2010 to illustrate how the change is driven by large jumps in I and NX.
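The normalization is straightforward to sketch; the level series below are made up for illustration (the real figures come from table 6 of the CSO release):

```python
# Index each expenditure component of GDP to 100 in the 2010 base year,
# so the relative jumps in I and NX stand out. Levels are hypothetical.
def index_to_base(series, base=0):
    """Rescale a series so its value in the base period equals 100."""
    return [100.0 * x / series[base] for x in series]

# Hypothetical component levels, 2010-2015:
components = {
    "C":  [90.0, 91.0, 92.0, 94.0, 96.0, 100.0],
    "I":  [30.0, 31.0, 33.0, 35.0, 40.0, 60.0],
    "G":  [28.0, 28.0, 27.5, 27.5, 28.0, 28.5],
    "NX": [20.0, 22.0, 24.0, 25.0, 28.0, 45.0],
}
indexed = {name: index_to_base(vals) for name, vals in components.items()}
# e.g. indexed["I"][-1] is 200.0 here: investment doubled relative to 2010
```

A series that merely grows with the rest of the economy stays near 100; a component driving the headline number shoots well above it.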
News reports have focused on activities of multinational corporations, particularly on "inversions" which involve transferring their legal headquarters to Ireland in order to take advantage of its low corporate tax rates.

That may be correct, but it's not an entirely satisfying explanation.  GDP is supposed to measure the value of goods and services produced in a country.  The legal domicile of the corporations producing it is irrelevant.  Ownership of capital - both physical (machinery, equipment and structures) and intangible (intellectual property, etc.) - is also irrelevant, so the fact that multinational corporations like to hold IP in Ireland for tax reasons shouldn't matter in principle.

Gross National Product (GNP) adds up the total value of goods and services produced with resources owned by a country's citizens.  Since many multinational corporations operate in Ireland, it has higher GDP than GNP because some of its GDP is produced using foreign-owned capital - the same data release showed a shocking increase of 18.7% in Ireland's GNP last year.
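The GDP/GNP distinction amounts to an adjustment for net factor income from abroad; a minimal sketch, with amounts that are hypothetical rather than the CSO's:

```python
# GNP = GDP plus income earned abroad by residents, minus income paid
# to foreign owners of domestic capital. For an Ireland-like economy,
# the second flow dominates, so GNP < GDP. Numbers are hypothetical.
def gnp(gdp, income_from_abroad, income_to_abroad):
    """Adjust GDP for net factor income to get GNP."""
    return gdp + income_from_abroad - income_to_abroad

ireland_style = gnp(gdp=250.0, income_from_abroad=10.0, income_to_abroad=50.0)
```

The gap between the two aggregates is exactly the net income flowing to foreign owners of capital operating in the country.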

The information released by the CSO is not very detailed.  FT Alphaville's Matthew Klein, Bloomberg columnist Leonid Bershidsky, and Seamus Coffey at the Irish Economy blog have made useful attempts at sorting things out.

Clearly, the way the CSO is calculating GDP is failing to correspond to the concept.  Statistical agencies need to provide estimates calculated in a consistent fashion, so it wouldn't have been appropriate for them to suddenly decide to calculate it differently because they got a strange number.  But the methods for estimating GDP are not etched in stone.  If legal and accounting maneuvers of multinationals are distorting some of their source data, they need to find a way of correcting for it.  Standards for national accounts are coordinated internationally, and it is not clear whether the problems in this report are due to something the CSO is doing or to a more general methodological issue that happens to be most apparent because of Ireland's unique circumstances.

Statistical agencies update their methods and revise their estimates regularly - e.g., in 2013, the US BEA did a "comprehensive revision" that began the treatment of development of intellectual property as a component of investment (see this earlier blog post).  I hope the CSO will release more information to help everyone better understand what's gone wrong with their figures.  That will be a first step towards correcting the method used to estimate GDP and producing a revised set of figures - one which will not show a 26% increase in real GDP in 2015.

Update: Procedures mandated by Eurostat are mostly to blame, writes Colm McCarthy.

Tuesday, April 19, 2016

Hysteresis in a New Keynesian Model

I've posted a draft of my research paper, "Hysteresis in a New Keynesian Model" on my website.  The paper proposes a way of modelling hysteresis and integrates it into a New Keynesian macro model.

The widely noted rightward shift of the Beveridge curve relationship can be interpreted as evidence of less efficient matching between employers and workers in the labor market.

In the paper, I argue that less-efficient matching is related to the increased duration of unemployment spells seen during the last recession and its aftermath.  There are several reasons why matching may be less efficient with a higher proportion of long-term unemployed:
  1. a loss of information as workers' informal networks may dry up over time
  2. a stigma associated with long-term unemployment (i.e., it acts as a negative signal)
  3. decreased search effort by long-term unemployed
I cite empirical evidence for (2) and (3) in the paper.   The paper does not take a stand on the mechanism causing the relationship between matching efficiency and duration of unemployment.

The model includes a Diamond-Mortensen-Pissarides search-and-matching labor market framework, where hires (H) are a function of the number of vacancies (V) posted by firms and unemployed (U) workers, scaled by matching efficiency (A): H = A·V^α·U^(1-α).
Hysteresis is modeled with the assumption that matching efficiency is a decreasing function of the average duration of unemployment spells.

Rearranging the matching function to solve for efficiency (and setting alpha to 0.5), so that A = H/√(VU), we can see that a decrease in efficiency coincides with the increase in the duration of unemployment spells.
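For concreteness, a minimal sketch of the matching function and its rearrangement, with hypothetical values for hires, vacancies, and unemployment (the paper's actual calibration is not reproduced here):

```python
# Cobb-Douglas matching function from the DMP framework described above:
# H = A * V**alpha * U**(1 - alpha). With alpha = 0.5, efficiency can be
# backed out as A = H / sqrt(V * U). All quantities below are hypothetical.
def hires(A, V, U, alpha=0.5):
    """Hires produced by vacancies and job seekers, given efficiency A."""
    return A * V**alpha * U**(1 - alpha)

def matching_efficiency(H, V, U, alpha=0.5):
    """Solve the matching function for the implied efficiency A."""
    return H / (V**alpha * U**(1 - alpha))

A = matching_efficiency(H=4.0, V=5.0, U=8.0)       # implied efficiency
assert abs(hires(A, V=5.0, U=8.0) - 4.0) < 1e-9    # round trip recovers H
```

Holding hires, vacancies, and unemployment data fixed, a longer average duration of unemployment shows up in the data as a lower implied A.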

With hysteresis, the response of unemployment to a negative productivity shock is smaller initially but more persistent, as shown by the impulse response functions:
The reason for this is that, in the absence of hysteresis, firms can adjust their labor by sharply decreasing their vacancy posting.  With hysteresis, the response of vacancies is less dramatic because firms take into account the fact that hiring will be more difficult in the future due to the decline in matching efficiency.

The model also considers demand shocks, which take the form of shocks to the discount factor, and monetary shocks (deviations from the Taylor rule), with similar results.  Overall, hysteresis acts as a mechanism that increases the persistence of the response of macroeconomic variables to shocks.  Since macro models struggle to generate endogenous persistence, this may be one of the main selling points of the paper.

Hysteresis also generates movements in the "natural rate" of unemployment, which I proxy by computing the amount of unemployment that would occur if wages and prices were flexible, taking as given the evolution of matching efficiency (A) from the baseline model.  The green line shows the change in the natural rate in response to a negative productivity shock:
Note: this is a revised version of the draft circulated last fall as my "job market paper".

Tuesday, March 1, 2016

Carbon Taxes

For an economist, one of the biggest frustrations of discussions over climate policy is that we know pretty well what to do - tax carbon (or set up a system of tradable permits, which has similar effects) - and that doing so will not be harmful to the economy.  People and firms respond to incentives, and a carbon tax will motivate them to find the lowest-cost ways to reduce emissions.  People are clever, and the cost of reducing emissions will likely be much less than many envisioned.

Eduardo Porter's column about British Columbia's carbon tax is yet another illustration of this; he writes:
In 2008, the British Columbia Liberal Party, which confoundingly leans right, introduced a tax on the carbon emissions of businesses and families, cars and trucks, factories and homes across the province. The party stuck to the tax even as the left-leaning New Democratic Party challenged it in provincial elections the next year under the slogan Axe the Tax. The conservatives won soundly at the polls.

Their experience shows that cutting carbon emissions enough to make a difference in preventing global warming remains a difficult challenge. But the most important takeaway for American skeptics is that the policy basically worked as advertised.

British Columbia’s economy did not collapse. In fact, the provincial economy grew faster than its neighbors’ even as its greenhouse gas emissions declined.

“It performed better on all fronts than I think any of us expected,” said Mary Polak, the province’s environment minister. “To the extent that the people who modeled it predicted this, I’m not sure that those of us on the policy end of it really believed it.”

Monday, February 8, 2016

Productivity Pessimism

I'm hoping I'll have a chance to read Robert Gordon's new book soon, though one of the ironies of being a college professor is that the job doesn't seem to leave much time to read.  Fortunately, Gordon presents a condensed version of the argument in a recent Bloomberg View column, where he explains that he doesn't expect a return to the rapid productivity growth of the mid-20th century.  He writes:
The 1920-70 expansion grew out of the second industrial revolution, when fossil fuels, the internal-combustion engine, advanced metals and factory automation came together to produce electric lighting, indoor plumbing, home appliances, motor vehicles, air travel, air conditioning, television and much longer life expectancy.
The "third industrial revolution" - computers and the internet - is less significant, in his view:
Although revolutionary, the Internet's effects were limited when compared with the second industrial revolution, which changed everything. The former had little effect on purchases of food, clothing, cars, furniture, fuel and appliances. A pedicure is a pedicure whether the customer is reading a magazine or surfing the web on a smartphone. Computers aren't everywhere: We don’t eat, wear or drive them to work. We don't let them cut our hair. We live in dwellings that have appliances much like those of the 1950s and we drive in motor vehicles that perform the same functions as in the 1950s, albeit with more convenience and safety.
Our main measure of technological progress is total factor productivity (tfp) growth, which is sometimes called the "Solow residual" because it is calculated as a leftover, by subtracting from output growth the portions that can be explained by changes in capital and labor.  That is, it is the growth that would occur even if there were no change in the factors of production.
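The subtraction is a one-liner; a back-of-the-envelope sketch, where the growth rates and the 0.3 capital share are hypothetical numbers chosen for illustration:

```python
# Solow residual as described above: tfp growth is output growth minus the
# share-weighted growth of the inputs (Cobb-Douglas income shares assumed).
def solow_residual(g_Y, g_K, g_N, capital_share=0.3):
    """tfp growth = g_Y - share*g_K - (1 - share)*g_N."""
    return g_Y - capital_share * g_K - (1 - capital_share) * g_N

# 3% output growth, 2% capital growth, 1% labor growth:
g_A = solow_residual(g_Y=0.03, g_K=0.02, g_N=0.01)  # roughly 1.7% tfp growth
```

Because it is a residual, anything mismeasured in output, capital, or labor ends up in tfp - one reason turning points are hard to identify in real time.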

Turning points in tfp growth can be hard to identify because the data are somewhat volatile from year-to-year and have a cyclical component.  With hindsight, economists identified a productivity slowdown around 1973 and a resurgence - with information technology playing a leading role - in the mid-1990s.  However, tfp growth has generally been weak since 2005, raising the question of whether the IT-led productivity boom is over.

This San Francisco Fed Letter from last year discusses some of the reasons for the productivity slowdown.  Gordon's book was the subject of an Eduardo Porter column and a Paul Krugman review.

Update (2/25): In an interview with Ezra Klein, Bill Gates argues against Gordon's view.

Wednesday, December 2, 2015

Fight for Our Principles!

Principles of Macroeconomics, that is, since it is under attack from Noah Smith, who argues for eliminating introductory macroeconomics classes.  In a blog post, he writes:
Why should undergrads learn macro in their first year of econ? If they go on to be econ majors they can easily start out with intermediate macro and not miss anything important. If they just take the first-year econ sequence and then go into the business world, what do they really need to know?
I think this badly misunderstands what we're trying to accomplish in an introductory-level course - principles of macroeconomics is not about preparing students for business careers (though business students certainly should take it - as should everyone else).  The two main benefits of taking an introductory macroeconomics course are:

First, it prepares students to be more knowledgeable and effective citizens.  Among other things, students come away from the class able to interpret data like unemployment, inflation and GDP that they read about in the news.  They learn some basic facts about taxation and government spending that can help them evaluate claims made by politicians.  The Federal Reserve is pretty mysterious to most people - students learn what it does and how monetary policy works and the basics of how a banking/financial crisis can occur.  This seems particularly valuable in a time when the Fed is facing political criticism which is at least partly based on its widely misunderstood response to a financial crisis.

Second, working with economic models develops thinking and mathematical skills.  Smith makes the point that the models we teach in an intro class have their flaws (as do the models we teach in PhD-level classes...), though I still think they're quite useful for thinking about a number of issues.  But the act of manipulating a model and working out how assumptions are linked to conclusions helps students become sharper thinkers, and this stays with them long after they've forgotten the specifics of any particular model.

At my current station, I'm teaching a one-semester introductory course that covers both micro and macro topics, but I had a full-semester macro principles course at my previous stop - an outline of what I did is posted here.  The time students have in college is a very scarce resource, and the opportunity cost of any college class is very high, but I think principles of macroeconomics is almost always a good choice.  Though perhaps I'm a little biased...

Sunday, November 15, 2015

Hysteresis and Monetary Policy

In the Washington Post last week, Larry Summers wrote about some new research finding evidence of "hysteresis."  This is a term borrowed from the natural sciences for when temporary occurrences have lasting effects - e.g., when you hold a magnet up to a piece of metal, the metal remains magnetized even after you remove the magnet.  In macroeconomics, hysteresis occurs when an economic downturn has a lasting effect on economic capacity (i.e., reduced "potential output"); that is, lack of demand creates its own lack of supply.

Hysteresis could occur through a number of channels. Consider an economy described by an aggregate production function Y* = AF(K,N*) where potential GDP (Y*) depends on productivity (A),  capital (K) and labor at its "natural" or "full-employment" level, N*.  A recession occurs when output falls below Y* and labor is below N* (i.e., there is unemployment in excess of the "natural rate").  Hysteresis implies that there is a lasting impact on Y* - this could occur through technology, capital or labor.
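The accounting can be sketched with a Cobb-Douglas F; the parameter values below are hypothetical, and the point is only that a lasting hit to any of A, K, or N* lowers Y* lastingly:

```python
# Potential output Y* = A * F(K, N*), here with F Cobb-Douglas.
# Parameters are hypothetical, purely for illustration of the three
# hysteresis channels (productivity, capital, labor) in the text.
def potential_output(A, K, N_star, alpha=0.3):
    """Y* = A * K**alpha * N_star**(1 - alpha)."""
    return A * K**alpha * N_star**(1 - alpha)

baseline = potential_output(A=1.0, K=100.0, N_star=100.0)  # about 100
scarred = potential_output(A=0.99, K=98.0, N_star=97.0)    # all three channels hit
assert scarred < baseline
```

A deep recession that permanently nudges down any of the three arguments leaves Y* below its old path even after demand recovers.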

All three channels could be operative. In the past several years, productivity growth has been sluggish, though it's not clear if this is linked with the recession (productivity trends are always somewhat mysterious).  The recovery of investment (the rate of flow into the stock of capital) from the recession has been less than spectacular, even taking out housing - the share of GDP devoted to nonresidential fixed investment is somewhat below its peak in previous expansions.

Here, I want to focus on labor, where the hysteresis effects are pretty evident, and raise an interesting policy dilemma. 

Although the unemployment rate has fallen to what we might consider a reasonably healthy level of 5% (the normal turnover of a healthy labor market generates some unemployment so we never expect it to get to zero), the labor market still clearly bears the scars of the 2008-09 recession.

The duration of unemployment spells rose to unprecedented levels and has remained elevated (a useful comparison is to the 1981-82 recession - the unemployment rate peaked at 10.8% at the end of 1982, but the dynamics of duration were not nearly as severe).
People with spells of long-term unemployment have a harder time finding jobs.  But looking at the unemployed leaves out those who left the labor force entirely.  The last several years have seen a significant drop in labor force participation rates, even among people aged 25-54 (focusing on this group is a rough way to control for the drop in overall participation due to an aging population, though as this Calculated Risk post notes, there is a composition effect even within the 'prime age' group).
The labor market clearly is not as robust as the headline unemployment rate suggests.

What are the implications for monetary policy of having a high proportion of long-term unemployed, and possibly a substantial latent group of unemployed who have left the labor force?  One answer is suggested by this St Louis Fed blog post by Stephen Williamson:
[I]f we think of the long-term unemployed as being subject to the mismatch problem and highly likely to leave the labor force, then these unemployed workers are not contributing much to labor market slack. They are unlikely to be hired under any conditions. 
That is, the unemployment (and presumably the depressed participation rate, too) is "structural" in nature, and not amenable to any improvement in aggregate demand that might be generated with expansionary monetary policy.

An alternative view is that the long-term unemployed, and some of those who have exited the labor force, could be brought back into employment by particularly strong aggregate demand - what used to be called a "high pressure" economy.  This would be possible if the forces of hysteresis work in both directions, as this 1999 paper by Laurence Ball suggested.

That seemed to me to be what Janet Yellen was hinting at in her September speech at UMass-Amherst when she said:
Reducing slack along these other dimensions may involve a temporary decline in the unemployment rate somewhat below the level that is estimated to be consistent, in the longer run, with inflation stabilizing at 2 percent. For example, attracting discouraged workers back into the labor force may require a period of especially plentiful employment opportunities and strong hiring. Similarly, firms may be unwilling to restructure their operations to use more full-time workers until they encounter greater difficulty filling part-time positions. Beyond these considerations, a modest decline in the unemployment rate below its long-run level for a time would, by increasing resource utilization, also have the benefit of speeding the return to 2 percent inflation. Finally, albeit more speculatively, such an environment might help reverse some of the significant supply-side damage that appears to have occurred in recent years, thereby improving Americans' standard of living.
It seems to me that doing this would likely entail the Fed overshooting its 2% inflation target.  I have my doubts about their willingness to do this (and Yellen certainly did not suggest it).  And for it to work, inflation expectations would need to remain anchored (i.e., if any additional inflation just ratcheted up expectations, it would not bring unemployment down).

Sunday, September 20, 2015

One of These Things is Not Like the Others

Among the steps the Fed has taken to increase transparency in recent years is the release of projections by the board members and regional bank presidents.  This includes the "dot plot" indicating each participant's belief about what the appropriate federal funds rate target will be at the end of this year and the next three years.

One of the dots from the latest release (which I've indicated with a red arrow) shows a preference for a negative fed funds rate this year and next, and a much lower rate than everyone else expects at the end of 2017.
People on Twitter seem to think it's most likely Minneapolis Fed President Kocherlakota's dot.  It called to mind this, from the deep recesses of childhood memory:
(That's from Sesame Street). 

Of more serious interest, the projections also included a reduction in the median "longer run" federal funds target, to 3.5%, from 3.8% at the last release in June, and also a lower estimate of the "longer run" unemployment rate, which might be taken as a proxy for a NAIRU estimate (see Krugman).

Friday, September 11, 2015

Rodrik on Economic Models

There is an excellent piece by Dani Rodrik on economic methodology at Project Syndicate:
Economics is not the kind of science in which there could ever be one true model that works best in all contexts. The point is not “to reach a consensus about which model is right,” as Romer puts it, but to figure out which model applies best in a given setting. And doing that will always remain a craft, not a science, especially when the choice has to be made in real time.

The social world differs from the physical world because it is man-made and hence almost infinitely malleable. So, unlike the natural sciences, economics advances scientifically not by replacing old models with better ones, but by expanding its library of models, with each shedding light on a different social contingency.
Or, as Keynes put it, "Economics is the science of thinking in terms of models joined to the art of choosing models which are relevant to the contemporary world." 

Rodrik goes on to discuss Borges' story "On Exactitude in Science" - a parable about cartographers who make a map on the same scale as the world it was meant to represent.  This story, which was our reading for Econ 110 yesterday, illustrates the point that "more realistic" isn't necessarily better.