Sunday, July 17, 2016

Lies, Damned Lies and Ireland's GDP

Being an academic economist can be humbling - while it involves learning lots of esoteric stuff, it also makes one much more aware of how much one doesn't know.  But I know this much is true: the total amount of goods and services produced in Ireland did not increase by 26.3% in 2015.

But that's what Ireland's Central Statistics Office has reported.  It's not entirely clear how they came to such a (literally) incredible figure for real GDP growth.   The expenditure approach to calculating GDP adds up purchases of new final goods and services in four categories - consumption (C), investment (I), government purchases (G) and net exports (NX).  I took the data from table 6 of their release and normalized each component to 100 in 2010 to illustrate how the change is driven by large jumps in I and NX.
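
The normalization itself is simple - here is a little Python sketch (the numbers below are placeholders for illustration, not the CSO's actual figures from table 6):

    # Hypothetical levels for the expenditure components of GDP = C + I + G + NX
    # (billions of euro).  These are placeholders, not the CSO's actual figures.
    components = {
        "C":  {2010: 84.0, 2014: 88.0, 2015: 92.0},
        "I":  {2010: 29.0, 2014: 36.0, 2015: 52.0},
        "G":  {2010: 31.0, 2014: 29.0, 2015: 30.0},
        "NX": {2010: 33.0, 2014: 39.0, 2015: 61.0},
    }

    # Index each component to 100 in its 2010 value.
    indexed = {
        name: {year: 100.0 * value / series[2010] for year, value in series.items()}
        for name, series in components.items()
    }

    for name, series in indexed.items():
        print(name, {year: round(v, 1) for year, v in sorted(series.items())})
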
News reports have focused on the activities of multinational corporations, particularly on "inversions," which involve transferring their legal headquarters to Ireland in order to take advantage of its low corporate tax rates.

That may be correct, but it's not an entirely satisfying explanation.  GDP is supposed to measure the value of goods and services produced in a country.  The legal domicile of the corporations producing it is irrelevant.  Ownership of capital - both physical machinery, equipment and structures and also intangible forms (intellectual property, etc.) - is likewise irrelevant, so the fact that multinational corporations like to hold IP in Ireland for tax reasons shouldn't matter in principle.

Gross National Product (GNP) adds up the total value of goods and services produced with resources owned by a country's citizens.  Since many multinational corporations operate in Ireland, it has higher GDP than GNP because some of its GDP is produced using foreign-owned capital - the same data release showed a shocking increase of 18.7% in Ireland's GNP last year.
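
The accounting identity linking the two is GNP = GDP + net factor income from abroad, which for Ireland is a large negative number.  A toy Python illustration (the numbers are invented, not Ireland's):

    # GNP = GDP + net factor income from abroad (income earned abroad by residents
    # minus income earned domestically by foreign-owned factors of production).
    # Invented numbers for illustration only.
    gdp = 250.0                # hypothetical GDP, billions of euro
    net_factor_income = -45.0  # hypothetical net factor income from abroad
    gnp = gdp + net_factor_income
    print(f"GDP = {gdp}, GNP = {gnp}")  # GNP < GDP, as in the Irish data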

The information released by the CSO is not very detailed.  FT Alphaville's Matthew Klein, Bloomberg columnist Leonid Bershidsky, and Seamus Coffey at the Irish Economy blog have made useful attempts at sorting things out.

Clearly, the way the CSO is calculating GDP fails to correspond to the concept it is meant to measure.  Statistical agencies need to provide estimates calculated in a consistent fashion, so it wouldn't have been appropriate for them to suddenly decide to calculate it differently because they got a strange number.  But the methods for estimating GDP are not etched in stone.  If the legal and accounting maneuvers of multinationals are distorting some of the source data, statistical agencies need to find a way of correcting for it.  Standards for national accounts are coordinated internationally, and it is not clear whether the problems in this report are due to something the CSO is doing or to a more general methodological issue that happens to be most apparent because of Ireland's unique circumstances.

Statistical agencies update their methods and revise their estimates regularly - e.g., in 2013, the US BEA did a "comprehensive revision" that began treating the development of intellectual property as a component of investment (see this earlier blog post).  I hope the CSO will release more information to help everyone better understand what's gone wrong with their figures.  That will be a first step towards correcting the method used to estimate GDP and producing a revised set of figures - one which will not show a 26% increase in real GDP in 2015.

Update: Procedures mandated by Eurostat are mostly to blame, writes Colm McCarthy.

Tuesday, April 19, 2016

Hysteresis in a New Keynesian Model

I've posted a draft of my research paper, "Hysteresis in a New Keynesian Model," on my website.  The paper proposes a way of modeling hysteresis and integrates it into a New Keynesian macro model.

The widely noted rightward shift of the Beveridge curve relationship can be interpreted as evidence of less efficient matching between employers and workers in the labor market.

In the paper, I argue that less-efficient matching is related to the increased duration of unemployment spells seen during the last recession and its aftermath.  There are several reasons why matching may be less efficient with a higher proportion of long-term unemployed:
  1. a loss of information as workers' informal networks may dry up over time
  2. a stigma associated with long-term unemployment (i.e., it acts as a negative signal)
  3. decreased search effort by long-term unemployed
I cite empirical evidence for (2) and (3) in the paper.   The paper does not take a stand on the mechanism causing the relationship between matching efficiency and duration of unemployment.

The model includes a Diamond-Mortensen-Pissarides search-and-matching labor market framework, where hires (H) are a function of the number of vacancies (V) posted by firms and the number of unemployed (U) workers.
Hysteresis is modeled with the assumption that matching efficiency (A) is a decreasing function of the average duration of unemployment spells.
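
The paper's exact specification is in the draft; as a rough sketch, assume a standard Cobb-Douglas matching function H = A * U^alpha * V^(1-alpha), with efficiency A declining in the average duration of unemployment.  The particular A(d) function below is purely illustrative, not the paper's calibration:

    # Cobb-Douglas matching function with duration-dependent efficiency.
    ALPHA = 0.5

    def matching_efficiency(avg_duration, a_bar=1.0, decay=0.02):
        """Efficiency falls as the average unemployment spell (in weeks) lengthens."""
        return a_bar / (1.0 + decay * avg_duration)

    def hires(unemployed, vacancies, avg_duration):
        A = matching_efficiency(avg_duration)
        return A * unemployed**ALPHA * vacancies**(1.0 - ALPHA)

    # Same U and V, but a longer average duration means fewer hires.
    print(hires(8.0, 5.0, avg_duration=15.0))
    print(hires(8.0, 5.0, avg_duration=40.0))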

Rearranging the matching function to solve for efficiency (and setting alpha to 0.5), we can see that a decrease in efficiency coincides with the increase in the duration of unemployment spells.
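
In other words, measured matching efficiency can be backed out from data on hires, unemployment and vacancies much like a Solow residual: A = H / (U^0.5 * V^0.5).  A quick sketch with made-up monthly numbers (not actual JOLTS or CPS data):

    # Back out implied matching efficiency A = H / (U**0.5 * V**0.5).
    # The series below are invented for illustration (millions of workers).
    months = ["2007-01", "2009-06", "2011-06"]
    H = [5.0, 3.5, 4.0]    # hires
    U = [7.0, 14.5, 14.0]  # unemployed
    V = [4.5, 2.5, 3.0]    # vacancies

    for month, h, u, v in zip(months, H, U, V):
        efficiency = h / (u**0.5 * v**0.5)
        print(month, round(efficiency, 3))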

With hysteresis, the response of unemployment to a negative productivity shock is smaller initially but more persistent, as shown by the impulse response functions:
The reason for this is that, in the absence of hysteresis, firms can adjust their labor input quickly by sharply decreasing their vacancy posting.  With hysteresis, the response of vacancies is less dramatic because firms take into account the fact that hiring will be more difficult in the future due to the decline in matching efficiency.

The model also considers demand shocks, which take the form of shocks to the discount factor, and monetary shocks (deviations from the Taylor rule), with similar results.  Overall, hysteresis acts as a mechanism that increases the persistence of the response of macroeconomic variables to shocks.  Since macro models struggle to generate endogenous persistence, this may be one of the main selling points of the paper.

Hysteresis also generates movements in the "natural rate" of unemployment, which I proxy by computing the amount of unemployment that would occur if wages and prices were flexible, taking as given the evolution of matching efficiency (A) from the baseline model.  The green line shows the change in the natural rate in response to a negative productivity shock:
Note: this is a revised version of the draft circulated last fall as my "job market paper".

Tuesday, March 1, 2016

Carbon Taxes

For an economist, one of the biggest frustrations of discussions over climate policy is that we know pretty well what to do - tax carbon (or set up a system of tradable permits, which has similar effects) - and that doing so will not be harmful to the economy.  People and firms respond to incentives, and a carbon tax will motivate them to find the lowest-cost ways to reduce emissions.  People are clever, and the cost of reducing emissions will likely be much less than many envision.

Eduardo Porter's column about British Columbia's carbon tax is yet another illustration of this; he writes:
In 2008, the British Columbia Liberal Party, which confoundingly leans right, introduced a tax on the carbon emissions of businesses and families, cars and trucks, factories and homes across the province. The party stuck to the tax even as the left-leaning New Democratic Party challenged it in provincial elections the next year under the slogan Axe the Tax. The conservatives won soundly at the polls.

Their experience shows that cutting carbon emissions enough to make a difference in preventing global warming remains a difficult challenge. But the most important takeaway for American skeptics is that the policy basically worked as advertised.

British Columbia’s economy did not collapse. In fact, the provincial economy grew faster than its neighbors’ even as its greenhouse gas emissions declined.

“It performed better on all fronts than I think any of us expected,” said Mary Polak, the province’s environment minister. “To the extent that the people who modeled it predicted this, I’m not sure that those of us on the policy end of it really believed it.”

Monday, February 8, 2016

Productivity Pessimism

I'm hoping I'll have a chance to read Robert Gordon's new book soon, though one of the ironies of being a college professor is that the job doesn't seem to leave much time to read.  Fortunately, Gordon presents a condensed version of the argument in a recent Bloomberg View column, where he explains that he doesn't expect a return to the rapid productivity growth of the mid-20th century.  He writes:
The 1920-70 expansion grew out of the second industrial revolution, when fossil fuels, the internal-combustion engine, advanced metals and factory automation came together to produce electric lighting, indoor plumbing, home appliances, motor vehicles, air travel, air conditioning, television and much longer life expectancy.
The "third industrial revolution" - computers and the internet - is less significant, in his view:
Although revolutionary, the Internet's effects were limited when compared with the second industrial revolution, which changed everything. The former had little effect on purchases of food, clothing, cars, furniture, fuel and appliances. A pedicure is a pedicure whether the customer is reading a magazine or surfing the web on a smartphone. Computers aren't everywhere: We don’t eat, wear or drive them to work. We don't let them cut our hair. We live in dwellings that have appliances much like those of the 1950s and we drive in motor vehicles that perform the same functions as in the 1950s, albeit with more convenience and safety.
Our main measure of technological progress is total factor productivity (tfp) growth, which is sometimes called the "Solow residual" because it is calculated as a leftover, by subtracting from output growth the portions that can be explained by changes in capital and labor.  That is, it is the growth that would occur even if there were no change in the factors of production.
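
Concretely, with a Cobb-Douglas production function and a capital share alpha (roughly one third in standard growth accounting), tfp growth is the residual g_A = g_Y - alpha * g_K - (1 - alpha) * g_N.  A minimal Python sketch with illustrative growth rates:

    # Growth accounting: tfp growth as the Solow residual, assuming
    # Y = A * K**alpha * N**(1 - alpha) with a capital share of about one third.
    ALPHA = 1.0 / 3.0  # capital share of income (standard assumption)

    def solow_residual(g_output, g_capital, g_labor, alpha=ALPHA):
        """Output growth not explained by growth in capital and labor inputs."""
        return g_output - alpha * g_capital - (1.0 - alpha) * g_labor

    # Example: 3% output growth, 4% capital growth, 1% labor growth.
    print(round(solow_residual(0.03, 0.04, 0.01), 4))  # about 1% tfp growth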

Turning points in tfp growth can be hard to identify because the data are somewhat volatile from year to year and have a cyclical component.  With hindsight, economists identified a productivity slowdown around 1973 and a resurgence - with information technology playing a leading role - in the mid-1990s.  However, tfp growth has generally been weak since 2005, raising the question of whether the IT-led productivity boom is over.

This San Francisco Fed Letter from last year discusses some of the reasons for the productivity slowdown.  Gordon's book was the subject of an Eduardo Porter column and a Paul Krugman review.

Update (2/25): In an interview with Ezra Klein, Bill Gates argues against Gordon's view.

Wednesday, December 2, 2015

Fight for Our Principles!

Principles of Macroeconomics, that is, since it is under attack from Noah Smith, who argues for eliminating introductory macroeconomics classes.  In a blog post, he writes:
Why should undergrads learn macro in their first year of econ? If they go on to be econ majors they can easily start out with intermediate macro and not miss anything important. If they just take the first-year econ sequence and then go into the business world, what do they really need to know?
I think this badly misunderstands what we're trying to accomplish in an introductory-level course - principles of macroeconomics is not about preparing students for business careers (though business students certainly should take it - as should everyone else).  The two main benefits of taking an introductory macroeconomics course are:

First, it prepares students to be more knowledgeable and effective citizens.  Among other things, students come away from the class able to interpret data like unemployment, inflation and GDP that they read about in the news.  They learn some basic facts about taxation and government spending that can help them evaluate claims made by politicians.  The Federal Reserve is pretty mysterious to most people - students learn what it does, how monetary policy works, and the basics of how a banking/financial crisis can occur.  This seems particularly valuable at a time when the Fed is facing political criticism that is at least partly based on its widely misunderstood response to a financial crisis.

Second, working with economic models develops thinking and mathematical skills.  Smith makes the point that the models we teach in an intro class have their flaws (as do the models we teach in PhD-level classes...), though I still think they're quite useful for thinking about a number of issues.  But the act of manipulating a model and working out how assumptions are linked to conclusions helps students become sharper thinkers, and this stays with them long after they've forgotten the specifics of any particular model.

At my current station, I'm teaching a one-semester introductory course that covers both micro and macro topics, but I had a full-semester macro principles course at my previous stop - an outline of what I did is posted here.  The time students have in college is a very scarce resource, and the opportunity cost of any college class is very high, but I think principles of macroeconomics is almost always a good choice.  Though perhaps I'm a little biased...

Sunday, November 15, 2015

Hysteresis and Monetary Policy

In the Washington Post last week, Larry Summers wrote about some new research finding evidence of "hysteresis."  This is a term borrowed from the natural sciences for when temporary occurrences have lasting effects - e.g., when you hold a magnet up to a piece of metal, the metal remains magnetized even after you remove the magnet.  In macroeconomics, hysteresis occurs when an economic downturn has a lasting effect on economic capacity (i.e., reduced "potential output"); that is, lack of demand creates its own lack of supply.

Hysteresis could occur through a number of channels. Consider an economy described by an aggregate production function Y* = AF(K,N*) where potential GDP (Y*) depends on productivity (A),  capital (K) and labor at its "natural" or "full-employment" level, N*.  A recession occurs when output falls below Y* and labor is below N* (i.e., there is unemployment in excess of the "natural rate").  Hysteresis implies that there is a lasting impact on Y* - this could occur through technology, capital or labor.
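
To make the channels concrete, here is a toy Cobb-Douglas version of that production function (the parameter values are illustrative, not estimates of anything): a persistent hit to A, K or N* each translates into lower potential output.

    # Toy Cobb-Douglas version of Y* = A * F(K, N*): each hysteresis channel
    # (productivity, capital, labor) lowers potential output.
    ALPHA = 1.0 / 3.0

    def potential_output(A, K, N_star, alpha=ALPHA):
        return A * K**alpha * N_star**(1.0 - alpha)

    baseline = potential_output(A=1.0,  K=100.0, N_star=150.0)
    lower_A  = potential_output(A=0.98, K=100.0, N_star=150.0)  # slower productivity
    lower_K  = potential_output(A=1.0,  K=95.0,  N_star=150.0)  # weak investment
    lower_N  = potential_output(A=1.0,  K=100.0, N_star=145.0)  # lower participation

    for label, y in [("baseline", baseline), ("lower A", lower_A),
                     ("lower K", lower_K), ("lower N*", lower_N)]:
        print(label, round(100.0 * (y / baseline - 1.0), 2), "% vs. baseline")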

All three channels could be operative.  In the past several years, productivity growth has been sluggish, though it's not clear if this is linked with the recession (productivity trends are always somewhat mysterious).  The recovery of investment (the rate of flow into the stock of capital) from the recession has been less than spectacular, even taking out housing - the share of GDP devoted to nonresidential fixed investment is somewhat below its peak in previous expansions.

Here, I want to focus on labor, where the hysteresis effects are pretty evident, and raise an interesting policy dilemma. 

Although the unemployment rate has fallen to what we might consider a reasonably healthy level of 5% (the normal turnover of a healthy labor market generates some unemployment so we never expect it to get to zero), the labor market still clearly bears the scars of the 2008-09 recession.

The duration of unemployment spells rose to unprecedented levels and has remained elevated (a useful comparison is to the 1981-82 recession - the unemployment rate peaked at 10.8% at the end of 1982, but the dynamics of duration were not nearly as severe).
People with spells of long-term unemployment have a harder time finding jobs.  But looking at the unemployed leaves out those who left the labor force entirely.  The last several years have seen a significant drop in labor force participation rates, even among people aged 25-54 (focusing on this group is a rough way to control for the drop in overall participation due to an aging population, though as this Calculated Risk post notes, there is a composition effect even within the 'prime age' group).
The labor market clearly is not as robust as the headline unemployment rate suggests.

What are the implications for monetary policy of having a high proportion of long-term unemployed, and possibly a substantial latent group of unemployed who have left the labor force?  One answer is suggested by this St Louis Fed blog post by Stephen Williamson:
[I]f we think of the long-term unemployed as being subject to the mismatch problem and highly likely to leave the labor force, then these unemployed workers are not contributing much to labor market slack. They are unlikely to be hired under any conditions. 
That is, the unemployment (and presumably the depressed participation rate, too) is "structural" in nature, and not amenable to any improvement in aggregate demand that might be generated with expansionary monetary policy.

An alternative view is that the long-term unemployed, and some of those who have exited the labor force, could be brought back into employment by particularly strong aggregate demand - what used to be called a "high pressure" economy.  This would be possible if the forces of hysteresis work in both directions, as this 1999 paper by Laurence Ball suggested.

That seemed to me to be what Janet Yellen was hinting at in her September speech at UMass-Amherst when she said:
Reducing slack along these other dimensions may involve a temporary decline in the unemployment rate somewhat below the level that is estimated to be consistent, in the longer run, with inflation stabilizing at 2 percent. For example, attracting discouraged workers back into the labor force may require a period of especially plentiful employment opportunities and strong hiring. Similarly, firms may be unwilling to restructure their operations to use more full-time workers until they encounter greater difficulty filling part-time positions. Beyond these considerations, a modest decline in the unemployment rate below its long-run level for a time would, by increasing resource utilization, also have the benefit of speeding the return to 2 percent inflation. Finally, albeit more speculatively, such an environment might help reverse some of the significant supply-side damage that appears to have occurred in recent years, thereby improving Americans' standard of living.
It seems to me that doing this would likely entail the Fed overshooting its 2% inflation target.  I have my doubts about the Fed's willingness to do this (and Yellen certainly did not suggest it).  And for it to work, inflation expectations would need to remain anchored (i.e., if any additional inflation just ratcheted up expectations, it would not bring unemployment down).

Sunday, September 20, 2015

One of These Things is Not Like the Others

Among the steps the Fed has taken to increase transparency in recent years is the release of projections by the board members and regional bank presidents.  This includes the "dot plot" indicating each participant's belief about what the appropriate federal funds rate target will be at the end of this year and the next three years.

One of the dots from the latest release (I've indicated with a red arrow) shows a preference for a negative fed funds rate this year and next, and a much lower rate than everyone else expects at the end of 2017.
People on Twitter seem to think it's most likely Minneapolis Fed President Kocherlakota's dot.  It called to mind this, from the deep recesses of childhood memory:
(That's from Sesame Street). 

Of more serious interest, the projections also included a reduction in the median "longer run" federal funds target, to 3.5% from 3.8% at the last release in June, as well as a lower estimate of the "longer run" unemployment rate, which might be taken as a proxy for a NAIRU estimate (see Krugman).

Friday, September 11, 2015

Rodrik on Economic Models

There is an excellent piece by Dani Rodrik on economic methodology at Project Syndicate:
Economics is not the kind of science in which there could ever be one true model that works best in all contexts. The point is not “to reach a consensus about which model is right,” as Romer puts it, but to figure out which model applies best in a given setting. And doing that will always remain a craft, not a science, especially when the choice has to be made in real time.

The social world differs from the physical world because it is man-made and hence almost infinitely malleable. So, unlike the natural sciences, economics advances scientifically not by replacing old models with better ones, but by expanding its library of models, with each shedding light on a different social contingency.
Or, as Keynes put it, "Economics is the science of thinking in terms of models joined to the art of choosing models which are relevant to the contemporary world." 

Rodrik goes on to discuss Borges' story "On Exactitude in Science" - a parable about cartographers who make a map at the same scale as the territory it is meant to represent.  This story, which was our reading for Econ 110 yesterday, illustrates the point that "more realistic" isn't necessarily better.

Thursday, September 10, 2015

A Note on "Credibility"

Fed watchers are speculating that the FOMC meeting later this month might be the occasion to raise the federal funds rate target off the "zero lower bound," where it has been since December 2008.  In a column arguing against such a move, Larry Summers writes:
From the Depression to the Vietnam War to the Iraq war to the euro crisis, we surely should learn that policymakers who elevate credibility over responding to clear realities make grave errors. The best way the Fed can maintain and enhance its credibility is to support a fully employed American economy achieving its inflation target with stable financial conditions. The greatest damage it could do to its credibility would be to embrace central-banking shibboleth disconnected from current economic reality.
At the Fiscal Times, Mark Thoma writes:
The inflation problems of the 1970s, the loss of Fed credibility that came with it, and the need to impose the Volcker recession in the early 1980s to bring inflation down to tolerable levels made an indelible impression on policymakers who lived through that time period. The Fed’s trigger-happy response to any suggestion of an inflation problem is directly related to the desire to never let such an inflation outburst happen again.

But it has been more than four decades since the beginning of the inflation problems of the 1970s, and the economic environment in which monetary policy operates has changed considerably since that time. Those changes support patience, particularly in response to increases in wages, wages that have been stagnant since the 1970s even as labor productivity has been increasing.
The "credibility" argument in monetary policy is based on the idea that the central bank will be tempted to use inflation to "overheat" the economy and bring unemployment down below its "natural" (or equilibrium) levels for political reasons - e.g., to help the incumbent party in an election year.  Any gains would be, at best, short-lived, as people would incorporate a higher level of inflation into their expectations and set wages and prices accordingly.  Based on this logic - which seems helpful for interpreting how we got into the "stagflation" of the 1970s - economists look for policies and institutional structures to correct this perceived inflationary bias.

In the past several years, this logic seems turned on its head.  If anything, the biases of our monetary policymakers appear to be in the other direction.  Inflation continues to be subdued, as this plot of one of the Fed's preferred measures, the "core" deflator for personal consumption expenditures, shows:
The red line is drawn at 2%.  Measures of expected inflation are also below 2%.  David Beckworth recently argued that the Fed is acting as if 2% is a ceiling, not a target - he suggests the Fed's behavior is consistent with it aiming to keep inflation between 1% and 2%.

But the Fed declared in 2012: "The Committee judges that inflation at the rate of 2 percent, as measured by the annual change in the price index for personal consumption expenditures, is most consistent over the longer run with the Federal Reserve's statutory mandate." If the goal is to "anchor" expectations at 2%, the Fed is at risk of failing, but the greater threat to its credibility seems to be too little inflation, not too much.

Saturday, September 5, 2015

China and the Solow Model

Last month, just before China let its currency depreciate and its stock market crashed, the San Francisco Fed published a nice Economic Letter by Zheng Liu, "Is China's Growth Miracle Over?"

China's rapid, but decelerating, growth is broadly consistent with the implications of the classic Solow growth model we teach our intermediate macroeconomics students.  This model predicts that low-income countries should grow quickly, but growth will slow down as they approach the leading countries, whose per-capita growth is constrained by the rate of technological progress. That is, there should be "convergence" in per capita GDP.
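
A quick simulation of that convergence prediction, using textbook Solow dynamics in per-worker terms (no population or technology growth; parameters illustrative rather than calibrated to China):

    # Textbook Solow dynamics: k_next = s * k**alpha + (1 - delta) * k.
    s, alpha, delta = 0.3, 1.0 / 3.0, 0.05

    def income_path(k0, periods=50):
        k, path = k0, []
        for _ in range(periods):
            y = k**alpha     # output per worker
            path.append(y)
            k = s * y + (1.0 - delta) * k
        return path

    rich = income_path(k0=14.7)  # starts near its steady state
    poor = income_path(k0=2.0)   # starts capital-scarce

    # The capital-scarce economy grows faster, so the income gap narrows.
    for t in (0, 10, 30, 49):
        print(t, round(poor[t] / rich[t], 3))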

As this chart from the letter shows, China is following a similar path to Korea and Japan.
The basic intuition from the model comes from the idea of diminishing marginal product of capital - i.e., where capital (machinery and equipment) is scarce, the increase in output from adding an additional unit is greater than where it is already abundant.  This diagram of output per capita (y) as a function of capital per capita (k) illustrates,
where the slope is the marginal product of capital (MPk).
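
In per-capita terms, with y = k^alpha the slope of that diagram is MPk = alpha * k^(alpha - 1), which falls as k rises.  A quick illustration (alpha = 1/3, values illustrative):

    # Diminishing marginal product of capital: with y = k**alpha,
    # MPk = alpha * k**(alpha - 1) falls as capital per worker rises.
    alpha = 1.0 / 3.0

    def mpk(k):
        return alpha * k**(alpha - 1.0)

    for k in (1.0, 5.0, 20.0, 80.0):
        print(f"k = {k:5.1f}  MPk = {mpk(k):.3f}")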

The idea can be extended to include "human capital" (i.e., knowledge and skills), as Mankiw, Romer and Weil did in a 1992 paper.

While the Solow model gets the broad contours of the growth experiences of Korea, Japan and (it seems so far) China correct (and does pretty well for the US as well), it does miss a couple of big things:

(1) A diminishing marginal product of capital implies that the financial rewards to investing in a low-income country should be vastly higher than in high-income countries.  In a world where people can invest across borders, this implies huge incentives for financial flows from high-income to low-income countries, but we do not observe such large net flows.  This was the puzzle Robert Lucas noted in a 1990 paper (a rough version of his calculation is sketched below, after point (2)).

(2) While the experiences of some low-income countries are consistent with the convergence hypothesis, in many cases low-income countries have fallen further behind (or, as Lant Pritchett put it, experienced "Divergence, Big Time").  From the standpoint of the Solow model, growth "miracles" like those of Korea are to be expected; the real puzzle is the failure of so many countries to converge.
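
To put a rough number on point (1), in the spirit of Lucas's back-of-the-envelope calculation: if two countries share the same technology y = A * k^alpha, an income gap implies MPk_poor / MPk_rich = (y_rich / y_poor)^((1 - alpha) / alpha).  The figures below (a capital share of 0.4 and a 15-fold income gap) are illustrative round numbers:

    # Lucas (1990)-style calculation: with a common technology y = A * k**alpha,
    # MPk is proportional to y**(-(1 - alpha) / alpha), so an income gap implies
    # an enormous gap in the return to capital.
    alpha = 0.4
    income_ratio = 15.0  # rich-country income per capita / poor-country

    mpk_ratio = income_ratio ** ((1.0 - alpha) / alpha)
    print(round(mpk_ratio, 1))  # capital should earn far more in the poor country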

As Moses Abramovitz pointed out in 1986, it is usually a subset of the low-income countries that are growing fastest.  This would suggest there are forces for convergence, but something is preventing them from applying everywhere.  Current thinking is that the answer lies in "institutions" - the set of legal rights, culture, and governance which shape the economic environment and incentives for people to take actions within it, including to accumulate capital.

This is where assuming that China will continue to follow in the convergence footsteps of Korea and Japan may be questionable.  While China's institutions have gotten it this far, there are reasons to doubt whether they are appropriate for achieving levels of GDP per capita comparable to Korea, Japan and Europe, as this column by Brad DeLong and this by Eduardo Porter discuss.  That said, the institutions in the US during its late 19th century industrialization were hardly what an economist would recommend (in particular, corruption was rampant), and yet it somehow managed to take over leadership in per capita GDP from Britain.