Friday, November 7, 2014

October Employment

A good report today from the BLS on employment in October:  the unemployment rate fell to 5.8% (from 5.9%) and employers' payrolls rose by 218,000.
The payroll figure comes from a survey of firms, while the unemployment rate is based on a survey of households (which has a smaller sample than the employer survey).  The household survey figures look even better: the number of people employed rose by 683,000, and the number unemployed fell by 267,000.  The labor force (i.e., people who are working or looking for work) rose by 416,000, which put the labor force participation rate at 62.8%, an increase from last month's historic low of 62.7%. 
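The household-survey numbers hang together because of a simple accounting identity: the labor force is the sum of the employed and the unemployed, so the monthly changes must add up too. A quick check, using the figures from the post (in thousands):

```python
# Labor force = employed + unemployed, so the changes must sum:
delta_employed = 683      # change in number employed (thousands)
delta_unemployed = -267   # change in number unemployed (thousands)

delta_labor_force = delta_employed + delta_unemployed
print(delta_labor_force)  # 416, matching the reported rise in the labor force
```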

The decline in labor force participation (which was at 66% in late 2007) has been one of the worrying trends of the past several years.  It partly reflects demographics, though, as the population is becoming older and a larger portion of the population is of retirement age.  Looking at the employment-population ratio for 25-54 year olds gives a picture of the labor market that takes out some of the guesswork in interpreting participation:
This ratio increased from 76.7 to 76.9 in October.  Overall, it shows some recovery over the past three years, but also gives an indication of why many Americans remain unhappy with the state of the economy - it is still less than halfway back from its low point to its pre-recession level.

Moreover, while employment is improving, wages are still growing slowly - the BLS reports that average hourly wages have increased 2% over the past year.  This suggests that there is still plenty of "slack" in the labor market.

The BLS' broader measure of un- and under-employment, "U-6", which includes the "marginally attached" and people working part-time who want to be full-time, is at 11.5%, down from 11.8% last month (it peaked at 17.2% in April 2010).

Wednesday, September 17, 2014

Information Overload

We've arrived at the point in Econ 302 this semester where we're reading Paul David's "The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox," which means that I find myself again marveling at the prescience of this:
That's a pretty amazing thing to have written in 1989! (The paper was published in the May 1990 American Economic Review, which contains papers presented at the meetings in early January 1990.)

For a nice (and un-gated) summary of David's argument, see this Tim Harford piece in Slate.

Friday, September 5, 2014

August Employment

According to the BLS, employment rose by 142,000 in August and the unemployment rate ticked down to 6.1%.
That's consistent with the picture of a continuing, but painfully slow, recovery that has predominated over the past several years, though this particular report was a little on the disappointing side.

The employment figure comes from a survey of firms, while the unemployment rate is based on a survey of households, which has a smaller sample.  According to the household survey, 80,000 fewer people were unemployed, but only 16,000 more were working - the difference is accounted for by 64,000 departures from the labor force (i.e., adults who are working or looking for work).  Such decreases in labor force participation are not an encouraging sign.

However, labor force participation is a little bit difficult to interpret because demographic change (more people reaching retirement age, etc.) plays a role as well.  My preferred measure of the state of the labor market is the share of 25-54 year-olds who are working - this takes out the guesswork about demographics and participation.  This measure rose in August, to 76.8% (from 76.4%).
That's up from a low of 74.8% in November 2010, but still well below pre-recession levels.  Employment is continuing to crawl out of the hole we dug in 2008-09, but we're less than halfway there.  Any talk of returning to "normal" monetary policy seems premature to me - things may be getting slightly better, but the situation is still quite bad.

Friday, August 22, 2014

Europe in Depression

In a post back in 2012, when things were looking pretty hairy for the euro, I said it would be "a real human disaster if the euro cracked up in a crisis."  Two years later, fears of a calamitous exit by the "peripheral" Eurozone countries have eased (as evidenced by reduced bond yields).  The euro appears to have been saved - and it has been a real human disaster nonetheless.

At wonkblog, Matt O'Brien writes:
As I've said before, the euro is the gold standard with moral authority. And that last part is the problem. Europeans don't think the euro represents civilization, but rather the defense of it. It's a paper monument to peace and prosperity that's made the latter impossible. So the eurocrats who have spent their lives building it are never going to tear it down, despite the fact that, as it's currently constructed, the euro is standing between them and recovery.

Just like the 1930s, Europe is stuck with a fixed exchange system that doesn't let them print, spend, or devalue their way out of a crisis. But, unlike then, Europe might never give it up. It's a fidelity to failure that even the gold bloc couldn't have imagined.
Unemployment rates are above 20% in Spain and Greece, and above 10% in Portugal, Italy, Ireland, France, Cyprus, Slovakia and Slovenia:
Ambrose Evans-Pritchard spoke to several economics Nobel laureates:
An array of Nobel economists have launched a blistering attack on the eurozone's economic strategy, warning that contractionary policies risk years of depression and a fresh eruption of the debt crisis.
"Historians are going to tar and feather Europe's central bankers," said Professor Peter Diamond, the world's leading expert on unemployment. "Young people in Spain and Italy who hit the job market in this recession are going to be affected for decades. It is a terrible outcome, and it is surprising how little uproar there has been over policies that are so stunningly destructive," he told The Telegraph at a gathering of Nobel laureates at Lake Constance...
Professor Joseph Stiglitz said austerity policies had been a "disastrous failure" and are directly responsible for the failed recovery over the first half of this year, with Italy falling into a triple-dip recession, France registering zero growth and even Germany contracting in the second quarter.
"There is a risk of a depression lasting years, leaving even Japan's Lost Decade in the shade. The eurozone economy is 20pc below its trend growth rate," he said...
Professor Christopher Sims, a US expert on monetary policy, said EMU policy makers had not sorted out the basic design flaws in monetary union, and are driving Club Med nations into deeper trouble by imposing pro-cyclical austerity.
"If I were advising Greece, Portugal or even Spain, I would tell them to prepare contingency plans to leave the euro. There is no point being in EMU if all that happens when you are hit with a shock is that the shock gets worse," he said.
"It would be very costly to leave the euro, a form of default, but staying in the euro is also very costly for these countries. The Europeans have created a system that is worse than the Gold Standard. Countries are in the same position as Latin American states that borrowed in dollars," he said.
It may be a slightly hopeful sign that François Hollande is coming to recognize the problem, as the Times' Liz Alderman reports:
After months of insisting that a recovery from Europe’s long debt crisis was at hand, President François Hollande on Wednesday delivered a far bleaker message. He indicated that the austerity policies France had been compelled to adopt to meet the eurozone’s budget deficit targets were making growth impossible.
Paris officials say that France — the eurozone’s second-largest economy after Germany — will no longer try to meet this year’s deficit-reduction targets, to avoid making economic matters worse. Even in abandoning those targets, they indicated that France was unlikely to recover soon from its long period of stagnation or quickly reduce its unemployment rate, which exceeds 10 percent.
“The diagnosis is clear,” Mr. Hollande said in an interview published Wednesday in the French daily Le Monde. “Due to the austerity policies of the last several years, there is a problem of demand throughout Europe, and a growth rate that is not reducing employment.”
To really make a difference, though, a more inflationary monetary policy is needed, and there is no sign of that on the horizon.

A euro breakup in 2010, 2011, or 2012 would have been disastrous for sure, but I'm beginning to wonder if it would have been worse than what we've actually seen.

Wednesday, July 23, 2014

DSGE Failing the Market Test?

The prevailing methodology of macroeconomic theory these days is "Dynamic Stochastic General Equilibrium" (DSGE) modelling.  Although many contemporary DSGE models, including the ones I'm working on, include "Keynesian" elements such as sticky prices, unemployment and financial frictions, they represent a methodological break with an older style of "Keynesian" models based on relationships among aggregate variables.  The shift in method followed from the work of Lucas and Sargent (most prominently among others) -- which John Cochrane summarized on his blog:
As I see it, the main characteristic of "equilibrium" models Lucas and Sargent inaugurated is that they put people, time, and economics into macro.

Keynesian models model aggregates. Consumption depends on income. Investment depends on interest rates. Labor supply and demand depend on wages. Money demand depends on income and interest rates. "Consumption" and "investment" and so forth are the fundamental objects to be modeled.

"Equilibrium" models (using Lucas and Sargent's word) model people and technology. People make simultaneous decisions across multiple goods, constrained by budget constraints -- if you consume more and save more, you must work more, or hold less money.  Firms  make decisions across multiple goods constrained by technology.

Putting people and their simultaneous decisions back to the center of the model generates Lucas and Sargent's main econometric conclusion -- Sims' "incredible" identifying restrictions. When people simultaneously decide consumption, saving, labor supply, then the variables describing each must spill over in to the other. There is no reason for leaving (say) wages out of the consumption equation. But the only thing distinguishing one equation from another is which variables get left out.

People make decisions thinking about the future. I think "static" vs. "intertemporal" are good words to use.  That observation goes back to Friedman: consumption depends on permanent income, including expected future income, not today's income. Decisions today are inevitably tied to expectations --rational or not -- about the future.
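The intertemporal logic Cochrane describes can be illustrated with a toy two-period consumption-smoothing problem. The log-utility form and all the numbers below are my assumptions for illustration, not anything from the post:

```python
# Two-period consumption choice with log utility:
#   max  ln(c1) + beta*ln(c2)   s.t.   c1 + c2/(1+r) = y1 + y2/(1+r)
# The Euler equation 1/c1 = beta*(1+r)/c2, combined with the budget
# constraint, gives c1 = W / (1 + beta), where W is lifetime
# ("permanent") income.

def consumption_today(y1, y2, r=0.05, beta=0.95):
    """Optimal first-period consumption given current and future income."""
    W = y1 + y2 / (1 + r)   # present value of lifetime income
    return W / (1 + beta)

# Two income paths with the same present value imply the same consumption
# today -- consumption tracks permanent income, not current income.
a = consumption_today(y1=100, y2=0)
b = consumption_today(y1=0, y2=105)
print(abs(a - b) < 1e-9)  # True
```

This is the Friedman point in miniature: shifting income between periods, holding its present value fixed, leaves today's consumption unchanged.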
A Bloomberg View column by Noah Smith nicely summarizes the methodological shift, which gained momentum from the apparent breakdown of the Phillips curve relationship between inflation and unemployment in the 1970s.  Smith writes:
Lucas showed that trying to boost gross domestic product by raising inflation might be like the tail trying to wag the dog. To avoid that kind of mistake, he and his compatriots declared, macroeconomists needed to base their models on things that wouldn’t change when government policy changed -- things like technology, or consumer preferences. And so DSGE was born. (DSGE also gave macroeconomists a chance to use a lot of cool new math tricks, which probably increased its appeal.)

OK, history lesson over. So why is this important now?

Well, for one thing, the finance industry has ignored DSGE models. That could be a big mistake! Suppose you’re a macro investor. If all you want to do is make unconditional forecasts -- say, GDP next quarter – then you can go ahead and use an old-style SEM model, because you only care about correlation, not causation. But suppose you want to make a forecast of the effect of a government policy change -- for example, suppose you want to know how the Fed’s taper will affect growth. In that case, you need to understand causation -- you need to know whether quantitative easing is actually changing people’s behavior in a predictable way, and how.

This is what DSGE models are supposed to do. This is why academic macroeconomists use these models. So why doesn’t anyone in the finance industry use them? Maybe industry is just slow to catch on. But with so many billions upon billions of dollars on the line, and so many DSGE models to choose from, you would think someone at some big bank or macro hedge fund somewhere would be running a DSGE model. And yet after asking around pretty extensively, I can’t find anybody who is.
That's an interesting question -- when thinking about issues like this, I often come back to the divide between "science" and "engineering" put forward by Greg Mankiw.  While academic macroeconomics has gone down the path marked out by Lucas and Sargent, the policymaking "engineers" in Washington often still find the older-style models more useful.  It sounds like Wall Street's economists do too.

The question is whether academic macroeconomics is on track to produce models that are more useful for the policymakers and moneymakers. The DSGE method is still fairly new, and, until recently, we've been constrained by the limitations of our computers as well as our minds (a point Narayana Kocherlakota made here), so maybe we're just not quite there yet.  But we should be open to the possibility that we're on the wrong track entirely.

Saturday, July 5, 2014

Efficiency Wages

The New York Times has a story about several restaurants that have decided to pay above-market wages.  One of them is Shake Shack, which is starting employees at $9.50/hr:
“The No. 1 reason we pay our team well above the minimum wage is because we believe that if we take care of the team, they will take care of our customers,” said Randy Garutti, the chief executive of Shake Shack.
That, and other anecdotes in the article, are consistent with the "efficiency wage" theory, where firms can induce more effort by paying a higher real wage.  This might arise if firms have a less than perfect ability to monitor individual employees' productivity - paying an above-market wage creates a stronger incentive not to "shirk". 
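A minimal numerical sketch of the efficiency-wage logic: with an assumed (entirely hypothetical) effort function, the firm picks the wage that minimizes the cost per unit of *effective* labor, w/e(w). At the optimum, the elasticity of effort with respect to the wage equals one (the Solow condition).

```python
# Hypothetical effort function: effort rises with the wage premium over
# an outside option w_floor (the numbers are illustrative assumptions).
def effort(w, w_floor=7.25, a=0.3):
    return max(w - w_floor, 0.0) ** a

def cost_per_effective_unit(w):
    return w / effort(w)

# Grid search for the cost-minimizing wage.
grid = [7.30 + 0.001 * i for i in range(10000)]
w_star = min(grid, key=cost_per_effective_unit)

# For this functional form the closed-form optimum is w_floor / (1 - a).
# Numerical check of the Solow condition: elasticity of effort = 1.
h = 1e-5
elasticity = (effort(w_star + h) - effort(w_star - h)) / (2 * h) * w_star / effort(w_star)
print(round(w_star, 2), round(elasticity, 3))
```

The optimal wage here sits well above the floor, even though cheaper labor is available, which is the sense in which paying "too much" can be profit-maximizing.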

For more, see this brief 1984 survey by Janet Yellen, who did some of her early academic work in this area.

Tuesday, July 1, 2014

Classroom Technology

Despite evidence that having computers in class is not good for students, Slate's Rebecca Schumann argues that professors should permit them anyway:
[P]olicing the (otherwise nondisruptive) behavior of students further infantilizes these 18-to-22-year-olds. Already these students are hand-held through so many steps in the academic process: I check homework; I give quizzes about the syllabus to make sure they’ve actually read it; I walk them, baby-steps style, through every miniscule stage of their essays. Some of these practices do indeed improve what contemporary pedagogy parlance calls “learning outcomes” (barf) because they show students how invested I am in their progress. But these practices also serve as giant, scholastic water wings for people who should really be swimming by now.

My colleagues and I joke sometimes that we teach “13th-graders,” but really, if I confiscate laptops at the door, am I not creating a 13th-grade classroom? Despite their bottle-rocket butt pranks and their 10-foot beer bongs, college students are old enough to vote and go to war. They should be old enough to decide for themselves whether they want to pay attention in class—and to face the consequences if they do not.
I'm sympathetic to the argument - I've never had an "attendance policy" for essentially the same reason - but what Schumann misses is that the use of laptops has a negative spillover effect (what economists call an "externality").  A student who is using a computer will distract not only herself but also the students around her - it is the harm to others, and to the classroom environment more generally, that justifies prohibiting computers in class.

Schumann goes on to argue that the real problem is lecture-format classes.  I don't think it's appropriate to generalize - the optimal format probably varies across subjects (and across students, too, which may be a more difficult problem).  I'm planning some pretty big changes to the way I teach my classes for the coming year that will significantly reduce the amount of lecturing I do.  I wouldn't be doing this if I didn't expect the benefits to outweigh the costs, but I suspect the virtues of the traditional lecture style may be under-appreciated these days.  In particular, the act of note-taking by hand is a valuable part of the learning process.  A recent NY Times story about the decline of handwriting instruction in schools discussed some evidence on that point:
Two psychologists, Pam A. Mueller of Princeton and Daniel M. Oppenheimer of the University of California, Los Angeles, have reported that in both laboratory settings and real-world classrooms, students learn better when they take notes by hand than when they type on a keyboard. Contrary to earlier studies attributing the difference to the distracting effects of computers, the new research suggests that writing by hand allows the student to process a lecture’s contents and reframe it — a process of reflection and manipulation that can lead to better understanding and memory encoding.
Although we should always be looking for ways to improve, and to take advantage of new technology where it can be helpful, sometimes "innovation" carries hidden costs, and we will make better choices if we try to understand what those might be and take them into account.

Thursday, June 26, 2014

Not Repeating All of Our Mistakes

With all the frustrations and mistakes of recent years, it's easy to miss the good economic policy news, but there is some --

At Wonkblog, Lydia DePillis reports on the lack of a turn towards protectionism on the part of high-income countries during the global slump of the last few years.  The evidence she cites suggests that developing countries have raised trade barriers, but in a fairly muted fashion.

That's a huge improvement over the 1930s, which saw widespread increases in trade barriers (including the US's infamous Smoot-Hawley tariff).  Though the increases in tariffs and other trade barriers did not cause the depression, most of us economists regard them as a counterproductive response.

The architecture of the GATT and WTO was developed in part to prevent making the same mistake again.  The rules do allow for temporary increases in tariffs through "antidumping," "safeguard," and "countervailing duty" measures, but there hasn't been a large increase in the use of these measures.  DePillis writes:
So, why did the United States appear to be less aggressive about protecting itself in the face of the latest economic meltdown? It's learned from experience.

"We designed the current system in response to what happened in the 1930s," says Chad Bown, a World Bank economist who maintains the database of temporary trade barriers. For one thing, the United States is able to target products more specifically rather than entire sectors. "That helps blow off some political steam and not have overall increases in protection," Bown says.
Another important factor may be that now, unlike the 1930s, the world is largely operating under a (non) system of floating currencies. In Trade Policy Disaster, Doug Irwin argues (persuasively, I think) that the motivation for the increasing trade barriers was more "mercantilist" than "protectionist" - that governments were concerned with preventing trade deficits, which would have led to deflationary gold outflows under the gold standard.  

Countries today aren't bound in the same way.  The one exception is Europe, where the "peripheral" Eurozone economies are among the hardest-suffering in the world - they can't adjust through depreciation, and the EU prevents Spain and Greece from raising tariffs.

Update: At VoxEU, Chad Bown discusses some findings from the Temporary Trade Barriers Database.

GDP in the Rear-View Mirror

appears smaller than it did before -- the BEA's "third estimate" of real GDP growth came in at a -2.9% annual rate.  That's really bad, and a big revision from the "advance estimate" in April of 0.1% growth and the "second estimate" in May of -1%.
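The "-2.9% annual rate" is the one-quarter change compounded over four quarters, so the decline within the quarter itself was much smaller. A quick check of the standard annualization arithmetic (this calculation is mine, not a figure from the release):

```python
# BEA reports quarterly GDP growth at a compound annual rate:
#   annual_rate = (1 + quarterly_change)**4 - 1
# Inverting to recover the single-quarter change:
annual_rate = -0.029
quarterly_change = (1 + annual_rate) ** 0.25 - 1
print(round(100 * quarterly_change, 2))  # about -0.73 (percent, one quarter)
```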
One of the things I emphasize to my students is the limitations of GDP statistics.  One of the difficulties in using them is that they are subject to substantial revisions that come in with considerable lags.  Policymakers - and anyone else trying to judge the state of the economy - are looking at noisy, backward-looking data.

Here it is, just past the summer solstice, and we learn that GDP last winter (Jan.-Mar.) was dropping at its fastest rate since 2009 (the first quarter of 2011 is the only other quarter since the recession with declining GDP).  The rate of decline in the first quarter was worse than in either of the two quarters with negative growth in the 2001 recession.

Although there are usually some changes, this particular revision was unusually large - the change from the initial to the third estimate was the largest since the BEA began releasing estimates this way in the mid-1980s.

The prevailing theory on why the first quarter was so bad is that it was mainly due to unusually severe weather: although the data are "seasonally adjusted" to account for the fact that some types of economic activity are normally lower in January and February, this winter may have been worse than most.

As Neil Irwin and CEA Chair Jason Furman both note, other indicators - like employment - looked ok during the same period.  Payroll growth averaged 190,000 during the first three months of the year.  That, as Justin Wolfers explains, means a large deviation from the historic relationship between unemployment and output growth known as "Okun's Law".  It also implies a big drop in productivity as we measure it.
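The size of the Okun's Law deviation is easy to see with a back-of-the-envelope version of the relationship. The coefficient of 0.5 and trend growth of about 2.5% below are common rule-of-thumb values, not numbers from the articles cited:

```python
# Okun's Law rule of thumb: each percentage point of annual growth below
# trend raises the unemployment rate by roughly half a point.
def okun_predicted_unemployment_change(gdp_growth_annualized,
                                       trend_growth=2.5, coef=0.5):
    """Predicted change in the unemployment rate (percentage points)
    over one quarter, given annualized real GDP growth in percent."""
    return -coef * (gdp_growth_annualized - trend_growth) / 4

# Q1 2014: GDP fell at a 2.9% annual rate...
predicted = okun_predicted_unemployment_change(-2.9)
# ...so this rule predicts unemployment *rising* by roughly 0.7 points
# over the quarter, while it actually edged down -- the deviation
# (and the implied productivity drop) Wolfers describes.
```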

Monday, June 16, 2014

Hawks, Doves and (Wesleyan) Cardinals

The Federal Reserve Board welcomed a Wesleyan alum today: Lael Brainard '83 was sworn in (she's second from right, along with Jerome Powell, Janet Yellen and Stanley Fischer).
Brainard previously served as Undersecretary of the Treasury for International Affairs; the NY Times' Annie Lowrey wrote a brief profile of Brainard when she stepped down from that post last year.