Wednesday, July 23, 2014

DSGE Failing the Market Test?

The prevailing methodology of macroeconomic theory these days is "Dynamic Stochastic General Equilibrium" (DSGE) modelling.  Although many contemporary DSGE models, including the ones I'm working on, include "Keynesian" elements such as sticky prices, unemployment and financial frictions, they represent a methodological break with an older style of "Keynesian" models based on relationships among aggregate variables.  The shift in method followed most prominently from the work of Lucas and Sargent, which John Cochrane summarized on his blog:
As I see it, the main characteristic of "equilibrium" models Lucas and Sargent inaugurated is that they put people, time, and economics into macro.

Keynesian models model aggregates. Consumption depends on income. Investment depends on interest rates. Labor supply and demand depend on wages. Money demand depends on income and interest rates. "Consumption" and "investment" and so forth are the fundamental objects to be modeled.

"Equilibrium" models (using Lucas and Sargent's word) model people and technology. People make simultaneous decisions across multiple goods, constrained by budget constraints -- if you consume more and save more, you must work more, or hold less money.  Firms  make decisions across multiple goods constrained by technology.

Putting people and their simultaneous decisions back to the center of the model generates Lucas and Sargent's main econometric conclusion -- Sims' "incredible" identifying restrictions. When people simultaneously decide consumption, saving, labor supply, then the variables describing each must spill over into the other. There is no reason for leaving (say) wages out of the consumption equation. But the only thing distinguishing one equation from another is which variables get left out.

People make decisions thinking about the future. I think "static" vs. "intertemporal" are good words to use.  That observation goes back to Friedman: consumption depends on permanent income, including expected future income, not today's income. Decisions today are inevitably tied to expectations -- rational or not -- about the future.
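To see the "people and time" point in one equation, consider the textbook two-period consumption problem that sits underneath most DSGE models (my own stylized illustration, not Cochrane's notation): a household with income $y_1$ today and $y_2$ tomorrow chooses consumption to solve
$$\max_{c_1,\,c_2}\; u(c_1) + \beta\, u(c_2) \quad \text{subject to} \quad c_1 + \frac{c_2}{1+r} = y_1 + \frac{y_2}{1+r},$$
which yields the Euler condition $u'(c_1) = \beta (1+r)\, u'(c_2)$.  Consumption today depends on lifetime resources and the interest rate, not just on current income -- Friedman's permanent-income point -- and because the same optimization ties consumption, saving and labor supply together, exclusion restrictions like leaving wages out of the consumption equation are hard to defend.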
A Bloomberg View column by Noah Smith nicely summarizes the methodological shift, which gained momentum from the apparent breakdown of the Phillips curve relationship between inflation and unemployment in the 1970s.  Smith writes:
Lucas showed that trying to boost gross domestic product by raising inflation might be like the tail trying to wag the dog. To avoid that kind of mistake, he and his compatriots declared, macroeconomists needed to base their models on things that wouldn’t change when government policy changed -- things like technology, or consumer preferences. And so DSGE was born. (DSGE also gave macroeconomists a chance to use a lot of cool new math tricks, which probably increased its appeal.)

OK, history lesson over. So why is this important now?

Well, for one thing, the finance industry has ignored DSGE models. That could be a big mistake! Suppose you’re a macro investor. If all you want to do is make unconditional forecasts -- say, GDP next quarter -- then you can go ahead and use an old-style SEM model, because you only care about correlation, not causation. But suppose you want to make a forecast of the effect of a government policy change -- for example, suppose you want to know how the Fed’s taper will affect growth. In that case, you need to understand causation -- you need to know whether quantitative easing is actually changing people’s behavior in a predictable way, and how.

This is what DSGE models are supposed to do. This is why academic macroeconomists use these models. So why doesn’t anyone in the finance industry use them? Maybe industry is just slow to catch on. But with so many billions upon billions of dollars on the line, and so many DSGE models to choose from, you would think someone at some big bank or macro hedge fund somewhere would be running a DSGE model. And yet after asking around pretty extensively, I can’t find anybody who is.
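To see why that distinction matters, here is a deliberately toy simulation of the Lucas point (my own illustration, not an actual DSGE model and not anything from Smith's column): output responds to inflation surprises, so a regression run within a single policy regime picks up a nice "Phillips curve" correlation, but permanently raising the inflation target buys no extra output once people anticipate the new rule.

```python
# Toy illustration of the Lucas critique logic (not any particular DSGE model):
# the output gap moves with inflation *surprises*, so the within-regime
# "Phillips curve" correlation is real, but a permanently higher inflation
# target delivers no extra output once agents anticipate the policy rule.
import numpy as np

rng = np.random.default_rng(0)

def simulate(inflation_target, periods=5000):
    shocks = rng.normal(0.0, 1.0, periods)          # unanticipated inflation shocks
    expected_inflation = inflation_target           # agents know the central bank's rule
    inflation = inflation_target + shocks
    output_gap = 0.5 * (inflation - expected_inflation) + rng.normal(0.0, 0.2, periods)
    return inflation, output_gap

for target in (1.0, 4.0):                           # low- vs. high-inflation regime
    infl, gap = simulate(target)
    slope = np.polyfit(infl, gap, 1)[0]             # reduced-form "Phillips curve" slope
    print(f"target {target:.0f}%: slope = {slope:.2f}, average output gap = {gap.mean():+.3f}")
```

Both regimes show a slope of about 0.5, but the average output gap is essentially zero under either target: the correlation an aggregate regression picks up is real, yet it is not a lever a policymaker -- or an investor betting on a policy change -- can pull.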
That's an interesting question -- when thinking about issues like this, I often come back to the divide between "science" and "engineering" put forward by Greg Mankiw.  While academic macroeconomics has gone down the path marked out by Lucas and Sargent, the policymaking "engineers" in Washington often still find the older-style models more useful.  It sounds like Wall Street's economists do too.

The question is whether academic macroeconomics is on track to produce models that are more useful for the policymakers and moneymakers. The DSGE method is still fairly new, and, until recently, we've been constrained by the limitations of our computers as well as our minds (a point Narayana Kocherlakota made here), so maybe we're just not quite there yet.  But we should be open to the possibility that we're on the wrong track entirely.

Saturday, July 5, 2014

Efficiency Wages

The New York Times has a story about several restaurants that have decided to pay above-market wages.  One of them is Shake Shack, which is starting employees at $9.50/hr:
“The No. 1 reason we pay our team well above the minimum wage is because we believe that if we take care of the team, they will take care of our customers,” said Randy Garutti, the chief executive of Shake Shack.
That, and other anecdotes in the article, are consistent with the "efficiency wage" theory, in which firms can induce more effort by paying a higher real wage.  This can arise if firms have a less-than-perfect ability to monitor individual employees' effort - an above-market wage gives workers something to lose if they are caught "shirking" and fired, and therefore a stronger incentive not to do so.
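To see the logic in its simplest form, here is a stripped-down, one-period version of the shirking trade-off (my own textbook-style sketch in the spirit of Shapiro-Stiglitz, not anything from the article): let $w$ be the firm's wage, $\bar{w}$ the wage available elsewhere, $e$ the disutility of effort, and $q$ the probability that a shirker is caught and fired.  Working yields $w - e$; shirking yields $w$ with probability $1-q$ and only $\bar{w}$ with probability $q$.  Effort is the better choice when
$$ w - e \;\ge\; (1-q)\,w + q\,\bar{w} \quad\Longleftrightarrow\quad w \;\ge\; \bar{w} + \frac{e}{q}, $$
so the harder workers are to monitor (the smaller is $q$), the larger the premium over the market wage the firm must pay to make effort worthwhile.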

For more, see this brief 1984 survey by Janet Yellen, who did some of her early academic work in this area.

Tuesday, July 1, 2014

Classroom Technology

Despite evidence that having computers in class is not good for students, Slate's Rebecca Schumann argues that professors should permit them anyway:
[P]olicing the (otherwise nondisruptive) behavior of students further infantilizes these 18-to-22-year-olds. Already these students are hand-held through so many steps in the academic process: I check homework; I give quizzes about the syllabus to make sure they’ve actually read it; I walk them, baby-steps style, through every miniscule stage of their essays. Some of these practices do indeed improve what contemporary pedagogy parlance calls “learning outcomes” (barf) because they show students how invested I am in their progress. But these practices also serve as giant, scholastic water wings for people who should really be swimming by now.

My colleagues and I joke sometimes that we teach “13th-graders,” but really, if I confiscate laptops at the door, am I not creating a 13th-grade classroom? Despite their bottle-rocket butt pranks and their 10-foot beer bongs, college students are old enough to vote and go to war. They should be old enough to decide for themselves whether they want to pay attention in class—and to face the consequences if they do not.
I'm sympathetic to the argument - I've never had an "attendance policy" for essentially the same reason - but what Schumann misses is that laptop use has a negative spillover effect (what economists call an "externality").  A student using a computer distracts not only herself but also the students around her - it is the harm to others, and to the classroom environment more generally, that justifies prohibiting computers in class.

Schumann goes on to argue that the real problem is the lecture format itself.  I don't think it's appropriate to generalize - the optimal format probably varies across subjects (and across students, too, which may be a more difficult problem).  I'm planning some pretty big changes to the way I teach my classes for the coming year that will significantly reduce the amount of lecturing I do.  I wouldn't be doing this if I didn't expect the benefits to outweigh the costs, but I suspect the virtues of the traditional lecture style may be under-appreciated these days.  In particular, the act of note-taking by hand is a valuable part of the learning process.  A recent NY Times story about the decline of handwriting instruction in schools discussed some evidence on that point:
Two psychologists, Pam A. Mueller of Princeton and Daniel M. Oppenheimer of the University of California, Los Angeles, have reported that in both laboratory settings and real-world classrooms, students learn better when they take notes by hand than when they type on a keyboard. Contrary to earlier studies attributing the difference to the distracting effects of computers, the new research suggests that writing by hand allows the student to process a lecture’s contents and reframe it — a process of reflection and manipulation that can lead to better understanding and memory encoding.
Although we should always be looking for ways to improve, and to take advantage of new technology where it can be helpful, sometimes "innovation" carries hidden costs, and we will make better choices if we try to understand what those might be and take them into account.