Saturday, July 13, 2013

By John B. Taylor: Once Again, the Fed Shies Away From the Exit Door

Courtesy WSJ: Fed Shies Away From Exit Door. Not Much Different Than What Occurred in the '60s & '70s. Volcker in the Late '70s Proved Monetary Policy Can Be Reversed.

The Federal Reserve's liquidity operations during the 2008 financial panic represented good central banking: providing loans when markets freeze up. But instead of simply letting those programs expire as the panic subsided, the Fed embarked on its first quantitative easing program (QE1)—large-scale purchases of mortgage-backed securities and Treasurys—trying to stimulate the housing market and the economy. After that, I warned on these pages in September 2010 about the dangers of "another large dose of quantitative easing" that would raise "more uncertainty about how it will ever be unwound."

Since then the Fed has injected two more massive doses of quantitative easing: QE2 starting in November 2010 and QE3 starting late last year. Then, last month, Fed Chairman Ben Bernanke let it be known in a news conference that the old QE3 (purchasing $85 billion of Treasury bonds and mortgage-backed securities a month until labor market conditions improve substantially) would taper into a new QE3—in which purchases would likely slow by the end of 2013 and stop in the middle of 2014. The turbulent reaction in the markets showed that the predicted dangers from unwinding would be real.

The Fed has justified its policies as a means of helping the economy recover. Yet economic growth has come in at less than half what the Fed predicted with all its unprecedented interventions during the past four years, and growth remains under 2% so far this year. Some at the Fed blame other factors for this terribly weak recovery—the latest excuse being cuts in state and local government purchases. But those cuts are the result, not the cause, of the weak economy, as tax revenues have slowed.
A growing number of economists, former central bankers and senior government officials—including Martin Feldstein, Paul Volcker, Allan Meltzer, Raghu Rajan, David Malpass and Peter Fisher—have now concluded that the Fed's policies are not working. Critics want the Fed to return to a more rules-based monetary policy.
Meanwhile, the global monetary system is starting to fracture. Central bankers around the world, especially in emerging markets such as Brazil, India and South Africa, have experienced adverse spillovers of Fed policy on their currencies and economies. To prevent sharp fluctuations in the value of their currencies and volatile inflows and outflows of capital, they have had to deviate from good policy.
The Bank of Japan's recent move toward a policy of massive quantitative easing is a good example. Following the 2008 financial crisis and the weak recovery, the yen significantly appreciated against the dollar as the Fed repeatedly extended its quantitative easing. A new government was elected in Japan in December in part because of the currency issue. Immediately after the election the government asked the Bank of Japan to match the Fed with its own quantitative easing; this is exactly what it did.

Other governments and central banks have imposed capital controls to limit the inflow of capital and the appreciation of their currencies. But capital controls interfere with firms' investment decisions and cause instability as people try to circumvent them—which leads policy makers to seek even more controls to prevent the circumventions. Alarmed by these developments, the Bank for International Settlements, the central bank of central bankers, called last month for an investigation of these spillovers and international monetary policy coordination.
With so many voices rising in objection, some might assume that it will be just a short time until the Fed changes course. Unfortunately, this assumption is unwarranted based on the experience of the late 1960s and 1970s.
In 1968, Milton Friedman explained the folly of the view that permanently lower unemployment could be achieved through easier monetary policy. At first, his view was adopted by a small minority. By the mid-1970s, there was consensus that easy money was not achieving its economic growth or employment goals.
Then the argument shifted to "yes, we agree that the policy is not working, but it is too costly to end." The economy was performing adequately; shutting the money spigot would just make things worse. Yet unemployment and inflation only increased.

It was not until Paul Volcker became chairman of the Federal Reserve in August 1979 that the Fed's excessively easy monetary policy came to an end. Mr. Volcker's resolve was buttressed by new economic models based on rational expectations and price rigidities which showed that the costs of ending easy money were far less than what the pessimists said. These models also predicted that a change in Fed policy would eventually help bring about lower unemployment—which is what the change in Fed policy did help to bring about in the 1980s and '90s.

There's a parallel with today. More policy makers and economists are coming to realize that the Fed's unconventional monetary policy is not working. Yet there is also a sense that unwinding is too costly right now and can't be reversed.

We witnessed this recently with hints about the end of quantitative easing. When Mr. Bernanke mentioned that quantitative easing—today's version of the easy money of the 1970s—will taper off, the market reacted negatively. Many commentators and pundits claimed that tapering will adversely affect the economy and needs to be delayed. Members of the Federal Open Market Committee also came out publicly to say that serious tapering can be put off until later.

The minutes of the June FOMC meeting were released this week. Those minutes, accompanied by Mr. Bernanke's remarks on Wednesday, made it clear that unless a modern-day Paul Volcker soon appears, the return to conventional monetary policy is still a long way off.

Mr. Taylor is a professor of economics at Stanford University, a senior fellow at the Hoover Institution, and a former Treasury undersecretary for international affairs.

Friday, July 12, 2013

My 2012 Favorite Countries in Sequence- EFI Chart

20 Year US Economic Freedom Index Chart- w/ 5 Key Components

Dynamic Country Debt Chart:

Nation-by-Nation Debt Levels Since 1990

Courtesy of the WSJ

World’s Great Leap: 1B people taken out of extreme Poverty in 20 years. China's embrace of Capitalism prime cause

Towards the end of poverty
Nearly 1 billion people have been taken out of extreme poverty in 20 years. The world should aim to do the same again
IN HIS inaugural address in 1949 Harry Truman said that "more than half the people in the world are living in conditions approaching misery. For the first time in history, humanity possesses the knowledge and skill to relieve the suffering of those people." It has taken much longer than Truman hoped, but the world has lately been making extraordinary progress in lifting people out of extreme poverty. Between 1990 and 2010, their number fell by half as a share of the total population in developing countries, from 43% to 21%—a reduction of almost 1 billion people.
Now the world has a serious chance to redeem Truman's pledge to lift the least fortunate. Of the 7 billion people alive on the planet, 1.1 billion subsist below the internationally accepted extreme-poverty line of $1.25 a day. Starting this week and continuing over the next year or so, the UN's usual Who's Who of politicians and officials from governments and international agencies will meet to draw up a new list of targets to replace the Millennium Development Goals (MDGs), which were set in September 2000 and expire in 2015. Governments should adopt as their main new goal the aim of reducing by another billion the number of people in extreme poverty by 2030.

Take a bow, capitalism
Nobody in the developed world comes remotely close to the poverty level that $1.25 a day represents. America's poverty line is $63 a day for a family of four. In the richer parts of the emerging world $4 a day is the poverty barrier. But poverty's scourge is fiercest below $1.25 (the average of the 15 poorest countries' own poverty lines, measured in 2005 dollars and adjusted for differences in purchasing power): people below that level live lives that are poor, nasty, brutish and short. They lack not just education, health care, proper clothing and shelter—which most people in most of the world take for granted—but even enough food for physical and mental health. Raising people above that level of wretchedness is not a sufficient ambition for a prosperous planet, but it is a necessary one.
The world's achievement in the field of poverty reduction is, by almost any measure, impressive. Although many of the original MDGs—such as cutting maternal mortality by three-quarters and child mortality by two-thirds—will not be met, the aim of halving global poverty between 1990 and 2015 was achieved five years early.
The MDGs may have helped marginally, by creating a yardstick for measuring progress, and by focusing minds on the evil of poverty. Most of the credit, however, must go to capitalism and free trade, for they enable economies to grow—and it was growth, principally, that has eased destitution.
Poverty rates started to collapse towards the end of the 20th century largely because developing-country growth accelerated, from an average annual rate of 4.3% in 1960-2000 to 6% in 2000-10. Around two-thirds of poverty reduction within a country comes from growth. Greater equality also helps, contributing the other third. A 1% increase in incomes in the most unequal countries produces a mere 0.6% reduction in poverty; in the most equal countries, it yields a 4.3% cut.
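The elasticity figures in the paragraph above can be illustrated with a quick simulation. This is only a sketch: the 0.6 and 4.3 elasticities and the 6% growth rate come from the article, while the 21% starting poverty rate and the ten-year horizon are hypothetical illustrations.

```python
# Illustration of the growth-to-poverty elasticities quoted above.
# Elasticity = % reduction in the poverty rate per 1% income growth.

def poverty_after(start_rate, annual_growth, elasticity, years):
    """Compound a constant elasticity of poverty to income growth."""
    rate = start_rate
    for _ in range(years):
        rate *= 1 - elasticity * annual_growth
    return rate

start = 21.0   # hypothetical starting poverty rate, in percent
growth = 0.06  # 6% annual income growth (the article's 2000-10 average)

unequal = poverty_after(start, growth, elasticity=0.6, years=10)
equal = poverty_after(start, growth, elasticity=4.3, years=10)
print(f"Most unequal: {unequal:.1f}%  Most equal: {equal:.1f}%")
```

Under the same growth path, the high-elasticity (more equal) country ends the decade with a poverty rate an order of magnitude lower, which is the article's point about greater equality contributing the other third of poverty reduction.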
China (which has never shown any interest in MDGs) is responsible for three-quarters of the achievement. Its economy has been growing so fast that, even though inequality is rising fast, extreme poverty is disappearing. China pulled 680m people out of misery in 1981-2010, and reduced its extreme-poverty rate from 84% in 1980 to 10% now.
That is one reason why (as the briefing explains) it will be harder to take a billion more people out of extreme poverty in the next 20 years than it was to take almost a billion out in the past 20. Poorer governance in India and Africa, the next two targets, means that China's experience is unlikely to be swiftly replicated there. Another reason is that the bare achievement of pulling people over the $1.25-a-day line has been relatively easy in the past few years because so many people were just below it. When growth makes them even slightly better off, it hauls them over the line. With fewer people just below the official misery limit, it will be more difficult to push large numbers over it.
So caution is justified, but the goal can still be achieved. If developing countries maintain the impressive growth they have managed since 2000; if the poorest countries are not left behind by faster-growing middle-income ones; and if inequality does not widen so that the rich lap up all the cream of growth—then developing countries would cut extreme poverty from 16% of their populations now to 3% by 2030. That would reduce the absolute numbers by 1 billion. If growth is a little faster and income more equal, extreme poverty could fall to just 1.5%—as near to zero as is realistically possible. The number of the destitute would then be about 100m, most of them in intractable countries in Africa. Misery's billions would be consigned to the annals of history.
Markets v misery
That is a lot of ifs. But making those things happen is not as difficult as cynics profess. The world now knows how to reduce poverty. A lot of targeted policies—basic social safety nets and cash-transfer schemes, such as Brazil's Bolsa Família—help. So does binning policies like fuel subsidies to Indonesia's middle class and China's hukou household-registration system that boost inequality. But the biggest poverty-reduction measure of all is liberalising markets to let poor people get richer. That means freeing trade between countries (Africa is still cruelly punished by tariffs) and within them (China's real great leap forward occurred because it allowed private business to grow). Both India and Africa are crowded with monopolies and restrictive practices.
Many Westerners have reacted to recession by seeking to constrain markets and roll globalisation back in their own countries, and they want to export these ideas to the developing world, too. It does not need such advice. It is doing quite nicely, largely thanks to the same economic principles that helped the developed world grow rich and could pull the poorest of the poor out of destitution.
Read the article at: The Economist

Intellectual Integrity vs Hyperbole- Reinhart & Rogoff vs Paul Krugman... Paul at his partisan worst

Letter to Paul Krugman
Cambridge, Massachusetts, May 25, 2013
Dear Paul:
Back in the late 1980s, you helped shape the concept of an emerging market debt overhang. The financial crisis has laid bare the fact that the dividing line between emerging markets and advanced countries is not as crisp as once thought. Indeed, this is a recurring theme of our 2009 book, This Time Is Different: Eight Centuries of Financial Folly. Today, the growth bind of advanced countries in the periphery of the eurozone has a great deal in common with that of emerging market economies of the 1980s.
We admire your past scholarly work, which influences us to this day. So it has been with deep disappointment that we have experienced your spectacularly uncivil behavior of the past few weeks. You have attacked us in very personal terms, virtually non-stop, in your New York Times column and blog posts. Now you have doubled down in the New York Review of Books, adding the accusation that we didn't share our data. Your characterization of our work and of our policy impact is selective and shallow. It is deeply misleading about where we stand on the issues. And, we would respectfully submit, your logic and evidence on the policy substance are not nearly as compelling as you imply.
You particularly take aim at our 2010 paper on the long-term secular association between high debt and slow growth. That you disagree with our interpretation of the results is your prerogative. Your thoroughly ignoring the subsequent literature, however, including the International Monetary Fund's work as well as our own deeper and more complete 2012 paper with Vincent Reinhart, is troubling. Perhaps acknowledging the updated literature, not to mention decades of theoretical, empirical, and historical contributions on the drawbacks of high debt, would inconveniently undermine your attempt to make us a scapegoat for austerity. You write, "Indeed, Reinhart-Rogoff may have had more immediate influence on public debate than any previous paper in the history of economics."
Setting aside this wild hyperbole, you never seem to mention our other line of work, which has surely been far more influential when it comes to responding to the financial crisis. Specifically, our 2009 book (released before our growth and debt work) showed that recoveries from deep systemic financial crises are long, slow and painful. This was not the common wisdom at all before us, as you yourself have acknowledged on more than one occasion. Over the course of the crisis, and certainly by 2010, policymakers around the world were using our research, alongside their own assessments, to help justify sustained macroeconomic easing on both the monetary and fiscal policy fronts.
Your desire to blame our later 2010 paper for the stances of some politicians fails to recognize a basic reality:  We were out there endorsing very different policies.  Anyone with experience in these matters knows that politicians may float a citation to an academic paper if it suits their purposes.  But there are limits to how much policy traction they can get with this device when the paper's authors are out offering very different policy conclusions.  You can refer to the appendix to this letter for our views on policy through the financial crisis as they were stated publicly in real time.  We were not silent.
Very senior former policy makers, observing the attacks of the past few weeks, have forcefully explained that real-time policies are very seldom driven to any significant extent by a single academic paper or result.
It is worth noting that in the past, polemicists have often pinned the austerity charge on the International Monetary Fund for its work with countries having temporary or permanent debt sustainability issues.  Since its origins after World War II, IMF programs have almost always involved some combination of austerity, debt restructurings, and structural reform.  When a country that has been running large deficits is suddenly no longer able to borrow new funds, some measure of adjustment is invariably required, and one of the IMF's usual roles has been to serve as a lightning rod.   Even before the IMF existed, long periods of autarky and hardship accompanied debt crises. 
Now let us turn to the substance. The events of the past few weeks do not change basic facts and fundamentals.  
Some Fundamentals on Debt
First, the advanced economies now have levels of debt that surpass most if not all historic episodes. This includes both public debt and private debt (which often becomes public as a crisis unfolds). Significant shares of these debts are held by foreigners in most cases, with the notable exception of Japan. In Europe, where the (public and private) external debt exposures loom largest, financial de-globalization is well underway. Debt financing has become an increasingly domestic business, and a difficult one when the pool of domestic saving is limited.
As for the United States: our only short-lived high-debt episode involved WWII debts, which were held by domestic residents, not fickle international investors or central banks in China and elsewhere around the globe. This observation is not meant to suggest "a scare" in the offing, with bond vigilantes driving a concerted sell-off of Treasuries by the rest of the world and a dramatic spike in US interest rates. Carmen's work on financial repression suggests a different scenario. But many emerging markets have stepped into bubble-like territory, and we have seen this movie before. We should not take for granted the prosperity that makes possible their continuing large-scale purchases of US debt. Reversals are possible. Sensible risk management means planning for these and other contingencies that might disturb today's low global interest rate environment.
Second, on debt and growth. The Herndon, Ash and Pollin paper, using a different methodology, reinforces our core result that high levels of debt are associated with lower growth. This fact has been lost in the tabloid media and blogosphere discourse, but the point is made plain by even a cursory look at the full set of results reported in the very paper they critique. More importantly, the result was prominently featured in our 2012 Journal of Economic Perspectives paper with Vincent Reinhart on debt overhangs, which they do not cite. The main point of our 2012 paper is that while the difference in annual GDP growth between high- and lower-debt cases is about one percent a year, debt overhang episodes last on average 23 years. Thus, the cumulative effect on income levels over time is significant.
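The cumulative effect described here is simple compounding arithmetic. As a sketch using only the figures quoted above (a one-percentage-point annual gap over a 23-year episode):

```python
# Back-of-the-envelope check of the compounding claim above:
# a ~1 percentage-point annual growth gap sustained for the
# average 23-year debt-overhang episode.
gap = 0.01   # annual growth shortfall (1 percentage point)
years = 23   # average length of a debt-overhang episode

# Income in the no-overhang counterfactual relative to the
# overhang path, assuming the gap compounds every year.
relative_income = (1 + gap) ** years
print(f"Counterfactual income is ~{relative_income - 1:.0%} higher")
```

That is, output ends a typical episode roughly a quarter below its counterfactual path, which is why the authors call the cumulative effect significant.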
Third, the debate of the last few weeks does not change the fact that debt levels above 90% (even if one entirely rejects this marker for gross central government debt as a common cross-country "threshold") are very rare altogether and even rarer in peacetime. From 1955 until right before the recent crisis, advanced economies spent less than 10% of those years at a debt/GDP ratio higher than 90%; only about 2% of those years were above 120% debt/GDP. If governments thought high debt was a riskless proposition, why did they avoid it so consistently?
Debt and Growth Causality
Your recent April 29, 2013 New York Times blog post "The Italian Miracle" is meant to highlight how, in high-debt Italy, interest rates have come down since the European Central Bank's well-placed efforts to act more as a lender of last resort to periphery countries. No disagreement there. However, this positive development is meant to reinforce your strongly held view that high debt is not a problem (even for Italy) and that causality runs exclusively from slow growth to debt. You do not mention that in this miracle economy, GDP fell by more than 2 percent in 2012 and is expected to fall by a similar amount this year. Elsewhere you have stated that you are sure that Italy's long-term secular growth/debt problems, which date back to the 1990s, are purely a case of slow growth causing high debt. This claim is highly debatable.
Indeed, your repeatedly expressed view that slow growth causes high debt, but not vice versa, is hardly supported by the recent literature on the subject. Of course, as we have already noted, this work has been singularly ignored in the public discourse of the past few weeks. The best and worst that can be said is that the results are mixed. A number of studies looking at more comprehensive growth models have found significant effects of debt on growth. We made this point in the appendix to our New York Times piece. Of course, it is well known that the economic cycle impacts government finances and therefore debt (causation from growth to debt); cyclically adjusted budgets have been around for decades, contrary to your shallow characterization of the growth-debt connection.
As for ways debt might affect growth, there is debt with drama and debt without drama.
Debt with drama.  Do you really think that a country that is suddenly unable to borrow from international capital markets because its public and/or private debts that are a contingent public liability are deemed unsustainable will not suffer lower growth and higher unemployment as a consequence? With governments and banks shut out from international capital markets, credit to firms and households in periphery Europe remains paralyzed. This credit crunch has a crippling effect on growth and employment with or without austerity.  Fiscal austerity reinforces the procyclicality of the external and domestic credit crunch.  This pattern is not unique to this episode.
Policy response to debt with drama. On the policy response to this sad state of affairs, we stress that restoring the credit channel is essential for sustained growth, and this is why there is a need to write off senior bank debt in many countries. Furthermore, there is no reason why the ECB should buy only sovereign debt; purchases of senior bank debt along the lines of the US Federal Reserve's purchases of mortgage-backed securities would be instrumental in rekindling credit and working capital for firms. We don't see your attraction to fiscal largesse as a substitute. Periphery Europe cannot afford it, and for Germany, which can afford it, fiscal expansion would be procyclical. Any overheating in Germany would exert pressure on the ECB to maintain a tighter monetary policy, backtracking on some of the progress made by Mario Draghi. A better use of Germany's balance sheet strength would be to agree on faster and bigger haircuts for the periphery, and to support significantly more expansionary monetary policy by the ECB.
Debt without drama. There are other cases, like the US today or Japan since the mid-1990s, where there is debt without drama. The plain fact that we know less about these episodes is a point we already made in our New York Times piece. We pointedly do not include the historical episodes of the 19th-century UK and Netherlands among these puzzling cases. Those imperial debts were importantly financed by massive resource transfers from the colonies. They had "good" high-debt centuries because their colonies did not. We offer a number of ideas in our 2012 paper for why debt overhang might matter even when there is no imminent collapse of borrowing capacity.
Bad shocks do happen. What is the foundation for your certainty that, as peacetime debt hits new records in coming years, the United States will be able to engage in forceful countercyclical fiscal policy if hit by a large unexpected shock? Furthermore, do you really want to find out the answer to that question the hard way?
The United Kingdom does not issue the reserve currency, is more dependent on its financial sector, suffered a bigger banking bust, has not had the same shale gas revolution, and is more vulnerable to Europe; it is clearly more exposed to the drama scenario than the US. And yet you regularly assert that the situations in the US and UK are the same and that both countries have the costless option of engaging in an open-ended fiscal expansion. Of course, this does not preclude high-return infrastructure investments, making use of the public balance sheet directly or indirectly through public-private partnerships.
Policy response to debt without drama. Let us be clear: we have addressed the role of somewhat higher inflation and financial repression in debt reduction in our research and in numerous pieces of commentary. As our appendix shows, we did not advocate austerity in the immediate wake of the crisis, when the recovery was frail. But the subprime crisis began in the summer of 2007, now six years ago. Waiting 10 to 15 more years to deal with a festering problem is an invitation for decay, if not necessarily an outright debt crisis. The end may not come with a bang but with a whimper.
Scholarship: Stick to the facts
The accusation in the New York Review of Books reflects a sloppy neglect on your part to check the facts before charging us with a serious academic ethical infraction. You had already implicitly endorsed this charge from your perch at the New York Times by posting a link to a program that treated the misstatement as fact.
Fortunately, the "Wayback Machine" crawls the Internet and periodically makes wholesale copies of web pages. The debt/GDP database was first archived in October 2010 from Carmen's University of Maryland webpage. The data migrated to in March 2011. There it sits with our other data on inflation, crisis dates, and exchange rates. These data are regularly sought and found by those doing research who care to look. The greater disclosure of debt data from official institutions is testament to this. The IMF began to construct historical public debt data only after we had provided a roadmap in the list of detailed references in our 2009 book (and before that in a 2008 working paper) that explained how we had unearthed the data.
Our interaction with scholars and practitioners working on real world questions in our field is ongoing, and our doors remain open. So to accuse us of not sharing our data is an unfounded attack on our academic and personal integrity. 
Finally, we attach, as do many other mainstream economists, a somewhat higher weight to risks than you do, as debts of all types -- including old-age liabilities, public debt, private debt and external debt -- ascend into record territory. This is not a conclusion based on one or two papers, as you sometimes seem to imply, but rather on a long-standing body of economic research and extensive historical experience concerning the risks of record-high debt levels.
You often cite John Maynard Keynes. We read Keynes, all the way through. He wrote How to Pay for the War in 1940 precisely because he was not blasé about large deficits, even in support of a cause as noble as a war of survival. Debt is a slow-moving variable that cannot, and in general should not, be brought down too quickly. But interest rates can change much more quickly than fiscal policy and debt.
You might be right, and this time might be, after all, different.  If so, we will admit that we were wrong.  Whatever the outcome, we intend to be there to put the results in proper context for the community of scholars, policymakers, and civil society. 
Respectfully yours,
Carmen M. Reinhart and Kenneth S. Rogoff
Harvard University.

Appendix I. Reinhart and Rogoff: Selected interviews, op-eds, and media on the policy response to crisis
"Two prominent economists who published an acclaimed study last year of 800 years of national financial crises, "This Time Is Different," see flaws on both sides of today's argument. The debt must be dealt with, they say, but not too fast."
Paul Krugman, New York Times, August 18, 2010 (citing from a McClatchy article): "Rogoff: We may need another stimulus bill just to decompress from the previous one, a smaller one to cushion the landing. Reinhart: I'm not one of those deficit hawks.... I'm not saying you run out and pull the plug and have an adjustment that could derail what fragile recovery we do have." Good for them.
Top Culprit in the Financial Crisis: Human Nature, Barron's, November 24, 2012, by Lawrence C. Strauss
Reinhart: "...the thrust in a deep financial crisis, when you throw in both monetary and fiscal stimulus, is to come up with something that helps raise the floor. That's why the decline wasn't 10% or 12%. However, one area where policy really has left a bit to be desired is that both in the U.S. and in Europe, we have embraced forbearance. Delaying debt write-downs and delaying marking to market is not particularly conducive to speeding up deleveraging and recovery."
Rogoff:  "...if you didn't just raise taxes or cut taxes but actually fixed the tax system, that would be very important....And, lastly, other things, like infrastructure and education spending, are important. This isn't all about austerity versus no austerity. Countries that are successful in dealing with these crises, such as Sweden, sometimes take them as an opportunity to change. We haven't."
Reinhart Testimony before Senate Budget Committee, February 9, 2010. "In light of the likelihood of continued weak consumption in the U.S. and Europe, rapid withdrawal of stimulus could easily tilt the economy back into recession. To be sure, this is not the time to exit. It is, however, the time to lay out a credible plan for a future exit."
In Praise of Carmen Reinhart, Guardian, April 2, 2010 (editorial page)
"The world's best known female economist has warned cutting the deficit the Tory way would send the UK back into recession."
5 Myths about the European debt crisis, by Carmen Reinhart and Vincent Reinhart, Washington Post, May 9, 2010
Myth #3:
 Fiscal austerity will solve Europe's debt difficulties.
"But fiscal austerity usually doesn't pay off quickly. A large and sudden contraction in government spending is almost sure to shrink economic activity as well. This means tax collections fall and unemployment and welfare benefits rise, undermining efforts to reduce the deficit. Even if new borrowing is reduced or eliminated, it takes time to whittle down a large debt, and international investors are notoriously impatient."
"One of the main goals of financial repression is to keep nominal interest rates lower than would otherwise prevail. This effect, other things being equal, reduces governments' interest expenses for a given stock of debt and contributes to deficit reduction. However, when financial repression produces negative real interest rates and reduces or liquidates existing debts, it is a transfer from creditors (savers) to borrowers and, in some cases, governments."
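The mechanism Reinhart describes can be sketched numerically. This is only an illustration: the 2% nominal rate, 5% inflation rate, and ten-year horizon below are hypothetical, not figures from the piece.

```python
# Financial repression sketch: a nominal rate held below inflation
# yields a negative real rate that quietly erodes the real value of
# existing debt -- the transfer from savers to borrowers noted above.
nominal = 0.02    # repressed nominal interest rate (hypothetical)
inflation = 0.05  # inflation rate (hypothetical)

real = (1 + nominal) / (1 + inflation) - 1   # negative real rate
erosion = 1 - (1 + real) ** 10               # real value lost in 10 years
print(f"Real rate: {real:.1%}; real debt erosion over a decade: {erosion:.0%}")
```

Even a modestly negative real rate, sustained for a decade, liquidates roughly a quarter of the real value of a fixed debt stock without any explicit default.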
"The current strategy that calls for years of austerity and recession in the periphery countries is just not tenable."
The Euro's Pig-Headed Masters (Kenneth Rogoff, Project Syndicate, June 2011) "Instead of restructuring the manifestly unsustainable debt burdens of Portugal, Ireland, and Greece (the PIGs), politicians and policymakers are pushing for ever-larger bailout packages with ever-less realistic austerity conditions."
The Economy and the Candidates, Wall Street Journal Report with Maria Bartiromo, October 21, 2012 (interview with Kenneth Rogoff)
Min 2:40, on the fiscal cliff: "Hopefully we won't commit economic suicide by actually putting in all that tightening so quickly.... I like to see something like Simpson-Bowles.... If we did, we could have our cake and eat it too; we could have more revenue without hurting growth."
Kenneth Rogoff on Economy, European Debt Crisis, Bloomberg Surveillance, July 27, 2012, Interviewer Tom Keene:  "You told me five years and change ago that we would need four trillion dollars of stimulus to get through this"  min 7:55: "yes to great infrastructure projects, but not to just digging ditches"
The Bullets Yet to Be Fired, Financial Times, August 8, 2011 (by Kenneth Rogoff)
"In the case of Europe, this involves very large debt write downs in the smaller periphery countries,  combined with a German guarantee of central government debt in the rest....In the case of the US,  policymakers need to offer schemes to write down underwater mortgages....there is still the option of trying to achieve some modest  deleveraging through moderate inflation of say 4 to 6 per cent for several years.....Last but not least, monetary and financial solutions must be buttressed by structural reforms...."
Fareed Zakaria GPS, "Krugman Calls for Space Aliens to Fix US Economy," August 12, 2011, Ken Rogoff: "Infrastructure spending, if it were well-spent, that's great. I'm all for that. I'd borrow for that, assuming we're not paying Boston Big Dig kind of prices for the infrastructure."
Interview with Charlie Rose, Business Week, December 2012. CR: "Does this economy need further stimulus?" KR: "Certainly, withdrawing it at too rapid a rate in such a fragile economy makes no sense.... We need to have areas where we spend money, like infrastructure, education."
Inflation is Now the Lesser Evil, Kenneth Rogoff, Project Syndicate, December 2008, "It is time for the world's major central banks to acknowledge that a sudden burst of moderate inflation would be extremely helpful in unwinding today's epic debt morass."

Appendix II. Reinhart and Rogoff's support of the Grand Bargain in Senator Tom Coburn's book "The Debt Bomb"

Having failed to find evidence of extreme hawkish positions in our interviews, op-eds and media appearances, some have claimed that we were much more hawkish in private. Any academic who has dealt with policymakers knows full well that if one's public and private positions are incongruous, it undermines one's impact.[1]
Senator Coburn's book gives his perspective and selected comments on one such private meeting, on April 5, 2011. It was an informal one-hour breakfast meeting with forty senators, roughly evenly divided between Democrats and Republicans. The meeting was organized by the so-called "Gang of Six" centrist senators, three Democrats and three Republicans. The Gang of Six, of course, represented a unique bipartisan effort to strike a long-term budget deal in a very toxic and polarized environment. We presume the meeting was kept off the record so the senators could be as frank as possible; we did not mind. We strongly supported the spirit of the Gang of Six and are proud of our role in helping them and the National Commission on Fiscal Responsibility.
The meeting began with Carmen Reinhart giving a fifteen-minute presentation. The whole focus of the meeting was on how to approach a gradual move towards long-term fiscal sustainability.
The main ideas being discussed at the time were variants of the Simpson-Bowles proposal, an ambitious attempt at a Grand Bargain aiming to gradually reduce deficits over ten years with a mix of tax increases, spending cuts, and entitlement and tax reforms. In his book, Senator Coburn notes:
"Neither Reinhart nor Rogoff said we could fix our debt problem with just tax increases. Both emphasized the need for comprehensive tax reform and tax code simplification. Reinhart said the mortgage interest deduction discourages savings, while Rogoff told me later, 'The current code is a jalopy.'" The Debt Bomb, by Senator Tom Coburn, p. 30 (2012).
Of particular importance to us, their proposal envisioned significant reform of the income tax system in a way that might potentially have created a more efficient and fairer system, one that could have increased revenue with fewer growth-compromising distortions. Many others on both sides of the fence, including President Obama and, later, 2012 Republican presidential candidate Mitt Romney, endorsed such reforms. When Representative Ryan later cited our work in support of his counterplan, we did not endorse his plan, and continued to favor the Simpson-Bowles approach.
A couple of Senator Coburn's quotes from us at the meeting, taken without the full context of our introductory remarks, have been interpreted as saying we endorsed immediately closing the budget deficit. This was at odds with our position, notably our work on slow and often halting recoveries from financial crises, which we also emphasized. In fact, taking into account our opening remarks, it is our impression that the senators understood full well that the urgency we were expressing referred to adopting a long-term Grand Bargain a la Simpson-Bowles.

Read the article at: Carmen M. Reinhart

How Barack Obama's 2012 Election Team Used Big Data to Rally Voters and Win

The Definitive Story of How President Obama Mined Voter Data to Win A Second Term | MIT Technology Review
Two years after Barack Obama's election as president, Democrats suffered their worst defeat in decades. The congressional majorities that had given Obama his legislative successes, reforming the health-insurance and financial markets, were swept away in the midterm elections; control of the House flipped and the Democrats' lead in the Senate shrank to an ungovernably slim margin. Pundits struggled to explain the rise of the Tea Party. Voters' disappointment with the Obama agenda was evident as independents broke right and Democrats stayed home. In 2010, the Democratic National Committee failed its first test of the Obama era: it had not kept the Obama coalition together.
But for Democrats, there was bleak consolation in all this: Dan Wagner had seen it coming. When Wagner was hired as the DNC's targeting director, in January of 2009, he became responsible for collecting voter information and analyzing it to help the committee approach individual voters by direct mail and phone. But he appreciated that the raw material he was feeding into his statistical models amounted to a series of surveys on voters' attitudes and preferences. He asked the DNC's technology department to develop software that could turn that information into tables, and he called the result Survey Manager. That fall, when a special election was held to fill an open congressional seat in upstate New York, Wagner successfully predicted the final margin within 150 votes—well before Election Day. Months later, pollsters projected that Martha Coakley was certain to win another special election, to fill the Massachusetts Senate seat left empty by the death of Ted Kennedy. But Wagner's Survey Manager correctly predicted that the Republican Scott Brown was likely to prevail in the strongly Democratic state. "It's one thing to be right when you're going to win," says Jeremy Bird, who served as national deputy director of Organizing for America, the Obama campaign in abeyance, housed at the DNC. "It's another thing to be right when you're going to lose."
It is yet another thing to be right five months before you're going to lose. As the 2010 midterms approached, Wagner built statistical models for selected Senate races and 74 congressional districts. Starting in June, he began predicting the elections' outcomes, forecasting the margins of victory with what turned out to be improbable accuracy. But he hadn't gotten there with traditional polls. He had counted votes one by one. His first clue that the party was in trouble came from thousands of individual survey calls matched to rich statistical profiles in the DNC's databases. Core Democratic voters were telling the DNC's callers that they were much less likely to vote than statistical probability suggested. Wagner could also calculate how much the Democrats' mobilization programs would do to increase turnout among supporters, and in most races he knew it wouldn't be enough to cover the gap revealing itself in Survey Manager's tables.
His congressional predictions were off by an average of only 2.5 percent. "That was a proof point for a lot of people who don't understand the math behind it but understand the value of what that math produces," says Mitch Stewart, Organizing for America's director. "Once that first special [election] happened, his word was the gold standard at the DNC."
The significance of Wagner's achievement went far beyond his ability to declare winners months before Election Day. His approach amounted to a decisive break with 20th-century tools for tracking public opinion, which revolved around quarantining small samples that could be treated as representative of the whole. Wagner had emerged from a cadre of analysts who thought of voters as individuals and worked to aggregate projections about their opinions and behavior until they revealed a composite picture of everyone. His techniques marked the fulfillment of a new way of thinking, a decade in the making, in which voters were no longer trapped in old political geographies or tethered to traditional demographic categories, such as age or gender, depending on which attributes pollsters asked about or how consumer marketers classified them for commercial purposes. Instead, the electorate could be seen as a collection of individual citizens who could each be measured and assessed on their own terms. Now it was up to a candidate who wanted to lead those people to build a campaign that would interact with them the same way.

Dan Wagner, the chief analytics officer for Obama 2012, led the campaign's "Cave" of data scientists.
After the voters returned Obama to office for a second term, his campaign became celebrated for its use of technology—much of it developed by an unusual team of coders and engineers—that redefined how individuals could use the Web, social media, and smartphones to participate in the political process. A mobile app allowed a canvasser to download and return walk sheets without ever entering a campaign office; a Web platform called Dashboard gamified volunteer activity by ranking the most active supporters; and "targeted sharing" protocols mined an Obama backer's Facebook network in search of friends the campaign wanted to register, mobilize, or persuade.
But underneath all that were scores describing particular voters: a new political currency that predicted the behavior of individual humans. The campaign didn't just know who you were; it knew exactly how it could turn you into the type of person it wanted you to be.
The Scores
Four years earlier, Dan Wagner had been working at a Chicago economic consultancy, using forecasting skills developed studying econometrics at the University of Chicago, when he fell for Barack Obama and decided he wanted to work on his home-state senator's 2008 presidential campaign. Wagner, then 24, was soon in Des Moines, handling data entry for the state voter file that guided Obama to his crucial victory in the Iowa caucuses. He bounced from state to state through the long primary calendar, growing familiar with voter data and the ways of using statistical models to intelligently sort the electorate. For the general election, he was named lead targeter for the Great Lakes/Ohio River Valley region, the most intense battleground in the country.
After Obama's victory, many of his top advisors decamped to Washington to make preparations for governing. Wagner was told to stay behind and serve on a post-election task force that would review a campaign that had looked, to the outside world, technically flawless.
In the 2008 presidential election, Obama's targeters had assigned every voter in the country a pair of scores based on the probability that the individual would perform two distinct actions that mattered to the campaign: casting a ballot and supporting Obama. These scores were derived from an unprecedented volume of ongoing survey work. For each battleground state every week, the campaign's call centers conducted 5,000 to 10,000 so-called short-form interviews that quickly gauged a voter's preferences, and 1,000 interviews in a long-form version that was more like a traditional poll. To derive individual-level predictions, algorithms trawled for patterns between these opinions and the data points the campaign had assembled for every voter—as many as one thousand variables each, drawn from voter registration records, consumer data warehouses, and past campaign contacts.
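The mechanics of such a per-voter support score can be sketched as a logistic model over a voter's variables. Everything below — the feature names, weights, and bias — is hypothetical, since the campaign's actual models were never published; the point is only how survey-trained weights turn a voter record into a probability:

```python
import math

# Hypothetical weights, standing in for coefficients fit against
# short-form survey responses. None of these are real campaign values.
WEIGHTS = {
    "age_under_30": 0.8,
    "urban_resident": 0.5,
    "past_primary_voter": 0.3,
    "magazine_subscriber": -0.1,
}
BIAS = -0.4

def support_score(voter):
    """Logistic score: modeled probability this voter supports the candidate."""
    z = BIAS + sum(WEIGHTS[k] for k, v in voter.items() if v and k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

voter = {"age_under_30": True, "urban_resident": True,
         "past_primary_voter": False, "magazine_subscriber": True}
print(round(support_score(voter), 3))
```

A real system would refit these weights weekly as new survey calls came in, which is how the scores could respond to events like the Lehman collapse.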

This innovation was most valued in the field. There, an almost perfect cycle of microtargeting models directed volunteers to scripted conversations with specific voters at the door or over the phone. Each of those interactions produced data that streamed back into Obama's servers to refine the models pointing volunteers toward the next door worth a knock. The efficiency and scale of that process put the Democrats well ahead when it came to profiling voters. John McCain's campaign had, in most states, run its statistical model just once, assigning each voter to one of its microtargeting segments in the summer. McCain's advisors were unable to recalculate the probability that those voters would support their candidate as the dynamics of the race changed. Obama's scores, on the other hand, adjusted weekly, responding to new events like Sarah Palin's vice-presidential nomination or the collapse of Lehman Brothers.
Within the campaign, however, the Obama data operations were understood to have shortcomings. As was typical in political information infrastructure, knowledge about people was stored separately from data about the campaign's interactions with them, mostly because the databases built for those purposes had been developed by different consultants who had no interest in making their systems work together.
But the task force knew the next campaign wasn't stuck with that situation. Obama would run his final race not as an insurgent against a party establishment, but as the establishment itself. For four years, the task force members knew, their team would control the Democratic Party's apparatus. Their demands, not the offerings of consultants and vendors, would shape the marketplace. Their report recommended developing a "constituent relationship management system" that would allow staff across the campaign to look up individuals not just as voters or volunteers or donors or website users but as citizens in full. "We realized there was a problem with how our data and infrastructure interacted with the rest of the campaign, and we ought to be able to offer it to all parts of the campaign," says Chris Wegrzyn, a database applications developer who served on the task force.
Wegrzyn became the DNC's lead targeting developer and oversaw a series of costly acquisitions, all intended to free the party from the traditional dependence on outside vendors. The committee installed a Siemens Enterprise System phone-dialing unit that could put out 1.2 million calls a day to survey voters' opinions. Later, party leaders signed off on a $280,000 license to use Vertica software from Hewlett-Packard that allowed their servers to access not only the party's 180-million-person voter file but all the data about volunteers, donors, and those who had interacted with Obama online.
Many of those who went to Washington after the 2008 election in order to further the president's political agenda returned to Chicago in the spring of 2011 to work on his reëlection. The chastening losses they had experienced in Washington separated them from those who had known only the ecstasies of 2008. "People who did '08, but didn't do '10, and came back in '11 or '12—they had the hardest culture clash," says Jeremy Bird, who became national field director on the reëlection campaign. But those who went to Washington and returned to Chicago developed a particular appreciation for Wagner's methods of working with the electorate at an atomic level. It was a way of thinking that perfectly aligned with their simple theory of what it would take to win the president reëlection: get everyone who had voted for him in 2008 to do it again. At the same time, they knew they would need to succeed at registering and mobilizing new voters, especially in some of the fastest-growing demographic categories, to make up for any 2008 voters who did defect.
Obama's campaign began the election year confident it knew the name of every one of the 69,456,897 Americans whose votes had put him in the White House. They may have cast those votes by secret ballot, but Obama's analysts could look at the Democrats' vote totals in each precinct and identify the people most likely to have backed him. Pundits talked in the abstract about reassembling Obama's 2008 coalition. But within the campaign, the goal was literal. They would reassemble the coalition, one by one, through personal contacts.
The Experiments
When Jim Messina arrived in Chicago as Obama's newly minted campaign manager in January of 2011, he imposed a mandate on his recruits: they were to make decisions based on measurable data. But that didn't mean quite what it had four years before. The 2008 campaign had been "data-driven," as people liked to say. This reflected a principled imperative to challenge the political establishment with an empirical approach to electioneering, and it was greatly influenced by David Plouffe, the 2008 campaign manager, who loved metrics, spreadsheets, and performance reports. Plouffe wanted to know: How many of a field office's volunteer shifts had been filled last weekend? How much money did that ad campaign bring in?
But for all its reliance on data, the 2008 Obama campaign had remained insulated from the most important methodological innovation in 21st-century politics. In 1998, Yale professors Don Green and Alan Gerber conducted the first randomized controlled trial in modern political science, assigning New Haven voters to receive nonpartisan election reminders by mail, phone, or in-person visit from a canvasser and measuring which group saw the greatest increase in turnout. The subsequent wave of field experiments by Green, Gerber, and their followers focused on mobilization, testing competing modes of contact and get-out-the-vote language to see which were most successful.
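The Green-Gerber design reduces to randomly splitting a voter list, contacting only the treatment group, and comparing turnout rates. A minimal simulation of that design, with an invented 5-point contact effect standing in for whatever the real experiments measured:

```python
import random

rng = random.Random(42)

def simulate_voter():
    # Each simulated voter starts with a 40% baseline chance of voting
    # (an illustrative number, not from the New Haven study).
    return {"base_turnout": 0.40, "contacted": False}

def contact(voter):
    voter["contacted"] = True

def votes(voter):
    # Assumed effect: contact adds 5 points of turnout probability.
    p = voter["base_turnout"] + (0.05 if voter["contacted"] else 0.0)
    return rng.random() < p

# Alternating assignment stands in for random assignment here,
# since the simulated voters are identical before treatment.
voters = [simulate_voter() for _ in range(20000)]
treatment, control = voters[::2], voters[1::2]
for v in treatment:
    contact(v)

turnout_t = sum(votes(v) for v in treatment) / len(treatment)
turnout_c = sum(votes(v) for v in control) / len(control)
print(f"estimated lift: {turnout_t - turnout_c:+.3f}")
```

With 10,000 voters per arm, the treatment-minus-control difference lands close to the built-in 5-point effect, which is the logic that let experimenters rank mail versus phone versus door knocks.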
The first Obama campaign used the findings of such tests to tweak call scripts and canvassing protocols, but it never fully embraced the experimental revolution itself. After Dan Wagner moved to the DNC, the party decided it would start conducting its own experiments. He hoped the committee could become "a driver of research for the Democratic Party."
To that end, he hired the Analyst Institute, a Washington-based consortium founded under the AFL-CIO's leadership in 2006 to coördinate field research projects across the electioneering left and distribute the findings among allies. Much of the experimental world's research had focused on voter registration, because that was easy to measure. The breakthrough was that registration no longer had to be approached passively; organizers did not have to simply wait for the unenrolled to emerge from anonymity, sign a form, and, they hoped, vote. New techniques made it possible to intelligently profile nonvoters: commercial data warehouses sold lists of all voting-age adults, and comparing those lists with registration rolls revealed eligible candidates, each attached to a home address to which an application could be mailed. Applying microtargeting models identified which nonregistrants were most likely to be Democrats and which ones Republicans.
The Obama campaign embedded social scientists from the Analyst Institute among its staff. Party officials knew that adding new Democratic voters to the registration rolls was a crucial element in their strategy for 2012. But already the campaign had ambitions beyond merely modifying nonparticipating citizens' behavior through registration and mobilization. It wanted to take on the most vexing problem in politics: changing voters' minds.
The expansion of individual-level data had made possible the kind of testing that could help do that. Experimenters had typically calculated the average effect of their interventions across the entire population. But as campaigns developed deep portraits of the voters in their databases, it became possible to measure the attributes of the people who were actually moved by an experiment's impact. A series of tests in 2006 by the women's group Emily's List had illustrated the potential of conducting controlled trials with microtargeting databases. When the group sent direct mail in favor of Democratic gubernatorial candidates, it barely budged those whose scores placed them in the middle of the partisan spectrum; it had a far greater impact upon those who had been profiled as soft (or nonideological) Republicans.
That test, and others that followed, demonstrated the limitations of traditional targeting. Such techniques rested on a series of long-standing assumptions—for instance, that middle-of-the-roaders were the most persuadable and that infrequent voters were the likeliest to be captured in a get-out-the-vote drive. But the experiments introduced new uncertainty. People who were identified as having a 50 percent likelihood of voting for a Democrat might in fact be torn between the two parties, or they might look like centrists only because no data attached to their records pushed a partisan prediction in one direction or another. "The scores in the middle are the people we know less about," says Chris Wyant, a 2008 field organizer who became the campaign's general election director in Ohio four years later. "The extent to which we were guessing about persuasion was not lost on any of us."
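The Emily's List-style analysis amounts to computing the treatment-control difference separately within each support-score band rather than averaging it over the whole population. A sketch with fabricated records, arranged so that the soft-Republican band moves while the middle band does not:

```python
from collections import defaultdict

def effect_by_band(records, band_width=20):
    """Persuasion effect within each support-score band: the difference in
    post-contact favorability between treated and control voters."""
    sums = defaultdict(lambda: {"t": [0, 0], "c": [0, 0]})
    for score, treated, favorable in records:
        band = (score // band_width) * band_width
        cell = sums[band]["t" if treated else "c"]
        cell[0] += favorable
        cell[1] += 1
    return {band: g["t"][0] / g["t"][1] - g["c"][0] / g["c"][1]
            for band, g in sums.items()}

# Fabricated records (score, treated?, favorable after?): the 20-39 band
# responds to the mailer; the 40-59 band does not budge.
records = (
    [(30, True, 1)] * 60 + [(30, True, 0)] * 40 +    # treated, band 20
    [(30, False, 1)] * 40 + [(30, False, 0)] * 60 +  # control, band 20
    [(50, True, 1)] * 50 + [(50, True, 0)] * 50 +    # treated, band 40
    [(50, False, 1)] * 50 + [(50, False, 0)] * 50    # control, band 40
)
effects = effect_by_band(records)
print(effects)  # band 20 shows a ~+0.20 lift, band 40 shows ~0
```

Disaggregating this way is what exposed the limits of chasing the 50-percent scores: a flat average effect can hide a large effect concentrated in one band.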

One way the campaign sought to identify the ripest targets was through a series of what the Analyst Institute called "experiment-informed programs," or EIPs, designed to measure how effective different types of messages were at moving public opinion.
The traditional way of doing this had been to audition themes and language in focus groups and then test the winning material in polls to see which categories of voters responded positively to each approach. Any insights were distorted by the artificial settings and by the tiny samples of demographic subgroups in traditional polls. "You're making significant resource decisions based on 160 people?" asks Mitch Stewart, director of the Democratic campaign group Organizing for America. "Isn't that nuts? And people have been doing that for decades!"
An experimental program would use those steps to develop a range of prospective messages that could be subjected to empirical testing in the real world. Experimenters would randomly assign voters to receive varied sequences of direct mail—four pieces on the same policy theme, each making a slightly different case for Obama—and then use ongoing survey calls to isolate the attributes of those whose opinions changed as a result.
In March, the campaign used this technique to test various ways of promoting the administration's health-care policies. One series of mailers described Obama's regulatory reforms; another advised voters that they were now entitled to free regular check-ups and ought to schedule one. The experiment revealed how much voter response differed by age, especially among women. Older women thought more highly of the policies when they received reminders about preventive care; younger women liked them more when they were told about contraceptive coverage and new rules that prohibited insurance companies from charging women more.
When Paul Ryan was named to the Republican ticket in August, Obama's advisors rushed out an EIP that compared different lines of attack about Medicare. The results were surprising. "The electorate [had seemed] very inelastic," says Terry Walsh, who coördinated the campaign's polling and paid-media spending. "In fact, when we did the Medicare EIPs, we got positive movement that was very heartening, because it was at a time when we were not seeing a lot of movement in the electorate." But that movement came from quarters where a traditional campaign would never have gone hunting for minds it could change. The Obama team found that voters between 45 and 65 were more likely to change their views about the candidates after hearing Obama's Medicare arguments than those over 65, who were currently eligible for the program.
A similar strategy of targeting an unexpected population emerged from a July EIP testing Obama's messages aimed at women. The voters most responsive to the campaign's arguments about equal-pay measures and women's health, it found, were those whose likelihood of supporting the president was scored at merely 20 and 40 percent. Those scores suggested that they probably shared Republican attitudes; but here was one thing that could pull them to Obama. As a result, when Obama unveiled a direct-mail track addressing only women's issues, it wasn't to shore up interest among core parts of the Democratic coalition, but to reach over for conservatives who were at odds with their party on gender concerns. "The whole goal of the women's track was to pick off votes for Romney," says Walsh. "We were able to persuade people who fell low on candidate support scores if we gave them a specific message."
At the same time, Obama's campaign was pursuing a second, even more audacious adventure in persuasion: one-on-one interaction. Traditionally, campaigns have restricted their persuasion efforts to channels like mass media or direct mail, where they can control presentation, language, and targeting. Sending volunteers to persuade voters would mean forcing them to interact with opponents, or with voters who were undecided because they were alienated from politics on delicate issues like abortion. Campaigns have typically resisted relinquishing control of ground-level interactions with voters to risk such potentially combustible situations; they felt they didn't know enough about their supporters or volunteers. "You can have a negative impact," says Jeremy Bird, who served as national deputy director of Organizing for America. "You can hurt your candidate."
In February, however, Obama volunteers attempted 500,000 conversations with the goal of winning new supporters. Voters who'd been randomly selected from a group identified as persuadable were polled after a phone conversation that began with a volunteer reading from a script. "We definitely find certain people moved more than other people," says Bird. Analysts identified their attributes and made them the core of a persuasion model that predicted, on a scale of 0 to 10, the likelihood that a voter could be pulled in Obama's direction after a single volunteer interaction. The experiment also taught Obama's field department about its volunteers. Those in California, which had always had an exceptionally mature volunteer organization for a non-battleground state, turned out to be especially persuasive: voters called by Californians, no matter what state they were in themselves, were more likely to become Obama supporters.

Alex Lundry created Mitt Romney's data science unit. It was less than one-tenth the size of Obama's analytics team.

With these findings in hand, Obama's strategists grew confident that they were no longer restricted to advertising as a channel for persuasion. They began sending trained volunteers to knock on doors or make phone calls with the objective of changing minds.
That dramatic shift in the culture of electioneering was felt on the streets, but it was possible only because of advances in analytics. Chris Wegrzyn, a database applications developer, developed a program code-named Airwolf that matched county and state lists of people who had requested mail ballots with the campaign's list of e-mail addresses. Likely Obama supporters would get regular reminders from their local field organizers, asking them to return their ballots, and, once they had, a message thanking them and proposing other ways to be involved in the campaign. The local organizer would receive daily lists of the voters on his or her turf who had outstanding ballots so that the campaign could follow up with personal contact by phone or at the doorstep. "It is a fundamental way of tying together the online and offline worlds," says Wagner.
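Airwolf itself was never published, but the behavior described is essentially a join between two lists. A sketch with an assumed (name, date-of-birth) match key and invented records — the real system's matching logic and fields are not public:

```python
# Hypothetical county mail-ballot request list.
ballot_requests = [
    {"name": "ana diaz",  "dob": "1970-01-02", "returned": False},
    {"name": "bo chen",   "dob": "1988-09-14", "returned": True},
    {"name": "cal smith", "dob": "1955-03-30", "returned": False},
]

# Hypothetical campaign email list keyed on (name, date of birth).
email_list = {
    ("ana diaz", "1970-01-02"): "ana@example.com",
    ("cal smith", "1955-03-30"): "cal@example.com",
}

def outstanding_ballot_reminders(requests, emails):
    """Return (email, name) pairs for matched voters whose ballot is still out."""
    reminders = []
    for r in requests:
        key = (r["name"], r["dob"])
        if not r["returned"] and key in emails:
            reminders.append((emails[key], r["name"]))
    return reminders

print(outstanding_ballot_reminders(ballot_requests, email_list))
```

The daily organizer lists the article describes would be the complement of this join: matched voters with outstanding ballots, grouped by turf, for phone or doorstep follow-up.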
Wagner, however, was turning his attention beyond the field. By June of 2011, he was chief analytics officer for the campaign and had begun making the rounds of the other units at headquarters, from fund-raising to communications, offering to help "solve their problems with data." He imagined the analytics department—now a 54-person staff, housed in a windowless office known as the Cave—as an "in-house consultancy" with other parts of the campaign as its clients. "There's a process of helping people learn about the tools so they can be a participant in the process," he says. "We essentially built products for each of those various departments that were paired up with a massive database we had."
The Flow
As job notices seeking specialists in text analytics, computational advertising, and online experiments came out of the incumbent's campaign, Mitt Romney's advisors at the Republicans' headquarters in Boston's North End watched with a combination of awe and perplexity. Throughout the primaries, Romney had appeared to be the only Republican running a 21st-century campaign, methodically banking early votes in states like Florida and Ohio before his disorganized opponents could establish operations there.
But the Republican winner's relative sophistication in the primaries belied a poverty of expertise compared with the Obama campaign. Since his first campaign for governor of Massachusetts, in 2002, Romney had relied upon TargetPoint Consulting, a Virginia firm that was then a pioneer in linking information from consumer data warehouses to voter registration records and using it to develop individual-level predictive models. It was TargetPoint's CEO, Alexander Gage, who had coined the term "microtargeting" to describe the process, which he modeled on the corporate world's approach to customer relationship management.
Such techniques had offered George W. Bush's reëlection campaign a significant edge in targeting, but Republicans had done little to institutionalize that advantage in the years since. By 2006, Democrats had not only matched Republicans in adopting commercial marketing techniques; they had moved ahead by integrating methods developed in the social sciences.
Romney's advisors knew that Obama was building innovative internal data analytics departments, but they didn't feel a need to match those activities. "I don't think we thought, relative to the marketplace, we could be the best at data in-house all the time," Romney's digital director, Zac Moffatt, said in July. "Our idea is to find the best firms to work with us." As a result, Romney remained dependent on TargetPoint to develop voter segments, often just once, and then deliver them to the campaign's databases. That was the structure Obama had abandoned after winning the nomination in 2008.
In May a TargetPoint vice president, Alex Lundry, took leave from his post at the firm to assemble a data science unit within Romney's headquarters. To round out his team, Lundry brought in Tom Wood, a University of Chicago postdoctoral student in political science, and Brent McGoldrick, a veteran of Bush's 2004 campaign who had left politics for the consulting firm Financial Dynamics (later FTI Consulting), where he helped financial-services, health-care, and energy companies communicate better. But Romney's data science team was less than one-tenth the size of Obama's analytics department. Without a large in-house staff to handle the massive national data sets that made it possible to test and track citizens, Romney's data scientists never tried to deepen their understanding of individual behavior. Instead, they fixated on trying to unlock one big, persistent mystery, which Lundry framed this way: "How can we get a sense of whether this advertising is working?"
"You usually get GRPs and tracking polls," he says, referring to the gross ratings points that are the basic unit of measuring television buys. "There's a very large causal leap you have to make from one to the other."
Lundry decided to focus on more manageable ways of measuring what he called the information flow. His team converted topics of political communication into discrete units they called "entities." They initially classified 200 of them, including issues like the auto industry bailout, controversies like the one surrounding federal funding for the solar-power company Solyndra, and catchphrases like "the war on women." When a new concept (such as Obama's offhand remark, during a speech about our common dependence on infrastructure, that "you didn't build that") emerged as part of the election-year lexicon, the analysts added it to the list. They tracked each entity on the National Dialogue Monitor, TargetPoint's system for measuring the frequency and tone with which certain topics are mentioned across all media. TargetPoint also integrated content collected from newspaper websites and closed-caption transcripts of broadcast programs. Lundry's team aimed to examine how every entity fared over time in each of two categories: the informal sphere of social media, especially Twitter, and the journalistic product that campaigns call earned press coverage.
Ultimately, Lundry wanted to assess the impact that each type of public attention had on what mattered most to them: Romney's position in the horse race. He turned to vector autoregression models, which equities traders use to isolate the influence of single variables on market movements. In this case, Lundry's team looked for patterns in the relationship between the National Dialogue Monitor's data and Romney's numbers in Gallup's daily tracking polls. By the end of July, they thought they had identified a three-step process they called "Wood's Triangle."
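The building block of such a vector autoregression is a lagged least-squares regression. The sketch below is purely illustrative: the two daily series are synthetic, and the two-day lag and 0.3 effect size are invented stand-ins for the National Dialogue Monitor counts and Gallup's tracking numbers.

```python
import numpy as np

rng = np.random.default_rng(0)
days = 200
mentions = rng.normal(0, 1, days)            # daily media mentions (standardized)
noise = rng.normal(0, 0.1, days)
polls = np.zeros(days)
polls[2:] = 0.3 * mentions[:-2] + noise[2:]  # polls respond with a 2-day lag

# Regress today's poll number on mentions from 1, 2, and 3 days ago
# and see which lag carries the effect.
max_lag = 3
X = np.column_stack([mentions[max_lag - k: days - k] for k in range(1, max_lag + 1)])
y = polls[max_lag:]
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
print({f"lag_{k}": round(float(c), 2) for k, c in zip(range(1, max_lag + 1), coefs)})
```

With enough data, the fitted coefficients recover the planted two-day lag while the other lags stay near zero, which is the kind of pattern Lundry's team was hunting for between media attention and the horse race.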
Within three or four days of a new entity's entry into the conversation, either through paid ads or through the news cycle, it was possible to make a well-informed hypothesis about whether the topic was likely to win media attention by tracking whether it generated Twitter chatter. That informal conversation among political-class elites typically led to traditional print or broadcast press coverage one to two days later, and that, in turn, might have an impact on the horse race. "We saw this process over and over again," says Lundry.
They began to think of ads as a "shock to the system"—a way to either introduce a new topic or restore focus on an area in which elite interest had faded. If an entity didn't gain its own energy—as when the Republicans charged over the summer that the White House had waived the work requirements in the federal welfare rules—Lundry would propose a "re-shock to the system" with another ad on the subject five to seven days later. After 12 to 14 days, Lundry found, an entity had passed through the system and exhausted its ability to alter public opinion—so he would recommend to the campaign's communications staff that they move on to something new.
Those insights offered campaign officials a theory of information flows, but they provided no guidance on how to allocate campaign resources in order to win the Electoral College. Assuming that Obama had superior ground-level data and analytics, Romney's campaign tried to leverage its rivals' strategy to shape its own; if Democrats thought a state or media market was competitive, maybe that was evidence that Republicans should think so too. "We were necessarily reactive, because we were putting together the plane as it took off," Lundry says. "They had an enormous head start on us."
Romney's political department began holding regular meetings to look at where in the country the Obama campaign was focusing resources like ad dollars and the president's time. The goal was to try to divine the calculations behind those decisions. It was, in essence, the way Microsoft's Bing approached Google: trying to reverse-engineer the market leader's code by studying the visible output. "We watch where the president goes," Dan Centinello, the Romney deputy political director who oversaw the meetings, said over the summer.
Obama's media-buying strategy proved particularly hard to decipher. In early September, as part of his standard review, Lundry noticed that the week after the Democratic convention, Obama had aired 68 ads in Dothan, Alabama, a town near the Florida border. Dothan was one of the country's smallest media markets, and Alabama one of the safest Republican states. Even though the area was known to savvy ad buyers as one of the places where a media market crosses state lines, Dothan TV stations reached only about 9,000 Florida voters, and around 7,000 of them had voted for John McCain in 2008. "This is a hard-core Republican media market," Lundry says. "It's incredibly tiny. But they were advertising there."
Romney's advisors might have formed a theory about the broader media environment, but whatever was sending Obama hunting for a small pocket of votes was beyond their measurement. "We could tell," says McGoldrick, "that there was something in the algorithms that was telling them what to run."
The March
In the summer of 2011, Carol Davidsen received a message from Dan Wagner. Already the Obama campaign was known for its relentless e-mails beseeching supporters to give their money or time, but this one offered something that intrigued Davidsen: a job. Wagner had sorted the campaign's list of donors, stretching back to 2008, to find those who described their occupation with terms like "data" and "analytics" and sent them all invitations to apply for work in his new analytics department.
Davidsen was working at Navic Networks, a Microsoft-owned company that wrote code for set-top cable boxes to create a record of a user's DVR or tuner history, when she heeded Wagner's call. One year before Election Day, she started work in the campaign's technology department to serve as product manager for Narwhal. That was the code name, borrowed from a tusked whale, for an ambitious effort to match records from previously unconnected databases so that a user's online interactions with the campaign could be synchronized. With Narwhal, e-mail blasts asking people to volunteer could take their past donation history into consideration, and the algorithms determining how much a supporter would be asked to contribute could be shaped by knowledge about his or her reaction to previous solicitations. This integration enriched a technique, common in website development, that Obama's online fund-raising efforts had used to good effect in 2008: the A/B test, in which users are randomly directed to different versions of a thing and their responses are compared. Now analysts could leverage personal data to identify the attributes of those who responded, and use that knowledge to refine subsequent appeals. "You can cite people's other types of engagement," says Amelia Showalter, Obama's director of digital analytics. "We discovered that there were a lot of things that built goodwill, like signing the president's birthday card or getting a free bumper sticker, that led them to become more engaged with the campaign in other ways."
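The A/B mechanic described above can be sketched in a few lines. The click rates, list size, and variant framing below are hypothetical, not figures from the campaign; the point is the randomized split and the significance check that decides whether one version genuinely outperforms the other.

```python
import math
import random

random.seed(1)

# Hypothetical true click rates for two e-mail variants.
def send(variant, n):
    rate = {"A": 0.050, "B": 0.065}[variant]
    return sum(random.random() < rate for _ in range(n))

n = 20_000                       # recipients randomly assigned to each variant
clicks_a, clicks_b = send("A", n), send("B", n)
p_a, p_b = clicks_a / n, clicks_b / n

# Two-proportion z-test: is B's lift unlikely to be chance?
p_pool = (clicks_a + clicks_b) / (2 * n)
se = math.sqrt(2 * p_pool * (1 - p_pool) / n)
z = (p_b - p_a) / se
print(f"A: {p_a:.3%}  B: {p_b:.3%}  z = {z:.2f}")
```

What Narwhal added was the ability to join each recipient's response back to their donation and engagement history, so the next round of appeals could be segmented by who actually clicked.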
If online communication had been the aspect of the 2008 campaign subjected to the most rigorous empirical examination—it's easy to randomly assign e-mails in an A/B test and compare click-through rates or donation levels—mass-media strategy was among those that received the least. Television and radio ads had to be purchased by geographic zone, and the available data on who watches which channels or shows, collected by research firms like Nielsen and Scarborough, often included little more than viewer age and gender. That might be good enough to guide buys for Schick or Foot Locker, but it's of limited value for advertisers looking to define audiences in political terms.

As campaign manager Jim Messina prepared to spend as much as half a billion dollars on mass media for Obama's reëlection, he set out to reinvent the process for allocating resources across broadcast, cable, satellite, and online channels. "If you think about the universe of possible places for an advertiser, it's almost infinite," says Amy Gershkoff, who was hired as the campaign's media-planning director on the strength of her successful negotiations, while at her firm Changing Targets in 2009, to link the information from cable systems to individual microtargeting profiles. "There are tens of millions of opportunities where a campaign can put its next dollar. You have all this great, robust voter data that doesn't fit together with the media data. How you knit that together is a challenge."
By the start of 2012, Wagner had deftly wrested command of media planning into his own department. As he expanded the scope of analytics, he defined his purview as "the study and practice of resource optimization for the purpose of improving programs and earning votes more efficiently." That usually meant calculating, for any campaign activity, the number of votes gained through a given amount of contact at a given cost.
But when it came to buying media, such calculations had been simply impossible, because campaigns were unable to link what they knew about voters to what cable providers knew about their customers. Obama's advisors decided that the data made available in the private sector had long led political advertisers to ask the wrong questions. Walsh says of the effort to reimagine the media-targeting process: "It was not to get a better understanding of what 35-plus women watch on TV. It was to find out how many of our persuadable voters were watching those dayparts."
Davidsen, whose previous work had left her intimately familiar with the rich data sets held in set-top boxes, understood that a lot of that data was available in the form of tuner and DVR histories collected by cable providers and then aggregated by research firms. For privacy reasons, however, the information was not available at the individual level. "The hardest thing in media buying right now is the lack of information," she says.
Davidsen began negotiating to have research firms repackage their data in a form that would permit the campaign to access the individual histories without violating the cable providers' privacy standards. Under a $350,000 deal she worked out with one company, Rentrak, the campaign provided a list of persuadable voters and their addresses, derived from its microtargeting models, and the company looked for them in the cable providers' billing files. When a record matched, Rentrak would issue it a unique household ID that identified viewing data from a single set-top box but masked any personally identifiable information.
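A toy version of that matching arrangement might look like the following. The addresses, the `household_id` helper, and the hashing scheme are all invented for illustration; Rentrak's actual ID assignment is not public. The essential property is that a match returns an opaque ID usable as a key for viewing data, with no name or address attached.

```python
import hashlib

# Stand-in for the cable providers' billing files (addresses -> set-top boxes).
billing = {
    ("101 Elm St", "Toledo OH"): {"box_id": "stb-448"},
    ("9 Oak Ave", "Toledo OH"): {"box_id": "stb-212"},
}

# Stand-in for the campaign's persuadable-voter list.
persuadables = [("101 Elm St", "Toledo OH"), ("77 Pine Rd", "Toledo OH")]

def household_id(address):
    # One-way hash stands in for the firm's internal ID assignment:
    # the ID keys viewing data but is not traceable back to a person.
    return hashlib.sha256("|".join(address).encode()).hexdigest()[:12]

matches = [household_id(a) for a in persuadables if a in billing]
print(matches)
```

Only one of the two persuadable addresses appears in the billing files, so the campaign gets back a single masked household ID rather than any billing record.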
The Obama campaign had created its own television ratings system, a kind of Nielsen in which the only viewers who mattered were those not yet fully committed to a presidential candidate. But Davidsen had to get the information into a practical form by early May, when Obama strategists planned to start running their anti-Romney ads. She oversaw the development of a software platform the Obama staff called the Optimizer, which broke the day into 96 quarter-hour segments and assessed which time slots across 60 channels offered the greatest number of persuadable targets per dollar. (By September, she had unlocked an even richer trove of data: a cable system in Toledo, Ohio, that tracked viewers' tuner histories by the second.) "The revolution of media buying in this campaign," says Walsh, "was to turn what was a broadcast medium into something that looks a lot more like a narrowcast medium."
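The Optimizer's core decision can be approximated as a greedy cost-effectiveness pass over the 60-channel by 96-slot grid. Everything below is a placeholder: costs and persuadable counts are random, and the real system presumably handled constraints (frequency caps, overlapping audiences, market boundaries) that this sketch ignores.

```python
import random

random.seed(7)

# Hypothetical inventory: each (channel, quarter-hour) slot has an ad cost
# and an estimated count of persuadable viewers from the matched tuner data.
slots = [
    {"channel": c, "slot": q,
     "cost": random.randint(200, 5000),
     "persuadables": random.randint(50, 4000)}
    for c in range(60) for q in range(96)
]

budget = 100_000

# Greedy: buy slots in order of persuadable viewers per dollar.
slots.sort(key=lambda s: s["persuadables"] / s["cost"], reverse=True)

spent, reached, buys = 0, 0, []
for s in slots:
    if spent + s["cost"] <= budget:
        spent += s["cost"]
        reached += s["persuadables"]
        buys.append(s)

print(f"bought {len(buys)} slots, spent ${spent:,}, reached {reached:,} targets")
```

Ranking by targets per dollar is what pushes a buyer toward cheap, oddly timed slots on fringe channels whenever the persuadable density there is high, which is exactly the pattern that baffled Romney's analysts.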
When the Obama campaign did use television as a mass medium, it was because the Optimizer had concluded it would be a more efficient way of reaching persuadable targets. Sometimes a national cable ad was a better bargain than a large number of local buys in the 66 media markets reaching battleground states. But the occasional national buy also had other benefits. It could boost fund-raising and motivate volunteers in states that weren't essential to Obama's Electoral College arithmetic. And, says Davidsen, "it helps hide some of the strategy of your buying."
Even without that tactic, Obama's buys perplexed the Romney analysts in Boston. They had invested in their own media-intelligence platform, called Centraforce. It used some of the same aggregated data sources that were feeding into the Optimizer, and at times both seemed to send the campaigns to the same unlikely ad blocks—for example, in reruns on TV Land. But there was a lot more to what Lundry called Obama's "highly variable" media strategy. Many of the Democrats' ads were placed in fringe markets, on marginal stations, and at odd times where few political candidates had ever seen value. Romney's data scientists simply could not decode those decisions without the voter models or persuasion experiments that helped Obama pick out individual targets. "We were never able to figure out the level of advertising and what they were trying to do," says McGoldrick. "It wasn't worth reverse-engineering, because what are you going to do?"
The Community
Although the voter opinion tables that emerged from the Cave looked a lot like polls, the analysts who produced them were disinclined to call them polls. The campaign had plenty of those, generated by a public-opinion team of eight outside firms, and new arrivals at the Chicago headquarters were shocked by the variegated breadth of the research that arrived on their desks daily. "We believed in combining the qual, which we did more than any campaign ever, with the quant, which we [also] did more than any other campaign, to make sure all communication for every level of the campaign was informed by what they found," says David Simas, the director of opinion research.
Simas considered himself the "air-traffic controller" for such research, which was guided by a series of voter diaries that Obama's team commissioned as it prepared for the reëlection campaign. "We needed to do something almost divorced from politics and get to the way they're seeing their lives," he says. The lead pollster, Joel Benenson, had respondents write about their experiences. The entries frequently used the word "disappointment," which helped explain attitudes toward Obama's administration but also spoke to a broader dissatisfaction with economic conditions. "That became the foundation for our entire research program," says Simas.

Carol Davidsen matched Obama 2012's lists of persuadable voters with cable providers' billing information.

Obama's advisors used those diaries to develop messages that contrasted Obama with Romney as a fighter for the middle class. Benenson's national polls tested language to see which affected voters' responses in survey experiments and direct questioning. A quartet of polling firms were assigned specific states and asked to figure out which national themes fit best with local concerns. Eventually, Obama's media advisors created more than 500 ads and tested them before an online sample of viewers selected by focus-group director David Binder.
But the campaign had to play defense, too. When something potentially damaging popped up in the news, like Democratic consultant Hilary Rosen's declaration that Ann Romney had "never worked a day in her life," Simas checked in with the Community, a private online bulletin board populated by 100 undecided voters Binder had recruited. Simas would monitor Community conversations to see which news events penetrated voter consciousness. Sometimes he had Binder show its members controversial material—like a video clip of Obama's "You didn't build that" comment—and ask if it changed their views of the candidate. "For me, it was a very quick way to draw back and determine whether something was a problem or not a problem," says Simas.
When Wagner started packaging his department's research into something that campaign leadership could read like a poll, a pattern became apparent. Obama's numbers in key battleground states were low in the analytic tables, but Romney's were too. There were simply more undecided voters in such states—sometimes nearly twice as many as the traditional pollsters found. A basic methodological distinction explained this discrepancy: microtargeting models required interviewing a lot of unlikely voters to give shape to a profile of what a nonvoter looked like, while pollsters tracking the horse race wanted to screen more rigorously for those likely to cast a ballot. The rivalry between the two units trying to measure public opinion grew intense: the analytic polls were a threat to the pollsters' primacy and, potentially, to their business model. "I spent a lot of time within the campaign explaining to people that the numbers we get from analytics and the numbers we get from external pollsters did not need strictly to be reconciled," says Walsh. "They were different."
The scope of the analytic research enabled it to pick up movements too small for traditional polls to perceive. As Simas reviewed Wagner's analytic tables in mid-October, he was alarmed to see that what had been a Romney lead of one to two points in Green Bay, Wisconsin, had grown into an advantage of between six and nine. Green Bay was the only media market in the state to experience such a shift, and there was no obvious explanation. But it was hard to discount. Whereas a standard 800-person statewide poll might have reached 100 respondents in the Green Bay area, analytics was placing 5,000 calls in Wisconsin in each five-day cycle—and benefiting from tens of thousands of other field contacts—to produce microtargeting scores. Analytics was talking to as many people in the Green Bay media market as traditional pollsters were talking to across Wisconsin every week. "We could have the confidence level to say, 'This isn't noise,'" says Simas. So the campaign's media buyers aired an ad attacking Romney on outsourcing and beseeched Messina to send former president Bill Clinton and Obama himself to rallies there. (In the end, Romney took the county 50.3 to 48.5 percent.)
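The sample-size arithmetic behind that confidence is straightforward. A standard 95 percent margin-of-error calculation shows why roughly 100 respondents in a media market cannot distinguish a several-point shift from noise, while thousands of contacts can:

```python
import math

def moe(n, p=0.5):
    # 95% margin of error for a simple random sample proportion,
    # at the worst case p = 0.5.
    return 1.96 * math.sqrt(p * (1 - p) / n)

# ~100 Green Bay respondents from a statewide 800-person poll,
# versus thousands of analytics contacts in the same market.
for n in (100, 800, 5000):
    print(f"n={n:>5}: \u00b1{moe(n):.1%}")
```

At n = 100 the margin of error is close to ten points, swamping a six-to-nine-point shift; at several thousand contacts it falls under two points, which is what let Simas say the Green Bay movement "isn't noise."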
For the most part, however, the analytic tables demonstrated how stable the electorate was, and how predictable individual voters could be. Polls from the media and academic institutions may have fluctuated by the hour, but drawing on hundreds of data points to judge whether someone was a likely voter proved more reliable than using a seven-question battery like Gallup's to do the same. "When you see this Pogo stick happening with the public data—the electorate is just not that volatile," says Mitch Stewart, director of the Democratic campaign group Organizing for America. The analytic data offered a source of calm.
Romney's advisors were similarly sanguine, but they were losing. They, too, believed it possible to project the composition of the electorate, relying on a method similar to Gallup's: pollster Neil Newhouse asked respondents how likely they were to cast a ballot. Those who answered that question with a seven or below on a 10-point scale were disregarded as not inclined to vote. But that ignored the experimental methods that made it possible to measure individual behavior and the impact that a campaign itself could have on a citizen's motivation. As a result, the Republicans failed to account for voters that the Obama campaign could be mobilizing even if they looked to Election Day without enthusiasm or intensity.
On the last day of the race, Wagner and his analytics staff left the Cave and rode the elevator up one floor in the campaign's Chicago skyscraper to join members of other departments in a boiler room established to help track votes as they came in. Already, for over a month, Obama's analysts had been counting ballots from states that allowed citizens to vote early. Each day, the campaign overlaid the lists of early voters released by election authorities with its modeling scores to project how many votes they could claim as their own.
By Election Day, Wagner's analytic tables turned into predictions. Before the polls opened in Ohio, authorities in Hamilton County, the state's third-largest and home to Cincinnati, released the names of 103,508 voters who had cast early ballots over the previous month. Wagner sorted them by microtargeting projections and found that 58,379 had individual support scores over 50.1—that is, the campaign's models predicted that they were more likely than not to have voted for Obama. That amounted to 56.4 percent of the county's votes, or a raw lead of 13,249 votes over Romney. Early ballots were the first to be counted after Ohio's polls closed, and Obama's senior staff gathered around screens in the boiler room to see the initial tally. The numbers settled almost exactly where Wagner had said they would: Obama got 56.6 percent of the votes in Hamilton County. In Florida he was nearly as close to the mark: Obama's margin was off by only two-tenths of a percentage point. "After those first two numbers, we knew," says Bird. "It was dead-on."
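The Hamilton County check reduces to simple arithmetic on the counts in the passage (the 56.6 percent actual figure comes from the reported tally; everything else follows from the released ballot numbers):

```python
total_early = 103_508           # early ballots released before the polls opened
obama_leaning = 58_379          # voters with modeled support scores over 50.1

projected = obama_leaning / total_early   # campaign's projected Obama share
actual = 0.566                            # share Obama actually received
print(f"projected {projected:.1%} vs. actual {actual:.1%}")
```

A projection of 56.4 percent against an actual 56.6 percent is a two-tenths-of-a-point error, which is why the boiler room treated the first returns as validation of the whole modeling apparatus.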
When Obama was reëlected, and by a far larger Electoral College margin than most outsiders had anticipated, his staff was exhilarated but not surprised. The next morning, Mitch Stewart sat in the boiler room, alone, monitoring the lagging votes as they came into Obama's servers from election authorities in Florida, the last state to name a winner. The presidency was no longer at stake; the only thing that still hung in the balance was the accuracy of the analytics department's predictions.
The Legacy
A few days after the election, as Florida authorities continued to count provisional ballots, a few staff members were directed, as four years before, to remain in Chicago. Their instructions were to produce another post-mortem report summing up the lessons of the past year and a half. The undertaking was called the Legacy Project, a grandiose title inspired by the idea that the innovations of Obama 2012 should be translated not only to the campaign of the next Democratic candidate for president but also to governance. Obama had succeeded in convincing some citizens that a modest adjustment to their behavior would affect, however marginally, the result of an election. Could he make them feel the same way about Congress?
Simas, who had served in the White House before joining the team, marveled at the intimacy of the campaign. Perhaps more than anyone else at headquarters, he appreciated the human aspect of politics. This had been his first presidential election, but before he became a political operative, Simas had been a politician himself, serving on the city council and school board in his hometown of Taunton, Massachusetts. He ran for office by knocking on doors and interacting individually with constituents (or those he hoped would become constituents), trying to track their moods and expectations.
In many respects, analytics had made it possible for the Obama campaign to recapture that style of politics. Though the old guard may have viewed such techniques as a disruptive force in campaigns, they enabled a presidential candidate to view the electorate the way local candidates do: as a collection of people who make up a more perfect union, each of them approachable on his or her terms, their changing levels of support and enthusiasm open to measurement and, thus, to respect. "What that gave us was the ability to run a national presidential campaign the way you'd do a local ward campaign," Simas says. "You know the people on your block. People have relationships with one another, and you leverage them so you know the way they talk about issues, what they're discussing at the coffee shop."
Few events in American life other than a presidential election touch 126 million adults, or even a significant fraction of that number, on a single day. Certainly no corporation, no civic institution, and very few government agencies ever do. Obama did so by reducing every American to a series of numbers. Yet those numbers somehow captured the individuality of each voter, and they were not demographic classifications. The scores measured the ability of people to change politics—and to be changed by it.

Read the full article at MIT Technology Review.