Friday, July 12, 2013

How Barack Obama’s 2012 Election Team Used Big Data to Rally Voters and Win

The Definitive Story of How President Obama Mined Voter Data to Win A Second Term | MIT Technology Review
Two years after Barack Obama's election as president, Democrats suffered their worst defeat in decades. The congressional majorities that had given Obama his legislative successes, reforming the health-insurance and financial markets, were swept away in the midterm elections; control of the House flipped and the Democrats' lead in the Senate shrank to an ungovernably slim margin. Pundits struggled to explain the rise of the Tea Party. Voters' disappointment with the Obama agenda was evident as independents broke right and Democrats stayed home. In 2010, the Democratic National Committee failed its first test of the Obama era: it had not kept the Obama coalition together.
But for Democrats, there was bleak consolation in all this: Dan Wagner had seen it coming. When Wagner was hired as the DNC's targeting director, in January of 2009, he became responsible for collecting voter information and analyzing it to help the committee approach individual voters by direct mail and phone. But he appreciated that the raw material he was feeding into his statistical models amounted to a series of surveys on voters' attitudes and preferences. He asked the DNC's technology department to develop software that could turn that information into tables, and he called the result Survey Manager.
That fall, when a special election was held to fill an open congressional seat in upstate New York, Wagner successfully predicted the final margin within 150 votes—well before Election Day. Months later, pollsters projected that Martha Coakley was certain to win another special election, to fill the Massachusetts Senate seat left empty by the death of Ted Kennedy. But Wagner's Survey Manager correctly predicted that the Republican Scott Brown was likely to prevail in the strongly Democratic state. "It's one thing to be right when you're going to win," says Jeremy Bird, who served as national deputy director of Organizing for America, the Obama campaign in abeyance, housed at the DNC. "It's another thing to be right when you're going to lose."
It is yet another thing to be right five months before you're going to lose. As the 2010 midterms approached, Wagner built statistical models for selected Senate races and 74 congressional districts. Starting in June, he began predicting the elections' outcomes, forecasting the margins of victory with what turned out to be improbable accuracy. But he hadn't gotten there with traditional polls. He had counted votes one by one. His first clue that the party was in trouble came from thousands of individual survey calls matched to rich statistical profiles in the DNC's databases. Core Democratic voters were telling the DNC's callers that they were much less likely to vote than statistical probability suggested. Wagner could also calculate how much the Democrats' mobilization programs would do to increase turnout among supporters, and in most races he knew it wouldn't be enough to cover the gap revealing itself in Survey Manager's tables.
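The arithmetic behind "counting votes one by one" can be sketched in a few lines. The probabilities below are invented, but the mechanism is the one described above: each voter carries a modeled turnout probability and a support probability, and the forecast is the sum of expected votes across individuals rather than an extrapolation from a small sample.

```python
# Sketch of vote-counting at the individual level (illustrative numbers only).
# Each voter has two modeled probabilities: turning out, and supporting the
# Democrat if they do. The expected margin is a sum over individuals.

voters = [
    # (turnout probability, support probability)
    (0.90, 0.85),
    (0.45, 0.70),
    (0.80, 0.30),
    (0.60, 0.55),
    (0.95, 0.20),
]

expected_dem = sum(t * s for t, s in voters)          # expected Democratic votes
expected_rep = sum(t * (1 - s) for t, s in voters)    # expected Republican votes
print(f"Expected margin: {expected_dem - expected_rep:+.2f} votes")
```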
His congressional predictions were off by an average of only 2.5 percent. "That was a proof point for a lot of people who don't understand the math behind it but understand the value of what that math produces," says Mitch Stewart, Organizing for America's director. "Once that first special [election] happened, his word was the gold standard at the DNC."
The significance of Wagner's achievement went far beyond his ability to declare winners months before Election Day. His approach amounted to a decisive break with 20th-century tools for tracking public opinion, which revolved around quarantining small samples that could be treated as representative of the whole. Wagner had emerged from a cadre of analysts who thought of voters as individuals and worked to aggregate projections about their opinions and behavior until they revealed a composite picture of everyone. His techniques marked the fulfillment of a new way of thinking, a decade in the making, in which voters were no longer trapped in old political geographies or tethered to traditional demographic categories, such as age or gender, depending on which attributes pollsters asked about or how consumer marketers classified them for commercial purposes. Instead, the electorate could be seen as a collection of individual citizens who could each be measured and assessed on their own terms. Now it was up to a candidate who wanted to lead those people to build a campaign that would interact with them the same way.

Dan Wagner, the chief analytics officer for Obama 2012, led the campaign's "Cave" of data scientists.
After the voters returned Obama to office for a second term, his campaign became celebrated for its use of technology—much of it developed by an unusual team of coders and engineers—that redefined how individuals could use the Web, social media, and smartphones to participate in the political process. A mobile app allowed a canvasser to download and return walk sheets without ever entering a campaign office; a Web platform called Dashboard gamified volunteer activity by ranking the most active supporters; and "targeted sharing" protocols mined an Obama backer's Facebook network in search of friends the campaign wanted to register, mobilize, or persuade.
But underneath all that were scores describing particular voters: a new political currency that predicted the behavior of individual humans. The campaign didn't just know who you were; it knew exactly how it could turn you into the type of person it wanted you to be.
The Scores
Four years earlier, Dan Wagner had been working at a Chicago economic consultancy, using forecasting skills developed studying econometrics at the University of Chicago, when he fell for Barack Obama and decided he wanted to work on his home-state senator's 2008 presidential campaign. Wagner, then 24, was soon in Des Moines, handling data entry for the state voter file that guided Obama to his crucial victory in the Iowa caucuses. He bounced from state to state through the long primary calendar, growing familiar with voter data and the ways of using statistical models to intelligently sort the electorate. For the general election, he was named lead targeter for the Great Lakes/Ohio River Valley region, the most intense battleground in the country.
After Obama's victory, many of his top advisors decamped to Washington to make preparations for governing. Wagner was told to stay behind and serve on a post-election task force that would review a campaign that had looked, to the outside world, technically flawless.
In the 2008 presidential election, Obama's targeters had assigned every voter in the country a pair of scores based on the probability that the individual would perform two distinct actions that mattered to the campaign: casting a ballot and supporting Obama. These scores were derived from an unprecedented volume of ongoing survey work. For each battleground state every week, the campaign's call centers conducted 5,000 to 10,000 so-called short-form interviews that quickly gauged a voter's preferences, and 1,000 interviews in a long-form version that was more like a traditional poll. To derive individual-level predictions, algorithms trawled for patterns between these opinions and the data points the campaign had assembled for every voter—as many as one thousand variables each, drawn from voter registration records, consumer data warehouses, and past campaign contacts.
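The article does not name the campaign's algorithms, but the setup it describes, survey answers as labels and voter-file attributes as features, maps naturally onto standard supervised learning. Here is a minimal sketch on simulated data, with a logistic regression as a stand-in:

```python
# Hedged sketch of how a support score might be modeled: train on the voters
# whose preference is known from survey calls, then score everyone else.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy voter file: rows are voters, columns are attributes (the real file held
# as many as a thousand variables per person).
X_all = rng.normal(size=(100_000, 20))

# A subset of voters answered a short-form interview; their stated preference
# (1 = supports the candidate) becomes the training label.
surveyed = rng.choice(len(X_all), size=8_000, replace=False)
y_surveyed = (X_all[surveyed, 0] + rng.normal(size=8_000) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X_all[surveyed], y_surveyed)

# Every voter in the file now gets a support score: P(supports the candidate).
support_scores = model.predict_proba(X_all)[:, 1]
print(support_scores[:5])
```

Refitting a model like this as each week's interviews arrived is what would let the scores respond to events, as described below.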

This innovation was most valued in the field. There, an almost perfect cycle of microtargeting models directed volunteers to scripted conversations with specific voters at the door or over the phone. Each of those interactions produced data that streamed back into Obama's servers to refine the models pointing volunteers toward the next door worth a knock. The efficiency and scale of that process put the Democrats well ahead when it came to profiling voters. John McCain's campaign had, in most states, run its statistical model just once, assigning each voter to one of its microtargeting segments in the summer. McCain's advisors were unable to recalculate the probability that those voters would support their candidate as the dynamics of the race changed. Obama's scores, on the other hand, adjusted weekly, responding to new events like Sarah Palin's vice-presidential nomination or the collapse of Lehman Brothers.
Within the campaign, however, the Obama data operations were understood to have shortcomings. As was typical in political information infrastructure, knowledge about people was stored separately from data about the campaign's interactions with them, mostly because the databases built for those purposes had been developed by different consultants who had no interest in making their systems work together.
But the task force knew the next campaign wasn't stuck with that situation. Obama would run his final race not as an insurgent against a party establishment, but as the establishment itself. For four years, the task force members knew, their team would control the Democratic Party's apparatus. Their demands, not the offerings of consultants and vendors, would shape the marketplace. Their report recommended developing a "constituent relationship management system" that would allow staff across the campaign to look up individuals not just as voters or volunteers or donors or website users but as citizens in full. "We realized there was a problem with how our data and infrastructure interacted with the rest of the campaign, and we ought to be able to offer it to all parts of the campaign," says Chris Wegrzyn, a database applications developer who served on the task force.
Wegrzyn became the DNC's lead targeting developer and oversaw a series of costly acquisitions, all intended to free the party from the traditional dependence on outside vendors. The committee installed a Siemens Enterprise System phone-dialing unit that could put out 1.2 million calls a day to survey voters' opinions. Later, party leaders signed off on a $280,000 license to use Vertica software from Hewlett-Packard that allowed their servers to access not only the party's 180-million-person voter file but all the data about volunteers, donors, and those who had interacted with Obama online.
Many of those who went to Washington after the 2008 election in order to further the president's political agenda returned to Chicago in the spring of 2011 to work on his reëlection. The chastening losses they had experienced in Washington separated them from those who had known only the ecstasies of 2008. "People who did '08, but didn't do '10, and came back in '11 or '12—they had the hardest culture clash," says Jeremy Bird, who became national field director on the reëlection campaign. But those who went to Washington and returned to Chicago developed a particular appreciation for Wagner's methods of working with the electorate at an atomic level. It was a way of thinking that perfectly aligned with their simple theory of what it would take to win the president reëlection: get everyone who had voted for him in 2008 to do it again. At the same time, they knew they would need to succeed at registering and mobilizing new voters, especially in some of the fastest-growing demographic categories, to make up for any 2008 voters who did defect.
Obama's campaign began the election year confident it knew the name of every one of the 69,456,897 Americans whose votes had put him in the White House. They may have cast those votes by secret ballot, but Obama's analysts could look at the Democrats' vote totals in each precinct and identify the people most likely to have backed him. Pundits talked in the abstract about reassembling Obama's 2008 coalition. But within the campaign, the goal was literal. They would reassemble the coalition, one by one, through personal contacts.
The Experiments
When Jim Messina arrived in Chicago as Obama's newly minted campaign manager in January of 2011, he imposed a mandate on his recruits: they were to make decisions based on measurable data. But that didn't mean quite what it had four years before. The 2008 campaign had been "data-driven," as people liked to say. This reflected a principled imperative to challenge the political establishment with an empirical approach to electioneering, and it was greatly influenced by David Plouffe, the 2008 campaign manager, who loved metrics, spreadsheets, and performance reports. Plouffe wanted to know: How many of a field office's volunteer shifts had been filled last weekend? How much money did that ad campaign bring in?
But for all its reliance on data, the 2008 Obama campaign had remained insulated from the most important methodological innovation in 21st-century politics. In 1998, Yale professors Donald Green and Alan Gerber conducted the first randomized controlled trial in modern political science, assigning New Haven voters to receive nonpartisan election reminders by mail, phone, or in-person visit from a canvasser and measuring which group saw the greatest increase in turnout. The subsequent wave of field experiments by Green, Gerber, and their followers focused on mobilization, testing competing modes of contact and get-out-the-vote language to see which were most successful.
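The design is easy to sketch: randomly assign voters to a contact mode or to a control group, then compare turnout across groups. The simulation below uses invented effect sizes, though the ordering loosely reflects Gerber and Green's finding that in-person canvassing outperformed mail and phone:

```python
# Sketch of a Gerber-Green style field experiment on simulated voters.
import random

random.seed(42)
modes = ["control", "mail", "phone", "canvass"]
voters = [{"mode": random.choice(modes)} for _ in range(20_000)]

# Simulated turnout: a base rate plus an assumed lift per contact mode.
base = 0.44
lift = {"control": 0.0, "mail": 0.005, "phone": 0.01, "canvass": 0.08}
for v in voters:
    v["voted"] = random.random() < base + lift[v["mode"]]

# The causal estimate is simply the turnout difference versus the control arm.
for m in modes:
    group = [v for v in voters if v["mode"] == m]
    rate = sum(v["voted"] for v in group) / len(group)
    print(f"{m:>8}: turnout {rate:.1%} (n={len(group)})")
```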
The first Obama campaign used the findings of such tests to tweak call scripts and canvassing protocols, but it never fully embraced the experimental revolution itself. After Dan Wagner moved to the DNC, the party decided it would start conducting its own experiments. He hoped the committee could become "a driver of research for the Democratic Party."
To that end, he hired the Analyst Institute, a Washington-based consortium founded under the AFL-CIO's leadership in 2006 to coördinate field research projects across the electioneering left and distribute the findings among allies. Much of the experimental world's research had focused on voter registration, because that was easy to measure. The breakthrough was that registration no longer had to be approached passively; organizers did not have to simply wait for the unenrolled to emerge from anonymity, sign a form, and, they hoped, vote. New techniques made it possible to intelligently profile nonvoters: commercial data warehouses sold lists of all voting-age adults, and comparing those lists with registration rolls revealed adults who were eligible but unregistered, each attached to a home address to which an application could be mailed. Applying microtargeting models identified which nonregistrants were most likely to be Democrats and which ones Republicans.
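Mechanically, that profiling step is a set difference followed by a model-based ranking. A toy sketch, with invented names, addresses, and scores:

```python
# Sketch of profiling nonregistrants: subtract the registration roll from a
# commercial list of voting-age adults, then rank the remainder with a
# partisanship model. All data below is invented.
all_adults = {
    "101 Elm St": {"name": "A. Jones", "dem_score": 0.81},
    "14 Oak Ave": {"name": "B. Smith", "dem_score": 0.22},
    "7 Pine Rd":  {"name": "C. Lee",   "dem_score": 0.67},
}
registered = {"14 Oak Ave"}  # addresses already on the voter rolls

# Unregistered adults, ranked by modeled likelihood of being a Democrat; the
# top of the list gets a registration application in the mail.
targets = sorted(
    (addr for addr in all_adults if addr not in registered),
    key=lambda addr: all_adults[addr]["dem_score"],
    reverse=True,
)
for addr in targets:
    print(addr, all_adults[addr]["name"], all_adults[addr]["dem_score"])
```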
The Obama campaign embedded social scientists from the Analyst Institute among its staff. Party officials knew that adding new Democratic voters to the registration rolls was a crucial element in their strategy for 2012. But already the campaign had ambitions beyond merely modifying nonparticipating citizens' behavior through registration and mobilization. It wanted to take on the most vexing problem in politics: changing voters' minds.
The expansion of individual-level data had made possible the kind of testing that could help do that. Experimenters had typically calculated the average effect of their interventions across the entire population. But as campaigns developed deep portraits of the voters in their databases, it became possible to isolate the attributes of the people an intervention actually moved. A series of tests in 2006 by the women's group Emily's List had illustrated the potential of conducting controlled trials with microtargeting databases. When the group sent direct mail in favor of Democratic gubernatorial candidates, it barely budged those whose scores placed them in the middle of the partisan spectrum; it had a far greater impact upon those who had been profiled as soft (or nonideological) Republicans.
That test, and others that followed, demonstrated the limitations of traditional targeting. Such techniques rested on a series of long-standing assumptions—for instance, that middle-of-the-roaders were the most persuadable and that infrequent voters were the likeliest to be captured in a get-out-the-vote drive. But the experiments introduced new uncertainty. People who were identified as having a 50 percent likelihood of voting for a Democrat might in fact be torn between the two parties, or they might look like centrists only because no data attached to their records pushed a partisan prediction in one direction or another. "The scores in the middle are the people we know less about," says Chris Wyant, a 2008 field organizer who became the campaign's general election director in Ohio four years later. "The extent to which we were guessing about persuasion was not lost on any of us."

One way the campaign sought to identify the ripest targets was through a series of what the Analyst Institute called "experiment-informed programs," or EIPs, designed to measure how effective different types of messages were at moving public opinion.
The traditional way of doing this had been to audition themes and language in focus groups and then test the winning material in polls to see which categories of voters responded positively to each approach. Any insights were distorted by the artificial settings and by the tiny samples of demographic subgroups in traditional polls. "You're making significant resource decisions based on 160 people?" asks Mitch Stewart, director of the Democratic campaign group Organizing for America. "Isn't that nuts? And people have been doing that for decades!"
An experimental program would use those steps to develop a range of prospective messages that could be subjected to empirical testing in the real world. Experimenters would randomly assign voters to receive varied sequences of direct mail—four pieces on the same policy theme, each making a slightly different case for Obama—and then use ongoing survey calls to isolate the attributes of those whose opinions changed as a result.
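In outline, analyzing such an EIP means computing a randomized experiment's treatment effect separately for each slice of the electorate, to see who actually moved. A simulated sketch follows; the effect sizes are invented, loosely echoing the age pattern in the health-care and Medicare tests described below:

```python
# Sketch of an experiment-informed program (EIP): randomize voters between a
# mail treatment and a control, survey both groups afterward, and break the
# treatment effect out by attribute. Data simulated for illustration.
import random

random.seed(1)
voters = [
    {"age_group": random.choice(["18-44", "45-64", "65+"]),
     "treated": random.random() < 0.5}   # got the four-piece mail sequence
    for _ in range(30_000)
]

# Simulated follow-up survey: assume, for illustration, that the mail moves
# the 45-64 group by four points and the others barely at all.
effect = {"18-44": 0.005, "45-64": 0.04, "65+": 0.0}
for v in voters:
    p = 0.48 + (effect[v["age_group"]] if v["treated"] else 0.0)
    v["supports"] = random.random() < p

for g in ("18-44", "45-64", "65+"):
    t = [v["supports"] for v in voters if v["age_group"] == g and v["treated"]]
    c = [v["supports"] for v in voters if v["age_group"] == g and not v["treated"]]
    print(f"{g:>6}: treatment effect {sum(t)/len(t) - sum(c)/len(c):+.1%}")
```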
In March, the campaign used this technique to test various ways of promoting the administration's health-care policies. One series of mailers described Obama's regulatory reforms; another advised voters that they were now entitled to free regular check-ups and ought to schedule one. The experiment revealed how much voter response differed by age, especially among women. Older women thought more highly of the policies when they received reminders about preventive care; younger women liked them more when they were told about contraceptive coverage and new rules that prohibited insurance companies from charging women more.
When Paul Ryan was named to the Republican ticket in August, Obama's advisors rushed out an EIP that compared different lines of attack about Medicare. The results were surprising. "The electorate [had seemed] very inelastic," says Terry Walsh, who coördinated the campaign's polling and paid-media spending. "In fact, when we did the Medicare EIPs, we got positive movement that was very heartening, because it was at a time when we were not seeing a lot of movement in the electorate." But that movement came from quarters where a traditional campaign would never have gone hunting for minds it could change. The Obama team found that voters between 45 and 65 were more likely to change their views about the candidates after hearing Obama's Medicare arguments than those over 65, who were currently eligible for the program.
A similar strategy of targeting an unexpected population emerged from a July EIP testing Obama's messages aimed at women. The voters most responsive to the campaign's arguments about equal-pay measures and women's health, it found, were those whose likelihood of supporting the president was scored at merely 20 to 40 percent. Those scores suggested that they probably shared Republican attitudes; but here was one thing that could pull them to Obama. As a result, when Obama unveiled a direct-mail track addressing only women's issues, it wasn't to shore up interest among core parts of the Democratic coalition, but to reach over for conservatives who were at odds with their party on gender concerns. "The whole goal of the women's track was to pick off votes for Romney," says Walsh. "We were able to persuade people who fell low on candidate support scores if we gave them a specific message."
At the same time, Obama's campaign was pursuing a second, even more audacious adventure in persuasion: one-on-one interaction. Traditionally, campaigns have restricted their persuasion efforts to channels like mass media or direct mail, where they can control presentation, language, and targeting. Sending volunteers to persuade voters would mean having them interact with opponents, or with undecided voters alienated from politics, on delicate issues like abortion. Campaigns had typically refused to risk such potentially combustible ground-level encounters, feeling they didn't know enough about their supporters or volunteers to relinquish control. "You can have a negative impact," says Bird. "You can hurt your candidate."
In February, however, Obama volunteers attempted 500,000 conversations with the goal of winning new supporters. Voters who'd been randomly selected from a group identified as persuadable were polled after a phone conversation that began with a volunteer reading from a script. "We definitely find certain people moved more than other people," says Bird. Analysts identified their attributes and made them the core of a persuasion model that predicted, on a scale of 0 to 10, the likelihood that a voter could be pulled in Obama's direction after a single volunteer interaction. The experiment also taught Obama's field department about its volunteers. Those in California, which had always had an exceptionally mature volunteer organization for a non-battleground state, turned out to be especially persuasive: voters called by Californians, whatever state those voters lived in, were more likely to become Obama supporters.
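Here is a hedged sketch of how such an experiment could become a 0-to-10 persuasion score: fit a model on the voters who were actually called, predicting who moved, then rescale its predictions across the rest of the file. The features and the choice of logistic regression are assumptions, not the campaign's disclosed method:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Voters reached by phone in the experiment: attributes plus whether their
# stated support moved toward Obama afterward (simulated here).
X_called = rng.normal(size=(5_000, 12))
moved = (X_called[:, 2] + rng.normal(size=5_000) > 1.0).astype(int)

persuasion_model = LogisticRegression(max_iter=1000).fit(X_called, moved)

# Apply the model to the rest of the file and rescale to a 0-10 score.
X_file = rng.normal(size=(100_000, 12))
score = np.round(persuasion_model.predict_proba(X_file)[:, 1] * 10).astype(int)
print(np.bincount(score, minlength=11))  # how many voters land in each bucket
```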

Alex Lundry created Mitt Romney's data science unit. It was less than one-tenth the size of Obama's analytics team.

With these findings in hand, Obama's strategists grew confident that they were no longer restricted to advertising as a channel for persuasion. They began sending trained volunteers to knock on doors or make phone calls with the objective of changing minds.
That dramatic shift in the culture of electioneering was felt on the streets, but it was possible only because of advances in analytics. Chris Wegrzyn developed a program code-named Airwolf that matched county and state lists of people who had requested mail ballots with the campaign's list of e-mail addresses. Likely Obama supporters would get regular reminders from their local field organizers, asking them to return their ballots, and, once they had, a message thanking them and proposing other ways to be involved in the campaign. The local organizer would receive daily lists of the voters on his or her turf who had outstanding ballots so that the campaign could follow up with personal contact by phone or at the doorstep. "It is a fundamental way of tying together the online and offline worlds," says Wagner.
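A minimal sketch of Airwolf's matching logic; the column names, turf structure, and use of pandas are invented for illustration:

```python
# Join the county's mail-ballot file against the campaign's e-mail list, then
# give each local organizer the voters on their turf with outstanding ballots.
import pandas as pd

ballots = pd.DataFrame({
    "voter_id": [1, 2, 3, 4],
    "turf":     ["OH-03", "OH-03", "OH-07", "OH-07"],
    "returned": [True, False, False, True],
})
emails = pd.DataFrame({
    "voter_id": [2, 3],
    "email":    ["v2@example.com", "v3@example.com"],
})

merged = ballots.merge(emails, on="voter_id", how="left")

# E-mail reminders go to matched supporters whose ballots are outstanding ...
to_remind = merged[(~merged.returned) & merged.email.notna()]
print(to_remind[["voter_id", "email"]])

# ... and each organizer gets a daily list for phone or doorstep follow-up.
for turf, group in merged[~merged.returned].groupby("turf"):
    print(turf, group.voter_id.tolist())
```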
Wagner, however, was turning his attention beyond the field. By June of 2011, he was chief analytics officer for the campaign and had begun making the rounds of the other units at headquarters, from fund-raising to communications, offering to help "solve their problems with data." He imagined the analytics department—now a 54-person staff, housed in a windowless office known as the Cave—as an "in-house consultancy" with other parts of the campaign as its clients. "There's a process of helping people learn about the tools so they can be a participant in the process," he says. "We essentially built products for each of those various departments that were paired up with a massive database we had."
The Flow
As job notices seeking specialists in text analytics, computational advertising, and online experiments came out of the incumbent's campaign, Mitt Romney's advisors at the Republicans' headquarters in Boston's North End watched with a combination of awe and perplexity. Throughout the primaries, Romney had appeared to be the only Republican running a 21st-century campaign, methodically banking early votes in states like Florida and Ohio before his disorganized opponents could establish operations there.
But the Republican winner's relative sophistication in the primaries belied a poverty of expertise compared with the Obama campaign. Since his first campaign for governor of Massachusetts, in 2002, Romney had relied upon TargetPoint Consulting, a Virginia firm that was then a pioneer in linking information from consumer data warehouses to voter registration records and using it to develop individual-level predictive models. It was TargetPoint's CEO, Alexander Gage, who had coined the term "microtargeting" to describe the process, which he modeled on the corporate world's approach to customer relationship management.
Such techniques had offered George W. Bush's reëlection campaign a significant edge in targeting, but Republicans had done little to institutionalize that advantage in the years since. By 2006, Democrats had not only matched Republicans in adopting commercial marketing techniques; they had moved ahead by integrating methods developed in the social sciences.
Romney's advisors knew that Obama was building innovative internal data analytics departments, but they didn't feel a need to match those activities. "I don't think we thought, relative to the marketplace, we could be the best at data in-house all the time," Romney's digital director, Zac Moffatt, said in July. "Our idea is to find the best firms to work with us." As a result, Romney remained dependent on TargetPoint to develop voter segments, often just once, and then deliver them to the campaign's databases. That was the structure Obama had abandoned after winning the nomination in 2008.
In May a TargetPoint vice president, Alex Lundry, took leave from his post at the firm to assemble a data science unit within Romney's headquarters. To round out his team, Lundry brought in Tom Wood, a University of Chicago postdoctoral student in political science, and Brent McGoldrick, a veteran of Bush's 2004 campaign who had left politics for the consulting firm Financial Dynamics (later FTI Consulting), where he helped financial-services, health-care, and energy companies communicate better. But Romney's data science team was less than one-tenth the size of Obama's analytics department. Without a large in-house staff to handle the massive national data sets that made it possible to test and track citizens, Romney's data scientists never tried to deepen their understanding of individual behavior. Instead, they fixated on trying to unlock one big, persistent mystery, which Lundry framed this way: "How can we get a sense of whether this advertising is working?"
"You usually get GRPs and tracking polls," he says, referring to the gross ratings points that are the basic unit of measuring television buys. "There's a very large causal leap you have to make from one to the other."
Lundry decided to focus on more manageable ways of measuring what he called the information flow. His team converted topics of political communication into discrete units they called "entities." They initially classified 200 of them, including issues like the auto industry bailout, controversies like the one surrounding federal funding for the solar-power company Solyndra, and catchphrases like "the war on women." When a new concept (such as Obama's offhand remark, during a speech about our common dependence on infrastructure, that "you didn't build that") emerged as part of the election-year lexicon, the analysts added it to the list. They tracked each entity on the National Dialogue Monitor, TargetPoint's system for measuring the frequency and tone with which certain topics are mentioned across all media. TargetPoint also integrated content collected from newspaper websites and closed-caption transcripts of broadcast programs. Lundry's team aimed to examine how every entity fared over time in each of two categories: the informal sphere of social media, especially Twitter, and the journalistic product that campaigns call earned press coverage.
Ultimately, Lundry wanted to assess the impact that each type of public attention had on what mattered most to them: Romney's position in the horse race. He turned to vector autoregression models, which equities traders use to isolate the influence of single variables on market movements. In this case, Lundry's team looked for patterns in the relationship between the National Dialogue Monitor's data and Romney's numbers in Gallup's daily tracking polls. By the end of July, they thought they had identified a three-step process they called "Wood's Triangle."
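Vector autoregression models each series as a function of the recent lags of all the series, which is what lets an analyst ask whether movement in one (say, Twitter chatter) systematically precedes movement in another (the horse race). Below is a sketch on placeholder data using statsmodels; a real analysis would also need stationarity checks and the campaigns' actual series:

```python
# Hedged sketch of the VAR idea on random placeholder data.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(3)
days = 120
df = pd.DataFrame({
    "entity_tweets": rng.poisson(200, days).astype(float),
    "earned_media_mentions": rng.poisson(40, days).astype(float),
    "romney_tracking_poll": 47 + np.cumsum(rng.normal(0, 0.2, days)),
})

results = VAR(df).fit(2)  # fixed lag order of two days, for the sketch
print(results.summary())

# Granger-style question: does Twitter chatter help predict the horse race?
print(results.test_causality("romney_tracking_poll", ["entity_tweets"]).summary())
```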
Within three or four days of a new entity's entry into the conversation, either through paid ads or through the news cycle, it was possible to make a well-informed hypothesis about whether the topic was likely to win media attention by tracking whether it generated Twitter chatter. That informal conversation among political-class elites typically led to traditional print or broadcast press coverage one to two days later, and that, in turn, might have an impact on the horse race. "We saw this process over and over again," says Lundry.
They began to think of ads as a "shock to the system"—a way to either introduce a new topic or restore focus on an area in which elite interest had faded. If an entity didn't gain its own energy—as when the Republicans charged over the summer that the White House had waived the work requirements in the federal welfare rules—Lundry would propose a "re-shock to the system" with another ad on the subject five to seven days later. After 12 to 14 days, Lundry found, an entity had passed through the system and exhausted its ability to alter public opinion—so he would recommend to the campaign's communications staff that they move on to something new.
Those insights offered campaign officials a theory of information flows, but they provided no guidance on how to allocate campaign resources in order to win the Electoral College. Assuming that Obama had superior ground-level data and analytics, Romney's campaign tried to leverage its rival's strategy to shape its own; if Democrats thought a state or media market was competitive, maybe that was evidence that Republicans should think so too. "We were necessarily reactive, because we were putting together the plane as it took off," Lundry says. "They had an enormous head start on us."
Romney's political department began holding regular meetings to look at where in the country the Obama campaign was focusing resources like ad dollars and the president's time. The goal was to try to divine the calculations behind those decisions. It was, in essence, the way Microsoft's Bing approached Google: trying to reverse-engineer the market leader's code by studying the visible output. "We watch where the president goes," Dan Centinello, the Romney deputy political director who oversaw the meetings, said over the summer.
Obama's media-buying strategy proved particularly hard to decipher. In early September, as part of his standard review, Lundry noticed that the week after the Democratic convention, Obama had aired 68 ads in Dothan, Alabama, a town near the Florida border. Dothan was one of the country's smallest media markets, and Alabama one of the safest Republican states. Even though the area was known to savvy ad buyers as one of the places where a media market crosses state lines, Dothan TV stations reached only about 9,000 Florida voters, and around 7,000 of them had voted for John McCain in 2008. "This is a hard-core Republican media market," Lundry says. "It's incredibly tiny. But they were advertising there."
Romney's advisors might have formed a theory about the broader media environment, but whatever was sending Obama hunting for a small pocket of votes was beyond their measurement. "We could tell," says McGoldrick, "that there was something in the algorithms that was telling them what to run."
The March
In the summer of 2011, Carol Davidsen received a message from Dan Wagner. Already the Obama campaign was known for its relentless e-mails beseeching supporters to give their money or time, but this one offered something that intrigued Davidsen: a job. Wagner had sorted the campaign's list of donors, stretching back to 2008, to find those who described their occupation with terms like "data" and "analytics" and sent them all invitations to apply for work in his new analytics department.
Davidsen was working at Navic Networks, a Microsoft-owned company that wrote code for set-top cable boxes to create a record of a user's DVR or tuner history, when she heeded Wagner's call. One year before Election Day, she started work in the campaign's technology department to serve as product manager for Narwhal. That was the code name, borrowed from a tusked whale, for an ambitious effort to match records from previously unconnected databases so that a user's online interactions with the campaign could be synchronized. With Narwhal, e-mail blasts asking people to volunteer could take their past donation history into consideration, and the algorithms determining how much a supporter would be asked to contribute could be shaped by knowledge about his or her reaction to previous solicitations. This integration enriched a technique, common in website development, that Obama's online fund-raising efforts had used to good effect in 2008: the A/B test, in which users are randomly directed to different versions of a page or message and their responses are compared. Now analysts could leverage personal data to identify the attributes of those who responded, and use that knowledge to refine subsequent appeals. "You can cite people's other types of engagement," says Amelia Showalter, Obama's director of digital analytics. "We discovered that there were a lot of things that built goodwill, like signing the president's birthday card or getting a free bumper sticker, that led them to become more engaged with the campaign in other ways."
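The comparison at the heart of an A/B test is simple; the campaign's edge came from attaching rich personal data to the responders. A bare-bones sketch with invented counts:

```python
# Compare donation rates between two randomly assigned e-mail versions with a
# two-proportion z-test. The counts are invented.
from statsmodels.stats.proportion import proportions_ztest

sends = [50_000, 50_000]   # recipients of version A and version B
donors = [620, 740]        # how many in each group gave

stat, p_value = proportions_ztest(donors, sends)
print(f"A: {donors[0]/sends[0]:.2%}  B: {donors[1]/sends[1]:.2%}  p = {p_value:.4f}")
```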
If online communication had been the aspect of the 2008 campaign subjected to the most rigorous empirical examination—it's easy to randomly assign e-mails in an A/B test and compare click-through rates or donation levels—mass-media strategy was among those that received the least. Television and radio ads had to be purchased by geographic zone, and the available data on who watches which channels or shows, collected by research firms like Nielsen and Scarborough, often included little more than viewer age and gender. That might be good enough to guide buys for Schick or Foot Locker, but it's of limited value for advertisers looking to define audiences in political terms.

As campaign manager Jim Messina prepared to spend as much as half a billion dollars on mass media for Obama's reëlection, he set out to reinvent the process for allocating resources across broadcast, cable, satellite, and online channels. "If you think about the universe of possible places for an advertiser, it's almost infinite," says Amy Gershkoff, who was hired as the campaign's media-planning director on the strength of her successful negotiations, while at her firm Changing Targets in 2009, to link the information from cable systems to individual microtargeting profiles. "There are tens of millions of opportunities where a campaign can put its next dollar. You have all this great, robust voter data that doesn't fit together with the media data. How you knit that together is a challenge."
By the start of 2012, Wagner had deftly pulled media planning into his own department. As he expanded the scope of analytics, he defined his purview as "the study and practice of resource optimization for the purpose of improving programs and earning votes more efficiently." That usually meant calculating, for any campaign activity, the number of votes gained through a given amount of contact at a given cost.
But when it came to buying media, such calculations had been simply impossible, because campaigns were unable to link what they knew about voters to what cable providers knew about their customers. Obama's advisors decided that the data made available in the private sector had long led political advertisers to ask the wrong questions. Walsh says of the effort to reimagine the media-targeting process: "It was not to get a better understanding of what 35-plus women watch on TV. It was to find out how many of our persuadable voters were watching those dayparts."
Davidsen, whose previous work had left her intimately familiar with the rich data sets held in set-top boxes, understood that a lot of that data was available in the form of tuner and DVR histories collected by cable providers and then aggregated by research firms. For privacy reasons, however, the information was not available at the individual level. "The hardest thing in media buying right now is the lack of information," she says.
Davidsen began negotiating to have research firms repackage their data in a form that would permit the campaign to access the individual histories without violating the cable providers' privacy standards. Under a $350,000 deal she worked out with one company, Rentrak, the campaign provided a list of persuadable voters and their addresses, derived from its microtargeting models, and the company looked for them in the cable providers' billing files. When a record matched, Rentrak would issue it a unique household ID that identified viewing data from a single set-top box but masked any personally identifiable information.
The Obama campaign had created its own television ratings system, a kind of Nielsen in which the only viewers who mattered were those not yet fully committed to a presidential candidate. But Davidsen had to get the information into a practical form by early May, when Obama strategists planned to start running their anti-Romney ads. She oversaw the development of a software platform the Obama staff called the Optimizer, which broke the day into 96 quarter-hour segments and assessed which time slots across 60 channels offered the greatest number of persuadable targets per dollar. (By September, she had unlocked an even richer trove of data: a cable system in Toledo, Ohio, that tracked viewers' tuner histories by the second.) "The revolution of media buying in this campaign," says Walsh, "was to turn what was a broadcast medium into something that looks a lot more like a narrowcast medium."
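The core ranking the Optimizer performed, persuadable targets per dollar across channels and quarter-hour slots, can be sketched in a few lines. The channels, audience figures, and costs below are invented:

```python
# Rank ad slots by persuadable viewers reached per dollar and buy the best
# slots first. All numbers are illustrative.
slots = [
    # (channel, quarter-hour index 0..95, persuadable viewers, spot cost in $)
    ("WTOL",    80, 4200, 900),
    ("TV Land", 86, 1900, 150),
    ("ESPN",    84, 5100, 2400),
    ("WUPW",    30,  700,  80),
]

ranked = sorted(slots, key=lambda s: s[2] / s[3], reverse=True)
for channel, qh, persuadables, cost in ranked:
    hh, mm = divmod(qh * 15, 60)  # quarter-hour index back to clock time
    print(f"{channel:>8} {hh:02d}:{mm:02d}  "
          f"{persuadables / cost:.1f} persuadables per dollar")
```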
When the Obama campaign did use television as a mass medium, it was because the Optimizer had concluded it would be a more efficient way of reaching persuadable targets. Sometimes a national cable ad was a better bargain than a large number of local buys in the 66 media markets reaching battleground states. But the occasional national buy also had other benefits. It could boost fund-raising and motivate volunteers in states that weren't essential to Obama's Electoral College arithmetic. And, says Davidsen, "it helps hide some of the strategy of your buying."
Even without that tactic, Obama's buys perplexed the Romney analysts in Boston. They had invested in their own media-intelligence platform, called Centraforce. It used some of the same aggregated data sources that were feeding into the Optimizer, and at times both seemed to send the campaigns to the same unlikely ad blocks—for example, in reruns on TV Land. But there was a lot more to what Lundry called Obama's "highly variable" media strategy. Many of the Democrats' ads were placed in fringe markets, on marginal stations, and at odd times where few political candidates had ever seen value. Romney's data scientists simply could not decode those decisions without the voter models or persuasion experiments that helped Obama pick out individual targets. "We were never able to figure out the level of advertising and what they were trying to do," says McGoldrick. "It wasn't worth reverse-engineering, because what are you going to do?"
The Community
Although the voter opinion tables that emerged from the Cave looked a lot like polls, the analysts who produced them were disinclined to call them polls. The campaign had plenty of those, generated by a public-opinion team of eight outside firms, and new arrivals at the Chicago headquarters were shocked by the variegated breadth of the research that arrived on their desks daily. "We believed in combining the qual, which we did more than any campaign ever, with the quant, which we [also] did more than any other campaign, to make sure all communication for every level of the campaign was informed by what they found," says David Simas, the director of opinion research.
Simas considered himself the "air-traffic controller" for such research, which was guided by a series of voter diaries that Obama's team commissioned as it prepared for the reëlection campaign. "We needed to do something almost divorced from politics and get to the way they're seeing their lives," he says. The lead pollster, Joel Benenson, had respondents write about their experiences. The entries frequently used the word "disappointment," which helped explain attitudes toward Obama's administration but also spoke to a broader dissatisfaction with economic conditions. "That became the foundation for our entire research program," says Simas.

Carol Davidsen matched Obama 2012's lists of persuadable voters with cable providers' billing information.

Obama's advisors used those diaries to develop messages that contrasted Obama with Romney as a fighter for the middle class. Benenson's national polls tested language to see which affected voters' responses in survey experiments and direct questioning. A quartet of polling firms were assigned specific states and asked to figure out which national themes fit best with local concerns. Eventually, Obama's media advisors created more than 500 ads and tested them before an online sample of viewers selected by focus-group director David Binder.
But the campaign had to play defense, too. When something potentially damaging popped up in the news, like Democratic consultant Hilary Rosen's declaration that Ann Romney had "never worked a day in her life," Simas checked in with the Community, a private online bulletin board populated by 100 undecided voters Binder had recruited. Simas would monitor Community conversations to see which news events penetrated voter consciousness. Sometimes he had Binder show its members controversial material—like a video clip of Obama's "You didn't build that" comment—and ask if it changed their views of the candidate. "For me, it was a very quick way to draw back and determine whether something was a problem or not a problem," says Simas.
When Wagner started packaging his department's research into something that campaign leadership could read like a poll, a pattern became apparent. Obama's numbers in key battleground states were low in the analytic tables, but Romney's were too. There were simply more undecided voters in such states—sometimes nearly twice as many as the traditional pollsters found. A basic methodological distinction explained this discrepancy: microtargeting models required interviewing a lot of unlikely voters to give shape to a profile of what a nonvoter looked like, while pollsters tracking the horse race wanted to screen more rigorously for those likely to cast a ballot. The rivalry between the two units trying to measure public opinion grew intense: the analytic polls were a threat to the pollsters' primacy and, potentially, to their business model. "I spent a lot of time within the campaign explaining to people that the numbers we get from analytics and the numbers we get from external pollsters did not need strictly to be reconciled," says Walsh. "They were different."
The scope of the analytic research enabled it to pick up movements too small for traditional polls to perceive. As Simas reviewed Wagner's analytic tables in mid-October, he was alarmed to see that what had been a Romney lead of one to two points in Green Bay, Wisconsin, had grown into an advantage of between six and nine. Green Bay was the only media market in the state to experience such a shift, and there was no obvious explanation. But it was hard to discount. Whereas a standard 800-person statewide poll might have reached 100 respondents in the Green Bay area, analytics was placing 5,000 calls in Wisconsin in each five-day cycle—and benefiting from tens of thousands of other field contacts—to produce microtargeting scores. Analytics was talking to as many people in the Green Bay media market as traditional pollsters were talking to across Wisconsin every week. "We could have the confidence level to say, 'This isn't noise,'" says Simas. So the campaign's media buyers aired an ad attacking Romney on outsourcing and beseeched Messina to send former president Bill Clinton and Obama himself to rallies there. (In the end, Romney took the county 50.3 to 48.5 percent.)
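The campaign's confidence follows from basic sampling arithmetic: the margin of error of a vote-share estimate shrinks with the square root of the sample size, so a six-to-nine-point shift is lost in the noise at 100 interviews but unmistakable at 5,000. A quick check:

```python
def margin_of_error(n, p=0.5, z=1.96):
    """95 percent margin of error for a share estimated from n interviews."""
    return z * (p * (1 - p) / n) ** 0.5

# 100 = Green Bay respondents in a standard statewide poll; 800 = the whole
# statewide poll; 5,000 = analytics calls in Wisconsin per five-day cycle.
for n in (100, 800, 5_000):
    print(f"n={n:>5}: +/- {margin_of_error(n):.1%}")
```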
For the most part, however, the analytic tables demonstrated how stable the electorate was, and how predictable individual voters could be. Polls from the media and academic institutions may have fluctuated by the hour, but drawing on hundreds of data points to judge whether someone was a likely voter proved more reliable than using a seven-question battery like Gallup's to do the same. "When you see this Pogo stick happening with the public data—the electorate is just not that volatile," says Mitch Stewart, director of the Democratic campaign group Organizing for America. The analytic data offered a source of calm.
Romney's advisors were similarly sanguine, but they were losing. They, too, believed it possible to project the composition of the electorate, relying on a method similar to Gallup's: pollster Neil Newhouse asked respondents how likely they were to cast a ballot. Those who answered that question with a seven or below on a 10-point scale were disregarded as not inclined to vote. But that ignored the experimental methods that made it possible to measure individual behavior and the impact that a campaign itself could have on a citizen's motivation. As a result, the Republicans failed to account for voters that the Obama campaign could be mobilizing even if they looked to Election Day without enthusiasm or intensity.
On the last day of the race, Wagner and his analytics staff left the Cave and rode the elevator up one floor in the campaign's Chicago skyscraper to join members of other departments in a boiler room established to help track votes as they came in. Already, for over a month, Obama's analysts had been counting ballots from states that allowed citizens to vote early. Each day, the campaign overlaid the lists of early voters released by election authorities with its modeling scores to project how many votes they could claim as their own.
By Election Day, Wagner's analytic tables turned into predictions. Before the polls opened in Ohio, authorities in Hamilton County, the state's third-largest and home to Cincinnati, released the names of 103,508 voters who had cast early ballots over the previous month. Wagner sorted them by microtargeting projections and found that 58,379 had individual support scores over 50.1—that is, the campaign's models predicted that they were more likely than not to have voted for Obama. That amounted to 56.4 percent of the county's votes, or a raw lead of 13,249 votes over Romney. Early ballots were the first to be counted after Ohio's polls closed, and Obama's senior staff gathered around screens in the boiler room to see the initial tally. The numbers settled almost exactly where Wagner had said they would: Obama got 56.6 percent of the votes in Hamilton County. In Florida, the prediction was just as close to the mark: Obama's margin was off by only two-tenths of a percent. "After those first two numbers, we knew," says Bird. "It was dead-on."
When Obama was reëlected, and by a far larger Electoral College margin than most outsiders had anticipated, his staff was exhilarated but not surprised. The next morning, Mitch Stewart sat in the boiler room, alone, monitoring the lagging votes as they came into Obama's servers from election authorities in Florida, the last state to name a winner. The presidency was no longer at stake; the only thing that still hung in the balance was the accuracy of the analytics department's predictions.
The Legacy
A few days after the election, as Florida authorities continued to count provisional ballots, a few staff members were directed, as four years before, to remain in Chicago. Their instructions were to produce another post-mortem report summing up the lessons of the past year and a half. The undertaking was called the Legacy Project, a grandiose title inspired by the idea that the innovations of Obama 2012 should be translated not only to the campaign of the next Democratic candidate for president but also to governance. Obama had succeeded in convincing some citizens that a modest adjustment to their behavior would affect, however marginally, the result of an election. Could he make them feel the same way about Congress?
Simas, who had served in the White House before joining the team, marveled at the intimacy of the campaign. Perhaps more than anyone else at headquarters, he appreciated the human aspect of politics. This had been his first presidential election, but before he became a political operative, Simas had been a politician himself, serving on the city council and school board in his hometown of Taunton, Massachusetts. He ran for office by knocking on doors and interacting individually with constituents (or those he hoped would become constituents), trying to track their moods and expectations.
In many respects, analytics had made it possible for the Obama campaign to recapture that style of politics. Though the old guard may have viewed such techniques as a disruptive force in campaigns, they enabled a presidential candidate to view the electorate the way local candidates do: as a collection of people who make up a more perfect union, each of them approachable on his or her terms, their changing levels of support and enthusiasm open to measurement and, thus, to respect. "What that gave us was the ability to run a national presidential campaign the way you'd do a local ward campaign," Simas says. "You know the people on your block. People have relationships with one another, and you leverage them so you know the way they talk about issues, what they're discussing at the coffee shop."
Few events in American life other than a presidential election touch 126 million adults, or even a significant fraction of that number, on a single day. Certainly no corporation, no civic institution, and very few government agencies ever do. Obama did so by reducing every American to a series of numbers. Yet those numbers somehow captured the individuality of each voter, and they were not demographic classifications. The scores measured the ability of people to change politics—and to be changed by it.

Read the article at: MIT Technology Review
