The Referendum of 2004

Alan Wolfe

The election of 2004 suggested much bad blood between the two parties, but questions remain about how polarized the electorate really is.

The presidential election of 2004 is widely regarded as one of the most important in the past 100 years. But its importance does not derive from clear ideological differences between the candidates and the parties. For example, the two candidates generally agree about longer-term goals in Iraq, however much they disagree about appointments to the U.S. Supreme Court or the rollback of tax cuts for the wealthiest Americans. The election is important, rather, because the candidacies of President George W. Bush and Senator John Kerry have been framed by two different theoretical understandings of the nature of American society. The victory of one or the other will go a long way toward resolving whether we are a deeply polarized nation, with little hope of reconciliation, or a fundamentally unified one, whose disagreements are not all that deep.

Never before have Americans been polled so much, subjected to so many focus groups, and broken into so many different demographic categories. And yet we still lack consensus on some of the most basic questions of political science. Take what should be a simple one to answer: Have we become more conservative? Clearly the answer is yes if we look at which party dominates the White House, holds majorities in both houses of Congress, and elects the most governors. Yet conservatives do well in politics because they have, under both Ronald Reagan and George W. Bush, not only expanded the size of government—a traditional liberal inclination—but adopted policies associated with their opponents, such as Medicare reform. To further complicate the problem, the country may have become more conservative on some issues, such as distrust of government, while becoming more liberal on others, such as increased support for the principles embodied in the Civil Rights Act of 1964 (which many conservatives opposed), or for greater religious and moral tolerance.

To qualify as polarized, people must be divided into competing camps. Yet without a clear sense of what those camps stand for, it can hardly be surprising that social scientists have reached different conclusions about to what extent—and even whether—Americans disagree with one another. No one doubts that there are red states, which voted for Bush in 2000, and blue states, which voted for Al Gore. Nor can one ignore that there exist popular cable television talking heads who clash vehemently on every political issue under the sun. But there’s no civil war taking place in the United States, and we may not be nearly as divided as we were when anti-Vietnam War protestors confronted supporters of the war in the 1960s and ’70s. As partisan and contentious as our news media have become, by another good measure of political division—the number of Americans whose lives have been lost over political disagreements—we are at a relative low point in our history.

No wonder, then, that when political scientists examine the issue of polarization, they come up with contradictory findings. Whatever the extent of the culture war in the nation, there’s deep division among those of us who take its pulse. I know this from personal experience. In 1999 I published One Nation, After All, which reported the findings of interviews I had conducted with 200 middle-class suburbanites in Massachusetts, Georgia, Oklahoma, and California. I concluded that, when it came to some of the deepest moral issues with which human beings concern themselves (obligations to the poor, respect for the religious convictions of people who adhere to faiths different from our own, welcoming immigrants to our shores), the people with whom I spoke had few fundamental disagreements. The culture war was alive and well inside the Beltway, I decided, but elsewhere we were one nation, struggling to find common ground. I was not the only social scientist to come to this conclusion. Sociologist Paul DiMaggio examined quantitative data about American public opinion and found roughly what I had found through my reliance on data from interviews. Even on the issue of abortion, which I had chosen not to study, DiMaggio concluded that there was a rough consensus that it was wrong, though allowable under certain circumstances.

Although these views about American polarization were somewhat counterintuitive for the time, events in the real world—such as the impeachment of President Bill Clinton—gave them considerable credibility. No doubt Republicans thought that significant numbers of Americans, believers in traditional morality, would be so shocked by Clinton’s sexual escapades and deliberate lying that they would drive him from office. In fact, Americans clearly did not like Clinton’s behavior, but they were also surprisingly tolerant of him and inclined to judge his harshest critics unfavorably. There really did seem to be a new morality in the United States, and it overlapped with the picture of “live and let live” morality I had drawn through my research.

On so sensitive a subject, however, there was bound to be disagreement. The distinguished historian Gertrude Himmelfarb, for one, looked at the United States and found a country very different from the one I had described. The title of her book One Nation, Two Cultures (1999) accurately characterized her thesis. The 1960s had left a deep divide in America, Himmelfarb argued; adherents to a culture of individual self-fulfillment had little in common with those who identified with a culture of respect for rules and authority. Whatever the realm—religion, patriotism, the family—there clearly were two different American ways of life, and the hostility between them was palpable. From Himmelfarb’s perspective, the failure to remove Clinton from office demonstrated not that there was no culture war but that the forces of personal liberation had the upper hand.

Himmelfarb is a conservative, but there was nothing conservative about her findings (just as I believe that, though I am a liberal, there was nothing liberal about mine). In The Two Americas: Our Current Political Deadlock and How to Break It (2004), Stanley Greenberg, a respected pollster with decidedly liberal political views, generally agreed with Himmelfarb’s description of a moral divide, though his two Americas were riven more by politics than by cultural issues. Those who regard polarization in the United States as a serious problem in need of a remedy had their views confirmed by Greenberg’s book.

The thesis that we are one nation seemed to receive empirical support from the country’s mixed reaction to Kenneth Starr, even as the notion that we are two was substantiated by the stalemate in the 2000 presidential election. Who could doubt, in the wake of that year’s electoral map, that a Bible-reading, gun-toting Tennessee Christian looked at the world in radically different ways from a civil-liberties-loving Massachusetts cultural relativist? Red and blue leave little room for gray. As I watched the fierce partisan furies unleashed by the Florida voting debacle, I began to wonder whether my research had failed me. Where were the moderate voices I had heard throughout the United States: the politically conservative Oklahomans who nonetheless distrusted the extremism of the Religious Right, the committed egalitarians in California who disliked multilingualism in schools and had serious reservations about affirmative action? The moral world of the Americans with whom I talked in the 1990s was nuanced. The moral world of Americans as revealed by the 2000 election was not.

Still, these kinds of things come in cycles, and no sooner did the “two nations” thesis seem to get America right than the cycle began to turn. There were, of course, the attacks of September 11, 2001, which revealed a hunger for unity and a sense of common purpose in the American people. But some of the classic wedge issues that presume to divide Americans into two camps have all but disappeared from the political radar screen. The most significant of these is affirmative action, which had been widely interpreted as a clever way to divide working-class whites, who in the past had tended to vote Democratic, from liberal-leaning African Americans and their white allies. Yet after the U.S. Supreme Court essentially found a compromise position on affirmative action—targets, not quotas—politicians began to shun the issue. So too did Roe v. Wade become less of a rallying cry for abortion opponents when Congress voted to ban so-called “partial-birth” abortions, thereby undermining the conservatives’ claim that their voices in the debate had been silenced. (It remains to be seen whether recent court rulings overturning the law will revitalize the abortion debate.) Maybe we’re on the way to becoming one nation with one culture again.

That, at least, was the conclusion of the most exhaustive study of the issue to date, Culture War? The Myth of a Polarized America (2004), by Morris Fiorina, Samuel J. Abrams, and Jeremy C. Pope. Reviewing nearly all the published data, Fiorina and his colleagues found that, on a variety of supposedly hot-button issues (school vouchers, the death penalty, immigration, equal rights for women), opinion in the so-called red states was little different from attitudes in the blue ones. There was, in fact, more competition even within the red states and the blue ones than the polarization thesis allows: Gore received more than 55 percent of the votes in only six states in 2000, and Bush more than 55 percent in 17 (smaller) states, meaning that all of the rest were up for grabs. Political activists, to be sure, have strong disagreements with one another, but this does not mean that ordinary Americans do, Fiorina and his colleagues decided.

They also concluded that technical features of the ways Americans conduct their politics explain why a gap exists between how politicians act and what Americans generally think. Primary turnout, for example, is traditionally low, which means that candidates have to appeal to voters with more extreme views—the ones most likely to vote—if they are to get their party’s nomination. Elections in congressional districts are increasingly noncompetitive, which reduces the incentive of politicians to move toward the center. Most important of all to Fiorina and his colleagues is a fact too often overlooked: Voters can vote only for people already on the ballot, and if the parties put more extreme politicians there, people will vote for them even if their own views have not become more extreme.

In emphasizing these technical points, Fiorina and his fellow authors paid relatively little attention to such on-the-ground realities as the growing conservatism of the South, or the anger generated by the 2000 election itself. Still, their data demonstrate conclusively that, if the views of all Americans, and not just party activists, are taken into account, the people of the United States are actually more centrist than they’ve been for some time.

Studying political polarization is not like studying earthquakes; one nearly always knows when the latter occur, but scholars and commentators, often with impressive credentials, will disagree about whether the former even exists. Even those who agree that the country is divided disagree on how many divisions it contains. In perhaps the most inventive contribution to the debate, journalist Robert David Sullivan, in 2002, argued in CommonWealth, a Massachusetts-based public-policy magazine, that there are 10 different Americas, not merely one or two. They include El Norte (parts of Arizona, New Mexico, Texas, and Southern California), the Upper Coasts (Maine on one side, Washington State on the other), the Southern Lowlands of the Carolinas, and the Farm Belt. (Sullivan had persuasive maps and figures to back up his analysis.) So split is scholarly and journalistic opinion on the question of how split we are as a nation that we either have to live with a high degree of uncertainty or find a method besides surveys and qualitative interview data to help us resolve it.

Fortunately, we do have another tool at our disposal. In fact, it’s something not unlike the kind of natural experiment available to physical scientists. It’s called the 2004 presidential election.

Imagine that we’re in a laboratory designing presidential campaigns that will tell us something about the state of American society. We might develop three possible models. The first, which was given its classic expression in Anthony Downs’s An Economic Theory of Democracy (1957), is an election in which both parties move to the center, where they hope to capture more voters. For decades, Downs’s model seemed to describe an “iron law” of American politics. It applied in 1932, when Franklin Roosevelt ran as an economic conservative; in 1960, when John Kennedy and Richard Nixon barely disagreed with each other; and in the successful campaign of George H. W. Bush against Michael Dukakis in 1988. When candidates fudge the differences between themselves and their opponents, they’re in agreement that the country is unified, and that moderates in the center of the ideological spectrum best represent that unity.

A second scenario is based on the assumption that Americans are deeply divided and that politicians, rather than converge toward the center, should tailor their message to win voters at the margins. “Critical elections” of this sort are not all that common, but their number includes some of the most important in our history: the bitterly disputed contest of 1896, in which populist William Jennings Bryan ran against pro-business Republican William McKinley, and the four-candidate campaign that brought Abraham Lincoln to the presidency in 1860. The critical-election model is based on an assumption that’s the polar opposite of the assumption underlying the consensual model, but on one important point the two converge: In each model, politicians from both sides of the partisan divide agree on the sociological conditions they face, whether those conditions involve unity or division.

Rarely in America have we had a third type of situation, one that would seem too artificial to occur in the real world. This would be an election in which one party concluded that the country was fundamentally polarized and that its best chance of winning was to appeal to the extremes, while the other party concluded the exact opposite and decided that themes of unity and solidarity would best enhance its chances of success.

It’s no mystery why campaigns of this sort are so rare. Especially these days, campaigns stake a great deal on the theories they advance to guide their candidates; they raise and spend so much money that the incentives to get reality right are overpowering. The last thing one would expect is that two parties would study the same electorate and come away with radically different conclusions about what will move people to vote. And yet, if this third type of campaign did occur, it would tell us a great deal about the country, for whoever won the election would not only assume the presidency but would also, in the language of social science, confirm a hypothesis. If the divider wins, we conclude that the country is indeed divided; if the unifier wins, we conclude the opposite.

The election of 2004 is not quite a perfect fit with any abstract model. On some issues, the campaign between Bush and Kerry resembles patterns familiar from American history. On the war in Iraq, as I have already noted, Kerry was slow to criticize Bush’s past policies and has not distanced himself too far on the question of larger goals, a situation not unlike the one in which Kennedy and Nixon approached China or Cuba. On other issues, such as stem-cell research, both candidates have moved toward their respective extremes—Bush toward religious conservatives, Kerry toward liberals—in ways reminiscent of polarizing elections of the past. Still, there’s little doubt that when it comes to assumptions about how Americans understand their political differences, the Bush campaign and the Kerry campaign have opted for very different strategies, in ways that are unusual in the American electoral experience.

Consider the two main themes developed by Republicans against Kerry: that he is one of the most extreme liberals in the Democratic Party, and that he cannot be trusted because he’s a flip-flopper.

Charging someone with liberalism assumes that Americans know what liberalism is, and that they think ideologically when they think about politics. For some voters, this is undoubtedly the case. When they hear the word liberal, they think of a civil libertarian opposed to capital punishment, or someone who wants to raise their taxes, and they look immediately to the other candidate. But political scientists have often found that Americans rarely think in ideological terms—especially as those terms are defined by pundits and philosophers. By trying so hard to characterize Kerry as a liberal, the Bush camp is placing its bet on one interpretation of how Americans think about politics and not another. Is the choice correct? We won’t know until the returns are in.

The charge of flip-flopping also carries with it a set of assumptions about how and why people act politically. The charge is an example of negative campaigning, understood in the technical sense of a campaign strategy that focuses critically on the record of an opponent. Though America is far from facing anything like a civil war, negative campaigning assumes that people believe politics to be a rough-and-tumble human activity; one tries not only to win, but to leave one’s opponent bloody and bruised. There’s little talk of bipartisan cooperation, or of the need to resolve differences and get on with the business of the country once the election is over, or of respect for traditional rules of the game that define certain kinds of conduct as unwise or unethical. (“Gentlemen do not read each other’s mail,” former secretary of war Henry L. Stimson once said of our enemies; imagine anyone applying that rule to domestic politics today.) War is the most polarizing of all human activities, and though negative campaigning may stop well short of actual war, its reliance on martial tactics and language assumes that people believe passionately enough in winning an election to justify any means of achieving victory. Do people really hold such passionate beliefs? Once again, we don’t know. But this year’s Republican National Convention provided numerous examples of one party attacking the candidates of the other in exceptionally harsh terms. If the election proves those attacks to have been successful, we’ll know a lot more.

John Kerry never calls George Bush a conservative in the way Bush calls him a liberal. Of course, that may be because conservatism is more popular than liberalism, at least as Americans understand the meaning of the terms. But it’s also clear that Kerry has opted for a strategy quite different from the one chosen by Bush. Kerry did not decide to downplay ideology for moral reasons; instead, he made a tactical calculation that without votes from the center of the spectrum, he could not win the election. From a sociological perspective, his motives are irrelevant. Kerry bet that people care about things other than a candidate’s worldview when they make their voting decisions, which is one reason he surrounds himself so often with veterans and talks so frequently of values. If Kerry, who is a liberal, succeeds in winning a large number of votes from those who are not, or who aren’t sure what the term liberal means, he will have demonstrated that Americans are looking for a leader who wants to bring them together—with one another and with peoples around the world.

The situation involving negative campaigning is more complicated, for once one candidate opts to attack, the other has little choice but to respond or to be accused of wimpishness. We saw a perfect example of this in the attempts by groups close to the Bush campaign, such as Swift Boat Veterans for Truth, to attack Kerry’s war record and subsequent antiwar activities—to which Kerry eventually responded by citing Vice President Dick Cheney’s five deferments from military service during the Vietnam period.

Yet significant differences between the two parties remain. Kerry chose as his vice-presidential nominee John Edwards, who is strongly identified with delivering positive and upbeat messages; even his widely publicized campaign speech emphasizing that there are two Americas was designed to make the point that there really should be only one. Bush’s advertisements have focused on Kerry’s record, while Kerry’s advertisements have focused on—Kerry’s record. Nominated by an unusually united Democratic Party, Kerry has relatively little need to fire up his base with anti-Bush attacks, especially if the harsh language might turn off the centrist voters he seeks. Kerry’s strategy of appealing to the center hinges on a great unknown: the number of Americans who consider themselves undecided, and the direction they will swing if appeals are made to them. But that very unknowability is what makes the difference between a negative campaign and a more positive one so interesting. Only in retrospect will it be clear which was more in accord with popular sentiment.

Viewing elections as a way of understanding ourselves may seem an exotic activity, of interest only to social scientists and not to the general public or the candidates, who are focused more on winning. But even in the days before regression analyses and Gallup polls, our presidential elections helped us name the kind of people we were. When politics was a more gentlemanly affair, we went through an “Era of Good Feeling.” Before the Civil War broke out, war had already broken out among—and within—the political parties; Lincoln won the presidency with only 40 percent of the popular vote. There have been times in American history when partisan passions were muted, electoral campaigns uninteresting, and the winners undistinguished, and other times when campaigns fired the public imagination, invective flew, and the winners got to shape the future.

In the current age, there’s no doubt that politics matter greatly to those who are deeply immersed in politics. Nor is there any doubt that Americans are faced in 2004 with choices that have demonstrably important consequences for the future of their country. What’s not clear is whether ordinary Americans are caught up in the passions that motivate our political and media elites. Nor are we any closer to solving the longstanding mystery of what motivates people to go to the polls and cast their ballots. But because each new election tells us a little more about who we are, we’ll have a better sense, when this year’s election is over, of whether the purported cultural divisions that have dominated our society for more than two decades will continue, or even be exacerbated, or whether they’ll begin to recede into insignificance, in the face of all that unites us.
