THE MAKING OF THE PUBLIC MIND


"Learned Institutions ought to be favorite objects with every free people," wrote James Madison. "They throw that light over the public mind which is the best security against crafty & dangerous encroachments on the public liberty." Those words could well be the motto of the Wilson Quarterly. To mark the magazine’s 25th anniversary, the editors posed a question to several writers: What quality of light do scholars and other thinkers throw on the public mind today?

Why Public Intellectuals? by Jean Bethke Elshtain
Undisciplined by Louis Menand
Wittgenstein’s Curse by Jay Tolson
The Struggle for the Soul of the Sentence by Sven Birkerts
The Professors and Bush v. Gore by Peter Berkowitz & Benjamin Wittes
Knowing the Public Mind by Karlyn Bowman
History for a Democracy by Wilfred M. McClay





Why Public Intellectuals?


by Jean Bethke Elshtain

Some time ago I spent a year at the Institute for Advanced Study in Princeton, New Jersey, where one of the pleasures is the opportunity to exchange ideas with scholars from other countries. One evening, a particularly animated member of an informal discussion group I had joined began to lament the sorry state of public intellectualism in the United States—this by contrast to her native France, and particularly Paris, with its dizzying clash of opinions. I remember being somewhat stung by her comments, and joined the others in shaking my head at the lackluster state of our public intellectual life. Why couldn’t Americans be more like Parisians?

The moment passed rather quickly, at least in my case. I recalled just how thoroughly the French intellectual class—except for the rare dissenters, such as the estimable, brave, and lonely Albert Camus—had capitulated to the seductions of totalitarian logic, opposing fascism only to become apologists for what Camus called "the socialism of the gallows."

French political life would have been much healthier had France embraced Camus and his few compatriots rather than Jean-Paul Sartre and the many others of his kind who wore the mantle of the public intellectual. When Camus spoke in a political voice, he spoke as a citizen who understood politics to be a process that involves debate and compromise, not as an ideologue seeking to make politics conform to an overarching vision. In the end, Camus insisted, the ideologue’s vision effectively destroys politics.

Perhaps, I reflected, America’s peculiar blend of rough-and-ready pragmatism and a tendency to fret about the moral dimensions of public life—unsystematic and, from the viewpoint of lofty ideology, unsophisticated as this combination might be—was a better guarantor of constitutionalism and a healthy civil society than were intellectuals of the sort my French interlocutor favored. Historically, public intellectuals in America were, in fact, members of a wider public. They shared with other Americans access to religious and civic idioms that pressed the moral questions embedded in political debate; they were prepared to live, at least most of the time, with the give-and-take of political life, and they favored practical results over systems.

The American temperament invites wariness toward intellectuals. Because they are generally better at living in their heads than at keeping their feet on the ground, intellectuals are more vulnerable than others to the seductions of power that come with possessing a worldview whose logic promises to explain everything, and perhaps, in some glorious future, control and manage everything. The 20th century is littered with the disastrous consequences of such seductions, many of them spearheaded and defined by intellectuals who found themselves superseded, or even destroyed, by ruthless men of action once they were no longer needed as apologists, provocateurs, and publicists. The definitive crackup since 1989 of the political utopianism that enthralled so many 20th-century public intellectuals in the West prompts several important questions: Who, exactly, are the public intellectuals in contemporary America? Do we need them? And if we do, what should be their job description?

Let us not understand these questions too narrowly. Every country’s history is different. Many critics who bemoan the paucity of public intellectuals in America today have a constricted view of them—as a group of independent thinkers who, nonetheless, seem to think remarkably alike. In most accounts, they are left-wing, seek the overthrow of bourgeois convention, and spend endless hours (or at least did so once upon a time) talking late into the night in smoke-filled cafés and Greenwich Village lofts. We owe this vision not only to the self-promotion of members of the group but to films such as Warren Beatty’s Reds.

But such accounts distort our understanding of American intellectual life. There was a life of the mind west of the Hudson River, too, as Louis Menand shows in his recent book, The Metaphysical Club. American intellectuals have come in a number of modes and have embraced a variety of approaches.

But even Menand pays too little attention to an important part of the American ferment. American public intellectual life is unintelligible if one ignores the extraordinary role once played by the Protestant clergy and similar thinkers, from Jonathan Edwards in the 18th century through Reinhold Niebuhr in the 20th. The entire Social Gospel movement, from its late-19th-century origins through its heyday about the time of World War I, was an attempt by the intellectuals in America’s clergy and seminaries to define an American civil religion and to bring a vision of something akin to the Peaceable Kingdom to fruition on earth, or at least in North America.

As universities became prominent homes for intellectual life, university-based intellectuals entered this already-established public discourse. They did so as generalists rather than as spokesmen for a discipline. In the minds of thinkers such as William James, George Herbert Mead, and John Dewey, there was no way to separate intellectual and political issues from larger moral concerns. Outside the university proper during the last decades of the 19th century and early decades of the 20th, there arose extraordinary figures such as Jane Addams and Randolph Bourne. These thinkers and social activists combined moral urgency and political engagement in their work. None trafficked in a totalizing ideology on the Marxist model of so many European intellectuals.

>Jean Bethke Elshtain is the Laura Spelman Rockefeller Professor of Social and Political Ethics at the University of Chicago and the author of many books, including Jane Addams and the Dream of American Democracy, to be published by Basic Books in December. She is a contributing editor of the New Republic. Copyright © 2001 by Jean Bethke Elshtain.

Anarchist, writer, and agitator Emma Goldman, shown here in New York in 1916, was one of the figures who created a new image of the public intellectual as antibourgeois radical.

Addams, for example, insisted that the settlement house movement she pioneered in Chicago remain open, flexible, and experimental—a communal home for what might be called organic intellectual life. Responding to the clash of the social classes that dominated the public life of her day, she spoke of the need for the classes to engage in "mutual interpretation," and for this to be done person to person. Addams stoutly resisted the lure of ideology—she told droll stories about the utopianism that was sometimes voiced in the Working Man’s Social Science Club at Hull-House.

Addams saw in Nathaniel Hawthorne’s short story "Ethan Brand" an object lesson for intellectuals. Ethan Brand is a lime burner who leaves his village to search for the "Unpardonable Sin." And he finds it: an "intellect that triumphed over the sense of brotherhood with man and reverence for God, and sacrificed everything to its mighty claims!" This pride of intellect, operating in public life, tries to force life to conform to an abstract model. Addams used the lesson of Ethan Brand in replying to the socialists who claimed that she refused to convert to their point of view because she was "caught in the coils of capitalism." In responding to her critics, Addams once described an exchange in one of the weekly Hull-House drawing room discussions. An ardent socialist proclaimed that "socialism will cure the toothache." A second fellow upped the ante by insisting that when every child’s teeth were systematically cared for from birth, toothaches would disappear from the face of the earth. Addams, of course, knew that we would always have toothaches.

Addams, James, Dewey, and, later, Niebuhr shared a strong sense of living in a distinctly Protestant civic culture. That culture was assumed, whether one was a religious believer or not, and from the days of abolitionism through the struggle for women’s suffrage and down to the civil rights movement of the 1960s, public intellectuals could appeal to its values. But Protestant civic culture thinned out with the rise of groups that had been excluded from the consensus (Catholics, Jews, Evangelical Christians), with the triumph of a generally secular, consumerist worldview, and with mainline Protestantism’s abandonment of much of its own intellectual tradition in favor of a therapeutic ethos.

The consequence, for better and for worse, is that there is no longer a unified intellectual culture to address—or to rebel against. Pundits of one sort or another often attempt to recreate such a culture rhetorically and to stoke old fears, as if we were fighting theocrats in the Massachusetts Bay Colony all over again. Raising the stakes in this way promotes a sense of self-importance by exaggerating what one is ostensibly up against. During the Clinton-Lewinsky scandal, for example, those who were critical of the president’s dubious use of the Oval Office were often accused of trying to resurrect the morality of Old Salem. A simple click of your television remote gives the lie to all such talk of a Puritan restoration: The screen is crowded with popular soft-core pornography packaged as confessional talk shows or self-help programs.

The specter of Old Salem is invoked in part because it provides, at least temporarily, a clear target for counterargument and gives television’s talking heads an issue that seems to justify their existence. But the truth is that there are no grand, clear-cut issues around which public intellectuals, whether self-described media hounds or scholars yearning to break out of university-defined disciplinary boundaries, now rally. The overriding issues of three or four decades ago on which an unambiguous position was possible—above all, segregation and war—have given way to matters that are complex and murky. We now see in shades of gray rather than black and white. It is difficult to build a grand intellectual argument around how best to reform welfare, structure a tax cut, or protect the environment. Even many of our broader civic problems do not lend themselves to the sorts of thematic and cultural generalizations that have historically been the stuff of most public intellectual discourse.

My point is not that the issues Americans now face raise no major ethical or conceptual concerns; rather, these concerns are so complex, and the arguments from all sides often so compelling, that each side seems to have some part of the truth. That is why those who treat every issue as if it fit within the narrative of moral goodness on one side and venality and inequity on the other become so wearying. Most of us, whether or not we are part of what one wag rather uncharitably dubbed "the chattering classes," realize that matters are not so simple. That is one reason we often turn to expert researchers, who do not fit the historical profile of the public intellectual as omnicompetent generalist.

For example, well before today’s mountains of empirical evidence came in, a number of intellectuals were writing about what appeared to be Americans’ powerful disaffection from public life and from the work of civil society. Political theorists like me could speak to widespread discontents, but it was finally the empirical evidence presented by, among others, political scientist Robert Putnam in his famous 1995 "Bowling Alone" essay that won these concerns a broad public hearing. In this instance, one finds disciplinary expertise put to the service of a public intellectual enterprise. That cuts against the grain of the culturally enshrined view of the public intellectual as a bold, lone intellect. Empirical researchers work in teams. They often have hordes of assistants. Their data are complex and must be translated for public consumption. Their work is very much the task of universities and think tanks, not of the public intellectual as heroic dissenter.

Yet it would be a mistake simply to let the experts take over. A case in point is the current debate over stem cell research and embryonic cloning for the purpose of "harvesting" stem cells. Anyone aware of the history of technological advance and the power of an insatiable desire for profit understands that such harvesting is a first step toward cloning, and that irresponsible individuals and companies are already moving in that direction. But because the debate is conducted in highly technical terms, it is very difficult for the generalist, or any nonspecialist, to find a point of entry. If you are not prepared to state an authoritative view on whether adult stem cells have the "pluripotent" potential of embryonic stem cells, you may as well keep your mouth shut. The technical debate excludes most citizens and limits the involvement of nonscientists who think about the long-range political implications of projects that bear a distinct eugenics cast.

Genetic "enhancement," as it is euphemistically called, will eventually become a eugenics project, meant to perfect the genetic composition of the human race. But our public life is so dominated by short-term considerations that someone who brings to the current genetic debate such a historical understanding sounds merely alarmist. This kind of understanding does not sit well with the can-do, upbeat American temperament. Americans are generally relieved to have moral and political urgency swamped by technicalities. This is hardly new. During the Cold War, debators who had at their fingertips the latest data on missile throwweights could trump the person who was not that sort of expert—but who wasn’t a naif either, who had read her Thucydides, and who thought there were alternatives to mutually assured destruction.

Americans prefer cheerleaders to naysayers. We tend to concentrate on the positive side of the ledger and refuse to conjure with the negative features—whether actual or potential—of social reform or technological innovation. Americans notoriously lack a sense of tragedy, or even, as Reinhold Niebuhr insisted, a recognition of the ironies of our own history. By naysayers I do not refer to those who, at the drop of a hat, issue a prefabricated condemnation of more-or-less anything going on in American politics and popular culture. I mean those who recognize that there are always losers when there are winners, and that it has never been the case in the history of any society that the benefits of a change or innovation fall evenly on all groups.

Whenever I heard the wonders of the "information superhighway" extolled during America’s years of high-tech infatuation, my mind turned to the people who would inevitably be found sitting in antiquated jalopies in the breakdown lane. It isn’t easy to get Americans to think about such things. One evening, on a nightly news show, I debated a dot.com millionaire who proclaimed that the enormous wealth and expertise being amassed by rich techno-whiz kids would soon allow us to realize a cure for cancer, the end of urban gridlock, and world peace. World peace would follow naturally from market globalization. Having the right designer label on your jeans would be the glue that held people together, from here to Beijing. When I suggested that this was pretty thin civic glue, the gentleman in question looked at me as if I were a member of some extinct species. It was clear that he found such opinions not only retrograde but nearly unintelligible.

A rooted intellectual: Jane Addams at Hull-House in the 1930s

The dot.com millionaire’s attitude exemplified a larger American problem: the dangers of an excess of pride, not just for individuals but for the culture as a whole. It isn’t easy in our public intellectual life, or in our church life, for that matter, to get Americans to think about anything to do with sin, the focus of much public intellectual discourse in America from Edwards to Niebuhr. We are comfortable with "syndromes." The word has a soothing, therapeutic sound. But the sin of pride, in the form of a triumphalist stance that recognizes no limits to human striving, is another matter.

The moral voices—the Jane Addamses and Reinhold Niebuhrs—that once had real public clout and that warned us against our tendency toward cultural pride and triumphalism seem no longer to exist, or at least to claim an audience anywhere near the size they once did. There are a few such voices in our era, but they tend not to be American. I think of President Václav Havel of the Czech Republic, who has written unabashedly against what happens when human beings, in his words, forget that they are not God or godlike. Here is Havel, in a lecture reprinted in the journal First Things (March 1995):

The relativization of all moral norms, the crisis of authority, the reduction of life to the pursuit of immediate material gain without regard for its general consequences—the very things Western democracy is most criticized for—do not originate in democracy but in that which modern man has lost: his transcendental anchor, and along with it the only genuine source of his responsibility and self-respect. Given its fatal incorrigibility, humanity probably will have to go through many more Rwandas and Chernobyls before it understands how unbelievably shortsighted a human being can be who has forgotten that he is not God.

Our era is one of forgetting. If there is a role for the public intellectual, it is to insist that we remember, and that remembering is a moral act requiring the greatest intellectual and moral clarity. In learning to remember the Holocaust, we have achieved a significant (and lonely) success. Yet to the extent that we now see genocide as a historical anomaly unique to a particular regime or people, or, alternatively, as a historical commonplace that allows us to brand every instance of political killing a holocaust, we have failed to achieve clarity. The truth lies somewhere between.

Where techno-enthusiasm and utopia are concerned, we are far gone on the path of forgetting. One already sees newspaper ads offering huge financial rewards to young egg donors if they have SAT scores of at least 1400, stand at least 5'10" tall, and are athletic. The "designer genes" of the future are talked about in matter-of-fact tones. Runaway technological utopianism, because it presents itself to us with the imprimatur of science, has an automatic authority in American culture that ethical thinkers, intellectual generalists, the clergy, and those with a sense of historic irony and tragedy no longer enjoy. The lay Catholic magazine Commonweal may editorialize against our newfangled modes of trading in human flesh—against what amounts to a "world where persons carry a price tag, and where the cash value of some persons is far greater than that of others." But the arguments seem to reach only those who are already persuaded. Critics on the environmental left and the social-conservative right who question techno-triumphalism fare no better. Instead of being seen as an early warning system—speaking unwelcome truths and reminding us what happens when people are equated with their genetic potential—the doubters are dismissed as a rear guard standing in the way of progress.

So this is our situation. Many of our pressing contemporary issues—issues that are not often construed as intrinsically political but on which politics has great bearing—raise daunting moral concerns. The concerns cannot be dealt with adequately without a strong ethical framework, a historical sensibility, and an awareness of human limits and tragedies. But such qualities are in short supply in an era of specialization and technological triumphalism. Those who seize the microphone and can bring the almost automatic authority of science to their side are mostly apologists for the coming new order. Those who warn about this new order’s possible baneful effects and consequences can be marginalized as people who refuse, stubbornly, to march in time, or who illegitimately seek to import to the public arena concerns that derive from religion.

We are so easily dazzled. We are so proud. If we can do it, we must do it. We must be first in all things—and if we become serious about bringing ethical restraint to bear on certain technologies, we may fall behind country X or country Y. And that seems un-American. The role for public intellectuals under such circumstances is to step back and issue thoughtful warnings. But where is the venue for this kind of discourse? Where is the training ground for what political theorist Michael Walzer calls "connected critics," thinkers who identify strongly with their culture, who do not traffic in facile denunciations of the sort we hear every night on television (along with equally facile cheerleading), but who speak to politics in a moral voice that is not narrowly moralizing?

That question underlies much of the debate about the state of civil society that occurred during the past decade. The writers and thinkers who warned about the decline of American civil society were concerned not just about finding more effective ways to reach desirable ends in public policy but about finding ways to stem the rushing tide of consumerism, of privatization and civic withdrawal, of public apathy and disengagement. We will not stem that tide without social structures and institutions that promote a fuller public conversation about the questions that confront us.

Whenever I speak about the quality of our public life before civic groups, I find a real hunger for public places like Hull-House. Americans yearn for forums where they can engage and interpret the public questions of our time, and where a life of the mind can emerge and grow communally, free of the fetters of overspecialization. Without an engaged public, there can be no true public conversations, and no true public intellectuals. At Hull-House, Jane Addams spoke in a civic and ethical idiom shaped and shared by her fellow citizens. The voices of the Hull-House public served as a check on narrow, specialized, and monolithic points of view. It was from this rich venue that Addams launched herself into the public debates of her time. Where are the institutions for such discussion today? How might we create them? It is one of the many ironies of their vocation that contemporary public intellectuals can no longer presume a public.

Intellectuals and others who speak in a public moral voice do not carry a card that says "Have Ideology, Will Talk." Instead, they embrace Hannah Arendt’s description of the task of the political theorist as one who helps us to think about what we are doing. In a culture that is always doing, the responsibility to think is too often evaded. Things move much too fast. The role for public intellectuals today is to bestir the quiet voice of ethically engaged reason. ❏





Undisciplined


by Louis Menand

Almost everyone agrees that American academic culture has changed dramatically in the past 25 years. Some people (mostly inside the academy) talk about those changes in terms of accessibility, diversity, increased public engagement, and so on. Others (mostly outside the academy) talk about them in terms of political correctness, affirmative action, the "death of literature," the rise of "grievance studies," and so on. In general, the differences between the two groups are framed as a debate over consciously held views: People with bad (or good) ideas seized control of higher education and drove out the good (or bad) ideas of the previous generation. To frame the debate so is not wrong: If changes in academic culture, where people are paid to think, are not driven in part by consciously held ideas, what changes are? But ideas are often driven, in turn, by long-term structural movements, and it is useful to step back from the debate over academic politics and values to see the evolution of the culture of higher education from a more impersonal perspective. One place to watch the change occurring is in the demise of the traditional academic disciplines.

Traditionally, an academic discipline was a paradigm inhabiting an institutional structure. "Anthropology" or "English" was both the name of an academic department and a discrete, largely autonomous program of inquiry. If, 30 or 40 years ago, you asked a dozen anthropology professors what anthropology’s program of inquiry was—what anthropology professors did that distinguished them from other professors—you might have gotten different, and possibly contradictory, answers, because academic fields have always had rival schools in them. But, by and large, the professors would have had little trouble filling in the blank in the sentence, "Anthropology is ____." (And if they did not have a ready definition—for anthropology has gone through periods of identity crisis in the past— they would not have boasted about the fact.) Today, you would be likely to get two types of definitions, neither one terribly specific, or even terribly useful. One type might be called the critical definition: "Anthropology is the study of its own assumptions." The other type could be called the pragmatic definition: "Anthropology is whatever people in anthropology departments do."

Not every liberal arts discipline is in the condition of anthropology, of course, but that only heightens the sense of confusion. It tends to set people in fields in which identification with a paradigm remains fairly tight, such as philosophy, against people in fields in which virtually anything goes, such as English. Philosophy professors (to caricature the situation slightly) tend to think that the work done by English professors lacks rigor, and English professors tend to think that the work of philosophy professors is introverted and irrelevant.

The dissociation of academic work from traditional departments has become so expected in the humanities that it is a common topic of both conferences and jokes. During a recent conference (titled "Have the Humanistic Disciplines Collapsed?") at the Stanford Humanities Center, one of the center’s directors, to demonstrate the general dissipation of scholarly focus, read the titles of projects submitted by applicants for fellowships and asked the audience to guess each applicant’s field. The audience was right only once—when it guessed that an applicant whose project was about politics must be from an English department.

The usual response to the problem of "the collapse of the disciplines" has been to promote interdisciplinary teaching and scholarship. But interdisciplinarity is not only completely consistent with disciplinarity—the concept that each academic field has its own distinctive program of inquiry—it actually depends on that concept. More and more colleges are offering more and more interdisciplinary classes, and even interdisciplinary majors, but increased interdisciplinarity is not what is new, and it is not the cause of today’s confusion. What the academy is now experiencing is postdisciplinarity—not a joining of disciplines, but an escape from disciplines.

>Louis Menand is Distinguished Professor at the Graduate Center of the City University of New York, and the author, most recently, of The Metaphysical Club (Farrar, Straus & Giroux). He thanks the Alfred P. Sloan Foundation for its support of the research on which this essay is based. Copyright © 2001 by Louis Menand.

Landscape (1994), by Mark Tansey

How did this come about? The most common way of explaining paradigm loss has been to tie it to the demographic shift that has occurred in higher education since 1945. That shift has certainly been dramatic. In 1947, 71 percent of college students in the United States were men; today, a minority of college students, 44 percent, are men. As late as 1965, 94 percent of American college students were classified as white; today, the figure (for non-Hispanic whites) is 73 percent. Most of the change has occurred in the past 25 years. A single statistic tells the story: In the decade between 1984 and 1994, the total enrollment in American colleges and universities increased by two million, but not one of those two million new students was a white American man. They were all nonwhites, women, and foreign students.

Faculty demographics altered in the same way, and so far as the change in the status of the disciplines is concerned, that is probably the more relevant shift. Current full-time American faculty who were hired before 1985 are 28 percent female and about 11 percent nonwhite or Hispanic. Full-time faculty who have been hired since 1985—individuals who, for the most part, entered graduate school after 1975—are half again as female (40 percent) and more than half again as nonwhite (18 percent). And these figures are for full-time professors only; they do not include part-time faculty, who now constitute 40 percent of the teaching force in American higher education, and who are more likely to be female than are full-time faculty. In 1997, there were 45,394 doctoral degrees conferred in the United States; 40 percent of the recipients were women (in the arts and humanities, just under 50 percent were women), and only 63 percent were classified as white American citizens. The other 37 percent were nonwhite Americans and foreign students.

The arrival of those new populations just happens to have coincided with the period of the so-called culture wars—a time, beginning around 1987, the year of Allan Bloom’s The Closing of the American Mind, when higher education came under intense outside criticism for radicalism and elitism. This coincidence has made it natural to assume a connection between the new faces on campus and the "collapse" (or the "redefinition") of the disciplines. There are two ways of explaining the connection. One is to suggest that many nonwhite and female students and professors understood the disciplines as rigid and exclusionary abstractions, and brought a new spirit of transgressiveness and play (things not associated with the culture of white men) into the academy. That interpretation is a little too similar to its evil twin— the view that many women and nonwhites lack the temperament for rigorous scholarship and pedagogy. A less perilous explanation is that the new populations inevitably created a demand for new subject matter, a demand to which university departments, among the most sluggish and conservative institutions in America, were slow to respond. The sluggishness produced a backlash: When women and nonwhites began arriving at universities in significant numbers after 1975, what happened was a kind of antidisciplinarity. Academic activity began flowing toward paradigms that essentially defined themselves in antagonism to the traditional disciplines.

Women’s studies departments, for example, came into being not because female professors wished to be separate, but because English and history and sociology departments were at first not terribly interested in incorporating gender-based courses into their curricula. The older generation of professors, whatever their politics personally, in most cases did not recognize gender or ethnic identity as valid rubrics for teaching or scholarship. So outside the discipline became a good place for feminist scholars to be. Indeed, there was a period, beginning in the late 1970s, when almost all the academic stars were people who talked about the failures and omissions of their own fields.

That was especially the case in English literature, where there was (allegedly) a "canon" of institutionally prescribed texts available to question. Edward Said’s Orientalism (1978) was about the scholarly bias against non-Western cultures; Sandra Gilbert and Susan Gubar’s The Madwoman in the Attic (1979) was about the exclusion and misinterpretation of work by women; Jane Tompkins’s Sensational Designs (1985) was about the exclusion of popular literature; and so on. Scholarly works such as those did not simply criticize their own disciplines; they simultaneously opened up and legitimated new areas of research and teaching. And when departments were slow to adopt the new areas, centers were happy to take up the slack. The period since 1975 has been the era of the center—for women’s studies, postcolonial studies, African American studies, gay and lesbian studies, science studies, cultural studies. And every university seems either to have or to be busy creating its very own humanities center. Few of the centers grant degrees—they lack the institutional power of the departments. But they are interdisciplinary by definition (they are made up of professors from a variety of disciplines) and antidisciplinary in temper (they were established to compensate for some perceived inadequacy in the existing departments).

But the era of antidisciplinarity is essentially over, for the simple reason that the traditional disciplines have by now almost all been coopted. Virtually no one in the university today believes that gender or ethnic identity (or any of the other areas of research associated with the centers) is not a valid rubric for research or teaching. People in English departments and anthropology departments do exactly what people used to have to go to women’s studies and cultural studies centers to do. The arrival of new populations, in other words, helps to explain the emergence of the critical definition of the discipline—that "anthropology (or English or history) is the study of its own assumptions." That formulation is a holdover from the days of antidisciplinarity. But the influx doesn’t really explain the pragmatic definition—that "anthropology (and the rest) is whatever anthropology professors do." Merely adding new areas of study (women’s history, postcolonial writers, and so on) doesn’t threaten the integrity of a discipline, even if it entails (as it often does) rethinking traditional standards and practices. Postdisciplinarity is a different phenomenon, and it has a distinct etiology.

The contemporary American university is an institution shaped by the Cold War. It was first drawn into the business of government-related scientific research during World War II, by men such as James Bryant Conant, who was the president of Harvard University and civilian overseer of scientific research during the war, and Vannevar Bush, who was a former vice president and dean of engineering at the Massachusetts Institute of Technology and director of the federal Office of Scientific Research and Development. At the time of the First World War, scientific research for military purposes had been carried out by military personnel, so-called soldier-scientists. It was Bush’s idea to contract this work out instead to research universities, scientific institutes, and independent private laboratories. In 1945 he oversaw publication of the report Science: The Endless Frontier, which became the standard argument for government subvention of basic science in peacetime and launched the collaboration between American universities and the national government.

Then came Sputnik, in 1957. Sputnik stirred up a panic in the United States, and among the political responses was the passage of the National Defense Education Act of 1958. The legislation put the federal government, for the first time, in the business of subsidizing higher education directly, rather than through government contracts for specific research. This was also the period when economists such as Gary Becker and Theodore Schultz introduced the concept of "human capital," which, by counting educated citizens as a strategic resource, offered a further national security rationale for increased government investment in higher education. In the words of the enabling legislation for the National Defense Education Act: "The security of the Nation requires the fullest development of the mental resources and technical skills of its young men and women.... We must increase our efforts to identify and educate more of the talent of our Nation. This requires programs that will give assurance that no students of ability will be denied an opportunity for higher education because of financial need."

The national financial commitment to higher education was accompanied by the arrival of the baby-boom generation of college students. Between 1955 and 1970, the number of 18-to-24-year-olds in America grew from 15 million to 25 million. The result was a tremendous expansion of the higher education system. In 1945, 15 percent of all Americans attended college; today, 50 percent attend college at some point in their lives. In 1949 there were about 1,800 institutions of higher education in the United States, enrolling just under two and a half million students; today, there are just over 4,000 American colleges and universities, and they enroll more than 14 million students, about 5 percent of the population. Current public expenditure on higher education is the equivalent of 5.6 percent of gross domestic product (GDP). To put those numbers in perspective: In the United Kingdom, 14.7 percent of the population goes on to university, and public expenditure on higher education (in a country where almost all universities are public) is 4.1 percent of GDP.

The expansion undoubtedly accounts for some of the decay disciplinary paradigms have undergone. In a system of (essentially) mass higher education, a much smaller proportion of students is interested in pursuing traditional academic work. That is not why they choose to go to college. Only a third of bachelor’s degrees awarded in the United States each year are in liberal arts fields (which include the natural and social sciences), and less than a third of those are in the humanities. It is not surprising that a sense of being squeezed onto the margins of a system increasingly obsessed with other things should generate uncertainty and self-doubt among people in the humanities.

You can see the effects in college catalogues. At Trinity College in Hartford, Connecticut, for example, the philosophy department’s announcement says: "A good philosopher should know at least a little something about everything." The department then recommends the study of a foreign language, but only because it "encourages the habit of careful attention to a text." It recommends a "broad understanding of modern science," but suggests that "any good science course...is suitable." It recommends courses in history, literature, and the arts, but advises that students generally select courses in these fields according to the amount of reading assigned (the more reading, the more desirable). It ends by saying what was already clear enough: "We require no particular non-departmental courses as part of the major." The next section of the announcement, titled "Introductory Courses," begins, "There is no single best way to be introduced to philosophy." That is not a confession of uselessness; it is an effort to conceive of philosophy as continuous with all other areas of thought—the "philosophy is whatever philosophers do" approach. (Still, it is unusual to find a philosophy department knocking down its own disciplinary fences with such abandon.)

Expansion was only one of the effects Cold War educational policies had on the university. There was a more insidious effect as well. The historian Thomas Bender has suggested, in his contribution to the illuminating volume American Academic Culture in Transformation (1997), that the new availability of state monies affected the tenor of academic research. Scholars in the early Cold War era tended to eschew political commitments because they wished not to offend their granting agencies. The idea that academics, particularly in the social sciences, could provide the state with neutral research results on which public policies could be based was an animating idea in the 1950s university. It explains why the dominant paradigms of academic work were scientific, and stressed values such as objectivity, value neutrality, and rigor.

In the sciences, the idea of neutrality led to what Talcott Parsons called the ethos of cognitive rationality. In fields such as history, it led to the consensus approach. In sociology, it produced what Robert Merton called theories of the middle range—an emphasis on the formulation of limited hypotheses subject to empirical verification. Behaviorism and rational choice theory became dominant paradigms in psychology and political science. In literature, even when the mindset was antiscientific, as in the case of the New Criticism and structuralism, the ethos was still scientistic: Literary theorists aspired to analytic precision. Boundaries were respected and methodologies were codified. Discipline reigned in the disciplines. Scholars in the 1950s who looked back on their prewar educations tended to be appalled by what they now regarded as a lack of rigor and focus.

Because public money was being pumped into the system at the high end—into the large research universities—the effect of the Cold War was to make the research professor the type of the professor generally. In 1968, Christopher Jencks and David Riesman referred to the phenomenon as "the academic revolution": For the first time in the history of American higher education, research, rather than teaching or service, defined the work of the professor, not just in the doctoral institutions but all the way down the institutional ladder. (That is why, today, even junior professors at teaching-intensive liberal arts colleges are often obliged to produce two books to qualify for tenure.) The academic revolution strengthened the grip of the disciplines on scholarly and pedagogical practice. Distinctions among different types of institutions, so far as the professoriate was concerned, began to be sanded down. The Cold War homogenized the academic profession.

If you compare the values of the early Cold War university with the values of the 21st-century university, you find an almost complete reversal of terms. A vocabulary of "disinterestedness," "objectivity," "reason," and "knowledge," and talk about such things as "the scientific method," "the canon of great books," and "the fact-value distinction," have been replaced, in many fields, by talk about "interpretations" (rather than "facts"), "perspective" (rather than "objectivity"), and "understanding" (rather than "reason" or "analysis"). An emphasis on universalism and "greatness" has yielded to an emphasis on diversity and difference; the scientistic norms that once prevailed in many of the "soft" disciplines are viewed with skepticism; context and contingency are continually emphasized; attention to "objects" has given way to attention to "representations"; there has been a turn to "personal criticism."



Honorary Degree (1938), by Grant Wood. © Estate of Grant Wood/Licensed by VAGA, New York, N.Y., Courtesy of the Dubuque Museum of Art.



The trend is essentially a backlash against the scientism and the excessive respect for disciplinarity of the Cold War university. We cannot attribute it solely to demographic diversification because most of the people one would name as its theorists are white men, and because the seeds of the undoing of the old disciplinary models were already present within the disciplines themselves. The people whose work is most closely associated with the demise of faith in disciplinary autonomy were, in fact, working entirely within the traditions in which they had been trained in the 1950s and early 1960s—people such as Clifford Geertz, Paul De Man, Hayden White, Stanley Fish, and Richard Rorty. The principal source of the "critical definition" of disciplines, Thomas Kuhn’s The Structure of Scientific Revolutions (1962), was in itself a perfectly traditional exercise in the philosophy and history of science. (Kuhn’s mentor, to whom the book is dedicated, was James Conant.) But Kuhn’s argument that "progress" in scientific knowledge can be explained in large part as the substitution of new paradigms for old proved infectious in disciplines far removed from the philosophy of science. Kuhn’s book was not a work of science studies. He was not trying to explain science as displaced biography or sociology. He was only trying to describe how science opens up new paths of inquiry, and for the rest of his career he resisted the suggestion that his theory of paradigm change implied that scientific knowledge was relativistic or socially constructed. Still, he set the analytic template for people in many other fields.

Richard Rorty, for example, has always cited Kuhn as a key influence on his own effort to debunk (or to transcend) the tradition of analytic philosophy. Philosophy and the Mirror of Nature (1979), Rorty’s landmark work, constructed its attack on the claims of analytic philosophy entirely from within the discipline itself—from arguments advanced by mainstream analytic philosophers such as Ludwig Wittgenstein, Wilfrid Sellars, W. V. O. Quine, Nelson Goodman, and Hilary Putnam. Rorty’s point was not that analytic philosophy was a mere academic formalism, or the politically objectionable artifact of a mandarin intellectual class (the sort of argument, in other words, one could imagine from a person outside the discipline). His point was that analytic philosophy had refuted itself on its own terms.

The scholar who most successfully adopted Kuhn’s conception of the progress of knowledge as a series of paradigm shifts was Stanley Fish. Whatever the problems Kuhn’s theory posed for scientists, many people in English departments saw developments within their own field as precisely a succession of largely ungrounded paradigm shifts. Since 1945, the discipline had been dominated by, in turn, New Criticism, structuralism, and deconstruction—each theoretical dispensation claiming to have unlocked the true nature of literary language, which its predecessor had misunderstood. Fish interpreted the shifts as a succession of "communities of inquiry," whose norms and values set the boundaries for what was professionally acceptable and what was not. Once this interpretation was grasped, the belief that "English" represented any single way of approaching literature came to seem naive. Thus the pragmatic definition: The study of English is whatever people within the community of inquiry known as "the English department" happen to count as the study of English. There is no objective referent, such as "the nature of literary language," to use as an arbiter among approaches. The foundation has not shifted—it has vanished.

The story of paradigm loss is the story of many converging trends—which is a good reason for concluding that the loss is not likely to be reversed anytime soon. One can ask, though, whether postdisciplinarity is a good place to be. My own view, for what it is worth, is that the academy is well rid of the disciplinary hubris of the early Cold War university, but that it is at some risk of sliding into a predictable and aimless eclecticism (as opposed to an imaginative and dynamic eclecticism, which I support). In a perfect world, which is to say in a fully funded world, the intellectual uncertainties caused by the collapse of the disciplines would eventually shake themselves out. The good ideas would drive out the bad, and people would find a way to separate what is worth studying and teaching from what is trendy or meretricious. But the world is not fully funded. Disciplines do not have an infinite amount of time to sort out their rationales. When they have a hard time explaining what they are about, they are in danger of losing out in the competition.

What is the chief obstacle to a productive resolution of the current disciplinary confusion? Doctoral education is the sphere of the American educational system most resistant to reform. It remains bound to a disciplinary structure first put in place 100 years ago—even though the curriculum of the liberal arts college, the demographic composition of student bodies, and the status of knowledge itself in the global economy have all been transformed. Graduate students still specialize in a small subfield within a traditional department, still become disciples of senior specialists for eight or 10 or sometimes 12 years, still produce a scholarly monograph to secure a degree that will license them to teach. All the buzz of academic intellectual life is happening in sex and gender studies, cultural studies, American studies, postcolonial studies, and so on, but all the credentialing goes on in departments of English, history, sociology, philosophy, and the rest of the traditional liberal arts fields. The academic establishment has become so overinvested in the notion that a Ph.D. in one of those fields stands for something immutably real and valuable that it cannot imagine reproducing itself other than by putting the next generation over exactly the same hurdles. Once a device for professional self-control, the doctoral degree has become a fetish of the academic culture. There must be other ways to train college teachers. There must be other ways to pursue scholarly inquiry. ❏




Wittgenstein’s Curse

by Jay Tolson

It’s easy to go on about how bad most academic writing is these days, and how it became so during the past 30 or 40 years. Curmudgeonly journalists have been pouncing on prof-prose at least since the days of H. L. Mencken. But now high sport is made of the subject even within the academy. One academic journal awards annual prizes in a Bad Writing Contest, causing pain and sometimes anger among the unwitting winners. Scholars agonize about the problem, too. Russell Jacoby, for one, links it to the disappearance of the great public intellectuals who once enriched the larger culture. And it seems clear that the decline of scholarly writing has widened the eternal divide between the world of scholars and the public realm, to the impoverishment of both. Just as bad, the pursuit of truth and knowledge—an activity that should be charged with passion and engagement—now appears to the larger public to be an exercise in nonsensical irrelevance.

Perhaps nothing brought the whole sorry matter to a more dramatic head than the parodic gibberish-and-jargon-filled article that New York University physicist Alan Sokal tricked the scholarly journal Social Text into publishing in 1996. Titled "Transgressing the Boundaries: Toward a Transformative Hermeneutics of Quantum Gravity," the essay argued that scientific knowledge was socially constructed, an argument very much in line with the journal’s postmodernist agenda. What the editors failed to see, though, is that the piece was packed with illogic, non sequiturs, and nonsense, including an unargued rejection of the "dogma" that asserts the existence of "an external world, whose properties are independent of any human being and indeed of humanity as a whole."

On the day the article was published, Sokal let the world know that it had been a hoax, and an uproar ensued. Many of the more interesting contributions to that controversy were published last year in a book, The Sokal Hoax—and not all of them were critical of the journal’s editors. In fact, literary scholar Stanley Fish made a plausible defense of the argument that Sokal had parodied: "What sociologists of science say," Fish wrote, "is that of course the world is real and independent of our observations but that accounts of the world are produced by observers and are therefore relative to their capacities, education, training, etc. It is not the world or its properties but the vocabularies in whose terms we know them that are socially constructed—fashioned by human beings—which is why our understanding of those properties is continually changing."

That is true and sensible and clearly put. Unfortunately, it’s not a distinction the editors of the journal seemed to grasp, because what Sokal said in his trickster voice was precisely that there was no external world independent of human constructions of it. And the trickster didn’t even make an argument for his outlandish claim. He simply tossed around the jargon, let it fall where it might, and concluded—voilà—that there is nothing out there unless we construct it into being.



Philosopher Ludwig Wittgenstein, in a 1943 photograph set amid one of his manuscripts.

Maybe Fish failed to get the point for the same reason the editors didn’t see it: because the writing was as impenetrably bad as most prose published in Social Text, and indeed as bad as so much current academic writing. The not-so-secret little secret, it turns out, is that no one really reads this stuff anyway, not even folks who produce reams of it for countless scholarly publications. And in truth, the stuff is not meant to be read. It’s a form of professional feather display, the ritual gesturing by which scholars establish standing with others in their particular niche, or subniche, of the scholarly trade. Display the jargon—feminist, neo-Marxist, postcolonialist, deconstructionist, whatever—and you’re in, you’re one of us, we want you on our tenure track.

If this seems to be a partisan slam against only the more progressive, left-leaning, and postmodern members of the academic community, let me second a point made by Patricia Nelson Limerick in the New York Times Book Review (Oct. 31, 1993): The more conservative traditionalists within the academy can often be just as bad as the Sado-Marxists and the Martian-Leninists (or maybe almost as bad). Limerick quotes a passage from that best-selling tract The Closing of the American Mind (1987), by the late Allan Bloom, a University of Chicago scholar who trained, and was subsequently revered by, a cadre of neoconservative thinkers now gone forth into the world to pursue an assortment of academic and nonacademic occupations:

If openness means to "go with the flow," it is necessarily an accommodation to the present. That present is so closed to doubt about so many things impeding the progress of its principles that unqualified openness to it would mean forgetting the despised alternatives to it, knowledge of which would only make us aware of what is doubtful in it.

Got that? And does it not read like something only barely translated from German, or a directive from the Department of Housing and Urban Development? Postmodernish, far-leftish types may commit more, and more grievous, sins against the ideal of clear prose, but they are not alone in their sins.

Why have so many been undone by willful obscurantism and given themselves over to cant and nonsense? So many reasons, so little time to state them all. In fact, many have already been stated, and many times over. But let me mention a couple that might not have received quite as much attention as they deserve, before coming to what I think is a fundamental cause.

First of all, academic writing has never been all that much fun to read. Mencken, as I mentioned earlier, went to town on the foibles of academese, focusing with particular viciousness on sociologist Thorstein Veblen’s tortured, jargon-flecked prose. But does that mean that Veblen’s theories about the leisure class and conspicuous consumption were unimportant? Not at all. Writing about difficult matters can be difficult—and often requires neologisms and complicated, subtle analysis. We have a hard time following the explanations of auto mechanics. Why should the explanations of a philosopher or sociologist be easier to follow? Clarity of expression should be a handmaiden of intellectual brilliance, but Veblen and many others demonstrate that often it is not.

That said, the rife obscurantism in scholarly publications today comports itself in a self-congratulatory, almost arrogant manner. Its promulgators argue that the difficulty is essential to the gravity of their ideas or to an intellectual or political stance, and that clarity, in any case, is just some elitist, dead-white-male convention. In "Troubling Clarity: The Politics of Accessible Language," published by the Harvard Educational Review (Fall 1996), Patti Lather justifies the liberating complexity of her own feminist writings:

Sometimes we need a density that fits the thoughts being expressed. In such places, clear and precise plain prose would be a sort of cheat tied to the anti-intellectualism rife in U.S. society that deskills readers. . . . Positioning language as productive of new spaces, practices, and values, what might come of encouraging a plurality of discourses and forms and levels of writing in a way that refuses the binary between so-called "plain speaking" and complex writing? . . . What is the violence of clarity, its non-innocence?

>Jay Tolson, a senior writer at U.S. News & World Report, was the editor of the Wilson Quarterly from 1989 to 1998. He is the author of Pilgrim in the Ruins (1994) and the editor of The Correspondence of Shelby Foote and Walker Percy (1996). Copyright © 2001 by Jay Tolson.

Claiming that her book about women with HIV/AIDS, Troubling Angels, was aimed at a popular audience, and even intended to be what she calls a "Kmart book," Lather boasts at the same time that she refused to produce a "tidy book" or a "comfort text," with the kind of writing "that maps easily into our ways of making sense and ‘giving sense.’" I have yet to encounter Troubling Angels on any of my visits to Kmart. I wonder whether any other Kmart shoppers have come across it.

Lather, like so many who proudly assert their obscurity, does not have the justification of a Veblen or a Hegel. There is no brilliance or insight or originality in her work. There is only a thicket of nonsense, faddishness, and claptrap. But Lather wears her opacity proudly, like a badge, and no doubt enjoys tenure at Ohio State University because of it. And she is no rarity, no exception. Her kind are everywhere—troubling texts, troubling clarity, troubling the hegemonic hold on beauty and truth—and the sheer quantity of the drivel they produce is another big part of the problem.

The endless production is a matter of necessity and survival, of course. The academic professions require it—and not just the noble drudgery of teaching, research, editing, and monograph writing that engaged more modest scholars in the past (particularly those who recognized their intellectual and writerly limitations). No, the professions today demand substantial "original" works by all members of the professoriate who hope to rise to tenure. And that demand is simply unrealistic. For how much new is there under the sun? Not much—in scholarship or in any other human pursuit. Yet never have so many words been used so badly, and to say so little, as in these works of professedly original scholarship. Yes, there are still scholarly writers who produce truly groundbreaking work that reaches, informs, and enlightens not just other scholars but popular audiences as well. But beneath that apex, how enormous is the mountain of entirely superfluous scholarly prose!

One remedy seems obvious: more modesty on the part of the academic professions and a return to other scholarly tasks, including teaching, greater mastery of the core subject matter of a field, and recognition that in the realm of "original" work, less is more. But the obvious solution is no easy solution. It may even require coming to terms with a difficult matter indeed—the very character of the modern scholarly enterprise. The formation of that character has a complicated history, which has already been the subject of many works of scholarship. Let me attempt to make sense of the problem by blaming it, only half facetiously, on one of the more brilliant minds of the past century.

Great minds can do great mischief, and few minds have been greater than that of Ludwig Wittgenstein (1889–1951), the Vienna-born philosopher who spent some of his productive years disturbing the donnish waters of Cambridge University. Wittgenstein first decided to establish very precisely, in his Tractatus Logico-Philosophicus (1922), what philosophy should and should not discuss. He then all but reversed the conclusions of that book to develop his notion that human language has a fundamentally gamelike quality—a notion that implied a far less restrictive view of philosophy’s mission. Though he accomplished those feats in a prose so gnomically stringent that it almost defies comprehension, he left a deep imprint not just on philosophy but on 20th-century intellectual life in general. But that influence, alas, was not wholly benign.

The baleful part of Wittgenstein’s legacy is not so much a matter of strict logical-philosophical inadequacy as it is a problem of intellectual style—a certain prejudice, expressed both in his personal dealings with people and in his work, about what the life of the mind should be. One way to get a sense of this style is through an anecdote recounted by one of his Cambridge friends, the literary critic F. R. Leavis. In a short memoir about their friendship, Leavis told how Wittgenstein came to him one day "and, without any prelude, said, ‘Give up literary criticism!’ " Cambridge being a relatively civil place, Leavis didn’t assault the brash Austrian. He didn’t even make the obvious retort— "Give up philosophy!"—in part because he thought that Wittgenstein had fallen under the sway of John Maynard Keynes and other Bloomsbury wits who liked to toss off facile putdowns of people or ideas they disagreed with. More to the point, Leavis noted that Wittgenstein had only a "rudimentary" sense of literature, and so was incapable of thinking that it (much less literary criticism) "might matter intellectually." Such a view could not have been more inimical to Leavis’s conviction "that the fullest use of language is to be found in creative literature, and that a great creative work is a work of original exploratory thought." And to validate his conviction, Leavis adverted to his view about the inadequacy of philosophers: They were, he said, "weak on language."



What confidence! Had it endured within the precincts of higher learning, it’s fair to ask whether we would have avoided the current parlous state of academic letters. I think so, even as I acknowledge the overstatement implicit in my assertion, and even as I allow that Leavis’s confidence was itself a little shaky.

Many factors share responsibility for the deplorable condition of academic writing, but none is more fundamental than the fatally mistaken view that intellectual work must be "serious."

By claiming that literary criticism was serious in a way that Wittgenstein should have been able to appreciate, Leavis all but embraced, however unwittingly, Wittgenstein’s definition of seriousness: a rigorous way of thinking and proceeding intellectually, rooted in the assumedly clear procedural ways of the inductive sciences and leading to objective truth about the world, people, and what Wittgenstein called "everything that is the case." That is scientism, of course, driven by a Protestant intentness on having one’s subjective perceptions validated by claims to the kind of objective truth that can be revealed by the scientific method. No, I am not attacking science, the scientific method, or the many real and obvious blessings that have resulted from them. Nor am I attacking the notion of objectivity or the laudable goal of objective truth. I am merely pointing to the misapplication of the scientific idea, and to the consequences of the same.

Wittgenstein’s early philosophy led him to the conclusion that we cannot talk rigorously or precisely about most things that humans deem of ultimate importance: truth, beauty, goodness, the meaning and ends of life. We can speak precisely and meaningfully only about those things that objective science can demonstrate. In his view, philosophy was to be a helpful tag-along of science: It can paint clear verbal pictures of what science divulges. But even Wittgenstein recognized that this understanding of the limitations of language was too limiting, and he became more and more interested in the provisional and social character of language, and in how the mystery of meaning emerges out of the shared play of making worlds out of words. He was struggling beyond scientism, and his final book, Philosophical Investigations (1953), posthumously assembled, seems to point suggestively away from the narrowness and inconsequentiality of his earlier position.

But if Wittgenstein struggled against the conclusions of his early work, I fear that the Western academic world increasingly succumbed to a desire for the kind of dubious seriousness that enticed the young philosopher. Scholars of literature and the arts, historians, philosophers, and other academic humanists joined sociologists, anthropologists, and political scientists in trying to make their fields as "serious" as the hard sciences. They grew obsessed with theory and methodology, and particularly with the most abstract issues of epistemology—how we know what we know. This is largely the story of professionalization, of course, of how professional standards and approved behaviors got established in the academic realm. It was Wittgenstein’s curse upon the professionals of the humanistic and social science disciplines that they took his kind of seriousness as an essential goal.

Why a curse? For one thing, because it burdened those professions with a narrow-spirited utilitarianism. In his early work, Wittgenstein believed that his job was to make philosophy useful. He wanted to clear out, like so much underbrush, all the metaphysics and other matters that couldn’t be resolved the way a problem in, say, engineering (in which he had had training) can be resolved. In his early view, remember, philosophy was supposed to become a helpful user’s manual for the hard sciences. For it to be anything else was frivolous, an indulgence, unserious. Wittgenstein, as many of his contemporaries noted, had a genius for making colleagues and students feel guilty about not doing useful, productive work. He urged a number of his students to abandon scholarship altogether and become car mechanics or hospital orderlies. Some took his advice—to the shock and sorrow of their parents.

The compulsion to prove the utility of ideas spread through the humanities and social sciences like a contagion, assuming a variety of political, ideological, and theoretical colorings. It was no longer sufficient to master and convey the great historical record, or to locate and celebrate the pleasures of great works of literature or painting or music. Even the pursuit of wisdom was not enough, once wisdom got problematized. Theorizing took over. Elaborate theorymongering, often French- or German-inspired, displaced the mastering of subject matter, so that fledgling literary scholars, for example, ended up knowing more (or thinking they knew more) about Bakhtin than about Chekhov, more about queer theory than about any literary tradition. The pretense of helping the working class, or liberating gays by deconstructing texts, or doing meta-meta-interpretations of historical questions appeared to be the really serious work. No matter that such seriousness arguably achieved no serious real-world consequences. No matter that it became increasingly irrelevant to the real world—and completely impenetrable to most people in that world.

There’s an additional problem. The drift of much postmodern thought has been toward the conclusion that there is no absolute or objective truth; there are only constructions of the truth, influenced by power and power relations within society (might makes right—and truth) or by unacknowledged biases rooted in, say, gender or race. This radical skepticism, elaborated by such thinkers as the pragmatist Richard Rorty, holds that the pursuit of truth is essentially bootless. Whether such skepticism is itself simplistic (and, in Rorty’s case, whether it’s a misreading of the far more complicated view of truth held by earlier American pragmatists such as Charles Sanders Peirce) is beyond discussion here. But skepticism’s almost dogmalike standing within much of the academic community introduces a rich irony: Whereas skepticism would seem to invite scholars within the humanities, and even the social sciences, to abandon their reliance on pseudoscientific theories and methodologies and become truly independent thinkers and writers, it has in fact enslaved them all the more to pseudoscientific doctrines.

And make no mistake: The doctrines are pseudo. The same Sokal who fooled the editors of Social Text subsequently teamed up with physicist Jean Bricmont to write a book, Fashionable Nonsense (1998), that showed the absurd and often hilarious efforts by leading postmodern thinkers to dress up their theories with scientific terminology and even mathematical formulas. (The highly influential Jacques Lacan, for example, boasted that his theories drew from "the most recent developments in topology.") On close inspection, the terminology and the formulas make no sense at all. "They imagine, perhaps, that they can exploit the prestige of the natural sciences in order to give their own discourse a veneer of rigor," write Sokal and Bricmont. "And they seem confident that no one will notice their misuse of concepts."

Such dishonesty is bad enough in itself. But the effect of the pseudoscientific doctrines on writing throughout the humanities and social sciences—and the writing remains unchanged, despite Sokal and Bricmont’s valuable unmasking—only increases the seriousness of the crime. Forcing their ideas into the Procrustean beds of Foucaultian or Lacanian theoretical constructs—or others equally dubious—scholars produce a prose that seems to have emerged from a machine, a subjectless void. Where in that prose is the self, the individual? Nowhere. There is no mind grappling freshly with a problem. There is no feeling, no humor, no spark of what is human; there is only the unspooling of phony formulas, speciously applied to the matter at hand.

The great harm in all of this has been a loss of confidence in the fundamental worth of the seemingly irrelevant pursuit of knowledge, wisdom, and even pleasure for their own sake. Though an edge of defensiveness crept into his voice, Leavis was right to say that respectful but not uncritical reflection upon great literary works was worthwhile. Such activity deepens and complicates the individual, even as it expands the individual’s appreciation of the larger world of other people, society, politics, the natural and physical order. The pleasurable pursuit of knowledge and wisdom is, in great part, an extended meditation on the relations between self and world, subjectivity and objectivity, and on the question of where truth resides. It is, of all pursuits, the most relevant for human lives, and to the extent that the academy chooses to stand apart from it, academic writing withers and dies. ❏




The Struggle for the
Soul of the Sentence

by Sven Birkerts

Ours is the great era of infotainment, of the much lamented migration away from serious reading. The communications revolution—everything from e-mail to the ubiquitous cell phone—has spawned what seems to many an impoverished, phrase-based paradigm. The sound byte, the instant message—with every year, increments of meaning and expression seem to shrink. One might naturally expect American fiction of the last quarter-century to reflect that contraction, and gifted young writers, the products of an accelerated culture of distraction, to map in their prose the rhythms and diction patterns of our times.

Instead, almost to a writer, a new generation of novelists and short-story writers are forging styles of notable complexity and of cultural, if not always psychological, nuance. Life as presented in fiction has never seemed more ramified, more mined with implication, more multiplex in possibility. This shocking reverse of expectation marks a major shift in the how and what of literary fiction in America. A pitched battle between ways of seeing and representing the world—what might be called a struggle over the soul of the sentence—has been fought for at least a half-century now, and skirmishes during the past two decades have brought a victory for complexity that few would have predicted.

To give this battle a crude first formulation, we are witnessing the later stages of a long warfare between what I think of as ascetic realism—a belief in the artistic and ethical primacy of the understated treatment of the here and now—and something we might call, for want of an official term, "maximalism," a tendency toward expansive, centrifugal narrative that aspires to embrace the complexity of contemporary life. If we go back a quarter-century, to the mid-1970s, we can see the polarity alive and well, represented, on the one hand, by Raymond Carver’s influential short-story collection Will You Please Be Quiet, Please? (1976) and, on the other, by Thomas Pynchon’s limit-busting novel Gravity’s Rainbow (1973).

In these works, the conflict between worldviews is revealed at the level of the sentence. The aesthetics of a Carver and a Pynchon could not be more different. Carver’s writing registers, by way of a harshly pruned-back affect, the injurious impact of the world on the susceptible psyche. Pynchon’s prose opens itself to the overwhelmingness of life, registering detail, exploring myriad connections (often in a playful manner), and communicating a sense of open-endedness that is always outrunning the perceptions of the moment.

Capturing a "minimalist" moment: Edward Hopper’s Summer Evening (1947).

At the subsentence—thematic—level, what we confront is the gulf between two visions of Americanness, one older and one of more recent vintage. The perspective with the longer lineage assumes a link between willed simplicity and virtue, and harks back to a mythos of rural and small-town beginnings that has been at the core of our popular culture from the start. The newer vision would mark the epochal changes brought on by the acceleration, interconnectedness, and radically expanded sense of context that are the products of late modernity. What Philip Rahv once described as the core split in our literature between "redskins" and "palefaces"—primitives and aesthetes, if you will—can now be seen as the split between the conserving and the liberating impulses. There are those who have a hard time facing the fact that our world has been refigured in the last decades by globalism and electronic communications, among other things, and those who are scrambling to make sense of the new situation.

For a long time I shared what I think of as the great populist prejudice. I had imbibed it in my schooling and in all the reading I’d done growing up in the 1950s and ’60s, in what might fairly be called the Age of Hemingway in American fiction. Our American genius, I was far from alone in believing, was at root an unpretentious directness, a humble, plainspoken, verb-and-noun relation to the primary conditions of life and the largely stoical codes that honor them. I mean, among other things, the "manly" restraint of excessive feeling, and a rejection of pretense and, with it, intellectual complexity. This credo had its iconic father and manner: Within the plank-and-nail sentences of Ernest Hemingway, the ethos had its most representative life. "The door of Henry’s lunch-room opened and two men came in": A standard of purity and realism was embodied in such prose.

This equating of the demotic with the essential American virtues did not originate with Hemingway, but it found its great midcentury expression in his work and his public presence. (Something of the same hierarchy could be said to have prevailed in poetry, with Robert Frost taking the Hemingway position, and possibly in the essay as well, where the chair belonged to E. B. White.) The plainspoken tradition had its mainly male line of succession. The spirit and the prose were passed along through writers such as Robert Stone, Andre Dubus, Richard Ford, and a number of others. But Raymond Carver was Hemingway’s primary heir.

Stylistically, he was a direct descendant, with his pared-down, understated prose idiom. Carver’s thematic interests, though, took more of a turn toward implied interiority. Where Hemingway was preoccupied with war and its lacerating effects on the manly self-conception, never mind the soul, Carver took on the loss and failure faced by individuals left behind by the general rush into modernity. His was the blue-collar lament, the cry of the new superfluous man. The downbeat poignancy of this passage from "They’re Not Your Husband" is vintage Carver:

Earl Ober was between jobs as a salesman. But Doreen, his wife, had gone to work nights as a waitress at a twenty-four-hour coffee shop at the edge of town. One night, when he was drinking, Earl decided to stop by the coffee shop and have something to eat. He wanted to see where Doreen worked, and he wanted to see if he could order something on the house.

We find a similar naturalistic bluntness in such writers as Stone, Dubus, Ford, Russell Banks, Tobias Wolff, and Geoffrey Wolff, to name a few. Yet all of them work more with an eye toward narrative development, and cannot be said to be Carver protégés in any sense. Carver’s influence is far more apparent in the work of the so-called minimalists, a group of mainly young writers, many of whom were published in the 1970s and ’80s by an influential editor at Alfred A. Knopf, Gordon Lish (who, as an editor at Esquire, had been instrumental in getting Carver’s early work published).

Minimalism took to more stylized extremes the idea of the understated utterance, though with more ironic inflection, and the belief that suggestion and implication were built through careful strategies of withholding. Minimalists likewise eschewed big themes, preferring to create uneasy portraits of American middle-class domesticity. But here we bump up against one aspect of the paradox that is at the root of this seeming face-off between approaches. For if the subject matter was, in this most reduced sense, realistic, the impetus of the mode was aesthetic: The prose of minimalist exemplars such as Amy Hempel, Mary Robison, Janet Kauffman, and others unmistakably reflects a highly craft-conscious sensibility. Every feature in this close-cropped scene from one of Hempel’s stories is bathed in hyperawareness: "Ten candles in a fish stick tell you it’s Gully’s birthday. The birthday girl is the center of attention; she squints into the popping flash cubes. The black cat seems to know every smooth pose there is."

>Sven Birkerts is the author of five books of essays. He will publish a memoir, My Sky Blue Trades, early next year. Copyright © 2001 by Sven Birkerts.

Hempel’s carefully posed affect is fairly representative. If the popular equation of minimalism with an antiornamental—therefore democratic/populist—approach ever really held up, it does so no longer. Indeed, if we look past the reflexive association of Hemingway’s clipped sentences with the plainspoken truth of things, we find a high degree of aestheticism there as well. Hemingway is as mannered, in his way, as James Joyce and Virginia Woolf are in theirs, as studied as Cézanne (whom he studied).

So it was hardly a surprise when gadfly essayist and novelist Tom Wolfe saw no realism to commend in minimalism in his hyperbolic blast "Stalking the Billion-Footed Beast: A Literary Manifesto for the New Social Novel," published in Harper’s in 1989. Pistols popping in all directions, Wolfe declared the landscape of American fiction blighted and plumped hard for the kind of reheated Balzacianism that his two best-selling novels, The Bonfire of the Vanities (1987) and A Man in Full (1998), could be said to represent. As Wolfe wrote in a much-quoted passage:

At this weak, pale, tabescent moment in the history of American literature, we need a battalion, a brigade, of Zolas to head out into this wild, bizarre, unpredictable, hog-stomping Baroque country of ours and reclaim it as literary property. Philip Roth was absolutely right. The imagination of the novelist is powerless before what he knows he’s going to read in tomorrow morning’s newspaper. But a generation of American writers has drawn precisely the wrong conclusion from that perfectly valid observation. The answer is not to leave the rude beast, the material, also known as the life around us, to the journalists but to do what journalists do, or are supposed to do, which is to wrestle the beast and bring it to terms.

Wolfe, though he growled and gnashed in his distinctively big-bad style, was hardly alone in his impatience with the evasions of minimalism and with the more self-consciously formalized metafictional experiments of writers such as Robert Coover, John Hawkes, and John Barth, in which the artifice of fiction becomes in some sense the subject. His essay helped to expose the limitations of American piety about the truth-telling power of plainspoken prose—and to reveal that the polarity between the ascetic realists and the mandarin maximalists was not what it seemed at all. For, in his high dudgeon, Wolfe also swept aside as hopeless aesthetes the "palefaces," whose elaborate sentences may, in fact, have been lassoing the "rude wild beast" in new and inventive ways that he failed to appreciate, wedded as he was to a 19th-century prose of enumerative specificity and linearity. He did not seem to see that the deeper nature of selfhood and social reality was itself changing, transforming our fundamental notions of connectedness, of subject and object, of consciousness, in a world less temporally and spatially fixed than ever before.

There are many ways to write the story of the gradual triumph of the maximalist approach. But a catalytic moment surely was the publication in 1973 of Pynchon’s Gravity’s Rainbow, the novel that ambitiously combined antic black comedy, a compellingly paranoid historical vision, and a sensibility saturated in the ethos of the then-counterculture. To be sure, that big book’s arrival was preceded by the publication in 1953 of Saul Bellow’s The Adventures of Augie March and in 1955 of William Gaddis’s The Recognitions. And in their very different ways, more elaborate stylists such as John Updike and John Cheever, along with Roth and Bellow, were also staking an ambitious claim to charting our turbulent social and spiritual landscape. Still, Pynchon’s novel remains, more than any other work, the ur-text for more contemporary makers of fiction; the book exerts its influence even on those who have never read it.

Pynchon’s opening sentence is, it’s true, arrestingly declarative: "A screaming comes across the sky." But before long, we are in the spawn bogs of the real, the essential, Pynchon sentences:

On a wooden pub sign daringly taken, one daylight raid, by a drunken Barley Gobbitch, across which still survives in intaglio the legend SNIPE AND SHAFT, Teddy Bloat is mincing bananas with a great isoceles knife, from beneath whose nervous blade Pirate with one hand shovels the blond mash into waffle batter resilient with fresh hens’ eggs, for which Osbie Feel has exchanged an equal number of golf balls, these being even rarer this winter than real eggs, other hand blending the fruit in, not overvigorously, with a wire whisk, whilst surly Osbie himself, sucking frequently at the half-pint milkbottle filled with VAT 69 and water, tends to the bananas in the skillet and broiler.

Gloriously elliptical, digressive, allowing his clauses to loosen and drift before drawing tight around noun and verb, Pynchon is, by design or not, making a revolutionary turn against the Hemingway mode. Keep in mind, too, that Pynchon was writing before the advent of our polymorphous electronic culture. His contribution—one of many—was to patent a style, an approach that could later be adapted to rendering the strange interdependencies of a world liberated from its provincial boundedness. He modeled a swoop of mind, a way of combining precision with puckishness, a kind of rolling agglomeration that would prove formative for the generation now coming into its own.

What is happening can be seen as a kind of gradual ice-heave action against the seemingly dominant presence of the plainspoken and simplified. Slowly they advance, the proponents of the richer and headier view, each one different in form and particular expression, sharing only the impulse to break the confining box, the austere stoicist ethos, and to get hold of—annex—the sense of a burgeoning world. In the footsteps of writers such as Updike, Roth, and Bellow, with their complex intelligences, we now remark the ascendancy of William Gass, Don DeLillo, Cormac McCarthy, Cynthia Ozick, Harold Brodkey, Annie Proulx, Toni Morrison, Paul West, and Maureen Howard, as well as short-story acrobats Barry Hannah, Denis Johnson, and Thom Jones. There is obviously a world of difference between the verbally impacted sentences of a Gass and the almost mythical involutions of Morrison, but at root one senses a common expansive will: to embrace, to mime, to unfold in the cadence of a sentence the complexities of life as lived. Far from a betrayal of the real, the elaboration of stylistic surface is often a more faithful transcription than the willfully reduced expression.

Sally Said (1994), by Jane Calvin. Courtesy of the artist and PaintingsDIRECT (http://www.PaintingsDIRECT.com).

From David Foster Wallace (Infinite Jest) to Richard Powers (Galatea 2.2, Plowing the Dark) to Donald Antrim (The Verificationist) to Helen DeWitt (The Last Samurai) to Rick Moody (Purple America) to Colson Whitehead (John Henry Days) to Jonathan Franzen (The Corrections), and on and on, the drive is not just to structural layering and counterpoint, but to the building of sentences that articulate, at every point, implicitly, the fact that life and the consciousness that greets it are deeply involved and involving. Consider the tour de force convolutions of Wallace:

The student engineer, a pre-doctoral transuranial metallurgist working off massive G.S.L. debt, locks the levels and fills out the left side of his time sheet and ascends with his book back through a treillage of inter-neural stairways with semitic ideograms and developer smell and past snack bar and billiard hall and modem-banks and extensive student counseling offices around the rostral lamina, all the little-used many-staired neuroform way up to the artery-red fire door of the Union’s rooftop, leaving Madam Psychosis, as is S.O.P., alone with her show and screen in the shadowless chill.

We might marvel at, and also feel ourselves numbed by, the detailed density, the terminological fetishism, the "neuroform" intricacy of consciousness in descriptive motion. We might also look at this tweezer-extracted bit from Powers’s densely woven novel Galatea 2.2:

The web was a neighborhood more efficiently lonely than the one it replaced. Its solitude was bigger and faster. When relentless intelligence finally completed its program, when the terminal drop box brought the last barefoot, abused child on line and everyone could at last say anything instantly to everyone else in existence, it seemed to me that we’d still have nothing to say to each other and many more ways not to say it.

Not only is the prose elegant and clear, but it captures in its cadences, in its deferral of predicate, something of the phenomenon it reflects upon. There is here a palpable sense of language venturing a stretch, challenging our idea of sufficiency, opening itself to take in more reality.

Granted, these brief samples are from two of our more cerebral and experimental young writers, but I could very likely make my point by looking at the prose of better-known, or less overtly heady, writers—DeLillo, Proulx, Ozick, Howard, Michael Chabon, Michael Cunningham, Brad Leithauser, Steven Millhauser, Alice Munro, and Michael Ondaatje. All could be said to share a belief in linguistic potency, in language’s achieving its highest and most essential aims through enfolding, not through suggesting by omission.

Maybe this prospering of the maximal does not represent a paradox, or contradiction, after all. To look at our new culture solely in terms of the forms of electronic communications—the byte-speak mode—is to ignore the impact of the system itself. The net effect (pun intended) of that system is to make the world hugely more complex, and, perhaps less obviously, to force us to retool our reflexes, thereby allowing us to tolerate, possibly even requiring us to seek out, ever greater levels of sensory input. We do not live as our parents did. We do not live even as we lived 10 years ago. We might have to accept that we are changing, evolving new capacities that permit us to discern patterns and harmonies—rather than mere noise—in the much-expanded orchestration of reality.

This literary transformation has been working itself out from two directions. On the one side, contemporary writing, in prose style and subject matter, reflects the excitements and anxieties of the arrival of cyber-culture in all its permutations. At the same time—on the other side—we are witnessing the displacement of older themes and approaches. One generation of novelists after another cannot keep finding inspiration in, say, the confusions wrought by the sexual revolution (Updike, Mailer, Oates, Roth), or in the tensions and ambitions bound up in Jewish assimilation (Bellow, Roth, Malamud)—though younger writers, such as Chang-rae Lee in A Gesture Life or Jhumpa Lahiri in Interpreter of Maladies, have found new twists and turns to chart in the assimilation struggles of other cultures. The simple fact is that changing realities do solicit the artist; they declare new needs and imperatives.

And that is the difference, the larger shift I’m talking about. The expansive thrust is not in itself a new thing. The quest to capture complexity and nuance has been part of writerly—indeed, artistic—sensibility since the time of Herodotus. Even in America, where an anti-intellectual suspicion of overly intricate subtlety took root early on (one byproduct, perhaps, of our frontier origins), many of the literary titans of the last century were expansive to the highest degree. What is new is a sense, not of arrival exactly, but of breaking through—in prose styles that signal an ascension to a new plane of vantage. These writers are pushing toward a vision based on the idea of radical social and psychological shifts in our ways of living and interacting. I see this as evidence of movement—I would even use that freighted word progress. It belies the tired postmodernist assumption that everything has been done and that there is no place left to go.

The diverse works of the young maximalists can be seen as the first reflection of this larger transformation in consciousness. They help mark our steady movement into global awareness, into the recognition that we are now and henceforth living in a world connected by a grid of lightning impulses. This world will never get simpler. Perceptions, communications, social relations, the meaning of time and distance, the very materiality of things—nothing is as it was. More than ever before, our living needs to be mirrored and interpreted, vigorously and discerningly. The struggle for the soul of the sentence is, at the same time, a struggle for the mastery of subject matter, which is nothing less than a world that threatens at every moment to outstrip us. ❏





The Professors
and Bush v. Gore


by Peter Berkowitz and Benjamin Wittes

"You cannot raise the standard against oppression, or leap into the breach to relieve injustice, and still keep an open mind to every disconcerting fact, or an open ear to the cold voice of doubt," warned

the great American jurist Learned Hand (1872–1961). "I am satisfied that a scholar who tries to combine these parts sells his birthright for a mess of pottage; that, when the final count is made, it will be found that the impairment of his powers far outweighs any possible contribution to the causes he has espoused."

One need not share Learned Hand’s drastic view to appreciate that political engagement by scholars runs the risk of betraying intellectual integrity. Scholars have a vital role in democratic debate, but to perform it properly they must exercise a certain restraint. Americans today confront a range of complex public-affairs issues—from the economic consequences of law and government policies to the practical effects and moral implications of cloning and stem cell research—that can be understood only with the help of expert knowledge. In trying to come to reasoned and responsible judgments about such matters, citizens depend upon scholars to marshal relevant facts and figures, to identify the more and less likely consequences of law and public policy, and to clarify the moral principles at stake. But deference to expert knowledge depends in part on public confidence that scholars will honor their obligation to separate the pursuit of truth from political advocacy and personal advantage. When scientists wade into the public debate over stem cell research, for example, we expect, above all, that they will give a fair and accurate account of the facts. This is not to say that scholars cannot express opinions. It means rather that their first obligation is to speak the truth. Scholars are paid not to rush to judgment. If one scholar violates this obligation, the authority of the rest is compromised, and the public is invited to view all scholars as no different from the seasoned spinners and polished operators and purveyors of the party line who crowd our public life.

Restraint may be hardest when justice is at stake. For legal scholars, the risk is especially acute when they weigh in on a controversial case while they are serving as consultants to a party to the controversy, or take an unyielding stand before partisan fires have cooled. In recent years, law professors have assumed a higher profile in public debates, and scholarly restraint has steadily declined. No longer confined to the pages of professional journals, law professors now appear regularly as pundits on TV and radio shows. Their new prominence dates at least to 1987, when, amid an uprising in the legal academy, the testimony of eminent law professors in the bitter Senate confirmation hearings of Supreme Court nominee Robert Bork was nationally televised. A decade later, legal academics found a new stage with the O. J. Simpson trial, and they really hit their stride with the Kenneth Starr investigation and the impeachment and Senate trial of President Bill Clinton. Few of them performed admirably during those public spectacles. But this past winter, with the Florida election controversy, members of the legal academy, in their role as public intellectuals, reached a distressing new low in the exercise of scholarly restraint.

Tempers were running high throughout the nation last December when police were forced to separate angry supporters of George W. Bush and Al Gore outside the U.S. Supreme Court.

Although the war over Florida’s 25 electoral votes was waged on many fronts, the decisive battles occurred in courts of law. Following the blunders by the television networks in calling the Florida vote on the evening of November 7, 2000, and up through the U.S. Supreme Court’s dramatic intervention five Tuesdays later, on December 12, the Bush camp and the Gore camp, an army of pundits, Florida lawyers, and an ample supply of law professors from around the country struggled to make sense of the legal wrangling in Florida. There were disputes about the legality of the notoriously confusing butterfly ballot in Palm Beach County, the legality of conducting manual recounts in some counties and not others, the legality of varying standards for interpreting chads in manual recounts (dimpled chads, dangling chads, chads through which light passes), the legality of excluding recounts finished after the statutorily imposed deadline, the legality of improperly completed overseas absentee ballots in Martin and Seminole counties, the legality of excluding recounts completed after the deadline imposed by the Florida Supreme Court, and a host of other questions of law. These disputes culminated in two controversial Florida Supreme Court decisions, which were celebrated by Democrats as vindications of the will of the people and denounced by Republicans as acts of judicial usurpation.

Partisan rivalry quickly turned to bitterness and anger in Florida, and people on both sides passed beyond the limits of political civility. There is surely something to be said for controversy in a democracy that worries about the fading political engagement of its citizens. Yet when the case passed to the highest court in the land, citizens had every right to expect that at least one group would maintain a degree of calm and dispassion: the scholars who serve as our national interpreters of the law. But Bush v. Gore provoked from the legal academy a response that was without precedent. Never before had a decision of the Supreme Court been subjected by large numbers of law professors to such swift, intense, and uncompromising denunciation in the popular press as greeted the December 12, 2000, ruling that effectively sealed Governor George W. Bush’s victory in the presidential election. No doubt the professors’ fury, which has yet to abate, tells us something about Bush v. Gore. It also tells us something important about the professors’ understanding, or rather misunderstanding, of the public responsibilities of intellectuals.

Many aspects of the Court’s 5–4 decision in Bush v. Gore and the Florida election controversy that it brought to an end should disturb the democratic conscience. Despite a certain skepticism about the use of the equal protection clause of the 14th Amendment, and a pronounced aversion to constitutional innovation, the U.S. Supreme Court’s five conservative justices expanded equal protection doctrine and offered a novel reading of Article II, section 1, of the Constitution, which provides that each state shall appoint presidential electors "in such manner as the Legislature thereof may direct." Even if one allows that the recount ordered by the Florida Supreme Court violated equal protection guarantees (as seven of nine justices of the U.S. Supreme Court and three of seven judges of the Florida Supreme Court said), the Court’s justification for halting the recount rather than directing the Florida court to continue it on the basis of constitutionally appropriate standards (as the two dissenting justices on the U.S. Supreme Court who acknowledged equal protection problems with the Florida recount wished) has the appearance of a technical legal trap being sprung. The evidence indicates that a disproportionate number of African American voters in Florida saw their votes spoiled. There is good reason to believe that on November 7, 2000, a majority of Florida voters cast their ballots intending to vote for Vice President Al Gore. All nine justices of the U.S. Supreme Court, moreover, faced a conflict of interest in deciding Bush v. Gore: The new president would very likely have the opportunity to nominate their new colleagues (or their successors). In addition, the Florida election controversy raised divisive political questions that the Court might have been wise to leave for resolution to Florida and, ultimately, to Congress.

>Peter Berkowitz teaches at George Mason University School of Law and is a contributing editor of the New Republic. His book Virtue and the Making of Modern Liberalism was recently published in paperback. Benjamin Wittes is a member of the Washington Post’s editorial page staff. His book Starr: A Reassessment is forthcoming from Yale University Press. Copyright © 2001 by Peter Berkowitz and Benjamin Wittes.

The foregoing are serious matters, and they demand careful public consideration. The problem is that much that has been written about Bush v. Gore by law professors in their role as public intellectuals has not advanced that kind of careful consideration. Instead, it has muddied the waters and stirred more partisan ire. Far from counteracting the public’s tendency to collapse the legal dimension of the controversy into the political, many scholars have encouraged it. The two dimensions can—and must—be separated.

The overarching political question was whether the electoral system, in Florida and in the nation, reflected the will of the people. The fundamental legal question was whether the Florida Supreme Court’s two critical decisions, on November 21 and December 8, complied with the requirements of American constitutional law. (In the first case, in a lawsuit brought by Vice President Gore, the Florida Supreme Court overruled a lower Florida court and extended by 12 days the deadline for protesting election returns and for officially certifying the results; on December 8, again in a lawsuit brought by the vice president, it overruled a lower Florida court and ordered as part of Gore’s contest of the official certification a statewide manual recount of undervotes.) The U.S. Supreme Court was called upon to resolve only the legal dispute—the constitutionality of the conduct of the Florida Supreme Court.

To listen to the nation’s preeminent constitutional theorists tell it, Bush v. Gore was an obvious outrage—nothing less than a politically driven repudiation of democracy and the rule of law. In the months immediately following the decision, Bruce Ackerman, a professor of law and political science at Yale University and one of the nation’s most prominent legal intellectuals, spoke for a substantial majority of law professors when he issued the brutal judgment—in agreement, he plausibly argued, with Justice John Paul Stevens’s dissent—that the majority opinion was "a blatantly partisan act, without any legal basis whatsoever." Leading conservative professors of constitutional law were not much heard from, and they were comparatively measured in their statements: By and large they found in Bush v. Gore a reasonable though flawed ruling. Two days after the decision, University of Utah law professor Michael McConnell argued in the Wall Street Journal that the Court was correct to conclude that the "manual recount, as ordered by the Supreme Court of Florida, would be unconstitutional," but he found the "question of remedy" to be "the troubling aspect of the decision." Conservatives, however, form only a small fraction of the legal professoriate. The great majority of their fellow law professors who spoke out on Bush v. Gore followed Ackerman and other leaders in pouring scorn on it:



  • Vanderbilt University law professor Suzanna Sherry maintained in the New York Times that "there is really very little way to reconcile this opinion other than that they wanted Bush to win."


  • Harvard University law professor Randall Kennedy proclaimed in the American Prospect that Bush v. Gore was a "hypocritical mishmash of ideas," and that "the Court majority acted in bad faith and with partisan prejudice."


  • University of Texas law professor Sanford Levinson asserted in the Nation that "Bush v. Gore is all too easily explainable as the decision by five conservative Republicans—at least two of whom are eager to retire and be replaced by Republicans nominated by a Republican president—to assure the triumph of a fellow Republican who might not become president if Florida were left to its own legal process."


  • American University law professor Jamin Raskin opened an article in the Washington Monthly by describing the case as "quite demonstrably the worst Supreme Court decision in history," and proceeded to compare it unfavorably with the notorious Dred Scott decision.


  • A total of 554 law professors from 120 American law schools placed a full-page ad in the New York Times on January 13, 2001, declaring that the justices had acted as "political proponents for candidate Bush, not as judges. . . . By taking power from the voters, the Supreme Court has tarnished its own legitimacy."


  • Harvard University law professor Alan Dershowitz asserted in Supreme Injustice: How the High Court Hijacked Election 2000 that "the decision in the Florida election case may be ranked as the single most corrupt decision in Supreme Court history, because it is the only one that I know of where the majority justices decided as they did because of the personal identity and political affiliation of the litigants. This was cheating, and a violation of the judicial oath."


The gravamen of the complaint was that the five conservatives on the Court—Chief Justice William Rehnquist, and Justices Sandra Day O’Connor, Antonin Scalia, Anthony Kennedy, and Clarence Thomas—hypocritically threw overboard their long-held and repeatedly affirmed judicial philosophy of restraint, deference to the states, and a preference that the political process, rather than the courts, resolve disputes. In a breathtakingly important case, one in which that philosophy would have guided them to a correct result, they betrayed their principles. They energetically extended the equal protection clause of the 14th Amendment, they failed to defer to the Florida Supreme Court’s interpretation of Florida law, and they aggressively intervened in the political process before it had a chance to play itself out. According to the Court’s accusers, the majority’s rank partisan passion was the only explanation for this egregious betrayal. And the damage, they contended, would be considerable: Bush v. Gore would undermine the legitimacy of the Bush presidency—and of the Court itself.

    If these charges are true, then Bush v. Gore deserves the opprobrium that law professors have showered upon it. Yet the scholarly critics generally seemed to regard the truth of their assertions as too obvious to require sustained evidence or argument, if they considered evidence or argument necessary at all. In fact, the careful study they failed to carry out before announcing their verdict shows that not a single one of their charges is obviously true, and that all, quite possibly, are false.

    We do not mean to pass judgment on the ultimate correctness of the Court’s decision. The case, which is complicated and raises a variety of multilayered questions of fact and law and politics, will be debated for years to come. Indeed, our aim is to defend the case’s difficulty against those scholars who, sadly, insist that there is virtually nothing to understand about Bush v. Gore that cannot be summed up with the term partisanship. The scholars’ hasty accusations of gross politicking may apply with more obvious justice to the accusers themselves than to the Court majority whom they convict.

    Recurring defects in the legal academy’s initial reaction to Bush v. Gore can be seen in the public pronouncements of three of its most eminent constitutional theorists: Ackerman, Cass Sunstein, and Ronald Dworkin.

    Even those scholars whose public utterances were relatively responsible could be found making flamboyant assertions supported only by their authority. In the Chronicle of Higher Education (Jan. 5, 2001), for example, University of Chicago law professor Sunstein declared that future historians would conclude that the Court had "discredited itself" with its "illegitimate, unprincipled, and undemocratic decision." We do not know what factors caused Sunstein to come to this harsh conclusion, because in his brief article he provided no arguments to support it. Nor did Sunstein mention that only three weeks earlier he had taken a much more measured view. On December 13, the day after the case was decided, Sunstein told ABC News reporter Jackie Judd that the opinion "was a stabilizing decision that restored order to a very chaotic situation." On the same day on National Public Radio, Sunstein observed: "The fact that five of them [the justices who signed the majority opinion] reached out for a new doctrine over four dissenting votes to stop counting—it’s not partisan, but it’s troublesome." While he did not "expect the Court to intervene so aggressively," Sunstein allowed on NPR that its decision may have provided "the simplest way for the constitutional system to get out of this. And it’s possible it’s the least bad way. The other ways maybe were more legitimate legally but maybe worse in terms of more chaotic." Many months later, in the University of Chicago Law Review, Sunstein attempted to synthesize these two seemingly irreconcilable views. His more detailed analysis of the case, however, falls far short of supporting the inflammatory language he used while the controversy was still hot.

A more troubling characteristic of the assaults on the Court was the tendency to misstate matters of fact and law. In the New York Review of Books (Jan. 11, 2001), New York University law professor Dworkin offered a high-minded warning against "reckless accusations" of partisanship: "It is, after all, inherently implausible that any—let alone all—of them [the five-member majority] would stain the Court’s reputation for such a sordid reason, and respect for the Court requires that we search for a different and more creditable explanation of their action." In "sorrow," however, Dworkin concluded that the "implausible" charge was correct—because "the legal case they offered for crucial aspects of their decisions was exceptionally weak." Yet in his essay, Dworkin failed even to restate accurately the legal case the majority offered, and without meeting that minimal requirement he never fairly engaged the majority’s reasoning.

    The defects in Dworkin’s approach begin with a tendentious characterization of events:

The conservatives stopped the democratic process in its tracks, with thousands of votes yet uncounted, first by ordering an unjustified stay of the statewide recount of the Florida vote that was already in progress, and then declaring, in one of the least persuasive Supreme Court opinions that I have ever read, that there was no time left for the recount to continue.

    Whether the U.S. Supreme Court "stopped the democratic process in its tracks" depends in part on whether the two Florida Supreme Court rulings— of November 21 and December 8—that were guiding the process in Florida were lawful and democratic. A scholar might responsibly criticize the Court by showing that the two rulings were indeed lawful and democratic. But Dworkin examined neither of them.

    If you believe—as three dissenting members of the Florida Supreme Court argued in that body’s 4–3 decision on December 8—that the majority’s ruling departed substantially from the legislative scheme in place on November 7 for resolving election disputes, created serious equal protection problems, and provided a remedy that was inherently unworkable and hence unlawful, the U.S. Supreme Court’s action begins to look very different. One might reasonably conclude that, far from having "stopped the democratic process in its tracks," the Court rescued it.

    Dworkin’s contention that the recount was stopped with "thousands of votes still uncounted" obscures the fact that Florida’s ballots were actually counted twice, by machines, as required by Florida law in close elections (where the margin of victory is 0.5 percent or less). At the same time, his anodyne reference to "the statewide recount of the Florida vote" glosses over the dubious parameters of the manual recount actually ordered by the Florida Supreme Court. It was not a full manual recount of the presidential vote. Nor was it a full manual recount of undamaged ballots that failed to yield a valid, machine-readable vote for president, as would appear to have been required by the Florida Supreme Court’s own principle that all votes should be counted in pursuit of a "clear indication of the intent of the voter."

    Rather, the Florida court ordered a manual recount of a subset of the so-called nonvotes, the undervotes, which are ballots (estimated to number about 60,000) with no machine-readable vote for president. Despite the objections raised by Florida chief justice Charles T. Wells in his dissent, indeed without explanation, the majority excluded from the recount overvotes, an entire class of undamaged ballots (estimated to number about 110,000) that were invalidated because machines detected multiple votes for president. And yet, like the undervotes, they too may have contained (and we now know did contain) discernible choices.

    Dworkin also misstates the majority’s holding, though he claims it was "quite simple." The U.S. Supreme Court, Dworkin incorrectly argues, held that the Florida recount violated equal protection only because it failed to establish a uniform and specific standard for determining in the recount whether a ballot revealed a voter’s clear intention. In fact, the Court identified four discrete features of the manual recount ordered by the Florida Supreme Court that raised equal protection problems. In addition to the one Dworkin mentions, the Court singled out problems with the arbitrary exclusion of overvotes, the inclusion in the results of an uncompleted recount in Miami-Dade County, and the use of untrained and unsupervised personnel to conduct the statewide recount.

    Having failed to mention three of the four problems that, taken together, the Supreme Court held, violated the fundamental right to vote protected by the equal protection clause of the 14th Amendment, Dworkin never reached the central question: whether, as the majority concluded, the Florida recount in its various features violated the principle articulated in Reynolds v. Sims (1964) that "the right of suffrage can be denied by a debasement or dilution of the weight of a citizen’s vote just as effectively as by wholly prohibiting the free exercise of the franchise."

    Though in the end, and for reasons that are not altogether clear, Dworkin allows that the Court’s equal protection holding was "defensible," he insists that the controversial remedy, which he also misstates, was not. In Dworkin’s understanding, the U.S. Supreme Court halted the Florida recount by adopting a "bizarre interpretation" of the intention of the Florida legislature expressed in the state’s election law. The question concerned the state’s approach to the December 12 federal "safe-harbor" deadline (Title III, section 5, of the U.S. Code), which provides that in counting electoral votes, Congress will not challenge presidential electors if states appoint them by the safe-harbor date and on the basis of laws in place before the election. As Dworkin correctly notes, adherence to the federal safe-harbor law is not mandatory—if Florida wished to put its electoral votes at risk by failing to meet the December 12 deadline, it was free under federal law to do so. But, according to Dworkin, the Court read into the Florida statutory scheme a legal obligation to meet the "safe-harbor" deadline and then, "in violation of the most basic principles of constitutional law," imposed that interpretation of Florida law on the Florida Supreme Court.

    But the majority argued that in addressing the question of remedy it was giving effect to the Florida Supreme Court’s interpretation of Florida law:

    Because the Florida Supreme Court has said that the Florida Legislature intended to obtain the safe-harbor benefits of 3 U.S.C. §5, Justice Breyer’s proposed remedy—remanding to the Florida Supreme Court for its ordering of a constitutionally proper contest until December 18—contemplates action in violation of the Florida election code, and hence could not be part of an "appropriate" order authorized by Fla. Stat. §102.168(8) (2000).

    In other words, the majority claimed that the Florida Supreme Court itself had interpreted Florida law as imposing the December 12 deadline. Indeed, the Florida Supreme Court appears to affirm that deadline as many as four times in its December 11 opinion (which it issued in direct response to the U.S. Supreme Court’s request on December 4 for clarification of the grounds for the Florida Court’s November 21 decision). But Dworkin never examines the December 11 opinion.

    In fashioning its remedy, the majority plausibly claimed to rely upon and defer to the Florida Supreme Court’s interpretation of Florida law. In fact, it was the remedy contemplated by the dissents of Justices Stephen Breyer and David Souter, and endorsed by Dworkin himself, that very likely would have involved the Court in repudiating the Florida Supreme Court’s reading of Florida law. To be sure, even notable defenders of the U.S. Supreme Court’s opinion regard the remedy as its weakest link, but to be fairly criticized it must first be correctly understood.

    Perhaps the most serious infirmity in the law professors’ response to Bush v. Gore was the tendency, under the guise of legal analysis, to abandon legal analysis. In contrast to Sunstein and Dworkin, Ackerman did not so much as pause in his attack to caution against premature accusations of partisanship. His verdict in the American Prospect (Feb. 12, 2001) was uncompromising: "Succumbing to the crudest partisan temptations, the Republicans managed to get their man into the White House, but at grave cost to the nation’s ideals and institutions. It will take a decade or more to measure the long-term damage of this electoral crisis to the Presidency and the Supreme Court—but especially in the case of the Court, Bush v. Gore will cast a very long shadow." As Ackerman explained in an article that appeared almost simultaneously in the London Review of Books (Feb. 8, 2001), the trouble with the 2000 election began with "the gap between the living and written Constitutions." Under what Ackerman derisively calls "the written Constitution," the president is selected by the Electoral College, which gives smaller states disproportionate representation. But "the living Constitution"—which is nowhere written down or codified—rejects that unjust formula, having "created a system in which Americans think and act as if they choose their President directly." Because Gore won the popular vote, "George W. Bush’s victory is entirely a product of the federalist bias inherited from 1787." For Ackerman, Bush v. Gore was part of the vast right-wing conspiracy, and, he declared in the American Prospect, it called for drastic countermeasures: "When sitting [Supreme Court] justices retire or die, the Senate should refuse to confirm any nominations offered up by President Bush."

    Ackerman is far clearer regarding what should be done about the Court’s perfidy than he is about what exactly was wrong with the justices’ work. Whereas in the American Prospect he accuses the Court of acting lawlessly, in the London Review of Books he accuses it of foolishly applying the wrong law—the law that actually exists (the written Constitution), rather than the one he believes time has made more relevant (the living Constitution). At other times in the same article, Ackerman argues only halfheartedly that the Court incorrectly applied the "written Constitution." He concedes in the London Review of Books that there were strong pragmatic reasons for the Court to get involved: "If one is haunted by the specter of acute crisis, one can view the justices’ intervention more charitably. However much the Court may have hurt itself, did it not save the larger Constitutional structure from greater damage? Perhaps." He even goes so far as to acknowledge, without actually engaging the legal arguments of the majority or of the dissenters, that the Court’s central holding, which he misstates much as does Dworkin, was correct: He says that he does "not challenge [the Court’s] doctrinal conclusion."

    In the end, Ackerman’s problem is not that the Court intervened, but that it did so on Bush’s behalf rather than Gore’s: "The more democratic solution would have been...to stop the Bush brothers from creating Constitutional chaos by submitting a second slate of legislatively selected electors. The court could have taken care of all the serious difficulties by enjoining [Florida governor] Jeb Bush not to send this slate to Congress."

    Leave aside the considerable legal difficulties in Ackerman’s call for the Court to issue an injunction that was not requested by any party to the litigation against other persons and entities that were also not parties to the litigation. The larger problem is that he would have had the Court issue orders to elected state officials based on a nonexistent document (the living Constitution), to whose authority neither Bush nor Gore ever appealed, to protect a recount that he admits violated the law the justices were sworn to uphold. What, one wonders, is democratic or lawful about that?

    Of course, it is possible that while the critics failed to state accurately the arguments in Bush v. Gore, their basic charge—that the Supreme Court undermined its legitimacy by riding roughshod over its own principles to reach a purely partisan conclusion—is still correct. Yet even a brief examination of those principles—an examination that none of the major critics offered the public in conjunction with their harsh condemnations—and reflection on the critics’ premises and predictions reveal that the law professors’ prima facie case against the decision is at best a caricature.

    Consider first the gross oversimplification in the charge that Bush v. Gore violated the majority’s core jurisprudential commitments. The Supreme Court’s conservatives have indeed shown a commitment to ruling generally on the basis of explicit textual statements and well-settled precedents rather than abstract values thought to be implicit in the constitutional text and previous opinions. These conservatives have also displayed an instinct to avoid unnecessarily interfering in state court matters, and a readiness to recognize zones of state authority in which Congress is forbidden to tread. The solicitude for state power is particularly visible in habeas corpus litigation, where the Court has been increasingly reluctant to allow federal courts to second-guess state convictions. It can also be seen in the Court’s insistence that Congress’s power to regulate interstate commerce has limits, and in its expansive interpretation of state immunity against suits conferred upon state governments by the 11th Amendment. However, the majority’s federalism is scarcely recognizable in the crude version of it that law professors constantly invoke against Bush v. Gore.

    In no sense does the modern conservative vision of federalism contend that state action—including state court action—is not subject to federal court review for compliance with the federal Constitution. In fact, the conservative justices often vote to reverse state supreme court holdings on grounds that they offend federal constitutional imperatives. Only six months before Bush v. Gore, the same U.S. Supreme Court majority reversed the New Jersey Supreme Court’s decision that the Boy Scouts could not discriminate on the basis of sexual orientation.

    The state court had held that the Boy Scouts were a public accommodation within the meaning of a state anti-discrimination law; the Supreme Court said that the law, so interpreted, violated the Boy Scouts’ First Amendment right of expressive association. The parallel with Bush v. Gore is exact: The Supreme Court invalidated a state court interpretation of state law on the ground that what state law required offended the federal Constitution.

    Nor is it true that the Court’s conservatives were uniformly hostile to applying the equal protection clause to strike down state actions before Bush v. Gore. In a series of voting rights cases beginning with the 1993 decision in Shaw v. Reno, the same five justices relied on the equal protection clause to strike down legislative districting schemes motivated primarily by racial considerations. The conservative justices have also used the equal protection clause to rein in affirmative action programs. To be sure, the conservative interpretation of this clause is different from the liberal one, and in critical respects it is less expansive. It still serves, however, for the conservatives as a constraint on state action, and it is by no means obviously inconsistent with the holding the Court majority issued in Bush v. Gore.

    In addition, Chief Justice Rehnquist’s concurring opinion, which argued that the Florida court changed the state’s election laws in violation of Article II, section 1 of the Constitution, has been criticized as hypocritical. Conservatives, the criticism goes, profess to respect state court holdings on state law, yet in this instance the chief justice—and Justices Scalia and Thomas, who joined him—dissected the Florida court’s interpretation of Florida’s election statutes. Again, however, conservatives, and certainly the Court’s three most conservative justices, do not argue that the deference owed to state courts on matters of state law entitles states to violate the federal Constitution. From the conservatives’ point of view, Article II, section 1 of the Constitution, which declares that a state shall appoint presidential electors "in such manner as the legislature thereof may direct," provides an explicit textual obligation on the part of the state courts to interpret—rather than rewrite or disregard—state law concerning presidential elections.



    The willingness of the conservatives to review state supreme court interpretations of state law is particularly evident in cases involving the takings clause of the Fifth Amendment, which forbids government seizures of private property without just compensation. In 1998, for example, the Court ruled that interest on clients’ money held by their lawyers constituted "private property" for purposes of the takings clause. This contradicted the view of Texas property law taken by the Texas Supreme Court, which had promulgated a rule under which interest from trust accounts was used to pay for counsel for indigents. In another case, the Court said it reserved the right to examine the "background principles of nuisance and property law" under which a state supreme court determined that the state can restrain uses of private property without compensating property owners. In one case, Justices Scalia and O’Connor even dissented from a denial of certiorari on grounds that the Court should not be too deferential to state court interpretations of state law in takings matters. "As a general matter," Justice Scalia wrote, "the Constitution leaves the law of real property to the States. But just as a State may not deny rights protected under the Federal Constitution through pretextual procedural rulings . . . neither may it do so by invoking nonexistent rules of state substantive law." The opinion in Bush v. Gore is based on the same principle: While the Court owes great deference to the Florida Supreme Court’s view of Florida law, that deference ends where federal law requires the Court to ensure that state supreme courts have reasonably interpreted state law.

    The larger point is not that the majority opinion and concurrence in Bush v. Gore were perfectly consistent with the conservatives’ judicial philosophy. Whether they were is debatable. As we have noted, there is certainly something unexpected in the majority’s willingness to expand equal protection doctrine and in the concurring justices’ novel Article II argument. But noting those oddities, and appreciating the novel circumstances in which they arose, should be the start of the discussion, not the end of it.

    Consider next the accusation that in Bush v. Gore the conservative majority was driven by a self-interested political motive: A conservative president would appoint like-minded jurists to the Supreme Court. The critics’ failure to properly engage the Court’s reasoning suggests that partisan corruption on the justices’ part was not the scholars’ sad conclusion, as they claim, but rather their operative premise from the beginning. But it is a dogmatic and dangerous premise, especially for intellectuals engaged in shaping public opinion. For one thing, it obviates the need for careful evaluation of legal arguments, converting them, before examination, from reasons to be weighed and considered into rationalizations to be deflected and discarded. And the premise is easily turned against its user. It is not difficult to identify potent partisan interests driving the scholarly critics of Bush v. Gore. Many were stalwart supporters of the Clinton administration, and many keenly favored Gore for president. Were Gore appointing federal judges, many would have significantly improved their chances of placing their students in prestigious judicial clerkships, as well as of disseminating their constitutional theories throughout the judiciary.

    Consider finally the prediction that Bush v. Gore would gravely damage President Bush’s and the Court’s own legitimacy. That claim is subject to empirical testing. And the tests prove it false—that is, if legitimacy is regarded as a function of public opinion. By April 2001, after his first 100 days in office, President Bush enjoyed a 63 percent overall approval rating in a Washington Post-ABC News poll. In response to the question "Do you consider Bush to have been legitimately elected as president, or not?" fully 62 percent answered affirmatively. That was actually a small increase over the 55 percent who regarded Bush’s election as legitimate in the immediate aftermath of the Court’s decision. Bush’s popularity will wax and wane like any other president’s, but he does not seem to have legitimacy problems.

    Nor has the Court itself fared badly in the public’s eye. The Pew Center for the People and the Press has been measuring the Court’s approval rating since 1987. In that time, the rating has fluctuated from a low of 65 percent in 1990 to a high of 80 percent in 1994. In January 2001, the Court’s favorability rating stood at 68 percent. Three months later, it stood at 72 percent. More interestingly, the Court was viewed favorably by 67 percent of Democrats.

    The continued high opinion of the Supreme Court is consistent with other surveys that straddle the date of the Court’s action. The Gallup Organization, for example, asked people immediately after the decision how much confidence they had in the Court. Forty-nine percent of Americans had either "a great deal" or "quite a lot" of confidence, up slightly from the 47 percent who expressed such confidence the previous June. Both the Pew and Gallup polls suggest that the partisan composition of the support changed somewhat following the Court’s action, with Democratic confidence declining and Republican increasing. That shift, however, does not constitute a national legitimacy crisis, any more than conservative disaffection with the Warren Court did during the 1960s. The Court has enjoyed a remarkably stable level of public confidence and trust over a long period of time.

    The academics worrying themselves about the crisis of the Court’s legitimacy present as a sociological claim what is really normative criticism: The Court deserves to lose the public’s confidence, or, put differently, as a result of Bush v. Gore the Court has lost legitimacy in the eyes of the majority of academic pundits (namely, themselves) whom the public ought to follow. The Rehnquist Court’s "loss" of legitimacy among leading constitutional theorists might be more troubling if it had ever enjoyed such legitimacy. But despite all the expressions of concern for the Rehnquist Court’s standing following its December fall from grace, it is hard to find any evidence that the Court’s more prominent scholarly critics ever held it in much esteem. Sunstein, whose writings on the Court reflect a complicated relationship, is an exception. But Ackerman and Dworkin certainly are not. Even before Bush v. Gore, their work dripped with disdain for the conservative majority, whose legitimacy they discovered only when they felt at liberty to say that it had been lost for good.

    Bush v. Gore was a hard case. The Court confronted novel and difficult legal questions, both parties made plausible arguments, the political stakes loomed large, partisan passions ran excruciatingly high, and the controversy deeply implicated fundamental concerns about justice and democratic self-government. Reasonable people may differ over whether Bush v. Gore was correctly decided. But the charge that the decision is indefensible is itself indefensible. That this untenable charge has been made by legal scholars repeatedly and emphatically, and with dubious support in fact and law, is an abuse of authority and a betrayal of trust. If scholars do not maintain a reputation for fairness and disinterestedness, their own legitimacy may well suffer grievously in the eyes of the public, and so could American democracy.

    When scholars address the public on matters about which they are expert, the public has a right to expect that the scholars’ reason, not their passion, is speaking. Because liberal democracy is grounded in the rule of law, and because law is a technical discipline—the resolution of whose cases and controversies often involves the interpretation of arcane statutes, the mastery of voluminous case law, the understanding of layers of history, and the knowledge of complicated circumstances—the public is particularly dependent on scholars for accurate and dispassionate analysis of legal matters. Those scholars who assume the office of public intellectual must exercise a heightened degree of care and restraint in their public pronouncements.

    Scholarly restraint—so lacking in the aftermath of Bush v. Gore—is indeed compatible with lively participation by scholars in democratic debate. By putting truth before politics, out in public as well as inside the ivory tower, scholars make their distinctive contribution to that precious public good, reasoned and responsible judgment. ❏

    The Making of the Public Mind




    Knowing the
    Public Mind


    by Karlyn Bowman

    In their 1940 book The Pulse of Democracy, George Gallup and Saul Rae defended a new instrument, the public opinion poll, but they cautioned as well that polling, an industry then just out of its "swaddling clothes," would need to be evaluated afresh in the future. The infant industry, long since matured, is full of life today. Polls are a commonplace of American life, conducted almost nonstop on almost every conceivable subject. But some of the same questions Gallup and Rae asked about polling six decades ago are still being asked: Is public opinion unreliable as a guide in politics? Are samples truly representative? What are polling’s implications for the processes of democracy? And along with the old questions, there are significant new ones, too: Is the proliferation of polls, for example, seriously devaluing the polling enterprise?

    The amount of polling on a subject much in the news of late may suggest an affirmative answer to that last question. In late July, the Gallup Organization asked Americans for their views on embryonic stem-cell research, a matter that has vexed scholars, biologists, and theologians. From August 3 to August 5, Gallup polled Americans again. On August 9, immediately after President George W. Bush announced his decision to provide limited federal funding for the research, the survey organization was in the field once more with an instant poll to gauge reaction. From August 10 to August 12, Gallup interviewers polled yet again. Gallup wasn’t the only polling organization to explore Americans’ views on this complex issue. Ten other pollsters, working with news organizations or academic institutions, conducted polls, too. Hoping to influence the debate and the president’s decision, advocacy groups commissioned polls of their own. The Juvenile Diabetes Research Foundation International, a supporter of stem-cell research, reported that a solid majority of Americans were in favor of federal funding, and touted the findings in newspaper advertisements shortly before the president spoke. The National Council of Catholic Bishops, an organization opposed to stem-cell research, released survey findings that showed how the wording of questions on stem-cell research can affect a poll’s results.

    So much polling activity on a single issue isn’t unusual anymore, and it clearly indicates how powerful a force polls have become today. Fourteen national pollsters release data publicly on a regular basis, as do scores of others at the state and local level. Many of these organizations also poll for private clients, though much of that work never becomes public; market research on new products and consumer preferences (conducted privately for the most part) dwarfs the public side of the business. In the political life of the nation, campaign and public pollsters, particularly those associated with media organizations, have enormous influence, and they are the focus of this essay.

    Can pollsters really see into these minds?

    The Roper Center, at the University of Connecticut, collects and archives polling data for most of the national survey organizations that release their data publicly. The Roper archive, the oldest and largest devoted to public opinion data, contains about 9,000 questions from the 1960s—and more than 150,000 questions from the 1990s. Nine organizations regularly contributed to the Roper archive in the 1960s. Today, 104 do. Materials from Gallup and Harris, two of the most familiar names in the survey business, represented slightly more than 75 percent of the Roper Center’s holdings in the 1960s; in the 1990s, they accounted for less than 25 percent. There were 16 questions asked about Medicare in 1965, the year that legislation became law, and more than 1,400 questions about the Clinton health care plan in 1994, the year that proposed legislation died. From 1961 to 1974, pollsters asked some 1,400 questions about Vietnam; in the eight months from August 1990 to March 1991, they asked 800 questions about the Persian Gulf War. A combined total of 400 questions were asked about the 10 first ladies from Eleanor Roosevelt through Barbara Bush; twice that many questions were asked about First Lady Hillary Rodham Clinton alone.

    The polling business has grown dramatically outside the United States as well. Five firms polled for major British newspapers and television stations in the last days of the British election campaign this past June. About a dozen different news organizations, including three from the United States, conducted polls during the 2000 Mexican presidential campaign. The presence of independent pollsters surveying voters on election day in Mexico, and the expeditious broadcast of their findings, reinforced the belief that the election, which was won by the challenger, Vicente Fox, was fair. The New Yorker recently chronicled the work of a political pollster in Ulaanbaatar, Mongolia. In the past three Mongolian national elections, the pollster "predicted the winner within fewer than 2.8 percentage points." The article described how one of the pollster’s young associates traveled by motorbike, in a remote province with no roads, to speak to prospective Mongolian voters. When he handed out his questionnaires, the nomads began weeping because, as the young man said, "for the first time they feel that somebody cares about what they think."

    Polls in the United States have achieved a degree of prominence in public life that was inconceivable when George Gallup, Archibald Crossley, and Elmo Roper started using scientific sampling techniques almost seven decades ago to gauge Americans’ opinions. Some of the most familiar polling questions today ("What is the most important problem facing the United States?"; "Do you approve or disapprove of how the president is handling his job?"; "In politics, do you consider yourself a Democrat or a Republican?") were asked for the first time by those pollsters—the founding fathers—in the 1930s. All three measured Franklin Roosevelt’s popularity and predicted his victory in 1936. Roosevelt himself became an enthusiast for polls after they predicted his win, and he enlisted Hadley Cantril of Princeton University to measure opinion about issues that concerned him, particularly views about the war in Europe. Cantril used Gallup’s facilities at first, but he later set up an independent operation that provided secret poll reports to the White House. Harry Truman, not surprisingly, became skeptical about polls after their famously incorrect prediction that Thomas E. Dewey would defeat him in 1948. Most observers date the modern era of political polling to Louis Harris’s work for John F. Kennedy in 1960. Since then, pollsters working privately for political candidates have become so influential that virtually no candidate runs for major office without hiring one.

    >Karlyn Bowman is a resident fellow at the American Enterprise Institute, in Washington, D.C. Portions of this article are adapted from other writing by the author, including "Polling to Campaign and to Govern," in The Permanent Campaign and Its Future (2000). Copyright © 2001 by Karlyn Bowman.

    Private polling is used in almost every aspect of political campaigns today—from strategic planning to message development to fund-raising— and at every stage of campaigns. And the activity doesn’t stop when the campaigning is over. In a post-election memo to Jimmy Carter in 1976, Patrick Caddell, the president-elect’s pollster, argued that politics and governing could not be separated. Thus was launched "the permanent campaign," with its armies of pollsters and political consultants. Once in office, presidents continue to poll privately, and they collect data from the public pollsters as well. During the Kennedy, Johnson, and Nixon administrations, according to political scientists Lawrence R. Jacobs and Robert Y. Shapiro, "public opinion analysis became an integral part of the institution of the presidency," with staff members given the task of monitoring the data. Successive administrations have become "veritable warehouses for public opinion data." (The private polling that’s done for presidents and paid for by the political parties is lucrative indeed for pollsters—and often helps attract new clients.)

    The public side of the polling business derives its great influence in part from media alliances and coverage. Since the earliest days of polling, pollsters who release data publicly have depended on news organizations to disseminate their findings. Gallup syndicated his polls in various newspapers; Crossley polled for Hearst, and Roper for Fortune. It wasn’t until 1967 that a news organization—CBS News—started conducting its own polls. CBS polled alone at first, but joined forces with the New York Times in 1975. (In the 1990s, CBS News and the Times asked Americans more than 10,000 questions.) Some of the other prominent partnerships today include Gallup, CNN, and USA Today; Harris Interactive, Time, and CNN; and Opinion Dynamics and Fox News. ABC News polls both alone and with the Washington Post. A bipartisan team led by Democrat Peter D. Hart and Republican Robert Teeter polls regularly for NBC News and the Wall Street Journal. Princeton Survey Research Associates polls for Bloomberg News and, separately, for Newsweek. Zogby International, which recently conducted a poll for NBC, worked with Reuters during the 2000 campaign.

    Like their counterparts that poll for candidates, pollsters associated with news organizations are involved in all phases of the permanent political campaign. Pollsters inquire about how the president-elect is handling his transition, and whether the outgoing president is making a graceful exit. In the first 100 days of the Kennedy administration, Gallup asked four questions about how the new president was handling his job. During the same period in Jimmy Carter’s presidency, four national pollsters asked 14 job approval questions. In George W. Bush’s first 100 days, 14 pollsters asked 44 such questions. The total is substantially higher if one includes questions about how the president has handled specific aspects of his job, such as the economy, the environment, or foreign policy. Americans have already been asked whom they will vote for in the presidential election and senatorial contests in 2004. All this activity is a mark of how successful the pollsters have become, but it has also given rise to criticism that the sheer volume of the activity may be diminishing the value of polls.

    In the media/pollster partnerships, the needs of the media often trump those of the pollsters. The press has to work quickly, whereas good polling usually takes time. The competitive news environment has pollsters vying to provide the first reaction to a breaking news story. Kathleen Frankovic, director of surveys at CBS, reports that it took Gallup two weeks to tell the country who won the 1960 Kennedy-Nixon debates. In 1992, CBS had results within 15 minutes of the second presidential debate. Technological advances have made it possible to conduct interviews and to process responses faster and more inexpensively than in the past, but the advances don’t necessarily make the practice wise. Instant polls such as those conducted after President Bush’s speech on stem-cell research and Connie Chung’s interview with congressman Gary Condit (D-Calif.) may satisfy a journalist’s requirement for speed and timeliness (and perhaps even sensationalism), but they do not always satisfy a pollster’s need for adequate samples. To understand just what the public is saying often takes time, and time is a luxury media organizations don’t have.

    The media’s preoccupation with speed caught up with the pollsters in spectacular fashion last year. Although their record of prediction in the 2000 national election was one of the best ever, the exit poll consortium (the five networks and the Associated Press pool resources and conduct a joint poll of voters leaving selected precincts) was roundly criticized for its role in precipitous election-night calls. CNN’s internal report on the election night fiasco argued that "television news organizations staged a collective drag race... recklessly endangering the electoral process, the political life of the nation and their own credibility." As the results of a national Los Angeles Times poll make clear, the public objects to the practice of calling elections before voting has finished. Three-quarters of those surveyed told interviewers that the networks’ practice of predicting the results in some parts of the country while citizens in other parts of the country are still casting ballots "is interfering with the voting process and the practice should be stopped." (Just 22 percent said that the results constitute "breaking news" and that the networks should be allowed to continue the practice.)

    Because competition in the news business is so great, polls are being conducted and reported about many matters on which opinion isn’t firm—or may not exist at all. Questions about a candidate’s strength or a voter’s intention, asked years before an election, are largely meaningless. In Gallup’s first poll about the stem-cell controversy, taken in July 2001, only nine percent of those interviewed said they were following the debate about government funding "very closely," and 29 percent "somewhat closely." Sixty percent said they were following it "not too closely" or "not closely at all." Asked whether the government should fund this type of research, 57 percent of respondents said that they "didn’t know enough to say." In the weeks that followed, Americans did not take a short course in molecular biology or theology. Yet many pollsters reported their views as if they had. Poll findings released by advocacy organizations—on issues from stem-cell research to missile defense—have become weapons in political battles, and the development may undermine polling generally if it causes people to believe that you can prove anything with a poll.

    In his forthcoming book Flattering the Leviathan, political scientist Robert Weissberg levels a serious indictment at contemporary polling on policy issues. He argues that polls, as currently constructed, "measure the wishes and preferences of respondents, neither of which reflect the costs or risks associated with a policy," and he urges policy makers to ignore them. He takes two superficially popular ideas—that the government should provide money to hire more grade school teachers and that it should provide money to make day care more affordable and accessible—and subjects them to rigorous scrutiny through a poll of his own. Opinions about the ideas turn out to be far more complicated, and far more skeptical, than the initial positive responses suggested. Weissberg believes that "contemporary polls tell us almost nothing worthwhile about policy choices facing the nation." In his view, polls have an important place in the political life of the nation when they measure personal values and subjective opinions, but they subvert democracy when they purport to provide guidance on complicated policy debates.

    Although the public displays no overt hostility to polls, fewer Americans are bothering to respond these days to the pollsters who phone them. Rob Daves, of the Minnesota Poll, says that "nearly all researchers who have been in the profession longer than a decade or so agree that no matter what the measure, response rates to telephone surveys have been declining." Harry O’Neill, a principal at Roper Starch Worldwide, calls the response-rate problem the "dirty little secret" of the business. Industry-sponsored studies from the 1980s reported refusal rates (defined as the proportion of people whom surveyors reached on the phone but who declined either to participate at all or to complete an interview) as ranging between 38 and 46 percent. Two studies done by the market research arm of Roper Starch Worldwide, in 1995 and 1997, each put the refusal rate at 58 percent. A 1997 study by the Pew Research Center for the People & the Press found statistically significant differences on five of 85 questions between those who participated in a five-day survey and those who responded in a more rigorous survey, conducted over eight weeks, that was designed to coax reluctant individuals into participating.
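    (For readers who want to see the arithmetic behind such figures, the following is a minimal illustrative sketch, in Python, of the refusal-rate calculation as defined above: people reached on the phone who declined to participate or to complete an interview, divided by everyone reached. The call dispositions and the function name are hypothetical, invented only to echo the 58 percent figure; they are not drawn from the Roper Starch or Pew studies.)

    # Minimal, hypothetical sketch of the refusal-rate calculation described above:
    # refusals plus break-offs, divided by all people actually reached on the phone.
    # The disposition counts below are invented for illustration only.

    def refusal_rate(refused: int, broke_off: int, completed: int) -> float:
        """Share of reached respondents who declined to participate or to finish."""
        reached = refused + broke_off + completed
        if reached == 0:
            raise ValueError("no respondents were reached")
        return (refused + broke_off) / reached

    if __name__ == "__main__":
        # One invented night of telephone interviewing.
        rate = refusal_rate(refused=480, broke_off=100, completed=420)
        print(f"Refusal rate: {rate:.0%}")  # prints "Refusal rate: 58%"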

    Much more research needs to be done on the seriousness of the response-rate problem, but it does seem to pose a major challenge to the business and might help to usher in new ways of polling. (Internet polling, for example, could be the wave of the future—if truly representative samples can be constructed.) Polling error may derive from other sources, too, including the construction of samples, the wording of questions, the order in which questions are asked, and interviewer and data-processing mistakes.

    The way many polls are conducted and reported today obscures some very important findings they have to offer about public opinion. Polls taken over long periods of time, for example, reveal a profound continuity about many of the core values that define American society. Huge majorities consistently tell pollsters that they believe in God and that religion is important in their daily lives. In 1939, 41 percent of those surveyed by Gallup answered "yes" when asked if they had attended church or synagogue in the past seven days. When Gallup asked the same question this year, an identical 41 percent answered "yes." Americans’ views about the role of the United States in the world show a similar long-term stability. In 1947, 68 percent of those surveyed told National Opinion Research Center interviewers that it would be best for the future of the United States if it played an active role in world affairs, and 25 percent said that it would be best for the country if it did not. When the question was asked 50 years later, 66 percent favored an active role and 28 percent were opposed. In dozens of iterations of the question, opinion hasn’t budged. Americans are cranky at times about shouldering so many burdens abroad, but they are internationalists nonetheless.

    There are other telling instances of stability. When Gallup asked in 1938 whether the government should be responsible for providing medical care to people unable to pay for it, 81 percent said "yes." When the question was repeated in 1991, 80 percent so responded. Polling on the minimum wage, too, shows consistent support for a wage floor beneath American workers. Many early observers of American democracy feared that public opinion would be too fickle and volatile to make democracy successful. But the polling data on many issues reveal a public strong and unyielding in its convictions.

    Polls can also reveal how the nation has changed its mind. In 1958, only four percent of whites approved of marriage between "whites and colored people." Today, a solid majority of whites approve. In 1936, only 31 percent of respondents said they would be willing to vote for a woman for president, even if she were qualified in every respect. Today, more than 90 percent respond that they would vote for a woman. When Gallup asks people whether they would vote for a black, a Jew, or a homosexual, solid majorities answer affirmatively. (People are evenly divided about voting for an atheist for president, a finding that underscores the depth of Americans’ religious convictions.) In 1955, Americans were divided about which they enjoyed more—time on the job or time off the job. Today, time away from work wins hands down. The work ethic is still strong, but Americans are taking leisure more seriously than they once did.

    Polls show that Americans are of two minds on many matters, and that makes the findings difficult to interpret. Take the issue of abortion. When Americans are asked whether abortion is an act of murder, pluralities or majorities tell pollsters that it is. When they are asked whether the choice to have an abortion should be left to women and their doctors, large majorities answer that it should. Americans tell pollsters that they want government off the back of business—even as they also tell them that government should keep a sharp eye on business practices. The nation wants a strong and assertive military, but Americans are reluctant to send troops abroad. The "on the one hand/on the other hand" responses to many questions are a prominent feature of American public opinion, and the deep ambivalence seems unlikely to change.

    It’s essential in a democracy to know what citizens are thinking, and polls are a valuable resource for understanding a complex, heterogeneous public. Gallup and Rae had high hopes that polls would improve the machinery of democracy. But polls can be both overused and misused. Instead of oiling the machinery of democracy, the polls now seem to be clogging it up. In an article in this magazine in 1979, the editors wrote, "Americans today seem obsessed with their reflection in the polls." If contemporary refusal rates are a fair indication of their interest, that is no longer the case. Their former enthusiasm is now ennui. ❏

    The Making of the Public Mind

    History for a
    Democracy


    by Wilfred M. McClay

    Americans are said to be notoriously indifferent to the past. They are thought to be forward looking, practical, innovative, and results oriented, a people passionately committed to new beginnings and second (and third) chances. They are optimists and dreamers, whom the green light of personal betterment and social transformation always beckons, and whose attitude toward history was conclusively (if crudely) summarized in the dismissive aphorisms of Henry Ford, the most famous perhaps being this: "History is more or less bunk."

    Maybe those propensities were inevitable features of the American way of life. The United States has been a remarkably energetic and prosperous mass democracy, shaped by the dynamic forces of economic growth, individual liberty, material acquisitiveness, technological innovation, social mobility, and ethnic multiplicity. In so constantly shifting a setting, a place where (in Henry David Thoreau’s words) "the old have no very important advice to give the young," what point is there in hashing over a past that is so easily and profitably left behind? "Old deeds for old people," sneered Thoreau, "and new deeds for new." That could almost be the national motto.

    Even on the rare occasions when tradition enjoys its moment in the spotlight, the nation’s love affair with possibility manages to slip on stage and steal the show. Consider, for example, the standard fare in an outdoor concert for the Fourth of July. Along with Sousa’s "Stars and Stripes Forever" and Tchaikovsky’s 1812 Overture, one can expect to hear Copland’s stately Lincoln Portrait, with an inspirational narrative that draws on the 16th president’s own words. But in addition to familiar phrases from the Gettysburg Address, Copland includes the following: "The dogmas of the quiet past are inadequate to the stormy present.... As our case is new, so we must think anew and act anew. We must disenthrall ourselves and then we shall save our country."

    Disenthrall is a rather strong word to use against the past on a day of national piety. Yet Lincoln’s words seem merely to echo Thoreau’s sentiments—or, for that matter, those of Thomas Paine, who urged his contemporaries to discard useless precedents and think "as if we were the first men that thought." Such statements limn a familiar American paradox: We are to honor our past on Independence Day precisely because it teaches us that we should become independent of our past.

    What, indeed, could be more American than to treat the past as a snare, something to which we are always potentially in thrall? Yet by that standard, it would be hard to account for a notable phenomenon of the American summer of 2001. I refer to the re-emergence of John Adams—revolutionary leader, Founding Father, second president of the United States, sparring partner of Jefferson, nonadmirer of Paine—as an icon of our public life. Who can have failed to notice Adams’s round and rosy countenance peering at us with 18th-century seriousness and stolidity from the cover of David McCullough’s new biography—the publishing sensation of the summer, a 751-page tome stacked high in nearly every bookstore in every mall and airport terminal in the land?

    Adams hardly seems the stuff of which modern bestsellers are made. Despite his boundless energy and ambition, and his many accomplishments, he cannot be judged an especially skillful politician or a notably successful president. (It was not for nothing that he was our first one-term president, and his son John Quincy our second.) A man of high integrity, he was free of the lower Jeffersonian or Clintonian vices that stir the interest of tabloid-minded readers. Nor was he a figure cast in the classic heroic mold, being small and rotund, with a vain and prickly personality and a self-confessed tendency to fits of pettiness and pique. His sober and distrustful view of human nature, including his own, would earn him a thumbs-down from the positive thinkers in the Oprah Book Club. His approach to politics was grounded in a belief in the inevitability of permanent social and economic inequalities—and that approach, even in his day, was slowly but surely on its way out of American life.

    And yet, astonishing to report, there are close to a million hardcover copies of McCullough’s book in print. We cannot account for this success merely by noting the author’s literary gifts or Simon and Schuster’s marketing prowess. There must be other factors boosting Adams’s popular appeal. Does the revival of his reputation have something to do with public disillusionment over the low character of our public officials, past and present, and a desire to find at least one who was estimable? Might it relate to Adams’s stubborn commitment to principle throughout his political career, a commitment that repeatedly cost him power and influence—in stark contrast to recent politicians whose success seems directly related to their utter lack of principle? Does it have to do with the steadily declining reputation of Thomas Jefferson, so often seen as Adams’s opposite number? Could it be because of the human interest of Adams’s unusually devoted and companionate relationship with his wife, Abigail? Is it because Adams’s principled straight talk and aversion to "spin" and partisanship contrast so sharply with the pervasive verbal dissembling of our current political culture?

    All of those possible explanations have some merit, but the real reason may be a good deal simpler: A considerable part of the American public actually has a broad and sustained hunger for history and has repeatedly shown that it will respond generously to an accessible, graceful work about an important subject by a trusted and admired author. Americans yearn for solid knowledge of their nation’s origins, which in a real sense are their own origins too. Their hunger is entirely healthy and natural, though it is often neglected and ill fed.

    One could see the yearning in the celebration of the nation’s bicentennial in 1976, particularly in the excitement generated by the spectacle of the Tall Ships.

    >Wilfred M. McClay, a former Wilson Center fellow, holds the Sun Trust Chair of Humanities at the University of Tennessee at Chattanooga. He is writing an intellectual biography of the American sociologist David Riesman. Copyright © 2001 by Wilfred M. McClay.



    A hunger for history: Crowds jam Civil War reenactments like this one in Gettysburg, Pa.

    That parade of venerable, restored sailing vessels passed in review through New York harbor on July 4, like a procession of great and ghostly heroes from a vanished epic world, and was observed by a crowd estimated at seven million. Although the Tall Ships had little or nothing to do directly with the American Revolution, their remarkable presence elicited an affective link to the American past, a link so clear and poignant that a broad American public needed no scholarly explanations to grasp it. A similar response was evoked by Ken Burns’s television series on the Civil War, which did more than any number of professional historians to keep alive public interest in the American past.

    Americans do not want to view the nation’s history as merely a cultural-literacy grab bag of factoids and tales. They want, rather, to establish a sense of connection with it as something from which they can draw meaning and sustenance, and in which their own identity is deeply embedded. That should suggest how critical a role the writing and teaching of history play in refining the nation’s intellectual and moral life. Far from being of little interest—a record of old deeds for old people—history turns out to be of great consequence in the formation of the public mind.

    That may help to explain why discussions of historical subjects, and conflicts over questions of historical interpretation and practice, have become so visible and lively a feature of our cultural life in recent years. The gradual passing of the World War II generation has served as an especially powerful stimulant to historical consciousness, and has given rise to films such as Saving Private Ryan and the TV miniseries Band of Brothers, attractions such as the D-Day Museum in New Orleans and the controversial World War II Memorial planned for the National Mall in Washington, and popular books such as Tom Brokaw’s The Greatest Generation (1998) and Stephen Ambrose’s Citizen Soldiers (1997).

    A passion for history is reflected as well in various heated, and sometimes nasty, debates that have occurred over the past decade, often as an offshoot of the so-called culture wars: debates over the National History Standards, the Enola Gay exhibition at the Smithsonian, a slavery exhibition at the Library of Congress, the public display of the Confederate flag, reparations for slavery, Jefferson’s personal relations with the slave Sally Hemings, Edmund Morris’s fictionalizing in his biography of Ronald Reagan, the historian Joseph Ellis’s lying in the classroom about his military service and personal life. All of those episodes—and more—mirror the public’s growing engagement with historical controversies.

    But even as we note the engagement, we must acknowledge something else as well: the immense, appalling, and growing historical ignorance of most Americans. To say that an abiding appetite for history exists is not the same as to say that the hunger is being satisfied. On the contrary. The steady abandonment of instruction in history by our schools and colleges shows no sign of reversal, and makes it a near certainty that the next generations of young Americans will lack even the sketchiest knowledge of the country’s historical development.

    Survey after dismal survey confirms that Americans are being poorly served by their educational institutions, at all levels. One-fifth of American teenagers don’t know the name of the country from which the United States declared independence. A fourth don’t know who fought in the Civil War, and cannot say what happened in 1776; three-fifths do not know that Columbus discovered America in 1492. Perhaps the most depressing study of all, released last year by the American Council of Trustees and Alumni (ACTA), examined the historical knowledge of graduating seniors at America’s 55 most selective colleges and universities. The study found that 81 percent of the seniors could not pass a simple test of American historical knowledge, which asked about such basic matters as the separation of powers and the events at Valley Forge. Not one of the colleges required the students to take a course in American history, and less than a fourth of them required any history courses at all. (On the bright side, 99 percent of the students surveyed were able to identify the cartoon characters Beavis and Butthead. So they are learning something.)

    The ACTA report caught the attention of Robert C. Byrd (D-W.Va.), one of the Senate’s most historically minded members. He resolved on the spot to show his concern in a highly tangible way: by adding a $50 million amendment to the Department of Education’s FY 2001 appropriations bill (and promising $100 million more in FY 2002) to support the development and implementation of "programs to teach American history." But the ACTA survey suggests that money is not the problem. It was, after all, a study of students at America’s elite colleges, most of which are private institutions that charge upward of $30,000 a year for their services, and that have endowments in the hundreds of millions, and in some cases billions, of dollars. Whatever problems these institutions may have, a lack of financial resources is not one of them—and is certainly not the reason they are failing to teach their students American history.

    Nevertheless, Byrd’s passion on the subject is encouraging. It suggests that, with the clashes over the National History Standards now behind us, there might be grounds for a national consensus on the need for dramatic improvement in history education. But formidable barriers remain—barriers that cannot be much affected by the appropriation of fresh federal money.

    To begin with, one would have to challenge the entrenched power of educators who have relentlessly sought over a period of decades to displace the study of history in our schools in favor of a "social studies" curriculum that they believe is more conducive than the "fact-grubbing" specificity of history to the creation of useful habits of problem solving, generalization, and harmonious living. The triumph of what social critic Russell Kirk called "social stew" led to a whole series of subsequent disasters: the downgrading of history in state social-studies standards, the near disappearance of history from the primary grades, the weakening of standards for history teaching, and the replacement of real books with inane, plodding, politically correct texts that misrepresent the subject of history by robbing it of its narrative zest and interpretive fascination. It will take nothing short of a revolution in educational philosophy to reverse the trends. More money poured into the system will only reinforce the status quo and compound the historical illiteracy of Americans.

    There are other, more complex barriers to improvement: the character of the historical profession itself and the nature of its public responsibilities in a democratic society. In reality, the clashes over history standards are no more behind us than the culture wars that lay behind the clashes. Americans have generally been willing to trust in the probity and judgment of those calling themselves historians. But that trust has eroded somewhat in recent years, and for entirely understandable reasons. Part of that erosion derives from ideological factors, made all too obvious by such follies as the American Historical Association’s official opposition to the Reagan defense buildup in 1982, or, more recently, the ill-advised petition signed by historians who opposed the impeachment of President Bill Clinton. In both cases, certain professional historians drew improperly upon the authority of their discipline to lend force to partisan political positions, and, in so doing, damaged the long-term credibility of all historians.

    But the distrust is also grounded in divergent views of the function of history and the responsibility of historians. There are profound tensions inherent in the practice of history in a democracy—between a history that is the property of all and a history that is the insight of an accredited few, or between a history organized around the requirements of American citizenship and a history that takes its bearings from, and bases its authority upon, more strictly professional criteria. The tensions cannot be, and should not be, finally resolved; neither side holds a trump card. Certainly, professional historians should be able to challenge conventional wisdom. One can understand, for example, the chagrin of the historians and curators who found their professional judgments being overruled in the Enola Gay case. But their perspective was not the whole of the matter, particularly when the subject in question was a publicly supported commemoration of a profoundly significant event in the nation’s four-year-long war effort. Historians who use public money in public forums to express views with public implications cannot expect to be insulated from the public’s reaction. On the contrary, the endless interplay between the public and professional uses of history should be a source of intellectual vitality. This makes it all the more lamentable that so many professional historians have come to embrace an understanding of history that looks more and more like a dead end, both on its own terms and for the enrichment of public life.

    More than three decades ago, the British historian J. H. Plumb, in a book called The Death of the Past (1970), argued that "true history" is a "destructive" process: It assaults all the forms of "created ideology" by which people give meaning to the life of their institutions and societies, and it intends finally "to cleanse the story of mankind from those deceiving visions of a purposeful past." That credo may sound brutal, but it is nothing more than a particularly succinct and candid expression of the logical conclusion to which the relentlessly critical spirit animating modern professional historiography is drawn. That spirit would ruthlessly sweep away both the large narratives of nation-building and the small pieties human beings have always used to shield their eyes from the harsh light of reality. It’s not that there is nothing to be said for the work of the critical spirit. The difficulty, rather, is that what would be available to put in the place of the large narratives and small pieties when they are finally vanquished has never been made clear.

    In the beginning, of course, there was great value in bringing the conventional narratives of American history into question, for they had often served the purpose of rendering minorities and marginalized groups silent or invisible. But the energy of those more particular histories is almost entirely derivative and, ironically, dependent upon the grand narratives of American national identity against which they push. The nation has not yet disappeared entirely from American history, but it often resembles nothing more than, in John Higham’s marvelous phrase, the "villain in other people’s stories." Yet without the nation, and some of the other narratives and pieties that critical history has dispensed with, there can be no plausible way to organize history into larger meanings that can, in turn, inform and inspire the work of citizenship and reform.

    Indeed, by the late 1980s, historian Peter Novick was arguing in That Noble Dream (1988), an exhaustive and highly influential study of the American historical profession, that there was no unifying purpose at all left in the profession; there remained only a vast congeries of subdisciplinary fields within which small armies of specialists worked at solving small-scale technical problems. "As a broad community of discourse," said Novick, "the discipline of history" envisioned in the founding of the American Historical Association in 1884 "had ceased to exist." Under such circumstances, the very possibility of cultivating a public historical consciousness, substantively informed by academic historical work, was rendered practically nil, as was the antique notion that historical understanding might contribute to the refinement or deepening of individual awareness. French historian Pierre Nora brought a touch of Gallic intellectual delicacy to his summary of the situation: "History is perpetually suspicious of memory, and its true mission is to suppress and destroy it."

    The problem with such programmatic skepticism is not only that it is completely self-contradictory and unworkable in human terms, but that its final result is a historical understanding as cleansed of human interest as it is of deceptive visions. To suppress and destroy memory is to violate human nature in a fundamental way.



    [Photo caption: John Adams was a familiar face at the beach this summer thanks to David McCullough’s biography. Books on the Founders—Washington, Jefferson, Hamilton—have enjoyed a recent vogue.]

    And to imply that the honest writing of history requires such erasure is a travesty. As professional historiography trudges further and further down its chosen path of specialization and fragmentation, satisfied with its increasingly hollow rhetoric about "pushing back the frontiers of knowledge," it pays a steep price for every step, and the price comes directly out of its own hide, out of an animating sense of purpose. In writing off the larger audience it might have had, professionalization of that sort impoverishes not only the public mind, but the discipline itself.

    This is not to suggest that historians should entirely abandon the critical enterprise. But they need to be honest enough to turn their criticism back upon the act of criticism itself, modest enough to concede that man does not live by critical discourse alone, and wise enough to understand that a relentlessly debunking spirit cannot possibly be a basis for anything resembling a civilized life.

    Historical knowledge and historical understanding are two quite different things. As Novick well expressed it, one can speak of historical knowledge as "something accumulating on library shelves," but historical understanding "is in the mind of a human being or it is nowhere." The acquisition of a genuinely historical consciousness amounts to a kind of moral discipline of the soul. It means learning to appropriate into our own moral imaginations, and learning to be guided by, the distilled memories of others, the stories of events we never witnessed and times and places we never experienced. By an expansion of inward sympathy, we make those things our own, not merely by knowing about them, but by incorporating them into our awareness, looking at the world through their filter, learning to see the past as an immanent presence woven invisibly into the world that lies before us. By its very nature, historical consciousness can never be the exclusive province of a historical guild or priesthood, for it is meant to be the common possession of all.

    A democratic nation needs a democratic history. There was a time not so long ago when this was assumed to mean that a genuinely democratic history should ignore politics and constitutions and intellectual elites and the like and insist upon viewing the past exclusively "from the bottom up," through a study of the social history of nonelite groups. But that assumption now seems far less obvious. Indeed, there is a kind of unconscious scorn buried in it—as if political and intellectual history were beyond the common people’s means, and as if individuals could not be expected to take an interest in any aspect of history that did not involve them, or others exactly like them. There is every reason to believe that the United States can nurture a national culture in which a rich acquaintance with the great documents, debates, and events of the nation’s past becomes the common property of all citizens.

    If that is ever to happen, the historical profession will have to take more seriously its role as a potential shaper of the public mind and public life. It’s not necessary to do so by justifying history as a source of public-policy initiatives. The historian can make a far greater contribution by playing the essentially conservative role—or is it a radical one?—of standing athwart the turbulence of modern life and insisting on the dignity of memory and the reality of the past. Historians should not forget, in the pressure to find "practical" justifications for what they do as historians, that they further an important public purpose simply by being what they are, and by preserving and furthering a certain kind of consciousness, a certain kind of memory—qualities of mind and soul, and features of our humanity, that a culture of ceaseless novelty and instant erasure has all but declared war upon.

    As it happens, John Adams himself had something exemplary to say about all this. McCullough relates in the final pages of his book that Adams composed no epitaph for himself in anticipation of his death. In that respect, as in so many others, he was the opposite of Jefferson, who designed the very obelisk that was to mark his grave and specified the precise words that were to be inscribed on it. Yet Adams did compose an inscription for the sarcophagus lid of his ancestor Henry Adams, the first Massachusetts Adams, who had arrived in 1638. The inscription speaks volumes about how Adams conceived his place in history, and how he accepted the obligation to instruct the future by honoring the past:

    This stone and several others have been placed in this yard by a great, great, grandson from a veneration of the piety, humility, simplicity, prudence, frugality, industry, and perseverance of his ancestors in hopes of recommending an affirmation of their virtues to their posterity.

    In concluding his book with this marvelous inscription, McCullough means us to see yet another contrast between Adams and Jefferson. But we should not miss the even more instructive contrast: the one between Adams and us. ❏