Undisciplined

Louis Menand

Almost everyone agrees that American academic culture has changed dramatically in the past 25 years. Some people (mostly inside the academy) talk about those changes in terms of accessibility, diversity, increased public engagement, and so on. Others (mostly outside the academy) talk about them in terms of political correctness, affirmative action, the "death of literature," the rise of "grievance studies," and so on. In general, the differences between the two groups are framed as a debate over consciously held views: People with bad (or good) ideas seized control of higher education and drove out the good (or bad) ideas of the previous generation. To frame the debate so is not wrong: If changes in academic culture, where people are paid to think, are not driven in part by consciously held ideas, what changes are? But ideas are often driven, in turn, by long-term structural movements, and it is useful to step back from the debate over academic politics and values to see the evolution of the culture of higher education from a more impersonal perspective. One place to watch the change occurring is in the demise of the traditional academic disciplines.

Traditionally, an academic discipline was a paradigm inhabiting an institutional structure. "Anthropology" or "English" was both the name of an academic department and a discrete, largely autonomous program of inquiry. If, 30 or 40 years ago, you asked a dozen anthropology professors what anthropology's program of inquiry was--what anthropology professors did that distinguished them from other professors--you might have gotten different, and possibly contradictory, answers, because academic fields have always had rival schools in them. But, by and large, the professors would have had little trouble filling in the blank in the sentence, "Anthropology is ____." (And if they did not have a ready definition--for anthropology has gone through periods of identity crisis in the past--they would not have boasted about the fact.) Today, you would be likely to get two types of definitions, neither one terribly specific, or even terribly useful. One type might be called the critical definition: "Anthropology is the study of its own assumptions." The other type could be called the pragmatic definition: "Anthropology is whatever people in anthropology departments do."

Not every liberal arts discipline is in the condition of anthropology, of course, but that only heightens the sense of confusion. It tends to set people in fields in which identification with a paradigm remains fairly tight, such as philosophy, against people in fields in which virtually anything goes, such as English. Philosophy professors (to caricature the situation slightly) tend to think that the work done by English professors lacks rigor, and English professors tend to think that the work of philosophy professors is introverted and irrelevant. The dissociation of academic work from traditional departments has become so expected in the humanities that it is a common topic of both conferences and jokes. During a recent conference (titled "Have the Humanistic Disciplines Collapsed?") at the Stanford Humanities Center, one of the center's directors, to demonstrate the general dissipation of scholarly focus, read the titles of projects submitted by applicants for fellowships and asked the audience to guess each applicant's field. The audience was right only once--when it guessed that an applicant whose project was about politics must be from an English department.

The usual response to the problem of "the collapse of the disciplines" has been to promote interdisciplinary teaching and scholarship. But interdisciplinarity is not only completely consistent with disciplinarity--the concept that each academic field has its own distinctive program of inquiry--it actually depends on that concept. More and more colleges are offering more and more interdisciplinary classes, and even interdisciplinary majors, but increased interdisciplinarity is not what is new, and it is not the cause of today's confusion. What the academy is now experiencing is postdisciplinarity--not a joining of disciplines, but an escape from disciplines.

How did this come about? The most common way of explaining paradigm loss has been to tie it to the demographic shift that has occurred in higher education since 1945. That shift has certainly been dramatic. In 1947, 71 percent of college students in the United States were men; today, a minority of college students, 44 percent, are men. As late as 1965, 94 percent of American college students were classified as white; today, the figure (for non-Hispanic whites) is 73 percent. Most of the change has occurred in the past 25 years. A single statistic tells the story: In the decade between 1984 and 1994, the total enrollment in American colleges and universities increased by two million, but not one of those two million new students was a white American man. They were all nonwhites, women, and foreign students.

Faculty demographics altered in the same way, and so far as the change in the status of the disciplines is concerned, that is probably the more relevant shift. Current full-time American faculty who were hired before 1985 are 28 percent female and about 11 percent nonwhite or Hispanic. Full-time faculty who have been hired since 1985--individuals who, for the most part, entered graduate school after 1975--are half again as female (40 percent) and more than half again as nonwhite (18 percent). And these figures are for full-time professors only; they do not include part-time faculty, who now constitute 40 percent of the teaching force in American higher education, and who are more likely to be female than are full-time faculty. In 1997, there were 45,394 doctoral degrees conferred in the United States; 40 percent of the recipients were women (in the arts and humanities, just under 50 percent were women), and only 63 percent were classified as white American citizens. The other 37 percent were nonwhite Americans and foreign students.

The arrival of those new populations just happens to have coincided with the period of the so-called culture wars--a time, beginning around 1987, the year of Allan Bloom's The Closing of the American Mind, when higher education came under intense outside criticism for radicalism and elitism. This coincidence has made it natural to assume a connection between the new faces on campus and the "collapse" (or the "redefinition") of the disciplines. There are two ways of explaining the connection. One is to suggest that many nonwhite and female students and professors understood the disciplines as rigid and exclusionary abstractions, and brought a new spirit of transgressiveness and play (things not associated with the culture of white men) into the academy. That interpretation is a little too similar to its evil twin--the view that many women and nonwhites lack the temperament for rigorous scholarship and pedagogy. A less perilous explanation is that the new populations inevitably created a demand for new subject matter, a demand to which university departments, among the most sluggish and conservative institutions in America, were slow to respond. The sluggishness produced a backlash: When women and nonwhites began arriving at universities in significant numbers after 1975, what happened was a kind of antidisciplinarity. Academic activity began flowing toward paradigms that essentially defined themselves in antagonism to the traditional disciplines.

Women's studies departments, for example, came into being not because female professors wished to be separate, but because English and history and sociology departments were at first not terribly interested in incorporating gender-based courses into their curricula. The older generation of professors, whatever their personal politics, in most cases did not recognize gender or ethnic identity as valid rubrics for teaching or scholarship. So outside the discipline became a good place for feminist scholars to be. Indeed, there was a period, beginning in the late 1970s, when almost all the academic stars were people who talked about the failures and omissions of their own fields.

That was especially the case in English literature, where there was (allegedly) a "canon" of institutionally prescribed texts available to question. Edward Said's Orientalism (1978) was about the scholarly bias against non-Western cultures; Sandra Gilbert and Susan Gubar's The Madwoman in the Attic (1979) was about the exclusion and misinterpretation of work by women; Jane Tompkins's Sensational Designs (1985) was about the exclusion of popular literature; and so on. Scholarly works such as those did not simply criticize their own disciplines; they simultaneously opened up and legitimated new areas of research and teaching. And when departments were slow to adopt the new areas, centers were happy to take up the slack. The period since 1975 has been the era of the center--for women's studies, postcolonial studies, African American studies, gay and lesbian studies, science studies, cultural studies. And every university seems either to have or to be busy creating its very own humanities center. Few of the centers grant degrees--they lack the institutional power of the departments. But they are interdisciplinary by definition (they are made up of professors from a variety of disciplines) and antidisciplinary in temper (they were established to compensate for some perceived inadequacy in the existing departments).

But the era of antidisciplinarity is essentially over, for the simple reason that the traditional disciplines have by now almost all been co-opted. Virtually no one in the university today believes that gender or ethnic identity (or any of the other areas of research associated with the centers) is not a valid rubric for research or teaching. People in English departments and anthropology departments do exactly what people used to have to go to women's studies and cultural studies centers to do. The arrival of new populations, in other words, helps to explain the emergence of the critical definition of the discipline--that "anthropology (or English or history) is the study of its own assumptions." That formulation is a holdover from the days of antidisciplinarity. But the influx doesn't really explain the pragmatic definition--that "anthropology (and the rest) is whatever anthropology professors do." Merely adding new areas of study (women's history, postcolonial writers, and so on) doesn't threaten the integrity of a discipline, even if it entails (as it often does) rethinking traditional standards and practices. Postdisciplinarity is a different phenomenon, and it has a distinct etiology.

The contemporary American university is an institution shaped by the Cold War. It was first drawn into the business of government-related scientific research during World War II, by men such as James Bryant Conant, who was the president of Harvard University and civilian overseer of scientific research during the war, and Vannevar Bush, who was a former vice president and dean of engineering at the Massachusetts Institute of Technology and director of the federal Office of Scientific Research and Development. At the time of the First World War, scientific research for military purposes had been carried out by military personnel, so-called soldier-scientists. It was Bush's idea to contract this work out instead to research universities, scientific institutes, and independent private laboratories. In 1945 he oversaw publication of the report Science: The Endless Frontier, which became the standard argument for government subvention of basic science in peacetime and launched the collaboration between American universities and the national government.

Then came Sputnik, in 1957. Sputnik stirred up a panic in the United States, and among the political responses was the passage of the National Defense Education Act of 1958. The legislation put the federal government, for the first time, in the business of subsidizing higher education directly, rather than through government contracts for specific research. This was also the period when economists such as Gary Becker and Theodore Schultz introduced the concept of "human capital," which, by counting educated citizens as a strategic resource, offered a further national security rationale for increased government investment in higher education. In the words of the enabling legislation for the National Defense Education Act: "The security of the Nation requires the fullest development of the mental resources and technical skills of its young men and women. . . . We must increase our efforts to identify and educate more of the talent of our Nation. This requires programs that will give assurance that no students of ability will be denied an opportunity for higher education because of financial need."

The national financial commitment to higher education was accompanied by the arrival of the baby-boom generation of college students. Between 1955 and 1970, the number of 18-to-24-year-olds in America grew from 15 million to 25 million. The result was a tremendous expansion of the higher education system. In 1945, 15 percent of all Americans attended college; today, 50 percent attend college at some point in their lives. In 1949 there were about 1,800 institutions of higher education in the United States, enrolling just under two and a half million students; today, there are just over 4,000 American colleges and universities, and they enroll more than 14 million students, about 5 percent of the population. Current public expenditure on higher education is the equivalent of 5.6 percent of gross domestic product (GDP). To put those numbers in perspective: In the United Kingdom, 14.7 percent of the population goes on to university, and public expenditure on higher education (in a country where almost all universities are public) is 4.1 percent of GDP.

The expansion undoubtedly accounts for some of the decay disciplinary paradigms have undergone. In a system of (essentially) mass higher education, a much smaller proportion of students is interested in pursuing traditional academic work. That is not why they choose to go to college. Only a third of bachelor's degrees awarded in the United States each year are in liberal arts fields (which include the natural and social sciences), and less than a third of those are in the humanities. It is not surprising that a sense of being squeezed onto the margins of a system increasingly obsessed with other things should generate uncertainty and self-doubt among people in the humanities.

You can see the effects in college catalogues. At Trinity College in Hartford, Connecticut, for example, the philosophy department's announcement says: "A good philosopher should know at least a little something about everything." The department then recommends the study of a foreign language, but only because it "encourages the habit of careful attention to a text." It recommends a "broad understanding of modern science," but suggests that "any good science course . . . is suitable." It recommends courses in history, literature, and the arts, but advises that students generally select courses in these fields according to the amount of reading assigned (the more reading, the more desirable). It ends by saying what was already clear enough: "We require no particular non-departmental courses as part of the major." The next section of the announcement, titled "Introductory Courses," begins, "There is no single best way to be introduced to philosophy." That is not a confession of uselessness; it is an effort to conceive of philosophy as continuous with all other areas of thought--the "philosophy is whatever philosophers do" approach. (Still, it is unusual to find a philosophy department knocking down its own disciplinary fences with such abandon.)

Expansion was only one of the effects Cold War educational policies had on the university. There was a more insidious effect as well. The historian Thomas Bender has suggested, in his contribution to the illuminating volume American Academic Culture in Transformation (1997), that the new availability of state monies affected the tenor of academic research. Scholars in the early Cold War era tended to eschew political commitments because they wished not to offend their granting agencies. The idea that academics, particularly in the social sciences, could provide the state with neutral research results on which public policies could be based was an animating idea in the 1950s university. It explains why the dominant paradigms of academic work were scientific, and stressed values such as objectivity, value neutrality, and rigor.

In the sciences, the idea of neutrality led to what Talcott Parsons called the ethos of cognitive rationality. In fields such as history, it led to the consensus approach. In sociology, it produced what Robert Merton called theories of the middle range--an emphasis on the formulation of limited hypotheses subject to empirical verification. Behaviorism and rational choice theory became dominant paradigms in psychology and political science. In literature, even when the mindset was antiscientific, as in the case of the New Criticism and structuralism, the ethos was still scientistic: Literary theorists aspired to analytic precision. Boundaries were respected and methodologies were codified. Discipline reigned in the disciplines. Scholars in the 1950s who looked back on their prewar educations tended to be appalled by what they now regarded as a lack of rigor and focus.

Because public money was being pumped into the system at the high end--into the large research universities--the effect of the Cold War was to make the research professor the type of the professor generally. In 1968, Christopher Jencks and David Riesman referred to the phenomenon as "the academic revolution": For the first time in the history of American higher education, research, rather than teaching or service, defined the work of the professor, not just in the doctoral institutions but all the way down the institutional ladder. (That is why, today, even junior professors at teaching-intensive liberal arts colleges are often obliged to produce two books to qualify for tenure.) The academic revolution strengthened the grip of the disciplines on scholarly and pedagogical practice. Distinctions among different types of institutions, so far as the professoriate was concerned, began to be sanded down. The Cold War homogenized the academic profession.

If you compare the values of the early Cold War university with the values of the 21st-century university, you find an almost complete reversal of terms. A vocabulary of "disinterestedness," "objectivity," "reason," and "knowledge," and talk about such things as "the scientific method," "the canon of great books," and "the fact-value distinction," have been replaced, in many fields, by talk about "interpretations" (rather than "facts"), "perspective" (rather than "objectivity"), and "understanding" (rather than "reason" or "analysis"). An emphasis on universalism and "greatness" has yielded to an emphasis on diversity and difference; the scientistic norms that once prevailed in many of the "soft" disciplines are viewed with skepticism; context and contingency are continually emphasized; attention to "objects" has given way to attention to "representations"; there has been a turn to "personal criticism."

The trend is essentially a backlash against the scientism and the excessive respect for disciplinarity of the Cold War university. We cannot attribute it solely to demographic diversification because most of the people one would name as its theorists are white men, and because the seeds of the undoing of the old disciplinary models were already present within the disciplines themselves. The people whose work is most closely associated with the demise of faith in disciplinary autonomy were, in fact, working entirely within the traditions in which they had been trained in the 1950s and early 1960s--people such as Clifford Geertz, Paul de Man, Hayden White, Stanley Fish, and Richard Rorty. The principal source of the "critical definition" of disciplines, Thomas Kuhn's The Structure of Scientific Revolutions (1962), was itself a perfectly traditional exercise in the philosophy and history of science. (Kuhn's mentor, to whom the book is dedicated, was James Conant.) But Kuhn's argument that "progress" in scientific knowledge can be explained in large part as the substitution of new paradigms for old proved infectious in disciplines far removed from the philosophy of science. Kuhn's book was not a work of science studies. He was not trying to explain science as displaced biography or sociology. He was only trying to describe how science opens up new paths of inquiry, and for the rest of his career he resisted the suggestion that his theory of paradigm change implied that scientific knowledge was relativistic or socially constructed. Still, he set the analytic template for people in many other fields.

Richard Rorty, for example, has always cited Kuhn as a key influence on his own effort to debunk (or to transcend) the tradition of analytic philosophy. Philosophy and the Mirror of Nature (1979), Rorty's landmark work, constructed its attack on the claims of analytic philosophy entirely from within the discipline itself--from arguments advanced by mainstream analytic philosophers such as Ludwig Wittgenstein, Wilfrid Sellars, W. V. O. Quine, Nelson Goodman, and Hilary Putnam. Rorty's point was not that analytic philosophy was a mere academic formalism, or the politically objectionable artifact of a mandarin intellectual class (the sort of argument, in other words, one could imagine from a person outside the discipline). His point was that analytic philosophy had refuted itself on its own terms.

The scholar who most successfully adopted Kuhn's conception of the progress of knowledge as a series of paradigm shifts was Stanley Fish. Whatever the problems Kuhn's theory posed for scientists, many people in English departments saw developments within their own field as precisely a succession of largely ungrounded paradigm shifts. Since 1945, the discipline had been dominated by, in turn, New Criticism, structuralism, and deconstruction--each theoretical dispensation claiming to have unlocked the true nature of literary language, which its predecessor had misunderstood. Fish interpreted the shifts as a succession of "communities of inquiry," whose norms and values set the boundaries for what was professionally acceptable and what was not. Once this interpretation was grasped, the belief that "English" represented any single way of approaching literature came to seem naive. Thus the pragmatic definition: The study of English is whatever people within the community of inquiry known as "the English department" happen to count as the study of English. There is no objective referent, such as "the nature of literary language," to use as an arbiter among approaches. The foundation has not shifted--it has vanished.

The story of paradigm loss is the story of many converging trends--which is a good reason for concluding that the loss is not likely to be reversed anytime soon. One can ask, though, whether postdisciplinarity is a good place to be. My own view, for what it is worth, is that the academy is well rid of the disciplinary hubris of the early Cold War university, but that it is at some risk of sliding into a predictable and aimless eclecticism (as opposed to an imaginative and dynamic eclecticism, which I support). In a perfect world, which is to say in a fully funded world, the intellectual uncertainties caused by the collapse of the disciplines would eventually shake themselves out. The good ideas would drive out the bad, and people would find a way to separate what is worth studying and teaching from what is trendy or meretricious. But the world is not fully funded. Disciplines do not have an infinite amount of time to sort out their rationales. When they have a hard time explaining what they are about, they are in danger of losing out in the competition.

What is the chief obstacle to a productive resolution of the current disciplinary confusion? Doctoral education is the sphere of the American educational system most resistant to reform. It remains bound to a disciplinary structure first put in place 100 years ago--even though the curriculum of the liberal arts college, the demographic composition of student bodies, and the status of knowledge itself in the global economy have all been transformed. Graduate students still specialize in a small subfield within a traditional department, still become disciples of senior specialists for eight or 10 or sometimes 12 years, still produce a scholarly monograph to secure a degree that will license them to teach. All the buzz of academic intellectual life is happening in sex and gender studies, cultural studies, American studies, postcolonial studies, and so on, but all the credentialing goes on in departments of English, history, sociology, philosophy, and the rest of the traditional liberal arts fields. The academic establishment has become so overinvested in the notion that a Ph.D. in one of those fields stands for something immutably real and valuable that it cannot imagine reproducing itself other than by putting the next generation over exactly the same hurdles. Once a device for professional self-control, the doctoral degree has become a fetish of the academic culture. There must be other ways to train college teachers. There must be other ways to pursue scholarly inquiry.