Regime Change 2.0

Robert S. Litwak

There is more than one way to get a rogue state to change its ways.


“The Most Dangerous Man in the World?” shouted the cover of Newsweek. Iran’s radical president Mahmoud Ahmadinejad in 2008? No, the man was Libya’s Muammar al-Qaddafi and the year was 1981. Twenty-two years later, in late 2003, the Libyan dictator surprised the world with the announcement that his country would terminate its weapons of mass destruction (WMD) programs. The strategic turnabout ended years of secret negotiations with the United States and Britain that had focused initially on Libyan complicity in the terrorist bombing of Pan Am flight 103 in 1988 and subsequently on Libya’s proscribed WMD programs. The Bush administration claimed the disarmament coup (coming just eight months after the toppling of Saddam Hussein’s regime) as a dividend of the Iraq war and declared that Libya could now emerge from its United Nations–imposed diplomatic isolation. Libya was poised to rejoin what American presidents from Woodrow Wilson to George W. Bush have metaphorically called “the family of nations.” Does the Libyan precedent—“The Rogue Who Came in From the Cold,” as a headline in Foreign Affairs put it—hold lessons for dealing with other states that egregiously violate international norms of conduct?

 
In 2005, when Secretary of State Condoleezza Rice identified six countries—Cuba, Iran, North Korea, Zimbabwe, Burma, and Belarus—as “outposts of tyranny,” she conspicuously omitted Libya as a seventh. Yet though that former “rogue state” was no longer engaged in weapons proliferation or terrorism—the issues of urgent concern to the United States after the 9/11 attacks—the Libyan regime’s miserable human rights record secured Qaddafi 11th place and a “dishonorable mention” in Parade magazine’s 2008 ranking of “The World’s Worst Dictators.” That pop compilation of autocrats included not only those Rice singled out but also, embarrassingly for an administration trumpeting a “freedom agenda,” the leaders of three key U.S. allies—Saudi Arabia, Egypt, and Pakistan. One man’s dictator is another’s indispensable partner in the “global war on terrorism.” The competition between contradictory values and objectives—on the one hand, President Bush’s Wilson-on-steroids rhetoric about “ending tyranny”; on the other, the ugly accommodations Washington has made with the “world’s worst” for the sake of counterterrorism and oil—has naturally fueled charges of hypocrisy. There may be no resolving this traditional tension in American foreign policy between ideals and interests, but the tension can be managed.
 
The roots of the current debate can be traced to an important conceptual shift that occurred around 1980. Before then, the terms “rogue,” “pariah,” and “outlaw” were used interchangeably to describe states whose repressive ruling regimes engaged in the most extreme violations of international norms governing the treatment of civilian populations; notorious examples were Pol Pot’s Cambodia and Idi Amin’s Uganda. After 1980, the focus shifted from the internal behavior of a state (how a regime treats its own people) to its external behavior (how it relates to other states in the international system). Two key criteria marked a state as “rogue”: the sponsorship of terrorism and the pursuit of WMD. In accordance with the shift to a concern with states’ external behavior, the State Department inaugurated an official listing of countries employing terrorism as an instrument of policy. And in a 1985 speech, President Ronald Reagan called Iran, Libya, North Korea, Cuba, and Nicaragua “an international version of Murder Incorporated” with “outlaw governments who are sponsoring terrorism against our nation.”
 
Over the years, the U.S. list of state sponsors of terrorism has been subject to politicization. Particularly glaring was the decision in 1982 to drop Iraq from the list as part of Washington’s “tilt” toward the Saddam Hussein regime just as Iraq was suffering battlefield setbacks in its attritional war with Ayatollah Khomeini’s Iran. Ironically, the country that would one day be held up as the archetypal “rogue state” was being courted, not penalized, by the Reagan and George H. W. Bush administrations through what proved a flawed engagement strategy. Iraq was not placed back on the State Department’s terrorist list until a month after its August 1990 invasion of Kuwait.
 
The new conception of rogue states was strongly reinforced by the coincidence of the end of the Cold War and the waging of a hot war in the Persian Gulf in 1991 to reverse Saddam’s aggression. Richard Cheney, then secretary of defense, spoke of the need to prepare for the “Iraqs of the future.” This mission assumed added urgency with the postwar discovery by UN weapons inspectors of Iraq’s unexpectedly large WMD programs. The Clinton administration further elevated the rogue state concept in U.S. policy by asserting that the rogues, whose core group comprised Iran, Iraq, North Korea, and Libya, constituted a distinct category in the post–Cold War international system. The “rogue” rubric carried the dubious connotation of essentially crazy states not susceptible to deterrence and traditional cost-benefit diplomacy. Secretary of State Madeleine Albright told the Council on Foreign Relations in 1997 that “dealing with the rogue states is one of the greatest challenges of our time . . . because they are there with the sole purpose of destroying the system.”
 
But the Clinton administration’s translation of rogue state rhetoric into strategy exposed major liabilities of the term. The pejorative label was an American political rubric without standing in international law. And because it was analytically soft and quintessentially political, it was applied selectively and inconsistently. Syria, for example, a state with active WMD programs and links to terrorism, was then being wooed by the Clinton administration in the Middle East peace process and so was pointedly not referred to as a rogue state, whereas Cuba, which met none of the criteria, was included in the roster of rogue states because of the political clout of the Cuban émigré community. The definitional problem went further: How was one to categorize states that met some of the criteria, such as India and Pakistan after their 1998 nuclear tests? Another important reason that the term was so elastic was its focus solely on objectionable external behavior; the Clinton administration did not address odious actions within states, such as Burma, that violated international norms. Opponents of the administration soon appropriated the term for their own purposes. Thus, one conservative critic labeled China a rogue state because of its human rights abuses and nuclear cooperation with Pakistan and urged President Bill Clinton to cancel his 1998 state visit to Beijing. As with pornography, people know a rogue state when they see one.
 
The translation of the rogue state concept into policy sharply limited strategic flexibility. The assertion that these countries constituted a distinct class of states pushed policymakers toward adopting a one-size-fits-all strategy of comprehensive containment and isolation. Once a country was relegated to the “rogue” or “outlaw” category, critics viewed any deviation from hard-line containment and isolation as tantamount to appeasement. The rogue state strategy proved more an attitude than a coherent guide to policy. And in practice, the attitude came up against hard political realities—first in North Korea, where the threat posed by Pyongyang’s advanced nuclear program in 1994 necessitated negotiation, and later in Iran, where reformist president Mohammed Khatami’s surprise election in 1997 offered Washington a perceived opportunity for diplomatic engagement. Concluding that the category had become a political straitjacket, in 2000 the Clinton administration jettisoned the term “rogue state” in favor of the infelicitous “states of concern.” But the incoming George W. Bush administration pointedly restored it to the U.S. foreign-policy lexicon in accordance with what observers called its “ABC”—“anything but Clinton”—stance.
 
Al Qaeda’s terrorist attacks on 9/11 recast the American debate on rogue states. Bush administration officials argued that the threats to the United States in this new era were inextricably linked to the character of its adversaries—undeterrable terrorist groups and unpredictable rogue states. Accordingly, administration hard-liners insisted that merely changing the behavior of these states would no longer suffice because the bad behavior derived from their very nature. The proliferation of WMD capabilities to rogue states, in tandem with the sponsorship of terrorism by their unstable ruling regimes, created a deadly new “nexus.” The nightmare scenario was that rogue regimes could transfer nuclear, biological, or chemical weapons to their terrorist clients, who would have no moral or political compunctions about using them against the United States. This redefinition of the threat led to a radical change in U.S. strategy. Viewing Iraq through “the prism of 9/11,” in then–secretary of defense Donald Rumsfeld’s phrase, the administration made the decisive shift from a pre-9/11 strategy of containing regimes to a new strategy of undoing them.
 
The UN Security Council crisis leading up to the onset of the Iraq war in 2003 began as a debate about Iraq and Saddam Hussein but turned into a referendum on the United States and the legitimate exercise of American power. The rancorous, divisive debate was in sharp contrast to the international solidarity mobilized in the immediate aftermath of 9/11. The terrorist attacks ushered in a new era of vulnerability, but despite the constant refrain at the time that “everything has changed,” they did not alter the structure of international relations. To the contrary, they solidified that structure. Relations between the United States and its former Cold War adversaries Russia and China moved to their closest point since World War II.
 
Political scientist John Ikenberry has argued persuasively that the key to America’s international success during the Cold War was the embedding of U.S. power in international security and economic institutions, such as NATO and the World Bank. That made the exercise of American power more legitimate and less threatening to other states and fostered the perception of the United States as a benign superpower, even as it advanced American national interests. It also explains why the demise of the Soviet Union and the end of the bipolar Cold War system did not trigger the rise of a coalition of states to balance American power.
 
But as historian John Lewis Gaddis observes, “The rush to war in Iraq in the absence of a ‘first shot’ or ‘smoking gun’ left . . . a growing sense throughout the world there could be nothing worse than American hegemony if it was to be used in this way.” The perception of the United States as a rogue superpower, which had arrogated an unfettered right of military preemption, unleashed a diplomatic effort by France, Germany, and Russia to block the use of force against Iraq by withholding the legitimizing imprimatur of the United Nations.
 
At the heart of the dispute was the cardinal principle of sovereignty. President George H. W. Bush faced a far easier task building an international coalition for a showdown with Iraq in 1991 than his son did 12 years later. In the first gulf war, Security Council authorization and the forging of a broad multinational coalition to liberate Kuwait were diplomatically possible because Saddam Hussein had violated a universally supported international norm: State sovereignty is to be protected from external aggression. By contrast, in the bitter 2003 UN debate, the attainment of Security Council approval for military action was bound to rouse strong opposition rooted in that same international norm: Compelling Iraqi WMD disarmament through an externally imposed regime change would be a precedent-setting negation of state sovereignty.
 
In contrast to the change of regime in Iraq, the Libyan case offered the precedent of change in a regime. When Qaddafi announced that Libya was voluntarily terminating its covert WMD programs and submitting to intrusive international inspections to certify compliance, the Bush administration and its supporters claimed that he had been “scared straight” (as one analyst put it) by the regime-change precedent in Iraq. The Iraq war and the powerful video broadcast worldwide of Saddam Hussein being inspected for lice by a U.S. military medic after his capture were no doubt an important factor affecting the timing of Qaddafi’s WMD decision. It was a necessary but not sufficient condition for Libya’s WMD disarmament. The crux of the Libyan deal was the Bush administration’s willingness to eschew the objective of regime change and instead offer a tacit assurance of regime survival. In essence, if Qaddafi halted his objectionable activities in the areas of proliferation and terrorism, Washington would not press for a change of regime in Tripoli. Without such a credible security assurance, Qaddafi would have had no incentive to relinquish his WMD arsenal; to the contrary, the belief that he was targeted by the U.S. administration after Iraq regardless of any change in Libyan policy would have created a powerful incentive for him to accelerate his regime’s efforts to acquire unconventional weapons as a strategic deterrent.
 
The contrasting precedents set in Iraq and Libya have important implications for the nuclear crises with North Korea and Iran, but they also raise a fundamental question about the meaning of a term that has been central to the U.S. foreign-policy debate: “regime change.” The Iraq war reinforced the widespread but misleading connotation of regime change as a sharp split between old and new, and as something brought about by outsiders rather than insiders. The term is better viewed as embodying a dynamic process that occurs along a continuum. Total change—through war (Germany and Japan) or revolution (China and Iran)—that not only removes a regime’s leadership but also transforms governmental institutions is rare. More commonly, the degree of change is limited, as when a newly elected political party makes a significant policy shift, or when one leader supplants another in an authoritarian regime. Leadership is perhaps the key determinant of change, affecting its pace and extent, or indeed influencing whether it will be undertaken at all.
 
The most important instance of regime change in the latter half of the 20th century was accomplished in the Soviet Union under President Mikhail Gorbachev through neither revolution nor war. In 1989, diplomat George Kennan declared an end to the Cold War, arguing that the Soviet Union under Gorbachev had evolved from a revolutionary expansionist state into an orthodox great power. Gorbachev’s grand strategy—a form of regime change by internal evolution—was to integrate a transformed Soviet Union into the international order forged after World War II. The complementary U.S. strategy of the post–Cold War era has been to promote the integration of post-Soviet Russia into that international order.
 
Historically, the periods of greatest turmoil in the modern era have arisen from the emergence of expansionist great powers with unbounded ambition, such as Nazi Germany or Stalin’s Soviet Union, seeking the wholesale transformation of the international order. With the demise of the Soviet Union, the defining feature of contemporary international relations has been the absence of competition among the great powers that might bring with it the risk of major war. Although China’s meteoric rise and Russia’s uncertain political trajectory have prompted balance-of-power realists to question the long-term durability of this current condition, neither great power is mounting a frontal assault on the existing international order. Some commentators declared that Russia’s military intervention in Georgia in August 2008 marked the return of the Cold War. This development could alternatively be viewed as the reassertion of traditional Russian national interests. Though a State Department official called Russia a “revisionist” state after its move into Georgia, its revisionism is in the conventional tradition of a great power seeking to create a sphere of influence on its periphery. This stance is closer to the Monroe Doctrine than to the Comintern. To be sure, Russia’s new assertiveness carries risks of regional strife and inadvertent military escalation, but in contrast to its behavior during the Cold War, the Kremlin is not advancing an alternative vision of international order.
 
Operating beyond the bounds of international order is a diverse group of weak, isolated countries—ranging from Burma to Zimbabwe and from Belarus to North Korea—that defy global norms of behavior but do not threaten the stability of the entire system. How can these states be induced or compelled to comply with international norms? Through targeted strategies that create effective influence on their ruling regimes. The aim is to present each with a structured choice between the rewards of behavior change and the penalties of noncompliance. Of course, some outlaw states may still strongly resist this process of “resocialization” (to use political scientist Alexander George’s term).
 
In the case of Libya, the origins of Qaddafi’s strategic turnabout date to the mid-1990s, when Libya’s domestic economy was collapsing under the twin impacts of UN sanctions and low oil prices. With even the regime’s core constituencies under stress, Qaddafi’s hand was forced. The Libyan leader reportedly sided with the regime’s pragmatic technocrats, who argued that the country’s radical foreign policies (which had landed the “dangerous” Qaddafi on the cover of Newsweek in 1981) had become a costly liability. Bowing to “new realities,” Qaddafi even embraced economic globalization, declaring, “The world has changed radically . . . and being a revolutionary and progressive man, I have to follow this movement.”
 
Globalization—the driving force of the world economy—is a double-edged sword. Reintegration, especially for an oil-exporting state such as Libya, offers tangible benefits. But for these beleaguered regimes, opening up their countries and engaging in the global economy also carry the risk of political contagion that might threaten their survival. Dictators such as North Korea’s Kim Jong Il realize that a soft landing for their society would likely mean a hard landing for their regime. Since autarky is not a viable long-term alternative to integration, their strategy is essentially to muddle through, gaining the benefits of outside economic links while attempting to insulate themselves from the political consequences.
 
If these states can’t be induced to comply with international norms, they should be compelled to do so. By credibly threatening the interests of those who keep the regime in power—the military, security services, key ethnic groups, and other elites—the international community can leverage a change in behavior. Comprehensive sanctions, as evidenced by the decade-long UN experience in Iraq, have an indiscriminate negative impact on the civilian populace. By contrast, targeted sanctions, such as travel and financial restrictions, are directed at individuals, commercial entities, and organizations. In the Libyan case, the impact of multilateral sanctions on the regime’s power base ratcheted up the pressure on Qaddafi to alter course. When elite groups conclude that their country’s defiance of global norms is a threat to their own specific interests, they become what political scientist Bruce Jentleson has characterized as “transmission belts, carrying forward the coercive pressure on the regime to comply.” But such pressure can also be short-circuited (again, to use Jentleson’s metaphor). Take the case of Iran, where the financial windfall from the elevated price of oil permits Ahmadinejad to cover his regime’s economic mismanagement and buy off critics. Or the case of insular North Korea, where China, fearful of precipitating the collapse of the Kim Jong Il regime, has refused to exert its unique leverage on Pyongyang over the nuclear issue.
 
In offering a structured choice to these regimes, the United States must be prepared to take “yes” for an answer when one of them changes its behavior. Yet throughout the nuclear crisis with Iran, the Bush administration has sent a mixed message. Top officials have stuck to the familiar mantra “All options are on the table”—a clear reference to the possibility of military action. But to what end? Is the U.S. goal to change the behavior of this “axis of evil” member or to change its ruling regime? Iran faces profound societal contradictions and hard choices: Is the Islamic Republic an “ordinary” state that accepts the legitimacy of the international system, or a revolutionary state that rejects the norms of a system regarded by hard-liners as U.S.-dominated? But pushing Tehran to make the right choice also requires Washington to make a choice, to resolve its own policy contradiction. It must make clear, as it did with Libya, that the U.S. objective is to change the behavior of regimes, not replace their leaders. Because of the cardinal principle of state sovereignty, Washington will be hard pressed to win the support of Russia and China for meaningful sanctions on Iran if Moscow and Beijing believe that the United States means to overthrow the Iranian regime.
 
The promotion of a rules-based international order also requires that the United States not turn a blind eye to non-democratic allies, such as Egypt and Saudi Arabia, that are not pariahs along the lines of Burma and Zimbabwe but that also flout important international norms. To avoid charges of hypocrisy and double standards where competing foreign-policy interests are at stake, the United States must be willing to set a minimum bar for compliance by its allies and to pay the price when nations do not comply. Easier said than done, but that is the task facing U.S. policymakers.
 
Perhaps most important to America’s efforts to support international order is the need to reaffirm its own commitment to work through international institutions and abide by their norms. After 9/11, President Bush asserted that Washington would not be so constrained; it did not need “permission” to defend America. The Iraq war was the high-water mark of that unilateralism. Washington has since acknowledged that multilateralism conveys political legitimacy and that the involvement of other states provides practical utility. The embedding of U.S. power within international institutions would mark a return to what liberal internationalists view as America’s formula for success after World War II. The pressing challenge for the United States in the post-9/11 era of vulnerability is to tend to the national interest without calling into question the nation’s commitment to international norms of order.
 

About the Author

Robert S. Litwak is director of international security studies at the Woodrow Wilson Center. A former director for nonproliferation on the U.S. National Security Council staff, he is the author of Regime Change: U.S. Strategy Through the Prism of 9/11 (2007).
