5 July 2005 | ZNet / Boston Review
Thirty-five years ago I agreed, in a weak moment, to give a talk with the title “Language and Freedom.” When the time came to think about it, I realized that I might have something to say about language and about freedom, but the word “and” was posing a serious problem. There is a possible strand that connects language and freedom, and there is an interesting history of speculation about it, but in substance it is pretty thin. The same problem extends to my topic here, “universality in language and human rights.” There are useful things to say about universality in language and about universality in human rights, but that troublesome connective raises difficulties.
The only way to proceed, as far as I can see, is to say a few words about universality in language, and in human rights, with barely a hint about the possible connections, a problem still very much on the horizon of inquiry.
To begin with, what about universality in language? The most productive way to approach the problem, I think, is within the framework of what has been called “the biolinguistic perspective,” an approach to language that treats the capacity to acquire and use language as an aspect of human biology. This approach began to take shape in the early 1950s, much influenced by recent developments in mathematics and biology, and interacted productively with a more general shift of perspective in the study of mental faculties, commonly called “the cognitive revolution.” It would be more accurate, I think, to describe it as a second cognitive revolution, reviving and extending important insights and contributions of the cognitive revolution of the 17th and 18th centuries, which had regrettably been forgotten, and—despite some interesting historical research on rationalist and Romantic theories of language and mind—are still little known.
In the 1950s, the study of language and mind was commonly considered part of the behavioral sciences. As the term indicates, the object of inquiry was taken to be behavior, and in linguistics, also its products: texts, perhaps a corpus elicited from native informants. Linguistic theory consisted of procedures of analysis, primarily segmentation and classification, guided by limited assumptions about structural properties and their arrangement. The prominent American theoretician Martin Joos hardly exaggerated in a 1955 exposition when he identified the “decisive direction” for the study of language as the decision that language can be “described without any preexistent scheme of what a language must be.” Prevailing approaches in the behavioral sciences were generally similar. No one, of course, literally believed in the incoherent notion of a “blank slate.” But it was common to suppose that apart from some initial delimitation of elementary properties detected in the environment (a “quality space,” in W.V. Quine’s highly influential framework, which assumed an innate human ability to detect colors, say, and order them as more or less similar), undifferentiated learning mechanisms of some kind account for what organisms know and do, humans included.
The biolinguistic approach took as its object of inquiry not behavior and its products but rather the internal cognitive systems, including computational mechanisms, that enter into action and interpretation—and, at a deeper level, the basis in our biological nature for the growth and development of these internal systems. The goal was to discover what Juan Huarte, in the 16th century, described as the essential property of human intelligence: the capacity of the human mind to “engender within itself, by its own power, the principles on which knowledge rests”—ideas that were developed in important ways in the years that followed.
For language, “the principles on which knowledge rests” are those of the attained state of the language faculty, an internalized language distinct from the culturally specific symbolic systems (English, Spanish, Guarani) to which the term “language” is applied in informal usage. The knowledge that rests on these internal principles covers a wide range, from sound to structure to meaning. In even the most elementary cases, what is known is quite intricate. To take a word that interested British empiricists, consider the word river, a “common notion,” in 17th-century terms, part of our innate knowledge. Thomas Hobbes suggested that rivers are mentally individuated by place of origin. But while there is some truth to the observation, it is not fully accurate and only scratches the surface of our intuitive understanding of the concept. Thus, the Charles River would remain the very same river under quite extreme changes, and would not be a river at all under very slight changes. It would remain the Charles River if its course were reversed (as Stalin planned to do with the Volga), if it were divided into separate streams that converged in some new place, or if any H2O that happened to be in it were replaced by chemicals from an upstream manufacturing plant. On the other hand, it would no longer be a river at all if it were directed between fixed boundaries and used for shipping freight (in which case it would be a canal, not a river) or if its surface were hardened by some near-undetectable physical change, a line were painted down the middle, and it came to be used for driving to Boston (in which case it would be a highway).
As we proceed, we find much more intricate properties, varying in complex ways with mentally constructed circumstances, no matter how simple the words we investigate. Such commonplace facts undermine an approach to reference—more accurately, the act of referring, using words to talk about things and events in the world—that is based on some mystical and fixed word-object relation. Insights about these matters were developed from Aristotle through British empiricism, but most have been lost. Even the most elementary human concepts appear to be entirely different from anything found in animal symbolic or communicative behavior, a significant problem for evolutionary theory, one of several. Problems mount very rapidly when we move from words to expressions formed from them. What human beings know is remarkably intricate and subtle.
One essential task of inquiry is to determine the principles on which such knowledge rests for the widest variety of possible human languages. A deeper problem is to discover what Huarte called “the power to engender” these principles of internal language: in current terms, the virtually uniform biological endowment that constitutes the human language faculty and enables the acquisition of the range of internal languages. The power to engender an internal language is the topic of “universal grammar,” adapting a traditional term to a new context. The universal properties of language captured by universal grammar constitute, in effect, the genetic component of the language faculty.
A significant insight of the first cognitive revolution was that properties of the world that are informally called mental may involve unbounded capacities of a finite organ, the “infinite use of finite means,” in Wilhelm von Humboldt’s phrase. In a rather similar vein, Hume had recognized that our moral judgments are unbounded in scope, and must be founded on general principles that are part of our nature though they are beyond our “original instincts.” That observation poses Huarte’s problem in a different domain, where we might find part of the thin thread that links the search for cognitive and moral universals.
By mid-20th century, it had become possible to face such problems in more substantive ways than before. By then, there was a clear understanding, from the study of recursive functions, of finite generative systems with unbounded scope—which could be readily adapted to the reframing and investigation of some of the traditional questions that had necessarily been left obscure—though only some, it is important to stress. Humboldt referred to the infinite use of language, quite a different matter from the unbounded scope of the finite means that characterizes language, where a finite set of elements yields a potentially infinite array of discrete expressions: discrete, because there are six-word sentences and seven-word sentences, but no 6.2-word sentences; infinite, because there is no longest sentence (append “I think that” to the start of any sentence). Another influential factor in the renewal of the cognitive revolution was the work of ethologists, then just coming to be more widely known, with their concern for “the innate working hypotheses present in subhuman organisms” (Nikolaas Tinbergen) and the “human a priori” (Konrad Lorenz), which should have much the same character. That framework too could be adapted to the study of human cognitive organs (for example, the language faculty) and their genetically determined nature, which constructs experience and guides the general path of development, as in other aspects of the growth of organisms, including the human visual, circulatory, and digestive systems, among others.
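Returning to discrete infinity for a moment: the point can be made concrete in a few lines of code. The following is a minimal sketch of the “infinite use of finite means”; the toy lexicon and the single recursive rule are invented for illustration, and no claim is made about any actual grammatical analysis.

```python
# A toy grammar: a finite lexicon plus one recursive rule yields an
# unbounded, discrete set of sentences. The lexicon and rule are
# invented placeholders, not an actual linguistic analysis.

BASE = "the river flows"  # one of finitely many elementary sentences

def sentence(depth):
    """Return a sentence with `depth` recursive embeddings."""
    s = BASE
    for _ in range(depth):
        s = "I think that " + s  # the recursive step: no longest sentence
    return s

for d in range(3):
    s = sentence(d)
    # word counts are whole numbers: the array of expressions is discrete
    print(len(s.split()), "words:", s)
```

Finite means (a tiny lexicon and one rule) yield an unbounded array of expressions: every output has a whole number of words, and for any output there is a longer one.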
Meanwhile, efforts to sharpen and refine the procedural approaches of structural linguistics ran into serious difficulties, revealing what appear to be intrinsic inadequacies in their capacity to account for the scope of human language, and the complex and subtle knowledge of speakers. It became increasingly clear that even the simplest elements of language—and surely more complex ones—do not have the “beads-on-a-string” property that is required for approaches based on segmentation and classification. Rather, they relate much more indirectly to phonetic form. Their nature and properties are fixed within the internal language, the computational system that determines the unbounded range of expressions. These expressions, in turn, can be regarded as “instructions” to other systems that are used for mental operations, as well as for the production and interpretation of external signals. In the behavioral sciences more generally, closer study of the postulated mechanisms of learning also revealed fundamental inadequacies, and soon questions were arising within the disciplines about whether even their core concepts could be sustained.
The natural conclusion seemed to be that the internal language attained by a competent speaker—the integrated system of rules and principles from which the expressions of the language can be derived—has roughly the character of a scientific theory. The child must somehow select the internal language from the flux of experience. The problem is similar to what Charles Sanders Peirce, in his inquiries into the nature of scientific discovery, had called abduction. And as in the case of the sciences, the task is impossible without what Peirce called a “limit on admissible hypotheses” that permits only certain theories to be entertained, but not infinitely many others compatible with relevant data. In the language case, it appeared that universal grammar must impose a format for rule systems that is sufficiently restrictive so that candidate languages considered and tested against the linguistic data available to the child are “scattered,” and only a small number can even be considered in the course of language acquisition. It follows that the format must be highly articulated, and specific to language. The most challenging theoretical problem in linguistics was that of discovering the principles of universal grammar, which determine the choice of hypotheses, the accessible internal languages.
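Peirce’s “limit on admissible hypotheses” lends itself to a schematic sketch. In the toy example below (the “grammars” and data are invented for illustration), the learner never searches an open-ended space of hypotheses; it tests a small, antecedently given set of candidates against experience:

```python
# Schematic sketch of a restricted hypothesis space in acquisition:
# only a few candidate grammars are entertained at all, and the data
# select among them. Toy grammars and data are invented placeholders.

candidate_grammars = {
    # each "grammar" is just a predicate saying which strings it admits
    "verb-final": lambda s: s.split()[-1].endswith("-V"),
    "verb-initial": lambda s: s.split()[0].endswith("-V"),
}

def select(grammars, data):
    """Keep only the candidates compatible with every observed expression."""
    return [name for name, admits in grammars.items()
            if all(admits(s) for s in data)]

observed = ["mary-N john-N saw-V", "dogs-N barked-V"]  # toy 'experience'
print(select(candidate_grammars, observed))  # -> ['verb-final']
```

On this early view, the substance of universal grammar lies precisely in how the candidate set is specified in advance and how sparsely it is “scattered.”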
At the same time, it was also recognized that for language, as for other biological organs, a still more challenging problem lies on the horizon: to discover the laws that determine possible successful mutation and the nature of complex organisms. Investigation of such factors seemed too remote to merit much attention, though even some of the earliest work was implicitly guided by such concerns, which bear quite directly on universality in language: insofar as these factors enter into growth and development, less is attributed to universal grammar as a language-specific property—and, incidentally, the study of evolution of language becomes more feasible, for obvious reasons.
By the early 1980s, a substantial shift of perspective within linguistics reframed the basic questions considerably, abandoning entirely the format conception of linguistic theory in favor of an approach that sought to limit attainable internal languages to a finite set, aside from lexical choices. As a research program, this shift has been highly successful, yielding an explosion of empirical inquiry into a wide range of typologically varied languages, posing new theoretical questions that could scarcely have been formulated before, often providing at least partial answers as well, and revitalizing the related study of language acquisition and processing. The shift of perspective also removed some basic conceptual barriers to serious inquiry into deeper principles of the growth and development of language. In this revised “principles and parameters” conception, language acquisition is dissociated from the format of rule systems, and nothing compels the conclusion that the innate language faculty must be highly articulated and specific to language in order to restrict the space of admissible hypotheses. That opens new paths to studying universality in language.
It had been recognized from the origins of modern biology that general aspects of structure and development constrain the growth of organisms and their evolution. By now such constraints have been adduced for a wide range of problems of development and evolution, from cell division to optimization of structure and function of cortical networks.
Assuming that language has the same general properties as other biological systems, we should therefore be seeking three factors that shape the growth of language in the individual (a schematic sketch follows the list):
1. Genetic factors, the topic of universal grammar. These interpret part of the environment as linguistic experience and determine the general course of development to the languages attained.
2. Experience, which permits variation within a fairly narrow range.
3. Principles not specific to the faculty of language, including principles of efficient computation, which would be expected to be of particular significance for systems such as language, determining the general character of attainable languages.
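The sketch promised above: a deliberately simplified picture of how a finite space of attainable languages (factor 1) is narrowed by experience (factor 2). The binary parameter names are hypothetical placeholders standing in for whatever the real ones are.

```python
# A highly simplified picture of "principles and parameters": with n
# binary parameters the space of attainable internal languages is
# finite (2**n settings, aside from lexical choices), and experience
# merely fixes the switches. Parameter names are hypothetical; factor 3
# (principles of efficient computation) is not modeled here.

from itertools import product

PARAMETERS = ["head-initial", "null-subject", "wh-movement"]

# the full finite space of grammars: every assignment of True/False
all_grammars = [dict(zip(PARAMETERS, values))
                for values in product([True, False], repeat=len(PARAMETERS))]
print(len(all_grammars), "possible settings")  # 2**3 = 8, a finite space

# "experience" is reduced here to evidence about some of the switches
evidence = {"head-initial": True, "null-subject": False}
compatible = [g for g in all_grammars
              if all(g[p] == v for p, v in evidence.items())]
print(len(compatible), "settings remain after this evidence")  # 2
```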
At this point the discussion would have to become more technical than is possible here, but I think it is fair to say that in recent years there has been considerable progress in moving toward principled explanation in terms of third-factor considerations, substantially sharpening the question of the specific properties that determine the nature of language—in one form or another, the core problem of the study of language since its origins millennia ago, and now taking quite new forms.
With each step toward principled explanation in these terms, we gain a clearer grasp of the universals of language. It should be kept in mind, however, that any such progress still leaves unresolved problems that have been raised for hundreds of years. Among these are the mysterious problems of the creative and coherent ordinary use of language, a core problem of Cartesian science.
* * *
We are now moving to domains of will and choice and judgment, and the thin strands that may connect what seems within the range of scientific inquiry to essential problems of human life, in particular vexed questions about universal human rights. One possible way to draw connections is by proceeding along the lines of Hume’s remarks that I mentioned earlier: his observation that the unbounded range of moral judgments—like the unbounded range of linguistic knowledge—must be founded on general principles that are part of our nature though they lie beyond our “original instincts,” which elsewhere he took to include the “species of natural instincts” on which knowledge and belief are grounded.
In recent years, there has been intriguing work in moral philosophy and experimental cognitive science that carries these ideas forward, investigating what seem to be deep-seated moral intuitions that often have a very surprising character, in invented cases, and that suggest the operation of internal principles well beyond anything that could be explained by training and conditioning. To illustrate, I will take a real example that carries us directly to the issue of universality of human rights.
In 1991, the chief economist of the World Bank wrote an internal memo on pollution, in which he demonstrated that the Bank should be encouraging migration of polluting industries to the poorest countries. The reason is that “measurement of the costs of health impairing pollution depends on the foregone earnings from increased morbidity and mortality,” so it is rational for “health impairing pollution” to be sent to the poorest countries, where mortality is highest and wages are lowest. Other factors lead to the same conclusion: for example, the fact that “aesthetic pollution concerns” are more “welfare enhancing” among the rich. He pointed out, accurately, that the logic of his memo is “impeccable,” and that any “moral reasons” or “social concerns” that might be adduced “could be turned around and used more or less effectively against every Bank proposal for liberalization,” so they presumably cannot be relevant.
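The “impeccable logic” is ordinary cost-benefit arithmetic. A hedged reconstruction (the formula is a simplification, and the wage figures below are invented for illustration, not taken from the memo):

```latex
% Simplified reconstruction of the memo's valuation logic;
% the wage figures in the commentary are invented for illustration.
\[
  \text{cost of pollution} \;\approx\; \text{foregone earnings}
  \;=\; w \times \Delta m
\]
```

Here $w$ is the prevailing wage and $\Delta m$ the added morbidity and mortality. If $w$ is, say, $30,000 in a rich country and $1,000 in a poor one, identical health damage “costs” thirty times less in the poor country, so relocating the pollution raises measured aggregate welfare. Given the premise that the cost of a life or an illness is measured by the wage it forgoes, the conclusion follows.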
The memo was leaked and elicited a storm of protest, typified by the reaction of Brazil’s secretary of the environment, who wrote him a letter saying that “your reasoning is perfectly logical but totally insane.” The secretary was fired, while the author of the memo became treasury secretary under President Clinton and is now the president of Harvard University.
The reaction led to evasions and denials that we can ignore. What is relevant here is the virtual unanimity of the moral judgment that the reasoning is insane, even if logical. That merits a closer look, now turning to the modern history of human-rights doctrines.
The standard codification of human rights in the modern period is the Universal Declaration of Human Rights (UD), adopted in December 1948 by almost all nations, at least in principle. The UD reflected a very broad cross-cultural consensus. All of its components were given equal status, including “anti-torture rights,” socioeconomic rights, and others, such as those enumerated in Article 25:
Everyone has the right to a standard of living adequate for the health and well-being of himself and of his family, including food, clothing, housing and medical care and necessary social services, and the right to security in the event of unemployment, sickness, disability, widowhood, old age or other lack of livelihood in circumstances beyond his control.

These provisions have been reaffirmed in enabling conventions of the UN General Assembly and in international agreements on the right to development, in almost the same words.
It seems reasonably clear that this formulation of universal human rights rejects the impeccable logic of the chief economist of the World Bank, if not as insane then at least as profoundly immoral—which was, in fact, the virtually universal judgment, at least as far as it was publicly expressed.
The word “virtually” must not be overlooked. As is well known, Western culture condemns some nations as “relativists,” who interpret the UD selectively, rejecting components they do not like. There has been great indignation about “Asian relativists,” or the unspeakable communists, who descend to this degraded practice. Less noticed is that one of the leaders of the relativist camp is also the leader of the self-designated “enlightened states,” the world’s most powerful state. We see examples almost daily, though “see” is perhaps the wrong word, since we see them without noticing them.
To illustrate, let’s go back to March 1. There were lead stories in the press about the release of the State Department’s annual report on human rights around the world. The spokesperson at the news conference was Paula Dobriansky, the undersecretary of state for global affairs. She affirmed that “promoting human rights is not just an element of our foreign policy; it is the bedrock of our policy and our foremost concern.” But there is a bit more to the story. Dobriansky was the deputy assistant secretary of state for human rights and humanitarian affairs in the Ronald Reagan and George H.W. Bush administrations, and in that capacity she sought to dispel what she called “myths” about human rights, the most salient being the myth that so-called “‘economic and social rights’ constitute human rights.” She denounced the efforts to obfuscate human-rights discourse by introducing these spurious rights—which are entrenched in the UD and formulated through U.S. initiative, but which the U.S. government explicitly rejects, and increasingly the entire West rejects, within the framework of the neoliberal doctrines on which the chief economist of the World Bank was relying.
I should stress that it is the U.S. government that rejects these provisions of the UD. The population strongly disagrees. One current illustration is the federal budget that was recently announced, along with a study of public reactions to it carried out by the world’s most prestigious institution for the study of public opinion. The public calls for sharp cuts in military spending along with sharply increased social spending: education, medical research, job training, conservation and renewable energy, as well as increased spending for the UN and for economic and humanitarian aid, and the reversal of President Bush’s tax cuts for the wealthy. Government policy is dramatically the opposite in every respect. Studies of public opinion, which regularly demonstrate this sharp divide, are rarely even reported, so the public is not only removed from the arena of policy formation but is also kept unaware of the state of public opinion.
There is much international concern about the “twin deficits” of the United States, the trade and budget deficits. Closely related is a third deficit: the growing democratic deficit, not just in the United States but in the West generally. It is little discussed because it is welcomed by wealth and power, which have every reason to want the public largely removed from policy choices and implementation, a matter that should be of considerable concern, quite apart from its relation to the universality of human rights.
It is unfair to focus on Dobriansky. Her position is standard. UN Ambassador Jeane Kirkpatrick described the socioeconomic provisions of the UD as “a letter to Santa Claus . . . Neither nature, experience, nor probability informs these lists of ‘entitlements,’ which are subject to no constraints except those of the mind and appetite of their authors.” Essentially the same view was expressed in 1990 by the U.S. representative to the UN Commission on Human Rights, Ambassador Morris Abram, explaining Washington’s unilateral veto of the UN resolution on the “right to development,” which virtually repeated the socioeconomic provisions of the UD. These are not rights, Abram informed the Commission. They yield conclusions that “seem preposterous.” Such ideas are “little more than an empty vessel into which vague hopes and inchoate expectations can be poured,” and even a “dangerous incitement.” The fundamental error of the alleged “right to development” is that it presupposes that Article 25 of the UD actually means what it clearly says, and is not a mere “letter to Santa Claus.”
Recently, Condoleezza Rice praised Jeane Kirkpatrick as an exemplary model when she announced the appointment of John Bolton as ambassador to the United Nations. Bolton has been clear and forthright in expressing his attitude toward the United Nations: “There is no United Nations,” he said. “When the United States leads, the United Nations will follow. When it suits our interests to do so, we will do so. When it does not suit our interests, we will not.” That position is at the extreme of a rather narrow elite consensus, which is opposed by the overwhelming majority of the public. Public support for the UN is so strong that a majority even thinks that the United States should give up the Security Council veto and accept majority decisions. But again, the democratic deficit prevails.
The principle of universality arises in other connections too. One instructive example occupied the World Court for several years. After the 1999 bombing of Serbia, a group of international lawyers presented the International Criminal Tribunal for the Former Yugoslavia with charges against NATO, relying on documentation by the major human-rights organizations and admissions by the NATO command. The prosecutors refused to consider the matter, in violation of tribunal rules, stating that they relied on NATO’s good faith. Yugoslavia then took the matter to the World Court. The United States alone withdrew from the proceedings. The reason was that Yugoslavia had invoked the Genocide Convention, which the United States had signed after 40 years, but with a reservation that it does not apply to the United States. Apparently, Washington retains the unilateral right to carry out genocide. The court, correctly, agreed with this argument, and the United States was excused.
That has happened before, in ways that are highly relevant today. John Negroponte was recently appointed as the first director of national intelligence. Like Bolton, he has credentials for the position. In the 1980s, during the first reign of the current incumbents in Washington or their mentors, he was ambassador to Honduras, where he presided over the world’s largest CIA station, not because Honduras is so important on the world stage, but because he was supervising the camps in which the American-run terror army was trained and armed for the war against Nicaragua—which was no small matter. If Nicaragua had adopted our norms, it would have responded with terror attacks within the United States, in self-defense; in this case, authentic self-defense. Instead, Nicaragua pursued the peaceful means required by international law. It brought the U.S. attack to the World Court, where its case was presented by the Harvard law professor Abram Chayes. The court bent over backward to accommodate Washington, even though the United States refused to appear. The court eliminated a large part of the case that Chayes presented, because when the United States had accepted World Court jurisdiction in 1946, it entered a reservation excluding itself from multilateral treaties, notably the UN Charter, which condemns the unauthorized use of force as criminal—the “supreme international crime,” in the words of the Nuremberg Tribunal.
* * *
The court therefore kept to bilateral United States–Nicaragua treaties and customary international law, but even on those narrow grounds it charged Washington with “unlawful use of force” (in lay terms, international terrorism) and ordered it to terminate the crimes and pay substantial reparations, which would go far beyond overcoming the debts that are strangling the country, accrued during the American war. The Security Council affirmed the court’s judgment in two resolutions vetoed by the United States, which immediately escalated the attack, leaving the country utterly wrecked, with a death toll that in per capita terms would be 2.5 million if it had happened in the United States, more than all American deaths in all the wars in its history. The country has so declined that 60 percent of children under age two are suffering from severe malnutrition, with probable brain damage. All of this is deep in the memory hole of the elite intellectual culture, so deep that we can read editorials in the last few days puzzling over “anti-American attitudes” in Nicaragua after the “failed revolution.”
There are several relevant conclusions to be drawn from this case. One is that it is another illustration of Washington’s self-exemption from international law, including humanitarian law based on universal principles of human rights, one with very grim human consequences. The example also reveals again the self-exemption of the elite intellectual culture from responsibility for our crimes, a conclusion reinforced by the reaction to the fact that Washington has just appointed to the post of the world’s leading anti-terrorism czar a person who qualifies rather well as a condemned international terrorist for his critical role in major atrocities. Orwell would not have known whether to laugh or weep.
The United States has refused to ratify most of the enabling conventions that were passed by the General Assembly to implement the UD. More accurately, it has accepted none of them, to my knowledge, because the few cases of ratification are accompanied by reservations that exclude the United States. That includes the anti-torture conventions that have stirred up a good deal of recent debate. There was an important article on the matter in the journal of the American Academy of Arts and Sciences by the distinguished constitutional-law specialist Sanford Levinson. Joining most others, he condemned the Bush administration’s Justice Department, including the recently appointed attorney general, for having articulated “a view of presidential authority that is all too close to the power that Schmitt was willing to accord his own Führer”—referring to Carl Schmitt, the leading German philosopher of law during the Nazi period, whom Levinson describes as “the true éminence grise of the [Bush] administration.” Levinson nevertheless offers some defense of the Justice Department’s authorization of torture. He points out that when the Senate ratified the UN Convention Against Torture and Other Cruel, Inhuman, or Degrading Treatment or Punishment, it “offered what one might call a more ‘interrogator-friendly’ definition of torture than that adopted by the UN negotiators.” And the unilateral American definition does go some way toward permitting the practices that have recently enraged the world and elicited much commentary here.
It is depressingly easy to continue, but I will end with one last observation about the current scene. A few months ago I took part in a meeting at Hope Church in downtown Boston called by CRISPAZ, commemorating the 25th anniversary of the assassination of Archbishop Oscar Romero of El Salvador, a “voice for the voiceless,” murdered by security forces backed by the United States. Romero was assassinated while performing mass, shortly after sending President Carter an eloquent letter pleading with him not to send aid to the brutal military junta in El Salvador, which “will undoubtedly sharpen the injustice and the repression inflicted on the organized people, whose struggle has often been for respect for their most basic human rights.” State terror increased, with constant and decisive American support. The hideous decade ended with the murder of six leading Latin American intellectuals, who were also Jesuit priests, by an elite battalion armed and trained by the United States that had already compiled a shocking record of atrocities, targeting mostly the usual victims: peasants, working people, priests and lay workers, anyone connected even loosely to “the people’s organizations fighting to defend their most fundamental human rights.”
CRISPAZ was one of the mostly church-based organizations formed after the Romero assassination to support those fighting to defend their most fundamental human rights. Their actions broke entirely new paths in many centuries of Western violence: by living with the victims, helping them, hoping that a white face might protect them from the wrath of the American-backed state terrorist forces.
I had the privilege of sharing the platform with Mirna Perla, a Salvadoran supreme-court justice who is also the widow of Herbert Anaya—once the leading human rights activist in El Salvador—and who is attempting to continue his work under terrible conditions. Anaya was imprisoned and tortured by the American-imposed government, then assassinated by the same hands that murdered the archbishop and the leading Jesuit intellectuals, along with tens of thousands of the usual victims.
In a society that valued its freedom, it would be unnecessary to recount any of this, because it would be taught in the schools and well known to everyone, and we would be commemorating the 25th anniversary of the assassination of the archbishop, and the 15th anniversary of the assassination of the Jesuit intellectuals, who were also “voices for the voiceless.” And we would be reacting the same way to the continuing atrocities by military forces armed and trained by Washington—for example, in Colombia, for many years the leading human-rights violator in the hemisphere and through those years the leading recipient of U.S. military aid and training, a more general correlation well established in scholarship. Last year, Colombia apparently maintained its record of killing more labor activists than the rest of the world combined. A few months ago, the military reportedly broke into the most important of the towns that had declared themselves zones of peace and murdered one of its founders and others, including young children. I happened to meet this leader not long ago, on a visit arranged by Father Javier Giraldo, the courageous priest who heads the church-based Justice and Peace Center, himself targeted for assassination and withdrawn from the country by the Jesuit order, though he insisted on returning to his human-rights work.
Again, all of this should be too familiar even to mention. But little is known outside the circles of people like CRISPAZ, who are authentically devoted to defending universal human rights.
I mention these few examples so that we remember that we are not merely engaged in seminars on abstract principles, or discussing remote cultures that we do not comprehend. We are speaking of ourselves and of the moral and intellectual values of the communities in which we live. And if we do not like what we see when we look into the mirror, we have ample opportunity to do something about it.
Noam Chomsky is a professor of linguistics at MIT and the author of, most recently, Hegemony or Survival. This essay was adapted from a talk sponsored by MIT's Program on Human Rights and Justice.