Postmodernism and truth

Here is a story you probably haven’t heard, about how a team of American researchers
inadvertently introduced a virus into a third world country they were studying.(1)
They were experts in their field, and they had the best intentions; they thought
they were helping the people they were studying, but in fact they had never
really seriously considered whether what they were doing might have ill effects.
It had not occurred to them that a side-effect of their research might be damaging
to the fragile ecology of the country they were studying. The virus they introduced
had some dire effects indeed: it raised infant mortality rates, led to a general
decline in the health and wellbeing of women and children, and, perhaps worst
of all, indirectly undermined the only effective political force for democracy
in the country, strengthening the hand of the traditional despot who ruled the
nation. These American researchers had something to answer for, surely, but
when confronted with the devastation they had wrought, their response was frustrating,
to say the least: they still thought that what they were doing was, all things
considered, in the interests of the people, and declared that the standards
by which this so-called devastation was being measured were simply not appropriate.
Their critics, they contended, were trying to impose “Western” standards in
a cultural environment that had no use for such standards. In this strange defense
they were warmly supported by the country’s leaders–not surprisingly–and little
was heard–not surprisingly–from those who might have been said, by
Western standards, to have suffered as a result of their activities.


These researchers were not biologists intent on introducing new strains of
rice, nor were they agri-business chemists testing new pesticides, or doctors
trying out vaccines that couldn’t legally be tested in the U.S.A. They were
postmodernist science critics and other multiculturalists who were arguing,
in the course of their professional researches on the culture and traditional
“science” of this country, that Western science was just one among many equally
valid narratives, not to be “privileged” in its competition with native traditions
which other researchers–biologists, chemists, doctors and others–were eager
to supplant. The virus they introduced was not a macromolecule but a meme (a
replicating idea): the idea that science was a “colonial” imposition, not a
worthy substitute for the practices and beliefs that had carried the third-world
country to its current condition. And the reason you have not heard of this
particular incident is that I made it up, to dramatize the issue and to try
to unsettle what seems to be current orthodoxy among the literati about
such matters. But it is inspired by real incidents–that is to say, true reports.
Events of just this sort have occurred in India and elsewhere, reported, movingly,
by a number of writers, among them:


Meera Nanda, “The Epistemic Charity of the Social Constructivist Critics of Science and Why the Third World Should Refuse the Offer,” in N. Koertge, ed., A House Built on Sand: Exposing Postmodernist Myths about Science, Oxford University Press, 1998, pp. 286-311.


Reza Afshari, “An Essay on Islamic Cultural Relativism in the Discourse of Human Rights,” Human Rights Quarterly, 16, 1994, pp. 235-76.


Susan Okin, “Is Multiculturalism Bad for Women?” Boston Review, October/November 1997, pp. 25-28.


Pervez Hoodbhoy, Islam and Science: Religious Orthodoxy and the Battle for Rationality, London and New Jersey: Zed Books, 1991.


My little fable is also inspired by a wonderful remark of E. O. Wilson, in
Atlantic Monthly a few months ago: “Scientists, being held responsible
for what they say, have not found postmodernism useful.” Actually, of course,
we are all held responsible for what we say. The laws of libel and slander,
for instance, exempt none of us, but most of us–including scientists in many
or even most fields–do not typically make assertions that, independently of
libel and slander considerations, might bring harm to others, even indirectly.
A handy measure of this fact is the evident ridiculousness we discover in the
idea of malpractice insurance for . . . literary critics, philosophers, mathematicians,
historians, cosmologists. What on earth could a mathematician or literary critic
do, in the course of executing her professional duties, that might need the security
blanket of malpractice insurance? She might inadvertently trip a student in
the corridor, or drop a book on somebody’s head, but aside from such
outré side-effects, our activities are paradigmatically innocuous.
One would think. But in those fields where the stakes are higher–and more direct–there
is a longstanding tradition of being especially cautious, and of taking particular
responsibility for ensuring that no harm results (as explicitly honored in the
Hippocratic Oath). Engineers, knowing that thousands of people’s safety may
depend on the bridge they design, engage in focussed exercises with specified
constraints designed to determine that, according to all current knowledge,
their designs are safe and sound. Even economists–often derided for the risks
they take with other people’s livelihoods–when they find themselves
in positions to endorse specific economic measures considered by government
bodies or by their private clients, are known to attempt to put a salutary strain
on their underlying assumptions, just to be safe. They are used to asking themselves,
and to being expected to ask themselves: “What if I’m wrong?” We others seldom
ask ourselves this question, since we have spent our student and professional
lives working on topics that are, according both to tradition and common sense,
incapable of affecting any lives in ways worth worrying about. If my topic is
whether or not Vlastos had the best interpretation of Plato’s Parmenides
or how the wool trade affected imagery in Tudor poetry, or what the best version
of string theory says about time, or how to recast proofs in topology in some
new formalism, if I am wrong, dead wrong, in what I say, the only damage I am
likely to do is to my own scholarly reputation. But when we aspire to have a
greater impact on the “real” (as opposed to “academic”) world– and many philosophers
do aspire to this today–we need to adopt the attitudes and habits of these
more applied disciplines. We need to hold ourselves responsible for what we
say, recognizing that our words, if believed, can have profound effects for
good or ill.


When I was a young untenured professor of philosophy, I once received a visit
from a colleague from the Comparative Literature Department, an eminent and
fashionable literary theorist, who wanted some help from me. I was flattered
to be asked, and did my best to oblige, but the drift of his questions about
various philosophical topics was strangely perplexing to me. For quite a while
we were getting nowhere, until finally he managed to make clear to me what he
had come for. He wanted “an epistemology,” he said. An epistemology.
Every self-respecting literary theorist had to sport an epistemology that season,
it seems, and without one he felt naked, so he had come to me for an epistemology
to wear–it was the very next fashion, he was sure, and he wanted the dernier
cri in epistemologies. It didn’t matter to him that it be sound, or defensible,
or (as one might as well say) true; it just had to be new and different
and stylish. Accessorize, my good fellow, or be overlooked at the party.


At that moment I perceived a gulf between us that I had only dimly seen before.
It struck me at first as simply the gulf between being serious and being frivolous.
But that initial surge of self-righteousness on my part was, in fact, a naive
reaction. My sense of outrage, my sense that my time had been wasted by this
man’s bizarre project, was in its own way as unsophisticated as the reaction
of the first-time theater-goer who leaps on the stage to protect the heroine
from the villain. “Don’t you understand?” we ask incredulously. “It’s
make-believe. It’s art. It isn’t supposed to be taken literally!”
Put in that context, perhaps this man’s quest was not so disreputable after
all. I would not have been offended, would I, if a colleague in the Drama Department
had come by and asked if he could borrow a few yards of my books to put on the
shelves of the set for his production of Tom Stoppard’s play, Jumpers.
What if anything would be wrong in outfitting this fellow with a snazzy set
of outrageous epistemological doctrines with which he could titillate or confound
his colleagues?


What would be wrong would be that since this man didn’t acknowledge the gulf,
didn’t even recognize that it existed, my acquiescence in his shopping spree
would have contributed to the debasement of a precious commodity, the erosion
of a valuable distinction. Many people, including both onlookers and participants,
don’t see this gulf, or actively deny its existence, and therein lies the problem.
The sad fact is that in some intellectual circles, inhabited by some of our
more advanced thinkers in the arts and humanities, this attitude passes as a
sophisticated appreciation of the futility of proof and the relativity of all
knowledge claims. In fact this opinion, far from being sophisticated, is the
height of sheltered naiveté, made possible only by flatfooted ignorance
of the proven methods of scientific truth-seeking and their power. Like many
another naif, these thinkers, reflecting on the manifest inability of their
methods of truth-seeking to achieve stable and valuable results, innocently
generalize from their own cases and conclude that nobody else knows how
to discover the truth either.


Among those who contribute to this problem, I am sorry to say, is my good
friend Dick Rorty. Richard Rorty and I have been constructively disagreeing
with each other for over a quarter of a century now. Each of us has taught the
other a great deal, I believe, in the reciprocal process of chipping away at
our residual points of disagreement. I can’t name a living philosopher from
whom I have learned more. Rorty has opened up the horizons of contemporary philosophy,
shrewdly showing us philosophers many things about how our own projects have
grown out of the philosophical projects of the distant and recent past, while
boldly describing and prescribing future paths for us to take. But there is
one point over which he and I do not agree at all–not yet–and that concerns
his attempt over the years to show that philosophers’ debates about Truth and
Reality really do erase the gulf, really do license a slide into some form of
relativism. In the end, Rorty tells us, it is all just “conversations,” and
there are only political or historical or aesthetic grounds for taking one role
or another in an ongoing conversation.


Rorty has often tried to enlist me in his campaign, declaring that he could
find in my own work one explosive insight or another that would help him with
his project of destroying the illusory edifice of objectivity. One of his favorite
passages is the one with which I ended my book Consciousness Explained
(1991):


It’s just a war of metaphors, you say–but metaphors are not “just” metaphors;
metaphors are the tools of thought. No one can think about consciousness without
them, so it is important to equip yourself with the best set of tools available.
Look what we have built with our tools. Could you have imagined it without them?
[p.455]


“I wish,” Rorty says, “he had taken one step further, and had added that such
tools are all that inquiry can ever provide, because inquiry is never ‘pure’
in the sense of [Bernard] Williams’ ‘project of pure inquiry.’ It is always
a matter of getting us something we want.” (“Holism, Intrinsicality, Transcendence,”
in Dahlbom, ed., Dennett and his Critics, 1993.) But I would never take
that step, for although metaphors are indeed irreplaceable tools of thought,
they are not the only such tools. Microscopes and mathematics and MRI scanners
are among the others. Yes, any inquiry is a matter of getting us something we
want: the truth about something that matters to us, if all goes as it should.


When philosophers argue about truth, they are arguing about how not to inflate
the truth about truth into the Truth about Truth, some absolutistic doctrine
that makes indefensible demands on our systems of thought. It is in this regard
similar to debates about, say, the reality of time, or the reality of the past.
There are some deep, sophisticated, worthy philosophical investigations into
whether, properly speaking, the past is real. Opinion is divided, but you entirely
misunderstand the point of these disagreements if you suppose that they undercut
claims such as the following:


Life first emerged on this planet more than three thousand million years ago.
The Holocaust happened during World War II.
Jack Ruby shot and killed Lee Harvey Oswald at 11:21 am, Dallas time, November
24, 1963.


These are truths about events that really happened. Their denials are falsehoods.
No sane philosopher has ever thought otherwise, though in the heat of battle,
they have sometimes made claims that could be so interpreted.


Richard Rorty deserves his large and enthralled readership in the arts and
humanities, and in the “humanistic” social sciences, but when his readers enthusiastically
interpret him as encouraging their postmodernist skepticism about truth, they
trundle down paths he himself has refrained from traveling. When I press him
on these points, he concedes that there is indeed a useful concept of truth
that survives intact after all the corrosive philosophical objections have been
duly entered. This serviceable, modest concept of truth, Rorty acknowledges,
has its uses: when we want to compare two maps of the countryside for reliability,
for instance, or when the issue is whether the accused did or did not commit
the crime as charged.


Even Richard Rorty, then, acknowledges the gap, and the importance of the
gap, between appearance and reality, between those theatrical exercises that
may entertain us without pretence of truth-telling, and those that aim for,
and often hit, the truth. He calls it a “vegetarian” concept of truth. Very
well, then, let’s all be vegetarians about the truth. Scientists never wanted
to go the whole hog anyway.


So now, let’s ask about the sources or foundations of this mild, uncontroversial,
vegetarian concept of truth.


Right now, as I speak, billions of organisms on this planet are engaged in
a game of hide and seek. It is not just a game for them. It is a matter of life
and death. Getting it right, not making mistakes, has been of paramount
importance to every living thing on this planet for more than three billion
years, and so these organisms have evolved thousands of different ways of finding
out about the world they live in, discriminating friends from foes, meals from
mates, and ignoring the rest for the most part. It matters to them that they
not be misinformed about these matters–indeed nothing matters more–but they
don’t, as a rule, appreciate this. They are the beneficiaries of equipment exquisitely
designed to get what matters right, but when their equipment malfunctions and
gets matters wrong, they have no resources, as a rule, for noticing this, let
alone deploring it. They soldier on, unwittingly. The difference between how
things seem and how things really are is just as fatal a gap for them as it
can be for us, but they are largely oblivious to it. The recognition
of the difference between appearance and reality is a human discovery. A few
other species–some primates, some cetaceans, maybe even some birds–show signs
of appreciating the phenomenon of “false belief”–getting it wrong. They
exhibit sensitivity to the errors of others, and perhaps even some sensitivity
to their own errors as errors, but they lack the capacity for the reflection
required to dwell on this possibility, and so they cannot use this sensitivity
in the deliberate design of repairs or improvements of their own seeking gear
or hiding gear. That sort of bridging of the gap between appearance and reality
is a wrinkle that we human beings alone have mastered.


We are the species that discovered doubt. Is there enough food laid by for
winter? Have I miscalculated? Is my mate cheating on me? Should we have moved
south? Is it safe to enter this cave? Other creatures are often visibly agitated
by their own uncertainties about just such questions, but because they cannot
actually ask themselves these questions, they cannot articulate their
predicaments for themselves or take steps to improve their grip on the truth.
They are stuck in a world of appearances, making the best they can of how things
seem and seldom if ever worrying about whether how things seem is how they truly
are.


We alone can be wracked with doubt, and we alone have been provoked by that
epistemic itch to seek a remedy: better truth-seeking methods. Wanting to keep
better track of our food supplies, our territories, our families, our enemies,
we discovered the benefits of talking it over with others, asking questions,
passing on lore. We invented culture. Then we invented measuring, and arithmetic,
and maps, and writing. These communicative and recording innovations come with
a built-in ideal: truth. The point of asking questions is to find true
answers; the point of measuring is to measure accurately; the point of
making maps is to find your way to your destination. There may be an
Island of the Colour-blind (allowing Oliver Sacks his usual large dose of poetic
license), but no Island of the People Who Do Not Recognize Their Own Children.
The Land of the Liars could exist only in philosophers’ puzzles; there are no
traditions of False Calendar Systems for mis-recording the passage of time.
In short, the goal of truth goes without saying, in every human culture.


We human beings use our communicative skills not just for truth-telling, but
also for promise-making, threatening, bargaining, story-telling, entertaining,
mystifying, inducing hypnotic trances, and just plain kidding around, but prince
of these activities is truth-telling, and for this activity we have invented
ever better tools. Alongside our tools for agriculture, building, warfare, and
transportation, we have created a technology of truth: science. Try to draw
a straight line, or a circle, “freehand.” Unless you have considerable artistic
talent, the result will not be impressive. With a straight edge and a compass,
on the other hand, you can practically eliminate the sources of human variability
and get a nice clean, objective result, the same every time.


Is the line really straight? How straight is it? In response to these questions,
we develop ever finer tests, and then tests of the accuracy of those tests,
and so forth, bootstrapping our way to ever greater accuracy and objectivity.
Scientists are just as vulnerable to wishful thinking, just as likely to be
tempted by base motives, just as venal and gullible and forgetful as the rest
of humankind. Scientists don’t consider themselves to be saints; they don’t
even pretend to be priests (who according to tradition are supposed to do a
better job than the rest of us at fighting off human temptation and frailty).
Scientists take themselves to be just as weak and fallible as anybody else,
but recognizing those very sources of error in themselves and in the groups
to which they belong, they have devised elaborate systems to tie their own hands,
forcibly preventing their frailties and prejudices from infecting their results.


It is not just the implements, the physical tools of the trade, that are designed
to be resistant to human error. The organization of methods is also under severe
selection pressure for improved reliability and objectivity. The classic example
is the double blind experiment, in which, for instance, neither the human subjects
nor the experimenters themselves are permitted to know which subjects get the
test drug and which the placebo, so that nobody’s subliminal hankerings and
hunches can influence the perception of the results. The statistical design
of both individual experiments and suites of experiments is then embedded in
the larger practice of routine attempts at replication by independent investigators,
which is further embedded in a tradition–flawed, but recognized–of publication
of both positive and negative results.
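

The logic of the blind is simple enough to sketch in a few lines of Python; the two-arm design, the subject identifiers, and the coding scheme below are illustrative assumptions only, not a description of any actual trial. The essential design choice is that the key linking the codes to “drug” and “placebo” is generated once and then sealed away until the results have been recorded.

    import random

    def blinded_assignment(subject_ids, seed=42):
        # Map the two study arms to opaque codes at random, then give each
        # subject a code. Neither subjects nor experimenters see the key;
        # it stays sealed until the analysis is finished.
        rng = random.Random(seed)
        key = dict(zip(["A", "B"], rng.sample(["drug", "placebo"], 2)))
        assignments = {sid: rng.choice(["A", "B"]) for sid in subject_ids}
        return assignments, key

    if __name__ == "__main__":
        subjects = ["S001", "S002", "S003", "S004"]   # hypothetical subject IDs
        coded, sealed_key = blinded_assignment(subjects)
        print(coded)        # what the experimenters work with: only "A" or "B"
        print(sealed_key)   # opened only after the results are recorded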


What inspires faith in arithmetic is the fact that hundreds of scribblers,
working independently on the same problem, will all arrive at the same answer
(except for those negligible few whose errors can be found and identified to
the mutual satisfaction of all). This unrivalled objectivity is also found in
geometry and the other branches of mathematics, which since antiquity have been
the very model of certain knowledge set against the world of flux and controversy.
In Plato’s early dialogue, the Meno, Socrates and the slave boy work
out together a special case of the Pythagorean theorem. Plato’s example expresses
the frank recognition of a standard of truth to be aspired to by all truth-seekers,
a standard that has not only never been seriously challenged, but that has been
tacitly accepted–indeed heavily relied upon, even in matters of life and death–by
the most vigorous opponents of science. (Or do you know a church that keeps
track of its flock, and their donations, without benefit of arithmetic?)
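

The special case the slave boy works out is the doubling of the square. Rendered in modern notation (the dialogue itself proceeds entirely by diagram), his discovery is that the square erected on the diagonal d of a square of side a has exactly twice its area:

    \[ d^{2} = a^{2} + a^{2} = 2a^{2} \]

So the square of side two feet, with area four, is doubled not by doubling its side but by building on its diagonal, which yields area eight; and any scribbler who checks the figure will arrive at the same answer.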


Yes, but science almost never looks as uncontroversial, as cut-and-dried,
as arithmetic. Indeed rival scientific factions often engage in propaganda battles
as ferocious as anything to be found in politics, or even in religious conflict.
The fury with which the defenders of scientific orthodoxy often defend their
doctrines against the heretics is probably unmatched in other arenas of human
rhetorical combat. These competitions for allegiance–and, of course, funding–are
designed to capture attention, and being well-designed, they typically succeed.
This has the side effect that the warfare on the cutting edge of any science
draws attention away from the huge uncontested background, the dull metal heft
of the axe that gives the cutting edge its power. What goes without saying,
during these heated disagreements, is an organized, encyclopedic collection
of agreed-upon, humdrum scientific fact.


Robert Proctor usefully draws our attention to a distinction between neutrality
and objectivity.(2) Geologists, he notes, know
a lot more about oil-bearing shales than about other rocks–for the obvious
economic and political reasons–but they do know objectively about
oil-bearing shales. And much of what they learn about oil-bearing shales can be
generalized to other, less favored rocks. We want science to be objective; we
should not want science to be neutral. Biologists know a lot more about the
fruit-fly, Drosophila, than they do about other insects–not because
you can get rich off fruit flies, but because you can get knowledge out of fruit
flies more easily than you can get it out of most other species. Biologists also
know a lot more about mosquitoes than about other insects, and here it is because
mosquitoes are more harmful to people than other species that might be much
easier to study. Many are the reasons for concentrating attention in science,
and they all conspire to make the paths of investigation far from neutral;
they do not, in general, make those paths any less objective. Sometimes, to
be sure, one bias or another leads to a violation of the canons of scientific
method. Studying the pattern of a disease in men, for instance, while neglecting
to gather the data on the same disease in women, is not just not neutral; it
is bad science, as indefensible in scientific terms as it is in political terms.


It is true that past scientific orthodoxies have themselves inspired policies
that hindsight reveals to be seriously flawed. One can sympathize, for instance,
with Ashis Nandy, editor of the passionately anti-scientific anthology Science,
Hegemony and Violence: A Requiem for Modernity (Delhi: Oxford University Press,
1988). Having lived through Atoms for Peace, and the Green Revolution, to name
two of the most ballyhooed scientific juggernauts that have seriously disrupted
third world societies, he sees how “the adaptation in India of decades-old western
technologies are advertised and purchased as great leaps forward in science,
even when such adaptations turn entire disciplines or areas of knowledge into
mere intellectual machines for the adaptation, replication and testing of shop-worn
western models which have often been given up in the west itself as too dangerous
or as ecologically non-viable.” (p. 8) But we should recognize this as a political
misuse of science, not as a fundamental flaw in science itself.


The methods of science aren’t foolproof, but they are indefinitely perfectible.
Just as important: there is a tradition of criticism that enforces improvement
whenever and wherever flaws are discovered. The methods of science, like everything
else under the sun, are themselves objects of scientific scrutiny, as method
becomes methodology, the analysis of methods. Methodology in turn falls
under the gaze of epistemology, the investigation of investigation itself–nothing
is off limits to scientific questioning. The irony is that these fruits of scientific
reflection, showing us the ineliminable smudges of imperfection, are sometimes
used by those who are suspicious of science as their grounds for denying it
a privileged status in the truth-seeking department–as if the institutions
and practices they see competing with it were no worse off in these regards.
But where are the examples of religious orthodoxy being simply abandoned in
the face of irresistible evidence? Again and again in science, yesterday’s heresies
have become today’s new orthodoxies. No religion exhibits that pattern in its
history.


1. Portions of this paper are derived from “Faith in the
Truth,” my Amnesty Lecture, Oxford, February 17, 1997.
2. Robert Proctor, Value-Free Science?, Harvard University Press, 1991.

This is the final draft of a paper given at the 1998 World Congress of Philosophy. Daniel Dennett’s most recent book, Freedom Evolves, has just been published by Viking Press.
