Author: Ophelia Benson

  • More Than Politics

    I have another thought on the matter of lefties in the academy. It has to do with this one sentence of Timothy Burke’s that Erin O’Connor quoted:

The tripwires here aren’t generally as obvious as saying, “I voted for Bush” – though Brooks is completely correct in thinking that this would possibly be one of the three or four most disastrous things an aspiring humanities scholar could say during an on-campus interview.

    What’s interesting about that is that it’s no doubt true enough, but there is more than one reason for it, more than one kind of reason. At least I assume so, extrapolating from my own opinion on the matter. In fact, the other reason (the reason other than the one implied by the context, which is the political one) bears out precisely the point that Burke and O’Connor are making. The other reason has to do with those less parochial, less narrowly political ideas and commitments that one expects intellectuals to have. The other reason for being repelled to hear that an aspiring humanities scholar voted for Bush is the fact that the man, his many explicit remarks on the subject, his considering himself qualified to run for the presidency, and the support for him, are all profoundly anti-intellectual. Any humanities scholar worth her salt ought to be hostile to Bush, and I would still say that if he were to the left of McGovern. All the sneering at Gore for knowing something and expecting the voters to care about substance, all the drivel about likability, all the brazen nonsense about what a reg’lar guy Bush is despite having everything handed to him by way of birth and money, simply because he never reads and mispronounces words and doesn’t know much – all those are glaring signs that what humanities scholars do and value, and what they think is valuable for other people, is considered elitist and often downright wicked by too many Republicans and too many voters. As an American I’m embarrassed by Bush; if I were a Republican, I would be beside myself with disgust.

It’s an interesting thought experiment to wonder what things would be like if the Republican president were, say, Richard Posner. Would the revulsion among humanities scholars be quite as universal? Would an interviewee’s having voted for him be quite such an automatic trip-wire? I really wonder. I know it wouldn’t in my case. I’m quite sure I would disagree with many of his policies, but I wouldn’t feel as if there were an unqualified, overprivileged, not very bright mediocrity in the job, and that would make a big difference.

  • Not a Very Bright Idea

    When Tony Blair first became leader of the Labour Party in 1994, the Sun
    newspaper, a British tabloid, took to calling him ‘Bambi’, presumably in the
    hope that the nickname would become established in the public consciousness.
    It did not, of course, for it lacked any kind of resonance with what people
    could believe about Blair. He wasn’t a child, his leadership was anything but
    childlike, and he lacked the requisite number of legs to be a baby deer. Not
    discouraged, the Sun was at it again in 2001, this time when Iain Duncan
Smith became leader of the Conservative Party. In what was probably a desperate
attempt to establish his man-of-the-people credentials, it started to call him
‘Smithy’. A quite absurd conceit, given his double-barrelled name and former
    career as an army officer. Needless to say, this nickname didn’t catch on either,
    and now, almost universally in the UK media, Duncan Smith is known as IDS.


    None of this is surprising. If your aim is to coin, ex nihilo, a name
    or epithet which quickly gains widespread public acceptance, the chances of
    success are not great. Even the media, with its ability to talk daily to massive
    audiences, fails as often as it succeeds. Thus, for every ‘Slick Willy’, you’ll
    find that there is a ‘Bambi’, for every ‘loony left’ a ‘Doris Karloff’.


    This is a comforting thought for a secularist at the present time. For a rather
    unfortunate meme has lately infected the minds of some leading exponents of
    a naturalistic worldview. It is a meme which says that it would be a good idea
    if people without belief in things supernatural started to call themselves ‘brights’.


    The meme started with two people from Sacramento, California. Though atheists,
    Paul Geisert and Mynga Futrell did not want to be referred to as being ‘godless’,
    so they came up with the word ‘bright’ to better describe their naturalistic
    worldview. Their hope is that other nonbelievers will also use the word, and
    that it will become an umbrella term for the whole range of naturalistic philosophies
    (i.e., atheist, agnostic, humanist, etc.). They have set up a website, The Brights
    Net (www.the-brights.net), to this end, and have attracted a number of high
    profile advocates, including Richard Dawkins and Daniel Dennett, both of whom
    have written articles supporting the idea.


    It is easy enough to understand the efficacy of this meme. The naturalistic
    philosophies of the non-religious do not play the same kind of high profile
    role in political and civic life as do the supernaturalist ideas of their religious
    counterparts. This is the case particularly in the United States, but also in
    the United Kingdom, where, for example, assorted bishops get to sit on various
    ethics committees simply because they are bishops. Given this situation,
    any intervention which promises to raise the profile of naturalistic thinking
    is bound to be attractive at first sight. The trouble is that it doesn’t take
    too many more sights of the brights idea to realise that it is badly flawed.


    First, ‘bright’ is just the wrong word. How it was chosen in the first place
    isn’t quite clear. It seems to have had something to do with the fact that it
    is a ‘positive’ and ‘memorable’ word; and also that it is sufficiently puzzling
    or enigmatic when used as a noun – ‘I am a bright’ – that it invites the
    response, ‘What’s a bright?’, thereby allowing a person to talk about their
    naturalistic worldview. But there are major problems with the word.


    The first is that its enigmatic quality is indicative of a fundamental arbitrariness
    in its relationship to the phenomenon that it names. It’s the let’s call Tony
    Blair ‘Bambi’ problem. Dawkins imagines that a bright might have a conversation
    which goes like this:




    ‘"What on earth is a bright"? And then you’re away. "A bright
    is a person whose world view is free of supernatural and mystical elements…."


    "You mean a bright is an atheist?"


    "Well, some brights are happy to call themselves atheists. Some brights
    call themselves agnostics. Some call themselves humanists, some freethinkers.
    But all brights have a world view that is free of supernaturalism and mysticism."’


    (Richard Dawkins, ‘The future looks bright’, The Guardian, June
    21st 2003)




    All very nice, except that the conversation is far more likely to go something
    like this:


    ‘What on earth is a bright?’


    ‘A bright is a person whose world view is free of supernatural and mystical
    elements.’


    ‘Right. So why the word "bright" then?’


    ‘Err. Well it’s a positive word. And memorable.’


    ‘So is the word "truffle", but you wouldn’t call yourself a truffle.
    So why "bright"?’


    ‘Well, it’s what this couple from Sacramento came up with… and it is
    a very cheerful word!’


    The arbitrariness of the choice of the word ‘bright’, though undermining its
    potential as a meme, would not matter so much were it not for the fact that
    one of the established uses of the word is as an adjective meaning ‘clever’
    or ‘intelligent’. The problem here is that in the absence of an obvious reason
    to explain how it is that the word ‘bright’ designates a person who espouses
    a naturalist worldview, it is easy to jump to the conclusion that what is being
    suggested is that it is more intelligent to embrace naturalism than it is to
    embrace supernaturalism.


    It must be said that the supporters of the brights idea are quite clear that
    the word should not be taken to be an adjective in this way. However,
    this does not make the problem go away. It should be obvious why it does not.
    For starters, there is the trivial point that you cannot strip a word of its
    associations simply by denying that you intend them. [1] Nor can you do so by
    using the word in a slightly strange way (i.e., as a noun). If someone announces
    that they’re a bright, then likely it will occur to their audience that what
    they actually mean is that they are bright. The fact that they will also
    be unable to explain why the word ‘bright’ is appropriate as a label for someone
    with a naturalistic worldview will do nothing to allay this suspicion.


    There is also a slightly more complex point to be made here. To be an atheist
    in the United States – and also in some ways in the United Kingdom – is to set
    oneself against the dominant culture. There is, therefore, a tendency to associate
    atheism with a certain kind of intellectual independence. This is reflected
    in the names of the groups with which many atheists associate themselves (e.g.,
    freethinkers, skeptics, etc.). And it also underpins the anti-intellectual sentiment
    of much of the religious sermonising characteristic of Christian fundamentalism.
    The problem with the word ‘bright’ is that it is too easily seen as confirming
    this link between atheism and intellectuality. Or to put this more precisely,
    if people with no belief in god begin to self-identify as brights, they run
    the risk of apparently confirming what many religious people already suspect
    about them, that they consider themselves to be better or more intelligent than
    people who believe in a god.


    Does this matter? Yes it does, if one is interested in convincing people of
    the merits of a naturalistic worldview. To start with, there is the obvious
    point that people are more likely to be receptive to new ideas if they feel
    that they are being treated with respect. But perhaps more worryingly, a movement
    which self-identifies as a movement of brights makes itself a hostage to rhetorical
    fortune. It is extremely easy – and, it must be said, very tempting – to parody
    the whole idea of a brights movement. And, of course, this is exactly what the
    enemies of a naturalistic worldview will do should the idea take off. The brights
    movement will find itself transmogrified into a ‘We’re smarter than you’ movement.
    And, at that point, protesting that the word was chosen simply because it is
    ‘warm’ and ‘cheerful’ will just result in more parody and more laughter.


    ‘Bright’, then, is the wrong choice of word to designate a person with a naturalistic
    worldview and as an umbrella term for a movement. But substituting a different
    word won’t make the brights idea a good one because it is muddle-headed for
    other reasons. Perhaps the most interesting of these has to do with what appears
    to be an unspoken assumption about people with a naturalistic worldview.


    The assumption seems to be that the rejection of supernaturalism is enough
    to qualify someone as a person without religion. This claim is only unproblematic
    if one defines religion as involving supernatural beliefs. However, there is
    at least an argument that the sphere of the religious can be extended to include
    aspects of the secular world. It is an argument inspired by the French sociologist
    Emile Durkheim. He claimed that the realm of the sacred is distinguished by
    the separateness of its objects from those of the world of the profane, and
    by the system of interdictions which prevents them from being denied. [2] If
    one accepts this conception, then there are secular phenomena which qualify
    as sacred. So Durkheim talks of ‘common beliefs of every sort connected to objects
    that are secular in appearance, such as the flag, one’s country, some forms
    of political organisation, certain heroes or historical events’, which are ‘indistinguishable
    from beliefs that are properly religious’; and he notes that ‘Public opinion
    does not willingly allow one to contest the moral superiority of democracy,
    the reality of progress, [or] the idea of equality, just as the Christian does
    not allow his fundamental dogmas to be questioned.’


    It is easy to understand what Durkheim is getting at. It is only necessary
    to attend a meeting organised by a group like the Revolutionary Communist
    Party of Britain
    to become very quickly aware that people can hold secular
    beliefs to be every bit as inviolable as religious beliefs often are for the
    religious. Of course, this point is already well understood. For example, in
    a review of Steven Rose et al’s Not in Our Genes, Richard Dawkins, commenting
    on their arguments against ‘genetic determinism’, has this to say: ‘The myth
    of the “inevitability” of genetic effects has nothing whatever to do with sociobiology,
    and has everything to do with Rose et al’s paranoiac and demonological theology
    of science. [my italics]’ (Richard Dawkins, New Scientist 24 January
    1985).


    What this means for the brights idea is that the criterion of a naturalistic
    worldview is no guarantee that people will be free of the kind of thinking which
    is quite reasonably described as ‘religious’. Or to put this another way, it
    is no guarantee that people will not be committed to beliefs, or sets of beliefs,
    which are beyond rational scrutiny in the same way as are many of the beliefs
    which are associated with theism. Possibly the supporters of the brights movement
    will not deny this, but rather claim that it does not matter too much, that
    their expectation has never been that they will create a movement of absolutely
    rigorous thinkers, each one holding up their beliefs to the light of reason.
    Fair enough. Except for two further points.


    First, there is just a suggestion in some of the writings of the supporters
    of the brights idea that they see themselves as the true inheritors of the Enlightenment
    tradition. Well, it just isn’t this clear-cut. First off, the belief in a deity,
    in and of itself, does not rule out an attitude towards this world which
    is entirely consistent with the Enlightenment emphasis on reason and the progress
    of human knowledge. But perhaps more significantly, many people who qualify
    as brights would have a decidedly ambivalent attitude towards the products of
    the Enlightenment. Just consider, for example, that many Marxists would agree
    with Rose et al that ‘science is the ultimate legitimator of bourgeois ideology.’
    (Not in Our Genes). And that from amongst the whole caboodle of postmodern
    thinkers, at least some will be prepared to commit to agnosticism, and will
    no doubt claim something like ‘that progress in social thought is not possible
    without a thorough critique of the Enlightenment, whether for its justification
    of the domination of nature, or its authoritative support for belief systems
    like scientific racism or sexism, or for the monocultural legacy of its assumptions
    about rationality.’ (Andrew Ross, The Sokal Hoax).


    Which thoughts lead on to the second point about the kind of movement the brights
    idea is likely to foster. It is certainly going to contain some odd bedfellows.
    Scientific atheists and Marxist atheists will be united in thinking that there
    is definitely no god, but they’ll fight like cats and dogs over the fate
    of the bourgeoisie. The agnostics will irritate both groups by sitting on the
    fence, whilst freethinkers drive themselves crazy trying to find a viewpoint
    unique to themselves. The skeptics will watch the whole thing from afar with
    slightly cynical smiles, and the postmodernists will talk past each other, as
    per usual. As for the rest of the world? They won’t see past the name. And laughter
    and parody will be the result. Therefore, one can only hope that the ‘bright’
    meme fails on its evolutionary journey.


    Footnotes


    [1] The observant reader will notice that there is an echo here of the criticism
    that some people have made of the language of Dawkins’s The Selfish Gene.
    For the record, I think the criticism is misplaced in the case of The Selfish Gene.


    [2] There’s obviously a lot more to Durkheim’s argument than this mere bald
    assertion. See his The Elementary Forms of Religious Life; and also,
    for a summary of the argument I’m making here, J. C. Alexander’s The Antinomies
    of Classical Thought: Marx and Durkheim (Routledge, 1982), pp. 242-250.

  • What’s Going On In There?

    What happens to the brain and to consciousness after trauma?

  • Environmental Propaganda Wars

    Entrenched positions prevent both sides from evaluating arguments on the merits.

  • More on Academic Conformity

    Critical Mass is hearing from people.

  • Think Like Us

    There is an excellent post at Critical Mass – starting, interestingly enough, from a comment on Crooked Timber. So we’re in a hall of mirrors here, or the land of infinite regress, or something. Bloggers commenting on bloggers commenting on bloggers commenting on (finally) an actual newspaper column. But that’s all right. The truth is, plenty of blog posts are better than plenty of newspaper columns. And this one is very good indeed. Erin O’Connor quotes Timothy Burke on the excessively narrow terms in which charges of political orthodoxy in universities are framed.

    Virtually anything that departed from a carefully groomed sense of acceptable innovation, including ideas and positions distinctively to the left and some that are neither left nor right, could be just as potentially disastrous. Like a lot of right-wing critics of academia, [David Brooks] generally thinks too small and parochially, and too evidently simply seeks to invert what he perceives as a dominant orthodoxy. If they had their druthers, Horowitz and Pipes and most of the rest of the victimology types would simply make the academy a conservative redoubt rather than a liberal one. The real issue here is the way that each successive academic generation succeeds in installing its own conventional wisdom as the guardian at the gates, and burns the principle of academic freedom in subtle, pervasive fires aflame in the little everyday businesses and gestures of academic life.

    That’s great stuff, and spot-on. Parochial is just exactly what this kind of thinking is. Anyone who’s ever listened to a putative Shakespeare scholar, say, droning on about what sadly imperfect views Shakespeare had on the Other, knows all about that parish, and wants to get the hell out of it and move to the big city. Clearly O’Connor is one of those:

    I have often had occasion to say to students that the things that draw them to advanced literary study–a love of learning, a love of literature, a deep desire to share those loves with students through teaching–are not the things that drive most English professors, and have next to nothing to do with what they would be expected to do in graduate school and beyond. The student who enters grad school intent on becoming a traditional humanist is the student who will be labelled as hopelessly unsophisticated by her peers and her professors. She will also be labelled a conservative by default: she may vote democratic; may be pro-choice, pro-affirmative action, and anti-gun; may possess a palpably bleeding heart; but if she refuses to “politicize” her academic work, if she refuses to embrace the belief that ultimately everything she reads and writes is a political act before it is anything else, if she resists the pressure to throw an earnest belief in an aesthetic tradition and a desire to address the transhistorical “human questions” out the window in favor of partisan theorizing and thesis-driven advocacy work, then she is by default a political undesirable, and will be described by fellow students and faculty as a conservative.

    This is what I’m always wondering about the trendy lit-crit crowd. Do they even like literature particularly? They don’t seem to. They seem to want to talk about anything and everything else under the sun except literature. Which is understandable in a way – I love the stuff but I’m not sure I would want to write about it, and I’m especially not sure I would want to keep on writing about it for thirty years or so. But then why get a PhD in English? If you want to read and think and write about politics, why not get a PhD in that? Or in history or sociology? Why go into literature and then talk about something completely different? It seems so futile, so silly, and such bad manners. Like going to a pizza place and screaming the place down because it doesn’t serve sushi.

    As Burke points out, this is at least as much about conformity as it is about politics…It’s the culture of academe–or at least of the academic humanities–that is the main problem. If you don’t have to be a conservative to get labelled–and reviled–as a conservative, then “conservative” means something other than “conservative” in the academic circles I am discussing here. It means something more like “non-conformist,” which, ironically, often translates into either “traditional humanist” or “person who questions prevailing orthodoxies of any stamp” or both. Certainly, left-wing politics are central to this problem–the people who are labelling the “conservatives” in their midst are by definition on the left. But what they are labelling “conservative” is more often than not not conservative per se, but simply different from them.

    Just so. I’ve had people solemnly inform me that B and W is ‘culturally conservative’. Which seems to me a silly thing to say on about ten different levels. One, so what? Two – so is everything that is newer than something else automatically better than that something else? Does everything invariably get better and better in an uninterrupted trajectory towards perfection? Do things never get worse from time to time? Hasn’t the Whig view of history had some doubts cast on it now? Three, if you take that view, doesn’t that mean that whatever New Thing someone comes up with tomorrow is necessarily better than whatever it is you’re doing now, and if so, doesn’t that make it all seem a tad pointless? Four, is it really sound to judge ideas chronologically? Does it work to simply date everything and then say ‘Well look, this idea is from 1825 so obviously it’s much better than this other one from 1789’? Hitler was newer than, say, Kaiser Wilhelm II. The Kaiser was not a great guy, but was he worse than Hitler? For that matter, Colley Cibber was later than Shakespeare; was he better? Well, the reductio is obvious enough. But people go on saying it. Erin O’Connor is exactly right, it seems to me: it’s all about conformity and orthodoxy, group-think and fashion, playing well with others instead of thinking clearly on your own.

  • Are GM Fears Justified?

    Two out of three GM strains ‘should not be grown’.

  • We’re Close Enough, Dammit!

    Touchy-feely blather not the best way to relax after a hard day?

  • Remembering Said

    A polemicist and literary warrior in the tradition of Swift.

  • Uh Oh

    Do we really need ‘criticism’ of science similar to that of ‘art, literature, movies, architecture’?

  • Secularism Meets the Hijab

    This is always an interesting subject. There are so many boxes one could put it in, for one thing. How unhelpful, self-cancelling, and ill-founded talk of ‘rights’ can be. How difficult or indeed impossible it can be to meet everyone’s desires and wishes – which is just another way of saying how self-cancelling talk of ‘rights’ can be. How difficult or impossible it can be to decide what is really fair and just to all parties, which is yet another way of saying the same thing. How incompatible some goods are, how irreconcilable some culture clashes are, how differently we see things depending on how we frame them. If our chosen frame is religion, or identity politics, or multiculturalism, or tolerance, or anti-Eurocentrism, or all of those, or some of them, then head scarves look like one thing. If our frame is feminism, or secularism, or equality, or rationalism, or Enlightenment, or some or all of those, then head scarves look like another thing. If we see merit in both sides of that equation then head scarves look like a damn confusing puzzling riddle.

    In France, meanwhile, two teenage sisters have been suspended from school after insisting on attending class with their heads covered. The school says it is simply enforcing secular laws that ban all displays of religious faith in state schools and public buildings. “The girls’ argument that they have a right [to wear a headscarf] is incompatible with secularism and school rules,” Education Ministry Inspector Jean-Charles Ringard said. Alma and Lila Levy, whose mother is Muslim and whose father is a Jewish atheist, say they are simply demanding that two basic rights be respected. “We are being asked to decide between our religion and our education; we want both,” said Alma Levy, 16.

    Yes, but then what about other rights? What about the rights of other girls not to have to learn in the presence of a symbol of female inferiority and subservience?

    A constitutional ruling gives schools power to ban any religious symbol – headscarf, Jewish skullcap or Christian cross – worn as an “act of pressure, provocation, proselytism or propaganda.” The headscarf, or hijab as it is called in Arabic, has stirred controversy in France for more than a decade…French feminists and left-wingers say the scarf is a token of servitude, a sign of submission to male dominance rather than to God, as devout Muslims claim it to be.

    Just so. Pressure, proselytism, propaganda. A head scarf carries a lot of meaning, it’s not just some neutral bit of decoration. No doubt I ought to, but I find it very hard to feel much sympathy for girls who ‘demand’ their ‘right’ to advertise their subordinate status.

  • Review of Bountiful Harvest

    Thomas DeGregori examines antitechnology movements that keep the world’s poor in poverty.

  • Head Scarves, Rights, Secularism

    Rights talk doesn’t help when two ‘rights’ are incompatible.

  • Said Inspired but Also Forestalled

    Rejecting all criticism as Orientalist is not what a scholar should do.

  • Sympathy for the…

    Norm Geras’ blog has an excellent post on a recent Guardian column by Karen Armstrong. I thought it was excellent when I first read it, before Norm demonstrated what dazzlingly good taste he has by posting a, a, well, not to put too fine a point on it a rave review of B&W. I did a Note and Comment on Armstrong myself a few weeks or months ago, making a similar point. She’s too determined to be understanding and sympathetic and inclusive and non-Eurocentric and non-Orientalist about Islam, too unwilling to just give it up and be ‘judgmental’. Having read some of her memoirs and other books on religious subjects, I take her stance to have more to do with excessive sympathy for religion than it does with, say, multiculturalism or cultural relativism; but I don’t really know that, it’s just a guess. In any case, the effect is the same.

    Armstrong’s diagnosis of the problem of terrorism is multi-factor, but it comes down to two threads: the fundamentalist-reaction-against-modernity thread and the Western-complicity-in-political-and-social-injustice thread. But prescriptively it’s only the second thread which counts. In this she is wholly representative of the post-9/11 liberal and leftist ‘doves’.

    It’s interesting to ponder what the implications of taking the first thread seriously might be. Perhaps that’s why Armstrong drops it – why most people drop it. Because if you start to argue that we really ought to pay attention to what al Qaeda wants, i.e. give it to them, then one has to start contemplating the joys of living under an Islamic theocracy – an especially thrilling prospect for a woman. Gosh, I’m so spoiled, I’m so used to going out of the house whenever I want to, without having to ask a man for permission, let alone having to stay in unless a man I’m related to will come with me. It would be a bit of an adjustment, frankly, to have to start doing things bin Laden’s way.

    And yet some people do make that argument, sort of, almost, partly. Or they hint at it, they gesture at it, they mumble about it, without actually coming out and saying Yes we should let people like bin Laden call the shots and if that means a little less freedom for half of humanity, well, so be it. At least I don’t know what else is behind all the reproachful noises people make about secularists and atheists refusing to take religion ‘on its own terms’. There is another post with similar comments from readers here.

  • The Virtue of Innovation and the Technological Imperative

    The rise of the precautionary principle in public policy and international
    relations has called into question the role technological innovation should
    be allowed to play in society. [1] According to the precautionary principle, no
    novel technology, regardless of its benefits, should be deployed if it poses
    risks to human health or the environment. [2] Under some interpretations
    of the principle, these risks need not even be testable hypotheses, but may
    merely be posited. [3] In the latter case, the principle effectively
    says that technological innovation is too dangerous to be allowed.


    Critics of technological advance have also invented a doctrine which is antithetical
    to the precautionary principle, and dubbed it the ‘technological imperative.’
    In its first formulation, this doctrine existed only as a ‘straw man’ argument,
    something put forth so that it might be attacked at convenience by its creators.
    The doctrine has no canonical statement, its existence is at best claimed to
    be "implicit," and it has no discernible champions; yet its critics abound.
    [4]


    In spite of its dubious origins and lack of formulation, the technological
    imperative actually exists in Western philosophy and jurisprudence. In ethics
    technological invention is a virtue and in law it is protected and encouraged;
    and in both, a duty exists to allow the dissemination and use of novel technology.


    The technological imperative in philosophy and ethics.


    Perhaps the earliest, but certainly the most durable, contextual explanation
    of technology was offered by Aristotle. His explanation relies on making distinctions
    between three types of knowledge: episteme, techne [5]
    and phronesis. Episteme is pure knowledge, such as of mathematics, geometry
    or logic. Techne, from which our word ‘technology’ is derived, is concrete knowledge
    of how to rearrange lumps of matter in a purposeful way. Phronesis has no counterpart
    in English, but is often rendered as ‘prudence.’ As Aristotle explains it,




    We may grasp the nature of prudence [phronesis] if we consider what sort
    of people we call prudent. Well, it is thought to be the mark of a prudent
    man to be able to deliberate rightly about what is good and advantageous…
    [P]rudence cannot be science or art; not science [episteme] because what
    can be done is a variable (it may be done in different ways, or not done
    at all), and not an art [techne] because action and production are generically
    different. … What remains, then is that it is a true state, reasoned,
    and capable of action with regard to things that are good or bad for man.
    We consider that this quality belongs to those who understand the management
    of households or states. [6]




    This explanation of three forms of knowledge establishes a framework for understanding
    the nature of technology. On Aristotle’s account, techne is value-neutral and
    capable of serving any number of ends, good and bad alike. This accords well
    with intuitions about the nature of technology, but this value-neutral status
    cannot establish the existence of a technological imperative. However, Aristotle
    links technology to phronesis, to knowing "things that are good or bad
    for man." If there is a technological imperative, it must go two steps
    beyond this framework and address the question of whether a change in
    knowledge (be it episteme or techne) when put to use serves a good purpose.
    Aristotle stops short of addressing that, saying instead that with phronesis,
    episteme and techne are "capable of action."


    Elsewhere in the Nicomachean Ethics Aristotle reasons that actions have goals,
    that those are human goals, and that human goals in general are directed at
    eudaimonia— a term often translated as ‘happiness,’ but defined by Aristotle
    as the general direction of human activities intended to make human life complete
    and satisfying. [7]


    Aristotle did not leave us with much more than this on what is good to do,
    because his emphasis in ethics was on what it is good to know and to be.
    Like many philosophers, he was a virtue theorist, and to him ‘good’ meant being virtuous.


    Later philosophers re-examined human virtues and claimed they were human duties
    instead.




    Duties to oneself include preserving one’s life, pursuing happiness, and
    developing one’s talents. Duties to others fall into three groups. First,
    there are family duties which involve honoring our parents, and caring for
    spouses and children. Second, there are social duties which involve not
    harming others, keeping promises, and benevolence. Third, there are political
    duties that involve obedience to the laws, and public spirit. [8]




    Other philosophers preferred to judge between good and bad activities by determining
    their outcomes, an approach known as consequentialism.




    Consequentialist normative principles require that we first tally both
    the good and bad consequences of an action. Second, we then determine whether
    the total good consequences outweigh the total bad consequences. If the
    good consequences are greater, then the action is morally proper. If the
    bad consequences are greater, then the action is morally improper. Consequentialist
    theories are also called teleological theories, from the Greek word telos,
    or end, since the end result of the action is the sole determining factor
    of its morality. [9], [10]




    This raises the question of how good is actually distinguished from bad, but
    the most famous consequentialist would have good and bad be defined socially
    or democratically, as "the greatest good for the greatest number."
    [11]
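    The tallying procedure the quoted passage describes is, at bottom, simple
    signed arithmetic. A minimal sketch, using purely hypothetical utility numbers
    (the essay itself assigns no actual values to any consequence):

    ```python
    # A minimal sketch of the consequentialist "tally" described above.
    # The utility figures below are hypothetical illustrations invented
    # for this example; nothing in the essay quantifies consequences.

    def morally_proper(consequences):
        """An act is morally proper iff its total good consequences
        outweigh its total bad ones (net signed utility is positive)."""
        return sum(consequences) > 0

    # Hypothetical outcomes of deploying a new technology:
    # positive numbers for benefits conferred, negative for harms caused.
    deploy = [+5, +3, -2]     # benefits outweigh harms
    withhold = [+1, -4]       # harms outweigh benefits

    print(morally_proper(deploy))    # True
    print(morally_proper(withhold))  # False
    ```

    On the aggregative reading attributed to Mill, the same sum simply runs over
    everyone affected, which is all "the greatest good for the greatest number"
    formally requires.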


    The pragmatist John Dewey blended notions of virtue and consequentialism in
    his account of the nature and value of innovation.




    The logic of the moral idea is like the logic of an invention, say a telephone.
    Certain positive elements or qualities are present; but there are also certain
    ends which, not being adequately served by the qualities existent, are felt
    as needs. Facts as given and needs as demands are viewed in relation to
    each other because of their common relationship to some process of experience.
    Tentative reactions are tried. The old ‘fact’ or quality is viewed in a
    new light— the light of a need— hence is treated in a new way and thereby
    transformed. [12]




    It is unnecessary to delve much deeper into notions of technology and how philosophers
    define ‘good’ to see that there is an underlying consensus that technology should
    be deployed for good purposes. If one discovers something new and sees that
    it can be used for a good purpose, then it follows that the discovery should
    be used for that purpose. This is easily justified on all the theories mentioned above,
    whether in terms of improving the quality of life, having good consequences
    for many people, fulfilling a duty or serving a need. So philosophically and
    ethically speaking, there is a technological imperative.


    The technological imperative in jurisprudence.


    If the technological imperative has its basis in ethics, it should not be surprising
    to find this reflected in law, especially in product liability law, which addresses
    innovations that offer novel benefits or that cause the injuries which occasion
    legal disputes.


    Product liability law places a punitive burden on producers of defective products
    which cause injury. In contrast, the law refuses to impose a duty to retrofit
    non-defective products even when an innovation would, for example, improve their safety,
    and the reason is found in the technological imperative. "Imposing a post-sale
    duty to recall or retrofit a product ‘would discourage manufacturers from developing
    new designs if this could form the basis for suits or result in costly repair
    and recall campaigns.’" [13] However, nothing prevents
    a manufacturer from voluntarily retrofitting its products already in use— which
    is another way to introduce an innovation— and this has also been held to absolve
    a manufacturer of liability. [14]


    So in product liability law, the value of innovation eclipses other interests.
    Even though a duty to retrofit would benefit some consumers, the cost of such
    a rule would intolerably burden the general social benefits of technological
    progress.


    The internet is an innovation which may well prove to have the broadest impact
    on society since the invention of the printing press. Here, too, the technological
    imperative has come into play. Even though legislators know the internet might
    be misused, the law is willing to countenance the prospect of misuse to protect
    innovation.




    This explosive democratization of the capability to publish [on the internet]
    has raised fundamental questions about who bears responsibility for content
    that is harmful. In February 1996, Congress enacted 47 U.S.C. § 230 to eliminate
    some of the uncertainties surrounding that question. Section 230 establishes
    that providers of interactive computer services are generally immune from
    liability for harms resulting from the dissemination of defamatory or otherwise
    harmful information that other persons or entities create and make available
    through such services. [15]




    Indeed, the law in general may be said to be structured to encourage technological
    progress by making it a protected activity. The system of patents established
    by the United States Constitution, which gives inventors temporary monopolies
    on the use of their inventions, is probably the most obvious example of this.
    Thus, we may also conclude that the technological imperative is an important
    part of Western jurisprudence.


    The status of the technological imperative.


    Having established the existence of the technological imperative, it is necessary
    to determine its status. In both philosophy and law, some things rise to the
    level of a duty, while other things are considered virtues, and praised or rewarded
    when they are exhibited. For instance, it is deemed virtuous to be a hero, but
    there is no duty to be a hero. Indeed, one cannot become a hero by fulfilling
    a duty, and this is intuitively recognized by firefighters, who often deny that
    fighting a fire is heroic. In fighting a fire, they are doing their job.


    Acts above and beyond the call of duty, which are by definition voluntary,
    produce heroes. These are known as supererogatory acts.




    Although agents are not obliged by the dictates of ordinary morality to
    perform supererogatory acts— extraordinary feats of heroism or extreme deeds
    of self-sacrifice, for example— they may be commended for doing so. Normative
    theories that demand the performance of the best possible action in every
    circumstance render supererogation impossible by identifying the permissible
    with the obligatory. [16],[17]




    Both in law and in ethics, there is a duty to avoid causing harm to others.
    In ethics, a breach of this duty deserves punishment and in law, the punishment
    is delivered. In ethics, complying with this duty is not praiseworthy, and in
    law is irrelevant except as a defense. Doing good to others goes beyond this,
    and is supererogatory. [18] In ethics, this is
    virtuous and in law, is protected and encouraged by various rights.


    When it comes to employing technology, the analysis is similar, but not the
    same. Employing the current state of any art is nothing more than using available
    means to achieve expected ends, which is just as ordinary as not harming others
    and just as morally neutral. However, failing to employ the state of the art
    breaches no duty per se, and can at most be condemned as irrational or
    foolish. [19] This indicates, but does not yet demonstrate,
    that improving on the current state of the art is virtuous.


    As stated above, technology, by itself, is value-neutral. However, that applies
    only to the current state of the art. A technological improvement over the status
    quo is distinct because it by definition makes a new contribution to the availability
    of moral goods. [20] Otherwise, it would be no improvement
    and would not be adopted. Once adopted, it becomes the state of the art and
    reverts to value neutrality.


    This is consistent with the notion of technology as instrumentality, where
    technology serves as a means to an end, rather than being an end in itself.
    The conclusion is inescapable, however, that the value of a technology is measured
    with reference to the moral goods it may be used to produce or deliver. Similarly,
    every technological improvement is justly measured according to the moral goods
    it can be used to produce or deliver over and above the previous state of the
    art. In this way, the value of an invention derives from the ends it more ably
    serves. But it cannot accomplish that on its own.


    The improvements in technology called for by the technological imperative are
    supererogatory and laudable because of their novelty and usefulness [21]
    for improving the quality of human life, for conferring increased net benefits
    on society, or more readily fulfilling a moral duty. In this way, techne
    is again revealed as morally neutral while phronesis, the ability to
    "deliberate rightly about what is good and advantageous," becomes
    central to the virtue established by the technological imperative.




    Since phronesis is at work in discerning and choosing the mean at which
    ethical virtue aims, ethical virtue cannot achieve its own end without phronesis.
    On the other hand, discernment of the good and perfection of deliberation
    is dependent on having a good character; hence, without ethical virtue,
    one might have cleverness in figuring out the means to any end, but one
    would not have phronesis, the virtue of choosing the appropriate means to
    the right end. Excellence of character, then, and practical wisdom together
    form a whole which alone counts as genuine virtue. [22]




    On this analysis, phronesis justifies techne, and even more strongly justifies
    technological invention.


    The technological imperative establishes duties.


    Just as the moral considerations behind the technological imperative require
    [23] praise for those who exemplify the virtue of invention
    by making new technology available for use, patent law requires that inventors
    be compensated for a time for the use of their inventions, on condition that
    the substance of the inventions be disclosed and made freely available thereafter.
    In law, the condition is a duty. In ethics, it is a duty as well, and this is
    where the technological imperative becomes worthy of being called an ‘imperative.’


    The value of an invention derives strictly from its ability to be used to improve
    the availability of moral goods, but in ethics, as in law, there is no duty to act
    for the benefit of someone else absent a special relationship such as that between
    parent and child. Thus it would violate law and ethics to require that an inventor
    make an invention available to others, because that would amount to imposing
    a duty to act in the interest of others; [24] and the act
    of inventing creates no special relationship which could impose such a duty.
    Indeed, an inventor may legally and ethically make use of an invention in secret.


    The duty arises instead when the inventor offers the invention to be used.
    It is not a duty to use the invention; for as we have seen before, failure to
    employ the current state of the art may be unintelligent, but is neither reprehensible
    nor illegal. [25] Nor is it a duty attaching to the inventor,
    who will already have disclosed the substance of the invention. The duty is,
    instead, reciprocal among all who might benefit directly or indirectly from
    its use, and that is to allow its use. There are utilitarian aspects which justify
    this conclusion.




    The sciences, like the arts, may expand in two directions—in superficies
    and in height. The superficial expansion of those sciences which are most
    immediately useful, is most to be desired. There is no method more calculated
    to accelerate their advancement, than their general diffusion: the greater
    the number of those by whom they are cultivated, the greater the probability
    that they will be enriched by new discoveries. [26]




    This helps explain why in ethics, preventing or interfering with the dissemination
    of innovative technology is a breach of duty, and that duty is the technological
    imperative.


    Footnotes


    1 The author of this article is Andrew Apel, B.A. (Phil), M.A. (Phil.), J.D., editor
    of AgBiotech Reporter, http://www.bioreporter.com


    2 An early formulation of the precautionary principle is found
    in the "Wingspread Statement." See http://www.sehn.org/wing.html
    Versions of the principle appear in over 20 international treaties, laws, protocols
    and declarations. See, e.g., Royal Commission on Genetic Modification, "Current
    status of genetic modification in New Zealand," http://www.gmcommission.govt.nz/RCGM/pdfs/appendix1/2-2.pdf


    3 Such posited risks are often called "unknown risks."
    See, e.g., National Center for Policy Analysis, "Amid Famine, Africans
    Reject GM Corn," http://www.ncpa.org/iss/env/2002/pd082302b.html
    Danish Environmental Protection Agency, "The precautionary principle,"
    http://www.mst.dk/udgiv/Publications/1999/87-7909-203-9/html/bil01_eng.htm
    Friends of the Earth, "Tomorrow’s World," http://www.foe.co.uk/campaigns/sustainable_development/publications/tworld/land.html


    4 See, e.g., Daniel Chandler, "Technological or Media
    Determinism," http://www.aber.ac.uk/media/Documents/tecdet/tdet07.html


    5 Aristotle limits episteme to self-evident, axiomatic principles
    and to what can be logically derived from them. See Murat Aydede, "Aristotle
    On Episteme And Nous: The Posterior Analytics," http://web.clas.ufl.edu/users/maydede/Aristotle.html


    6 Aristotle, Nicomachean Ethics, 1140a24-1140b12. See Theodore
    Goertzel, "Three Approaches to Knowledge," http://www.crab.rutgers.edu/~goertzel/threeapproaches.htm


    7 Eugenio Benitez, "Eudaimonia," http://www.usyd.edu.au/philosophy/benitez/eudaimonia.html


    8 The Internet Encyclopedia of Philosophy, "Normative Ethics," http://www.utm.edu/research/iep/e/ethics.htm


    9 Ibid.


    10 Critics of consequentialist theory say that since all future
    outcomes of an act cannot be known in advance, its relative goodness or badness
    cannot be determined in the present. Critics of the precautionary principle
    say that the principle exploits this weakness in consequentialist theory by
    making it a pretext to ban all novelty.


    11 J.S. Mill. For an outline of the many approaches to defining
    good, see "What Matters in Life?" http://home.att.net/~talk2perry/what_matters.htm
    Though the claims made there are not entirely accurate, the outline serves well
    to show the variety of theories available, and the general compatibility of
    those theories with the conclusions of this paper.


    12 John Dewey, "The Evolutionary Method As Applied To
    Morality: II Its Significance for Conduct," http://spartan.ac.brocku.ca/~lward/dewey/Dewey_1902b.html


    13 Stephanie Scharf and Thomas Monroe, "Post-Sale Duty
    to Warn," http://www.jenner.com/files/tbl_s20Publications%5CRelatedDocumentsPDFs2254%5C324%5CMichigan.pdf
    citing Estate of Raap v. Clark Equip. Co., No. G88-614-CA7, 1989 WL 382091 (W.D.
    Mich. June 20, 1989). Note that this only applies to non-defective products.
    Another way to view the doctrine is that the improvement of a product does not
    render more primitive products automatically defective.


    14 Ibid.


    15 Patrick Carome, Samir Jain and Elizabeth deGrazia Blumenfeld,
    "Federal Immunity For Online Services: How It Works and Why It Is Good
    Policy," http://www.ldrc.com/Cyberspace/cyber1.html


    16 Free Online Dictionary of Philosophy (FOLDOP), http://www.swif.uniba.it/lei/foldop/foldoc.cgi?supererogatory


    17 On sociological evidence, there is justification for accepting
    both standard and normative theories as descriptive and explanatory. Those who
    rescue children from burning buildings, firefighters and civilian passersby
    alike, often reject being described as heroes; members of both groups often
    report feeling that their duty was either pre-existing or had been thrust upon
    them, and that in either case they merely did what they should have done under
    the circumstances.


    18 There is of course a vast spectrum of supererogatory acts,
    ranging from what some call "random acts of kindness" to developing
    superior crops which save millions from starvation.


    19 But see footnotes 24, 25.


    20 In this monograph ‘moral goods’ are defined as things a
    human may desire and attain without violating moral principles. These would
    include the necessities of life, as well as its enjoyments.


    21 In law, these are the two fundamental requirements for
    the issuance of a patent.


    22 Robert Berman, "Nicomachean Ethics: Commentary on
    Book III," http://webusers.xula.edu/rberman/CommentBk3.htm


    23 The requirement may be rationally necessary, per Aristotle
    or Kant; or practically necessary, per the consequentialists; or dutifully necessary,
    per those who rest their arguments on duty (which would again include Kant).


    24 There are significant exceptions to this which nonetheless
    underscore the technological imperative. When public interest in the value of
    a patented invention is sufficiently great, the US government may force the
    inventor to license the patent. See James Love and Michael Palmedo, "Examples
    of Compulsory Licensing of Intellectual Property in the United States,"
    http://www.cptech.org/ip/health/cl/us-misc.html
    A similar power is found in the World Trade Organization (WTO) Agreement on
    Trade-Related Aspects of Intellectual Property Rights (TRIPS). See "Health
    Care and Intellectual Property: Compulsory Licensing,"
    http://www.cptech.org/ip/health/cl/cl-ilaw.html


    25 There are significant exceptions to this, such as in food
    safety regulations, which are constantly upgraded to require the use of novel
    technologies – but this, too, underscores the technological imperative.


    26 Jeremy Bentham, The Rationale of Reward: Reward Applied
    to Art and Science, http://www.la.utexas.edu/labyrinth/rr/rr.b03.c03.html

  • Murder in the Name of Tradition

    ‘The justice system will come down on you like a ton of bricks’ for so-called ‘crimes of honour.’

  • Confusing Politics with Conformity

    ‘Conservative’ can be just code for ‘different from me’.

  • Philip Pullman Worries About Testing

    Teaching for the test makes children hate literature, Pullman says.

  • Bubble Car Blues

    This is what you get when ‘offensive’ is the shut-up word of the day. You get archbishops complaining that the BBC is reporting on the church, and equating criticism with hostility and bias.

    But there are clearly elements or individuals, mainly – as far as I can tell – within news and current affairs, who seem to approach the Catholic Church with great hostility. Certainly the Catholic community is fed up seeing a public service broadcaster using the licence fee to pay unscrupulous reporters trying to re-circulate old news and to broadcast programmes that are so biased and hostile. Enough is enough.

    So – what would a friendly and unbiased report on the Catholic church look like then? An admiring enumeration of the Pope’s wardrobe? A fond reminiscence about the warm friendship between priests and choirboys? A ringing endorsement of the Pope’s stand on birth control? Would anything less flattering than that be called ‘offensive’?

    It’s familiar stuff, but that doesn’t make it any more reasonable. An unholy alliance between identity politics and obscurantist religion that uses complaints about ‘offense’ to try to establish its right to be beyond criticism. Suck it up, bish. Your church is out there in the world telling billions of people what to do, including whether to have children or not. Claiming immunity on top of all that is really pushing it a bit.