Category: Archives

Site features and sections which are no longer being written or developed.

  • The precautionary principle

    "Genetically modified crops raise more questions than they answer. Insufficient
    laboratory tests have been done on the effect of GM crops before going to
    field-scale trials. The precautionary principle means that, if you are unsure
    of what the result will be, especially if it is going to be so serious, it
    would be wiser not to do it until more evidence is available."
    RSPB spokesperson. Source: Belfast Telegraph, 4 June 2003

    If you want to see a good idea become misunderstood and abused, give it a clear
    and simple name. Very often, this is the cue for people to start mistaking the
    clarity and simplicity of the nomenclature for clarity and simplicity in the
    idea itself.

    Such is the fate of "the precautionary principle". You can see it
    being appealed to in all sorts of contexts, as though it were a straightforward,
    obvious idea we can all understand and use. In fact, the principle is used in
    a myriad of varying, vague and misleading ways.

    In the example above, the principle is formulated as: Do not do anything where
    the outcome is uncertain and potentially serious until more evidence is available.
    The obvious problem here is that the outcome of almost all actions is uncertain,
    and often potentially serious. Even a simple decision like driving a long distance
    has an uncertain, potentially fatal outcome. (In the UK, road accidents are
    the most common cause of death among the under 35s.) Are we then not to drive
    "until more evidence is available"?

    Obviously this is absurd (and not what was intended). The point is to have
    enough evidence, not more. We do have enough evidence to make an informed choice
    about driving and most of us judge that the risks are worth taking. In contrast,
    the RSPB obviously thinks we don’t have enough evidence about GM crops to make
    a decision about field trials. The principle we are then following should read:
    Do not do anything where the outcome is uncertain and potentially serious until
    enough evidence is available to make an informed choice.

    The key question now becomes, "what is enough evidence?" "Enough
    to know what the risks are" isn’t a particularly informative answer, since
    it doesn’t add much to our concept of "an informed choice" to say
    it is one where the risks are known. This answer can mean something substantive,
    however, if it means "that which is sufficient to conduct a rigorous risk
    assessment". Risk assessment is about making decisions based on the desirability
    of outcomes weighted against the risks involved in the actions required to achieve
    these outcomes. Of course, it is for expert risk assessors to determine just
    what is enough evidence to conduct a risk analysis. But this is not a problem.
    It is just another example of the division of intellectual labour, an acknowledgement
    that a layperson is in no position to judge precisely what constitutes
    "enough evidence".

    We can now state the precautionary principle in a coherent way: do not proceed
    in a course of action with potentially serious consequences until you have sufficient
    evidence to conduct a proper risk analysis. This captures the spirit of what
    the RSPB spokesperson was trying to say.

    This makes it clear that the precautionary principle needs to run in tandem
    with risk assessment. It is not a stand-alone principle. Nor does it amount
    to the claim that the existence of risk or uncertainty in themselves provide
    reasons for not acting. Risk analysis is all about dealing with uncertainty
    – where there is certainty there is no risk. The caution that is urged by the
    principle is thus against acting on the basis of inadequate evidence. It is
    not a kind of blanket policy of risk-aversion. This is just as well, because
    if the precautionary principle just amounted to saying "do not take unnecessarily
    large risks" it would be a mere platitude.

    It is unfortunate, therefore, that the RSPB spokesperson chose to define the
    principle in terms of being "unsure of what the result will be." For,
    by doing so, a popular misconception about the principle was reinforced: that
    it is about being risk-averse where there is uncertainty. Consider this pair
    of definitions of the precautionary principle:

    "You should not adopt any new technology unless you are certain it is
    safe. (Rocky Mountain News, CO, 21 May 2003)
    "Nothing should ever be done until it has been proved safe." (Toronto
    Star
    , Canada, 19 May 2003)

    The key words here are "certain" and "proved". These demands
    are too high. It takes years to be sure that any new drug, for example, is totally
    safe beyond all reasonable doubt. And because this version of the precautionary
    principle asks us to prove a negative – that no harm is done – it puts absolute
    certitude beyond us. Yet the principle demands certitude or else it says we
    should not go ahead.

    Look at how the idea of the precautionary principle is actually used and more
    often than not, I would wager, it is used in this defective sense. The principle
    is not used together with risk analysis but as a substitute for it. If this
    were just a question of rhetorical shorthand, there would be little problem.
    (Except in those cases where such shorthand disguises the fact that a proper
    risk analysis reaches the opposite conclusion to the one claimed.) But the misuse
    of the principle is objectionable mostly because it encourages people to ignore
    the fact that we have to make decisions based on incomplete knowledge and judgement
    of risk, and instead offers the illusion that we can do nothing that isn’t certain
    to be safe. The popularisation of certain versions of the principle is thus
    contributing to the unrealistic and ignorant attitude that society increasingly
    has towards risk.

  • Ancient means wise

    "Manufacturers are reaping the benefits of natural remedies. It’s not
    surprising really, as they’re tried and tested ingredients dating back thousands
    of years."
    Spirit and Destiny magazine, February 2003

    The next time you suffer from an inflammation, why not try a little blood-letting?
    This "tried and tested" method dates back to the fifth century BCE
    and probably owes its origins to Hippocrates. The principle behind blood-letting
    is simple: the human body needs to maintain a balance between its four "humours":
    blood, phlegm, yellow bile and black bile. Inflammation is caused by excess
    of blood, so losing a bit of it should help restore the natural balance of your
    body and make you feel a whole lot better.

    Why not indeed? How about because it is ineffective, potentially dangerous
    and based on a false understanding of human physiology? But against these considerations,
    you have to balance the fact that it is an ancient form of therapy, practised
    for many centuries and thus "tried and tested".

    In weighing up these reasons for and against blood-letting, I would hope you
    would see there is no contest. Blood-letting is hokum, bad medicine based on
    bad science. The fact that it is ancient should not count as any reason to think
    it is good. Like many horrific medical practices of old, it has rightly been
    consigned to medicine’s hall of shame, though not before its misguided application
    killed off George Washington, among others.

    However, in many other cases, people are impressed by the sheer age of a medical
    practice, especially if it seems to involve "natural" processes.
    Flick through Spirit and Destiny, for example, and you’ll find frequent mention
    of the ancient origins of the various forms of nonsense on offer. Chinese astrology
    has been "practised for thousands of years." We learn that "Neanderthal
    man used yarrow." Well, yes, and he also killed animals with clubs. One
    range of beauty products uses "recipes dating back to the Elizabethan era."
    Just remember that was an age of bubonic plague and poor hygiene when you slap
    on your make-up.

    More respectable publications are not immune from this folly. Guardian Weekend
    magazine points out that "Melissa has been used in Tibetan medicine for
    more than 3,000 years." In this and most other cases it is not explicitly
    claimed that this is evidence that the remedies work, but it is clear that the
    mention of their longevity is supposed to attach a kind of seal of approval
    to them.

    It should be enough just to point out that "it’s been done for centuries,
    therefore it must be good" is a simple non sequitur. If anything,
    the old age of a treatment is a reason to be suspicious of it, since such treatments
    were formulated when there was little or no understanding of how the human body
    worked and when people routinely died from what are now seen as minor illnesses.
    It would be quite incredible if such biologically and medically undeveloped
    cultures were the source of many effective remedies.

    So what is it about "ancient wisdom" that makes it so attractive?
    When it comes to Eastern practices, one reason is surely the mythologising about
    the East that is common in western society. Places like China, Tibet and India
    are seen as very "spiritual" places, as though the people there were
    not human beings like the Americans, French and British.

    A better reason is that the basic idea of something being "tried and tested"
    is a good one. The problem is simply that, if you look at many of these ancient
    remedies, they have never been properly tested. The fact that they survive shows
    at best that most are probably harmless. But we know from our observations of
    the way superstitions persist, for example, that ideas can endure even though
    they have no basis in fact, and would fail any proper tests applied to them.

    Some ancient wisdom is genuine. For instance, there is a lot to learn from
    the philosophers of Greek antiquity. But this is because they knew almost as
    much about their subject matter – human nature and how to live well – as we
    do today. In the fields of science and medicine, however, we have come a long
    way since then. This is why Aristotle’s ethics is still worth reading while
    his biology is only of historic interest. And it is also why an Elizabethan
    recipe for a cake might be worth trying out, but a contemporary remedy for inflammation
    is best avoided.

  • Loading the dice

    "Julie (she’s open to spiritual stuff) and Kate (the cynical one) continue
    their voyage of discovery through the world of the New Age. This month our
    testing twosome try colourpuncture."
    Spirit and Destiny magazine, February 2003

    Imagine you’re a comedy writer and you want to send up New Age, alternative
    medicine. "Colourpuncture" would be a stroke of comedic genius. But
    too late – it’s already out there, and it’s for real.

    According to the truly frightening Spirit and Destiny, colourpuncture
    was devised by a German scientist, a claim which is typical of the New-Agers’
    desire to have it both ways: it’s an alternative to mainstream medicine, so
    not subject to the same principles and tests; but devised by a scientist, with
    the implication that it has some credibility with the mainstream.

    But I digress. This scientist "discovered that there was a connection
    between the network of Chinese meridians in our bodies and the healing effect
    of light-responsive colours." Well I can see one obvious connection – neither
    really exists.

    Spirit and Destiny ("for women who want the best possible future")
    is full of this kind of nonsense. What makes it so scary is that it is a glossy
    publication, sold in the High Street and aimed at the sort of women who buy
    other general interest titles. It irritates me more than anything I’ve read
    in a long time, so don’t be surprised to see it appear again in future columns.

    Identifying a bad argumentative move in such a publication is a bit like trying
    to identify the problem areas in the diet and eating habits of the average
    American: it’s so difficult to know where to begin. The extract I’ve selected
    contains a good example of what I call "loading the dice". This is
    when something is presented as if it were mere description when in fact it contains
    one or more implied value judgments.

    So, you notice that Julie is described as "open", which is generally
    considered to be a good thing. This in itself implies that her counterpart,
    Kate, has a closed mind, which is not something to be proud of. It gets worse
    for Kate, however. She is not described as sceptical but "cynical".
    This word clearly has negative connotations, especially when contrasted with
    the virtue of openness. Note also that "open" implies an unbiased,
    impartial attitude whereas "cynical" suggests prejudice against the
    spiritual. This suggests Julie is a fair and impartial judge and Kate a biased
    one, whereas in fact Julie is probably at least as biased in favour of the spiritual
    as Kate is sceptical about it (assuming the two characters are even genuine).

    So when Julie and Kate set about "testing" colourpuncture, the dice
    had been loaded. Were Kate to report negatively, we would be able to dismiss
    her views as those of a closed-minded sceptic. We can trust Julie, on the other
    hand, if she gives a favourable verdict, since she’s open-minded and fair. And
    what’s more, theirs is a "voyage of discovery", implying at the outset
    that what is being "tested" is a wonderful world of wisdom and knowledge,
    not a dubious sea of sloppy-minded rubbish.

    As it happens, Kate ends up scoring colourpuncture even more highly than Julie!
    It must be good to have persuaded such an old cynic as her. Not that she appears
    to be a very hard-nosed sceptic. She is easily impressed by the therapist’s
    own leaflet which claims "The effects are so well proven, that German
    insurance companies will fund treatments." "Cynical" Kate doesn’t
    even raise an eyebrow. Instead, she swoons, "Ahh, insurer backing and Teutonic
    endorsement, that’s the kind of logic I like."

    It looks probable, then, that the dice have been loaded in more ways than one
    here. Loading by language is perhaps the least obvious. It’s something that
    can happen a lot without our noticing. For example, cross-pollination of genetically-modified
    and non-genetically modified crops is referred to by environmentalists as "genetic
    pollution". Since "pollution" clearly has negative connotations,
    this description makes the cross-pollination sound bad before any argument or
    evidence is presented that it is bad. (Forgive the advertising break, but I
    talk more about such examples involving environmentalism in my book Making Sense.)

    Sometimes such loading is hard to avoid. For example, in the Iraqi conflict,
    such words as "liberation" and "occupation" carry evaluative
    connotations. Yet it can be cumbersome and difficult to describe what has happened
    and is happening in neutral language.

    The idea then that we can purge all language of evaluative connotations and
    thus not load the dice at all is unrealistic. What we can do is choose our words
    carefully, try not to load the dice and be aware of the implied judgements in
    the words we read and write. I’m sure that intelligent, open readers will agree
    with me, but am prepared to take criticism from cynical or slow ones. That’s
    only fair, after all.

  • Someone must pay

    "For every winner, there has to be corresponding losers, and it has
    nothing to do with skill, ‘investing’ or how popular you are."
    Neil Collins, The Daily Telegraph, 21 April 2003

    Like many opponents of the "something-for-nothing culture", Neil
    Collins does not just believe that if something good is done someone has to
    pay; he seems to regard it as a moral imperative that someone does. So irritated
    is he in his attack on the British Chancellor of the Exchequer, Gordon Brown,
    from which the quote above comes, that he finds himself insisting that there
    have to be multiple losers for a single winner, getting his grammar garbled
    in the process. (If it’s a sub-editor’s error, it still seems to capture his
    mood accurately.)

    The object of Collins’s wrath is a scheme where the British government will
    pay a lump sum into an investment fund for every child when it is born. The
    government may add to it at future birthdays and relatives may also contribute
    up to £1,000 per year. Babies from poor families will receive more from
    the government than their better-off peers.

    The thinking behind the scheme is that a little capital at the start of adult
    life can make a big difference between success and failure. The middle classes
    usually get a helping hand from their parents to enable them to "set themselves
    up" but the poor don’t get the same advantages. The so-called "baby
    bonds" scheme seeks to partially redress this imbalance.

    Sceptics have much to doubt in the scheme, not least the paltry sums involved
    – no more than £500 initially from the government, which is hardly going
    to force open many doors for an eighteen year old. But what seems to irritate
    Collins the most is his conviction that even if it makes some people better
    off, someone will have to pay for all this. That means on balance it’s going
    to be no help – what the government dishes out with one hand it will have to
    take back with the other.

    But is it true that for every winner, there has to be a corresponding loser?
    Whichever way you interpret this supposed piece of common sense, it doesn’t
    seem so.

    Could it mean that the total amount of wealth in a society must always remain
    the same, so any increase in one person’s wealth must lead to a decrease in
    someone else’s? Evidently not, since the world has got richer. It’s easy to
    see how: finding more resources or using them more productively increases wealth.
    So a primitive society that only produces food and bricks (and whose wealth
    can only be measured in these commodities) gets richer if it learns how to
    grow food and make bricks more productively. Bigger houses, more food, and no
    losers.

    Could it mean that any increase in government spending has to be matched by
    an increase in taxation? Not necessarily. All that is needed to fuel higher
    government spending is a higher tax yield. If the economy grows faster than
    inflation, tax rates can remain the same and government spending can increase
    in real terms, since there is a bigger fiscal pie for the government to take
    a proportionately identical slice out of. This is why, counter-intuitively,
    sometimes decreasing tax rates can lead to an increase in the overall tax yield.
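
    The arithmetic is easy to check with invented figures. In the sketch below, the
    helper tax_yield and all the numbers are made up purely for illustration: a fixed
    rate on a growing economy raises more revenue in real terms, and a slightly lower
    rate can still raise more than before, provided the economy has grown enough.

        # Illustrative arithmetic only: all figures are invented.

        def tax_yield(economy, rate_percent):
            """Revenue raised by applying a flat tax rate (in %) to total output."""
            return economy * rate_percent / 100

        year_one = 1000   # total output in year one (arbitrary units)
        year_two = 1030   # the same economy after 3% real growth

        # The same 20% rate on a bigger economy raises more revenue, so spending
        # can rise in real terms without any rise in tax rates.
        print(tax_yield(year_one, 20))  # 200.0
        print(tax_yield(year_two, 20))  # 206.0

        # A lower rate can still raise more revenue if the economy grows enough –
        # the counter-intuitive possibility mentioned above.
        print(tax_yield(1100, 19))      # 209.0

    None of this settles whether baby bonds are a good idea; it only shows that
    higher spending does not automatically require a matching loser elsewhere in
    the tax system.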

    Could it mean, more vaguely, that if someone benefits someone else must pay
    in some way, even if these benefits and payments are not strictly financial?
    Again, there’s no reason to think so. There may be ways, for example, to help
    convicted criminals reassimilate into society, and since criminals cost the
    taxpayer whereas working people contribute to national wealth, such a scheme
    would benefit criminals, taxpayers and society alike.

    The principle that for every winner, there has to be a corresponding loser
    just doesn’t seem to hold. Contrary to what Collins says, there are such things
    as win-win situations. Social cooperation as a whole is an example of this.
    We can all win by refraining from acts such as theft and violence, for example,
    since if we all (or most of us) do that we all enjoy greater security and don’t
    have to use precious resources making ourselves safe from attack.

    Baby bonds may turn out to be a bad idea, just as schemes that aim to stop
    criminals re-offending often don’t work. But neither failure would be proof
    of any general principle that any benefit has an equal and opposite harm. It
    may be prudent to always ask if anyone stands to lose from an apparent win-win
    situation, but that does not mean some things can’t make us all better off.

  • Appealing to common sense

    "I am not convinced carbohydrate-restricted diets meet the standards
    of common sense, and thus, I am not convinced that more research is needed."
    Dr. David Katz, New Haven Register, 14 April 2003

    I couldn’t have said it better – or worse – myself. I despise faddy diets,
    and so I’m right behind Dr Katz’s dismissal of the idea that severely limiting
    the intake of carbohydrates is a healthy or sustainable way of losing weight.
    (The fact that I adore pasta and pizza does not, of course, influence my judgement.)
    And I’m also keen to find ammunition to support my cause, having discovered to my despair that a friend has followed Dr Atkins down his low-carb garden path.

    I’d even go so far as to say that I’m very attracted to the dictates of common
    sense. But that’s just where I’d be going wrong. For common sense is a poor
    indicator of what is true, or for those who are suspicious of the "T"
    word, of what is reliable, practical or efficacious.

    This point was made forcibly in the biologist Lewis Wolpert’s book The Unnatural
    Nature of Science. Wolpert points out that science repeatedly confounds
    common sense, or at the very least what is seen as common sense at any particular
    time. For example, common sense held that a heavy object would fall faster than
    a light one. Common sense would say that if the world were a sphere, people
    on the bottom half would fall off, and we’d definitely be propelled into space
    if the world were spinning. Common sense – at least in the form of received
    wisdom – says that going out into the cold in wet clothes increases your chances
    of getting a cold. Common sense on all three counts is wrong.

    It doesn’t take much to show that common sense is unreliable. Yet I too find
    the lure of common sense almost irresistible. What I think we’re often doing
    when we appeal to common sense is expressing a kind of exasperation. Something
    seems obviously true or false to us and we really don’t think it is worth the
    effort explaining why. Appealing to common sense is thus just a shorthand way
    of saying we think something is obviously true or false. But it’s a misleading
    strategy, since it evokes some kind of external authority. "Common sense"
    implies a kind of universal standard of rationality which is more than just
    what ordinary people happen to think, but which requires less than specialised
    knowledge. But if it is more than just what people happen to think, doesn’t
    it just become what we believe people ought to think? And if this is what we
    mean when we talk about common sense, shouldn’t we say so and make clear why
    we think they ought to agree with us? Saying it is just common sense is a way
    of shifting responsibility for having to explain why we think what we do onto
    some mythical judge of ordinary reason.

    So common sense is not just unreliable, it’s hazy and ill-defined. We should
    try to avoid using the phrase altogether and instead replace it with something
    that at least makes the basis of our judgement clear: it is what people generally
    think; it is what all experience points to being true; it is received wisdom;
    it is what I think is obvious; it is self-evident. Not all of these are good
    justifications for belief. But at least they are clear and make it possible
    to honestly assess what the justifications are. Saying something is common sense
    is just a way of trying to avoid justifying it altogether.

  • Selective quotation

    "My position is that, regardless of the circumstances, France will vote
    ‘no’."
    Jacques Chirac, President of France, 10 March 2003

    The phrase "cheese-eating surrender monkeys" has been happily adopted
    by many Americans and Britons as a fun way of expressing disdain for the French.
    It gives them something to laugh about as they tuck into their "freedom
    fries" and swill non-Gallic wine. No matter that the phrase, first uttered
    by Groundskeeper Willie in The Simpsons, was surely a send-up rather
    than a celebration of the current mood of francophobia. Either that or Matt Groening
    et al have lost their genius for razor-sharp satire and social observation.

    What have the French done to deserve their pariah status? It wasn’t just the
    matter of their opposing the war on Iraq – many countries did that – it was
    the manner in which they appeared to do so. What more than anything enabled
    critical commentators to paint the French as unreasonable in their opposition
    was President Chirac’s declaration in a television interview that "regardless
    of the circumstances", France would exercise its veto and vote against
    any "second" UN resolution on Iraq. (In fact, Downing Street lists
    ten resolutions on Iraq, including 1441, which it claims Iraq has not fully
    complied with.) The remark caused outrage in Britain and America, evidence,
    it was said, that France had closed its ears to reason and argument. Downing
    Street called it "poisonous" and Jack Straw, the foreign secretary,
    said it had made war more likely.

    The problem is (as has been noted by BBC radio’s World at One and the
    Guardian newspaper) that Chirac has been the victim of selective quotation.
    What he actually said, in full, was: "My position is that, regardless of
    the circumstances, France will vote ‘no’ because she considers this evening
    that there are no grounds for waging war in order to achieve the goal we have
    set ourselves, i.e. to disarm Iraq."

    The crucial words here are "this evening". Even more importantly,
    the discussion prior to these comments had clearly been about how France would
    vote that evening (if there had been a vote), in a number of different hypothetical
    circumstances, such as there being or not being a majority of nine on the security
    council for a new resolution. So "regardless of the circumstances"
    clearly means regardless of how other members of the council voted, and "this
    evening" indicates that the stance being taken is not one that would never
    be changed. (You can read the interview for yourself at www.elysee.fr/ang/actus/speeches_.htm)

    Indeed, Chirac explicitly does not rule out the eventual use of force. "France isn’t a pacifist country," he said, and it "doesn’t refuse war on principle. France considers that war is the final stage of a process."

    However, by selectively quoting Chirac – pulling out a short phrase and not
    even a whole sentence – he could be portrayed as an implacable opponent of the
    use of force under all conceivable circumstances. In other words, a cheese-eating
    surrender monkey.

    This crime is not to be confused with the inevitable and harmless practice
    of quoting only short extracts or phrases. In this sense, anything other than
    a full reprint of the original speech or work is selective quotation, and every
    Bad Move begins with a selective quotation. The phrase "selective
    quotation", however, should be reserved as a term of opprobrium. So when the selection
    does not distort or misrepresent, it should not be called a selective quotation.

    I have discussed riding roughshod over the context in which a comment is made
    in Bad Moves before. Selective quotation is the most extreme form of
    this type of poor argumentation, since it not only fails to take account of
    the particular time, place and circumstance which are often crucial to understanding
    what someone really means or intends, but also fails to take account of even the other
    words which immediately surround it and which form a part of one single act
    of speech or writing. Ignoring the context can often be a mere error: selective
    quotation is more like wilful distortion.

  • Can’t do it? Don’t back it.

    "I am a vegetarian but I have no problem with animals as food, I just think
    that if you are not prepared to kill it, you ought not to eat it."
    Allan Beswick, columnist, Manchester Metro News, 11 April 2003

    Confucius’s golden rule was "Do not do to others that which you do not
    want done to you." Beswick’s golden rule seems to be "Do not have
    done to others that which you would not do yourself."

    It’s an extremely popular moral maxim. Anti-war campaigners berate "hawks"
    on the grounds that they usually show little willingness to get out on the battlefield
    and face the enemy themselves. Opponents of capital punishment can make those
    with the contrary view squirm by asking if they would be prepared to flick the
    switch themselves. Anti-abortionists use images of terminated foetuses to suggest
    that we can only maintain support for abortion if we keep ourselves far removed
    from the actual act itself. The message of them all is that if you can’t do
    it yourself, then you’re a hypocrite to say it’s all right for others to do
    it on your behalf.

    The undeniable rhetorical force of this move is not, however, supported
    by any rational argument. Quite simply, there is no necessary link between the
    rightness or wrongness of an action and one’s ability or inability to do it.
    This can easily be seen by considering all four combinations of approval or
    disapproval and the ability or inability to act.

    Two of these combinations obviously carry no moral implication. The ability
    to perform an action does not make it wrong. If that were the case, anything
    we could do would be wrong and we could never do anything right. And the inability
    to perform an action does not make it right. If it did, anything impossible
    would be morally good. Yet it’s hard to see why we should think torturing people
    with square circles would be an honourable, if hypothetical, deed.

    More interestingly, it is clear that the ability to perform an action does
    not make it right. Beswick may be happy to allow the willing slaughterers their
    meat, but anti-abortionists think those willing to carry out the deed they oppose
    are wicked, not morally justified in what they do. Doves think the same of gun-toting
    hawks, as do opponents of capital punishment of those willing to act as executioner.
    All this would be surprising if one’s ability or inability to carry out an action
    were any kind of indicator of its moral status.

    Which leaves the original case, where an inability to perform an action is
    supposed to indicate that there is something morally suspect about it. Yet the
    only thing that follows from such an inability is the psychological truth that
    we find the act unpleasant and so are disinclined to do it. But since when has
    unpleasantness been any kind of reliable moral barometer? Most people would
    recoil from undertaking an autopsy, yet that doesn’t make autopsies immoral.
    Many of us would find it too unpleasant to kill someone or put our lives at
    great risk in a heroic act to save others, yet that only says something about
    us, not the morality of what we cannot do.

    Indeed, it is often the mark of acts of great moral bravery that ordinary people
    recoil from them. But you don’t hear people saying "I think that if you
    are not prepared to attempt air-sea rescue yourself, you ought not to take advantage
    of it if it’s offered to you," or "I can’t stand these people who
    support helping the poor in third world countries but who aren’t prepared to
    get out there and do it themselves." In these examples, the absurdity is
    apparent. But the logic is exactly the same as in the previous cases of vegetarianism,
    capital punishment and war: an inability or unwillingness to do something yourself
    is taken to be a sign that you ought not to approve of it. The one just doesn’t
    follow from the other.

    I suspect that the reason why the move is so popular is that it disguises a
    fair concern. It is reasonable to ask people in some way to confront the reality
    of what they support, if that reality is unpleasant. People who support capital
    punishment need to realise that it results in the death of a human being; carnivores
    that animals die so they can get their steaks; and pro-choice campaigners need
    to realise just what is being destroyed in an abortion. But accepting honestly
    what a moral stance entails is not the same as necessarily being able to do
    personally all which that moral stance requires. Our own squeamishness or cowardice
    is not a reliable guide to morality.

  • Dubious advantages

    "We do not generally employ people who have spent a career doing something
    else and who have turned to executive search as a second career. We want our
    people to be the best at hiring great management. … To do this well you
    need to get the kind of commitment you have in a first career, not a second
    one."
    Armstrong International advertisement, 2003 campaign (Source: The Economist,
    29 March 2003)

    The comic alter ego of Graham Fellows, the hapless singer-songwriter John Shuttleworth,
    had a wonderful line in his stage show when he evangelised to the audience over
    the merits of a well-known sports drink. "It’s isotonic," he said,
    "it cares for the environment."

    As with so much of the Shuttleworth act, behind the banality lies an astute
    observation. Like many of us, Shuttleworth is easily impressed by the claims
    made by manufacturers and advertisers for their products, even when he doesn’t
    understand what these claims mean. The mere fact that something is presented
    as an advantage is enough to win him over.

    This is a version of the wider problem that if a claim is made with sufficient
    strength, conviction or authority, it tends to be accepted whatever its merits.
    The sub-species of "dubious advantages", however, works this trick
    in a slightly more sophisticated way. It works by presenting a claim which is
    factually correct, but in such a way as to make it appear like an advantage.

    The classic version of this comes with the many foodstuffs which are advertised
    as "95% fat free" or similar. There is nothing at all factually incorrect
    about this. But the way that the claim is splashed over the packaging makes
    it evident that this fact is supposed to describe an advantage. What could this
    advantage be? Many consumers will assume that it means the product is healthier,
    or is a better option if they are trying to lose weight. But many such low-fat
    cakes, for example, are loaded with sugar and a serving can contain just as
    many calories as other regular-fat alternatives. In short, the fact that something
    is 95% fat free isn’t necessarily an advantage, even though it is being sold
    to you as one.
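
    A quick back-of-the-envelope calculation makes the point concrete. The figures
    below are invented and describe no particular product; the only real numbers are
    the standard estimates of roughly 9 kcal per gram of fat and 4 kcal per gram of
    carbohydrate or protein.

        # Invented figures for two 100g cake servings, for illustration only.

        def calories(fat_g, carb_g, protein_g):
            # Standard energy densities: ~9 kcal/g fat, ~4 kcal/g carbohydrate or protein.
            return 9 * fat_g + 4 * carb_g + 4 * protein_g

        low_fat_cake = calories(fat_g=5, carb_g=70, protein_g=5)   # "95% fat free"
        regular_cake = calories(fat_g=15, carb_g=45, protein_g=5)  # ordinary recipe

        print(low_fat_cake)   # 345 kcal
        print(regular_cake)   # 335 kcal

    In this made-up comparison the "95% fat free" cake ends up with slightly more
    calories than the regular one, because the missing fat has been replaced with
    sugar: a factually correct label dressed up as an advantage.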

    Once you become alert to this, examples leap off the supermarket shelves and
    the advertising billboards. Why is it good that something contains Guarana if
    the amount it contains is less than that required for it to have any effect,
    assuming it has a desirable effect anyway? Why is it better that something comes
    in a new, bigger size, if the price has increased proportionally? Why should
    we rejoice that a cereal now comes in a foil bag when it was perfectly crispy
    in the old plastic one?

    What makes the Armstrong International advertisement particularly interesting
    is that by spelling out so clearly why it is supposed to be an advantage to recruit people who are starting their first career, Armstrong are being more open than those who
    merely imply their dubious advantages, but they also thereby make the questionable
    nature of this advantage clearer. For it just doesn’t seem at all evident that
    people are more committed when on their first career than their second. Indeed,
    many people just drift into their first career, and the move to a second one
    often requires more commitment. And people on their second career have more
    experience, not least of which kinds of people make great managers.
    Prima facie, then, the claim that this feature of their recruitment practices
    is an advantage is questionable and it seems unlikely that any empirical evidence
    exists to back it up.

    The presentation of dubious advantages probably works because we are cognitive
    misers who will always make as few judgements as possible to get by. We prefer
    "that’s true" or "that’s false" to "the factual part
    of that claim is true but its implied advantages are not real." The latter
    requires us to distinguish the factual content from the evaluative implication
    of a claim and when we’re glancing at advertisements or product packaging, that
    can be a cognitive task too many. It’s not that we’re stupid, it’s just that
    we are already bombarded by commercial messages and we’re doing all we can to
    filter them out. Also, there aren’t many of us who are at our mentally sharpest
    when doing the shopping.

  • Complacent superiority

    "When asked "How often do you have sex?" Horne replied, "Every
    orgasm is a sacred offering to the universe." When asked if she believed
    in life after death, she replied, "The energy that we are has to go somewhere."
    A witch’s wisdom or wacky Wicca waffle? I’ll let you be the judge."
    Julian Baggini, Bad Moves: The Is/Ought Gap, butterfliesandwheels.com

    Who do we think we are, we guardians of good sense and rationality? In this
    series, I claim to "detail the various ways in which arguments or points
    are made badly, but often persuasively." Presumably, that must mean I think
    my own arguments are made well. Is that confidence or arrogance?

    Butterflies and Wheels itself risks hubris when it sets out to "fight fashionable
    nonsense". It had better be right about what it claims is nonsense and be
    sure not to add to the mass of intellectual trash in the world with its own outpourings.
    And how can it be sure that it is effectively opposing "pseudoscience that
    is ideologically and politically motivated"? Is it claiming its editors are
    not themselves motivated by politics or ideology, or that if they are they can
    nonetheless distance themselves from it?

    The point of raising such doubts is not the non sequitur that no argument is more
    or less rational than any other, or that ideological commitments cannot be at
    least partly set to one side. It is rather a reminder that the pursuit of noble
    ends, such as truth or goodness, is difficult and bound to be accompanied by error.
    However, when one puts oneself on the side of the angels it is all too easy to
    start believing that one has sprouted wings. The result can then be complacent
    superiority: the belief that one is "good" or "rational" and
    therefore immunised against wickedness or poor reasoning. If one starts to believe
    that, or adopt it as an unconscious assumption, then one is in danger of falling
    into just the kind of badness or ignorance one is supposed to be against.

    There are two important things to remember if we are to avoid falling into this
    trap. The first is simple vigilance. Never assume your arguments are rational;
    always scrutinise your own reasoning for signs of sloppiness. The second is recognising
    that the lines which divide the clever and the stupid, the good and the poor argument,
    are rarely sharp.

    Hence the example of my own treatment of Fiona Horne in a previous Bad Moves.
    The raison d’être of Bad Moves is to distinguish good arguments from pure rhetoric:
    bad arguments which are nonetheless persuasive. However, it has to be admitted
    that even the most rational of arguments are not free from rhetorical flourish.
    It is simply false to claim that in my critiques of rhetoric I myself never resort
    to the use of techniques which are not in fact rational arguments.

    In the extract quoted above, for example, I indulge in a little fun-poking. I would defend
    this by pointing out that the rhetorical move of gentle mockery employed here
    is neither disguising poor reasoning nor masquerading as good reasoning. It also
    works merely by presenting albeit selective quotes from Ms Horne and inviting
    the reader to judge their merits. But this invitation is not made neutrally: I
    am nudging and winking to the reader to indicate that I think what she says is
    ridiculous.

    There is nothing wrong with this. Apart from anything else, our texts would be
    very dry if they always aspired to be as neutral and humourless as is humanly
    possible. The only danger is if we imagine that we are reading or writing material
    which is entirely free of all rhetorical content. Rhetoric when in the service
    of good reasoning is a good thing: it makes good arguments more persuasive and
    following such arguments more enjoyable. It is only when rhetoric is in the service
    of poor reasoning that it becomes a problem.

    Nonetheless, rhetoric is not the same as rational argumentation, so those who
    aspire to be on the side of reason need to remember not just that they are fallible,
    but that they too use some of the tricks of persuasion they are only too keen
    to criticise in their adversaries.

  • The is/ought gap

    "Humans have not evolved to be monogamous; the survival of the species
    depends on diversity."
    Fiona Horne, 8 March 2003, The Guardian Weekend

    In case you haven’t heard of Fiona Horne, this multi-talented antipodean is a rock
    star, journalist, author, model and witch. Seriously. She’s written five books
    on witchcraft, including Witchin’: A Handbook for Teen Witches. Like it or not,
    what this person says gets published and listened to.

    Judge for yourself whether this is a good thing. When asked "How often
    do you have sex?" Horne replied, "Every orgasm is a sacred offering
    to the universe." When asked if she believed in life after death, she replied,
    "The energy that we are has to go somewhere." A witch’s wisdom or
    wacky Wicca waffle? I’ll let you be the judge.

    Her musings on evolution were in response to the question "Do you believe
    in monogamy?" Her answer would seem to suggest that she doesn’t, but in
    fact she doesn’t give a direct answer and her reply doesn’t help us decide whether
    she means yes or no.

    Horne’s reply makes two factual claims: that we have not evolved to be monogamous
    and that diversity is essential to the survival of the species. Horne has the
    backing of most evolutionary psychologists for the first, since they would agree
    that strict monogamy is not the behaviour pattern evolution has favoured. However,
    the second part of her claim – that "the survival of the species depends
    on diversity" – is something of a non sequitur. There is, after all, no
    evidence that the strict practice of monogamy would threaten the "diversity"
    required for human survival. The main kind of diversity required for the species
    to flourish is a mixing of the gene pool, and this is threatened by excessive
    in-breeding, not monogamy.

    So the only pertinent point made is that we have not evolved to have a propensity
    for monogamy. But even if that is true, the question was about whether Horne believed
    in monogamy. Normally we take this to be a question about whether someone thinks
    monogamy is a good thing. (Clearly she’s not being asked whether she thinks
    monogamy exists.) If this is the question, how can talking about evolution even
    begin to answer it?

    The problem is that the story of our evolution can only tell us facts (or perhaps
    we should say in this case hypotheses) about why humans have certain predilections
    for particular types of behaviour. What it can’t do is tell us whether we should
    act on these predilections or not. Indeed, morality as normally understood surely
    does require us to sometimes go against our evolved predispositions. For example,
    many evolutionary psychologists would say that we have evolved to put the welfare
    of our close kin above that of strangers. But that would not make it right to
    favour a job application from a relative over a better-qualified stranger. What
    is right is not necessarily what we are most disposed to do.

    The general point is that nothing about what we ought to do necessarily follows
    from what we actually do, or have predispositions to do. This is known as the
    is/ought gap, and was first expressed in a famous passage from David Hume’s
    A Treatise of Human Nature. Hume’s point
    is a simple logical one. No statements of values (what "ought" to
    be) follow from any premises which simply describe facts (what "is").
    If you want to reason from the fact that kicking people causes pain to the conclusion
    that you ought not kick people without good reason, you cannot do so unless
    you introduce a statement of values, such as "causing pain without good
    reason is wrong". You have to put values in to get values out: they are
    never simply generated from the facts.

    The debate over the is/ought gap has got quite sophisticated and many philosophers
    argue that the gap is not unbridgeable. However, even if the gap can be bridged
    it needs some clever reasoning and demonstration for it to be done. What cannot
    be justified is a leap from facts to values with no demonstration of how this
    can happen. And certainly to answer a question about values with a stark statement
    of fact, as Ms Horne did, is not to answer the question at all, no matter what
    the respondent thinks.

  • False dichotomies

    "Every nation in every region now has a decision to make. Either you
    are with us, or you are with the terrorists."
    George W Bush, 20 Sept 2001

    You couldn’t get a starker demonstration of a false dichotomy than President
    Bush’s bold statement, made shortly after the attack on the World Trade Center
    in 2001. A false dichotomy presents two options as though these exhausted all
    the possibilities, when in fact there are other choices available. In this example,
    one alternative to Bush’s choice is to oppose terrorism but also to oppose America’s
    preferred methods of dealing with it. A person or country that adopts that line
    is not with President Bush, but nor are they with the terrorists.

    On a charitable interpretation of Bush’s speech, he wasn’t really trying to
    suggest that the choice was so stark. He continued by saying, "From this
    day forward, any nation that continues to harbour or support terrorism will
    be regarded by the United States as a hostile regime." This suggests that
    not being "with us" requires acquiescence with terrorists and not
    just failure to support US policy.

    Indeed, when Bush repeated the dichotomy a few weeks later, in the context
    of a crackdown on terrorist finances, again the main message seemed to be that
    turning a blind eye to terrorism counted as being against America in its fight
    against it.

    However, if this is true, why did Bush not only choose these particular words
    but also repeat the same formulation? The answer could be that, as a
    description of the facts, the dichotomy is false, but as a description of America’s
    intentions, his words sent out a clear message. As a matter of fact, you may be with neither
    the terrorists nor America. But if you choose not to be with America, America
    will view you as being against her. America makes the untruth of the false dichotomy
    true by deciding that it will treat all those who are not with her as being
    against her, whether they see themselves in that way or not. This is one reason
    why many Europeans have accused Bush’s administration of adopting a bullying
    attitude.

    Whichever way you interpret Bush’s words, it is clear that taken literally
    they are just false. Yet the rhetorical trick of presenting a false dichotomy
    (or a false set of more than two options) is very popular. You often see a version
    of it in Christian evangelical literature. Christ, they say, claimed to be the
    son of God. He must have been telling the truth, lying or mad. There is no evidence
    that he was a liar or mad, so therefore he must have been telling the truth.

    Of course, the problem is again that the options presented don’t exhaust the
    possibilities. Jesus may well not have claimed any such thing – the Gospels
    may be unreliable. He may also have meant something more metaphorical. After
    all, in Genesis it is said that "When men began to increase in number on
    the earth and daughters were born to them, the sons of God saw that the daughters
    of men were beautiful, and they married any of them they chose." (6:1-2)
    So clearly being the son of God isn’t a unique achievement and may mean something
    less than it is usually taken to be. Whichever way you look at it, there are
    more than the three options presented.

    If we were to be too strict in our policing of false dichotomies, we would
    be robbed of some great quotes. "Life is either a great adventure or nothing,"
    said Helen Keller. Well, no, but I see her point and it wouldn’t have quite
    the same ring suitably qualified. Ditto Anthony Robbins’ maxim, "In life
    you need either inspiration or desperation." Better still, Max Lerner’s
    warning, "Either men will learn to live like brothers, or they will die
    like beasts," is no less forceful for being literally false.

    The false dichotomy is a great simplifier. It cuts out all the complexity of
    an issue and presents just two choices, take ’em or leave ’em. There are times
    when rhetorical force justifies this wilful simplification. But we have to remember
    that it is simplification. If we accept such dichotomies too easily or at face
    value, then we are in danger of imagining the world is all black and white and
    we will miss the critical shades of grey.

  • Better than nothing

    "Although conditions in many of the [sweat]shops are admittedly wretched,
    people chose to work in the shops of their own free will, experts point out,
    because a lousy job is better than none at all. If major U.S. retailers stop
    doing business with countries where exploitation is a fact of life, maquila
    production will decline further in Central America and thousands of workers
    – children and adults – will join the ranks of the unemployed, experts warn."
    (Source: National Center for Policy Analysis, Month In Review: Trade, June
    1996) (http://www.ncpa.org/pd/monthly/pd696r.html)

    Sweatshops stir the consciences of all but the hardest of westerners who become
    aware that most of their clothes come from them. We know that conditions in
    these factories, usually, but not always, located in the developing world, are
    awful. We know that workers often have few if any rights, receive measly pay
    and often work in hazardous environments. If we think about it too much, we
    may even wonder if we are modern-day slave-owners, enjoying the fruits of the
    labour of those who toil on our behalf under conditions we would never accept
    for ourselves.

    It can be very appealing, therefore, when someone comes up with an argument
    that tells us we shouldn’t feel bad after all. Even better if that argument
    says some true things.

    This argument comes courtesy of the National Center for Policy Analysis (NCPA),
    whose goal "to develop and promote private alternatives to government regulation
    and control, solving problems by relying on the strength of the competitive,
    entrepreneurial private sector." Those not drawn towards neo-liberal free-market
    orthodoxy may feel suspicious of what such a body has to say, but their argument
    needs to be judged on its merit, not its provenance.

    Set aside for one moment the argument that the workers in these sweatshops
    chose to work there freely. That will be a topic for a future Bad Moves. Focus
    instead on the main point, which is that if we don’t buy goods which come from
    sweatshops, the workers we are concerned about will be worse off, since their
    poorly paid, tough and often dangerous jobs are better than no jobs at all,
    or the alternatives open to them.

    The argument has a good pedigree. Much cited is Lucy Martinez-Mont’s article
    "Sweatshops Are Better than No Shops" (Wall Street Journal, June 25,
    1996) in which she wrote, "Banning imports of child-made goods would eliminate
    jobs, hike labor costs, drive plants from poor countries and increase debt.
    Rich countries would sabotage Third World countries and deny poor children any
    hope of a better future."

    What Martinez-Mont says is true. The question is, what follows from it? What
    clearly doesn’t follow is that we can carry on buying child-produced goods with
    impunity, as many (but not all) proponents of the argument would have you believe.

    The reason for this is that the choice is not between the status quo and banning
    such imports. This is something most "fair trade" campaigners know
    full well. For example, the Maquila Solidarity Network (www.maquilasolidarity.org)
    advises, "Don’t promote a blanket boycott of all goods produced by child
    labour," precisely on the grounds that simply withdrawing custom and leaving
    nothing in its place is harmful to those they want to help. The Ethical Trade
    Initiative (www.ethicaltrade.org) base code prohibits "new recruitment
    of child labour" and insists that member "companies shall develop
    or participate in and contribute to policies and programmes which provide for
    the transition of any child found to be performing child labour to enable her
    or him to attend and remain in quality education until no longer a child."

    The point is simple. Poor working conditions may be better than nothing, but
    that does not justify us in supporting poor working conditions. The alternative
    should not be nothing, but making things better. A parent who feeds their child
    junk food cannot say that they should not be criticised because junk food is
    better than no food. The point is that the parent has the choice to offer proper
    food.

    So often "better than nothing" arguments simply gloss over the possibility
    of changing things for the better and only draw comparisons with the even worse
    option of "nothing". If it is genuinely the case that the only options
    are something bad, and nothing, which is even worse, that does present a real
    moral dilemma. But most of the time these aren’t the only options.

    Consider one final example. It may be better for all parties concerned if you
    buy a child from a poor family starving to death to be your personal slave rather
    than just leave them. But does that really make it morally justifiable? After
    all, if you can afford to buy a slave, you can afford just to give them the
    money and take nothing in return. That too is better than nothing, but also
    much better than the other alternative.

  • Arguments from incredulity

    "No one in their right mind can look in the stars and the eternal blackness
    everywhere and deny the spirituality of the experience, nor the existence
    of a Supreme Being. There were moments when I honestly felt that I could reach
    out my hand, just as the pilot John Magee says in his poem ‘High Flight’,
    and touch the face of God."
    Eugene Cernan, last man to walk on the moon (Source: Observer Magazine,
    16 June 2002)

    These few lines are stuffed full of argumentative bad moves. There’s the ad
    hominem abuse – people who disagree are just not in "their right mind".
    There’s also a whiff of the argument from authority: an "I’ve been into
    space buddy, and you haven’t, so you’d better believe I know what I’m talking
    about" attitude. But the flaw I want to focus on is what can be called
    the argument from incredulity.

    An argument from incredulity essentially works by taking the fact that one
    can’t believe or imagine that something is true (or false) to be a good reason
    for thinking it isn’t true (or false).

    In this case, when he looks into space, Cernan simply can’t believe that there
    isn’t some kind of spiritual dimension or supreme being behind it all. And so
    his argument is that there is, therefore, some kind of spiritual dimension or
    supreme being behind it all.

    And that really is the sum total of his case. He makes it sound as though you
    too should be unable to deny the deity, since he claims that you're not in your
    right mind if you do deny it. But that's just assertion mixed up with some
    abuse. It doesn't advance the argument any further.

    It might be thought misleading even to think of this as an argument. Rather,
    Cernan is merely asserting what he thinks. But there does seem to be an argument
    buried here: because he cannot look out into space and deny the existence of
    a supreme being, and because he thinks that no other sane person can either,
    it follows that the supreme being is real. The logic of the assertion can
    certainly be expressed in the form of an argument without distorting what
    Cernan means.

    Like many argumentative bad moves, once the structure of the argument is made
    explicit its weaknesses become obvious. Our own inability to imagine
    that something is or is not the case is not in itself a reason to think it is
    or is not the case. Some true things just are unimaginable. And the fact that
    we have strong convictions when confronted by certain experiences does not mean
    that those convictions are reliable bases for true belief.

    Consider just a few examples. I can’t really imagine the evolution of life
    from single cells to human beings. But I should not think my inability to imagine
    this provides some kind of reason for thinking evolution is not how humans came
    to be. Similarly, when I see a magician saw a person in two, I can’t see how
    the trick works. But I would be foolish to think that the person had in fact
    been sawn in two.

    When Cernan looks out into space he can’t imagine there is no supreme being
    and so he can’t believe there is no supreme being. But he is wrong to think
    that his inability to imagine or believe that there is no supreme being is some
    kind of reason to suppose there is one. Others look out into space and do deny
    there is a God, and despite what Cernan says, many are in their right minds.
    Cernan is entitled to say no more than that he personally can't believe there
    is no God, and to accept that this tells us only about his own capacity for
    belief and nothing about the universe or what lies behind it.

    Cernan provides a clue to the need for caution in his own words. He said there
    were moments when he felt he could reach out and touch the face of God. But,
    of course, he could never do any such thing. The fact that he felt he could
    is no reason to suppose he actually could; just as the fact that he felt sure
    God was there is no reason to suppose God was there. We should never think that
    what seems unbelievable to us must therefore be false, or even that what seems
    certain to us must be true. Our own inability or compulsion to believe is no
    grounds for making claims about the truth.

  • Blurring the boundaries

    "Air conditioning made it [global warming] all possible. And now having
    opened the door to southern pols and Dixie climes, it’s also planning to export
    those hot summer winds all over the world by making the hole in the ozone
    layer a reality."
    Michael Moore, Stupid White Men, p130

    Ozone depletion and global warming are not two wholly unrelated issues. Both,
    for instance, can be seen as the product of mankind’s high levels of consumption
    having detrimental effects on the environment. But they are not as closely related
    as they are sometimes assumed to be.

    Ozone depletion is mainly caused by chlorofluorocarbons (CFCs) and its main
    effect is to raise the amount of harmful ultraviolet B (UV-B) radiation reaching
    the earth’s surface. Global warming, on the other hand, is thought to be at
    least in part the result of increased man-made emissions of carbon dioxide (CO2)
    and its main effect is an increase in the Earth’s temperature.

    Despite these clear differences, however, there have been various attempts
    to link the two. Some of these are respectable and follow the general lines
    of arguing that what contributes to one phenomenon can also contribute to the
    other; or that the kind of human activity that causes one also causes the other.
    But, at most, these accounts suggest some kind of specific overlap. They do
    not make one phenomenon the flip side of the other.

    This has not stopped many people, often environmental campaigners, deliberately
    or otherwise, using this small area of overlap to blur the distinction between
    the two. In Michael Moore’s polemic, for example, he just seems to have got
    confused. It would only be possible to "export those hot summer winds all
    over the world by making the hole in the ozone layer a reality" if ozone
    depletion were a major cause of global warming. But no credible scientist seriously
    believes it is.

    What Moore has done, perhaps unwittingly, is to have blurred the boundary which
    keeps the two issues distinct. For anyone committed to clarity, precision and
    intellectual rigour this is bad in itself. To understand any issue clearly one
    needs to understand all the important distinctions that enable one to give a
    precise account of what is at stake. Blurring the boundaries of debates is clearly
    the opposite of this kind of careful analysis.

    So why do it? Sometimes it is just ignorance. We just mistakenly think, because
    we haven’t paid enough attention to the facts, that two similar sounding issues
    must be more or less the same. Human beings are by nature "cognitive misers".
    As a result, we don’t like to think about two things when we can get away with
    thinking about just one. So we have an instinctive impulse to simplify which
    makes us all too ready to lump two similar-ish issues together instead of keeping
    them apart.

    This probably explains why too many people (although fewer by the day) have
    thought of smoking cannabis and injecting heroin as basically the same kind
    of activity. Legal drugs okay, illegal drugs bad is a simple rule to follow
    and it is so much easier than looking at the wide range of different drugs available
    and thinking about the similarities and differences between their effects, addictiveness,
    relation to crime and so on. The real issues about drug use are complicated
    and that provides an incentive for us to fool ourselves into thinking they are
    simpler than they really are.

    But sometimes the reasons for blurring the boundaries are more calculated.
    Blurring issues can be a useful rhetorical or polemical device. In the case
    of environmental campaigners, for instance, if ozone depletion and global warming
    are connected in people’s minds, then they will perceive a single threat that
    is larger than each of the two by themselves. It also means that doubts about
    the science behind environmentalists' claims can be more easily defused. The
    threat is too large, the risks too high for any doubts about the reliability
    of evidence, which is only ever going to be about one or the other, to throw
    the campaign off course.

    This is also what is probably happening with the war on terror and the conflict
    with Iraq. Any links between the two are indirect and probably more to do with
    long-term strategic aims (cutting off support for and dampening down militant
    Islam by installing a Western-friendly, more or less secular regime in the heart
    of the Gulf) than the tactical hunt for Bin Laden and his supporters. But while
    it is true that there are some links between the two campaigns, it would help
    the Americans and British enormously if people were to think of the campaign
    against Iraq as a simple extension of the war on terror. Blurring the boundaries
    is thus intended as a means of increasing public support for an attack on Iraq
    (not that it seems to be having much success).

    Whether intended for polemical purposes or the result of some form of ignorance,
    blurring the boundaries can never be a good move when trying to understand any
    issue properly. The inquiring mind needs to attend to the important distinctions
    in a debate, not fudge them.

  • Tout comprendre c’est tout pardonner

    "We should condemn a little more and understand a little less."
    British Prime Minister John Major, 8 October 1993

    The problem with accepted wisdom is that for every proverb there is an equal
    and opposite proverb. A bird in the hand may be worth two in the bush and you
    shouldn't look a gift horse in the mouth, but since everything comes to him who
    waits, maybe settling for just one feathered beast isn’t such a good idea. You
    can’t teach an old dog new tricks and a leopard can’t change its spots, but
    since it’s never too late to learn and it ain’t over until the fat lady sings,
    why be so defeatist?

    Rather like reading the Bible literally, accepting the wisdom of old proverbs
    at face value can only lead to embracing contradiction. So even the wisest of
    sayings needs to be approached with a degree of caution.

    One of the better proverbs is the French expression "tout comprendre c’est
    tout pardonner": to understand all is to forgive all. There’s more than
    a grain of truth in this. The more we understand why a person has acted the
    way they have done the less likely we are to blame them for their wrongdoings.
    The sexual abuser, for example, appears at first sight to be nothing more than
    an evil monster. But when we realise, as is usually the case, that the abuser
    was himself abused and is the damaged product of a terrible upbringing, we begin
    to have some sympathy for the abuser as well as the abused.

    Unfortunately, this insight has tended to be interpreted as entailing a crude
    link between understanding on the one hand and condemnation or punishment on
    the other. This is partly because of the way the thinking behind the proverb
    has been used by the left and the right. Socialists generally believe that people
    are not intrinsically good or bad and are only corrupted by the injustice of
    society. This means that crime and wrongdoing have to be seen as a product of
    an unjust system. If this is true, then the fundamental fault for the existence
    of crime lies with society, not the individuals who are corrupted by it. So
    to understand the oppressed person’s crime is to forgive them and blame the
    system.

    In reaction, the right has tended to dismiss the connection between a person’s
    background and their propensity to commit crime, arguing that each individual
    is free and must take responsibility for their actions. And so they have resisted
    the idea that a person can be forgiven their wrongdoing if we understand their
    social and family background. Hence John Major’s plea that we should condemn
    a little more and understand a little less.

    The problem with both sides is that the supposed inverse relation between understanding
    and condemnation has not been demonstrated. The proverb may move swiftly from
    understanding all to forgiving all but we should not be so quick. The direct
    link can only be made if we think that we can only condemn if a person’s actions
    are entirely the product of their own free choice. But why should we think this?

    Let us imagine that it is in fact true that all sexual abusers only do what
    they do because they have themselves been damaged through no fault of their
    own. It does not follow from this that we should not condemn what they do or
    punish them by restricting their freedoms. Condemnation may be a necessary way
    of showing and maintaining society’s disapproval; it may also be required to
    bring about reform in the wrongdoer; locking up offenders may be the only way
    to protect society from abusers who wouldn’t be able to stop themselves offending
    again.

    In these and myriad other ways, it may be possible to understand the wrongdoer
    without shrinking from condemnation and punishment. Of course, none of this is
    uncontroversial. But these possibilities need to be explored. It may be true
    that people do wrong because of their upbringing, but the best way to reform
    them and protect others may not be to take them aside for an understanding chat
    but to condemn them clearly and unequivocally.

    So John Major’s dichotomy is a false one. We can understand more without necessarily
    condemning less. And even if in a sense to understand all is to forgive all,
    such forgiveness does not necessarily entail the suspension of criticism and
    even punishment. While it has a less catchy ring, perhaps what we should say
    is that to understand all is simply to be a more sympathetic judge, not to refrain
    from judgement altogether.

  • Absence and evidence

    "It depends on Saddam. If he co-operates with the inspectors in allowing
    them not just access but telling them what material he has and allowing them
    to shut it down and make Iraq safe and free of weapons of mass destruction
    then the issue is over, but he is not doing that at the moment."
    Tony Blair, 26 January 2003 (Source: the Guardian, 27 January 2003)

    The British and American governments have consistently claimed that Iraq has
    weapons of mass destruction. But they have not helped their case by rigging
    the rules by which their claim is tested. Here’s the problem.

    UN weapons inspectors are currently searching Iraq for these weapons. If they
    find them, this is obviously evidence that Iraq has them. But if they don’t
    find them, then that is only taken as evidence that Iraq is not "telling
    them what material he has and allowing them to shut it down". In other
    words, if they find weapons that's proof Iraq has them, and if they don't it's proof
    Iraq is hiding them. That means nothing the UN inspectors can do can be accepted
    as evidence that Iraq does not have weapons of mass destruction, since only
    evidence that it does is counted. Heads I win, tails you lose.

    Critics of America and Britain have rightly pointed out that in order to perform
    a genuine test of a theory you must permit the possibility of evidence that
    would count against it. If you do not, the test cannot be genuine, because a
    test that is run with the presumption that nothing could count as a failure
    of the test is no real test at all.

    The obvious reply to this is that the asymmetry is due to the fact that you
    can’t prove a negative. Whilst finding weapons is clear proof that they are
    there, not finding them is not proof that they are not. Absence of evidence,
    so the saying goes, is not evidence of absence.

    In fact this principle isn't as straightforwardly true as it can appear. Often
    absence of evidence is evidence of absence. What greater evidence can there
    be, for example, of an absence of pizzas in my freezer than the failure to find
    evidence of their presence if you have a good look? In law, there is also a
    presumption of innocence in the absence of evidence for guilt. It may not be
    possible to prove I didn’t kill Colonel Mustard, but absence of evidence that
    I did is considered good enough evidence that I didn’t.

    The reason for accepting absence of evidence as positive evidence is precisely
    because proving a negative is often impossible. So we have to be satisfied with
    something less than absolute proof. This something less is absence of evidence
    when evidence has been sought where it should be found, if it exists at all.
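
    The point can be put more formally. (What follows is my own illustration, a
    standard application of Bayes' theorem; it is not part of the original argument.)
    Write H for the hypothesis that the weapons exist and E for the evidence the
    inspectors are seeking; then

        P(H \mid \neg E) = \frac{P(\neg E \mid H)\, P(H)}{P(\neg E)}

    If the search is thorough, so that the evidence would very probably have been
    found were the weapons really there, the probability of finding nothing even
    though the weapons exist is small, and a fruitless search drives the probability
    of H down: absence of evidence is then genuine evidence of absence. If the
    weapons could easily be concealed, that probability stays close to one and the
    search tells us little either way. That is exactly the gap between the freezer
    case and the Iraq case.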

    These difficulties with absence and evidence have led to Britain and the US
    getting into a tight spot. The premise of weapons inspections is that absence
    of evidence is evidence of absence, just as in a criminal inquiry, absence of
    evidence for guilt is taken as evidence for innocence. But Britain and America
    don’t actually believe that in this case the absence of evidence is evidence
    of absence. And that is because there are just too many ways in which the weapons
    being looked for could be concealed.

    Ironically, the result is that both sides are claiming the inspections are
    a charade. Britain and the US argue that the inspections are a charade because
    Iraq is not co-operating and critics of Britain and America say it is a charade
    because nothing is allowed to count as evidence of absence.

    Both are right. The bad move here is the oscillation between the legal assumption
    of innocence that says absence of evidence is evidence of innocence and the
    actual facts in this case that suggest it is naïve to suppose absence of
    evidence is evidence of innocence. The shifting and confusion between the two
    has created a situation in which everyone accuses everyone else of bending the
    rules to suit themselves.

    That’s not a healthy situation to be in when war is a possibility and lives
    are at stake. Extraordinary though it may seem, intellectual difficulties concerning
    the relationship of absence and evidence do seem to be contributing negatively
    to the playing out of the current crisis.

  • Guilt by association

    Clint Eastwood filed a $10 million libel suit against St. Martin’s Press
    and author Patrick McGilligan for an unauthorized bio portraying the actor
    as a wife-beater and atheist. (USA Today, 26/12/02)

    Sometimes sneaky rhetorical moves can be so subtle that they fail even to register
    in the consciousness of the people using them. What would the journalist who
    wrote this say about the odd juxtaposition of "wife-beater" and "atheist"?
    It looks to me like a classic example of guilt by association: putting two things
    that have no necessary connection together in the hope that the bad name of
    one will taint the other. The almost subliminal suggestion of this short news
    item is that being an atheist is a horror comparable to being a wife-beater.
    You’d better not call someone either if you want to stay out of the libel courts.

    This move is particularly insidious because it is perfectly possible to deny
    it is being used and to accuse the person who claims it is of being paranoid.
    Does this article actually say that atheism is comparable to wife-beating? No.
    Isn’t it just true that this is why Mr Eastwood is suing the author and publisher?
    Yes. Don’t people sue over falsehoods said about them even when there is nothing
    necessarily wrong with what is attributed to them? Of course – in 1992, pop
    star Jason Donovan successfully sued the magazine The Face for saying
    he was gay, yet he has been vocally supportive of homosexuals. So what’s the
    problem? Aren’t atheists who feel the article places them in an axis of evil
    with wife-beaters just too sensitive?

    Well, maybe they are. But at the very least there is something odd in the casual
    conjunction of atheist and wife-beater made in the article. Would George Bush
    consider himself slighted if the accusation were of being a wife-beater and Republican,
    for example? Would anyone be offended by the conjunction of wife-beater and
    Jew? Would we really accept that in such a coupling no one was suggesting being
    a Jew was bad?

    On other occasions, there is no question about the intent of implying guilt
    by association. The most popular way to do this is to invoke the Nazis. If you’re
    anti-euthanasia, just do what many "pro-lifers" do and mention the
    fact that the Nazis practised euthanasia. Never mind that the Nazi euthanasia
    programme had nothing to do with people exercising their free choice to end their
    own lives and everything to do with cold-blooded murder. Just suggesting a Nazi
    link is enough to cast proponents of euthanasia in a negative light.

    The same trick can be applied to an astonishing array of beliefs and practices.
    The Nazis were very keen on ecology, compulsory gym classes and keep fit, forests,
    eugenics and public rallies. If you yourself object to any of these, then slip
    in a mention of Nazi policy next time you want your criticisms to pack an added
    rhetorical punch. And if you’re being bothered by a vegetarian while you’re
    trying to enjoy your T-bone steak, just remind your critic that Hitler too eschewed
    meat.

    The problem with guilt by association is that it fails to do what any genuine
    criticism must do: show what is wrong with the thing being criticised. The fact
    that some bad people like or support it, or that it can be mentioned in the
    same breath as something bad, does not add up to a criticism. Would love be
    bad if the devil had loved? Should books be banned because Mein Kampf
    too was a book? Should we banish sauerkraut from our tables because Nazis ate
    it? Of course not. Nothing is bad or wrong simply because the hand of evil has
    touched it. If it is wrong, show why it is wrong and don’t resort to innuendo
    to make it appear wrong by association.

  • From ridicule to the ridiculous

    "The war on drugs was weird enough because it was a war on plants, which I found
    quite odd. But the whole concept of a war on terror is absurd. How can you declare
    war on an abstract, on a notion?"
    Mark Thomas (Source: Big Issue, December 2-8 2002)

    Comedy can be an incisive instrument to reveal the truth. Good satire can be
    the most telling and effective form of critique, while in recent years The Simpsons
    has probably been the most consistently insightful work of social observation
    in any medium.

    It is not surprising then that comedy has become one of the most potent modes
    of political commentary and polemic. In Britain, for example, comedians such
    as Ben Elton in the eighties and Rory Bremner and Mark Steel in the present
    day have been praised as the most effective critics of the politicians of their
    time. Michael Moore in the US and Mark Thomas in the UK have also pioneered
    a new hybrid of investigative journalism and comedy. The blend has proved to
    be extremely popular. Most notably, Moore’s book Stupid White Men was
    a huge bestseller in 2002.

    The comedian’s skills, however, do not always lend themselves to the kind of
    subtle analysis good political commentary often requires. The problem is that
    the political comedian is the master of ridicule. But it is easy to forget that
    what has been made to look ridiculous may not in fact be ridiculous.

    Take Mark Thomas’s swipes at the war on drugs and the war on terror. Not bad
    gags, I would say. But do they add up to serious criticisms of either? Hardly.
    Consider the war on drugs first. The joke works because it makes us imagine
    an armed struggle in which the enemy is a plant, and this absurdity makes us
    laugh. But, of course, the joke is asking us to imagine what is clearly not
    the case. No one involved in the war on drugs thinks they are going into battle
    with plants. In fact, they probably don’t even think that they’re engaged in
    a war in the conventional sense at all. The term itself is merely a kind of
    metaphor or shorthand for a policy of working to eliminate the supply and use
    of drugs.

    What the joke does is take the phrase 'war on drugs', interpret it in a deliberately
    over-literal way, and then show how, on that interpretation, such a war is
    ridiculous. But this is a far cry from having shown that the war on drugs itself
    is ridiculous. In fact, the joke cannot do this because it doesn’t even begin
    to address what the war on drugs is really about. However, comedy is in the
    timing, and gags, unlike careful arguments, follow each other quickly. It is
    easy in our laughter to think that it is the war on drugs which we are finding
    ridiculous, not a clever joke we are finding funny.

    The line about the war on terror follows the same logic. Of course the war
    on terror is not a war waged by the armies of America against hordes of abstract
    concepts. It is a war against terrorists and supporters of terrorism.

    At best, the gags may show there is something unfortunate about talking about
    wars on drugs and terror, since in neither case is a conventional war being
    waged. But even this is a criticism of descriptions and not the projects they
    describe. And by missing the point they actually take us away from considering
    the genuinely problematic aspects of both these so-called wars.

    If all this sounds like taking comedy too seriously then it should be remembered
    that the comedians themselves take their political objectives very seriously.
    And they are very articulate and persuasive spokespeople for their causes. Often
    their work is excellent, and the jokes hit upon uncomfortable truths. Both Michael
    Moore and Mark Thomas in particular have produced some searching and disturbing
    work. But both ridicule more than is actually ridiculous and in the process
    they can divert our attention from the real issues and make us laugh at no more
    than straw men. Comedy can reveal the truth, but it can also make us miss it.

  • Spurious science

    "I decided to explore randomness and some of the principles of quantum mechanics,
    through poetry, using the medium of sheep."
    Valerie Laws, writer (Source: BBC News, 4 December 2002)

    It’s all too easy to mock contemporary art, especially when ruminating mammals
    are involved. But it is not for me to comment on the artistic merits of Valerie
    Laws’s extremely original project. Laws sprayed one word on the back of each
    member of a flock of sheep, using a total of seventeen syllables, the same number
    as in a traditional Japanese haiku. The idea is that the sheep would constantly
    rearrange themselves, each time creating a new poem, which would exist for just
    as long as the sheep remained still.

    Laws said, "I like the idea of using living sheep to create a living poem,
    and creating new work as they move around," and I am sure there are many
    who share her delight in lamb-ic pentameter.

    But what has this got to do with quantum mechanics? The BBC in its report seemed
    to think it had a great deal to do with it, saying her poems "utilise the
    deepest workings of the universe." They are missing the obvious point that
    quantum theory only explains the workings of the very smallest parts of the
    universe, at the sub-atomic level. The idea that a sheep can "utilise"
    quantum principles while meandering around a field is about as muddle-headed
    as you can get.

    But let’s not shoot the messenger. The BBC was merely reporting what the poet
    believed. "Quantum mechanics is a branch of physics which a lot of people
    find hard to understand, as it seems to go against common sense," she said.
    "Randomness and uncertainty is at the centre of how the universe is put
    together, and is quite difficult for us as humans who rely on order. So I decided
    to explore randomness and some of the principles of quantum mechanics, through
    poetry, using the medium of sheep."

    It is indeed true that people find quantum mechanics very hard to understand
    and Laws is one of them. The main problem here seems to be that Laws has latched
    onto a few buzz-words associated with quantum theory – randomness and uncertainty
    – and thinks they capture what is particular about it. In fact, randomness is
    not a distinctive feature of quantum mechanics at all. Randomness, at least
    at some level of description, is a phenomenon that appears in other areas of
    the physical sciences. Chaos theory, for example, deals with unpredictable
    behaviour yet is not a part of quantum mechanics at all.

    Indeed, it is hard to see any relevance of quantum mechanics to the sheep.
    The uncertainty of quantum mechanics concerns the position and momentum of
    particles such as electrons and the impossibility of measuring both precisely
    at the same time. There is, however, no problem in ascertaining the position
    and speed of the sheep. The poems they
    form may be random, but this has no particular connection with the principles
    of quantum mechanics. It can be explained wholly within the terms of classical
    physics.
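
    A rough calculation shows just how irrelevant quantum uncertainty is at this
    scale. (This is my own illustration, assuming a 60 kg sheep whose position is
    known to within a millimetre; the figures are not from the original.) Heisenberg's
    uncertainty principle states

        \Delta x \, \Delta p \ge \frac{\hbar}{2}, \qquad \hbar \approx 1.05 \times 10^{-34}\ \mathrm{J\,s}

    so the minimum uncertainty in the sheep's velocity is
    \Delta v \ge \hbar / (2 m \Delta x) \approx 10^{-33} metres per second – unimaginably
    far below anything that could affect where the animal wanders. Whatever randomness
    the poems display is of the ordinary, classical sort.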

    Unfortunately, this kind of spurious adoption of quantum theory to make something
    sound more impressive is reaching epidemic proportions. There is, for instance,
    a lot of talk about "quantum consciousness", explaining consciousness
    by the use of quantum theory. There is some serious research here and Roger
    Penrose, for example, has argued that he believes the solution to the problem
    of consciousness will come from quantum theory. But the vast majority of the
    "literature" on this is just a combination of speculation and dubious
    analogy. So, for example, Danah Zohar, in The Quantum Self, speculates
    that the quantum wave/particle duality corresponds to the duality between the
    physical and the mental. The reasoning seems to be that particles are a bit
    concrete and so like the physical, and waves are more fluffy and thus more like
    the mental. This analogy added to a liberal dose of speculation leads to her
    explaining consciousness as the fusing of the two in quantum states of the brain,
    even though almost all physicists think that the kind of quantum state Zohar
    thinks explains consciousness – the Bose-Einstein condensate – could not exist
    in something as warm and wet as the brain.

    The problem is that quantum mechanics is difficult and hard to understand,
    so people seem to think that anything else difficult and hard to understand
    should somehow be seen as a quantum phenomenon. But this adds up to no explanation
    at all. As Susan Blackmore, the noted psychologist, said in a report on a conference
    at which these theories were offered as explanations for consciousness, "…they
    didn’t explain it. They quantummed it."

    So, please, artists and writers, feel free to explore randomness and uncertainty,
    but do not pretend you are invoking quantum theory unless you are really sure
    that you are. As for those who would seek to explain the mysteries of the universe
    using quantum theory, remember that substituting one mystery for another is
    not an explanation at all. And remember that if even leading physicists find
    quantum mechanics hard to get to grips with, what chance do you stand? And one
    final plea to everyone: do not think that rooting your work in quantum theory
    makes it better if, in the next breath, you condemn society for being too in
    thrall to science and scientists. You can't have it both ways – and please don't
    appeal to the paradoxes of quantum theory to say you can.

  • Context? What context?

    "Why, Sir, you find no man at all intellectual who is willing to leave London:
    No, Sir, when a man is tired of London, he is tired of life; for there is
    in London all that life can afford."
    Dr Johnson, in James Boswell’s Life of Samuel Johnson (1791)

    Dr Johnson’s paean to London is oft-repeated as if it were an established truth.
    To admit being fed up with Britain’s capital is to admit to being worn out with
    life itself. Or at least that’s what the shrinking number of people who still
    think the city is worth living in would have you believe.

    Let us assume for a moment that what Johnson claimed was true. That still leaves
    several problems for those who would appeal to its truth to support their love
    of modern day London. The most obvious of these is that the observation is two
    hundred years out of date. In 1801, ten years after the publication of Boswell’s
    Life of Samuel Johnson, London’s population was 900,000. In 2002 it was
    7.4 million. But for the existence of many historic buildings, Johnson wouldn’t
    even recognise the London of today, let alone be in any position to judge whether
    it was the best place in the world to live.

    Furthermore, a cursory examination of the context of Johnson’s quote shows
    that it doesn’t even express a general approval of London life at all. Johnson
    is talking only of the lives of "intellectual" men. Of course, an
    intellectual in that time would mean a member of the comfortable middle classes,
    and most definitely a man rather than a woman. People outside this exclusive
    circle, intelligent or otherwise, could understandably be tired of London not
    out of tiredness with life, but out of a hunger to live a better one.

    Consider this description by Richard Schwartz in his Daily Life in Johnson's
    London: "Hovels and shacks were commonplace. Many of the poor crowded
    into deserted houses. A sizeable number of the city's inhabitants both lived
    and worked below ground level." Even in Johnson's time, there were plenty
    of good reasons to tire of London.

    In cases like this, insensitivity to context is usually unobjectionable. Quotations
    do take on a life of their own and can be used simply to express a sentiment
    in a particularly pithy way. I myself have used Yeats’ lines "The best
    lack all conviction, while the worst are full of passionate intensity"
    out of context to celebrate the lack of conviction Yeats is actually lamenting
    in the original poem. (And I’ve also misattributed it.) In a similar way, when
    people trot out Johnson’s line they are usually doing no more than borrowing
    some words to express how they feel better than they could using their own words.

    However, if we are taking something as authoritative, to justify as well as
    to express what we think, then ignoring context is inexcusable. If we think
    the fact that Johnson said this about London says something about its truth,
    then we are guilty of riding roughshod over the all-important context in which
    it was first uttered.

    Another striking example of this kind of contextual insensitivity is Marx and
    Engels's claim, made in the Communist Manifesto in 1848, that "The
    proletarians have nothing to lose but their chains". Even the proudest
    of unreconstructed Marxists would have to admit, given the huge differences between
    the conditions of the working classes today and one hundred and fifty years
    ago, that if this assertion is still true it needs to be shown to be still true.
    One cannot pretend that a claim made at a particular historical time and place
    becomes timelessly true simply by virtue of its being repeated often enough over the years.