Author: Julian Baggini

  • Aggregation aggravation

    A 70-year study of 500 juvenile offenders born in the Twenties — the longest-running crime study in the world — has found that those who married were far more likely to go straight later in life than those who remained single.
    Melanie Phillips, The Daily Mail, 16 January 2004

    Melanie Phillips is one of the best-known socially conservative commentators in Britain. Like many of her ilk, she sees the evidence that “marriage works” as incontrovertible. But although there is no reason to doubt the accuracy of the statistics she cites, her case is flawed because it fails to take into account the effects of what statisticians call disaggregation: the breaking down of statistics into their component parts.

    Statistics do indeed regularly indicate correlations between marriage and various goods, both social and personal. Similarly, divorce and single parenthood are correlated with various ills. Phillips is quick to highlight these. “Married men were more likely to have a full-time job,” she writes about another survey, and “They were less likely to use drugs or abuse alcohol, and less likely either to commit crime or become its victims.”

    Of course, one point that should always be made about such surveys is that a correlation is not the same thing as a cause (see the Bad Move Correlation/cause confusion). But in this case there is another major problem with interpreting the data. The trouble is that such surveys deal only with very broad categories: the married, cohabiters, divorcees, single parents. But what would happen if we broke down these categories into smaller component parts? What would we see then?

    Most obviously, we would notice that cohabitation covers a wide spectrum of arrangements, from people living together with no commitment to the future, right through to those who have spent a lifetime together without exchanging wedding vows. If we looked at the married, we would also find – as well as the strong, stable relationships Phillips values – domestic disasters which are bad for both spouses.

    All of this would be consistent with a hypothesis contrary to that espoused by Phillips: that it is not marriage per se which is good for people, but long-term committed relationships. We would expect to find more of these kinds of relationships in the married group than the cohabiting group, simply because all marriages are at least in theory long-term commitments, whereas cohabitations need not be. It is therefore more than possible that marriage is not a prerequisite for the social benefits Phillips values after all, but is merely the most common social sign that the commitment required to achieve these values is present in a relationship.
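    To see how disaggregation can overturn an aggregate comparison, here is a toy sketch in Python. All the numbers are invented purely for illustration: in the aggregate, marriage looks protective, yet once each group is broken down by level of commitment, the married and the cohabiting fare identically.

```python
# Invented figures: (status, committed?) -> (people, number "doing well").
# Married people are more often in committed relationships, which is the
# only thing that matters in this toy model.
groups = {
    ("married", True):     (80, 64),   # 80% doing well
    ("married", False):    (20, 8),    # 40% doing well
    ("cohabiting", True):  (30, 24),   # 80% doing well
    ("cohabiting", False): (70, 28),   # 40% doing well
}

def rate(status, committed=None):
    """Proportion doing well, optionally disaggregated by commitment."""
    people = sum(n for (s, c), (n, w) in groups.items()
                 if s == status and (committed is None or c == committed))
    well = sum(w for (s, c), (n, w) in groups.items()
               if s == status and (committed is None or c == committed))
    return well / people

# Aggregated, marriage appears to confer a benefit...
print(round(rate("married"), 2))      # 0.72
print(round(rate("cohabiting"), 2))   # 0.52

# ...but within each commitment level the rates are identical.
print(rate("married", True) == rate("cohabiting", True))    # True
print(rate("married", False) == rate("cohabiting", False))  # True
```

    Nothing here shows that the real data behave this way; the point is only that aggregate figures are consistent with commitment, not marriage, doing the work.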

    That is just one possible finding that breaking down the statistics more carefully might suggest. But there could be others. For instance, take the study on juvenile offenders Phillips cites. Again, we might find that those who stayed straight were those who found long-term relationships, whether or not they got married. Or we might find that the tendency to get married is a result of some other factor which was more important in preventing re-offending.

    Another problem with the statistics is that the divorced were, of course, once married. Those who fall into the married category are by definition those whose relationships are, for the time being at least, holding up. So in determining whether marriage is a good thing or not, you really need to look at how those who are still married and those who have divorced fare as a whole, compared to the never married. And if you want to argue that the divorced would be better off if they had stayed together, the statistics cannot help you make your case, because they cannot tell you what counterfactually would have happened if the separated had remained together.

    If you are serious about using statistics as part of your case for the benefits or otherwise of marriage, it is no good simply pointing to correlations between marriage and social or personal benefits. You need to dig deeper and see whether what is truly correlated with happy adults and children isn’t something more fundamental than whether or not people choose to march down the aisle. In and of themselves, the bold statistics showing the married and their children to be better off do not show that marriage works.

  • Begging the question

    [Dudley Poplak] gave Charles a copy of the book A Time to Heal: My Triumph over Cancer – Beata Bishop’s story of how she beat malignant melanoma 23 years ago by following the strict dietary regime.
    Jo Revill (Health Editor), the Observer, 27 June 2004

    The strict dietary regime in question is the Gerson Therapy, which eschews drugs in favour of coffee enemas and fruit juices. It has the support of well-known medical experts such as Prince Charles, interior designer Dudley Poplak and Lord Baldwin of Bewdley. Their opinions, of course, carry more weight than those of the American Cancer Society, which warns that the treatment could be dangerous.

    To say that Gerson is controversial is therefore something of an understatement. Jo Revill’s piece in the Observer on Prince Charles’s advocacy of the therapy was supposed to deal with this debate. The overall point of the article is that Charles has “infuriated the medical establishment” by backing a treatment which medical experts believe there are no grounds for thinking works. We then hear opponents and proponents of the therapy offer their views. Revill’s role is not to pass judgement but to survey the opinions fairly.

    Overall, Revill remains impartial. But when she says that Bishop “beat” cancer “by following the strict dietary regime” she begs the question: she assumes precisely what is being contested. Bishop could only have beaten her cancer by following the diet if following the diet actually caused her to get better. But this is exactly what is in question. What Revill should have said is that Bishop got better while following the diet. The facts indicate a temporal coincidence, not a causal link.

    Although the slip may seem a small one, it is crucial. For what this form of words implies is that on at least one occasion the therapy did work. That means the controversy is transformed into a debate about how often it works and how reliable it is, when it is really concerned with whether it works at all. To say someone beat cancer by following the diet therefore crucially grants to the advocates of the therapy that which they have not demonstrated to be true.

    Begging the question – assuming what needs to be argued for – is often a result of a careless use of language. More specifically, we often use “success” words where more neutral vocabulary is needed. For example, we say we learned French when really we only studied it and never developed any real competence. Republicans say that Ronald Reagan won the cold war, when perhaps the cold war simply ended while he was president. A military retaliation may achieve little, but it is still said to have avenged an attack. When we learn, win or avenge, we achieve something by our actions. No such success is implied by the fact that we study, retaliate or simply have power.

    The unjustified use of success words is not the same mistake as begging the question, but it is often the means by which question begging occurs. In the Gerson example, the success word “beat” certainly contributes to the question begging. To say she beat cancer is to say her actions caused her to get better. And since the only apparently relevant action she took was the Gerson Therapy, to say she beat her illness is to assume the therapy can work when this is precisely what the article is putting into question. But if the recovery was not the consequence of anything Bishop did, then it is not accurate to say she beat her cancer. Rather, she simply got better.

    It is interesting to note that the question begging in this article all worked to the favour of alternative medicine, which the media always seems to give the benefit of the doubt. Such bias goes largely unnoticed, whereas to beg the question the other way would be seen as narrow-minded and prejudiced. Imagine that Revill had described the treatment as ineffective. That would be seen as unfair, yet she is permitted to say that someone got better by taking the treatment, and that the treatment is therefore at least sometimes effective. Some beliefs, it seems, are treated with more respect than others.

  • Confirmation bias

    [Jonathan Cainer] met a psychic poet called Charles John Quatro, who told him he would some day write an astrology column read by millions.
    David Smith, the Observer, 20 June 2004

    And would you believe it, many years later, Jonathan Cainer does write an astrology column read by millions! Incidentally, Cainer’s predictions grace the pages of “a newspaper dedicated to the subtle propagation of bigotry.” That description of the Daily Mail is by, ahem, Jonathan Cainer.

    Are you impressed by the uncanny accuracy of Quatro’s prediction? Let me make my own predictions: if you already believe in astrology, your answer will be yes. If you don’t, your answer will be no. If you’re agnostic, you will probably find it somewhat impressive.

    Was I right? Probably, though not because I possess any psychic powers myself. Rather, I am simply aware of an effect psychologists call confirmation bias. This concerns how we filter the mass of evidence for or against various theories and hypotheses. In short, what it means is that we consider evidence that supports what we already believe to be stronger or more significant than that which undermines it. Indeed, we may go so far as to pay little or no attention at all to contrary evidence and focus our attention almost exclusively on that which bolsters our prior convictions.

    This explains why so many people are impressed by the claims of psychics and astrologers. If we are inclined to believe in the supernatural, then it is easy to focus on those examples where prediction comes true, or where psychics make accurate statements about the past or present. These “confirm” our beliefs that they really do have access to a source of knowledge beyond the physical world, or at least the world as science standardly understands it.

    If we do not believe in the supernatural, however, we will focus on the countless times when predictions are wrong or when psychics make mistakes. Reading the article about Jonathan Cainer, for example – setting aside doubts about the truth of the story – we will think that this one accurate prediction doesn’t count for much, for the psychic probably also said many other things that were not true.

    It should be clear, therefore, that people on both sides of the debate can fall victim to confirmation bias. However, it should also be clear that, in this case, confirmation bias works more to the benefit of believers than sceptics. This is because if we try to take a genuinely balanced look at the evidence, we will find that for every apparent instance of a true prediction by an astrologer there are many other false ones. What is more, many predictions are so vague that it is always possible to say that they came true in some sense. Confirmation bias is therefore more likely to lead the believer into error because the balance of evidence just does stack up against the truth of astrology. It is only by selecting the evidence to fit their beliefs that they could possibly come to the conclusion that astrology works. That is, unless I have failed to overcome my own confirmation bias against the paranormal.

    Confirmation bias infects political discourse too. It is almost certainly the case that, once they were persuaded that Iraq had WMD, Blair and Bush placed more weight on evidence that supported their position than that which challenged it. They may have tried to keep open minds, but once you have committed yourself to what you see as the truth, it becomes very hard to assess all the evidence impartially.

    On the other side, those who are persuaded that Bush and Blair are driven by purely selfish motives are much more impressed by evidence that supports this view than that which suggests that they might be sincere, even if mistaken. The genuinely open question of whether they lied or were mistaken about WMD becomes an open and shut case in the face of “clear” evidence that they lied, while any counter-evidence is dismissed.

    Confirmation bias is a real impediment to good thinking, but unlike some errors in reasoning, it is very hard to root out. No one can expect to become totally immune to it: it requires constant fighting.

  • The straw man fallacy

    Free-market capitalism is founded on one value: the maximization of profit. Other values, like human dignity and solidarity, or environmental sustainability, are disregarded as soon as they limit potential profit.
    Naomi Klein, nologo.org FAQ

    Nasty, greedy folk, these free market capitalists. If, as I suspect, you hold values other than the maximization of profit, you can’t possibly be on their side. Better to join the anti-capitalists, for whom human dignity, solidarity and environmental sustainability count for something.

    If Klein’s moral victory over capitalism seems too easy, that’s because it is. The problem becomes evident when you ask yourself what this demonic free-market capitalism actually is.

    It certainly isn’t capitalism as instantiated in European liberal democracies. There, all sorts of mechanisms exist to limit potential profits in the name of other values, most obviously: competition laws, minimum wages, health and safety regulations, taxes and environmental legislation. If actually existing capitalism is the target, Klein has missed by a mile.

    Even if the attack is on the ideal of free-market capitalism, it is far from clear Klein has landed a hit. The point is that a free market is in itself value-neutral. Everything depends on how the economic actors behave within that market. People could base their purchasing decisions entirely on price. But equally, they could base them on environmental or human impact. Nothing about free-market capitalism would compel people to cast off all values other than profit (even if, as a matter of fact, that’s just what people would do). The growth in sales of fairtrade coffee, for example, is driven by demand in the market from values-conscious consumers.

    This second response to Klein is, however, by the by, for in the context of her remarks it is evident that her target is actually existing capitalism. Her comments are a response to the ‘frequently asked question’ of how consumers can make ethical purchasing decisions. It is thus clearly about the world as it is, not how it might be. And as we have seen, the idea that profit trumps all else in this world doesn’t stand up to the slightest scrutiny.

    Klein’s argument is an example of the straw man fallacy. Although her target is the actual, essentially capitalist, economic system of western liberal democracies, she has not in fact confronted its reality. Instead, she has set up as a target a caricature of the free-market capitalism we have and attacked that instead. But her subsequent easy victory over it is seen as a victory over the real McCoy.

    Put in general terms, the fallacy is of dealing with a weaker or distorted version of an argument or position as though it was in fact the full and accurate one. The position itself is then taken to be shown to be flawed even though it has not actually been subject to proper critique at all.

    Although the misrepresentations characteristic of straw men can be willful, often they simply reflect how little effort people make to understand their opponents’ points of view. We like the world to be clear cut and simple, made up only of black and white. If we attribute hopelessly inadequate or repugnant views to others, the virtues of our own commitments seem obvious. But if we grant that our enemies have an arguable case, then our own views suddenly do not seem so unassailable, and our opponents not so clearly on the side of the devil.

    Another explanation for the popularity of straw men is that if we win an argument, we feel that our opinions have been vindicated, even if our victory was won over an emaciated opponent. We forget that the aim of rational debate is not for us to win, but for the truth to win. That is rarely what happens when the fight is with a straw man.

  • No hypotheticals

    Sir Victor [Blank] and the Trinity Mirror chief executive, Sly Bailey, both refused to answer "hypothetical questions" about Daily Mirror editor Piers Morgan’s future if the images proved to be fake.
    Chris Tryhorn, the Guardian, 6 May 2004

    Unfortunately, there is no doubt that the vast majority of the images of coalition troops subjecting Iraqi prisoners to degrading treatment revealed recently are all too genuine. But at the time of writing, the authenticity of one of the first batch to appear remains in doubt. Experts have provided many reasons for thinking that the images of British troops mistreating prisoners published in the Mirror may be fakes.

    If they are not genuine, this would be no small blunder. The perception that maltreatment is going on fuels Iraqi resentment of the occupying forces and helps the cause of the “resistance”. Although subsequent revelations have confirmed that mistreatment is occurring anyway, the veracity of the Mirror images would significantly alter the evidence of the extent to which British troops are contributing to it.

    So it would seem a fair question to ask: if the images are fakes, will the Mirror’s editor have to resign? Blank and Bailey, however, both refused to answer it, on the grounds that it was a purely hypothetical question.

    Nigel Warburton, in his Thinking From A to Z, dubbed this the “politician’s answer”, with good reason. It has become a favoured tool of evasion for politicians the world over, who often bat away queries saying, “that’s a hypothetical question”.

    However, look for a justification for the assumption that only questions about what is actual need to be answered and you’ll search in vain. Indeed, it is ironic that politicians are so keen to avoid hypothetical questions when their entire campaigns are run on the basis of hypotheticals: if you elect me, I’ll do this. If they truly believed that they shouldn’t answer hypothetical questions, then they should refrain from saying anything about what they would do if they gained power.

    There are good reasons why it is sometimes unwise to answer hypothetical questions. One is that it is often not worth worrying about all the possible things that might happen. You have to weigh up the probabilities and the seriousness of the consequences to decide whether in any given case “We’ll cross that bridge when we come to it” betrays a shocking lack of forethought (the plan for what to do in Iraq once Saddam had been toppled is a clear contender for this category) or a prudent conservation of intellectual energy (for instance, how to save tax in the 14 million to one chance that you win the lottery).

    Another is that circumstances change, and it can be unwise to commit yourself to a future course of action when unforeseen events may change the calculations about what the best course of action is. For example, before the Iraq war, Tony Blair frequently dodged the question of what he would do if there was no “second” UN resolution backing military action against Iraq. In this case, the existence or not of a UN resolution was just one of a number of factors which would contribute to his eventual decision. With events on the ground changing every day, unless you thought military action without UN backing was unjustifiable in all circumstances (in which case you would have to be opposed to the NATO action in former Yugoslavia and British intervention in Sierra Leone), Blair could not predict the eventual weighting the resolution, or lack of it, should have in his deliberations.

    But it is important to note that the problem here is not that the question about UN backing was hypothetical. It is rather that there were so many other variables that there was no single hypothetical scenario in which UN backing was not forthcoming for Blair to comment on. Rather, there were any number of different scenarios consistent with that outcome, not all of which could even be foreseen, and all of which would have to be judged on their merits. Only if one already thought that the requirement for a UN resolution was absolute, which would make all the different scenarios identical in the one regard that mattered, could the hypothetical question be answered.

    In Morgan’s case, however, this justification for evasion does not seem to apply. It is true that there are many scenarios consistent with the pictures being fakes. But in all cases, publishing fakes would be a terrible error and one which, arguably, the editor of the paper involved should take responsibility for. That means the hypothetical question is both clear and could be answered. The fact that the Mirror’s chiefs chose not to cannot be justified by the hypothetical nature of the question, because the mere fact that a question is hypothetical is never a reason why it can’t or shouldn’t be answered.

  • Lies, damned lies and statistics

    Most people would regard publishing as lacking cultural diversity – and they would be right, according to a survey into ethnicity in the industry. […] Only 13% of respondents to the survey belonged to Asian, black, Chinese or other minority ethnic groups…
    Guardian Review, 13 March 2004

    Accurate statistics are just facts, and as such, they don’t lie. Nevertheless, the bad reputation they have as being the source of the darkest deceptions is not entirely unfounded. For the very indisputability of a statistic can transfix us, leaving us blind to the unproven fact it is supposed to demonstrate.

    This little statistic about cultural diversity in publishing is a wonderful example of how a statistic by itself rarely tells you anything. The writer does obey the first rule of good statistical reporting, by making clear that this is all “according to a survey”. We need to be reminded that statistics are only as good as the data used to generate them, and so just because one survey said this many people think or do that, it ain’t necessarily so.

    But having made that caveat plain, the rest of the report is a disaster. It takes the statistic to support the view that publishing “lacks” cultural diversity. The clear implication is that the industry is not as diverse as it should be because “only” 13% of respondents belong to minority ethnic groups. But what percentage would be high enough to indicate sufficient cultural diversity? We are given no benchmark or comparative figure to contextualise the statistic and enable us to give it a fair interpretation.

    The most obviously relevant statistic would be the actual proportion of the population as a whole which belongs to a minority ethnic group. If the proportion of people working in publishing is less than this, then we would have grounds for saying that it is not culturally diverse enough. In fact, according to the most authoritative source, the last UK census in 2001, ethnic minorities make up 7.1% of the population. In other words, publishing contains, proportionately, almost twice as many people from ethnic minorities as the population as a whole. This leads us to exactly the opposite conclusion to the one implied by the report: publishing is more culturally diverse than we would expect it to be.
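    The comparison the report failed to make takes two lines of arithmetic. The figures below are the ones cited in the text (13% of survey respondents, 7.1% from the 2001 census):

```python
# Representation ratio: the sample share relative to the population benchmark.
survey_minority_share = 0.13   # survey respondents from minority ethnic groups
population_share = 0.071       # UK 2001 census figure cited above

ratio = survey_minority_share / population_share
print(round(ratio, 2))  # 1.83 – nearly twice the population share
```

    A ratio above 1 means the group is over-represented relative to the population; a meaningful "lacks diversity" claim would need a ratio well below 1.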

    This is typical of what makes so much reporting of statistics misleading: they are presented as though they speak for themselves when comparators and interpretations are indispensable. Consider the number of people in the UK who leave school with no qualifications. The opposition Conservative party points out on its website that 30,000 children leave school without a GCSE (the standard qualification taken at age 16). We are presumably supposed to think this is scandalously high. However, the Department for Education and Skills can also report that “the number of children who leave school without qualifications has decreased for the seventh year running. Now nearly 95% leave school with a qualification.”

    But even that is not game, set and match to the government. For it is possible that the number of children leaving school without a qualification has decreased because the number of school leavers as a whole has decreased, for demographic reasons. If that is the case, the 95% figure may not have changed, or may even have come down. Furthermore, even if there has been an improvement, how are we to judge whether it has been a good enough one?
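    To make the demographic point concrete, here is a hedged bit of arithmetic with entirely invented figures: the raw count of unqualified leavers falls, yet the rate actually worsens, because the cohort shrank faster than the count did.

```python
# Invented figures: the count of leavers with no qualification falls,
# but so does the total number of school leavers.
leavers_then, no_qual_then = 700_000, 35_000
leavers_now,  no_qual_now  = 560_000, 30_000   # count fell by 5,000...

print(no_qual_then / leavers_then)   # 0.05   (5.0% unqualified then)
print(no_qual_now / leavers_now)     # ~0.054 (5.4% unqualified now)
```

    A falling count is thus compatible with a rising rate; without the denominator, the headline number settles nothing.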

    It is perhaps surprising that so many statistics sound good or bad to us, even though we have no knowledge at all of the context and comparisons that would enable us to say if they are truly good or bad. Whatever the explanation, innumeracy itself is not entirely to blame. You might be perfectly able to do the maths and still misinterpret what the statistics actually mean.

  • Correlation/cause confusion

    "The arts have their value in society. You look at the Royal Albert Hall on Proms night. How many of those people are going to mug old ladies on the way home? Not many – they’ve got more important things to worry about."
    Prunella Scales, Big Issue in the North, 1996

    It is indeed highly unlikely that anyone returning from a classical music concert is going to mug someone along the way. But such a person is also probably more likely than the average member of the population to embezzle funds from their company the day after. Are we then to conclude that listening to Mozart will make you less likely to mug someone but more likely to fiddle the books?

    The conclusion would seem at best premature and at worst simply absurd. The problem is a straightforward confusion of correlations and causes.

    For example, Britons who regularly eat plantains are also more likely to regularly eat sweet potatoes. Does that mean eating one causes people to eat the other? Of course not.

    The explanation of the correlation is a third factor: plantains and sweet potatoes are both staples of the Caribbean diet. Coming from a family of Caribbean origin is thus the causal factor which explains the correlation between consumption of both foodstuffs.
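    The plantain example can be run as a toy simulation, with all the probabilities invented for illustration: neither food causes the other, yet the shared background factor is enough to make them correlate.

```python
# Minimal confounder simulation: a shared cause (family background, here a
# coin-flip flag) drives both behaviours; there is no direct causal link.
import random
random.seed(0)

rows = []
for _ in range(10_000):
    caribbean = random.random() < 0.1            # the confounding factor
    p_eat = 0.7 if caribbean else 0.05           # same propensity for both foods
    plantain = random.random() < p_eat
    sweet_potato = random.random() < p_eat       # drawn independently of plantain
    rows.append((plantain, sweet_potato))

def p_sweet_potato(given_plantain):
    """P(eats sweet potatoes | eats plantains == given_plantain)."""
    sub = [sp for pl, sp in rows if pl == given_plantain]
    return sum(sub) / len(sub)

# Eating plantains never caused eating sweet potatoes in this model, yet:
print(p_sweet_potato(True) > p_sweet_potato(False))   # True: correlated anyway
```

    Conditioning on the background factor would make the correlation vanish, which is exactly what disaggregating by a suspected common cause checks for.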

    Similarly, it remains true that audiences at classical music concerts tend to be middle class, a social group not prone to mugging but with a virtual monopoly on corporate fraud. It is far more likely that these broader facts about social position do more to explain the lack of muggers in Proms audiences than any morally improving quality in the music.

    Even if, as a matter of fact, the music does affect morality, the important point is that the mere correlation of reduced criminality and listening to orchestras does not show that it does.

    Nonetheless, is the existence of a correlation evidence that there is some causal story to be told that links the two, as is the case with both my examples? Often there is such a story to be told, but little is to be gained by telling it. The setting of the sun may explain both the closing of a flower and the locking of the park gates by the keeper, but the two effects are still caused by two very different mechanisms and have no deep connection.

    Similarly, the 9 o’clock train leaves at the same time as the 9 o’clock radio news bulletin starts. (Well, perhaps not on Britain’s creaking railways.) But to say that both are caused by it being 9 o’clock is surely an error: times are just the wrong kind of thing to be causes. The mere sharing of a common causal factor does not provide a causal link between two such events.

    Leaping from correlation to cause does seem to come naturally to us, however, perhaps because, as David Hume argued, ultimately regular correlations of a certain sort are the only evidence that there is such a thing as causation at all. (Many claim he went further and argued that causation was just a form of exceptionless correlation.) But even Hume would agree that not just any correlation points to a cause.

    News pages are full of reports of correlations where it is implied that there is some causal link. Consider, for example, what causal links might be assumed from such findings and what other explanations are possible. No rash assumptions are made (tabloid papers almost certainly would not have been as careful in their reporting) but nor is the full range of plausible explanations always considered. I would expect that Butterflies and Wheels readers would be more likely than others to work out what these are…

  • Fallacy of the complex question

    "Why are we so obsessed with what other people think of us? Why are we so concerned to fit in? Why do we submit so readily to the tyranny of the ‘they’?"
    Giles Fraser, the Guardian, 20 Dec 2003

    The most common example given to illustrate the fallacy of the complex question is "When did you stop beating your wife?" Such a question asks one thing while assuming a second, when it is just this assumption which needs to be established. First we need to know whether you did beat your wife. Only if it turns out that you did should we concern ourselves with when you stopped doing so.

    The great trick of a complex question is that any direct answer to it implicitly endorses the assumption, whereas any failure to offer a direct answer can look like an evasion. However, although in the heat of an argument it can throw someone off track, in this example, and in another favourite – "Why did you steal the money?" – it doesn’t take much thought to see the trick and simply respond, "I never started beating her" or "I didn’t steal the money".

    The fallacy is harder to spot when people use a complex question, not to make an accusation, but to frame a discussion or enquiry. In the example I quote from Giles Fraser, he opens his article with three questions, all of which assume something that he has not established and which, on reflection, may well not be true. How many of us are really "obsessed with what other people think of us"? Most people are at least concerned about how they are seen by others, but I would suggest it is a minority who are obsessed by it. Yet Fraser’s question assumes that we – he and his entire readership – are all obsessed by the opinions of others. Similarly, many of us are not "so concerned to fit in" and refuse to "submit so readily to the tyranny of the ‘they’." His questions encourage us simply to assume that we are all highly preoccupied with what other people think and only think about why this should be so.

    This kind of debate framing, which assumes a state of affairs which may not pertain, is remarkably common in the media. Often there is some flimsy basis offered, such as a report or single opinion poll. But then we are thrown straight into a debate: Why are people rejecting marriage? Why aren’t British men romantic? Why can’t actresses over forty get work? Why is the government destroying the BBC? How much freedom should we be prepared to sacrifice for security?

    Because the assumptions being made in such cases are often very plausible or reflect conventional wisdom, it is much easier to fall for the fallacy than it is when a false accusation against us is being smuggled in. The remedy, however, is the same. We need to be aware of what assumptions are being made by a question and challenge them if we think they are unfounded.

    Which leaves me with one final puzzle: Why do we fall for the fallacy of the
    complex question so easily?

  • Concealed caveats and qualifications

    "Now the Pentagon tells Bush: climate change will destroy us."

    Headline in the Observer, 22 February 2004

    In Britain at least, we expect newspaper headlines to overstate their case
    a little. What seems dramatic when printed in 72 point bold across the page
    often turns out to be much more mundane once the actual article is read.

    But in this particular example, the story is just as dramatic as the headline
    suggests. Apparently, a Pentagon report "warns that major European cities
    will be sunk beneath rising seas as Britain is plunged into a ‘Siberian’ climate
    by 2020."

    In a box accompanying the article in the print edition, headed "The key
    findings", we also discover that "by 2007 violent storms smash coastal
    barriers rendering large parts of the Netherlands uninhabitable. Cities like The
    Hague are abandoned."

    Pretty unbelievable stuff. The problem is that these are not firm predictions
    at all. Rather, they are just some of the more extreme scenarios that could
    happen as a result of global warming. The trouble is that the caveats which
    would make this clear are suppressed so as to be virtually invisible.

    The article should have predominantly used a variety of conditional forms –
    such as "may", "could" and "might" – along with
    some indication of how probable these outcomes are considered to be. But instead,
    it is largely written in the future tense – "Nuclear conflict, mega-droughts,
    famine and widespread rioting will erupt across the world" – or in the
    present simple – "riots and internal conflict tear apart India and Indonesia".
    Phrased in this way, the events described seem to be firm predictions, not merely
    possibilities among many.

    There are a few "coulds" scattered about, but definite indicative
    verb forms vastly outnumber these. Indeed, you need to look carefully to be
    sure that the report in question is only dealing with possibilities and not
    firm predictions. The clearest evidence that this is indeed the case comes in
    the comment that, according to the report, "an imminent scenario of catastrophic
    climate change is ‘plausible’." To say these outcomes are plausible is
    very different to saying they are predicted – a word used elsewhere in the article
    – or even probable. And it is certainly misleading to describe as "findings"
    scenarios that are no more than plausible.

    So few are the expected qualifications that it is actually possible that I
    have misinterpreted the report entirely and that the Pentagon really is predicting
    that these outcomes are overwhelmingly probable. The failure of the story to make
    the front page, rather than its actual content, is perhaps the strongest indicator
    that this is not in fact the case.

    This article is an extreme example. But subtler failures to include the caveats
    and qualifications that are required to make what is said accurate are all too
    common.

    Sometimes, it is arguable whether or not the lack of a qualification is a failure
    or merely a case of acceptable stylistic economy, since the caveat can be safely
    assumed. For example, an article in the Guardian included the sentence, "Mynak Tulku, the reincarnation
    of a powerful lama, is the Dragon King’s unofficial ambassador for new technology."
    It seems too much to say he was the reincarnation of a powerful lama. It would
    be more accurate to say something like "said to be the reincarnation of
    a powerful lama". But arguably such caveats can be assumed: we all know
    that whether he is in fact reincarnated is a matter of opinion. In the context
    of this particular article, I think the lack of caveat contributes to a general
    unquestioning acceptance of the beliefs of Bhutan’s Buddhists, but I accept
    that this could be seen as quibbling.

    Between the borderline case of the reincarnated lama and the extreme case of
    global catastrophe starting next year lie many instances where caveats are either
    missing entirely or played down. As writers, we need to make sure we include
    all the caveats that are necessary to make what we say true and which we cannot
    assume the reader will take for granted. And as readers, we need to be aware
    that many writers are not as vigilant as this, and look out for the signs of
    concealed or absent qualifications.

  • Low redefinition

    "Why do wars begin? The simple answer is that they never end."

    Tom Palaima, Times Higher Education Supplement, 12 December 2003

    One of the most commented upon headlines in the world’s press the day after
    9/11 appeared in the liberal French newspaper Le Monde: "We are all Americans".
    It was a powerful expression of the solidarity of democrats everywhere in the
    face of an apparently new and terrifying threat.

    No-one who read it, however, would have been foolish enough to take it literally.
    Had the article then gone on to claim that, since we were all Americans, French
    citizens should be able to vote for US presidents and have the other rights
    of US citizens, the absurdity would have been obvious. We all understood that
    the usual sense of "American" had been widened so that it carried
    a metaphorical and symbolic meaning, alongside its usual narrow one.

    Yet something like the absurd move from the metaphorical to the literal can
    happen with a process known as low redefinition. This is when the legitimate
    meaning of a word is broadened in order to make a questionable proposition seem
    more plausible.

    This is what I think is going on in Tom Palaima’s argument about wars. He wants
    to advance the thesis that war is essentially ceaseless. We ordinarily think
    about history as being divided between periods of peace and periods of war.
    In Western Europe, for example, it is thought that we have had peace since 1945,
    apart from a few regional conflicts, most notably those in the former Yugoslavia.
    Palaima, however, disputes this. War never ends, it simply goes through quieter
    and more active phases. "[P]eriods of so-called peace," he writes,
    "were intervals when the competing nation-states were inevitably preparing
    for the next phase of open war…"

    The problem with this thesis is that it is only true if war is understood in
    a broader sense than usual. It is not that we have been deceived about the lack
    of war in Western Europe, it is that there has not been war in the usual sense
    of the word, period. If you want to define war in a broader way, then you might
    be able to claim that war has never ceased. But we have to be clear that this
    requires a low redefinition of war – a broadening of its meaning – and is not
    just about correcting a false idea we have about war.

    There may well be legitimate reasons for wanting such a low redefinition. In
    Palaima’s case I think the motivating factor is a desire to make us reconsider
    the nature of peace. Palaima wants to challenge the comfortable idea that, because
    battles have not been fought with great frequency in Western Europe since 1945,
    we live in a period of geopolitical calm and stability where military power
    is no longer an issue. Urging us to accept a low redefinition of war is a way
    of making this claim vivid.

    Nevertheless, his article does not make it explicit that he is revising our
    ordinary sense of war. So his de facto low redefinition of "war" either
    fallaciously conflates the usual meaning of the word with his broadened one,
    in which case his argument is flawed; or it glosses over the revisionary nature
    of his claim, in which case his argument may be misleading.

    Low redefinition is often more brazen and without any good justification. When
    people say "chocolate is an addictive drug, "everyone is bisexual"
    or "altruism is ultimately just self-interest", they are in each case
    broadening the meaning of the central concepts to make what would otherwise
    be an outrageous claim plausible. In order for low redefinition to be a legitimate
    argumentative move, we need to know why the broader meaning is preferable to
    the usual, narrower one. Otherwise, it’s a bad move.

  • False analogies

    "…some of the same lawyers who spent years battling tobacco companies
    on behalf of sick smokers […] are arguing that the fast food industry
    is a similar risk to public health."
    CNN.com, 19 August 2002

    Several lawsuits have already been filed against fast-food restaurants, claiming
    that they are responsible for the ill health of the obese who have fed for years
    on their products. None have so far been successful, and many people regard them as
    some kind of joke. But advocates point out that the first people to sue "big
    tobacco" were ridiculed. Yet in 1998, the Master
    Settlement Agreement
    saw the major US tobacco companies agree to pay $246bn
    over 25 years to settle lawsuits filed by US states.

    The success of the tobacco suits has encouraged those who believe that the
    junk-food claims are analogous. In both cases, it is claimed that public health
    has been damaged by the actions of major corporations who concealed the health
    risks of their products. This makes them liable to pay damages to those who
    suffered as a result.

    From a legal point of view, there are certainly precedents set by the tobacco
    suits which those pursuing fast-food manufacturers will want to learn from.
    But are the two cases truly analogous?

    One legitimate way to draw an analogy in an argument is to identify a common
    logical structure. In such a case, it is not the content of the analogy that
    matters, but the structure of the inference. So, for example, it can be argued
    that the arguments against the tobacco and fast-food industries have a common
    structure, namely:

    1. If a manufacturer covers up the harm its products can cause, it is responsible
    for any such harm its products do cause.
    2. X has covered up the harm its products can cause.
    3. Therefore X is responsible for any such harm its products have caused.

    If (1) is true, then anything which makes (2) true can be substituted for X
    and the conclusion (3) will follow. In this way, the two arguments are analogous.
    In fact, they both share a basic valid form known as affirming the antecedent:

    If P then Q
    P
    Therefore Q
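    This shared structure is what logicians call modus ponens, and its validity
    can even be checked mechanically. As a minimal illustrative sketch in Lean
    (my own addition, not part of the original argument):

    ```lean
    -- Affirming the antecedent (modus ponens):
    -- given "if P then Q" and "P", conclude "Q".
    example (P Q : Prop) (h : P → Q) (hp : P) : Q := h hp
    ```

    Any propositions at all can be substituted for P and Q; the inference goes
    through regardless of content, which is exactly what makes two arguments of
    this form structurally analogous.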

    But the tobacco case is not cited purely because the arguments there have the
    same logical structure. Rather, it is claimed that the relevant facts are the
    same. In other words, the similarity extends to the content of the premises.

    This is where the analogy might break down, at least with the specific argument
    considered above. First, it is not clear that the Master Settlement Agreement
    was premised on the principle that "If a manufacturer covers up the harm
    its products can cause, it is responsible for any such harm its products do
    cause". But even if it were, the second premise – X has covered up the
    harm its products can cause – is almost certainly not equally true of the tobacco
    and fast food industries.

    It should have been obvious to tobacco companies a long time ago that their
    products were intrinsically damaging to health. Warnings such as those now placed
    in adverts by the alcohol industry – "use our product responsibly"
    – would be incongruous on cigarette packets. But fast-food, like alcohol, can
    be enjoyed in moderation without harm to health. And furthermore, the facts
    about what constitutes a healthy diet are well enough known for it to be possible
    to make the case that consumers should choose for themselves how much fast-food
    they consume.

    The complaints that can be made against the tobacco firms and the fast-food
    restaurants are thus disanalogous. Cigarette manufacturers are accused of not
    making public the then little-known risks of using their products; fast-food
    manufacturers are accused of not doing enough to highlight the well-known risks
    of misusing their products. In one case, it’s about covering up the inevitable
    harm, in the other it’s failure to advertise the potential harm caused by misuse.
    Hence the second premise of the argument is not the same in both cases and the
    analogy breaks down.

    When determining whether an analogy is a good one, sometimes the main point
    being made is about the logic of the argument and it is just the structural
    similarities which count. But in other cases, such as this one, the relevant
    similarities must extend to contents of the premises if the argument is to be
    analogous.

    However, what is interesting is that the analogy would become much closer if
    it could be shown that the fast-food manufacturers were concealing little-known
    facts about the dangers of their products. There would still be a difference,
    in that a healthy lifestyle can include moderate fast-food consumption, whereas
    it cannot include moderate smoking. But if legally the key concern is preventing
    information reaching the public domain, the analogy might nevertheless be close
    enough for some lawsuits to succeed.

  • Immunisation against error

    "People who tell you they’re not superstitious are lying."
    Frankie Dettori, Jockey, Observer Magazine, 5 January 2002.

    Since its birth in Ancient Greece, philosophy has sought the holy grail of
    certain knowledge. In this respect, philosophy reflects a common human desire
    to have things clear-cut. This desire can be satisfied – psychologically, if
    not logically – through the adoption of beliefs which are immunised against
    the very possibility of error.

    Dettori’s assertion about superstition is a striking example. He believes that
    everyone is superstitious. The problem is, of course, that some people claim
    not to be. If, however, he adopts the maxim "People who tell you they’re
    not superstitious are lying," then no such avowals count as evidence against
    him. Accepting you are superstitious supports his thesis; denying that you are
    simply shows you are a liar, and again fits the thesis.

    Of course, it is possible to argue that someone might behave in such
    a way as to show Dettori is wrong, but since Dettori’s claim is about what
    we really believe deep down, the mere fact that someone doesn’t manifest
    their superstition in behaviour should not count as counter-evidence.

    Traditionally, this kind of claim would be called unfalsifiable, meaning that
    nothing would count as evidence for its falsity. Jean-Paul Sartre seemed to
    make a similarly unfalsifiable claim when he claimed that we all feel anguish,
    and the reason why some people don’t appear to be anguished is because "they
    are merely disguising their anguish or are in flight from it." What that
    means is that no one can be held up as a counter-example to the thesis. A lack
    of apparent anguish can always be explained away as the result of disguise or
    flight.

    It is not generally the case that people deliberately make unfalsifiable claims
    as part of a conscious strategy to immunise themselves against error. On the
    contrary, the popularity of such assertions is precisely their apparent certainty.
    The fact that no evidence exists to counter a thesis is usually a good reason
    to suppose it is true. But sometimes we fail to notice that the lack of counter-evidence
    is due to the fact that its very possibility is ruled out by the claim being
    made. It is like a court case where the only admissible witnesses are those
    who support the prosecution. In such a rigged trial, it is not surprising that
    all the evidence falls on one side.

    The reason I prefer to talk about immunisation against error rather than falsifiability
    is that the key structural point about this bad move is not the relation to
    evidence but the way in which the assertion contains within itself the mechanism
    to repel all counter-arguments. This helps avoid the red herring defence made
    of many such claims: namely, that the more inconceivable it is that some evidence
    could prove a thesis wrong, the more certain it is that the thesis is true.
    Hence the most certain beliefs are precisely those for which nothing could conceivably
    count as counter-evidence.

    I think this defence is flawed, but by dwelling on it we miss the crucial point,
    which is not essentially about evidence. Rather it is that the apparent certainty
    is not because the thesis captures a truth about the world; it is merely a result
    of the way in which the thesis has been formulated. In other words, it cannot
    be wrong because it has been (implicitly or covertly) stipulated that it cannot
    be wrong, not because it is right.

    Immunisation against error is most evident in conspiracy theories, since these
    stipulate that any apparent counter-evidence is really evidence of the effectiveness
    of the conspiracy. "They would say that," is how apparent counter-evidence
    is rebutted. Yet even if we are not attracted to conspiracy theories, such ways
    of thinking are quite common. Have you never, in an immune system-like response,
    repelled a view that contradicts your own with a "they would say that"?
    If you say you haven’t, it’s my bet you’re either lying or in denial. Touché.

  • Half truths

    "I did not have sexual relations with that woman, Miss Lewinsky."
    Bill Clinton, January 26, 1998.

    If you think Bill Clinton’s statement at a White House conference about his
    relationship with Monica Lewinsky was just a bare-faced lie, consider this.
    In some circumstances, it is desirable for young people to maintain that they
    are still virgins. In others, it would be a complete embarrassment.

    Take a seventeen-year-old boy from a conservative family who has experience
    of oral sex, mutual masturbation and so on, but not penetrative sex. "Are
    you a virgin?" asks his parents. "Yes," the boy replies. "Are
    you a virgin?" ask his friends, "Dur, no!" comes the reply. Wouldn’t
    you say that in both cases the boy is being at the very least a little disingenuous?
    Yet one of the answers must be true.

    When Clinton looked straight into the camera and said he had not had sexual
    relations with Monica Lewinsky, he was talking as the boy was to his parents.
    He chose to interpret "sexual relations" as being a euphemism for
    full sexual intercourse in a context where the people asking the questions want
    to know about sexual behaviour more generally. He was playing on the ambiguity
    of the term to state a half-truth, which I would define as a statement which
    can be read as being literally true, but which occludes other important relevant
    truths.

    In that case, the half-truth was seen through pretty quickly. But what about
    this statement from Tony Blair to the British parliament on 24 September 2002?

    It [the UK intelligence dossier] concludes that Iraq has chemical and biological
    weapons, that Saddam has continued to produce them, that he has existing and
    active military plans for the use of chemical and biological weapons, which
    could be activated within 45 minutes, including against his own Shia population;
    and that he is actively trying to acquire nuclear weapons capability.

    The British government and Tony Blair have vigorously defended the truth of
    the notorious 45 minute claim, or at least that it was an accurate statement
    about what British intelligence thought at the time. But as I define it, this
    looks very much like a half-truth. The kinds of weapons which could be activated
    in 45 minutes were relatively small battlefield ones and not the "weapons
    of mass destruction" which had dominated debate. Indeed, this is what the
    Intelligence and Security Committee, comprising members of parliament, concluded
    in its report, which exonerated Blair of the charge of lying, but said:

    As the 45 minutes claim was new to its readers, the context of the intelligence
    and any assessment needed to be explained. The fact that it was assessed to
    refer to battlefield chemical and biological munitions and their movement
    on the battlefield, not to any other form of chemical or biological attack,
    should have been highlighted in the dossier. The omission of the context and
    assessment allowed speculation as to its exact meaning. This was unhelpful
    to an understanding of this issue.

    Half-truths exploit the difference between telling a lie and not telling the
    truth. One can fail to tell the truth by not saying everything as well as by
    saying things that are false. But the idea that lies are of necessity ethically
    worse than half-truths is hard to defend. Indeed, given that it is not always
    wrong to tell a lie (as in the hackneyed example of lying to protect an innocent
    person from a potential killer) it seems what is crucial is intent and effect.
    And the effect and intent of a half-truth can be as good or bad, malicious or
    honourable, as a lie.

    Rhetorically, however, half-truths can be more powerful than lies. Because
    half-truths are nonetheless truths, credible evidence can be given to support
    them. They can also be stated with total sincerity and conviction, just as long
    as the utterer is able to convince herself that, not being lies, they are really
    ok. But half-truths are designed to deceive, to deflect our attention from what
    hasn’t been said but which is of crucial importance. They are no better than
    lies and can sometimes be worse.

  • You can’t prove it

    "Cigarette smoking has not been scientifically established as a cause
    of lung cancer. The cause or causes of lung cancer are unknown."
    Imperial Tobacco legal documents, as reported in the Observer, 5 October 2003.

    "Prove it" looks like a fair challenge to issue to anyone making
    a claim you suspect to be false. And properly understood, that’s just what it
    is. The problem is that an adequate "proof" almost always leaves a
    space for the shadow of unreasonable doubt.

    If proof demands absolute certainty, then arguably nothing can ever be proven.
    Descartes,
    for example, whittled down his beliefs until he was left only with those he
    thought to be absolutely certain. All that remained was the fact that he existed.
    Worse, subsequent critics have maintained that, if he were to be truly consistent,
    not even that could be certain.

    Descartes was continuing a quest for certainty which was probably launched
    by Plato. In his famous simile of the divided line in The Republic, Plato makes
    it clear that the highest
    form of knowledge is of certain, immutable truths. Absolute certainty has been
    the goal of many philosophers ever since.

    However, running parallel to this "rationalist" tradition, more pragmatic
    minds have had other ideas. Aristotle, for example, wrote, "it
    is the mark of an educated man to look for precision in each class of things
    just so far as the nature of the subject admits." Strict proof is not
    possible in some subjects and the wise person thus accepts only as much proof
    as is possible.

    David Hume agreed, distinguishing between matters of fact and relations of ideas.
    The latter include mathematics,
    logic and statements which are true by definition. The former include all truths
    about the actual world. Hume pointed out that we cannot establish these truths
    according to the strict methods of deduction which we use with, say, mathematics.
    Rather, past experience provides evidence for their truth; evidence which is
    never logically watertight, but which we can judge to be sufficient.

    Take as a simple example the boiling point of water. Logically, it does not
    follow from the fact that all water to date has boiled at 100 degrees Celsius
    (adjusting for impurities and air pressure, of course) that all future water
    will do the same. But that doesn’t matter, because facts about the world aren’t
    established by strict logical deduction. Rather, they are determined by a process
    of generalisation from past experience.

    Perhaps most importantly for present purposes, it is usually (if not always)
    impossible to rule out on logical grounds alternative explanations. For instance,
    armed with a bluffer’s-guide version of quantum theory, you might point out that
    it is possible that the act of measuring temperature changes the substance measured,
    and that in nature, water actually boils at something other than 100 degrees.
    This is logically possible. But we have no reason to believe it to be true.
    Nevertheless, if you want to insist on a higher standard of proof than we normally
    do, you could say that the existence of this possibility shows we have not established
    beyond all doubt that all water boils at 100 degrees.

    It should be obvious that on this understanding of what proof requires, no
    facts about the world can ever be proved. Matters become murkier, however, when
    we get to facts for which there is overwhelming evidence, but not quite as much
    as our most well-established truths. Here is where we come across cases such
    as smoking and cancer. (The issue here is complicated further by the difficulties
    of establishing causal relations and differentiating causal factors and causes
    simpliciter.)

    Imperial Tobacco seems to be exploiting both the complications of causality
    and proof. What unites them, though, is the simple fact that, although there
    is overwhelming evidence that smoking is a major contributor to increased incidence
    of lung cancer, there is wiggle-room available for the sceptic to demand a higher
    standard of proof and claim that science has not yet met it. Not all alternative
    explanations have been ruled out and it remains at least possible that smoking
    is not a major causal factor at all.

    We should not be fooled by such sophistry. The fact that other explanations
    are logically possible is a red herring, for such is also the case for the boiling
    point of water. The fact that it is possible science has got it wrong is also
    uninteresting: science is by its nature fallible and to demand infallibility
    from it is to disobey Aristotle’s wise injunction.

    Like the connoisseur of good vodka, the truth seeker should not demand 100%
    proof. We have to live with a small measure of uncertainty. Proof only requires
    us to move beyond reasonable doubt. It cannot require us to remove all possibility
    of doubt whatsoever.

    See also Absence and Evidence

  • Taking credit

    "I could recite you the statistics: The lowest inflation, mortgage rates,
    and unemployment for decades. The best ever school results, with over 60,000
    more 11 year olds every year now reaching required standards in English and
    Maths. Cardiac deaths down 19 per cent since 1997, cancer deaths 9 per cent.
    Burglaries down 39 per cent."
    Tony Blair, Labour Party Conference speech, 30 September 2003

    What do you want from your government? Many would say, chiefly: security, the
    efficient management of the economy and the delivery of public services. What
    then can a government do to defend its record other than list the ways it has
    delivered? This is what Tony Blair did in his speech to this year’s Labour Party
    conference.

    Of course, the speech would have left out any statistics that did not show
    the government in a good light. The Conservatives claim, for example, that average
    waits for an operation on
    the National Health Service have increased from 90 to 96 days over the past
    three years and that the number of beds in English hospitals has fallen by 10,000
    since Labour came to power.

    But even the statistics that Blair did cite don’t necessarily provide evidence
    of good governance. Labour surely does deserve some credit for the low mortgage,
    inflation and unemployment rates. Although the ‘best ever school results’ could
    possibly be the result of more lenient marking, it also seems fair to attribute
    at least some of the improvements to government policy. But elsewhere, is Blair
    guilty of taking undeserved credit for improvements that would have happened
    anyway, or are actually signs of poor performance?

    It certainly looks like it. Take the reduction in cancer deaths. The fact is
    that the long-term trend throughout the developing world is for such a reduction
    and Britain’s 9% drop is only in line with European averages. (See this BBC news
    item, or the very detailed stats here, to put the UK figures in perspective.)
    So the UK government has only been performing
    at best averagely, and arguably has had little direct control over the long- to
    medium-term trends at all.

    It’s a similar story with cardiac deaths. The British Heart Foundation reports
    that "while the death rate has been falling
    in the UK, it has not been falling as fast as in some other countries."

    Even in the case of crime it is hard to disentangle the effects of government
    policy from factors outside its control, such as demography. For example, the
    higher the proportion of young men in the population, the higher crime rates
    tend to be.

    Blair is only doing what politicians of all stripes have always done. If something
    good happens while they are in power, they try to take credit for it. (Conversely,
    if something bad happens they try to show it was not their fault.) The implicit
    non sequitur is "if something good happens while I’m in charge, it is because
    of my actions."

    As a rhetorical move it can be effective, partly because humans instinctively
    understand the world as operating according to causal principles and, as David
    Hume argued, it’s a good job that we do. After all, we only ever directly observe
    conjunctions of events, not the causal links between them. Our minds thus have
    to fill in the gaps if we are to understand the world causally at all. But the
    downside of this is that we can easily mistake non-causal conjunctions for causal
    ones. Success under a certain regime is thus easily mistaken for success because
    of a regime.

    Unfairly taking the credit (or the blame) in such manner is not restricted
    to the political sphere. Consider the implied causal claims in the following:
    "Since we’ve been married, your career has shot ahead, while mine has stagnated."
    Or, "There has not been a single fatality in this factory all the time
    I’ve been the manager." Or how about, "The man who won five of the
    six major championships played between August 1999 and April 2001 with Titleist
    equipment has won only two of the last seven, and none of the last five, with
    the Nike driver in the bag." That man is Tiger Woods.

    It is not that we know Blair’s government deserves no credit for improved health
    stats or that the Nike clubs are just an excuse for Woods’ slump. It’s just
    that in both cases credit or blame is being dished out on the basis of a causal
    link that just hasn’t been established.

    See also Post Hoc Fallacies

  • Playing the rights card

    "What right do we have to touch and smell an animal that has rested
    beneath the surface for 10,000 years?"
    David G. Anderson, Times Higher Education Supplement, 8 August 2003

    Rights often seem fundamental to our sense of what is morally acceptable. The
    UN’s Universal Declaration of Human Rights is a quasi-sacred document, the benchmark
    against which the decency of a country is measured. Human Rights NGOs like Amnesty
    International are virtually beyond criticism, for what they are defending is so
    obviously just. And democratic governments pass laws at their peril that are perceived
    to infringe on the "inalienable" rights of their citizens.

    But while the discourse of rights is extremely powerful in the public domain,
    intellectually speaking, they command less than universal respect. Philosopher
    and social reformer Jeremy Bentham famously said that talk of "natural rights
    is simple nonsense: natural and imprescriptible rights, rhetorical nonsense –
    nonsense upon stilts." Many since have agreed with him, arguing that rights
    are not things we are born with but rather artefacts of human law. Rights are
    not the moral basis of law, they are rather products of the law, which has its
    moral basis in something completely different.

    Whether one takes the broadly Benthamite line or not, what is clear and obvious
    is that it is not clear or obvious what rights are, where they come from, and,
    most importantly, which rights are genuine and which bogus. What is the right
    to life and what kinds of beings have it? Does the right to work entail a duty
    on the part of the state to provide work? Does the right to free speech include
    hate speech?

    All these complexities should be clear to anyone who has tried to think seriously
    about rights. Yet such complexities are often swept away by the rhetoric of rights.
    Simply by claiming a right an argument can appear to be clinched or strengthened.

    This is what the anthropologist David G. Anderson did in his article on digging
    up the remains of mammoths in Siberia. His piece details the interesting traditions,
    rituals and – though he would find the word too judgemental – myths of the Evenki,
    a people indigenous to Siberia. He points out the conflicts between the wishes
    and views of scientists who want to get their hands on mammoth remains and those
    of the Evenki, who only permit their use if they "present themselves".
    Even then, a gift must be left in return.

    As part of his argumentative armoury he asks his rhetorical question about what
    right we have to dig up the mammoth’s remains. There is something very powerful
    in the question "What right do you have to…" which seems to demand
    an answer. It puts the person questioned on the back foot.

    But must such a question have an answer? In general, we do not require specific
    rights to perform specific actions. There is no right to whistle, or to use the
    toilet. Nor is there a right to read a newspaper on a train. Someone who challenged
    us to state by which right we were doing any of these activities would be asking
    a very odd question indeed. We generally have the right to do whatever we want,
    as long as we don’t infringe on other people’s rights of non-interference or break
    the law.

    In the case of digging up mammoth remains, the only significant rights we could
    be infringing would be the rights of the land’s owners to retain the integrity
    of their territory. To say that the dead mammoth has a right not to be disturbed
    is surely stretching the notion of rights too far. And it is not even the case
    that the Evenki have a right not to have their traditions offended against. As
    John Stuart Mill persuasively argued, mere offence cannot be the basis for a restriction
    of action, or else we’d have to ban anything that anyone takes offence at, which,
    given the variety of human responses, means just about everything.

    Just as it can be rhetorically powerful to erroneously demand by what right people
    act, so it can be effective to claim a right as justifying one’s own action. Yet
    when people claim the government owes them support to conceive a child artificially
    because they have a right to have children; that they should be allowed to spread
    racist or homophobic views because they have a right to free speech; or even that
    there is no need for greater gun controls in America because of the right to bear
    arms; it should be clear that it is all too easy to invoke a seemingly unobjectionable
    right to justify a possibly objectionable course of action.

    That should make us wary of invoking rights which seem to us to be fundamental
    and inalienable without stopping to ask if such a right really exists, as well
    as making us wary when people play the rights card to try to persuade us of their
    point of view. If rights are indeed important, we should think carefully before
    claiming or granting them.

  • Non sequiturs?

    "As yet another (British) panel concluded this week, there is no evidence
    that GM crops now in commercial cultivation are more dangerous to human health
    than conventional foods. So there is no reason why Europeans should not eat
    the GM food that Americans already consume by the siloful."
    Economist leader, 26 July 2003

    For the critical thinker, no error is more basic than the non sequitur: the
    conclusion that doesn’t follow. Non sequiturs are extremely common. People seem
    to like to scatter their texts with words like "therefore" and "so"
    whether or not the various points they are making in some way follow from each
    other.

    Saddam Hussein showed himself to be a brilliant exponent of the non sequitur
    in his surreal interview with British parliamentarian Tony Benn. In this short exchange
    he used the word "therefore" six times, in every case either creating
    a non sequitur or drawing a self-evident conclusion ("Therefore we are
    facing a critical situation.") Here’s the best example:

    "Those people and others have been telling the various US administrations,
    especially the current one, that if you want to control the world you need to
    control the oil. Therefore the destruction of Iraq is a pre-requisite to controlling
    oil."

    The important thing to note about non sequiturs is that what is at issue is
    not necessarily the truth of the main claims being made, but the inferential
    connection between them. For example, if I say, "I like cheese therefore
    it’s Tuesday", I have uttered a non sequitur, since the fact that it is
    Tuesday doesn’t follow from the fact that I like cheese. But it may nevertheless
    be true both that it is Tuesday and I do like cheese. So the fact that Saddam
    uttered a non sequitur does not in itself prove that the US does not want to
    control the world’s oil, nor that they didn’t see the destruction of Iraq as
    a means of achieving their goal. The complaint about the non sequitur simply
    draws our attention to the fact that the need to attack Iraq does not necessarily
    follow from the desire to control the world’s oil.

    The Economist leader also seems to contain a glaring non sequitur. It
    does not follow from the fact that a British panel has concluded that GM foods
    are safe to eat that they are actually safe to eat. It may be that insufficient
    studies have been conducted to allow us to conclude that absence of evidence
    for health risks is evidence for their absence. (See previous Bad Move, Absence
    and evidence.)

    But here we must be careful not to read such texts as though there were formal
    arguments with clearly defined premises and conclusions. The fact is that in
    real life not every stage of an argument is usually spelled out. Most arguments
    are enthymemes – they rest on unstated, often assumed, premises.

    It doesn’t take too much digging to identify the enthymematic nature of the
    Economist leader. The first words – "As yet another (British) panel
    concluded" suggest that we should assume the premise that there has been
    plenty of research into the health risks of GM food, enough to draw a conclusion
    from, and that all this research points the same way.

    Similarly, the conclusion leaves a little out. When it talks about "no
    reason why Europeans should not eat GM food" we should take it to mean
    no health reason, since those kinds of reasons are the only ones under consideration
    at this point.

    Once we accept these implied facts, the non sequitur disappears. It does follow
    from the fact that lots of research – enough for us to trust the results – says
    that GM food is safe, that there is no good health reason not to eat it.

    What the Economist leader demonstrates is how it can sometimes be too
    easy to identify bad argumentative moves in forms of writing that are not written
    for a readership of pedantic logicians. Has this series been guilty of identifying
    non sequiturs only by ignoring the premises and assumptions which are obviously
    implied if only one cares to look? I’ll let you be the judge of that.

  • Getting it out of proportion

    "Drinking around three litres of pure still filtered water a day makes
    a vital contribution to health."
    Emma Mitchell, the Guardian, 7 September 2002

    Emma Mitchell is a "natural health therapist" who, in her column
    for the Guardian, "Ask Emma", gives advice on how to live a healthier
    life. Emma is usually pretty sensible, and her advice to drink more water is
    in line with recommendations from most health experts.

    Emma, however, has a thing about the need for water to be filtered. No mention
    of H2O is complete without the qualifying word "filtered" attached.
    So, in her 5 July column, she advises someone who suffers from acne that it
    is likely she does "not drink enough filtered water". A few paragraphs
    later, she advises anyone eating soya beans to "first soak in filtered
    water for at least eight hours".

    One of the benefits of drinking more filtered water, according to Emma, is
    that "it helps eliminate toxins". This is probably why she thinks
    the water ought to be filtered: unfiltered tap water contains more toxins, so
    is not as healthy as the filtered alternative.

    I don’t disagree with any of this, but I do suspect that filtering water provides
    such an insignificant health benefit that to think one ought to do it for the
    sake of one’s own health is akin to thinking one can increase one’s life expectancy
    by exercising for 91 minutes per week rather than 90.

    Certainly the British Drinking Water Inspectorate thinks so. They insist that "All public
    water supplies in England and Wales are safe to drink and there is no need to
    install additional treatment within the home as a health protection measure."
    And they claim strict controls on levels of pesticides mean "additional
    filtration is not required". Last year, 99.87% of water samples met legal
    standards and none of those that failed posed a health risk.

    The Consumer Association also concluded in its report that tap water, filtered or unfiltered,
    often tasted better than bottled water, and that no water source contained unsafe
    levels of bacteria.

    In the US, it seems that tap water quality is more variable. But even the
    Natural Resources Defense Council, which campaigns for better-quality tap water,
    thinks that it is usually unnecessary to filter water. They recommend buying
    a filter only "if you know you have a tap water quality or taste problem,
    or want to take extra precautions".

    Such is the insignificance of the health benefits of filtered water that not
    even water filter companies make strong claims for them. The UK market leader
    Brita, for example, focuses instead on the general benefits of drinking water and only
    on the taste improvements it claims its filters provide. It also openly acknowledges
    that its cartridges cannot remove nitrates, reassuring us that "Water companies
    have to comply with the standards set down in the E.C. water quality regulations."
    But if these standards are good enough for what the filters can’t remove, then
    surely they are good enough for what they can.

    To think, therefore, that one really owes it to one’s health to bother to filter
    tap water when the tapped supply is perfectly decent is to get things out of
    proportion. Unfortunately, such losses of proportion are all too common, as
    we are generally very bad at assessing risks and tend to worry far too much
    about things that are of little importance or which we have no control over
    and ignore the basics of diet and exercise that really can affect both life
    expectancy and quality of life. Ironically, paranoia about issues such as drinking
    water may fuel the kind of stress which really might finish you off prematurely.

    Arguably, this isn’t just an intellectual and prudential mistake but a moral
    one. To worry about the infinitesimally small increase in risk to one’s health
    drinking unfiltered water represents, when one billion people do not have access
    to safe water and a child dies every fifteen seconds from water-related diseases,
    looks like moral myopia at its most narcissistic. That is why when I returned last year
    from a trip to East Africa, where young children were frequently seen
    carrying jerry cans for miles, I threw my water filter out in disgust.

    But there’s a coda: I soon retrieved it when I discovered that in my flat,
    unfiltered water just doesn’t taste nice. Which goes to show that even if one
    reason for doing something is bad, there may still be another, good reason
    for doing exactly the same thing anyway.

  • Fallacies of democracy

    "I don’t think we have been consulted as a democracy. It is the wrong
    war. We need a bit more imagination. All we are saying is the country is mature
    enough to sit down and have some kind of referendum."
    Damon Albarn, lead singer of Blur (Source: the Guardian, 21
    January 2003)

    Readers of last week’s column will not be surprised to find a rock singer once
    again cited as an authority on matters unconnected with music. The concern here,
    however, is not with Albarn’s expertise but with the climate of opinion he reflected.
    For during the build-up to the invasion of Iraq, his view was one shared
    by a great many of the British public. Since polls showed a majority of
    people against going to war with Iraq, it was common to hear people claim that
    to engage in such a conflict would be undemocratic.

    Even though once the conflict began opinion polls started to turn in favour
    of military action, this ex post facto change of heart doesn’t affect the main
    argument of the "war was undemocratic" camp. They could, and still
    do, argue that to start a war in defiance of the wishes of the British people
    was profoundly undemocratic.

    This argument is flawed in several respects. If it is premised on the view
    that majority opinion is always right, then it is clearly falling foul of the
    "democratic fallacy", since it is just not true that beliefs become
    true or false on the basis of how many people hold them.

    This crude fallacy is obviously not what most people have in mind when they
    claim Britain’s involvement in the second Iraq war is undemocratic. However,
    simply acknowledging that public opinion can be wrong immediately exposes the
    weakness of the other arguments that war was an affront against democracy.

    For instance, one can accept that the majority can be wrong but insist that,
    nevertheless, in a democracy majority opinion must be followed, for better or
    for worse. But this confuses democracy with simple majoritarianism. As defined
    by Merriam-Webster, a democracy is "a government in which the supreme power
    is vested in the people and exercised by them directly or indirectly through
    a system of representation usually involving periodically held free elections."
    A crude majoritarian system, in contrast, is one where the government always
    does what the majority wants.

    Most democracies are not majoritarian. If Britain were run on majoritarian
    lines, for example, then fox hunting would have been banned long ago and capital
    punishment would never have been abolished. In other words, Britain would be
    a country which killed more people but fewer animals.

    Majoritarianism is not the favoured system in the west for several reasons.
    One is to protect minorities. Another is rooted in an appreciation of the democratic
    fallacy: majorities are often wrong, and they are much more likely to be wrong
    when they are uninformed about the issue to hand, as they often are when detailed
    knowledge is required to make a wise decision. This is why Britain, like other
    western nations, runs on the model of a representative democracy. In this system,
    members of parliament are elected as representatives to make decisions on behalf
    of their electors, not as delegates to do whatever their electors tell them.
    They are held to account every four to five years at elections, when they are
    judged on their overall record.

    It therefore cannot be said to be undemocratic for parliament to act against
    the wishes of the majority of the population at any given time. This very possibility
    is just what distinguishes representative democracies from majoritarian regimes.
    The British Parliament, elected by the people, made a decision to go to war
    and members of that parliament will be re-elected or voted out by the people
    at the next election. That is paradigmatically democratic.

    Of course, the democratic fallacy would appear in another guise if we argued
    that decisions reached by this process were always right. But the argument here
    is not directly about whether it was right or wrong to go to war with Iraq,
    but whether it was democratic to do so. This charge cannot be made to stick.

    An interesting coda to this story is how public opinion has changed over time.
    In February 2003 the Guardian was able to report that only 29% of the
    British public supported a war on Iraq. By mid-April, following the fall of
    Baghdad, support had risen to 63%. Arguably, this shows how the fickleness of
    public opinion is another good reason why genuinely democratic governments cannot
    and should not always follow it.

  • Bogus authorities

    "After reading Captive State, I will never be able to take the Labour
    government seriously again."
    Thom Yorke, lead singer of Radiohead (Quoted on cover of George Monbiot’s
    Captive State)

    In my opinion – which has never been humble – and in the opinion of many others,
    Radiohead are the best rock band on the planet right now. They are one
    of only two groups whose latest album I buy as soon as it comes out, without
    listening to it or reading reviews first.

    But although I am prepared to acknowledge the genius of Thom Yorke et al. in
    the realm of music, I was not aware that Yorke was also a political commentator
    worth paying serious attention to. True, some saw the lyrics of "You and
    Whose Army?" from the Amnesiac album as a cutting attack on the
    Blair government. But can anyone seriously suggest the lyrics "You can
    take us all on/You and whose army?/You and your cronies/You forget so easily"
    are evidence of an expert knowledge of contemporary British politics?

    Why then should Yorke’s endorsement of a book on "the corporate takeover
    of Britain" be considered worth splashing over the dust jacket? What lends
    his words any authority? Why should we value his opinion on this more than we
    do that of my aunt Mable?

    The short answer is that there is no reason. Yorke is not an authority in matters
    of politics and the only reason what he says on the subject is taken seriously
    is because he is very popular for reasons totally unconnected with current affairs.
    Celebrities get to mouth off about whatever they want and people listen. This
    is just an extreme example of how people are often treated as "authorities"
    on certain subjects for no good reason.

    If all bogus authorities were so obvious there would be little point in drawing
    attention to them. But they’re not. Consider, for example, a more usual authority
    to quote on a book jacket: an endorsement by a leading writer or critic, or
    words from a review in a major publication. Aren’t these, if not completely
    authoritative, then at least bearers of some authority?

    The truth is that it’s hard to tell. For example, on the cover of Colin McGinn’s
    admittedly rather good autobiography, The Making of a Philosopher, Oliver
    Sacks testifies, "Brilliantly written, devastatingly honest, often very
    funny…" and gushingly on. It’s even more hyperbolic on the back cover.
    The praise loses some of its power, however, when you discover on page 226 of
    the book that McGinn knows Sacks and describes him as "remarkably erudite"
    and an "exceptionally thoughtful conversationalist, who weights his words
    as if they were precious stones…" and gushingly on again.

    I’m not suggesting Sacks and McGinn are engaged in anything sinister or cynical.
    I myself have had quotes from Nigel Warburton on the covers of two of my books.
    I did not solicit them and as far as I know my publisher didn’t know if we were
    acquainted. However, I am in fact privileged to count Nigel as a friend, and
    while I am sure he didn’t lie to do me a favour (he could have turned down the
    request for comments in confidence), I am sure knowledge of our acquaintance
    would lessen the sense of authority someone might otherwise attach to his endorsements.

    In the world of books, grudges, rivalries and friendships infect a large number
    of reviews, more than reviewers would care to admit. For that reason I have
    come to see endorsements from reviews or big-names on book jackets as carrying
    less authority than I used to.

    But perhaps the greatest problem of bogus authorities comes in the area of
    morality. In medical ethics, for example, who is considered an authority? Doctors
    for one, according to the Daily Telegraph, which ran a reasonably lengthy
    story recently when a leading transplant surgeon, Prof Nadey Hakim, called for
    the legalisation of a regulated trade in human organs. But why should a surgeon
    be in any better position to pronounce on the ethics of organ sales than, say,
    Thom Yorke? Hakim’s expertise is in surgery, not ethics.

    This is a tricky area, for arguably there is no such thing as an "expert
    in ethics", at least no experts to which we should all defer in the same
    way as there are experts in engineering or medicine. But there are people better
    qualified than others to examine and deal with complex ethical issues. Newspapers,
    radio and television tend to focus on what doctors, bishops, leaders of pressure
    groups and media pundits have to say. None of these, I would argue, are the
    best placed to provide moral guidance. We should instead rely more on members
    of ethics commissions and moral philosophers, who think about these problems
    in much more depth than anyone else but who do not get the chance to contribute
    as much to the debates as they deserve.

    The problem of bogus authorities is thus much more widespread and much less
    obvious than it is in the extreme case of the singer-cum-political commentator.
    Still, it’s amusing to see Yorke warble on the latest album, this time seemingly
    putting words into George Bush’s mouth, "Don’t question my authority or
    put me in the dock." It’s not just George’s authority that we should question,
    Thom. It’s yours.