Wednesday 18 June 2008

Do you know?

Strong agnosticism can be summed up on a bumper sticker: 'I don't know and you don't either.' With respect to Christianity, a strong agnostic would typically believe that nobody can know whether it is true or false. Initially, this seems possible; most major religions have spread by politics or violence, rather than by rational argument. But doesn't a little thought show that strong agnosticism is inconsistent?

Imagine a strong agnostic, Steve, contemplating a Christian, Dave. Steve says 'Dave doesn't know whether Christianity is true or false'. But then, looking in a Bible, Steve sees that it claims that people can know that it is true. Steve could then conclude that the Bible is false, and so that (at least evangelical) Christianity is false. He could go so far as to say 'I know Christianity is false,' thus refuting himself, for he believes that nobody knows whether Christianity is true or not.

It is just such an inconsistency that Alvin Plantinga points to in his book 'Warranted Christian Belief'. However, he goes further. After all, faced with the above argument, Steve is more likely to conclude that Christianity is false than that it is possible for people to know that Christianity is true. To escape this possibility, Plantinga sets up a model of how Christian belief could have warrant (for a discussion of warrant, see my previous post). He claims that none of the current arguments for strong agnosticism exclude his proposed model, so they don't show that nobody knows Christianity is true.

Of course, it wouldn't be inconsistent for Steve to say 'You propose that Christian belief is warranted in the following way. Although I don't know whether Christian beliefs are true or not, I do know that your account is flawed. I have an argument showing that Christian belief does not get warrant in the way you suggest.' The cunning argument above would not apply in this case, unless it could be shown that if Plantinga's account is incorrect, so is Christianity. Plantinga makes a claim of this kind ('if Christian belief is true, then the model in question or one very like it is also true'), but he provides little justification for this claim.

If Plantinga's account were to turn out to be fundamentally incoherent, Christians would be more likely to jettison Plantinga (together with his claim that if his argument falls so does the whole of Christianity) than to abandon Christianity itself. Indeed, this is true of any account of warrant. Dave would not be too worried if Steve were to show that any particular account of how Christian beliefs get warrant fails. He could just abandon that account, which is after all not fundamental to Christianity.

Surely, however, a similar thing would happen if Steve were to come up with an absolutely watertight argument showing that Christian belief, irrespective of whether it is true, cannot have warrant. Dave, forced to accept the argument, would not abandon Christianity itself. He would instead reinterpret the claim that Christians can know that their beliefs are true. Perhaps he would say 'This knowledge is not the petty kind that philosophers are able to tie up with words and play around with. This is a higher, a deeper kind of knowledge; in a sense it is more real, rather than less.'

Most strong agnostics are aware of this ploy; it has been used throughout history when part of Christian belief has been demonstrated to be false. The damaged part is abandoned, and the related claims are reinterpreted. This awareness means that they can't claim a proof that nobody knows Christianity is true as a proof that Christianity is false. For this reason, strong agnostics are not guilty of the inconsistency cited above. The argument is neat, but fails to take into account the realities of human nature.

Of course, this by no means shows that strong agnostics are justified in their claims. Indeed, Plantinga's model was constructed to show that they are not. Using the model, and other arguments, Plantinga considers several arguments for the strong agnostic conclusion, and finds them all wanting. Here Plantinga shows his skill as a philosopher: If the arguments are as he reports then they fail.

How does the model work? Well, Plantinga has defined warrant in terms of proper function. For example, it is through the proper function of my visual system that my beliefs about what I see have warrant. He claims that, similarly, there is a system (the sensus divinitatis) the proper function of which is to give us true beliefs about God. Accordingly, when we get beliefs about God in this way (such as that He exists, is omnipotent, omniscient and perfectly good), those beliefs have warrant.

A typical case of the operation of this sensus would be the arising of the belief, in a person looking up at the stars, that they must have had a majestic creator. It is also intended to include more specific religious experience, as well as the 'sense of the presence of God'. The claim is not that these experiences are evidence for Christian belief, but that the fact that they arise from a properly functioning cognitive system aimed at producing true beliefs gives them warrant.

It is therefore necessary to examine the idea of proper function. I believe that the principal system whereby I am able to know about my past, namely memory, is functioning properly and is aimed at giving me true beliefs. This is the case even though I have hardly any idea how it works. I would believe that my visual system functions properly and is aimed at giving me true beliefs even if I didn't have the slightly greater, but still extremely weak, grasp of how it works that I do have.

Why is that? Well, when I see something red, and I talk to other people, they also say it is red. Where I see a tree, others also claim that they see a tree. In much more complex systems, involving complicated interactions with other people or other senses, my visual system is vindicated time and again. This vindication is not the same thing as proper function, but it is certainly evidence of it. Accordingly, I have a second-level model of the world in which I model myself as seeing things via a visual system which gives me true beliefs about the world (for a discussion of this kind of model, see this post).

This higher-level model does not justify my visual beliefs; they do that for themselves. It explains them, and explains them in such a way that I believe that they have warrant. The model does not give the beliefs warrant (it is the proper function of the visual cortex which does that). Instead, the model justifies my belief that my visual beliefs have warrant: That they constitute knowledge.

Things do not always turn out so well. When I look at the systems that give me my beliefs, I sometimes find that they are not reliable, so that they either don't function properly or aren't aimed at producing true beliefs. Suppose, for example, that I have a particularly malignant kind of cancer. I may find that I nevertheless believe that I will survive. Careful study may show that this belief is common amongst those with terminal illness, and that it is as commonly false. I would have to conclude that the belief doesn't have warrant for me. The system that produced it is aimed at survival, rather than truth.

Of course, no system is perfect. Careful study of a system should allow us to discover under what circumstances a belief produced by it will have warrant for us. The idea that we can have beliefs while believing that those very beliefs don't have warrant for us may seem odd, but it is a natural concept, and one that we express using the words 'as if'. It looks as if the pattern is in motion, but I know that it's just an illusion. It feels as if there must be something I can do to contact my spouse, who is dead, but there is nothing.

Whatever system it is that philosophers use to come to their philosophical beliefs, it can't normally be functioning properly. This can be seen by considering the notorious and extensive disagreements amongst philosophers over the most basic things. Accordingly, you should take all philosophical ideas I present on this blog with a large pinch of salt (particularly since I'm not very good at philosophy).

What about the sensus divinitatis? Does study of the beliefs this system produces suggest that it is functioning properly? Frankly, no. The main reason is that it does not produce consistent results. Jews and Muslims believe on the basis of this sense that there is just one god. Christians believe something similar, though they are led to believe that that god consists of three persons. Hindus believe that there are very many gods, in just the same way. For still others, this sensus seems to give no more than the belief that the universe is a deeply mysterious, beautiful and awesome place.

Of course, Plantinga is aware of this problem, and is able to circumvent it. According to him, in anybody who is not a Christian the sensus divinitatis is malfunctioning as a result of the fall. Only in Christians is it functioning properly. This miraculous proper function is achieved through the proper function of yet another process (faith), which acts in Christians although it is not part of their native cognitive equipment. Since this second process is only given to Christians on the basis of grace, we should not expect to find it operating in non-Christians.

How does this secondary system work? First, there is the existence of the Bible, of which God is the principal author. When a Christian reads the Bible, a system called the internal instigation of the Holy Spirit (IIHS) kicks in, which when functioning properly gives the Christian the belief that what they are reading is true. This belief, coming as it does from a properly functioning system, has warrant. In such a circumstance, we say that the Christian knows by faith.

So now, it is necessary to ask, is faith (described in these terms) a properly functioning cognitive system aimed at the truth? Again, we must answer no. The first reason is once more the inconsistency of the beliefs produced in this way. The history of the Church is full of schisms and disagreements, some over substantial points of Christian doctrine, in which both sides claimed that they believed what they did by faith through the internal instigation of the Holy Spirit revealing to them the truth of scripture.

The second reason is one of inaccuracy. Some passages of the Bible suggest, on a naive reading, that God will answer any prayer whatsoever. Scientific studies have shown that this is false; in all circumstances that have been tested, God answers no more prayers than you would expect to be answered by chance if there were no God. You may not be so naive as to believe that the passages about prayer should be interpreted in a literalistic way, but others have been. They would have said that they believed God answers prayer, if not always, at least in such a way as to be distinguishable from coincidence, and that they believed this by faith through the internal instigation of the Holy Spirit revealing to them the truth of scripture.

Very well, the Christian may say, not everybody who says they believe something by faith is right. Nevertheless, in my case, faith is functioning properly, and so my beliefs have warrant. Even if this were true, the Christian surely does not have good reason to believe it. After careful consideration of the people with whom she disagrees, she concludes that people may have similar experiences to her and may be misled into false beliefs on the basis of those experiences. So she can't conclude with certainty that the same experiences in her are the operation of a properly functioning system. Nor can she claim that she knows by faith that her particular beliefs have warrant. On this model, only truths revealed in the Bible are known by faith, and claims about her particular beliefs aren't there. The best she can say is 'It feels to me as if these things are true'.

Even worse, faith can lead people to believe things in the face of strong contrary evidence. For, faced with strong evidence in favour of evolution, the Christian may reason thus: 'If my cognitive faculties were working perfectly, I would believe what the Bible says, and I would see its truth so clearly that this evidence would pale into insignificance. What's more, my belief would be warranted. So I can be perfectly warranted in believing this. Accordingly, I won't believe the theory of evolution by natural selection on the basis of this evidence'. This may seem like an extreme exaggeration of Plantinga's view, like a disfigured straw man. But in fact, it is precisely this sort of argument which Plantinga gives in response to the problem of evil in the last chapter of his book.

Plantinga's definition of faith, then, draws dangerously close to the caricature suggested by Richard Dawkins:
Faith ... means blind trust, in the absence of evidence, even in the teeth of evidence.
Then again, as Plantinga himself says,
What is supposed to be bad about believing in the absence of evidence?

Monday 16 June 2008

Show me your warrant.

Imagine yourself on a walking holiday in the Lake District with an optimist. Despite forecasts of heavy rain, the optimist is convinced that the weather tomorrow will be fine, and plans a serious walk. As it turns out, the weather is perfect. The optimist triumphantly says 'You see; I knew it would be fine.' However, there is a sense in which they did not know. They had no good reason even to think that there would be no rain, and were right by pure chance.

Is Christian belief similar? If Jesus were to return tomorrow, and a Christian were to say 'You see, I knew it,' would they be misusing the word 'know' in precisely the same way? This is the question Alvin Plantinga addresses in his warrant trilogy. The main concept Plantinga uses is 'warrant': That quantity, enough of which makes true belief into knowledge. He begins by trying to answer the question 'just what is warrant?' This is a typical philosophical question, and comes with some typical philosophical issues.

We human beings, who spend most of our time interacting at sensible speeds with objects about the same size as ourselves, have developed language to allow us (among other things) to communicate truth about the world in which we live. In doing so, we have given words meanings. A philosopher, seeing that people use a word like 'exist', may wonder precisely what that word means. They will not be satisfied with the synonym-based definitions in dictionaries; they want to get to the heart of the matter.

Unfortunately, although we all get along perfectly well using the word in normal life, we become very confused trying to apply the word in extreme special cases (do possible worlds exist?). There may not be any precise formulation which captures the meaning of the word and corresponds to our intuitions in all special cases. One skill philosophers have honed is finding how close to this ideal they can get. In the course of their investigations, they have discovered that finding the meaning of words is hard, and often impossible.

Sometimes, if you are very lucky, it is possible to find a precise definition of an idea which is both helpful for talking about the world and close to the meaning of the original word. If this project is successful enough, the meaning of the word will eventually change to fit the definition. Something of this kind has happened with words like 'energy' and 'symmetry'. These definitions are often 'definitions in use': Rules for turning sentences involving the word into other sentences with the same meaning not involving the word. Unfortunately, when this is done successfully, the word is normally claimed in the name of science or mathematics.

More often, as with words like 'self', there is a definition-in-use which is helpful for talking about the world and fits very roughly with the original word, corresponding closely in some situations and deviating wildly in others. There may be more than one such definition, in which case philosophical arguments spring up. Occasionally, the project becomes so difficult that philosophers despair of the task. Something like this has happened with words like 'exist'.

What about warrant? Well, Plantinga is not using the word 'warrant' in a standard way; he instead defines it as the quantity enough of which makes true belief into knowledge. So the word whose meaning is being considered is 'know'. Where on the above continuum does this word lie? It has caused so much trouble that the study of it has been given a name of its own: 'epistemology'. Some have despaired of ever understanding it. Most, whilst agreeing that it is a stubbornly difficult case, have kept trying since it is such an important word.

Plantinga follows current fashion in saying that knowledge consists of true belief together with some other quantity: Warrant. In the first book of the trilogy, 'Warrant: The Current Debate', he tears apart several misguided attempts to give a precise definition of warrant. In each case he does so by considering some extreme special cases, in which the purported knower is suffering from severe cognitive dysfunction.

In 'Warrant and proper function', the next book in the trilogy, he suggests his own approach. Having just demonstrated how readily any precise definition can be demolished, he presents a broad-brushstroke picture about which he is careful to make no claim of precision. Except in the heat of argument, he avoids phrasing his idea in terms of 'severally necessary and jointly sufficient conditions'.

The idea is that a belief is warranted if:
  1. It is produced by cognitive faculties that are working properly in an appropriate cognitive environment.
  2. The segment of the design plan governing the production of the belief is aimed at the production of true beliefs.
  3. There is a high statistical probability that a belief produced under those conditions will be true.
In using the concept of proper function, this approach recognises that very often we rely on nonconscious processing within our brains to arrive at our beliefs. When I see a face, I do not carefully consider the raw sense-data being sent up the optic nerve from my eye and conclude that it matches a complex structural pattern I have designated as characteristic of faces. Instead, some pre-processing in my brain lets me know directly that it is a face I can see. I trust this pre-processing implicitly, because I know it functions properly. It is by means of this proper functioning that I get my knowledge that it is a face that I see.

It isn't that I say to myself 'My pre-processors are telling me I see a face, and I trust them, so I really must be seeing a face'. I just know, without the need for such redundant reasoning, and I get the knowledge from that pre-processing. The belief has warrant because whatever part of my brain it is that subconsciously recognises faces is working properly.

Plantinga discusses some of the other ways we get knowledge, for example by memory or empathy. He concludes in each case that we get the knowledge via some such process, and that the knowledge has warrant because the process is functioning properly. He is a little short on detail, which is fair enough, as the way in which our brains actually do these remarkable things is not yet well enough understood to give a detailed account.

He also doesn't give a clear explanation of what a design plan (specifications for proper function) is, or what it means for such a thing to be aimed at the truth. Of course, we have an intuitive grasp of this idea. We can imagine a designer designing something. But we can also recognise that something is working properly if we think it was not designed; if we think it was produced by natural selection, for example. It is this sense that Plantinga intends:
Here I use 'design' the way Daniel Dennett (not ordinarily thought unsound on theism) does in speaking of a given organism as possessing a certain design, and of evolution as producing optimal design.
Plantinga doesn't give details of how, if you come across a system, you can tell if it is functioning properly or what it is aimed at. He delights in not doing so; after all, he thinks that there was in fact an intelligent designer so no further explanation is needed. In this, he is mistaken. For if the definition of 'proper function' is 'functioning in the way that the designer intended', then we would have no way of determining, by observation of the system itself, whether it is functioning properly or not. However, microbiologists (for example) claim to be able to do exactly that; they are working with the intuitive notion of 'proper function', which Plantinga also uses but does not explain.

He thinks that the idea of an intelligent designer would explain what it is for a thing to be designed. But in that case, to determine the design plan of something, it would be necessary to know the mind of the designer (in this case, of God). But to say that the explanation of a difficult philosophical idea is that it may be found in the mind of God (as Plantinga does with many ideas, such as number, proposition, proper function and therefore also warrant and knowledge) is to say nothing. There is no more content there than in the more colloquial 'God knows'.

Ranting aside, however, the link Plantinga establishes between our ideas of knowledge and of proper function is valuable, and the intuitive notion of proper function should suffice for the rest of this post. This account of warrant isn't quite right, and it faces some minor technical difficulties, but it provides strong enough support for a discussion of whether Christian belief is warranted, a topic which he tackles in the final book of the trilogy, 'Warranted Christian Belief', which I'll discuss in my next post.

Saturday 14 June 2008

A surprising connection.

Galois connections are often hidden behind well-behaved areas of mathematics, and they are often produced in a standard way from simple binary relations. Here's one that seems to produce mathematics out of thin air.

Ultrafilters on a set $X$ are a bit like generalised points of that set. Of course, principal ultrafilters correspond to points of the set in an obvious way. If $X$ lives in some model of set theory, and we take an ultrapower of that model by some ultrafilter, then in the ultrapower $X$ gains a generic point, with respect to which the ultrafilter is principal.

Sometimes we might want to associate actual points of the set to these ultrafilters: We are interested in relations $R$ between $X$ and the set $\beta(X)$ of ultrafilters on $X$. Such a relation is a set of ordered pairs $(x, {\cal U})$ with ${\cal U}$ an ultrafilter on $X$, and $x$ a point of $X$. ${\cal U}$ may be thought of as specifying a set of subsets of $X$ to which some imaginary point belongs. Unless ${\cal U}$ is the principal ultrafilter at $x$, there will be some sets containing $x$ but not in ${\cal U}$: Such sets are witnesses of the fact that ${\cal U}$ isn't $x$, and so I'll call them inconsistent with the pairing $(x, {\cal U})$. All other sets are consistent with the pairing.

This consistency relation induces a Galois connection from the set of relations $R$ of the type described above to the power set of the power set of $X$. It is here, on this bleak mountaintop of abstraction, that there is a surprise. The sets of subsets of $X$ which are closed with respect to this connection are precisely the topologies on $X$.

Proof: Let ${\cal T}$ be a set of subsets of $X$ closed with respect to the connection. Then there is a relation $R$ which is taken to ${\cal T}$ by the connection. That is, ${\cal T}$ is the set of subsets of $X$ consistent with $R$; ${\cal T}$ is the set of sets $O$ such that, for all $x$ and ${\cal U}$ with $xR{\cal U}$, if $x \in O$ then $O \in {\cal U}$. In particular, as the empty set $\emptyset$ contains no points, it is in ${\cal T}$. As $X$ is in every ultrafilter, $X \in {\cal T}$. If $A$ and $B$ are in ${\cal T}$, $xR{\cal U}$, and $x \in A \cap B$, then $x$ is in both, so both are in ${\cal U}$. But then $A \cap B \in {\cal U}$, as ${\cal U}$ is an ultrafilter. That is, $A \cap B \in {\cal T}$. If each set in a family $(A_i)_{i \in I}$ is in ${\cal T}$, and $x \in \bigcup_{i \in I}A_i$, then $x$ is in one of them, so one of them (and hence their union, as ultrafilters are closed under supersets) is in ${\cal U}$. That is, ${\cal T}$ is closed under arbitrary unions. Putting it all together, ${\cal T}$ is a topology on $X$.

Suppose that ${\cal T}$ is a topology on $X$, and let $R$ be the relation ${\cal T}$ is taken to by the Galois connection, and suppose that the connection takes $R$ to ${\cal T}'$. It is enough to show that ${\cal T}' = {\cal T}$. Evidently ${\cal T} \subseteq {\cal T}'$, so it's enough to show that, for any set $C \not\in {\cal T}$, we have $C \not\in {\cal T}'$. Let $C$ be such a set, and let $O$ be the interior of $C$. As $C$ isn't open, there is some $x \in C \setminus O$. Let ${\cal F}$ be the set of all open neighbourhoods of $x$, together with the complement of $C$. Any finite intersection of sets in ${\cal F}$ is nonempty, so ${\cal F}$ can be extended to an ultrafilter ${\cal U}$. Any neighbourhood of $x$ is in ${\cal U}$, so $xR{\cal U}$. But $C \not\in {\cal U}$, so $C$ isn't in ${\cal T}'$, as required.
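On a finite set every ultrafilter is principal, so the whole construction can be checked by brute force. Here is a Python sketch (my own illustration, not from the argument above; the names are mine) which enumerates all relations between a 3-element set $X$ and its (principal) ultrafilters, computes the Galois-closed families of subsets, and confirms that they are exactly the topologies on $X$:

```python
from itertools import combinations

X = {0, 1, 2}
subsets = [frozenset(c) for r in range(len(X) + 1)
           for c in combinations(X, r)]

# Every ultrafilter on a finite set is principal, so identify the
# ultrafilter U_y = {A : y in A} with the point y.  A subset O is
# consistent with the pair (x, U_y) iff  x in O  implies  O in U_y,
# i.e.  x in O  implies  y in O.
def consistent(x, y, O):
    return x not in O or y in O

# One direction of the Galois connection: a relation R (a set of
# pairs (x, y)) goes to the family of all subsets consistent with R.
def closed_family(R):
    return frozenset(O for O in subsets
                     if all(consistent(x, y, O) for (x, y) in R))

def is_topology(T):
    # For finite X, closure under pairwise unions and intersections
    # suffices, since arbitrary unions reduce to finite ones.
    return (frozenset() in T and frozenset(X) in T
            and all(A | B in T and A & B in T for A in T for B in T))

pairs = [(x, y) for x in X for y in X]
relations = [frozenset(c) for r in range(len(pairs) + 1)
             for c in combinations(pairs, r)]          # 2^9 relations
closed = {closed_family(R) for R in relations}

families = [frozenset(c) for r in range(len(subsets) + 1)
            for c in combinations(subsets, r)]         # 2^8 families
topologies = {T for T in families if is_topology(T)}

assert closed == topologies
print(len(topologies))  # 29: the number of topologies on a 3-point set
```

The check succeeds, with the Galois-closed families coinciding with all 29 topologies on a 3-element set.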

This result is remarkable enough, but there's more. It turns out that compactness and Hausdorffness correspond closely with similar properties of relations. Say a relation $R$ is surjective if, for every ${\cal U}$ there is at least one $x$ with $xR{\cal U}$. Say $R$ is injective if, for any ${\cal U}$, there is at most one $x$ with $xR{\cal U}$. If $R$ is a function, these definitions are exactly the usual definitions of injectivity and surjectivity.

Claim 1: Let $R$ be a relation, as above, and let ${\cal T}$ be the topology that $R$ is taken to by the Galois connection. Then ${\cal T}$ is compact if $R$ is surjective.
Proof: By contradiction. Pick any open cover of $X$ with no finite subcover, and let ${\cal F}$ be the set of complements of the sets in the cover. Then any finite intersection of sets in ${\cal F}$ is nonempty, so ${\cal F}$ may be extended to an ultrafilter ${\cal U}$ on $X$. By surjectivity, there is some point $x$ with $xR{\cal U}$. $x$ must lie in some set $O$ of the original cover. But $O$ can't be in ${\cal U}$ (its complement is), contradicting the definition of ${\cal T}$.

Claim 2: Let ${\cal T}$ be any topology on $X$, and let $R$ be the relation that ${\cal T}$ is taken to by the Galois connection. Then ${\cal T}$ is compact iff $R$ is surjective.
Proof: The 'if' follows from Claim 1 and the fact that ${\cal T}$ is closed with respect to the Galois connection. To prove the 'only if', suppose that ${\cal T}$ is compact, and let ${\cal U}$ be any ultrafilter on $X$. Suppose for a contradiction that every $x \in X$ has an open neighbourhood not in ${\cal U}$. These neighbourhoods form an open cover, which therefore has a finite subcover. The complements of the sets in this subcover are in ${\cal U}$, and their intersection is empty, contradicting the fact that ${\cal U}$ is an ultrafilter. So there is an $x$ such that every open neighbourhood of $x$ is in ${\cal U}$, so that $xR{\cal U}$. As ${\cal U}$ was arbitrary, $R$ is surjective.

Claim 3: Let $R$ be a relation, as above, and let ${\cal T}$ be the topology that $R$ is taken to by the Galois connection. Then $R$ is injective if ${\cal T}$ is Hausdorff.
Proof: By contradiction. Let ${\cal U}$ be an ultrafilter, and let $x \neq y \in X$ be such that $xR{\cal U}$ and $yR{\cal U}$. Then we can find disjoint open sets $O$ and $P$ with $x \in O$ and $y \in P$. Then $xR{\cal U}$ implies that $O \in {\cal U}$, and $yR{\cal U}$ implies that $P \in {\cal U}$. But $O \cap P = \emptyset \not\in {\cal U}$, contradicting the fact that ${\cal U}$ is an ultrafilter.

Claim 4: Let ${\cal T}$ be any topology on $X$, and let $R$ be the relation that ${\cal T}$ is taken to by the Galois connection. Then $R$ is injective iff ${\cal T}$ is Hausdorff.
Proof: The 'if' part follows from Claim 3 and the fact that ${\cal T}$ is closed with respect to the Galois connection. To prove the 'only if', suppose ${\cal T}$ isn't Hausdorff, and let $x$ and $y$ in $X$ be distinct but not separated by any pair of open sets. Let ${\cal F}$ be the set of open sets containing either $x$ or $y$. Any finite intersection of sets in ${\cal F}$ is an intersection of an open set containing $x$ with one containing $y$, so is nonempty. Hence ${\cal F}$ can be extended to some ultrafilter ${\cal U}$. Then $xR{\cal U}$ and $yR{\cal U}$, so $R$ isn't injective.
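For those who like to see such claims checked concretely, here is a brute-force sanity check of the Hausdorff/injectivity correspondence on a three-element set, where every ultrafilter is principal and compactness is automatic. The encoding is my own and this is only a finite-case check, not a proof.

```python
from itertools import combinations

X = frozenset({0, 1, 2})
SUBSETS = [frozenset(c) for r in range(4) for c in combinations(X, r)]

def is_topology(T):
    # For a finite family, closure under pairwise unions and
    # intersections (plus containing the empty set and X) suffices.
    if frozenset() not in T or X not in T:
        return False
    return all(A | B in T and A & B in T for A in T for B in T)

# All topologies on X, by brute force over families of subsets.
topologies = [
    set(F)
    for r in range(len(SUBSETS) + 1)
    for F in combinations(SUBSETS, r)
    if is_topology(set(F))
]

# On a finite set every ultrafilter is principal: U_y = {A : y in A}.
# The relation R then reads: x R U_y iff every open neighbourhood
# of x contains y.
def related(T, x, y):
    return all(y in O for O in T if x in O)

def is_hausdorff(T):
    return all(
        any(x in O and y in P and not (O & P) for O in T for P in T)
        for x in X for y in X if x != y
    )

def r_injective(T):
    # No principal ultrafilter U_y may be related to two distinct points.
    return all(
        not (related(T, x, y) and related(T, z, y))
        for y in X for x in X for z in X if x != z
    )

print(len(topologies))  # 29 topologies on a 3-point set
print(all(is_hausdorff(T) == r_injective(T) for T in topologies))  # True
```

In the finite case the 'only if' direction of Claim 4 holds for every topology, not just closed ones, since the filter in the proof can be extended to a principal ultrafilter directly; the check above reflects that.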

The converses to claims 1 and 3 are false. This remarkable pattern is a shadow of a pair of adjoint functors, which I hope to say a little more about soon.

Monday 9 June 2008

Survival of the thickest?

I recently came across a novel argument by Alvin Plantinga, which aims to show that it is irrational for a human to believe both in naturalism and in the theory of evolution by natural selection. Unfortunately, Plantinga never managed to write it up properly, but a draft may be found here. Only the first part is worth reading; the rest is a turgid and incomplete response to some objections which differ greatly in character from the one I give below. Essentially, Plantinga argues that evolution is unlikely to produce rational creatures whose reasoning is reliable. This gives those who believe that they are the result of unguided evolution a reason to doubt the reliability of their own reasoning, and so to doubt all of their conclusions; in particular, they should doubt their belief in evolution.

Plantinga does not query the idea that natural selection can produce rational creatures. Instead, he aims to show that rational creatures arising in this way are unlikely to reason reliably; that their beliefs are likely, on the whole, to be false. It is necessary to introduce the idea of rationality in this way in order to make the argument apply to us. Evolution produces a great diversity of creatures, but we are the only rational creatures on Earth.

Since selection occurs on the basis of behaviour, Plantinga considers four possible ways in which the beliefs of a rational creature could be related to its behaviour.
  1. Not at all. In this possibility, beliefs have no effect at all on behaviour. So true beliefs are no more likely to be selected for than false ones. Natural selection is unlikely to produce creatures with such useless baggage. Also, we would have no way to recognise that creatures of this kind were rational. We have no way of knowing that, for example, trees are not rational in this way. Since a key starting point of the argument was that we are the only rational creatures on Earth, the word rational must be being used in some sense that excludes this possibility. Humans are recognisably rational; it is ridiculous to think that our beliefs do not affect our behaviour.
  2. Beliefs do affect behaviour, but in a way unrelated to their content. The same comments apply to this possibility as to the first.
  3. Beliefs are causally efficacious: The creatures act on the basis of their beliefs. However, the behaviour this produces is maladaptive. Such creatures would die out. The probability of rational creatures produced by natural selection being of this kind is therefore tiny. Equally, our own continued survival as a species shows that we are not of this type.
  4. Beliefs are causally efficacious: The creatures act on the basis of their beliefs. What's more, the behaviour that this produces is adaptive, so that the creatures survive. By a process of elimination, we are of this type. This is a good sign, since any rational creature produced by natural selection is overwhelmingly likely to be of this type.
Plantinga claims that the first two types of rationality would be fairly probable, a claim which the above analysis shows to be incorrect. He agrees that the third case is improbable. We can therefore focus on the fourth case, where he claims that the probability that the beliefs of the creatures are, on the whole, reliable is not much more than 1 in 2. After all, it is possible to conjure up scenarios in which a creature survives by acting on the basis of false beliefs. To quote him:
Perhaps Paul very much likes the idea of being eaten, but when he sees a tiger, always runs off looking for a better prospect, because he thinks it unlikely that the tiger he sees will eat him. This will get his body parts in the right place so far as survival is concerned, without involving much by way of true belief. Or perhaps he thinks the tiger is a large, friendly, cuddly pussycat and wants to pet it; but he also believes that the best way to pet it is to run away from it. . . . or perhaps he thinks the tiger is a regularly recurring illusion, and, hoping to keep his weight down, has formed the resolution to run a mile at top speed whenever presented with such an illusion; or perhaps he thinks he is about to take part in a 1600 meter race, wants to win, and believes the appearance of the tiger is the starting signal; or perhaps. . . . Clearly there are any number of belief-cum-desire systems that equally fit a given bit of behavior.
Plantinga acknowledges that most such scenarios involve creatures whose belief-desire structures are, on the whole, maladaptive; they are only appropriate in very special circumstances. He therefore introduces a more subtle scenario. In this scenario, the creatures believe that everything is conscious, so that for example they have no word for 'tree', but rather one for 'conscious tree'. In particular, almost all beliefs of the creature are false. This is, of course, a simplification of what early humans believed.

Suppose we were to meet such a creature, who spoke a different language to us. Let us suppose that their word for 'conscious tree' is 'shrubbery'. We, after talking to them for a while, would conclude that their word 'shrubbery' meant tree, and that their beliefs about trees were on the whole correct, but that they also had a false belief; namely, that trees are conscious. But any account of belief that accords with the way we use the word (to account for something which humans express by their language) must say that in this case the beliefs of the beings are on the whole true, as we saw, though they include the false belief that everything is conscious.

This shows that Plantinga is simply working with an inadequate notion of what a belief is. That is fair enough; after all, we don't (as far as I know) have a good working definition to which he could refer. Nor can I give one. This unfortunately leads to incorrect arguments being put forward from time to time, of which Plantinga's is an example.

We may, however, say that we would expect some of the beliefs of the beings to be false. For example, it seems to be better to believe that there are causal links in situations where there are not than to risk missing actual causal links. So we would expect evolved rational beings to believe in causal links where there are none. We can easily observe that this is true of ourselves; it is called superstition. The field of studying what errors evolved beings might be expected to make is a growing one.

On the whole, we have found that we make errors of the expected kinds, and so this endeavour allows us to clear up our thinking. For example, those who understand some of this often actively guard against their tendency towards superstitious thinking. A sensible study of the limitations we would expect to find in ourselves given that we have evolved leads to humility, and to hope that understanding the truth about ourselves may yet set us free, but it does not swamp us in the kind of radical doubt Plantinga envisages.

Friday 6 June 2008

Discussing Dawkins.

Very few of the responses to 'The God Delusion' that I have come across deal with the main argument it presents. So I was very pleased to see that the Faraday Institute had arranged a discussion on that very issue. If you're not familiar with the argument, you should probably have a look at it before reading on.

Unfortunately, the discussion was unsatisfying for a couple of reasons. First, the format led to the speakers talking past, rather than to, each other. Second, nobody who spoke accepted that Dawkins' argument was valid. I don't think it is valid either, but I would have liked to know how those who do accept it defend it. I had hoped that a discussion of the argument would involve a speaker who accepted it. In any case, it is worth exploring some of the ideas that were introduced.

The discussion began with each of the two main speakers giving a 20 minute presentation. Richard Swinburne, a prominent opponent of Dawkins, was up first. Swinburne mentioned that he did not accept Dawkins' argument, but did not explain where the problem is. He spoke on a related theme, reiterating an argument he had proposed before 'The God Delusion' was written. The main link, apart from the similarity of form, seems to be that this argument was criticised in the chapter of TGD that contains the key argument.

The argument may be summarised by the phrase 'Theism is a simple explanation'. After a digression on the nature of scientific explanations (which I will not go into; it isn't relevant to the argument, and better accounts of scientific explanation exist), Swinburne introduced the following criteria for accepting that a hypothesis is true, given some evidence:
  1. The evidence should be a likely outcome, given the truth of the hypothesis.
  2. The evidence should be an unlikely outcome, given the falsehood of the hypothesis.
  3. The hypothesis should be simple.
  4. The hypothesis should fit in with our background knowledge.
He then mentioned that if we're looking for a theory of everything, the fourth criterion is irrelevant. He introduced some evidence; there are beings to whom the universe is rationally intelligible. He also introduced a hypothesis: 'There is a God, omnipotent, omniscient and perfectly free.' He checked the criteria:
  1. Being free, God will not be limited by irrational inclinations: He will act rationally. Being omniscient, God will have perfect moral knowledge. Being rational, God will wish to do what is good. Being omnipotent, God will do what is good. It is good to create other beings, in order to do them good. This involves creating a universe for them to live in. It is good to make that universe rationally intelligible to them. So God will do so.
  2. The world is a complicated place. It is unlikely to have come into existence by chance.
  3. Only one being, with 3 qualities, is hypothesized.
It is worth noting that the argument relies on the simplicity only of a hypothesis about God, whereas Dawkins' claim was that God Himself, rather than any hypothesis about Him, was necessarily complex.

Next in line was Colin Howson, an atheist philosopher. He had prepared a PowerPoint presentation beforehand. He reviewed Dawkins' argument, and rejected it on the grounds that there is no need to consider how complex God is to obtain a probability of His existing by chance; he claimed that this probability is automatically 0. Thankfully, he had anticipated the argument Swinburne presented, and he argued against it.

I'll modify the terms of the argument, and the order of presentation, (though hopefully not the ideas) to fit in more closely with the phraseology above. Howson explained that the three criteria introduced above can be made precise in the framework of Bayesian inference. The first two criteria correspond to features of the calculation, whilst the third criterion corresponds to the idea of assigning prior probabilities to hypotheses on the basis of simplicity. He then attacked the three criteria individually:
  1. On the basis of the hypothesis, we would expect the world to be a good place. But when we observe it, we find that it doesn't look that way. This is the problem of evil. Also, there are many conceivable universes containing beings which rationally comprehend them; so the probability of our universe, even given this explanation, is tiny.
  2. It is impossible in principle to assign a probability here; we don't know what the alternative explanations are. They certainly aren't limited to 'chance'. Scientists have had to be more imaginative than science fiction writers, to explain the real world. We must accept our poverty of imagination. We don't know the alternatives; we may not even be able to.
  3. We have observed that assigning prior probabilities on the basis of simplicity is a good idea for explaining things within the world; the history of science is full of examples. But it is the world itself, including the history of science, which is to be explained. So any recourse to that history would be circular. There doesn't seem to be a reason why simplicity is intrinsically more probable than complexity.
This approach also knocks down any attempt to give a similar objective justification for science. But there is no need for science to be justified by such an abstract philosophical foundation.
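Howson's remark that the three criteria become precise in Bayesian inference can be illustrated in a few lines. All the numbers below are invented purely for illustration, to show where each criterion enters the calculation; they are nobody's actual estimates.

```python
# Illustrative numbers only, chosen to show the structure of the
# calculation, not to represent any real assessment.
prior_h = 0.5                  # criterion 3: simplicity fixes the prior
p_e_given_h = 0.8              # criterion 1: evidence likely if H is true
p_e_given_not_h = 0.1          # criterion 2: evidence unlikely if H is false

# Bayes' theorem: P(H | E) = P(E | H) P(H) / P(E), where
# P(E) = P(E | H) P(H) + P(E | not H) P(not H).
posterior_h = (p_e_given_h * prior_h) / (
    p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
)
print(round(posterior_h, 3))  # 0.889
```

Howson's three attacks then correspond to doubting the three inputs: the problem of evil lowers the first likelihood, our ignorance of the alternatives blocks the second, and the circularity worry undermines the prior.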

At this point, the three interlocutors were each allowed to make a brief speech. Though they were in agreement that all three of Dawkins, Swinburne and Howson were wrong, they did not have time to adequately explain why. Indeed, the two main speakers did not take long to point out that the objections they had managed to articulate were simply missing the point. I've tried to phrase the above summary in such a way as to avoid similar misunderstandings, in order not to have to present them and pull them apart here.

The two main speakers were then left with 3 minutes each to deal with the points that had been raised. Swinburne mentioned that he could resolve the problem of evil, though not within 3 minutes. Howson expressed his doubts. That, apart from explaining what the interlocutors had misunderstood, was all they had time for.

Then there was a question time. I raised an objection to Howson's attack on the second criterion. We may not know what the possible explanations are, but we can surely put crude bounds on the number of explanations of any given simplicity. For example, an explanation that cannot be described in English in a space shorter than the Encyclopædia Britannica can hardly be called simple, and we can put a bound on the number of remaining explanations. It's possible that we can do this in such a way as to put a lower bound on the probability of the hypothesis of theism. His response, that a simple explanation may require a high-level language for its expression, ignores the fact that high-level languages may be introduced and used within English. The process is called giving definitions. Soon after this, I had to leave early to get to a bible study group. The speakers never had a chance to develop the discussion, but I think there are some more ideas to be explored here.
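The crude counting bound I have in mind can be made explicit. Over an alphabet of $k$ symbols there are at most $k + k^2 + \dots + k^L$ strings of length at most $L$, and hence at most that many explanations expressible in that space; a uniform prior over descriptions then gives each one probability at least the reciprocal of this count. The alphabet size below is an arbitrary stand-in for written English.

```python
# Upper bound on the number of distinct descriptions of length at most L
# over an alphabet of k symbols: k + k^2 + ... + k^L, by the geometric
# sum formula. The choice k = 27 (letters plus space) is illustrative.
def descriptions_up_to(k, L):
    return (k ** (L + 1) - k) // (k - 1)

print(descriptions_up_to(27, 3))  # 27 + 729 + 19683 = 20439
```

Of course, the interesting case has $L$ in the millions, where the bound is astronomical but still finite, which is all the argument needs.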

The argument about the first criterion can certainly be developed. Howson pointed out that the hypothesis of theism simply does not explain physical events as precisely as modern physics; the universe we see is so specific that it is still highly improbable under this hypothesis. Swinburne never explicitly claimed it explained anything more than the existence of beings capable of rationally understanding the universe: He just hinted at it.

Now that these hints have been laid to rest, the evidence being explained can be seen to be simpler than the proposed explanation. After all, the explanation includes the idea of a being who is capable of rationally understanding all things (that's a small part of omniscience). But the explanation also relies on the existence of absolute moral truth. Although the problem of evil can be resolved (I've outlined one attack in an earlier post), the difficulty of the resolution shows that this moral truth is not as simple as it might at first appear, and accordingly brings out some of the complexity of the proposed explanation.

As Howson pointed out, we don't really have justification for assigning prior probabilities on the basis of simplicity. Indeed, why should we be able to assign probabilities at all? The idea of probability makes sense in the context of tossing a coin. We have a reasonable understanding of coins, and of how they behave. This ultimately relies on some crude understanding of how the world works. It is on the basis of such an understanding that we can assign probabilities. So we can't assign probabilities to such understandings, except on the basis of better ones. We can't assign probabilities to ultimate understandings at all; it doesn't make sense.

For a test case, consider the probability that there is a physical world at all. We have masses of evidence, pushing this probability up to 1. But discounting this evidence, in the abstract, what is the probability? The question makes no sense. It stretches the notion of probability beyond its proper boundaries. In a similar way, the hypothesis of theism just doesn't have a probability in the abstract, apart from the evidence. Reasoning on the basis that it does is the shared flaw of the arguments given by Dawkins and Swinburne.

Wednesday 4 June 2008

Speaking of the truth...

One of the books I was given for Christmas was 'Think', by Simon Blackburn. This is a basic introduction to philosophy. It's one of the best books I've read. I wasn't bothered by the clear bias of the author, as it doesn't come from any attempt to coerce agreement. Instead, Blackburn seems to be motivated by the simple joy of exploring and sharing ideas. He explains, rather than demeans, ideas he disagrees with. The explanations are clear and captivating, and I was thoroughly drawn in.

To follow it up, I've been reading 'Truth: A guide', also by Blackburn. Once more, I was delighted by the clear exposition and fascinating range of ideas. I can't help but try to explain and extend some of them here. After introducing various understandings and rejections of truth, from absolute realism to postmodernism (which he managed to clear up some of my confusions about), Blackburn is reasonably explicit about where his allegiance lies.

First, he notes that to say that a statement is true is simply to assert the statement. As a result, he counsels us not to get caught up in elaborate negotiations about the nature of truth itself, but to be content, when we hear claims of truth, to understand the intention of the speaker, namely to assert the statement which is being claimed as true.

Of course, this does not resolve the battles over the nature of truth. To recognise that the question 'Why is that true?' can be simplified to 'Why?' is not to see how to answer the question itself in any circumstance. The circumstance is important; there is no reason why we should not give different answers, or even different kinds of answer, to this question in different circumstances. The issue has been refocused as one of justification.

Blackburn then proceeds to look at various kinds of answer that can be given, and various reasons why we might not want to answer the question at all, or even believe that to be a sensible thing to attempt. Though the arguments and ideas he outlines are interesting, and provide historical perspective, you should look in the book to find out what they are. Blackburn takes inspiration from them, but it is what he goes on to say that I'm interested in.

Blackburn zooms in on a kind of claim which does seem to be justified in a real way: by being a claim about the world, to which it corresponds. That is, he looks at the claims of science. Why do we say that these claims are about a real world? We do so in order to account for the quality of the success of science. We find that people who use a map to find water are more successful than those who use a dowsing rod. We explain this by postulating a real world, to which the map corresponds. Here's how Blackburn puts it:
Suppose my practice is successful: my space rockets land where and when they should. What is the best explanation of this success? I design my rockets on the basis that the solar system has the sun at its centre, and it does. Why is our medicine successful? Because we predicted that the viruses would respond in such-and-such a way, and they do. In saying these things we are not at all stepping outside our own skins and essaying the mythical transcendental comparison. We are simply repeating science's own explanation of events. There is no better one - unless there is a better scientific rival. Once we believe that the best explanation of geographical and optical data is that the world is round, we also believe that the best explanation of our successes as we proceed upon this hypothesis is that the world is round. It is not that there was a further set of data about science (its success) that required something like an independent, sideways explanation. It is just that the very regularities in the phenomena that required the theory in the first place should now be looked on as including any success we have in using the theory. Shadows fall at night because of the revolution of the earth, and success awaits those who expect shadows to fall at night because of the revolution of the earth. Science explains the success of science.
Blackburn notes an obvious objection: There are alternative explanations of the success of science. For example, 'Any scientific theory is born into a life of fierce competition, a jungle red in tooth and claw. Only the successful theories survive.' But these explanations are not mutually exclusive; they are complementary. It may well be the case that a scientific theory survives precisely because it is a description of the way that the world is. Indeed, a glance at the history of science suggests that this is the best explanation of what is happening.

Another objection is that this only allows us to accept the empirical adequacy, or the accuracy, of science, not to go so far as to call it true. Blackburn argues that this is a false distinction. He carefully examines several ways we might try to separate empirical adequacy from truth: All are found wanting.

Blackburn says 'We are simply repeating science's own explanation of events,' and he is right. But he appears to conclude that therefore the actual giving of this explanation is a task for scientists, rather than philosophers; that philosophy has nothing helpful to say here. This is because the account he gives is a little oversimplified.

Scientists explain the world by making models of it. Nowadays, these models are often highly abstract and mathematical, but they need not be so abstract nor so detailed. For example, a gas is modelled as a large number of atoms, all jostling around and banging into one another. This model was introduced to explain the behaviour of the gases that we observe, and it does this extremely well. The model talks about individual atoms. What are these atoms? It makes no sense to try to answer this question from outside the model. When we discuss atoms, we are using the model, not just mentioning it. For example, to say that the atoms are aspects of the observations which led us to introduce the model would be to stretch language beyond breaking point. It would be to mix levels, and to confuse categories. Hopefully the mistake is clear in this case, but when there is a more subtle interplay of models it is an easy one to make.

Now let's return to what Blackburn says above: 'Shadows fall at night because of the revolution of the earth, and success awaits those who expect shadows to fall at night because of the revolution of the earth.' It is easy to make sense of the first part of this sentence. We have a model of the world as a slowly revolving globe with light coming from a particular direction. The globe has bumps on it, and within this model we can say things like 'the rotation of the earth causes these bumps to cast shadows at night.' Let's call this rather useful model 'model I'.

Model I is not sufficient for the second part of the sentence. There is nothing referred to by model I as 'those who expect shadows to fall at night.' We must consider a more complicated model, model II. Model II is like model I, except that it includes some observers. These observers have expectations, which may be successful or unsuccessful. Blackburn switches from model I to model II in midsentence.

This may appear to be pedantry at first, but awareness of this extra layer of complexity allows us to explain how this might tie in with some previous philosophical ideas. Let's return to the illustration of the theory of gases, based on a model involving atoms. As I'll be introducing more models in a minute, I'll christen this one model III for the sake of clarity. The most obvious model to introduce next, model IV, is one in which not only is there a gas, composed of atoms in the same way as before, but there are also observers, who make predictions on the basis of models. Some of these observers use model III, and model IV explains their success.

Suppose now that we consider the statement 'atoms are real'. It isn't a statement we can make within model III: Model III doesn't have a notion of 'reality'. Nor is it a statement about model III. Instead, it's a statement we can make within model IV. Model IV is something we might consider in order to explain the behaviour of scientists who study gases. As a part of this explanation, we may wish to say 'The scientist successfully predicted the behaviour of the gas because he used the atomic theory, and the gas really is made of atoms'. It is in a context like that of model II or model IV that Blackburn's explanation 'I design my rockets on the basis that the solar system has the sun at its centre, and it does,' makes sense.

You may smell a rat at this point. If the idea of reality is introduced to explain the behaviour of scientists, isn't that saying that, for example, the reality of atoms consists in nothing more than the success of scientists who use models like model III? No. To say so would be to make a category error like that described five paragraphs ago. In saying 'Atoms are real' we use model IV. But the success of certain scientists is what model IV was to explain; it is not something within the model itself.

The other philosophical mistake which is clarified by this approach is Dualism. After all, a crude model including observers would have the observers being genuinely different to that which is observed. But, remarkably, this is not a necessary feature of such models. Indeed careful study of the observers we are most familiar with, ourselves, suggests models in which the observers are of precisely the same stuff, and the observation is itself a process obeying the same laws as that which is observed.

Now we can return to the claim that we can only ever get at empirical adequacy. We could, for example, consider a complex model in which some observers study a real world and have theories about it which are accurate but wrong. For example, gravity may really behave according to the theory of relativity, but the scientists could be working with Newtonian mechanics. As long as they don't measure too precisely, the scientists will make successful predictions. Their theory is empirically adequate but not true.

This is just the kind of situation most scientists think we are in at the moment. All the evidence suggests that quantum electrodynamics is both extremely accurate and false. It's just a really good approximation. However, because it is a good approximation, coarse-grained statements it makes will be true. The scientists in the model above may be working with a false theory, Newtonian mechanics. But the broad-brush statements they make, like 'the gravity of the moon causes tides twice every day', are nevertheless true of reality even in this model. That is just what it means to say that they are working with a good approximation.

So we can accept that our best and most detailed scientific theories are no more than unbelievably good approximations, and still say that the cruder claims of science, such as 'stuff is made of atoms' are true. It is within this kind of model, of a real world which we have a good but imperfect approximation to the workings of, that we can say things like 'chairs are real'. Luckily, this kind of model also gives a great explanation of what we observe.

The key feature of these expanded models is that they are modelling us, the observers. In particular, they model us as somehow making models of the world. One key tool we use is language. So the model must include some account of how we use language to talk about the world. For this reason, though possibly not aware of it in these terms, philosophers have made a careful study of how we use language, and how our linguistic behaviour relates to the meanings we intend. This study of language-games gives philosophers a head start in producing good higher-order models. So philosophers need not merely recount science's own explanation: They can add their insights to it.

One example is the study of truth and falsity. Blackburn appears sympathetic to the quietist position:
We should leave truth alone. We should not enter the fields of meta-theory or philosophical reflection, to try to say something more, to gain a 'conception' of truth, as both absolutists and relativists have been presented as doing.
This is fair enough, to a degree. But if we are to make models like those above, which include notions of truth, then good models will have some conception of truth; of how it works. For example, they ought to include some notion of logic; of when it is that we can deduce some statements from others. Again, philosophers have made progress here. A lot of their work has fed into detailed mathematical models of deduction.

There has been a remarkable development in recent years. Aspects of the most detailed models we have of the world are being combined with the most intricate models of truth mathematicians have come up with. Both the physics and the mathematics have been slightly tweaked. The claim is that, just as notions like position break down at a small scale, and are only useful as approximations, the same is true of notions like truth and reality. It is not at all clear whether this exciting attack can be made coherent, but it should be worth watching to see.

I have talked in detail about physical truth; truth about the world. What about moral truth? Do we have a similar model, of moral statements corresponding in some sense to reality? Well, such models do exist, but their acceptance is on the wane. There has been a lot of work in the last century on how evolutionary processes could produce beings who speak and feel as we do about morality, in the absence of a moral reality to which they refer. These models are still crude, but they do seem to have explanatory force.

Leaving aside the question of whether such models are accurate, it is worth drawing a consequence from their nature. It is from within such models that we might make the statement 'Morality is relative, and not absolute'. Accordingly, this statement is about the world, about the way that things are. It is not a moral statement, and so it does not have direct moral consequences. In particular, from 'Morality is relative', we cannot deduce things like 'Anything goes'. 'Anything goes' is a moral statement; it does not form a part of such models of moral behaviour.

It is not necessary to use these ideas to see that moral relativism has no moral consequences. Indeed, I'll conclude with Blackburn's able exposition of this theme:
Suppose I believe that foxhunting is cruel, and should be banned. And then I come across someone (Genghis, let us call him) who holds that it is not cruel, and should be allowed. We dispute, and perhaps neither of us can convince the other. Suppose now a relativist (Rosie) comes in, and mocks our conversation. 'You absolutists,' she says, 'always banging on as if there is just one truth. What you don't realise is that there is a plurality of truths. It's true for you that foxhunting should be banned - but don't forget that it's true for Genghis that it should not.'

How does Rosie's contribution help? Indeed, what does it mean? 'It's true for me that hunting should be banned' just means that I believe that hunting should be banned. And the opposite thing said about Genghis just means that he believes the opposite. But we already knew that: that's why we are in disagreement!

Tuesday 3 June 2008

Another splinter of mathematics.

I'm jealous of the part IB maths students, who are taking their exams at the moment. Part of the reason is that one of them told me about a question from today's exam, and it's such a neat bit of maths that I have to share it.

The question is about a village idiot, who has to paint a fence. The fence is made up of a large number of slats in a circular arrangement. The idiot begins with a particular slat; let us call it his favourite. After painting any slat, he paints one of the two adjacent slats. He decides which of these two slats to paint next at random, for example by tossing a coin. He does not care whether he has painted a slat before: he follows the decision of the coin and, if necessary, adds multiple coats to some slats. Eventually, of course, he will have painted all of the slats; some particular slat will have the distinction of being painted last. Before he starts painting, can we tell which slat this is likely to be? That is, can we work out, for any given slat, the probability that it will be the last one to be painted?

We can, and the answer is striking and counterintuitive. Obviously, the chance that the idiot's favourite slat is the last to be painted is 0, since it is painted first. Let's fix our attention on some other slat, which I'll call 'the slat in question'. At some point, as the fool meanders about, he will paint one of the slats adjacent to the slat in question. The first time this happens, he will be painting the slat on one side without having begun work on the slat on the other side. So, at this point, the chance that the slat in question is the last to be painted is the chance that the idiot will, in his meanderings around the circle, reach the slat on the other side before he reaches the slat in question. In order to do so, he must paint all of the other slats.

Let us call this chance p. So when, as he must, the fool eventually paints one of the adjacent slats, we can say with certainty that the chance we are interested in is p. But this number p doesn't depend at all upon how the idiot moves about before the point when he paints a neighbouring slat. So we may as well say, from the start, that the probability of the fool painting the slat in question last is p.

Now it is worth noting that nothing in the above argument, not even the value of p, depends on which slat we were considering, except that it cannot be the idiot's favourite slat. So the probability of any of the other slats being painted last is the same: p. Say there are n slats in total. Then there are n-1 slats that we might have considered, each of which the fool paints last with probability p. Exactly one slat is painted last, so these events are mutually exclusive, and the probability that one of them occurs is (n-1)p. But it is certain that some slat is painted last, so (n-1)p = 1. That is, p = 1/(n-1).
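For the sceptical, the result is easy to check by simulation. Here is a quick sketch (the function name and the particular choices of n and trial count are my own, not from the exam question): it has the idiot paint a circular fence over and over, tallies which slat ends up last, and each non-favourite slat should come out last roughly 1/(n-1) of the time.

```python
import random

def last_painted_slat(n, rng):
    """Simulate the idiot painting a circular fence of n slats,
    starting at slat 0 (his favourite); return the index of the
    slat that receives its first coat of paint last."""
    painted = {0}
    pos = 0
    last = 0
    while len(painted) < n:
        pos = (pos + rng.choice((-1, 1))) % n  # fair coin: step either way
        if pos not in painted:
            painted.add(pos)
            last = pos
    return last

rng = random.Random(0)
n, trials = 8, 20000
counts = [0] * n
for _ in range(trials):
    counts[last_painted_slat(n, rng)] += 1

# Each of the n-1 non-favourite slats should be last about
# 1/(n-1) of the time; the favourite is never last.
freqs = [c / trials for c in counts]
print(freqs[0])  # → 0.0
```

With n = 8 each observed frequency should sit close to 1/7 ≈ 0.143, which is what the tally shows.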

This rather neat argument illustrates the power of the ideas in the Markov chains course. You may have thought I was doing something a little fishy in the fourth paragraph above. The main point of the course is that what I did was not fishy at all, and can be made completely rigorous. Finally, here's a question for those who know their stuff: What happens if the fool always moves clockwise with some probability q, and anticlockwise with probability 1-q? I'm afraid you'll need to do a little calculation to get at the answer.
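If you want a numerical check on your calculation for the biased version, the simulation needs only a small change. This sketch (again, names and parameter values are my own invention) takes the clockwise probability q as a parameter; I'll leave the frequencies it produces for you to compare against your answer.

```python
import random

def last_painted_biased(n, q, rng):
    """As before, but the fool moves clockwise (+1) with
    probability q and anticlockwise with probability 1 - q."""
    painted = {0}
    pos = 0
    last = 0
    while len(painted) < n:
        step = 1 if rng.random() < q else -1  # biased coin
        pos = (pos + step) % n
        if pos not in painted:
            painted.add(pos)
            last = pos
    return last

rng = random.Random(1)
n, q, trials = 6, 0.7, 20000
counts = [0] * n
for _ in range(trials):
    counts[last_painted_biased(n, q, rng)] += 1

# counts[k] / trials estimates the probability that the slat
# k steps clockwise from the favourite is painted last.
```

Notice that the frequencies are no longer equal: the symmetry that made every non-favourite slat interchangeable in the argument above is exactly what the bias destroys.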