Saturday, 22 May 2010

Once upon a time...

As I've mentioned here, I had a very sudden loss of faith in the summer before I began my PhD. One of the most frustrating things about this loss of faith at the time was that I was completely unable to give a good reason for it. I certainly hadn't discovered any logical inconsistency within Christianity, nor had I suffered at the hands of my fellow Christians. I wasn't going through a lull in my prayer life (though I had been through one, as part of a wider crisis, a few months earlier), and I was still studying the Bible regularly.

Since then, I've put together a `just so' story, which I think gives some explanation of how it happened. However, you should bear in mind that this story was put together over a year after the fact, and it may well be off the mark.

Before I get into the story proper, I'm afraid I need to digress to explain some technical details to do with apologetics. The word `apologetics' refers to the practices employed by Christians in defence of their faith, especially reasoning and argumentation. The kind of apologetics that matters for this story is called presuppositionalism. It is based on the idea that, whilst we justify some of the things we believe in terms of others, we all have some ultimate presuppositions on which our worldviews are based and which do not rest on anything else for their justification. A key part of the presuppositionalist strategy, at least as I encountered it, is to conclude from this oversimplified picture that it is OK to take, for instance, the existence of God and the truth of the Bible as such basic presuppositions. In other words, on this view there is no need to argue for these beliefs - they can be taken for granted. There's much more to presuppositionalism than that, but what I've said is enough to let me press on with the story.

When I was at university, a charismatic member of my church was a presuppositionalist, and I encountered this style of apologetics in conversation with him. It made a lot of sense to me (at least, the parts I explained in the last paragraph did) - after all, if God is foundational for ontology it makes sense for belief in God to be foundational for epistemology. So I became a presuppositionalist.

This meant that, when I encountered a refutation of an argument for the existence of God, I didn't have to worry. After all, my faith wasn't based on such flimsy things as arguments. So I could examine the refutation and, if it made sense, accept it. Here are a few typical examples:
  • Argument: the unity and perfection of the Bible indicate that it must have had a divine source.
    Refutation: examination of the text shows that the various authors had very different, sometimes conflicting and often questionable projects.
  • Argument: God answers prayer.
    Refutation: repeated scientific testing has produced no evidence that prayer has any measurable illness-reducing effect.
  • Argument: the fine tuning of the universe is highly improbable unless there is a God.
    Refutation: the notion of probability does not apply to things like universes in the same way as to things like coins.
  • Argument: God is necessary to explain morality.
    Refutation: it is plausible that there may be evolutionary explanations (though not justifications) of morality.
I could extend this list ad nauseam.

And lo! It came to pass that my faith was completely unsupported. I did not know of any compelling argument for the existence of God, or any of the other supernatural claims of Christianity. Of course, I didn't mind that, because I didn't have any compelling arguments against, either. And I was happy to presuppose the truth of Christianity.

How did this affect my beliefs? Well, it began to disconnect my religious beliefs from my day-to-day expectations about the world. Since I was not aware of any phenomena in normal life which science was unable, in principle, to explain (so that God would be needed), I made no allowance for such phenomena. Thus, for example, if someone was ill and I prayed for them, this did not increase my expectation that they would recover. This did not devalue prayer in my eyes.

What this meant was that my internal model of the world was not what I thought it was. For the purposes of normal living I was making use of a perfectly workable worldview which did not rely on the presupposition of the existence of God at all. Although this presupposition was present, and I believed it to be fundamental, all that was being supported by it was the ornate cathedral of religious doctrine which I had been slowly building over the years.

One day, there was a switch in perspective - suddenly the religious doctrine no longer counted for me as belief. It was all still conceptually present; I am still able to recall and understand much of it today. But the concepts involved no longer served as beliefs for me. The cut was relatively clean, and what was left behind was the nontheistic worldview which had already been serving me (as a substructure of my beliefs) for some time.

That's my story, and I'm sticking to it.

Tuesday, 18 May 2010

Self deception

What sets humans apart? One common answer is that it is our self-awareness; our sense of self. Whilst there isn't a qualitative division, this certainly seems to be something that is far more developed in humans than in other animals. So a natural question to ask is how it came to be so highly developed. There's a relatively simple story which gives one plausible answer. It may well not be right, but if it is, it is rather humiliating for us. The feature which sets us apart may have been born in deception.

Here's the story:

Proto-humans began to develop a community structure which relied on intelligent communication. Some proto-humans began to build mental models of other proto-humans. This helped them to judge and more accurately predict the behaviour of others. They survived better, and this became widespread.

Some proto-humans began to meta-model - to model the modelling that was occurring in the minds of others. They survived better, and this became widespread.

In some, this meta-modelling was especially detailed with respect to the models that others had of them. This was for at least two reasons. First, it was easy for their brains to gather data about their own state and the ways it was making them behave. Second, it was these meta-models which yielded the most significant information for those doing the modelling. They survived better, and this became widespread.

Finally, some proto-humans began to make use of these meta-models in more sophisticated ways; to modify their own behaviour so that they were harder to second-guess, and even to give a misleading impression to others. In order to do this, their mental self-models became extremely detailed and detached from any specific model of the minds of others. They survived better, and this became widespread.

Does this story have a moral? If so, it isn't that deception is natural for humans and therefore morally fine. The question of whether a thing is natural (in the sense of emerging in an understandable way from the normal running of nature) is independent of the question of whether it is good. Instead, if this story is true, the moral is that we shouldn't trust our sense of self quite so much. When we rotate our inner eyeballs to look back into ourselves, it may be that the image we see is one that was born in deceit.

Monday, 1 March 2010

Idolatry and the speaking theist

In a couple of previous posts I discussed a potential reinvigoration of religious language, focusing in particular on the use of the word God. I approached this usage from the perspective of the nonbeliever, and tried to answer the question of why it might be reasonable to retain religious language despite rejecting supernatural ontology. But I reckon that the same use of the language of God would be helpful for the believer, shattering the box in which (so they think) He must be confined. It is this case which I'll be presenting here.

The main motive which I'll present for believers in God to prune the ways in which they speak of Him will be avoidance of idolatry. Because the charge of idolatry carries such heft, it has been constantly reappropriated and reapplied to condemn a wide variety of activities, from the worship of wooden statues to greed. There is a particular thread of meaning, however, which is more directly relevant to the use of religious language. A paradigmatic example is found in Exodus 32:
When the people saw that Moses was so long in coming down from the mountain, they gathered around Aaron and said, "Come, make us gods who will go before us. As for this fellow Moses who brought us up out of Egypt, we don't know what has happened to him."

Aaron answered them, "Take off the gold earrings that your wives, your sons and your daughters are wearing, and bring them to me." So all the people took off their earrings and brought them to Aaron. He took what they handed him and made it into an idol cast in the shape of a calf, fashioning it with a tool. Then they said, "These are your gods, O Israel, who brought you up out of Egypt."

When Aaron saw this, he built an altar in front of the calf and announced, "Tomorrow there will be a festival to the LORD." So the next day the people rose early and sacrificed burnt offerings and presented fellowship offerings. Afterward they sat down to eat and drink and got up to indulge in revelry.
Notice that the festival is to the LORD, not to some other god or gods in competition with Him. The golden statue isn't a rival of the LORD: it is a misrepresentation. Misrepresenting the LORD is a very serious offence, as the sequel shows:
When Moses approached the camp and saw the calf and the dancing, his anger burned and he threw the tablets out of his hands, breaking them to pieces at the foot of the mountain. And he took the calf they had made and burned it in the fire; then he ground it to powder, scattered it on the water and made the Israelites drink it.

He said to Aaron, "What did these people do to you, that you led them into such great sin?"

...

Moses saw that the people were running wild and that Aaron had let them get out of control and so become a laughingstock to their enemies. So he stood at the entrance to the camp and said, "Whoever is for the LORD, come to me." And all the Levites rallied to him.

Then he said to them, "This is what the LORD, the God of Israel, says: 'Each man strap a sword to his side. Go back and forth through the camp from one end to the other, each killing his brother and friend and neighbor.' " The Levites did as Moses commanded, and that day about three thousand of the people died. Then Moses said, "You have been set apart to the LORD today, for you were against your own sons and brothers, and he has blessed you this day."
It isn't just the golden calf that Moses objects to, but the practice of the people, which is misrepresenting the LORD before His enemies.

So what kinds of representation are OK? Deuteronomy 4 is pretty clear:
You saw no form of any kind the day the LORD spoke to you at Horeb out of the fire. Therefore watch yourselves very carefully, so that you do not become corrupt and make for yourselves an idol, an image of any shape, whether formed like a man or a woman, or like any animal on earth or any bird that flies in the air, or like any creature that moves along the ground or any fish in the waters below. And when you look up to the sky and see the sun, the moon and the stars—all the heavenly array—do not be enticed into bowing down to them and worshiping things the LORD your God has apportioned to all the nations under heaven. But as for you, the LORD took you and brought you out of the iron-smelting furnace, out of Egypt, to be the people of his inheritance, as you now are.
No representations of God are allowed. Why not? Because any representation is, and must be, a misrepresentation; a reduction of God to something common, to a part of His creation. This idea has been taken very seriously in some kinds of Judaism, so that even the name of the LORD is not spoken. It is also famously the source of the abstract constellations of pattern so prevalent in early Islamic art. It has been taken less seriously in Christianity, a point to which I shall return later.

Working out the full implications of this prohibition leads to far more than a condemnation of some kinds of statuary. I'll argue that, taken seriously, its range is so wide that it undermines a great deal of religion, including its own frame of reference. This ironic reflexive undercutting begins to show in the instructions Moses gives the Israelites for when they enter the promised land, in Deuteronomy 27:
When you have crossed the Jordan into the land the LORD your God is giving you, set up some large stones and coat them with plaster. Write on them all the words of this law when you have crossed over to enter the land the LORD your God is giving you, a land flowing with milk and honey, just as the LORD, the God of your fathers, promised you. And when you have crossed the Jordan, set up these stones on Mount Ebal, as I command you today, and coat them with plaster. Build there an altar to the LORD your God, an altar of stones. Do not use any iron tool upon them. Build the altar of the LORD your God with fieldstones and offer burnt offerings on it to the LORD your God. Sacrifice fellowship offerings there, eating them and rejoicing in the presence of the LORD your God. And you shall write very clearly all the words of this law on these stones you have set up.
There are two lots of stones to be set up. First there are the stones for the altar, in which great care is taken to avoid even attempting to represent God. These stones are used just as they are, and not marked in any way. But opposite these stones stand another group, which are thick with inscriptions. These inscriptions represent God, despite their formulation in language, and despite their protestations against any such representation. They stand as a testimony against themselves.

The marks on the stones cannot be God, despite their aroma of (Platonic) heaven. They can only represent Him. So they can only misrepresent Him. Language is no less an attempt to circumscribe God than pictures, and no less a blasphemous failure. The iterability on which language relies deprives it of the power to capture the unique; the transcendent is debased and rendered translatable for `all the nations under heaven'. The quotation from Deuteronomy 4 above slyly undercuts its own exclusive rhetoric by witnessing the possibility of a comprehensible translation for us heathen English.

The idea that misrepresentation of God with language, even with the best of intentions, is seriously problematic is eloquently presented by Job. Faced with a mystery about God, Job's friends do their best, given their limited understanding, to present a picture of God which sets Him in a good light. Job's response is damning:
Will you speak wickedly on God's behalf?
Will you speak deceitfully for him?

Will you show him partiality?
Will you argue the case for God?

Would it turn out well if he examined you?
Could you deceive him as you might deceive men?

He would surely rebuke you
if you secretly showed partiality.

Would not his splendor terrify you?
Would not the dread of him fall on you?
I'm often reminded of these lines when I encounter Young Earth Creationism; adherents of this movement have repeatedly spoken deceitfully for God. That they are misrepresenting God is reasonably clear, given our current scientific understanding. But not all representations are so clearly misrepresentations at the time they are made. This is beautifully illustrated by a later passage from Job in which, apparently, God is speaking:
Have you entered the storehouses of the snow
or seen the storehouses of the hail,

which I reserve for times of trouble,
for days of war and battle?

What is the way to the place where the lightning is dispersed,
or the place where the east winds are scattered over the earth?

Who cuts a channel for the torrents of rain,
and a path for the thunderstorm,

to water a land where no man lives,
a desert with no one in it,

to satisfy a desolate wasteland
and make it sprout with grass?
This representation of how God acts to bring about the weather made perfect sense at the time. It was reasonable, for example, to suppose that hail was stored somewhere by God, rather than magically appearing in the sky when needed. But given our current understanding of the weather, this passage is clearly full of howlers, and these howlers lie at the level of the basic assumptions used to form the picture of God. Our own current state of knowledge is very likely to also contain such howlers, so relying on it to represent God puts us at a serious risk of idolatry. These mistakes were to do with scientific knowledge, which is (at any given time) relatively clear-cut. Most of the language used to picture God is not scientific but theological, and thus far more seriously disputed. Therefore it is impossible for more than just a small fraction of theological discussion to avoid the serious error of idolatry; indeed, it follows from the argument I am making that all theology that attempts to represent God, however reasonable it might seem at the time, falls into this trap.

However, there is an even deeper problem here. The very use of language itself implicitly provides a picture of that which is spoken about. For example, the use of a noun suggests that there is a particular thing which can be referred to by means of that noun, and which behaves in a regular manner reflected by the grammar of nouns as we use them in our language. I've commented on how this picture is reflected in the way we use words like `exists', for example, here. Some of the key blunders in the history of science involved the use of nouns where they would turn out to be inappropriate, and the implicit pictures of the world that go along with that use; think of the ether or phlogiston, for example. Something similar may be happening with current discussions of dark matter and dark energy.

Of course, our own ways of thinking are so closely tied in to our language that it is difficult to see how the implicit pictures which the grammar of that language presents could fail. However, as I've suggested here, there are good reasons to think that they do break down even when we seek to speak of perfectly ordinary matter on a very small scale. These pictures serve us well when we want to talk about things on our own scale, but there is no reason to suppose that they will work equally well for addressing all domains where we wish to have knowledge. In particular, to return to the main theme of this post, there is no reason to suppose that God will fit our grammatical boxes. To use the word God simply as a noun is to implicitly represent, and so misrepresent, Him. This, too, is insidious idolatry.

How, then, is this idolatry to be avoided? I mentioned earlier one attempt - namely the avoidance of the name of God in some flavours of Judaism. However, this practice can only stand as a reminder of, rather than an escape from, the blasphemy of language, for some equally arbitrary string of symbols (Adonai, G-d) can always be hauled in to continue the desecration. Alternatively, we could attempt to cut out all God-talk completely. I've argued against this practice elsewhere. For now, it suffices to note that God-talk began to be used in some particular contexts and that at present it seems to be the only language we have for addressing those contexts. The importance of maintaining the space into which this language began to point was argued in a recent address at the unitarian church here in Cambridge.

Indeed, thinking (at least for ourselves) about those contexts in which God-talk seems most appropriate suggests a possible way to use this language without falling into the traps outlined above. To take a simplified case, recall a time when you were struck dumb by the expanse of the stars on a cloudless night. This experience of awe includes a sense of something encountered beyond ourselves, even beyond the physical world. In such a case it is natural to say you have encountered God: to use the word `God' as a placeholder indicating the direction in which this sense points. But it is important at this point not to be drawn into the supposition that this word, `God', can now be treated as any other noun; that we may, for example, sensibly ask whether two distinct Gods were encountered on two different such occasions or whether it was the same God both times.

Indeed, if what I have said above is right, we should not even suppose that to ask at this point a question like `Does God exist?' is a sensible use of language. Since it may have weight for some theists, I should flag up that I think this illustrates a proper response to strong atheism. It would be tempting to say that, prior to all their arguments, strong atheists have made a mistake in supposing that the question `Does God exist?' may sensibly be asked. But in fact, of course, they have made no such mistake. For the question was already appropriated and answered in the affirmative by theists, and the standard arguments of strong atheists are quite properly directed against this idolatry.

Now that I have laid out how the use of religious language in a nonmetaphysical manner may be approached from the side of more traditional theism, it is worth taking a little time to consider some objections to this approach.

The first kind of objection is the suggestion that the approach I have sketched must simply be switching one idol for another. For example, since the characteristic example I mentioned above involves the internal experience of a single person, perhaps all I am doing is substituting the self and personal experience as a new idol in place of God. But this can't be right; like the simplified models in physics textbooks, I chose the example above for its simplicity and clarity, but not for its typicality. Typically, religious use of language will emerge from the practice of a community, not just a single person.

Is it the community which is being substituted for God, then? This doesn't seem right either. After all, it is not the community which is being discussed when religious language is used. That language is directed away from ourselves, and towards the divine (indeed, it is in order to gesture in this direction that we are most in need of religious language). Faced with the infirmity of our language in the face of God, there is indeed a danger that we will seek firmness elsewhere, but it is not inevitable: in any case there is no excuse for seeking that firmness in misrepresentative pictures.

The next kind of objection runs a little deeper. In order to lay out my initial account of idolatry, I had to speak of God using religious language in a traditional manner. But this is exactly the use of language which I later condemned as blasphemous. So, just as I earlier suggested that the book of Deuteronomy undermines itself, so does my own argument. In order to make the argument, I must stand on the very foundations I am undercutting.

This objection does not remove the force of my argument, but it does show that it can only serve as an internal critique of a special kind: as a kind of deconstruction. For I have sought to show that traditional religious accounts conceal within themselves the seeds of their own condemnation. Once this is made explicit, I claim, the structure implodes, taking the argument with it. However, this very movement serves to gesture in the direction of the holy, and of a more tentative way of addressing God.

A third objection is that what I have said is an overblown attack on sincerely held beliefs, which will serve only as an excuse for violence and condemnation. Because of the extreme force of the language of idolatry, the objection runs, it should be used with more caution. It has been used to excuse murder in the past. First of all, it is worth noting that even if this objection were accurate it would not be a reason to reject my argument. That a claim has been used in the name of violence does not make it false. We can't deny the truth of nuclear physics just because of the horrific weapons it underpins.

However, in fact this objection is not even accurate; all its force is drawn away by the second objection. After all, in the moment in which we are drawn to condemn traditional religious language as idolatrous, even before we are able to strap on our swords in the name of divine retribution, that condemnation undercuts itself, together with the language, and falls away, having removed only our own inclination to speak of God in a certain way, and left us with no resources to condemn others.

We might try to save the objection by pointing out that even if the argument is internal it opens a possible charge of hypocrisy. This, too, is a serious enough charge that it should be used with caution. But even this revised objection will not stick. For we cannot say, as an outer criticism, that those using religious language are committing idolatry even on their own terms. For this would involve accepting those terms as sensible, if false. But such an acceptance would be a misuse of religious language of precisely the kind I have been arguing against. Even this attempt at violence undercuts itself. The only objection to be made from the outside is that we cannot make full sense of the traditional religious language without condemning ourselves. This is hardly a vicious allegation.

There is a final objection, which I suspect is likely to be confined to a Christian approach to theism. The objection is that, just as God somehow, mysteriously, became a man (who could, no doubt, be modelled in bronze), so too He is able to lift our language to the level of picturing Him, thus humbling Himself to the point of being captured in human speech. On this account, when theists talk about God, they are saved from the dangers I have outlined above by His own divine action. It is, I think, thoughts of this kind which have made Christians less coy about representing God than Jews and Muslims.

This objection relies on perpetual miracles to correct the potential idolatry of theists all over the world. These miracles are of a rather odd character; they cannot be simple corrections which smooth over inaccuracies in the meaning of speech which is approximately right. The very idea of approximation is tied to our ways of picturing things with words, and so any approximation to what we say would still be blasphemy if attributed to God. Instead, God must be making a radical break in meaning at each point, and causing theists' statements to mean something wholly other than their usual meaning and inexpressible in language.

There is a certain arbitrariness here. Why does God fix up images of Himself made in language, but not graven in metal? Or why not mysteriously cause discussion of the weather to secretly mean wonderful things about Him, whilst reducing the meanings of potentially blasphemous statements to pleasantries about the weather? Of course, part of the problem here is that the language of meaning is overstretched, and that to follow the pictures it presents us with can be problematic in the same ways as doing the same with religious language. This idolatry of meaning is a displacement of the original idolatry and does not resolve it.

Nevertheless, let us suppose for the sake of argument that there is some sense to this superstitious hope that God will sanctify certain kinds of speech - He is supposed to move in mysterious ways, after all. It would follow that the usual conventions we follow based on our normal understanding of meaning would not apply in such cases. For example, we normally try not to affirm both a sentence and its negation together. But there is no reason for this convention if the meaning of the sentence is not tied to the form of words used. In particular, it is inappropriate for theists to deny the claims of others (such as `God has no Son') on the grounds that they contradict other claims which they affirm (such as `God has a Son').

In a similar way, we usually count utterances as knowledge when they are connected to their content through an appropriate causal chain including the mind of the utterer; it is not at all clear that religious utterances could count as knowledge under the `perpetual miracles' account. In short, even if there is no way to rule out this objection, its consequences for the treatment of religious language fall far short of what would be needed to support the usual practice.

It is this standard practice, which unthinkingly supposes that the implicit linguistic pictures evolved to capture the material world can be directly transplanted onto descriptions of God, which I have been objecting to. I suggest that instead it would be proper to limit our talk about God to the point where we find ourselves unable to express the superstitions which are such a common accompaniment to engagement with Him.

Saturday, 9 January 2010

Conceptual concretions

Rocks exist, and so do numbers. But there's something rather more concrete about the existence of rocks. At a basic level, we might explain this by saying that we can reach out and touch rocks, or bang them together; we can interact with them in a physical way. We can't do that with numbers. But there's something a little more subtle going on here; after all, I reckon that rocks in Melbourne exist concretely, even though I've never bothered to go there and juggle with them. Maybe I could do that. But the universe is full of rocks, and there's a limit to how many light years I can run before I get out of breath. My inability to visit them doesn't diminish the concreteness of their existence.

Well, perhaps what's going on is that in principle if I was in the vicinity I could heft those rocks. This comes closer to the truth; it shows that our understanding of concrete existence depends somehow on our understanding of how, in principle, the universe might turn out to be. That is, it depends on the structure of our understanding of how the universe is patterned: Our rudimentary physics, if you will. To help us navigate the day-to-day world, we all have built-in models of how the stuff we encounter might be expected to behave. Of course, our modelling kit is rather more sophisticated: It deals with the raw feeds from the nerves leading to our brains and it's only after some serious processing that this raw data is made to fit models involving stuff behaving in various ways; what we are consciously aware of is always already neatly chunked like this. The models we use fit nicely with the language we use; some bits (the stuff) correspond to nouns and other bits (the behaviour) to verbs.

It is a little clearer what is going on if we look at the models we are more aware of; the ones we consciously make for ourselves. Nowadays, especially in the sciences, these are often expressed in terms of mathematics. Usually, there is some mathematical structure, in which we can perform computations, and which corresponds closely with the world, so that we can speak of those computations as being somehow about the world. More precisely, the mathematics is set up in such a way that the statements and equations it produces correspond in a sensible way with statements we might make about the world, and the rules for rearranging those statements and equations do a pretty good job of preserving truth. Under this correspondence, some bits of the mathematics correspond to nouns, some to verbs, and so on. It is the bits which correspond to nouns which are taken to `concretely exist' with respect to a particular model. The fact that the model works then underwrites their existence, in the sense I outlined here.
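To make that abstract point a bit more tangible, here is a toy model (everything in it - the rock, the numbers, the crude update rule - is invented purely for illustration) in which it is easy to point at the noun-bits and the verb-bits:

    # A toy `physics' model. The Rock is the noun-bit: a thing the model
    # lets us refer to. The fall function is the verb-bit: how that thing
    # behaves. The numbers and the crude stepping rule are illustrative only.

    from dataclasses import dataclass

    G = 9.8  # rough gravitational acceleration, metres per second squared

    @dataclass
    class Rock:            # noun-bit: something we can refer to
        height: float      # metres above the ground
        velocity: float    # metres per second (negative means falling)

    def fall(rock: Rock, dt: float) -> Rock:   # verb-bit: the behaviour
        return Rock(height=rock.height + rock.velocity * dt,
                    velocity=rock.velocity - G * dt)

    rock = Rock(height=20.0, velocity=0.0)
    for _ in range(3):
        rock = fall(rock, dt=0.5)
    print(rock)  # statements inside the model translate into statements about the rock

Statements derived inside the model translate into statements about the world, and it is the noun-bits, like the rock itself, which we take to concretely exist relative to the model.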

Of course, things are not quite so simple. First of all, there are lots of things that we think of as concretely existing, but which are a bit fuzzy round the edges. Clouds, for example. We can clearly see that they exist, but the question of just where a cloud ends and the rest of the sky begins is hard to answer precisely. This isn't a problem for our modelling of clouds, however, since we know that clouds are made up of tiny droplets of water, which certainly exist concretely and whose behaviour we can model pretty accurately. So the concrete existence of fuzzy things, like clouds, is understood in terms of the concrete existence of less fuzzy things, like water droplets.

However, the lack of fuzziness of water droplets is just a matter of scale. If you had the ability to shrink yourself down to a microscopic scale, and to look closely at the surface of a water droplet, you would see that the surface is also quite fuzzy, with water molecules near the surface constantly jiggling around and some of them flying off into the air, whilst others from the air rejoin the droplet. So we don't escape fuzziness by looking at the water droplets; maybe if we really want to understand the concrete existence of water droplets (and so of clouds) we need to understand the behaviour of the real concretely existing constituents; the water molecules.

This is the situation with most things we think of as concretely existing, even rocks. Our theory of them involves them being somehow constituted of smaller concretely existing things. From the behaviour of the smaller things, we can derive the behaviour of the larger things; so we can explain the concrete existence (to a good approximation) of the larger in terms of the smaller. The smaller things may be explained in terms of even tinier constituents, and so on down through several levels. We proceed in this way from biology to chemistry to physics.

What lies at the bottom of this downward chain? Maybe, at some level, there are tiny basic constituent particles of the universe whose concrete existence is not just approximate but perfect; they precisely obey mathematical laws which fit well with our grammatical distinction between nouns and verbs, assigning these particles the roles designated by the nouns. This would be the ideal kind of backing our ideas of concrete existence could have. In fact, it is hard to imagine how the universe could be any other way. Perhaps our imaginations could stretch to some simple variations; an infinitely descending or even periodic series. But it seems completely clear that the concrete reality at any level, if it is only approximate, must be backed up by a more detailed explanation involving the interactions of smaller concretely existing things.

Imagine, however, the following alternative scenario: we wish to find a sensible theory of microwidgets, which are taken as concretely existing items at some point in this chain of explanation - they can't quite be at the bottom, because careful experimentation has shown that they are a little fuzzy. We find, as usual, that the best way to explain the fuzziness in the behaviour of microwidgets is by means of a mathematical theory. Also as usual, by taking some small simplifying approximations in this mathematical theory, we can reduce the theory to a theory which mentions objects called microwidgets, behaving as microwidgets should (but without the fuzziness); so we can use this theory to understand why it is sensible, for the most part, to take microwidgets as concretely existing. The difference is that the new, `more accurate' theory does not fit the grammatical conventions of our natural language; it cannot be translated into statements about even tinier nanowidgets and their quirkily nanowidgerific behaviour. No part of the mathematics corresponds to nouns, or to verbs. It just doesn't fit our way of thinking.

My reference to grammatical conventions in the last paragraph isn't to things like `In sentences of such-a-kind the noun should come before the verb' but rather to conventions like `There are some words which are called nouns. They refer to things.' These conventions apply, so far as we know, to all kinds of natural human language, and grammar is the word used for these conventions by the linguists that study them. So, to repeat, in the scenario of the last paragraph it is these deep universal conventions which are irreconcilable with the mathematical theory in question.

Is this even possible? It is certainly hard to conceptualise how it could ever work out that way, and it doesn't seem to have been seriously considered as a possibility until it actually happened. But it has happened. The theory which has wonderful predictive power but does not share the grammar of our thought has been discovered, and accepted as the best current model of small scale phenomena (though, of course, the word `phenomena' is no longer really appropriate). It is quantum mechanics.

There have been many attempts to say just what the concretely existing entities postulated by quantum mechanics are (it being assumed that there must be such entities). Unfortunately, all of them either fall foul of the mathematics or contort the idea of existence so seriously that concretely existing things no longer satisfy the grammar of nouns. However, this is not the only, or even the main, reason for concluding that the grammar of quantum mechanics and normal grammar are incompatible. It is simply a clue that might lead us to suspect that that is what is going on. It is possible to formalise what it might mean for a mathematical theory to fit our natural language, and to demonstrate that quantum mechanics does not satisfy the necessary formal criteria. See, for example, this paper (though I'm not sure the authors of that paper would agree with all that I've said here; it is the mathematical, rather than the philosophical, content of the paper which is relevant).

The striking conclusion of all this is that the deep grammar of our languages, the grammar on which concepts like concrete existence rely for their very meaningfulness, is contingent. It is appropriate for talking about things which are about the same size as us, and which move at a sedate speed. We should expect this, since our language evolved to help us cope with sedately moving things of a similar size to us. We have no reason to think it must fit what happens on smaller scales (though it does fit down to the atomic scale, where it begins to jar). Our reason for assuming that it had to be that way was that the grammar is so ingrained in our ways of thinking that we cannot conceptualise any way the world could be that wouldn't fit.

This conclusion does not depend on the fact that quantum mechanics is odd in the ways I have outlined, though quantum mechanics provided a helpful pointer to it. Even if the mathematics of quantum mechanics is revised or, by some miracle, found to be compatible under a sufficiently cunning contortion with natural language, this will not resolve the issue. For there doesn't seem to be any good reason to reject the possibility that it could have turned out in the way I have presented above. Our only reason for supposing that the world must be neatly divisible into concretely existing entities even on tiny scales appears to be that we are blinkered by the grammatical form of our own thought.

Friday, 25 September 2009

What I do all day

As part of the process of applying for Junior Research Fellowships, I've had to put together a 1500 word statement about what my research involves and how it might develop, phrased in such a way as to be intelligible to a layman. I thought it would also go well with some of the stuff on here. Here it is:

Mathematics can be considered as the process of engaging with, understanding, and exploiting patterns. The strengths of the human mind are not perfectly fitted to the abstract problems which arise in this process. However, it is often possible to recruit our better intuitive and conceptual structures by rephrasing the problems in suitable terms. Much of mathematics therefore consists of setting up and refining amplified metaphors. For example, a great deal of mathematics is phrased in terms of geometry, using terms like shape and dimension. My research concerns game theory, which similarly makes use of analogies involving competitive interaction. More precisely, I am dealing with two-player complete-information games. The phrase `complete information' specifies that these are games which do not rely on chance (like snakes and ladders) or concealed data (like battleship). Instead, as in chess or noughts and crosses, either player always has enough information to completely describe both the current state of the game and how it will be modified by any legal move.
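As a toy illustration of what `complete information' amounts to (the sketch below is in Python, and everything in it is chosen purely for familiarity; it plays no role in the research itself), here is noughts and crosses set up so that the current board alone determines the legal moves, the effect of each move, and, by exhaustive search, the outcome under perfect play:

    # Noughts and crosses as a complete-information game: no dice, no hidden
    # cards. The board alone fixes the legal moves and the result of each
    # move, so the game is small enough to search exhaustively.

    LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

    def winner(board):
        for a, b, c in LINES:
            if board[a] is not None and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def legal_moves(board):
        return [i for i, square in enumerate(board) if square is None]

    def best_outcome(board, player):
        """Value of the position with both sides playing perfectly:
        +1 if X can force a win, -1 if O can, 0 if best play is a draw.
        `player' is whoever is to move."""
        w = winner(board)
        if w == 'X':
            return 1
        if w == 'O':
            return -1
        moves = legal_moves(board)
        if not moves:
            return 0
        values = []
        for m in moves:
            child = list(board)
            child[m] = player
            values.append(best_outcome(child, 'O' if player == 'X' else 'X'))
        return max(values) if player == 'X' else min(values)

    empty = [None] * 9
    print(best_outcome(empty, 'X'))  # 0: a draw under perfect play

The final number is 0, reflecting the familiar fact that noughts and crosses is a draw when both players play well; it is this kind of value under best play which the notion of a winning strategy, mentioned below, makes precise.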

Though the theory describing such games is so elegant that it is worthy of study for its own sake, it can also be used to throw light on other areas of mathematics, principally logic and the theory of computation. Mathematical logic is the abstract study of mathematical reasoning itself, and so it is concerned with (simplified models of) the language with which such reasoning is usually expressed. Wittgenstein introduced the metaphor `Languages are games': one strand of mathematical logic has deepened and extended this metaphor for the simplified models of language studied by logicians. To each statement in such a language, we can associate a game, played by two players (the Challenger and the Defender) with the property that play in this game looks like the kind of discussion which might arise when determining whether the statement is true. Taking a very simple example, play in the game corresponding to the statement `Everybody has a mother' might look like this:
Challenger: What about [person A]?
Defender: His/her mother is [person B].
The Defender is declared to have won if person B is the mother of person A. We might then say that the statement is true if (in principle) the Defender has a winning strategy for this simple game, and false if the Challenger has a winning strategy. Not only does this help us to explore the meaning of words like `everybody,' it also allows us to recruit our conceptual understanding of games to help us think more fruitfully about mathematical language.
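To make the correspondence vivid, here is a toy version of that game over a tiny made-up domain (the names and the mother_of table below are invented; only the shape of the game matters): the Challenger picks a person, the Defender must name that person's mother, and the statement counts as true exactly when the Defender has a winning strategy.

    # The game for `Everybody has a mother' over a small invented domain.

    people = {"alice", "bob", "carol"}
    mother_of = {"alice": "carol", "bob": "carol"}   # carol's mother is not in the domain

    def defender_wins(challenge, reply):
        """The Defender wins a single play if the reply really is the
        challenged person's mother."""
        return mother_of.get(challenge) == reply

    def defender_has_winning_strategy():
        """The Defender has a winning strategy iff, for every challenge
        the Challenger might make, some reply wins."""
        return all(
            any(defender_wins(challenge, reply) for reply in people)
            for challenge in people
        )

    print(defender_has_winning_strategy())  # False

In this little domain the Defender has no winning reply to the challenge `carol', so the Challenger has a winning strategy and the statement comes out false; a counterexample and a winning strategy for the Challenger are the same thing seen from two angles.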

The second major use of game theory is in the theory of computation. Here the subroutines of a program are thought of as players in a game, the structure of which guides the flow of the computation. On a larger scale, the protocols by means of which programs interact with one another may be thought of as basic games. This metaphor may be made precise and extended to show how standard concepts from computer science (parallel processing, variable binding, etc.) correspond to concepts involving games; seeing the concepts from this new perspective allows new ways of thinking about them.
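For a flavour of how this works (what follows is only a toy sketch of my own, far short of a real game semantics; the move names and the `double' example are invented), one can think of a protocol as a rule saying which sequences of moves count as legal plays. Two standard conditions are that the two players alternate and that each answer responds to the most recent unanswered question:

    # A move is (player, kind, label): the program 'P' and its environment 'O'
    # alternate; kind is 'Q' (question) or 'A' (answer); an answer must close
    # the latest open question (`well-bracketing').

    def legal_trace(trace):
        expected = 'O'            # by convention the environment moves first
        pending = []              # stack of unanswered questions
        for player, kind, label in trace:
            if player != expected:
                return False      # players must alternate
            if kind == 'Q':
                pending.append(label)
            elif kind == 'A':
                if not pending or pending.pop() != label:
                    return False  # answer must match the latest open question
            else:
                return False
            expected = 'P' if expected == 'O' else 'O'
        return True

    # The environment asks a subroutine for double(3); the subroutine asks
    # for its argument, receives it, and then answers the original question.
    trace = [
        ('O', 'Q', 'double?'),
        ('P', 'Q', 'argument?'),
        ('O', 'A', 'argument?'),
        ('P', 'A', 'double?'),
    ]
    print(legal_trace(trace))  # True

A subroutine then corresponds to a strategy saying how the program responds to the environment's moves, and plugging programs together corresponds to playing their strategies off against one another.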

Game theory is just one of the diverse species of mathematics, and can appear as different from ideas such as the convoluted geometry used to model the behaviour of spacetime on quantum scales as a poodle is from a blue whale. However, just as the poodle and the whale have strikingly similar skeletal structures, so there is a common structure under the surface, not just of these two areas, but of an incredibly diverse menagerie of mathematical fields. The study of this structure is category theory, the comparative anatomy of mathematics. [As is often true in mathematics, the word `category' has a specific technical meaning in this context, which has little to do with the normal usage. The best policy is to imagine that a completely new word is being introduced, and to disregard any standard meanings or connotations the word may have for you.]

One major benefit of category theory is that it provides a general linguistic framework within which many areas of mathematics may be discussed. This aids communication between mathematicians working in different fields, and assists in the recognition and development of connections between those fields. Indeed, in mathematics, simply expressing ideas in the right language can be a powerful aid to thought, suggesting new perspectives and approaches and reducing complex problems and definitions to simple ones. Category theory also allows concepts and techniques to be more easily transferred from one area to another, and guides the construction of the analogies and amplified metaphors on which mathematics thrives. For particularly thorny problems, category theory can help to identify simpler contexts which serve as guinea pigs for potential solutions; seeing what works in these simpler contexts can be useful in deciding what approach to take to the original problem. Finally, many of the structures made explicit in category theory are extraordinary for their simplicity and beauty, and these qualities are highly valued within the mathematical community.

Category theory was applied to the theory of games with great success in the latter half of the 20th century. During this time, a clear intuitive picture of how categorical structures might emerge in the theory of games was developed. However, when attempts have been made to make this intuitive picture more concrete, the details have proved to be rather fiddly. Several concrete explanations have emerged, each with its own peculiarities and with none evidently more natural than the others. At present, these explanations are held together only by a loose weave of suggestive analogies.

Some recent developments in category theory may change that. The widespread use of geometric intuitions in mathematics means that often the idea of dimension is key. We naturally assign dimensions to physical objects, so that a line on a piece of paper is just 1-dimensional, the surface of the paper itself is 2-dimensional, the space in which the paper sits is 3-dimensional and so on. In just the same way, mathematicians naturally assign dimensions to many of the abstract objects that we study, and this helps us to visualise them. When category theory first emerged, about halfway through the 20th century, the approach was entirely 1-dimensional. It quickly became clear that there were higher-dimensional analogues of the structures being studied, and that these higher-dimensional structures would allow a more expressive linguistic framework. To balance this expressivity, however, these structures are also more delicate and more care is needed to understand them. It is only in the last decade that the necessary techniques for handling these structures have been established, and there is still plenty of room for progress in this exciting area.

The categorical approaches to the understanding of games have so far been almost exclusively 1-dimensional. In my research, I have discovered a hidden extra dimension of structure underlying the intuitive picture of the relationship between categories and games. There is a natural way to make this precise using the fresh language of higher dimensional category theory. When this is applied to the disparate existing concretisations of the intuitive picture, the 2-dimensional structures I obtain show their unity more clearly than their 1-dimensional shadows. This gives a new way of looking at the existing constructions and the links between them, as well as allowing me to make concrete some constructions which had so far only been discussed on an intuitive level. This work has also provided a context for the development of some basic tools in higher-dimensional category theory.

In my PhD thesis, I hope to give a clear explanation of the higher dimensional construction, including both an introduction to an elegant general framework for the application of category theory to game theory and a detailed exposition of a simple new example. Beginning in parallel with this thesis, but continuing into the next couple of years of my research, I hope to publish a series of papers making use of this general framework to provide new perspectives on the existing constructions in this field, and on the connections between them. I hope to explain how the new framework makes possible a proof of a conjecture of Imre Leader. Finally, I hope to explore some of the problematic issues from the theory of games in the context of my new simple example, and so to indicate potential resolutions of these problems.

Saturday, 27 June 2009

A rearrangement of the Tractatus Logico-Philosophicus

I had a look at Wittgenstein's Tractatus Logico-Philosophicus recently, and the format is crying out for expression with an outliner. I poked around on the internet for a while, and I couldn't find it. So, I did it myself. Here it is:
The Tractatus in an outliner.
Reading it this way brings out the coherence of some of the structure of this book. It's particularly impressive since Wittgenstein didn't have any such software to play around with.

I built it from a Project Gutenberg e-text, though I've corrected some typos and put in some sections that had been left out of the e-text. I've also cleaned up the notation for the mathematics, which had become mangled in the e-text version. The only major problem is that I couldn't include any of the diagrams from the original book. But this isn't too serious, as Wittgenstein doesn't rely very heavily on the diagrams.
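For anyone curious about why the numbering makes this so easy, here is a rough sketch of the idea (a quick reconstruction, not the actual script I used): Wittgenstein's convention is that proposition n.m comments on proposition n, n.m1 on n.m, and so on, so the nesting depth is just the number of digits after the decimal point, and each proposition attaches to the nearest preceding proposition of smaller depth.

    # Turn the Tractatus numbering into an indented outline.

    propositions = [
        ("1", "The world is everything that is the case."),
        ("1.1", "The world is the totality of facts, not of things."),
        ("1.11", "..."),   # remaining texts omitted here
        ("2", "..."),
    ]

    def depth(number):
        return 0 if "." not in number else len(number.split(".", 1)[1])

    def print_outline(props):
        stack = []                      # depths of the current chain of ancestors
        for number, text in props:
            d = depth(number)
            while stack and stack[-1] >= d:
                stack.pop()             # climb back up to this proposition's parent
            print("    " * len(stack) + number + "  " + text)
            stack.append(d)

    print_outline(propositions)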

It is clear that, on its own terms, the Tractatus is nonsense. Wittgenstein was well aware of this when he wrote it. The trouble is that most of it is in fact perfectly sensible use of language and so, not being nonsense, it shows itself to be false. We could, perhaps, imagine that Wittgenstein's project was more modest: To set up a bastion of sensible language without commenting on the sense of the remainder (with which that bastion is constructed). But even this fails, since the set-up of the bastion relies on a classical foundationalism which has been smashed by quantum mechanics, as I hope to explain in my next post.

Sunday, 21 June 2009

An amusing diversion.

If I told you I didn't believe in rationality, what could you do to convince me? You could try to present an argument, but I, believing the argument to be necessarily irrational, would have no reason to accept it, or even listen to it. Your best bet would probably be to medicate me, though such techniques don't really fall under the banner of 'convincing'.

This is because rationality is so basic. Any proof that rationality exists must be in some sense circular, in that it must rely at some point on reason. This isn't a problematic kind of circularity, though, in that we simply rely on the proper function of reason in our brains to understand reasonable arguments. We don't, and can't, rely on our knowledge that we are rational in order to do this. But lack of proper care in thinking about this can lead to paradoxes.

It's much easier with the existence of things like tables. We have a sensible theory of the world into which tables snugly fit. More precisely, in this theory there is a role for tables as existing objects which fits pretty closely with our experience of them and of the world. To return to the theme of this post, our physical theories of the world give good reason to believe that continuing to use the word 'table', as we do, as if it were a noun, will not lead us into serious difficulty. So, according to that post, we shouldn't be at all surprised that we find ourselves, from time to time, making claims like 'tables exist'. There is no paradox here, despite the fact that tables have been widely used by those who put these theories together.

Let's pause at this point to notice the dizzy heights we haven't reached. The dalliance with the word 'exists' in the post referenced above gave no explanation at all of what existence is, or of what it is for a thing to exist. It was simply an observation about the linguistic use of a word, a piece of anthropology. Accordingly, we needn't worry about things relying for their existence on the ideas introduced there. This is good: It avoids vicious circularity - those ideas needn't rely on themselves for their own existence, for example. Bafflement easily ensues if this is not kept in mind.

In fact, there are quite a few things which exist, and whose existence may be explained in this very limited sense, but for which the explanation invokes the things themselves. The problem is only apparent, since what is explained is only the fact that, given the way we use language, we sensibly use the word 'exists' about those things. It is amusing to examine a few cases, and see how in each case familiar paradoxes are generated. I'll look at numbers, language, time, probability and deconstruction, but I'm sure you can think of your own examples.

Numbers

Although numbers aren't made of physical stuff in the same way as tables, our basic physical theories give us good reasons to accept that using the words for numbers nounishly will be unproblematic. For the world is generally divisible into discrete bits which we can count, and preservation of cardinality is a general rule which emerges from our understanding of how those bits of stuff behave. However, the more deeply we look into the behaviour of stuff and our understanding of it, the more we find that in order to explain what is going on we must make use of mathematics.

Because of the misunderstanding I outlined above, this phenomenon has often been taken to show that such explanations are circular and therefore inadequate. So the question of how our minds can interact with numbers has been singled out as a mystery. How can our minds, which are made of physical stuff, possibly interact with such abstract nonphysical entities? Even the brilliant thinker Roger Penrose has taken pains to express how baffling he finds this, in terms of a triangular structure of mental, material, and mathematical worlds, each enclosing the previous one:
In order to confront the profound issues that confront us, I shall phrase things in terms of three different worlds, and the three deep mysteries that relate each of these worlds to each of the others. The worlds are somewhat related to those of Popper (cf. Popper and Eccles 1977), but my emphasis will be very different.

The world that we know most directly is the world of our conscious perceptions, yet it is the world that we know least about in any kind of precise scientific terms. ... There are two other worlds that we are cognisant of ... One ... is the world we call the physical world ... There is also one other world, though many find difficulty in accepting its actual existence: it is the Platonic world of mathematical forms.

Language

Though we are far from completely understanding it, linguistics has given us a pretty good outline of how language emerged and how it is used in day to day life. Mundanely enough, in order to express this stuff even linguists must humbly rely on the very linguistic structures they are studying. Pulling a mystery from this banality, I've seen this fact used to reject the explanations of linguists as fundamentally unable to address the 'deep questions' of language, such as when a sentence is true. This misunderstanding is used to claim that there is a sacred territory here on which we must proceed philosophically barefoot.

Time

Great strides have been made in understanding what time is. Newton and others produced a sensible mathematical model into which time fits. Einstein showed that the model in question was slightly off (in a way that reflects our simple mental pictures of time), and proposed a better one. This better perspective has yet to be fully integrated into quantum mechanics, but there's no reason to suppose that this won't ever be achieved. In any case, we are close to having an understanding of how time exists (that is, of why using the word time as a noun doesn't screw us up) which is built on cunning maths which makes no reference to and doesn't rely on a prior concept of time.

Being human, though, when we consider these mathematical models, our brains (being so tied up in a primitive model of time) situate them in a kind of timeless present. Besides the obvious confusion this leads to (nonsense like 'Relativity implies the future already exists'), this has led to the claim that this present, for example, is prior to and in principle cannot be accounted for by scientific models. Once more, there is a purported need to abandon all our usual tools when addressing our day-to-day experience of time. I'm told Heidegger followed a line of this kind, but I haven't checked this in his writings.

Probability

This is a slightly odd special case. The normal way we introduce probability into scientific models is at the ground floor, with a variety of possible worlds whose actualities are assigned various prior probabilities. For this reason, like numbers, probability is difficult to explain except by exemplifying the proper use of the language (and remarkably it seems not to have been understood systematically until a few centuries ago).

To some extent this can be dealt with, though it normally isn't, by considering appropriate strategies for a limited reasoning being in a deterministic, patterned reality. Such a being could represent its ignorance about that world by assigning various probabilities to ways it reckons the world might be. So we can see how probabilistic language might become useful and how the probabilities of particular events could be said to exist without having to invoke probability in the explanation.
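For what it's worth, here is a toy sketch of such a reasoner (the candidate worlds and the numbers are pure invention; the point is only that the probabilities live in the reasoner's ignorance rather than in the world):

    # A limited reasoner in a deterministic world keeps a weight for each way
    # it reckons the world might be, and discards the candidates that conflict
    # with what it observes.

    from fractions import Fraction

    # Four candidate deterministic worlds, each fixing three successive observations.
    worlds = {
        "w1": (0, 0, 1),
        "w2": (0, 1, 1),
        "w3": (1, 0, 0),
        "w4": (0, 0, 0),
    }

    # The reasoner starts out indifferent between the candidates.
    weights = {name: Fraction(1, len(worlds)) for name in worlds}

    def observe(weights, index, outcome):
        """Keep only the worlds consistent with the observation, and
        renormalise the remaining weights."""
        consistent = {name: w for name, w in weights.items()
                      if worlds[name][index] == outcome}
        total = sum(consistent.values())
        return {name: w / total for name, w in consistent.items()}

    def probability_of(weights, index, outcome):
        """The reasoner's probability of an outcome is just the total weight
        of the candidate worlds in which it happens."""
        return sum(w for name, w in weights.items() if worlds[name][index] == outcome)

    print(probability_of(weights, 0, 0))   # 3/4 before any observations
    weights = observe(weights, 0, 0)       # the first observation comes out 0
    print(probability_of(weights, 1, 0))   # 2/3: two of the three remaining worlds give 0 next

Each candidate world is completely deterministic, yet the reasoner can still sensibly say that the probability of the next observation being 0 is 3/4, and revise that figure as the observations come in.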

Unfortunately, though, as far as we can tell such 'probability-free' basic models aren't adequate to the behaviour of the world that we find ourselves in, where there appear to be genuinely random events if you look on a small enough scale. Quantum mechanics forces us to once more build in randomness at the bottom. Actually, the situation is a little more complicated still, in a way that is so outlandish that I'll have to leave an explanation to a later post.

Deconstruction

A lot of the language Derrida originally used when introducing deconstruction appears to fall into this category. I can't comment for myself on later postmodernists, since I haven't looked into them, but I'm told that to a large extent they were trying for something a little different. There is, as far as I can tell, no current systematic understanding of why 'differance', or 'trace', for example, are sensibly nounish words. I can see that they are, but not in terms that I can explain without resorting to the vocabulary of Derrida, and even that isn't well enough developed to explain it thoroughly. This linguistic oddity may turn out to be explicable in other terms eventually, for example with an adequate development of neuroscience, or it may not. It almost certainly won't be done whilst I'm still alive.