Paul Jorion
paul_jorion@email.msn.com

Official reference: Dialectical Anthropology 24, 1: 45-98, 1999 

 

What do mathematicians teach us about the World ? An anthropological perspective

 

« Aristotle was a thorough-paced scientific man such as we see nowadays, except for this, that he ranged over all knowledge. As a man of scientific instinct, he classed metaphysics, in which I doubt not he included logic, as a matter of course, among the sciences, - sciences in our sense, I mean, what he called theoretical sciences, - along with Mathematics and Natural Science, - natural science embracing what we call the Physical Sciences and the Psychical Sciences, generally. This theoretical science was for him one thing, animated by one spirit and having knowledge of theory as its ultimate end and aim » (Peirce 1992 [1898] : 107)

I did not train as a mathematician, I trained as a Social Scientist. I had however chosen Mathematics as my main subject at the « Athénée », the equivalent of High School in Belgium, the country where I was born and where I was raised up to graduate level. At the Free University of Brussels, I learnt mathematics for economics as part of the curriculum for sociology undergraduates. As a graduate student I had the privilege of being one of Georges Théophile Guilbaud's students at his seminar called « Mathematics for Social Scientists » hosted by the Ecole des Hautes Etudes en Sciences Sociales in Paris. Next followed many years of conversations and joint informal work with Professor Sir Edmund Leach, at Cambridge University, a small portion of which has recently found its way to the publisher (Jorion 1993). Then I went on learning mathematics « on the job », first as an anthropologist, then as an Artificial Intelligence researcher and more recently in finance. This makes me a long-term Applied Mathematician which - I have realized over the years - is a profession very different from that of the Pure Mathematician.

Let me tell you how I see these two experiences as differing. In my conversations with Pure Mathematician friends I have discovered people performing a demanding task which, despite the intellectual hardship, they very much like and enjoy. Mathematicians tell you about the torment of trying to establish something like the proof of a theorem or the design of a complex mathematical object, and often failing to do so after weeks, months or even years of effort. They tell you about being woken up at night by the solution to a problem, which may depressingly turn out in the morning not to be a proper solution after all. They mention often being left drained by the mental effort which the pursuit of mathematical research requires. But despite the hardships they are on the whole happy with mathematics : they like it and they believe it is a field which is doing a good job in the world of science and in the world at large.

My own experience with mathematics - essentially as a customer of mathematics produced by others - is of a different nature. No doubt what justifies the honor I am being awarded of being here today at the University of California, Irvine has much more to do with the occasions when I have been a satisfied customer of mathematics than with those when I have been a frustrated one - the latter not leading to any noteworthy conclusion ! I was very fortunate, some twenty-eight years ago, to encounter permutation groups as part of a full-bodied theory, and to find, under the guidance first of Guilbaud, then of Leach, original ways of applying permutation groups to the systematic exploration of genealogies. At times in my dealings with mathematics I have managed to do somewhat better than being a passive consumer. This was possible when the question I was trying to address was close enough to some mathematics I was familiar with that I could « customize » an existing object, such as the dual of a directed graph, to my own demands. But on the whole I have been very much a frustrated customer of mathematics who hardly ever found in the mathematical toolbox what he was looking for. This frustration has led me over the years to ponder the production of mathematics by mathematicians as a part of our culture, a production which, as such, can be studied from an anthropological perspective. Today I shall try, from an anthropological point of view, to shed some light on the question « What can mathematics tell us about the world ? ».

1. The « effectiveness of mathematics »

Eugene Wigner wrote in 1960 an essay entitled « The Unreasonable Effectiveness of Mathematics in the Natural Sciences ». There is no denying the « effectiveness » of mathematics, even if the « unreasonable » part requires justification rather than being plainly assumed.

Let us remember first that there is no « Unreasonable Effectiveness » of mathematics in explaining the world. On the contrary, it may be said that mathematics lets us down as soon as we attempt to account for the world with its help : for instance, when we try to do something as simple as calculating the ratio of the diagonal of a square to its side, or when we attempt to calculate the ratio of the circumference or of the area of a circle to its diameter.

As is well known, it is impossible to find an exact measure for the diagonal of a square in terms of its side : the side and the diagonal of the square are « incommensurable », they cannot be measured using the same unit. So for instance if you define the side of a square as being of length « 1 », then the diagonal (as a consequence of Pythagoras' theorem) is of length square root of « 2 », i.e. 1.414213..., which has a set of decimals of infinite length ; if on the contrary you define the diagonal as being of length « 1 », then the side is of length square root of « 1/2 », i.e. 0.707106..., which also has a set of decimals of infinite length and turns out to be half the value of the square root of « 2 ». The same applies to the pentagon, where the proportion of diagonal to side has become famous under the name of the « golden section » ; here the proportions of diagonal to side and of side to diagonal are respectively 1.618033... and 0.618033..., i.e. identical but for one unit, the larger of the two values being the « golden section ».
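The arithmetic of these two famous ratios can be checked numerically ; a minimal sketch in Python (the variable names are mine) :

```python
import math

# Side of length 1 : the diagonal is the square root of 2,
# whose decimal expansion never terminates.
diagonal = math.sqrt(2)       # 1.414213...

# Diagonal of length 1 : the side is the square root of 1/2,
# which is exactly half the square root of 2.
side = math.sqrt(1 / 2)       # 0.707106...
assert abs(side - diagonal / 2) < 1e-12

# Pentagon : the golden section and its reciprocal are
# « identical but for one unit » : 1.618033... and 0.618033...
golden = (1 + math.sqrt(5)) / 2
assert abs(golden - 1 / golden - 1) < 1e-12
```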

This question of incommensurability between « obvious » lengths such as the side and the diagonal of the square led to the necessity of defining a new sort of numbers, the irrationals. The second defeat for mathematics after the proportion of the diagonal to the side is also famous and of a similar nature : what is the proportion between the circumference and the diameter of a circle ? The answer is another « irrational » number : π, the value of which is 3.141592... π is also the ratio of the area of the circle to the square of half its diameter.

Irrational numbers have been distinguished into « algebraic irrationals », which are « the root of an algebraic equation with a finite number of terms, and rational coefficients » (Legendre quoted in Remmert 1991 : 151), and « transcendental irrationals », which « omnem rationem transcendunt » (ibid.). In the fourth century BC Eudoxos found a way around the difficulty raised by the irrationals by designing the method of exhaustion, in which the two incommensurable magnitudes are repeatedly subtracted from each other until the remainder has become negligible (van der Waerden 1983 : 89-91 ; Szabo [1969] 1977 : second part ; Fowler 1990 : second chapter).
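The reciprocal subtraction at the heart of the method of exhaustion (what the Greeks called anthyphairesis) can be sketched in a few lines of Python ; this is my own illustration of the procedure, not a reconstruction of any ancient text :

```python
def reciprocal_subtraction(a, b, tolerance=1e-6):
    """Repeatedly subtract the smaller magnitude from the larger,
    then subtract the remainder from the former smaller term, and
    so on, until the remainder becomes negligible.  For two
    incommensurable magnitudes the process never terminates exactly,
    hence the tolerance cut-off.  The successive counts of
    subtractions are returned."""
    counts = []
    while b > tolerance:
        n = 0
        while a >= b:
            a -= b
            n += 1
        counts.append(n)
        a, b = b, a   # the remainder becomes the new subtrahend
    return counts

# Side 1 and diagonal sqrt(2) : the counts settle into the endless
# pattern 1, 2, 2, 2, ... - a symptom of incommensurability.
print(reciprocal_subtraction(2 ** 0.5, 1.0)[:6])
```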

2. « Realists » and « anti-realists » among mathematicians

What is a mathematical model ? Its source is an intellectual construct : a mathematical object. A mathematical object does not tell us anything directly about the world. It is often said that, as opposed to a description of a piece of the world, which is semantic, i.e. has to do with meaning, a mathematical object is syntactic : its meaning derives entirely from its structure. This is actually an apt way of expressing the type of meaning held by a mathematical object : some of the symbols which constitute it impose constraints on others, some have no more meaning than the set of constraints they are submitted to. A mathematical object articulates "atomic" mathematical propositions consistently linked with each other.

Apart from the syntax that rules mathematical propositions, there exists another type of syntax which tells us how to generate new valid mathematical propositions from existing valid ones, these being made of well-formed elementary units or "words". The commonly accepted view is that a mathematical object is symbolic, by which is meant that it is constituted of numbers and of elements which can stand for objects of the empirical world but for the time being do not stand for anything in particular (I doubt however that they ever stand for "nothing at all" in the mind of the mathematician, and I will say why later when discussing the mathematician's intuition). In order to say anything about the world a mathematical proposition needs to be « interpreted » : it needs to stand for something. An interpretation of a mathematical proposition is an assignment of world counterparts to the meaningless symbols of the proposition.

The benefits of a mathematical model for world comprehension are the following : if an interpreted mathematical model makes sense, then it is reasonable to assume that the type of relations which hold between the symbols in the model hold also between the bits of the real world which are represented in the interpretation of the model. The set of these relations makes up the shape (Greek morphe) ; if the shape is the same (Greek isos), one talks of an isomorphism between the model and the part of the world being modeled.

But what have we actually done once we have established that an object made of symbols and a piece of the world have the same shape ? Here unfortunately the answer to this question diverges into two types, and we need to scrutinize both if we want to understand what a mathematical model actually does. The two types of answers were never of much concern to mathematicians before the beginning of the twentieth century, but by then they began to intrigue them, and names like David Hilbert or L.E.J. Brouwer are famous for their involvement in what has since been labeled « questions about the foundations of mathematics ».

For the sake of the argument, it is possible to distinguish two strongly polarized positions on this issue, called the « realist » (or « Platonist ») and the « anti-realist » (or « constructivist »). On the whole mathematicians have never bothered to locate themselves on either side, but this is essentially because in their large majority they are spontaneously realists. Some however, like Kurt Gödel, who proved two celebrated theorems about the incompleteness of arithmetic, made it a point to locate themselves as platonists ; I will come back to this later.

The realists are so called because they hold that mathematics accounts for something real. In the view of the realists mathematics is a « science » : it describes a particular world which is the world of mathematical entities. Thus according to the realists, mathematicians are no inventors but discoverers : they discover entities that live in the world of mathematical entities. The reason realists are also called platonists is that the followers of Plato believed that the ultimate reality is made out of numbers. Realist mathematicians thus see themselves as people who describe the ultimate reality, or at least the behavior of the entities which constitute the ultimate reality. Quite consistently, realists are the type of mathematicians who would raise the question why there is no Nobel Prize for mathematics.

For his contemporaries there was no doubt that Plato was a follower of philosopher cum mathematician Pythagoras. Aristotle said of the Pythagoreans that they held « that numbers are the ultimate things in the whole physical universe, they assumed the elements of numbers to be the elements of everything, and the whole universe to be a proportion or number » (Metaphysics, 986a, 4-7). Aristotle reckoned that there were nuances between the position of Plato and that of Pythagoras, but very minor ones : « ... for whereas the Pythagoreans say that things exist by imitation of numbers, Plato says that they exist by participation - merely a change of term » (Metaphysics, 987b, 11-15).

The anti-realists hold a different and, one may say, opposite view : for them numbers are constructions of the human intellect ; numbers were not discovered but invented, and the rest of mathematics has similarly been invented. Numbers have been abstracted from reality, and so have the rules for generating further abstract notions springing from these numbers - and this is as much as one can say. Quite consistently, anti-realists are the type of mathematicians who would dismiss the question why there is no Nobel Prize for mathematics, by claiming - like Alfred Nobel himself - that mathematics is a methodology for science, not science itself : the « servant of science ».

For Plato, the world that we know and live in is an imperfect materialization of Numbers or perfect Forms. But this view implies that once we have described how these numbers behave we possess, one may say, the templates for all things physical, and all that remains to be done to construct a proper Physics is to show, for every object, which ideal Form it participates in ; for instance the cone for the mountain. I am not claiming that all mathematicians who would recognize themselves as realists believe exactly something like this, but I believe it is fair to suggest that they think that mathematics somehow explores the world of these perfect entities such as numbers and ideal shapes. Concerning Gödel, mentioned before, Bertrand Russell wrote jokingly in the second volume of his Autobiography : « Gödel turned out to be an unadulterated Platonist, and apparently believed that an eternal "not" was laid up in heaven, where virtuous logicians might hope to meet it hereafter » (quoted by Dawson 1988 : 8).

For the anti-realist or constructivist, there is no such reward of knowing something for sure about the world once you have designed a mathematical model for some portion of the empirical world. What you possess with a model is some stylized representation of what you started from, a kind of intellectual shortcut which operates what Ernst Mach at the end of the nineteenth century characterized as an « economy of thought ». In this respect as in many others, Aristotle is indeed the very opposite of Plato. Aristotle embodies the anti-realist position : nothing « mystical » in the working of mathematics, just « stylization ». Here it is in the words of the philosopher himself : « ... the mathematician makes a study of abstractions (for in his investigations he first abstracts everything that is sensible, such as weight and lightness, hardness and its contrary, and also heat and cold and all other sensible contrarieties, leaving only quantity and continuity - sometimes in one, sometimes in two and sometimes in three dimensions - and their affections qua quantitative and continuous, and does not study them with respect to any other thing ; and in some cases investigates the relative positions of things and the properties of these, and in others their commensurability or incommensurability, and in others their ratios ; yet nevertheless we hold that there is one and the same science of all these things, viz. Geometry)... » (Metaphysics, 1061a, 28 - 1061b, 4).

3. The nature of mathematical proof

« It is in mathematics that our thinking processes have their purest form », writes Roger Penrose, Rouse Ball Professor of Mathematics at the University of Oxford (Penrose 1994 : 64). This is a widely shared commonsensical view of mathematics, but we cannot accept it solely on the strength of such plausible credentials. Euclid introduced into the history of mathematics the principle of an « axiomatic » approach, meaning that mathematical theorems, newly accepted mathematical propositions, are derived systematically from a body of « axioms », which are nothing but a set of non-contradictory « theses » (either hypotheses or not further justified definitions) used as starting points for theoretical development. At the beginning of our aging century a number of mathematicians, most prominently David Hilbert, meant to move one step further than axiomatisation by making sure that mathematical theories are entirely formalized, i.e. work entirely on the basis of non-intuitive symbols, which can then separately be « interpreted » in terms of intuitive empirical realities such as time, lengths, speed, acceleration, etc.

One essential motive for such a move was to rid mathematics of the unpleasant paradoxes - apparently due to unclarified issues about meaning - such as were cropping up in Cantor's attractive and then recently developed « set theory ». Hilbert himself produced a formalized version of Euclid's geometry. The stage was thus supposedly set for establishing a definitive and clear-cut separation between the syntax of mathematics - the meaningless operations on symbols only - and mathematics' semantics : the use of mathematical objects as models for empirical phenomena or mechanisms. In promoting « formalisation », Hilbert was of course opening the path to the « automatic » algorithmic usage of mathematics which would become central to the type of computation that machines do, i.e. would become central to computer science. The names of Alan Turing, Alonzo Church and Stephen Kleene are those of men prominent in the design of the theory of « computability », as it became known.

A theorem is, as Wittgenstein aptly observed, « a mathematical proposition (which) is the last link in a chain of proof » (§ 122 of Wittgenstein 1975). Penrose probably reflects a commonly held view when he writes - as I quoted him - that « it is in mathematics that our thinking processes have their purest form ». But what is the nature of mathematical proof ? What gives it the consistency which we so much admire ?

Gottlob Frege first, then Bertrand Russell and Alfred North Whitehead in a joint venture, led at the turn of the century a major effort to provide mathematics with a foundation based entirely on logic. The effort was not entirely successful but plays an important part in the clarification task to which Hilbert was contributing separately and on a different basis. The question remains : what could possibly provide mathematical proof with consistency if not the general principles of logic ?

Although he assigns some priority to Zeno of Elea in the task of eliciting the laws of logic, Aristotle claims that his own enterprise of getting a method for rational thinking off the ground constitutes an unprecedented effort. Twenty-one centuries later Immanuel Kant was led to state that nothing could be added to the monument which Aristotle had erected with his Organon, encompassing Analytics and Dialectics, which we today bring together under the unified banner of Logic.

In the Organon Aristotle establishes a catalogue of the means of proof, and it is his classification and assessments which I am using here. I will not go into too much detail, as my immediate purpose is not to discuss Aristotle but the nature of proof ; I will however strive to maintain accuracy in the process of simplification. Let me first say that Aristotle distinguishes three methodologies for reasoning, presenting diminishing standards of rigor in the obtaining of conviction : Analytics, Dialectics and Rhetorics. Analytics describes the methods of proof to be used in scientific practice. Dialectics accounts for the laxer methods that can be used in formal disputations such as take place in court or in the political arena. Rhetorics reports the even laxer techniques of proof which can be used in oratory or in informal conversation, where persuasion can be achieved by whatever means rather than through systematic reasoning only, as would be the case with Analytics and Dialectics : hence the possible recourse in Rhetorics, as means of persuasion, to proverbs, riddles, paradoxes, entertaining anecdotes, etc.

In addition, within each of these three domains of reasoning, special techniques are catalogued by Aristotle and ranked according to their persuasive power, identified with their « tightness », i.e. the degree to which the conclusion reached is actually supported by the arguments put forward in the process of demonstration.

The most accurate method for reasoning is the syllogism, which Aristotle was the first to describe in each of its possible configurations. The syllogism draws a single conclusion from two premises. For this to be possible, these two premises need to have one term in common, the « middle term », and it is by means of this middle term that the two other terms in the premises, the « extremes », get connected within the conclusion. The premises in the syllogism may be expressed positively, stating that some things are of such or such a nature or hold a particular property. The syllogism with positive premises is according to Aristotle the most perfect mode of reasoning. One degree lower in terms of persuasive power are syllogisms involving premises expressed in a negative manner, stating that some things are not of such or such a nature or do not hold a particular property. Still one further degree below are syllogisms used in a tentative manner in what is nowadays called proof by reductio ad absurdum but was called, until the Middle Ages and in accordance with Aristotle's own terms, proof per impossibile (adunaton), because as a consequence of one of the premises, the conclusion is led to state an impossibility : that both one view and the contradictory view are true. In order to remove the impossibility the offending premise needs to be reversed into its own contradictory.

The reason why Aristotle regards the proof per impossibile as being of the lowest status to be allowed in scientific demonstration is that it implies first testing a premise ex hypothesi, i.e. « trying it out », then examining its consequences and, if these lead to an « impossible » conclusion, adopting the contradictory of the premise first envisaged. What weakens this mode of proof is that the syllogism has not been constructed in its definitive form for positive reasons but for negative reasons only. The weak link is of course the « reversed » premise, where for instance the contrary may be confused with the contradictory, or worse, if the incriminated premise does not express a « yes » or « no » type of issue, there will exist a number of alternative ways to choose from when « reversing » its contents.

Typically, in a recent mathematical book (Fundamentals of Mathematical Analysis ; Haggarty 1993), when the author characterizes the most common forms of mathematical proof, he lists Aristotle's three forms of demonstration proper to Analytics, i.e. appropriate to the purpose of scientific demonstration :

« Many theorems in mathematics take the form (P ⇒ Q). To show that P implies Q, one usually adopts one of the following schemas:

  1. The first method, which is a direct method of proof, assumes that P is true and endeavors, by some process, to deduce that Q is true. Since (P ⇒ Q) is true whenever P is false, there is no need to consider the case where P is false.
  2. The second method is indirect. First write down the contrapositive, ((not Q) ⇒ (not P)), and try to prove this equivalent statement directly. In other words, assume that Q is false (that is, (not Q) is true) and deduce that P is false (that is, (not P) is true).
  3. The third commonly used method is to employ a proof by contradiction (also known as reductio ad absurdum). For this argument, assume that P is true and Q is false (that is, (not Q) is true) and deduce an obviously false statement. This shows that the original hypothesis (P and (not Q)) must be false. In other words, the statement (not (P and (not Q))) is true. But this is logically equivalent to (P ⇒ Q) » (Haggarty 1993: 22-23).

Thus we have recognized here, in this particular order : 1° the syllogism with positive premises, 2° the syllogism with negative premises, 3° the syllogism used in a proof per impossibile.
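The logical equivalences on which Haggarty's second and third schemas rest can be verified mechanically, by running through all truth-value assignments ; a minimal sketch in Python :

```python
from itertools import product

def implies(p, q):
    # (P => Q) is false only when P is true and Q is false.
    return (not p) or q

for p, q in product([True, False], repeat=2):
    # Schema 2 : the contrapositive is equivalent to the implication.
    assert implies(p, q) == implies(not q, not p)
    # Schema 3 : (not (P and (not Q))) is equivalent to the implication.
    assert implies(p, q) == (not (p and (not q)))
```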

Dialectics resorts to premises which are no more than « opinions », the truth of which remains open to debate. It is possible to develop syllogisms on this basis, but the quality of the conclusion will not reach higher than that of the premises. Dialectics however has its own special method of proof, called induction, where a general principle is inferred from particular cases. One of the examples provided by Aristotle is the following : « Induction is the progress from particulars to universals ; for example, "If the skilled pilot is the best pilot and the skilled charioteer the best charioteer, then, in general, the skilled man is the best man in any particular sphere". Induction is more convincing and clear and more easily grasped by sense-perception and is shared by the majority of people, but the syllogism is more cogent and more efficacious against argumentative opponents » (Topica, 105a 10-13).

Rhetorics has its own weak form of the syllogism, the enthymeme, and its own weak form of induction, the isolated example : the « case ». It is also the only methodology of persuasion which would resort to analogy, of which figures of speech such as the metaphor are degenerate forms.

What we call today « analogy » was for the ancient Greeks the paradigm ; the Greek analogia is our « proportion » : a is to b as c is to d. The syllogism is thought of by Aristotle as a proportion where a is to b as b is to c. This is an example of a continuous proportion, with three terms only instead of four, « b » being here the « middle term » which allows a conclusion where c is linked to a. Thus in Aristotle's own terms : « That a discrete proportion has four terms is plain, but so also has a continuous proportion, since it treats one term as two, and repeats it : for example, as the line representing term one (PJ : say of length 32) is to the line representing term two (PJ : say of length 8), so is the line representing term two (PJ : of length 8, as was just said) to the line representing term three (PJ : say of length 2) ; here the line representing term two is mentioned twice, so that if it be counted twice, there will be four proportionals (PJ : 32 / 8 = 8 / 2, 8 being the geometric mean of 32 and 2) » (Nicomachean Ethics, V, iii, 8-9).
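The continuous proportion with my illustrative lengths 32, 8 and 2 can be checked in two lines of Python ; 8 is indeed the geometric mean of 32 and 2 :

```python
a, b, c = 32, 8, 2

# A continuous proportion treats the middle term as two : a is to b as b is to c.
assert a / b == b / c            # 32 / 8 == 8 / 2 == 4

# The middle term is the geometric mean of the two extremes.
assert b == (a * c) ** 0.5       # sqrt(32 * 2) == sqrt(64) == 8
```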

Within these three methodologies for reasoning, Aristotle takes the utmost care to substantiate the distinctions he establishes between the various techniques used to achieve conviction, and to support the grading he assigns them. The important point as far as we are concerned - and this is why it is justified to elaborate further on this - is that mathematical proof involves not only the most perfect technique of proof which the syllogism embodies, but also the proof per impossibile, induction, analogy and even the exposition of a single « case ». This means that mathematical proof covers a range of persuasive techniques which spreads, within Aristotle's system, from the stronger to the weaker, across the divisions between Analytics, Dialectics and Rhetorics, i.e. across the divisions between techniques that suit scientific pursuit down to those suitable only within informal conversation.

Among the latter, demonstration on a single « case » is Euclid's favorite method. As Szabo sums it up : « Euclid always adopts the following scheme : he first states his theorem as a simple assertion. Then he illustrates this assertion with an example which in his eyes is of a relatively "concrete" nature. Then follows the demonstration of the assertion on the "concrete" example. Finally, at the end of the proof, he refers to the proposition initially set and reproduced at the end of the argument, writing : hoper edei deixai = quod erat demonstrandum » (Szabo 1977 [1969] : 202). Analogy is also - as most of us will remember - a major way of demonstrating in Euclid's Elements whenever geometric figures are involved.

The proof per impossibile or reductio ad absurdum has proved to be an extremely popular method of proof throughout the entire history of mathematics. It was known at the time of Aristotle, and when describing the method he explicitly refers to one of its famous applications : the proof that the side and the diagonal of a square are incommensurable : « Everyone who carries out a proof per impossibile proves the false conclusion by syllogism and demonstrates the point at issue ex hypothesi when an impossible conclusion follows from the assumption of the contradictory proposition. E.g., one proves that the diagonal of the square is incommensurable with the sides by showing that if it is assumed to be commensurable, odd become equal to even numbers. Thus he argues to the conclusion that odd becomes equal to even, and proves ex hypothesi that the diagonal is incommensurable, since the contradictory proposition produces a false result. For we saw that to reach a logical conclusion per impossibile is to prove some conclusion impossible on account of the original assumption » (Prior Analytics, I xxiii, 41a 24-34).

The developed proof of incommensurability works this way. Let us take a square with side S and diagonal D. Applying Pythagoras' theorem to the isosceles right triangle which is half of the square, with the diagonal as hypotenuse, one obtains the equality D² = 2 S².

Let us assume that we are only resorting to integers. Were the side and the diagonal to be commensurable, S and D would be prime to each other - if they are not, they can each be further divided by their common divisors until they are so. If S and D are prime to each other, only one of them can possibly be an even number, the other one being necessarily odd. Our first equality, which equates the square of D to twice the square of S, shows D² - and hence D itself, since the square of an odd number is odd - to be necessarily an even number. Therefore S must be odd.

D being even, it is necessarily twice another quantity, which we will call M. Then D can be rewritten as D = 2 M. Now in the initial equality we replace D with 2 M. What obtains is

(2 M)² = 2 S², or 4 M² = 2 S². When simplified this becomes 2 M² = S², which establishes this time that S² is even, and hence that S must be even.

We assumed ex hypothesi that D and S were commensurable integers. This assumption has been shown to have as a necessary entailment that S is both odd and even. This being impossible, we need to admit that the contradictory assumption to « D and S are commensurable » actually holds, i.e. « D and S are incommensurable ». QED
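The conclusion can be illustrated (though of course not proved) by a brute-force search : no pair of positive integers D and S satisfies D² = 2 S². A sketch in Python :

```python
def commensurable_pairs(limit):
    """Search exhaustively for integer pairs (D, S) with D*D == 2*S*S.
    The proof per impossibile guarantees that the list stays empty,
    whatever the search limit."""
    return [(d, s)
            for d in range(1, limit)
            for s in range(1, limit)
            if d * d == 2 * s * s]

assert commensurable_pairs(500) == []
```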

In the twentieth century the validity of the proof per impossibile in mathematics has been questioned by the anti-realists, in accordance with the constructivist principle advocated most prominently by the « intuitionist » movement led by L.E.J. Brouwer. Constructivism holds that « in order to establish that a mathematical object exists we must specify a procedure to posit that object » (Shanker 1987 : 44). Because the anti-realist - having the realist in mind - does not want to be derailed in her or his progress by any fancy, s/he will be careful to only progress in a systematic and rigorous manner. S/he will wish to only generate new mathematical propositions in the most unquestionable manner. It is this concern which led constructivists to reject in particular the temptation of demonstrating any theorem through reductio ad absurdum, i.e. proof per impossibile.

As Barrow observes, « The confinement of logical argument to the constructivists' dicta removes such familiar devices as the argument from contradiction (the so-called reductio ad absurdum), wherein one assumes some statement to be true and from that assumption proceeds to deduce a logical contradiction and hence a conclusion that the original assumption must have been false. If the constructivist philosophy is adopted, then the content of mathematics is considerably reduced. The results of such a descoping are significant for the scientist also. Indeed, we would have to relinquish such famous deductions as the "singularity theorems" of general relativity which specify the conditions which, when satisfied by the structure of a Universe and its material content, suffice to indicate the existence of a past moment when the laws of physics must have broken down - a singularity which we have come to call the "Big Bang". For these theorems do not construct this past moment explicitly, rather they use the device of reductio ad absurdum to show that its non-existence would result in a logical contradiction. » (Barrow 1991 [1990] : 186-187)

Mathematicians may feel that I am supporting contentious views about the status of mathematical proof by resorting preferentially to examples from Greek geometry, sometimes even anterior to Euclid. So let us turn to a mode of proof which is quite recent and plays a major role in mathematical demonstration : proof through recursion, also called (appropriately) « complete induction ». I gave a brief description of induction before, when I first mentioned the technique ; I am offering here a more complete description of the type of reasoning involved.

Here, again in Aristotle's own words : « Induction, or inductive syllogism, consists in concluding by means of one extreme that the other extreme is an attribute of the middle term. For instance, if B is the middle term between A and G, one will establish, by means of G, that A applies to B ; for this is how we effect inductions. E.g., let A stand for "long-lived", B for "that which has no bile" and G for the long-lived individuals such as man and horse and mule. Then A applies to the whole of G [for these bileless animals are long-lived]. But B, "not having bile", also applies to all G. Then if G is invertible with B, i.e. [which is only possible] if the middle term is not wider in extension, A must apply to B. For it has been shown above [II xxii, 68a 21] that if any two predicates apply to the same subject and this extreme is invertible with one of them, then the other predicate will also apply to the one which is invertible. We must however understand by G the sum of all the particular instances ; for it is by taking all of these into account that induction proceeds. This kind of syllogism generates prototypical or immediate premises. Where there is a middle term, the syllogism proceeds by means of the middle ; where there is not, it proceeds by induction. There is a sense in which induction is opposed to syllogism, for the latter shows by the middle term that the major extreme applies to the third, while the former shows by means of the third that the major extreme applies to the middle. Thus by nature the syllogism by means of the middle is prior and more informative ; but syllogism by induction is more apparent to us » (Prior Analytics, II xxiii, 68b 15-19).

Let us make sure we understand how this operates, lest we fail to see why Aristotle regards induction as a weak type of proof - good enough no doubt for public debating in court or in the public assemblies, but inappropriate as part of a scientific demonstration. Here is the syllogism which results from the example above :

To be long-lived belongs to animals that have no bile (A belongs to B)

To be bileless belongs to man, horse and mule (B belongs to G)

Being long-lived belongs to man, horse and mule (A belongs to G)

The first premise we do not know initially : it is the proposition which we will actually elicit through reasoning. What we start from are the second premise and the conclusion, both held to be true. Induction is precisely the generation of the first premise on the strength of its compatibility with the other two components of the syllogism. With the aid of the minor (G = man, horse, mule), it is shown that the other extreme, the major (A = to be long-lived), belongs to the middle term (B = to have no bile).

The argument is weak because there is no genuine logical compulsion in the derivation of the first premise from the second one taken together with the conclusion. For instance, the conjunction of having no bile and being long-lived may be entirely coincidental ; horse, mule and man might have a million other things in common which could explain their being long-lived ; again, long life may have different causes in the three types of animals ; and so on.
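The mechanics of the inference can be sketched in set terms - a hedged illustration, in which the finite extensions and the variable names are choices of mine :

```python
# Hypothetical, finite extensions for Aristotle's example :
# A = "long-lived", B = "that which has no bile", G = the enumerated instances.
long_lived = {"man", "horse", "mule"}
bileless = {"man", "horse", "mule"}
G = {"man", "horse", "mule"}

# Induction in Aristotle's sense : B applies to all G, A applies to all G,
# and G is coextensive ("invertible") with B ...
premises_hold = G <= bileless and G <= long_lived and bileless <= G

# ... from which one concludes that A belongs to B. Note the weakness the
# text describes : the conclusion holds here only because the extensions
# happen to coincide, not out of any logical compulsion.
conclusion_holds = bileless <= long_lived
```

In this toy extension the conclusion follows trivially, which is precisely the point : nothing in the form of the argument guarantees it beyond the instances enumerated.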

But, as I have already said, induction is central to modern mathematics. I referred earlier to « computability theory » as developed by Turing, Church and Kleene and to the central role this theory plays in computer science. « Computability theory » is also known under the name « recursive function theory », and recursion is but another name for « complete induction ». Here is an example of how it operates.

We know how to generate the square of a number : we multiply it by itself. Let us denote by N the series of the natural numbers (the positive integers), with a_n being one of the series - which we write a_n ∈ N - and let s_n, which is also a natural number (s_n ∈ N), be such that

s_n = a_n × a_n

This constraint is legitimate as it is a property of natural numbers that the multiplication of a natural number by a natural number generates a natural number.

Now I can generate the series of the squares of natural numbers by assigning to a_n consecutive values in the series of the natural numbers : 1, 2, 3, ...

What obtains is

s_0 = 1 × 1 = 1

s_1 = 2 × 2 = 4

s_2 = 3 × 3 = 9

.....

But let us define the series of the squares of the natural numbers in a different way instead : recursively or - which is the same - through complete induction, i.e. one from the other. We are going to define each square of a natural number as a function of the square of the natural number immediately smaller, i.e. of its immediate predecessor in the series N : let us define s_{n+1} through s_n. For example, the square of 5 as a function of the square of 4 :

Square(5) = f[Square(4)]

In the simpler cases of recursive definitions, two equations are sufficient to define the series. In more elaborate cases more than two are needed. For a recursive definition of the square, two will be sufficient. The first equation always applies to the « bottom line » position, to the starting point of recursion.

Let us assume that a_0 stands for the natural number « 1 » and that accordingly a_1 will stand for « 2 », etc. The square of 1 is 1. Thus we are in a position to write s_0 = a_0 = 1.

What is the square of 2 ? The square of 2 is 4. How do we move from the square of 1 (= 1) to the square of 2 (= 4) ? Clearly we need to add 3 to the square of 1 to obtain the square of 2. What we have got so far is a_0 = 1 ; s_0 = 1 ; a_1 = 2 ; s_1 = 4. And what we wish to do is to define s_1 as a function of s_0. Thus, if we may say so, s_1 is s_0 to which something has happened, e.g. that 3 has been added to it.

As a matter of fact we have got the elements which allow us to do precisely that : 3 is 2 + 1. So let us write

s_1 = s_0 + a_1 + a_0, which can be interpreted by replacing the symbols by the values we have assigned them : Square(2) = Square(1) + 2 + 1.

Therefore we have found a formula which allows us to derive the square of 2 from the square of 1.

Here is where the process of induction begins : we are making a rough guess that there might be a general principle lurking here. Could the same formula work again the next time up ? In order to find out we move each index number up by one :

s_2 ? = s_1 + a_2 + a_1

Let us check : Square(3) ? = Square(2) + 3 + 2, or 9 ? = 4 + 3 + 2 = 9.

It does work. We have now established that the procedure works in two cases : for 2 and for 3. Here now is the expression of the principle of complete induction : let us assume the formula works in all cases.

Our first equation was s_0 = a_0 = 1.

Let us generalize our second equation in such a way that it applies for any value but zero assigned to the index n (i.e. for n = 1, 2, 3, ...) : s_n = s_{n-1} + a_n + a_{n-1}.

Let us try it once more, this time with n = 3, which means that we obtain the square of 4, as we started numbering indices with n = 0 :

Square(4) ? = Square(3) + 4 + 3 or 16 ? = 9 + 4 + 3 = 16.

Of course we are somewhat cheating, to the extent that in our checking we are building on results previously established. But with a computer we could easily implement our recursive function as a procedure and, for instance, assign n = 16,983, which would generate the square of 16,984 recursively. This implies the following (which students in computer science remember from their introductory lectures on the subject). The program looks up s_n, representing the square of 16,984. It discovers that to find this out it needs first the value of a_n. This is immediate : it is 16,984. It needs also the value of a_{n-1}, which is easily calculated as being 16,983. Finally it needs the value of s_{n-1}, the square of 16,983, and this it does not know.

How will it find s_{n-1} ? By calling the procedure « recursively », i.e. by assigning n = 16,982, which would generate the square of 16,983. From here, it drills down in a lengthy process of deferred operations. At some point on the way down, the procedure will call one s_{n-1} which happens to be s_0 - as soon as n = 1 - the value of which it knows already : s_0 = a_0 = 1. From this point on the program is able to go back all the way up, carrying out the evaluation of squares until the square of 16,984 is generated. At no point has 16,984 been multiplied by itself to obtain the value of its own square : each square in the series leading to 16,984 has been computed as a function of the square of the natural number which immediately precedes it in the series of natural numbers.
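The deferred-operations process just described can be sketched as a short recursive procedure - an illustration only : the function name, the use of Python rather than Lisp, and the choice of indexing directly by the number whose square is sought (rather than by position in the series) are mine :

```python
import sys

def square(n):
    """Square of the natural number n, computed recursively as in the text :
    n^2 = (n-1)^2 + n + (n-1), with the square of 1 as the base case."""
    if n == 1:
        return 1  # the "bottom line" of the recursion : s_0 = a_0 = 1
    # deferred operation : the call drills down to the base case,
    # then evaluates the squares all the way back up
    return square(n - 1) + n + (n - 1)

# For a value as large as 16,984 the interpreter's default recursion
# limit must be raised - a first hint of the inefficiency discussed below.
sys.setrecursionlimit(20000)
```

At no point does the procedure multiply n by itself : each square is computed from its predecessor, exactly as the text describes.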

Of course, there is no point in calculating squares in this manner. Immediately after students in computer science have been introduced to recursive functions, they are warned that recursive procedures are most often an inefficient computing method : they can be slow and greedy of memory resources. The programming language Lisp is based on the principle of recursion. I mentioned earlier that « computability theory » is based entirely on the principle of recursion. In fact all elementary functions of arithmetic can be defined recursively (Ladrière 1992 [1957] : 80, 84). The definition of the natural numbers N within set theory - which Zermelo and von Neumann proposed - itself invokes a recursive function (Ebbinghaus 1991 : 361-362 ; Mainzer 1991 : 14-15). Also, recursive functions are central to the most famous mathematical (or meta-mathematical) proof of the twentieth century, the so-called « Incompleteness (of Arithmetic) Theorem » of Kurt Gödel, establishing that « there are ... relatively simple problems in the theory of ordinary numbers which cannot be decided from the axioms » (Gödel 1992 [1931] : 38). I will come back to this theorem in connection with another of its features.
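The recursive definition of the natural numbers alluded to here - von Neumann's construction, in which each number is the set of its predecessors - can be sketched along the following lines (a minimal illustration, with the function name of my choosing) :

```python
def von_neumann(n):
    """The set-theoretic natural number n, defined recursively :
    0 is the empty set, and n+1 is n together with {n}."""
    if n == 0:
        return frozenset()          # 0 = {}
    prev = von_neumann(n - 1)
    return prev | {prev}            # n = (n-1) U {n-1}

# 3 = {0, 1, 2} : its cardinality equals the number it encodes
three = von_neumann(3)
```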

Do we perceive now why induction would have been assigned to Dialectics by Aristotle, i.e. treated as a method of proof which does not suit scientific demonstration ? First, we have noted that there is something in the method which reminds one of a « rule of thumb », something that looks like a « clever trick ». This was probably all the clearer since the example I chose was somewhat counterintuitive : why indeed bother to calculate squares in this manner when there is an obvious and direct way of doing so ? But more importantly, this type of proof is typically not « full proof ». We have tested our recursive function on two cases and have been satisfied that it « plausibly » applies therefore to all cases. It is this « plausibility » which forces the link with Dialectics, which can be defined here, in the spirit of Aristotle, as being in the likeness of « proper logic » (Analytics) but operating from a plausible basis only. Henri Poincaré, who played a central role in the introduction into mathematics of the principle of « complete induction », wrote about it that « the rule of reasoning through recursion cannot derive from experience ; what experience could tell us is that the rule is true for, say, the first ten or the first hundred numbers, but it cannot reach the indefinite sequence of all numbers, only a shorter or longer portion of such a sequence - but always a limited one » (Poincaré 1906 : 23). Although we can be « pretty confident » in the method once it has been verified in a number of instances, no basis has been set that would establish that it will always work.

4. « The same in some respect »

My aim was not of course to devise here a full catalogue of the modes of proof in mathematics ; my point was simply to show that there exist in mathematics a number of ways of proving mathematical propositions, the principles of which are in no way mysterious. Indeed it has been possible to take as our reference the types of logical proof which Aristotle catalogued and then ranked according to their persuasive power, as measured by the tightness with which the arguments used support the point being made.

Having shown this, I believe I have shed some light on a question which has bothered for many years those philosophers and mathematicians who try to understand the foundations of mathematics : if what appears on both sides of the equality sign is always « the same », is mathematics anything more than a huge tautology, an endless repetition of the same ? And in this case how could it possibly teach us anything about the world ?

One classical answer to this question is the Pythagorean response that the World, being itself nothing but Numbers, is indeed a huge Tautology, a huge repetition of the same. What I have shown is that mathematical demonstration is no such automatic process : something does take place when moving from the right-hand side of the equation to the left-hand side. The French philosopher of science Emile Meyerson used to say that both sides of the equation are the same « sous un certain rapport » : « in some respect ». What this « some respect » is more precisely, Meyerson tried to explore fully. In De l'explication dans les sciences he wrote the following : « A = A is always followed in our mind with a sort of appendix which begins with "although..." or "despite the fact that ...". There must be something, some circumstance, which makes the second A differ from the first, and what the proposition states is that from the point of view which is currently mine, this circumstance is of no influence... we have decided to regard as negligible what we noticed was different » (Meyerson 1995 [1927] : 174, 178). In a similar vein, Poincaré had commented that mathematics « are the art of assigning the same name to different things » (quoted in Daval & Guilbaud 1945 : 124).

There is a clear difference however between « 0 = 0 » and « 5 - 5 = 0 ». In the first case nothing whatever has happened ; in the second case something has happened - such as distances being covered, first to the right, then to the left. Something of the same nature can be said about « 2 + 2 = 4 » : 2 and 2 remain 2 and 2, but 2 plus 2 become 4 because we have decided to ignore from now on what had justified earlier that the first 2 and the second 2 be envisaged separately. Again, « 1 + x = x » tells us something : that « x » is not here any kind of number, that it must be a transfinite number, otherwise the addition of 1 would make a difference to it ; the equality sign here is not passive, it reveals by its presence the special quality of the « x » mentioned.
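The pattern described for « 1 + x = x » can be mimicked, in a loose way, with the floating-point infinity of a programming language - an illustration only, since IEEE infinity is not a transfinite cardinal :

```python
# Illustration only : float("inf") is not a transfinite number, but it
# displays the behaviour the text ascribes to « x » : adding 1 makes
# no difference to it, whereas for an ordinary number it does.
x = float("inf")
special = (1 + x == x)      # the equality reveals the special quality of x
ordinary = (1 + 5 == 5)     # false : for a finite number, 1 changes things
```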

5. The consequences of « realism » vs. « anti-realism » for twentieth-century mathematics

So much for the equality sign ; but even more dramatic transformations are allowed to take place within the operation of mathematical proof. Let us return to things we have examined in some detail before. Remember the proof per impossibile, or through reductio ad absurdum. I remind you that the entire mechanism of the proof relies on the fact that a premise is envisaged ex hypothesi under a particular form, a positive or a negative expression, and that as a consequence the conclusion shows itself to be true under both its positive and its contradictory negative expressions. This being impossible, one is forced to adopt the contradictory of the premise which was being tested ex hypothesi rather than the form under which it was initially tested. Typically then the consequence of the proof per impossibile is that one of the axioms of the mathematical theory needs to be modified to re-establish coherence. But in the twentieth century an alternative to this strategy was devised : denying that there is anything wrong with any of the axioms and asserting that the « impossible » proposition is not « impossible » after all but simply « undecidable » within the particular system where the impossibility arises. This is exactly the case within « computability theory », otherwise known, as we have seen before, as « recursive function theory ». In Gödel's « Incompleteness Theorem » the theorem is proven once it gets established that « neither A, nor not-A » holds (both being « well-formed » propositions within the system, i.e. composed of valid symbols, combined according to valid syntactic rules).
Literally speaking of course, establishing « neither A, nor not-A » is not strictly equivalent to establishing « A and not-A », but my point here is similar to the one about « 1 + x = x » : mathematics always make it possible to redefine the circumstances as being different « in some respect », and to redefine the rules of inference within a particular and well-defined context.

One way of showing fully the difference between the realist and anti-realist, or platonist and constructivist, positions consists then in examining how each would read the following phrase : « an unprovable true mathematical proposition ». The thing to keep in mind is the distinction between mathematics elicited as part of a process of discovery, for the platonist, and mathematics generated as part of a process of invention, for the constructivist. For the former, « an unprovable true mathematical proposition » needs to be understood as something like an « unreachable distant planet ». There exist mathematical propositions, and these are nothing but accurate statements about the entities which populate the mathematical universe. For some reason it happens that one of these propositions is unprovable, in the same manner as a planet may be unreachable for some practical reason. For the constructivist, who holds that « in order to establish that a mathematical object exists we must specify a procedure to posit that object » (Shanker 1987 : 44), a mathematical proposition which cannot be proved is no mathematical proposition at all. Shanker observes about Wittgenstein - who, in this respect at least, is very close to the constructivist opinion - « The brunt of Wittgenstein's argument is that to describe a mathematical expression as unprovable is to deny that it is a mathematical proposition : i.e. that it is intelligible » (Shanker 1988 : 230). Indeed, without a demonstrative path leading to a mathematical proposition, the proposition is non-existent. In the terms of the French philosopher Jacques Bouveresse, « Wittgenstein holds that in mathematics the path is more important than the destination or, more exactly, that in some way the path is the destination itself » (Bouveresse 1988 : 210). Or, as Shanker synthesizes the two opposing views : « For (platonists) proof is reduced ... to a trivial appendage introduced for the benefit of the incredulous or less gifted ; whereas for the (constructivists) proof constitutes the quiddity of mathematics » (Shanker 1988 : 185).

Now, the reason I chose as an example to test both the platonist and the constructivist on the phrase « an unprovable true mathematical proposition » is that this notion is central to the demonstration of Gödel's « second theorem », « On formally undecidable propositions of Principia Mathematica and related systems ». As we will see, the theorem is only meaningful within a platonist understanding of mathematics.

Gödel's theorem aims at showing the incompleteness of arithmetic, i.e. that it is possible to design a true proposition in arithmetic which can neither be proved nor disproved, the latter meaning that its negation cannot be proved either. Having been shown to be true but not provable, the proposition is called « undecidable ». The demonstration by Gödel of his theorem is centered on a method devised by Cantor and called « diagonalisation ». When the method was first introduced it was clear that it had such devastating consequences that it had either to be banned or by-passed. I will go into some detail about it, as it provides a wonderful test-case for the « foundational » issue of realist versus anti-realist and offers me a way of introducing the first part of my argument about mathematics as « virtual physics ».

Diagonalisation was used by Cantor to show that it is impossible to establish a one-to-one correspondence between the set of integers and the set of real numbers between 0 and 1. I am quoting from the very precise account of diagonalisation provided by Ladrière : « Any real number between 0 and 1 can be written as an unlimited fractional decimal number of which the integer part is zero. Let us assume that all fractional numbers of the sort have been listed in such a manner as to make an infinite series, i.e. are made to correspond one-to-one with the infinite series of integers. This list of numbers looks like the following array :

1 | 0. a_1 b_1 c_1 d_1 ........

2 | 0. a_2 b_2 c_2 d_2 ........

3 | 0. a_3 b_3 c_3 d_3 ........

........................................................................

n | 0. a_n b_n c_n d_n ........

........................................................................

where symbols such as a_1, b_1 stand for integers.

It has now become possible to determine one such unlimited fractional number between 0 and 1, which does not belong to the array.

In order to create such a number, one chooses for the first decimal the integer which obtains by adding one unit to the first decimal of the first fractional number in the list, for the second decimal the integer which obtains when adding one unit to the second decimal of the second fractional number in the list and, generally, one chooses as nth decimal the integer which obtains when adding one unit to the nth decimal of the nth fractional number in the list (there is an additional convention that one unit added to 9 makes 0).

One understands that in this construction, it is the main diagonal of the array (which passes through a_1, b_2, c_3, etc.) which is reproduced with one unit added to each of its constitutive figures.

The new fractional number which obtains is

0. (a_1 + 1) (b_2 + 1) (c_3 + 1) .... (n_n + 1) ....

which is necessarily distinct from every fractional number in the list, as it necessarily differs from the first in its first decimal, from the second in its second decimal, etc. » (Ladrière 1992 [1957] : 77).
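Ladrière's construction can be mimicked on a finite sample - the digits below are hypothetical, and only the first five decimals of five listed numbers are shown :

```python
# A finite sketch of the diagonal construction. Each row stands for the
# first decimal digits of one listed real between 0 and 1 (sample values).
listed = [
    [1, 4, 1, 5, 9],   # digits of 0.14159...
    [7, 1, 8, 2, 8],
    [3, 3, 3, 3, 3],
    [5, 0, 0, 0, 0],
    [9, 9, 9, 9, 9],
]

# n-th digit of the new number : the n-th digit of the n-th listed number
# plus one unit, with the convention that one unit added to 9 makes 0.
diagonal = [(row[i] + 1) % 10 for i, row in enumerate(listed)]

# The new number differs from the i-th listed number at its i-th decimal,
# so it can appear nowhere in the list.
assert all(diagonal[i] != listed[i][i] for i in range(len(listed)))
```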

To sum up the implications of the diagonalisation method : it involves some sort of « paradox ». Although one is working consistently at establishing a complete list of numbers of a particular type, diagonalisation suggests a manner of devising a number which cannot be in the list, although it is obviously a number of the type which one would expect to be fully catalogued in the list. And here again, one can react in one of two manners, one typical of the realist, one typical of the anti-realist. For the latter, there is no doubt that the method is spurious. Indeed you have defined your list as being composed of a particular type of numbers - which you generate (or invent) in a particular manner. That diagonalisation seems to suggest there might be other numbers « on top of them all » shows the method to be unreliable and self-defeating. Besides, this can be firmly established by showing that a double diagonalisation generates a « fixed point ». Let us define relationships on X, Y, Z of the type « X likes Y », which we note « XY » ; the negation of « XY » we note Neg(XY), which would be interpreted as « X does not like Y ». If we can determine a Z such that ZX ↔ Neg(XX), then by substituting Z for X we automatically obtain ZZ ↔ Neg(ZZ), meaning that a proposition entails its negation, which is of course logically unacceptable (Marchal 1988 : 206-207).
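That the « fixed point » equivalence (ZZ holding if and only if Neg(ZZ) holds) admits no consistent truth value can be checked mechanically - a minimal sketch, with the helper name mine :

```python
# A brute-force check that no truth value can be assigned consistently
# to ZZ when ZZ is equivalent to its own negation.
def iff(p, q):
    # material equivalence : p if and only if q
    return p == q

# try both possible truth values for ZZ ; neither satisfies ZZ <-> not ZZ
no_model = not any(iff(zz, not zz) for zz in (True, False))
```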

Conversely, if I happen to be a realist I will say that diagonalisation has revealed the existence of a new and unforeseen type of numbers, and I will try to find out what these new numbers mean. I.e. I will not say that diagonalisation is spurious because it suggests that there are some numbers in addition to all numbers ; I will say instead that diagonalisation has revealed the existence of a new type of numbers - so far undiscovered.

I mentioned earlier, about Gödel's « second theorem », that Gödel was happy with the notion of an « unprovable true mathematical proposition », this being what his theorem reveals. This suggested that Gödel was a realist, or platonist, and consequently we are not surprised to learn that he was perfectly happy to use the method of diagonalisation. I imagine that we are not surprised either when told further that he regarded the « fixed point » of diagonalisation not at all as the ultimate proof of the spuriousness of diagonalisation as a method, but as « a kind of miracle ».

As an anti-realist you hold that the truth of a theorem depends in an essential manner on its being provable. If then, having accepted provisionally to use the method of diagonalisation, you are able to establish through it that a proposition is both true and non-provable, this establishes in your eyes - and as you suspected - that diagonalisation as a method of demonstration is « spurious ». Alternatively, you decide as a realist that diagonalisation - which had already revealed some of its astonishing powers when Cantor used it - has revealed one additional important fact, i.e. that propositions may be both true and unprovable within a particular domain of mathematics, and this signals to you that this domain - arithmetic - is incomplete.

Diagonalisation can be regarded therefore as either spurious or as providing the key to a new world. Double diagonalisation creates a « fixed point » : this is either a further aggravation or a discovery of mystical proportions. The platonist is of course delighted to find out that some methods - even of intriguing status - open new universes, and wants to show that the new objects thus revealed can be used to model the world. The anti-realist is upset because the method has a dubious status, and refuses to use it, or even wishes that the parts of mathematics so produced be removed.

But if we follow here « realism » or « platonism » - which, as I observed earlier, is very much the dominant view among mathematicians - what are the safeguards against an « anything goes » type of mathematics, where every incongruity is bulldozed over and supposedly opens a new mathematical sub-universe ? Aren't we then depending on the extent to which a mathematician manages to convince the mathematical community that an obstacle can be overcome through astute redefinitions ?

In this vein Tymoczko imagines an especially gifted mathematician who got into the habit of omitting from the demonstrations she proposes increasingly large portions under the pretext that they are too lengthy and tedious and who manages however to have these demonstrations accepted on the grounds of her reputation by people who haven't got the talent to reconstruct them anyway (Tymoczko 1979). But is Tymoczko staging here a hypothetical future or a description of current mathematical practice ? Look at the following sentences borrowed from a book entitled Computability. An introduction to recursive function theory : « Note immediately that (Church's) thesis is not a theorem which is susceptible to mathematical proof ; it has the status of a claim or belief which must be substantiated by evidence. The evidence for Church's thesis, which we summarize below, is impressive. 1. The Fundamental result : many independent proposals for a precise formulation of the intuitive idea have led to the same class of functions, which we have called C. [...] 2. [...] 3. [...] 4. No one has ever found a function that would be accepted as computable in the informal sense, that does not belong to C. ... etc. ... On the basis of this evidence, and that of their own experience, most mathematicians are led to accept Church's thesis. ... (It) remains (however) an expression of faith or confidence » (Cutland 1980 : 67, 70).

One cannot help wondering what Aristotle would have made of the argument that the evidence is « impressive ». Haven't we reached here the lowest end of rhetorical persuasion in terms of mathematical demonstration ?

My suggestion however is that it is precisely this lack of « full proof-ness » which makes it possible for mathematics to model the world we live in, rather than some underlying « god-given » harmony between the world of Nature and the world of Mathematics. What I am going to show next is that mathematics propose a « virtual physics », whatever the price to be paid in terms of mathematical purity.

6. Mathematics as a cultural production

There is room for an alternative view to the realist / anti-realist, or platonist / constructivist, perspectives, which the anthropologist can derive from the observation of mathematical practice. I will characterize it by saying that mathematics are a « virtual physics », and that this is what mathematicians are actually performing, whether they see themselves as platonist discoverers or as constructivist inventors. Such a view of mathematics as « virtual physics » accounts for what the anthropologist effectively observes : that methods of questionable status signal the presence of obstacles within mathematics, and that the creativity of mathematicians in by-passing such obstacles generates new mathematical objects. If mathematicians saw themselves as « virtual physicists » dealing with the « necessary properties of abstract objects unambiguously defined », they would stop worrying about how these objects have come about. The « virtual physicist » mathematician would hold the pragmatic position - which has been that of mathematics as a field - that the questionable method can legitimately be regarded as spurious or, if not, that the objects generated by it deserve the status of legitimate mathematical objects and can therefore be used for the purpose of innovative mathematical modeling.

As I see it, the anti-realist position, although it is much more epistemologically coherent than the realist, platonist, view, is at the same time much less fecund, and left to its own devices would certainly never have generated the wealth of what we know nowadays as mathematics. It has been argued that Christianity introduced modernity in the West. If this is the case, it was assuredly not the purpose of Christianity to play such a role. I believe something of the kind applies to the platonist position in mathematics.

What I have mentioned at the very beginning about the difficulties met with irrational numbers in ancient Greece is a good indication that mathematical modeling is not a straightforward pursuit, and that Wigner's view of « The Unreasonable Effectiveness of Mathematics in the Natural Sciences » requires at least a large measure of self-persuasion. Koestler used the phrase « the sleepwalkers » to refer to the first generations of great astronomers in the Renaissance. Why sleepwalkers ? Because what they achieved was not the task they supposed they were pursuing. They imagined they were perfecting astrology, purifying it of some of its inconsistencies. Actually they were digging the grave of astrology and building instead the modern rational science of astronomy.

In what way is the platonist mathematician a sleepwalker ? The platonist believes she is exploring a particular world, the world of mathematical entities. If she is a consistent platonist she will also hold - as we have seen - that this world of mathematical entities has got a privileged relationship with the empirical world. One way of putting it is that the platonist believes that once the world of mathematical entities has been mapped, the mapping of the empirical world is at least half done. Cultures have come up with widely divergent views about what constitutes the ultimate reality. The likelihood that it is numbers is however small, because numbers are clearly something we - human beings - abstract from the empirical world by - as Aristotle would see it - looking at the world in the perspective of the « category » of quantity, i.e. shedding a particular light on things, envisaging them in the perspective of « how much ? ». I would however readily concede that nobody has ever shown in a convincing manner - or in any manner at all - that reality is not ultimately constituted of numbers.

When the platonist explores the world of mathematical entities he does - as we all do in all types of activities - follow his intuition. Where does such intuition originate ? From our accumulated observations of what works and what does not work when we are going about the world. In other words, our intuition is a decision theory which has been shaped by the world - and, one may add, the hard way. In a second phase, the mathematician's intuition feeds on what works and what does not within the world of mathematical objects. This explains to me why, whether or not the platonist is actually exploring a type of reality which holds a connection with the empirical world, he is importing into his exploration an intuition which has been molded in the first place by the empirical world. This is the reason why, whatever may be the case about the world of mathematical entities, the platonist mathematician is, from inception, geared to building a « virtual physics », physics being used in this phrase as meaning no more than the science of the empirical world.

In other words, mathematical objects are neither invented nor discovered : they are a cultural production, created as part of human activity, in the same way as a price is created in a commercial transaction ; or they are conceived within a brain, i.e. within a human body, in the same way as a child is conceived. A new theorem is very much like a neologism, a new word being coined : it is a way of « chunking » some existing conceptual reality, a new way of assigning a nickname to a complex of concepts, a new way of stylizing the world. In the fourteenth century William of Ockham wondered where universal concepts come from. The answer is that they do not come from any foreign location : they are created within language through human industry. Any time a new concept - a new theorem - is created, the world is different from then on, that is, if the memory of it survives, if its transmission gets accomplished. Culture is vulnerable to lack of transmission in the same way as a species depends on the transmission of its genetic pool : culture requires « depositories » in the shape of memories contained within brains, it requires supports, in the same way as a radio transmitter is nothing without receivers.

Actually, what is implied in conceiving of novelty in human culture as either discovery or invention is that man is no intrinsic part of nature : he observes it and either discovers or invents. What he invents is supposedly not part of nature ; it is seen as artificial, meaning in some way arbitrary.

7. Mathematics as « virtual physics »

Here is a passage from a brilliantly written popular introduction to contemporary physics, John D. Barrow's Theories of Everything. The Quest for Ultimate Explanation (1990) : « The development of non-Euclidean geometry as a branch of pure mathematics by Riemann in the nineteenth century, and the study of mathematical objects called tensors was a godsend to the development of twentieth-century physics. Tensors are defined by the fact that their constituent pieces change in a very particular fashion when their co-ordinate labels are altered in completely arbitrary ways. This esoteric mathematical machinery proved to be precisely what was required by Einstein in his formulation of the general theory of relativity. Non-Euclidean geometry described the distortion of space and time in the presence of mass-energy, while the behavior of tensors ensured that any law of Nature written in tensor language would automatically retain the same form no matter what the state of motion of the observer. Indeed, Einstein was rather fortunate in that his long-time friend, the pure mathematician Marcel Grossmann, was able to introduce him to these mathematical tools. Had they not already existed Einstein could not have formulated the general theory of relativity. » (Barrow 1992 [1990] : 189). What we are being told here is that an object borrowed from « pure mathematics » made Relativity Theory possible, which would not have been the case otherwise. Moreover, Riemannian geometry being relatively recent (mid-nineteenth-century), there was here a kind of fortunate coincidence.

I have quoted this passage simply by way of illustration, a way of underlining what is commonly felt to be the parallel development of physics and mathematics. This case supports the « toolbox » view : pure mathematicians design the tools in the toolbox and physicists pick from the box the tools they need when they need them. It has been observed however that the tools so designed by pure mathematics fall into oblivion if never used, either for other theoretical purposes or in view of application.

There is a remarkable example of how a division of mathematics has developed as a « virtual physics » : the case of the differential calculus. Here the mathematics were devised by the physicists themselves, who were trying to develop a theory of motion accounting satisfactorily for speed and acceleration. The calculus derives from an intuition which arose in the Middle Ages that a similar relationship holds between distance covered and speed as between speed and acceleration. The methodology, as is well documented (Edwards 1979 : 104-127), received major contributions from Cavalieri (1598-1647), Descartes (1596-1650) and Fermat (1601-1665) and would acquire its full dimension through the concurrent and competitive efforts of Leibniz (1646-1716) and Newton (1642-1727). However the references by the creators of the tools to « ultimate differences », « quantities smaller than any given quantities » or « qualitative zeroes » (Boyer 1959 [1949] : 12), as well as the somewhat arbitrary way in which development series would be treated and some of the terms within them ignored, contributed to an image of ad hoc tinkering. It became ever clearer that in order to get the physics right the mathematics had been subjected to increasingly high degrees of degradation, that is, until the modeling looked adequate.

The calculus is no minor part of mathematics ; in the words of Morris Kline, « Next to the creation of Euclidean geometry the calculus has proved to be the most original and most fruitful concept in all of mathematics » (Kline [1959] 1981 : 363). The calculus has been in the making since Archimedes made a first attempt at designing a tool which would work for distance, speed and acceleration. Newton and Leibniz got the tool in working condition for all practical purposes. In the 1820s it was believed that Cauchy (1789-1857) had finally given the calculus a sound foundation when he provided an unambiguous definition of the concept of the limit. The limit reveals however its own trickiness as a technique in examples like the following, which the French mathematician Henri Lebesgue reported : « Formerly, when I was a schoolboy, the teachers and pupils had been satisfied by passage to the limit. However, it ceased to satisfy me when some of my schoolmates showed me, along about my fifteenth year (1890), that one side of a triangle is equal to the sum of the other two, and that π = 2. Suppose that ABC is an equilateral triangle and that D, E, and F are the midpoints of BA, BC and CA. The length of the broken line BDEFC is AB + AC. If we repeat this procedure with the triangles DBE and FEC, we get a broken line of the same length made up of eight segments, etc. Now these broken lines have BC as their limit, and hence the limit of their lengths, that is, their common length, AB + AC, is equal to BC. The reasoning with regard to π is analogous ... This exercise has been extremely instructive to me ... The preceding example shows that passing to the limit for lengths, areas, or volumes requires justification, and ... it is enough to arouse all one's suspicions » (in Heims 1980 : 71).
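Lebesgue's schoolboy paradox lends itself to a quick numerical illustration. The sketch below (plain Python ; an equilateral triangle of side 1 is an arbitrary choice) tracks the two quantities involved : the length of the zigzag path, which never changes, and its maximum height above the base BC, which shrinks to nothing.

```python
import math

# Lebesgue's schoolboy paradox, sketched numerically. On an equilateral
# triangle with base BC of length 1, the broken line B-D-E-F-C through the
# midpoints has length AB + AC = 2. Each refinement step replaces every
# small triangle by two half-sized ones : the zigzag's total length never
# changes, while its maximum height above BC is halved.
lengths, heights = [], []
length = 2.0                 # AB + AC, invariant under refinement
height = math.sqrt(3) / 2    # apex height above BC before any refinement
for step in range(8):
    lengths.append(length)
    heights.append(height)
    height /= 2              # the next generation of triangles is half as tall

print(lengths[-1])   # still 2.0 after seven refinements
print(heights[-1])   # ~0.0068 : the zigzag hugs BC ever more closely
```

The zigzags converge uniformly to the segment BC of length 1 while their lengths stay at 2 : length, as Lebesgue observed, does not automatically pass to the limit.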

It turned out however that Cauchy's efforts had not settled the question of sound foundations. Abraham Robinson - the founder of non-standard analysis - would write in 1965 : « It is generally believed that it was Cauchy who finally put the Calculus on rigorous foundations. And it may therefore come as a surprise to learn that infinitesimals still played a vital role in his system [...] Cauchy's infinitesimals still are, to use Berkeley's famous phrase, the ghosts of departed quantities » (Robinson quoted in Dauben 1995 : 363-364). Weierstrass (1815-1897) pursued the attempt at a clean up. But this still had its shortcomings, and a new claim has been made that it is Abraham Robinson's own work on non-standard analysis in the 1960s which - at long last - had set the calculus on firm foundations (Prestel 1991 : 306).

The development of the calculus in the seventeenth and early eighteenth centuries was so unashamedly untidy that Bishop Berkeley, the famous « sensualist » philosopher, could not hide his irritation and, in an essay published in 1734, called The Analyst, summoned the parties involved to tidy up their act, accusing in particular the late Isaac Newton of - as Robinson reminded us - toying with his infinitesimals as with « Ghosts of departed Quantities » (Berkeley [1734] 1992 : 199), suggesting further that any truth attained might have been through « compensation of errors ». Berkeley asked « whether such Mathematicians as cry out against Mysteries, have ever examined their own Principles » (ibid. 220). And in a comment referring to Aristotle's grading of demonstrative methods - discussed earlier by me - where the syllogism ranks as the supreme mode of scientific proof while induction is the mode of proof proper to opinion only, Berkeley observed that « in every other Science Men prove their Conclusions by their Principles, and not their Principles by the Conclusions. But if in yours you should allow yourselves this unnatural way of proceeding, the Consequence would be that you must take up with Induction, and bid adieu to Demonstration. And if you submit to this, your Authority will no longer lead the way in Points of Reason and Science. I have got no Controversy about your Conclusions, but only about your Logic and Method » (ibid. 180). Some ten years later d'Alembert would make a similar appeal for improved standards, when he wrote : « Up to the present ... more concern has been given to enlarging the building than to illuminating the entrance, to raising it higher than to giving proper strength to the foundations » (in Shanker 1987 : 261).

As Boyer observes in the conclusion of his History of the Calculus and its Conceptual Development, « Positivistic and materialistic thought were slow to accept the changed mathematical view and insisted that the calculus be interpreted in terms of velocities and actual intervals, corresponding to the data of experience and of ordinary algebra » (Boyer 1959 [1949] : 307). As opposed to the case of tensor calculus mentioned above about Einstein, where the physics were developed with the help of a newly conceived mathematical object, in the present case it was definitely the concern for accounting for bodies in continuous motion which had driven the effort to develop a satisfactory mathematical methodology, and it took something like another two centuries before the « pure » mathematicians were able to rewrite the method to their theoretical satisfaction. Similarly Shanker comments on Berkeley's onslaught in the following terms : « ... the heart of (Berkeley's) attack centered on the point that, given that mathematics quite rightly aspires to be a bona fide science, the truths which it yields demand precisely the same type of evidential support as applies to science » (Shanker 1987 : 263).

But it is Kline who comes closest to expressing the legitimate concerns about sound methodology which the calculus raises when he states : « Why did the mathematicians adopt this illegitimate child ? The answer is that it proved so immensely useful in the exploration of nature that their hearts were touched even though their minds remained critical. They had an idea that made physical sense, and, since mathematics and physical science were closely intertwined and even identified in the seventeenth and eighteenth centuries, they were not greatly concerned about the lack of mathematical rigor. One might say that in their minds the end justified the means » (Kline [1959] 1981 : 384). In actuality Berkeley was right all along : just as he claimed, some factors were eliminated from development series for no theoretically justifiable reason, and if things came out right in the end it is indeed, as he thundered, because errors were made to cancel.

What is the justification for neglecting some of the terms in the development series central to the calculus ? At first sight there is none apart from the pragmatic justification : they need to vanish to get the physics right, i.e. to get the « interpretation » of the mathematical model right when dealing with things of a physical nature. In a different context, Einstein sums up this type of argument about Brownian motion and the models used to represent it : « The success of the theory of the Brownian motion showed again conclusively that classical mechanics always offered trustworthy results whenever it was applied to motions in which the higher time derivatives of velocities are negligibly small » (Einstein 1949 : 49 ; my underlining).

That the calculus has been constructed as a physics would require a volume to demonstrate step by step. The arguments are also somewhat hairy compared to the material that anthropologists traditionally study. Being fully aware that I am incurring the risk of oversimplification, I need to make part of the argument intuitively understandable for non-mathematicians.

Let us suppose that a curve represents the instantaneous speed of a body in motion at different points in time. For some reason linked to trying to improve on Aristotle's theory of motion, we are interested in measuring the instantaneous change in speed, i.e. acceleration. Let us admit that we have realized (by trial and error) that the value we are looking for as acceleration can be derived from the function of the curve according to the following rules :

a x^n → a n x^(n-1)

a → 0

where x stands for a variable and a stands for a constant. The first part, before the arrow, is the function y ; the second part, its derivative y′.

Let us say that the curve for speed can be represented as y = x^3 ; then it follows that the value for acceleration (the value of the derivative) at every point of this curve is y′ = 3 x^2.

How do we get from one to the other ? What I am proposing here is a kind of shortcut through the history of the calculus. It is brutal but it is very much the gist of the argument, and in one way or another the manner in which mathematicians over the ages have dealt with the matter.

What we are interested in is the rate of change at one point of the curve. What is a point on the curve ? It is the smallest part of the curve you can think of. How long is it ? It is infinitely small. Let us call it, as we would say a « decimal », an infinitesimal, and represent it by ε. What happens on the curve at this point called ε ? The function of the curve being x^3, one passes from x^3 to (x + ε)^3. Thus what happens to y when one adds ε to x is the difference between (x + ε)^3 and x^3, divided by ε. How much is that worth ? It is easily calculated.

[ (x + ε)^3 - x^3 ] / ε = [ (x^3 + 3 x^2 ε + 3 x ε^2 + ε^3) - x^3 ] / ε

The first x^3 between the brackets cancels with the subtracted one. The remaining terms are then all divided by ε :

[ (x + ε)^3 - x^3 ] / ε = 3 x^2 + 3 x ε + ε^2

The notion of the derivative is very close to what I have just called « what happens on the curve at the infinitesimal point called ε ». The derivative is 3 x^2 ; what we have just obtained is 3 x^2 + 3 x ε + ε^2. How can any reasoning proceed from the second to the first ? It always goes somewhat like this :

« ε is very small, it is an infinitesimal. Raised to the second power it is the square of something very small and can thus easily be regarded as negligible. Thus the third term, ε^2, can be plainly ignored. The second term is ε multiplied by three times x ; as ε is infinitely small, anything multiplied by it becomes also very small and can be ignored as well. Thus only the first term survives, and the value for the derivative is 3 x^2 ».
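The reasoning just quoted can be checked numerically. In the sketch below (plain Python ; the sample point x = 2 is an arbitrary choice), the difference quotient [ (x + ε)^3 - x^3 ] / ε equals 3 x^2 + 3 x ε + ε^2 exactly, and shrinking ε shows the two « negligible » terms fading away.

```python
# Difference quotient for y = x**3 at the arbitrary point x = 2 :
# [(x + eps)**3 - x**3] / eps equals 3*x**2 + 3*x*eps + eps**2,
# and approaches the derivative 3*x**2 = 12 as eps shrinks.
x = 2.0
for eps in (1.0, 0.1, 0.001, 1e-6):
    quotient = ((x + eps) ** 3 - x ** 3) / eps
    expanded = 3 * x ** 2 + 3 * x * eps + eps ** 2
    print(eps, quotient, expanded)

# The quotient differs from 12 by exactly 3*x*eps + eps**2 :
# the two terms the classical argument declares negligible.
```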

One can then easily verify that the formulae

a x^n → a n x^(n-1)

a → 0

will always apply.

Because typically one is interested in what is taking place between a [... + x^n + ...] / ε and a [... + (x + ε)^n + ...] / ε, in the difference between the two, the x^n and the first term in the development of (x + ε)^n will cancel and the second term will be of the type (n x^(n-1) ε) / ε, i.e. n x^(n-1), while all the other terms involve ε and are regarded as negligible.
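A small numerical check (plain Python ; the sample point x = 1.5 and the coefficients chosen are arbitrary) shows the same cancellation pattern yielding a n x^(n-1) for any power :

```python
# Difference quotient for y = a * x**n : as eps shrinks, the quotient
# approaches a * n * x**(n-1), the rule stated above.
def quotient(a, n, x, eps):
    return (a * (x + eps) ** n - a * x ** n) / eps

x, eps = 1.5, 1e-7
for a, n in [(1, 2), (5, 3), (2, 5)]:
    predicted = a * n * x ** (n - 1)
    print(a, n, quotient(a, n, x, eps), predicted)
```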

And such is the prototypical case of ignoring terms deemed negligible, which is one of the lax procedures condemned by Berkeley as arbitrary.

The second type of lax procedures typical of the calculus is the cancellation of errors. Why would it be justifiable to let errors cancel ? The answer is provided here by Vaihinger from within his « philosophy of "as if" » : « Berkeley rendered a great service in pointing out these contradictions in the method of fluxions (the term used for tangents to curves and for derivatives to functions) [...] He exhibits in detail the device by which the mathematicians attained their results, namely by committing a double error. Instead, however, of recognizing in this brilliant discovery, which is more profound than the discussions of the problem by Newton and Leibniz, the reason for the correct result and the justification for its application, he rejects the whole method as illogical, as contrary to the traditional code of logic. [...] He actually had the key in his hands ! [...] For us this method follows quite naturally from our principle and forms only part of the general fictive methods of thought. The auxiliary quantities drop out later. [...] The real solution of the secret lies in the fact that dx and dy in one case = 0, in another = something real, even though this is thought of as very small. [...] The infinitely small is a fiction. It is true that by means of this fiction (which is justified by the method of antithetic error), the world of reality can be broken up into its elements ; and this makes progress in calculation possible » (Vaihinger 1924 : 117-119). For Vaihinger, things are thus simple, all difficulties encountered by Newton and Leibniz have the same source : the two men did not recognize the « fictions » they had to introduce in their calculations as being such, i.e. some « auxiliary quantities [to be] drop[ped] out later ». Thus typically, in the example given above, we proceed with the infinitesimal ε, and once we are finished, we ignore all terms where ε still appears.

But if it is indeed the case that we introduce a fiction for the ease of calculation and are later on allowed to ignore it, how come, as Vaihinger also says, in one case dx and dy equal 0 and in another case equal something real ? How would we know which is which, and in what case ?

The answer is straightforward : what we have hit upon here with the calculus is the fact that the calculus was not developed as a method of mathematics but as a method of physics. In other words the mathematics had to be brutalized until it produced good physics. Now there is a possible rejoinder to what I have just said : that, as we know, the calculus provides not only rates of change - relationships between distances covered, speed, and acceleration - but also relationships between entirely different types of entities, such as relations between shapes, areas and the volumes they generate through rotation. The reply to this rejoinder is that this particular part of geometry which deals with areas and volumes of shapes is itself pure physics. In other words, the calculus is a method of physics, whether the physics is that of bodies in motion, or the plainer one of figures drawn on a bit of paper, or of shapes rotated in space. That there exist different parts of the physical world which can be accounted for with the same methods is indeed part of this very fortunate state of affairs which makes mathematical modeling a fruitful pursuit. But nothing more can be said.

Let me return to some comments made by Morris Kline which I cited before, firstly about the calculus having « proved to be the most original and most fruitful concept in all of mathematics » ([1959] 1981 : 363), secondly when he wrote that « They had an idea that made physical sense, and, since mathematics and physical science were closely intertwined and even identified in the seventeenth and eighteenth centuries, they were not greatly concerned about the lack of mathematical rigor » (ibid. 384). Thus Kline holds both that the calculus is « the most original and most fruitful concept in all of mathematics » and that it displays a « lack of mathematical rigor ». How could that be without being a pure contradiction ? For the reason I have just given : because in the calculus the mathematics in the method had to surrender to the demands of the physics. This, to a large extent, Kline betrays in his own exposition of the calculus.

Kline writes for instance (my underlining), about calculating the maximum height reached by a projectile : « the argument that the velocity must be zero at the highest point is a physical rather than a mathematical one. Moreover, this argument certainly does not apply to maximum and minimum problems in which velocity is not involved. But physical thinking has given us a most important lead » (ibid. 376). Thus in this case it is the physics rather than the mathematics which indicates the solution to a physical problem. Now about refraction : « ... with the calculus the derivation of the law of refraction is almost immediate. Of course this law is a physical one, and new physical facts are not derivable from mathematics alone » (ibid. 377). Except that this is precisely what is supposed to take place with the calculus : the calculus is supposed to entail that the physical facts are derivable from the mathematics alone. And this is indeed what Kline claims elsewhere : « We started with a formula for acceleration and by purely mathematical processes derived the formula for velocity and then for distance [...] Mathematical deduction takes over the role of physical reasoning. Thus we see how with the enlargement of mathematical ideas and techniques the power of science to deduce physical knowledge is strengthened » (ibid. 380-381 ; stress through italics by the author himself).

« Mathematical deduction takes over the role of physical reasoning », writes Kline, and this is for one good reason - of which we are by now fully aware : because the mathematics have been tormented until they have become « virtual physics », they have been made to comply with the way the empirical world is.

8. Mathematics and physics

Szabo observes that at the time of Euclid « it all happens as if dialectics and mathematics had not yet fully separated, as if in those days, mathematics were still a branch of dialectics » (Szabo 1977 [1969] : 262). Aristotle said, « There is a close resemblance between dialectical and geometrical processes » (Topica : 159a 1-2). Dialectics, I remind the reader, comprises the persuasive techniques to be used in court or in the public assemblies, as opposed to Analytics, which presides over scientific demonstration, and Rhetorics, which can be used in oratory or even in everyday conversation.

In those days, arithmetic dealt with numbers and geometry with lengths, areas and volumes. There was no doubt in Aristotle's mind that mathematics were a science of the natural world. Thus arithmetic was not, as it is today, seen as a « filling » for the empty and abstract categories of algebra. Differing in this respect from the Babylonians, the Greeks would not, for instance, mix numbers belonging to distinct geometric spaces, like lengths and areas (cf. van der Waerden 1983 : 72). To the contemporaries of Plato and Aristotle (Euclid is one of them), it would have been, for example, anathema to equate the « 9 » which results from adding « 1 » unit at the end of an « 8 » length with the « 9 » which is the bi-dimensional square of « 3 », the area of a square with sides of length « 3 ».

Thus mathematics has beyond any possible doubt its roots within the empirical world : in whatever has to do with numbers, or what Aristotle regarded as the « category » of quantity (as opposed to other categories such as quality, time, place, configuration, etc.), and in that of disposition or configuration, which governs geometric shapes or the apprehension of the world in terms of directed lengths and remarkable proportions between directed lengths.

The most consistent and thoughtful effort in this century to understand the functioning and development of mathematics is certainly that of Wittgenstein. After having indulged in the errors of ultra-formalism, he spent the rest of his life trying to determine which of these youthful pronouncements were excessive and which were not. Never, however, did he separate the syntactic scaffolding of mathematics from its inscription in the empirical world, i.e. what one may justifiably call the « physics of the natural numbers ». A typical example of a question addressed in Wittgenstein's lectures is « What sort of proposition is "there are three 7s in π" ? » (Ambrose 1979 : 198). Although examples like this, involving π, recur in his lectures, never - to my knowledge - does he show any interest in the physics, the very properties, of π. Never does he wonder for instance how it could possibly be the case that π is at the same time the ratio of the circumference of a circle to its diameter, the limit of an amazing variety of converging series, or why it is involved in a most remarkable relationship with some of the other constants which the numbers produce between them as their « singularities » : e^(iπ) = -1, where e is the base of the natural logarithms and « i » the imaginary square root of -1.
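The two properties of π just mentioned are easy to exhibit with a few lines of code. The sketch below (plain Python standard library ; Leibniz's series is one arbitrary choice among the « amazing variety » of series converging on π) checks the identity e^(iπ) = -1 and approximates π as the limit of a series.

```python
import cmath
import math

# e**(i*pi) = -1 : the « singularity » tying together e, pi and i.
z = cmath.exp(1j * math.pi)
print(z)  # -1, up to a rounding error of order 1e-16 in the imaginary part

# pi as the limit of a converging series, here Leibniz's :
# pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
approx = 4 * sum((-1) ** k / (2 * k + 1) for k in range(200_000))
print(approx)  # close to math.pi (error of order 1e-5)
```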

Similarly, when Gödel introduces « Gödel numbering » in order to operate the diagonalisation I discussed earlier, he relies implicitly, for his numbering method, on the distinction between those natural numbers which are prime and those which are not : a feature reflecting what I call the « physics of the natural numbers ».

Alan Turing, a major contributor to « computability theory » and to the early days of the electronic computer, devoted the final years of his life to an unfinished theory of embryology involving the numbers of the Fibonacci series, whose convergent ratio is φ, the golden section (Hodges 1985 [1983] : 437).
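The convergence of the ratio of consecutive Fibonacci numbers toward the golden section can be verified in a few lines (plain Python ; twenty steps is an arbitrary cutoff) :

```python
import math

# Ratio of consecutive Fibonacci numbers converging on the golden
# section phi = (1 + sqrt(5)) / 2 = 1.6180339887...
phi = (1 + math.sqrt(5)) / 2
a, b = 1, 1
ratios = []
for _ in range(20):
    a, b = b, a + b          # next Fibonacci number
    ratios.append(b / a)     # ratio of the two most recent terms

print(ratios[-1])  # ~1.6180339887, already very close to phi
print(phi)
```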

As for Roger Penrose, whom I quoted before, in a recent book he shows his interest in the fact that the microtubules which constitute the inner skeleton of cells may be organized according to the same Fibonacci series (Penrose 1994 : 361-362). Independently, his « twistor » theory for nuclear physics is based on the special significance of the natural numbers : « Penrose (...) felt that the universe should somehow be created out of integers alone, using combinatorial processes - that is, simple arithmetic operations such as ratio, addition, subtraction and permutation » (Peat 1991 [1988] : 175).

Arithmetic has its source in the empirical world, where separate objects exist in various numbers and are denumerable. Arithmetic is the way we can approach things in the perspective of quantity proper. One way we can conceive of modern science is as a movement of « colonization » of the other Aristotelian categories by that of quantity. It is possible to view science as a venture amounting to accounting for quality, time and location in terms of quantities. This would provide an economical but apt manner of characterizing the task to which such men as Kepler, Galileo, Huyghens, Newton and every one of their successors dedicated their efforts. The colonization by the category of quantity means defining for each of these other categories, such as quality, time and location, a special universe gauged according to its own « metric », making it able to comprise an infinite number of potential instances and where each of the actual instances - irreducible to each other - can find its own place as determined by its measure. In this way heat becomes measurable temperature, colour corresponds to a specific wavelength of the electromagnetic field, etc.

Similarly, geometry is founded in the physical world : it accounts for the properties and remarkable proportions of lines, areas and volumes, defined respectively as spaces of one, two and three dimensions ; and geometry can in this manner be extended to objects with n dimensions - which are then to be regarded as imaginary, in so far as « non-empirical », when n is greater than three.

For these two domains with an empirical foundation, which are arithmetic and geometry, algebra plays the role of a « logic », i.e. it deals with those of their properties which can be accommodated on a purely formal basis, the properties which are invariant of contents, i.e. which hold independently of the individual properties of numbers or the harmonic properties of individual proportions between lengths, areas and volumes. New developments in mathematics seem every time to have followed the same principle for progress : let yourself be led by some intuition of the empirical world and develop a mathematical universe accordingly. Start with the realm of numbers and explore it consistently and you will generate arithmetic. Let yourself be led by the spontaneous grasp of space and build geometry ; integers and rational numbers - corresponding to fractions of integers - will not suffice for the task and the irrational numbers will be required. Theorize the intuitive feel of motion and construct the differential calculus ; use infinitesimals sparingly to avoid the pitfalls of Zeno's paradoxes, and replace the recourse to infinitesimals by reasoning with limits - if you can. Take the intuitive plausibility of some predictions and develop it into the theory of probability - tame modal logic and make it the measure of equiprobable cases. Take elementary logic as it can be transposed into truth tables and map it onto an algebra. Follow your intuitive apprehension of sets and of the elements which compose them and create set theory - introduce new types of infinities to stay away from paradoxes, but be cautious, because behind a paradox may hide a simple impossibility.

Over the centuries, mathematicians have attempted to purify their field of the contamination of the empirical world, unaware that their intuition - active in particular in the world of proof - was constantly taking them back within the realm of plausible models of the world around them, erecting the walls of a « virtual physics » which would be « interpreted » when applied to the world. At times the attempt at a physics has been as blatant as with the calculus, in such a manner that a bishop had to appeal for a return to epistemological decency. Sometimes the attempt was as distant as it could be from the world as it appeared to be, as when Gauss, Bolyai, Lobatschewski and Riemann designed geometries conceived as alternatives to the one that Euclid had devised for our empirical world. But precisely because mathematics are a « virtual physics », they were providing in so doing the toolbox which relativity theory would need to build its wonderful construction.

9. Conclusion

I have examined the activity of mathematicians in what I claimed was an « anthropological perspective ». By the latter I meant that, to rephrase Malinowski's apocryphal words, « I was more interested in what mathematicians do than in what they claim they do ». Also, as opposed to the philosopher who ponders mathematics' foundational issues, epitomized in the distinction between « realist » platonism and « anti-realist » constructivism, the focus of my attention has been, in metaphorical terms, practice rather than dogma or liturgy. My unstated assumption has been that examining the task effectively performed may reveal an agenda which may not be professed either by the orthodox mathematician or by the heterodox.

Freud's metapsychology introduced the methodological principle that actors are poor judges of their own motives. Derrida's deconstruction imported the same principle at the cultural level for cultural actors. I have attempted to show - in the brief time which a lecture allows - that, independently of their own representation of their task, mathematicians produce in actuality a « virtual physics ».

I have proceeded in the following way. I have introduced the principles of demonstrative proof as described and assessed by Aristotle. Modern authors could not have been cited instead : however accurate their cataloguing of methods of proof, they always refrain from grading these methods. Thus was shown the latitude in demonstrative methodology open to mathematicians, who are able to resort to modes of proof ranging from the compelling to the poor.

Then I have shown that even such leeway in the matter of proof has been felt at times as an intolerable constraint. The proof by reductio ad absurdum, more appropriately called by its traditional name of per impossibile, was shown to be by-passable and effectively by-passed by mathematicians. The arising of an impossible conclusion used to signal a flaw on the path leading to it. An « epistemological coup » allowing the by-passing of such impossibilities consisted in defining the demonstrative process as unassailable, and shifting the property of impossibility into a positive attribute of the conclusion : undecidability, although distinct from impossibility, belongs to this type.
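The traditional working of proof per impossibile - where the impossible conclusion condemns the initial assumption rather than being promoted into an attribute of the conclusion - can be recalled with the classic proof of the irrationality of the square root of 2, a textbook example added here for illustration only :

```latex
% Proof per impossibile that \sqrt{2} is irrational.
% Assume the contrary : \sqrt{2} = p/q with p, q integers
% sharing no common factor. Then
\sqrt{2} = \frac{p}{q} \implies p^2 = 2q^2
% so p^2 is even, hence p is even : p = 2r, whence
p^2 = 4r^2 = 2q^2 \implies q^2 = 2r^2
% so q is even as well. Thus p and q share the factor 2,
% contradicting the assumption that they share no factor.
% The impossible conclusion signals a flaw on the path :
% the assumption that \sqrt{2} = p/q must be abandoned.
```

Here the impossibility is read, in the traditional manner, as condemning a premise on the path leading to it ; the « epistemological coup » described above consists precisely in refusing that reading.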

This would be starting a different subject, but a similar practice has characterized the development of the part of physics known as quantum mechanics. A particle is here or there. Sometimes it is possible to say whether it is here or there, sometimes it is not. You can blame your ignorance : there is something inadequate in your method for locating a particle. Or, you can say - as has been said - that it all lies in the nature of things themselves : particles have a third possible state, of being neither here nor there, or of being simultaneously here and there.

How can you tell which view is justified ? You cannot. The only thing which is clear is that if you blame your ignorance you are currently stuck in your research, and this for an unknown length of time. On the contrary, if you assume that having an indeterminate location is one potential attribute of particles, you are entitled to move on. The peril lying here though is that, one day, a mathematical object allowing particles to be envisaged once again as being exclusively here or there may be forthcoming. In which case any theorizing in the meantime on the basis of indeterminacy will turn out to have been building a house of cards.

I have quoted Morris Kline as saying both that the calculus was « the most original and most fruitful concept in all of mathematics » and that it had been plagued by its lack of mathematical rigor. The reason for this, we have seen, is that the world in its very build forced the calculus to be what it became.

The mathematician enters the world of mathematics armed with his intuition of how the world at large operates. This he imports into mathematics and, quite automatically, designs mathematical objects with an in-built « virtually physical » plausibility. The culture around him is impatient with mathematics which do not find their way to providing models. A double system of constraints, both inner and outer, contributes to making mathematics a « virtual physics ». The price to be paid is often high : unjustified discarding of terms is a sore, cancellation of errors is putrescence.

Sometimes the mathematician needs to wear blinkers, sometimes even a blindfold, sometimes he needs to grab an iron from the fire and blind himself purposely. This is the burden the universe imposes on mathematical pursuit. The anthropologist puts the mathematician under his anthropological microscope, and scrutinizing his works exclaims : « What an extraordinary achievement ! »

 

References