Written: Nov. 1, 2001

The CONTENT of Mathematics has changed dramatically since Euclid's time,
and math has witnessed lots of CONCEPTUAL revolutions, including the
introduction of Non-Euclidean Geometries.
But the METHODOLOGY of doing,
or at least presenting, mathematics, has changed little since Euclid's
*Elements*.

For over two millennia, Euclid's style was *de rigueur* in rigorous
math, and so great were its charisma and prestige that its style was
often imitated and emulated in other areas of intellectual endeavor,
in order to lend them an aura of absolute truth.
For example, Spinoza wrote his philosophy exactly in Euclid's style, starting
from "self-evident" axioms.

The underlying philosophy of the Euclidean methodology, which is still
upheld by modern math, is LOGOCENTRIC: start from a set of axioms and
undefined concepts, and build your way, bottom-up, to lemmas and theorems.
Of course, that is not how one discovers new math, but the bottom-line,
the mathematical *text* was (almost) always presented in the
Euclidean style, plus or minus some "informal heuristics" noise.
At least there was the ideal of a completely formal proof. David Ruelle
called a human proof a "dance around a formal proof". For practical reasons
we use a natural language, but we hope that, in principle,
it can be converted into a completely formal proof.

Russell and Whitehead, in their *Principia Mathematica*, came
pretty close to that ideal,
but at the price of making it almost unreadable to humans.
Gregory Chaitin likened it to a computer program, and indeed it was precise
enough for Gödel's famous undecidability results.

The reason *Principia Mathematica* is such rough going is twofold: (1) it is written in a very low-level "programming language", resembling something between machine language and assembly language; (2) it is still obsessed with the Euclidean tradition of basing everything on logic, modernized and formalized with Boolean and Fregean logic.

Now the idea of using a formal programming language to present math
is a good one, but to make it readable, we need to use
increasingly high-level languages that
resemble English, but are all defined precisely in terms of
a very low-level language, one that can be compiled down to bits for
a computer to understand. We also have to forget about logic
as the glue of math and work within targeted *ansatzes*
with *canonical forms*, or at least
*normal forms*, and try to discover new ones that will
free us from the highly inefficient logocentric, Euclidean, methodology.

As we all know, Gödel meta-proved that both Russell's logicism and Hilbert's formalism were doomed to failure. But Gödel also showed that, amongst the decidable propositions, most short statements have very long proofs. Analogously, Shannon proved (using the pigeonhole principle) that most Boolean functions are "complicated" (require exponentially many gates). Hence logic is a highly inefficient way of discovering new math, and even of proving it.
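Shannon's pigeonhole argument can be sketched numerically. The bound below is a deliberately crude over-count of circuits (not Shannon's sharp one), and the function names are mine; it still shows that the number of n-input Boolean functions swamps the number of small circuits, so some function must need many gates:

```python
def functions(n):
    # number of Boolean functions on n inputs: 2^(2^n)
    return 2 ** (2 ** n)

def circuits_upper_bound(s, n):
    # crude over-count of circuits with s binary gates: each gate picks
    # one of (at most) 16 binary operations and two inputs chosen among
    # the n variables and the (at most) s earlier gates
    return (16 * (n + s) ** 2) ** s

def required_size(n):
    # pigeonhole: while there are fewer circuits than functions, some
    # function has no circuit of size s; double s until the count catches up
    s = 1
    while circuits_upper_bound(s, n) < functions(n):
        s *= 2
    return s  # within a factor of 2 of the counting threshold

print(required_size(8))  # already dozens of gates for a mere 8 inputs
```

The threshold grows roughly like 2^n/n, which is the "exponentially many gates" of the text.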

Mathematics has advanced so much not because of the
Euclidean axiom-lemma-theorem straitjacket, but *in spite* of it.
Luckily, when we actually discover mathematics, we do it the
Babylonian way, empirically and algorithmically. It is only when it
is time to present it, that we put on the stifling Greek formal attire.

Most of Math is intrinsically ansatz-based and/or algorithmic. Many (perhaps all) traditional theorems and proofs phrased in the Euclidean, logocentric, lingo can be deconstructed to reveal an implicit algorithm, or a canonical-form reduction in an appropriate ansatz. Take, for example, the proof that the square root of 2 is irrational. It is basically an algorithm that inputs a relatively prime pair (a,b) such that a^2=2*b^2 and outputs another, smaller, pair with the same property. And if it is an algorithm, then it can (and should!) be programmed.
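And here, to practice what is preached, is that very algorithm programmed (in Python rather than Maple, purely for illustration; the function names are mine):

```python
def descent_step(a, b):
    """Infinite-descent step: from a putative positive-integer solution
    of a^2 == 2*b^2, produce a strictly smaller one.  Since a^2 is even,
    a is even, say a == 2*c; substituting gives b^2 == 2*c^2, and b < a,
    so (b, c) is a smaller solution of the same equation."""
    assert a > 0 and b > 0 and a * a == 2 * b * b
    return b, a // 2

def no_solution_below(n):
    """Brute-force confirmation that the descent never gets a starting
    point: no pair 0 < a, b < n satisfies a^2 == 2*b^2."""
    return all(a * a != 2 * b * b
               for a in range(1, n) for b in range(1, n))
```

Any solution would feed `descent_step` forever, producing an infinite strictly decreasing sequence of positive integers, which is impossible; `no_solution_below` merely spot-checks that no starting point exists.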

Let's face it, anything we humans can know for sure is trivial, since
we are creatures of such low complexity. With the help of computers,
we can hope to know semi-trivial results, at least semi-rigorously.
Hence, our best bet is to work within targeted *ansatzes*.
A fairly recent example
is WZ proof theory, which showed that every binomial-coefficient summation
has a canonical form, and hence that one can always decide whether A=B
if A and B both belong to the holonomic ansatz. A much older,
and even more amazing, ansatz is
René Descartes's revolutionary discovery that geometry is really high-school
algebra, and hence completely routine.
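To give the WZ flavor concretely: for the identity sum_k C(n,k)/2^n = 1, the standard WZ certificate is R(n,k) = -k/(2(n-k+1)), and the whole proof amounts to checking, with exact rational arithmetic, the telescoping equation F(n+1,k) - F(n,k) = G(n,k+1) - G(n,k), where G = R*F. A Python sketch (the code layout is mine):

```python
from fractions import Fraction
from math import comb

def F(n, k):
    # summand F(n,k) = C(n,k)/2^n; summing over k gives 1 for every n
    return Fraction(comb(n, k), 2 ** n)

def G(n, k):
    # G = R*F with certificate R(n,k) = -k/(2*(n-k+1)); simplifying,
    # G(n,k) = -C(n,k-1)/2^(n+1), which avoids a 0/0 at k = n+1
    if k <= 0 or k > n + 1:
        return Fraction(0)
    return Fraction(-comb(n, k - 1), 2 ** (n + 1))

# the WZ equation, verified exactly on a grid: once it holds, summing
# over k telescopes the right side to 0, so sum_k F(n,k) is constant in n
assert all(F(n + 1, k) - F(n, k) == G(n, k + 1) - G(n, k)
           for n in range(8) for k in range(n + 3))
```

The certificate reduces the identity to a routine rational-function verification, which is exactly the "canonical form" point.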

Once you know the ansatz, you can have the machine discover
*from scratch* all the statements of any bounded complexity,
and prove them at the same time, either rigorously (time permitting) or
semi-rigorously (if a polynomial of degree 10^100 vanishes at
10^6 random values, it is most likely identically zero, just like
Rabin's pseudo-primes). This is what Shalosh B. Ekhad, XIV, did in
its 2050 geometry textbook.
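The semi-rigorous test alluded to above can be sketched as follows. This is a Schwartz-Zippel-style check: a nonzero univariate polynomial of degree d has at most d roots, so vanishing at many random points drawn from a range much larger than d is overwhelming evidence of being identically zero. The names and parameters are mine:

```python
import random

def probably_zero(p, degree, trials=30, rng=random.Random(2001)):
    """Semi-rigorous zero test: evaluate p at random integers drawn from
    a range far larger than its degree; a nonzero polynomial of degree d
    vanishes at a random point with probability at most d/(range size)."""
    span = 10 ** 6 * max(degree, 1)
    return all(p(rng.randint(-span, span)) == 0 for _ in range(trials))

# (x+1)^2 - (x^2 + 2x + 1) is identically zero; (x+1)^2 - (x^2 + 1) is not
assert probably_zero(lambda x: (x + 1) ** 2 - (x * x + 2 * x + 1), 2)
assert not probably_zero(lambda x: (x + 1) ** 2 - (x * x + 1), 2)
```

The seed makes the sketch reproducible; the verdict is "most likely", never "certainly", which is exactly the semi-rigorous bargain.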

What is so nice about it is that it is written in a precise, and hence completely formally correct, programming language, Maple, but the names of the definitions are English-based. Also, each statement doubles as its own proof, ready to be executed by Maple. So we have a text that is even more formally correct than any logocentric human proof (recall Bill Thurston's remark that a computer program is much more formally correct than a human proof), yet just as much fun to read, and in which, once you have read the statement, you already have the proof (modulo running it on Maple, but I already did, so you can trust me and not run it again).

Enjoy!

Added Nov. 4, 2001: Read Ilan Vardi's Defense of Greek mathematics.
