Written: Oct. 6, 2007.

First there was Beauty, but then it became "Beauty"; later on Reality became "Reality", and even Truth became "Truth". It is about time that "Understanding" and "Insight" got their "scare quotes" too, or rather the ironic-cynical ones.

Since I am not a philosopher, I will focus on mathematical understanding and "insight".
In his classic wonderful text, *La Science et L'Hypothèse*, Henri Poincaré states
that, in his time, a physicist "understood" a physical phenomenon if he had
a *mechanical* explanation for it. That's why such great minds as Kelvin,
Lorentz, and even (although with great reservations) Poincaré himself, stuck to the Ether.
Then Albert Einstein came along and said that it was not necessary, and this led to a new, more
profound, kind of physical understanding.

Most mathematicians, even today, feel that they understand a statement if and only
if they understand the proof, and this means, to most of them, understanding it
both *locally* and *globally*. Local "understanding" means that
they followed the proof line-by-line, and agree that each deduction, from one line
to the next, is valid. Global understanding is more fuzzy, but most people know it
when they have it. It is understanding the "big picture", the general design, and the
central ideas.

It is high time that we, mathematicians, give up the obsession to follow all the details.
Mathematics, even before computers, got so complicated, that it is hopeless to try and
"understand" (in the local sense) all the results that one uses. It is fair game to use
a proved theorem, approved by experts, as a *macro* (or "black box"),
as long as one understands the statement, and the conditions under which it holds,
without feeling obligated to (locally) understand its proof.

So even with human-generated mathematics, it is no longer possible to have
perfect "local" "understanding". With computer-generated mathematics,
it is entirely hopeless. We should learn to trust the output of the computer
as a "black box". Of course, computers, at present, are still *programmed*
by those feeble-minded creatures called humans, so they should not be blindly trusted.
But with the right "architecture", both for hardware and software, and lots of quality-control
tests, we should learn to trust computers more and more.

We also have to change our notion of "beauty". Dear Uncle Paul Erdös, I have bad news
for you. Most theorems do not have proofs in your divine book, even if their statements
are "beautiful" (e.g. the Four Color Theorem). You probably already knew it, since I am sure
that Kurt Gödel told you that there exist short (i.e. "pretty" in your eyes) statements
with arbitrarily long proofs (i.e. "ugly" in your eyes), but I bet that you dismissed it as
"metamathematical nonsense". So indeed, if the definition of a "beautiful" proof is a
"proof from the book", i.e. one whose every detail you can read and understand in five minutes,
then, **luckily**, most results are not pretty, since, if **you**,
a mere human, can understand it so well, it can't be very deep.

[You may retort that an easy-to-follow proof is not necessarily easy-to-find, and
you would be right, but if its proof is short, it is, at any rate, *a posteriori* trivial,
even if it is not *a priori* so. ]

But, we humans should not despair. We just have to adjust, and learn to live with, computer-generated
mathematics, and tweak both our notions of "understanding" and "beauty". As for understanding,
we should give up on "local", micro-managing understanding, and be happy with
the global kind, i.e. trade understanding for meta-understanding and, if necessary, even
with meta-meta-understanding. Understanding the *algorithm* that generated the proof,
or even merely understanding the meta-algorithm that generated the algorithm that generated the proof.
To take a trivial example, even the most traditional kind of mathematician,
when told that a certain two-hundred-digit-long integer is composite, would not
ask to see the "proof" (either as a product of two smaller integers, with all the
details of the long-multiplication spelled-out, or the detailed reasoning of the AKS
algorithm). But many mathematicians are still uncomfortable with the proof of
Kepler's conjecture and the Four Color Theorem. Granted, the programs should be
checked and double-checked, and there is a more than epsilon chance that they
have a bug, but we all know that nothing is sure in this world. Even we ourselves
are just *statistical* averages of some quantum processes, or so physicists tell us.
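The compositeness example above can be made concrete. A probabilistic test such as Miller-Rabin declares a number composite without ever exhibiting a factor, which is exactly the kind of "black box" verdict we already accept. Here is a minimal sketch in Python (the function name and the choice of twenty rounds are mine):

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin test: returns False if n is certainly composite,
    True if n is probably prime (error probability below 4**-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)          # fast modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            # a "witnesses" that n is composite -- yet no factor is exhibited
            return False
    return True
```

Note that when the test answers "composite", the answer is certain, but the "proof" (the witness `a`) tells us nothing about the factors; we trust the verdict without any local understanding of why that particular number fails.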

And there is also hope for *beauty*: not of the Erdösian kind, but rather
of the kind advocated by David Ruelle, in his fascinating new book
*The Mathematician's Brain*, that I very strongly recommend. In the
concluding chapter, entitled "The Beauty of Mathematics", Ruelle says:

"One may say that this is why mathematics is beautiful: it naturally embodies the simple and the complex that we are yearning for."

So there is still room for beauty, suitably defined, but we have to learn to live with the division of labor. We, humans, would do the simple part, and computers will do the complex part. And by "complex", I do not mean just tedious number crunching. Ninety-nine percent of what mathematicians do today is just one notch above number-crunching, it is symbol-crunching, and occasionally idea-crunching, but as computer algebra systems, suitably programmed, are already showing, computers are already surpassing humans in symbol-crunching in many areas, and very soon will surpass them completely. A good rule of thumb is that if you think too hard, you are on the wrong track. You should have looked at the big picture, designed an algorithm, and let the computer do the thinking.
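As a tiny illustration of machine symbol-crunching, a computer algebra system (here the open-source sympy library, assuming it is available) finds closed forms that humans once derived by hand; the human's "simple part" is posing the sum, the computer does the rest:

```python
from sympy import Sum, binomial, simplify, symbols

n, k = symbols('n k', integer=True, nonnegative=True)

# The machine finds the closed form of sum_{k=1}^{n} k, namely n(n+1)/2.
triangular = Sum(k, (k, 1, n)).doit()

# It also evaluates a binomial sum symbolically: sum_{k=0}^{n} C(n,k) = 2**n.
total = Sum(binomial(n, k), (k, 0, n)).doit()

# A human need only check that the answers agree with the known identities.
assert simplify(triangular - n*(n + 1)/2) == 0
assert simplify(total - 2**n) == 0
```

The design point: we supply the *algorithmic question* and check the statement of the answer; the symbol-crunching in between is the computer's business.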

Speaking of "computers taking over", Ruelle makes the following confession (p.47),

"Let me make here a personal remark. I must admit that I am somewhat frightened by the rapid, apparently limitless evolution of computers. I see no reason why they could not overtake our cultural evolution and become, in particular, better mathematicians than we are. When this happens, I feel that life will have become somewhat less interesting, and somewhat less worth living."

And he goes on to say:

"Our world has seen the era of great Gothic cathedrals come to an end. And the era of great human mathematicians may also come to an end."

There are two replies to that. One is: "no big deal", or as my daughters would say, "worse things happened to better people". But a kinder reply is as follows. With our beloved computer servants, soon to become our masters, life, in particular mathematical life, would become much more interesting and much more worth living. Granted, we would not be able to "understand" everything in the old-fashioned, local way, but if we train ourselves to only meta-understand, and are happy to be spared the details, life will be even more worth living. Also, let's hope that our human vanity and ego would not have to be completely given up, and we can still serve our time, at least for the next one hundred years, as designers of the algorithms that do mathematics. Sooner or later, of course, we would be mere spectators, but we can still have fun doing human mathematics, the good old pencil-and-paper way, as a harmless pastime. We have lots of fun, and our ego gets amply rewarded, when we spend an hour solving a Sudoku puzzle that can be figured out by computers in a few nanoseconds.

So cheer up, David, the loss of both Gothic Cathedrals and of Human Mathematicians is not such a big tragedy; all we have to do is say:

"Bye-Bye Understanding, Hello Meta-Understanding", and
welcome to a much deeper kind of *"insight"*.

Opinions of Doron Zeilberger