Responses to Opinion 36

Last Update: July 14, 1999.

Feedback from Rabbi Professor Dror Bar Natan (Oops, I meant to say Senior Lecturer Dr. Dror Bar-Natan (added June 1, 1999: Dror just became (Assoc.) Professor, which means that in Israel he should be addressed as Professor Dror; hence Brendan McKay was prophetic when he addressed him as Prof. Judging from this astounding prediction, we should expect Dror to become a Rabbi pretty soon, and perhaps he would even drop his hyphen)):


From drorbn@math.huji.ac.il Sun Apr 4 21:08:49 1999

Shalom Doron!

What a disaster it was that the French (Cauchy and his generation, and then Bourbaki) found that practically all of mathematics can be formalized! This formalization procedure seemed so powerful that we have substituted "formal mathematics" for "mathematics", and for many of us math ain't math until it is reduced to a sequence of theorems and proofs. Hence, for example, as a group we were, and largely still are, blind to the discovery of path integrals, and we left this whole arena to the physicists, whose motivations are entirely different. Who knows how much more we have missed by requiring that the whole process, from imagination to formalization, not only be fully carried out within each mathematical context, not only be carried out in full by each generation of thinkers on each mathematical context, not only be fully carried out by each individual throughout her lifetime, but even be carried out in full within each individual submission to a journal!

Mathematics is not about proofs, it is about ideas. It is not about theorems, but about interpretations. Thus the main thing I dislike about your opinion piece is your very narrow view of what should be regarded as a part of "the BOOK". I counted 29 occurrences of the word "proof" and its derivatives in your piece, but not a single "idea".

Lehitraot,

Dror. (An avid programmer, as you probably know).

---------end of Dror's interesting feedback---------


Response from a distinguished mathematical statistician, who prefers to remain anonymous:


--------------begin Anonymous response--------

Hello Doron,

This is just a short comment (not to be made public, because I don't have the inclination to battle with the many people who will object to my opinion). My comment is that it's no surprise that many people will object to your viewpoints, as expressed in your opinion piece. The fact of the matter is that mathematicians, like so many other scientists, are extremely conservative in their own way. In part, the conservatism is, and can be, justified by the high level of training and rigor required to become a serious mathematician. (And frankly, after reading parts of Alan Sokal's superb hoax on the postmodern people, I am kind of happy that the barriers to becoming a serious mathematical scientist are so high!)

However, there are aspects of the conservative nature of mathematicians which, in my view, cannot be substantiated by concerns about training, rigor, style, or whatever. This part of the conservative outlook stems purely from obstinacy, biases ingrained by mathematical upbringing, etc. This attitude is very hard to crack, and I think this is the root cause of the objections to the comments in your opinion piece.

The only way to crack this hard-core bias is to keep chipping away. I wish you lots of luck (and whatever little support I have to offer). Just keep hammering away ...

Take care,

xxxxxx

P.S. Have you ever heard the story about the really smart young Hungarian mathematician who was making his "coming-out" tour of the English universities (this took place near the turn of the 20th century)? The guy gave a talk at Cambridge (or Oxford?) and among the audience was the great G. H. Hardy. After the lecture was over, someone asked Hardy for his opinion of the lecturer. Hardy's response was "Obviously a very bright young man, but was that mathematics?"

As you can guess, the lecturer was John von Neumann. And imagine that: as good a mathematician as Hardy was unable to see that von Neumann's approach to mathematics was about to take the world by storm.

Of course, by bringing Ramanujan to England, Hardy did enough for mathematics. So I don't want to be too hard on him.

Bottom line: You and v-N are in the same situation, sort of.

-----end of response by friend who would rather stay anonymous-----------


-Response from Simon Plouffe, of Sloane-Plouffe, Pi, and many other fames-

From plouffe@math.uqam.ca Mon Apr 5 01:56:28 1999

I endorse your paper(!), fully.

I have always thought that. I am very poor at proving things in mathematics (of course I know the basic proofs, but it never helped my life as a mathematician to 'know' how to prove something). I have always found that knowing how to prove was a waste of brain energy and, of course, a waste of time.

I never understood what the other mathematicians are doing all day long (true!); doing numbers, sequences, and constructive hypotheses on computers, and programming them, took all of my time in the past 25 years. I will continue doing the same. I had complexes for a long time when I considered the way I work with numbers and computers; only recently did I realize that I was not wrong at all: my method of research seems completely foreign to most others I have met in my life as a mathematician (apart from a few exceptions, many of whom are listed in your paper).

I think the same: we should concentrate on ways to use the computer and our brains to work out something useful and new, instead of using paper and power-steering pencils!

Simon Plouffe

-------end of Simon's response-------


Response from Marko Petkovsek:

From marko.petkovsek@fmf.uni-lj.si Wed Apr 7 14:00:36 1999


==============================================================

I posted your opinion 36 on my office door. I certainly agree with it, although I would have phrased it much more cautiously, of course.

It has been my motto since my high-school days that just as the invention of mechanical tools has liberated us from physical/manual chores (and has shaped our mind, according to the Marxist anthropology that was taught to us Eastern Europeans at the time), so will the invention of "conceptual tools" (computers) free us from routine white-collar work (not only from simple office work but from everything that can be done algorithmically, e.g., evaluation of indefinite integrals of elementary functions) and give us the opportunity to be "really" creative ...

I am sure your opinion will prevail sooner or later, but right now I think only a small percentage of mathematicians would seriously support it. We need more hard evidence to back it up, because the WZ method and Wu's method and Groebner Bases and Cylindrical Algebraic Decomposition etc. only cover a tiny portion of the mathematical ground. We need more machinery of this type in new areas which so far have not even started to be automated. For example, it would be great if one could build a theorem-producing machine in graph theory, the largest area of combinatorics. Any ideas how it could be done (or is graph theory perhaps "too expressive" - too close to general logic)?
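
(Marko's question invites a concrete sketch. A very first step toward such a "theorem-producing machine" might look like the following Python toy, in the spirit of Graffiti-style conjecture generation: enumerate candidate inequalities between graph invariants and keep those that no small graph refutes. The networkx graph atlas is assumed available; this only produces conjectures, it proves nothing.)

    # Conjecture-generating toy: test "invariant a <= invariant b" on all
    # small connected graphs and report the inequalities that survive.
    import networkx as nx
    from networkx.generators.atlas import graph_atlas_g

    invariants = {
        "radius":   nx.radius,
        "diameter": nx.diameter,
        "nodes":    lambda G: G.number_of_nodes(),
        "edges":    lambda G: G.number_of_edges(),
    }

    # All connected graphs on 2..7 vertices from the atlas.
    graphs = [G for G in graph_atlas_g()
              if G.number_of_nodes() >= 2 and nx.is_connected(G)]

    surviving = [(a, b) for a in invariants for b in invariants
                 if a != b and all(invariants[a](G) <= invariants[b](G)
                                   for G in graphs)]
    print(surviving)   # ("radius", "diameter") survives; ("nodes", "edges") does not

(Of course, each surviving inequality is merely unrefuted on small graphs; turning such output into theorems is exactly the hard part Marko is asking about.)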

Marko Petkovsek, Faculty of Mathematics and Physics, University of Ljubljana, Jadranska 19, 1111 Ljubljana, Slovenia. E-mail: marko.petkovsek@fmf.uni-lj.si

***********************************************


--------------------response from Ira Gessel-----------------

From ira@cs.brandeis.edu Wed Apr 7 12:57:03 1999

Dear Doron,

You write

We are now on the brink of a much more significant revolution in mathematics, not of algebra, but of COMPUTER ALGEBRA. All our current theorems, FLT included, will soon be considered trivial, in the same way that Levi Ben Gerson's theorems and `mophetim' (he used the word MOPHET to designate proof; the literal meaning of mophet is `perfect', `divine lesson', and sometimes even miracle) are considered trivial today. I have a meta-proof that FLT is trivial. After all, a mere human (even though a very talented one, as far as humans go), with a tiny RAM, tiny disk space, and very unreliable circuitry, did it. So any theorem that a human can prove is, ipso facto, utterly trivial. (Of course, this was already known to Richard Feynman, who stated the theorem (Surely You're Joking, Mr. Feynman, p. 70): `mathematicians can prove only trivial theorems, because every theorem that is proved is trivial'.)

You say that any theorem that a human can prove is trivial because humans have a small amount of memory and an unreliable processor. I find this argument neither convincing nor (what is worse!) interesting.

You also write

Everything that we can prove today will soon be provable, faster and better, by computers.

However, you give no justification for this statement, and I believe that it is false.

Best regards,

Ira

----------end response from Ira Gessel---------------


---response from Doug Iannucci and his biologist friend Richard Hall---

From: "Douglas E. Iannucci"
Dear Doron:

I forwarded your opinion #36 to the entire science & math division at UVI, and one of our biologists had a reply for me, which I am forwarding to you at the end of this message (in case you are interested). I believe Wiles did not use a computer at all in proving FLT. When you say "a human with a tiny RAM, disk space," etc., are you referring to Wiles himself? I believe Rich Hall (our biologist) interpreted your statement as such, hence his reply. Also, I do not believe you were comparing the computer to the human brain per se, but rather merely comparing our respective theorem-proving capabilities. I think that's a big difference.

Doug Iannucci

From: Richard Hall
Subject: Re: Free Speech! (non-junky junkmail)

Newer computers easily exceed the ability of the human brain in some types of operations, as measured in cycles per second or bits handled per second; however, the human brain is also described as massively parallel. For example:

The human eye transmits information to the thalamus at approximately 1 kHz, but over 1 million parallel lines (bits). That works out to one billion bits per second. There are a total of 10 million parallel sensory lines into the CNS, so in theory the maximum would be 10 billion bits per second. The maximum is never obtained, because peripheral and central filters prioritize information at multiple levels... Thus the human nervous system can collect and analyze huge amounts of information.

A 300 MHz computer that moves one 32-bit word per cycle sends about 9.6 billion bits per second, so newer computers can easily outperform any one component of the system. However, the human brain is not one computer; it is many, many computers linked serially and in parallel. Consider Shakespeare's writing versus 10 monkeys on keyboards. Who would be considered the more productive? Substitute one thousand 400 MHz PCs and Shakespeare still gets all the royalties.
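
(A back-of-the-envelope check of these throughput figures, as a Python sketch; the one-32-bit-word-per-cycle figure for the computer is an idealization, not a hardware spec.)

    # Sanity check of the bit-rate estimates above.
    eye_bits_per_s = 1_000 * 1_000_000    # 1 kHz per line x 1e6 parallel lines = 1e9
    cns_bits_per_s = 1_000 * 10_000_000   # 1e7 sensory lines -> 1e10 theoretical max
    cpu_bits_per_s = 300_000_000 * 32     # 300 MHz x 32 bits per cycle = 9.6e9

    print(f"eye {eye_bits_per_s:.1e}, cns max {cns_bits_per_s:.1e}, cpu {cpu_bits_per_s:.1e}")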

It gets better. A computer program tends to freeze when inappropriate or missing information bollixes its operations. The human brain can fill in the gaps and keep right on calculating, using on average only 600 kcal/day, rain or shine. Trivial problems like getting out of bed, feeding oneself, and finding social companionship are computational nightmares for mere computers.

rlh

---------end of responses from Iannucci and Hall-----


----interesting remark from Roger Howe----------

From howe@math.nus.edu.sg Thu Apr 8 22:11:14 1999

Dear Doron,

I wanted to add a comment about your opinion #36. You have a very good point if you take as your goal the proving of deep theorems, a goal which many mathematicians would grant. But I would say that it is the trivial theorems which are, in many ways, the best.

Cheers,
Roger

---------------end remark from R. Howe-----


-------begin interesting response from Michael Larsen----

From larsen@iu-math.math.indiana.edu Wed Apr 7 10:43:39 1999

Dear Doron,

I enjoyed reading your "Opinion 36" very much. As usual with your opinions, I found it thought-provoking and at times infuriating. But I want to take issue with only one rather specific point, namely that of "depth". Obviously, we don't mean the same thing by "depth" as by "length", or we wouldn't bother using a different term.

It is known that there is an effectively computable constant C such that every odd number greater than C is a sum of three primes. Unfortunately, C is out of reach of any computer we can imagine today. But we could imagine a more powerful computer (possibly in a different universe) checking all the cases up to C and providing a proof that every odd integer 7 or greater is the sum of three primes. This proof would certainly be much longer than the proof of the weaker statement known today, but it would not be any deeper.
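
(The finite-checking half of that scenario can be made concrete with a small Python sketch, here verifying the claim for odd integers up to a modest bound; sympy's prime routines are assumed available. Running it up to C would add enormous length and, as Larsen says, no depth.)

    # Verify that every odd integer from 7 up to LIMIT is a sum of three primes.
    from sympy import primerange

    LIMIT = 10_000
    primes = list(primerange(2, LIMIT))
    prime_set = set(primes)

    def three_prime_sum(n):
        """Return primes (p, q, r) with p + q + r == n, or None if none is found."""
        for p in primes:
            for q in primes:
                r = n - p - q
                if r < 2:
                    break          # q already too large for this p
                if r in prime_set:
                    return (p, q, r)
        return None

    assert all(three_prime_sum(n) for n in range(7, LIMIT + 1, 2))
    print(f"verified for every odd n in [7, {LIMIT}]")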

A deep proof should have *explanatory* power. It should force you to rearrange your thinking about a large block of conceptual material. It is likely to involve unexpected connections. It is also likely to seem inevitable once it has been discovered.

The proof of Fermat's Last Theorem is very deep and fairly long. Faltings's proof of the Mordell conjecture and Deligne's proof of the Riemann hypothesis in characteristic p do not require many pages but are also very deep.

I am not bothered by the proofs of Haken-Appel or Hales. But the day may come when an AI announces a proof of the Riemann hypothesis and explains to the expectant human mathematical community that although it would like to at least sketch the approach, it's probably too difficult conceptually for a human being to grasp. That would bother me.

All the best,
Michael Larsen

--------end of Larsen's comments-----------


------Fascinating reaction by Ron Bruck that appeared in sci.math.research

I'm rather curious WHY the letter was rejected. Doron Zeilberger has invented one of the most astounding and far-reaching algorithms in computer algebra (of course, there were predecessors; it didn't come from NOWHERE; that's why it's usually called the "Gosper-Zeilberger" or "Wilf-Zeilberger" Algorithm, and the "A=B" book discusses earlier contributions).

Given that standing, his opinions SHOULD be of great interest to the mathematical community. The letter SHOULD have stimulated discussion, and I'm surprised it was rejected. Are we hearing the whole story? Did the editors perhaps ask for a revision, and Doron refused? Did the editors suggest an alternative publication as being more appropriate? Are they perhaps concerned about his example of Rabbi Levi Ben Gerson (which I find the most fascinating part of the whole letter, incidentally)? The Notices have had problems with anti-Semitic abstracts in the past, and perhaps they're hypersensitive about a "pro-Semitic" letter? Farfetched, surely!

Oh, maybe Zeilberger overstates his views; read his other essays on his web page and you'll understand this isn't unusual for him. (I've been enjoying them for years.) It may take longer than he predicts, and the eventual evolution of man/machine collaboration is likely to be very different from what anybody today can imagine. E.g., what happens if man and machine MERGE? If the machine becomes an integral part of us? Not the klutzy chips of today, of course, but our neural circuitry revised and optimized and supplemented with biological or quantum-computing implants. The machine may then have human aesthetic sensibilities, because it WILL be (partly) human; and who can say what sort of mathematics would evolve?

My first grandchild was born on Friday, and I watched as the doctors put silver nitrate (or whatever they use today) into her eyes. Perhaps one day the doctor will routinely insert her neural implant; more likely that'll be done prenatally, and it will be genetically engineered (to HER gene pattern) to "grow in". Phooey--why bother?--perhaps the whole thing will be genetically engineered, at least the optimization part, from the moment of conception. Who knows?

Yes, "suppression" is probably too harsh a word. (I don't read Doron Zeilberger using it, incidentally--only his student.) Some explanation of the editorial decision would be appropriate.

--Ron Bruck

bruck@math.usc.edu preferred e-mail

-------end of Bruck's comments-------

---- Annoying reaction from Greg Kuperberg (in sci.math.research)----
----(with my (reaction)^2-----

Author: Greg Kuperberg

>I don't see any reason to have a hostile reaction to Doron Zeilberger's
>opinion 36. The basic thesis --- that computers are replacing
>mathematicians --- is true in some respects and not in others.

This was not the BASIC thesis (that part is obvious); the basic thesis was that we MUST change our working habits!

>I suspect that when computers make mathematicians obsolete, all human
>activity will be superfluous and human history will end.

That's very math-centric! Math is but a small part of human endeavor, just slightly more general than Chess. Just because Deep Blue can beat us all does not mean that we have to commit suicide.

>I don't know how far in the future this eventuality might lie, but even
>if it were soon, nothing we do now would matter much afterwards.

If you shoot yourself, that's your business, but frankly I hope that you won't (even though you are sometimes annoying).

>Doron's argument for his thesis is a little crazy, but it's not
>completely crazy. You shouldn't interpret it literally,
>even though Doron himself might.

Of course you should not interpret it literally, but neither should you interpret anybody's text literally. As Derrida, Rorty, and several others have shown, we are slaves to our own final vocabularies, and we always have hidden agendas; our `objective views' are just an instrument to bolster our egos, and to justify to ourselves our miserable existence. Now that plain Racism and Sexism are out of style, we cling to Human Chauvinism.

>Although it is not very original or strongly argued,

I have not yet seen any mathematician who said that paper-and-pencil proving is a waste of time and that we should all be programming computers instead. So, even though I never claimed to be original (I only wanted to state my opinion), it is in fact original, for what it's worth. Perhaps it is not as strongly argued as possible, but its rhetoric beats your underhanded `defense' of my views from the humanist mob.

>it obviously is provocative.

It was not meant to be provocative; neither did Galileo or Darwin mean to be provocative. It just had to be.

>It's like shock art.

I am glad that I was able to shock you, since more than once you shocked me!

>When I'm in the mood for it, I like it.

Enjoy!

>The only part of "opinion 36" that I object to is the accusations
>against the editors of the Notices. The Notices is not the be-all of
>mathematical soapboxes, to be protected at all costs from the fallible
>judgement of its editors.

Of course, in spite of your pioneering efforts in the XXX archives and in electronic publishing, when it comes to the substance of doing mathematics, you are yet another HCP (Human Chauvinist Pig).

>In fact, sci.math.research is about as
>widely read as the Notices, although more by amateur mathematicians
>and less by professionals. I take the two forums equally seriously.
>One difference is that the moderators of sci.math.research give people
>more rope to hang themselves than the Notices does.

So, according to you, I should thank Susan & Anthony for sparing my neck! Thanks!

------end Greg Kuperberg's annoying reaction and my (reaction)^2---

-----fascinating feedback from Andrej Bauer

Author: Andrej Bauer
Date: 1999/04/15
Forum: sci.math.research
_________________________________________________________________
David desJardins writes:
>I don't think that a computer that can prove the Goldbach Conjecture >will get any kind of "head start" from reading a list of definitions >and facts used in proving theorems in plane geometry.

What is the point of the above statement? Of course nobody expects that planar geometry will help solve the Goldbach Conjecture! I fail to see what you were getting at there. Did Zeilberger suggest that?

I think there is a more constructive way to understand the above remark. Maybe you mean to say that providing a computer with a large knowledge base of theorems in a given branch of mathematics will NOT give the computer a head start in that branch. (I do not intend to put words in your mouth, so please accept my apology if you did not mean to say that.)

I would like to argue that it is extremely useful to have a large base of mathematical knowledge organized in a way that can be manipulated by computers, even if computers can do only the most trivial sort of theorem proving. In this sense Zeilberger is correct when he says that we should be "entering mathematical facts into computers".

At some level we already have a kind of Math-Internet knowledge database. Mathematicians correspond with each other via e-mail and put their papers on web pages. What is missing is the kind of knowledge that a computer could manipulate *semantically*. For example, there ought to be a Math-AltaVista where you could ask "Has anyone proved this theorem yet?", and the computer would go off searching the planet.
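
(One might sketch the "Math-AltaVista" idea as: store theorems keyed by a canonical form of their statement, so that a query matches up to renaming of variables. A toy Python version, with invented database entries:)

    # Toy semantic lookup: canonicalize bound-variable names before matching.
    import re

    def canonical(stmt):
        """Rename single-letter variables to v0, v1, ... in order of appearance."""
        names = {}
        def rename(m):
            return names.setdefault(m.group(0), f"v{len(names)}")
        return re.sub(r"\b[a-z]\b", rename, stmt)

    database = {canonical("for all x: x + 0 = x"): "additive identity, Anon (1999)"}

    query = "for all y: y + 0 = y"          # same theorem, different variable name
    print(database.get(canonical(query), "not found"))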

I believe that a large body of knowledge would help a theorem prover enormously, provided the searching mechanisms were efficient enough. I base this opinion on an analogy with how mathematicians operate (they know a whole lot of theorems, tricks, and techniques), and on my experience with the 'Analytica' theorem prover, developed by Ed Clarke at CMU.

Analytica uses Mathematica to do algebraic manipulations, and it has a large knowledge base of the basic properties of real numbers. In other words, it does not try to prove everything from the axioms all the time. It knows about useful definitions, and it does not automatically eliminate them (because that causes an exponential blow-up). To see what sort of things it can do, see the paper in the Journal of Automated Reasoning, vol. 23, no. 3, December 1998, pp. 295--325. The point is that you could never prove certain kinds of theorems without *extensive* knowledge of the properties of reals, and a lot of the algebraic manipulation that Mathematica does.
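
(To make the flavor concrete: below is a toy forward-chaining sketch in Python. It is emphatically not Analytica's actual mechanism; it only illustrates how a prover that starts from a base of known lemmas can reach a conclusion by lookup instead of re-deriving everything from the axioms. All facts and rules here are invented for the example.)

    # Toy forward chaining over a base of known facts and lemmas.
    facts = {"x >= 0", "y >= 0"}                      # the knowledge base
    rules = [
        ({"x >= 0", "y >= 0"}, "x + y >= 0"),         # lemma: sum of nonnegatives
        ({"x + y >= 0"},       "|x + y| = x + y"),    # lemma: abs of a nonnegative
    ]

    # Fire any rule whose premises are already known, until nothing changes.
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)   # now contains "|x + y| = x + y", reached by two lookups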

>I don't think time spent on
>entering such facts into Maple or Mathematica will advance mathematical
>knowledge, now or ever, by one iota.

I agree that entering mathematical knowledge into existing computer algebra systems and theorem provers is a waste of time. We have not yet developed the infrastructure that is needed for a global mathematical database that could be manipulated on the semantic level. Mathematica and Maple are nowhere near being satisfactory systems for such an endeavor. I think Zeilberger is wrong when he thinks that we can do it today. We do not even have a good language to do it in.

However, I have no doubts that the required tools, mathematical and computer-theoretic, are going to be developed in two or three decades, if not sooner. They *are* being developed by various groups, mostly by theoretical computer scientists (one that comes to mind is the Cornell Nuprl group). And once we have those tools, you will be proven wrong: there will come a day, not too far from now, when computers will significantly advance mathematical knowledge. I do not want to make a prediction as to whether this will be just because of a global mathematical database, or also because of very smart theorem provers. But I do predict it will be in my lifetime.

>In the end no human mathematicians will be useful or needed, right? In
>fact, "in the end" the universe will die of heat exhaustion and all of
>the elementary particles will decay. But we don't have to make
>decisions about what to do right now based on what will happen "in the
>end"; we can decide based on the circumstances that prevail at the
>moment.

You know very well that I was not thinking of the end of the universe, and I was not talking about mathematicians' angst about being useless.

So, silly remarks and intentional misunderstandings aside, the point I was trying to make was that either our generation or the one coming after us is going to make the big leap from doing math "by hand" to doing math "*with* computers". In that sense, we do have to make decisions now. I think the correct decision is to wait until those pesky theoretical computer scientists come up with decent languages and knowledge manipulation techniques that will bless the happy matrimony of math and computers. I suspect Zeilberger is afraid that the arrogant mathematicians will refuse to listen to the pesky theoretical computer scientists.

>I don't read the article as a modest exhortation to mathematicians to
>become more familiar with what computers can do for them, and to use
>them more in their research, which I would wholly support. I read it as
>either lunacy or satire. Perhaps the editors read it the same way, and
>that's why it was not accepted.

Yes, the article is not written in the most diplomatic and convincing way. It is too enthusiastic, and that is why many people will probably dismiss it easily (which is just as well).

If you attempt to understand Zeilberger in a less dismissive way, though, then you can still find something positive in what he is saying. He's drawing our attention to a new technology which will, in his opinion, revolutionize mathematics. Zeilberger thinks this is happening now; I think it will happen in my lifetime; and if I understand what you are saying, according to you it will never happen (even though you heavily use computers?). Opinions, opinions.

********

Let me also respond to Phil Diamond (pmd@maths.uq.edu.au), who writes:

>OK, I accept that the machines will be able to do these nontrivial
>problems. But after that? Who will provide more nontrivial (or even
>trivial) problems? Who will invent the concepts that are used?

Humans will invent new concepts, of course. May I ask why you are asking these questions? I do not understand what you are getting at. If I understand you correctly, you are making the point that even if machines could prove theorems much more efficiently than humans, they still would not know *what* to prove. So what? We are going to tell them what to prove. Machines are *tools*! They will replace those mathematicians who spend their days devising formulas for compound interest rates, and solving differential equations for the design of new cars. That's good!

I cannot help but view your opinion as a form of technophobia. Is it?

Maybe something needs to be said about where mathematical problems come from. I think the best mathematical problems are the ones that originate from real-world problems, and I mean this in a very general sense. For example, I would claim that classical analysis was invented because of the needs of physics to understand the macroscopic world (Newtonian mechanics). A more recent example would be the way computer science is driving certain branches of mathematics (discrete math, type theory, constructive logic). I can't imagine we'd ever run out of problems to solve.

>This is a question that goes far beyond computer algebra and enters
>the AI area. And after 40 years and zillions of $, the Holy Grail
>of machine intelligence (whatever that is?) seems as unattainable
>as ever.

Knowing the kind of stuff Zeilberger does, I do not think he has AI-ish inclinations. My understanding is that he is suggesting that mathematicians should be finding *algorithms* for solving problems (he talks about *programming*, not about automated theorem proving). The kinds of algorithms that he might have in mind are Risch's algorithm for finding the closed form of an indefinite integral, the Gosper-Zeilberger-Wilf algorithms for finding the closed form of summations, or Buchberger's algorithm for finding a Groebner basis. Of course, having blown his vision out of proportion, he paints a future which resembles a sci-fi movie.
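
(For readers who want to see algorithms of exactly this kind in action: here is a short sketch using SymPy, a present-day Python library that implements Gosper-style summation and parts of the Risch integration procedure. SymPy postdates this discussion and only stands in for the systems mentioned.)

    # Closed forms found algorithmically, not by human cleverness.
    from sympy import symbols, factorial, integrate, exp
    from sympy.concrete.gosper import gosper_sum

    k, n = symbols('k n', integer=True, nonnegative=True)
    x = symbols('x')

    # Gosper's algorithm: Sum_{k=0}^{n} k*k! telescopes to (n+1)! - 1.
    print(gosper_sum(k * factorial(k), (k, 0, n)))

    # Risch-style integration: the integral of x*e^x is (x - 1)*e^x.
    print(integrate(x * exp(x), x))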

Is Prof. Zeilberger reading this discussion? It would be great to hear his opinion. Maybe his student can provoke him into replying by showing him a printout of this thread.

>It is the difference between developing chess playing systems that can
>beat any human being at the game, and **inventing** the game.

Yes, yes. But don't you think that even if computers become "just" very good at proving theorems, that would still have a huge impact on math? And that we should pay attention to such a possibility, even if computers will always lack the "human spirit and creativity"?

I'll borrow your analogy. We all know that computers have become very good at chess. It is perhaps less well known that they are very bad at go, the East Asian board game. Today's computers are as bad at math as they are at go, but thirty years from now they will be as good at math as they are at chess today.

-------end Andrej Bauer's reaction


--begin fascinating, profound, and humor-sparked feedback by Olivier Gerard--
From ogerard@ext.jussieu.fr Tue Apr 20 16:08:09 1999

About Doron Zeilberger's Opinion #36
(for DZ text see: http://www.math.rutgers.edu/~zeilberg/Opinion36.html)
(This is Olivier Gerard's Opinion #6; the previous ones will be translated into English before being released on my web page.)

While I am very sympathetic with Doron Zeilberger's frank and direct way of expressing his ideas, and while most of the mathematics he does with his companion Shalosh B. Ekhad has a strong appeal to me, I am far from agreeing with him on many minor aspects of his opinions.

-> No, a Ph.D. in math does not at all guarantee the ability to program, even when one has removed those very common psychological blocks. There is a lot to unlearn, and there is the whole social conditioning of a researcher in the academic system. Moreover, we all know math Ph.D.s who aren't smart *at all*, especially about mathematics. Introducing this kind of public to even mathematically-minded computer macro-languages like TeX is a labor worthy of Hercules. Part of the trouble with any academic system is that there are areas, times, and circumstances where you can be dubbed "researcher" or "professor" merely by being a good schoolboy, with no need to be smart about the subject.

-> The typical attitude, in areas of mathematics where designing programs for calculations could be useful for one's research, was precisely to have top-notch programmers as graduate students to do the ground (and despised) work. Many of them, having invested their energy in almost confidential projects like this, left the mathematical field disgusted, especially by the poor credit given to their efforts, which were often referred to as "non-mathematical".

-> The center of mathematical discovery, and of short (and, you would say, trivial) proofs, is not proving in itself (we agree), but it is not programming either. The common intellectual process is *finding a suitable representation of the problem*.

-> What do we do for computers, most of the time? We translate our mathematical folklore into a recordable or computable discrete model, and we discharge ourselves of the burden of computing, verifying, or applying the said models.

-> Short or elegant proofs are similarly based on efficient models, even if those models were not conceived to be implemented on a computer system.

-> In fact, the real work of a mathematician is often to translate his ideas onto various media: another part of his own brain, the blackboard, paper, a given sub-language or sub-area of mathematics, another mathematician's brain, and naturally, Doron, computers. Each translation gives a chance to trivialize a step further what is passed through. But the triviality resides only in the final statement, whereas the mathematics is the sum of the modelization(s) plus the statement, not the theorem alone.

-> If we come back to Rabbi Levi ben Gerson, I agree with Doron that probably, like many of us, he was fascinated by the play of these charming entities that satisfied identities as if by miracle, and wanted to collect as many of the same kind as possible. Moreover, the use of page-length verbal explanations made the results even more fascinating, like gems worked out of convoluted reasoning.

-> What is not present in Doron's text is that this attitude is probably a necessary step in the development of any mathematical knowledge by human beings. This is not only a matter of habit; it is also part of the relation of knowledge, power, and attitude towards life.

-> I strongly support your view that the sequence Machine code, Assembly language, C (or another general-purpose programming language), Mathematica (or another general-purpose symbolic computation program), ... is just in its infancy and should be expanded with all our might. My own prejudice is to enhance the graphical interaction between us and the computer, to exchange more intensively with it. This is the purpose of at least two of my projects: "Tasteful algebra(tm)" and "Suukan".

-> Another direction that systematically enhances our mathematical knowledge is the challenge of conveying our most recent mathematics to the youngest possible children, or to the most general public.

-> To mention another direction than strict computer theorem proving, I recommend you look at a recent article by Simon Colton published in the JIS (The Journal of Integer Sequences) at http://www.research.att.com/~njas/sequences/JIS/; the article can be reached directly at http://www.research.att.com/~njas/sequences/JIS/colton/joisol.html, where you will find a reference to Colton's web site on his project. I plan to write later, and in more detail, about this kind of endeavour, which matches very well Doron's view of the future of mathematics and certain enterprises of mine.

-> However powerful, experienced, and smart computers may become, I am still of the opinion that a) we need mathematics more badly than they do, and b) we will still be a source of mathematical questions and concepts they wouldn't care about. Of course, I expect Shalosh B. Ekhad to say exactly the same on its side. I only hope that, if computers do replace us on the surface of the Earth, they'll spare a little thought for our destiny when they are in turn replaced by another life form, one I can't even dream of, which they will inevitably create someday to do even less trivial mathematics.

-> I sometimes suspect Doron Zeilberger of giving due credit to Shalosh B. Ekhad (when credit is due) in the hope that Shalosh will consent to keep him as a pet in his old days. It's a feeling I sometimes have about my own personified computer network and program library, Charles O. Gely, which is systematically credited for all my computer-aided mathematical discoveries.

-> An interesting side effect of putting more and more mathematical knowledge into computer systems is that it will lead us to reconsider most of the hierarchy of mathematical subareas. I am convinced, for instance, that a lot of algebraic geometry and algebraic topology will look rather easy, and completely dependent on combinatorics, when this process is complete, but this is a view Doron has already forcefully expressed.

-> Doron Zeilberger adds at the end of his opinion that he proposed his text to the AMS Notices. I would have loved to see it in print in this venerable organ of the American mathematical community. But if I had been the Editor of the Forum section, I would have rejected it. I don't know what Susan Friedlander's reasoning was, but here is what mine would have been: << Since the AMS Notices Forum column is quite small, and we don't have the resources to manage real, open, and democratic debates in it, we had better not start polemics that don't stem from articles already published in former issues. If I reject his column, he will have a better case for making his opinions known and rallying people under his banner, and mathematicians who do not make minimal use of online technologies wouldn't understand Zeilberger's points anyway. >> It may be that Susan Friedlander rejected DZ's column for very different motivations, bad or good. But in effect her decision serves you quite well, and if she was trying to block you, the result is not in that direction.

****
------end Olivier Gerard's reaction (much better than the cause!)------

Added May 17, 1999: Jaak Peetre (of Lions-Peetre Interpolation of Operators fame) wrote a very interesting defence of Rabbi Levi Ben Gerson

Added Jul 14, 1999: Ursula Martin wrote a very interesting article, available from Ursula Martin's On-Line Publications, that is highly relevant to the present issues.


Back to Opinion 36 of Doron Zeilberger

Doron Zeilberger's Homepage