Conceived: Sept. 28, 2009 (during Yom Kippur 5770); Typed and posted: Sept. 30, 2009.

As I was sitting in *shul*, trying to repent, and praying for forgiveness, during the passages
stating the triviality of humans, *ashes to ashes* etc., I was reminded of the ancient joke
about the rabbi dramatically proclaiming: "Oh God, please forgive me, I am nothing!",
that inspired the cantor to exclaim: "Oh God, please forgive me, I am nothing!",
that in turn inspired the *parnas* to announce: "Oh God, please forgive me, I am nothing!".
Deeply moved, Yenkel the *shamas* (janitor) repeated:

"Oh God, please forgive me, I am nothing!",

which led the rabbi to whisper to the cantor: *Look who thinks that he is nothing!*.

We may laugh at the pompous and hypocritical rabbi of the joke, but we, human mathematicians, are not
any better. We often **say** how *little* we know, and we know, on some level,
that we will never solve most open mathematical problems, yet we **act** as though we can potentially solve everything.
We keep trying, very hard, to prove RH, P ≠ NP, 3x+1, Goldbach etc. with our limited human means,
even though we know that the prior probability of success is infinitesimal.

But if we realized that we are *truly* **nothing**, and that there is an intrinsic *lower bound* on
the complexity of any proof that P ≠ NP, one that far exceeds our limited human capacity,
we wouldn't even bother trying to prove it (by ourselves)! If we want to raise our chances from
ε^{2} to ε, we should take full advantage of our beloved computers, and
try to *train* them to eventually prove P ≠ NP, RH, etc. etc.
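To make "taking full advantage of our beloved computers" concrete, here is a toy illustration of my own (not from the original text): a few lines that verify the 3x+1 (Collatz) conjecture for every starting value up to an arbitrarily chosen bound. The function name, the bound, and the step cap are all illustrative assumptions.

```python
def collatz_reaches_one(n: int, max_steps: int = 10_000) -> bool:
    """Iterate n -> n/2 (if even) or 3n+1 (if odd); report whether 1 is reached."""
    for _ in range(max_steps):
        if n == 1:
            return True
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    return False  # gave up within the step budget (would signal a counterexample candidate)

if __name__ == "__main__":
    bound = 100_000  # illustrative bound; real verifications go vastly further
    assert all(collatz_reaches_one(n) for n in range(1, bound + 1))
    print(f"3x+1 verified for all n <= {bound}")
```

Of course, no finite check proves the conjecture; the point is only that the grunt work belongs to the machine, not to us.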

Yet, Rome wasn't built in a day. A direct and frontal attack is hopeless. We should start out
*modestly*, and try to have the computer, say, prove a lower bound of 3.1n, beating
the current (I believe) bound of 3n (for full circuit complexity), proved in 1984 by Norbert Blum.
Note that I said that you **must** use the computer: no credit for paper-and-pencil!
Because the *skill* that you would acquire in teaching the computer how to make a minor
improvement may, hopefully, one day enable it to prove a super-polynomial lower bound on (for example)
CLIQUE.

Of course, we need, at present, all the human attributes that helped us prove conjectures and solve open problems,
by hand, namely "cleverness", "creativity" and "thinking out of the box".
But don't waste your (very) limited talent
on doing things by hand. Remember that you are *nothing* by yourself,
and your best bet is to get help from your friend the computer, as Appel-Haken and Hales did.
This would require some investment of your time, and a sharp *learning curve*, teaching
yourself how to program computers *symbolically*, and using meta-algorithms, and getting rid of
the old hang-up that "computers can't think, they can just compute". Of course, they can't think,
but neither can you! Both computers and humans *only* compute.
In other words, we *think* that we think, but we are really *nothing* but (lousy!) computers.
Computers already far surpass us in numerics and
routine symbolics, but soon they will also surpass us in concept- and idea-crunching.

Another piece of human vanity and *superstition* is the
Krattenthalerian insistence on fully "rigorous" proofs, and "absolute" truth, and
the all-or-nothing Boolean narrow-minded mentality.
First, there is no such thing as "rigorous proof". All proofs are either done by us humans
(and we, humans, are nothing!), or
by our much more reliable computer brethren, that nevertheless are built and programmed by us
humans (and we, humans, are nothing!). We should adopt a much more flexible attitude to "truth"
(whatever it is), and encourage diversity. So it would be great if there were
a proposed proof-plan for, say, RH, with intricate lemmas, sublemmas, and subsublemmas,
some of which would still await a fully "rigorous" proof, but which would have
even greater empirical plausibility, and empirical verification, than the
parent statement, RH itself, which has only been checked for a few billion cases.
Perhaps we can develop tools that would meta-determine (non-rigorously, but nevertheless reliably)
lower bounds on the length of a rigorous proof of each yet-unproved piece, if one exists,
and if the lower bound exceeds current resources, we should learn to live with it,
and enjoy what we have! But of course, my dear *freund* Christian Krattenthaler would not
accept this not-yet-full proof for his journal, since, according to him, and unfortunately
according to
ninety nine point nine nine nine per cent of currently living (human) mathematicians, you either have a proof, or you don't,
and "almost doesn't count". Nonsense! Almost (or even a tiny bit) **does** count, since
we are *truly* **nothing**, and *we do what we can* (if at all possible, taking
full advantage of computers).
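In the same spirit of "empirical verification counting for something", here is a small sketch of my own (the function names and the bound are illustrative assumptions, not from the text): checking Goldbach's conjecture for every even number up to a chosen limit with a prime sieve.

```python
def prime_sieve(limit: int) -> bytearray:
    """Sieve of Eratosthenes: is_prime[k] == 1 iff k is prime, for 0 <= k <= limit."""
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return is_prime

def goldbach_holds(n: int, is_prime: bytearray) -> bool:
    """Check whether the even number n >= 4 is a sum of two primes."""
    return any(is_prime[p] and is_prime[n - p] for p in range(2, n // 2 + 1))

if __name__ == "__main__":
    bound = 100_000  # illustrative; the published verifications reach far beyond 10^18
    sieve = prime_sieve(bound)
    assert all(goldbach_holds(n, sieve) for n in range(4, bound + 1, 2))
    print(f"Goldbach verified for all even n <= {bound}")
```

Such a check is not a proof, and a Krattenthalerian referee would rightly reject it as one; but as evidence feeding into a proof-plan whose lemmas carry quantified empirical plausibility, "almost" does count.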

Opinions of Doron Zeilberger