Posted on YouTube (3 parts): Part 1 Part 2 Part 3

We will review algorithm-Z, and show how it is used in proving the q-binomial theorem bijectively. Then we will describe a generalization with one additional parameter, and show how this is used in proving the q-Gauss summation formula bijectively. If time allows, we will discuss possibilities for future research. This is joint work with Ae Ja Yee (Penn State).

Posted on YouTube (2 parts): Part 1 Part 2

See http://www.math.rutgers.edu/~zeilberg/Opinion113.html

Posted on YouTube (2 parts): Part 1 Part 2

I will discuss some interesting new integer sequences from M. LeBrun, J. van Eck, H. van der Sanden, G. Back & M. Caragiu, M. Sapir, and N. Inaba, related to number theory and geometry. There are many unsolved problems. I will also give a brief report on the new OEIS, which was successfully launched on Nov. 11, 2010 after two years of struggle (see http://oeis.org).

(Talk not video-recorded)

A permutation π=(π_1, …, π_n)

Posted on YouTube (2 parts): Part 1 Part 2

The definition of integration has been greatly generalized since Riemann, but in a surprising direction: functions that are less and less continuous, or even not everywhere defined, remain integrable. On the other hand, perfectly nice functions (holonomic and/or with closed forms) which are analytic outside some (often rather mild) singularities are not integrable. By revisiting the process of (definite) integration, we can define an 'integral' which can cope with (some) singularities, in quite close parallel to summation strategies for divergent series. From this method we derive new effective algorithms for (classical) definite integration in closed form. But we also get insight into why such a generalization is qualitatively different from previous generalizations since, like summation of divergent series, the 'integral' is no longer process-independent in the general case.

Posted on YouTube (2 parts): Part 1 Part 2

Two of the most important ideas in quantum mechanics are compatibility of measurements and entanglement of outcomes. We look at a combinatorial description (simplicial sets) of the space of measurements and the space of possible outcomes of those experiments. An experiment is then a map from the space of outcomes to the space of measurements that assigns to each outcome the measurement that corresponds to it. The result of an experiment is simply a map from measurements to outcomes that is inverse to the experiment map. Different no-go theorems and properties of quantum systems are then simple statements about experiment maps. From this point of view, the notions of compatibility (which restricts the space of measurements) and entanglement (which restricts the space of outcomes) are seen as intimately related phenomena.

Posted on YouTube (2 parts): Part 1 Part 2

We give a homological interpretation of the coefficients of the Hilbert series for an algebra associated with a directed graph and its dual algebra. This allows us to obtain necessary conditions for Koszulity of such algebras in terms of homological properties of the graphs. We use our results to construct algebras with a prescribed Hilbert series. (joint work with Shirlei Serconek and Robert Wilson)

Due to technical difficulties, the video for this talk has been lost.

Han discovered two hook length formulas involving binary trees in which the hook lengths appear as exponents, rather than only as divisors as they do in most hook length formulas. Sagan gave a probabilistic proof of one of the formulas, and of generalizations to ordered trees and to finite subtrees of infinite rooted trees. We present another algorithm, choosing a random labeled tree by using each hook-length factor as a probability. Our algorithm gives a single proof of Han's first formula and both generalizations, and we show how both our algorithm and Sagan's algorithm can be modified to prove Han's second formula as well.

Slides from the talk.

Posted on YouTube (2 parts): Part 1 Part 2

In this thesis defense I will talk about two topics within the field of non-linear recurrences that appear as two chapters of my dissertation. First I will talk about global asymptotic stability in rational recurrences. A recurrence is globally asymptotically stable when the sequence it produces converges to an equilibrium solution given any initial conditions. I will explain the algorithm that I developed, in joint work with Doron Zeilberger, that takes as input a rational recurrence relation conjectured to be globally asymptotically stable, and outputs a rigorous proof of its stability. I will show how this algorithm has been applied to prove global asymptotic stability of some rational recurrences.

Secondly I will discuss a new type of generalized recurrence. Instead of producing a single sequence, these generalized recurrences produce infinitely many sequences from one set of initial conditions. I will present two families that produce rational numbers when complex numbers are expected, and observe that exponential sequences are being produced.

In addition, I will briefly mention the third topic from my thesis: a 3-parameter family of rational recurrences that produces integer sequences. I will mention the process used to prove integrality of these sequences. This topic was inspired by the Somos recurrences.

Slides from the talk.

Posted on YouTube (2 parts): Part 1 Part 2

We consider the problem of counting the number of permutations containing a given number of subpermutations matching a given pattern. This "number of copies" statistic provides many interesting counting problems, including counting the number of permutations with no copies of (i.e. avoiding) a given pattern or patterns. In this thesis defense I will discuss several of these problems, providing algorithmic solutions which apply to broad classes of patterns. These include applying the Goulden-Jackson cluster method to count permutations with a given number of copies of "dashed" patterns, adapting enumeration schemes to count permutations with no copies of dashed patterns, and adapting enumeration schemes to count permutations with no copies of certain patterns and a given number of copies of others.

I will also briefly discuss the following related extensions: converting enumeration schemes into functional equations for the corresponding generating functions, pattern avoidance by even permutations, and asymptotic joint-distributions of the permutation statistics "inversion number" and "major index."
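For small n, the "number of copies" statistic for classical (undashed) patterns can be checked by brute force; the sketch below is offered only as an illustration of the statistic, not of the enumeration schemes or cluster method of the talk:

```python
from collections import Counter
from itertools import combinations, permutations

def copies(perm, pattern):
    """Count occurrences of pattern in perm, i.e. subsequences of perm
    that are order-isomorphic to pattern (a classical, undashed pattern)."""
    k, hits = len(pattern), 0
    for idxs in combinations(range(len(perm)), k):
        sub = [perm[i] for i in idxs]
        order = sorted(sub)
        if tuple(order.index(v) + 1 for v in sub) == tuple(pattern):
            hits += 1
    return hits

def copy_distribution(n, pattern):
    """Map r -> number of n-permutations with exactly r copies of pattern."""
    return Counter(copies(p, pattern) for p in permutations(range(1, n + 1)))

# Permutations with 0 copies of 123 (i.e. avoiding 123) are counted
# by the Catalan numbers: copy_distribution(4, (1, 2, 3))[0] == 14
```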

Posted on YouTube (2 parts): Part 1 Part 2

When we read a paper, how deep do we read it? For my Master's Essay I looked at K. F. Roth's

The main page for the project discussed is: http://www.math.rutgers.edu/~davidjwi/roth/.

Posted on YouTube (2 parts): Part 1 Part 2

Following up on a project started in the Experimental Mathematics class, we uncovered a potentially new combinatorial approach to investigating properties of numbers. The approach rests on the observation that numbers can be identified with familiar combinatorial objects, namely rooted trees. The bijection between numbers and trees provides some insights into unexpected connections between number theory, combinatorics, and discrete probability theory.
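The abstract does not specify the bijection, but one classical correspondence of exactly this flavor is the Matula-Goebel bijection between positive integers and rooted trees; the sketch below is offered as a possible illustration, not necessarily the construction of the talk:

```python
def smallest_factor(n):
    """Smallest prime factor of n (n itself if n is prime)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def prime_index(p):
    """1-based index of the prime p (2 -> 1, 3 -> 2, 5 -> 3, ...)."""
    i, q = 0, 1
    while q < p:
        q += 1
        if smallest_factor(q) == q:
            i += 1
    return i

def tree(n):
    """Rooted tree of n under the Matula-Goebel bijection, encoded as a
    sorted tuple of subtrees: 1 is a leaf; each prime factor p_i (the
    i-th prime) of n contributes the tree of i as a child of the root."""
    if n == 1:
        return ()
    p = smallest_factor(n)
    return tuple(sorted(tree(n // p) + (tree(prime_index(p)),)))

# tree(4) has two leaf children (4 = 2*2); tree(3) is a path of 3 nodes.
```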

Posted on YouTube (2 parts): Part 1 Part 2

Grid classes consist, roughly, of those permutations which can be divided into a specified number of pieces satisfying local conditions. These classes have played a crucial role in the classification of small permutation classes, and are emerging as a powerful tool for the enumeration problem in general. I will report on recent results obtained jointly with a number of researchers, including M. H. Albert, M. D. Atkinson, M. Bouvel, R. Brignall, and N. Ruskuc.

Posted on YouTube (2 parts): Part 1 Part 2

The deep quench obstacle problem constitutes a mathematical model for low temperature phase separation. As in similar models, such as the Cahn-Hilliard equation, the dynamics are typically marked by an initial linear regime, followed by local saturation to equilibrium, and then by an extended coarsening regime. While various aspects of the dynamics of this problem are well understood, relatively recent results have shed some new light on, for example, precise coarsening rates and their realm of applicability. In the present lecture, we plan to present a variety of numerical results, which provide guidelines for analytical conjecture by focusing on various benchmarks.

(Joint work with L. Banas and R. Nurnberg.)

Posted on YouTube (2 parts): Part 1 Part 2

Suppose that you are dying to know the EXACT value of the number of partitions of a googol (10^100).

(Joint work with Andrew V. Sills; see this masterpiece. Also very useful was the help of "joro" from the amazing MathOverflow forum.)
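For modest n, the partition numbers p(n) can of course be computed exactly with Euler's pentagonal number recurrence; the sketch below is only a baseline, since the talk concerns values of n far beyond the reach of this O(n^{3/2}) approach:

```python
def partitions(N):
    """Exact values p(0), ..., p(N) via Euler's pentagonal number
    recurrence: p(n) = sum_k (-1)^(k+1) [p(n - k(3k-1)/2) + p(n - k(3k+1)/2)]."""
    p = [0] * (N + 1)
    p[0] = 1
    for n in range(1, N + 1):
        total, k = 0, 1
        while True:
            g1 = k * (3 * k - 1) // 2  # generalized pentagonal numbers
            g2 = k * (3 * k + 1) // 2
            if g1 > n:
                break
            sign = -1 if k % 2 == 0 else 1
            total += sign * p[n - g1]
            if g2 <= n:
                total += sign * p[n - g2]
            k += 1
        p[n] = total
    return p

# p(5) = 7 and p(100) = 190569292, both exact integers.
```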

Posted on YouTube (2 parts): Part 1 Part 2

This talk will investigate a two player combinatorial game with parameters a, b and n. The game starts at a prearranged number n, and is a race to say the number 1. Each player, on his or her turn, can either subtract a from the current number, or divide the current number by b and round up. We will look at the Sprague-Grundy function of this game, and in particular, the sequence created by fixing a and b and letting n vary. While this sequence is not periodic, for many pairs of values a and b there are interesting patterns that appear. For certain pairs a and b, we will also be able to answer the more practical question: is there a simple formula for who wins?
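The rules above translate directly into a memoized Sprague-Grundy computation; this is a sketch, and the convention that subtraction is only legal when the result stays at 1 or above is an assumption about the ruleset:

```python
from functools import lru_cache

def grundy(n, a, b):
    """Sprague-Grundy value of position n in the subtract-a / divide-by-b game.
    Moves from n: n -> n - a (assumed legal only if n - a >= 1),
    or n -> ceil(n / b). Position 1 is terminal: whoever says 1 wins,
    so the player to move at 1 has already lost, and g(1) = 0."""
    @lru_cache(maxsize=None)
    def g(m):
        if m == 1:
            return 0
        options = set()
        if m - a >= 1:
            options.add(g(m - a))
        options.add(g(-(-m // b)))  # ceiling division
        # mex: minimum excluded non-negative integer
        return min(x for x in range(len(options) + 1) if x not in options)
    return g(n)

# For a = 1, b = 2: g(1) = 0, g(2) = 1, g(3) = 0, g(4) = 2.
```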

Posted on YouTube (2 parts): Part 1 Part 2

Zeilberger's celebrated method of creative telescoping computes equations for given definite sums or integrals. These equations are linear recurrence or differential equations of a certain finite order r with polynomial coefficients of some degree d. For designing efficient summation software, it is useful to know in advance for which pairs (r,d) there will exist a solution of order r and degree d. These pairs (r,d) form a certain region in N^2, whose precise shape was not well understood until now. Together with Shaoshi Chen, we have recently determined a curve which provides a surprisingly accurate description of the boundary of this region. We will not make an attempt at explaining our technical derivation of this curve in the talk, but we will show how its knowledge can be used to determine, for example, the pair (r,d) for which the computational cost is minimal. Perhaps surprisingly, it turns out that this is not the pair where r is minimal.

Posted on YouTube (2 parts): Part 1 Part 2

I will describe the notions of strong and weak epsilon nets in range spaces, and explain briefly some of their many applications in Discrete Geometry and Combinatorics, focusing on several recent results in the investigation of the extremal questions that arise in the area. Even after the recent progress, many of the basic problems remain open.

Posted on YouTube (2 parts): Part 1 Part 2

Consider the map which sends a positive integer to the sum of the squares of its digits. A number is defined to be happy if by iterating this map you eventually arrive at 1. Richard Guy inspired the study of happy numbers in his book
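The map is easy to experiment with; a minimal sketch (the cycle detection via a seen-set is an implementation convenience, since unhappy orbits are eventually periodic):

```python
def is_happy(n):
    """Iterate the sum-of-squared-digits map; n is happy iff it reaches 1.
    Orbits that never reach 1 eventually cycle, so we stop on repetition."""
    seen = set()
    while n != 1 and n not in seen:
        seen.add(n)
        n = sum(int(d) ** 2 for d in str(n))
    return n == 1

# The happy numbers below 25 are 1, 7, 10, 13, 19, 23.
```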

Posted on YouTube (2 parts): Part 1 Part 2

The method of creative telescoping was first formulated by Zeilberger in the early 1990s. Zeilberger's method has become a powerful tool in the study of special function identities. Algorithms for creative telescoping terminate on holonomic functions. However, the holonomicity of functions is difficult to detect algorithmically in general. In 1992, Wilf and Zeilberger conjectured that a hypergeometric function is holonomic if and only if it is proper. In this talk, I will present a proof of this conjecture with the help of an extension of the Ore-Sato theorem. According to some recent results, the properness of a hypergeometric function can be decided algorithmically via a certain multiplicative decomposition of its certificates. We thus get a bridge between the holonomicity and the properness of hypergeometric functions, turning the Wilf-Zeilberger conjecture into a theorem.

(This is joint work with Christoph Koutschan and Garth Payne)

Posted on YouTube (2 parts): Part 1 Part 2

A system of partial differential equations of hypergeometric type can be determined by specifying an integer matrix of maximal rank together with a complex vector of parameters. We will say that such a system of equations has maximally reducible monodromy if its space of local holomorphic solutions in a neighbourhood of a generic point splits into the direct sum of one-dimensional invariant subspaces. In the talk, I will present necessary and sufficient conditions for the monodromy of a bivariate nonconfluent hypergeometric system to be maximally reducible. In particular, any bivariate system defined by a matrix whose rows determine a plane zonotope admits maximally reducible monodromy for some choice of the vector of its complex parameters.

As an application, I will deduce estimates on the analytic complexity of bivariate algebraic functions. According to V.K. Beloshapka's definition, the order of complexity of any univariate function is equal to zero while the n-th complexity class is defined recursively to consist of functions of the form a(b(x,y)+c(x,y)), where a is a univariate analytic function and b and c belong to the (n-1)-th complexity class. Such a representation is meant to be valid for suitable germs of multi-valued holomorphic functions.

A randomly chosen bivariate analytic function will most likely have infinite analytic complexity. However, for a number of important families of algebraic functions their complexity is finite and can be computed or estimated. Using properties of solutions to the Gelfand-Kapranov-Zelevinsky system, we obtain estimates for the analytic complexity of such functions.

Posted on YouTube (2 parts): Part 1 Part 2

Although the algorithmic treatment of symbolic summation has a long history, standard algorithms and computer algebra implementations suffer from the fact that they might unnecessarily require running time exponential in the input size. A popular example of this kind is applying Gosper's algorithm to a rational or quasi-rational summand: in such cases the running time can be exponential in the input size, even if the result has size polynomial in the input size. We will present a set of new algorithms that overcome this long-standing deficiency in the theory and practice of indefinite summation, in particular a new effective rational summation algorithm based on shiftless factorization of polynomials and synthetic division in the ring of linear difference operators. A similar approach is applied to improve quasi-rational summation algorithms, replacing Gosper's algorithm.

Posted on YouTube (2.5 parts): Part 1a Part 1b Part 2

A deterministic finite automaton is called synchronizing if there exists a word that sends every state to one particular state. Any word with this property is called a synchronizing word. The Černý Conjecture, which has been open since 1964, states that any n-state synchronizing automaton must have a synchronizing word of length at most (n-1)^2. In this expository talk, we will outline some of the classical and more recent results toward settling this conjecture.
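The definitions above can be checked directly by breadth-first search over subsets of states, which finds a shortest synchronizing word (exponential in the number of states in general, but enough to verify the extremal Černý automaton C_4); a sketch:

```python
from collections import deque

def shortest_sync_word(states, delta):
    """BFS on subsets of states for a shortest synchronizing word.
    delta[q][c] is the state reached from state q on letter c.
    Returns None if the automaton is not synchronizing."""
    letters = list(next(iter(delta.values())).keys())
    start = frozenset(states)
    queue = deque([(start, "")])
    seen = {start}
    while queue:
        subset, word = queue.popleft()
        if len(subset) == 1:
            return word
        for c in letters:
            image = frozenset(delta[q][c] for q in subset)
            if image not in seen:
                seen.add(image)
                queue.append((image, word + c))
    return None

# Cerny automaton C_4: 'a' is the cyclic shift i -> i+1 mod 4,
# 'b' fixes every state except 3, which it sends to 0.
n = 4
delta = {q: {"a": (q + 1) % n, "b": 0 if q == n - 1 else q} for q in range(n)}
word = shortest_sync_word(range(n), delta)
# len(word) == (n - 1) ** 2 == 9, matching the Cerny bound exactly.
```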

Posted on YouTube (2 parts): Part 1 Part 2

For a prime p and a positive integer x, the p-adic valuation of x is the exponent of the highest power of p that divides x. The goal of the talk is to present a small collection of examples of p-adic valuations of classical sequences x_n.
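The definition is easy to compute with, and one classical instance is Legendre's formula v_p(n!) = sum over i of floor(n / p^i); a minimal sketch:

```python
def vp(x, p):
    """p-adic valuation: the exponent of the highest power of p dividing x."""
    assert x > 0
    v = 0
    while x % p == 0:
        x //= p
        v += 1
    return v

def vp_factorial(n, p):
    """v_p(n!) by Legendre's formula, without computing n! itself."""
    v, q = 0, p
    while q <= n:
        v += n // q
        q *= p
    return v

# Example: v_2(10!) = 5 + 2 + 1 = 8, and indeed 2^8 divides 3628800 = 10!.
```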

Posted on YouTube (2 parts): Part 1 Part 2

A classical theorem in Euclidean geometry asserts that if a set of points has the property that every line through two of them contains a third point, then they must all be on the same line. We prove several approximate versions of this theorem, which are motivated from questions about locally correctable codes and matrix rigidity. The proofs use an interesting combination of combinatorial, algebraic and analytic tools.

(Joint work with Boaz Barak, Zeev Dvir and Amir Yehudayoff)

Posted on YouTube (2 parts): Part 1 Part 2

The class of algebras associated to directed graphs discovered by Gelfand, Retakh, Serconek, and Wilson is related to factorizations of polynomials over non-commutative algebras. In 2008 the first non-Koszul example of an algebra of this type was found by Cassidy and Shelton. Recent results by Retakh, Serconek and Wilson have produced conditions for numerical Koszulity based on the homological properties of the underlying graph. We discuss a computer-aided proof which gives the minimal example of a layered graph producing an A(Γ) that fails to be Koszul. We also discuss the relationship between this algebra, the Cassidy-Shelton example, and other similar non-Koszul algebras.

Posted on YouTube (2 parts): Part 1 Part 2