Log-Convexity of the Bell Numbers

A sequence \{a_n\} is said to be log-convex if, for all n \geq 1, a_n^2 \leq a_{n-1} a_{n+1}.

In this post I give a short, combinatorial-flavored proof that the Bell numbers are log-convex.  The argument is from my answer here on Math.SE.

The combinatorial interpretation of the Bell number B_n is that it counts the number of ways to partition the set \{1, 2, \ldots, n\} into nonempty subsets.  Let S_n denote the total number of sets, summed over all partitions of \{1, 2, \ldots, n\}.  Then A_n = S_n/B_n is the average number of sets in a partition of \{1, 2, \ldots, n\} chosen uniformly at random.

Claim 1: A_n is increasing.

Proof: Each partition of \{1, 2, \ldots, n+1\} can be associated with a partition of \{1, 2, \ldots, n\} by removing the element n+1 from the set containing it.  Under the inverse of this mapping, each partition of \{1, 2, \ldots, n\} consisting of k sets maps to k+1 partitions of \{1, 2, \ldots, n+1\}: k of them with k sets (if n+1 is placed in an already-existing set) and one with k+1 sets (if n+1 is placed in a set by itself).  Thus partitions of \{1, 2, \ldots, n\} with more sets produce more partitions of \{1, 2, \ldots, n+1\}, each with at least as many sets, so they carry more weight in the average.  This raises the average number of sets as we move from n elements to n+1 elements.

Claim 2: The log-convexity of B_n is equivalent to the fact that A_n is increasing.

Proof: Separate the partitions counted by B_{n+1} into (1) those that have a set consisting of the single element n+1 and (2) those that don’t.  It should be clear that there are B_n of the former.  There are S_n of the latter, because each partition in group (2) is formed by adding n+1 to one of the sets of a partition of \{1, 2, \ldots, n\}, and a partition with k sets can be extended in this way in exactly k ways; summing over all partitions of \{1, 2, \ldots, n\} gives S_n.  Thus B_{n+1} = B_n + S_n.

Therefore,

\displaystyle B_n^2 \leq B_{n-1} B_{n+1} \iff \frac{B_{n+1}}{B_n} \geq \frac{B_n}{B_{n-1}} \iff 1 + \frac{S_n}{B_n} \geq 1 + \frac{S_{n-1}}{B_{n-1}} \iff A_n \geq A_{n-1}.

Since we established the last inequality in Claim 1, the Bell numbers are log-convex.
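Here is a quick numerical check of both claims (a Python sketch; the recurrence used to generate the Bell numbers and the cutoff N = 20 are my own choices, not part of the argument above):

```python
# Sketch: verify Claim 1 (A_n increasing) and log-convexity for small n.
from math import comb
from fractions import Fraction

N = 20
B = [1]  # B_0 = 1
for n in range(N):
    # Standard recurrence: B_{n+1} = sum_{k=0}^{n} C(n, k) B_k
    B.append(sum(comb(n, k) * B[k] for k in range(n + 1)))

# From the proof of Claim 2, S_n = B_{n+1} - B_n, so A_n = S_n / B_n.
A = [Fraction(B[n + 1] - B[n], B[n]) for n in range(N)]

assert all(A[n] >= A[n - 1] for n in range(1, N))                  # Claim 1
assert all(B[n] ** 2 <= B[n - 1] * B[n + 1] for n in range(1, N))  # log-convexity
print(B[:8])  # [1, 1, 2, 5, 15, 52, 203, 877]
```

Using exact Fraction arithmetic for the averages avoids any floating-point issues in the comparisons.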


I was not the first to find this proof; Canfield [1] published it in 1995.  Engel [2] was the first to prove that the Bell numbers are log-convex; he used a different method.  See my answer here on Math.SE for references to more proofs.

References

1.  E. Rodney Canfield, “Engel’s Inequality for Bell Numbers,” Journal of Combinatorial Theory, Series A, 72 (1995) 184-187.

2. Konrad Engel, “On the Average Rank of an Element in a Filter of the Partition Lattice,” Journal of Combinatorial Theory, Series A, 65 (1994) 67-78.


Proving the Existence of Irrational Numbers

The ancient Greeks first proved the existence of irrational numbers by proving that \sqrt{2} is irrational.  The proof is, as modern proofs of irrationality go, fairly simple.  It is often the first example of a proof of irrationality that students see in, say, a course in real analysis.

Yesterday, though, I saw an even simpler proof of irrationality in Morgan’s Real Analysis [1] that I thought worth showing to my students in advanced calculus.

Theorem: The number \log_{10} 5 is irrational.

Proof: First, \log_{10} 5 > 0 because 5 and 10 are both greater than 1.  Suppose that \log_{10} 5 is rational.  Then there exist positive integers p, q such that 10^{p/q} = 5.  Thus \displaystyle 10^p = 5^q.  However, 10^p ends in 0, while 5^q ends in 5.  This is a contradiction, and so \log_{10} 5 is irrational.
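For what it’s worth, the last-digit observation is easy to confirm by machine for small exponents (a throwaway Python sketch of my own, not part of Morgan’s proof):

```python
# Sketch: 10^p always ends in 0, while 5^q always ends in 5,
# so 10^p = 5^q is impossible for positive integers p, q.
for e in range(1, 20):
    assert (10 ** e) % 10 == 0
    assert (5 ** e) % 10 == 5
```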

References

  1. Frank Morgan, Real Analysis, American Mathematical Society, 2005.

Shifts, Finite Differences, and Binomial Transforms

The binomial transform of a sequence a_n is the sequence b_n satisfying \displaystyle b_n = \sum_{k=0}^n \binom{n}{k} a_k.

In this post we prove the following:

Theorem: If b_n is the binomial transform of a_n, then \displaystyle \Delta^m b_n = \sum_{k=0}^n \binom{n}{k} a_{k+m}.

In other words, the mth finite difference of b_n is the binomial transform of the shifted sequence a_{n+m}.  This theorem is useful because, once we know one binomial coefficient identity in the form of a binomial transform, we can use it to generate others.

Proof: First, we have

\displaystyle \Delta b_n = b_{n+1} - b_n = \sum_{k=0}^{n+1} \binom{n+1}{k} a_k - \sum_{k=0}^n \binom{n}{k} a_k = \sum_{k=0}^{n+1} \binom{n}{k-1} a_k = \sum_{k=0}^n \binom{n}{k} a_{k+1}.

In the second-to-last step we use Pascal’s recurrence \binom{n+1}{k} = \binom{n}{k} + \binom{n}{k-1} plus the fact that \binom{n}{n+1} = 0.  In the last step we use \binom{n}{-1} = 0 and then shift the index of summation.

Applying \Delta again gives \displaystyle \Delta (\Delta b_n) = \sum_{k=0}^n \binom{n}{k} a_{k+2}, and, by induction on m, \displaystyle \Delta^m b_n = \sum_{k=0}^n \binom{n}{k} a_{k+m}.
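Before moving to the example, here is a quick numerical check of the theorem (a Python sketch; the test sequence a_k = k^2 + 1 and the ranges are arbitrary choices of mine):

```python
# Sketch: check that the m-th finite difference of b equals the binomial
# transform of the shifted sequence a_{k+m}.
from math import comb

a = [k * k + 1 for k in range(40)]  # any test sequence will do
b = [sum(comb(n, k) * a[k] for k in range(n + 1)) for n in range(40)]

def delta(seq):
    """Forward difference: (Delta s)_n = s_{n+1} - s_n."""
    return [seq[n + 1] - seq[n] for n in range(len(seq) - 1)]

for m in range(4):
    lhs = b[:]
    for _ in range(m):
        lhs = delta(lhs)  # apply Delta m times
    for n in range(10):
        assert lhs[n] == sum(comb(n, k) * a[k + m] for k in range(n + 1))
```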


Example: Start with the identity \displaystyle \sum_{k=0}^n \binom{n}{k} \frac{(-1)^k}{k+1} = \frac{1}{n+1}.  (I’ve proved this identity three different ways so far on this blog: here, here, and here.)  Multiplying it by -1, we have \displaystyle \sum_{k=0}^n \binom{n}{k} \frac{(-1)^{k+1}}{k+1} = - \frac{1}{n+1}.

Applying the theorem we just proved yields

\displaystyle \sum_{k=0}^n \binom{n}{k} \frac{(-1)^{k+2}}{k+2} = - \frac{1}{n+2} + \frac{1}{n+1} = \frac{-(n+1) + (n+2)}{(n+1)(n+2)} = \frac{1}{(n+1)(n+2)}.

Since (-1)^{k+2} = (-1)^2 (-1)^k = (-1)^k, this simplifies to

\displaystyle \sum_{k=0}^n \binom{n}{k} \frac{(-1)^k}{k+2} = \frac{1}{(n+1)(n+2)}.


This identity can be generalized by taking m-1 finite differences.  I’ll leave it to the reader to prove the generalization

\displaystyle \sum_{k=0}^n \binom{n}{k} \frac{(-1)^k}{k+m} = \frac{n!}{m(m+1) \cdots (m+n)}.
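If you want to test the generalization before proving it, here is a quick exact-arithmetic check (a Python sketch; the ranges for m and n are arbitrary):

```python
# Sketch: verify sum_{k=0}^{n} C(n,k) (-1)^k / (k+m) = n! / (m (m+1) ... (m+n)).
from math import comb, factorial, prod
from fractions import Fraction

for m in range(1, 6):
    for n in range(8):
        lhs = sum(Fraction((-1) ** k * comb(n, k), k + m) for k in range(n + 1))
        rhs = Fraction(factorial(n), prod(m + j for j in range(n + 1)))
        assert lhs == rhs
```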


A Combinatorial Proof for the Power Sum

Probably the most common formula for the power sum \displaystyle \sum_{k=0}^n k^m is the one involving binomial coefficients and Bernoulli numbers B_k (taking the convention B_1 = 1/2), sometimes called Faulhaber’s formula:

\displaystyle \sum_{k=0}^n k^m = \frac{1}{m+1} \sum_{k=0}^m \binom{m+1}{k} B_k n^{m+1-k}.

Historically, this, or a variant of it, was the first general formula for the power sum.

There are other ways to express the power sum, though.  The subject of this post is a combinatorial proof of the following one, which involves Stirling numbers of the second kind:

\displaystyle \sum_{k=0}^n k^m = \sum_{k=1}^m \binom{n+1}{k+1} \left\{ m \atop k \right\} k! .

Proof: How many functions f are there from the set \{1, 2, \ldots, m+1 \} to the set \{1, 2, \ldots, n+1\} such that f(1) > f(j) for every j \in \{2, \ldots, m+1\}?

Answer 1: Condition on the value of f(1). If f(1) = k+1, then there are k choices for f(2), k choices for f(3), etc., for a total of k^m functions. Summing over all possible values of k gives the left side.

Answer 2: Condition on the number of elements in the range of f. (This is not the codomain, which is \{1, 2, \ldots, n+1\}, but the subset consisting of elements from the codomain that have something from \{1, 2, \ldots, m+1\} mapped to them.  The range may be the entire codomain or a proper subset of it.) If the range of f has k+1 elements, there are \binom{n+1}{k+1} ways to choose exactly which elements will comprise the range. The largest of these must be f(1), and the remaining m elements in the domain must be mapped to the other k elements in the range. There are \left\{ m \atop k \right\} ways to assign the remaining m elements to k nonempty groups, and then k! ways to assign these groups to the actual k remaining elements in the range. (More generally, the number of onto functions from an m-set to a k-set is \left\{ m \atop k \right\} k!.) Summing over all possible values of k gives the right side.

(I first gave this proof as an answer here on Math.SE.)
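Here is a quick numerical check of the identity, with the Stirling numbers of the second kind computed from their standard recurrence (a Python sketch; the small ranges are arbitrary):

```python
# Sketch: verify sum_{k=0}^{n} k^m = sum_{k=1}^{m} C(n+1, k+1) * S(m, k) * k!.
from math import comb, factorial

def stirling2(m, k):
    """Stirling number of the second kind, via S(m,k) = k S(m-1,k) + S(m-1,k-1)."""
    if m == k:
        return 1
    if k == 0 or k > m:
        return 0
    return k * stirling2(m - 1, k) + stirling2(m - 1, k - 1)

for m in range(1, 7):
    for n in range(12):
        lhs = sum(k ** m for k in range(n + 1))
        rhs = sum(comb(n + 1, k + 1) * stirling2(m, k) * factorial(k)
                  for k in range(1, m + 1))
        assert lhs == rhs
```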


Yet Another Nice Proof of the Derangements Formula

A couple of posts from a few years back give proofs of the formula for D_n, the number of derangements on n elements.  (A derangement is a permutation with no fixed points.)  Both proofs avoid the use of the principle of inclusion-exclusion, which is the standard method of proving the derangements formula.

This post contains another proof of the derangements formula, and I think this one is my favorite of the three.  It starts with a proof of the following identity:

\displaystyle n! = \sum_{k=0}^n \binom{n}{k} D_{n-k} = \sum_{k=0}^n \binom{n}{k} D_k.

Proof.  Both sides of the identity count the number of permutations on n elements.  The left side is straightforward.  For the middle formula, condition on the number of fixed points in the permutation.  To count the number of permutations with k fixed points, first choose which k of the n elements are to be mapped to themselves.  This can be done in \binom{n}{k} ways.  The remaining n-k elements cannot be mapped to themselves; i.e., they must be deranged.  There are D_{n-k} ways to do this.  Multiply these together and sum over all possible values of k to get the middle formula.  The right side just reverses the summation on the middle formula (i.e., replacing k with n-k), while using the binomial coefficient symmetry property \binom{n}{k} = \binom{n}{n-k}.

Next, apply the binomial inversion formula.  This comes in different forms.  The one we will use says that for sequences f(n) and g(n) we have

\displaystyle f(n) = \sum_{k=0}^n \binom{n}{k} g(k) \iff g(n) = \sum_{k=0}^n \binom{n}{k} f(k) (-1)^{n-k}.

Binomial inversion can be proved multiple ways.  My favorite uses exponential generating functions, but there are more direct proofs, too (see, for example, p. 193 of Concrete Mathematics [1]).

Applying the binomial inversion formula to the identity above, with f(n) = n! and g(k) = D_k, we get

\displaystyle D_n = \sum_{k=0}^n \binom{n}{k} k! (-1)^{n-k} = \sum_{k=0}^n \binom{n}{k} (n-k)! (-1)^k = n! \sum_{k=0}^n \frac{(-1)^k}{k!},

where the second step reverses the summation.
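As a sanity check, this formula agrees with a brute-force count of derangements for small n, and the identity we inverted holds as well (a Python sketch; the cutoff n \leq 7 just keeps the enumeration fast):

```python
# Sketch: compare D_n = n! * sum_{k=0}^{n} (-1)^k / k! with a direct count,
# and check the identity n! = sum_{k=0}^{n} C(n,k) D_k along the way.
from math import comb, factorial
from itertools import permutations

def D_formula(n):
    return sum((-1) ** k * factorial(n) // factorial(k) for k in range(n + 1))

def D_brute(n):
    # count permutations of {0, ..., n-1} with no fixed points
    return sum(all(p[i] != i for i in range(n)) for p in permutations(range(n)))

for n in range(8):
    assert D_formula(n) == D_brute(n)
    assert factorial(n) == sum(comb(n, k) * D_formula(k) for k in range(n + 1))
```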

References

1.  Ronald L. Graham, Donald E. Knuth, and Oren Patashnik.  Concrete Mathematics (2nd ed.), Addison-Wesley, 1994.

Solving Two-Term Combinatorial Recurrence Relations, Part II

A few years ago I published a paper [2] giving a partial answer to the following question in Graham, Knuth, and Patashnik’s Concrete Mathematics (Problem 6.92):

Develop a general theory of the solutions to the two-parameter recurrence

\displaystyle \left| n \atop k \right| = (\alpha n + \beta k + \gamma) \left| n-1 \atop k \right| + (\alpha' n + \beta' k + \gamma') \left| n-1 \atop k-1 \right| + [n=k=0].
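For readers who want to experiment with the recurrence numerically, here is a small sketch that tabulates the triangle for a given choice of parameters (the helper function and its name are mine, purely for illustration).  With (\alpha, \beta, \gamma) = (0, 1, 0) and (\alpha', \beta', \gamma') = (0, 0, 1) it reproduces the Stirling numbers of the second kind, and with both linear factors identically 1 it gives the binomial coefficients.

```python
# Sketch: tabulate |n k| from the recurrence
#   |n k| = (a*n + b*k + c) |n-1 k| + (a2*n + b2*k + c2) |n-1 k-1| + [n = k = 0].
def triangle(a, b, c, a2, b2, c2, rows):
    T = [[0] * (rows + 1) for _ in range(rows + 1)]
    T[0][0] = 1  # the [n = k = 0] term
    for n in range(1, rows + 1):
        for k in range(n + 1):
            left = (a * n + b * k + c) * T[n - 1][k]
            right = (a2 * n + b2 * k + c2) * (T[n - 1][k - 1] if k > 0 else 0)
            T[n][k] = left + right
    return T

print(triangle(0, 1, 0, 0, 0, 1, 6)[6])  # row 6 of the Stirling-2 triangle
print(triangle(0, 0, 1, 0, 0, 1, 6)[6])  # row 6 of Pascal's triangle
```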

A post from a few years ago discusses the results in my paper.  I also asked a question on MathOverflow (my first MO question!) about other known results.

Well, a recent paper by Barbero, Salas, and Villaseñor, “Bivariate Generating Functions for a Class of Linear Recurrences: General Structure” [1], gives a complete solution to this recurrence in the form of exponential generating functions.  I’m not going to try to summarize their paper here, as it’s too technical, although I will mention that they do have to break the recurrence into cases to obtain their results.  (Given my earlier work on the problem this does not surprise me.)  If you’re interested, take a look.  They do some very nice work to get their results.


References

1.  Barbero, Salas, and Villaseñor, “Bivariate Generating Functions for a Class of Linear Recurrences: General Structure,” Journal of Combinatorial Theory, Series A, 125 (2014) 146-165.

2.  Michael Z. Spivey, “On Solutions to a General Combinatorial Recurrence,” Journal of Integer Sequences, 14 (2011), Article 11.9.7.


A Probabilistic Proof of a Binomial Coefficient Identity

I’m a big fan of combinatorial proofs, in which you show that both sides of an identity count the same entity in two different ways.  Probabilistic proofs – in which you show that both sides of an identity express the probability that an event occurs in two different ways – are a close relative of combinatorial proofs.  In fact, many probabilistic proofs can be translated directly into combinatorial proofs by clearing denominators.  Sometimes, though, the probabilistic proof is more natural than the corresponding combinatorial one.  Today’s post is one of these.

One of the basic binomial coefficient identities that involves a fraction is the following:

\displaystyle \sum_{k=0}^n \binom{n}{k} \frac{(-1)^k}{k+1} = \frac{1}{n+1}.

Interpreting this probabilistically, the left-hand side looks like it uses inclusion-exclusion, and the right looks like the probability of selecting one special item out of n+1 choices. With that in mind, we can further interpret the left-hand side as having something to do with selecting items out of k+1 choices, as k varies from 0 to n. Probabilists like balls and jars, so let’s construct a scenario in those terms.

Suppose balls numbered 1 through n are placed in a jar along with one black ball.  We draw the balls from the jar one by one, uniformly at random, removing each ball as it is drawn.  What is the probability that the black ball is the first ball chosen?  It’s pretty clear that this is \frac{1}{n+1}, the right-hand side of our identity.

To use inclusion-exclusion for the left-hand side we have to interpret “the black ball is the first ball chosen” as the union or intersection of a collection of other events. Moreover, we have to do this in such a way as to work in the binomial coefficient and unit fraction \frac{1}{k+1} as part of counting up the probabilities when these events are intersected. From the intersection standpoint, the black ball does have to come before ball 1 and before ball 2 and before ball 3 and so forth.

So let’s let A_i, 1 \leq i \leq n, be the event that the black ball is drawn before ball i and use the intersection version of inclusion-exclusion to find an expression for P\left( \bigcap_{i=1}^n A_i \right). We have that P(\bar{A}_i) = \frac{1}{2} for any i. Similarly, P(\bar{A}_i \cap \bar{A}_j) = \frac{1}{3} because the probability that the black ball does not come before ball i and does not come before ball j is the probability that it comes last of those three, which is \frac{1}{3}. In general, the probability that the black ball comes as the last of any k+1 specific balls is \frac{1}{k+1}, and there are \binom{n}{k} collections of k+1 specific balls that include the black ball. By inclusion-exclusion, then, the probability that the black ball is the first ball chosen is given by

\displaystyle 1 - \sum_{k=1}^n \binom{n}{k} \frac{(-1)^{k+1}}{k+1} = \sum_{k=0}^n \binom{n}{k} \frac{(-1)^k}{k+1}.

Thus both sides of the identity represent the probability that the black ball is the first ball chosen out of n+1 specific balls.
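Readers who like empirical confirmation can simulate the experiment directly (a Monte Carlo sketch in Python; the value of n and the number of trials are arbitrary):

```python
# Sketch: estimate the probability that the black ball is drawn first
# and compare it with 1 / (n + 1).
import random

def estimate(n, trials=200_000):
    hits = 0
    for _ in range(trials):
        balls = list(range(1, n + 1)) + ["black"]
        random.shuffle(balls)
        hits += balls[0] == "black"
    return hits / trials

n = 9
print(estimate(n), 1 / (n + 1))  # both should be near 0.1
```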

(There are, of course, other proofs of this identity.  Two of the quicker ones are to use the absorption identity for binomial coefficients, as in my post here, and to integrate the function (1-x)^n in two different ways, as in my post here.)
