I’m a big fan of proving an identity in multiple ways, as I think each perspective gives additional insight into why the identity is true. In this post we’ll work through six different proofs of the binomial identity $\sum_{k=0}^n k \binom{n}{k} = n 2^{n-1}$.

**1. The absorption identity proof.**

The absorption identity states that, for real $n$ and integer $k \neq 0$, $\binom{n}{k} = \frac{n}{k} \binom{n-1}{k-1}$; equivalently, $k \binom{n}{k} = n \binom{n-1}{k-1}$. Thus we have

$$\sum_{k=0}^n k \binom{n}{k} = \sum_{k=1}^n n \binom{n-1}{k-1} = n \sum_{j=0}^{n-1} \binom{n-1}{j} = n 2^{n-1},$$

recalling that $\sum_{j=0}^{n-1} \binom{n-1}{j} = 2^{n-1}$.
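As a quick numerical sanity check (my addition, not part of the original post), both the absorption step and the final identity can be verified for small $n$:

```python
from math import comb

def lhs(n):
    """Left-hand side of the identity: sum of k*C(n,k) for k = 0..n."""
    return sum(k * comb(n, k) for k in range(n + 1))

for n in range(1, 12):
    # absorption step: k*C(n,k) = n*C(n-1,k-1) for k >= 1
    for k in range(1, n + 1):
        assert k * comb(n, k) == n * comb(n - 1, k - 1)
    # full identity
    assert lhs(n) == n * 2 ** (n - 1)
```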

**2. The combinatorial proof.**

How many chaired committees of any size can be formed from a group of *n* people?

One way to count them is to choose the committee first and then choose the chair. First, we condition on the size of the committee. If there are $k$ people on the committee, there are $\binom{n}{k}$ ways to choose the people to be on the committee. Then there are $k$ ways to choose the chair. Summing up over all possible values of $k$, we find that the answer is $\sum_{k=0}^n k \binom{n}{k}$.

Another way is to choose the chair first and then choose the committee. There are $n$ choices for the chair. Then, for each of the remaining $n-1$ people, we have two options: Each person could be on the committee or not. Thus there are $2^{n-1}$ ways to choose the committee once the chair is chosen. This gives an answer of $n 2^{n-1}$ when choosing the chair first.

Since the two answers must be equal, we have $\sum_{k=0}^n k \binom{n}{k} = n 2^{n-1}$.
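The committee count can also be checked by brute-force enumeration; here is a short Python script (my addition, not part of the original argument):

```python
from itertools import combinations

def chaired_committees(n):
    """Count (committee, chair) pairs among n people, where the committee
    is nonempty and the chair must be a committee member."""
    total = 0
    for size in range(1, n + 1):
        for committee in combinations(range(n), size):
            total += len(committee)  # any member can chair
    return total

# chaired_committees(n) should match both sum_k k*C(n,k) and n*2^(n-1)
```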

**3. The calculus proof.**

Differentiate the binomial theorem, $(1+x)^n = \sum_{k=0}^n \binom{n}{k} x^k$, with respect to $x$ to obtain $n(1+x)^{n-1} = \sum_{k=0}^n k \binom{n}{k} x^{k-1}$. Letting $x = 1$, we have $n 2^{n-1} = \sum_{k=0}^n k \binom{n}{k}$.
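The differentiated identity can be spot-checked exactly at a few rational points; this quick script (my addition) uses `fractions` to avoid floating-point error:

```python
from fractions import Fraction
from math import comb

def deriv_lhs(n, x):
    """n * (1 + x)^(n-1)"""
    return n * (1 + x) ** (n - 1)

def deriv_rhs(n, x):
    """Sum over k of k * C(n,k) * x^(k-1); the k = 0 term vanishes."""
    return sum(k * comb(n, k) * x ** (k - 1) for k in range(1, n + 1))

for n in range(1, 8):
    for x in (Fraction(1), Fraction(1, 2), Fraction(-3, 4)):
        assert deriv_lhs(n, x) == deriv_rhs(n, x)
```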

**4. The probabilistic proof.**

Imagine an experiment where you flip a fair coin *n* times. What is the expected number of heads for this experiment?

One way to determine this starts by conditioning on the number $k$ of heads. If there are $k$ heads, there are $\binom{n}{k}$ ways of choosing which $k$ flips will be the heads. The probability that these flips are all heads is $(1/2)^k$, and the probability that the remaining $n-k$ flips are all tails is $(1/2)^{n-k}$. Multiply these together and apply the definition of expected value to get an answer of $\sum_{k=0}^n k \binom{n}{k} \left(\frac{1}{2}\right)^n$.

Another way is to use indicator variables. Let $X_k$ be $1$ if flip $k$ is heads and $0$ if flip $k$ is tails. Then the number of heads in the sequence of $n$ flips is $X_1 + X_2 + \cdots + X_n$. Also, the expected value of $X_k$ is, by definition, $1 \cdot \frac{1}{2} + 0 \cdot \frac{1}{2} = \frac{1}{2}$. Thus the expected number of heads is $E\left[\sum_{k=1}^n X_k\right] = \sum_{k=1}^n E[X_k] = \frac{n}{2}$. (This is a formal way of arguing for the answer of $n/2$ that our intuition says should be the expected number of heads.)

Equating our two answers, we have $\sum_{k=0}^n k \binom{n}{k} \left(\frac{1}{2}\right)^n = \frac{n}{2}$, which implies $\sum_{k=0}^n k \binom{n}{k} = n 2^{n-1}$.
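For a concrete check (my addition), the conditioning computation can be carried out exactly with `fractions`:

```python
from fractions import Fraction
from math import comb

def expected_heads(n):
    """E[heads] computed by conditioning: sum_k k * C(n,k) * (1/2)^n."""
    return sum(k * comb(n, k) for k in range(n + 1)) * Fraction(1, 2 ** n)

# linearity of expectation says this should always equal n/2
for n in range(1, 16):
    assert expected_heads(n) == Fraction(n, 2)
```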

**5. The exponential generating functions proof.**

For this proof we’re going to need a definition and a few properties of exponential generating functions.

First, the *binomial convolution* of the sequences $(a_n)$ and $(b_n)$ is the sequence whose $n$th term is $\sum_{k=0}^n \binom{n}{k} a_k b_{n-k}$.

Second, we have the following. (See, for example, pages 126-128 of my book *The Art of Proving Binomial Identities*. [1])

- If $f(x) = \sum_{n \geq 0} a_n \frac{x^n}{n!}$ and $g(x) = \sum_{n \geq 0} b_n \frac{x^n}{n!}$, then $f(x) g(x) = \sum_{n \geq 0} \left( \sum_{k=0}^n \binom{n}{k} a_k b_{n-k} \right) \frac{x^n}{n!}$. (This is the binomial convolution property for exponential generating functions.)
- We have $e^x = \sum_{n \geq 0} \frac{x^n}{n!}$. (This is the Maclaurin series for $e^x$.)
- If $f(x)$ is the exponential generating function for the sequence $(a_n)$, then $x f(x)$ is the exponential generating function for the sequence $(b_n)$ given by $b_0 = 0$ and $b_n = n a_{n-1}$ for $n \geq 1$.
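These properties are easy to test numerically. The following sketch (my addition; the helper names are mine) multiplies truncated exponential generating functions as polynomials and checks Property 1 on the sequences $a_n = n$ and $b_n = 1$:

```python
from fractions import Fraction
from math import comb, factorial

def egf_coeffs(seq, N):
    """First N coefficients of the EGF: seq(n)/n! for n = 0..N-1."""
    return [Fraction(seq(n), factorial(n)) for n in range(N)]

def series_mul(f, g):
    """Multiply two truncated power series, keeping len(f) terms."""
    h = [Fraction(0)] * len(f)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            if i + j < len(h):
                h[i + j] += fi * gj
    return h

N = 10
prod = series_mul(egf_coeffs(lambda n: n, N), egf_coeffs(lambda n: 1, N))
# Property 1: n! * [x^n] f(x)g(x) equals the binomial convolution term
conv = [sum(comb(n, k) * k * 1 for k in range(n + 1)) for n in range(N)]
ok = all(factorial(n) * prod[n] == conv[n] for n in range(N))
```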

By the definition of binomial convolution, $\sum_{k=0}^n \binom{n}{k} k$ is the binomial convolution of the sequences $(a_n)$ with $a_n = n$ and $(b_n)$ with $b_n = 1$. What are the exponential generating functions of these two sequences?

By Property 2, the exponential generating function for the sequence $(b_n)$, with $b_n = 1$, is $e^x$.

If we take the sequence $(1, 1, 1, \ldots)$, append a $0$ to the front, and multiply the $n$th term by $n$, we have the sequence $(a_n)$ with $a_n = n$. By Property 3, then, the exponential generating function for the sequence $(a_n)$ is $x e^x$.

Thus, by Property 1, the exponential generating function for $\left( \sum_{k=0}^n \binom{n}{k} k \right)$ is $x e^x \cdot e^x = x e^{2x}$. However, $e^{2x} = \sum_{n \geq 0} \frac{(2x)^n}{n!} = \sum_{n \geq 0} 2^n \frac{x^n}{n!}$, which means that $e^{2x}$ is the exponential generating function for the sequence $(2^n)$. Thus, by Property 3, $x e^{2x}$ is also the exponential generating function for the sequence $(n 2^{n-1})$. Since $\left( \sum_{k=0}^n \binom{n}{k} k \right)$ and $(n 2^{n-1})$ have the same generating function, they must be equal, and so $\sum_{k=0}^n k \binom{n}{k} = n 2^{n-1}$.
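To see the coefficient claim concretely (my addition): $x e^{2x} = \sum_{m \geq 0} 2^m x^{m+1} / m!$, so $n! \, [x^n] \, x e^{2x} = n! \cdot 2^{n-1} / (n-1)! = n 2^{n-1}$ for $n \geq 1$. A quick check in Python:

```python
from fractions import Fraction
from math import comb, factorial

def egf_term(n):
    """n! * [x^n] of x*e^(2x), using x*e^(2x) = sum_m 2^m x^(m+1) / m!."""
    if n == 0:
        return 0
    return factorial(n) * Fraction(2 ** (n - 1), factorial(n - 1))

for n in range(1, 12):
    assert egf_term(n) == sum(k * comb(n, k) for k in range(n + 1)) == n * 2 ** (n - 1)
```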

**6. The finite differences bootstrapping proof.**

This proof requires a result that I don’t think is that well-known. It’s Theorem 4 in my paper “Combinatorial Sums and Finite Differences.” [2]

Theorem: Let $(a_k)$ and $(b_k)$ be sequences such that $(b_k)$ is the finite difference sequence for $(a_k)$; i.e., $b_k = a_{k+1} - a_k$ for each $k$. If $A_n = \sum_{k=0}^n \binom{n}{k} a_k$ and $B_n = \sum_{k=0}^n \binom{n}{k} b_k$ then, for $n \geq 1$,

$$A_n = 2^n a_0 + \sum_{k=0}^{n-1} 2^{n-1-k} B_k.$$

First, if $a_k = 1$ for all $k$, then $b_k = a_{k+1} - a_k = 0$. Thus $\sum_{k=0}^n \binom{n}{k} b_k = 0$ for every $n$, and, by the theorem, $\sum_{k=0}^n \binom{n}{k} = 2^n \cdot 1 = 2^n$.

Next, if $a_k = k$, then $b_k = a_{k+1} - a_k = 1$. By the theorem and the previous result, then, $\sum_{k=0}^n k \binom{n}{k} = \sum_{k=0}^{n-1} 2^{n-1-k} \, 2^k = \sum_{k=0}^{n-1} 2^{n-1} = n 2^{n-1}$.
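Reading the theorem as the recurrence $A_n = 2A_{n-1} + B_{n-1}$ unrolled (my interpretation of the bootstrapping; the helper names are mine), both steps can be verified directly in Python:

```python
from math import comb

def bt(seq, n):
    """Binomial transform: sum_{k=0}^n C(n,k) * seq(k)."""
    return sum(comb(n, k) * seq(k) for k in range(n + 1))

def via_theorem(a, n):
    """A_n = 2^n * a_0 + sum_{k=0}^{n-1} 2^(n-1-k) * B_k,
    where b_k = a(k+1) - a(k)."""
    b = lambda k: a(k + 1) - a(k)
    return 2 ** n * a(0) + sum(2 ** (n - 1 - k) * bt(b, k) for k in range(n))

for n in range(1, 10):
    # step 1: a_k = 1 gives sum_k C(n,k) = 2^n
    assert via_theorem(lambda k: 1, n) == bt(lambda k: 1, n) == 2 ** n
    # step 2: a_k = k gives sum_k k*C(n,k) = n * 2^(n-1)
    assert via_theorem(lambda k: k, n) == bt(lambda k: k, n) == n * 2 ** (n - 1)
```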

**References**

- Spivey, Michael Z. *The Art of Proving Binomial Identities*. CRC Press, 2019.
- Spivey, Michael Z. “Combinatorial Sums and Finite Differences.” *Discrete Mathematics*, 307 (24): 3130–3146, 2007.

Somewhat similar to your binomial convolution version, but what about writing $\exp(X)\exp(Y) = \exp(X+Y)$, expanding each term as a Taylor series (or formal power series), and then considering $X = Y = 1$?

Could you provide some more details on the steps of that argument?