An observation on the unit circle

We did a quick review of the unit circle in my multivariate calculus class last week, and I pointed out a fact about the sines and cosines of the common angles in the first quadrant that some of the students appeared not to have seen before.  I thought I would record it here.

The five angles most commonly encountered in the first quadrant, together with their coordinates on the unit circle, are as follows (angle measures are in radians):

  • 0: (1,0)
  • \pi/6: (\sqrt{3}/2, 1/2)
  • \pi/4: (\sqrt{2}/2, \sqrt{2}/2)
  • \pi/3: (1/2, \sqrt{3}/2)
  • \pi/2: (0, 1)

Part of the reason these coordinates are important is that they tell you the sine and cosine of the corresponding angle.  The cosine value is the first coordinate (technical term: abscissa), while the sine value is the second coordinate (technical term: ordinate).  For example, \cos (\pi/3) = 1/2 and \sin(\pi/3) = \sqrt{3}/2.

Here’s another way of writing the same information, one that illustrates a pattern with the coordinate values.

  • 0: (\sqrt{4}/2, \sqrt{0}/2)
  • \pi/6: (\sqrt{3}/2, \sqrt{1}/2)
  • \pi/4: (\sqrt{2}/2, \sqrt{2}/2)
  • \pi/3: (\sqrt{1}/2, \sqrt{3}/2)
  • \pi/2: (\sqrt{0}/2, \sqrt{4}/2)

Every coordinate value is of the form \sqrt{i}/2, where i is an integer between 0 and 4.  As the angle measure increases from 0 radians to \pi/2 radians, the cosine value decreases from \sqrt{4}/2 down to \sqrt{0}/2, while the sine value increases from \sqrt{0}/2 up to \sqrt{4}/2.  Perhaps this pattern will help some folks memorize the coordinate values more easily.
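For anyone who wants to see the pattern confirmed numerically, here's a quick Python check (an illustration of mine, not part of the classroom observation):

```python
import math

# The claimed pattern: for i = 0, 1, 2, 3, 4, the angle in position i has
# cosine sqrt(4-i)/2 and sine sqrt(i)/2.
angles = [0, math.pi/6, math.pi/4, math.pi/3, math.pi/2]

for i, angle in enumerate(angles):
    assert math.isclose(math.cos(angle), math.sqrt(4 - i) / 2, abs_tol=1e-12)
    assert math.isclose(math.sin(angle), math.sqrt(i) / 2, abs_tol=1e-12)
```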


Six Proofs of a Binomial Identity

I’m a big fan of proving an identity in multiple ways, as I think each perspective gives additional insight into why the identity is true.  In this post we’ll work through six different proofs of the binomial identity \displaystyle \sum_{k=0}^n \binom{n}{k} k = n 2^{n-1}.

1. The absorption identity proof.

The absorption identity states that, for real n, k \binom{n}{k} = n \binom{n-1}{k-1}.  Thus we have

\displaystyle \sum_{k=0}^n \binom{n}{k} k = \sum_{k=0}^n \binom{n-1}{k-1} n = n \sum_{k=-1}^{n-1} \binom{n-1}{k} = n \sum_{k=0}^{n-1} \binom{n-1}{k} = n 2^{n-1}, recalling that \binom{n-1}{-1} = 0.
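If you'd like a quick empirical sanity check of both the absorption identity and the resulting sum, here's a short illustrative Python snippet (`math.comb` requires Python 3.8+):

```python
from math import comb

# Check k*C(n,k) = n*C(n-1,k-1) and sum_k C(n,k)*k = n*2^(n-1)
# for small values of n.
for n in range(1, 10):
    for k in range(1, n + 1):
        assert k * comb(n, k) == n * comb(n - 1, k - 1)
    assert sum(comb(n, k) * k for k in range(n + 1)) == n * 2 ** (n - 1)
```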

2. The combinatorial proof.

How many chaired committees of any size can be formed from a group of n people?

One way to count them is to choose the committee first and then choose the chair.  First, we condition on the size of the committee.  If there are k people on the committee, there are \binom{n}{k} ways to choose the people to be on the committee.  Then there are k ways to choose the chair.  Summing up over all possible values of k, we find that the answer is \sum_{k=0}^n \binom{n}{k} k.

Another way is to choose the chair first and then choose the committee.  There are n choices for the chair.  Then, for each of the remaining n-1 people, we have two options: Each person could be on the committee or not.  Thus there are 2^{n-1} ways to choose the committee once the chair is chosen.  This gives an answer of n 2^{n-1} when choosing the chair first.

Since the two answers must be equal, we have \displaystyle \sum_{k=0}^n \binom{n}{k} k = n 2^{n-1}.
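The double count can also be verified by brute force for small n.  The sketch below (my own illustration; the function name is invented) literally enumerates every committee together with every choice of chair:

```python
from itertools import combinations
from math import comb

def chaired_committees(n):
    """Count every (committee, chair) pair drawn from n people,
    where the chair must be a member of the committee."""
    count = 0
    for size in range(1, n + 1):
        for committee in combinations(range(n), size):
            count += len(committee)  # any member may serve as chair
    return count

for n in range(1, 9):
    assert chaired_committees(n) == sum(comb(n, k) * k for k in range(n + 1))
    assert chaired_committees(n) == n * 2 ** (n - 1)
```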

3. The calculus proof.

Differentiate the binomial theorem, \displaystyle \sum_{k=0}^n \binom{n}{k} x^k = (x+1)^n, with respect to x to obtain \displaystyle \sum_{k=0}^n \binom{n}{k} k x^{k-1} = n(x+1)^{n-1}.  Letting x = 1, we have \displaystyle \sum_{k=0}^n \binom{n}{k} k = n 2^{n-1}.
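The differentiated binomial theorem holds for every x, not just x = 1, so checking it at several points makes for a slightly stronger test.  A quick illustrative check in Python:

```python
from math import comb

# Verify sum_k C(n,k)*k*x^(k-1) = n*(x+1)^(n-1) at several integer points.
# The k = 0 term is zero, so the sum can start at k = 1.
for n in range(1, 8):
    for x in [1, 2, 3, -2]:
        lhs = sum(comb(n, k) * k * x ** (k - 1) for k in range(1, n + 1))
        assert lhs == n * (x + 1) ** (n - 1)
```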

4. The probabilistic proof.

Imagine an experiment where you flip a fair coin n times.  What is the expected number of heads for this experiment?

One way to determine this starts by conditioning on the number k of heads.  If there are k heads, there are \binom{n}{k} ways of choosing which k flips will be the heads.  The probability that these flips are all heads is (1/2)^k, and the probability that the remaining flips are all tails is (1/2)^{n-k}.  Multiply these together and apply the definition of expected value to get an answer of \displaystyle \sum_{k=0}^n \binom{n}{k} k \left(\frac{1}{2} \right)^k \left(\frac{1}{2} \right)^{n-k} = \sum_{k=0}^n \binom{n}{k} k \left( \frac{1}{2} \right)^n = \frac{1}{2^n} \sum_{k=0}^n \binom{n}{k} k.

Another way is to use indicator variables.  Let X_k be 1 if flip k is heads and 0 if flip k is tails.  Then the number of heads in the sequence of n flips is \sum_{k=1}^n X_k.  Also, the expected value of X_k is, by definition, E(X_k) = 1 (1/2) + 0 (1/2) = 1/2.  Thus the expected number of heads is E \left( \sum_{k=1}^n X_k \right) = \sum_{k=1}^n E(X_k) = \sum_{k=1}^n (1/2) = n/2.  (This is a formal way of arguing for the answer of n/2 that our intuition says should be the expected number of heads.)

Equating our two answers, we have \displaystyle \frac{1}{2^n} \sum_{k=0}^n \binom{n}{k} k = \frac{n}{2}, which implies \displaystyle \sum_{k=0}^n \binom{n}{k} k = n 2^{n-1}.
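The pmf-based computation can be carried out exactly with rational arithmetic, which makes a nice sanity check (illustrative Python, using the standard library's Fraction):

```python
from fractions import Fraction
from math import comb

# Expected number of heads computed directly from the binomial pmf;
# it should always come out to exactly n/2.
for n in range(1, 12):
    expected = sum(Fraction(comb(n, k) * k, 2 ** n) for k in range(n + 1))
    assert expected == Fraction(n, 2)
```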

5. The exponential generating functions proof.

For this proof we’re going to need a definition and a few properties of exponential generating functions.

First, the binomial convolution of the sequences (a_n) and (b_n) is given by \displaystyle \sum_{k=0}^n \binom{n}{k} a_k b_{n-k}.

Second, we have the following.  (See, for example, pages 126-128 of my book The Art of Proving Binomial Identities. [1])

  1. If \displaystyle f_a(x) = \sum_{n=0}^{\infty} a_n \frac{x^n}{n!} and \displaystyle f_b(x) = \sum_{n=0}^{\infty} b_n \frac{x^n}{n!} then \displaystyle f_a(x) f_b(x) = \sum_{n=0}^{\infty} \left(\sum_{k=0}^n \binom{n}{k} a_k b_{n-k} \right) \frac{x^n}{n!}.  (This is the binomial convolution property for exponential generating functions.)
  2. We have \displaystyle e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!}.  (This is the Maclaurin series for e^x.)
  3. If \displaystyle f_a(x) = \sum_{n=0}^{\infty} a_n \frac{x^n}{n!} then x f_a(x) is the exponential generating function for the sequence given by (n a_{n-1}) = (0, a_0, 2a_1, 3a_2, \ldots).

By the definition of binomial convolution, \sum_{k=0}^n \binom{n}{k} k is the binomial convolution of the sequences with a_n = n and b_n = 1.  What are the exponential generating functions of these two sequences?

By Property 2, the exponential generating function for the sequence (b_n), with b_n = 1, is e^x.

Now apply Property 3 to the sequence 1, 1, 1, \ldots, whose exponential generating function is e^x: prepending a 0 and multiplying the term in position n by n gives (0, 1, 2, 3, \ldots), which is exactly the sequence with a_n = n.  Thus the exponential generating function for (a_n) is x e^x.

Thus, by Property 1, the exponential generating function for \sum_{k=0}^n \binom{n}{k} k is x e^x (e^x) = x e^{2x}.  However, \displaystyle e^{2x} = \sum_{k=0}^{\infty} \frac{(2x)^k}{k!} = \sum_{k=0}^{\infty} \frac{2^k x^k}{k!}, which means that e^{2x} is the exponential generating function for the sequence (2^n).  Thus, by Property 3, x e^{2x} is also the exponential generating function for the sequence (n 2^{n-1}).   Since \left(\sum_{k=0}^n \binom{n}{k} k \right) and \left(n 2^{n-1}\right) have the same generating function, they must be equal, and so \displaystyle \sum_{k=0}^n \binom{n}{k} k = n 2^{n-1}.
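The coefficient bookkeeping in this argument is easy to confirm numerically.  Since x e^{2x} = \sum_{n \geq 1} 2^{n-1} x^n/(n-1)!, the coefficient of x^n/n! is n! \cdot 2^{n-1}/(n-1)! = n 2^{n-1}; the illustrative check below compares that with the binomial convolution it is supposed to generate:

```python
from math import comb, factorial

# x*e^(2x) = sum_{n>=1} 2^(n-1) * x^n / (n-1)!, so the coefficient of
# x^n/n! is n! * 2^(n-1) / (n-1)! = n * 2^(n-1).
for n in range(1, 12):
    egf_coefficient = factorial(n) * 2 ** (n - 1) // factorial(n - 1)
    assert egf_coefficient == n * 2 ** (n - 1)
    assert egf_coefficient == sum(comb(n, k) * k for k in range(n + 1))
```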

6. The finite differences bootstrapping proof.

This proof requires a result that I don’t think is that well-known.  It’s Theorem 4 in my paper “Combinatorial Sums and Finite Differences.” [2]

Theorem: Let (a_k) and (b_k) be sequences such that (a_k) is the finite difference sequence for (b_k); i.e., \Delta b_k = b_{k+1} - b_k = a_k for each k \geq 0.  If g_n = \sum_{k=0}^n \binom{n}{k} a_k and h_n = \sum_{k=0}^n \binom{n}{k} b_k then, for n \geq 0, \displaystyle h_n = 2^n \left( b_0 + \sum_{k=1}^n \frac{g_{k-1}}{2^k} \right).

First, if b_k = 1, then a_k = 0.  Thus g_n = 0, and, by the theorem, h_n = \sum_{k=0}^n \binom{n}{k} = 2^n.

Next, if b_k = k, then a_k = k+1 - k = 1.  By the theorem and the previous result, then, \displaystyle \sum_{k=0}^n \binom{n}{k} k = 2^n \left(0 + \sum_{k=1}^n \frac{2^{k-1}}{2^k} \right) = 2^n \sum_{k=1}^n \frac{1}{2} = 2^n \frac{n}{2} = n 2^{n-1}.
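Since the theorem may be unfamiliar, here is an illustrative Python sketch (the function name is mine) that verifies it directly for the two sequences used above, plus one more:

```python
from fractions import Fraction
from math import comb

def check_theorem(b, N):
    """Verify h_n = 2^n * (b_0 + sum_{k=1}^n g_{k-1}/2^k) for n < N,
    where a_k = b(k+1) - b(k), g_n = sum_k C(n,k)*a(k), and
    h_n = sum_k C(n,k)*b(k)."""
    a = lambda k: b(k + 1) - b(k)
    g = lambda m: sum(comb(m, k) * a(k) for k in range(m + 1))
    for n in range(N):
        h = sum(comb(n, k) * b(k) for k in range(n + 1))
        rhs = 2 ** n * (b(0) + sum(Fraction(g(k - 1), 2 ** k)
                                   for k in range(1, n + 1)))
        assert h == rhs

check_theorem(lambda k: 1, 10)     # recovers sum_k C(n,k)   = 2^n
check_theorem(lambda k: k, 10)     # recovers sum_k C(n,k)*k = n*2^(n-1)
check_theorem(lambda k: k * k, 8)  # the theorem holds for other sequences too
```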

References 

  1. Spivey, Michael Z.  The Art of Proving Binomial Identities.  CRC Press, 2019.
  2. Spivey, Michael Z.  “Combinatorial Sums and Finite Differences.”  Discrete Mathematics, 307 (24): 3130-3146, 2007.

Zeno’s Paradoxes

The ancient Greek philosopher Zeno of Elea is known for proposing several paradoxes related to time, space, motion, and infinity.  In this post we’ll focus on one of Zeno’s paradoxes and discuss how some ideas associated with calculus might or might not be able to resolve it.

The most famous of Zeno’s paradoxes is perhaps the Dichotomy paradox.  In this one, Zeno points out that in order for an object in motion to reach its goal it must first cover half the distance to its goal.  But to do so it must have previously covered half of the first half, or a quarter of the total distance.  Continuing with this line of thinking, we see that the object must have already covered an eighth of the total distance before reaching the quarter point, a sixteenth of the total distance before reaching the one-eighth point, and so forth.  We end up with an infinite sequence of intervals that the object must traverse in order to travel the finite distance to its goal.  This seems impossible, for multiple reasons.  For one, it seems likely that one cannot actually complete infinitely many tasks.  For another, since an object’s initial movement interval can always be halved, how would the object even start moving?  Because of objections such as these, the conclusion of the paradox is that motion cannot occur.
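The repeated halving in the Dichotomy corresponds to the geometric series 1/2 + 1/4 + 1/8 + \cdots.  A tiny Python illustration of its behavior, using exact rational arithmetic:

```python
from fractions import Fraction

# Partial sums of 1/2 + 1/4 + 1/8 + ... : each one falls short of the
# full distance of 1, yet the gap shrinks toward zero.
total = Fraction(0)
for k in range(1, 51):
    total += Fraction(1, 2 ** k)
    assert total < 1  # no finite partial sum reaches the goal

assert 1 - total == Fraction(1, 2 ** 50)  # but the gap becomes negligible
```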

Of course we know that things do, in fact, move, so the real intellectual challenge here is to determine exactly where Zeno’s line of reasoning fails.

One way out is to posit that space and time are not actually infinitely divisible.  Mathematicians assume that they are, but it might be the case that, say, real numbers are an intellectual abstraction that only approximates the true discrete nature of reality.  This would turn some basic calculus concepts on their heads; for example, it would make a derivative an approximation of a difference quotient, rather than the other way around.

Alternatively, some folks think that the precise nineteenth-century epsilon-delta notion of a limit has resolved Zeno’s paradoxes.  Certainly the Dichotomy paradox as presented here is considering a problem very close to the one that led to the precise definition of the derivative.  In the Dichotomy paradox there is an infinite sequence of time intervals involved, but they are getting smaller and smaller.  Moreover, each time interval has an associated distance interval, and these distance intervals are also getting smaller and smaller.  By dividing the length of each distance interval by the length of its corresponding time interval we get the sequence of average speeds associated with these intervals.  Mathematically, this is a sequence of difference quotients.  So the problem, in the Dichotomy paradox, of what happens as the distance and time intervals get smaller and smaller is closely related to the problem of what happens to the difference quotients as the associated intervals get smaller and smaller.

Over three hundred years after the development of calculus, mathematicians are quite comfortable with what happens to an infinite sequence of difference quotients as the associated intervals get smaller and smaller: If those difference quotients are consistently getting closer and closer to a specific number, then we call that specific number the derivative.  And we define precisely what we mean by “consistently getting closer and closer” through an epsilon-delta (or perhaps epsilon-N) argument.  Thus we have a consistent way to talk about the motion that the Dichotomy paradox says cannot occur.  Moreover, the value of the derivative is the speed of the object, a concept that makes no sense unless the object is actually moving.

But does this formulation really resolve Zeno’s Dichotomy paradox?  I’m not convinced it does.  I’m coming to the conclusion that mathematicians have merely defined away the problem posed by Zeno.  To be sure, mathematicians have found a way to talk about what happens to an infinite sequence of numbers that is both logically coherent and fits what we understand to be happening with those numbers.  On the other hand, can an actual person perform infinitely many tasks?  I would argue not, and I don’t think the limit approach truly addresses this aspect of the Dichotomy paradox.  It just avoids it in a consistent way.

At the very least, though, Zeno’s paradoxes have forced mathematicians to consider carefully what they mean by notions such as “infinitely small” and “arbitrarily large,” as well as the concept of infinity itself.  And that is no small thing.


Five Theses on Mathematics in Games

This past weekend I gave a talk entitled “Mathematics Through Narrative” at the NarraScope conference on narrative games at MIT in Cambridge, Massachusetts.  Much of what I discussed I had also put into a series of blog posts several months ago.  (See part 1, part 2, and part 3 of those posts.)

However, in the talk I also presented a set of five theses on mathematics in games that did not appear in those blog posts.  Here they are, with commentary.

Mathematics in games should…

  1. Be integrated into the story world.
  2. Be used to solve puzzles in-game.
  3. Feature, if possible, repetition.
  4. Allow the player to play with the mathematical concepts.
  5. Avoid symbolic manipulation as much as possible.

Mathematics in games should be integrated into the story world.  This was the primary point I was making in part 1 of the series of blog posts from several months back, so perhaps just a quick recap is all that’s necessary here.  First, we retain new ideas (both mathematical and otherwise) when we can situate them in a web of mental connections that we already have.  (I don’t think this is a particularly controversial or outlandish claim.)  Therefore, games that attempt to teach mathematics should integrate the math into the story world.  They should not yank the player out of the story world and force him to solve mathematics problems that are otherwise irrelevant to that story world in order to advance in the game.  To do otherwise turns the gaming experience into a homework set with a fancy wrapper around it.  It also likely leaves the impression that mathematics isn’t relevant to real life.  We can do better than that.

Mathematics in games should be used to solve puzzles in-game.  This follows from the previous thesis.  If we’re going to integrate math into the story world, how do we go about doing it?  Well, lots of games from a variety of genres have puzzles that must be solved in order to advance in the game.  And mathematics is a great source of ideas for puzzles.  It’s also an underutilized one: There is a lot of mathematics out there, but I have seen very little of it actually appear in a game.  I’m sure that’s because mathematics is perceived as inaccessible.  In addition, game designers aren’t used to thinking about mathematical ideas as part of their puzzle-building toolkit, beyond some standard logic puzzles.  However, there are examples of mathematical ideas – even advanced ones – appearing in games.  Jessica Sklar has an article [1] on linear algebra in game puzzles.  I also have an article [2] that discusses – among other examples – the Klein bottle puzzle in Trinity and Zeno’s bridge in Beyond Zork.  Then there’s my A Beauty Cold and Austere, whose gameplay is all about solving mathematics-based puzzles.  There is still a lot more mathematics out there that could be tapped to create engaging challenges for players, too.  Combinatorial optimization problems in particular seem like a rich, underutilized puzzle source.

Mathematics in games should feature, if possible, repetition.  I don’t think this is a particularly controversial claim.  After all, it’s clear that we learn and retain ideas not just when they’re placed in a context but also through continued repetition.  I put the qualifier “if possible” in there because the repetition goal can cut against other design goals.  If a game is going for breadth, mathematically speaking, then there’s not going to be space in the game to hit every mathematical idea multiple times.  In addition, if a game just wants to feature a few mathematical ideas as puzzles or has more of an exposure-to-mathematics goal in mind, then repetition becomes less important.  But if a game really has the intent to teach mathematics, then repetition must be one of the features the game designer has to keep in mind.

Mathematics in games should allow the player to play with the mathematical concepts.  This is related to the previous thesis.  One way to implement something like repetition without creating multiple puzzles that feature exactly the same concept (which, if not carefully done, could get boring for the player) is to create a mini-sandbox featuring the mathematical idea you want to get across.  Then the player can engage the concept from multiple angles, seeing how it responds under different choices she makes.  I’ll draw an analogy here with people: If you want to get to know someone, you’ll have to see how that person responds in lots of different scenarios (e.g., with friends, with family, in good times, in stressful situations), not just in the context in which you originally met them.  The same is true with ideas – mathematical and otherwise.  You can’t truly understand an idea until you look at it from multiple perspectives.  One example in A Beauty Cold and Austere is the sequences-and-series machine.  By playing with the various controls on the machine you can create sixteen different settings for the golden path leading out of the room, each of which represents a different sequence or series.  Following the path for a particular setting takes the player to a number space corresponding to the limit associated with the underlying sequence or series.  Some of these paths represent solutions to puzzles, and some just exist to allow the player to engage with the mathematics.

Mathematics in games should avoid symbolic manipulation as much as possible.  An expression like \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k} = \ln 2 is a concise way to represent the alternating harmonic series and its value, but it’s also a foreign language to the vast majority of players out there.  As such, when the typical player encounters something like this expression in a game it is not inviting or engaging.  Instead, it’s intimidating and off-putting.  As game designers it is our responsibility to find ways to express fascinating mathematical ideas without triggering players’ phobias around mathematics, many of which are (I’m convinced) tied to those ideas’ symbolic representations.  For example, when I gave my talk at NarraScope I displayed the alternating harmonic series equation above as an example of something to avoid.  Then I demoed a couple of puzzles from A Beauty Cold and Austere, the second of which was the sequences-and-series machine.  After playing with several of the settings to see what happened I picked a particular collection of settings and asked the audience whether we could predict where the golden path would lead.  We agreed that these settings would not get us to infinity (unlike another setting) nor result in values that bounced around more and more wildly (also unlike another setting).  Instead, these settings would create a path that would eventually settle down to a limiting value somewhere between 0 and 1.  And sure enough, that’s what happened, with the game telling us that the actual value was the natural logarithm of 2.  Then I pointed out that we had just recreated the ideas behind the alternating harmonic series equation I had displayed fifteen minutes earlier, without using any mathematical symbols at all.  That produced multiple murmurs from the audience, including a “mind blown” comment from someone sitting near the front.  (I’ll admit that that was my favorite moment of the whole talk.)  
The point is that we can avoid symbolic manipulation when representing mathematics in games, even if we have to get creative in how we do it.
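For the curious, the convergence the audience and I predicted can be replicated outside the game.  Here is an illustrative Python check that the partial sums of the alternating harmonic series do settle down to \ln 2:

```python
import math

# Partial sums of 1 - 1/2 + 1/3 - 1/4 + ... approach ln 2 (about 0.6931).
partial = 0.0
for k in range(1, 100001):
    partial += (-1) ** (k + 1) / k

# The error of an alternating series partial sum is at most the next term.
assert abs(partial - math.log(2)) < 1e-4
```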

So, these are my five theses on featuring mathematics in games.  It’s a full ninety fewer than Martin Luther had.  I also did not nail them to one of the classroom doors at MIT.

References

  1. Sklar, Jessica.  “Dials and levers and glyphs, oh my! Linear algebra solutions to computer game puzzles.” Mathematics Magazine 79(5), 2006: 360-367.
  2. Spivey, Michael Z.  “Mathematics through narrative,” Math Horizons 26(3), 2019: 22-25.

The Art of Proving Binomial Identities is Now Available

A couple of months ago I posted that my book, The Art of Proving Binomial Identities, would soon be finished.  Well, it’s out now, and you can buy it straight from the publisher (CRC Press).  It’s up on Amazon as well.

The March blog post summarizes the contents of the book, but there is one other feature of it I’d like to highlight.  In addition to the usual index of topics, I added an index of identities and theorems.  Every identity and theorem in the book appears in this index, with page numbers indicating exactly where.  My intent was for this to facilitate one of my main goals with the book: Understanding particular binomial identities better by looking at them through multiple lenses.  For example, the references to Identity 24, \sum_{k=0}^n \binom{n}{k} \frac{(-1)^k}{k+1} = \frac{1}{n+1} (which I used in my last post), lead you to proofs of it that involve the absorption identity, integration, the beta function, probability, generating functions, finite differences, and mechanical summation.  Each proof gives another perspective on why Identity 24 is true.

Also, at some point within the past couple of months I needed to find a particular binomial identity.  Sure enough, it was in the index!  That was gratifying.

(There are a couple of small errors on the CRC site.  First, I am no longer department chair, as I finished my three-year term in 2018.  I was department chair when I submitted the book proposal, and I corrected it on the info sheet I was sent a couple of months back, but for some reason it didn’t get updated on the CRC site.  Second, there is a chapter on mechanical summation that didn’t make it into the site’s description of the Table of Contents.  I suspect this is because the TOC list is from the original draft of the book I sent CRC, and the mechanical summation chapter was added later.)


Sum of the Reciprocals of the Binomial Coefficients

In this post we’re going to prove the following identity for the sum of the reciprocals of the numbers in column k of Pascal’s triangle, valid for integers k \geq 2:

Identity 1: \displaystyle \sum_{n=k}^{\infty} \frac{1}{\binom{n}{k}} = \frac{k}{k-1}.

The standard way to prove Identity 1 is to convert the binomial coefficient in the denominator of the left side to an integral expression using the beta function, swap the integral and the summation, and pull some algebraic tricks with derivatives to sum the infinite series and then evaluate the integral.  (See, for example, the proof of Identity 107 on pages 77-78 of my upcoming Art of Proving Binomial Identities.)  In this post, though, we’re going to show how one can prove this identity using just three simple binomial identities:

Identity 2: \displaystyle \sum_{k=0}^n \binom{n}{k} \frac{(-1)^k}{k+x}  = \frac{n!}{x(x+1) \cdots (x+n)},

Identity 3: \displaystyle \sum_{k=0}^m \binom{n}{k} (-1)^k = (-1)^m \binom{n-1}{m},

Identity 4: \displaystyle \sum_{k=0}^n \binom{n}{k} \frac{(-1)^k}{k+1} = \frac{1}{n+1}.

Identity 2 is your basic partial fractions decomposition formula when the factors in the denominator are consecutive integers.  Identity 3 gives the formula for the partial alternating row sum of the binomial coefficients.  Identity 4 is perhaps less well-known, but it does have a nice interpretation as two different ways of expressing the probability of drawing the red ball from a jar containing one red ball and n numbered blue balls.  (These are Identities 109, 20, and 24, respectively, in The Art of Proving Binomial Identities.)
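Before putting the three identities to work, it doesn't hurt to spot-check them.  Here's an illustrative Python verification using exact rational arithmetic:

```python
from fractions import Fraction
from math import comb, factorial

for n in range(1, 8):
    # Identity 2, at a few rational values of x:
    for x in [Fraction(1), Fraction(3), Fraction(1, 2)]:
        lhs = sum(comb(n, k) * (-1) ** k / (k + x) for k in range(n + 1))
        denom = Fraction(1)
        for j in range(n + 1):
            denom *= x + j  # x(x+1)...(x+n)
        assert lhs == factorial(n) / denom
    # Identity 3, the partial alternating row sums:
    for m in range(n + 1):
        lhs = sum(comb(n, k) * (-1) ** k for k in range(m + 1))
        assert lhs == (-1) ** m * comb(n - 1, m)
    # Identity 4:
    lhs = sum(Fraction(comb(n, k) * (-1) ** k, k + 1) for k in range(n + 1))
    assert lhs == Fraction(1, n + 1)
```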

On to the proof.

Proof of Identity 1.

First, let’s rewrite the summand of Identity 1 as a fraction with consecutive integer factors and apply Identity 2:

\displaystyle \frac{1}{\binom{n}{k}} = \frac{k!}{(n-k+1) \cdots n} = k \frac{(k-1)!}{(n-k+1) \cdots n} = k \sum_{j=0}^{k-1} \binom{k-1}{j} \frac{(-1)^j}{j + n - k+1}.

This gives the left side of Identity 1 as

\displaystyle k \sum_{n=k}^{\infty} \sum_{j=0}^{k-1} \binom{k-1}{j} \frac{(-1)^j}{j + n - k+1}.

Now, let’s transform the region of summation (just as we might transform the region of integration in a double integral) from an (n,j) space to an (i,j) space, where i = j+n-k+1.  The idea here is that we want to see what happens when we fix a particular denominator and sum over all the numerator expressions for that denominator.  The original region of summation in (n,j) space looks like an infinite rectangle whose three finite sides are n = k, j = 0, and j = k-1.  The transformed region of summation looks like an infinite trapezoid whose three finite sides are i = j+1, j = 0, and j = k-1.  With the diagonal side i = j+1, we have to split the infinite series into two parts, one with i running from 1 to k-1 and one with i running from k to infinity.

Thus, we have \displaystyle k \sum_{n=k}^{\infty} \sum_{j=0}^{k-1} \binom{k-1}{j} \frac{(-1)^j}{j + n - k+1} = k \sum_{i=1}^{k-1} \frac{1}{i} \sum_{j=0}^{i-1} \binom{k-1}{j} (-1)^j + k \sum_{i=k}^{\infty} \frac{1}{i} \sum_{j=0}^{k-1} \binom{k-1}{j} (-1)^j.

Identity 3 tells us how to evaluate the two inner sums, leaving us (as the second inner sum is zero) with

\displaystyle \sum_{n=k}^{\infty} \frac{1}{\binom{n}{k}} = k \sum_{i=1}^{k-1} \frac{1}{i} \sum_{j=0}^{i-1} \binom{k-1}{j} (-1)^j = k \sum_{i=1}^{k-1} \binom{k-2}{i-1}\frac{(-1)^{i-1}}{i} = k \sum_{i=0}^{k-2} \binom{k-2}{i}\frac{(-1)^i}{i+1}    = \frac{k}{k-1},

where the last step follows from Identity 4.
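As a final sanity check (my own illustration, not part of the proof), the partial sums of the series can be computed numerically and compared with k/(k-1):

```python
from math import comb, isclose

# Partial sums of sum_{n>=k} 1/C(n,k) should approach k/(k-1).
for k in range(2, 8):
    partial = sum(1 / comb(n, k) for n in range(k, 2000))
    assert isclose(partial, k / (k - 1), rel_tol=1e-2)
```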


The Art of Proving Binomial Identities

I recently finished a book, The Art of Proving Binomial Identities, that will be published by CRC Press later this year.  We’re past the page proofs stage, and I think all that’s left on my end is to provide a little input on the cover the CRC folks are creating.  For this post I thought I would talk a little about the background of the book, as well as what’s in it.  (UPDATE: Here’s a link to the book’s web page on the CRC site.)

Back in 2010-11 I spent a fair amount of time on what was then a new mathematics question-and-answer web site, Mathematics Stack Exchange.  The questions on Math.SE range from calculus homework problems to the occasional research-level mathematics question.  Since multiple answers are allowed for the same question, sometimes you’ll see several creative solutions for the same problem.  I became particularly intrigued by all the different ways for proving various binomial coefficient identities.  (See, for example, the answers to this question.)  However, I couldn’t find a text anywhere that pulled together all of the different methods for proving binomial identities that I was seeing on Math.SE.  So I decided to write one.  The result, many years later, is The Art of Proving Binomial Identities.

The book has ten chapters.

  1. Introduction.  This chapter introduces the binomial coefficients and proves that four different definitions of the binomial coefficients are equivalent.  It also gives several proofs that the sum of row n in Pascal’s triangle is 2^n.  The different proof techniques used give an overview of the text as a whole.
  2. Basic Techniques.  This chapter covers a generalization of the binomial coefficient that allows us to prove Newton’s binomial series; binomial coefficients with negative upper indices (a special case of that generalization); the absorption identity (which has a perhaps-surprising number of applications); and binomial inversion.  The harmonic numbers make their first appearance, and we’ll see them here and there throughout the rest of the book.
  3. Combinatorics.  Basic combinatorial interpretations of the binomial coefficients, such as committees and the number of successes in coin flips or ball selections, are used in this chapter to prove a variety of identities.  We also look at inclusion-exclusion, sign-reversing involutions, lattice path arguments, and choosing with replacement, and we get our first look at the derangement numbers.
  4. Calculus.  Differentiating and integrating the binomial theorem (sometimes repeatedly) can prove a lot of binomial identities.  We also look at the connection between Leibniz’s generalized product rule and the binomial theorem as well as the use of the gamma function and beta integral to prove more complicated binomial identities.
  5. Probability.  The binomial, negative binomial, and hypergeometric distributions contain binomial coefficients in their probability mass functions.  Various manipulations of these pmfs, particularly moment calculations, can lead to several nice probabilistic interpretations of binomial identities.  We also introduce the technique of indicator variables in probabilistic arguments, discuss the extension of inclusion-exclusion to a probabilistic setting, and consider a probabilistic proof of the relationship between the beta integral and the binomial coefficients.
  6. Generating Functions.  Generating functions are a powerful tool for proving identities.  We introduce the idea of a generating function and give several examples of their use in proving binomial identities, with particular focus on the negative binomial series, identities with central binomial coefficients, and convolution.  The last section in the chapter discusses exponential generating functions.  These are especially nice from a binomial coefficients point of view because binomial coefficients appear naturally in the sequence that is generated by the product of two egfs.
  7. Recurrence Relations and Finite Differences.  This chapter introduces recurrence relations as a tool for proving binomial identities.  The bulk of the chapter, though, is devoted to exploring the relationship between binomial coefficients and finite differences.  It turns out, for instance, that the binomial coefficients arise naturally when taking repeated finite differences of the same sequence.  In addition, we discuss the relationship between the binomial transform of a sequence and the binomial transform of the finite difference of that sequence, a relationship I first explored in a paper [1] I wrote a while back.  In the last section we prove how the factorial formula for the binomial coefficients can be derived from its recurrence relation without knowing the formula in advance (a derivation which first appeared in a recent paper [2] of mine).
  8. Special Numbers. The inspiration for this chapter is the chapter of the same name in the classic text Concrete Mathematics, although not all the numbers discussed are the same.  There are sections on the Fibonacci numbers, both kinds of Stirling numbers, the Bell numbers, the Lah numbers, the Catalan numbers (including more sophisticated lattice path arguments than we saw in Chapter 3), and the Bernoulli numbers.
  9. Miscellaneous Techniques.  The two techniques discussed in this chapter are the use of complex numbers and linear algebra to prove binomial identities.  Substituting the imaginary unit i into the binomial theorem and separating real and imaginary parts leads to proofs of binomial identities that can be difficult to prove otherwise.  We then generalize this idea by using complex roots of unity in place of i.  For the second technique, placing Pascal’s triangle into a matrix and using matrix operations can also lead to quick proofs and new perspectives on binomial identities.  In particular, the binomial inversion technique from Chapter 2 turns out to be matrix inversion in disguise.  We also consider matrices containing Stirling, Lah, and Fibonacci numbers.
  10. Mechanical Summation.  This chapter is devoted to an explanation of the automated identity proving techniques for hypergeometric sums developed by Gosper and extended by Zeilberger.  The three sections are on (1) hypergeometric series and notation, (2) Gosper’s algorithm, and (3) Zeilberger’s extension to Gosper’s algorithm.

Finally, there is an appendix containing hints and/or a solution to every problem in the text, an index of identities appearing in the text, and a general index.

References

  1. Michael Z. Spivey, “Combinatorial Sums and Finite Differences,” Discrete Mathematics, 307 (24): 3130-3146, 2007.
  2. Michael Z. Spivey, “The Binomial Recurrence,” Mathematics Magazine, 89 (3): 192-195, 2016.