## Diversity Statements in Hiring

Recently, Abigail Thompson, chair of the mathematics department at the University of California, Davis, and a vice president of the American Mathematical Society, published an article in Notices of the American Mathematical Society.  The article includes the following statement:

Faculty at universities across the country are facing an echo of the loyalty oath [of the 1950s], a mandatory “Diversity Statement” for job applicants.  The professed purpose is to identify candidates who have the skills and experience to advance institutional diversity and equity goals.  In reality it’s a political test, and it’s a political test with teeth.

She goes on to explain why.

Why is it a political test? Politics are a reflection of how you believe society should be organized. Classical liberals aspire to treat every person as a unique individual, not as a representative of their gender or their ethnic group. The sample rubric dictates that in order to get a high diversity score, a candidate must have actively engaged in promoting different identity groups as part of their professional life. The candidate should demonstrate “clear knowledge of, experience with, and interest in dimensions of diversity that result from different identities” and describe “multiple activities in depth.” Requiring candidates to believe that people should be treated differently according to their identity is indeed a political test.

I agree.  The use of diversity statements in hiring is a political test and thus is inherently discriminatory.  We should abandon the practice.


## Finding the Area of an Irregular Polygon

Finding the area of an irregular polygon via geometry can be a bit of a chore, as the process depends heavily on the shape of the polygon.  It turns out, however, that there’s a formula that can give you the area of any polygon as long as you know the polygon’s vertices.  That formula is based on Green’s Theorem.

Green’s Theorem is a powerful tool in vector analysis that shows how to convert from a line integral around a closed curve to a double integral over the region enclosed by the curve and vice versa.  More specifically, the circulation-curl form of Green’s Theorem states that, if D is a closed region with boundary curve C and C is oriented counterclockwise, then

$\displaystyle \oint_C (M dx + N dy) = \iint_D \left( \frac{\partial N}{\partial x} - \frac{\partial M}{\partial y} \right) dA$.

The special case of Green’s Theorem that can generate our formula for the area of a polygon has $N = x$ and $M = 0$.  This yields

$\displaystyle \oint_C x dy = \iint_D dA$.

Since the double integral here gives the area of region D, this equation says that we can find the area of D by evaluating the line integral of $x dy$ counterclockwise around the curve C.  The boundary curve for a polygon consists of a finite set of line segments, though, and so to obtain our formula we just need to find out what $\oint_L x dy$ is for a line segment L that runs from a generic point $(x_1, y_1)$ to another generic point $(x_2, y_2)$.  Let’s do that now.

We need a parameterization of L.  That requires a point on the line segment L and a vector in the direction of L.  The starting point $(x_1, y_1)$ and the vector $(x_2 - x_1){\bf i} + (y_2 - y_1){\bf j}$ from the starting point to the ending point work well, giving us the parameterization

$\displaystyle {\bf r}(t) = (x_1 + t(x_2-x_1)){\bf i} + (y_1 + t(y_2-y_1)){\bf j}, \: 0 \leq t \leq 1.$

With $x = x_1 + t(x_2-x_1)$ and $y = y_1 + t(y_2-y_1)$, we have $dy = (y_2 - y_1)dt$.  This gives us the integral

$\displaystyle \int_0^1 (x_1 + t(x_2-x_1))(y_2-y_1)dt \\ = (y_2-y_1)\int_0^1 (x_1 + t(x_2-x_1))dt \\ = (y_2-y_1)\left[tx_1 + \frac{t^2}{2}(x_2-x_1)\right]_0^1 \\ = (y_2-y_1)\frac{x_1+x_2}{2}.$

In other words, the line integral of $x dy$ from $(x_1, y_1)$ to  $(x_2, y_2)$ is $\displaystyle (y_2-y_1)\frac{x_1+x_2}{2},$ the difference in the y coordinates times the average of the x coordinates.

Let’s put all this together to get our formula.  Suppose we have a polygon D with n vertices. Start with any vertex.  Label it $(x_1, y_1)$.  Then move counterclockwise around the polygon D, labeling successive vertices $(x_2, y_2), (x_3, y_3)$, and so forth, until you label the last vertex $(x_n, y_n)$.  Finally, give the starting vertex a second label of $(x_{n+1}, y_{n+1})$.  Then the area of D is given by

$\displaystyle \text{Area of } D = \sum_{i=1}^n (y_{i+1} - y_i) \frac{x_i + x_{i+1}}{2}.$

In words, this means that to find the area of polygon D you take, for each line segment making up the polygon, the difference of the y coordinates of its endpoints times the average of the x coordinates of its endpoints, and then add up the results.

This formula isn’t the only such formula for finding the area of a polygon.  In fact, a more common formula that can also be proved with Green’s Theorem is

$\displaystyle \text{Area of } D= \frac{1}{2} \sum_{i=1}^n (x_i y_{i+1} - x_{i+1}y_i)$.

This second formula looks a little nicer to the eye, but I prefer the first formula for calculations by hand.
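If you'd like to try both formulas on a computer, here's a short Python sketch (the function names and the sample polygon are my own, purely illustrative); as expected, the two formulas agree:

```python
def polygon_area(vertices):
    """Area via the trapezoid form: sum of (y2 - y1) * (x1 + x2) / 2
    over the edges, with vertices listed counterclockwise."""
    n = len(vertices)
    total = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # wrap around to close the polygon
        total += (y2 - y1) * (x1 + x2) / 2
    return total

def polygon_area_shoelace(vertices):
    """Area via the more common shoelace form: (1/2) * sum of (x_i y_{i+1} - x_{i+1} y_i)."""
    n = len(vertices)
    return sum(vertices[i][0] * vertices[(i + 1) % n][1]
               - vertices[(i + 1) % n][0] * vertices[i][1]
               for i in range(n)) / 2

# An L-shaped (irregular) hexagon, vertices counterclockwise; its area is 3.
L_shape = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
```

Note that both computations assume the vertices are given in counterclockwise order; listing them clockwise negates the result.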


## An observation on the unit circle

We did a quick review of the unit circle in my multivariate calculus class last week, and I pointed out a fact about the sines and cosines of the common angles in the first quadrant that some of the students appeared not to have seen before.  I thought I would record it here.

The five angles most commonly encountered in the first quadrant, together with their coordinates on the unit circle, are as follows (angle measures are in radians):

• $0: (1,0)$
• $\pi/6: (\sqrt{3}/2, 1/2)$
• $\pi/4: (\sqrt{2}/2, \sqrt{2}/2)$
• $\pi/3: (1/2, \sqrt{3}/2)$
• $\pi/2: (0, 1)$

Part of the reason these coordinates are important is that they tell you the sine and cosine of the corresponding angle.  The cosine value is the first coordinate (technical term: abscissa), while the sine value is the second coordinate (technical term: ordinate).  For example, $\cos (\pi/3) = 1/2$ and $\sin(\pi/3) = \sqrt{3}/2$.

Here’s another way of writing the same information, one that illustrates a pattern with the coordinate values.

• $0: (\sqrt{4}/2, \sqrt{0}/2)$
• $\pi/6: (\sqrt{3}/2, \sqrt{1}/2)$
• $\pi/4: (\sqrt{2}/2, \sqrt{2}/2)$
• $\pi/3: (\sqrt{1}/2, \sqrt{3}/2)$
• $\pi/2: (\sqrt{0}/2, \sqrt{4}/2)$

Every coordinate value is of the form $\sqrt{i}/2$, where i is a value between 0 and 4.  As the angle measure increases from 0 radians to $\pi/2$ radians, the cosine value decreases from $\sqrt{4}/2$ down to $\sqrt{0}/2$, while the sine value increases from $\sqrt{0}/2$ up to $\sqrt{4}/2$.  Perhaps this pattern will help some folks to memorize the coordinate values more easily.
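If you'd like to check the pattern, here's a tiny Python sketch (my own) comparing the $\sqrt{i}/2$ values against the actual cosines and sines:

```python
import math

# The five common first-quadrant angles, in radians.
angles = [0, math.pi / 6, math.pi / 4, math.pi / 3, math.pi / 2]

# The sqrt(i)/2 pattern: as i runs from 0 to 4, the cosine of the i-th angle
# is sqrt(4 - i)/2 while the sine is sqrt(i)/2.
pattern = [(math.sqrt(4 - i) / 2, math.sqrt(i) / 2) for i in range(5)]
```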

## Six Proofs of a Binomial Identity

I’m a big fan of proving an identity in multiple ways, as I think each perspective gives additional insight into why the identity is true.  In this post we’ll work through six different proofs of the binomial identity $\displaystyle \sum_{k=0}^n \binom{n}{k} k = n 2^{n-1}$.

1. The absorption identity proof.

The absorption identity states that, for real n, $k \binom{n}{k} = n \binom{n-1}{k-1}$.  Thus we have

$\displaystyle \sum_{k=0}^n \binom{n}{k} k = \sum_{k=0}^n \binom{n-1}{k-1} n = n \sum_{k=-1}^{n-1} \binom{n-1}{k} = n \sum_{k=0}^{n-1} \binom{n-1}{k} = n 2^{n-1}$, recalling that $\binom{n-1}{-1} = 0$.

2. The combinatorial proof.

How many chaired committees of any size can be formed from a group of n people?

One way to count them is to choose the committee first and then choose the chair.  First, we condition on the size of the committee.  If there are k people on the committee, there are $\binom{n}{k}$ ways to choose the people to be on the committee.  Then there are k ways to choose the chair.  Summing up over all possible values of k, we find that the answer is $\sum_{k=0}^n \binom{n}{k} k$.

Another way is to choose the chair first and then choose the committee.  There are n choices for the chair.  Then, for each of the remaining $n-1$ people, we have two options: Each person could be on the committee or not.  Thus there are $2^{n-1}$ ways to choose the committee once the chair is chosen.  This gives an answer of $n 2^{n-1}$ when choosing the chair first.

Since the two answers must be equal, we have $\displaystyle \sum_{k=0}^n \binom{n}{k} k = n 2^{n-1}$.
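If you like, the two counts in this argument can be checked by brute force for small n. This Python sketch (my own, purely illustrative) enumerates every (committee, chair) pair directly:

```python
from itertools import combinations
from math import comb

def chaired_committees(n):
    """Count every (committee, chair) pair from n people by brute-force enumeration."""
    count = 0
    for k in range(n + 1):
        for committee in combinations(range(n), k):
            count += len(committee)  # each member of the committee could chair it
    return count

def binomial_sum(n):
    """The left side of the identity: sum over k of C(n, k) * k."""
    return sum(comb(n, k) * k for k in range(n + 1))
```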

3. The calculus proof.

Differentiate the binomial theorem, $\displaystyle \sum_{k=0}^n \binom{n}{k} x^k = (x+1)^n$, with respect to x to obtain $\displaystyle \sum_{k=0}^n \binom{n}{k} k x^{k-1} = n(x+1)^{n-1}$.  Letting $x = 1$, we have $\displaystyle \sum_{k=0}^n \binom{n}{k} k = n 2^{n-1}$.

4. The probabilistic proof.

Imagine an experiment where you flip a fair coin n times.  What is the expected number of heads for this experiment?

One way to determine this starts by conditioning on the number k of heads.  If there are k heads, there are $\binom{n}{k}$ ways of choosing which k flips will be the heads.  The probability that these flips are all heads is $(1/2)^k$, and the probability that the remaining flips are all tails is $(1/2)^{n-k}$.  Multiply these together and apply the definition of expected value to get an answer of $\displaystyle \sum_{k=0}^n \binom{n}{k} k \left(\frac{1}{2} \right)^k \left(\frac{1}{2} \right)^{n-k} = \sum_{k=0}^n \binom{n}{k} k \left( \frac{1}{2} \right)^n = \frac{1}{2^n} \sum_{k=0}^n \binom{n}{k} k$.

Another way is to use indicator variables.  Let $X_k$ be $1$ if flip k is heads and $0$ if flip k is tails.  Then the number of heads in the sequence of n flips is $\sum_{k=1}^n X_k$.  Also, the expected value of $X_k$ is, by definition, $E(X_k) = 1 (1/2) + 0 (1/2) = 1/2$.  Thus the expected number of heads is $E \left( \sum_{k=1}^n X_k \right) = \sum_{k=1}^n E(X_k) = \sum_{k=1}^n (1/2) = n/2$.  (This is a formal way of arguing for the answer of $n/2$ that our intuition says should be the expected number of heads.)

Equating our two answers, we have $\displaystyle \frac{1}{2^n} \sum_{k=0}^n \binom{n}{k} k = \frac{n}{2}$, which implies $\displaystyle \sum_{k=0}^n \binom{n}{k} k = n 2^{n-1}$.
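The conditioning computation is easy to check numerically; this short Python sketch (mine, not from the post) computes $\frac{1}{2^n}\sum_{k=0}^n \binom{n}{k} k$ and compares it to $n/2$:

```python
from math import comb

def expected_heads(n):
    """E[number of heads in n fair flips], computed by conditioning on the number of heads k."""
    return sum(k * comb(n, k) * 0.5 ** n for k in range(n + 1))
```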

5. The exponential generating function proof.

For this proof we’re going to need a definition and a few properties of exponential generating functions.

First, the binomial convolution of the sequences $(a_n)$ and $(b_n)$ is given by $\displaystyle \sum_{k=0}^n \binom{n}{k} a_k b_{n-k}$.

Second, we have the following.  (See, for example, pages 126-128 of my book The Art of Proving Binomial Identities. [1])

1. If $\displaystyle f_a(x) = \sum_{n=0}^{\infty} a_n \frac{x^n}{n!}$ and $\displaystyle f_b(x) = \sum_{n=0}^{\infty} b_n \frac{x^n}{n!}$ then $\displaystyle f_a(x) f_b(x) = \sum_{n=0}^{\infty} \left(\sum_{k=0}^n \binom{n}{k} a_k b_{n-k} \right) \frac{x^n}{n!}$.  (This is the binomial convolution property for exponential generating functions.)
2. We have $\displaystyle e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!}$.  (This is the Maclaurin series for $e^x$.)
3. If $\displaystyle f_a(x) = \sum_{n=0}^{\infty} a_n \frac{x^n}{n!}$ then $x f_a(x)$ is the exponential generating function for the sequence given by $(n a_{n-1}) = (0, a_0, 2a_1, 3a_2, \ldots)$.

By the definition of binomial convolution, $\sum_{k=0}^n \binom{n}{k} k$ is the binomial convolution of the sequences with $a_n = n$ and $b_n = 1$.  What are the exponential generating functions of these two sequences?

By Property 2, the exponential generating function for the sequence $(b_n)$, with $b_n = 1$, is $e^x$.

If we take the sequence $1, 1, 1, \ldots$, append a $0$ to the front, and multiply term $n$ by $n$, we have the sequence with $a_n = n$.  By Property 3, then, the exponential generating function for the sequence $(a_n)$ is $x e^x$.

Thus, by Property 1, the exponential generating function for $\sum_{k=0}^n \binom{n}{k} k$ is $x e^x (e^x) = x e^{2x}$.  However, $\displaystyle e^{2x} = \sum_{k=0}^{\infty} \frac{(2x)^k}{k!} = \sum_{k=0}^{\infty} \frac{2^k x^k}{k!}$, which means that $e^{2x}$ is the exponential generating function for the sequence $(2^n)$.  Thus, by Property 3, $x e^{2x}$ is also the exponential generating function for the sequence $(n 2^{n-1})$.   Since $\left(\sum_{k=0}^n \binom{n}{k} k \right)$ and $\left(n 2^{n-1}\right)$ have the same generating function, they must be equal, and so $\displaystyle \sum_{k=0}^n \binom{n}{k} k = n 2^{n-1}$.
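As a quick check on this generating-function bookkeeping, the following Python sketch (my own) reads off $n!$ times the $x^n$ coefficient of $x e^{2x}$ from its Maclaurin series and compares it to the sum:

```python
from math import comb, factorial

def egf_coefficient(n):
    """n! times the x^n coefficient of x*e^(2x).  Since e^(2x) = sum of 2^k x^k / k!,
    the x^n term of x*e^(2x) has coefficient 2^(n-1)/(n-1)!, giving n * 2^(n-1)."""
    return 0 if n == 0 else factorial(n) * 2 ** (n - 1) // factorial(n - 1)

def binomial_sum(n):
    """The sequence whose exponential generating function we computed: sum of C(n, k) * k."""
    return sum(comb(n, k) * k for k in range(n + 1))
```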

6. The finite differences bootstrapping proof.

This proof requires a result that I don’t think is that well-known.  It’s Theorem 4 in my paper “Combinatorial Sums and Finite Differences.” [2]

Theorem: Let $(a_k)$ and $(b_k)$ be sequences such that $(a_k)$ is the finite difference sequence for $(b_k)$; i.e., $\Delta b_k = b_{k+1} - b_k = a_k$ for each $k \geq 0$.  If $g_n = \sum_{k=0}^n \binom{n}{k} a_k$ and $h_n = \sum_{k=0}^n \binom{n}{k} b_k$ then, for $n \geq 0$, $\displaystyle h_n = 2^n \left( b_0 + \sum_{k=1}^n \frac{g_{k-1}}{2^k} \right).$

First, if $b_k = 1$, then $a_k = 0$.  Thus $g_n = 0$, and, by the theorem, $h_n = \sum_{k=0}^n \binom{n}{k} = 2^n$.

Next, if $b_k = k$, then $a_k = k+1 - k = 1$.  By the theorem and the previous result, then, $\displaystyle \sum_{k=0}^n \binom{n}{k} k = 2^n \left(0 + \sum_{k=1}^n \frac{2^{k-1}}{2^k} \right) = 2^n \sum_{k=1}^n \frac{1}{2} = 2^n \frac{n}{2} = n 2^{n-1}$.
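The theorem itself is easy to test numerically. Here's a Python sketch (my own, just implementing both sides of the statement) that compares the direct sum $h_n$ with the bootstrapped formula:

```python
from math import comb

def h_direct(b, n):
    """h_n = sum over k of C(n, k) * b(k), computed directly."""
    return sum(comb(n, k) * b(k) for k in range(n + 1))

def h_bootstrap(b, n):
    """h_n via the theorem, with a(k) = b(k+1) - b(k) and
    g_m = sum over k of C(m, k) * a(k)."""
    a = lambda k: b(k + 1) - b(k)
    g = lambda m: sum(comb(m, k) * a(k) for k in range(m + 1))
    return 2 ** n * (b(0) + sum(g(k - 1) / 2 ** k for k in range(1, n + 1)))
```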

References

1. Spivey, Michael Z.  The Art of Proving Binomial Identities.  CRC Press, 2019.
2. Spivey, Michael Z.  “Combinatorial Sums and Finite Differences.”  Discrete Mathematics, 307 (24): 3130-3146, 2007.

## Zeno's Dichotomy Paradox

The ancient Greek philosopher Zeno of Elea is known for proposing several paradoxes related to time, space, motion, and infinity.  In this post we’ll focus on one of Zeno’s paradoxes and discuss how some ideas associated with calculus might or might not be able to resolve it.

The most famous of Zeno’s paradoxes is perhaps the Dichotomy paradox.  In this one, Zeno points out that in order for an object in motion to reach its goal it must first cover half the distance to its goal.  But to do so it must have previously covered half of the first half, or a quarter of the total distance.  Continuing with this line of thinking, we see that the object must have already covered an eighth of the total distance before reaching the quarter point, a sixteenth of the total distance before reaching the one-eighth point, and so forth, in order to complete its goal.  We end up with an infinite sequence of distance intervals that the object must traverse in order to cover the finite distance to its goal.  This seems impossible, for multiple reasons.  For one, it seems likely that one cannot actually complete infinitely many tasks.  For two, since an object’s initial movement interval can always be halved, how would an object even start moving?  Because of objections such as these, the conclusion of the paradox is that motion cannot occur.

Of course we know that things do, in fact, move, so the real intellectual challenge here is to determine exactly where Zeno’s line of reasoning fails.

One way out is to posit that space and time are not actually infinitely divisible.  Mathematicians assume that they are, but it might be the case that, say, real numbers are an intellectual abstraction that only approximates the true discrete nature of reality.  This would turn some basic calculus concepts on their heads; for example, it would make a derivative an approximation of a difference quotient, rather than the other way around.

Alternatively, some folks think that the precise nineteenth-century epsilon-delta notion of a limit has resolved Zeno’s paradoxes.  Certainly the Dichotomy paradox as presented here is considering a problem very close to the one that led to the precise definition of the derivative.  In the Dichotomy paradox there is an infinite sequence of time intervals involved, but they are getting smaller and smaller.  Moreover, each time interval has an associated distance interval, and these distance intervals are also getting smaller and smaller.  By dividing the length of each distance interval by the length of its corresponding time interval we get the sequence of average speeds associated with these intervals.  Mathematically, this is a sequence of difference quotients.  So the problem, in the Dichotomy paradox, of what happens as the distance and time intervals get smaller and smaller is closely related to the problem of what happens to the difference quotients as the associated intervals get smaller and smaller.

Over three hundred years after the development of calculus, mathematicians are quite comfortable with what happens to an infinite sequence of difference quotients as the associated intervals get smaller and smaller: If those difference quotients are consistently getting closer and closer to a specific number then we call that specific number the derivative.  And we define precisely what we mean by “consistently getting closer and closer” through an epsilon-delta (or perhaps epsilon-N) argument.  Thus we have a consistent way to talk about the motion that the Dichotomy paradox says cannot occur.  Moreover, the value of the derivative is the speed of the object, which is a concept that makes no sense unless the object is actually moving.

But does this formulation really resolve Zeno’s Dichotomy paradox?  I’m not convinced it does.  I’m coming to the conclusion that mathematicians have merely defined away the problem posed by Zeno.  To be sure, mathematicians have found a way to talk about what happens to an infinite sequence of numbers that is both logically coherent and fits what we understand to be happening with those numbers.  On the other hand, can an actual person perform infinitely many tasks?  I would argue not, and I don’t think the limit approach truly addresses this aspect of the Dichotomy paradox.  It just avoids it in a consistent way.

At the very least, though, Zeno’s paradoxes have forced mathematicians to consider carefully what they mean by notions such as “infinitely small” and “arbitrarily large,” as well as the concept of infinity itself.  And that is no small thing.

## Five Theses on Mathematics in Games

This past weekend I gave a talk entitled “Mathematics Through Narrative” at the NarraScope conference on narrative games at MIT in Cambridge, Massachusetts.  Much of what I discussed I had also put into a series of blog posts several months ago.  (See part 1, part 2, and part 3 of those posts.)

However, in the talk I also presented a set of five theses on mathematics in games that did not appear in those blog posts.  Here they are, with commentary.

Mathematics in games should…

1. Be integrated into the story world.
2. Be used to solve puzzles in-game.
3. Feature, if possible, repetition.
4. Allow the player to play with the mathematical concepts.
5. Avoid symbolic manipulation as much as possible.

Mathematics in games should be integrated into the story world.  This was the primary point I was making in part 1 of the series of blog posts from several months back, so perhaps just a quick recap is all that’s necessary here.  First, we retain new ideas (both mathematical and otherwise) when we can situate them in a web of mental connections that we already have.  (I don’t think this is a particularly controversial or outlandish claim.)  Therefore, games that attempt to teach mathematics should integrate the math into the story world.  They should not yank the player out of the story world and force him to solve mathematics problems that are otherwise irrelevant to that story world in order to advance in the game.  Doing so turns the gaming experience into a homework set with a fancy wrapper around it.  It also likely leaves the impression that mathematics isn’t relevant to real life.  We can do better than that.

Mathematics in games should be used to solve puzzles in-game.  This follows from the previous thesis.  If we’re going to integrate math into the story world, how do we go about doing it?  Well, lots of games from a variety of genres have puzzles that must be solved in order to advance in the game.  And mathematics is a great source of ideas for puzzles.  It’s also an underutilized one: There is a lot of mathematics out there, but I have seen very little of it actually appear in a game.  I’m sure that’s because mathematics is perceived as inaccessible.  In addition, game designers aren’t used to thinking about mathematical ideas as part of their puzzle-building toolkit, beyond some standard logic puzzles.  However, there are examples of mathematical ideas – even advanced ones – appearing in games.  Jessica Sklar has an article [1] on linear algebra in game puzzles.  I also have an article [2] that discusses – among other examples – the Klein bottle puzzle in Trinity and Zeno’s bridge in Beyond Zork.  Then there’s my A Beauty Cold and Austere, whose gameplay is all about solving mathematics-based puzzles.  There is still a lot more mathematics out there that could be tapped to create engaging challenges for players, too.  Combinatorial optimization problems in particular seem like a rich, underutilized puzzle source.

Mathematics in games should feature, if possible, repetition.  I don’t think this is a particularly controversial claim.  After all, it’s clear that we learn and retain ideas not just when they’re placed in a context but also through continued repetition.  I put the qualifier “if possible” in there because the repetition goal can cut against other design goals.  If a game is going for breadth, mathematically-speaking, then there’s not going to be space in the game to hit every mathematical idea multiple times.  In addition, if a game just wants to feature a few mathematical ideas as puzzles or has more of an exposure-to-mathematics goal in mind, then repetition becomes less important.  But if a game really has the intent to teach mathematics, then repetition must be one of the features the game designer has to keep in mind.

Mathematics in games should allow the player to play with the mathematical concepts.  This is related to the previous thesis.  One way to implement something like repetition without creating multiple puzzles that feature exactly the same concept (which, if not carefully done, could get boring for the player) is to create a mini-sandbox featuring the mathematical idea you want to get across.  Then the player can engage the concept from multiple angles, seeing how it responds under different choices she makes.  I’ll draw an analogy here with people: If you want to get to know someone, you’ll have to see how that person responds in lots of different scenarios (e.g., with friends, with family, in good times, in stressful situations), not just in the context in which you originally met them.  The same is true with ideas – mathematical and otherwise.  You can’t truly understand an idea until you look at it from multiple perspectives.  One example in A Beauty Cold and Austere is the sequences-and-series machine.  By playing with the various controls on the machine you can create sixteen different settings for the golden path leading out of the room, each of which represents a different sequence or series.  Following the path for a particular setting takes the player to a number space corresponding to the limit associated with the underlying sequence or series.  Some of these paths represent solutions to puzzles, and some just exist to allow the player to engage with the mathematics.

Mathematics in games should avoid symbolic manipulation as much as possible.  An expression like $\sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k} = \ln 2$ is a concise way to represent the alternating harmonic series and its value, but it’s also a foreign language to the vast majority of players out there.  As such, when the typical player encounters something like this expression in a game it is not inviting or engaging.  Instead, it’s intimidating and off-putting.  As game designers it is our responsibility to find ways to express fascinating mathematical ideas without triggering players’ phobias around mathematics, many of which are (I’m convinced) tied to those ideas’ symbolic representations.  For example, when I gave my talk at NarraScope I displayed the alternating harmonic series equation above as an example of something to avoid.  Then I demoed a couple of puzzles from A Beauty Cold and Austere, the second of which was the sequences-and-series machine.  After playing with several of the settings to see what happened I picked a particular collection of settings and asked the audience whether we could predict where the golden path would lead.  We agreed that these settings would not get us to infinity (unlike another setting) nor result in values that bounced around more and more wildly (also unlike another setting).  Instead, these settings would create a path that would eventually settle down to a limiting value somewhere between 0 and 1.  And sure enough, that’s what happened, with the game telling us that the actual value was the natural logarithm of 2.  Then I pointed out that we had just recreated the ideas behind the alternating harmonic series equation I had displayed fifteen minutes earlier, without using any mathematical symbols at all.  That produced multiple murmurs from the audience, including a “mind blown” comment from someone sitting near the front.  (I’ll admit that that was my favorite moment of the whole talk.)  
The point is that we can avoid symbolic manipulation when representing mathematics in games, even if we have to get creative in how we do it.
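For those curious about the numbers behind that demo, here's a tiny Python sketch (mine, not something from the talk) showing the partial sums of the alternating harmonic series settling down toward $\ln 2 \approx 0.693$:

```python
import math

def alternating_harmonic(n):
    """Partial sum of the alternating harmonic series 1 - 1/2 + 1/3 - ... (first n terms)."""
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))
```

Because the series alternates with shrinking terms, each partial sum stays between 0 and 1 and the error is at most the size of the next term.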

So, these are my five theses on featuring mathematics in games.  It’s a full ninety fewer than Martin Luther had.  I also did not nail them to one of the classroom doors at MIT.

References

1. Sklar, Jessica.  “Dials and levers and glyphs, oh my! Linear algebra solutions to computer game puzzles.” Mathematics Magazine 79(5), 2006: 360-367.
2. Spivey, Michael Z.  “Mathematics through narrative,” Math Horizons 26(3), 2019: 22-25.

## The Art of Proving Binomial Identities is Now Available

A couple of months ago I posted that my book, The Art of Proving Binomial Identities, would soon be finished.  Well, it’s out now, and you can buy it straight from the publisher (CRC Press).  It’s up on Amazon as well.

The March blog post summarizes the contents of the book, but there is one other feature of it I’d like to highlight.  In addition to the usual index of topics, I added an index of identities and theorems.  Every identity and theorem in the book appears in this index, with page numbers indicating exactly where.  My intent was for this to facilitate one of my main goals with the book: Understanding particular binomial identities better by looking at them through multiple lenses.  For example, the references to Identity 24, $\sum_{k=0}^n \binom{n}{k} \frac{(-1)^k}{k+1} = \frac{1}{n+1}$ (which I used in my last post), lead you to proofs of it that involve the absorption identity, integration, the beta function, probability, generating functions, finite differences, and mechanical summation.  Each proof gives another perspective on why Identity 24 is true.

Also, at some point within the past couple of months I needed to find a particular binomial identity.  Sure enough, it was in the index!  That was gratifying.

(There are a couple of small errors on the CRC site.  First, I am no longer department chair, as I finished my three-year term in 2018.  I was department chair when I submitted the book proposal, and I corrected it on the info sheet I was sent a couple of months back, but for some reason it didn’t get updated on the CRC site.  Second, there is a chapter on mechanical summation that didn’t make it into the site’s description of the Table of Contents.  I suspect this is because the TOC list is from the original draft of the book I sent CRC, and the mechanical summation chapter was added later.)

## Sum of the Reciprocals of the Binomial Coefficients

In this post we’re going to prove the following identity for the sum of the reciprocals of the numbers in column k of Pascal’s triangle, valid for integers $k \geq 2$:

Identity 1: $\displaystyle \sum_{n=k}^{\infty} \frac{1}{\binom{n}{k}} = \frac{k}{k-1}.$

The standard way to prove Identity 1 is to convert the binomial coefficient in the denominator of the left side to an integral expression using the beta function, swap the integral and the summation, and pull some algebraic tricks with derivatives to sum the infinite series and then evaluate the integral.  (See, for example, the proof of Identity 107 on pages 77-78 of my upcoming Art of Proving Binomial Identities.)  In this post, though, we’re going to show how one can prove this identity using just three simple binomial identities:

Identity 2: $\displaystyle \sum_{k=0}^n \binom{n}{k} \frac{(-1)^k}{k+x} = \frac{n!}{x(x+1) \cdots (x+n)},$

Identity 3: $\displaystyle \sum_{k=0}^m \binom{n}{k} (-1)^k = (-1)^m \binom{n-1}{m},$

Identity 4: $\displaystyle \sum_{k=0}^n \binom{n}{k} \frac{(-1)^k}{k+1} = \frac{1}{n+1}$.

Identity 2 is your basic partial fractions decomposition formula when the factors in the denominator are consecutive integers.  Identity 3 gives the formula for the partial alternating row sum of the binomial coefficients.  Identity 4 is perhaps less well-known, but it does have a nice interpretation as two different ways of expressing the probability of drawing the red ball from a jar containing one red ball and n numbered blue balls.  (These are Identities 109, 20, and 24, respectively, in The Art of Proving Binomial Identities.)
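Before using them, we can sanity-check Identities 2, 3, and 4 numerically. This Python sketch is my own illustration (note that Identity 4 is just Identity 2 with $x = 1$):

```python
from math import comb, factorial

def identity2_lhs(n, x):
    """Left side of Identity 2: sum over k of C(n, k) * (-1)^k / (k + x)."""
    return sum(comb(n, k) * (-1) ** k / (k + x) for k in range(n + 1))

def identity2_rhs(n, x):
    """Right side of Identity 2: n! / (x (x+1) ... (x+n))."""
    denom = 1.0
    for j in range(n + 1):
        denom *= x + j
    return factorial(n) / denom

def identity3_lhs(n, m):
    """Left side of Identity 3: the partial alternating row sum of the binomial coefficients."""
    return sum(comb(n, k) * (-1) ** k for k in range(m + 1))
```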

On to the proof.

Proof of Identity 1.

First, let’s rewrite the summand of Identity 1 as a fraction with consecutive integer factors and apply Identity 2:

$\displaystyle \frac{1}{\binom{n}{k}} = \frac{k!}{(n-k+1) \cdots n} = k \frac{(k-1)!}{(n-k+1) \cdots n} = k \sum_{j=0}^{k-1} \binom{k-1}{j} \frac{(-1)^j}{j + n - k+1}.$

This gives the left side of Identity 1 as

$\displaystyle k \sum_{n=k}^{\infty} \sum_{j=0}^{k-1} \binom{k-1}{j} \frac{(-1)^j}{j + n - k+1}.$

Now, let’s transform the region of summation (just as we might transform the region of integration in a double integral) from an $(n,j)$ space to an $(i,j)$ space, where $i = j+n-k+1$.  The idea here is that we want to see what happens when we fix a particular denominator and sum over all the numerator expressions for that denominator.  The original region of summation in $(n,j)$ space looks like an infinite rectangle whose three finite sides are $n = k, j = 0$, and $j = k-1$.  The transformed region of summation looks like an infinite trapezoid whose three finite sides are $i = j+1, j = 0$, and $j = k-1$.  With the diagonal side $i = j+1$, we have to split the infinite series into two parts, one with i running from 1 to $k-1$ and one with i running from k to infinity.

Thus, we have $\displaystyle k \sum_{n=k}^{\infty} \sum_{j=0}^{k-1} \binom{k-1}{j} \frac{(-1)^j}{j + n - k+1} = k \sum_{i=1}^{k-1} \frac{1}{i} \sum_{j=0}^{i-1} \binom{k-1}{j} (-1)^j + k \sum_{i=k}^{\infty} \frac{1}{i} \sum_{j=0}^{k-1} \binom{k-1}{j} (-1)^j.$

Identity 3 tells us how to evaluate the two inner sums, leaving us (as the second inner sum is zero) with

$\displaystyle \sum_{n=k}^{\infty} \frac{1}{\binom{n}{k}} = k \sum_{i=1}^{k-1} \frac{1}{i} \sum_{j=0}^{i-1} \binom{k-1}{j} (-1)^j = k \sum_{i=1}^{k-1} \binom{k-2}{i-1}\frac{(-1)^{i-1}}{i} = k \sum_{i=0}^{k-2} \binom{k-2}{i}\frac{(-1)^i}{i+1} = \frac{k}{k-1},$

where the last step follows from Identity 4.
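Here's a quick numerical check of Identity 1 (a sketch of my own, truncating the infinite series at a large cutoff):

```python
from math import comb

def reciprocal_sum(k, n_max):
    """Truncation of sum over n >= k of 1 / C(n, k)."""
    return sum(1 / comb(n, k) for n in range(k, n_max + 1))
```

The slowest convergence is at $k = 2$, where the tail beyond $N$ is $2/N$, so a cutoff of a few thousand terms already gets within about $10^{-3}$ of $k/(k-1)$.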


## The Art of Proving Binomial Identities

I recently finished a book, The Art of Proving Binomial Identities, that will be published by CRC Press later this year.  We’re past the page proofs stage, and I think all that’s left on my end is to provide a little input on the cover the CRC folks are creating.  For this post I thought I would talk a little about the background of the book, as well as what’s in it.  (UPDATE: Here’s a link to the book’s web page on the CRC site.)

Back in 2010-11 I spent a fair amount of time on what was then a new mathematics question-and-answer web site, Mathematics Stack Exchange.  The questions on Math.SE range from calculus homework problems to the occasional research-level mathematics question.  Since multiple answers are allowed for the same question, sometimes you’ll see several creative solutions for the same problem.  I became particularly intrigued by all the different ways for proving various binomial coefficient identities.  (See, for example, the answers to this question.)  However, I couldn’t find a text anywhere that pulled together all of the different methods for proving binomial identities that I was seeing on Math.SE.  So I decided to write one.  The result, many years later, is The Art of Proving Binomial Identities.

The book has ten chapters.

1. Introduction.  This chapter introduces the binomial coefficients and proves that four different definitions of the binomial coefficients are equivalent.  It also gives several proofs that the sum of row n in Pascal’s triangle is $2^n$.  The different proof techniques used give an overview of the text as a whole.
2. Basic Techniques.  This chapter covers a generalization of the binomial coefficient that allows us to prove Newton’s binomial series; binomial coefficients with negative upper indices (a special case of that generalization); the absorption identity (which has a perhaps-surprising number of applications); and binomial inversion.  The harmonic numbers make their first appearance, and we’ll see them here and there throughout the rest of the book.
3. Combinatorics.  Basic combinatorial interpretations of the binomial coefficients, such as committees and the number of successes in coin flips or ball selections, are used in this chapter to prove a variety of identities.  We also look at inclusion-exclusion, sign-reversing involutions, lattice path arguments, and choosing with replacement.  We also get our first look at the derangement numbers.
4. Calculus.  Differentiating and integrating the binomial theorem (sometimes repeatedly) can prove a lot of binomial identities.  We also look at the connection between Leibniz’s generalized product rule and the binomial theorem as well as the use of the gamma function and beta integral to prove more complicated binomial identities.
5. Probability.  The binomial, negative binomial, and hypergeometric distributions contain binomial coefficients in their probability mass functions.  Various manipulations of these pmfs, particularly moment calculations, can lead to several nice probabilistic interpretations of binomial identities.  We also introduce the technique of indicator variables in probabilistic arguments, discuss the extension of inclusion-exclusion to a probabilistic setting, and consider a probabilistic proof of the relationship between the beta integral and the binomial coefficients.
6. Generating Functions.  Generating functions are a powerful tool for proving identities.  We introduce the idea of a generating function and give several examples of their use in proving binomial identities, with particular focus on the negative binomial series, identities with central binomial coefficients, and convolution.  The last section in the chapter discusses exponential generating functions.  These are especially nice from a binomial coefficients point of view because binomial coefficients appear naturally in the sequence that is generated by the product of two egfs.
7. Recurrence Relations and Finite Differences.  This chapter introduces recurrence relations as a tool for proving binomial identities.  The bulk of the chapter, though, is devoted to exploring the relationship between binomial coefficients and finite differences.  It turns out, for instance, that the binomial coefficients arise naturally when taking repeated finite differences of the same sequence.  In addition, we discuss the relationship between the binomial transform of a sequence and the binomial transform of the finite difference of that sequence, a relationship I first explored in a paper [1] I wrote a while back.  In the last section we prove how the factorial formula for the binomial coefficients can be derived from its recurrence relation without knowing the formula in advance (a derivation which first appeared in a recent paper [2] of mine).
8. Special Numbers. The inspiration for this chapter is the chapter of the same name in the classic text Concrete Mathematics, although not all the numbers discussed are the same.  There are sections on the Fibonacci numbers, both kinds of Stirling numbers, the Bell numbers, the Lah numbers, the Catalan numbers (including more sophisticated lattice path arguments than we saw in Chapter 3), and the Bernoulli numbers.
9. Miscellaneous Techniques.  The two techniques discussed in this chapter are the use of complex numbers and linear algebra to prove binomial identities.  Substituting the imaginary unit i into the binomial theorem and separating real and imaginary parts leads to proofs of binomial identities that can be difficult to prove otherwise.  We then generalize this idea by using complex roots of unity in place of i.  For the second technique, placing Pascal’s triangle into a matrix and using matrix operations can also lead to quick proofs and new perspectives on binomial identities.  In particular, the binomial inversion technique from Chapter 2 turns out to be matrix inversion in disguise.  We also consider matrices containing Stirling, Lah, and Fibonacci numbers.
10. Mechanical Summation.  This chapter is devoted to an explanation of the automated identity proving techniques for hypergeometric sums developed by Gosper and extended by Zeilberger.  The three sections are on (1) hypergeometric series and notation, (2) Gosper’s algorithm, and (3) Zeilberger’s extension to Gosper’s algorithm.

Finally, there is an appendix containing hints and/or a solution to every problem in the text, an index of identities appearing in the text, and a general index.

References

1. Michael Z. Spivey, “Combinatorial Sums and Finite Differences,” Discrete Mathematics, 307 (24): 3130-3146, 2007.
2. Michael Z. Spivey, “The Binomial Recurrence,” Mathematics Magazine, 89 (3): 192-195, 2016.
Posted in binomial coefficients, publishing | 1 Comment

## Mathematics in Games, Part 3: Junior Arithmancer

In the past two months’ blog posts I’ve talked some about mathematics in games.  December‘s focused on the importance of embedding the mathematics in the game world rather than pulling the player out of the game to solve math problems unrelated to that world.  In January I wrote up some thoughts about my parser-based interactive fiction game A Beauty Cold and Austere, in which the player recreates the history of mathematics via a dream world.

I’ve written another parser-based math game in addition to ABCA: Junior Arithmancer.  While there are some similarities between the two games, JA is trying to do something different than ABCA does, and so I don’t view it as a sequel or even a follow-up to ABCA.

In Junior Arithmancer you are (as in ABCA) trying to pass an exam.  This one, though, is an entrance examination to an institution of higher learning.  You have to demonstrate competency in an area of magic, arithmancy; this is done by using various arithmetic-related spells to recreate the first several digits in the decimal expansions of some famous constants, such as $\pi$, e, $\sqrt{2}$, and $\zeta(3)$.

For example, the first several digits of $\zeta(3)$ are 1, 2, 0, 2, 0, 5.  Suppose you have the spells STA (start), PLU (plus), MIN (minus), TIM (times), DIV (divide), and INC (increment by 1).  You could start at 1 by using STA 1.  Then you could move from 1 to 2 via PLU 1.  Then you could go from 2 to 0 with MIN 2.  But now you’re stuck.  You can’t use a valid arithmetic operation involving TIM, DIV, or INC (the unused spells) to get from 0 to 2.  So STA 1, PLU 1, and MIN 2 only gets you three digits into the sequence.

Let’s see whether we can do better.  We must begin with STA 1.  There are three options now to get from 1 to 2: We could PLU 1, TIM 2, or INC.  Let’s go with INC as the least likely to be needed again.  Then from 2 to 0 we could use MIN 2 or TIM 0.  Let’s try the latter.  From 0 to 2 our only option is PLU 2.  Then from 2 to 0 we can’t use TIM again since it’s already been used, but we can go with MIN 2.  This is already better than the last time, as we’ve used STA 1, INC, TIM 0, PLU 2, and MIN 2 to recreate 1, 2, 0, 2, and 0 and so get five digits into the sequence.  Unfortunately, the only spell remaining is DIV, and you can’t use a division operation starting with 0 to obtain the last digit, 5.  So we’re stuck once again.
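An exhaustive search confirms that five digits is the best you can do here.  The sketch below is my own, not code from the game, and it assumes simplified mechanics (each spell usable at most once, single-digit operands, and DIV valid only when the division is exact):

```python
DIGITS = [1, 2, 0, 2, 0, 5]  # first six digits of zeta(3)

def apply_spell(spell, value, operand):
    """Result of casting `spell` with `operand` on `value`, or None if invalid."""
    if spell == "PLU":
        return value + operand
    if spell == "MIN":
        return value - operand
    if spell == "TIM":
        return value * operand
    if spell == "DIV":
        # Assume division must come out exact (the post never divides inexactly).
        if operand != 0 and value % operand == 0:
            return value // operand
        return None
    if spell == "INC":  # increment takes no operand
        return value + 1

def longest_match(value, remaining, matched):
    """Most target digits matchable from this state (each spell used once)."""
    best = matched
    if matched == len(DIGITS):
        return best
    for spell in remaining:
        operands = [0] if spell == "INC" else range(10)
        for op in operands:
            if apply_spell(spell, value, op) == DIGITS[matched]:
                best = max(best, longest_match(DIGITS[matched],
                                               remaining - {spell},
                                               matched + 1))
    return best

# STA 1 matches the first digit; the other five spells must do the rest.
print(longest_match(1, {"PLU", "MIN", "TIM", "DIV", "INC"}, 1))  # prints 5
```

Under these assumed rules the search reaches at most five digits, matching the hand analysis above: PLU is forced at the 0-to-2 step, so nothing is left that can turn 0 into 5.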

This example doesn’t use all the arithmancy spells in the game, though; there are others.  As you make progress through the exam you earn additional spells.  You can also earn spell prefixes; these modify the operations of the other spells.  For example, if you earn an INV (invert operation) prefix you could effectively gain a second PLU in a single sequence by using the INV prefix to turn your MIN operation into a PLU operation.

Then, after you’ve completed the ten sequences in the game, you’re faced with other challenges: Completing the sequences using only five, four, or even three spells; using spells to reach a number larger than 2 billion; and figuring out the color/number association scheme in the game and using that to visit certain colored spaces, perhaps in a particular order.

From a gameplay standpoint perhaps Junior Arithmancer‘s greatest strength is the huge explosion in the possibility space: (1) There are multiple arithmetic operations that can be applied, (2) Applying them in different orders produces different sequences of digits, and (3) Each operation’s effect can be modified by one of several prefixes.  So, after STA 1 to begin recreating $\zeta(3)$, you may have something like seven spells to choose from with five possible prefix modifiers (six counting the spell’s basic effect), for a total of 42 options for the next spell.  After using one of those you might have six spells left, again with those five possible prefix modifiers plus the spell’s basic effect, for a total of 36 options.  So now we have 42*36 = 1512 options for a two-spell sequence after STA.  The third spell after STA will have 30 options, bringing our total number of three-spell sequences after STA to 1512*30 = 45360.  And this continues growing, to 45360*24 = 1,088,640 four-spell sequences after STA and then to 1,088,640*18 = 19,595,520 five-spell sequences after STA.  Not all options are possible, and some duplicate each other, but this gives us a ballpark figure for how many five-spell sequences there are in the game.  You also have to be looking ahead, since once you use a spell you can’t use it again on a sequence.
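That back-of-the-envelope count is easy to reproduce.  A tiny script (mine, using the same assumption of seven spells with six variants each after STA):

```python
# Branching factor after STA: each remaining spell offers 6 variants
# (its basic effect plus 5 prefix modifiers), and each cast uses up a spell.
options, counts = 1, []
for spells_left in (7, 6, 5, 4, 3):
    options *= spells_left * 6
    counts.append(options)

print(counts)  # [42, 1512, 45360, 1088640, 19595520]
```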

Another feature of Junior Arithmancer is that it is “juicy” in the way I described in my January post: Once you understand the rules of how the spells function, every one of those many, many spell combinations actually does something meaningful in the game.  I was helped greatly in this respect when designing the game since I was able to use the same arithmetic operations we all already know: I did not have to create, ex nihilo, somewhere on the order of 20 million different sequences of spell effects.

How does Junior Arithmancer stack up in terms of embedding the mathematics in the game world?  Well, you’re taking an exam in the game, so it’s not as embedded as with the two examples I gave in my December post (the one with my real estate agent and the one with the Brazilian boys).  With that caveat, though, I think it does a good job of achieving embeddedness.  Your goal is to recreate several sequences of digits, and the only way to do so is to find creative ways to perform a set of arithmetic operations in a certain order.  You’re constantly having to think and rethink about arithmetic.  (Some of the other spells and modifiers make things more complex, so you’re not always working with single-digit numbers.)

Does Junior Arithmancer teach anything, though?  Not in the way that I tried to do with A Beauty Cold and Austere:  The arithmetic in JA is mostly elementary-school level.  The few exceptions don’t require anything more complicated than an understanding of how arithmetic works with negative numbers.  So the game is probably too complicated to be used to teach arithmetic to children effectively.  In addition, players interested enough to engage with JA‘s number puzzles probably know arithmetic well enough that they don’t need the practice JA provides.

But I think there’s a place for math in games that don’t have an ulterior educational motive, too.  Mathematics has plenty of power as a tool to solve problems, but it’s also one of humanity’s great inventions in its own right.  Why not mine it for game ideas – for fun – and use it as a sandbox to play in?

Posted in games, teaching | 3 Comments