A Coin-Flipping Problem

One problem that I’ve assigned when discussing Markov chains is to calculate the expected number of flips required for a particular pattern to appear.  (Here I mean a pattern such as heads, heads, heads, or HHH).  In this post I’m going to discuss another approach – one that doesn’t use Markov chains – to solve this problem.

Suppose we want to find the expected number of flips required for the pattern HHH to appear.  Call this X.  We can calculate X by conditioning on the various patterns we might achieve that do or don’t give us HHH.  For example, for our first few flips we could observe T, HT, HHT, or HHH.  These cover the entire outcome space and have probability 1/2, 1/4, 1/8, and 1/8, respectively.

  • If we observe T, then we effectively have to start the entire process over, and we’ve used one flip to get to that point.  So, if we observe T, then the average number of flips required is 1+X.
  • If we observe HT, then we also have to start the entire process over, and we’ve used two flips to get there.  For this outcome, the average number of flips required is 2+X.
  • If we observe HHT, then we have to start the entire process over, and we’ve used three flips.  The average number of flips required for this scenario is 3+X.
  • Finally, if we observe HHH, then we have achieved our goal.  The number of flips required is 3.

All together, then, the average number of flips required satisfies the equation

\displaystyle X = 1/2(1+X) + 1/4(2+X) + 1/8(3+X) + 1/8(3).

Solving for X, we obtain X = 14.  So it takes 14 flips, on average, to obtain three consecutive heads.
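The equation above is of the form X = aX + b, so it's easy to double-check the algebra with exact rational arithmetic. Here's a quick Python sketch (the variable names are just for illustration):

```python
from fractions import Fraction

# The conditioning equation X = (1/2)(1+X) + (1/4)(2+X) + (1/8)(3+X) + (1/8)(3)
# has the form X = a*X + b; collect the coefficients and solve exactly.
a = Fraction(1, 2) + Fraction(1, 4) + Fraction(1, 8)  # coefficient of X
b = (Fraction(1, 2) * 1 + Fraction(1, 4) * 2
     + Fraction(1, 8) * 3 + Fraction(1, 8) * 3)       # constant term
X = b / (1 - a)
print(X)  # 14
```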

What if we have a more complicated pattern, though?  Let’s look at HHT as an example.  Let X be the expected number of flips required for HHT to appear for the first time.

Once again, we can condition on the various patterns we might achieve that do or don’t give us HHT.  These are T, HT, HHH, and HHT.  As in the previous example, these cover the entire outcome space and have probability 1/2, 1/4, 1/8, and 1/8, respectively.

  • If we observe T, then we have to start over.  The average number of flips required is 1+X.
  • If we observe HT, then we have to start over.  The average number of flips required is 2+X.
  • If we observe HHH, then we don’t have to start over.  It’s entirely possible that the HH at the end of HHH could be followed by a T, and then we would have achieved our pattern!  Mathematically, this means that things get a bit more complicated.
    • Let E_{HH} denote the expected number of flips required for HHT to appear given that we currently have HH.
    • Thus if we start with HHH, then the average number of flips required to obtain HHT is 3 + E_{HH}.
    • Now we need to determine E_{HH}.
      • If we currently have HH, then the next flip is either a T, which completes our pattern, or an H, which means we must start over in our quest to complete HHT given that we currently have HH.
      • Thus E_{HH} = 1/2(1) + 1/2(1 + E_{HH}).
      • Solving this yields E_{HH} = 2.
    • Thus if we observe HHH, then the average number of flips required to obtain HHT is 3 + 2 = 5.
  • Finally, if we observe HHT, then we have achieved our goal in 3 flips.

Therefore, X = 1/2(1 + X) + 1/4(2 + X) + 1/8(5) + 1/8(3).  Solving this equation for X gives us X = 8 flips.

Notice that it takes fewer flips, on average, to achieve HHT than it does HHH.  This is because with HHT we don’t always have to start over when the flips so far fail to match our goal sequence.

The interested reader is invited to find the expected number of flips for the other sequences.
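If you'd like to preview the answers before working them out, here's a rough Monte Carlo sketch in Python (an estimate, not a proof; the function name and seed are mine) covering all eight three-flip patterns:

```python
import random

def estimate_flips(pattern, trials=50_000, seed=0):
    """Monte Carlo estimate of the expected number of fair-coin flips
    until `pattern` first appears."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        recent = ""
        flips = 0
        while recent != pattern:
            # Keep only the last len(pattern) flips; the pattern first
            # appears exactly when this window matches it.
            recent = (recent + rng.choice("HT"))[-len(pattern):]
            flips += 1
        total += flips
    return total / trials

# Exact answers, for comparison: HHH and TTT take 14 flips on average,
# HTH and THT take 10, and the other four patterns take 8.
for p in ("HHH", "HHT", "HTH", "HTT", "THH", "THT", "TTH", "TTT"):
    print(p, round(estimate_flips(p), 1))
```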



A Request for a Proof of a Binomial Identity

A few weeks ago I received an email from Professor Steve Drekic at the University of Waterloo. He asked if I knew of a way to prove the following binomial identity:

\displaystyle \sum_{k=1}^n \frac{1}{2k-1} \binom{2k}{k} \binom{2n-2k}{n-k} = \binom{2n}{n}.

(He told me later that he wanted it to help prove that the random walk on the integers with transition probability p = 1/2 is null recurrent.)

It’s an interesting binomial identity, in that it has a nontrivial but not overly complicated sum on the left side and a simple expression on the right. Yet I had not seen it before. I was able to find a proof that uses generating functions, and I thought I would reproduce it here.

Lemma 1.

\displaystyle \binom{1/2}{k} = \frac{-1}{2k-1} \binom{-1/2}{k}.

By Identity 17 in my book The Art of Proving Binomial Identities [1],

\displaystyle \binom{1/2}{k} = \frac{(1/2)^{\underline{k}}}{k!} = \frac{(1/2) (-1/2)^{\underline{k}}}{(-1/2-k+1) k!} = \frac{(-1/2)^{\underline{k}}}{-(2k-1) k!} = \frac{-1}{2k-1} \binom{-1/2}{k},

where in the second step we use properties of the falling factorial x^{\underline{k}}.

Next, we need the following generating function.

Lemma 2.

\displaystyle - \sqrt{1-4x} = \sum_{k=0}^{\infty} \frac{1}{2k-1} \binom{2k}{k} x^k.

By Newton’s binomial series (Identity 18 in my book [1]),
\displaystyle  -\sqrt{1-4x} = -(1-4x)^{1/2} = -\sum_{k=0}^{\infty} \binom{1/2}{k} (-4x)^k \\ = -\sum_{k=0}^{\infty} \frac{-1}{2k-1} \binom{-1/2}{k} (-4x)^k, \text{ by Lemma 1} \\ = \sum_{k=0}^{\infty} \frac{1}{2k-1} \left(\frac{-1}{4}\right)^k \binom{2k}{k} (-4x)^k, \text{ by Identity 30 in [1]} \\ = \sum_{k=0}^{\infty} \frac{1}{2k-1} \binom{2k}{k} x^k.

Finally, we’re ready to prove the identity.

Identity 1.

\displaystyle \sum_{k=1}^n \frac{1}{2k-1} \binom{2k}{k} \binom{2n-2k}{n-k} = \binom{2n}{n}.

Let a_k = \binom{2k}{k}/(2k-1), and let b_k = \binom{2k}{k}. Since the generating function for (a_k) is -\sqrt{1-4x} (Lemma 2), and the generating function for (b_k) is 1/\sqrt{1-4x} (Identity 150 in [1]), the generating function for their convolution,

\displaystyle c_n = \sum_{k=0}^n \frac{1}{2k-1} \binom{2k}{k} \binom{2n-2k}{n-k},

is, by the convolution property for generating functions (see Theorem 13 in [1]),

\displaystyle - \frac{\sqrt{1-4x}}{\sqrt{1-4x}} = -1.

Since -1 just generates the sequence -1, 0, 0, 0, \ldots, this means that

\displaystyle \sum_{k=0}^n \frac{1}{2k-1} \binom{2k}{k} \binom{2n-2k}{n-k} = \begin{cases} -1, & \: n = 0; \\ 0, & \: n \geq 1.\end{cases}

Therefore, when n \geq 1, we have
\displaystyle  \sum_{k=0}^n \frac{1}{2k-1} \binom{2k}{k} \binom{2n-2k}{n-k} = 0 \\ \implies \sum_{k=1}^n \frac{1}{2k-1} \binom{2k}{k} \binom{2n-2k}{n-k} - \binom{2n}{n} = 0 \\ \implies \sum_{k=1}^n \frac{1}{2k-1} \binom{2k}{k} \binom{2n-2k}{n-k} = \binom{2n}{n}.


1. Michael Z. Spivey, The Art of Proving Binomial Identities, CRC Press, 2019.


Strong Induction Wasn’t Needed After All

Lately when I’ve taught the second principle of mathematical induction – also called “strong induction” – I’ve used the following example to illustrate why we need it.

Prove that you can make any amount of postage of 12 cents or greater using only 4-cent and 5-cent stamps.

At this point in a class we would have just done several examples using regular induction, and so we would be naturally inclined to try to prove this postage stamp problem using that technique.  The base case is easy: Just use three 4-cent stamps to make 12 cents of postage.  The induction hypothesis is as usual, too: Suppose, for some k \geq 12, that we can make k cents’ worth of postage using only 4-cent and 5-cent stamps.  But regular induction fails at the induction step: To prove that you can make k+1 cents using only 4-cent and 5-cent stamps, knowing that you can make k cents isn’t helpful, since you can’t add either a 4-cent or a 5-cent stamp to k cents’ worth of postage to generate k+1 cents’ worth of postage.  Instead, you need to know that you can make, say, k-3 cents’ worth of postage.  Then you can add a 4-cent stamp to that amount to produce your k+1 cents.  In other words, you need to assume in the induction hypothesis not that you can make k cents but that you can make 12, 13, \ldots, k cents.  And that’s the essence of strong induction.

However, before we get to this realization my students will often suggest that we can produce k+1 cents from k cents by simply removing one of the 4-cent stamps used to produce k cents and replacing it with a 5-cent stamp.  This is a good idea.  However, it assumes that we actually used at least one 4-cent stamp to produce k cents, and that’s a faulty assumption.  Sometimes we don’t need a 4-cent stamp, such as if we use three 5-cent stamps to produce 15 cents.  (In fact, for 15 cents we must use only 5-cent stamps.)  If we don’t use any 4-cent stamps then we can’t generate k+1 cents from k cents by replacing a 4-cent stamp with a 5-cent one.

Normally the students are a little chagrined that this idea fails.  And that happened again this term.  After discussing the idea and why it doesn’t quite work I was about ready to move on and introduce strong induction when one of my students interjected, “I’ve got it!  I’ve got it!”  He pointed out that if we’re going to use only 5-cent stamps to generate k cents, when k \geq 12, then we’ll need at least three of them.  That’s true, I said.  Then, he added, if we have at least three 5-cent stamps to make k cents, we can replace them with four 4-cent stamps to yield k+1 cents’ worth of postage.  That covers the remaining case.  I was impressed enough that I applauded the class for their collective work in solving the problem.

To recap: You don’t actually need strong induction to solve the stamp problem described above.  Use regular induction, and break the induction step into two cases: (1) If you use at least one 4-cent stamp to make k cents, replace it with a 5-cent stamp to make k+1 cents.  (2) If you only use 5-cent stamps to make k cents, you’ll need at least three of them.  Replace those three with four 4-cent stamps to generate k+1 cents’ worth of postage.
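The two-case induction step is constructive, so it can be turned into a short program. Here's a Python sketch (the function name is mine) that builds an explicit stamp combination for any amount from 12 cents up:

```python
def stamps(n):
    """Return a list of 4- and 5-cent stamps summing to n cents (n >= 12),
    built by iterating the two-case induction step from the post."""
    combo = [4, 4, 4]           # base case: 12 cents
    for _ in range(12, n):      # induction step: k cents -> k + 1 cents
        if 4 in combo:
            combo.remove(4)     # case 1: swap a 4-cent stamp for a 5-cent one
            combo.append(5)
        else:
            for _ in range(3):  # case 2: all 5s, so there are at least three;
                combo.remove(5)
            combo.extend([4, 4, 4, 4])  # swap three 5s for four 4s
    return combo

for n in range(12, 60):
    assert sum(stamps(n)) == n and set(stamps(n)) <= {4, 5}
print("verified 12 through 59 cents")
```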

Well done, Spring 2020 Math 210 students!


Arguments for 0.9999… Being Equal to 1

Recently I tried to explain to my 11-year-old son why 0.9999… equals 1.  The standard arguments for 0.9999... = 1 (at least the ones I’ve seen) assume more math background than he has.  So I tried another couple of arguments, and they seemed to convince him.

The first argument.  The usual claim you get from people who aren’t yet convinced that 0.9999... = 1 is that 0.9999… is the real number just before 1 on the number line.  Let’s suppose this is true.  What is the average of 0.9999… and 1?  Stop and think about that for a bit.

If the average exists, it must be larger than 0.9999…, and it must be smaller than 1.  But if 0.9999… is just before 1 on the number line, there can’t be such a number.  So either (1) the averaging operation doesn’t apply to 0.9999… and 1, (2) there are numbers between 0.9999… and 1 on the number line, or (3) 0.9999… equals 1.  But (1) immediately leads to the question “Why not?”, which has no obvious answer, and (2) leads to the question of what their decimal representations would be, which also has no obvious answer.  Explanation (3), that 0.9999… equals 1, starts to look more plausible.

The second argument.  Again, let’s assume that 0.9999… is the real number just before 1 on the number line.  If this is true, then what is the difference of 1 and 0.9999…?  Again, stop and think about that for a bit.

If it’s not zero, then you could just add half of that difference to 0.9999… to get a new number between 0.9999… and 1, which not only contradicts our assumption but also forces us to come up with the decimal representation of such a number.  If it is 0, then you have that 0.9999... = 1.  And if you try to argue that you can’t subtract 0.9999… from 1, then you need to explain why that operation is not allowed for those two real numbers.  (This second argument is a lot like the first one, really.)  The most reasonable of the three options is that the difference is 0, which means that 0.9999… is actually equal to 1.

Final comments and the standard algebra argument.  Both arguments are reductio ad absurdum arguments; that is, they assume that 0.9999… is not equal to 1 and then reason to a contradiction.  The other arguments that I’ve seen are all direct arguments; i.e., they reason from basic mathematical principles to the conclusion that 0.9999… equals 1.

For example, here’s the standard argument via algebra.  We know that 0.9999… must be equal to some number, so let’s call that number x.  Multiplying by 10 yields 10x = 9.9999....  Subtracting the first equation from the second leaves us with 9x = 9, which implies that x = 1.  Thus 0.9999... = 1.
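For readers who like numerical evidence, here's a small Python illustration (mine, not part of the standard argument) using exact fractions: the partial sums 0.9, 0.99, 0.999, … leave a gap below 1 of exactly 1/10^n, which shrinks toward 0.

```python
from fractions import Fraction

partial = Fraction(0)
for n in range(1, 21):
    partial += Fraction(9, 10 ** n)          # partial sum with n nines
    assert 1 - partial == Fraction(1, 10 ** n)  # the gap to 1 is exactly 10^(-n)
print(float(1 - partial))  # the tiny gap remaining after 20 nines
```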

The algebraic argument is a great one, provided you know algebra.  But for my preteen with a pre-algebra background, these two reductio ad absurdum arguments seemed to be enough to convince him.


Diversity Statements in Hiring

Recently Abigail Thompson, chair of the mathematics department at the University of California, Davis, and a vice president of the American Mathematical Society, published this article in Notices of the American Mathematical Society.  The article includes the following statement:

Faculty at universities across the country are facing an echo of the loyalty oath [of the 1950s], a mandatory “Diversity Statement” for job applicants.  The professed purpose is to identify candidates who have the skills and experience to advance institutional diversity and equity goals.  In reality it’s a political test, and it’s a political test with teeth.

She goes on to explain why.

Why is it a political test? Politics are a reflection of how you believe society should be organized. Classical liberals aspire to treat every person as a unique individual, not as a representative of their gender or their ethnic group. The sample rubric dictates that in order to get a high diversity score, a candidate must have actively engaged in promoting different identity groups as part of their professional life. The candidate should demonstrate “clear knowledge of, experience with, and interest in dimensions of diversity that result from different identities” and describe “multiple activities in depth.” Requiring candidates to believe that people should be treated differently according to their identity is indeed a political test.

I agree.  The use of diversity statements in hiring is a political test and thus is inherently discriminatory.  We should abandon the practice.


Finding the Area of an Irregular Polygon

Finding the area of an irregular polygon via geometry can be a bit of a chore, as the process depends heavily on the shape of the polygon.  It turns out, however, that there’s a formula that can give you the area of any polygon as long as you know the polygon’s vertices.  That formula is based on Green’s Theorem.

Green’s Theorem is a powerful tool in vector analysis that shows how to convert from a line integral around a closed curve to a double integral over the region enclosed by the curve and vice versa.  More specifically, the circulation-curl form of Green’s Theorem states that, if D is a closed region with boundary curve C and C is oriented counterclockwise, then

\displaystyle \oint_C (M dx + N dy) = \iint_D \left( \frac{\partial N}{\partial x} - \frac{\partial M}{\partial y} \right) dA.

The special case of Green’s Theorem that can generate our formula for the area of a polygon has N = x and M = 0.  This yields

\displaystyle \oint_C x dy = \iint_D dA.

Since the double integral here gives the area of region D, this equation says that we can find the area of D by evaluating the line integral of x dy counterclockwise around the curve C.  The boundary curve for a polygon consists of a finite set of line segments, though, and so to obtain our formula we just need to find out what \int_L x dy is for a line segment L that runs from a generic point (x_1, y_1) to another generic point (x_2, y_2).  Let’s do that now.

We need a parameterization of L.  That requires a point on the line segment L and a vector in the direction of L.  The starting point (x_1, y_1) and the vector (x_2 - x_1){\bf i} + (y_2 - y_1){\bf j} from the starting point to the ending point work well, giving us the parameterization

\displaystyle {\bf r}(t) = (x_1 + t(x_2-x_1)){\bf i} + (y_1 + t(y_2-y_1)){\bf j}, \: 0 \leq t \leq 1.

With x = x_1 + t(x_2-x_1) and y = y_1 + t(y_2-y_1), we have dy = (y_2 - y_1)dt.  This gives us the integral

\displaystyle \int_0^1 (x_1 + t(x_2-x_1))(y_2-y_1)dt \\ = (y_2-y_1)\int_0^1 (x_1 + t(x_2-x_1))dt \\ = (y_2-y_1)\left[tx_1  + \frac{t^2}{2}(x_2-x_1)\right]_0^1 \\ = (y_2-y_1)\frac{x_1+x_2}{2}.

In other words, the line integral of x dy from (x_1, y_1) to  (x_2, y_2) is \displaystyle (y_2-y_1)\frac{x_1+x_2}{2}, the difference in the y coordinates times the average of the x coordinates.

Let’s put all this together to get our formula.  Suppose we have a polygon D with n vertices. Start with any vertex.  Label it (x_1, y_1).  Then move counterclockwise around the polygon D, labeling successive vertices (x_2, y_2), (x_3, y_3), and so forth, until you label the last vertex (x_n, y_n).  Finally, give the starting vertex a second label of (x_{n+1}, y_{n+1}).  Then the area of D is given by

\displaystyle \text{Area of } D = \sum_{i=1}^n (y_{i+1} - y_i) \frac{x_i + x_{i+1}}{2}.

In words, this means that to find the area of polygon D, for each line segment making up the polygon you take the difference of the y coordinates times the average of the x coordinates of its endpoints, and then add up the results.

This formula isn’t the only such formula for finding the area of a polygon.  In fact, a more common formula that can also be proved with Green’s Theorem is

\displaystyle \text{Area of } D=  \frac{1}{2} \sum_{i=1}^n (x_i y_{i+1} - x_{i+1}y_i).

This second formula looks a little nicer to the eye, but I prefer the first formula for calculations by hand.
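Both formulas are easy to implement. Here's a Python sketch (function names mine) that computes the area each way and checks that the two agree on a couple of simple polygons:

```python
def polygon_area(vertices):
    """Area of a polygon with vertices listed counterclockwise, via the
    first formula: sum over edges of (y2 - y1) * (x1 + x2) / 2."""
    area = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # wrap around to the first vertex
        area += (y2 - y1) * (x1 + x2) / 2
    return area

def shoelace_area(vertices):
    """The second ('shoelace') formula, for comparison."""
    n = len(vertices)
    return sum(vertices[i][0] * vertices[(i + 1) % n][1]
               - vertices[(i + 1) % n][0] * vertices[i][1]
               for i in range(n)) / 2

square = [(0, 0), (2, 0), (2, 2), (0, 2)]   # area 4
triangle = [(0, 0), (4, 0), (0, 3)]         # area 6
for poly in (square, triangle):
    assert abs(polygon_area(poly) - shoelace_area(poly)) < 1e-12
print(polygon_area(square), polygon_area(triangle))
```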


An observation on the unit circle

We did a quick review of the unit circle in my multivariate calculus class last week, and I pointed out a fact about the sines and cosines of the common angles in the first quadrant that some of the students appeared not to have seen before.  I thought I would record it here.

The five angles most commonly encountered in the first quadrant, together with their coordinates on the unit circle, are as follows (angle measures are in radians):

  • 0: (1,0)
  • \pi/6: (\sqrt{3}/2, 1/2)
  • \pi/4: (\sqrt{2}/2, \sqrt{2}/2)
  • \pi/3: (1/2, \sqrt{3}/2)
  • \pi/2: (0, 1)

Part of the reason these coordinates are important is that they tell you the sine and cosine of the corresponding angle.  The cosine value is the first coordinate (technical term: abscissa), while the sine value is the second coordinate (technical term: ordinate).  For example, \cos (\pi/3) = 1/2 and \sin(\pi/3) = \sqrt{3}/2.

Here’s another way of writing the same information, one that illustrates a pattern with the coordinate values.

  • 0: (\sqrt{4}/2, \sqrt{0}/2)
  • \pi/6: (\sqrt{3}/2, \sqrt{1}/2)
  • \pi/4: (\sqrt{2}/2, \sqrt{2}/2)
  • \pi/3: (\sqrt{1}/2, \sqrt{3}/2)
  • \pi/2: (\sqrt{0}/2, \sqrt{4}/2)

Every coordinate value is of the form \sqrt{i}/2, where i is an integer between 0 and 4.  As the angle measure increases from 0 radians to \pi/2 radians, the cosine value decreases from \sqrt{4}/2 down to \sqrt{0}/2, while the sine value increases from \sqrt{0}/2 up to \sqrt{4}/2.  Perhaps this pattern will help some folks to memorize the coordinate values more easily.
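The pattern is easy to verify numerically. Here's a short Python check (a small absolute tolerance is needed because \cos(\pi/2) comes out as a tiny nonzero number in floating point):

```python
import math

# For the j-th common angle (j = 0, 1, 2, 3, 4), the pattern says
# cos = sqrt(4 - j)/2 and sin = sqrt(j)/2.
angles = [0, math.pi / 6, math.pi / 4, math.pi / 3, math.pi / 2]
for j, theta in enumerate(angles):
    assert math.isclose(math.cos(theta), math.sqrt(4 - j) / 2, abs_tol=1e-12)
    assert math.isclose(math.sin(theta), math.sqrt(j) / 2, abs_tol=1e-12)
print("pattern checked for all five angles")
```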
