Solving Second-Order Difference Equations with Constant Coefficients

Suppose you have a linear, homogeneous second-order difference equation with constant coefficients of the form a_{n+2} = A a_{n+1} + B a_n.  The solution procedure is as follows: “Guess” a solution of the form a_n = r^n.  Then substitute this guess into the difference equation to obtain r^{n+2} = A r^{n+1} + Br^n.  Divide both sides by r^n (assuming r \neq 0), yielding r^2 = Ar + B.  Solve this equation to obtain the two solutions r = r_1 and r = r_2.  This gives two solutions to the original difference equation: a_n = r_1^n and a_n = r_2^n.  Assuming r_1 \neq r_2, and after arguing that the solutions to the original equation form a vector space of dimension 2 (we’ll leave that aside for now), these two solutions form a basis for the solution space.  This means that every solution to the difference equation can be represented as a linear combination of the two basis solutions r_1^n and r_2^n, so the general solution to a_{n+2} = A a_{n+1} + B a_n is a_n = C r_1^n + D r_2^n, where C and D are determined by the initial conditions a_0 and a_1.
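
To make the recipe concrete, here’s a short Python sketch (my own aside, not part of the argument) that solves the characteristic equation, fits C and D to the initial conditions, and checks the closed form against direct iteration.  The Fibonacci values A = B = 1, a_0 = 0, a_1 = 1 are just an illustrative choice, and two distinct real roots are assumed.

```python
# A minimal sketch of the characteristic-root recipe described above.
# Assumes r^2 = A r + B has two distinct real roots (the case treated here).
import math

def closed_form(A, B, a0, a1):
    """Return n -> C r_1^n + D r_2^n fitted to the initial conditions a0, a1."""
    disc = A * A + 4 * B                      # discriminant of r^2 - A r - B = 0
    r1 = (A + math.sqrt(disc)) / 2
    r2 = (A - math.sqrt(disc)) / 2
    D = (a1 - a0 * r1) / (r2 - r1)            # solve C + D = a0, C r1 + D r2 = a1
    C = a0 - D
    return lambda n: C * r1 ** n + D * r2 ** n

A, B, a0, a1 = 1, 1, 0, 1                     # Fibonacci numbers as a test case
a = closed_form(A, B, a0, a1)

terms = [a0, a1]
for _ in range(8):                            # iterate a_{n+2} = A a_{n+1} + B a_n
    terms.append(A * terms[-1] + B * terms[-2])

print([round(a(n)) for n in range(10)])       # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print(terms)                                  # same list
```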

Many of my students do not find this argument satisfying (although it is valid).  They don’t like the guessing aspect near the beginning.  How do you know the right guess?  How would you come up with that guess?  In this post we’ll give a direct derivation that a_n = C r_1^n + D r_2^n really is the correct solution (with no guessing!).

The derivation uses generating functions.  We’ll start by reindexing the original recurrence into the form a_n = A a_{n-1} + B a_{n-2}.  Then multiply this equation by x^n and sum over n from 2 to infinity to obtain

\displaystyle \sum_{n=2}^{\infty} a_n x^n = A \sum_{n=2}^{\infty} a_{n-1} x^n + B \sum_{n=2}^{\infty} a_{n-2} x^n.

We want to reindex this equation so that each summation has the same bounds and the same subscripts for a_n.  Doing this yields

\displaystyle \sum_{n=0}^{\infty} a_n x^n - a_0 - a_1 x= A \sum_{n=1}^{\infty} a_n x^{n+1} + B \sum_{n=0}^{\infty} a_n x^{n+2}, or

\displaystyle \sum_{n=0}^{\infty} a_n x^n - a_0 - a_1 x= A x \left( \sum_{n=0}^{\infty} a_n x^n - a_0 \right) + B x^2 \sum_{n=0}^{\infty} a_n x^n.

For simplicity’s sake, let’s let \displaystyle S = \sum_{n=0}^{\infty} a_n x^n.  This substitution and some rearrangement produces

\displaystyle S - Ax S - Bx^2 S = a_0 + (a_1 - A a_0) x, which implies

\displaystyle S = \frac{a_0 + (a_1 - A a_0)x}{1 - Ax - Bx^2}.

Now, let’s factor the denominator.  We already know that x^2 - Ax - B = (x-r_1)(x-r_2).  Replacing x with 1/x produces 1/x^2 - A/x - B = (1/x-r_1)(1/x-r_2).  Then multiply by x^2 to obtain 1 - Ax - Bx^2 = (1-r_1 x)(1 - r_2 x).  Therefore,

\displaystyle S = \frac{a_0 + (a_1 - A a_0)x}{(1-r_1 x)(1-r_2 x)}.

Applying partial fractions decomposition, we can write this as

\displaystyle S = \frac{C}{1-r_1 x} + \frac{D}{1-r_2 x},

for some constants C and D.  We could figure out what C and D are in terms of a_0 and a_1, but for the purposes of this derivation we don’t need to do that.

Now we’re going to invoke a key property of generating functions.  They can be thought of as formal power series, which means we can expand them as power series without worrying about questions of convergence.  (See, for example, Wilf’s text generatingfunctionology.)  Applying the geometric series expansion, we have

\displaystyle S = C \sum_{n=0}^{\infty} (r_1 x)^n + D \sum_{n=0}^{\infty} (r_2 x)^n = \sum_{n=0}^{\infty} (C r_1^n + D r_2^n)x^n.

Remembering that \displaystyle S = \sum_{n=0}^{\infty} a_n x^n, we now have

\displaystyle \sum_{n=0}^{\infty} a_n x^n = \sum_{n=0}^{\infty} (C r_1^n + D r_2^n)x^n.

Since two formal power series are equal only if their corresponding coefficients are equal, we conclude that the solution to the original difference equation is

a_n = C r_1^n + D r_2^n.
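
If you’d like to see the generating-function argument in action, here’s a small symbolic check (my own aside, assuming SymPy is available).  It builds S = \frac{a_0 + (a_1 - A a_0)x}{1 - Ax - Bx^2} for the Lucas numbers (A = B = 1, a_0 = 2, a_1 = 1) and compares the series coefficients of S with the terms produced by the recurrence.

```python
# A sketch (assuming SymPy is installed) comparing the coefficients of S with
# the recurrence a_n = A a_{n-1} + B a_{n-2}, using the Lucas numbers as a test.
import sympy as sp

x = sp.symbols('x')
A, B = 1, 1
a0, a1 = 2, 1                                  # Lucas numbers: 2, 1, 3, 4, 7, ...

S = (a0 + (a1 - A * a0) * x) / (1 - A * x - B * x ** 2)

N = 10
expansion = sp.series(S, x, 0, N).removeO()    # power series of S up to x^9
coeffs = [expansion.coeff(x, n) for n in range(N)]

terms = [a0, a1]
for n in range(2, N):
    terms.append(A * terms[-1] + B * terms[-2])

print(coeffs)   # [2, 1, 3, 4, 7, 11, 18, 29, 47, 76]
print(terms)    # same list
```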

Remember, this is for the case when the characteristic polynomial r^2 - Ar - B has two distinct roots r_1 and r_2.  Next month we’ll take a look at the repeated root case, where r_1 = r_2.

Posted in generating functions, recurrence relations

Adventures in Fine Hall

The Princeton Alumni Weekly has a great article up about the mathematicians at Princeton and the Institute for Advanced Study in the 1930s and 1940s.  I’ve heard a lot of anecdotes about mathematicians over the years, but I had not heard any of the ones in this article before.  Check it out!

Posted in mathematicians

A Quasiperfect Number Must Be an Odd Perfect Square

Let \sigma be the sum of divisors function.  For example, \sigma(5) = 1 + 5 = 6, as the divisors of 5 are 1 and 5.  Similarly, \sigma(6) = 1 + 2 + 3 + 6 = 12, as the divisors of 6 are 1, 2, 3, and 6.

A perfect number is one that is equal to the sum of its divisors, excluding itself.  Thus 6 is a perfect number, as 6 = 1 + 2 + 3.  In terms of the divisor function, a perfect number is one such that \sigma(n) = 2n.  An interesting theorem about perfect numbers is that an even number is perfect if and only if it is of the form 2^{p-1}(2^p-1), where both p and 2^p-1 are prime.

Similarly, a quasiperfect number is a number n such that \sigma(n) = 2n+1.  We don’t know whether there are any quasiperfect numbers, but we do know that if they exist there are certain properties they must have.  In this post we will work through the following result of Cattaneo [1].  (I gave the same argument in this post on math.SE.)
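
As a quick aside (this isn’t part of Cattaneo’s argument), a brute-force search is easy to run.  The Python sketch below, with arbitrary illustrative bounds, confirms that there are no quasiperfect numbers below 10^5, and as a sanity check it also picks out the familiar perfect numbers.

```python
# A small brute-force sketch: compute sigma(n) by trial division and look for
# perfect and quasiperfect numbers below modest, arbitrarily chosen bounds.
def sigma(n):
    """Sum of all positive divisors of n, including n itself."""
    total, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

print(sigma(5), sigma(6))                                   # 6 12
print([n for n in range(1, 10**4) if sigma(n) == 2*n])      # [6, 28, 496, 8128]
print([n for n in range(1, 10**5) if sigma(n) == 2*n + 1])  # [] -- no quasiperfect numbers here
```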

Theorem: If \sigma(n) = 2n+1, then n is an odd perfect square.

Proof: If n = \prod_{i=1}^r p_i^{a_i} is the prime factorization of n, then we know that
\displaystyle \sigma(n) = \prod_{i=1}^r (1 + p_i + p_i^2 + \cdots + p_i^{a_i}).
Since \sigma(n) = 2n+1 is odd, though, each factor 1 + p_i + p_i^2 + \cdots + p_i^{a_i} must also be odd.

For each odd prime factor p_i, then, there must be an odd number of terms in the sum 1 + p_i + p_i^2 + \cdots + p_i^{a_i}. Thus a_i must be even. This means that if p_i is odd, p_i^{a_i} is an odd perfect square. The product of odd perfect squares is another odd perfect square, and therefore n = 2^s m^2, where m is odd.

Now we have \sigma(n) = (2^{s+1}-1)M, where

\displaystyle M = \prod_{p_i \text{ odd}} (1 + p_i + p_i^2 + \cdots + p_i^{a_i}).

Since \sigma(n) = 2n+1,
\begin{aligned}  &(2^{s+1}-1)M = 2^{s+1}m^2 + 1 \\  \implies &(2^{s+1}-1)(M - m^2)-1 = m^2.  \end{aligned}
This means that m^2 \equiv -1 \pmod{2^{s+1}-1}, so -1 is a quadratic residue modulo each prime factor of 2^{s+1}-1. A standard result in number theory (the first supplement to quadratic reciprocity), though, is that -1 is a quadratic residue of an odd prime p if and only if p \equiv 1 \bmod 4. Thus all prime factors of 2^{s+1}-1 are congruent to 1 \bmod 4. The product of numbers congruent to 1 \bmod 4 is still congruent to 1 \bmod 4, so 2^{s+1}-1 \equiv 1 \bmod 4. However, if s > 0, then 2^{s+1}-1 \equiv 3 \bmod 4. Thus s must be 0. Therefore, n = m^2, where m is odd.
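
The only outside fact the proof leans on is the characterization of when -1 is a quadratic residue, and that is easy to spot-check numerically.  Here’s a small sketch (my own addition) verifying it for the odd primes below 100.

```python
# A quick check of the fact used above: -1 is a quadratic residue mod an odd
# prime p exactly when p is congruent to 1 mod 4.
def is_prime(p):
    return p > 1 and all(p % d for d in range(2, int(p ** 0.5) + 1))

for p in range(3, 100, 2):
    if is_prime(p):
        has_root = any((x * x + 1) % p == 0 for x in range(1, p))  # x^2 = -1 mod p?
        assert has_root == (p % 4 == 1)
print("verified for all odd primes below 100")
```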

References

  1.  Paolo Cattaneo, “Sui numeri quasiperfetti,” Bollettino dell’Unione Matematica Italiana, Serie 3, Vol. 6 (1951), no. 1, pp. 59–62.
Posted in number theory

Determinant of a Symmetric Pascal Matrix

The infinite symmetric Pascal matrix Q is given by

\displaystyle Q = \begin{bmatrix} 1 & 1 & 1 & 1 & \cdots  \\ 1 & 2 & 3 & 4 & \cdots \\ 1 & 3 & 6 & 10 & \cdots \\ 1 & 4 & 10 & 20 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix},

where entry (i,j) in Q is \binom{i+j}{i}.  (Note that we begin indexing the matrix with 0, not 1, in keeping with the way Pascal’s triangle is usually indexed.)

The purpose of this post is to show that the determinant of Q_n, the (n+1) \times (n+1) upper-left submatrix of Q, is 1.

For example, here’s Q_4:

\displaystyle Q_4 = \begin{bmatrix} 1 & 1 & 1 & 1 & 1\\ 1 & 2 & 3 & 4 & 5\\ 1 & 3 & 6 & 10 & 15\\ 1 & 4 & 10 & 20 & 35 \\ 1 & 5 & 15 & 35 & 70 \end{bmatrix}.

The proof starts with Vandermonde’s identity,

\displaystyle \sum_{k=0}^n \binom{n}{k} \binom{m}{r-k} = \binom{n+m}{r}.

Let r = m in Vandermonde, and then use the symmetry property of the binomial coefficients, \binom{m}{m-k} = \binom{m}{k}.  This gives us

\displaystyle \sum_{k=0}^n \binom{n}{k} \binom{m}{k} = \binom{n+m}{n}.
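
Here’s a quick numerical check of this corollary (a sketch of mine; math.comb requires Python 3.8 or later):

```python
# A quick check of the identity sum_k C(n,k) C(m,k) = C(n+m, n) for small n, m.
from math import comb

for n in range(8):
    for m in range(8):
        assert sum(comb(n, k) * comb(m, k) for k in range(n + 1)) == comb(n + m, n)
print("identity holds for 0 <= n, m <= 7")
```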

Now, let’s create matrix P_n, where the (i,j) entry in P_n is \binom{i}{j}.  For example, P_4 is given by

\displaystyle P_4 = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 \\ 1 & 2 & 1 & 0 & 0 \\ 1 & 3 & 3 & 1 & 0 \\ 1 & 4 & 6 & 4 & 1 \end{bmatrix}.

Now, let’s consider the product P_n P^T_n.  Since the (i,j) entry in P^T_n is \binom{j}{i}, entry (n,m) in P_n P^T_n is given by

\displaystyle \sum_{k=0}^n \binom{n}{k} \binom{m}{k},

which, according to the corollary to Vandermonde’s identity that we just proved above, is \binom{n+m}{n}.

Since \binom{n+m}{n} is entry (n,m) in Q_n, this means that P_n P^T_n = Q_n.  For example,

\displaystyle \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 \\ 1 & 2 & 1 & 0 & 0 \\ 1 & 3 & 3 & 1 & 0 \\ 1 & 4 & 6 & 4 & 1 \end{bmatrix} \begin{bmatrix} 1 & 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 & 4 \\ 0 & 0 & 1 & 3 & 6 \\ 0 & 0 & 0 & 1 & 4 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 & 1 & 1\\ 1 & 2 & 3 & 4 & 5\\ 1 & 3 & 6 & 10 & 15\\ 1 & 4 & 10 & 20 & 35 \\ 1 & 5 & 15 & 35 & 70 \end{bmatrix}.

Since P_n is lower triangular with 1s on its diagonal, \det P_n = \det P^T_n = 1.  Combining this with the fact that \det (AB) = \det A \det B, we have that

\displaystyle \det Q_n = 1.

We’ve done more than prove that \det Q_n = 1, actually; we’ve also found the LU decomposition of Q_n!
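
Here’s a short computational sketch (my own, not from the sources below) that builds P_n and Q_n directly from binomial coefficients and confirms both the factorization P_n P^T_n = Q_n and, as a consequence, \det Q_n = 1 for small n.

```python
# Build P_n and Q_n from binomial coefficients and verify P_n P_n^T = Q_n.
from math import comb

def P(n):
    """Lower triangular Pascal matrix: entry (i, j) is C(i, j)."""
    return [[comb(i, j) for j in range(n + 1)] for i in range(n + 1)]

def Q(n):
    """Symmetric Pascal matrix: entry (i, j) is C(i + j, i)."""
    return [[comb(i + j, i) for j in range(n + 1)] for i in range(n + 1)]

def times_transpose(M):
    """Return M M^T for a square matrix M given as a list of rows."""
    size = len(M)
    return [[sum(M[i][k] * M[j][k] for k in range(size)) for j in range(size)]
            for i in range(size)]

for n in range(8):
    assert times_transpose(P(n)) == Q(n)
# P_n is unit lower triangular, so det P_n = det P_n^T = 1 and hence det Q_n = 1.
print("P_n P_n^T = Q_n checked for n = 0, ..., 7")
```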

For further reading I recommend Alan Edelman and Gilbert Strang’s article “Pascal Matrices” (American Mathematical Monthly 111(3): March 2004, pp. 189-197).  They give four proofs (including this one) that \det Q_n = 1.  Each proof describes a different way of establishing the LU decomposition given here.

Posted in binomial coefficients, matrices

A Sequence of Right-Hand Riemann Sums Whose Limit Does Not Exist

Recently in my advanced calculus class one of my students asked a (perhaps the) key question when we hit the integration material: Why can’t you just use the limit of the right-hand Riemann sums formed from equally-spaced partitions to define the integral?  I gave the usual answer involving the 0-1 Dirichlet function:  If you choose a rational point in each subinterval to evaluate the function you get 1 for the limit of the Riemann sums, but if you choose an irrational point in each subinterval you get 0 for the limit.  Thus the integral would not be well-defined.

I realized later, though, that I had kind of sidestepped his question.  He asked specifically about right-hand Riemann sums using equally-spaced partitions, and if you force yourself to work under those constraints on the standard interval [0,1], then the partition points are always rational.  Thus you’re taking 1 for the function value on each subinterval, yielding \lim_{n \to \infty} R_n = 1.  The limit of the right-hand Riemann sums using equally-spaced partitions on [0,1] does in fact exist for the Dirichlet function.

So, I wondered, is there a simple example of a bounded function on [0,1] such that the limit of the right-hand Riemann sums based on equally-spaced partitions does not exist?  (A simple unbounded example is f(x) = \frac{1}{x}[x > 0].)  It took me a while to come up with one, but here it is.

Let \displaystyle f(x) = \begin{cases} 1, &\text{ if } x= p/q \text{ in lowest terms and } q \text{ is even}; \\ 0, &\text{otherwise}. \end{cases}

Let \displaystyle R_n = \sum_{i=1}^n f(i/n) \frac{1}{n}, the right-hand Riemann sum of f on [0,1] using n equally-spaced subintervals.

If n is odd, then R_n = 0.  This is because n has no even factors, and so i/n cannot reduce to a fraction with an even denominator.

However, if n is a power of 2, then R_n = \frac{n-1}{n}.  To see this, first observe that i/n for any i \in \{1, 2, \ldots, n-1\} will have more powers of 2 in its denominator than in its numerator, leaving i/n with an even denominator when reduced to lowest terms.  However, f(n/n) = f(1) = 0.  Thus \displaystyle R_n = \sum_{i=1}^n f(i/n) \frac{1}{n} = \sum_{i=1}^{n-1} \frac{1}{n} = \frac{n-1}{n}.

Since R_n \geq 0 for each n and R_n = 0 when n is odd, we have \liminf_{n \to \infty} R_n = 0.  Similarly, since R_n \leq 1 for each n and R_n = \frac{n-1}{n} when n is a power of 2, \limsup_{n \to \infty} R_n = 1.  Thus \lim_{n \to \infty}R_n does not exist.
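
A quick computation makes the two subsequences easy to see.  Here’s a short sketch (my own aside) that evaluates R_n exactly with Python’s Fraction type for some odd values of n and some powers of 2.

```python
# Evaluate the right-hand Riemann sums R_n of f exactly using rational arithmetic.
from fractions import Fraction

def f(x):
    """f(x) = 1 when x = p/q in lowest terms has q even, 0 otherwise (x a Fraction)."""
    return 1 if x.denominator % 2 == 0 else 0

def R(n):
    """Right-hand Riemann sum of f on [0, 1] with n equally-spaced subintervals."""
    return sum(f(Fraction(i, n)) for i in range(1, n + 1)) * Fraction(1, n)

print([float(R(n)) for n in (3, 5, 7, 9, 101)])   # [0.0, 0.0, 0.0, 0.0, 0.0]
print([float(R(2 ** k)) for k in range(1, 5)])    # [0.5, 0.75, 0.875, 0.9375]
```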

Posted in calculus, real analysis

Mathematics through Narrative: A Beauty Cold and Austere

Earlier this year I spent some time (well, a lot of time) writing a mathematical computer game.  It’s called A Beauty Cold and Austere (ABCA), and it’s text-based; there are no graphics and no symbolic manipulation (e.g., no solving of an equation by moving symbols around).  Instead, the mathematics in the game is expressed in narrative form.  To explain what I mean by this I’ll give an example, but first a little background.

Games of the genre of ABCA are parser-based interactive fiction (IF), and they have a long history stretching back into the 1970s.  In parser-based IF, you type commands, and the game interprets them and responds accordingly.  For a time in the 1980s parser-based IF was commercially viable; in fact, some of the most successful computer games of the 1980s were IF games (for example, the Zork series, The Hitchhiker’s Guide to the Galaxy).  However, through a combination of factors (such as increased computing power and improved graphics technology), since about 1990 the genre has not generally been commercially viable.  Instead, it has mostly survived on the efforts of a large number of dedicated and talented amateurs.  (Some of these “amateur” games are, in my opinion, actually better than anything produced during the commercial era.)

The first example of mathematics in this kind of interactive narrative form I ever saw was in 1986, in the game Trinity, by Brian Moriarty.  (Spoiler alert for one of the puzzles in Trinity coming up.)  During the story you find yourself in a magical garden.  There’s a sundial in the middle of the garden that you need to manipulate.  There’s also a gnomon that you can screw into the sundial, but its threads are oriented backwards (clockwise vs. counterclockwise) from the threads on the hole in the sundial.  However, there’s also an enterable Klein bottle sculpture in the garden.  If you go through the Klein bottle sculpture everything outside of you and what you’re carrying will, thanks to the properties of a Klein bottle (which are explained in the game), reverse orientation relative to you.  So, the solution to the puzzle is to go through the sculpture while carrying the gnomon, screw the gnomon into the reverse-oriented sundial, go back through the sculpture to re-orient everything relative to yourself, and continue on your way through the game.

My 13-year-old self had never heard of a Klein bottle or even the branch of mathematics known as topology that Klein bottles are normally associated with.  I was fascinated, and I’ve never forgotten that experience.

Trinity isn’t really about mathematics, though; it’s really a commentary in narrative form about the atomic bomb and atomic warfare.  It just happens to include a Klein bottle puzzle.  This past year, after becoming acquainted with the modern interactive fiction scene, I decided to write a game full of puzzles like that Klein bottle puzzle in Trinity.  The result is A Beauty Cold and Austere.  My hope is that the game will cause people to engage with mathematical concepts in new and different ways.  So ABCA is a game, an interactive story, and an experiment in mathematics education.

As of October 1, and through November 15, 2017, A Beauty Cold and Austere is part of the annual Interactive Fiction Competition and is only available through that site.  (Direct link to an online version of ABCA through the IFComp here.  [Update: Now this link points to ABCA‘s IFDB entry.])  After November 15, when the competition is over, I’ll update this page with a non-IFComp link to ABCA.


Update: A Beauty Cold and Austere took 7th out of 79 games in IFComp 2017!

(The link takes you to the Interactive Fiction Database, where you may either play ABCA online or download it.  I recommend the download option for faster play, although downloading also requires downloading an interpreter on which to play it.  For PC folks, I suggest Gargoyle, which may be downloaded here.  I don’t use Macs, but I’ve been told Lectrote works well on Macs, and perhaps Gargoyle does as well.)

Some public praise for the game, mostly during IFComp:

“ABCA is an incredibly polished game, with complex mechanics that perfectly work and some good writing.” – Marco Innocenti, Interactive Fiction Database

“This is one of the best big games released in recent years.” – Mathbrush, Interactive Fiction Database

“This one exceeded my expectations by far…  A Beauty Cold and Austere is a highlight in my book.” – Mr. Creosote, The Good Old Days

“It was the most fun I’ve had playing an IFComp entry in years…  The parser is superb… The gameplay is terrific, and the scope is breathtaking.” – jepflast, intfiction.org

“great, enormous game, imaginative and well-implemented” – perhapsdessert, intfiction.org

“a damn well-constructed piece of edutainment” – The Xenographer

“The game needed to do two things in order to be a success, in my estimation:
1) It needed to convey the beauty, depth, and awe of mathematics to a lay audience;
2) It needed to be engaging and entertaining as a puzzle game.
After completing the game… I have to say my negative preconceptions were unfounded; the game succeeds, to a large extent, on both counts.” – evouga, intfiction.org

“By far my favorite game in the competition” – tmack, intfiction.org


Also, I sent Beth Malmskog a link to the game, and she wrote about it on her AMS blog, PhD + Epsilon.

Posted in games, teaching

The Validity of Mathematical Induction

Suppose you have some statement S(k).  Mathematical induction says that the following is sufficient to prove that S(k) is true for all natural numbers k.

  1. S(1) is true.
  2. For any natural number k, if S(k) is true, then S(k+1) is true.

The idea is that statements 1 and 2 together imply that S(2) is true.  Then the truth of S(2) together with the second statement implies that S(3) is true.  Then the truth of S(3) together with the second statement implies that S(4) is true.  One can continue this process indefinitely, showing that S(k) is true for all natural numbers k.
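
For concreteness (this example isn’t part of the argument that follows), here’s how the two conditions get used for a specific statement.  Let S(k) be the statement \displaystyle 1 + 2 + \cdots + k = \frac{k(k+1)}{2}.  Condition 1 is just the computation 1 = \frac{1 \cdot 2}{2}.  For condition 2, suppose S(k) holds for some natural number k.  Then \displaystyle 1 + 2 + \cdots + k + (k+1) = \frac{k(k+1)}{2} + (k+1) = \frac{(k+1)(k+2)}{2}, which is exactly S(k+1).  Mathematical induction then lets us conclude that S(k) is true for every natural number k.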

While this is the idea, the formal proof that mathematical induction is a valid proof technique tends to rely on the well-ordering principle of the natural numbers; namely, that every nonempty set of positive integers contains a least element.  See, for example, here.

However, if you dig into this a little further you eventually realize that mathematical induction is tied up with the very definition of the natural numbers.  See, for example, here.  The rest of this post is a proof of the validity of mathematical induction that uses that idea explicitly.  (It’s taken from Fitzpatrick’s Advanced Calculus, second edition, pages 5-6.)

First, the following definition: A set S of real numbers is said to be inductive provided that (1) 1 \in S and (2) if x \in S then x+1 \in S.

There are lots of inductive sets: the reals, the rationals, and the integers, for example.

Next, we define the natural numbers: The set of natural numbers \mathbb{N} is the intersection of all inductive subsets of the real numbers.

Now we’re ready to prove the validity of mathematical induction.  Let A = \{ k \in \mathbb{N} | S(k) \text{ is true}\}.  Since S(1) is true, 1 \in A.  Since the truth of S(k) for some natural number k implies the truth of S(k+1) for the natural number k+1, we have that if k \in A then k+1 \in A.  Thus the set A is an inductive set.  By definition of \mathbb{N}, this means \mathbb{N} \subseteq A.  However, the definition of A means that A \subseteq \mathbb{N}.  Thus A = \mathbb{N}.  Therefore, S(k) is true for all natural numbers k.

Posted in number theory, proof techniques