## Proof of the Recursive Identity for the Bernoulli Numbers

The Bernoulli numbers $B_n$ satisfy the recursive relationship $\displaystyle \sum_{k=0}^n \binom{n+1}{k} B_k = [n=0]$.  (Here, $[P]$ is the Iverson bracket: $[P]$ evaluates to 1 if $P$ is true and to 0 if $P$ is false.)  The relationship can be used to calculate the Bernoulli numbers fairly easily.  This post gives a proof of this relationship.

The exponential generating function for the Bernoulli numbers is given by $\displaystyle \frac{x}{e^x-1} = \sum_{n=0}^{\infty} B_n \frac{x^n}{n!}$.  The exponential generating function for the sequence consisting of 0 followed by an infinite number of 1’s is given by $\displaystyle e^x-1 = \sum_{n=1}^{\infty} \frac{x^n}{n!}$.  Multiplying these together, we obtain, thanks to the multiplication principle for exponential generating functions (see, for example, Concrete Mathematics [1], p. 365)

$\displaystyle x = \left(\sum_{n=1}^{\infty} \frac{x^n}{n!} \right) \left( \sum_{n=0}^{\infty} B_n \frac{x^n}{n!}\right) = \sum_{n=0}^{\infty} \left(\sum_{k=0}^{n-1} \binom{n}{k} B_k \right) \frac{x^n}{n!}.$

But $x$ has exponential generating function $\displaystyle \sum_{n=0}^{\infty} [n=1] \frac{x^n}{n!}$.  Equating coefficients, we have $\displaystyle \sum_{k=0}^{n-1} \binom{n}{k} B_k = [n=1]$.  Replacing $n$ with $n+1$, we have the formula we were trying to prove:

$\displaystyle \sum_{k=0}^n \binom{n+1}{k} B_k = [n=0]$.

(As a side note, the Concrete Mathematics reference, p. 365, derives the exponential generating function for the Bernoulli numbers from the recurrence relation.)
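The recurrence really does make the Bernoulli numbers easy to compute.  Here is a quick sketch in Python using exact rational arithmetic (the function name is mine): for each $n$, the $k = n$ term is isolated and solved for $B_n$.

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(n_max):
    """Compute B_0, ..., B_{n_max} from the recurrence
    sum_{k=0}^{n} C(n+1, k) B_k = [n == 0]."""
    B = []
    for n in range(n_max + 1):
        # Isolate the k = n term:
        # C(n+1, n) B_n = [n == 0] - sum_{k < n} C(n+1, k) B_k
        total = sum(comb(n + 1, k) * B[k] for k in range(n))
        B.append((Fraction(1 if n == 0 else 0) - total) / comb(n + 1, n))
    return B

# B_0 = 1, B_1 = -1/2, B_2 = 1/6, B_3 = 0, B_4 = -1/30, ...
```

Exact `Fraction` arithmetic matters here: the Bernoulli numbers grow quickly and the recurrence involves heavy cancellation, so floating point degrades fast.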

References

1.  Ronald L. Graham, Donald E. Knuth, and Oren Patashnik.  Concrete Mathematics, 2nd edition.  Addison-Wesley, 1994.


## Does Pointwise Convergence to a Continuous Function Imply Uniform Convergence?

Recently in my advanced calculus class we discussed how the uniform limit of a sequence of continuous functions is itself continuous.  One of my students turned the question around: If the pointwise limit of a sequence of continuous functions is continuous, does that mean that the convergence is uniform?

The answer is “No,” although I couldn’t think of a counterexample off the top of my head.  I found one (not surprisingly) on Math.SE.  Although there is some good discussion and there are multiple good answers to the question there, I’m going to discuss Ilmari Karonen’s answer.

Let $\displaystyle f_n: [0,1] \to \mathbb{R}$ be given by

$\displaystyle f_n(x) = \begin{cases} nx, & 0 \leq x \leq 1/n; \\ 2 - nx, & 1/n < x \leq 2/n; \\ 0, & \text{ otherwise}. \end{cases}$

Each $f_n$ is continuous (piecewise-defined, but that’s fine).  For any $x \in (0,1]$, there exists $N$ such that $x > 2/n$, and hence $f_n(x) = 0$, for all $n \geq N$.  Since $f_n(0) = 0$ for all $n$ as well, this means that $f_n(x) \to 0$ for all $x \in [0,1]$.  The zero function is clearly continuous.

However, for each $n \in \mathbb{N}$ we have $f_n(1/n) = 1$.  This means that when $\epsilon < 1$ there cannot be an $N \in \mathbb{N}$ such that $|f_n(x)| < \epsilon$ for all $x \in [0,1]$ and all $n \geq N$.  Thus the convergence cannot be uniform.
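Both behaviors are easy to see numerically.  A small sketch of the tent functions (the function name is mine):

```python
def f(n, x):
    """The nth tent function: rises to 1 at x = 1/n, falls back to 0 at 2/n."""
    if 0 <= x <= 1 / n:
        return n * x
    elif x <= 2 / n:
        return 2 - n * x
    else:
        return 0.0

# Pointwise: at any fixed x > 0, f(n, x) is eventually 0 as n grows.
# But every f_n still attains the value 1, at the moving point x = 1/n.
```

The peak never shrinks; it just slides toward 0, which is exactly why the pointwise limit is 0 everywhere while the sup-norm distance to the limit stays equal to 1.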


## A Simple Proof That the p-Series Converges for p > 1

Proving that the p-series $\displaystyle \sum_{n=1}^{\infty} \frac{1}{n^p}$ converges for $p > 1$ is a standard exercise in second-semester calculus.  It’s also an important property to know when establishing the convergence of other series when using a comparison test.  The usual way I do this in class is with the integral test.  However, I saw a different, simple proof by joriki on Math.SE a few years back.  That proof is the subject of this post.

Let $S_n$ be the $n$th partial sum $\displaystyle S_n = \sum_{k=1}^n \frac{1}{k^p}$.  Clearly, $\{S_n\}$ is increasing.  We have

$\displaystyle S_{2n+1} = \sum_{k=1}^{2n+1} \frac{1}{k^p} = 1 + \sum_{i=1}^n \left( \frac{1}{(2i)^p} + \frac{1}{(2i+1)^p} \right) < 1 + \sum_{i=1}^n \frac{2}{(2i)^p}$

$\displaystyle = 1 + 2^{1-p} S_n < 1 + 2^{1-p} S_{2n+1}$.

Solving for $S_{2n+1}$ yields

$\displaystyle S_{2n+1} < \frac{1}{1-2^{1-p}}$.

Thus the sequence $\{S_n\}$ is bounded above.  By the Monotone Convergence Theorem, then, $\{S_n\}$ converges.  And, of course, since the sequence of partial sums converges, $\displaystyle \sum_{n=1}^{\infty} \frac{1}{n^p}$ converges.

One use of this proof, as a commenter on joriki’s post mentions, is when doing the series material at the beginning of a Calculus II course rather than at the end.  The only piece of content from the rest of Calculus II that you need for the series material is the integral test, and the primary use of the integral test is to prove convergence of the $p$-series when $p > 1$.  With this proof one can establish that convergence without knowing improper integration or the integral test.
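The bound $1/(1-2^{1-p})$ on the partial sums can be checked numerically.  A quick sketch (function names mine):

```python
def partial_sum(p, n):
    """S_n = sum_{k=1}^{n} 1/k^p."""
    return sum(1 / k**p for k in range(1, n + 1))

def bound(p):
    """The upper bound 1/(1 - 2^(1-p)) derived above, valid for p > 1."""
    return 1 / (1 - 2 ** (1 - p))

# For p = 2 the bound is 2, and indeed S_n increases toward pi^2/6 < 2.
```

Note that the bound is not tight (for $p = 2$ the true sum is $\pi^2/6 \approx 1.645$), but tightness isn’t needed: any upper bound on an increasing sequence gives convergence.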

## Proof of the Irrationality of e

In a previous post I proved that $\log_{10} 5$ is irrational.  In this post I prove the irrationality of e.

A proof of the irrationality of e must start by defining e.  There are some different ways to do that.  We’ll take e to be the unique positive number b such that $\dfrac{d}{dx} b^x = b^x$.  (This property itself can be proved from other ways to define e.  See, for example, Fitzpatrick’s Advanced Calculus [1], Section 5.2.)  We’ll also make the assumptions that (1) $e < 3$ and (2) $e^x$ is an increasing function.

Under this definition of e, the Lagrange Remainder Theorem says that, given $x > 0$ and $n \in \mathbb{N}$, there exists $c \in (0,x)$ such that

$\displaystyle e^x = 1 + x + \frac{x^2}{2} + \cdots + \frac{x^n}{n!} + \frac{e^c x^{n+1}}{(n+1)!}.$

Taking $x = 1$, we obtain, for some $c \in (0,1)$,

$\displaystyle e = 1 + 1 + \frac{1}{2} + \cdots + \frac{1}{n!} + \frac{e^c}{(n+1)!}$.

$\displaystyle \implies 0 < e - \left[ 2 + \frac{1}{2} + \cdots + \frac{1}{n!}\right] = \frac{e^c }{(n+1)!} < \frac{e}{(n+1)!} < \frac{3}{(n+1)!},$
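where the last two inequalities use assumptions (2) and (1), respectively: $e^c < e$ since $c < 1$ and $e^x$ is increasing, and $e < 3$.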

Now, suppose e is rational.  Then there exist $m_0, n_0 \in \mathbb{N}$ with $e = m_0/n_0$.  Therefore,

$\displaystyle 0 < \frac{m_0}{n_0} - \left[ 2 + \frac{1}{2} + \cdots + \frac{1}{n!} \right] \leq \frac{3}{(n+1)!}$.

Multiplying by $n!$, this becomes

$\displaystyle 0 < n! \frac{m_0}{n_0} - n! \left[ 2 + \frac{1}{2} + \cdots + \frac{1}{n!} \right] \leq \frac{3}{n+1}$.

Since the above inequality is true for all $n \in \mathbb{N}$, it is true for $n = \max \{n_0, 3\}$.  For this value of $n$, $\displaystyle n!\frac{m_0}{n_0} - n!\left[ 2 + \frac{1}{2} + \cdots + \frac{1}{n!}\right]$ is an integer (since $n_0$ divides $n!$, and $n!/k!$ is an integer for each $k \leq n$), and $3/(n+1) \leq 3/4$.

This means that there is an integer in the interval $(0,3/4)$.  However, we know that there is no integer in the interval $(0,1)$.  Contradiction; thus e is irrational.
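The key remainder estimate $0 < e - s_n \leq 3/(n+1)!$, where $s_n = 2 + 1/2 + \cdots + 1/n!$, can be verified numerically, at least in floating point and for moderate $n$ (the helper name is mine):

```python
from math import e, factorial

def taylor_partial(n):
    """s_n = 1 + 1 + 1/2! + ... + 1/n!."""
    return sum(1 / factorial(k) for k in range(n + 1))

# The remainder e - s_n is positive but crushed by 3/(n+1)!,
# which is what forces the contradiction once we scale by n!.
for n in range(1, 11):
    remainder = e - taylor_partial(n)
    assert 0 < remainder <= 3 / factorial(n + 1)
```

For large $n$ the remainder falls below floating-point resolution, which is why the check stops at $n = 10$; the proof itself, of course, is exact.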

References

1. Patrick M. Fitzpatrick, Advanced Calculus, American Mathematical Society, 2006.

## Some Divisibility Properties of the Binomial Coefficients

Recently I read in Koshy’s book [1] on Catalan numbers some divisibility properties of the binomial coefficients I had not seen before.  Koshy credits them to Hermite.   They are particularly interesting to me because (as Koshy notes) some of the most famous divisibility properties of the binomial coefficients are immediate corollaries.  This post will give proofs of Hermite’s properties (following those in Koshy) and discuss the corollaries.

Divisibility properties.  Let $m, n \geq 1$.  Let $(m,n)$ denote the greatest common divisor of $m$ and $n$.  Then

(1) $\displaystyle \binom{m}{n}$ is divisible by $\displaystyle \frac{m}{(m,n)}$, and

(2) $\displaystyle \binom{m}{n}$ is divisible by $\displaystyle \frac{m-n+1}{(m+1,n)}$.

Proofs

(1)  Let $d = (m,n)$.  By Bézout’s identity (a consequence of the Euclidean algorithm), there exist integers $A$ and $B$ such that $d = Am + Bn$.  Thus

$\displaystyle d \binom{m}{n} = A m \binom{m}{n} + Bn \binom{m}{n} = A m \binom{m}{n} + B m \binom{m-1}{n-1}$, by the absorption identity $n \binom{m}{n} = m \binom{m-1}{n-1}$.  Then

$\displaystyle d \binom{m}{n} = m \left[ A \binom{m}{n} + B \binom{m-1}{n-1} \right] = mC$, where $C$ is an integer.

Thus $\displaystyle \frac{m}{d}$ divides $\displaystyle \binom{m}{n}$.  In other words, $\displaystyle \frac{m}{(m,n)}$ divides $\displaystyle \binom{m}{n}$.

(2)  Let $d = (m+1,n)$.  Again by Bézout’s identity, there exist integers $P$ and $Q$ such that $d = P(m+1) + Qn = (m-n+1)P + n(P+Q)$.  Thus

$\displaystyle d \frac{m!}{n! \, (m-n+1)!} = \binom{m}{n} P + \binom{m}{n-1} (P+Q)$.

Let $\displaystyle R = \binom{m}{n} P + \binom{m}{n-1}(P+Q)$.  Clearly $R$ is an integer.  Since $\displaystyle \frac{m!}{n! \, (m-n+1)!} = \frac{1}{m-n+1}\binom{m}{n}$, we then have

$\displaystyle d \binom{m}{n} = (m-n+1) R$.

Thus $\displaystyle \frac{m-n+1}{d}$ divides $\displaystyle \binom{m}{n}$.  In other words, $\displaystyle \frac{m-n+1}{(m+1,n)}$ divides $\displaystyle \binom{m}{n}$.

Corollaries

1. The central binomial coefficient $\displaystyle \binom{2n}{n}$ is even.
   Proof:  The greatest common divisor of $2n$ and $n$ is $n$.  By Divisibility Property (1), $\displaystyle \binom{2n}{n}$ is divisible by $2n/n = 2$.
2. The Catalan number $C_n = \binom{2n}{n}/(n+1)$ is an integer.
   Proof:  The greatest common divisor of $2n+1$ and $n$ is 1.  By Divisibility Property (2) with $m = 2n$, $\displaystyle \binom{2n}{n}$ is divisible by $(2n - n + 1)/1 = n+1$.
3. For $p$ prime and $1 \leq n \leq p-1$, $\displaystyle \binom{p}{n}$ is divisible by $p$.
   Proof:  For such $n$, the greatest common divisor of $p$ and $n$ is 1.  By Divisibility Property (1), $\displaystyle \binom{p}{n}$ is divisible by $p$.
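Both of Hermite’s properties (and, with them, the corollaries) are easy to confirm by brute force over small cases:

```python
from math import comb, gcd

# Verify Hermite's two divisibility properties for small m, n.
# Note (m+1, n) divides both m+1 and n, hence their difference m-n+1,
# so the division in property (2) is exact.
for m in range(1, 60):
    for n in range(1, m + 1):
        c = comb(m, n)
        assert c % (m // gcd(m, n)) == 0                 # property (1)
        assert c % ((m - n + 1) // gcd(m + 1, n)) == 0   # property (2)
```

A brute-force check is no proof, of course, but it is a cheap sanity test of the statements before working through the algebra.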

References

1. Thomas Koshy, Catalan Numbers with Applications, Oxford University Press, 2009.

## Log-Convexity of the Bell Numbers

A sequence $\{a_n\}$ is said to be log-convex if, for all $n \geq 1$, $a_n^2 \leq a_{n-1} a_{n+1}$.

In this post I give a short, combinatorial-flavored proof that the Bell numbers are log-convex.  The argument is from my answer here on Math.SE.

The combinatorial interpretation of the Bell number $B_n$ is that it counts the number of ways to partition the set $\{1, 2, \ldots, n\}$ into nonempty subsets.  Let $S_n$ denote the total number of sets over all partitions of $\{1, 2, \ldots, n\}$.  Then $A_n = S_n/B_n$ is the average number of sets in a random partition of $\{1, 2, \ldots, n\}$.

Claim 1: $A_n$ is increasing.

Proof: Each partition of $\{1, 2, \ldots, n+1\}$ can be associated with a partition of $\{1, 2, \ldots, n\}$ by removing the element n+1 from the set containing it.  Under the inverse of this mapping, each partition of $\{1, 2, \ldots, n\}$ consisting of k sets maps to k partitions consisting of k sets (if n+1 is placed in an already-existing set) and one partition consisting of k+1 sets (if n+1 is placed in a set by itself) out of the partitions of $\{1, 2, \ldots, n+1\}$.  Thus partitions of $\{1, 2, \ldots, n\}$ with more sets map to more partitions of $\{1, 2, \ldots, n+1\}$ containing the same number of sets as well as one partition with one more set.  This raises the average number of sets as you move from n elements to n+1 elements.

Claim 2: The log-convexity of $B_n$ is equivalent to the fact that $A_n$ is increasing.

Proof: Separate the partitions counted by $B_{n+1}$ into (1) those that have a set consisting of the single element n+1 and (2) those that don’t.  It should be clear that there are $B_n$ of the former.  Also, there are $S_n$ of the latter because each partition in group (2) is formed by adding n+1 to a set in a partition of $\{1, 2, \ldots, n\}$.  Thus $B_{n+1} = B_n + S_n$.

Therefore,

$\displaystyle B_n^2 \leq B_{n-1} B_{n+1} \iff \frac{B_{n+1}}{B_n} \geq \frac{B_n}{B_{n-1}} \iff 1 + \frac{S_n}{B_n} \geq 1 + \frac{S_{n-1}}{B_{n-1}} \iff A_n \geq A_{n-1}.$

Since we established the last inequality in Claim 1, the Bell numbers are log-convex.

I was not the first to find this proof; Canfield [1] published it in 1995.  Engel [2] was the first to prove that the Bell numbers are log-convex; he used a different method.  See my answer here on Math.SE for references to more proofs.
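Log-convexity is also easy to check numerically.  A sketch (function name mine), computing the Bell numbers from the standard recurrence $B_{n+1} = \sum_k \binom{n}{k} B_k$, which counts partitions of $\{1, \ldots, n+1\}$ by the set containing $n+1$:

```python
from math import comb

def bell_numbers(n_max):
    """B_0, ..., B_{n_max} via B_{n+1} = sum_{k=0}^{n} C(n, k) B_k."""
    B = [1]  # B_0 = 1: the empty set has one (empty) partition
    for n in range(n_max):
        B.append(sum(comb(n, k) * B[k] for k in range(n + 1)))
    return B

B = bell_numbers(20)
# Log-convexity: B_n^2 <= B_{n-1} B_{n+1} for all n >= 1.
assert all(B[n] ** 2 <= B[n - 1] * B[n + 1] for n in range(1, 20))
```

Python’s arbitrary-precision integers keep this exact; the Bell numbers grow superexponentially, so fixed-width integers would overflow quickly.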

References

1.  E. Rodney Canfield, “Engel’s Inequality for Bell Numbers,” Journal of Combinatorial Theory, Series A, 72 (1995) 184-187.

2. Konrad Engel, “On the Average Rank of an Element in a Filter of the Partition Lattice,” Journal of Combinatorial Theory, Series A, 65 (1994) 67-78.

## Proving the Existence of Irrational Numbers

The ancient Greeks first proved the existence of irrational numbers by proving that $\sqrt{2}$ is irrational.  The proof is, as modern proofs of irrationality go, fairly simple.  It is often the first example of a proof of irrationality that students see in, say, a course in real analysis.

Yesterday, though, I saw an even simpler proof of irrationality in Morgan’s Real Analysis [1] that I thought worth showing to my students in advanced calculus.

Theorem: The number $\log_{10} 5$ is irrational.

Proof: First, $\log_{10} 5 > 0$ because 5 and 10 are both greater than 1.  Suppose that $\log_{10} 5$ is rational.  Then there exist positive integers $p, q$ such that $10^{p/q} = 5$.  Thus $\displaystyle 10^p = 5^q$.  However, $10^p$ ends in 0, while $5^q$ ends in 5.  This is a contradiction, and so $\log_{10} 5$ is irrational.
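The last-digit facts driving the contradiction are themselves easy (one-line) inductions, and trivial to confirm computationally:

```python
# Every power 10^p ends in 0, and every power 5^q ends in 5,
# so 10^p = 5^q is impossible for positive integers p and q.
assert all(pow(10, p, 10) == 0 for p in range(1, 50))
assert all(pow(5, q, 10) == 5 for q in range(1, 50))
```

(`pow(b, e, 10)` is modular exponentiation, i.e., the last decimal digit of $b^e$.)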

References

1. Frank Morgan, Real Analysis, American Mathematical Society, 2005.