A Bonus Question on Convergent Series

Occasionally when teaching the sequences and series material in second-semester calculus I’ve included the following question as a bonus:

Question: Suppose \displaystyle \sum_{n=1}^{\infty} a_n is absolutely convergent.  Does that imply anything about the convergence of \displaystyle \sum_{n=1}^{\infty} a^2_n?

The answer is that \displaystyle \sum_{n=1}^{\infty}a^2_n converges.  I’m going to give two proofs.  The first is the more straightforward approach, but it is somewhat longer.  The second is clever and shorter.

First method: If \displaystyle \sum_{n=1}^{\infty} a_n is absolutely convergent, then \displaystyle \sum_{n=1}^{\infty} |a_n| converges, and so, by the divergence test, \displaystyle \lim_{n \to \infty} |a_n| = 0.  Thus there exists some N > 0 such that if n \geq N then |a_n| \leq 1.  This means that, for n \geq N, a^2_n \leq |a_n|.  By the direct comparison test (comparing with the convergent series \displaystyle \sum_{n=1}^{\infty} |a_n|), then, \displaystyle \sum_{n=1}^{\infty} a^2_n converges.

Second method: As we argued above, \displaystyle \lim_{n \to \infty} |a_n| = 0.  Thus \displaystyle \lim_{n \to \infty} \frac{a^2_n}{|a_n|} = \lim_{n \to \infty} |a_n| = 0.  (Any terms with a_n = 0 contribute nothing to either series and may be discarded.)  Since \displaystyle \sum_{n=1}^{\infty} |a_n| converges, the limit comparison test (in the version that allows a limit of 0) then implies that \displaystyle \sum_{n=1}^{\infty} a^2_n converges.
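As a quick numerical illustration (not part of either proof), the following Python snippet checks the claim on the hypothetical example a_n = (-1)^{n+1}/n^2, which is absolutely convergent; the series of squares is \sum 1/n^4 = \pi^4/90.

```python
import math

# Illustrative example (my choice, not from the proofs): a_n = (-1)^(n+1) / n^2.
# The series sum |a_n| converges (p-series, p = 2), so sum a_n^2 should too.
N = 100_000
abs_sum = sum(1 / n**2 for n in range(1, N + 1))   # partial sum of |a_n|, approaches pi^2/6
sq_sum = sum(1 / n**4 for n in range(1, N + 1))    # partial sum of a_n^2, approaches pi^4/90

print(abs_sum)
print(sq_sum)
print(abs(sq_sum - math.pi**4 / 90) < 1e-6)        # True: the squares converge to pi^4/90
```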

(The first method is David Mitra’s answer and the second method is my answer to this Math.SE question.)

Posted in calculus, sequences and series

My Experiences on a Post-Election Panel

No mathematics post this month.  Instead, I’m just going to link to an article I published last week in Inside Higher Ed.  In this article I describe my experiences as the conservative voice on a panel held on my campus on November 9.  This was the day after the U.S. Presidential election, and the purpose of the panel was to help the mostly progressive campus make sense of Donald Trump’s win.

Posted in campus issues

Relations That Are Symmetric and Antisymmetric

When I teach relations and their properties, the question of whether a relation can be both symmetric and antisymmetric always seems to arise.  This post addresses that question.

First, a reminder of the definitions here: A relation \rho on a set S is symmetric if, for all x,y \in S, (x,y) \in \rho implies that (y,x) \in \rho.  A relation \rho on a set S is antisymmetric if, for all x,y \in S, (x,y) \in \rho and (y,x) \in \rho implies that x = y.

At first glance the definitions look a bit strange, in the sense that we would expect antisymmetric to mean “not symmetric.”  But that’s not quite what antisymmetric means. In fact, it is possible for a relation to be both symmetric and antisymmetric.

The boring (trivial) example is the empty relation \rho on a set.  In this case, the antecedents in the definitions of symmetric ((x,y) \in \rho) and antisymmetric ((x,y) \in \rho and (y,x) \in \rho) are never satisfied.  Thus the conditional statements in the definitions of the two properties are vacuously true, and so the empty relation is both symmetric and antisymmetric.

More interestingly, though, there are nontrivial examples.  Let’s think through what those might be. Suppose (x,y) \in \rho.  If \rho is symmetric, then we require (y,x) \in \rho as well.  If \rho is antisymmetric, then, since (x,y) \in \rho and (y,x) \in \rho, we must have x = y.  And this gives us a characterization of relations that are both symmetric and antisymmetric:

If a relation \rho on a set S is both symmetric and antisymmetric, then the only ordered pairs in \rho are of the form (x,x), where x \in S.

Thus, for example, the equality relation on the set of real numbers is both symmetric and antisymmetric.

Perhaps it’s clearer now in what sense antisymmetric opposes symmetric.  Symmetric means that there cannot be one-way relationships between two different elements.  Antisymmetric means that there cannot be two-way relationships between two different elements.  Both definitions allow for a relationship between an element and itself.
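The characterization above is easy to check by brute force on small finite relations.  Here is a short Python sketch (the helper names is_symmetric and is_antisymmetric are mine) testing the diagonal relation, the empty relation, and a symmetric-but-not-antisymmetric relation on S = {1, 2, 3}:

```python
# Check the definitions directly on relations stored as sets of ordered pairs.
def is_symmetric(rho):
    return all((y, x) in rho for (x, y) in rho)

def is_antisymmetric(rho):
    return all(x == y for (x, y) in rho if (y, x) in rho)

S = {1, 2, 3}
diagonal = {(x, x) for x in S}   # only pairs of the form (x, x)
empty = set()                    # the trivial (vacuous) example
mixed = {(1, 2), (2, 1)}         # symmetric but not antisymmetric

print(is_symmetric(diagonal) and is_antisymmetric(diagonal))  # True
print(is_symmetric(empty) and is_antisymmetric(empty))        # True (vacuously)
print(is_antisymmetric(mixed))                                # False
```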

Posted in relations

How Do You Tell Whether Two Lines Intersect in 3D?

How do you tell whether two three-dimensional lines intersect?  My colleague Greg Elliott in physics asked me this recently.  He understood how the usual method works, but he wanted to know whether there was a more clever method.  In this post I’ll discuss the usual method as well as an alternative method that I first suggested and that we then refined together.  (I’ll leave it to the reader to decide whether the second method is “more clever.”)  Throughout this post I assume that the lines are known not to be parallel.

To see how the methods work, let’s look at an example.  Do the lines (t, -2+3t, 4-t) and (2s, 3+s, -3+4s) intersect?

Method 1:  The usual method is to set the components equal to each other and see whether there is a solution.  This gives the system

\displaystyle \begin{aligned} t &= 2s \\ -2+3t &= 3+s \\ 4-t &= -3+4s \end{aligned}

Substituting the first equation into the second yields -2+3(2s) = 3+s \implies s = 1.  Substituting the first equation into the third yields 4-(2s) = -3+4s \implies s = 7/6.  Since s cannot take on both values, the lines do not intersect.  (If we had gotten the same value for s then the lines would intersect.)
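For those who want to automate Method 1, here is a sketch in Python that solves the first two component equations exactly (via Cramer's rule and the standard fractions module) and then tests the third equation; the variable names are mine, not standard notation.

```python
from fractions import Fraction

# Method 1 for the example lines, in exact arithmetic.
# Line 1: (t, -2+3t, 4-t).  Line 2: (2s, 3+s, -3+4s).
p, u = (0, -2, 4), (1, 3, -1)    # point and direction of line 1
q, v = (0, 3, -3), (2, 1, 4)     # point and direction of line 2

# Solve the first two component equations of p + t*u = q + s*v by Cramer's rule.
a, b, c = u[0], -v[0], q[0] - p[0]
d, e, f = u[1], -v[1], q[1] - p[1]
det = a * e - b * d              # nonzero here since the lines are not parallel
t = Fraction(c * e - b * f, det)
s = Fraction(a * f - c * d, det)

# The lines intersect iff this (t, s) also satisfies the third component equation.
intersects = p[2] + t * u[2] == q[2] + s * v[2]
print(t, s, intersects)          # 2 1 False: no intersection, matching the post
```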

Method 2: Two non-parallel lines intersect if and only if they lie in the same plane.  Thus, first construct the plane containing the first line and parallel to the direction vector of the second line.  Then see whether the second line lies in this plane.  To construct the plane, take the direction vectors for the two lines: {\bf u} = \langle 1, 3, -1 \rangle and {\bf v} = \langle 2, 1, 4 \rangle.  Then cross them to get the normal vector {\bf w} = {\bf u} \times {\bf v} = \langle 13, -6, -5 \rangle.  A plane is determined by a point and a normal vector.  Take any point from the first line; the simplest is probably (0, -2, 4).  The equation for the plane containing the first line and parallel to {\bf v} is thus 13x - 6(y+2) - 5(z-4) = 0.  Now, if the second line lies in this plane then every point on that line will satisfy the plane equation.  Substituting in the point (0, 3, -3) from the second line yields 0 - 6(3+2) - 5(-3-4) = -30+35 = 5 \neq 0.  Thus the second line does not lie in this plane, and the two lines do not intersect.

(Formula for Method 2.)  Generalizing from this example, we can give an explicit condition in terms of a formula.  A line in 3D is determined by a point and a direction vector; let the first line be determined by point {\bf p} and direction vector {\bf u}, and the second line by point {\bf q} and direction vector {\bf v}.  Cross the two direction vectors to get {\bf u} \times {\bf v}, and take the displacement vector {\bf p} - {\bf q} between the two points.  Since {\bf u} \times {\bf v} is orthogonal to the plane defined by the direction vectors, the only way the two lines can lie in the same plane is if {\bf u} \times {\bf v} is orthogonal to the displacement vector {\bf p} - {\bf q}.  In other words, the two lines lie in the same plane if and only if ({\bf u} \times {\bf v}) \cdot ({\bf p} - {\bf q}) = 0.
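This formula translates directly into code.  Here is a minimal Python sketch of the test (u x v) . (p - q) = 0, applied to the example lines (function names are my own):

```python
# Coplanarity test for two non-parallel 3D lines: (u x v) . (p - q) == 0.
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def coplanar(p, u, q, v):
    w = cross(u, v)                            # normal to the plane of directions
    d = (p[0]-q[0], p[1]-q[1], p[2]-q[2])      # displacement between the points
    return w[0]*d[0] + w[1]*d[1] + w[2]*d[2] == 0

p, u = (0, -2, 4), (1, 3, -1)   # first line from the example
q, v = (0, 3, -3), (2, 1, 4)    # second line from the example
print(cross(u, v))              # (13, -6, -5), matching the post
print(coplanar(p, u, q, v))     # False: the lines do not intersect
```

(Exact equality with 0 is fine for integer coordinates; with floating-point inputs one would compare against a small tolerance instead.)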

Posted in analytic geometry, vectors

Cassini’s Identity without Matrix Powers

Cassini’s identity for Fibonacci numbers says that F_{n+1} F_{n-1} - F_n^2 = (-1)^n.  The classic proof of this shows (by induction) that \begin{bmatrix} F_{n+1} & F_n \\ F_n & F_{n-1} \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}^n.  Since \det \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} = -1, Cassini’s identity follows.

In this post I’m going to give a different proof involving determinants, but one that does not use powers of the Fibonacci matrix.

Let’s start with the 2 \times 2 identity matrix, which we’ll call A_0:

\displaystyle A_0 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.

To construct A_1, add the second row to the first and then swap the two rows.  This gives us

\displaystyle A_1 = \begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix}.

Continue this process of adding the second row to the first and then swapping the two rows.  This yields

\displaystyle A_2 = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix}, A_3 = \begin{bmatrix} 1 & 2 \\ 2 & 3 \end{bmatrix}, A_4 = \begin{bmatrix} 2 & 3 \\ 3 & 5 \end{bmatrix}, \ldots .

Since A_1 = \begin{bmatrix} F_0 & F_1 \\ F_1 & F_2 \end{bmatrix} and F_{n+1} = F_n + F_{n-1}, the fact that we’re adding rows each time means that A_n = \begin{bmatrix} F_{n-1} & F_n \\ F_n & F_{n+1} \end{bmatrix}.

Since adding a row to another row doesn’t change the determinant, and swapping two rows changes only the sign of the determinant, we have

\displaystyle F_{n+1} F_{n-1} - F_n^2 = \det A_n = (-1)^n \det A_0 = (-1)^n,

which is Cassini’s identity.
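The construction can be checked mechanically.  Here is a short Python sketch that performs the add-and-swap step ten times and verifies that the determinant alternates in sign as claimed:

```python
# Carry out the row-add-and-swap construction from the post.
# Each step adds row 2 to row 1 and then swaps the rows, so det flips sign each time.
A = [[1, 0], [0, 1]]   # A_0, the identity
for n in range(1, 11):
    A = [A[1], [A[0][0] + A[1][0], A[0][1] + A[1][1]]]   # add row 2 to row 1, then swap
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    # A_n should be [[F_{n-1}, F_n], [F_n, F_{n+1}]] with det = (-1)^n
    assert det == (-1) ** n
print(A)   # A_10 = [[F_9, F_10], [F_10, F_11]] = [[34, 55], [55, 89]]
```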

See also my paper “Fibonacci Identities via the Determinant Sum Property.”

Posted in Fibonacci sequence, matrices

Alternating Sum of the Reciprocals of the Central Binomial Coefficients

In the last post we proved the generating function for the reciprocals of the central binomial coefficients:

\displaystyle \sum_{n=0}^{\infty} \frac{x^n}{\binom{2n}{n}} =  \frac{1}{1-\frac{x}{4}} + \frac{4 \sqrt{x}\arcsin \left(\frac{\sqrt{x}}{2}\right)}{(4-x)^{3/2}}.

In this post we’re going to use this generating function to find the alternating sum of the reciprocals of the central binomial coefficients.  On the left side of the generating function, we need merely substitute -1 for x.  (Recall that the series converges for -4 < x < 4.)  On the right side, however, we have the arcsine of an imaginary number.  Most of this post will be about how to make sense of that.  Essentially, we’re going to convert inverse sine to inverse hyperbolic sine and then use the logarithm formula for the latter.

First, recall the representation of \sin x in terms of complex exponentials, as well as the definition of hyperbolic sine:

\displaystyle \sin x = \frac{e^{ix} - e^{-ix}}{2i}, \\ \sinh x = \frac{e^x - e^{-x}}{2}.

Substituting ix for x in the representation of \sin x shows that \sin (ix) = i \sinh x.  Now, letting y = \sinh x, we have x = \sinh^{-1} y.  Also, \sin (ix) = i y.  Thus ix = \arcsin (iy), and i \sinh^{-1} y = \arcsin (iy).  In other words, \arcsin (ix) = i \sinh^{-1} x.
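This identity is easy to confirm numerically with Python's cmath module, which implements the principal branch of the complex arcsine:

```python
import cmath
import math

# Check arcsin(ix) = i * asinh(x) for several real values of x.
for x in (0.25, 0.5, 1.0, 2.0):
    lhs = cmath.asin(1j * x)
    rhs = 1j * math.asinh(x)
    assert abs(lhs - rhs) < 1e-12
print(cmath.asin(0.5j))   # purely imaginary: i * asinh(1/2)
```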

With the representation

\displaystyle \sinh^{-1} x = \ln (x + \sqrt{x^2+1}),

we can finally obtain a simple expression for the alternating sum of the reciprocals of the central binomial coefficients:

\displaystyle \sum_{n=0}^{\infty} \frac{(-1)^n}{\binom{2n}{n}} =  \frac{1}{1-\frac{-1}{4}} + \frac{4 (\sqrt{-1})\arcsin \left(\frac{\sqrt{-1}}{2}\right)}{(4-(-1))^{3/2}} =  \frac{4}{5} + \frac{4 i \arcsin \left(\frac{i}{2}\right)}{5^{3/2}}

\displaystyle =  \frac{4}{5} + \frac{4 i (i) \sinh^{-1} \left(\frac{1}{2}\right)}{5 \sqrt{5}} =  \frac{4}{5} - \frac{4 \ln (\frac{1}{2} + \sqrt{1/4+1})}{5 \sqrt{5}} =\frac{4}{5} - \frac{4 \sqrt{5}}{25} \ln \left(\frac{1 + \sqrt{5}}{2} \right).
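As a sanity check, the following Python snippet compares a partial sum of the series against the closed form just derived (math.comb requires Python 3.8+):

```python
import math

# Compare the closed form 4/5 - (4*sqrt(5)/25) * ln((1+sqrt(5))/2)
# against a partial sum of sum (-1)^n / C(2n, n).
closed = 4/5 - (4 * math.sqrt(5) / 25) * math.log((1 + math.sqrt(5)) / 2)
partial = sum((-1)**n / math.comb(2*n, n) for n in range(60))
print(closed, partial)
assert abs(closed - partial) < 1e-12   # terms shrink roughly like 4^(-n)
```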

Posted in binomial coefficients, sequences and series

Generating Function for the Reciprocals of the Central Binomial Coefficients

In this post we generalize the result from the last post to find the generating function for the reciprocals of the central binomial coefficients.  As in that post, we start with the beta integral expression for 1/\binom{2n}{n}:

\displaystyle \frac{1}{\binom{2n}{n}} = (2n+1) \int_0^1 y^n (1-y)^n \, dy = \int_0^1 (2n+1) (y(1-y))^n \, dy.

Now, multiply by \frac{x^n}{2n+1} (for now, it’s easier not to deal with the extra factor of 2n+1 on the right) and sum up:

\displaystyle \sum_{n=0}^{\infty} \frac{x^n}{\binom{2n}{n} (2n+1)} = \sum_{n=0}^{\infty} \int_0^1 (xy(1-y))^n \, dy = \int_0^1 \left( \sum_{n=0}^{\infty}  (xy(1-y))^n \right) \, dy = \int_0^1 \frac{1}{1-xy(1-y)} \, dy,

where we use the geometric series formula in the last step.  Now, apply the substitution t = \sqrt{x} (2y-1).  It’s not obvious at this point that this will be helpful, but it will.  The denominator of the integral becomes

\displaystyle 1-xy(1-y) = 1-x \left(\frac{t + \sqrt{x}}{2\sqrt{x}}\right) \left( \frac{\sqrt{x}-t}{2 \sqrt{x}}\right) = 1- \frac{x-t^2}{4} = \frac{4 - x+ t^2}{4}.

This gives us

\displaystyle \int_0^1 \frac{1}{1-xy(1-y)} \, dy = \int_{-\sqrt{x}}^{\sqrt{x}} \frac{4}{2 \sqrt{x} (t^2 + 4-x)} \, dt = \frac{2}{\sqrt{x}} \int_{-\sqrt{x}}^{\sqrt{x}} \frac{1}{t^2 + 4-x} \, dt \\  = \left.\frac{2}{\sqrt{x (4-x)}} \arctan \left( \frac{t}{\sqrt{4-x}} \right) \right|_{-\sqrt{x}}^{\sqrt{x}} \\  = \frac{2}{\sqrt{4x-x^2}} \left( \arctan \left( \frac{\sqrt{x}}{\sqrt{4-x}} \right) - \arctan \left( \frac{-\sqrt{x}}{\sqrt{4-x}} \right) \right)\\  = \frac{4}{\sqrt{4x-x^2}} \arctan \left( \frac{\sqrt{x}}{\sqrt{4-x}} \right) \\  = \frac{4}{\sqrt{4x-x^2}} \arcsin \left( \frac{\sqrt{x}}{2} \right),

where in the second-to-last step we use the fact that arctangent is an odd function.

In sum, we now have

\displaystyle \sum_{n=0}^{\infty} \frac{x^n}{\binom{2n}{n} (2n+1)} = \frac{4}{\sqrt{4x-x^2}} \arcsin \left( \frac{\sqrt{x}}{2} \right).
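Before continuing, we can sanity-check this identity numerically at a sample point, say x = 2, where the right side works out to 2 \arcsin(\sqrt{2}/2) = \pi/2:

```python
import math

# Check sum x^n / (C(2n,n) * (2n+1)) = 4/sqrt(4x - x^2) * arcsin(sqrt(x)/2) at x = 2.
x = 2.0
lhs = sum(x**n / (math.comb(2*n, n) * (2*n + 1)) for n in range(80))
rhs = 4 / math.sqrt(4*x - x**2) * math.asin(math.sqrt(x) / 2)
assert abs(lhs - rhs) < 1e-12
print(lhs)   # approximately pi/2
```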

We have some more work to do to get the left side where we want it.  Replace x with x^2 to get

\displaystyle \sum_{n=0}^{\infty} \frac{x^{2n}}{\binom{2n}{n} (2n+1)} = \frac{4}{\sqrt{4x^2-x^4}} \arcsin \left( \frac{x}{2} \right) =  \frac{4}{x\sqrt{4-x^2}} \arcsin \left( \frac{x}{2} \right) .

(Technically, we get |x| in place of x on the right, but since arcsine is odd the expression simplifies to what we have here.)  Now, multiply both sides by x and differentiate.  After some simplification, we get the following:

\displaystyle \sum_{n=0}^{\infty} \frac{x^{2n}}{\binom{2n}{n}} =  \frac{1}{1-\left(\frac{x}{2}\right)^2} + \frac{4x \arcsin \left(\frac{x}{2}\right)}{(4-x^2)^{3/2}}.

To finish, we replace x with \sqrt{x}:

\displaystyle \sum_{n=0}^{\infty} \frac{x^n}{\binom{2n}{n}} =  \frac{1}{1-\frac{x}{4}} + \frac{4 \sqrt{x}\arcsin \left(\frac{\sqrt{x}}{2}\right)}{(4-x)^{3/2}}.

(To verify with the result from the previous post, we need to investigate this result for convergence.  Convergence matters only with the geometric series evaluation with common ratio xy(1-y).  Since y must be between 0 and 1, the max value of y(1-y) is 1/4, and so the values of x for which the series converges are -4 < x < 4.  Thus we can safely substitute 1 for x in the formula we just derived, obtaining \frac{4}{3} + \frac{2 \pi \sqrt{3}}{27}, as we found in the last post.)
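Here is that verification carried out numerically in Python (math.comb requires Python 3.8+):

```python
import math

# Evaluate both sides of the final generating function at x = 1 and compare
# with the value 4/3 + 2*pi*sqrt(3)/27 from the previous post.
x = 1.0
series = sum(x**n / math.comb(2*n, n) for n in range(80))
formula = 1 / (1 - x/4) + 4 * math.sqrt(x) * math.asin(math.sqrt(x)/2) / (4 - x)**1.5
known = 4/3 + 2 * math.pi * math.sqrt(3) / 27
assert abs(series - formula) < 1e-12 and abs(formula - known) < 1e-12
print(series)
```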


Posted in binomial coefficients, generating functions, sequences and series