Relations That Are Symmetric and Antisymmetric

When I teach relations and their properties, the question of whether a relation can be both symmetric and antisymmetric always seems to arise.  This post addresses that question.

First, a reminder of the definitions here: A relation \rho on a set S is symmetric if, for all x,y \in S, (x,y) \in \rho implies that (y,x) \in \rho.  A relation \rho on a set S is antisymmetric if, for all x,y \in S, (x,y) \in \rho and (y,x) \in \rho implies that x = y.

At first glance the definitions look a bit strange, in the sense that we would expect antisymmetric to mean “not symmetric.”  But that’s not quite what antisymmetric means. In fact, it is possible for a relation to be both symmetric and antisymmetric.

The boring (trivial) example is the empty relation \rho on a set.  In this case, the antecedents in the definitions of symmetric ((x,y) \in \rho) and antisymmetric ((x,y) \in \rho and (y,x) \in \rho) are never satisfied.  Thus the conditional statements in the definitions of the two properties are vacuously true, and so the empty relation is both symmetric and antisymmetric.

More interestingly, though, there are nontrivial examples.  Let’s think through what those might be. Suppose (x,y) \in \rho.  If \rho is symmetric, then we require (y,x) \in \rho as well.  If \rho is antisymmetric, then, since (x,y) \in \rho and (y,x) \in \rho, we must have x = y.  And this gives us a characterization of relations that are both symmetric and antisymmetric:

If a relation \rho on a set S is both symmetric and antisymmetric, then the only ordered pairs in \rho are of the form (x,x), where x \in S.

Thus, for example, the equality relation on the set of real numbers is both symmetric and antisymmetric.

Perhaps it’s clearer now in what sense antisymmetry opposes symmetry.  Symmetric means that there cannot be one-way relationships between two different elements.  Antisymmetric means that there cannot be two-way relationships between two different elements.  Both definitions allow for a relationship between an element and itself.
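The characterization is easy to test by brute force on finite relations.  Here’s a short Python sketch (the helper names are my own) with a relation represented as a set of ordered pairs:

```python
# Brute-force property checks for a relation on a finite set,
# represented as a set of ordered pairs.

def is_symmetric(rho):
    # (x, y) in rho implies (y, x) in rho
    return all((y, x) in rho for (x, y) in rho)

def is_antisymmetric(rho):
    # (x, y) in rho and (y, x) in rho implies x == y
    return all(x == y for (x, y) in rho if (y, x) in rho)

empty = set()                              # vacuously both
equality = {(x, x) for x in range(5)}      # equality on {0, ..., 4}: both
mixed = {(1, 2), (2, 1), (3, 3)}           # symmetric but not antisymmetric

assert is_symmetric(empty) and is_antisymmetric(empty)
assert is_symmetric(equality) and is_antisymmetric(equality)
assert is_symmetric(mixed) and not is_antisymmetric(mixed)
```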

Posted in relations | Leave a comment

How Do You Tell Whether Two Lines Intersect in 3D?

How do you tell whether two three-dimensional lines intersect?  My colleague Greg Elliott in physics asked me this recently.  He understood how the usual method works, but he wanted to know whether there was a more clever method.  In this post I’ll discuss the usual method as well as the alternative method that I first suggested and that we then refined.  (I’ll leave it to the reader to determine whether the second method is “more clever.”)  Both methods assume that you already know the lines aren’t parallel.

To see how the methods work, let’s look at an example.  Do the lines (t, -2+3t, 4-t) and (2s, 3+s, -3+4s) intersect?

Method 1:  The usual method is to set the components equal to each other and see whether there is a solution.  This gives the system

\displaystyle \begin{aligned} t &= 2s \\ -2+3t &= 3+s \\ 4-t &= -3+4s \end{aligned}

Substituting the first equation into the second yields -2+3(2s) = 3+s \implies s = 1.  Substituting the first equation into the third yields 4-(2s) = -3+4s \implies s = 7/6.  Since s cannot take on both values, the lines do not intersect.  (If we had gotten the same value for s then the lines would intersect.)
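Method 1 can be automated.  Here’s a Python sketch (the function name and setup are my own) that solves for the two parameters from a nonsingular pair of coordinate equations via Cramer’s rule, using exact rational arithmetic, and then checks the remaining equation:

```python
from fractions import Fraction as F

def intersect_method1(p, u, q, v):
    """Return the intersection point of lines p + t*u and q + s*v
    (integer data), or None if they don't meet.  Assumes the lines
    are not parallel, as the post does."""
    # The system is t*u[i] - s*v[i] = q[i] - p[i] for each coordinate i.
    for i in range(3):
        for j in range(i + 1, 3):
            det = u[i] * (-v[j]) + v[i] * u[j]
            if det != 0:
                bi, bj = q[i] - p[i], q[j] - p[j]
                t = F(bi * (-v[j]) + v[i] * bj, det)   # Cramer's rule
                s = F(u[i] * bj - bi * u[j], det)
                # Check the remaining coordinate(s)
                if all(p[k] + t * u[k] == q[k] + s * v[k] for k in range(3)):
                    return tuple(p[k] + t * u[k] for k in range(3))
                return None
    return None

p, u = (0, -2, 4), (1, 3, -1)      # the line (t, -2+3t, 4-t)
q, v = (0, 3, -3), (2, 1, 4)       # the line (2s, 3+s, -3+4s)
assert intersect_method1(p, u, q, v) is None   # the lines do not intersect
```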

Method 2: Two non-parallel lines intersect if and only if they lie in the same plane.  Thus, first construct the plane defined by the first line and the direction vector of the second line.  Then see whether the second line lies in this plane.  To construct the plane, take the direction vectors of the two lines: {\bf u} = \langle 1, 3, -1 \rangle and {\bf v} = \langle 2, 1, 4 \rangle.  Then cross them to get the vector {\bf w} = \langle 13, -6, -5 \rangle.  A plane needs a point and a normal vector.  Take any point from the first line; the simplest is probably (0, -2, 4).  The equation for the plane defined by the first line and {\bf v} is thus 13x - 6(y+2) - 5(z-4) = 0.  Now, if the second line lies in this plane then any point from that line will satisfy the plane equation.  Substituting in the point (0, 3, -3) from the second line yields 0 - 6(3+2) - 5(-3-4) = -30+35 = 5 \neq 0.  Thus the second line lies in a different plane, and the two lines do not intersect.

(Formula for Method 2).  Generalizing from this example, we can give an explicit condition in terms of a formula.   A line in 3D is defined by a point and a direction vector.  Let the first line be defined by point {\bf p} and direction vector {\bf u}, and let the second line be defined by point {\bf q} and direction vector {\bf v}.  Take the two direction vectors {\bf u} and {\bf v} and cross them to get vector {\bf u} \times {\bf v}.  Then take the displacement vector {\bf p} - {\bf q} between the two points.  Since {\bf u} \times {\bf v} is orthogonal to the plane defined by the direction vectors, the only way the two lines lie in the same plane is if {\bf u} \times {\bf v} is orthogonal to the displacement vector {\bf p} - {\bf q}.  In other words, the two lines lie in the same plane if and only if ({\bf u} \times {\bf v}) \cdot ({\bf p} - {\bf q}) = 0.
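The formula translates directly into code.  Here’s a Python sketch (the function name is my own) that evaluates the scalar triple product ({\bf u} \times {\bf v}) \cdot ({\bf p} - {\bf q}):

```python
def lines_coplanar(p, u, q, v):
    # (u x v) . (p - q): zero iff two non-parallel lines lie in a
    # common plane, i.e. iff they intersect.
    w = (u[1] * v[2] - u[2] * v[1],       # cross product u x v
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    d = tuple(a - b for a, b in zip(p, q))  # displacement p - q
    return sum(wi * di for wi, di in zip(w, d)) == 0

# The example from the post: the lines do not intersect.
assert not lines_coplanar((0, -2, 4), (1, 3, -1), (0, 3, -3), (2, 1, 4))
```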

Posted in analytic geometry, vectors | Leave a comment

Cassini’s Identity without Matrix Powers

Cassini’s identity for Fibonacci numbers says that F_{n+1} F_{n-1} - F_n^2 = (-1)^n.  The classic proof of this shows (by induction) that \begin{bmatrix} F_{n+1} & F_n \\ F_n & F_{n-1} \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}^n.  Since \det \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} = -1, Cassini’s identity follows.

In this post I’m going to give a different proof involving determinants, but one that does not use powers of the Fibonacci matrix.

Let’s start with the 2 \times 2 identity matrix, which we’ll call A_0:

\displaystyle A_0 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.

To construct A_1, add the second row to the first and then swap the two rows.  This gives us

\displaystyle A_1 = \begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix}.

Continue this process of adding the second row to the first and then swapping the two rows.  This yields

\displaystyle A_2 = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix}, A_3 = \begin{bmatrix} 1 & 2 \\ 2 & 3 \end{bmatrix}, A_4 = \begin{bmatrix} 2 & 3 \\ 3 & 5 \end{bmatrix}, \ldots .

Since A_1 = \begin{bmatrix} F_0 & F_1 \\ F_1 & F_2 \end{bmatrix} and F_{n+1} = F_n + F_{n-1}, the fact that we’re adding rows each time means that A_n = \begin{bmatrix} F_{n-1} & F_n \\ F_n & F_{n+1} \end{bmatrix}.

Since adding a row to another row doesn’t change the determinant, and swapping two rows changes only the sign of the determinant, we have

\displaystyle F_{n+1} F_{n-1} - F_n^2 = \det A_n = (-1)^n \det A_0 = (-1)^n,

which is Cassini’s identity.
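The row-operation construction is easy to verify by machine.  Here’s a Python sketch (the helper names are my own) that builds A_n step by step and checks both the Fibonacci entries and Cassini’s identity:

```python
def fib(n):
    # Iterative Fibonacci with fib(0) = 0, fib(1) = 1.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

A = [[1, 0], [0, 1]]    # A_0, the identity matrix
for n in range(1, 15):
    # Add the second row to the first, then swap the two rows.
    A[0] = [A[0][0] + A[1][0], A[0][1] + A[1][1]]
    A[0], A[1] = A[1], A[0]
    assert A == [[fib(n - 1), fib(n)], [fib(n), fib(n + 1)]]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # F_{n-1}F_{n+1} - F_n^2
    assert det == (-1) ** n                        # Cassini's identity
```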

See also my paper “Fibonacci Identities via the Determinant Sum Property.”

Posted in Fibonacci sequence, matrices | Leave a comment

Alternating Sum of the Reciprocals of the Central Binomial Coefficients

In the last post we proved the generating function for the reciprocals of the central binomial coefficients:

\displaystyle \sum_{n=0}^{\infty} \frac{x^n}{\binom{2n}{n}} =  \frac{1}{1-\frac{x}{4}} + \frac{4 \sqrt{x}\arcsin \left(\frac{\sqrt{x}}{2}\right)}{(4-x)^{3/2}}.

In this post we’re going to use this generating function to find the alternating sum of the reciprocals of the central binomial coefficients.  On the left side of the generating function, we need merely substitute -1 for x.  (Recall that the series converges for -4 < x < 4.)  On the right side, however, we have the arcsine of an imaginary number.  Most of this post will be about how to make sense of that.  Essentially, we’re going to convert inverse sine to inverse hyperbolic sine and then use the logarithm formula for the latter.

First, recall the representation of \sin x in terms of complex exponentials, as well as the definition of hyperbolic sine:

\displaystyle \sin x = \frac{e^{ix} - e^{-ix}}{2i}, \\ \sinh x = \frac{e^x - e^{-x}}{2}.

Substituting ix for x in the representation of \sin x shows that \sin (ix) = i \sinh x.  Now, letting y = \sinh x, we have x = \sinh^{-1} y.  Also, \sin (ix) = i y.  Thus ix = \arcsin (iy), and i \sinh^{-1} y = \arcsin (iy).  In other words, \arcsin (ix) = i \sinh^{-1} x.
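This identity is easy to sanity-check numerically with Python’s standard library (a sketch; `cmath.asin` and `math.asinh` use the principal branches):

```python
import cmath
import math

# Check arcsin(ix) = i*asinh(x) and the log formula for asinh
# at a few real values of x.
for x in (0.5, 1.0, 2.0, 3.7):
    assert abs(cmath.asin(1j * x) - 1j * math.asinh(x)) < 1e-12
    assert abs(math.asinh(x) - math.log(x + math.sqrt(x * x + 1))) < 1e-12
```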

With the representation

\displaystyle \sinh^{-1} x = \ln (x + \sqrt{x^2+1}),

we can finally obtain a simple expression for the alternating sum of the reciprocals of the central binomial coefficients:

\displaystyle \sum_{n=0}^{\infty} \frac{(-1)^n}{\binom{2n}{n}} =  \frac{1}{1-\frac{-1}{4}} + \frac{4 (\sqrt{-1})\arcsin \left(\frac{\sqrt{-1}}{2}\right)}{(4-(-1))^{3/2}} =  \frac{4}{5} + \frac{4 i \arcsin \left(\frac{i}{2}\right)}{5^{3/2}}

\displaystyle =  \frac{4}{5} + \frac{4 i (i) \sinh^{-1} \left(\frac{1}{2}\right)}{5 \sqrt{5}} =  \frac{4}{5} - \frac{4 \ln (\frac{1}{2} + \sqrt{1/4+1})}{5 \sqrt{5}} =\frac{4}{5} - \frac{4 \sqrt{5}}{25} \ln \left(\frac{1 + \sqrt{5}}{2} \right).
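As a numerical sanity check, here’s a Python sketch comparing a partial sum of the series against the closed form (`math.comb` requires Python 3.8+):

```python
import math
from math import comb

# Terms shrink roughly like 4^{-n}, so 60 terms is more than enough.
partial = sum((-1) ** n / comb(2 * n, n) for n in range(60))
closed = 4 / 5 - (4 * math.sqrt(5) / 25) * math.log((1 + math.sqrt(5)) / 2)
assert abs(partial - closed) < 1e-12
```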

Posted in binomial coefficients, sequences and series | Leave a comment

Generating Function for the Reciprocals of the Central Binomial Coefficients

In this post we generalize the result from the last post to find the generating function for the reciprocals of the central binomial coefficients.  As we did with that one, we start with the beta integral expression for 1/\binom{2n}{n}:

\displaystyle \frac{1}{\binom{2n}{n}} = (2n+1) \int_0^1 y^n (1-y)^n \, dy = \int_0^1 (2n+1) (y(1-y))^n \, dy.

Now, multiply by \frac{x^n}{2n+1} (for now, it’s easier not to deal with the extra factor of 2n+1 on the right) and sum up:

\displaystyle \sum_{n=0}^{\infty} \frac{x^n}{\binom{2n}{n} (2n+1)} = \sum_{n=0}^{\infty} \int_0^1 (xy(1-y))^n \, dy = \int_0^1 \left( \sum_{n=0}^{\infty}  (xy(1-y))^n \right) \, dy = \int_0^1 \frac{1}{1-xy(1-y)} \, dy,

where we use the geometric series formula in the last step.  Now, apply the substitution t = \sqrt{x} (2y-1).  It’s not obvious at this point that this will be helpful, but it will.  The denominator of the integral becomes

\displaystyle 1-xy(1-y) = 1-x \left(\frac{t + \sqrt{x}}{2\sqrt{x}}\right) \left( \frac{\sqrt{x}-t}{2 \sqrt{x}}\right) = 1- \frac{x-t^2}{4} = \frac{4 - x+ t^2}{4}.

This gives us

\displaystyle \int_0^1 \frac{1}{1-xy(1-y)} \, dy = \int_{-\sqrt{x}}^{\sqrt{x}} \frac{4}{2 \sqrt{x} (t^2 + 4-x)} \, dt = \frac{2}{\sqrt{x}} \int_{-\sqrt{x}}^{\sqrt{x}} \frac{1}{t^2 + 4-x} \, dt \\  = \left.\frac{2}{\sqrt{x (4-x)}} \arctan \left( \frac{t}{\sqrt{4-x}} \right) \right|_{-\sqrt{x}}^{\sqrt{x}} \\  = \frac{2}{\sqrt{4x-x^2}} \left( \arctan \left( \frac{\sqrt{x}}{\sqrt{4-x}} \right) - \arctan \left( \frac{-\sqrt{x}}{\sqrt{4-x}} \right) \right)\\  = \frac{4}{\sqrt{4x-x^2}} \arctan \left( \frac{\sqrt{x}}{\sqrt{4-x}} \right) \\  = \frac{4}{\sqrt{4x-x^2}} \arcsin \left( \frac{\sqrt{x}}{2} \right),

where in the second-to-last step we use the fact that arctangent is an odd function.

In sum, we now have

\displaystyle \sum_{n=0}^{\infty} \frac{x^n}{\binom{2n}{n} (2n+1)} = \frac{4}{\sqrt{4x-x^2}} \arcsin \left( \frac{\sqrt{x}}{2} \right).
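A quick numerical spot-check of this intermediate result at x = 1, as a Python sketch: the right side becomes (4/\sqrt{3}) \arcsin(1/2) = 2\pi/(3\sqrt{3}).

```python
import math
from math import comb

x = 1.0
partial = sum(x ** n / (comb(2 * n, n) * (2 * n + 1)) for n in range(60))
closed = 4 / math.sqrt(4 * x - x * x) * math.asin(math.sqrt(x) / 2)
assert abs(partial - closed) < 1e-12
```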

We have some more work to do to get the left side where we want it.  Replace x with x^2 to get

\displaystyle \sum_{n=0}^{\infty} \frac{x^{2n}}{\binom{2n}{n} (2n+1)} = \frac{4}{\sqrt{4x^2-x^4}} \arcsin \left( \frac{x}{2} \right) =  \frac{4}{x\sqrt{4-x^2}} \arcsin \left( \frac{x}{2} \right) .

(Technically, we get |x| in place of x on the right, but since arcsine is odd the expression simplifies to what we have here.)  Now, multiply both sides by x and differentiate.  After some simplification, we get the following:

\displaystyle \sum_{n=0}^{\infty} \frac{x^{2n}}{\binom{2n}{n}} =  \frac{1}{1-\left(\frac{x}{2}\right)^2} + \frac{4x \arcsin \left(\frac{x}{2}\right)}{(4-x^2)^{3/2}}.

To finish, we replace x with \sqrt{x}:

\displaystyle \sum_{n=0}^{\infty} \frac{x^n}{\binom{2n}{n}} =  \frac{1}{1-\frac{x}{4}} + \frac{4 \sqrt{x}\arcsin \left(\frac{\sqrt{x}}{2}\right)}{(4-x)^{3/2}}.

(To verify with the result from the previous post, we need to investigate this result for convergence.  Convergence matters only with the geometric series evaluation with common ratio xy(1-y).  Since y must be between 0 and 1, the max value of y(1-y) is 1/4, and so the values of x for which the series converges are -4 < x < 4.  Thus we can safely substitute 1 for x in the formula we just derived, obtaining \frac{4}{3} + \frac{2 \pi \sqrt{3}}{27}, as we found in the last post.)
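The full generating function can also be checked numerically at several points of (0, 4), as in this Python sketch:

```python
import math
from math import comb

def gf(x):
    # Closed form of the generating function, valid here for 0 < x < 4.
    return (1 / (1 - x / 4)
            + 4 * math.sqrt(x) * math.asin(math.sqrt(x) / 2)
              / (4 - x) ** 1.5)

for x in (0.5, 1.0, 2.0, 3.0):
    partial = sum(x ** n / comb(2 * n, n) for n in range(200))
    assert abs(partial - gf(x)) < 1e-9
```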


Posted in binomial coefficients, generating functions, sequences and series | 1 Comment

Sum of the Reciprocals of the Central Binomial Coefficients

In this post we prove the formula for the sum of the reciprocals of the central binomial coefficients \binom{2n}{n}:

\displaystyle \sum_{n=0}^{\infty} \frac{1}{\binom{2n}{n}} = \frac{4}{3}  + \frac{2 \pi \sqrt{3}}{27}.

(Of course, the sum of the central binomial coefficients themselves does not converge.)

We start with the beta integral

\displaystyle \frac{1}{\binom{n}{k}} = (n+1) \int_0^1 x^k (1-x)^{n-k} \, dx.

Replacing n with 2n and k with n, this becomes

\displaystyle \frac{1}{\binom{2n}{n}} = (2n+1) \int_0^1 x^n (1-x)^n \, dx = \int_0^1 (2n+1) (x(1-x))^n \, dx.

Now, let y = \sqrt{x(1-x)} and sum both sides from 0 to \infty, moving the summation inside the integral.  (We’re not performing a u-substitution here; the bounds and the differential remain in terms of x.  The substitution is just to make the upcoming infinite series calculation easier to see.)  We have

\displaystyle \sum_{n=0}^{\infty} \frac{1}{\binom{2n}{n}} = \int_0^1 \sum_{n=0}^{\infty} (2n+1) y^{2n} \, dx = \int_0^1 \sum_{n=0}^{\infty}  \frac{d}{dy} (y^{2n+1}) \, dx = \int_0^1 \frac{d}{dy} \sum_{n=0}^{\infty}   y^{2n+1} \, dx

\displaystyle = \int_0^1  \frac{d}{dy} \left( y \sum_{n=0}^{\infty}  (y^2)^n \right) \, dx = \int_0^1 \frac{d}{dy} \left( \frac{y}{1-y^2} \right) \, dx = \int_0^1 \frac{1+y^2}{(1-y^2)^2} dx

\displaystyle = \int_0^1 \frac{1+x-x^2}{(1-x+x^2)^2} dx.

(The infinite series \sum_{n=0}^{\infty} (y^2)^n converges because x must be between 0 and 1, which means the maximum value of y = \sqrt{x(1-x)} is 1/2.)

At this point we have the integral of a rational function.  Partial fractions decomposition (or just rewriting the numerator as 2 - (1-x+x^2)) gets us to

\displaystyle \int_0^1 \left(\frac{-1}{1-x+x^2} + \frac{2}{(1-x+x^2)^2}\right)  dx.

Using trigonometric substitution, this evaluates to

\displaystyle \left. \frac{2\sqrt{3}}{9} \arctan \left( \frac{2x-1}{\sqrt{3}} \right) + \frac{2(2x-1)}{3(1-x+x^2)} \right|_0^1

\displaystyle = \frac{2\sqrt{3}}{9} \arctan \frac{1}{\sqrt{3}} + \frac{2}{3} - \frac{2 \sqrt{3}}{9} \arctan \left(\frac{-1}{\sqrt{3}}\right) - \frac{-2}{3}

\displaystyle = \frac{4}{3} + \frac{2\sqrt{3}}{9} \left( \frac{\pi}{6} - \frac{-\pi}{6} \right) = \frac{4}{3} + \frac{2\pi \sqrt{3}}{27}.
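A partial-sum check of the result in Python (a sketch; `math.comb` requires Python 3.8+):

```python
import math
from math import comb

# Terms shrink roughly like 4^{-n}, so 60 terms is more than enough.
partial = sum(1 / comb(2 * n, n) for n in range(60))
closed = 4 / 3 + 2 * math.pi * math.sqrt(3) / 27
assert abs(partial - closed) < 1e-12
```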

Posted in binomial coefficients, sequences and series | 1 Comment

A Probabilistic Proof of a Binomial Coefficient Identity, Generalized

In a post from a couple of years ago I gave a probabilistic proof of the binomial coefficient identity

\displaystyle \sum_{k=0}^n \binom{n}{k} \frac{(-1)^k}{k+1} = \frac{1}{n+1}.

In this post I modify and generalize this proof to establish the identity

\displaystyle (1) \:\:\:\:\:\: \sum_{k=0}^n \binom{n}{k} \frac{(-1)^k}{m+k} = \frac{(m-1)! \, n!}{(m+n)!}.

As in the original proof, we use a balls-and-jars interpretation.  Suppose we have a jar that contains a black ball, n numbered blue balls, and m-1 numbered red balls.  Suppose we choose the balls one-by-one from the jar.  Let A_i be the event that the black ball is chosen after blue ball i.  Let B be the event that the black ball is chosen before all the red balls.  We argue that Identity (1) gives in two different ways the probability of B \cap \bigcap_{i=1}^n A_i, the event that all the blue balls are chosen, then the black ball, and then all the red balls.

The right side of Identity (1) is easier to explain.  There are (m+n)! equally likely ways to arrange all m+n balls.  There are n! ways to order the blue balls, one way to place the black ball after them, and (m-1)! ways to order the red balls.  This gives \displaystyle \frac{(m-1)! \, n!}{(m+n)!} as the probability that all the blue balls are selected, then the black ball, and then all of the red balls.

For the left side of Identity (1) we need the principle of inclusion/exclusion.  A slight generalization of one version of it says that

\displaystyle P \left(B \cap \bigcap_{i=1}^n A_i \right) = P(B) - \sum_{i=1}^n P(B \cap \bar{A}_i) + \sum_{1 \leq i < j \leq n} P(B \cap \bar{A}_i \cap \bar{A}_j) \\ - \sum_{1 \leq i < j < k \leq n} P(B \cap \bar{A}_i \cap \bar{A}_j \cap \bar{A}_k) + \cdots + (-1)^n P(B \cap \bar{A}_1 \cap \cdots \cap \bar{A}_n).

The event \bar{A}_i is the event that the black ball is chosen before blue ball i.  Thus, the intersection of any k of these \bar{A}_i‘s with B is the event that the black ball is chosen first out of m+k specific balls: the black one, the m-1 red ones, and k of the blue ones.  This has probability 1/(m+k).  There are \binom{n}{k} ways to select k of the \bar{A}_i‘s.  By this generalized inclusion/exclusion principle, then, the probability that the black ball is chosen after the blue balls but before any of the red balls is also given by

\displaystyle \sum_{k=0}^n \binom{n}{k} \frac{(-1)^k}{m+k}.
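Identity (1) can be verified exactly for small m and n with rational arithmetic; here’s a Python sketch:

```python
from fractions import Fraction
from math import comb, factorial

# Exact check of Identity (1): both sides as rationals, small m and n.
for m in range(1, 8):
    for n in range(0, 8):
        lhs = sum(Fraction((-1) ** k * comb(n, k), m + k)
                  for k in range(n + 1))
        rhs = Fraction(factorial(m - 1) * factorial(n), factorial(m + n))
        assert lhs == rhs
```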

This argument is taken from my paper [1], where I give a probabilistic proof of the binomial inverse of Identity (1), as well as proofs of generalizations of both (1) and its binomial inverse.


References

  1.  Michael Z. Spivey, “Probabilistic proofs of a binomial identity, its inverse, and generalizations.” American Mathematical Monthly, 123 (2): 175-180, 2016.
Posted in binomial coefficients, probability | Leave a comment