A recent post and question on math.SE ask for an intuitive proof of the fact that the identity is the only symmetric 0-1 matrix with all eigenvalues positive. As I mentioned in that recent post, Robert Israel’s argument is quite nice, but I was hoping for a more big-picture-type proof. “Intuitive” is, of course, rather fuzzy, but I found two proofs that were along the lines of what I was hoping for. (I answered my own math.SE question with these proofs.)

Let *A* be a symmetric, 0-1 matrix with all eigenvalues positive.

**Proof 1: Correlation matrices.**

Let *B* be the diagonal matrix with 1’s where *A*’s diagonal has 0’s and 0’s where *A*’s diagonal has 1’s. Since adding nonnegative values to *A*’s diagonal can’t decrease any of *A*’s eigenvalues (see, for example, Weyl’s inequality), *A+B* has all eigenvalues positive and is thus invertible. Moreover, *A+B* is symmetric with all 1’s on its diagonal and so is a correlation matrix. But the only invertible 0-1 correlation matrix is the identity. Since *A+B* is the identity, *A* must be a diagonal matrix. But the only invertible 0-1 diagonal matrix is the identity. Therefore, *A* is the identity matrix.
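The claim itself is easy to sanity-check by brute force for a small size; here is a minimal sketch for n = 3 that enumerates every symmetric 0-1 matrix and keeps the ones with all eigenvalues positive (the variable names are mine, not from the post):

```python
import itertools
import numpy as np

# Enumerate every symmetric 0-1 3x3 matrix by choosing the six entries
# on and above the diagonal, and collect those with all eigenvalues
# positive.  Only the identity should survive.
n = 3
positive_definite = []
for bits in itertools.product([0, 1], repeat=n * (n + 1) // 2):
    A = np.zeros((n, n))
    k = 0
    for i in range(n):
        for j in range(i, n):
            A[i, j] = A[j, i] = bits[k]
            k += 1
    # Tolerance guards against numerically-zero eigenvalues (~1e-16).
    if np.all(np.linalg.eigvalsh(A) > 1e-9):
        positive_definite.append(A)
```

Running this leaves `positive_definite` containing exactly one matrix: the 3×3 identity.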

**Proof 2: Linear transformations.**

Since *A* is a real symmetric matrix it is orthogonally diagonalizable, which means that it represents a linear transformation with scaling in mutually perpendicular directions. Since the eigenvalues are the scaling factors for the different directions, the scaling factors are all positive. This implies that for **x** ≠ **0**, *A* cannot map **x** to a vector that is perpendicular to **x**. (This follows geometrically, but you can also see this by remembering that positive definite implies something even stronger: *A***x** must make an acute angle with **x**.)
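The acute-angle property can be checked numerically; a minimal sketch with a hypothetical symmetric positive-definite 2×2 matrix (my example, not from the post):

```python
import numpy as np

# For a symmetric positive-definite matrix, A x makes an acute angle
# with x for every nonzero x; equivalently, x . (A x) > 0.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # symmetric, eigenvalues 1 and 3

rng = np.random.default_rng(0)
xs = rng.normal(size=(1000, 2))          # random test vectors
dots = np.einsum('ij,ij->i', xs, xs @ A)  # x . (A x) for each sample
min_dot = dots.min()                      # should be strictly positive
```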

This non-orthogonality restriction between *A***x** and **x** rules out any diagonal element of *A* being 0, as otherwise *a*_{ii} = 0 would imply that **e**_{i} and *A***e**_{i} are orthogonal.

Since *A* has all 1’s along its diagonal, the non-orthogonality restriction now rules out any off-diagonal elements being 1. Suppose otherwise; i.e., that *a*_{ij} = *a*_{ji} = 1 with *a*_{ii} = *a*_{jj} = 1. Then *A* maps both **e**_{i} and **e**_{j} to the vector (1,1) when the range of *A* is restricted to the 2D subspace span{**e**_{i}, **e**_{j}}. But this means that *A* maps **e**_{i} – **e**_{j} to the vector (0,0) when the range of *A* is restricted to span{**e**_{i}, **e**_{j}}. Since **e**_{i} – **e**_{j} is itself in span{**e**_{i}, **e**_{j}}, this implies that **e**_{i} – **e**_{j} and *A*(**e**_{i} – **e**_{j}) are orthogonal, and we have our contradiction with the non-orthogonality restriction.

The only option left is that *A* is the identity matrix.
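The contradiction in Proof 2 is easy to see on a concrete instance; a small sketch with a hypothetical 3×3 matrix having an offending off-diagonal 1 (my example, not from the post):

```python
import numpy as np

# Symmetric 0-1 matrix with 1's on the diagonal and a_01 = a_10 = 1.
A = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 0, 1]])

x = np.array([1, -1, 0])  # e_0 - e_1

# Columns 0 and 1 of A agree, so A maps e_0 - e_1 to the zero vector...
Ax = A @ x

# ...making x orthogonal to A x, which positive definiteness forbids.
dot = x @ Ax
```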

Is there an intuitive way to see that “the only invertible 0-1 correlation matrix is the identity”?

A 0-1 correlation matrix means that any of the underlying random variables are either uncorrelated or perfectly correlated. Distinct but perfectly correlated random variables are scalar multiples of each other. This linear dependence carries over to the correlation matrix, which therefore can’t be invertible. So all of the off-diagonal elements must be 0. Since every variable is perfectly correlated with itself, all of the diagonal elements must be 1. (This is true for every correlation matrix.) Is that intuitive enough? 🙂
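The singularity argument can be illustrated directly; a minimal sketch using a hypothetical 0-1 correlation matrix in which variables 0 and 1 are perfectly correlated (my example, not from the post):

```python
import numpy as np

# Variables 0 and 1 perfectly correlated, variable 2 uncorrelated with
# both.  Rows 0 and 1 are identical, so the matrix is singular.
C = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

rank = np.linalg.matrix_rank(C)  # 2, not 3
det = np.linalg.det(C)           # 0: not invertible
```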

Ok. Though I think it is also essential to add that if $a$ is completely correlated to $b$ and $b$ is completely correlated to $c$, then $a$ is also completely correlated to $c$. Hence, this makes the rows/columns linearly dependent if the variables are completely correlated. Is it right?

Transitivity in the case of perfect correlation is true, but it’s not needed here. If *b*_{IJ} = 1 for some correlation matrix *B* with *I* ≠ *J*, then variables *I* and *J* are perfectly correlated, and the linear dependence between just those two variables is enough to produce a singular correlation matrix.