[Note: *This is an excerpt from lectures delivered as part of our FEM Program.*]

Matrices are arrays of numbers - arrangements of rows and columns - and these numbers can be real or complex. For example, in a 3 x 3 symmetric correlation matrix, each element is the pairwise correlation between two variables. Say we have three financial assets, A, B and C, arranged in that order along both the rows and the columns; their correlation matrix, M, is given by:

In the above matrix, all elements are real numbers and each element represents the correlation between two variables. For example, 0.65 is the correlation between A and B, and 0.35 is the correlation between B and C. The matrix is symmetric because each element above the diagonal equals the corresponding element below it (which is how a correlation matrix should be). Now say a smart but lazy quant in a bank wants to estimate the correlation matrix of these three financial assets. Rather than run a historical analysis on the time series of the prices of A, B and C, or back out the correlations from the options market, he simply generates a set of uniform random numbers between 0 and 1 (using an Excel spreadsheet or some other random number generator) and fills the off-diagonal elements of the matrix M with them (of course, he is assuming that all correlations will be either zero or positive). So he gets a correlation matrix, M, as:
    M = | 1.000   0.425   0.459 |
        | 0.425   1.000   0.035 |
        | 0.459   0.035   1.000 |

In the above matrix, 0.425, 0.459 and 0.035 are uniform random numbers between 0 and 1, truncated to the first three digits. The off-diagonal elements of the matrix are thus all random numbers.
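The lazy quant's construction can be sketched in Python with NumPy (a minimal illustration; the seed and the 3 x 3 size are arbitrary choices). Note the caveat in the comments: a matrix built this way is symmetric with a unit diagonal, but nothing guarantees it is positive semi-definite, which a genuine correlation matrix must be.

```python
import numpy as np

rng = np.random.default_rng(42)  # seeded for reproducibility

n = 3  # three assets: A, B, C
M = np.eye(n)  # 1s on the diagonal, as in any correlation matrix

# Fill the upper triangle with uniform random numbers in (0, 1),
# then mirror them below the diagonal to keep the matrix symmetric.
upper = np.triu_indices(n, k=1)
M[upper] = rng.uniform(0.0, 1.0, size=len(upper[0]))
M = np.triu(M) + np.triu(M, k=1).T

print(np.round(M, 3))

# Caveat: a genuine correlation matrix must be positive semi-definite
# (all eigenvalues >= 0); a randomly filled one may fail this test.
eigenvalues = np.linalg.eigvalsh(M)
print(eigenvalues.min())
```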

This is an example of a Random matrix.

Generally speaking, a random matrix is a matrix whose elements are random numbers drawn from some probability distribution. In many applications, including the ones discussed here, the random matrix of interest is also symmetric, so that only the elements on and above the diagonal are drawn at random and the remaining elements follow by symmetry.

A correlation matrix or a covariance matrix can therefore be a random symmetric matrix. In general, an N x N random matrix can be written as A = (a_ij), where each element a_ij is a random number.

The elements of such a matrix could be drawn from a uniform distribution, i.e. a_ij ~ U(0,1), or from a Gaussian (Normal) distribution, i.e. a_ij ~ N(0,1); indeed, they could be drawn from any other probability distribution.

If the elements of such a symmetric random matrix are all random numbers drawn from a Gaussian (Normal) distribution, N(0,1), then it is a special kind of matrix known as a Wigner matrix.

Even though random matrices first appeared in mathematical statistics in the late 1920s, it was the nuclear physicist Eugene Wigner who, in the early 1950s, made the connection between them and the energy levels of atomic nuclei, and thereby unleashed a revolution in applied mathematics and physics. Wigner conjectured that the excitation energies of heavy nuclei behave like the eigenvalues of a matrix whose elements are random numbers drawn from a Gaussian distribution with mean zero and standard deviation 1.

Ever since then, random matrices and Random Matrix Theory have been applied in various disciplines, including electrical engineering, quantum mechanics, sociology, econometrics and even quantitative finance.

Consider a large square matrix, A, whose elements are random numbers drawn from a certain probability distribution, say the Gaussian distribution. What can then be said about the distribution of the eigenvalues and the eigenvectors of such a matrix? This is the central problem in Random Matrix Theory (RMT), and the answer has far-reaching ramifications, not only in nuclear physics but also in areas as diverse as quantitative finance, mathematics and mechanical engineering.

Let's see how we can construct a random matrix. Take a square N x N matrix, A, whose elements are all drawn from a Gaussian (Normal) distribution with mean 0 and standard deviation 1, i.e. all elements belong to N(0,1). A symmetric matrix, H, is then formed as H = (A + A^T)/2, where A^T is the transpose of A. The matrix H is known as a Wigner matrix.

For example, using the Excel™ random number generator, we generate the following random matrix, A:

In the above matrix, every element is an independent draw from N(0,1). The symmetric Wigner matrix, H = (A + A^T)/2, is then given by:

The diagonal elements of H are distributed as i.i.d. N(0,1) and the off-diagonal elements as i.i.d. N(0,1/2), i.e. with variance one half. The eigenvalues of this random matrix, H, display very interesting properties. As N → ∞, the spacing between the eigenvalues of a matrix like H very accurately approximates the spacing between the energy levels of a heavy nucleus. This was Eugene Wigner's insight in the 1950s. Wigner originally explored a real, symmetric random matrix whose diagonal elements are zero and whose off-diagonal elements are ±1, each with probability 1/2.
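This construction can be sketched in Python with NumPy (the size N = 500 and the seed are arbitrary choices made here for illustration). The empirical variances of the diagonal and off-diagonal elements come out near 1 and 1/2, and, by Wigner's semicircle law, the eigenvalues concentrate on the interval [-sqrt(2N), +sqrt(2N)] for this normalization.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500  # matrix size (arbitrary choice for illustration)

# Square matrix of i.i.d. N(0, 1) draws, then symmetrize:
# H = (A + A^T) / 2 is the Wigner matrix described above.
A = rng.standard_normal((N, N))
H = (A + A.T) / 2.0

# Element-wise distributions: diagonal ~ N(0, 1),
# off-diagonal ~ N(0, 1/2) (variance one half).
off_diag = H[~np.eye(N, dtype=bool)]
print(np.var(np.diag(H)), np.var(off_diag))  # ≈ 1.0 and ≈ 0.5

# Wigner's semicircle law: with off-diagonal variance 1/2, the
# eigenvalues fill the interval [-sqrt(2N), +sqrt(2N)].
eigs = np.linalg.eigvalsh(H)
print(eigs.min(), eigs.max())  # ≈ ±sqrt(2N) ≈ ±31.6 for N = 500
```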

How's all this relevant to the study of quantitative finance?

Take an example from portfolio analysis and asset allocation. When an equity analyst in a hedge fund quantitatively analyzes a large portfolio of stocks (or any other asset), she first and foremost estimates the correlation and/or the covariance matrix of the stock returns. Let's say the covariance matrix of stock returns is C, and a random (symmetric) matrix of the type described above (with suitable constraints) is H. If the eigenvalues of C are distributed in a similar fashion to the eigenvalues of H, then the analyst will conclude that the elements of C, the actual, observable covariance matrix, have a considerable degree of randomness. This means that there is a lot of noise in the stock price data that she observes for all the stocks in her portfolio. Random matrix theory can be used to filter out noise from a correlation or a covariance matrix in a portfolio of assets. There are many other important applications of random matrices in quantitative finance.
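One common way to make this comparison concrete, though it is not spelled out above, is the Marchenko-Pastur law, a standard RMT result giving the eigenvalue band that the correlation matrix of pure noise should occupy. The sketch below uses hypothetical sizes (N = 100 stocks, T = 500 return observations) and simulated noise; in practice the analyst would substitute her observed returns and treat eigenvalues above the band's upper edge as signal.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical setup: T return observations for N stocks, pure noise.
N, T = 100, 500
returns = rng.standard_normal((T, N))

# Sample correlation matrix of the observed returns.
C = np.corrcoef(returns, rowvar=False)
eigs = np.linalg.eigvalsh(C)

# Marchenko-Pastur law: for pure noise, the eigenvalues of C fall
# (asymptotically) inside [lam_minus, lam_plus], where q = N / T.
q = N / T
lam_plus = (1 + np.sqrt(q)) ** 2
lam_minus = (1 - np.sqrt(q)) ** 2

# Eigenvalues above lam_plus would be treated as genuine structure;
# for simulated noise essentially everything sits inside the band.
signal = eigs[eigs > lam_plus]
print(round(lam_minus, 3), round(lam_plus, 3), len(signal))
```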

