#### How a Data Scientist can get the most out of this statistic

People are quite familiar with the colloquial usage of the term ‘correlation’: it describes a phenomenon where ‘things’ move together. If a pair of variables is said to be correlated, then when one variable goes up, it’s quite likely that the other will go up as well.

Pearson’s correlation is the simplest form of the mathematical definition: it uses the covariance between two variables to discern a statistical, and specifically linear, relationship. It takes the dot product between the two centred vectors of data and normalises this summation; the resulting metric is a statistic bounded between -1 and +1. A positive (negative) correlation indicates that the variables move in the same (opposite) direction, with +1 (-1) at the extreme indicating that the variables move in perfect harmony. For reference:

⍴ = σ_xy / (σ_x σ_y), where x and y are the two variables, ⍴ is the correlation statistic, σ_xy is the covariance between them, and σ_x and σ_y are their standard deviations.
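As a quick sanity check, the centred-dot-product description above can be sketched in a few lines of NumPy (the data here is simulated purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = 0.5 * x + rng.normal(size=1000)  # a correlated pair

# Centre both vectors, take their dot product, and normalise by the norms
xc, yc = x - x.mean(), y - y.mean()
rho = xc @ yc / (np.linalg.norm(xc) * np.linalg.norm(yc))

# Agrees with NumPy's built-in estimator
print(np.isclose(rho, np.corrcoef(x, y)[0, 1]))  # True
```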

It’s also interesting to note that the OLS beta and Pearson’s correlation are intrinsically linked. Beta is mathematically defined as the covariance between the two variables over the variance of the first, β = σ_xy / σ_x²: it attempts to uncover the linear relationship between a pair of variables.

In fact, the only difference between the two metrics is a scaling ratio: multiplying beta by (σ_x / σ_y) normalises the boundaries of the coefficient to ±1, giving us the correlation metric.
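A minimal sketch of this relationship, on simulated data (the slope and sample size are arbitrary choices, not from the article):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=1000)
y = 2.0 * x + rng.normal(size=1000)

cov_xy = np.cov(x, y)[0, 1]          # sample covariance (Bessel-corrected)
beta = cov_xy / np.var(x, ddof=1)    # OLS slope of y on x
rho = np.corrcoef(x, y)[0, 1]        # Pearson's correlation

# Rescaling beta by (sigma_x / sigma_y) recovers the correlation exactly
rescaled = beta * x.std(ddof=1) / y.std(ddof=1)
print(np.isclose(rescaled, rho))  # True
```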

Let’s now move on to the sampling distribution of Pearson’s correlation.

### Expectation of Pearson’s Correlation

Now, we know that a sample variance adjusted by Bessel’s correction is an unbiased estimator. Pearson’s correlation is effectively built from three such sample (co)variances; a ratio of unbiased estimators is not itself exactly unbiased, but the bias of the sample correlation is of order 1/n and vanishes quickly as the sample grows, so the metric is approximately unbiased.
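A rough simulation of this (with an assumed true correlation of 0.5 and sample size 50; the numbers are illustrative, not from the article) shows the sample correlation centring very close to the true value:

```python
import numpy as np

rng = np.random.default_rng(2)
true_rho, n, reps = 0.5, 50, 20000
cov = [[1.0, true_rho], [true_rho, 1.0]]

# Draw many bivariate-normal samples of size n and correlate each one
data = rng.multivariate_normal([0.0, 0.0], cov, size=(reps, n))
x, y = data[..., 0], data[..., 1]
xc = x - x.mean(axis=1, keepdims=True)
yc = y - y.mean(axis=1, keepdims=True)
rs = (xc * yc).sum(axis=1) / np.sqrt((xc**2).sum(axis=1) * (yc**2).sum(axis=1))

# The mean sits very close to 0.5; the small shortfall is the O(1/n) bias
print(rs.mean())
```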

### Standard Error of Pearson’s Correlation

A problem with the correlation coefficient arises when we sample from a distribution of highly correlated variables; its sampling behaviour also changes as the number of observations changes.

These two factors, the strength of the true correlation and the sample size, are intrinsic to the calculation of the correlation coefficient and can really complicate matters at the extremes, which is why empirical methods like permutation or bootstrap resampling are used to derive a standard error.

These are both relatively straightforward and can be referenced elsewhere (note here and here). Let’s quickly go through a bootstrap method.

Say we have two series of length 1,000, x and y. We draw n indices at random with replacement, take those samples from x and the corresponding samples from y, and call the results x* and y*. From there, we calculate a Pearson’s correlation to produce a single data point. We re-run this, say, 10,000 times and now have a vector of 10,000 bootstrapped correlations. The standard deviation of these 10,000 values gives us the standard error of our metric.

As an example, here I take two independent random normal variables of length 10,000. I subsample 100 data points with replacement, calculate their correlation, and repeat this 1,000 times. I then calculate the standard deviation of this dataset and empirically derive a standard error of roughly 0.1 (code below):

Now, this empirical derivation is definitely practical when you’re unsure of the underlying distribution of the pair of variables. Given that we’re sampling from bivariate normal and independent data, we can also approximate the standard error using Fisher’s transform: on the transformed scale z = arctanh(⍴), the standard error is approximately 1/√(n − 3),

which in the above case is approximately 0.10. A great reference on Fisher’s transform is here; it deserves an article in itself, so I will not go into detail.
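For the example above (n = 100), the Fisher-transform approximation works out as follows (a sketch, assuming the standard 1/√(n − 3) result on the z-scale):

```python
import numpy as np

n = 100  # subsample size used in the bootstrap example above

# Fisher's z = arctanh(r) is approximately normal with SE 1/sqrt(n - 3);
# near rho = 0 the z-scale and r-scale standard errors nearly coincide
se_z = 1.0 / np.sqrt(n - 3)
print(round(se_z, 3))  # 0.102
```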

Moreover, if we want to retain the functional form of the correlation coefficient itself, we can also derive an analytical standard error of the correlation coefficient as:

√((1 − ⍴²)/n); with ⍴ = 0 this reduces to √(1/n) = 1/√n = 0.1 for n = 100. Again, I will not go into detail here as the derivation is lengthy, but another great reference is here.

The important thing to note here is that these three results converge because the series we’re testing have an underlying normal distribution. If this weren’t the case, the standard error derived analytically under normality and the standard error approximated via Fisher’s transform would begin to diverge from the empirical one.
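To make the convergence concrete, here is a sketch comparing the three estimates on independent normal data with n = 100, as in the example above (the bootstrap count and random seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_boot = 100, 2000
x, y = rng.normal(size=n), rng.normal(size=n)  # independent, so rho ~ 0

# 1) Bootstrap: resample pairs with replacement, recompute the correlation
idx = rng.integers(0, n, size=(n_boot, n))
xb, yb = x[idx], y[idx]
xc = xb - xb.mean(axis=1, keepdims=True)
yc = yb - yb.mean(axis=1, keepdims=True)
rs = (xc * yc).sum(axis=1) / np.sqrt((xc**2).sum(axis=1) * (yc**2).sum(axis=1))
se_boot = rs.std()

# 2) Fisher's transform approximation
se_fisher = 1.0 / np.sqrt(n - 3)

# 3) Analytic approximation sqrt((1 - rho^2) / n) at rho = 0
se_analytic = 1.0 / np.sqrt(n)

print(se_boot, se_fisher, se_analytic)  # all in the neighbourhood of 0.1
```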

Note: I implore the reader to focus on empirical methods when estimating standard errors for correlations. Distributions can shift under the surface, so it’s much more reliable to use empirical methods like permutation or bootstrapping.

The correlation coefficient is an incredibly powerful tool for discerning relationships between variables, although, as discussed, its distributional properties are quite sensitive to the underlying data. If you can write a function that samples quickly, the empirical methods will set you in the right direction.

Thanks again! Please message me if you need anything 🙂

### Code

```python
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Two independent standard-normal variables of length 10,000
all_data = pd.DataFrame(np.random.randn(10000, 2))

# Bootstrap: subsample n points with replacement, correlate, repeat
n = 100
nsamples = 1000
bootstraps = pd.DataFrame(
    [all_data.sample(n, replace=True).corr()[0][1] for m in range(nsamples)]
)

sns.histplot(bootstraps[0], kde=True)
plt.title('Standard Error is=[{:.3f}]'.format(bootstraps[0].std()))
plt.grid()
```