Why RepNet Is So Important

Using Deep Learning to Count Repetitions. Photo by Efe Kurnaz on Unsplash. In our daily lives, repeating actions occur frequently. This ranges from organic cycles such as heartbeats and breathing, through programming and manufacturing, to planetary cycles like day-night rotation and the seasons. The need to recognise these repetitions, like those in videos, is unavoidable and requires a system that... Continue Reading →

How to Derive an OLS Estimator in 3 Easy Steps

Mohammad Hasan on Pixabay. A Data Scientist's Must-Know. OLS estimation was originally derived in 1795 by Gauss. Seventeen at the time, the genius mathematician was attempting to define the dynamics of planetary orbits and comets alike, and in the process derived much of modern-day statistics. Now, the methodology I show below is a hell of a... Continue Reading →
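The closed-form result of that derivation can be sketched in a few lines. This is a minimal illustration with made-up sample data, not the article's own walkthrough: minimise the sum of squared residuals, set the two partial derivatives to zero, and solve the resulting normal equations.

```python
import numpy as np

# Hypothetical sample data: y is roughly linear in x.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.2, 5.9, 8.1, 9.8])

# Step 1: write the sum of squared residuals S(b0, b1) = sum((y - b0 - b1*x)^2).
# Step 2: differentiate w.r.t. b0 and b1 and set both derivatives to zero.
# Step 3: solve the normal equations; the familiar closed form is:
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()

print(b1, b0)  # slope and intercept of the least-squares line
```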

The Power-Law Distribution

Pareto’s Power-Law Distribution: Explaining the Laws of Nature (Including the Golden Ratio). The laws of nature are complicated, and throughout time scientists from all corners of the world have attempted to model and re-engineer what they see around them to extract some value from it. Quite often we see a pattern that comes up time and time... Continue Reading →
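A quick way to get a feel for power laws is to sample from a Pareto distribution and recover its tail exponent. The sketch below (parameter values are my own choices, not the article's) draws samples by inverting the CDF and then estimates the exponent with the standard maximum-likelihood formula.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, x_min, n = 2.5, 1.0, 100_000  # assumed tail exponent and scale

# Inverse-CDF sampling: the Pareto survival function is (x_min / x)^alpha.
u = rng.random(n)
x = x_min * u ** (-1.0 / alpha)

# Maximum-likelihood estimate of the tail exponent.
alpha_hat = n / np.sum(np.log(x / x_min))
print(alpha_hat)  # should sit close to the true alpha of 2.5
```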

Robust Statistical Methods

Anomalies hidden in plain sight. Chart from Liu and Neilson (2016). Methods that Data Scientists Should Love. A robust statistic is a type of estimator used when the distribution of the data set is not certain, or when egregious anomalies exist. If we’re confident in the distributional properties of our data set, then traditional statistics like the sample mean... Continue Reading →
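The effect of a single egregious anomaly is easy to demonstrate. In this small sketch (the data are invented for illustration), the sample mean is dragged far from the bulk of the data, while robust alternatives like the median and a trimmed mean barely move.

```python
import numpy as np

# Hypothetical measurements with one egregious anomaly at the end.
data = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 500.0])

mean = data.mean()        # pulled far away from the bulk of the data
median = np.median(data)  # barely affected by the outlier

# A simple trimmed mean: drop the smallest and largest value, then average.
trimmed = np.sort(data)[1:-1].mean()

print(mean, median, trimmed)
```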

The Sampling Distribution of OLS Estimators

OLS regression on sample data. Details, details: it’s all about the details! Ordinary Least Squares (OLS) is usually the first method every student learns as they embark on a journey of statistical euphoria. It’s a method that quite simply finds the line of best fit within a two-dimensional dataset. Now, the assumptions behind the model, along with... Continue Reading →
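The idea of a sampling distribution can be made concrete by simulation: draw many samples from the same data-generating process, fit OLS to each, and look at where the slope estimates land. This sketch uses parameters I have chosen for illustration; the estimates cluster around the true slope, which is exactly the unbiasedness the article's title alludes to.

```python
import numpy as np

rng = np.random.default_rng(42)
true_b0, true_b1, n, reps = 1.0, 2.0, 50, 2000  # assumed true model

slopes = np.empty(reps)
for i in range(reps):
    # One fresh sample from y = b0 + b1*x + noise.
    x = rng.uniform(0, 10, n)
    y = true_b0 + true_b1 * x + rng.normal(0, 1, n)
    # OLS slope for this sample.
    slopes[i] = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)

print(slopes.mean())  # centred on the true slope across repeated samples
```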

Asymptotic Distributions

Euler’s infinity symbol. Infinity (and beyond…). The study of asymptotic distributions looks to understand how the distribution of a phenomenon changes as the number of samples taken into account goes to n → ∞. Say we’re trying to make a binary guess on where the stock market is going to close tomorrow (like a Bernoulli trial): how does the... Continue Reading →
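The Bernoulli example can be simulated directly: standardise the sum of n Bernoulli trials and its distribution approaches the standard normal as n grows. A minimal sketch, with n and the success probability chosen by me for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
p, n, reps = 0.5, 1000, 5000  # assumed trial probability and sample sizes

# Each entry is the sum of n Bernoulli(p) trials.
sums = rng.binomial(n, p, size=reps)

# Standardise: subtract the mean n*p, divide by sqrt(n*p*(1-p)).
z = (sums - n * p) / np.sqrt(n * p * (1 - p))

# Asymptotically, z follows the standard normal: mean ~ 0, std ~ 1.
print(z.mean(), z.std())
```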

The Distribution of the Sample Mean

One-sided test on a distribution shaped like a bell curve. [Image by Jill Mac from Source (CC0)] All Machine Learning Researchers Should Know This. Most machine learning and mathematical problems involve extrapolating from a subset of data to infer properties of a global population. As an example, we may only get 100 replies to a survey of our... Continue Reading →
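The survey example can be sketched numerically: even from a skewed population, the means of repeated samples of 100 cluster around the population mean with spread σ/√n. The population and its parameters below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# A skewed "population": exponential with mean 5 (hypothetical survey metric).
# Draw 2000 independent surveys of 100 replies each and take each survey's mean.
means = rng.exponential(scale=5.0, size=(2000, 100)).mean(axis=1)

# The sample means centre on the population mean (5.0),
# with standard deviation roughly sigma / sqrt(n) = 5 / 10 = 0.5.
print(means.mean(), means.std())
```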

Parts-based learning by Non-Negative Matrix Factorisation

Visualising the principal components of portrait facial images. ‘Eigenfaces’ are the decomposed images in the direction of largest variance. Why We Can’t Relate to Eigenfaces. Traditional methods like Principal Component Analysis (PCA) would decompose a dataset into some form of latent representation, e.g. eigenvectors, which at times can be meaningless when visualised: what actually is my first principal... Continue Reading →
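Unlike PCA, NMF constrains both factors to be non-negative, which is what yields additive, parts-based components. As a sketch (random data standing in for stacked image intensities; not the article's own experiment), here are the classic Lee–Seung multiplicative updates for V ≈ WH:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical non-negative data matrix, e.g. 20 images of 10 pixels each.
V = rng.random((20, 10))
r = 4  # number of "parts" to extract

# Random non-negative initialisation of the two factors.
W = rng.random((20, r))
H = rng.random((r, 10))

# Lee & Seung multiplicative updates: each step multiplies by a non-negative
# ratio, so W and H stay non-negative throughout.
eps = 1e-9  # guards against division by zero
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

err = np.linalg.norm(V - W @ H)  # Frobenius reconstruction error
```

Because every entry of W and H stays non-negative, each column of W can only add to the reconstruction, which is why the learned components look like interpretable parts rather than the signed, hard-to-read eigenfaces PCA produces.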
