Why RepNet is so important

Using Deep Learning to Count Repetitions

Photo by Efe Kurnaz on Unsplash

Repeating actions occur frequently in our lives. They range from organic cycles such as heartbeats and breathing, through processes in programming and manufacturing, to planetary cycles like the day-night rotation and the seasons.

Recognising these repetitions, particularly in video, calls for a system that can both identify and count them. Think of exercise: how many repetitions are you actually doing?

The Unsolved Problem

Isolating repeating actions is a difficult task. I know, it seems pretty straightforward when you see someone in front of you jumping up and down, but translating that into a machine learning problem is much harder. How do you teach a computer what a jumping jack looks like from every angle? How do you generalise any inference from video?

Previous work in the space took the approach of analysing videos at a fine-grained level, using a cycle-consistency constraint across different videos of the same action. Reading the paper on that earlier model, you can see that it essentially builds a model that compares frames across a collection of videos:

Temporal Cycle-Consistency Learning: [source]

In the real world, however, you face problems such as camera motion, objects in the field of view that obstruct the action, and changes in the form of the repeating motion; the challenge is to compute features that are invariant to such noise. The existing approach also required a lot of work to densely label data, and it would be far better if an algorithm could learn the repetition from a single video.

That’s where RepNet comes in

RepNet solves the problem of counting repetitions in real-world videos, handling noise that ranges from camera motion and obscured views to drastic scale differences and changes in form.

Unlike earlier approaches that addressed this problem directly by comparing pixel intensities across frames, RepNet works on a single video that contains periodic action and returns the number of repetitions in that video.

RepNet is composed of three components: a frame encoder, a temporal self-similarity matrix that serves as an intermediate representation, and a period predictor.

Its frame encoder generates embeddings by feeding each frame of the video into an encoder based on the ResNet architecture.
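As a rough sketch of what that per-frame encoding could look like, here is a minimal version using an off-the-shelf ResNet-50 from torchvision. This is an illustrative assumption, not RepNet's exact encoder (which also mixes in temporal context):

```python
# Minimal per-frame encoder sketch, assuming a torchvision ResNet-50 backbone.
# RepNet's actual encoder also incorporates temporal context; this is the
# simplest per-frame variant for illustration.
import torch
import torchvision.models as models

resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone = torch.nn.Sequential(*list(resnet.children())[:-1])  # drop the classifier head
backbone.eval()

def encode_frames(frames: torch.Tensor) -> torch.Tensor:
    """frames: (num_frames, 3, H, W) -> embeddings: (num_frames, 2048)."""
    with torch.no_grad():
        feats = backbone(frames)       # (num_frames, 2048, 1, 1) after global pooling
    return feats.flatten(start_dim=1)  # one embedding vector per frame
```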

The temporal self-similarity matrix (TSM) is then calculated by comparing the embedding of each frame with that of every other frame in the video.
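Concretely, the paper builds the TSM from pairwise negative squared distances between embeddings, followed by a row-wise softmax. A minimal sketch (the temperature here is a stand-in hyperparameter, not necessarily the paper's setting):

```python
# Build a temporal self-similarity matrix (TSM) from frame embeddings:
# negative squared L2 distance between every pair of frames, then a
# row-wise softmax so each row forms a distribution over frames.
import torch
import torch.nn.functional as F

def self_similarity_matrix(embeddings: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """embeddings: (num_frames, dim) -> TSM: (num_frames, num_frames)."""
    sq_dists = torch.cdist(embeddings, embeddings) ** 2
    return F.softmax(-sq_dists / temperature, dim=-1)
```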

This yields a matrix that is easy for subsequent modules to analyse when counting repetitions. A transformer then predicts the period directly from the sequence of similarities in the TSM.
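One way to picture that period predictor: treat each TSM row as a per-frame token and run a small transformer over the sequence, with one head predicting the period length per frame and another predicting whether the frame is part of a repetition at all. The layer sizes and head design below are illustrative assumptions, not the paper's exact configuration:

```python
# Hypothetical period predictor sketch: a transformer over TSM rows with
# two per-frame heads (period length as a classification over possible
# lengths, and a binary "periodicity" score). Sizes are illustrative.
import torch
import torch.nn as nn

class PeriodPredictor(nn.Module):
    def __init__(self, num_frames: int = 64, d_model: int = 512, max_period: int = 32):
        super().__init__()
        self.project = nn.Linear(num_frames, d_model)  # embed each TSM row
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=1)
        self.period_head = nn.Linear(d_model, max_period)  # per-frame period length
        self.periodicity_head = nn.Linear(d_model, 1)      # per-frame: repeating or not?

    def forward(self, tsm: torch.Tensor):
        """tsm: (batch, num_frames, num_frames) -> per-frame period and periodicity logits."""
        x = self.transformer(self.project(tsm))
        return self.period_head(x), self.periodicity_head(x)
```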

Once the period is obtained, the per-frame count follows from dividing the number of frames in a periodic segment by the period length.
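Put differently, every periodic frame contributes 1 divided by its predicted period length, and summing those contributions gives the total count. A tiny sketch (function name and inputs are hypothetical):

```python
# Aggregate per-frame period predictions into a repetition count: each
# periodic frame contributes 1 / (its period length). For example, 8 frames
# that each predict a period length of 4 sum to 8 * (1/4) = 2 repetitions.
import torch

def count_repetitions(period_len: torch.Tensor, is_periodic: torch.Tensor) -> float:
    """period_len: (num_frames,) predicted period length per frame (assumed >= 1);
    is_periodic: (num_frames,) bool mask of frames inside a repetition."""
    period_len = period_len.float()
    per_frame = torch.where(is_periodic, 1.0 / period_len, torch.zeros_like(period_len))
    return per_frame.sum().item()
```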

An advantage of this representation is that the model's interpretability is baked into the network architecture: the network is forced to predict the period from the self-similarity matrix alone, rather than inferring it from latent high-dimensional features (such as the frames themselves).

Temporal Self-Similarity: [source]

Note that this learning architecture also lets the model account for speed changes within a repetition, as well as other distortions of a repeating sequence (e.g. a video that rotates while also showing a repeated task). That matters because it shows a model that generalises, and models that generalise can be applied to a much wider array of problems: a great leap forward in ML.

You can use the following resource for more information, including a downloadable pre-trained RepNet: source


Being able to capture repeating tasks matters. Even at the current level of sophistication, models struggle with relatively well-posed questions like "how many push-ups am I doing?". The goal is to build on this foundation to infer the more complicated repeated actions we see around us.

Given that foundation, it's only a matter of time until we can characterise more complicated dynamics in video, and it's exciting to see the steps researchers are making on a daily basis. Google is publishing papers at such a pace that the rate of development, along with the rate of knowledge progress, is insane.

This article highlights one piece of the AI puzzle. More pieces are yet to come!


Thanks for reading! If you have any questions or comments, please let me know!

Keep up to date with my latest articles here!
