In [1]:
# Activate only if you're working from the git version, inside the repository (for developers)
import sys
sys.path.insert(0, './src')
%load_ext autoreload
%autoreload 2

import logging
logging.getLogger().setLevel('INFO')
In [2]:
# Otherwise, install the released package from PyPI:
# ! pip install keras-video-generators

Keras Video Generators

This package provides generators that yield video frames from a video dataset. It offers VideoFrameGenerator, SlidingFrameGenerator and OpticalFlowGenerator. The first one is the parent class of the other two.

This Notebook uses a subset of "HMDB: a large human motion database": https://serre-lab.clps.brown.edu/resource/hmdb-a-large-human-motion-database/

Basics with VideoFrameGenerator

The first class is the simplest and probably the best to use if your dataset is large enough.

In [3]:
import keras_video
Using TensorFlow backend.

Here, we will create a generator for the classes above. As you can see, the {classname} pattern in the glob is mapped to a directory name.

Since v1.0.10, you can omit the classes argument and let the generator discover all classes matching the glob_pattern.
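The discovery step can be pictured with a small sketch — this is an illustrative re-implementation using the standard library, not the library's own code: each class name is the directory component that replaces {classname} in the pattern.

```python
import glob
import os

def discover_classes(glob_pattern):
    """Illustrative sketch (not keras_video's code) of recovering class
    names from a '{classname}' glob pattern by listing directories."""
    files = glob.glob(glob_pattern.format(classname='*'))
    # The class name is the directory that stood in for {classname}
    return sorted({os.path.basename(os.path.dirname(f)) for f in files})
```

With the layout used in this notebook, discover_classes('./_test/{classname}/*') would return something like ['dribble', 'golf'].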

In [4]:
gen = keras_video.VideoFrameGenerator(batch_size=4, nb_frames=5, glob_pattern='./_test/{classname}/*', split_val=.2)
class dribble, validation count: 29, train count: 116
class golf, validation count: 21, train count: 84
Total data: 2 classes for 200 files for train
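The counts printed above can be sanity-checked by hand: with split_val=.2, roughly 20% of each class goes to validation and the rest to training. A quick arithmetic sketch (the exact rounding the library applies may differ, but it matches these counts):

```python
def split_counts(total, split_val=0.2):
    """Approximate the validation/train split: a fraction split_val
    of each class's files goes to the validation set."""
    val = int(total * split_val)
    return val, total - val

print(split_counts(145))  # dribble: (29, 116)
print(split_counts(105))  # golf: (21, 84)
```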

You can use the transformation parameter to pass a keras ImageDataGenerator object that will randomly transform frames at each epoch.
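The important property is temporal coherence: one random transformation should be drawn per clip and applied identically to every frame, so motion across frames stays consistent. A minimal numpy sketch of that principle (not the library's implementation — here the only transform is a horizontal flip):

```python
import numpy as np

def augment_clip(clip, rng=None):
    """Apply one randomly drawn transform (here: horizontal flip) to
    every frame of a clip, keeping the augmentation temporally coherent.
    clip: array of shape (nb_frames, height, width, channels)."""
    if rng is None:
        rng = np.random.default_rng()
    if rng.random() < 0.5:
        # Flip the width axis of ALL frames the same way
        return clip[:, :, ::-1, :]
    return clip
```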

To check your generator, you can use keras_video.utils, which provides a show_sample() function to display one batch (in a notebook).

In [5]:
from keras_video import utils as ku
In [6]:
ku.show_sample(gen, random=True)
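Beyond visual inspection, it helps to know the layout of each yielded batch: a (frames, labels) pair, where frames is shaped (batch_size, nb_frames, height, width, channels) and labels is one-hot over the detected classes. A dummy numpy batch with that layout (the 224x224 target size is an assumption — it is the generator's default unless you set target_shape):

```python
import numpy as np

# Dummy batch mimicking the generator's output layout:
# frames: (batch_size, nb_frames, height, width, channels)
batch_size, nb_frames, h, w, c = 4, 5, 224, 224, 3
frames = np.zeros((batch_size, nb_frames, h, w, c), dtype=np.float32)
# labels: one-hot vectors over the 2 detected classes
labels = np.eye(2, dtype=np.float32)[[0, 1, 0, 1]]

print(frames.shape)  # (4, 5, 224, 224, 3)
print(labels.shape)  # (4, 2)
```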