If you have ever thought of propagating values across training steps while still using keras' model.fit() functionality, you have come to the right place. In principle, this behaviour requires a layer similar to the in-built LSTM layers, one that can store hidden states across steps. …
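
The core idea, as a minimal sketch of my own (not the article's code), is a custom layer holding a non-trainable tf.Variable, which persists between batches even inside model.fit():

```python
import tensorflow as tf

class CarryOver(tf.keras.layers.Layer):
    """Toy layer that carries a value across training steps."""

    def build(self, input_shape):
        # Non-trainable state: it is not touched by the optimizer,
        # but its value survives from one batch to the next.
        self.state = self.add_weight(
            name="carried_state", shape=(), initializer="zeros",
            trainable=False)

    def call(self, inputs):
        # Illustrative update rule: exponential moving average of the
        # batch mean, written back into the persistent variable.
        self.state.assign(0.9 * self.state + 0.1 * tf.reduce_mean(inputs))
        return inputs
```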


Threshold for sample filtering defined using loss values from the last “m” batches

This is the 3rd article (part 1, part 2) in the series on importance sampling with the tensorflow-keras framework. This implementation calculates the loss threshold from the history of the latest “m” batches. Along with this, it skips over the simple batches…
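
As a rough sketch of that scheme (the names and percentile are my illustrative assumptions, not the article's code), the threshold can be derived from a rolling window of the last “m” batch losses:

```python
from collections import deque

import numpy as np

m = 10                          # number of recent batches to remember
loss_history = deque(maxlen=m)  # rolling window of batch losses

def should_train_on(batch_loss, percentile=50.0):
    """Skip 'simple' batches whose loss falls below the rolling threshold."""
    if len(loss_history) < m:
        loss_history.append(batch_loss)
        return True  # warm-up: always train until the history is full
    threshold = np.percentile(loss_history, percentile)
    loss_history.append(batch_loss)
    return batch_loss >= threshold
```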


Working with global statistics instead of per-batch statistics

In my earlier article (part 1), I discussed an implementation of importance sampling based on per-batch statistics. There, a sample with a loss value in the top nth percentile of its corresponding batch was selected for training.

Now, the shortcoming of the above approach is that it is possible that most batches contain…
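
For concreteness, the per-batch scheme from part 1 can be sketched like this (my own illustration; the keep fraction stands in for the percentile cut):

```python
import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
    reduction=tf.keras.losses.Reduction.NONE)  # per-sample losses

def hardest_fraction(model, x, y, keep=0.3):
    """Return the subset of (x, y) whose loss lies in the batch's top
    `keep` fraction, i.e. above the (1 - keep) quantile of that batch."""
    per_sample = loss_fn(y, model(x, training=False))
    k = tf.maximum(1, tf.cast(
        keep * tf.cast(tf.shape(x)[0], tf.float32), tf.int32))
    _, idx = tf.math.top_k(per_sample, k=k)  # indices of the hardest samples
    return tf.gather(x, idx), tf.gather(y, idx)
```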


Why Mahalanobis Distance?

  1. Mahalanobis distance is an effective measure of the distance between a point and a dataset's distribution. It is a powerful tool for outlier detection, as it employs the covariance matrix to calculate the distance between the dataset's center and a given observation (figure below; a minimal numpy sketch follows this list). Please check the references for more details on this.
[Figure: “Red” is an outlier identified by its distance “d” from the centre of the distribution. Image by Author.]

2…
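
The distance itself is straightforward to compute (my own illustration, assuming the covariance matrix is invertible):

```python
import numpy as np

def mahalanobis(x, data):
    """d = sqrt((x - mu)^T S^{-1} (x - mu)), where mu is the sample mean
    of `data` and S its covariance matrix."""
    mu = data.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(data, rowvar=False))
    diff = x - mu
    return float(np.sqrt(diff @ cov_inv @ diff))

data = np.random.randn(500, 3)                       # 500 observations, 3 features
print(mahalanobis(np.array([4.0, 4.0, 4.0]), data))  # large d => likely outlier
```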


In this article, I cover the implementation of the tf.data.Dataset class on top of keras' ImageDataGenerator for creating a data pipeline for image pairs. A few example tasks with this requirement are

  1. SuperResolution
  2. Deblurring

These tasks inherently require matching input-output pairs to be passed to the model. …
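
One way to build such a pipeline (a sketch under my own assumptions; directory names, image size, and batch size are illustrative) is to synchronize two ImageDataGenerator streams with a shared seed and wrap them in tf.data.Dataset.from_generator:

```python
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

seed = 42  # shared seed keeps augmentation identical across the pair
gen_args = dict(rescale=1.0 / 255, horizontal_flip=True)
in_gen = ImageDataGenerator(**gen_args).flow_from_directory(
    "data/blurred", target_size=(128, 128), class_mode=None,
    batch_size=16, seed=seed)
out_gen = ImageDataGenerator(**gen_args).flow_from_directory(
    "data/sharp", target_size=(128, 128), class_mode=None,
    batch_size=16, seed=seed)

def paired():
    for x, y in zip(in_gen, out_gen):
        yield x, y  # matching input-output image batch

ds = tf.data.Dataset.from_generator(
    paired,
    output_types=(tf.float32, tf.float32),
    output_shapes=((None, 128, 128, 3), (None, 128, 128, 3)))
```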


In this article, I share my implementation of CBAM, as proposed by S. Woo et al. in CBAM: Convolutional Block Attention Module, and my observations with its inclusion.

Environment specifications:

  • keras (v2.4.3)
  • tensorflow (v2.2.0) backend

Below is the code snippet for implementing the CBAM layer in your NN.

Code by…
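
For orientation, here is a minimal sketch of a CBAM block following the paper (my own sketch with assumed hyperparameters, not necessarily the article's snippet): channel attention via a shared MLP over average- and max-pooled features, followed by spatial attention via a 7x7 convolution:

```python
import tensorflow as tf
from tensorflow.keras import layers

class CBAM(layers.Layer):
    def __init__(self, reduction=8, kernel_size=7, **kwargs):
        super().__init__(**kwargs)
        self.reduction = reduction
        self.kernel_size = kernel_size

    def build(self, input_shape):
        ch = int(input_shape[-1])
        # Shared MLP for channel attention.
        self.mlp = tf.keras.Sequential([
            layers.Dense(ch // self.reduction, activation="relu"),
            layers.Dense(ch),
        ])
        # Single-channel conv map for spatial attention.
        self.conv = layers.Conv2D(1, self.kernel_size, padding="same",
                                  activation="sigmoid")

    def call(self, x):
        # Channel attention: squeeze spatial dims, excite channels.
        avg = tf.reduce_mean(x, axis=[1, 2])           # (B, C)
        mx = tf.reduce_max(x, axis=[1, 2])             # (B, C)
        ca = tf.sigmoid(self.mlp(avg) + self.mlp(mx))  # (B, C)
        x = x * ca[:, None, None, :]
        # Spatial attention: pool across channels, convolve, rescale.
        avg_s = tf.reduce_mean(x, axis=-1, keepdims=True)   # (B, H, W, 1)
        max_s = tf.reduce_max(x, axis=-1, keepdims=True)    # (B, H, W, 1)
        sa = self.conv(tf.concat([avg_s, max_s], axis=-1))  # (B, H, W, 1)
        return x * sa
```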

Training step executed only on the most lossy samples

In this article, I share my work on accelerating the training procedure using a very simple scheme for filtering the hardest (highest-loss) samples.

Tools and dataset specifications:

  • keras (v2.4.3)
  • tensorflow (v2.2.0) backend
  • “cifar-10” dataset from the tf.data.Dataset pipeline

TL;DR: Below is an…
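
In the spirit of that scheme, a hedged sketch (mine, not the article's snippet) overrides Model.train_step, customizable since tf 2.2, to apply gradients only on the hardest fraction of each batch:

```python
import tensorflow as tf

class HardSampleModel(tf.keras.Model):
    """Trains only on the highest-loss fraction of every batch."""

    def __init__(self, keep_fraction=0.5, **kwargs):
        super().__init__(**kwargs)
        self.keep_fraction = keep_fraction
        self.sample_loss = tf.keras.losses.SparseCategoricalCrossentropy(
            reduction=tf.keras.losses.Reduction.NONE)

    def train_step(self, data):
        x, y = data
        # Rank samples by loss with a gradient-free forward pass.
        losses = self.sample_loss(y, self(x, training=False))
        k = tf.maximum(1, tf.cast(
            self.keep_fraction * tf.cast(tf.shape(x)[0], tf.float32),
            tf.int32))
        _, idx = tf.math.top_k(losses, k=k)
        x_hard, y_hard = tf.gather(x, idx), tf.gather(y, idx)
        # Standard gradient step, but only on the hardest subset.
        with tf.GradientTape() as tape:
            y_pred = self(x_hard, training=True)
            loss = self.compiled_loss(y_hard, y_pred)
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        self.compiled_metrics.update_state(y_hard, y_pred)
        return {m.name: m.result() for m in self.metrics}
```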


Remove previously saved models with an early-training-stop feature

We have all come to appreciate the flexibility afforded to us by keras' various in-built callbacks, like TensorBoard, LearningRateScheduler, etc., during model training. However, it is not uncommon for them to occasionally leave us wanting. …
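
One way such a gap can be filled (a hypothetical callback of my own, not the article's code) is to save a checkpoint whenever validation loss improves and delete the stale ones on the spot:

```python
import glob
import os

import tensorflow as tf

class KeepBestOnly(tf.keras.callbacks.Callback):
    """Saves the best model so far and removes earlier checkpoints."""

    def __init__(self, pattern="checkpoints/model_*.h5"):
        super().__init__()
        self.pattern = pattern
        self.best = float("inf")
        os.makedirs(os.path.dirname(pattern), exist_ok=True)

    def on_epoch_end(self, epoch, logs=None):
        val_loss = (logs or {}).get("val_loss")
        if val_loss is None or val_loss >= self.best:
            return
        self.best = val_loss
        path = self.pattern.replace("*", f"{epoch:03d}")
        self.model.save(path)
        # Delete every older checkpoint matching the pattern.
        for old in glob.glob(self.pattern):
            if old != path:
                os.remove(old)
```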


In this article, I summarize the tensorflow implementation for 1) creating an imbalanced dataset, and 2) oversampling the under-represented samples using tf.data.Dataset (a short sketch follows the checklist below).

Who is this article aimed at? Do you want to

  1. work with the tf.data.Dataset class,
  2. create an imbalanced dataset,
  3. oversample the under-represented samples in an imbalanced dataset, or
  4. apply…
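
A minimal sketch of the oversampling step (toy tensors stand in for real data; the sizes and mixing ratio are my illustrative assumptions):

```python
import tensorflow as tf

# 900 majority-class samples vs. 100 minority-class samples.
major = tf.data.Dataset.from_tensor_slices(
    tf.random.uniform((900, 32, 32, 3))).map(lambda x: (x, 0))
minor = tf.data.Dataset.from_tensor_slices(
    tf.random.uniform((100, 32, 32, 3))).map(lambda x: (x, 1))

# Repeat the small set so it never runs dry, then draw 50/50 from both.
balanced = tf.data.experimental.sample_from_datasets(
    [major, minor.repeat()], weights=[0.5, 0.5]).batch(32)
```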

TL;DR: The lpips metric might not translate well as a loss function for training an image transformation/processing model. This is despite the fact that it may serve as a good quantitative evaluator, correlating well with human perception of image quality.

Following is the code snippet I utilized for implementing the lpips metric.
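
For reference, computing lpips with the PyPI `lpips` package (a PyTorch-based library; an assumption on my part, not necessarily the article's snippet) looks like:

```python
import torch
import lpips

loss_fn = lpips.LPIPS(net="alex")  # AlexNet-backbone variant

# Inputs are NCHW tensors scaled to [-1, 1].
img0 = torch.rand(1, 3, 64, 64) * 2 - 1
img1 = torch.rand(1, 3, 64, 64) * 2 - 1
distance = loss_fn(img0, img1)     # perceptual distance; ~0 means similar
```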

Anuj Arora

Budding AI Researcher
