In this article, I share my implementation of CBAM, as proposed by S. Woo et al. in CBAM: Convolutional Block Attention Module, and my observations from including it.

Environment specifications:

  • keras (v2.4.3)
  • tensorflow (v2.2.0) backend

Below is the code snippet for including a CBAM layer in your neural network.

Code by author. To include it in your model, pass the output of a convolutional layer to cbam_block() above.
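Since the gist itself is not reproduced here, below is a minimal sketch of a CBAM-style block in tf.keras, paraphrasing the paper's channel and spatial attention rather than reproducing the author's code; the reduction ratio and the 7×7 kernel follow the paper's defaults, but everything else is an assumption.

```python
# A sketch of a CBAM-style block (channel attention followed by spatial
# attention), assuming tf.keras. Not the author's exact code.
import tensorflow as tf
from tensorflow.keras import layers


def channel_attention(x, ratio=8):
    channels = int(x.shape[-1])
    # Shared MLP applied to both the global average- and max-pooled descriptors.
    dense_1 = layers.Dense(channels // ratio, activation="relu")
    dense_2 = layers.Dense(channels)

    avg_out = dense_2(dense_1(layers.GlobalAveragePooling2D()(x)))
    max_out = dense_2(dense_1(layers.GlobalMaxPooling2D()(x)))

    scale = layers.Activation("sigmoid")(layers.Add()([avg_out, max_out]))
    scale = layers.Reshape((1, 1, channels))(scale)
    return layers.Multiply()([x, scale])


def spatial_attention(x, kernel_size=7):
    # Channel-wise average and max maps, concatenated and convolved into a mask.
    avg_pool = layers.Lambda(lambda t: tf.reduce_mean(t, axis=-1, keepdims=True))(x)
    max_pool = layers.Lambda(lambda t: tf.reduce_max(t, axis=-1, keepdims=True))(x)
    concat = layers.Concatenate(axis=-1)([avg_pool, max_pool])
    mask = layers.Conv2D(1, kernel_size, padding="same", activation="sigmoid")(concat)
    return layers.Multiply()([x, mask])


def cbam_block(x, ratio=8, kernel_size=7):
    x = channel_attention(x, ratio)
    return spatial_attention(x, kernel_size)
```

Usage would then be along the lines of x = cbam_block(layers.Conv2D(64, 3, padding="same")(inputs)).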

The figure below outlines the overall model architecture employed for the target task; the CBAM layer was included after …

Combined (classifier/autoencoder) model architecture. Dotted lines in the encoder represent skip connections. The classifier model uses separate multi-level classification blocks. Downsample blocks are shown below.

Training step executed only on the highest-loss samples

In this article, I share my work on accelerating the training procedure using a very simple scheme for filtering out the hardest (highest-loss) samples.

Tools and dataset specifications:

  • keras (v2.4.3)
  • tensorflow (v2.2.0) backend
  • “cifar-10” dataset from the tf.data.Dataset pipeline

TL;DR: Below is an implementation of training only on the highest-loss samples.

Code by author. Lines 71–109 implement importance sampling for high-loss samples.
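As a rough illustration (not the author's gist referenced above), here is a sketch of a custom train step that computes per-sample losses and updates the model only on the hardest fraction of each batch; the top-k selection and the keep_ratio value are my own simplification of the filtering rule described under Observations.

```python
# Sketch: per-sample losses on a forward pass, then a gradient update only on
# the hardest samples. `model` and `optimizer` are assumed to exist.
import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
    reduction=tf.keras.losses.Reduction.NONE)


@tf.function
def train_step(model, optimizer, images, labels, keep_ratio=0.5):
    # First pass: per-sample losses, no gradients needed.
    per_sample_loss = loss_fn(labels, model(images, training=False))

    # Keep only the highest-loss samples in the batch.
    k = tf.cast(tf.cast(tf.shape(images)[0], tf.float32) * keep_ratio, tf.int32)
    k = tf.maximum(k, 1)
    _, hard_idx = tf.math.top_k(per_sample_loss, k=k)
    hard_images = tf.gather(images, hard_idx)
    hard_labels = tf.gather(labels, hard_idx)

    # Second pass: gradient update on the hard subset only.
    with tf.GradientTape() as tape:
        preds = model(hard_images, training=True)
        loss = tf.reduce_mean(loss_fn(hard_labels, preds))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```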

Observations
Commenting on the performance, with everything else kept the same and with the above filtering setting (filter only if min. loss < 0.1 × max. loss):

  1. model training sped up by 15%…

Remove previously saved models with early training stop feature

We have all come to appreciate the flexibility afforded by keras’ various built-in callbacks, such as TensorBoard and LearningRateScheduler, during model training. However, it is not uncommon for them to occasionally leave us wanting. In one such instance, I looked for functionality in ModelCheckpoint to delete previously saved model files.

The motivation was that I was working on a shared machine and running multiple training experiments, each spanning hundreds of training epochs. It did not take long for me to fill up gigabytes of storage space. …
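Since ModelCheckpoint does not offer this out of the box (which is the gap the article addresses), one possible sketch of a custom callback that saves the new best model and removes the previous one is shown below; the filename pattern and the monitored metric are assumptions, and this is not the article's actual code.

```python
# Sketch: save a checkpoint when the monitored metric improves and delete the
# previously saved best model file, to avoid accumulating old checkpoints.
import os
import tensorflow as tf


class DeleteOldCheckpoints(tf.keras.callbacks.Callback):
    def __init__(self, filepath_pattern, monitor="val_loss"):
        super().__init__()
        self.filepath_pattern = filepath_pattern  # e.g. "ckpt_{epoch:03d}.h5"
        self.monitor = monitor
        self.best = float("inf")
        self.last_saved = None

    def on_epoch_end(self, epoch, logs=None):
        current = (logs or {}).get(self.monitor)
        if current is None or current >= self.best:
            return
        self.best = current
        new_path = self.filepath_pattern.format(epoch=epoch + 1)
        self.model.save(new_path)
        # Remove the previous best checkpoint now that a better one exists.
        if self.last_saved and os.path.exists(self.last_saved):
            os.remove(self.last_saved)
        self.last_saved = new_path
```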


In this article, I summarize the tensorflow implementation for 1) creating an imbalanced dataset and 2) oversampling under-represented samples using tf.data.Dataset.

Who is this article aimed at? Do you want to

  1. work with the tf.data.Dataset class
  2. create an imbalanced dataset
  3. oversample the under-represented samples in an imbalanced dataset
  4. apply image- and batch-level data augmentations
  5. create a (train, validation) split from a given dataset

Tool and dataset specifications:

  • keras (v2.4.3)
  • tensorflow (v2.2.0) backend
  • “cifar-10” dataset from the tf.data.Dataset pipeline

TL;DR
Below is the bare minimum code snippet that will fulfill these requirements. …
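That snippet is not reproduced here, but a minimal sketch of the same idea, building an artificially imbalanced CIFAR-10 and oversampling the minority class with tf.data.experimental.sample_from_datasets, could look as follows; the chosen class, counts, and sampling weights are illustrative assumptions, not the article's settings.

```python
# Sketch: (1) make CIFAR-10 imbalanced by shrinking one class,
# (2) oversample it back by mixing two repeated streams.
import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()
y_train = y_train.flatten()
ds = tf.data.Dataset.from_tensor_slices((x_train, y_train))

# Split into the minority class (label 0 here) and the rest.
minority = ds.filter(lambda x, y: tf.equal(y, 0))
majority = ds.filter(lambda x, y: tf.not_equal(y, 0))

# Simulate imbalance by keeping only a small slice of the minority class.
minority = minority.take(500)

# Oversample: repeat both streams and mix them with a 50/50 weight.
balanced = tf.data.experimental.sample_from_datasets(
    [majority.repeat(), minority.repeat()], weights=[0.5, 0.5])

# Both streams repeat indefinitely, so pass steps_per_epoch to model.fit().
balanced = (balanced.shuffle(1024)
                    .batch(32)
                    .prefetch(tf.data.experimental.AUTOTUNE))
```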


TL;DR: The LPIPS metric might not translate well into a loss function for training an image transformation/processing model. This is despite the fact that it can serve as a good quantitative evaluator, correlating well with human perception of image quality.

Following is the code snippet I used to implement the LPIPS-based loss function for a de-blurring model.

Except for the convolutional layer after SpatialDropout, all weights were frozen. This corresponds to the lin configuration defined by Zhang et al. in The Unreasonable Effectiveness of Deep Features as a Perceptual Metric.
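The gist itself is omitted above, so as a rough, hedged sketch of the general shape of such a lin-style loss (unit-normalized deep features, a trainable 1×1 conv per level acting as channel weights, spatial averaging, summed over levels): the VGG16 backbone and the chosen layers are assumptions, the SpatialDropout stage is left out for brevity, and the 1×1 conv weights would still need to be trained or loaded.

```python
# Sketch of an LPIPS-style ("lin") perceptual loss on a frozen backbone.
# Inputs are assumed to be already preprocessed for VGG16.
import tensorflow as tf
from tensorflow.keras import layers

FEATURE_LAYERS = ["block1_conv2", "block2_conv2", "block3_conv3",
                  "block4_conv3", "block5_conv3"]

vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet")
vgg.trainable = False
feat_model = tf.keras.Model(
    vgg.input, [vgg.get_layer(n).output for n in FEATURE_LAYERS])

# One trainable 1x1 conv (channel weighting) per feature level.
lin_layers = [layers.Conv2D(1, 1, use_bias=False) for _ in FEATURE_LAYERS]


def lpips_like_loss(y_true, y_pred):
    feats_true = feat_model(y_true)
    feats_pred = feat_model(y_pred)
    total = 0.0
    for f_t, f_p, lin in zip(feats_true, feats_pred, lin_layers):
        # Unit-normalize along channels, weight squared differences, average.
        f_t = tf.math.l2_normalize(f_t, axis=-1)
        f_p = tf.math.l2_normalize(f_p, axis=-1)
        diff = lin(tf.square(f_t - f_p))
        total += tf.reduce_mean(diff, axis=[1, 2, 3])
    return total
```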

Here, I share the key insights through…


Keras has three methods for defining neural network architectures: the Sequential API, the Functional API, and model subclassing. More about this can be read here. This article introduces a method for importing subclassed model weights into a Functional API model.

I have previously written about importing pre-trained tensorflow-1 model weights into keras. Back then, google-research had not yet provided a tensorflow-2 implementation of SimCLR. The tf2 version, however, employs the model subclassing method for training and saving the weights. This implies that the model loaded from the saved model files will not be a keras object, as discussed here,

The object returned…
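The excerpt ends here, but one generic way to move weights between the two definitions (not necessarily the exact method used in the article) is to rebuild the same architecture with the Functional API and copy weights layer by layer, assuming the layer names match between the two models.

```python
# Sketch: transfer weights from a subclassed model to an equivalent
# Functional API model by matching layer names. Both models are assumed
# to define the same layers with identical names.
def copy_weights_by_name(subclassed_model, functional_model):
    source = {layer.name: layer for layer in subclassed_model.layers}
    for layer in functional_model.layers:
        if layer.name in source and source[layer.name].get_weights():
            layer.set_weights(source[layer.name].get_weights())
```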


For the past few weeks, I have been experimenting with various generator-discriminator networks (GANs) and among numerous trials and struggles, I have managed to come up with a few helpful strategies of my own. In this article, I share one such finding.

This one pertains to discriminator definitions, more specifically discriminator definitions for image transformation (super-resolution, deblurring) generator models. Various discriminator models are out there, each with its own strengths and weaknesses, aptly covered by A. Jolicoeur-Martineau in The Relativistic Discriminator: A Key Element Missing from Standard GAN.

In the paper, Alexia advocates for using a relativistic discriminator with…
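For context, here is a sketch of the relativistic average ("Ra") losses from that paper, written with plain tensorflow ops; real_logits and fake_logits stand for the raw critic outputs C(x_r) and C(x_f), and the helper names are my own.

```python
# Sketch of relativistic average GAN (RaGAN) losses.
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)


def ragan_d_loss(real_logits, fake_logits):
    # Real samples should look "more real than the average fake", and vice versa.
    real_rel = real_logits - tf.reduce_mean(fake_logits)
    fake_rel = fake_logits - tf.reduce_mean(real_logits)
    return (bce(tf.ones_like(real_rel), real_rel) +
            bce(tf.zeros_like(fake_rel), fake_rel))


def ragan_g_loss(real_logits, fake_logits):
    # Symmetric objective for the generator, with the roles swapped.
    real_rel = real_logits - tf.reduce_mean(fake_logits)
    fake_rel = fake_logits - tf.reduce_mean(real_logits)
    return (bce(tf.zeros_like(real_rel), real_rel) +
            bce(tf.ones_like(fake_rel), fake_rel))
```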


While using PyTorch’s (v1.4.0) DataLoader with multiple workers (num_workers > 0), I encountered the following error:

Bus error. It is possible that dataloader's workers are out of shared memory. Please try to raise your shared memory limit.

Thus began my hours-long struggle to increase the shared memory size. Now, if one is running a docker container with the docker run command, this issue can be handled by inserting the following command-line argument.

However, for running the job on a kubernetes cluster, one needs to include the relevant flag in the corresponding *.yaml file.


Recently, while integrating my Deep Neural Network (DNN) model into a WebAPI, I faced multiple issues:

  1. my usage of absl flags for command-line arguments somehow clashed with flask’s run command.
  2. when receiving an image file back as output from the server side, I received the following errors:

AttributeError: 'Image' object has no attribute 'read'
AttributeError: 'numpy.ndarray' object has no attribute 'read'

I could have switched from absl-py to some other module (for example, argparse) to set the default values for my command-line arguments (unsure if this would have worked, since I did not test it), but that would…
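Regarding the second issue, a common way to resolve these 'read' errors is to serialize the image into an in-memory buffer before handing it to flask's send_file, since send_file expects a file-like object. Below is a hedged sketch, where the endpoint and the run_model call are hypothetical stand-ins rather than the article's code.

```python
# Sketch: return a processed image from Flask without the
# "object has no attribute 'read'" errors by writing it into a BytesIO buffer.
import io
import numpy as np
from PIL import Image
from flask import Flask, request, send_file

app = Flask(__name__)


@app.route("/predict", methods=["POST"])
def predict():
    img = Image.open(request.files["image"])        # incoming image file
    out_array = run_model(np.asarray(img))          # hypothetical DNN call
    out_img = Image.fromarray(out_array.astype("uint8"))

    buf = io.BytesIO()
    out_img.save(buf, format="PNG")                 # serialize into memory
    buf.seek(0)                                     # send_file needs .read()
    return send_file(buf, mimetype="image/png")
```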


In this article, I present three different methods for training a discriminator-generator (GAN) model using keras (v2.4.3) on a tensorflow (v2.2.0) backend. These methods vary in implementation complexity, speed of execution, and flexibility. I share my observations on each method from these aspects.

Method 1:

Carry out a batch-wise update of the discriminator and generator alternately, inside nested for loops over epochs and training steps. Most references found through an internet search seem to use this method. The example code is designed around a “data transformation model”; make the necessary tweaks depending on the kind of model you require.
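A skeletal sketch of this alternating scheme using train_on_batch is shown below; build_generator, build_discriminator, get_batch, and the loop sizes are hypothetical placeholders rather than the article's actual code.

```python
# Sketch of Method 1: alternate discriminator / generator updates with
# train_on_batch inside nested epoch / step loops.
import numpy as np
import tensorflow as tf

generator = build_generator()          # hypothetical: degraded -> restored image
discriminator = build_discriminator()  # hypothetical: image -> real/fake score
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Combined model for the generator update, with the discriminator frozen inside it.
discriminator.trainable = False
gan = tf.keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

epochs, steps_per_epoch = 100, 500     # illustrative values

for epoch in range(epochs):
    for step in range(steps_per_epoch):
        degraded, clean = get_batch(step)          # hypothetical data source
        fake = generator.predict(degraded)

        # 1) Discriminator step on real and generated images.
        d_loss_real = discriminator.train_on_batch(clean, np.ones((len(clean), 1)))
        d_loss_fake = discriminator.train_on_batch(fake, np.zeros((len(fake), 1)))

        # 2) Generator step through the combined model (labels flipped to "real").
        g_loss = gan.train_on_batch(degraded, np.ones((len(degraded), 1)))
```

Note the ordering: the discriminator is compiled before being frozen, so its own train_on_batch calls still update it, while the combined gan model (compiled after freezing) only updates the generator.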

Anuj Arora

Budding AI Researcher
