Threshold for sample filtering defined using loss values for last “m” batches
This is the 3rd article (part 1, part 2) in the series on importance sampling with the tensorflow-keras framework. This implementation calculates the loss threshold from the history of the latest “m” batches. In addition, it skips the simple batches (those whose representative loss value falls below the threshold). This provides a couple of advantages:
In my earlier article (part 1), I discussed an implementation of importance sampling based on per-batch statistics. There, a sample whose loss value fell in the top nth percentile of its corresponding batch was selected for training.
Now, the shortcoming of the above approach is that most batches may contain only simple samples. Even if we filter such a batch, the filtered samples are still easy for the model. Therefore, a filtering scheme contingent on individual batch statistics is unable to fully exploit the benefits of importance sampling.
With that thought, I wanted to employ dataset-wide statistics to…
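The rolling-threshold idea above can be sketched as follows: keep the representative loss values of the last “m” batches in a bounded buffer and derive a percentile threshold from it. This is an illustrative sketch, not the article's exact code; the class name, the 50th-percentile choice, and the warm-up behaviour are my assumptions.

```python
from collections import deque

import numpy as np


class LossThreshold:
    """Tracks representative loss values of the last `m` batches and
    exposes a percentile-based threshold for skipping simple batches.
    (Sketch: names and percentile default are assumptions.)"""

    def __init__(self, m=50, percentile=50.0):
        self.history = deque(maxlen=m)  # keeps only the latest m values
        self.percentile = percentile

    def update(self, batch_loss):
        self.history.append(float(batch_loss))

    def value(self):
        # until the history fills up, return -inf so no batch is skipped
        if len(self.history) < self.history.maxlen:
            return float("-inf")
        return float(np.percentile(self.history, self.percentile))


# usage: skip the gradient update for any batch below the rolling threshold
threshold = LossThreshold(m=5, percentile=50.0)
for loss in [0.9, 0.8, 0.7, 0.6, 0.5, 0.4]:
    if loss < threshold.value():
        continue  # simple batch: skip training on it
    threshold.update(loss)
```

Because the buffer spans many batches, the threshold reflects recent dataset-wide difficulty rather than the statistics of any single batch.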
2. It is a powerful anomaly detection and localization tool as demonstrated by Defard et al. in PaDiM: a Patch Distribution Modeling Framework for Anomaly Detection and Localization
However, owing to their implementations with 1-D arrays, these methods are very…
In this article, I cover an implementation of the tf.data.Dataset class on top of keras’ ImageDataGenerator for creating a data pipeline for image pairs. A few example tasks with this requirement are
These tasks inherently require matching input-output pairs to be passed to the model. Now, one could write a custom iterator function to pick matching pairs, but that means
Hence, in light of the…
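The core trick can be sketched like this: two ImageDataGenerator streams share the same seed and augmentation parameters so their random transforms stay aligned, and a tf.data.Dataset wraps the zipped pair. This is a minimal sketch, not the article's exact code; the in-memory stand-in arrays, shapes, and augmentation settings are assumptions.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

SEED, BATCH = 42, 4
# Stand-in data: 16 input/target image pairs (replace with your own arrays
# or with flow_from_directory on paired folders).
inputs = np.random.rand(16, 32, 32, 3).astype("float32")
targets = np.random.rand(16, 32, 32, 3).astype("float32")

# Identical seed + identical augmentation parameters keep each pair aligned.
aug = dict(horizontal_flip=True, rotation_range=10)
gen_x = ImageDataGenerator(**aug).flow(inputs, batch_size=BATCH, seed=SEED)
gen_y = ImageDataGenerator(**aug).flow(targets, batch_size=BATCH, seed=SEED)

def paired():
    # keras iterators loop forever, so this generator is also endless
    for x, y in zip(gen_x, gen_y):
        yield x, y

ds = tf.data.Dataset.from_generator(
    paired,
    output_signature=(
        tf.TensorSpec(shape=(None, 32, 32, 3), dtype=tf.float32),
        tf.TensorSpec(shape=(None, 32, 32, 3), dtype=tf.float32),
    ),
).prefetch(tf.data.AUTOTUNE)
```

The resulting `ds` can be passed directly to `model.fit`, with `steps_per_epoch` set explicitly since the generator is infinite.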
In this article, I share my implementation of CBAM as proposed by S. Woo et al. in CBAM: Convolutional Block Attention Module, and my observations with its inclusion.
Below is the code snippet for incorporating a CBAM layer into your neural network.
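As a reference point, here is a minimal sketch of a CBAM layer following the paper: channel attention from a shared MLP over global average- and max-pooled descriptors, followed by spatial attention from a 7x7 convolution over channel-wise pooled maps. This is my own illustrative version, not necessarily the article's exact snippet; the class name and defaults mirror the paper's reduction ratio 16 and kernel size 7.

```python
import tensorflow as tf
from tensorflow.keras import layers


class CBAM(layers.Layer):
    """Convolutional Block Attention Module (Woo et al.), sketched.
    `reduction` and `kernel_size` follow the paper's defaults."""

    def __init__(self, reduction=16, kernel_size=7, **kwargs):
        super().__init__(**kwargs)
        self.reduction = reduction
        self.kernel_size = kernel_size

    def build(self, input_shape):
        channels = int(input_shape[-1])
        # shared MLP for channel attention
        self.mlp = tf.keras.Sequential([
            layers.Dense(max(channels // self.reduction, 1), activation="relu"),
            layers.Dense(channels),
        ])
        # single-filter conv producing the spatial attention map
        self.spatial_conv = layers.Conv2D(
            1, self.kernel_size, padding="same", activation="sigmoid")

    def call(self, x):
        # --- channel attention: avg- and max-pool over H, W ---
        avg = tf.reduce_mean(x, axis=[1, 2])            # (B, C)
        mx = tf.reduce_max(x, axis=[1, 2])              # (B, C)
        ca = tf.sigmoid(self.mlp(avg) + self.mlp(mx))   # (B, C)
        x = x * ca[:, None, None, :]
        # --- spatial attention: pool over the channel axis ---
        avg_s = tf.reduce_mean(x, axis=-1, keepdims=True)  # (B, H, W, 1)
        max_s = tf.reduce_max(x, axis=-1, keepdims=True)
        sa = self.spatial_conv(tf.concat([avg_s, max_s], axis=-1))
        return x * sa
```

The layer preserves the input's shape, so it can be dropped into an existing architecture (e.g. before SpatialDropout2D, as described below) without further changes.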
The figure below outlines the overall model architecture employed for the target task. The CBAM layer was included before the SpatialDropout2D layer of the downsample block in the following neural network architecture.
Training step executed only on the most lossy samples
In this article, I share my work on accelerating the training procedure using a very simple scheme for filtering the hardest (highest-loss) samples.
Tools and dataset specifications:
TL;DR: Below is an implementation of training only on the samples with the maximum loss.
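One way to realize this scheme is a custom train step that computes per-sample losses (reduction disabled), keeps the top-k hardest samples, and backpropagates only through their mean. This is a sketch under my own assumptions (classification loss, Adam, the value of k), not necessarily the article's exact code.

```python
import tensorflow as tf

# per-sample losses: reduction="none" returns one loss value per example
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction="none")
optimizer = tf.keras.optimizers.Adam()


@tf.function
def train_step(model, x, y, k):
    with tf.GradientTape() as tape:
        logits = model(x, training=True)
        per_sample = loss_fn(y, logits)        # shape (batch,)
        topk = tf.math.top_k(per_sample, k=k)  # hardest k samples
        loss = tf.reduce_mean(topk.values)     # gradients flow only to these
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```

Because `top_k` is differentiable with respect to the selected values, the non-selected samples simply contribute zero gradient, so no masking is needed.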
We all have come to appreciate the flexibility afforded to us by keras’ various in-built callbacks like TensorBoard, LearningRateScheduler etc. during model training. However, it is not uncommon for them to occasionally leave us wanting. In one such instance, I looked for functionality in ModelCheckpoint to delete the saved model files.
The motivation was that I was working on a shared machine and running multiple training experiments, each spanning hundreds of training epochs. It did not take long for me to fill up GBs and GBs of storage space. …
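A minimal way to get this behaviour is a small companion callback that prunes old checkpoint files after each epoch, keeping only the newest few. This is an illustrative sketch, not the article's code; the class name, the glob pattern, and the mtime-based ordering are my assumptions.

```python
import glob
import os

import tensorflow as tf


class KeepLastCheckpoints(tf.keras.callbacks.Callback):
    """Deletes all but the newest `keep` checkpoint files after each epoch.
    (Sketch: pair it with a ModelCheckpoint writing to the same pattern.)"""

    def __init__(self, pattern, keep=2):
        super().__init__()
        self.pattern = pattern  # e.g. "ckpts/model-*.h5" (hypothetical path)
        self.keep = keep

    def on_epoch_end(self, epoch, logs=None):
        # oldest first, by modification time
        files = sorted(glob.glob(self.pattern), key=os.path.getmtime)
        for path in files[: -self.keep]:
            os.remove(path)
```

In `model.fit`, it would sit right after the ModelCheckpoint callback in the callbacks list, so pruning runs once the new checkpoint has been written.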
In this article, I summarize the tensorflow implementation for 1) creating an imbalanced dataset, and 2) oversampling the under-represented samples using tf.data.Dataset.
Who is this article aimed at? Do you want to
Tool and dataset specifications:
Below is the bare minimum code snippet that will fulfill these requirements. …
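For orientation, here is a bare-bones sketch of the approach: split the data into per-class datasets, then draw from them with equal weights via `tf.data.Dataset.sample_from_datasets`, which effectively oversamples the minority class. The class sizes and feature shapes below are stand-ins, not the article's dataset.

```python
import tensorflow as tf

# 1) an imbalanced dataset: 900 majority-class vs 100 minority-class samples
majority = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal((900, 4)), tf.zeros(900, tf.int32)))
minority = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal((100, 4)), tf.ones(100, tf.int32)))

# 2) oversample by drawing from each class with equal probability;
#    repeat() lets the small class be revisited indefinitely
balanced = tf.data.Dataset.sample_from_datasets(
    [majority.repeat(), minority.repeat()], weights=[0.5, 0.5]
).batch(32)
```

Each batch of `balanced` now contains roughly equal numbers of both classes, at the cost of the minority samples appearing more often per epoch.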
TL;DR: The LPIPS metric might not translate well into a loss function for training an image transformation/processing model. This is despite the fact that it may serve as a good quantitative evaluator, correlating well with human perception of image quality.
Following is the code snippet I utilized for implementing the LPIPS-based loss function for a de-blurring model.
Except for the convolutional layer after SpatialDropout, all the weights were frozen. This refers to the lin configuration as defined by Zhang et al. in The Unreasonable Effectiveness of Deep Features as a Perceptual Metric.
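To make the lin configuration concrete, here is an LPIPS-style sketch: distances between unit-normalized feature maps of a frozen backbone, weighted by trainable 1x1 "lin" convolutions. Note the assumptions: I use Keras's VGG16 and an arbitrary layer selection, not the official LPIPS network or calibrated weights, so this only mirrors the structure of the loss.

```python
import tensorflow as tf
from tensorflow.keras.applications import VGG16

# hypothetical layer choice; the official LPIPS uses its own taps and weights
FEATURE_LAYERS = ["block1_conv2", "block2_conv2", "block3_conv3"]

vgg = VGG16(include_top=False, weights="imagenet")
backbone = tf.keras.Model(
    vgg.input, [vgg.get_layer(n).output for n in FEATURE_LAYERS])
backbone.trainable = False  # frozen, as in the lin configuration

# trainable 1x1 "lin" convolutions, one per tapped layer
lins = [tf.keras.layers.Conv2D(1, 1, use_bias=False) for _ in FEATURE_LAYERS]


def lpips_like_loss(y_true, y_pred):
    # in practice, apply vgg16.preprocess_input to both images first
    d = 0.0
    for f_t, f_p, lin in zip(backbone(y_true), backbone(y_pred), lins):
        # unit-normalize along the channel axis, as LPIPS does
        f_t = tf.math.l2_normalize(f_t, axis=-1)
        f_p = tf.math.l2_normalize(f_p, axis=-1)
        d += tf.reduce_mean(lin(tf.square(f_t - f_p)))
    return d
```

Identical images yield exactly zero loss, and only the `lins` (plus any unfrozen backbone layer) receive gradient updates during training.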
Here, I share the key insights through…
Keras has three methods for defining neural network architectures, namely the Sequential API, the Functional API, and model subclassing. More about this can be read here. This article introduces a method to import subclass model weights into a Functional API model.
Now, I have previously written about importing pre-trained tensorflow-1 model weights into keras. Back then, google-research had not yet provided a tensorflow-2 implementation of SimCLR. However, the tf2 version employs the model subclassing method for training and saving of the weights. This implies that the model loaded from the saved model files will not be a keras object, as discussed here,
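The basic transfer mechanism can be sketched as follows: rebuild the architecture with the Functional API so that its layers are created in the same order as in the subclassed model, then copy the flat weight list across. The toy two-Dense architecture below is a stand-in for SimCLR's much larger network.

```python
import numpy as np
import tensorflow as tf


class SubclassedNet(tf.keras.Model):
    """Stand-in for a subclassed model whose weights we want to export."""

    def __init__(self):
        super().__init__()
        self.d1 = tf.keras.layers.Dense(16, activation="relu")
        self.d2 = tf.keras.layers.Dense(4)

    def call(self, x):
        return self.d2(self.d1(x))


sub = SubclassedNet()
sub(tf.zeros((1, 8)))  # one forward pass builds the variables

# Functional twin: same layers, same creation order, same shapes
inp = tf.keras.Input((8,))
hidden = tf.keras.layers.Dense(16, activation="relu")(inp)
out = tf.keras.layers.Dense(4)(hidden)
func = tf.keras.Model(inp, out)

# Because the layer order matches, the flat weight lists line up one-to-one
func.set_weights(sub.get_weights())
```

After the copy, both models produce identical outputs, and `func` can be saved, inspected, and extended like any other Functional API model.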
Budding AI Researcher