Data Augmentation for Tabular Data

Lasse Schmidt · Published in Analytics Vidhya · Nov 7, 2021



The main idea of why and how to use Deep Learning for data augmentation on tabular data is described in my previous blogpost on this topic. Since then quite some time has passed, and I decided to rewrite the code so it is easier to use. In this blogpost I want to show you how to use the new approach, how flexible it is regarding the model you want to use, and how you can use your own custom Callbacks to better see what the model does in the background. Some of these features are already built in, and here’s how to use them.

Again, we use the credit-card fraud dataset from kaggle.
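In case you want to follow along, loading it with pandas looks like this (the file name is the one kaggle ships; adjust the path to your setup):

```python
import pandas as pd

# creditcard.csv is the file from kaggle's "Credit Card Fraud Detection" dataset
df = pd.read_csv('creditcard.csv')

# Class == 1 marks the fraud cases we want to augment later
print(df['Class'].value_counts())
```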

The deep_tabular_augmentation package works on the simple idea that we want to keep the data together with the model in a dedicated class (which we call the Learner). The data has to come as dataloader objects, which I store in the DataBunch class: it holds the dataloaders for the training and the test data. The Runner class then defines the training flow.

We first scale the data and only keep the data of the class we want to augment:
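A minimal sketch of this step, using sklearn's StandardScaler and keeping only the fraud cases (Class == 1):

```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# keep only the class we want to augment
df_fraud = df[df['Class'] == 1]
X = df_fraud.drop('Class', axis=1).values

X_train, X_test = train_test_split(X, test_size=0.2, random_state=42)

# fit the scaler on the training split only, then apply it to both splits
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
```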

As mentioned, I then put the train and testloader into a class called DataBunch, which is just a container for the data. You can easily create your own dataloaders and put them in a DataBunch.
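As a sketch with plain PyTorch dataloaders; the DataBunch constructor is assumed to simply take the two loaders, so check the package if it differs:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import deep_tabular_augmentation as dta

x_train = torch.tensor(X_train, dtype=torch.float32)
x_test = torch.tensor(X_test, dtype=torch.float32)

# the autoencoder reconstructs its input, so input and target are the same
train_dl = DataLoader(TensorDataset(x_train, x_train), batch_size=64, shuffle=True)
test_dl = DataLoader(TensorDataset(x_test, x_test), batch_size=64)

data = dta.DataBunch(train_dl, test_dl)
```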

To make use of deep_tabular_augmentation, we need to specify the input shape (basically, how many variables are in the dataset), the column name of the target together with the class label we want to augment, and lastly the column names of the input variables.
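For the credit card data that means something like:

```python
# all columns except the target are model inputs
cols = df_fraud.drop('Class', axis=1).columns.tolist()
input_shape = len(cols)   # 30 variables in the credit card dataset

target_name = 'Class'     # name of the target column
target_class = 1          # the label of the class we augment (fraud)
```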

Then we can define whatever model architecture we would like to have: we simply pass the hidden layer sizes as a list into the model. We can also define how many latent dimensions our Autoencoder should have.
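A sketch of how that could look; the class and argument names here (AutoencoderModel, hidden_layers, latent_dim) are my assumptions, so check the package README for the exact signature:

```python
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# hidden layer sizes as a list, plus the number of latent dimensions
# (class and argument names are assumptions, see the package README)
model = dta.AutoencoderModel(
    input_shape=input_shape,
    hidden_layers=[50, 12],
    latent_dim=5,
).to(device)
```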

The Runner class then controls the flow of the data and also lets you add Callbacks. One of the built-in Callbacks is a Learning Rate Finder, and this is how you can use it:
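A sketch of the usage, in the fastai-style API the package builds on; the names LR_Find and Recorder and the Learner signature are assumptions on my part:

```python
# the Learner bundles the model and the DataBunch (signature assumed)
learn = dta.Learner(model, data)

# short run with an exponentially growing learning rate, recording the loss
run = dta.Runner(cb_funcs=[dta.LR_Find, dta.Recorder])
run.fit(2, learn)
run.recorder.plot(skip_last=5)  # loss against learning rate
```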

We can also use the callbacks to create a learning rate scheduler. Here is an example: use 30% of the budget to go from 0.01 to 0.1 following a cosine, then the last 70% of the budget to go from 0.1 to 0.01, still following a cosine.
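With the fastai-style scheduling helpers, this schedule can be expressed like this (combine_scheds, sched_cos and ParamScheduler are assumed names, see the package for the exact API):

```python
from functools import partial

# 30% of the budget: cosine from 0.01 up to 0.1,
# remaining 70%: cosine from 0.1 back down to 0.01
sched = dta.combine_scheds([0.3, 0.7],
                           [dta.sched_cos(0.01, 0.1), dta.sched_cos(0.1, 0.01)])

lr_scheduler = partial(dta.ParamScheduler, 'lr', sched)
```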

Now we can train our Autoencoder. We want to keep track of the loss, be able to plot it afterwards, and add our learning-rate scheduling:
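Putting it together, the training run could look like this (callback names assumed as above; LossTracker is the custom callback shown further below):

```python
cbfs = [LossTracker, dta.Recorder, lr_scheduler]  # loss tracking, recording, lr schedule

run = dta.Runner(cb_funcs=cbfs)
run.fit(100, learn)  # train the autoencoder for 100 epochs
```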

We can have a look at the training loss:
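For example with matplotlib, assuming the Runner exposes the callback as run.loss_tracker:

```python
import matplotlib.pyplot as plt

# per-batch losses collected by the LossTracker callback
plt.plot(run.loss_tracker.losses)
plt.xlabel('batch')
plt.ylabel('training loss')
plt.show()
```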

And the learning rate over the epochs:
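Something like the following, where the Recorder's plotting method is an assumed name:

```python
run.recorder.plot_lr()  # learning rate per iteration, tracing the cosine schedule
```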

You can create any kind of Callback you want and pass it to the training. Within the Runner I created some hooks to which you can directly link your Callback, for example begin_batch, begin_epoch, after_pred, after_fit and quite a few more. When you create your Callback you can refer to these to pinpoint where your Callback should do specific tasks. This is what the LossTracker looks like:
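Here is a sketch reconstructed from the description below; the hook and attribute names follow the fastai-style callback system the package builds on, so the real implementation may differ in detail:

```python
class LossTracker(dta.Callback):
    """Collect the loss per batch and print it once per epoch."""

    def begin_fit(self):
        self.losses = []                      # start with an empty list

    def after_batch(self):
        self.losses.append(self.loss.item())  # store the current batch loss

    def after_epoch(self):
        print(f'epoch {self.epoch}: loss {self.losses[-1]:.4f}')
```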

When the training begins, I start with an empty list; after each batch I append the loss, and after each epoch I print it.

Moreover, the Runner also lets you create augmented data with the trained model. You can specify how many samples of the specified class you want, and you can also add some noise, which I found to create better input for later use.
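A sketch of how this could be called; the method and argument names (predict_df, no_samples, add_noise) are assumptions, check the package for the exact interface:

```python
# generate 1000 synthetic rows of the fraud class, with a little noise added
df_fake = run.predict_df(learn, no_samples=1000, add_noise=True)
df_fake.head()
```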

Let’s see how our model does when it comes to replicating the fraud cases. First we plot V1 against V2 on the non-fraud cases, then we do the same for the fraud cases:
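With matplotlib this is a simple pair of scatter plots on the original data:

```python
import matplotlib.pyplot as plt

non_fraud = df[df['Class'] == 0]
fraud = df[df['Class'] == 1]

fig, axs = plt.subplots(1, 2, figsize=(12, 5), sharex=True, sharey=True)
axs[0].scatter(non_fraud['V1'], non_fraud['V2'], s=2)
axs[0].set(title='non-fraud', xlabel='V1', ylabel='V2')
axs[1].scatter(fraud['V1'], fraud['V2'], s=2)
axs[1].set(title='fraud', xlabel='V1')
plt.show()
```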

They look quite different. Let’s see how our fake data does:
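The same plot on the generated samples:

```python
import matplotlib.pyplot as plt

plt.scatter(df_fake['V1'], df_fake['V2'], s=2)
plt.title('augmented fraud cases')
plt.xlabel('V1')
plt.ylabel('V2')
plt.show()
```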

This actually looks quite amazing. If you have any questions or want anything added to the package, just ask me.

Lasse

Originally published at https://lschmiddey.github.io on November 07, 2021.
