Machine Learning Recycling Project — Part 6 — Collecting a Dataset

Nathan Bailey
4 min read · Jan 15, 2024


It has been a while since I last posted on this project. This is mainly because life took over during the Christmas and New Year break, but I have still been working on it in the background. The last blog ended with a look at pruning the trained network to decrease its size whilst maintaining a high level of accuracy. We found that APoZ pruning produced the best results. Combined with teacher-student methods, it helped to reduce the parameter count further whilst maintaining the same level of accuracy.

Now that we have an optimised model producing a high level of accuracy, we next turn our attention to deploying this model into a recycling bin. To do this, we must first collect a representative dataset. The dataset we currently train the model on was fine for our initial development. However, when we deploy the edge device inside a recycling bin, the photos taken by the camera will not resemble the training images, so we will see poor performance.

We must collect a dataset of recycling items as the device would capture them in the bin. This is a significant undertaking since we must ensure that we collect a wide range of items that span the classes we have. In this blog, I detail the current work done to collect the dataset, the performance of a trained model on this dataset and the work that needs to be done to complete the collection of the dataset.

Collection of the Dataset

To collect the dataset, I positioned the Jetson Nano on the inside of the bin and attached a webcam to it so that it could capture video. I originally planned to attach a Raspberry Pi camera to the Jetson; however, it turns out that the Jetson Nano no longer supports the latest version of this camera.

Whilst this is a bit of a sketchy setup at the moment and not how I envision the final system working, it will do the job for now.

Setup to Collect the Dataset

To capture the dataset, I wrote a simple Python program that reads a video input and continually saves frames from it. I then placed items in the bin one by one. Once I had captured all the items, I manually kept the frames that contained an item and removed those that did not. Once this was complete, I could label the data. To keep the task manageable, I decided to omit the organic (food waste) class and capture only paper, plastic, metal and glass.
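The capture script itself is not included in the post, but a minimal sketch of the idea is shown below, assuming OpenCV, a webcam on device index 0, and an illustrative output directory and capture interval.

```python
import time
from pathlib import Path

import cv2

OUTPUT_DIR = Path("captured_frames")  # illustrative output location
CAPTURE_INTERVAL_S = 0.5              # assumed delay between saved frames


def capture_frames() -> None:
    """Continuously grab frames from the webcam and save them to disk."""
    OUTPUT_DIR.mkdir(exist_ok=True)
    capture = cv2.VideoCapture(0)  # webcam attached to the Jetson Nano
    frame_idx = 0
    try:
        while True:
            ok, frame = capture.read()
            if not ok:
                break  # camera disconnected or no frame available
            cv2.imwrite(str(OUTPUT_DIR / f"frame_{frame_idx:06d}.jpg"), frame)
            frame_idx += 1
            time.sleep(CAPTURE_INTERVAL_S)
    finally:
        capture.release()


if __name__ == "__main__":
    capture_frames()
```

The saved frames can then be filtered and labelled by hand, as described above.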

As one can imagine, the process of capturing enough items to form a complete dataset takes a long time. At the time of writing, I have collected 91 distinct items. This is not enough to train a model that generalizes well, but it is a good start.

The figures below show some of these collected images and the corresponding labels.

Collected Dataset

Training a Model on the Dataset

Training a model on this dataset followed a similar process to the original training. We apply the same starting learning rate and scheduler as before; however, this time we take advantage of the information learned by the original model, loading its trained weights before re-training on the new dataset.
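The training code is not shown in this post, so the sketch below is only an illustration of the idea, assuming a Keras model saved from the earlier work; the file name, learning rate, scheduler settings and dataset paths are placeholders rather than the project's actual values.

```python
import tensorflow as tf

# Placeholder file name and hyperparameters; the real values come from the
# earlier posts in this series.
PRETRAINED_MODEL_PATH = "recycling_model.keras"
INITIAL_LEARNING_RATE = 1e-3
IMAGE_SIZE = (224, 224)
BATCH_SIZE = 32

# Load the model trained on the original dataset so that its learned features
# are reused as the starting point for the new, smaller dataset.
model = tf.keras.models.load_model(PRETRAINED_MODEL_PATH)

# Re-compile with the same starting learning rate as the original training run.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=INITIAL_LEARNING_RATE),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Same idea as the original scheduler: reduce the learning rate when the
# validation loss stops improving (the settings here are illustrative).
lr_scheduler = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss", factor=0.1, patience=5
)

# Hypothetical directory layout for the newly collected bin images.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "bin_dataset/train", image_size=IMAGE_SIZE, batch_size=BATCH_SIZE
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "bin_dataset/val", image_size=IMAGE_SIZE, batch_size=BATCH_SIZE
)

model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=[lr_scheduler])
```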

Due to the small amount of data, and since we are only interested in whether we can train a network on this data at all, we omit a testing set. This allows more data to be split between training and validation.
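As a rough illustration, a split along these lines could look like the following, assuming the labelled frames sit in a single hypothetical directory and using an assumed 80/20 ratio.

```python
from pathlib import Path

from sklearn.model_selection import train_test_split

# Hypothetical layout: every labelled frame lives in a single directory.
image_paths = sorted(Path("labelled_frames").glob("*.jpg"))

# No test set: all frames go to training and validation (80/20 is assumed).
train_paths, val_paths = train_test_split(
    image_paths, test_size=0.2, shuffle=True, random_state=42
)

print(f"{len(train_paths)} training images, {len(val_paths)} validation images")
```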

We also make sure to take advantage of data augmentation to artificially increase the size of the training dataset, applying the same five augmentations as we originally trialled. This is very useful as it allows more of the original data to be dedicated to validation whilst keeping a large training set.
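The five augmentations themselves are described in the earlier posts; the Keras preprocessing layers below are illustrative stand-ins rather than the exact set used.

```python
import tensorflow as tf

# Five illustrative augmentations applied randomly at training time.
data_augmentation = tf.keras.Sequential(
    [
        tf.keras.layers.RandomFlip("horizontal"),
        tf.keras.layers.RandomRotation(0.1),
        tf.keras.layers.RandomZoom(0.1),
        tf.keras.layers.RandomTranslation(0.1, 0.1),
        tf.keras.layers.RandomContrast(0.1),
    ]
)

# Applied on the fly to the training pipeline only, so the validation images
# stay untouched while the effective size of the training set grows.
# train_ds = train_ds.map(lambda x, y: (data_augmentation(x, training=True), y))
```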

The following graphs show the training process. We can see that the model learns well, reaching a high level of accuracy. However, it fails to generalize well on the validation set.

Training Graphs

While I cannot say for sure, I think this is due to the lack of training data. We have only collected 91 items and therefore it is a stretch for the network to generalize well to this data. To properly evaluate the generalization ability of this network, we must collect more data.

Conclusions

This blog looked at how we collected a new dataset for the recycling project. While a network trained on this data performed well on the training data, it failed to generalize well on validation data. To correct this, the next step in this project is to collect more data.


Written by Nathan Bailey

MSc AI and ML Student @ ICL. Ex ML Engineer @ Arm, Ex FPGA Engineer @ Arm + Intel, University of Warwick CSE Graduate, Climber. https://www.nathanbaileyw.com
