Vehicle detection and tracking using deep learning

  • 🔬 Data Science
  • 🥠 Deep Learning and Object Detection
  • 🛤️ Tracking

Introduction and objective

Vehicle detection and tracking is a common problem with multiple use cases. Government authorities and private establishments might want to understand the traffic flowing through a place to better develop infrastructure for the ease and convenience of everyone. A road widening project, timing traffic signals, and constructing parking spaces are a few examples where analysing traffic is integral to the project.

Traditionally, identification and tracking have been carried out manually: a person stands at a point and notes the count of vehicles and their types. More recently, sensors have been put into use, but they only solve the counting problem; sensors cannot detect the type of vehicle.

In this notebook, we'll demonstrate how we can use deep learning to detect vehicles and then track them in a video. We'll use a short video taken from a live traffic camera feed.

Necessary imports

import os
import pandas as pd
from pathlib import Path

from arcgis.gis import GIS
from arcgis.learn import RetinaNet, prepare_data
gis = GIS('home')
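GIS('home') connects with the credentials of the active ArcGIS Online notebook environment. If you are running this notebook elsewhere, you could instead connect with explicit credentials; a minimal sketch, where the portal URL, username, and password are placeholders to replace with your own:

# Connect to ArcGIS Online (or your own portal URL) with named credentials.
gis = GIS('https://www.arcgis.com', 'username', 'password')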

Prepare data that will be used for training

You can download vehicle training data from here. Extract the downloaded file to get your training data.

Model training

Let's set a path to the folder that contains training images and their corresponding labels.

training_data = gis.content.get('ccaa060897e24b379a4ed2cfd263c15f')
training_data
vehicle_detection_and_tracking
Image Collection by api_data_owner
Last Modified: August 26, 2020
0 comments, 10 views
filepath = training_data.download(file_name=training_data.name)
import zipfile
# Extract the downloaded archive next to the zip file.
with zipfile.ZipFile(filepath, 'r') as zip_ref:
    zip_ref.extractall(Path(filepath).parent)
data_path = Path(os.path.splitext(filepath)[0])  # folder created by the extraction

We'll use the prepare_data function to create a fastai databunch with the necessary parameters such as batch_size and chip_size. A complete list of parameters can be found in the API reference.

The given dataset has 235 images of size 854x480 pixels. We will define a chip_size of 480 pixels, which will create random 480x480 crops from the given images. This way we maintain the aspect ratios of the objects, but we can miss out on objects when training the model for fewer epochs. To avoid cropping, we can set resize_to=480 so that every chip is an entire frame and no object is missed, but there is a risk of poorer detection of smaller-sized objects.

data = prepare_data(data_path, 
                    batch_size=4, 
                    dataset_type="PASCAL_VOC_rectangles", 
                    chip_size=480)
Please check your dataset. 9 images dont have the corresponding label files.

We see the warning above because a few images in our dataset are missing their corresponding label files. These images will be ignored while loading the data. If the number is significant, we might want to fix the issue by adding label files for those images or by removing the images from the dataset.
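As discussed above, if we wanted each chip to be an entire frame rather than a random crop, we could additionally pass resize_to. A sketch of that variant, shown here only for illustration:

# Resize every 854x480 frame so each chip covers the whole image
# and no object is cropped out (at the cost of smaller objects).
data_full_frame = prepare_data(data_path,
                               batch_size=4,
                               dataset_type="PASCAL_VOC_rectangles",
                               chip_size=480,
                               resize_to=480)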

We can use the classes attribute of the data object to get information about the number of classes.

data.classes
['background',
 'bicycle',
 'bus',
 'car',
 'motorcycle',
 'person',
 'scooter',
 'tempo',
 'tractor',
 'truck',
 'van']

Visualize training data

To visualize and get a sense of the training data, we can use the data.show_batch method.

data.show_batch()
[Figure: a sample batch of training chips with ground-truth bounding boxes]

In the previous cell, we see a sample of the dataset. We can observe, in the given chips, that the most common vehicles are cars and bicycles. It can also be noticed that the different instances of the vehicles have varying scales.

Load model architecture

arcgis.learn provides object detection models that are based on pretrained convnets, such as ResNet, acting as backbones. We will use RetinaNet with the default parameters to create our vehicle detection model. For more details on RetinaNet, check out How RetinaNet works? and the API reference.

retinanet = RetinaNet(data)
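RetinaNet is created here with its default backbone. If we wanted to experiment, the constructor also accepts a backbone argument; a sketch, assuming torchvision ResNet names such as 'resnet101' are accepted:

# A deeper backbone may improve accuracy at the cost of training speed.
retinanet_r101 = RetinaNet(data, backbone='resnet101')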

We will use the lr_find() method to find an optimum learning rate. It is important to set a learning rate at which we can train a model with good accuracy and speed.

lr = retinanet.lr_find()
[Figure: learning rate finder plot of loss against learning rate]
4.365158322401661e-05

Train the model

We will now train the RetinaNet model using the suggested learning rate from the previous step. We can specify how many epochs we want to train for; let's train the model for 100 epochs. We can also set tensorboard=True if we want to visualize the training process in TensorBoard.

retinanet.fit(100, lr=lr, tensorboard=True) 
epoch  train_loss  valid_loss  time
0      2.651160    3.122699    00:33
1      2.727485    3.089710    00:32
2      2.744920    3.015922    00:32
3      2.671797    2.851994    00:31
4      2.457554    2.497410    00:31
5      2.381740    2.328834    00:31
6      2.060174    4.138567    00:31
7      1.792403    21.451857   00:31
8      1.712977    4.193508    00:31
9      1.608706    4.876813    00:32
10     1.496329    4.955950    00:32
11     1.575526    2.124239    00:33
12     1.448479    2.765982    00:31
13     1.356783    2.739088    00:31
14     1.296036    1.941170    00:32
15     1.235588    3.042969    00:32
16     1.177469    2.916740    00:32
17     1.163151    2.462182    00:32
18     1.124477    1.952319    00:32
19     1.055723    2.639346    00:32
20     0.976554    1.884056    00:32
21     0.865862    1.545389    00:32
22     0.885476    1.693674    00:32
23     0.861983    1.386624    00:32
24     0.812286    1.257245    00:33
25     0.794138    1.578588    00:32
26     0.765640    1.208835    00:34
27     0.702818    1.117395    00:32
28     0.669110    1.213653    00:33
29     0.674798    1.130191    00:32
30     0.675300    1.154881    00:32
31     0.680791    1.257907    00:33
32     0.655586    1.072347    00:32
33     0.586407    1.009210    00:32
34     0.570755    1.220290    00:33
35     0.590223    0.982790    00:34
36     0.575041    0.997690    00:33
37     0.585412    1.035814    00:33
38     0.572887    1.015082    00:33
39     0.552126    0.949728    00:32
40     0.535455    1.195224    00:33
41     0.499169    0.946746    00:33
42     0.527345    1.009812    00:34
43     0.547029    0.991675    00:33
44     0.515441    0.906661    00:33
45     0.547948    0.986166    00:33
46     0.517109    0.943002    00:33
47     0.474826    0.894875    00:33
48     0.440434    0.909886    00:33
49     0.441918    0.819840    00:33
50     0.433040    0.837711    00:33
51     0.424501    0.834161    00:33
52     0.442397    0.825194    00:33
53     0.438501    0.778577    00:34
54     0.425794    0.790809    00:33
55     0.405544    0.774125    00:34
56     0.397529    0.751094    00:34
57     0.386021    0.756899    00:33
58     0.395799    0.763772    00:33
59     0.385372    0.785581    00:35
60     0.379765    0.767338    00:34
61     0.369503    0.720050    00:33
62     0.367806    0.720712    00:35
63     0.378731    0.734859    00:34
64     0.368838    0.729135    00:33
65     0.344555    0.700024    00:35
66     0.340411    0.743908    00:35
67     0.350800    0.718764    00:34
68     0.364890    0.715524    00:35
69     0.337952    0.688673    00:34
70     0.348077    0.719215    00:35
71     0.323196    0.700020    00:34
72     0.361027    0.719423    00:35
73     0.367712    0.719814    00:35
74     0.367507    0.693808    00:35
75     0.347651    0.708264    00:35
76     0.345269    0.705601    00:34
77     0.341163    0.719633    00:34
78     0.321359    0.719021    00:34
79     0.325086    0.710695    00:34
80     0.307621    0.709985    00:34
81     0.312010    0.695209    00:34
82     0.308455    0.723050    00:34
83     0.333749    0.721235    00:34
84     0.323337    0.718696    00:33
85     0.330353    0.709316    00:34
86     0.337785    0.728645    00:33
87     0.299953    0.732279    00:33
88     0.309058    0.723001    00:33
89     0.341413    0.749138    00:33
90     0.332262    0.734328    00:33
91     0.306863    0.716808    00:33
92     0.300803    0.737754    00:33
93     0.313041    0.714918    00:33
94     0.329477    0.711772    00:33
95     0.321354    0.714558    00:33
96     0.321379    0.701373    00:34
97     0.301340    0.726296    00:33
98     0.297174    0.726158    00:33
99     0.310064    0.736690    00:33

After the training is complete, we can view the plot with training and validation losses.

retinanet.learn.recorder.plot_losses()
[Figure: training and validation loss curves]

Visualize results on validation set

To see sample results we can use the show_results method. This method displays the chips from the validation dataset with ground truth (left) and predictions (right). We can also specify the threshold to view predictions at different confidence levels. This visual analysis helps in assessing the qualitative results of the trained model.

retinanet.show_results(thresh=0.4)
[Figure: validation chips with ground truth (left) and predictions (right)]

To see the quantitative results of our model we will use the average_precision_score method.

retinanet.average_precision_score(detect_thresh=0.4)
100.00% [6/6 00:01<00:00]
{'bicycle': 0.6121794875615674,
 'bus': 0.0,
 'car': 0.770548729309354,
 'motorcycle': 0.0,
 'person': 0.0,
 'scooter': 0.0,
 'tempo': 0.0,
 'tractor': 0.0,
 'truck': 1.0,
 'van': 0.38429487869143486}

We can see the average precision for each class in the validation dataset. Note that while car and bicycle have good scores, van doesn't, and a few classes have a score of 0. Remember that when we visualized the data using show_batch, we noted that cars and bicycles were the most common objects. This suggests the scores are correlated with the number of examples of these objects in our training dataset.

Let's look at the number of instances of each class in the training data, which should explain these scores.

all_classes = []
# Collect the class label of every bounding box in the training set.
for bb in data.train_ds.y:
    all_classes += bb.data[1].tolist()

# Count instances per class and map label indices to class names.
df = pd.value_counts(all_classes, sort=False)
df.index = [data.classes[i] for i in df.index]
df
bicycle       266
bus            19
car           756
motorcycle     33
person         24
scooter         6
tempo           1
tractor         4
truck          30
van            69
dtype: int64

It is evident that the classes that have a score of 0.0 have an extremely low number of examples in the training dataset.
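To see this correlation side by side, we can join the per-class average precision with the instance counts computed above into one table. A small sketch using pandas:

# Re-run the scoring to capture the per-class dictionary,
# then align it with the instance counts on the class names.
ap_scores = retinanet.average_precision_score(detect_thresh=0.4)
summary = pd.DataFrame({'instances': df,
                        'average_precision': pd.Series(ap_scores)})
summary.sort_values('instances', ascending=False)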

Save the model

Let's save the model by giving it a name and calling the save method so that we can load it later whenever required. By default, the model is saved in a directory called models inside the data_path initialized earlier, but a custom path can be provided.

retinanet.save('vehicle_det_ep100_defaults')
WindowsPath('vehicle_detection/models/vehicle_det_ep100_defaults')
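Whenever required later, the trained model can be recreated without retraining by loading its Esri model definition (.emd) file with from_model. A sketch, assuming the default .emd naming inside the saved folder:

from arcgis.learn import RetinaNet

# Recreate the model from the saved Esri model definition file.
retinanet = RetinaNet.from_model(
    'vehicle_detection/models/vehicle_det_ep100_defaults/vehicle_det_ep100_defaults.emd',
    data)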

Inference and tracking

Multiple-object tracking can be performed using the predict_video function of the arcgis.learn module. To enable tracking, set the track parameter of predict_video to True (track=True).

The following options/parameters are available in the predict_video function for the user to tune (see the sketch after this list):

  • vanish_frames: The number of frames an object must remain absent from the frame before it is considered vanished.

  • detect_frames: The number of frames an object must remain present in the frame before tracking starts.

  • assignment_iou_thrd: There might be multiple trackers detecting and tracking objects. The Intersection over Union (IoU) threshold can be set to assign a detection to a tracker.
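These options are passed to predict_video through its tracker_options dictionary. Below is a hedged sketch of such a dictionary; the values shown are illustrative, not recommendations from this notebook:

tracker_options = {
    'assignment_iou_thrd': 0.3,  # minimum IoU to assign a detection to an existing tracker
    'vanish_frames': 40,         # frames an object can be absent before its track is dropped
    'detect_frames': 10          # frames an object must persist before a track is created
}
# This dictionary can then be passed as predict_video(..., tracker_options=tracker_options).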

video_data = gis.content.get('1801dc029fed467ba67d6e39113202af')
video_data
vehicle_detection_and_tracking_video
Image Collection by api_data_owner
Last Modified: August 31, 2020
0 comments, 3 views
videopath = video_data.download(file_name=video_data.name)
import zipfile
with zipfile.ZipFile(videopath, 'r') as zip_ref:
    zip_ref.extractall(Path(videopath).parent)
video_file = os.path.join(os.path.splitext(videopath)[0], 'test.mp4')  # path to the extracted test video
retinanet.predict_video(input_video_path=video_file, 
                        metadata_file='test.csv',
                        track=True, 
                        visualize=True, 
                        threshold=0.5,
                        resize=True)

[Video: the input clip with detected vehicles and their tracks overlaid]
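When tracking is enabled, detections and track information are also recorded in the metadata file passed to predict_video. To inspect what was written, we can load it with pandas; a minimal sketch, assuming the CSV sits next to the input video (its exact column layout depends on the API version):

# Load the updated metadata file and peek at the recorded fields.
meta = pd.read_csv(os.path.join(os.path.dirname(video_file), 'test.csv'))
print(meta.columns.tolist())
meta.head()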