Image Captioning Using Deep Learning

  • 🔬 Data Science
  • 🥠 Deep Learning and Image Captioning

Introduction and objective

Image captioning, the task of generating a concise textual summary that describes the content of an image, has applications in numerous fields such as scene classification, virtual assistants, image indexing, social media, and accessibility for visually impaired persons. Deep learning has achieved impressive, in some cases superhuman, performance on tasks ranging from object detection to natural language processing. ImageCaptioner, which combines computer vision and natural language processing, is a deep learning model that generates captions for remote sensing imagery.

This sample shows how the ArcGIS API for Python can be used to train an ImageCaptioner model using the Remote Sensing Image Captioning Dataset (RSICD) [1], a publicly available dataset for the remote sensing image captioning task. RSICD contains 10,921 remote sensing images collected from Google Earth, Baidu Map, MapABC, and Tianditu. The images are fixed at 224 x 224 pixels with various resolutions, and each image has five sentence descriptions. The screenshot below shows an example of this data:

The trained model can be deployed on ArcGIS Pro or ArcGIS Enterprise to generate captions for high resolution satellite imagery.

Necessary imports

from pathlib import Path
import os, json

from arcgis.learn import prepare_data, ImageCaptioner
from arcgis.gis import GIS
gis = GIS('home')

Prepare data that will be used for training

We need to put the RSICD dataset in a specific format, i.e., a root folder containing a folder named "images" and a JSON file with the annotations named "annotations.json". The specific format of the JSON file can be seen here.


Folder structure for RSICD dataset. A root folder containing "images" folder and "annotations.json" file.

Model training

Let's set a path to the folder that contains training images and their corresponding labels.

training_data = gis.content.get('8c4fc46930a044a9b20bb974d667e074')
training_data
ImageCaptioning
Image Collection by api_data_owner
Last Modified: May 12, 2022
0 comments, 0 views
filepath = training_data.download(file_name=training_data.name)
import zipfile
with zipfile.ZipFile(filepath, 'r') as zip_ref:
    zip_ref.extractall(Path(filepath).parent)
data_path = Path(os.path.splitext(filepath)[0])
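
Before preparing the data, we can run a quick, optional sanity check that the extracted folder matches the layout described above:

# Expect an 'images' folder and an 'annotations.json' file in the extracted root.
print(os.listdir(data_path))
print(len(os.listdir(os.path.join(data_path, 'images'))), 'image chips found')
with open(os.path.join(data_path, 'annotations.json')) as f:
    annotations = json.load(f)   # schema described in the format link above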

We'll use the prepare_data function to create a databunch with the necessary parameters, such as batch_size and chip_size. A complete list of parameters can be found in the API reference.

data = prepare_data(data_path, 
                    chip_size=224,
                    batch_size=4,
                    dataset_type='ImageCaptioning')
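
Optionally, we can check how the chips were split between the training and validation sets. The attribute names below assume the fastai-style data object returned by prepare_data:

print(len(data.train_ds), 'training chips /', len(data.valid_ds), 'validation chips')
print('batch size:', data.batch_size)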

Visualize training data

To visualize and get a sense of the training data, we can use the data.show_batch method.

data.show_batch()
<Figure size 720x720 with 4 Axes>

Load model architecture

arcgis.learn provides an image captioning model based on pretrained convnets, such as ResNet, that act as the backbone. We will use ImageCaptioner with the backbone parameter set to resnet50 to create our image captioning model. For more details on ImageCaptioner, check out How image_captioning works? and the API reference.

ic = ImageCaptioner(data, backbone='resnet50')

We will use the lr_find() method to find an optimum learning rate. It is important to set a learning rate at which we can train a model with good accuracy and speed.

lr = ic.lr_find()
<Figure size 432x288 with 1 Axes>

Train the model

We will now train the ImageCaptioner model using the suggested learning rate from the previous step. We can specify how many epochs we want to train for. Let's train the model for 100 epochs.

ic.fit(100, lr, early_stopping=True)
33.00% [33/100 29:17:58<59:29:12]
epoch  train_loss  valid_loss  accuracy  bleu      time
0      4.695280    4.683484    0.169359  0.000000  53:17
1      4.149865    4.193302    0.212167  0.000000  53:33
2      3.847940    3.863936    0.302988  0.062242  53:08
3      3.531038    3.593763    0.350844  0.089471  53:07
4      3.237615    3.324673    0.380708  0.107219  53:04
5      2.885094    3.115619    0.413582  0.128764  53:05
6      2.745963    2.863758    0.445932  0.152206  53:10
7      2.540881    2.766686    0.462775  0.177337  53:09
8      2.355527    2.600504    0.489885  0.207568  53:06
9      2.313179    2.487214    0.504936  0.235590  53:04
10     2.054211    2.346865    0.529235  0.254470  53:10
11     2.047108    2.199641    0.551783  0.283868  53:10
12     2.064740    2.172241    0.556525  0.291952  53:06
13     1.986457    2.128242    0.563766  0.296588  53:06
14     1.826541    2.031750    0.579189  0.311093  53:11
15     1.763997    2.046451    0.566121  0.291099  53:05
16     1.817860    1.919416    0.592966  0.326027  53:06
17     1.766880    1.871730    0.592674  0.319502  53:58
18     1.714902    1.899555    0.593588  0.329748  53:33
19     1.617795    1.881732    0.595126  0.329128  53:17
20     1.604728    1.855326    0.600784  0.330772  53:13
21     1.565843    1.844262    0.603579  0.337001  53:16
22     1.554246    1.790282    0.609448  0.341711  53:15
23     1.581763    1.795434    0.613312  0.348124  53:32
24     1.543676    1.773622    0.615620  0.348699  53:47
25     1.504090    1.731651    0.625233  0.360189  53:47
26     1.532415    1.756249    0.615012  0.340073  53:10
27     1.473316    1.719556    0.620736  0.349469  53:05
28     1.420376    1.727326    0.616957  0.344545  53:05
29     1.487180    1.737915    0.622099  0.362117  53:05
30     1.449937    1.751929    0.618553  0.347141  53:08
31     1.485789    1.755898    0.624464  0.363179  53:08
32     1.371849    1.719512    0.621808  0.355478  53:08

100.00% [273/273 04:59<00:00]
Epoch 33: early stopping
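
Training stopped early after epoch 33. To see how the training and validation losses evolved over the run, we can plot them, assuming the plot_losses helper that arcgis.learn models typically provide:

ic.plot_losses()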

Visualize results on validation set

To see sample results we can use the show_results method. This method displays the chips from the validation dataset with ground truth (left) and predictions (right). This visual analysis helps in assessing the qualitative results of the trained model.

ic.show_results()
<Figure size 1440x1440 with 8 Axes>

Evaluate model performance

To see the quantitative results of our model, we will use the bleu_score method. The Bilingual Evaluation Understudy (BLEU) score is a popular metric that measures how many sequential words match between the predicted and the ground truth captions, comparing n-grams of lengths 1 through 4. A perfect match results in a score of 1.0, whereas a complete mismatch results in a score of 0.0. In short, BLEU summarizes how close the generated text is to the expected text.

ic.bleu_score()
{'bleu-1': 0.5853038148042357,
 'bleu-2': 0.3385487762905085,
 'bleu-3': 0.2464713554187269,
 'bleu-4': 0.1893991004368455,
 'BLEU': 0.2583728068745172}
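
To build intuition for these numbers, here is a minimal, illustrative sketch that scores a single caption pair with NLTK. This is an assumption for illustration only: NLTK is not part of this workflow, the captions below are made up, and the built-in bleu_score method computes the metric over the whole validation set.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = ['many buildings are around a parking lot'.split()]   # hypothetical ground truth caption(s)
candidate = 'some buildings are near a parking lot'.split()       # hypothetical predicted caption
smooth = SmoothingFunction().method1                              # avoids zero scores when an n-gram order has no matches

bleu_1 = sentence_bleu(reference, candidate, weights=(1, 0, 0, 0), smoothing_function=smooth)
bleu_4 = sentence_bleu(reference, candidate, weights=(0.25, 0.25, 0.25, 0.25), smoothing_function=smooth)
print(f'bleu-1: {bleu_1:.3f}, bleu-4: {bleu_4:.3f}')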

Save the model

Let's save the model by giving it a name and calling the save method, so that we can load it later whenever required. The model is saved by default in a directory called models in the data_path initialized earlier, but a custom path can be provided.

ic.save('image-captioner-33epochs')
Computing model metrics...
WindowsPath('//ImageCaptioning/models/image-captioner-33epochs')
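
The saved model can be loaded back in a later session. The snippet below assumes the from_model pattern used by arcgis.learn models and the default save location shown above:

saved_emd = data_path / 'models' / 'image-captioner-33epochs' / 'image-captioner-33epochs.emd'
ic = ImageCaptioner.from_model(str(saved_emd), data)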

Prediction on test image

We can perform inferencing on a small test image using the predict function.

ic.predict(r'\image-captioner\test_img.tif')
'some cars are parked in a parking lot .'
<Figure size 432x288 with 1 Axes>
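
To caption several test chips in one go, we can loop over a folder of images and call predict on each one. The folder path below is hypothetical:

import glob
for img_path in glob.glob(r'\image-captioner\test_chips\*.tif'):   # hypothetical folder of test chips
    print(Path(img_path).name, '->', ic.predict(img_path))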

Now that we are satisfied with the model's performance on test images, we are ready to perform model inferencing on our desired imagery. In our case, we are interested in inferencing on a high resolution satellite image.

Model inference

Before using the model for inference, we need to make some changes in the <model_name>.emd file. You can learn more about this file here.

By default, CropSizeFixed is set to 1. We want to change CropSizeFixed to 0 so that the size of the tile cropped around the features is not fixed. The code below edits the .emd file to set CropSizeFixed to 0.

emd_path = os.path.join(data_path, "models", "image-captioner-33epochs", "image-captioner-33epochs.emd")

# Open the saved model's .emd file, update CropSizeFixed, and write it back in place.
with open(emd_path, "r+") as emd_file:
    emd_data = json.load(emd_file)
    emd_data["CropSizeFixed"] = 0   # 0 = the tile cropped around a feature is not fixed in size
    emd_file.seek(0)                # rewind before overwriting
    json.dump(emd_data, emd_file, indent=4)
    emd_file.truncate()             # remove any leftover bytes from the old file content
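
As a quick sanity check, we can re-read the file and confirm the value was written:

with open(emd_path) as emd_file:
    print(json.load(emd_file)["CropSizeFixed"])   # expected output: 0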

In order to perform inferencing in ArcGIS Pro, we need to create a feature class on the map using the Create Feature Class or Create Fishnet tool.

The feature class and the trained model have been provided for reference. You can download these files directly to perform model inferencing on your desired area.

import arcpy

with arcpy.EnvManager(extent="-13049125.3076102 4033595.5228646 -13036389.0790898 4042562.3896354", cellSize=1, processorType="GPU"):
    arcpy.ia.ClassifyObjectsUsingDeepLearning(
        "Inferencing_Image",
        r"C:\Users\Admin\Documents\ImgCap\captioner.gdb\Classified_ImageCaptions",
        r"D:\image-captioner-33epochs\image-captioner-33epochs.emd",
        "California_Features",
        '',
        "PROCESS_AS_MOSAICKED_IMAGE",
        "batch_size 1;beam_width 5;max_length 20",
        "Caption"
    )

Results

We selected an area unseen by the model and generated some features using the Create Feature Class tool. We then used our model to generate captions. Below are the results we achieved.

Conclusion

In this notebook, we demonstrated how to use the ImageCaptioner model from the ArcGIS API for Python to generate image captions using RSICD as training data.

References

[1] Lu, X., Wang, B., Zheng, X., & Li, X. Exploring Models and Data for Remote Sensing Image Caption Generation. IEEE Transactions on Geoscience and Remote Sensing, 2017.