
ArcGIS API for Python

PDF Table to Pandas Data Frame to PDF Map

PDF files often contain useful information presented in tabular form. If such data includes location information, it becomes much more insightful when presented as a cartographic map. This sample shows how Pandas can be used to extract data from a table within a PDF file (in this case, a PDF accessible from the California Department of Forestry & Fire Protection (CAL FIRE) website) and how that data can then be brought into the GIS for further analysis and visualization.

Step 1. Install tabula-py

Note: to run this sample, you need a few extra libraries in your conda environment. If you don't have them, install them by running the following commands from cmd.exe or your shell:

pip install lxml
pip install tabula-py==1.4.3

`tabula-py` can read tables from a PDF and convert them into pandas DataFrames. For tabula-py to work in your environment, make sure you have Java 7 or 8 installed before installing tabula-py.

In [1]:
import pandas as pd
import numpy as np

Step 2. Read table from PDF

The California Department of Forestry & Fire Protection (CAL FIRE) provides statistics and reports that are tracked on a weekly basis, as well as snapshots of the number of fires and acres burned. These are preliminary numbers taken from the Computer Aided Dispatch system, and will likely change, as dispatched wildfires may end up being other types of fires or false alarms [1].

The table of Number of Fires and Acres can be fetched from the web page and parsed as a Pandas data frame, as follows:

In [2]:
df = pd.read_html("https://www.fire.ca.gov/stats-events/")[0]
df.head()
Out[2]:
0 1 2
0 Interval Fires Acres
1 January 1, 2020 through August 16, 2020 5356 100916
2 January 1, 2019 through August 16, 2019 3325 28060
3 5-Year Average (same interval) 3826 306602
4 2020 Combined YTD (CALFIRE & US Forest Service) 6506 276574
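Note that pd.read_html returned numeric column labels (0, 1, 2), with the real header sitting in the first data row. If you prefer proper column names, one option (a sketch, not part of the original workflow) is to promote that row to the header:

```python
import pandas as pd

# stand-in for what pd.read_html returned: the header landed in row 0
df = pd.DataFrame([["Interval", "Fires", "Acres"],
                   ["January 1, 2020 through August 16, 2020", "5356", "100916"]])

# promote the first row to the column header and drop it from the data
df.columns = df.iloc[0]
df = df.drop(0).reset_index(drop=True)
```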

CAL FIRE also provides statistical summary reports for historic wildfires in California, e.g. the top 20 largest, the top 20 most destructive, and the top 20 deadliest fires. Their URLs are kept in the list below:

In [3]:
pdf_url_list = ["https://www.fire.ca.gov/media/11416/top20_acres.pdf",
                "https://www.fire.ca.gov/media/11417/top20_destruction.pdf",
                "https://www.fire.ca.gov/media/5512/top20_deadliest.pdf"]

Given the URL of an online PDF file, tabula-py can read the table from the PDF at that URL.

In [4]:
import tabula
In [5]:
df = tabula.read_pdf(pdf_url_list[0])
Got stderr: Aug 26, 2020 1:58:53 PM org.apache.pdfbox.pdmodel.font.FileSystemFontProvider loadDiskCache
WARNING: New fonts found, font cache will be re-built
Aug 26, 2020 1:58:54 PM org.apache.pdfbox.pdmodel.font.FileSystemFontProvider <init>
WARNING: Building on-disk font cache, this may take a while
Aug 26, 2020 1:58:55 PM org.apache.pdfbox.pdmodel.font.FileSystemFontProvider <init>
WARNING: Finished building on-disk font cache, found 465 fonts

In [6]:
# print the first 5 rows of the table
df.head()
Out[6]:
FIRE NAME (CAUSE) DATE COUNTY ACRES STRUCTURES DEATHS
0 MENDOCINO COMPLEX\r1\r(Under Investigation) July 2018 Colusa County, Lake County,\rMendocino County ... 459,123 280 1.0
1 SCU LIGHTNING COMPLEX\r2\r(Under Investigation) * August 2020 Stanislaus, Santa Clara, Alameda,\rContra Cost... 363,772 18 0.0
2 LNU LIGHTNING COMPLEX\r3\r(Under Investigation) * August 2020 Sonoma, Lake, Napa, Yolo & Solano 352,913 937 5.0
3 4\rTHOMAS (Powerlines) December 2017 Ventura & Santa Barbara 281,893 1,063 2.0
4 5\rCEDAR ( Human Related) October 2003 San Diego 273,246 2,820 15.0
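If you would like all three reports in a single table later on, each read_pdf result can be tagged with its source and concatenated. The sketch below uses small placeholder frames in place of the real tabula output:

```python
import pandas as pd

# placeholder frames standing in for the per-PDF tabula output
frames = {'largest': pd.DataFrame({'ACRES': ['459,123']}),
          'deadliest': pd.DataFrame({'ACRES': ['273,246']})}

# tag each frame with its report name, then stack them into one table
combined = pd.concat([f.assign(REPORT=name) for name, f in frames.items()],
                     ignore_index=True)
```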

Step 3. Process the table contents

Looking at the columns presented in the Pandas data frame, some contain composite information and require further processing. For example, the FIRE NAME (CAUSE) column contains a sequential ranking number, the fire name, and the cause in one string, and needs to be split into separate columns.

Next, break the FIRE NAME (CAUSE) column into ID, Fire Name and Cause columns:

In [7]:
# new data frame with the split value columns
new = df["FIRE NAME (CAUSE)"].str.split("(", n = 1, expand = True)

# ranking number column
df["ID"] = new[0].str[0]

# fire name column, with carriage returns removed
df["FIRE NAME"] = new[0].str.replace('\r', '').str[1:]

# cause column, without the enclosing parentheses
df["CAUSE"] = new[1].str[:-1]

# drop the original composite column and the trailing summary row
df = df.drop(["FIRE NAME (CAUSE)"], axis=1)
df.drop(df.tail(1).index, inplace=True)
In [8]:
df.head()
Out[8]:
DATE COUNTY ACRES STRUCTURES DEATHS ID FIRE NAME CAUSE
0 July 2018 Colusa County, Lake County,\rMendocino County ... 459,123 280 1.0 M ENDOCINO COMPLEX1 Under Investigation
1 August 2020 Stanislaus, Santa Clara, Alameda,\rContra Cost... 363,772 18 0.0 S CU LIGHTNING COMPLEX2 Under Investigation)
2 August 2020 Sonoma, Lake, Napa, Yolo & Solano 352,913 937 5.0 L NU LIGHTNING COMPLEX3 Under Investigation)
3 December 2017 Ventura & Santa Barbara 281,893 1,063 2.0 4 THOMAS Powerlines
4 October 2003 San Diego 273,246 2,820 15.0 5 CEDAR Human Related
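Note that for fires whose ranking number follows the name (e.g. MENDOCINO COMPLEX\r1\r...), taking the first character yields the name's first letter rather than the rank, as visible in the first three rows above. A more forgiving alternative (a hypothetical helper, not part of the original sample) is a regex that accepts the number on either side of the name:

```python
import re

def split_name_cause(s):
    """Split e.g. 'MENDOCINO COMPLEX\r1\r(Under Investigation)' into
    (rank, name, cause), tolerating the rank number appearing either
    before or after the fire name."""
    s = s.replace('\r', ' ').replace('*', '').strip()
    m = re.match(r'^(?:(\d+)\s+)?(.*?)\s*(?:(\d+)\s*)?\(([^)]*)\)', s)
    rank = m.group(1) or m.group(3)
    return rank, m.group(2).strip(), m.group(4).strip()
```

This could be applied with df["FIRE NAME (CAUSE)"].apply(split_name_cause) instead of the str.split approach above.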

Similar to the FIRE NAME (CAUSE) column, the ACRES column also carries composite information in some rows. For instance, when a fire spans two states, e.g. CA and NV, the ACRES column reads 271,911 CA /\r43,666 NV; the actual impacted area is the sum of 271,911 and 43,666 acres, so some numeric extraction and summing needs to be done here:

In [9]:
import re

def extract_and_sum(s):
    """Extract the numbers from a composite string, e.g. "271,911 CA /\r43,666 NV",
    and add up the acres across two or more states.
    """
    tmp = map(int, re.sub(r'[ a-zA-Z,]+', '', s, flags=re.I).split('/\r'))
    return sum(tmp)

def replace_and_reassign(s):
    """Turn a formatted numeric string, e.g. "273,911", into an int
    without delimiters, e.g. 273911.
    """
    return int(s.replace(',', ''))

temp = df['ACRES'].apply(lambda x: extract_and_sum(x) if '\r' in x else replace_and_reassign(x))
In [10]:
df['ACRES'] = temp
df.head()
Out[10]:
DATE COUNTY ACRES STRUCTURES DEATHS ID FIRE NAME CAUSE
0 July 2018 Colusa County, Lake County,\rMendocino County ... 459123 280 1.0 M ENDOCINO COMPLEX1 Under Investigation
1 August 2020 Stanislaus, Santa Clara, Alameda,\rContra Cost... 363772 18 0.0 S CU LIGHTNING COMPLEX2 Under Investigation)
2 August 2020 Sonoma, Lake, Napa, Yolo & Solano 352913 937 5.0 L NU LIGHTNING COMPLEX3 Under Investigation)
3 December 2017 Ventura & Santa Barbara 281893 1,063 2.0 4 THOMAS Powerlines
4 October 2003 San Diego 273246 2,820 15.0 5 CEDAR Human Related
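The STRUCTURES column still carries thousands separators (e.g. 1,063); if you plan to aggregate it, the same kind of cleanup can be applied. A small sketch on stand-in data:

```python
import pandas as pd

# stand-in for the STRUCTURES column as read from the PDF
df = pd.DataFrame({'STRUCTURES': ['1,063', '2,820', '18']})

# strip the thousands separators and cast to int
df['STRUCTURES'] = df['STRUCTURES'].str.replace(',', '', regex=False).astype(int)
```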

Now that we have cleaned up the FIRE NAME (CAUSE) and ACRES columns, the last column that needs splitting is COUNTY, which lists multiple impacted counties when a fire spreads across more than one county. Here, we need to split a row of N counties into N rows with one county each.

Since the delimiter can be either , or &, we need to apply explode_str twice (and then remove \r and County from the column):

In [11]:
"""Using `numpy.arrange` in creating new rows as place holder to hold the splitted strings from df[col] based on `sep`;
   then `iloc` and `assign` to place the splitted results into newly added rows
"""
def explode_str(df, col, sep):
    s = df[col]
    i = np.arange(len(s)).repeat(s.str.count(sep) + 1)
    return df.iloc[i].assign(**{col: sep.join(s).split(sep)})

new_df = explode_str(df, 'COUNTY', ',')
In [12]:
new_df = explode_str(new_df, 'COUNTY', '&')
In [13]:
new_df['COUNTY'] = new_df['COUNTY'].str.replace('\r','')
In [14]:
new_df['NAME'] = new_df['COUNTY'].str.replace(' County','')
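As an aside, if your pandas version is 0.25 or newer, Series.str.split combined with DataFrame.explode achieves the same result without a custom helper. A minimal sketch under that assumption, handling both delimiters with one regex:

```python
import pandas as pd

df = pd.DataFrame({'COUNTY': ['Sonoma, Lake & Napa', 'San Diego'],
                   'ACRES': [352913, 273246]})

# split on either delimiter in one pass, then expand each list to one row per county
exploded = (df.assign(COUNTY=df['COUNTY'].str.split(r'\s*[,&]\s*'))
              .explode('COUNTY')
              .reset_index(drop=True))
```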

To reduce ambiguity, we add an additional STATE column to the DataFrame, so that two counties with the same name (but in different states) will not cause confusion.

In [15]:
new_df['STATE'] = 'CA'
In [16]:
new_df.head()
Out[16]:
DATE COUNTY ACRES STRUCTURES DEATHS ID FIRE NAME CAUSE NAME STATE
0 July 2018 Colusa County 459123 280 1.0 M ENDOCINO COMPLEX1 Under Investigation Colusa CA
0 July 2018 Lake County 459123 280 1.0 M ENDOCINO COMPLEX1 Under Investigation Lake CA
0 July 2018 Mendocino County 459123 280 1.0 M ENDOCINO COMPLEX1 Under Investigation Mendocino CA
0 July 2018 Glenn County 459123 280 1.0 M ENDOCINO COMPLEX1 Under Investigation Glenn CA
1 August 2020 Stanislaus 363772 18 0.0 S CU LIGHTNING COMPLEX2 Under Investigation) Stanislaus CA

Step 4. Merge two tables

The DataFrame we have so far contains fire statistics, and needs to be joined with shape or geometry data before it can be drawn or saved as a Web Map item. To do so, we access an existing feature layer called USA Counties, and merge the geographic attributes of each county in this feature layer (e.g. 'Shape_Area', 'Shape_Leng', 'Shape__Area', 'Shape__Length') into the fire statistics table.

Before we can access the feature layer, let us connect to our GIS so we can query this data and present it as a map, either by specifying a username and password, e.g. gis = GIS("https://www.arcgis.com", "arcgis_python", "P@ssword123"), or via an existing profile:

In [17]:
from arcgis.gis import GIS

gis = GIS('home')
In [18]:
counties_item = gis.content.get('48f9af87daa241c4b267c5931ad3b226')
counties_item
Out[18]:
USA Counties
USA Counties provides the county boundaries of the United States in the 50 states, the District of Columbia, and Puerto Rico.Feature Layer Collection by esri_dm
Last Modified: December 11, 2019
0 comments, 89,655,319 views
In [19]:
counties_flayer = counties_item.layers[0]
counties_fset = counties_flayer.query("STATE_NAME='California'")
counties_fset.sdf['NAME'].head()
Out[19]:
0       Monterey
1     Santa Cruz
2    Santa Clara
3     San Benito
4        Ventura
Name: NAME, dtype: object
In [20]:
counties_fset.sdf.columns
Out[20]:
Index(['FID', 'OBJECTID', 'NAME', 'STATE_NAME', 'STATE_FIPS', 'CNTY_FIPS',
       'FIPS', 'POPULATION', 'POP_SQMI', 'POP2010', 'POP10_SQMI', 'WHITE',
       'BLACK', 'AMERI_ES', 'ASIAN', 'HAWN_PI', 'HISPANIC', 'OTHER',
       'MULT_RACE', 'MALES', 'FEMALES', 'AGE_UNDER5', 'AGE_5_9', 'AGE_10_14',
       'AGE_15_19', 'AGE_20_24', 'AGE_25_34', 'AGE_35_44', 'AGE_45_54',
       'AGE_55_64', 'AGE_65_74', 'AGE_75_84', 'AGE_85_UP', 'MED_AGE',
       'MED_AGE_M', 'MED_AGE_F', 'HOUSEHOLDS', 'AVE_HH_SZ', 'HSEHLD_1_M',
       'HSEHLD_1_F', 'MARHH_CHD', 'MARHH_NO_C', 'MHH_CHILD', 'FHH_CHILD',
       'FAMILIES', 'AVE_FAM_SZ', 'HSE_UNITS', 'VACANT', 'OWNER_OCC',
       'RENTER_OCC', 'NO_FARMS12', 'AVE_SIZE12', 'CROP_ACR12', 'AVE_SALE12',
       'SQMI', 'Shape_Leng', 'Shape_Area', 'Shape__Area', 'Shape__Length',
       'SHAPE'],
      dtype='object')
In [21]:
cols_2 = ['NAME', 'OBJECTID', 'POPULATION', 'POP_SQMI',
          'SHAPE', 'SQMI', 'STATE_FIPS', 'STATE_NAME', 'Shape_Area',
          'Shape_Leng', 'Shape__Area', 'Shape__Length']
overlap_rows = pd.merge(left = counties_fset.sdf[cols_2], right = new_df, how='inner',
                        on = 'NAME')
overlap_rows.head()
Out[21]:
NAME OBJECTID POPULATION POP_SQMI SHAPE SQMI STATE_FIPS STATE_NAME Shape_Area Shape_Leng ... Shape__Length DATE COUNTY ACRES STRUCTURES DEATHS ID FIRE NAME CAUSE STATE
0 Monterey 34 431696 130.2 {'rings': [[[-13520087.6809747, 4283419.790017... 3314.42 06 California 0.860396 6.495277 ... 7.926464e+05 July 1977 Monterey 177866 0 0.0 1 4MARBLE CONE Lightning CA
1 Monterey 34 431696 130.2 {'rings': [[[-13520087.6809747, 4283419.790017... 3314.42 06 California 0.860396 6.495277 ... 7.926464e+05 June 2008 Monterey 162818 58 0.0 1 6BASIN COMPLEX Lightning CA
2 Ventura 38 861790 464.1 {'rings': [[[-13306531.4673448, 3933240.934103... 1857.01 06 California 0.471859 3.916199 ... 4.811731e+05 September 1932 Ventura 220000 0 0.0 1 0MATILIJA Undetermined CA
3 Ventura 38 861790 464.1 {'rings': [[[-13306531.4673448, 3933240.934103... 1857.01 06 California 0.471859 3.916199 ... 4.811731e+05 September 2006 Ventura 162702 11 0.0 1 7DAY FIRE Human Related CA
4 Los Angeles 39 10275545 2514.1 {'rings': [[[-13149456.875, 3995666.875], [-13... 4087.19 06 California 1.036926 8.423152 ... 1.020462e+06 August 2009 Los Angeles 160557 209 2.0 1 8STATION Human Related CA

5 rows × 21 columns
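When joining on a name column like this, it is worth checking for fire rows that failed to match any county, for example because of stray whitespace left over from the PDF extraction. A small sketch with indicator=True, using synthetic frames rather than the layer above:

```python
import pandas as pd

# synthetic stand-ins for counties_fset.sdf[cols_2] and new_df
counties = pd.DataFrame({'NAME': ['Monterey', 'Ventura'], 'SQMI': [3314.42, 1857.01]})
fires = pd.DataFrame({'NAME': ['Monterey', 'Shasta '], 'ACRES': [177866, 59000]})

# an outer merge with indicator=True flags rows that matched on only one side
check = counties.merge(fires, on='NAME', how='outer', indicator=True)
unmatched = check.loc[check['_merge'] == 'right_only', 'NAME'].tolist()
```

Any names surfaced this way (here the padded 'Shasta ') point at values to clean before the inner merge.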

Step 5. Draw the top 20 impacted counties

We can now draw the top 20 impacted counties, represented by the merged output DataFrame overlap_rows, with the API method DataFrame.spatial.plot().

In [22]:
map1 = gis.map('California, USA')
map1.layout.height = '650px'
map1.legend = True
map1
In [23]:
map1.clear_graphics()
overlap_rows.spatial.plot(kind='map', map_widget=map1,
                          renderer_type='c',  # for class breaks renderer
                          method='esriClassifyNaturalBreaks',  # classification algorithm
                          class_count=7,  # choose the number of classes
                          col='ACRES',  # numeric column to classify
                          cmap='hot',  # color map to pick colors from for each class
                          alpha=0.7  # specify opacity
                         )
Out[23]:
True

Besides viewing the top 20 impacted counties in the notebook's map widget, we can also save the map as a web map item, and then export that web map to a PDF output again.

In [24]:
item_properties = {
    "title": "Top 20 largest Wildfires in California",
    "tags" : "wildfire",
    "snippet": "read pdf and export pdf",
    "description": "test description"
}
item = map1.save(item_properties)
item
Out[24]:
Top 20 largest Wildfires in California
read pdf and export pdfWeb Map by arcgis_python
Last Modified: August 26, 2020
0 comments, 0 views

Step 6. Export to PDF

If you would also like to export the fire impacting counties into a PDF map, the required steps are: (1) obtain the web map json via the Item.get_data() API method, (2) modify the web map json to also include exportOptions and mapOptions, and (3) use arcgis.mapping.export_map() to get the PDF deliverable of the web map.

In [25]:
webmap_json = item.get_data()
webmap_json.keys()
Out[25]:
dict_keys(['operationalLayers', 'baseMap', 'spatialReference', 'version', 'authoringApp', 'authoringAppVersion'])
In [26]:
map1.extent
Out[26]:
{'spatialReference': {'latestWkid': 3857, 'wkid': 102100},
 'xmin': -14383204.412078053,
 'ymin': 3664734.601863475,
 'xmax': -11976355.265435062,
 'ymax': 5254624.790194718}

Based on the web map specification, the mapOptions and exportOptions also need to be specified in the JSON input. Here, the extent and scale are chosen based on the extent of the State of California [2].

In [27]:
webmap_json['mapOptions'] = {
    "extent" : {
        "xmin": -14383204.412078043,
        "ymin": 3664734.6018634685,
        "xmax": -11976355.2654350533,
        "ymax": 5254624.790194712,
        "spatialReference": {
            "latestWkid": 3857,
            "wkid": 102100
        }
    },
    "scale" : 9244648.868618,
    "spatialReference" : {
        "wkid" : 102100
    }
}
In [28]:
webmap_json['exportOptions'] = { "dpi" : 96,
                                 "outputSize" :  [746, 575]
                               }

Finally, we use the `export_map` method to create a PDF output of the web map item [3].

In [29]:
from arcgis.mapping import export_map
res = export_map(web_map_as_json=webmap_json, format='PDF', layout_template='MAP_ONLY')
#res["url"]
res.url
Out[29]:
'https://utility.arcgisonline.com/arcgis/rest/directories/arcgisoutput/Utilities/PrintingTools_GPServer/x_____xl3SfXO9Y8JofGGdEpHmfWw..x_____x_ags_7bdbaffa-e78b-11ea-96b8-22000bb3a270.pdf'

The res.url shown above points to the URL of the exported map, which displays the top 20 counties in California most impacted by wildfire. If you are interested in creating more maps from the PDF files provided by CAL FIRE, you can repeat this workflow with the two other links stored in the variable pdf_url_list.

