PDF Table to Pandas Data Frame to PDF Map¶
Often, PDF files contain useful information presented in tabular form. If such data includes location information, it becomes far more insightful when presented as a cartographic map. This sample shows how tabula-py and Pandas can be used to extract data from a table within a PDF file (in this case, a PDF accessible from the California Department of Forestry & Fire Protection (CAL FIRE) website) and how that data can then be brought into the GIS for further analysis and visualization.
Step 1. Install tabula-py¶
Note: to run this sample, you need a few extra libraries in your conda environment. If you don't have them, install them by running the following commands from cmd.exe or your shell:
pip install lxml
pip install tabula-py==1.4.3
`tabula-py` can read tables from a PDF and convert them into a pandas DataFrame. For tabula-py to work in your environment, make sure Java 7 or 8 is installed on your machine before installing tabula-py.
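To confirm that a compatible Java runtime is available before installing tabula-py, you can run a quick check like the sketch below (this assumes the `java` executable is on your PATH; note that `java -version` prints to stderr):
import subprocess
# tabula-py requires Java 7 or 8; `java -version` reports its output on stderr
print(subprocess.run(["java", "-version"], capture_output=True, text=True).stderr)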
import pandas as pd
import numpy as np
Step 2. Read table from PDF¶
The California Department of Forestry & Fire Protection (CAL FIRE) provides statistics and reports that are tracked on a weekly basis, as well as snapshots of the number of fires and acres burned. These are preliminary numbers taken from the Computer Aided Dispatch system, and will likely change as dispatched wildfires may end up being other types of fires or false alarms [1].
The table of Number of Fires and Acres can be fetched from the web page and parsed into a Pandas data frame as follows:
df = pd.read_html("https://www.fire.ca.gov/stats-events/")[0]
df.head()
CAL FIRE also provides statistical summary reports for historic wildfires in California, e.g. the Top 20 largest, Top 20 most destructive, and Top 20 deadliest fires, the URLs of which are kept in a list below:
pdf_url_list = ["https://www.fire.ca.gov/media/11416/top20_acres.pdf",
"https://www.fire.ca.gov/media/11417/top20_destruction.pdf",
"https://www.fire.ca.gov/media/5512/top20_deadliest.pdf"]
Given the URL of an online PDF file, tabula-py can read the table directly from the PDF at that URL.
import tabula
# read the first table from the PDF at the given URL into a pandas DataFrame
df = tabula.read_pdf(pdf_url_list[0])
# print the first 5 rows of the table
df.head()
Step 3. Process the table contents¶
Looking at the columns in the Pandas data frame, some contain composite information and require further processing. For example, the FIRE NAME (CAUSE) column packs a sequential ranking number, the fire name, and the cause into one string, and needs to be split into separate columns.
Next, break the FIRE NAME (CAUSE) column into ID, FIRE NAME, and CAUSE columns:
# new data frame with the "FIRE NAME (CAUSE)" column split on the first "("
new = df["FIRE NAME (CAUSE)"].str.split("(", n=1, expand=True)
# normalize any carriage returns embedded in the first part
first = new[0].str.replace('\r', ' ')
# the leading ranking number becomes the ID column
df["ID"] = first.str.extract(r'^\s*(\d+)', expand=False)
# the remainder of the first part becomes the fire name
df["FIRE NAME"] = first.str.extract(r'^\s*\d+\s*(.+)$', expand=False).str.strip()
# the second part (minus the trailing ")") becomes the cause
df["CAUSE"] = new[1].str[:-1]
# drop the original composite column and the last row
df = df.drop(["FIRE NAME (CAUSE)"], axis=1)
df.drop(df.tail(1).index, inplace=True)
df.head()
Similar to the FIRE NAME (CAUSE) column, the ACRES column also holds composite information for some rows. For instance, when a fire spans two states, e.g. CA and NV, the ACRES column reads 271,911 CA /\r43,666 NV, and the actual impacted area is the sum of 271,911 and 43,666 acres. Some numeric extraction and summing is therefore needed:
import re

def extract_and_sum(s):
    """Extract the numbers from a composite string, e.g. "271,911 CA /\r43,666 NV",
    and add up the acres across the two or more states.
    """
    # strip letters, spaces, and thousands separators, then split on the "/\r" state delimiter
    tmp = map(int, re.sub(r'[ a-zA-Z,]+', '', s).split('/\r'))
    return sum(tmp)

def replace_and_reassign(s):
    """Turn a formatted numeric string, e.g. "273,911", into an int without delimiters, e.g. 273911.
    """
    return int(s.replace(',', ''))

temp = df['ACRES'].apply(lambda x: extract_and_sum(x) if '\r' in x else replace_and_reassign(x))
df['ACRES'] = temp
df.head()
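As a quick sanity check of the two helpers (a hypothetical example using the composite string shown above, not a value read from the PDF):
# hypothetical sanity check of the helpers defined above
assert extract_and_sum("271,911 CA /\r43,666 NV") == 271911 + 43666   # = 315,577 acres
assert replace_and_reassign("273,911") == 273911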
Now that we have cleaned up the FIRE NAME (CAUSE) and ACRES columns, the last column that needs splitting is COUNTY, which lists multiple impacted counties when a fire spans more than one county. Here, we need to split a row listing N counties into N rows with one county each. Since the delimiter can be either , or &, we call explode_str twice (and then remove \r and the word County from the column):
"""Using `numpy.arrange` in creating new rows as place holder to hold the splitted strings from df[col] based on `sep`;
then `iloc` and `assign` to place the splitted results into newly added rows
"""
def explode_str(df, col, sep):
s = df[col]
i = np.arange(len(s)).repeat(s.str.count(sep) + 1)
return df.iloc[i].assign(**{col: sep.join(s).split(sep)})
new_df = explode_str(df, 'COUNTY', ',')
new_df = explode_str(new_df, 'COUNTY', '&')
new_df['COUNTY'] = new_df['COUNTY'].str.replace('\r','')
new_df['NAME'] = new_df['COUNTY'].str.replace(' County','')
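For illustration, here is explode_str applied to a toy DataFrame with made-up values (not the CAL FIRE data), splitting one row of three counties into three rows:
# toy illustration of explode_str (hypothetical values, not the CAL FIRE data)
demo = pd.DataFrame({'FIRE': ['DEMO'], 'COUNTY': ['Napa, Sonoma & Lake']})
demo = explode_str(explode_str(demo, 'COUNTY', ','), 'COUNTY', '&')
demo['COUNTY'] = demo['COUNTY'].str.strip()
demo   # three rows: Napa / Sonoma / Lake, each paired with FIRE == 'DEMO'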
To reduce ambiguity, we add an additional STATE column to the DataFrame, so that two counties with the same name (but in different states) will not cause confusion.
new_df['STATE'] = 'CA'
new_df.head()
Step 4. Merge two tables¶
The DataFrame we have so far contains fire statistics, and needs to be joined with shape or geometry data before it can be drawn or saved as a Web Map item. To do so, we will access an existing feature layer called USA Counties, and merge the geographic attributes of each county in this feature layer (e.g. 'Shape_Area', 'Shape_Leng', 'Shape__Area', 'Shape__Length') into the fire statistics table.
Before we can access the feature layer, let us connect to our GIS so we can geocode this data and present it as a map, either by specifying a username and password, e.g. gis = GIS("https://www.arcgis.com", "arcgis_python", "P@ssword123"), or via an existing profile:
from arcgis.gis import GIS
gis = GIS('home')
counties_item = gis.content.get('48f9af87daa241c4b267c5931ad3b226')
counties_item
counties_flayer = counties_item.layers[0]
counties_fset = counties_flayer.query("STATE_NAME='California'")
counties_fset.sdf['NAME'].head()
counties_fset.sdf.columns
cols_2 = ['NAME', 'OBJECTID', 'POPULATION', 'POP_SQMI',
'SHAPE', 'SQMI', 'STATE_FIPS', 'STATE_NAME', 'Shape_Area',
'Shape_Leng', 'Shape__Area', 'Shape__Length']
overlap_rows = pd.merge(left = counties_fset.sdf[cols_2], right = new_df, how='inner',
on = 'NAME')
overlap_rows.head()
Step 5. Draw the top 20 impacted counties¶
We can now draw the top 20 impacted counties, represented by the merged output DataFrame overlap_rows, with the API method DataFrame.spatial.plot().
map1 = gis.map('California, USA')
map1.layout.height = '650px'
map1.legend = True
map1
map1.clear_graphics()
overlap_rows.spatial.plot(kind='map', map_widget=map1,
renderer_type='c', # for class breaks renderer
method='esriClassifyNaturalBreaks', # classification algorithm
class_count=7, # choose the number of classes
col='ACRES', # numeric column to classify
cmap='hot', # color map to pick colors from for each class
alpha=0.7 # specify opacity
)
Besides viewing the top 20 impacted counties in the notebook's map widget, we can also save the map as a web map item, and then export that web map back to a PDF.
item_properties = {
"title": "Top 20 largest Wildfires in California",
"tags" : "wildfire",
"snippet": "read pdf and export pdf",
"description": "test description"
}
item = map1.save(item_properties)
item
Step 6. Export to PDF¶
If you would also like to export the fire-impacted counties into a PDF map, the required steps are: (1) obtain the web map JSON via the Item.get_data() API method, (2) modify the web map JSON to also include exportOptions and mapOptions, and (3) use arcgis.mapping.export_map() to get the PDF deliverable of the web map.
webmap_json = item.get_data()
webmap_json.keys()
map1.extent
Based on the web map specification, the mapOptions and exportOptions also need to be specified in the JSON input. In the cell below, the extent and scale are chosen to cover the State of California [2].
webmap_json['mapOptions'] = {
"extent" : {
"xmin": -14383204.412078043,
"ymin": 3664734.6018634685,
"xmax": -11976355.2654350533,
"ymax": 5254624.790194712,
"spatialReference": {
"latestWkid": 3857,
"wkid": 102100
}
},
"scale" : 9244648.868618,
"spatialReference" : {
"wkid" : 102100
}
}
webmap_json['exportOptions'] = { "dpi" : 96,
"outputSize" : [746, 575]
}
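Alternatively, instead of hard-coding the extent above, you could reuse the map widget's current extent (a minimal sketch, assuming map1 has finished rendering so that map1.extent is a populated dict):
# optional: derive mapOptions from the live widget extent instead of the hard-coded values
current_extent = dict(map1.extent)
webmap_json['mapOptions'] = {
    "extent": current_extent,
    "scale": 9244648.868618,
    "spatialReference": current_extent.get("spatialReference", {"wkid": 102100})
}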
Finally, we use the `export_map` method to create a PDF output of the web map item [3].
from arcgis.mapping import export_map
res = export_map(web_map_as_json=webmap_json, format='PDF', layout_template='MAP_ONLY')
res.url
The res.url shown above points to the URL of the exported map, which displays the top 20 counties in California most impacted by wildfire. If you are interested in creating more maps from the PDF files provided by CAL FIRE, you can repeat the workflow above with the two other links stored in the variable pdf_url_list.
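If you want a local copy of the exported map, you can download it from that URL, for example with the requests library (a minimal sketch; the output filename is arbitrary):
import requests
# save the exported PDF next to the notebook (hypothetical filename)
with open('top20_largest_wildfires_CA.pdf', 'wb') as f:
    f.write(requests.get(res.url).content)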