Google data for Displacement Risk
We used data from the Google Places API for displacement risk mapping.
The data we needed to create was a measure of the distance to the nearest school, pharmacy, restaurant, and supermarket for each Census tract.
Because Census tracts are so large, we wanted a way to represent some of the within-tract variation. For that reason, we pulled the distance to each amenity for each zone, then aggregated up to the Census tract by computing a weighted average distance across zones for each amenity. The weight for each zone was its number of households.
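The household-weighted aggregation described above can be sketched with pandas as follows. The column names and the sample values are illustrative only; they are not necessarily those used in the actual script.

```python
import numpy as np
import pandas as pd

# Hypothetical zone-level input: each row is one zone, with its tract,
# household count, and distance to the nearest school.
zones = pd.DataFrame({
    "tract_id":    ["A", "A", "B"],
    "households":  [100, 300, 200],
    "dist_school": [2.0, 4.0, 1.5],
})

# Household-weighted average distance across the zones in each tract.
tract_dist = zones.groupby("tract_id").apply(
    lambda g: np.average(g["dist_school"], weights=g["households"])
)
# Tract A: (2.0 * 100 + 4.0 * 300) / 400 = 3.5
```

The same pattern repeats for each amenity column (pharmacy, restaurant, supermarket) before writing the tract-level output file.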
The script that runs the process is here: https://github.com/psrc/data-science/blob/master/google_data/google_places_data/get_distances/get_distances/get_distances.py
To use the script you need to set up a Google API key.
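A nearest-amenity lookup of the kind the script performs can be sketched with the `googlemaps` Python client (`pip install googlemaps`). This is an assumption-laden sketch, not the script's actual code: `YOUR_API_KEY` is a placeholder, and the amenity type and distance calculation may differ from what the script does.

```python
import math

def nearest_amenity(gmaps, lat, lng, amenity_type):
    """Return the name and location of the closest place of the given type."""
    # rank_by='distance' sorts results by proximity; it requires a keyword,
    # name, or type argument and no radius.
    results = gmaps.places_nearby(location=(lat, lng),
                                  rank_by="distance",
                                  type=amenity_type)
    place = results["results"][0]
    loc = place["geometry"]["location"]
    return place["name"], (loc["lat"], loc["lng"])

def haversine_miles(lat1, lng1, lat2, lng2):
    """Great-circle distance in miles between two lat-long points."""
    r = 3958.8  # mean Earth radius, miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    a = (math.sin((p2 - p1) / 2) ** 2
         + math.cos(p1) * math.cos(p2)
         * math.sin(math.radians(lng2 - lng1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Usage (requires a real API key):
# import googlemaps
# gmaps = googlemaps.Client(key="YOUR_API_KEY")
# name, loc = nearest_amenity(gmaps, 47.6062, -122.3321, "pharmacy")
# dist = haversine_miles(47.6062, -122.3321, *loc)
```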
The input files can be found here: https://github.com/psrc/data-science/tree/master/google_data/google_places_data
- The lat-long of each zone centroid: zone_file = 'zone_lat_long.csv'. The zone lat-long file can be produced from the traffic analysis zones layer used in the travel model; it is the 4,000-zone system.
- The tract associated with each zone and the number of households in the zone: 'tract_zone_hh_file.csv'. The households per zone come from the land use base year data on households by parcel, which the travel modelers have readily available as an input to SoundCast. First the data is aggregated to the zone level by grouping by zone ID and summing the number of households. Then each zone is assigned a tract by finding which tract the zone centroid falls into.
- The distance from each zone centroid to each amenity: zone_distances = 'zone_dist_amenity.csv'
- The weighted average distance to each amenity in the tract, across the zones: out_file = 'tract_dist_amenity.csv'
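The zone-to-tract assignment (finding which tract each zone centroid falls into) can be sketched with a ray-casting point-in-polygon test. In practice a GIS library such as geopandas would do this with a spatial join; the tract polygon and zone centroids below are made-up illustrations.

```python
def point_in_polygon(x, y, polygon):
    """Return True if (x, y) lies inside the polygon (list of (x, y) vertices)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Toggle on each edge crossed by a horizontal ray extending right.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical tract polygon (lon, lat vertices) and zone centroids.
tracts = {
    "53033000100": [(-122.40, 47.60), (-122.30, 47.60),
                    (-122.30, 47.70), (-122.40, 47.70)],
}
zones = {101: (-122.35, 47.65), 102: (-122.50, 47.65)}

zone_to_tract = {}
for zone_id, (lon, lat) in zones.items():
    for tract_id, poly in tracts.items():
        if point_in_polygon(lon, lat, poly):
            zone_to_tract[zone_id] = tract_id
            break
```

Here zone 101 falls inside the example tract and zone 102 falls outside it, so only zone 101 gets an assignment.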