Creating OSM-compatible mappings directly from multiple sequences of street-level imagery, particularly 360-degree images, would be a groundbreaking step in simplifying geospatial data generation and enhancing mapping accuracy. This project serves as a benchmark for evaluating the performance of large language models (LLMs) in mapping tasks, specifically designed to test their ability to generate structured mappings and automate workflows efficiently using this kind of visual input.
- Photos: This directory contains street-level imagery. Each image follows the naming convention `{sequence_id}_{sequence_index}.png`, where `sequence_id` is the unique identifier of a car trip and `sequence_index` is the numerical position of the photo within that sequence. Together they form the unique identifier of a photo.
- Metadata: This directory includes metadata related to the photos, the sequences of photos per way, and ground truth annotations at way level. We also provide predictions from different LLMs, following the naming convention `predictions_*.csv`.
- Demo Utilities: This directory contains a demo notebook showcasing a possible approach for creating predictions for sequences of street-level imagery.
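The photo naming convention above can be exercised with a short sketch. This helper is illustrative and not part of the repository; the directory name `photos` is taken from the setup instructions below.

```python
# Illustrative sketch (not part of the repo): group the files in the
# photos directory into per-trip sequences using the documented naming
# convention {sequence_id}_{sequence_index}.png.
import os
from collections import defaultdict

def group_photos(photo_dir="photos"):
    """Return {sequence_id: [filenames sorted by sequence_index]}."""
    sequences = defaultdict(list)
    for name in os.listdir(photo_dir):
        stem, ext = os.path.splitext(name)
        if ext != ".png":
            continue
        # The index is the part after the last underscore; the
        # sequence id is everything before it.
        sequence_id, _, index = stem.rpartition("_")
        sequences[sequence_id].append((int(index), name))
    return {sid: [n for _, n in sorted(items)]
            for sid, items in sequences.items()}
```

Iterating over a sequence in `sequence_index` order then reconstructs the car trip frame by frame.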
Tools for evaluating predictions and generating metrics:

- `eval.py`: Provides a streamlined command-line evaluation method. It generates `.csv` and `.md` files in the `evaluation_results` directory containing feature-specific and general metrics.

  Usage:

  ```shell
  python eval.py path/to/predictions.csv [id_suffix]
  ```

  - `path/to/predictions.csv`: Path to the predictions file in `.csv` format.
  - `id_suffix` (optional): A custom identifier for the evaluation. If not provided, a default identifier is used.

- `interactive_eval_notebook.ipynb`: Contains interactive evaluation and visualization utilities at the feature level.
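Since several `predictions_*.csv` files are provided, they can be batch-evaluated with a small script. This is a hedged sketch: only the `eval.py` command line comes from the usage above; the `metadata` directory location and the helper itself are assumptions.

```python
# Hypothetical batch-evaluation helper: run eval.py on every
# predictions_*.csv found in the metadata directory, using the file's
# label (e.g. "gpt4" from predictions_gpt4.csv) as the id_suffix.
import glob
import os
import subprocess
import sys

def id_suffix_for(path):
    """Derive an id_suffix from a predictions_*.csv filename."""
    name = os.path.splitext(os.path.basename(path))[0]
    return name.replace("predictions_", "", 1)

def evaluate_all(metadata_dir="metadata"):
    for csv_path in sorted(glob.glob(os.path.join(metadata_dir,
                                                  "predictions_*.csv"))):
        # Equivalent to: python eval.py path/to/predictions.csv [id_suffix]
        subprocess.run([sys.executable, "eval.py", csv_path,
                        id_suffix_for(csv_path)], check=True)

if __name__ == "__main__":
    evaluate_all()
```

Each run then writes its metrics to `evaluation_results` under a distinct identifier, so results for different LLMs can be compared side by side.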
The `evaluation_results` directory contains metrics and reports generated after running the evaluation scripts.
- Download `photos.zip` from Google Drive and extract its contents to `./photos`.
- Details about the metadata files, feature-specific map-making, and other related information can be accessed in this documentation.
```shell
conda create -n "automapper"
conda activate automapper
pip install -r requirements.txt
```
Feel free to contribute by improving the benchmarks.