SocialNav-SUB (Social Navigation Scene Understanding Benchmark)

This repository contains the code and resources for benchmarking vision-language models (VLMs) on scene understanding in challenging social navigation scenarios. For more information, please see the paper (link to be added).

Getting Started

  1. Install Dependencies

    pip install -r requirements.txt
  2. Download the Dataset

    Please download our dataset from HuggingFace by running the download_dataset.sh script:

    ./download_dataset.sh
  3. Benchmark a VLM

    Create a config file that specifies the VLM under the baseline_model parameter, along with experiment parameters (such as the prompt representation). API-based models require an environment variable containing an API key (GOOGLE_API_KEY or OPENAI_API_KEY).

    python socialnavsub/evaluate_vlm.py --cfg_path <cfg_path>
  4. View Results

    Results will be saved in the directory specified in the config file under the evaluation_folder entry. To postprocess the results, please run:

    python socialnavsub/postprocess_results.py --cfg_path <cfg_path>

    The results will be viewable in the CSV file whose path is specified by the postprocessed_results_csv entry in the config file (postprocessed_results.csv by default).
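
A config for the steps above might look like the following sketch. Only the keys mentioned in this README (baseline_model, evaluation_folder, postprocessed_results_csv) are documented; the file format and value shown for baseline_model are assumptions for illustration, so check the repository's example configs for the actual schema:

```yaml
# Hypothetical config sketch -- verify key names and format against the
# repository's own example configs before use.
baseline_model: gpt4o                                # which VLM to benchmark
evaluation_folder: results/gpt4o_run                 # where raw results are saved
postprocessed_results_csv: postprocessed_results.csv # output of postprocessing
```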

Contributing

Contributions are welcome! Please open issues or pull requests for bug fixes, new features, or improvements.

How to add support for additional VLMs

To add a new VLM, create a class file containing a subclass of APIBaseline (defined in api_baseline.py). For examples, see gemini.py, llava.py, and gpt4o.py.
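
A minimal subclass might look like the sketch below. The real APIBaseline lives in the repository's api_baseline.py; the stand-in base class and the query method name here are assumptions for illustration, not the repository's actual interface, so mirror whichever methods gemini.py or gpt4o.py actually override:

```python
# Hypothetical sketch: the stand-in base class and method names below are
# assumptions, not the repository's actual APIBaseline interface.

class APIBaseline:
    """Stand-in for the base class defined in api_baseline.py."""

    def __init__(self, model_name: str):
        self.model_name = model_name

    def query(self, prompt: str, images: list) -> str:
        """Subclasses send the prompt (and images) to their VLM."""
        raise NotImplementedError


class MyVLMBaseline(APIBaseline):
    """Example of wiring a new VLM into the benchmark."""

    def __init__(self):
        super().__init__(model_name="my-vlm")

    def query(self, prompt: str, images: list) -> str:
        # Call your VLM's API here and return its text response;
        # this placeholder just echoes the prompt.
        return f"[{self.model_name}] response to: {prompt}"
```

The new class would then be selected via the baseline_model entry in the config file, matching however the existing baselines are registered.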

Contact

For questions or support, please open an issue or email [email protected].

About

[CoRL 2025] VLM Benchmark for Social Navigation Scene Understanding
