🚀 Excited to share that our paper "GeoSAM: Fine-tuning SAM with Multi-Modal Prompts for Mobility Infrastructure Segmentation" has been accepted to the 28th European Conference on Artificial Intelligence (ECAI 2025).
Hello! This is an updated version of GeoSAM. Here, we implement a fine-tuning approach with automatically generated multi-modal prompts: point prompts from a pre-trained, task-specific traditional model, complemented by text prompts provided by users.
If you have any questions, fill out this form or just email me at hm4013@wayne.edu. I will get back to you as soon as possible.
See the demo here.
This release differs from the previous version, which you can find in the GeoSAM_old branch.
In the previous approach, we used feature embeddings from a traditional model to create dense prompts, which assisted the generation of our sparse (click) prompts. In the new version, we instead rely on natural language: rather than dense prompts, we use text prompts to give SAM linguistic context alongside the click prompts. This multi-prompt system feeds text directly to SAM as prompts, providing the model with richer semantic context.
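To make the prompt flow concrete, below is a minimal sketch (not the GeoSAM training code) of how point prompts can be sampled from a coarse mask produced by a pre-trained task-specific model and passed to off-the-shelf SAM. The checkpoint path, the stand-in image and mask, and the random sampling heuristic are illustrative assumptions; the text-prompt fusion that GeoSAM adds during fine-tuning is only indicated in a comment, since vanilla SAM's public API does not expose text prompts.

```python
# Minimal sketch (not the official GeoSAM code): sample point prompts from a
# coarse mask produced by a pre-trained, task-specific model, then pass them
# to SAM. The checkpoint path and the stand-in image/mask are placeholders.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

def sample_point_prompts(coarse_mask: np.ndarray, n_points: int = 8):
    """Sample foreground pixel coordinates from a coarse binary mask (H, W)."""
    ys, xs = np.nonzero(coarse_mask)
    if len(ys) == 0:
        return np.zeros((0, 2), dtype=np.float32), np.zeros((0,), dtype=np.float32)
    idx = np.random.choice(len(ys), size=min(n_points, len(ys)), replace=False)
    points = np.stack([xs[idx], ys[idx]], axis=1).astype(np.float32)  # SAM expects (x, y)
    labels = np.ones(len(idx), dtype=np.float32)                      # 1 = foreground point
    return points, labels

# Assumption: a local SAM ViT-B checkpoint is available at this path.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
predictor = SamPredictor(sam)

image = np.zeros((1024, 1024, 3), dtype=np.uint8)      # stand-in for an aerial tile
coarse_mask = np.zeros((1024, 1024), dtype=np.uint8)   # stand-in for traditional-model output
coarse_mask[400:600, 100:900] = 1                      # e.g. a predicted road strip

points, labels = sample_point_prompts(coarse_mask)
predictor.set_image(image)
masks, scores, _ = predictor.predict(
    point_coords=points,
    point_labels=labels,
    multimask_output=False,
)
# GeoSAM additionally encodes a user text prompt (e.g. "sidewalk") with a text
# encoder and fuses it into SAM's prompt embeddings during fine-tuning; that
# fusion is specific to the GeoSAM codebase and is omitted here.
```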
Also, please find the link to the weights.
In geographical image segmentation, performance is often constrained by the limited availability of training data and a lack of generalizability, particularly for segmenting mobility infrastructure such as roads, sidewalks, and crosswalks. Vision foundation models like the Segment Anything Model (SAM), pre-trained on millions of natural images, have demonstrated impressive zero-shot segmentation performance, providing a potential solution. However, SAM struggles with geographical images, such as aerial and satellite imagery, due to its training being confined to natural images and the narrow features and textures of these objects blending into their surroundings. To address these challenges, we propose Geographical SAM (GeoSAM), a SAM-based framework that fine-tunes SAM using automatically generated multi-modal prompts. Specifically, GeoSAM integrates point prompts from a pre-trained task-specific model as primary visual guidance, and text prompts generated by a large language model as secondary semantic guidance, enabling the model to better capture both spatial structure and contextual meaning. GeoSAM outperforms existing approaches for mobility infrastructure segmentation in both familiar and completely unseen regions by at least 5% in mIoU, representing a significant leap in leveraging foundation models to segment mobility infrastructure, including both road and pedestrian infrastructure in geographical images.
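For reference, the mIoU used above is the standard per-class intersection over union averaged across classes. The snippet below is a generic sketch of that metric (assuming integer label maps of the same shape, e.g. background/road/sidewalk), not the evaluation script used in the paper.

```python
# Generic mIoU sketch (assumes integer label maps with class ids 0..num_classes-1).
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                      # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```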
We want to thank these two works for their open-source code and contributions to their respective fields!
This work was supported by the U.S. National Science Foundation (NSF), Innovation and Technology Ecosystems (ITE), under Award Number 2235225 as part of the NSF Convergence Accelerator Track H: Leveraging Human-Centered AI Microtransit to Ameliorate Spatiotemporal Mismatch between Housing and Employment for Persons with Disabilities. We thank the Innovation and Technology Ecosystems program for their invaluable contributions.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
If this code is helpful for your study, please cite:
@misc{sultan2024geosamfinetuningsammultimodal,
title={GeoSAM: Fine-tuning SAM with Multi-Modal Prompts for Mobility Infrastructure Segmentation},
author={Rafi Ibn Sultan and Chengyin Li and Hui Zhu and Prashant Khanduri and Marco Brocanelli and Dongxiao Zhu},
year={2024},
eprint={2311.11319},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2311.11319},
}