README.md (+6 -4)

It comprises 24 pairs of multispectral images taken from the Sentinel-2 satellites.
* https://captain-whu.github.io/SCD/
* Change detection at the pixel level
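Pixel-level change detection assigns every pixel of a co-registered image pair a changed/unchanged label. As a hedged illustration of the task (a naive image-differencing baseline, not the method behind the SCD benchmark; the function name and threshold are made up):

```python
import numpy as np

def change_mask(img_a: np.ndarray, img_b: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Binary change mask for two co-registered images of shape (H, W, bands).

    A simple baseline: per-pixel Euclidean distance across spectral bands,
    thresholded into changed (True) / unchanged (False).
    """
    diff = np.linalg.norm(img_a.astype(float) - img_b.astype(float), axis=-1)
    return diff > threshold

# Toy example: a 4x4 two-band scene where a single pixel changes
before = np.zeros((4, 4, 2))
after = before.copy()
after[0, 0] = [1.0, 1.0]  # simulated change at pixel (0, 0)
mask = change_mask(before, after, threshold=0.5)
print(mask.sum())  # prints 1 (one changed pixel)
```

Real benchmarks replace the thresholded difference with a learned model, but the input/output contract (two images in, one per-pixel mask out) is the same.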

## Amazon and Atlantic Forest dataset

For semantic segmentation with Sentinel 2

* [Amazon and Atlantic Forest image datasets for semantic segmentation](https://zenodo.org/record/4498086#.Y6LPLuzP1hE)
* [attention-mechanism-unet](https://github.com/davej23/attention-mechanism-unet) -> An attention-based U-Net for detecting deforestation within satellite sensor imagery
* [TransUNetplus2](https://github.com/aj1365/TransUNetplus2) -> Rethinking attention gated TransU-Net for deforestation mapping
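Both repositories above gate the U-Net skip connections with attention, so the decoder can down-weight encoder features that are irrelevant to deforestation. A minimal numpy sketch of a single additive attention gate, with random matrices standing in for the learned 1x1 convolutions (illustrative only, not code from either repo):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, w_x, w_g, psi):
    """Additive attention gate on a U-Net skip connection.

    x: skip-connection features, shape (H, W, C)
    g: gating signal from the decoder, shape (H, W, C)
    Returns x scaled per pixel by an attention coefficient in (0, 1).
    """
    q = np.maximum(x @ w_x + g @ w_g, 0.0)  # ReLU(W_x x + W_g g)
    alpha = sigmoid(q @ psi)                # (H, W, 1) attention map
    return x * alpha                        # broadcast over channels

C, F = 8, 4
x = rng.normal(size=(16, 16, C))    # encoder (skip) features
g = rng.normal(size=(16, 16, C))    # decoder gating signal
w_x = rng.normal(size=(C, F))       # stand-ins for learned 1x1 convs
w_g = rng.normal(size=(C, F))
psi = rng.normal(size=(F, 1))

gated = attention_gate(x, g, w_x, w_g, psi)
print(gated.shape)  # (16, 16, 8) — same shape as the skip features
```

Because the attention coefficient lies in (0, 1), the gate can only suppress skip features, never amplify them, which is what lets the decoder ignore background regions.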

## Functional Map of the World (fMoW)

* https://github.com/fMoW/dataset
* RGB & multispectral variants
* High resolution, chip classification dataset

Since there is a whole community around GEE I will not reproduce it here but lis…

## Image captioning datasets

* [RSICD](https://github.com/201528014227051/RSICD_optimal) -> 10921 images with five sentence descriptions per image. Used in [Fine tuning CLIP with Remote Sensing (Satellite) images and captions](https://huggingface.co/blog/fine-tune-clip-rsicd), models at [this repo](https://github.com/arampacha/CLIP-rsicd)
* [RSICC](https://github.com/Chen-Yang-Liu/RSICC) -> the Remote Sensing Image Change Captioning dataset contains 10077 pairs of bi-temporal remote sensing images and 50385 sentences describing the differences between images. Uses LEVIR-CD imagery
* [ChatEarthNet](https://github.com/zhu-xlab/ChatEarthNet) -> A Global-Scale Image-Text Dataset Empowering Vision-Language Geo-Foundation Models; utilizes Sentinel-2 data with captions generated by ChatGPT
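The CLIP fine-tuning recipe linked under RSICD optimises a symmetric contrastive objective over matched image-caption pairs. A rough numpy sketch of that loss, assuming pre-computed embeddings (illustrative only; the actual training in the blog post uses the Hugging Face `transformers` CLIP model, and the function names here are made up):

```python
import numpy as np

def clip_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of matched image/text embeddings.

    image_emb, text_emb: shape (batch, dim); row i of each is a matched pair.
    """
    # L2-normalise, then cosine-similarity logits scaled by temperature
    img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature

    # Cross-entropy with the diagonal (matched pairs) as targets, both directions
    def xent(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(l)), np.arange(len(l))].mean()

    return 0.5 * (xent(logits) + xent(logits.T))

# Orthogonal one-hot embeddings: each image matches only its own caption
ident = np.eye(4, 8)
print(round(clip_loss(ident, ident), 4))  # prints 0.0 — a perfectly aligned batch
```

The temperature plays the role of CLIP's learned `logit_scale`: smaller values sharpen the softmax, penalising near-misses between similar scenes and captions more strongly.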
## Weather Datasets
* NASA (make a request and you are emailed when it is ready) -> https://search.earthdata.nasa.gov