This repository was archived by the owner on Oct 9, 2023. It is now read-only.

SemanticSegmentationData - zero-size array to reduction operation maximum which has no identity #1513

@marisancans

Description


🐛 Bug

Running the example from the docs raises an exception I cannot understand.

To Reproduce

Paste the code from the docs:

import torch

import flash
from flash.core.data.utils import download_data
from flash.image import SemanticSegmentation, SemanticSegmentationData

# 1. Create the DataModule
# The data was generated with the CARLA self-driving simulator as part of the Kaggle Lyft Udacity Challenge.
# More info here: https://www.kaggle.com/kumaresanmanickavelu/lyft-udacity-challenge
download_data(
    "https://github.com/ongchinkiat/LyftPerceptionChallenge/releases/download/v0.1/carla-capture-20180513A.zip",
    "./data",
)

datamodule = SemanticSegmentationData.from_folders(
    train_folder="data/CameraRGB",
    train_target_folder="data/CameraSeg",
    val_split=0.1,
    transform_kwargs=dict(image_size=(256, 256)),
    num_classes=1,
    batch_size=4,
)

# 2. Build the task
model = SemanticSegmentation(
    backbone="mobilenetv3_large_100",
    head="fpn",
    num_classes=datamodule.num_classes,
)

# 3. Create the trainer and finetune the model
trainer = flash.Trainer(max_epochs=3, gpus=torch.cuda.device_count())
trainer.finetune(model, datamodule=datamodule, strategy="freeze")

# 4. Segment a few images!
datamodule = SemanticSegmentationData.from_files(
    predict_files=[
        "data/CameraRGB/F61-1.png",
        "data/CameraRGB/F62-1.png",
        "data/CameraRGB/F63-1.png",
    ],
    batch_size=3,
)
predictions = trainer.predict(model, datamodule=datamodule)
print(predictions)

# 5. Save the model!
trainer.save_checkpoint("semantic_segmentation_model.pt")

In the dataset folder (`data`), delete all files except the first 4 and set `batch_size=1`.
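A plausible (unconfirmed) explanation: with only 4 images left, `val_split=0.1` can truncate to an empty validation split, and a NumPy reduction such as `max` over an empty array raises exactly this error. A minimal sketch of the split arithmetic, assuming the split size is computed by truncation (the actual Flash internals may differ):

```python
# Sketch of the suspected failure mode (an assumption, not actual Flash code):
# a fractional split of a tiny dataset can truncate to zero samples.
n_samples = 4          # only 4 images left in data/CameraRGB
val_split = 0.1        # as passed to from_folders
n_val = int(val_split * n_samples)   # truncates to 0
n_train = n_samples - n_val
print(n_train, n_val)  # 4 0 -> the validation set is empty
```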


Expected behavior

Training with a low number of samples. I understand that the problem could be that 4 images are not enough, but in my opinion the error message should be clearer. I currently have no idea what it means.
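As an illustration of what a clearer message could look like, here is a hypothetical guard (not actual Flash code; `infer_num_classes` is an invented name) that checks for an empty array before reducing over it and fails with an actionable message instead:

```python
import numpy as np

def infer_num_classes(mask_values: np.ndarray) -> int:
    """Hypothetical sketch: infer the class count from mask pixel values,
    but fail early with a readable message if no masks were found."""
    if mask_values.size == 0:
        raise ValueError(
            "No target masks were found; check that train_target_folder "
            "contains files matching the training images, and that the "
            "dataset is large enough for the requested val_split."
        )
    # Same reduction that would otherwise raise the cryptic
    # "zero-size array to reduction operation maximum" error.
    return int(mask_values.max()) + 1
```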

Environment

  • OS (e.g., Linux): Ubuntu 20.04
  • Python version: 3.9.15
  • PyTorch/Lightning/Flash Version (e.g., 1.10/1.5/0.7): lightning-flash==0.8.1.post0 (latest pip install flash)
  • GPU models and configuration: A2000
  • Any other relevant information:

Additional context

Error message:

Exception has occurred: ValueError
zero-size array to reduction operation maximum which has no identity
  File "/mnt/data/src/word_bot/asd.py", line 12, in <module>
    datamodule = SemanticSegmentationData.from_folders(
ValueError: zero-size array to reduction operation maximum which has no identity
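For reference, this message comes from NumPy: reductions like `max` have no identity element (unlike `sum`, whose identity is 0), so reducing an empty array raises exactly this `ValueError`. A minimal demonstration:

```python
import numpy as np

# max has no identity element, so reducing an empty array fails:
try:
    np.array([]).max()
except ValueError as e:
    print(e)  # zero-size array to reduction operation maximum which has no identity

# sum has identity 0, so the same call on an empty array succeeds:
print(np.array([]).sum())  # 0.0
```

So somewhere inside `from_folders`, an array being reduced over (e.g. a list of matched image/mask pairs or a data split) is ending up empty.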


Labels

bug / fix · help wanted · won't fix