A deep learning application for automatic segmentation of brain tumors from MRI scans using a UNet neural network architecture. This web-based tool allows users to upload NIfTI format brain MRI scans and receive instant tumor segmentation results with comprehensive analysis and visualization options.
Check out the app here: https://brain-tumor-segmentation-using-unet-python.streamlit.app/
- Advanced Tumor Segmentation: Using state-of-the-art UNet architecture trained on BraTS2020 dataset
- Interactive Visualization: Explore MRI scans with multiple view orientations (Axial, Sagittal, Coronal)
- Detailed Tumor Metrics: Quantitative analysis including size, shape, location, and morphology
- AI-Powered Clinical Interpretations: Optional clinical assessment using Google's Gemini AI (requires API key)
- User-Friendly Interface: Built with Streamlit for an intuitive and responsive experience
- Versatile Visualization Options: Multiple overlay styles and confidence mapping
- Installation
- Usage
- Technical Details
- Features In-Depth
- AI Explanations
- Sample Data
- Project Structure
- Contributing
- License
- Python 3.8+
- pip package manager
- Clone this repository:

  ```bash
  git clone https://github.com/yourusername/brain-tumor-segmentation.git
  cd brain-tumor-segmentation
  ```

- Install required dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Run the application:

  ```bash
  streamlit run App.py
  ```
The application should now be running on your local machine at http://localhost:8501.
- Launch the application by running `streamlit run App.py`
- Upload an MRI scan in NIfTI format (.nii or .nii.gz), or use the provided sample data
- Select the view orientation (Axial, Sagittal, or Coronal)
- Navigate through slices using the slider
- Adjust segmentation parameters in the sidebar if needed
- View results including segmentation overlay and metrics
- Download images or reports for further use
- Adjust the segmentation threshold to fine-tune tumor detection sensitivity
- Select different normalization methods to optimize for different MRI acquisition parameters
- Change overlay styles for better visualization
- Enable batch processing for analyzing multiple slices simultaneously
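The threshold slider works as sketched below: the model's probabilistic output is binarized at a user-chosen cutoff, so lowering the threshold increases sensitivity at the cost of more false-positive pixels. The default of 0.5 and the toy probability map are illustrative, not the app's exact values.

```python
import numpy as np

def apply_threshold(prob_map, threshold=0.5):
    """Binarize a probabilistic segmentation map into a 0/1 tumor mask."""
    return (prob_map >= threshold).astype(np.uint8)

probs = np.array([[0.1, 0.6],
                  [0.8, 0.3]])
print(apply_threshold(probs, 0.5))  # [[0 1]
                                    #  [1 0]]
print(apply_threshold(probs, 0.7))  # only the 0.8 pixel survives
```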
The application uses a UNet model with the following characteristics:
- Architecture: Standard U-Net with encoder-decoder and skip connections
- Input: 2D MRI slices (normalized)
- Output: Probabilistic segmentation maps
- Training Data: BraTS2020 dataset (Multimodal Brain Tumor Segmentation Challenge)
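For orientation, the encoder-decoder-with-skip-connections layout described above can be sketched as a minimal two-level U-Net in PyTorch. The channel widths, depth, and `TinyUNet` name are illustrative assumptions, not the app's exact configuration (the real checkpoint is `checkpoint-epoch-29.pt`).

```python
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 convs with ReLU: the basic U-Net building block
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = double_conv(1, 16)
        self.enc2 = double_conv(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = double_conv(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = double_conv(32, 16)
        self.head = nn.Conv2d(16, 1, 1)  # 1-channel probability map

    def forward(self, x):
        e1 = self.enc1(x)                       # kept for skip connection 1
        e2 = self.enc2(self.pool(e1))           # kept for skip connection 2
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip 2
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1)) # skip 1
        return torch.sigmoid(self.head(d1))     # probabilities in [0, 1]

x = torch.randn(1, 1, 64, 64)                   # one normalized MRI slice
probs = TinyUNet()(x)
print(probs.shape)                              # torch.Size([1, 1, 64, 64])
```

The skip connections concatenate encoder features onto the decoder path at matching resolutions, which is what lets U-Nets recover sharp tumor boundaries after downsampling.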
- Loading: NIfTI files are loaded using nibabel
- Preprocessing: Slices are extracted and normalized
- Inference: The model processes the slice and produces a segmentation mask
- Postprocessing: Results are analyzed for metrics calculation and visualization
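The four steps above can be sketched end to end. The nibabel call is shown in a comment (it may not be installed in every environment), a synthetic volume stands in for a real scan, and `model` is any callable returning a probability map; min-max normalization is one of several methods the app offers.

```python
import numpy as np

def normalize_slice(sl):
    # Min-max normalization to [0, 1] (one of the app's normalization options)
    lo, hi = sl.min(), sl.max()
    return (sl - lo) / (hi - lo) if hi > lo else np.zeros_like(sl)

def segment_slice(volume, index, model, threshold=0.5):
    # Loading (real app): volume = nibabel.load(path).get_fdata()
    sl = normalize_slice(volume[:, :, index])     # preprocessing
    probs = model(sl)                             # inference
    return (probs >= threshold).astype(np.uint8)  # binarize for metrics/overlay

volume = np.random.rand(32, 32, 8) * 100          # stand-in for a NIfTI scan
dummy_model = lambda sl: sl                       # identity "model" for demo
mask = segment_slice(volume, 4, dummy_model)
print(mask.shape, mask.dtype)                     # (32, 32) uint8
```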
The application calculates comprehensive metrics including:
- Size: Area in pixels, percentage of slice occupied
- Dimensions: Width, height, estimated diameter
- Shape Analysis: Perimeter, circularity index
- Location: Center coordinates, bounding box
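The metrics above can be computed from the binary mask with NumPy alone, as in this sketch. The exact formulas in the app may differ; here perimeter counts mask pixels with at least one background 4-neighbour, and circularity is the standard 4πA/P² index.

```python
import numpy as np

def tumor_metrics(mask):
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None                              # no tumor detected
    area = int(ys.size)
    # Boundary pixels: on-pixels with a zero 4-neighbour (edges count as zero)
    padded = np.pad(mask, 1)
    neigh_min = np.minimum.reduce([
        padded[:-2, 1:-1], padded[2:, 1:-1],     # up / down neighbours
        padded[1:-1, :-2], padded[1:-1, 2:],     # left / right neighbours
    ])
    perimeter = int(((mask == 1) & (neigh_min == 0)).sum())
    return {
        "area_px": area,
        "pct_of_slice": 100.0 * area / mask.size,
        "width": int(xs.max() - xs.min() + 1),
        "height": int(ys.max() - ys.min() + 1),
        "equiv_diameter": float(2 * np.sqrt(area / np.pi)),
        "perimeter_px": perimeter,
        "circularity": float(4 * np.pi * area / perimeter ** 2),
        "center": (float(ys.mean()), float(xs.mean())),
        "bbox": (int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())),
    }

mask = np.zeros((16, 16), dtype=np.uint8)
mask[4:8, 6:10] = 1                              # a 4x4 square "tumor"
m = tumor_metrics(mask)
print(m["area_px"], m["width"], m["height"])     # 16 4 4
```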
- Contour Overlay: Highlighting tumor boundaries on the original image
- Heatmap Overlay: Color-coded visualization of tumor regions
- Confidence Map: Visualization of the model's prediction confidence
- Multiple color schemes: hot, viridis, jet, cool
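A heatmap-style overlay amounts to alpha-blending a colour-coded probability map onto the greyscale slice. The sketch below uses a toy red ramp so it stays NumPy-only; the app itself uses proper colormaps such as hot and viridis.

```python
import numpy as np

def heatmap_overlay(gray, probs, alpha=0.4):
    """gray, probs: 2D arrays in [0, 1]; returns an HxWx3 RGB float image."""
    rgb = np.stack([gray] * 3, axis=-1)          # greyscale -> RGB
    color = np.zeros_like(rgb)
    color[..., 0] = probs                        # red channel ∝ probability
    weight = alpha * probs[..., None]            # blend only where probs > 0
    return (1 - weight) * rgb + weight * color

gray = np.full((8, 8), 0.5)                      # flat grey slice for demo
probs = np.zeros((8, 8))
probs[2:5, 2:5] = 1.0                            # "tumor" region
out = heatmap_overlay(gray, probs)
print(out.shape)                                 # (8, 8, 3)
```

Weighting the blend by the probability itself leaves confident background untouched, which is also the basic idea behind the confidence-map view.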
The application optionally integrates with Google's Gemini API to provide AI-powered clinical interpretations of the segmentation results.
- Obtain a Google Gemini API key from Google AI Studio
- In the application sidebar, expand "🔑 AI Explanation Settings"
- Enter your API key and click "Activate API"
- Enable "Show AI Clinical Explanation" in the sidebar options
The AI will analyze the tumor characteristics and provide clinical insights including:
- Potential diagnosis and differential considerations
- Relevant observations about tumor morphology
- Suggested next steps in clinical evaluation
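The integration boils down to building a prompt from the computed metrics and sending it through the `google-generativeai` client, roughly as below. The model name, metric keys, and prompt wording are illustrative assumptions, not the app's exact code.

```python
def build_prompt(metrics):
    # Assemble a research-only interpretation request from tumor metrics
    return (
        "You are assisting with research-only review of an automated "
        "brain-tumor segmentation. Summarize clinically relevant "
        f"observations for a lesion with area {metrics['area_px']} px, "
        f"occupying {metrics['pct_of_slice']:.1f}% of the slice, "
        f"circularity {metrics['circularity']:.2f}."
    )

def explain_with_gemini(metrics, api_key):
    # Requires a valid key: pip install google-generativeai
    import google.generativeai as genai
    genai.configure(api_key=api_key)
    model = genai.GenerativeModel("gemini-1.5-flash")
    return model.generate_content(build_prompt(metrics)).text

prompt = build_prompt({"area_px": 120, "pct_of_slice": 1.8, "circularity": 0.74})
print(prompt)
```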
The application includes synthetic sample brain MRI data for testing and demonstration purposes. This can be accessed by checking the "Use sample data instead" option on the main interface.
```
brain-tumor-segmentation/
├── App.py                  # Main application file
├── requirements.txt        # Python dependencies
├── logo.png                # Application logo
├── checkpoint-epoch-29.pt  # Pre-trained model weights
└── README.md               # This documentation
```
Contributions to improve the application are welcome! Please follow these steps:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- BraTS challenge organizers for the inspiration and training data concepts
- PyTorch team for the deep learning framework
- Streamlit team for the web application framework
Note: This application is intended for research and educational purposes only and should not be used for clinical diagnosis without proper validation and expert medical oversight.