This project demonstrates an end-to-end object detection pipeline using Ultralytics YOLO and PyTorch.
It is designed as a clean, reproducible portfolio project that covers dataset handling, training, evaluation, inference, and model export.
The project uses a custom PPE Helmet Detection dataset (single class: helmet) that is already included in the repository.
- Task: Detect safety helmets in images
- Model: YOLO (Ultralytics)
- Framework: PyTorch
- Classes: 1 (helmet)
- Dataset: PPE Helmet Detection (custom)
This repository is structured so that it can be cloned and run immediately, without external dataset downloads or API keys.
```
Object_Detection/
│
├── data/
│   └── PPE/
│       ├── train/
│       │   ├── images/
│       │   └── labels/
│       ├── valid/
│       │   ├── images/
│       │   └── labels/
│       ├── test/
│       │   ├── images/
│       │   └── labels/
│       └── data.yaml
│
├── scripts/
│   ├── train.py
│   ├── eval.py
│   ├── infer.py
│   ├── export.py
│   └── check_dataset.py
│
├── assets/
│   └── results/       # Training curves & prediction images (added manually)
│
├── runs/              # YOLO training outputs (auto-generated, not committed)
├── environment.yml
└── README.md
```
- The dataset is included locally under `data/PPE`
- No external download or API keys are required
- Paths in `data.yaml` are relative for portability
```yaml
path: data/PPE
train: train/images
val: valid/images
test: test/images
nc: 1
names: ['helmet']
```

This project uses conda.
```bash
conda env create -f environment.yml
conda activate yolo
```

Before training, verify the dataset:
```bash
python scripts/check_dataset.py --data data/PPE/data.yaml
```

This checks:
- Image–label matching
- Missing files
- Dataset size
```bash
python scripts/train.py --data data/PPE/data.yaml
```

Outputs:
- Training curves
- Validation metrics
- Saved model weights
All results are saved automatically under runs/.
```bash
python scripts/eval.py --data data/PPE/data.yaml
```

Evaluates:
- Precision
- Recall
- mAP
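For reference, precision and recall reduce to simple ratios over true positives (TP), false positives (FP), and false negatives (FN); mAP additionally averages precision across recall levels (mAP@0.5) and across IoU thresholds (mAP@0.5:0.95). A quick illustration with made-up counts:

```python
def precision(tp: int, fp: int) -> float:
    """Fraction of predicted helmets that are real helmets: TP / (TP + FP)."""
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp: int, fn: int) -> float:
    """Fraction of ground-truth helmets that were detected: TP / (TP + FN)."""
    return tp / (tp + fn) if (tp + fn) else 0.0

# Illustrative counts, not results from this project:
print(precision(90, 10))  # → 0.9
print(recall(70, 30))     # → 0.7
```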
Run inference on sample images:

```bash
python scripts/infer.py
```

Predictions are saved with bounding boxes and confidence scores.
Export the trained model for deployment:

```bash
python scripts/export.py
```

Supported formats include:
- ONNX
- TorchScript
Note: The images below are placeholders.
After training, copy selected result images from `runs/` into `assets/results/` and update or keep the filenames as shown.
- Training and validation losses decrease smoothly
- Indicates stable convergence and good generalization
- No significant overfitting observed
- mAP@0.5 ≈ 0.8
- mAP@0.5:0.95 ≈ 0.5–0.55
- Precision reaches ~0.85–0.9
- Recall stabilizes around ~0.65–0.7
These values indicate reliable helmet detection with strong precision.
- 48 true helmet detections
- 6 missed helmets
- 24 false positives from background
- Clean, modular script-based workflow
- Portable dataset configuration
- End-to-end ML pipeline
- Suitable for recruiters and technical reviews
- Dataset download scripts were intentionally removed to keep the project fully self-contained
- This reflects real-world ML workflows where datasets are versioned locally