Detect and classify emojis in images using a Convolutional Neural Network (CNN) model built with TensorFlow/Keras.
This project predicts both the class of an emoji and its location using a dual-output neural network: one head for classification and one for bounding box regression.
- **Emoji Detection** – locate the emoji within the image by predicting bounding box coordinates.
- **Emoji Classification** – classify the emoji (e.g., happy, sad, skeptical).
- **Custom IoU Metric** – measures the overlap between predicted and ground-truth boxes.
- **Visualization** – real-time plotting of predicted vs. actual results with bounding boxes.
- CNN backbone (Conv + MaxPooling layers)
- Two heads:
  - `class_out` – softmax activation for emoji classification
  - `box_out` – linear activation for bounding box coordinates
- Compiled with:
  ```python
  model.compile(
      loss={
          'class_out': 'categorical_crossentropy',
          'box_out': 'mse',
      },
      optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
      metrics={
          'class_out': 'accuracy',
          'box_out': IoU(name='iou'),
      },
  )
  ```
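For context, here is a minimal sketch of how such a dual-head model might be wired up with the Keras functional API. The layer sizes, input resolution, class count, and box dimensionality below are illustrative assumptions, not values taken from the project:

```python
import tensorflow as tf
from tensorflow.keras import layers

IMG_SIZE = 144     # assumed input resolution
NUM_CLASSES = 9    # assumed number of emoji classes

inputs = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3), name='image')

# CNN backbone: stacked Conv + MaxPooling blocks.
x = inputs
for filters in (16, 32, 64):
    x = layers.Conv2D(filters, 3, activation='relu', padding='same')(x)
    x = layers.MaxPooling2D(2)(x)
x = layers.Flatten()(x)
x = layers.Dense(256, activation='relu')(x)

# Two heads; the names must match the keys used in model.compile().
class_out = layers.Dense(NUM_CLASSES, activation='softmax', name='class_out')(x)
box_out = layers.Dense(2, activation='linear', name='box_out')(x)  # e.g., (x, y); use 4 for (x, y, w, h)

model = tf.keras.Model(inputs=inputs, outputs=[class_out, box_out])
```

Training then passes one target per named output, e.g. `model.fit(x_train, {'class_out': y_class, 'box_out': y_box}, epochs=10)`, where the arrays are placeholders for your own data.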
When testing the model, the visualization uses the following color-coding:
- **Bounding Boxes:**
  - Green box = ground-truth location (actual position)
  - Red box = model's predicted location
- **Class Labels:**
  - Green label = correct classification
  - Red label = incorrect classification
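A rough sketch of how this color-coding could be drawn with Matplotlib; the function name and the (x, y, width, height) box convention are assumptions, not the project's actual API:

```python
import matplotlib.pyplot as plt
import matplotlib.patches as patches

def show_prediction(image, true_box, pred_box, true_label, pred_label):
    """Draw the ground-truth box (green) and predicted box (red) on one image."""
    fig, ax = plt.subplots()
    ax.imshow(image)
    # Boxes assumed to be (x, y, width, height) in pixel coordinates.
    ax.add_patch(patches.Rectangle(true_box[:2], true_box[2], true_box[3],
                                   edgecolor='green', fill=False, linewidth=2))
    ax.add_patch(patches.Rectangle(pred_box[:2], pred_box[2], pred_box[3],
                                   edgecolor='red', fill=False, linewidth=2))
    # The title is green when the classification is correct, red otherwise.
    ax.set_title(pred_label, color='green' if pred_label == true_label else 'red')
    ax.axis('off')
    plt.show()
```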
- Clone the repository

  ```bash
  git clone https://github.com/your-username/emoji-detector.git
  cd emoji-detector
  ```

- Install dependencies

  ```bash
  pip install -r requirements.txt
  ```

- Train the model

  ```bash
  jupyter notebook train.ipynb
  ```

- Test predictions

  ```bash
  jupyter notebook test.ipynb
  ```
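At inference time the model returns one array per output head. A minimal sketch, assuming `model` is the trained model and `image` is a preprocessed array matching the training input shape:

```python
import numpy as np

# Add a batch dimension; predict() returns one array per named output.
class_probs, boxes = model.predict(image[np.newaxis, ...])

pred_class = int(np.argmax(class_probs[0]))  # index of the most likely emoji class
pred_box = boxes[0]                          # predicted bounding box coordinates
print(f'class={pred_class}, box={pred_box}')
```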
- Python 3.6+
- TensorFlow 2.x
- Jupyter Notebook
Includes a custom `tf.keras.metrics.Metric` implementation (IoU) to track bounding-box overlap performance across batches.
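A hedged sketch of what such a metric can look like; the project's actual implementation may differ, and the (x, y, width, height) box convention is an assumption:

```python
import tensorflow as tf

class IoU(tf.keras.metrics.Metric):
    """Running mean Intersection-over-Union for (x, y, w, h) boxes."""

    def __init__(self, name='iou', **kwargs):
        super().__init__(name=name, **kwargs)
        self.total_iou = self.add_weight(name='total_iou', initializer='zeros')
        self.num_batches = self.add_weight(name='num_batches', initializer='zeros')

    def update_state(self, y_true, y_pred, sample_weight=None):
        # Convert (x, y, w, h) to corner coordinates.
        def corners(b):
            x, y, w, h = b[..., 0], b[..., 1], b[..., 2], b[..., 3]
            return x, y, x + w, y + h

        tx1, ty1, tx2, ty2 = corners(y_true)
        px1, py1, px2, py2 = corners(y_pred)

        # Intersection rectangle, clamped to zero when the boxes don't overlap.
        ix = tf.maximum(tf.minimum(tx2, px2) - tf.maximum(tx1, px1), 0.0)
        iy = tf.maximum(tf.minimum(ty2, py2) - tf.maximum(ty1, py1), 0.0)
        intersection = ix * iy

        union = ((tx2 - tx1) * (ty2 - ty1) +
                 (px2 - px1) * (py2 - py1) - intersection)
        iou = intersection / (union + tf.keras.backend.epsilon())

        self.total_iou.assign_add(tf.reduce_mean(iou))
        self.num_batches.assign_add(1.0)

    def result(self):
        return self.total_iou / self.num_batches

    def reset_state(self):
        self.total_iou.assign(0.0)
        self.num_batches.assign(0.0)
```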
- Detect multiple emojis per image
- Add data augmentation
- Train on larger, real-world datasets
- Use YOLO or SSD for real-time detection
- TensorFlow
- NumPy
- Matplotlib
- Pillow (PIL)