This project focuses on building a robust emotion recognition system capable of identifying human emotions from facial expressions. Using the FER+ dataset and a Convolutional Neural Network (CNN), the model achieves high accuracy through data augmentation and pre-processing techniques.
- Dataset: FER+ (Facial Expression Recognition Plus)
- Architecture: Convolutional Neural Networks (CNN)
- Accuracy: 83.52% validation accuracy
- Tech Stack: Python, OpenCV, Keras, TensorFlow
In this project, I developed an emotion recognition model that can classify facial expressions into distinct emotion categories. The model leverages deep learning techniques, specifically CNNs, and achieves significant performance through the following key components:
- Pre-processing: Handled image resizing, normalization, and face detection using OpenCV.
- Data Augmentation: Applied rotation, zoom, and horizontal flips to increase dataset diversity.
- CNN Architecture: Built a multi-layer CNN to extract features from images and accurately classify emotions.
- Reached 83.52% validation accuracy in recognizing facial expressions from the FER+ dataset.
- Successfully implemented a data augmentation pipeline to enhance model robustness.
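The augmentation and CNN components above can be sketched as follows. Layer counts, filter sizes, and augmentation ranges here are illustrative assumptions, not the exact configuration used in training (FER+ images are 48x48 grayscale with eight emotion labels):

```python
# Illustrative sketch only -- the actual architecture in train_model.py may differ.
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation roughly matching the rotation / zoom / horizontal-flip pipeline.
datagen = ImageDataGenerator(
    rescale=1.0 / 255.0,
    rotation_range=15,
    zoom_range=0.1,
    horizontal_flip=True,
)

def build_model(num_classes=8):
    """Build a small CNN for 48x48 grayscale FER+ images (assumed shapes)."""
    model = models.Sequential([
        layers.Input(shape=(48, 48, 1)),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),  # regularization against overfitting
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Training would then pass `datagen.flow(x_train, y_train)` to `model.fit`, so each epoch sees freshly augmented variants of the images.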
To run this project locally, follow these steps:
- Clone this repository:
git clone https://github.com/your-username/emotion-recognition.git
- Navigate to the project directory:
cd emotion-recognition
- Install required dependencies:
pip install -r requirements.txt
Once the setup is complete, you can run the model on your own images or the provided dataset. Example commands:
- Train the model:
python train_model.py
- Predict emotion from an image:
python predict_emotion.py --image_path your_image.jpg
- Fine-tune the CNN model for higher accuracy.
- Explore real-time emotion recognition via webcam integration.