Published: March 02, 2020, Edited by: Frederik Tollund Juutilainen

Facial Expression Recognition on a Raspberry Pi

Background

Early evolutionary psychology suggests that humans, regardless of their cultural background and geographic origin, share six basic facial expressions, namely: anger, fear, surprise, sadness, happiness and disgust [wiki]. While more modern empirical studies argue that these categories by no means exhaust the "systematic variability in emotional response" [3], this guide will show that the six basic categories are workable in a prototyping or experimental context.

This guide will show how to do basic Facial Expression Recognition (FER) with a webcam on a Raspberry Pi (though it has also been tried and tested on macOS). On a Raspberry Pi 4 this has been tested to run at approximately 5 frames per second.

If you run into issues, please check out the trouble-shooting section. If it still doesn't work, you are welcome to send an email to Frederik at tollund@ruc.dk

How does it work?

This guide is based on Atul Balaji's excellent GitHub repository. We have created a fork of Atul's repository for easier setup.

The machine learning model is trained on 35,887 face images from the Facial Expression Recognition 2013 (FER-2013) dataset, which consists of 48x48 grayscale images labelled with the emotion shown in the image. For this guide, we will use Atul Balaji's pretrained model; the program then runs as described below:

  • First, faces are detected and extracted using a Haar cascade.
  • Each extracted face is resized to 48x48 and passed to a convolutional neural network.
  • The network outputs a softmax score for each possible emotion, i.e. the probability that the face shows that emotion.
  • Finally, the emotion with the highest score is displayed.
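The last two steps above can be sketched in plain NumPy. Note that this is an illustration, not the repository's actual code: the function names are made up, the CNN itself is left out, and the label list follows the FER-2013 convention of seven classes (the six basic emotions plus neutral).

```python
import numpy as np

# FER-2013 label order: the six basic emotions plus neutral (seven classes).
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def preprocess(face: np.ndarray) -> np.ndarray:
    """Scale a 48x48 grayscale crop to [0, 1] and add batch/channel axes,
    the shape a Keras-style CNN would expect."""
    assert face.shape == (48, 48)
    return face.astype(np.float32).reshape(1, 48, 48, 1) / 255.0

def predict_emotion(softmax_scores: np.ndarray) -> str:
    """Pick the label with the highest softmax score."""
    return EMOTIONS[int(np.argmax(softmax_scores))]
```

For example, `predict_emotion(np.array([0.05, 0.01, 0.04, 0.7, 0.1, 0.05, 0.05]))` returns `"happy"`, since the fourth score is the largest.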

Contents of this guide:

Getting TensorFlow and OpenCV to run on a Raspberry Pi was surprisingly tricky, so one aim of this quick guide is to make that process easier. First we install dependencies for OpenCV, then Python dependencies in a virtual environment, and lastly we run the code.

Requirements:

In order to get started, you need a Raspberry Pi with a connected display and camera. This has been tested on a Raspberry Pi 3 and 4. The code presented in this guide has been tested on Raspbian Buster.

If you need help installing an operating system on an SD card, this guide is very useful.

This guide assumes no prior knowledge, but some familiarity with the terminal is very useful for trouble-shooting.

Cloning repository

The program is based on a fork of Atul Balaji's Emotion-detection repository.

Open the terminal on your Raspberry Pi, clone the repository and cd into the folder:

git clone https://github.com/faaip/Emotion-detection  
cd Emotion-detection  

Installing openCV dependencies on your Raspberry Pi

Before continuing, we need to install dependencies for OpenCV. At the time of writing (17 Feb 2020), this can be done using the provided install_opencv_dependencies.sh script.

To run the script:

sh install_opencv_dependencies.sh  

And that should do it! If you later run into problems with your OpenCV installation on the Raspberry Pi, please refer to this great and more elaborate guide.

Using a virtual environment

Virtual environments are really useful for Python development: you can work on multiple projects simultaneously without worrying about conflicting dependencies. If you don't already have virtualenv installed, you can install it with the following command:

sudo pip3 install virtualenv  

Setup

Start by opening the terminal on your Raspberry Pi.

cd to the repository (if you aren't already there):

cd Emotion-detection  

Initialize the virtual environment:

virtualenv venv -p python3  

Activate the virtual environment:

source venv/bin/activate  

Install the Python dependencies in the virtual environment:

pip install -r requirements.txt  

Download atulapra's pre-trained model using the download script:

sh download-model.sh  

Running the program

Navigate to the folder (if you're not already in it) and activate the virtual environment:

cd Emotion-detection  
source venv/bin/activate  

Now launch the program!

python run.py  

Press q to exit.

Options

To run in full screen:

python run.py --fullscreen  

To flip image (input feed):

python run.py --flip  

To show debug info:

python run.py --debug  
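The flags above suggest a small command-line interface; a sketch of how such flags could be defined with Python's argparse is shown below. This is illustrative only, and the function name is made up rather than taken from run.py.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Sketch of a CLI matching the flags above (illustrative, not run.py's code)."""
    parser = argparse.ArgumentParser(description="Facial expression recognition demo")
    parser.add_argument("--fullscreen", action="store_true", help="run in full screen")
    parser.add_argument("--flip", action="store_true", help="mirror the input feed")
    parser.add_argument("--debug", action="store_true", help="show debug info")
    return parser
```

With `action="store_true"` each flag defaults to False and becomes True when passed, so `build_parser().parse_args(["--flip"])` yields an object with `flip` set to True and the other two set to False.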

Trouble-shooting

  • If you get errors related to import cv2 when running the program, there is a problem with your OpenCV installation. You might have skipped installing the OpenCV dependencies on your Raspberry Pi.
  • If you are using a PiCam and cannot connect to it, make sure it's enabled in the Raspberry Pi configuration (Preferences -> Raspberry Pi Configuration -> Interfaces -> Camera).

References

  1. atulapra's Emotion-detection GitHub repository.
  2. "Challenges in Representation Learning: A report on three machine learning contests." I. Goodfellow, D. Erhan, P.L. Carrier, A. Courville, M. Mirza, B. Hamner, W. Cukierski, Y. Tang, D.H. Lee, Y. Zhou, C. Ramaiah, F. Feng, R. Li, X. Wang, D. Athanasakis, J. Shawe-Taylor, M. Milakov, J. Park, R. Ionescu, M. Popescu, C. Grozea, J. Bergstra, J. Xie, L. Romaszko, B. Xu, Z. Chuang, and Y. Bengio. arXiv, 2013.
  3. Cowen, Alan, Sauter, Disa, Tracy, Jessica, & Keltner, Dacher (2019). Mapping the Passions: Toward a High-Dimensional Taxonomy of Emotional Experience and Expression. Psychological Science in the Public Interest, 20, 69-90. doi:10.1177/1529100619850176.