Before you start

Make sure you have Python, OpenFace, and dlib installed. You can either install them manually or use a preconfigured Docker image that has everything already installed:

docker pull bamos/openface
docker run -p 9000:9000 -p 8000:8000 -t -i bamos/openface /bin/bash
cd /root/openface

Pro-tip: If you are using Docker on OSX, you can make your OSX /Users/ folder visible inside the Docker container like this:

docker run -v /Users:/host/Users -p 9000:9000 -p 8000:8000 -t -i bamos/openface /bin/bash
cd /root/openface

Then you can access all your OSX files inside the Docker container at /host/Users/...

ls /host/Users/

Step 1

Make a folder called ./training-images/ inside the openface folder.

mkdir training-images

Step 2

Make a subfolder for each person you want to recognize. For example:

mkdir ./training-images/will-ferrell/
mkdir ./training-images/chad-smith/
mkdir ./training-images/jimmy-fallon/

Step 3

Copy all your images of each person into the correct sub-folders. Make sure only one face appears in each image. There's no need to crop the image around the face. OpenFace will do that automatically.
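If you want to sanity-check this up front, the sketch below flags any training image where dlib finds zero or more than one face. It's a minimal example assuming dlib and OpenCV are importable (both are in the bamos/openface image); the glob pattern and the upsampling count are illustrative.

import glob
import cv2
import dlib

detector = dlib.get_frontal_face_detector()

for path in sorted(glob.glob('./training-images/*/*')):
    img = cv2.imread(path)
    if img is None:
        print('unreadable: %s' % path)
        continue
    # dlib expects RGB; OpenCV loads BGR. Upsample once to catch small faces.
    faces = detector(cv2.cvtColor(img, cv2.COLOR_BGR2RGB), 1)
    if len(faces) != 1:
        print('%s: found %d faces (expected 1)' % (path, len(faces)))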

Step 4

Run the openface scripts from inside the openface root directory:

First, do pose detection and alignment:

./util/align-dlib.py ./training-images/ align outerEyesAndNose ./aligned-images/ --size 96

This will create a new ./aligned-images/ subfolder with a cropped and aligned version of each of your training images.
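Under the hood, the script uses OpenFace's AlignDlib helper. Here is a rough per-image equivalent, assuming the standard predictor path inside the openface repo; the input and output filenames are placeholders:

import cv2
import openface

# Standard dlib landmark predictor shipped with the openface repo.
align = openface.AlignDlib('./models/dlib/shape_predictor_68_face_landmarks.dat')

bgr = cv2.imread('./training-images/will-ferrell/example.jpg')  # placeholder path
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)

# Detect the largest face, then warp it so the outer eyes and nose land
# on fixed positions in a 96x96 crop (matching the command above).
aligned = align.align(96, rgb,
                      landmarkIndices=openface.AlignDlib.OUTER_EYES_AND_NOSE)
if aligned is not None:
    cv2.imwrite('aligned-example.png', cv2.cvtColor(aligned, cv2.COLOR_RGB2BGR))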

Second, generate the representations from the aligned images:

./batch-represent/main.lua -outDir ./generated-embeddings/ -data ./aligned-images/

After you run this, the ./generated-embeddings/ sub-folder will contain two csv files: labels.csv (one label and image path per row) and reps.csv (the 128-dimensional embedding for each image).
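If you want to poke at the embeddings yourself, here is a small sketch that loads them with pandas (assuming the labels.csv/reps.csv layout described above):

import pandas as pd

labels = pd.read_csv('./generated-embeddings/labels.csv', header=None)
reps = pd.read_csv('./generated-embeddings/reps.csv', header=None).values

print('loaded %d embeddings of dimension %d' % reps.shape)
print(labels.head())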

Third, train your face recognition model:

./demos/classifier.py train ./generated-embeddings/

This will generate a new file called ./generated-embeddings/classifier.pkl. This file has the SVM model you'll use to recognize new faces.
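The pickle holds a scikit-learn LabelEncoder and the trained classifier as a tuple; here's a quick way to inspect it (assuming that layout, which is what demos/classifier.py writes):

import pickle

with open('./generated-embeddings/classifier.pkl', 'rb') as f:
    le, clf = pickle.load(f)  # (LabelEncoder, sklearn classifier)

print('classes: %s' % list(le.classes_))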

At this point, you should have a working face recognizer!

Step 5: Recognize faces!

Get a new picture with an unknown face. Pass it to the classifier script like this:

./demos/classifier.py infer ./generated-embeddings/classifier.pkl your_test_image.jpg

You should get a prediction that looks like this:

=== /test-images/will-ferrel-1.jpg ===
Predict will-ferrell with 0.73 confidence.

From here, it's up to you to adapt the ./demos/classifier.py Python script to work however you want.
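As a starting point, the sketch below follows the same path classifier.py takes for a single image: align, embed with the Torch network, then classify. The model paths assume the standard openface repo layout, and the test image name is a placeholder.

import pickle
import cv2
import openface

align = openface.AlignDlib('./models/dlib/shape_predictor_68_face_landmarks.dat')
net = openface.TorchNeuralNet('./models/openface/nn4.small2.v1.t7', imgDim=96)

with open('./generated-embeddings/classifier.pkl', 'rb') as f:
    le, clf = pickle.load(f)

rgb = cv2.cvtColor(cv2.imread('your_test_image.jpg'), cv2.COLOR_BGR2RGB)
aligned = align.align(96, rgb,
                      landmarkIndices=openface.AlignDlib.OUTER_EYES_AND_NOSE)
if aligned is None:
    raise RuntimeError('no face found in image')

rep = net.forward(aligned)                        # 128-d embedding
probs = clf.predict_proba(rep.reshape(1, -1))[0]  # per-class confidences
best = probs.argmax()
print('Predict {} with {:.2f} confidence.'.format(
    le.inverse_transform([best])[0], probs[best]))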

Important notes:

  • If you get bad results, try adding a few more pictures of each person in Step 3 (especially pictures in different poses).
  • This script will always make a prediction, even if the face isn't one it knows. In a real application, you would look at the confidence score and throw away low-confidence predictions, since they are most likely wrong (see the sketch below).
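For example, a minimal rejection wrapper might look like this (the 0.5 threshold is an arbitrary assumption you'd tune on your own data):

def recognize(rep, le, clf, threshold=0.5):
    """Return a name, or None if the classifier isn't confident enough."""
    probs = clf.predict_proba(rep.reshape(1, -1))[0]
    best = probs.argmax()
    if probs[best] < threshold:
        return None  # treat as an unknown face
    return le.inverse_transform([best])[0]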