@ageitgey
Last active December 13, 2023 12:00
Before you start

Make sure you have Python, OpenFace and dlib installed. You can either install them manually or use a preconfigured Docker image that has everything already installed:

docker pull bamos/openface
docker run -p 9000:9000 -p 8000:8000 -t -i bamos/openface /bin/bash
cd /root/openface

Pro-tip: If you are using Docker on OS X, you can make your OS X /Users/ folder visible inside the Docker container like this:

docker run -v /Users:/host/Users -p 9000:9000 -p 8000:8000 -t -i bamos/openface /bin/bash
cd /root/openface

Then you can access all your OS X files inside the Docker container at /host/Users/...

ls /host/Users/

Step 1

Make a folder called ./training-images/ inside the openface folder.

mkdir training-images

Step 2

Make a subfolder for each person you want to recognize. For example:

mkdir ./training-images/will-ferrell/
mkdir ./training-images/chad-smith/
mkdir ./training-images/jimmy-fallon/

Step 3

Copy all your images of each person into the correct sub-folders. Make sure only one face appears in each image. There's no need to crop the image around the face. OpenFace will do that automatically.
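
If you want to double-check this before training, here is a minimal sketch (not part of the original gist) that uses dlib's frontal face detector to flag any training image that does not contain exactly one detectable face. It assumes dlib and OpenCV are available, which they are inside the bamos/openface image:

import os
import cv2
import dlib

# Report any training image with zero or multiple detectable faces.
detector = dlib.get_frontal_face_detector()

for person in sorted(os.listdir("./training-images")):
    person_dir = os.path.join("./training-images", person)
    if not os.path.isdir(person_dir):
        continue
    for name in sorted(os.listdir(person_dir)):
        path = os.path.join(person_dir, name)
        img = cv2.imread(path)
        if img is None:
            print("Could not read {}".format(path))
            continue
        faces = detector(cv2.cvtColor(img, cv2.COLOR_BGR2RGB), 1)
        if len(faces) != 1:
            print("{}: found {} face(s), expected exactly 1".format(path, len(faces)))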

Step 4

Run the openface scripts from inside the openface root directory:

First, do pose detection and alignment:

./util/align-dlib.py ./training-images/ align outerEyesAndNose ./aligned-images/ --size 96

This will create a new ./aligned-images/ subfolder with a cropped and aligned version of each of your training images.

Second, generate the representations from the aligned images:

./batch-represent/main.lua -outDir ./generated-embeddings/ -data ./aligned-images/

After you run this, the ./generated-embeddings/ sub-folder will contain a csv file with the embeddings for each image.
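
If you want to sanity-check that output, a small sketch for loading the embeddings in Python could look like this. It assumes batch-represent wrote reps.csv (one 128-dimensional embedding per row) and labels.csv (a label index and image path per row) into the output directory; adjust the filenames if your version differs:

import csv

# Load the embeddings and their labels (filenames assumed, see the note above).
with open("./generated-embeddings/reps.csv") as f:
    reps = [[float(x) for x in row] for row in csv.reader(f)]

with open("./generated-embeddings/labels.csv") as f:
    labels = list(csv.reader(f))

print("{} embeddings of dimension {}".format(len(reps), len(reps[0])))
print("first label row: {}".format(labels[0]))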

Third, train your face recognition model:

./demos/classifier.py train ./generated-embeddings/

This will generate a new file called ./generated-embeddings/classifier.pkl. This file has the SVM model you'll use to recognize new faces.

At this point, you should have a working face recognizer!

Step 5: Recognize faces!

Get a new picture with an unknown face. Pass it to the classifier script like this:

./demos/classifier.py infer ./generated-embeddings/classifier.pkl your_test_image.jpg

You should get a prediction that looks like this:

=== /test-images/will-ferrel-1.jpg ===
Predict will-ferrell with 0.73 confidence.

From here it's up to you to adapt the ./demos/classifier.py Python script to work however you want.
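
For example, if you want to compute an embedding for a new image (or a webcam frame) in your own code, a rough sketch that follows the same pattern as the getRep() helper in demos/classifier.py might look like this. It assumes you run it from the openface root (e.g. inside the bamos/openface image) so the model paths below resolve:

import cv2
import openface

# Model paths are relative to the openface root directory.
align = openface.AlignDlib("models/dlib/shape_predictor_68_face_landmarks.dat")
net = openface.TorchNeuralNet("models/openface/nn4.small2.v1.t7", imgDim=96)

def image_to_rep(bgr_image):
    # Return the 128-d embedding for the largest face, or None if no face is found.
    rgb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)
    # If you already have a bounding box, pass your own dlib.rectangle instead.
    bb = align.getLargestFaceBoundingBox(rgb)
    if bb is None:
        return None
    aligned = align.align(96, rgb, bb,
                          landmarkIndices=openface.AlignDlib.OUTER_EYES_AND_NOSE)
    if aligned is None:
        return None
    return net.forward(aligned)

rep = image_to_rep(cv2.imread("your_test_image.jpg"))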

Important notes:

  • If you get bad results, try adding a few more pictures of each person in Step 3 (especially pictures in different poses).
  • This script will always make a prediction, even if the face isn't one it knows. In a real application, you would look at the confidence score and throw away low-confidence predictions, since they are most likely wrong (see the sketch below).
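
As a rough sketch of that kind of thresholding (assuming, as in my copy of OpenFace, that classifier.pkl stores a (label encoder, classifier) tuple, and reusing the image_to_rep() sketch from Step 5 to produce the 128-dimensional embedding rep):

import pickle
import numpy as np

# Assumption: demos/classifier.py saved classifier.pkl as a (label_encoder, svm) tuple.
with open("./generated-embeddings/classifier.pkl", "rb") as f:
    (le, clf) = pickle.load(f)

def predict_with_rejection(rep, threshold=0.5):
    # Return (name, confidence), or (None, confidence) if the prediction is too uncertain.
    probs = clf.predict_proba(np.asarray(rep).reshape(1, -1)).ravel()
    best = int(np.argmax(probs))
    confidence = float(probs[best])
    if confidence < threshold:  # tune the threshold for your own data
        return None, confidence
    return le.inverse_transform([best])[0], confidence

name, confidence = predict_with_rejection(rep)  # rep from the image_to_rep() sketch above
print("unknown" if name is None else "{} ({:.2f})".format(name, confidence))
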
@steam0 commented Sep 12, 2016

I cannot perform one of the steps in step 4. This is the error log, even when I try to use the example images.

root@65af8ee87892:~/openface# ls
CONTRIBUTING.md  LICENSE    aligned-images  batch-represent  cloc.sh  demos  evaluation            images      models                        openface          run-tests.sh  tests     training-images   util
Dockerfile       README.md  api-docs        build            data     docs   generated-embeddings  mkdocs.yml  opencv-dlib-torch.Dockerfile  requirements.txt  setup.py      training  training-images2
root@65af8ee87892:~/openface# ./batch-represent/main.lua -outDir ./generated-embeddings/ -data ./images/examples-aligned/
{
  data : "./images/examples-aligned/"
  imgDim : 96
  model : "/root/openface/models/openface/nn4.small2.v1.t7"
  device : 1
  outDir : "./generated-embeddings/"
  cache : false
  cuda : false
  batchSize : 50
}
./images/examples-aligned/
cache lotation:         /root/openface/images/examples-aligned/cache.t7
Creating metadata for cache.
{
  sampleSize :
    {
      1 : 3
      2 : 96
      3 : 96
    }
  split : 0
  verbose : true
  paths :
    {
      1 : "./images/examples-aligned/"
    }
  samplingMode : "balanced"
  loadSize :
    {
      1 : 3
      2 : 96
      3 : 96
    }
}
running "find" on each class directory, and concatenate all those filenames into a single file containing all image paths for a given class
now combine all the files to a single large file
load the large concatenated list of sample paths to self.imagePath (/tmp/lua_y7ibku)
Length of comboned image file: 0
/root/torch/install/bin/luajit: /root/openface/batch-represent/dataset.lua:194: Could not find any image file in the given input paths
stack traceback:
        [C]: in function 'assert'
        /root/openface/batch-represent/dataset.lua:194: in function '__init'
        /root/torch/install/share/lua/5.1/torch/init.lua:91: in function </root/torch/install/share/lua/5.1/torch/init.lua:87>
        [C]: in function 'dataLoader'
        /root/openface/batch-represent/batch-represent.lua:19: in function 'batchRepresent'
        ./batch-represent/main.lua:42: in main chunk
        [C]: in function 'dofile'
        /root/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
        [C]: at 0x00406670
root@65af8ee87892:~/openface#

@tejaslodaya

@ageitgey, you should reject the images with high confidence levels.

The higher the confidence level, the poorer the classification, and vice versa.

@Gelezako commented Apr 17, 2017

You should reject the images with high confidence levels.

How do I do that? I just took some photos with my webcam, and I was able to successfully crop the photos using the command:

./util/align-dlib.py ./training-images/ align outerEyesAndNose ./aligned-images/ --size 96

But I get exactly the same error as steam0.

@mukesh148

I am doing face recognition on a webcam, and I have generated embeddings for all my images. Now, for every frame of video, I have to calculate an embedding for that frame and pass it to the classifier I have trained. How do I generate an embedding for the corresponding bounding box in a frame from a Python file? Please help.

@poojashah89

Can I use this on a Raspberry Pi?

@Simon-TheUser

I was having the same issue with Step 4. For me, it turned out that I had accidentally copied my pictures into ./training-images/ instead of ./training-images/will-ferrell/.

@deepak2226

./batch-represent/main.lua -outDir ./generated-embeddings/ -data ./aligned-images/
I am trying to run this step, but it prompts me that Torch is required. I am using Windows 10. I have tried a lot to install Torch on my Windows machine, but I cannot find a Torch installation for Windows. Please advise.

@subhadeeps

I used the preconfigured Docker image as mentioned in the blog.

I put a single face image in a subdirectory under the training-images directory as described in Step 3, and then executed the commands for Step 4.

Pose detection and alignment:

./util/align-dlib.py ./training-images/ align outerEyesAndNose ./aligned-images/ --size 96
Output:
=== ./training-images/subhadeep/IMG_20180219_180131.jpg ===

Generate the representations from the aligned images:

./batch-represent/main.lua -outDir ./generated-embeddings/ -data ./aligned-images/
Output:
{
  data : "./aligned-images/"
  imgDim : 96
  model : "/root/openface/models/openface/nn4.small2.v1.t7"
  device : 1
  outDir : "./generated-embeddings/"
  cache : false
  cuda : false
  batchSize : 50
}
./aligned-images/
cache lotation:         /root/openface/aligned-images/cache.t7
Creating metadata for cache.
{
  sampleSize :
    {
      1 : 3
      2 : 96
      3 : 96
    }
  split : 0
  verbose : true
  paths :
    {
      1 : "./aligned-images/"
    }
  samplingMode : "balanced"
  loadSize :
    {
      1 : 3
      2 : 96
      3 : 96
    }
}
running "find" on each class directory, and concatenate all those filenames into a single file containing all image paths for a given class
now combine all the files to a single large file
load the large concatenated list of sample paths to self.imagePath
1 samples found...... 0/1 ......................] ETA: 0ms | Step: 0ms
Updating classList and imageClass appropriately
[=================== 1/1 =====================>] Tot: 0ms | Step: 0ms
Cleaning up temporary files
Splitting training and test sets to a ratio of 0/100
nImgs: 1
Represent: 1/1

Later, when I ran the "train your face recognition model" command, I received an error:

./demos/classifier.py train ./generated-embeddings/
Output:
/root/.local/lib/python2.7/site-packages/sklearn/lda.py:4: DeprecationWarning: lda.LDA has been moved to discriminant_analysis.LinearDiscriminantAnalysis in 0.17 and will be removed in 0.19
"in 0.17 and will be removed in 0.19", DeprecationWarning)
Loading embeddings.
Training for 1 classes.
Traceback (most recent call last):
  File "./demos/classifier.py", line 291, in <module>
    train(args)
  File "./demos/classifier.py", line 166, in train
    clf.fit(embeddings, labelsNum)
  File "/root/.local/lib/python2.7/site-packages/sklearn/svm/base.py", line 151, in fit
    y = self._validate_targets(y)
  File "/root/.local/lib/python2.7/site-packages/sklearn/svm/base.py", line 521, in _validate_targets
    % len(cls))
ValueError: The number of classes has to be greater than one; got 1

Can you help me resolve this issue?

@sanketchobe

I'm getting the error below for the training step in Step 4.

Loading embeddings.
Training for 1 classes.
Traceback (most recent call last):
  File "./demos/classifier.py", line 291, in <module>
    train(args)
  File "./demos/classifier.py", line 166, in train
    clf.fit(embeddings, labelsNum)
  File "/root/.local/lib/python2.7/site-packages/sklearn/svm/base.py", line 151, in fit
    y = self._validate_targets(y)
  File "/root/.local/lib/python2.7/site-packages/sklearn/svm/base.py", line 521, in _validate_targets
    % len(cls))
ValueError: The number of classes has to be greater than one; got 1

Anyone have any solution?

@Kabariya

You have to have more than one folder inside the training-images folder. Also, before the first step, check whether there is already a folder named "aligned-images"; if so, go into that folder and remove the cache file, then start again from the first step. I was getting the same error; I removed the cache file and started over from the first step, and now it's working :)

@toofo commented Nov 20, 2018

Hello,

I am running the code from this example. Everything works perfectly when I launch the align step of ./util/align-dlib.py from the command line. However, when I debug "util/align-dlib" from Spyder, it fails on the line

import openface

with the error

'No module named openface'

Is there some way to hardcode the reference to openface?

Thanks!

@tasjapr commented Mar 1, 2019

Hi! I did everything according to the instructions, and everything works up to Step 5. But when I get to Step 5, I get an error:

Traceback (most recent call last):
  File "./demos/classifier.py", line 298, in <module>
    infer(args, args.multi)
  File "./demos/classifier.py", line 196, in infer
    person = le.inverse_transform(maxI)
  File "/usr/local/lib/python2.7/dist-packages/sklearn/preprocessing/label.py", line 273, in inverse_transform
    y = column_or_1d(y, warn=True)
  File "/usr/local/lib/python2.7/dist-packages/sklearn/utils/validation.py", line 797, in column_or_1d
    raise ValueError("bad input shape {0}".format(shape))
ValueError: bad input shape ()

Solved by cmusatyalab/openface#393 (comment)

@aleksandra309303

When executing lua batch-represent/main.lua -outDir generated-embeddings -data aligned-images, I got:
table: 0x56399ed5d410
aligned-images
cache lotation: /mnt/c/Users/aleksandra/Desktop/Diplomski/openface/aligned-images/cache.t7
Creating metadata for cache.
table: 0x56399ee143d0
running "find" on each class directory, and concatenate all those filenames into a single file containing all image paths for a given class
now combine all the files to a single large file
load the large concatenated list of sample paths to self.imagePath
Segmentation fault
Has anyone had this problem? Please help.
