Make sure you have Python, OpenFace, and dlib installed. You can either install them manually or use a preconfigured Docker image that has everything already installed:
docker pull bamos/openface
docker run -p 9000:9000 -p 8000:8000 -t -i bamos/openface /bin/bash
cd /root/openface
Make a folder called /training-images/ somewhere on your computer.
Make a subfolder for each person you want to recognize. For example:
/training-images/will-ferrell/
/training-images/chad-smith/
/training-images/jimmy-fallon/
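The folder layout above can be created in one go. This is just a sketch using a relative path; adjust it to wherever you keep your data:

```shell
# Create one sub-folder per person to recognize
mkdir -p training-images/will-ferrell
mkdir -p training-images/chad-smith
mkdir -p training-images/jimmy-fallon

# Then copy each person's photos into their folder, e.g.:
# cp ~/photos/ferrell-*.jpg training-images/will-ferrell/
ls training-images
```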
Copy all your images of each person into the correct sub-folder.
Run the OpenFace scripts from inside the OpenFace root directory:
First, do pose detection and alignment:
./util/align-dlib.py ./training-images/ align outerEyesAndNose ./aligned-images/ --size 96
Second, generate the representations from the aligned images:
./batch-represent/main.lua -outDir ./generated-embeddings/ -data ./aligned-images/
When you are done, the ./generated-embeddings/ folder will contain a CSV file with the embeddings for each image.
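Once that step succeeds, the embeddings are plain CSV data, so they can be loaded with nothing but the standard library. A minimal sketch, assuming batch-represent wrote `reps.csv` (one embedding per row) and `labels.csv` (class index plus image path per row) into the output directory:

```python
import csv

def load_embeddings(outdir="generated-embeddings"):
    """Load OpenFace batch-represent output into plain Python lists."""
    # reps.csv: one comma-separated embedding vector per image
    with open(f"{outdir}/reps.csv") as f:
        reps = [[float(x) for x in row] for row in csv.reader(f)]
    # labels.csv: rows of (class index, image path); keep the path
    with open(f"{outdir}/labels.csv") as f:
        labels = [row[1] for row in csv.reader(f)]
    return reps, labels
```

From here, each embedding can be paired with its source image path and fed to whatever classifier you like.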
I'm sorry, but after I run this it says:
/root/torch/install/bin/luajit: /root/openface/batch-represent/dataset.lua:193: could not find any image file in the given input paths
I am wondering why this happens. Also, once I have the embeddings, how can I use them to test against videos or other images from the Internet?