Make sure you have Python, OpenFace, and dlib installed. You can either install them manually or use a preconfigured Docker image that has everything already installed:
docker pull bamos/openface
docker run -p 9000:9000 -p 8000:8000 -t -i bamos/openface /bin/bash
cd /root/openface
Make a folder called /training-images/ somewhere on your computer.
Make a subfolder for each person you want to recognize. For example:
/training-images/will-ferrell/
/training-images/chad-smith/
/training-images/jimmy-fallon/
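The folder layout above can be sketched with a small Python helper (a hypothetical convenience function, not part of OpenFace; the folder names are the examples from the text):

```python
from pathlib import Path

def make_training_dirs(root, people):
    """Create one sub-folder per person under the training-images root.

    Returns the sorted list of sub-folder names that now exist.
    """
    root = Path(root)
    for person in people:
        # parents=True also creates the root folder if it is missing
        (root / person).mkdir(parents=True, exist_ok=True)
    return sorted(p.name for p in root.iterdir() if p.is_dir())

# Example:
# make_training_dirs("training-images",
#                    ["will-ferrell", "chad-smith", "jimmy-fallon"])
```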
Copy all your images of each person into the correct sub-folders.
Run the openface scripts from inside the openface root directory:
First, do pose detection and alignment:
./util/align-dlib.py ./training-images/ align outerEyesAndNose ./aligned-images/ --size 96
Second, generate the representations from the aligned images:
./batch-represent/main.lua -outDir ./generated-embeddings/ -data ./aligned-images/
When you are done, the ./generated-embeddings/ folder will contain a CSV file with the embeddings for each image.
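Once the embeddings exist, you typically load them back for further use (e.g. training a classifier). A minimal sketch, assuming the output folder contains reps.csv (one embedding per row) and labels.csv (a label index and image path per row), which is the layout OpenFace's batch-represent script writes; verify the file names against your own output:

```python
import csv

def load_embeddings(reps_path, labels_path):
    """Read batch-represent output: embeddings and their image paths."""
    with open(reps_path, newline="") as f:
        # Each row is one embedding, stored as comma-separated floats
        reps = [[float(x) for x in row] for row in csv.reader(f)]
    with open(labels_path, newline="") as f:
        # Second column of labels.csv is the aligned image path
        labels = [row[1] for row in csv.reader(f)]
    return reps, labels
```

The image path encodes the person's sub-folder, so the person name can be recovered from each label if you need class names for a classifier.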
I tried to run this code in a Windows environment. I set everything up: dlib and PyTorch via Anaconda, and torch, csvigo, and dnn via LuaRocks.

The first part, aligning the images, was a success. I ran "python util/align-dlib.py training-images/ align outerEyesAndNose aligned-images/ --size 96" in a conda environment and got the aligned images in the "aligned-images" folder. The problem is that I was unable to get the embeddings in the "generated-embeddings" folder. I used the command "lua batch-represent/main.lua -outDir generated-embeddings/ -data aligned-images/". At first I got lots of errors such as required csvigo, dnn, nn, etc., but after installing all those packages the command executed successfully. However, the "generated-embeddings" folder was not created, and I am not getting any errors. Can someone help me run this code successfully on a Windows environment?
-Thanks