# Take one picture of yourself a day, automatically generate a movie!
FaceMovie is a simple project that helps you create videos of yourself over time, using photos as input. Simply take several pictures of yourself in the same position and decide when to compile everything into a video. Just point Facemovie at your pictures; it does everything else for you.
I see a growing interest in projects where people take one picture of themselves a day for several months (or years?) and compile them into a video. When searching the web, I realized that there was only one piece of software that lets people do this, the [Everyday paid iPhone app](everyday url). I hope that Facemovie can help some of you! The main difference with Everyday is that Facemovie automatically searches for faces in the input images and compiles them in the best possible way, so that your video looks awesome.
Thanks to its general implementation, FaceMovie may be used not only for faces, but also for profiles (for projects showing [women during pregnancy, for example](url pregnant women)) or full-body shots (for people tracking workouts). The only limitation comes from you!
You can check out an example video here.
## Getting started
There are several ways to use Facemovie:
- Download a single executable here. Choose the file corresponding to your architecture, unzip the archive, and you're done!
  - Depending on your setup, you can choose an installer that includes the Python interpreter or not.
  - The executable ships with a folder called haar_cascades containing the elements needed for the recognition phase of the algorithm. By default, leave it in the same location as the executable.
- Install the Python package via pip (see command here).
  - Alternatively, use the classic `setup install` from the sources.
  - You'll need some libraries installed to run the code, but the Facemoviefier command will be available in your Python interpreter.
- Clone the project from GitHub and use the code. For this, you will have to install [all the tools needed to run the Python code](see requirements):

      git clone git://github.com/jlengrand/FaceMovie.git

  - I created scripts for Windows and Linux in the repo, so that the code can be used easily.
  - This way, you'll get the latest version of the code.
For each of the following commands, *Facemoviefier* should be replaced by `Facemoviefier.exe` or `python Facemoviefier` depending on your installation method (executable or Python egg).
Once installed, let's start by calling the help of Facemovie:

    $ Facemoviefier -h

This command will list all the available parameters.
The next step is to create your first video. It is no more complex than running the following on the command line:
    $ Facemoviefier -i input_folder -o output_folder
where input_folder is the folder where all your images are stored, and output_folder is where you want the results to be saved. If you don't have images, you can still test the application by downloading some samples [here](link to samples).
Here is a concrete example:
    $ Facemoviefier.exe -i "../data/input/samples" -o "../data/output"
**NOTE:** In order to get good results, your images should contain only one person, and you should try to keep the same angle with the camera in each of them.
Facemovie needs the haar_cascades files to correctly detect faces. This means that if you decide to run Facemovie from another location, you should update the folders accordingly using the root folder option:
    $ Facemoviefier -i input_folder -o output_folder -r haar_cascades_folder_location
Facemovie lets you choose the type of output you want once the processing is done, using the --type (-t) option. Here is how to save images instead of a movie:
    $ Facemoviefier -i "../data/input/Axel" -o "../data/output" -t i
By default, Facemovie searches for frontal faces. You can change this by specifying which profile to use with the --profile (-p) option:
    $ Facemoviefier -i "../data/input/Axel" -o "../data/output" -p "profile face"
An extensive list of training files is available by calling the help, or by running the following command:
    $ Facemoviefier -p ?
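For reference, the profile names correspond to Haar-cascade training files. As a purely illustrative sketch (the file names below are the standard ones shipped with OpenCV, and are only an assumption about what Facemovie's haar_cascades folder contains), the lookup could be as simple as:

```python
# Hypothetical mapping from Facemoviefier's --profile names to the
# standard Haar-cascade files shipped with OpenCV. The actual file
# names used by Facemovie may differ.
CASCADES = {
    "frontal face": "haarcascade_frontalface_default.xml",
    "profile face": "haarcascade_profileface.xml",
    "upper body":   "haarcascade_upperbody.xml",
    "lower body":   "haarcascade_lowerbody.xml",
    "full body":    "haarcascade_fullbody.xml",
}

def cascade_for(profile="frontal face"):
    """Return the cascade file for a profile name, or raise listing valid choices."""
    try:
        return CASCADES[profile]
    except KeyError:
        raise ValueError("Unknown profile %r; choose one of: %s"
                         % (profile, ", ".join(sorted(CASCADES))))

print(cascade_for("profile face"))  # haarcascade_profileface.xml
```

An unknown name raises a ValueError listing the valid choices, which is roughly what `-p ?` gives you on the command line.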
## Options available in Facemoviefier

**Required:**
- -i, --input : Input folder of the images to be processed
- -o, --output : Output folder where the final results will be saved
**Optional:**
- -h, --help : Shows help message and exits
- -r, --root : Location of the facemovie folder. Required if you run Facemoviefier from an external location.
- -p, --param : Used to change the file used to train the classifier. Useful if you want to detect something other than frontal faces. Available parameters:
  - frontal face (default)
  - profile face
  - upper body
  - lower body
  - full body
- -t, --type : The type of output to be created. Available types:
  - video
  - images
  - simple graphical display (nothing written to disk)
- -e, --equalize : When this option is activated, Facemovie will NOT resize images so that faces always keep the same size. This may lower the quality of the results, but avoids resizing images.
- -s, --sort : The way images are sorted chronologically, using either file names or EXIF metadata. Available modes:
  - name (default)
  - EXIF
- -c, --crop : In this mode, final images are cropped so that only the desired part of the body is kept. This removes parts of the input images, but avoids the addition of black borders in the output.
- -d, --cropdims : Expects two floats. Ignored if crop mode is not selected. Chooses the window to be cropped. The values are defined in multiples of the detected face size: for example, -d 2 2 will output square images whose sides are 2x the size of the subject's face.
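To make the -d semantics concrete, here is a small sketch (my own illustration, not Facemovie's actual code) turning a detected face box and the two -d multipliers into pixel crop coordinates, clamped to the image borders:

```python
def crop_window(face, img_w, img_h, kx=2.0, ky=2.0):
    """Given a detected face box (x, y, w, h), return a crop box centred on the
    face whose width is kx * w and height is ky * h, clamped to the image."""
    x, y, w, h = face
    cx, cy = x + w / 2.0, y + h / 2.0            # face centre
    half_w, half_h = kx * w / 2.0, ky * h / 2.0  # half the crop dimensions
    left   = max(0, int(cx - half_w))
    top    = max(0, int(cy - half_h))
    right  = min(img_w, int(cx + half_w))
    bottom = min(img_h, int(cy + half_h))
    return left, top, right, bottom

# A 100x100 face at (250, 250) in a 600x600 image, with -d 2 2:
print(crop_window((250, 250, 100, 100), 600, 600))  # (200, 200, 400, 400)
```

With -d 2 2 and a 100 px face, the crop is a 200x200 window centred on the face, shrunk only where it would otherwise extend past the image borders.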
## Libraries
This piece of code is developed in Python, simply because I love it :P (and because it allows easy testing while developing image-processing applications). I used Python 2.7 for development. The only library needed to run the code for now is OpenCV (and, by extension, NumPy). See the documentation for more information.
This project is developed on a Windows 7 platform, but there should be (and, as a fanatic Linux user, will be) no compatibility problems with UNIX.
## TODO

**In progress:**
- The very central part of the code is finished.
- I am currently working on video quality enhancement (compression, speed, fade effects, ...).
- I plan to include a GUI to help people use the software, and later to add support for a broader range of input images (profiles, eyes, glasses, ...).
- I am also thinking about a way for users to help the software when it struggles with images (by manually pointing out the face on problematic images?).
- Any other idea is welcome!
## License
This project is released under the new BSD license (3-clause version). You can read more about the license in the LICENSE file.
## Acknowledgment
This project comes from an idea by Axel Catoire, who is currently travelling around the world with his girlfriend. He also provides me with new pictures :).
As a starting point for my code, I used an excellent example from JAPSKUA, which you can find here.
## Contact
I would enjoy getting feedback if you like this idea, or have even used it (even though you should change the source code to run it for now :) ). I would also like to know if you have heard of any other solution for making this kind of video! (I couldn't find anything but this iPhone app on the internet!) Feel free to mail me with any comment or request.
You can contact me at julien at lengrand dot fr, or on my current website.