**Dockerfile**

```dockerfile
FROM debian:stretch

# Install Debian packages
ENV DEBIAN_FRONTEND noninteractive

RUN apt-get update -qq \
    && apt-get install --no-install-recommends -y \
        # build essentials
        build-essential \
        # Python 3
        python3.5 \
        python3-dev \
        python3-pip \
        python3-wheel \
        # CMake and Boost for dlib
        cmake \
        libboost-all-dev \
        # requirements for Keras
        python3-h5py \
        python3-yaml \
        python3-pydot \
        python3-setuptools \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

COPY ./requirements.txt .
RUN pip3 --no-cache-dir install -r ./requirements.txt

WORKDIR /srv/
```

---

**INSTALL.md**
# Prerequisites

Machine learning essentially involves a ton of trial and error. You're letting a program try millions of different settings to land on an algorithm that sort of does what you want it to do. This process is very slow unless you have the hardware to speed it up.

The computations involved are far better suited to graphics cards than to regular processors. **It is pretty much required that you run the training process on a desktop- or server-class GPU.** Training on your CPU can take weeks, compared to several hours on a GPU.
## Hardware Requirements

**TL;DR: you need at least one of the following:**

- **A powerful CPU**
  - Laptop CPUs can often run the software, but will not be fast enough to train at reasonable speeds.
- **A powerful GPU**
  - Currently only Nvidia GPUs are supported. AMD graphics cards are not. This is not something we have control over; it is a requirement of the Tensorflow library.
  - The GPU needs to support CUDA Compute Capability 3.0 or higher. To see which version your GPU supports, consult this list: https://developer.nvidia.com/cuda-gpus. Desktop cards later than the 7xx series are most likely supported. A quick command line check is shown below this list.
- **A lot of patience**
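If you are not sure which Nvidia card you have, `nvidia-smi` (shipped with the Nvidia drivers) can list it. A minimal sketch, assuming the drivers are already installed:

```bash
# List the GPUs the Nvidia driver can see, then look the model up on
# https://developer.nvidia.com/cuda-gpus to find its compute capability.
nvidia-smi -L
```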
## Supported operating systems

- **Windows 10**
  Windows 7 and 8 might work. Your mileage may vary.
- **Linux**
  Most Ubuntu/Debian or CentOS based Linux distributions will work.
- **macOS**
  GPU support on macOS is limited due to the lack of drivers/libraries from Nvidia.

Alternatively, there is a Docker image that is based on Debian.
# Important before you proceed

**In its current iteration, the project relies heavily on the use of the command line. If you are unfamiliar with command line tools, you should not attempt any of the steps described in this guide.** Wait instead for this tool to become easier to use, or start learning how to work with the command line. This guide assumes intermediate knowledge of the command line.

The developers are also not responsible for any damage you might cause to your own computer.
# Installation Instructions

## Installing dependencies

### Python 3.6

Note that you will need the 64-bit version of Python, especially to set up the GPU version! You can verify which build you have with the snippet shown below.
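A quick way to check the interpreter version and whether it is a 32- or 64-bit build (use `python` instead of `python3` on Windows):

```bash
# Print the Python version and the pointer size (64 means a 64-bit build).
python3 --version
python3 -c "import struct; print(struct.calcsize('P') * 8, 'bit')"
```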
#### Windows

Download the latest version of Python 3 from python.org: https://www.python.org/downloads/release/python-364/
#### macOS

By default, macOS comes with Python 2.7. This project needs Python 3.6. The easiest way to get it is to install it through `Homebrew`. If you are not familiar with `Homebrew`, read more about it here: https://brew.sh/

To install Python 3.6:

```
brew install python3
```
#### Linux

You know how this works, don't you? Install Python 3.6 and `pip` through your distribution's package manager; an example for Debian-based systems is shown below.
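A minimal sketch for Debian/Ubuntu-based distributions; package names may differ on other distributions:

```bash
# Install Python 3 and pip from the distribution repositories.
sudo apt-get update
sudo apt-get install python3 python3-pip
```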
### Virtualenv

Install virtualenv next. Virtualenv helps us create a self-contained environment for our project, so that any Python packages we install for this project are compartmentalized to this specific environment. We'll install virtualenv with `pip`, Python's package/dependency manager:

```
pip install virtualenv
```

or

```
pip3 install virtualenv
```

Alternatively, if your Linux distribution provides its own virtualenv package through apt or yum, you can use that as well.
#### Windows specific

`virtualenvwrapper-win` is a package that makes virtualenvs easier to manage on Windows:

```
pip install virtualenvwrapper-win
```
## Getting the faceswap code

Simply download the code from http://github.com/deepfakes/faceswap/ - for development it is recommended to use git instead of downloading the code and extracting it.

For now, extract the code to a directory where you're comfortable working with it, and navigate to it with the command line. For our example we will use `~/faceswap/` as our project directory.
## Setting up our virtualenv

### First steps

We will now initialize our virtualenv:

```
virtualenv faceswap_env/
```

On Windows you can use:

```
mkvirtualenv faceswap
setprojectdir .
```
This will create a folder with python, pip, and setuptools ready to go in its own little environment. It will also activate the virtual environment, which is indicated by `(faceswap)` on the left side of the prompt. Anything we install now will be specific to this project, and available to any other projects we connect to this environment.

When you're content with the work you've contributed and want to move on to something else in the command line, simply type `deactivate` to deactivate your environment.

To reactivate your environment on Windows, use `workon faceswap`. On Mac and Linux, use `source ./faceswap_env/bin/activate`. Note that the Mac/Linux command is relative to the project and virtualenv directory.
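Putting those steps together, a typical Mac/Linux session might look like this (a sketch using the `~/faceswap/` example directory from above):

```bash
cd ~/faceswap
virtualenv faceswap_env/              # create the environment (first time only)
source ./faceswap_env/bin/activate    # activate; the prompt now shows the env name
# ... work on the project ...
deactivate                            # leave the environment when you are done
```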
### Setting up for our project

With your virtualenv activated, install the dependencies from the requirements file:

```bash
pip install -r requirements.txt
```
If you want to use your GPU instead of your CPU, substitute `requirements.txt` with `requirements-gpu.txt`:

```bash
pip install -r requirements-gpu.txt
```

Should you choose the GPU version, Tensorflow might ask you to install the CUDA Toolkit and the cuDNN libraries. Instructions for installing those can be found on Nvidia's website. A quick way to check that Tensorflow can actually see your GPU is shown below.
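This sketch pokes at a Tensorflow internal module that exists in the 1.x releases; the exact API may differ in your version:

```bash
# List the compute devices Tensorflow detects; a working GPU setup
# should show a device of type "GPU" alongside the CPU.
python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"
```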
Once all these requirements are installed, you can attempt to run the faceswap tools. Pass `-h` or `--help` for a list of options:

```bash
python faceswap.py -h
```

Proceed to [USAGE.md](USAGE.md).
## Notes

This guide is far from complete. Functionality may change over time, and new dependencies are added and removed as time goes on.

If you are experiencing issues, please raise them in the [faceswap-playground](https://github.com/deepfakes/faceswap-playground) repository instead of the main repo.

---

**README.md**
# deepfakes
This is the code for "DeepFakes" by Siraj Raval on Youtube.

**Notice:** This repository is not operated or maintained by [/u/deepfakes](https://www.reddit.com/user/deepfakes/). Please read the explanation below for details.

---
# deepfakes_faceswap

Faceswap is a tool that uses deep learning to recognize and swap faces in pictures and videos.

## Overview
The project has multiple entry points. You will have to:
- Gather photos (or use the ones provided in the training data below)
- **Extract** faces from your raw photos
- **Train** a model on your photos (or use the one provided in the training data below)
- **Convert** your sources with the model
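Run from your setup folder with the default folder layout described below, the three stages chain together like this (a sketch; each command accepts further arguments):

```bash
# Extract faces, train the model on them, then swap faces in new photos.
python faceswap.py extract
python faceswap.py train
python faceswap.py convert
```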
### Extract
From your setup folder, run `python faceswap.py extract`. This will take photos from the `src` folder and extract faces into the `extract` folder.

### Train
From your setup folder, run `python faceswap.py train`. This will take photos from two folders containing pictures of both faces and train a model, which will be saved inside the `models` folder.

### Convert
From your setup folder, run `python faceswap.py convert`. This will take photos from the `original` folder and apply the new faces into the `modified` folder.
#### General notes
- All of the scripts mentioned have `-h`/`--help` options listing the arguments they accept. You're smart, you can figure out how this works, right?!

Note: there is no conversion for video yet. You can split a video into frames (for example with `ffmpeg`, see USAGE.md), process the images, and convert the images back to video.
## Training Data
**Whole project with training images and trained model (~300MB):**
https://anonfile.com/p7w3m0d5be/face-swap.zip
## How to set up and run the project

### Setup

Clone the repo and set up your environment. There is a Dockerfile that should kickstart you. Otherwise you can set things up manually; see the Dockerfile for dependencies.

Check out [INSTALL.md](INSTALL.md) and [USAGE.md](USAGE.md) for basic information on how to configure virtualenv and use the program.

You also need a modern GPU with CUDA support for best performance.
**Some tips:**

Reusing existing models will train much faster than starting from nothing. If there is not enough training data, start with someone who looks similar, then switch the data.
#### Docker
If you prefer using Docker, you can start the project with:
- Build: `docker build -t deepfakes .`
- Run: `docker run --rm --name deepfakes -v [src_folder]:/srv -it deepfakes bash` (`bash` can be replaced by your own command)

Note that the Dockerfile does not include all of the requirements, so some Python 3 commands will fail. Also note that there is no GUI output, so `train.py` will fail when it tries to show the preview image. You can comment that out, or save the preview to a file instead.
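For example, assuming your working copy lives at `~/faceswap` (substitute your own path for `[src_folder]`):

```bash
# Build the image once, then run it with the project mounted at /srv.
docker build -t deepfakes .
docker run --rm --name deepfakes -v ~/faceswap:/srv -it deepfakes bash
```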
## How to contribute

### For people interested in the generative models
- Go to the 'faceswap-model' repo to discuss/suggest/commit alternatives to the current algorithm.
### For devs
- Read this README entirely
- Fork the repo
- Download the data with the link provided above
- Play with it
- Check issues with the 'dev' tag
- For devs more interested in computer vision and OpenCV, look at issues with the 'opencv' tag. Also feel free to add your own alternatives/improvements
### For non-dev advanced users
- Read this README entirely
- Clone the repo
- Download the data with the link provided above
- Play with it
- Check issues with the 'advuser' tag
- Also go to the 'faceswap-playground' repo and help others
### For end-users
- Get the code here and play with it if you can
- You can also go to the 'faceswap-playground' repo and help or get help from others
- Be patient. This is relatively new technology for developers as well. Much effort is already being put into making this program easy to use for the average user. It just takes time!
- **Notice:** Any issue related to running the code has to be opened in the 'faceswap-playground' project!

### For haters
Sorry, no time for that.
# About github.com/deepfakes

## What is this repo?
It is a community repository for active users.

## Why this repo?
The joshua-wu repo does not seem to be active. Simple bugs, like a missing _http://_ in front of a URL, have gone unfixed for days.

## Why is it named 'deepfakes' if it is not /u/deepfakes?
1. Because a typosquat would have happened sooner or later as the project grows
2. Because all glory goes to /u/deepfakes
3. Because it will better federate contributors and users

## What if /u/deepfakes feels bad about that?
This is a friendly typosquat, and it is fully dedicated to the project. If /u/deepfakes wants to take over this repo/user and drive the project, he is welcome to do so (raise an issue, and he will be contacted on Reddit). Please do not send /u/deepfakes messages for help with the code you find here.
# About machine learning

## How does a computer know how to recognize/shape a face? How does machine learning work? What is a neural network?

It's complicated. Here's a good video that makes the process understandable:
[![How Machines Learn](https://img.youtube.com/vi/R9OHn5ZF4Uo/0.jpg)](https://www.youtube.com/watch?v=R9OHn5ZF4Uo)

Here's a slightly more in-depth video that tries to explain the basic functioning of a neural network:
[![But what is a neural network?](https://img.youtube.com/vi/aircAruvnKk/0.jpg)](https://www.youtube.com/watch?v=aircAruvnKk)

tl;dr: training data + trial and error

---

**USAGE.md**
**Before attempting any of this, please make sure you have read, understood and completed [the installation instructions](../master/INSTALL.md). If you are experiencing issues, please raise them in the [faceswap-playground](https://github.com/deepfakes/faceswap-playground) repository instead of the main repo.**
# Workflow

So, you want to swap faces in pictures and videos? Hold up, because first you've got to understand what this collection of scripts does, how it does it, and what it can't do yet.

The basic operation of this script is simple: it trains a machine learning model to recognize and transform two faces based on pictures. The model is our little "bot" that we're teaching to do the actual swapping, and the pictures are the "training data" we use to train it. Note that the bot is primarily processing faces; other objects might not work.

So here's our plan. We want to create a reality where Donald Trump lost the presidency to Nic Cage; we have his inauguration video; let's replace Trump with Cage.
## Gather training data

To accomplish this, the bot needs to learn to recognize both face A (Trump) and face B (Nic Cage). By default, the bot doesn't know what a Trump or a Nic Cage looks like, so we need to show it some pictures and let it guess which is which. That means we need pictures of both of these faces first.

A possible source is Google, DuckDuckGo or Bing image search. There are scripts to download large amounts of images. Alternatively, if you have a lot of videos of the person you're looking for (interviews, public speeches, movies), you can convert a video to still images/frames and use those.

Feel free to list your image sets in the [faceswap-playground](https://github.com/deepfakes/faceswap-playground), or add more methods to this file.

So now we have a folder full of pictures of Trump and a separate folder of Nic Cage. Let's save them in the directory where we put the faceswap project, for example `~/faceswap/photo/trump` and `~/faceswap/photo/cage`.
## Extracting our training data

So here's a problem. We have a ton of pictures of both our subjects, but they're just pictures of them doing stuff or in an environment with other people. Their bodies are in them, they're there with other people... it's a mess. We can only train our bot if the data is consistent and focuses on the subject we want to swap. This is where faceswap first comes in.
```bash
# To extract the faces from the trump photos:
python faceswap.py extract -i ~/faceswap/photo/trump -o ~/faceswap/data/trump
# To extract the faces from the cage photos:
python faceswap.py extract -i ~/faceswap/photo/cage -o ~/faceswap/data/cage
```

We specify our photo input directory and the output folder where our training data will be saved. The script will then try its best to recognize face landmarks, crop the image to the face, and save it to the output folder. Note: this script makes gathering training data much easier, but it is not perfect. It will (incorrectly) detect multiple faces in some photos, and it does not recognize whether the face it found belongs to the person we want to swap. Therefore: **always check your training data before you start training.** The training data will influence how good your model will be at swapping. A quick sanity check is sketched below.
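Before training, count the extracted crops and skim through them by eye; a sketch using the example paths:

```bash
# Count the extracted face crops for each subject.
ls ~/faceswap/data/trump | wc -l
ls ~/faceswap/data/cage | wc -l
```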
## Training

The training process will take the longest, especially on a CPU. We specify the folders where the two sets of faces are, and where to save the training model. Once you run the command, it will start hammering away at the training data. I personally like to go by the preview and quit the process once I'm happy with the results.

```bash
python faceswap.py train -A ~/faceswap/data/trump -B ~/faceswap/data/cage -m ~/faceswap/models/
# or add -p to show a preview
python faceswap.py train -A ~/faceswap/data/trump -B ~/faceswap/data/cage -m ~/faceswap/models/ -p
```

If you use the preview feature, select the preview window and press Q to save your model and quit gracefully. Without the preview enabled, you might have to quit forcefully by hitting Ctrl+C to cancel the command. Note that the model is only saved once it has gone through about 100 iterations, which can take quite a while, so make sure it has saved before stopping the process.
## Swapping

Now that we're happy with our trained model, we can convert our video. How does it work? Similarly to the extraction script, actually! The conversion script detects a face in a picture using the same algorithm, crops the image to the right size, runs our bot on this cropped image of the face it has found, and then (crudely) pastes the processed face back into the picture.
### Testing out our bot

Remember those initial pictures we had of Trump? Let's try swapping a face there. We will use that directory as our input directory, create a new folder where the output will be saved, and tell the script which model to use:

```bash
python faceswap.py convert -i ~/faceswap/photo/trump/ -o ~/faceswap/output/ -m ~/faceswap/models/
```

It should now start swapping the faces in all of these pictures.
### Preparing a video

A video is just a series of pictures (frames). You can export a video to still frames using `ffmpeg`, for example. Below is an example command to split a video into frames:

```bash
ffmpeg -i /path/to/my/video.mp4 /path/to/output/video-frame-%04d.png
```

If you then point faceswap at the resulting directory of frames, it will automatically go through all of them. And here's a command to stitch the png frames back into a single video:

```bash
ffmpeg -i video-frame-%04d.png -c:v libx264 -vf "fps=25,format=yuv420p" out.mp4
```
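Putting the whole video workflow together, a sketch using the example paths from above and assuming the converter keeps the original frame filenames:

```bash
# 1. Split the source video into numbered frames.
mkdir -p ~/faceswap/frames ~/faceswap/output
ffmpeg -i ~/faceswap/video.mp4 ~/faceswap/frames/video-frame-%04d.png
# 2. Swap the faces on every frame.
python faceswap.py convert -i ~/faceswap/frames/ -o ~/faceswap/output/ -m ~/faceswap/models/
# 3. Stitch the converted frames back into a video.
ffmpeg -i ~/faceswap/output/video-frame-%04d.png -c:v libx264 -vf "fps=25,format=yuv420p" ~/faceswap/out.mp4
```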
## Notes

This guide is far from complete. Functionality may change over time, and new dependencies are added and removed as time goes on.

If you are experiencing issues, please raise them in the [faceswap-playground](https://github.com/deepfakes/faceswap-playground) repository instead of the main repo.

---

**faceswap.py**
```python
#!/usr/bin/env python3
import sys

# Bail out early on interpreters older than Python 3.2.
if sys.version_info < (3, 2):
    raise Exception("This program requires at least python3.2")

from lib.utils import FullHelpArgumentParser

from scripts.extract import ExtractTrainingData
from scripts.train import TrainingProcessor
from scripts.convert import ConvertImage

if __name__ == "__main__":
    # Each subcommand class registers itself on the shared subparser.
    parser = FullHelpArgumentParser()
    subparser = parser.add_subparsers()
    extract = ExtractTrainingData(
        subparser, "extract", "Extract the faces from pictures.")
    train = TrainingProcessor(
        subparser, "train", "This command trains the model for the two faces A and B.")
    convert = ConvertImage(
        subparser, "convert", "Convert a source image to a new one with the face swapped.")
    arguments = parser.parse_args()
    arguments.func(arguments)
```
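Since each command registers itself on the subparser, every subcommand exposes its own help text, for example:

```bash
python faceswap.py extract --help
python faceswap.py train --help
python faceswap.py convert --help
```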