This web app is part of the ReInHerit Toolkit.
Based on Bringing Old Photos Back to Life, CVPR 2020 (Oral).
You can choose to test the app in two ways:
- creating a Python virtual environment
- using Docker
Follow the prerequisites and instructions, paying attention to the parts that relate to the method you choose.
- Python 3.10 installed on your machine. If you don't have it, you can download it from the official website (https://www.python.org/downloads/) or follow this online guide to install it: https://realpython.com/installing-python/
- JavaScript enabled in your browser. If it is not, you can follow this online guide: https://www.enable-javascript.com/
Depending on the method you choose to test the app, you will also need:
- Python Virtual Environment: we recommend using Conda to manage virtual environments. Check whether Conda is installed by running the following command in your terminal or command prompt:
conda --version
If Conda is not installed, follow the installation instructions on the official Anaconda website: https://docs.anaconda.com/anaconda/install/
- Docker: you'll need to set up and run Docker on your operating system. If you are not familiar with Docker, please refer to the official documentation at https://docs.docker.com/.
In a terminal, go to the project folder and run the following commands:
cd Face_Detection/
wget http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
bzip2 -d shape_predictor_68_face_landmarks.dat.bz2
cd ../
Download the pretrained checkpoint archives into ./Face_Enhancement and ./Global, then unzip each of them. Use these commands:
cd Face_Enhancement/
wget https://github.com/microsoft/Bringing-Old-Photos-Back-to-Life/releases/download/v1.0/face_checkpoints.zip
unzip face_checkpoints.zip
cd ../
cd Global/
wget https://github.com/microsoft/Bringing-Old-Photos-Back-to-Life/releases/download/v1.0/global_checkpoints.zip
unzip global_checkpoints.zip
cd ../
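As a quick sanity check (a sketch, assuming each archive unpacks into a checkpoints folder inside its respective directory), you can list the extracted folders:
ls Face_Enhancement/checkpoints Global/checkpoints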
A Django secret key is required; a Google Analytics key is optional. Edit the .env_template file with your own settings and rename it to .env (a sample of the final file is sketched after the steps below).
- You can generate the Django secret key by typing in a terminal:
python getYourDjangoKey.py
- Copy and paste the generated key into the DJANGO_KEY field of the .env file.
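If the helper script is not available, a possible alternative (a sketch, assuming Django is already installed in your Python environment, e.g. after the pip install step further below) is to call Django's own key generator directly:
python -c "from django.core.management.utils import get_random_secret_key; print(get_random_secret_key())"
Either way, the result is a random string suitable for the DJANGO_KEY field.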
- Go to https://analytics.google.com/analytics/web/ and click on the "Get Started for Free" button.
- Sign in with your Google account.
- Follow the instructions to create a new account.
- Once you have created your account, you will be redirected to the dashboard. Click on the "Admin" button in the top left corner.
- Click on the "Create Property" button.
- Select "Web" as the property type and click on the "Continue" button.
- Enter a name for your property and click on the "Create" button.
- Click on the "Tracking Info" button.
- Click on the "Tracking Code" button.
- Copy the "Tracking ID" and paste it in the GA_KEY field of the .env file.
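Once you have both values, the resulting .env file might look like the following sketch (assuming the template uses exactly the DJANGO_KEY and GA_KEY variable names mentioned above; the values shown are placeholders):
DJANGO_KEY=your-generated-django-secret-key
GA_KEY=your-google-analytics-tracking-id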
- Open a terminal and navigate to the folder containing the requirements.txt file.
Create a virtual environment by typing:
conda create --name my_env_name python=3.10
Activate the environment by typing:
conda activate my_env_name
Note: replace my_env_name with a relevant name for your environment.
With the virtual environment activated, install the Python libraries required for the project by running:
pip install -r requirements.txt
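As an optional check that the installation succeeded, you can, for example, print the installed Django version (a sketch; Django is one of the packages the app relies on):
python -m django --version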
- Open a terminal and navigate to the folder containing the manage.py file (the same folder as requirements.txt), then type:
python manage.py runserver
- Open another terminal and navigate to the folder containing the bring_to_life.py file (again, the same folder as requirements.txt), then type:
python bring_to_life.py
- Now open a browser and go to the address:
http://localhost:8000
- In a terminal, go to the root of the repository.
- Run this command:
docker build -t oldphoto .
- The build will take a while; wait until it is finished.
- If the build stops with an error, try running it again.
- In a terminal, go to the root of the repository.
- Run:
docker run -it --gpus=all --env-file=.env -e HOST=localhost -e PORT=8000 -v media-volume:/app/media:rw -p 8000:8000 oldphoto
Note: the --gpus=all flag requires an NVIDIA GPU and the NVIDIA Container Toolkit on the host.
- Wait until the container is running.
- Open a browser and go to
http://localhost:8000
- You should see the demo page. Click on the 'Start to restore' button to start the demo.
- Click on the BROWSE button to select the images to upload. You can upload multiple images at the same time. After uploading, you can click on an image to see the original image and the restored image.
- Select the image or images you want to restore and click Open.
- The selected images will be shown in the browser, each with two checkboxes.
- If a photo has scratches or damage that needs to be repaired, select the 'with scratches' checkbox.
- If the scratched image has a resolution of 300 DPI (dots per inch) or higher, also select the checkbox labeled 'is HD'.
- If needed, you can click the BROWSE button again to upload more images from the same folder.
- When you are happy with the selection, click on the PROCESS button to start the restoration process.
Note: The processing time depends on the number of images you upload. The more images you upload, the longer it will take to process.
- The restored images will be shown in the browser.
- For every image, the original and the restored version are shown, with a comparison of the areas most affected by the restoration between them.
- Clicking on the DOWNLOAD button downloads the restored images and brings you back to the landing page.
- Clicking on the RESTART button brings you back to the landing page. Attention: you will lose all your processed images!
@inproceedings{wan2020bringing,
  title={Bringing Old Photos Back to Life},
  author={Wan, Ziyu and Zhang, Bo and Chen, Dongdong and Zhang, Pan and Chen, Dong and Liao, Jing and Wen, Fang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={2747--2757},
  year={2020}
}

@article{wan2020old,
  title={Old Photo Restoration via Deep Latent Space Translation},
  author={Wan, Ziyu and Zhang, Bo and Chen, Dongdong and Zhang, Pan and Chen, Dong and Liao, Jing and Wen, Fang},
  journal={arXiv preprint arXiv:2009.07047},
  year={2020}
}
The code and the pretrained model in this repository are under the MIT license, as specified by the LICENSE file. We use our labeled dataset to train the scratch detection model.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.