Showing 6 changed files with 144 additions and 1 deletion.

```
@@ -1,5 +1,7 @@
autoregressive_transformer/*trainer.py
flow_matching_transformer/*trainer.py

!vevo/config/*.npz

!vevo/wav/*.wav
vevo/wav/output*.wav
```

@@ -0,0 +1,98 @@

# Vevo: Controllable Zero-Shot Voice Imitation with Self-Supervised Disentanglement

[![arXiv](https://img.shields.io/badge/OpenReview-Paper-COLOR.svg)](https://openreview.net/pdf?id=anQDiQZhDP)
[![hf](https://img.shields.io/badge/%F0%9F%A4%97%20HuggingFace-model-yellow)](https://huggingface.co/amphion/Vevo)
[![WebPage](https://img.shields.io/badge/WebPage-Demo-red)](https://versavoice.github.io/)

We present our reproduction of [Vevo](https://openreview.net/pdf?id=anQDiQZhDP), a versatile zero-shot voice imitation framework with controllable timbre and style. We invite you to explore our [audio samples](https://versavoice.github.io/) to experience Vevo's capabilities firsthand.

<br>
<div align="center">
<img src="../../../imgs/vc/vevo.png" width="100%">
</div>
<br>

We have included the following pre-trained Vevo models at Amphion:

- **Vevo-Timbre**: It can conduct *style-preserved* voice conversion.
- **Vevo-Style**: It can conduct style conversion, such as *accent conversion* and *emotion conversion*.
- **Vevo-Voice**: It can conduct *style-converted* voice conversion.
- **Vevo-TTS**: It can conduct *style and timbre controllable* TTS.

In addition, we release the **content tokenizer** and **content-style tokenizer** proposed by Vevo. Notably, all of these pre-trained models are trained on [Emilia](https://huggingface.co/datasets/amphion/Emilia-Dataset), which contains 101k hours of speech data across six languages (English, Chinese, German, French, Japanese, and Korean).
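
The pretrained checkpoints live in the [amphion/Vevo](https://huggingface.co/amphion/Vevo) repository linked above. The inference scripts in the Quickstart below download them automatically, but if you prefer to fetch them ahead of time, a minimal sketch using `huggingface_hub` could look like the following; the local directory name is an arbitrary choice for illustration, not something the scripts require:

```python
# Optional pre-download sketch; assumes huggingface_hub is available
# (it is installed as a dependency of transformers in the requirements).
from huggingface_hub import snapshot_download

# "amphion/Vevo" is the model repository linked above; local_dir is arbitrary.
ckpt_dir = snapshot_download(repo_id="amphion/Vevo", local_dir="./ckpts/Vevo")
print("Vevo checkpoints downloaded to:", ckpt_dir)
```
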
## Quickstart

To run this model, you need to follow the steps below:

1. Clone the repository and install the environment.
2. Run the inference script.

### Clone and Environment Setup

#### 1. Clone the repository

```bash
git clone https://github.com/open-mmlab/Amphion.git
cd Amphion
```

#### 2. Install the environment

Before installing, make sure you are in the `Amphion` directory. If not, use `cd` to enter it.

Since we use `phonemizer` to convert text to phonemes, you need to install `espeak-ng` first. More details can be found [here](https://bootphon.github.io/phonemizer/install.html). Choose the correct installation command according to your operating system:

```bash
# For Debian-like distributions (e.g. Ubuntu, Mint, etc.)
sudo apt-get install espeak-ng
# For RedHat-like distributions (e.g. CentOS, Fedora, etc.)
sudo yum install espeak-ng

# For Windows
# Please visit https://github.com/espeak-ng/espeak-ng/releases to download the .msi installer
```
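
Once installed, you can confirm that the `espeak-ng` binary is actually discoverable before moving on. This is an optional sanity check sketched here, not part of the official setup, and it only covers installs that place the binary on `PATH`:

```python
# Optional sanity check: make sure espeak-ng is on PATH so that
# phonemizer's espeak backend can find it later.
import shutil

path = shutil.which("espeak-ng")
assert path is not None, "espeak-ng not found on PATH"
print("espeak-ng found at:", path)
```
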
Now, we are going to install the environment. We recommend using conda to configure it:

```bash
conda create -n vevo python=3.10
conda activate vevo

pip install -r models/vc/vevo/requirements.txt
```
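
After installation, you may want to verify that the key dependencies resolved correctly, e.g. that `torch` sees your GPU and that `phonemizer` can reach the `espeak` backend installed earlier. This is an optional check we sketch here, not part of the official setup; the printed phoneme string is only indicative:

```python
# Optional environment check, assuming the requirements above installed cleanly.
import torch
from phonemizer import phonemize

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
# The espeak backend relies on the espeak-ng binary installed earlier.
print(phonemize("Hello world", language="en-us", backend="espeak"))
```
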
### Inference Script

```bash
# Vevo-Timbre
python -m models.vc.vevo.infer_vevotimbre

# Vevo-Style
python -m models.vc.vevo.infer_vevostyle

# Vevo-Voice
python -m models.vc.vevo.infer_vevovoice

# Vevo-TTS
python -m models.vc.vevo.infer_vevotts
```

Running any of these scripts will automatically download the pretrained models from HuggingFace and start the inference process. By default, the resulting audio is saved to `models/vc/vevo/wav/output*.wav`; you can change this in the `models/vc/vevo/infer_vevo*.py` scripts.
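
For reference, a small snippet like the one below (purely illustrative, not part of the repository) lists whatever outputs have been written to that default location:

```python
# Illustrative only: list the audio files produced by the inference scripts,
# using the default output location mentioned above.
from pathlib import Path

for wav in sorted(Path("models/vc/vevo/wav").glob("output*.wav")):
    print(wav, f"({wav.stat().st_size / 1024:.1f} KiB)")
```
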
## Citations

```bibtex
@article{vevo,
  title={Vevo: Controllable Zero-Shot Voice Imitation with Self-Supervised Disentanglement},
  journal={OpenReview},
  year={2024}
}
@inproceedings{amphion,
  author={Zhang, Xueyao and Xue, Liumeng and Gu, Yicheng and Wang, Yuancheng and Li, Jiaqi and He, Haorui and Wang, Chaoren and Song, Ting and Chen, Xi and Fang, Zihao and Chen, Haopeng and Zhang, Junan and Tang, Tze Ying and Zou, Lexiao and Wang, Mingxuan and Han, Jun and Chen, Kai and Li, Haizhou and Wu, Zhizheng},
  title={Amphion: An Open-Source Audio, Music and Speech Generation Toolkit},
  booktitle={{IEEE} Spoken Language Technology Workshop, {SLT} 2024},
  year={2024}
}
```

Binary file not shown.

```
@@ -0,0 +1,28 @@
setuptools
onnxruntime
torch==2.0.1
transformers==4.41.2
accelerate==0.24.1
unidecode
numpy==1.26.0
scipy==1.12.0

librosa
encodec
phonemizer
g2p_en
jieba
cn2an
pypinyin
LangSegment
pyopenjtalk
pykakasi

json5
black==24.1.1
ruamel.yaml
tqdm

spaces
gradio
openai-whisper
```