RVC-Boss/GPT-SoVITS: 1-minute voice data can also be used to train a good TTS model! (few-shot voice cloning)

GPT-SoVITS-WebUI

A Powerful Few-shot Voice Conversion and Text-to-Speech WebUI.


Features:

  1. Zero-shot TTS: Input a 5-second vocal sample and experience instant text-to-speech conversion.
  2. Few-shot TTS: Fine-tune the model with just 1 minute of training data for improved voice similarity and realism.
  3. Cross-lingual Support: Inference in languages different from the training dataset, currently supporting English, Japanese, and Chinese.
  4. WebUI Tools: Integrated tools include voice accompaniment separation, automatic training set segmentation, Chinese ASR, and text labeling, assisting beginners in creating training datasets and GPT/SoVITS models.

Check out our demo video here!

Unseen speakers few-shot fine-tuning demo:

few.shot.fine.tuning.demo.mp4

Installation

For users in the China region, you can click here to use AutoDL Cloud Docker to experience the full functionality online.

Tested Environments

  • Python 3.9, PyTorch 2.0.1, CUDA 11
  • Python 3.10.13, PyTorch 2.1.2, CUDA 12.3
  • Python 3.9, PyTorch 2.3.0.dev20240122, macOS 14.3 (Apple silicon)

Note: numba==0.56.4 requires Python < 3.11.

Windows

If you are a Windows user (tested on Windows 10 and later), you can download the pre-packaged distribution directly and double-click go-webui.bat to start GPT-SoVITS-WebUI.

Linux

conda create -n GPTSoVits python=3.9
conda activate GPTSoVits
bash install.sh

macOS

Only Macs that meet the following conditions can train models:

  • Mac computers with Apple silicon
  • macOS 12.3 or later
  • Xcode command-line tools installed by running xcode-select --install

All Macs can do inference on the CPU, which has been demonstrated to outperform GPU inference.

First make sure you have installed FFmpeg by running brew install ffmpeg or conda install ffmpeg, then install using the following commands:

conda create -n GPTSoVits python=3.9
conda activate GPTSoVits

pip3 install --pre torch torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu
pip install -r requirements.txt

Note: Training models will only work if you've installed PyTorch Nightly.

Install Manually

Install Dependences

pip install -r requirements.txt

Install FFmpeg

Conda Users

conda install -c conda-forge 'ffmpeg<7'

Ubuntu/Debian Users

sudo apt install ffmpeg
sudo apt install libsox-dev

Windows Users

Download and place ffmpeg.exe and ffprobe.exe in the GPT-SoVITS root.

Using Docker

docker-compose.yaml configuration

  1. Regarding image tags: Because the codebase updates rapidly while packaging and testing images is slow, please check Docker Hub for the latest packaged images and select one as appropriate, or build locally from the Dockerfile according to your own needs.
  2. Environment variables:
     • is_half: Controls half-precision vs. full-precision computation. This is typically the cause if the content under the 4-cnhubert/5-wav32k directories is not generated correctly during the "SSL extracting" step. Set it to True or False based on your actual situation.
  3. Volumes configuration: The application's root directory inside the container is set to /workspace. The default docker-compose.yaml lists some practical examples for uploading/downloading content.
  4. shm_size: The default available shared memory for Docker Desktop on Windows is too small, which can cause abnormal operation. Adjust it according to your own situation.
  5. Under the deploy section, adjust GPU-related settings cautiously according to your system and actual circumstances.
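Note that values passed via --env arrive inside the container as strings, so is_half=False must be compared against the string rather than treated as a Python boolean. A minimal sketch of such parsing (the env_flag helper is hypothetical, not part of the project):

```python
import os

def env_flag(name: str, default: bool) -> bool:
    """Parse a boolean-like environment variable such as is_half.

    Docker's --env passes plain strings, so the string "False" would
    be truthy if tested directly with bool().
    """
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in ("1", "true", "yes")

# e.g. docker run ... --env=is_half=False
os.environ["is_half"] = "False"
print(env_flag("is_half", True))  # -> False
```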

Running with docker compose

docker compose -f "docker-compose.yaml" up -d

Running with docker command

As above, modify the corresponding parameters based on your actual situation, then run the following command:

docker run --rm -it --gpus=all --env=is_half=False --volume=G:\GPT-SoVITS-DockerTest\output:/workspace/output --volume=G:\GPT-SoVITS-DockerTest\logs:/workspace/logs --volume=G:\GPT-SoVITS-DockerTest\SoVITS_weights:/workspace/SoVITS_weights --workdir=/workspace -p 9880:9880 -p 9871:9871 -p 9872:9872 -p 9873:9873 -p 9874:9874 --shm-size="16G" -d breakstring/gpt-sovits:xxxxx

Pretrained Models

Download pretrained models from GPT-SoVITS Models and place them in GPT_SoVITS/pretrained_models.

For UVR5 (Vocals/Accompaniment Separation & Reverberation Removal), additionally download models from UVR5 Weights and place them in tools/uvr5/uvr5_weights.

Users in the China region can download these two sets of models by opening the links below and clicking "Download a copy".

For Chinese ASR, additionally download models from Damo ASR Model, Damo VAD Model, and Damo Punc Model and place them in tools/damo_asr/models.

Dataset Format

The TTS annotation .list file format:

vocal_path|speaker_name|language|text

Language dictionary:

  • 'zh': Chinese
  • 'ja': Japanese
  • 'en': English

Example:

D:\GPT-SoVITS\xxx/xxx.wav|xxx|en|I like playing Genshin.
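A quick way to sanity-check annotation lines before training is to parse and validate the four pipe-separated fields. The following sketch is illustrative only (parse_annotation is a hypothetical helper, not part of the repository):

```python
VALID_LANGS = {"zh", "ja", "en"}  # codes from the language dictionary above

def parse_annotation(line: str) -> dict:
    """Split one TTS annotation line: vocal_path|speaker_name|language|text."""
    parts = line.rstrip("\n").split("|", 3)  # maxsplit=3: the text may contain '|'
    if len(parts) != 4:
        raise ValueError(f"expected 4 fields, got {len(parts)}: {line!r}")
    vocal_path, speaker_name, language, text = parts
    if language not in VALID_LANGS:
        raise ValueError(f"unknown language code: {language!r}")
    return {"vocal_path": vocal_path, "speaker_name": speaker_name,
            "language": language, "text": text}

row = parse_annotation(r"D:\GPT-SoVITS\xxx/xxx.wav|xxx|en|I like playing Genshin.")
print(row["language"])  # -> en
```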

Todo List

  • High Priority:
    • Localization in Japanese and English.
    • User guide.
    • Japanese and English dataset fine-tune training.
  • Features:
    • Zero-shot voice conversion (5s) / few-shot voice conversion (1min).
    • TTS speaking speed control.
    • Enhanced TTS emotion control.
    • Experiment with changing SoVITS token inputs to probability distribution of vocabs.
    • Improve English and Japanese text frontend.
    • Develop tiny and larger-sized TTS models.
    • Colab scripts.
    • Try expanding the training dataset (2k hours -> 10k hours).
    • Better SoVITS base model (enhanced audio quality).
    • Model mixing.

(Optional) If you need it, the command-line operation mode is provided below.

Use the command line to open the WebUI for UVR5

python tools/uvr5/webui.py "<infer_device>" <is_half> <webui_port_uvr5>

If you can't open a browser, follow the format below for UVR processing (this uses mdxnet for audio processing):

python mdxnet.py --model --input_root --output_vocal --output_ins --agg_level --format --device --is_half_precision 

This is how audio segmentation of the dataset is done using the command line:

python audio_slicer.py \
    --input_path "<path_to_original_audio_file_or_directory>" \
    --output_root "<directory_where_subdivided_audio_clips_will_be_saved>" \
    --threshold <volume_threshold> \
    --min_length <minimum_duration_of_each_subclip> \
    --min_interval <shortest_time_gap_between_adjacent_subclips> \
    --hop_size <step_size_for_computing_volume_curve>

This is how dataset ASR processing is done using the command line (Chinese only):

python tools/damo_asr/cmd-asr.py "<Path to the directory containing input audio files>"

ASR processing is performed through Faster_Whisper (ASR annotation for languages other than Chinese).

(No progress bars; processing time depends on GPU performance.)

python ./tools/damo_asr/WhisperASR.py -i <input> -o <output> -f <file_name.list> -l <language>

A custom .list save path is supported.

Credits

Special thanks to the following projects and contributors:

Thanks to all contributors for their efforts
