@thoraxe
Created March 22, 2023 00:57
# *** a bunch of repeated lines about wav files that don't exist, which makes sense because I moved/removed them for this test ***
[!] wav files don't exist - C:\Users\erikm\Documents\voice-cloning\VCTK\wav48_silence_trimmed\s5\s5_400_mic1.flac
| > Found 231 files in C:\Users\erikm\Documents\voice-cloning\VCTK
> Setting up Audio Processor...
| > sample_rate:48000
| > resample:False
| > num_mels:80
| > log_func:np.log10
| > min_level_db:0
| > frame_shift_ms:None
| > frame_length_ms:None
| > ref_level_db:None
| > fft_size:1024
| > power:None
| > preemphasis:0.0
| > griffin_lim_iters:None
| > signal_norm:None
| > symmetric_norm:None
| > mel_fmin:0
| > mel_fmax:None
| > pitch_fmin:None
| > pitch_fmax:None
| > spec_gain:20.0
| > stft_pad_mode:reflect
| > max_norm:1.0
| > clip_norm:True
| > do_trim_silence:False
| > trim_db:60
| > do_sound_norm:False
| > do_amp_to_db_linear:True
| > do_amp_to_db_mel:True
| > do_rms_norm:False
| > db_level:None
| > stats_path:None
| > base:10
| > hop_length:256
| > win_length:1024
> Model fully restored.
> Setting up Audio Processor...
| > sample_rate:16000
| > resample:False
| > num_mels:64
| > log_func:np.log10
| > min_level_db:-100
| > frame_shift_ms:None
| > frame_length_ms:None
| > ref_level_db:20
| > fft_size:512
| > power:1.5
| > preemphasis:0.97
| > griffin_lim_iters:60
| > signal_norm:False
| > symmetric_norm:False
| > mel_fmin:0
| > mel_fmax:8000.0
| > pitch_fmin:1.0
| > pitch_fmax:640.0
| > spec_gain:20.0
| > stft_pad_mode:reflect
| > max_norm:4.0
| > clip_norm:False
| > do_trim_silence:False
| > trim_db:60
| > do_sound_norm:False
| > do_amp_to_db_linear:True
| > do_amp_to_db_mel:True
| > do_rms_norm:True
| > db_level:-27.0
| > stats_path:None
| > base:10
| > hop_length:160
| > win_length:400
> `speakers.pth` is saved to C:\Users\erikm\Documents\voice-cloning\YourTTS-EN-VCTK-March-21-2023_08+55PM-0000000\speakers.pth.
> `speakers_file` is updated in the config.json.
> DataLoader initialization
| > Tokenizer:
| > add_blank: True
| > use_eos_bos: False
| > use_phonemes: False
| > Number of instances : 229
| > Preprocessing samples
| > Max text length: 179
| > Min text length: 13
| > Avg text length: 44.43231441048035
|
| > Max audio length: 294042.0
| > Min audio length: 31972.0
| > Avg audio length: 76518.29257641922
| > Num. instances discarded samples: 0
| > Batch group size: 1536.
> Using weighted sampler for attribute 'speaker_name' with alpha '1.0'
None
> Attribute weights for '['VCTK_p225']'
| > [0.06608186004550898]
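The 0.066 weight above (my reading of the log, not something it states) is consistent with the weighted sampler giving every sample a weight of 1/count for its speaker and then L2-normalizing the weight vector: with a single speaker, VCTK_p225, and 229 samples, each normalized weight reduces to 1/sqrt(229). A quick check in plain Python:

import math

# One speaker with 229 samples: per-sample weight 1/229; the L2 norm of the
# 229-entry weight vector is 1/sqrt(229), so each normalized weight is 1/sqrt(229).
print(1 / math.sqrt(229))  # ≈ 0.066082, matching the weight reported above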
> Training Environment:
| > Current device: 0
| > Num. of GPUs: 1
| > Num. of CPUs: 20
| > Num. of Torch Threads: 24
| > Torch seed: 54321
| > Torch CUDNN: True
| > Torch CUDNN deterministic: False
| > Torch CUDNN benchmark: False
> Start Tensorboard: tensorboard --logdir=C:\Users\erikm\Documents\voice-cloning\YourTTS-EN-VCTK-March-21-2023_08+55PM-0000000
> Model has 86565676 parameters
> EPOCH: 0/1000
--> C:\Users\erikm\Documents\voice-cloning\YourTTS-EN-VCTK-March-21-2023_08+55PM-0000000
> TRAINING (2023-03-21 20:55:44)
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
> Training Environment:
| > Current device: 0
| > Num. of GPUs: 1
| > Num. of CPUs: 20
| > Num. of Torch Threads: 24
| > Torch seed: 54321
| > Torch CUDNN: True
| > Torch CUDNN deterministic: False
| > Torch CUDNN benchmark: False
> Start Tensorboard: tensorboard --logdir=C:\Users\erikm\Documents\voice-cloning\YourTTS-EN-VCTK-March-21-2023_08+55PM-0000000
> Model has 86565676 parameters
> EPOCH: 0/1000
--> C:\Users\erikm\Documents\voice-cloning\YourTTS-EN-VCTK-March-21-2023_08+55PM-0000000
> TRAINING (2023-03-21 20:55:50)
! Run is kept in C:\Users\erikm\Documents\voice-cloning\YourTTS-EN-VCTK-March-21-2023_08+55PM-0000000
Traceback (most recent call last):
  File "C:\Users\erikm\Documents\voice-cloning\tts\lib\site-packages\trainer\trainer.py", line 1591, in fit
    self._fit()
  File "C:\Users\erikm\Documents\voice-cloning\tts\lib\site-packages\trainer\trainer.py", line 1544, in _fit
    self.train_epoch()
  File "C:\Users\erikm\Documents\voice-cloning\tts\lib\site-packages\trainer\trainer.py", line 1308, in train_epoch
    for cur_step, batch in enumerate(self.train_loader):
  File "C:\Users\erikm\Documents\voice-cloning\tts\lib\site-packages\torch\utils\data\dataloader.py", line 442, in __iter__
    return self._get_iterator()
  File "C:\Users\erikm\Documents\voice-cloning\tts\lib\site-packages\torch\utils\data\dataloader.py", line 388, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "C:\Users\erikm\Documents\voice-cloning\tts\lib\site-packages\torch\utils\data\dataloader.py", line 1043, in __init__
    w.start()
  File "C:\Users\erikm\AppData\Local\Programs\Python\Python39\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "C:\Users\erikm\AppData\Local\Programs\Python\Python39\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\erikm\AppData\Local\Programs\Python\Python39\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "C:\Users\erikm\AppData\Local\Programs\Python\Python39\lib\multiprocessing\popen_spawn_win32.py", line 45, in __init__
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "C:\Users\erikm\AppData\Local\Programs\Python\Python39\lib\multiprocessing\spawn.py", line 154, in get_preparation_data
    _check_not_importing_main()
  File "C:\Users\erikm\AppData\Local\Programs\Python\Python39\lib\multiprocessing\spawn.py", line 134, in _check_not_importing_main
    raise RuntimeError('''
RuntimeError:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.
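This traceback is the standard failure mode when a script that starts DataLoader worker processes runs its training code at module import time on Windows (or any platform using the "spawn" start method): each worker re-imports the module and immediately tries to spawn again, which is also why the "Training Environment" block appears twice above. A minimal sketch of the usual fix for the recipe reproduced below (the main() name is mine; it is not part of the original script) is to move everything after the imports behind a main-module guard:

import multiprocessing

def main():
    # Everything that currently runs at module level in the recipe below
    # (dataset configs, speaker-embedding computation, VitsConfig, load_tts_samples,
    # Vits.init_from_config, building the Trainer) would move into this function,
    # ending with trainer.fit().
    ...

if __name__ == "__main__":
    # Workers re-import this module under the "spawn" start method; the guard keeps
    # them from re-running the training code. freeze_support() is a no-op unless the
    # script is frozen into an executable, as the error message notes.
    multiprocessing.freeze_support()
    main()

The recipe as it was run follows.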
import os
import torch
from trainer import Trainer, TrainerArgs
from TTS.bin.compute_embeddings import compute_embeddings
from TTS.bin.resample import resample_files
from TTS.config.shared_configs import BaseDatasetConfig
from TTS.tts.configs.vits_config import VitsConfig
from TTS.tts.datasets import load_tts_samples
from TTS.tts.models.vits import CharactersConfig, Vits, VitsArgs, VitsAudioConfig
from TTS.utils.downloaders import download_vctk
torch.set_num_threads(24)
# pylint: disable=W0105
"""
This recipe replicates the first experiment proposed in the YourTTS paper (https://arxiv.org/abs/2112.02418).
The YourTTS model is based on the VITS model, but it uses external speaker embeddings extracted from a pre-trained speaker encoder and has small architecture changes.
YourTTS can also be trained on multilingual data; however, this recipe replicates the single-language training using the VCTK dataset.
If you are interested in multilingual training, the parameters on the VitsArgs class instance that should be enabled for it are included as comments.
In addition, you will need to add the extra datasets, following the VCTK dataset config as an example.
"""
CURRENT_PATH = os.path.dirname(os.path.abspath(__file__))
# Name of the run for the Trainer
RUN_NAME = "YourTTS-EN-VCTK"
# Path where you want to save the model outputs (configs, checkpoints and Tensorboard logs)
OUT_PATH = os.path.dirname(os.path.abspath(__file__)) # "/raid/coqui/Checkpoints/original-YourTTS/"
# If you want to do transfer learning and speed up your training, you can set the path to the original YourTTS model here
RESTORE_PATH = None # "/root/.local/share/tts/tts_models--multilingual--multi-dataset--your_tts/model_file.pth"
# This parameter is useful for debugging: it skips the training epochs and just does the evaluation and produces the test sentences
SKIP_TRAIN_EPOCH = False
# Set here the batch size to be used in training and evaluation
BATCH_SIZE = 32
# Training sampling rate and the target sampling rate for resampling the downloaded dataset (Note: if you change this you might need to redownload the dataset !!)
# Note: if you add new datasets, please make sure that the dataset sampling rate and this parameter match; otherwise, resample your audio files
SAMPLE_RATE = 48000
# Max audio length in seconds to be used in training (every audio clip longer than this will be ignored)
MAX_AUDIO_LEN_IN_SECONDS = 10
### Download VCTK dataset
VCTK_DOWNLOAD_PATH = os.path.join(CURRENT_PATH, "VCTK")
# Define the number of threads used during the audio resampling
NUM_RESAMPLE_THREADS = 10
# Check whether the VCTK dataset is already downloaded; if not, download and resample it
#if not os.path.exists(VCTK_DOWNLOAD_PATH):
# print(">>> Downloading VCTK dataset:")
# download_vctk(VCTK_DOWNLOAD_PATH)
# resample_files(VCTK_DOWNLOAD_PATH, SAMPLE_RATE, file_ext="flac", n_jobs=NUM_RESAMPLE_THREADS)
# init configs
vctk_config = BaseDatasetConfig(
formatter="vctk",
dataset_name="vctk",
meta_file_train="",
meta_file_val="",
path=VCTK_DOWNLOAD_PATH,
language="en",
ignored_speakers=[
"p226",
"p227",
"p228",
"p229",
"p230",
"p231",
"p232",
"p233",
"p234",
"p235",
"p236",
"p237",
"p238",
"p239",
"p240",
"p241",
"p242",
"p243",
"p244",
"p245",
"p246",
"p247",
"p248",
"p249",
"p250",
"p251",
"p252",
"p253",
"p254",
"p255",
"p256",
"p257",
"p258",
"p259",
"p260",
"p261",
"p262",
"p263",
"p264",
"p265",
"p266",
"p267",
"p268",
"p269",
"p270",
"p271",
"p272",
"p273",
"p274",
"p275",
"p276",
"p277",
"p278",
"p279",
"p280",
"p281",
"p282",
"p283",
"p284",
"p285",
"p286",
"p287",
"p288",
"p289",
"p290",
"p291",
"p292",
"p293",
"p294",
"p295",
"p296",
"p297",
"p298",
"p299",
"p300",
"p301",
"p302",
"p303",
"p304",
"p305",
"p306",
"p307",
"p308",
"p309",
"p310",
"p311",
"p312",
"p313",
"p314",
"p315",
"p316",
"p317",
"p318",
"p319",
"p320",
"p321",
"p322",
"p323",
"p324",
"p325",
"p326",
"p327",
"p328",
"p329",
"p330",
"p331",
"p332",
"p333",
"p334",
"p335",
"p336",
"p337",
"p338",
"p339",
"p340",
"p341",
"p342",
"p343",
"p344",
"p345",
"p346",
"p347",
"p348",
"p349",
"p350",
"p351",
"p352",
"p353",
"p354",
"p355",
"p356",
"p357",
"p358",
"p359",
"p360",
"p361",
"p362",
"p363",
"p364",
"p365",
"p366",
"p367",
"p368",
"p369",
"p370",
"p371",
"p372",
"p373",
"p374",
"p375",
"p376",
], # Ignore the test speakers to fully replicate the paper experiment
)
# Add all dataset configs here. In our case we only want to train with the VCTK dataset, so we just add VCTK. Note: if you want to add new datasets, just add them here and the speaker embeddings (d-vectors) will be computed automatically for each new dataset (see the commented example below) :)
DATASETS_CONFIG_LIST = [vctk_config]
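# Illustrative sketch only (not part of the original recipe): a second dataset would be
# added the same way, and its d-vectors would then be computed automatically by the loop
# below. The LJSpeech path and names here are hypothetical.
# ljspeech_config = BaseDatasetConfig(
#     formatter="ljspeech",
#     dataset_name="ljspeech",
#     meta_file_train="metadata.csv",
#     meta_file_val="",
#     path=os.path.join(CURRENT_PATH, "LJSpeech-1.1"),
#     language="en",
# )
# DATASETS_CONFIG_LIST = [vctk_config, ljspeech_config]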
### Extract speaker embeddings
SPEAKER_ENCODER_CHECKPOINT_PATH = (
"https://github.com/coqui-ai/TTS/releases/download/speaker_encoder_model/model_se.pth.tar"
)
SPEAKER_ENCODER_CONFIG_PATH = "https://github.com/coqui-ai/TTS/releases/download/speaker_encoder_model/config_se.json"
D_VECTOR_FILES = [] # List of speaker embeddings/d-vectors to be used during the training
# Iterate over all the dataset configs, checking whether the speaker embeddings are already computed; if not, compute them
for dataset_conf in DATASETS_CONFIG_LIST:
    # Compute the speaker embeddings if they were not already computed
    embeddings_file = os.path.join(dataset_conf.path, "speakers.pth")
    if not os.path.isfile(embeddings_file):
        print(f">>> Computing the speaker embeddings for the {dataset_conf.dataset_name} dataset")
        compute_embeddings(
            SPEAKER_ENCODER_CHECKPOINT_PATH,
            SPEAKER_ENCODER_CONFIG_PATH,
            embeddings_file,
            old_spakers_file=None,
            config_dataset_path=None,
            formatter_name=dataset_conf.formatter,
            dataset_name=dataset_conf.dataset_name,
            dataset_path=dataset_conf.path,
            meta_file_train=dataset_conf.meta_file_train,
            meta_file_val=dataset_conf.meta_file_val,
            disable_cuda=False,
            no_eval=False,
        )
    D_VECTOR_FILES.append(embeddings_file)
# Audio config used in training.
audio_config = VitsAudioConfig(
sample_rate=SAMPLE_RATE,
hop_length=256,
win_length=1024,
fft_size=1024,
mel_fmin=0.0,
mel_fmax=None,
num_mels=80,
)
# Init VitsArgs, setting the arguments that are needed for the YourTTS model
model_args = VitsArgs(
d_vector_file=D_VECTOR_FILES,
use_d_vector_file=True,
d_vector_dim=512,
num_layers_text_encoder=10,
speaker_encoder_model_path=SPEAKER_ENCODER_CHECKPOINT_PATH,
speaker_encoder_config_path=SPEAKER_ENCODER_CONFIG_PATH,
resblock_type_decoder="2", # In the paper, we accidentally trained YourTTS using ResNet blocks of type 2; if you like, you can use ResNet blocks of type 1 like the VITS model
# Useful parameter to enable the Speaker Consistency Loss (SCL) described in the paper
# use_speaker_encoder_as_loss=True,
# Useful parameters to enable multilingual training
# use_language_embedding=True,
# embedded_language_dim=4,
)
# General training config; here you can change the batch size and other useful parameters
config = VitsConfig(
output_path=OUT_PATH,
model_args=model_args,
run_name=RUN_NAME,
project_name="YourTTS",
run_description="""
- Original YourTTS trained using VCTK dataset
""",
dashboard_logger="tensorboard",
logger_uri=None,
audio=audio_config,
batch_size=BATCH_SIZE,
batch_group_size=48,
eval_batch_size=BATCH_SIZE,
num_loader_workers=8,
eval_split_max_size=256,
print_step=50,
plot_step=100,
log_model_step=1000,
save_step=5000,
save_n_checkpoints=2,
save_checkpoints=True,
target_loss="loss_1",
print_eval=False,
use_phonemes=False,
phonemizer="espeak",
phoneme_language="en",
compute_input_seq_cache=True,
add_blank=True,
text_cleaner="multilingual_cleaners",
characters=CharactersConfig(
characters_class="TTS.tts.models.vits.VitsCharacters",
pad="_",
eos="&",
bos="*",
blank=None,
characters="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz\u00af\u00b7\u00df\u00e0\u00e1\u00e2\u00e3\u00e4\u00e6\u00e7\u00e8\u00e9\u00ea\u00eb\u00ec\u00ed\u00ee\u00ef\u00f1\u00f2\u00f3\u00f4\u00f5\u00f6\u00f9\u00fa\u00fb\u00fc\u00ff\u0101\u0105\u0107\u0113\u0119\u011b\u012b\u0131\u0142\u0144\u014d\u0151\u0153\u015b\u016b\u0171\u017a\u017c\u01ce\u01d0\u01d2\u01d4\u0430\u0431\u0432\u0433\u0434\u0435\u0436\u0437\u0438\u0439\u043a\u043b\u043c\u043d\u043e\u043f\u0440\u0441\u0442\u0443\u0444\u0445\u0446\u0447\u0448\u0449\u044a\u044b\u044c\u044d\u044e\u044f\u0451\u0454\u0456\u0457\u0491\u2013!'(),-.:;? ",
punctuations="!'(),-.:;? ",
phonemes="",
is_unique=True,
is_sorted=True,
),
phoneme_cache_path=None,
precompute_num_workers=12,
start_by_longest=True,
datasets=DATASETS_CONFIG_LIST,
cudnn_benchmark=False,
max_audio_len=SAMPLE_RATE * MAX_AUDIO_LEN_IN_SECONDS,
mixed_precision=False,
test_sentences=[
[
"It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.",
"VCTK_p277",
None,
"en",
],
[
"Be a voice, not an echo.",
"VCTK_p239",
None,
"en",
],
[
"I'm sorry Dave. I'm afraid I can't do that.",
"VCTK_p258",
None,
"en",
],
[
"This cake is great. It's so delicious and moist.",
"VCTK_p244",
None,
"en",
],
[
"Prior to November 22, 1963.",
"VCTK_p305",
None,
"en",
],
],
# Enable the weighted sampler
use_weighted_sampler=True,
# Ensures that all speakers are seen equally often in the training batches, no matter how many samples each speaker has
weighted_sampler_attrs={"speaker_name": 1.0},
weighted_sampler_multipliers={},
# Set the Speaker Consistency Loss (SCL) α to 9, as in the paper
speaker_encoder_loss_alpha=9.0,
)
# Load all the dataset samples and split the training and evaluation sets
train_samples, eval_samples = load_tts_samples(
config.datasets,
eval_split=True,
eval_split_max_size=config.eval_split_max_size,
eval_split_size=config.eval_split_size,
)
# Init the model
model = Vits.init_from_config(config)
# Init the trainer and 🚀
trainer = Trainer(
TrainerArgs(restore_path=RESTORE_PATH, skip_train_epoch=SKIP_TRAIN_EPOCH),
config,
output_path=OUT_PATH,
model=model,
train_samples=train_samples,
eval_samples=eval_samples,
)
trainer.fit()
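If restructuring the script into a main() function is inconvenient, a simpler workaround (a suggestion on my part, not something this log confirms) is to keep data loading in the main process so that no worker processes are spawned at all, at the cost of slower batch preparation. In the VitsConfig above that would mean replacing num_loader_workers=8 with:

num_loader_workers=0,  # main-process data loading; no DataLoader workers are spawned, avoiding the Windows spawn error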