🚀 Clone a voice in 5 seconds to generate arbitrary speech in real-time

MockingBird

MIT License

English | 中文

Features

🌍 **Chinese**: supports Mandarin and has been tested with multiple datasets: aidatatang_200zh, magicdata, aishell3, data_aishell, etc.

🤩 **PyTorch**: tested with PyTorch 1.9.0 (latest as of August 2021), on Tesla T4 and GTX 2060 GPUs

🌍 **Windows + Linux**: runs on both Windows and Linux (and even on Apple M1 macOS)

🤩 **Easy & Awesome**: good results with only a newly trained synthesizer, by reusing the pretrained encoder/vocoder

🌍 **Webserver Ready**: serve your results via remote calls

DEMO VIDEO

Quick Start

1. Install Requirements

Follow the original repo to check that your environment is ready. **Python 3.7 or higher** is required to run the toolbox.

If you get `ERROR: Could not find a version that satisfies the requirement torch==1.9.0+cu102 (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)`, the error is probably caused by an old Python version; try Python 3.9 and the install should succeed.

  • Install ffmpeg.
  • Run `pip install -r requirements.txt` to install the remaining necessary packages.
  • If you need it, install webrtcvad with `pip install webrtcvad-wheels`.

Note that we reuse the pretrained encoder/vocoder but not the synthesizer, since the original model is incompatible with Chinese symbols. This means demo_cli does not work at the moment.
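A quick way to confirm the environment is ready is a short Python check like the sketch below (it only assumes torch and webrtcvad were installed as above):

```python
# Minimal environment sanity check for the steps above
import torch

print(torch.__version__)           # expect a 1.9.x build per this README
print(torch.cuda.is_available())   # True means your GPU setup is usable

import webrtcvad                   # raises ImportError if the wheel is missing
print("webrtcvad OK")
```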

2. Prepare your models

You can either train your models or use existing ones:

2.1 Train encoder with your dataset (Optional)

  • Preprocess the audio and mel spectrograms: `python encoder_preprocess.py <datasets_root>`. The `--dataset {dataset}` parameter selects the datasets you want to preprocess; only the train set of these datasets is used. Possible names: librispeech_other, voxceleb1, voxceleb2. Use a comma to separate multiple datasets.

  • Train the encoder: `python encoder_train.py my_run <datasets_root>/SV2TTS/encoder`

For training, the encoder uses visdom. You can disable it with `--no_visdom`, but it's nice to have. Run `visdom` in a separate CLI/process to start the visdom server.

2.2 Train synthesizer with your dataset

  • Download a dataset and unzip it: make sure you can access all the .wav files in the folder.

  • Preprocess the audio and mel spectrograms: `python pre.py <datasets_root>`. The `--dataset {dataset}` parameter selects among aidatatang_200zh, magicdata, aishell3, data_aishell, etc. If this parameter is not passed, the default dataset is aidatatang_200zh.

  • Train the synthesizer: `python synthesizer_train.py mandarin <datasets_root>/SV2TTS/synthesizer`

  • Go to the next step when you see the attention line appear and the loss meets your needs; check the training folder synthesizer/saved_models/.
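If you want to peek at what a training run has saved, a sketch like the following can help; the path is only an example, and the exact contents of a checkpoint depend on this repo's save format, so verify against your own files:

```python
# Hedged sketch: inspect a saved synthesizer checkpoint.
# The path below is an example; adjust it to your actual run name.
import torch

ckpt = torch.load("synthesizer/saved_models/mandarin/mandarin.pt", map_location="cpu")
print(type(ckpt))
# If it's a dict, list what the checkpoint stores (weights, optimizer state, step, ...)
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))
```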

2.3 Use a pretrained synthesizer model

Thanks to the community, some pretrained models are shared:

| Author | Download link | Preview video | Info |
| --- | --- | --- | --- |
| @author | https://pan.baidu.com/s/1iONvRxmkI-t1nHqxKytY3g (Baidu, code: 4j5d) | | 75k steps, trained on multiple datasets |
| @author | https://pan.baidu.com/s/1fMh9IlgKJlL2PIiRTYDUvw (Baidu, code: om7f) | | 25k steps, trained on multiple datasets; only works under version 0.0.1 |
| @FawenYo | https://drive.google.com/file/d/1H-YGOUHpmqKxJ9FRc6vAjPuqQki24UbC/view?usp=sharing or https://u.teknik.io/AYxWf.pt | input / output samples | 200k steps, with a local accent of Taiwan; only works under version 0.0.1 |
| @miven | https://pan.baidu.com/s/1PI-hM3sn5wbeChRryX-RCQ (code: 2021) | https://www.bilibili.com/video/BV1uh411B7AD/ | only works under version 0.0.1 |

2.4 Train vocoder (Optional)

Note: the vocoder makes little difference to the resulting quality, so you may not need to train a new one.

  • Preprocess the data: `python vocoder_preprocess.py <datasets_root> -m <synthesizer_model_path>`

Replace `<datasets_root>` with your dataset root, and `<synthesizer_model_path>` with the directory of your best trained synthesizer models, e.g. synthesizer\saved_models\xxx

  • Train the wavernn vocoder: `python vocoder_train.py mandarin <datasets_root>`

  • Train the hifigan vocoder: `python vocoder_train.py mandarin <datasets_root> hifigan`

3. Launch

3.1 Using the web server

You can then try running `python web.py` and opening it in a browser, at http://localhost:8080 by default.
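Since the web server is meant for remote calling, a client could look roughly like the sketch below. The `/api/synthesize` route and the request payload here are hypothetical placeholders, not the actual API; check web.py for the real routes:

```python
# Hedged sketch of a remote client. "/api/synthesize" and the payload fields
# are hypothetical placeholders -- consult web.py for the actual routes.
import requests

resp = requests.post(
    "http://localhost:8080/api/synthesize",   # default address per this README
    json={"text": "你好,世界"},               # hypothetical request body
)
resp.raise_for_status()
with open("output.wav", "wb") as f:
    f.write(resp.content)
```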

3.2 Using the Toolbox

You can then try the toolbox: `python demo_toolbox.py -d <datasets_root>`

Reference

This repository is forked from Real-Time-Voice-Cloning, which only supports English.

| URL | Designation | Title | Implementation source |
| --- | --- | --- | --- |
| 1803.09017 | GlobalStyleToken (synthesizer) | Style Tokens: Unsupervised Style Modeling, Control and Transfer in End-to-End Speech Synthesis | This repo |
| 2010.05646 | HiFi-GAN (vocoder) | Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis | This repo |
| 1806.04558 | SV2TTS | Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis | This repo |
| 1802.08435 | WaveRNN (vocoder) | Efficient Neural Audio Synthesis | fatchord/WaveRNN |
| 1703.10135 | Tacotron (synthesizer) | Tacotron: Towards End-to-End Speech Synthesis | fatchord/WaveRNN |
| 1710.10467 | GE2E (encoder) | Generalized End-To-End Loss for Speaker Verification | This repo |

FAQ

1. Where can I download the datasets?

| Dataset | Original source | Alternative sources |
| --- | --- | --- |
| aidatatang_200zh | OpenSLR | Google Drive |
| magicdata | OpenSLR | Google Drive (dev set) |
| aishell3 | OpenSLR | Google Drive |
| data_aishell | OpenSLR | |

After unzipping aidatatang_200zh, you also need to unzip all the archives under aidatatang_200zh\corpus\train.
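A sketch like the following can automate that step, assuming the per-speaker files under train are .tar.gz archives as in the OpenSLR distribution:

```python
# Hedged sketch: extract every nested archive under aidatatang_200zh/corpus/train.
# Assumes the per-speaker files are .tar.gz, as in the OpenSLR distribution.
import tarfile
from pathlib import Path

train_dir = Path(r"D:\data\aidatatang_200zh\corpus\train")  # adjust to your path
for archive in train_dir.glob("*.tar.gz"):
    with tarfile.open(archive) as tf:
        tf.extractall(train_dir)
```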

2. What is `<datasets_root>`?

If the dataset path is D:\data\aidatatang_200zh, then `<datasets_root>` is D:\data.
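In code terms, `<datasets_root>` is simply the parent of the dataset folder, for example:

```python
# Illustration only: deriving <datasets_root> from a dataset path
from pathlib import Path

dataset_path = Path(r"D:\data\aidatatang_200zh")
datasets_root = dataset_path.parent   # -> D:\data
print(datasets_root)
```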

3. Not enough VRAM

Train the synthesizer: adjust the batch_size in synthesizer/hparams.py

Before:

```python
tts_schedule = [(2,  1e-3,  20_000,  12),   # Progressive training schedule
                (2,  5e-4,  40_000,  12),   # (r, lr, step, batch_size)
                (2,  2e-4,  80_000,  12),   #
                (2,  1e-4, 160_000,  12),   # r = reduction factor (# of mel frames
                (2,  3e-5, 320_000,  12),   #     synthesized for each decoder iteration)
                (2,  1e-5, 640_000,  12)],  # lr = learning rate
```

After:

```python
tts_schedule = [(2,  1e-3,  20_000,  8),   # Progressive training schedule
                (2,  5e-4,  40_000,  8),   # (r, lr, step, batch_size)
                (2,  2e-4,  80_000,  8),   #
                (2,  1e-4, 160_000,  8),   # r = reduction factor (# of mel frames
                (2,  3e-5, 320_000,  8),   #     synthesized for each decoder iteration)
                (2,  1e-5, 640_000,  8)],  # lr = learning rate
```

Train vocoder, preprocess the data: adjust the batch_size in synthesizer/hparams.py

Before:

```python
### Data Preprocessing
        max_mel_frames = 900,
        rescale = True,
        rescaling_max = 0.9,
        synthesis_batch_size = 16,                  # For vocoder preprocessing and inference.
```

After:

```python
### Data Preprocessing
        max_mel_frames = 900,
        rescale = True,
        rescaling_max = 0.9,
        synthesis_batch_size = 8,                   # For vocoder preprocessing and inference.
```

Train vocoder, train the vocoder: adjust the batch_size in vocoder/wavernn/hparams.py

Before:

```python
# Training
voc_batch_size = 100
voc_lr = 1e-4
voc_gen_at_checkpoint = 5
voc_pad = 2
```

After:

```python
# Training
voc_batch_size = 6
voc_lr = 1e-4
voc_gen_at_checkpoint = 5
voc_pad = 2
```

4. What if `RuntimeError: Error(s) in loading state_dict for Tacotron: size mismatch for encoder.embedding.weight: copying a param with shape torch.Size([70, 512]) from checkpoint, the shape in current model is torch.Size([75, 512]).` happens?

Please refer to issue #37.

5. How can I improve CPU and GPU utilization?

Adjust the batch_size as appropriate.
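While experimenting with batch_size, a quick look at current GPU memory pressure can guide you (requires a CUDA-enabled PyTorch build):

```python
# Quick check of GPU memory pressure while tuning batch_size
import torch

if torch.cuda.is_available():
    print(torch.cuda.memory_allocated() / 2**20, "MiB allocated")
    print(torch.cuda.memory_reserved() / 2**20, "MiB reserved by PyTorch")
```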

6. What if "the page file is too small to complete the operation" happens?

Please refer to this video and set the virtual memory to 100 GB (102400 MB). For example, if the files are on drive D, change the virtual memory of drive D.

7. When should I stop training?

FYI, my attention line appeared after 18k steps, and the loss dropped below 0.4 after 50k steps. (Example images: attention_step_20500_sample_1, step-135500-mel-spectrogram_sample_1.)

Owner

Vega

ex-Facebook Engineer. Focusing on cutting-edge SaaS/IaaS/Cloud Service, with expertise in Distributed Systems and AI.
Comments
  • Help: running with requirements.txt installed still reports "No module named 'pyworld'" — what's the problem?

    Already on the latest code:

    ```
    E:\MockingBird\MockingBird>python demo_toolbox.py
    Traceback (most recent call last):
      File "E:\MockingBird\MockingBird\demo_toolbox.py", line 2, in <module>
        from toolbox import Toolbox
      File "E:\MockingBird\MockingBird\toolbox\__init__.py", line 9, in <module>
        from utils.f0_utils import compute_f0, f02lf0, compute_mean_std, get_converted_lf0uv
      File "E:\MockingBird\MockingBird\utils\f0_utils.py", line 3, in <module>
        import pyworld
    ModuleNotFoundError: No module named 'pyworld'
    ```

  • The synthesizer does not converge during training

    Summary: when training the synthesizer with my own dataset, after preprocessing I trained the synthesizer, then replaced it with the existing model and continued training, but the resulting plot did not converge.

    Reproduction & environment:

    I followed www.bilibili.com/video/BV1dq4y137pH. The code version is the main branch. After preprocessing the data, I first trained the synthesizer as in the video, then replaced the current model with pretrained-11-7-21 and continued training. The plot did not converge. Screenshots: qX67L9.png qX6bZR.png

  • A question about training the synthesizer, please advise!

    I have downloaded the aidatatang_200zh dataset and extracted all the archives under aidatatang_200zh\corpus\train. But when I run `python synthesizer_preprocess_audio.py D:\google download` (my files are under the path D:\google download), the following happens:

    ```
    D:\python_demo\Realtime-Voice-Clone-Chinese>python synthesizer_preprocess_audio.py D:\google download\
    D:\python_demo\Realtime-Voice-Clone-Chinese\encoder\audio.py:13: UserWarning: Unable to import 'webrtcvad'. This package enables noise removal and is recommended.
      warn("Unable to import 'webrtcvad'. This package enables noise removal and is recommended.")
    usage: synthesizer_preprocess_audio.py [-h] [-o OUT_DIR] [-n N_PROCESSES] [-s] [--hparams HPARAMS] [--no_trim] [--no_alignments] [--dataset DATASET] datasets_root
    synthesizer_preprocess_audio.py: error: unrecognized arguments: download\
    ```

    How can I solve this? I looked through previous issues and found nothing similar. Below are the places I suspect may be wrong; I'd appreciate the author's answer, thanks!

    1. I only extracted the archives under aidatatang_200zh\corpus\train; should the files in the other folders be extracted as well? 2. Should I pull all the wav files out and put them directly under aidatatang_200zh\corpus\train before running `python synthesizer_preprocess_audio.py D:\google download`? 3. Is my command wrong? 4. Do the wav and txt files need preprocessing that I haven't done?

  • Training with a community-shared model errors out, cause unknown

    Training with a community-shared model errors out and I don't know why. I also don't know how to save the model; is it only auto-saved every 500 steps? Any help appreciated, thanks! `RuntimeError: The size of tensor a (1024) must match the size of tensor b (3) at non-singleton dimension 3` (screenshot 2021-11-28 021942)

  • VRAM exhausted while training the model

    ```
    Variable._execution_engine.run_backward(
    RuntimeError: CUDA out of memory. Tried to allocate 88.00 MiB (GPU 0; 4.00 GiB total capacity; 2.68 GiB already allocated; 0 bytes free; 2.85 GiB reserved in total by PyTorch)
    ```

    Could you expose a parameter to adjust the batch_size? My GPU has only 4 GB of VRAM (GTX 1050 Ti), and training with the default parameters constantly runs out of memory...

  • How to fix "DLL load failed: the page file is too small to complete the operation" when running python synthesizer_preprocess_audio.py

    I hit the above error when running python synthesizer_preprocess_audio.py. A solution I found on CSDN: 1. If the Python environment is not on drive C, go to Advanced system settings -> Advanced -> Performance settings -> Advanced -> Virtual memory -> Change -> uncheck automatic management of paging file size for all drives -> Custom size -> set both initial and maximum size to 10240. 2. Change the num_worker parameter of the DataLoader to 0, but I'm not sure how to actually set that parameter to 0.

  • Error with capturable=False

    Win11, GPU: 3060 laptop
    Python 3.9.13

    | Steps with r=2 | Batch Size | Learning Rate | Outputs/Step (r) |
    | --- | --- | --- | --- |
    | 101k Steps | 16 | 3e-06 | 2 |

    ```
    Could not load symbol cublasGetSmCountTarget from cublas64_11.dll. Error code 127
    Traceback (most recent call last):
      File "G:\AIvioce\MockingBird\synthesizer_train.py", line 37, in <module>
        train(**vars(args))
      File "G:\AIvioce\MockingBird\synthesizer\train.py", line 216, in train
        optimizer.step()
      File "C:\Users\Mark\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\optim\optimizer.py", line 109, in wrapper
        return func(*args, **kwargs)
      File "C:\Users\Mark\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
        return func(*args, **kwargs)
      File "C:\Users\Mark\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\optim\adam.py", line 157, in step
        adam(params_with_grad,
      File "C:\Users\Mark\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\optim\adam.py", line 213, in adam
        func(params,
      File "C:\Users\Mark\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\optim\adam.py", line 255, in _single_tensor_adam
        assert not step_t.is_cuda, "If capturable=False, state_steps should not be CUDA tensors."
    AssertionError: If capturable=False, state_steps should not be CUDA tensors.
    ```

  • AttributeError: module 'setuptools._distutils' has no attribute 'version'

    ```
    F:\VideoCentTools\MockingBird-main>python synthesizer_train.py offhen F:\VideoCentTools/SV2TTS/synthesizer
    Traceback (most recent call last):
      File "F:\VideoCentTools\MockingBird-main\synthesizer_train.py", line 2, in <module>
        from synthesizer.train import train
      File "F:\VideoCentTools\MockingBird-main\synthesizer\train.py", line 5, in <module>
        from torch.utils.tensorboard import SummaryWriter
      File "C:\Users\Administrator\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\utils\tensorboard\__init__.py", line 4, in <module>
        LooseVersion = distutils.version.LooseVersion
    AttributeError: module 'setuptools._distutils' has no attribute 'version'
    ```

  • FileNotFoundError: [Errno 2] No such file or directory: 'encoder\\saved_models\\pretrained.pt'

    I put the downloaded model under D:\声音克隆\MockingBird-main\synthesizer\saved_models, and also put a copy of the my_run model renamed to pretrained.pt under D:\声音克隆\MockingBird-main\encoder\saved_models, then ran web.py:

    ```
    (base) C:\Users\13549>python D:\声音克隆\MockingBird-main\web.py
    Loaded synthesizer models: 0
    Traceback (most recent call last):
      File "D:\声音克隆\MockingBird-main\web.py", line 6, in <module>
        app = webApp()
      File "D:\声音克隆\MockingBird-main\web\__init__.py", line 33, in webApp
        encoder.load_model(Path("encoder/saved_models/pretrained.pt"))
      File "D:\声音克隆\MockingBird-main\encoder\inference.py", line 33, in load_model
        checkpoint = torch.load(weights_fpath, _device)
      File "D:\anaconda\lib\site-packages\torch\serialization.py", line 525, in load
        with _open_file_like(f, 'rb') as opened_file:
      File "D:\anaconda\lib\site-packages\torch\serialization.py", line 212, in _open_file_like
        return _open_file(name_or_buffer, mode)
      File "D:\anaconda\lib\site-packages\torch\serialization.py", line 193, in __init__
        super(_open_file, self).__init__(open(name, mode))
    FileNotFoundError: [Errno 2] No such file or directory: 'encoder\saved_models\pretrained.pt'
    ```

    What should I do? Even after renaming the model file to pretrained, the same error occurs.

  • Still getting noise after applying the #37 fix

    After preparing the environment and launching the toolbox per the steps, with everything at defaults, I uploaded the temp.wav from the repository directory. Clicking "Synthesize and vocode" the first time raised the same error as #37; I ignored it, clicked "Synthesize and vocode" again, and this time there was no error, but the generated audio is pure noise. I have already modified synthesizer/utils/symbols.py as described in #37. How can I fix this?

  • Error: Model files not found.

    The computer is an M1 Mac.

    The following appears at runtime:

    ```
    (MockingBird) [email protected] ~ % python /Users/zsh/Desktop/MockingBird-main/demo_toolbox.py -d /Users/zsh/Desktop
    Arguments:
        datasets_root:   /Users/zsh/Desktop
        enc_models_dir:  encoder/saved_models
        syn_models_dir:  synthesizer/saved_models
        voc_models_dir:  vocoder/saved_models
        cpu:             False
        seed:            None
        no_mp3_support:  False

    Error: Model files not found. Follow these instructions to get and install the models:
    https://github.com/CorentinJ/Real-Time-Voice-Cloning/wiki/Pretrained-models
    ```

    How can I solve this?

  • Should the dataset be extracted to gz or wav format? When I extract to gz it says it can't be recognized; does anyone else have this problem?

    Summary [one-sentence description of the issue]: A clear and concise description of what the issue is.

    Env & To Reproduce [environment and reproduction]: describe the environment, code version, and model you used.

    Screenshots [if any]: If applicable, add screenshots to help.

  • Fix demo_tools loading of the aidatatang_200zh dataset

    I found a problem when loading the dataset with demo_tools -d:

    • The standard aidatatang_200zh dataset does not contain wav files directly (they are packed inside archives), while demo_tools reads source audio by walking all wav files under the directory
    • When building a personal dataset, we put the wav files in aidatatang_200zh/corpus/train/
    • But the directories currently read are only "aidatatang_200zh\corpus\dev" and "aidatatang_200zh\corpus\test"

    So perhaps it should simply scan aidatatang_200zh\corpus instead.

  • loaded state dict contains a parameter group that doesn't match the size of optimizer's group

    Summary: using version v0.0.1 and the pretrained model https://pan.baidu.com/s/1PI-hM3sn5wbeChRryX-RCQ (extraction code: 2021), this issue occurs: ValueError: loaded state dict contains a parameter group that doesn't match the size of optimizer's group

    Also, the WeChat group code has expired; a new one, please.

    @babysor

  • Errors when using the model: the web server errors out directly, and the toolbox output is pure noise

    ```
    RuntimeError: Error(s) in loading state_dict for Tacotron:
        size mismatch for encoder.embedding.weight: copying a param with shape torch.Size([70, 512]) from checkpoint, the shape in current model is torch.Size([75, 512]).
        size mismatch for encoder_proj.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([128, 1024]).
        size mismatch for decoder.attn_rnn.weight_ih: copying a param with shape torch.Size([384, 768]) from checkpoint, the shape in current model is torch.Size([384, 1280]).
        size mismatch for decoder.rnn_input.weight: copying a param with shape torch.Size([1024, 640]) from checkpoint, the shape in current model is torch.Size([1024, 1152]).
        size mismatch for decoder.stop_proj.weight: copying a param with shape torch.Size([1, 1536]) from checkpoint, the shape in current model is torch.Size([1, 2048]).
    ```
