
Commit 7e2750d

Merge feat/japanese-support into main
feat: Japanese localization and Windows compatibility
2 parents ace7c47 + 953d1f5 commit 7e2750d

8 files changed

Lines changed: 785 additions & 315 deletions

File tree

.gitignore

Lines changed: 11 additions & 45 deletions
``` diff
@@ -1,52 +1,18 @@
-# Byte-compiled / optimized / DLL files
 __pycache__/
 *.py[cod]
-*$py.class
-
-# Visual Studio Code files
 .vscode
 .vs
-
-# PyCharm files
 .idea
-
-# Eclipse Project settings
-*.*project
-.settings
-
-# Sublime Text settings
-*.sublime-workspace
-*.sublime-project
-
-# Editor temporaries
-*.swn
-*.swo
 *.swp
-*.swm
-*~
-
-# IPython notebook checkpoints
-.ipynb_checkpoints
-
-# macOS dir files
-.DS_Store
-
-exp
-data
-raw_wav
-tensorboard
-**/*build*
-
-# Clangd files
-.cache
-compile_commands.json
-
-# train/inference files
-*.wav
-*.m4a
-*.aac
-*.pt
+*.log
+Thumbs.db
 pretrained_models/*
-*_pb2_grpc.py
-*_pb2.py
-*.tar
+*.pt
+*.ckpt
+*.safetensors
+*.bin
+*.wav
+*.tar
+.env
+venv/
+flagged/
```

README.md

Lines changed: 182 additions & 187 deletions
Large diffs are not rendered by default.

README_original.md

Lines changed: 264 additions & 0 deletions
@@ -0,0 +1,264 @@
![SVG Banners](https://svg-banners.vercel.app/api?type=origin&text1=CosyVoice🤠&text2=Text-to-Speech%20💖%20Large%20Language%20Model&width=800&height=210)

## 👉🏻 CosyVoice 👈🏻

**Fun-CosyVoice 3.0**: [Demos](https://funaudiollm.github.io/cosyvoice3/); [Paper](https://arxiv.org/pdf/2505.17589); [Modelscope](https://www.modelscope.cn/models/FunAudioLLM/Fun-CosyVoice3-0.5B-2512); [Huggingface](https://huggingface.co/FunAudioLLM/Fun-CosyVoice3-0.5B-2512); [CV3-Eval](https://github.com/FunAudioLLM/CV3-Eval)

**CosyVoice 2.0**: [Demos](https://funaudiollm.github.io/cosyvoice2/); [Paper](https://arxiv.org/pdf/2412.10117); [Modelscope](https://www.modelscope.cn/models/iic/CosyVoice2-0.5B); [HuggingFace](https://huggingface.co/FunAudioLLM/CosyVoice2-0.5B)

**CosyVoice 1.0**: [Demos](https://fun-audio-llm.github.io); [Paper](https://funaudiollm.github.io/pdf/CosyVoice_v1.pdf); [Modelscope](https://www.modelscope.cn/models/iic/CosyVoice-300M); [HuggingFace](https://huggingface.co/FunAudioLLM/CosyVoice-300M)

## Highlight🔥

**Fun-CosyVoice 3.0** is an advanced text-to-speech (TTS) system based on large language models (LLMs), surpassing its predecessor (CosyVoice 2.0) in content consistency, speaker similarity, and prosody naturalness. It is designed for zero-shot multilingual speech synthesis in the wild.

### Key Features

- **Language Coverage**: Covers 9 common languages (Chinese, English, Japanese, Korean, German, Spanish, French, Italian, Russian) and 18+ Chinese dialects/accents (Guangdong, Minnan, Sichuan, Dongbei, Shan3xi, Shan1xi, Shanghai, Tianjin, Shandong, Ningxia, Gansu, etc.), and supports both multilingual and cross-lingual zero-shot voice cloning.
- **Content Consistency & Naturalness**: Achieves state-of-the-art performance in content consistency, speaker similarity, and prosody naturalness.
- **Pronunciation Inpainting**: Supports pronunciation inpainting with Chinese Pinyin and English CMU phonemes, providing more controllability and making it suitable for production use.
- **Text Normalization**: Supports reading of numbers, special symbols, and various text formats without a traditional frontend module.
- **Bi-Streaming**: Supports both text-in streaming and audio-out streaming, achieving latency as low as 150 ms while maintaining high-quality audio output.
- **Instruct Support**: Supports various instructions covering languages, dialects, emotions, speed, volume, etc.

## Roadmap

- [x] 2025/12
    - [x] release Fun-CosyVoice3-0.5B-2512 base model, RL model, and their training/inference scripts
    - [x] release the Fun-CosyVoice3-0.5B ModelScope Gradio space
- [x] 2025/08
    - [x] add Triton TensorRT-LLM runtime support and CosyVoice2 GRPO training support; thanks to NVIDIA's Yuekai Zhang for the contribution
- [x] 2025/07
    - [x] release the Fun-CosyVoice 3.0 eval set
- [x] 2025/05
    - [x] add CosyVoice2-0.5B vLLM support
- [x] 2024/12
    - [x] 25hz CosyVoice2-0.5B released
- [x] 2024/09
    - [x] 25hz CosyVoice-300M base model
    - [x] 25hz CosyVoice-300M voice conversion function
- [x] 2024/08
    - [x] Repetition Aware Sampling (RAS) inference for LLM stability
    - [x] streaming inference mode support, including KV cache and SDPA for RTF optimization
- [x] 2024/07
    - [x] flow matching training support
    - [x] WeTextProcessing support when ttsfrd is not available
    - [x] FastAPI server and client

## Evaluation

| Model | Open-Source | Model Size | test-zh<br>CER (%) ↓ | test-zh<br>SS (%) ↑ | test-en<br>WER (%) ↓ | test-en<br>SS (%) ↑ | test-hard<br>CER (%) ↓ | test-hard<br>SS (%) ↑ |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Human | - | - | 1.26 | 75.5 | 2.14 | 73.4 | - | - |
| Seed-TTS | | - | 1.12 | 79.6 | 2.25 | 76.2 | 7.59 | 77.6 |
| MiniMax-Speech | | - | 0.83 | 78.3 | 1.65 | 69.2 | - | - |
| F5-TTS | | 0.3B | 1.52 | 74.1 | 2.00 | 64.7 | 8.67 | 71.3 |
| Spark TTS | | 0.5B | 1.2 | 66.0 | 1.98 | 57.3 | - | - |
| CosyVoice2 | | 0.5B | 1.45 | 75.7 | 2.57 | 65.9 | 6.83 | 72.4 |
| FireRedTTS2 | | 1.5B | 1.14 | 73.2 | 1.95 | 66.5 | - | - |
| Index-TTS2 | | 1.5B | 1.03 | 76.5 | 2.23 | 70.6 | 7.12 | 75.5 |
| VibeVoice-1.5B | | 1.5B | 1.16 | 74.4 | 3.04 | 68.9 | - | - |
| VibeVoice-Realtime | | 0.5B | - | - | 2.05 | 63.3 | - | - |
| HiggsAudio-v2 | | 3B | 1.50 | 74.0 | 2.44 | 67.7 | - | - |
| VoxCPM | | 0.5B | 0.93 | 77.2 | 1.85 | 72.9 | 8.87 | 73.0 |
| GLM-TTS | | 1.5B | 1.03 | 76.1 | - | - | - | - |
| GLM-TTS RL | | 1.5B | 0.89 | 76.4 | - | - | - | - |
| Fun-CosyVoice3-0.5B-2512 | | 0.5B | 1.21 | 78.0 | 2.24 | 71.8 | 6.71 | 75.8 |
| Fun-CosyVoice3-0.5B-2512_RL | | 0.5B | 0.81 | 77.4 | 1.68 | 69.5 | 5.44 | 75.0 |

## Install

### Clone and install

- Clone the repo:

``` sh
git clone --recursive https://github.com/FunAudioLLM/CosyVoice.git
# If you fail to clone the submodule due to network failures, run the following command until it succeeds
cd CosyVoice
git submodule update --init --recursive
```

- Install Conda: see https://docs.conda.io/en/latest/miniconda.html
- Create a Conda env:

``` sh
conda create -n cosyvoice -y python=3.10
conda activate cosyvoice
pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com

# If you encounter sox compatibility issues
# ubuntu
sudo apt-get install sox libsox-dev
# centos
sudo yum install sox sox-devel
```

### Model download

We strongly recommend that you download our pretrained `Fun-CosyVoice3-0.5B`, `CosyVoice2-0.5B`, `CosyVoice-300M`, `CosyVoice-300M-SFT`, and `CosyVoice-300M-Instruct` models and the `CosyVoice-ttsfrd` resource.

``` python
# modelscope SDK model download
from modelscope import snapshot_download
snapshot_download('FunAudioLLM/Fun-CosyVoice3-0.5B-2512', local_dir='pretrained_models/Fun-CosyVoice3-0.5B')
snapshot_download('iic/CosyVoice2-0.5B', local_dir='pretrained_models/CosyVoice2-0.5B')
snapshot_download('iic/CosyVoice-300M', local_dir='pretrained_models/CosyVoice-300M')
snapshot_download('iic/CosyVoice-300M-SFT', local_dir='pretrained_models/CosyVoice-300M-SFT')
snapshot_download('iic/CosyVoice-300M-Instruct', local_dir='pretrained_models/CosyVoice-300M-Instruct')
snapshot_download('iic/CosyVoice-ttsfrd', local_dir='pretrained_models/CosyVoice-ttsfrd')

# for overseas users, huggingface SDK model download
from huggingface_hub import snapshot_download
snapshot_download('FunAudioLLM/Fun-CosyVoice3-0.5B-2512', local_dir='pretrained_models/Fun-CosyVoice3-0.5B')
snapshot_download('FunAudioLLM/CosyVoice2-0.5B', local_dir='pretrained_models/CosyVoice2-0.5B')
snapshot_download('FunAudioLLM/CosyVoice-300M', local_dir='pretrained_models/CosyVoice-300M')
snapshot_download('FunAudioLLM/CosyVoice-300M-SFT', local_dir='pretrained_models/CosyVoice-300M-SFT')
snapshot_download('FunAudioLLM/CosyVoice-300M-Instruct', local_dir='pretrained_models/CosyVoice-300M-Instruct')
snapshot_download('FunAudioLLM/CosyVoice-ttsfrd', local_dir='pretrained_models/CosyVoice-ttsfrd')
```

Optionally, you can unzip the `ttsfrd` resource and install the `ttsfrd` package for better text normalization performance.

Note that this step is not necessary; if you do not install the `ttsfrd` package, `wetext` is used by default.

``` sh
cd pretrained_models/CosyVoice-ttsfrd/
unzip resource.zip -d .
pip install ttsfrd_dependency-0.1-py3-none-any.whl
pip install ttsfrd-0.4.2-cp310-cp310-linux_x86_64.whl
```

### Basic Usage

We strongly recommend using `Fun-CosyVoice3-0.5B` for better performance.
Follow the code in `example.py` for detailed usage of each model.

```sh
python example.py
```

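If you want a quick impression before reading `example.py`, the snippet below is a minimal zero-shot voice cloning sketch against the `CosyVoice2` API documented in this repo; the prompt wav path and both text strings are placeholders, not files shipped by this commit.

``` python
import sys
sys.path.append('third_party/Matcha-TTS')  # make the Matcha-TTS submodule importable

import torchaudio
from cosyvoice.cli.cosyvoice import CosyVoice2
from cosyvoice.utils.file_utils import load_wav

# Load the model downloaded in the step above.
cosyvoice = CosyVoice2('pretrained_models/CosyVoice2-0.5B')

# Zero-shot cloning: a short 16 kHz reference recording plus its transcript.
# Both the wav path and the texts below are illustrative placeholders.
prompt_speech_16k = load_wav('./asset/zero_shot_prompt.wav', 16000)
for i, out in enumerate(cosyvoice.inference_zero_shot(
        'Hello, this is a quick CosyVoice smoke test.',
        'transcript of the prompt audio',
        prompt_speech_16k, stream=False)):
    torchaudio.save('zero_shot_{}.wav'.format(i), out['tts_speech'], cosyvoice.sample_rate)
```

Passing `stream=True` instead yields audio chunks incrementally, which is the audio-out half of the bi-streaming feature listed above.
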
#### vLLM Usage

CosyVoice2/3 now supports **vLLM 0.11.x+ (V1 engine)** and **vLLM 0.9.0 (legacy)**.
Older vLLM versions (<0.9.0) do not support CosyVoice inference, and versions in between (e.g., 0.10.x) are untested.

Note that `vllm` has many specific version requirements. You may want to create a fresh env in case your hardware does not support vllm, so that your existing env is not corrupted.

``` sh
conda create -n cosyvoice_vllm --clone cosyvoice
conda activate cosyvoice_vllm
# for vllm==0.9.0
pip install vllm==v0.9.0 transformers==4.51.3 numpy==1.26.4 -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com
# for vllm>=0.11.0
pip install vllm==v0.11.0 transformers==4.57.1 numpy==1.26.4 -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com
python vllm_example.py
```

#### Start web demo

You can use our web demo page to get familiar with CosyVoice quickly.

Please see the demo website for details.

``` sh
# change model_dir to iic/CosyVoice-300M-SFT for sft inference, or iic/CosyVoice-300M-Instruct for instruct inference
python3 webui.py --port 50000 --model_dir pretrained_models/CosyVoice-300M
```

#### Advanced Usage

For advanced users, we have provided training and inference scripts in `examples/libritts`, as sketched below.

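The recipe follows the stage-based `run.sh` convention used in WeNet-style repos, so a typical session looks roughly like this sketch; the stage numbers here are an assumption, so check the comments at the top of the script for the actual stage map.

``` sh
cd examples/libritts/cosyvoice
# Stage numbers are illustrative; stages usually cover data prep through training/inference.
bash run.sh --stage 0 --stop_stage 5
```
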
#### Build for deployment

Optionally, if you want service deployment, you can run the following steps.

``` sh
cd runtime/python
docker build -t cosyvoice:v1.0 .
# change iic/CosyVoice-300M to iic/CosyVoice-300M-Instruct if you want to use instruct inference
# for grpc usage
docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/grpc && python3 server.py --port 50000 --max_conc 4 --model_dir iic/CosyVoice-300M && sleep infinity"
cd grpc && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
# for fastapi usage
docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/fastapi && python3 server.py --port 50000 --model_dir iic/CosyVoice-300M && sleep infinity"
cd fastapi && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
```

#### Using Nvidia TensorRT-LLM for deployment

Using TensorRT-LLM to accelerate the cosyvoice2 llm can give a 4x speedup compared with the huggingface transformers implementation.
To get started quickly:

``` sh
cd runtime/triton_trtllm
docker compose up -d
```

For more details, check [here](https://github.com/FunAudioLLM/CosyVoice/tree/main/runtime/triton_trtllm).

## Discussion & Communication

You can discuss directly on [Github Issues](https://github.com/FunAudioLLM/CosyVoice/issues).

You can also scan the QR code to join our official DingTalk chat group.

<img src="./asset/dingding.png" width="250px">

## Acknowledgements

1. We borrowed a lot of code from [FunASR](https://github.com/modelscope/FunASR).
2. We borrowed a lot of code from [FunCodec](https://github.com/modelscope/FunCodec).
3. We borrowed a lot of code from [Matcha-TTS](https://github.com/shivammehta25/Matcha-TTS).
4. We borrowed a lot of code from [AcademiCodec](https://github.com/yangdongchao/AcademiCodec).
5. We borrowed a lot of code from [WeNet](https://github.com/wenet-e2e/wenet).

## Citations

``` bibtex
@article{du2024cosyvoice,
  title={Cosyvoice: A scalable multilingual zero-shot text-to-speech synthesizer based on supervised semantic tokens},
  author={Du, Zhihao and Chen, Qian and Zhang, Shiliang and Hu, Kai and Lu, Heng and Yang, Yexin and Hu, Hangrui and Zheng, Siqi and Gu, Yue and Ma, Ziyang and others},
  journal={arXiv preprint arXiv:2407.05407},
  year={2024}
}

@article{du2024cosyvoice2,
  title={Cosyvoice 2: Scalable streaming speech synthesis with large language models},
  author={Du, Zhihao and Wang, Yuxuan and Chen, Qian and Shi, Xian and Lv, Xiang and Zhao, Tianyu and Gao, Zhifu and Yang, Yexin and Gao, Changfeng and Wang, Hui and others},
  journal={arXiv preprint arXiv:2412.10117},
  year={2024}
}

@article{du2025cosyvoice,
  title={CosyVoice 3: Towards In-the-wild Speech Generation via Scaling-up and Post-training},
  author={Du, Zhihao and Gao, Changfeng and Wang, Yuxuan and Yu, Fan and Zhao, Tianyu and Wang, Hao and Lv, Xiang and Wang, Hui and Shi, Xian and An, Keyu and others},
  journal={arXiv preprint arXiv:2505.17589},
  year={2025}
}

@inproceedings{lyu2025build,
  title={Build LLM-Based Zero-Shot Streaming TTS System with Cosyvoice},
  author={Lyu, Xiang and Wang, Yuxuan and Zhao, Tianyu and Wang, Hao and Liu, Huadai and Du, Zhihao},
  booktitle={ICASSP 2025-2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={1--2},
  year={2025},
  organization={IEEE}
}
```

## Disclaimer
The content provided above is for academic purposes only and is intended to demonstrate technical capabilities. Some examples are sourced from the internet. If any content infringes on your rights, please contact us to request its removal.

asset/CosyVoiceJP-GUI.png

166 KB

cosyvoice/utils/file_utils.py

Lines changed: 16 additions & 2 deletions
``` diff
@@ -18,6 +18,8 @@
 import json
 import torch
 import torchaudio
+import soundfile as sf
+import numpy as np
 import logging
 logging.getLogger('matplotlib').setLevel(logging.WARNING)
 logging.basicConfig(level=logging.DEBUG,
@@ -42,8 +44,20 @@ def read_json_lists(list_file):


 def load_wav(wav, target_sr, min_sr=16000):
-    speech, sample_rate = torchaudio.load(wav, backend='soundfile')
-    speech = speech.mean(dim=0, keepdim=True)
+    # Use soundfile directly to avoid torchcodec issues on Windows
+    try:
+        speech_np, sample_rate = sf.read(wav, dtype='float32')
+        # Convert to torch tensor
+        if speech_np.ndim == 1:
+            speech = torch.from_numpy(speech_np).unsqueeze(0)
+        else:
+            # Multi-channel: convert to mono by averaging
+            speech = torch.from_numpy(speech_np.T).mean(dim=0, keepdim=True)
+    except Exception as e:
+        logging.warning(f'soundfile failed, falling back to torchaudio: {e}')
+        speech, sample_rate = torchaudio.load(wav, backend='soundfile')
+        speech = speech.mean(dim=0, keepdim=True)
+
     if sample_rate != target_sr:
         assert sample_rate >= min_sr, 'wav sample rate {} must be greater than {}'.format(sample_rate, target_sr)
         speech = torchaudio.transforms.Resample(orig_freq=sample_rate, new_freq=target_sr)(speech)
```
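
This patch makes `soundfile` the primary decode path (sidestepping the torchcodec-backed `torchaudio.load` that breaks on Windows) and keeps the old torchaudio call as a fallback, so behavior on other platforms is unchanged. A minimal smoke test for the patched loader, assuming some local `test.wav` (the filename is illustrative):

``` python
from cosyvoice.utils.file_utils import load_wav

# load_wav returns a mono float32 tensor shaped (1, num_samples),
# resampled to target_sr when the source rate differs.
speech = load_wav('test.wav', 16000)
print(speech.shape, speech.dtype)
```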
