HybridVC: Efficient Voice Style Conversion with Text and Audio Prompts
Abstract
We introduce HybridVC, a voice conversion (VC) framework built upon a pre-trained conditional variational autoencoder (CVAE) that combines the strengths of a latent model with contrastive learning. HybridVC supports both text and audio prompts, enabling more flexible voice style conversion. HybridVC models a latent distribution conditioned on speaker embeddings obtained from a pre-trained speaker encoder and, in parallel, optimises style text embeddings to align with the speaker style information through contrastive learning. As a result, HybridVC can be trained efficiently under limited computational resources. Our experiments demonstrate HybridVC's superior training efficiency and its capability for advanced multi-modal voice style conversion, underscoring its potential for widespread applications such as user-defined personalised voices on social media platforms. A comprehensive ablation study further validates the effectiveness of our method.
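The contrastive alignment described above pulls each style text embedding towards the speaker embedding of its matching utterance while pushing it away from the other speakers in the batch. As a rough illustration, here is a minimal NumPy sketch of a symmetric InfoNCE-style objective over a batch of text/speaker embedding pairs; the function name, temperature value, and exact loss form are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def contrastive_alignment_loss(text_emb, spk_emb, temperature=0.07):
    """Hypothetical symmetric InfoNCE loss: row i of text_emb and
    row i of spk_emb are a matched text/speaker pair."""
    # L2-normalise both embedding sets so logits are cosine similarities
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    s = spk_emb / np.linalg.norm(spk_emb, axis=1, keepdims=True)
    logits = t @ s.T / temperature  # (batch, batch) similarity matrix
    n = logits.shape[0]
    idx = np.arange(n)
    # matched pairs lie on the diagonal; take -log softmax of the diagonal
    t2s = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    s2t = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    return -(t2s[idx, idx].mean() + s2t[idx, idx].mean()) / 2.0
```

Minimising this loss drives matched text and speaker embeddings together in a shared space, which is what allows a text prompt to substitute for an audio prompt at inference time.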
Architecture
Comparison of samples generated by FreeVC [1], YourTTS [2], and HybridVC
This section contains the demonstration samples from Table 1 of the paper, which cover voice style conversion given an audio prompt only. HybridVC (PromptSpeech) denotes the model trained on the PromptSpeech dataset, and HybridVC (VCTK) denotes the model trained on the VCTK training set.
Generated samples of HybridVC with style text prompts only
This section contains demonstrations of voice style conversion given style text prompts only.
- Convert to a women’s voice.
- Speaks as a man’s voice.
- Please talk in a bass male sound quickly.
- High pitched masculine sound.
- A female bass whispering in low pitch.
- A man speaks.
- A women speaks.
- Convert to a higher pitch.
- This man said loudly that his speed is very slow, but the pitch is high.
- Higher volume and pitch.
Demonstration of consistency test
This part demonstrates the consistency test reported in Table 4 of the paper.
References
[1] Li, J., Tu, W., & Xiao, L. (2023, June). FreeVC: Towards high-quality text-free one-shot voice conversion. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 1-5). IEEE.
[2] Casanova, E., Weber, J., Shulby, C. D., Junior, A. C., Gölge, E., & Ponti, M. A. (2022, June). YourTTS: Towards zero-shot multi-speaker TTS and zero-shot voice conversion for everyone. In International Conference on Machine Learning (pp. 2709-2720). PMLR.