LocalLLaMA
Community to discuss Llama, the family of large language models created by Meta AI.
I just started down this rabbit hole and have a lot of questions. If you don't care about real-time inference and just want a high-quality voice clone, what's the best option? I'm looking to do semi-dynamic narration over video.
TortoiseTTS using the voice-ai-cloning repository. I had a dataset of 20 minutes, 5 minutes of footage, along with an hour of tweaking the hyperparameters, and I have a voice that sounds pretty damn human. I tried training for a long time, but it just sounds worse after the first few epochs.
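For anyone wanting to try the same route, inference with a cloned voice in tortoise-tts looks roughly like this. A sketch only, assuming `pip install tortoise-tts`; the voice directory and file names are placeholders I made up, not from the comment above:

```python
def speak(text, voice_dir="voices/myvoice", preset="high_quality"):
    """Synthesize `text` in a voice cloned from WAV clips in voice_dir.

    Hedged sketch: assumes `pip install tortoise-tts` and that voice_dir
    holds a handful of short 22.05 kHz WAV clips of the target speaker.
    """
    import glob
    # imports deferred so the sketch can be read without the package installed
    from tortoise.api import TextToSpeech
    from tortoise.utils.audio import load_audio

    # load the reference clips that condition the cloned voice
    clips = [load_audio(p, 22050) for p in glob.glob(f"{voice_dir}/*.wav")]
    tts = TextToSpeech()
    # returns the generated audio as a tensor (24 kHz output)
    return tts.tts_with_preset(text, voice_samples=clips, preset=preset)
```

The `preset` knob ("ultra_fast" through "high_quality") is the main speed/quality trade-off; for offline narration like this, there's little reason not to use the slow end.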
The licensing on this blows, but they have a very unique model IMO: StyleTTS.
It picks up the appropriate voice/intonation according to the text, which I personally haven't seen done anywhere else yet!
StyleTTS2? Why not use Coqui TTS / XTTS2?
Also, VITS makes for kinda great voices.
VITS2 was published recently by the authors of VITS. If I understand correctly, it adds transformers, runs more efficiently than VITS, and is capable of better voices too, given the right dataset. Some folks made an open-source implementation of it with the help of the paper's authors; see the GitHub repo.
Not open source, but ResembleAI and GemeloAI are good real-time TTS options via API, although not free.
Which Coqui model did you use? The new XTTS2 model is excellent IMO.
And fast. Not sure they’ll find something better.
XTTS2 sounds acceptably good to me, even comparable to ElevenLabs in some respects.
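For reference, zero-shot cloning with XTTS v2 through the Coqui TTS Python API is only a few lines. A sketch, assuming `pip install TTS`; the file names are placeholders:

```python
def narrate(text, ref_wav, out_path="narration.wav"):
    """Speak `text` in the voice of the reference clip using Coqui XTTS v2.

    Sketch only: assumes `pip install TTS`. The first call downloads the
    multilingual XTTS v2 checkpoint, so it is slow once, then cached.
    """
    from TTS.api import TTS  # deferred so the file imports without the package

    tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
    # speaker_wav is a short clip of the target voice; no fine-tuning needed
    tts.tts_to_file(text=text, speaker_wav=ref_wav,
                    language="en", file_path=out_path)
    return out_path
```

For batch narration over video you'd just loop this over your script, one segment per call.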
Silero TTS is extremely fast, and combined with RVC you can clone any voice from any person/character. It's a bit monotonous, but it's the best available for free IMO.
And if you want the best quality: use the 10,000 free words per month of your 11Labs account. Once you run out, switch to Silero TTS. In both cases, pipe the audio output into a real-time RVC app.
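The budget-then-fallback idea above can be sketched as a tiny router. The 10,000-word figure is from the comment; the engine names are just labels, and the actual TTS and RVC calls are stubbed out:

```python
FREE_WORDS = 10_000  # ElevenLabs free tier, words per month

def pick_engine(text, words_used):
    """Route text to ElevenLabs while the free budget lasts, else Silero.

    Returns (engine_name, updated_word_count). The real TTS requests and
    the RVC post-processing step are out of scope; this only shows routing.
    """
    n = len(text.split())
    if words_used + n <= FREE_WORDS:
        return "elevenlabs", words_used + n  # still within the free tier
    return "silero", words_used  # local fallback spends no budget
```

Either way the output audio goes through the same RVC voice-conversion step, so the narration keeps a consistent voice across both engines.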