I had some success building a workflow that combines Whisper to transcribe my speech in the target language, an LLM trained on the target language, and a TTS capable of producing the target language.
This allowed me to practice conversations. Some were hit or miss; I suspect this is because I am very new to my target language. But it was useful and let me practice things like ordering food.
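At a high level the loop is just speech-to-text, text-to-text, text-to-speech. Here is a minimal sketch of that loop, assuming the openai-whisper package; the LLM and TTS calls are placeholder stubs to swap for whatever target-language models you pick:

```python
# Minimal practice-loop sketch: speech in -> text -> LLM reply -> speech out.
# Assumes the openai-whisper package; the two stubs below are placeholders
# for your own target-language LLM and TTS.
import whisper

stt = whisper.load_model("small")  # Whisper supports many languages

def generate_reply(text: str) -> str:
    # Placeholder: call your target-language LLM here.
    return "(LLM reply to: " + text + ")"

def synthesize(text: str, out_path: str) -> None:
    # Placeholder: call your target-language TTS here.
    pass

def practice_turn(wav_path: str) -> None:
    heard = stt.transcribe(wav_path, language="ja")["text"]  # speech -> text
    reply = generate_reply(heard)                            # text -> reply
    synthesize(reply, "reply.wav")                           # reply -> speech
    print("You said:", heard)
    print("Tutor said:", reply)
```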
The language app Memrise has a similar system, though of course for a price.
Would you care to describe your setup in more detail? Do you have any notes suitable for publishing on GitHub or similar?
I am using the text-generation-webui by oobabooga https://github.com/oobabooga/text-generation-webui
One of the built-in extensions is whisper_stt; you will need to enable it in the webui's settings. https://github.com/oobabooga/text-generation-webui/tree/main/extensions/whisper_stt
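If memory serves, you can also enable it at launch instead of through the settings page:

```
python server.py --extensions whisper_stt
```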
I have been using ELYZA-japanese-Llama-2-7b. Other models specific to your target language should work too. https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b
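If you want to try the model outside the webui first, it loads like any other Llama-2 checkpoint through transformers. A minimal sketch (the prompt is just an example, and device_map="auto" assumes accelerate is installed):

```python
# Minimal sketch: load ELYZA-japanese-Llama-2-7b with Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "elyza/ELYZA-japanese-Llama-2-7b"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, device_map="auto")

prompt = "日本語で自己紹介してください。"  # "Please introduce yourself in Japanese."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```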
Lastly, I created my own extension, which is unfortunately no longer maintained. Using a Python script similar to the silero_tts extension, I swapped in calls to Bark TTS. I chose Bark only because it had a Japanese model.
https://github.com/suno-ai/bark
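For reference, the Bark call itself is only a few lines, roughly like this (a minimal sketch; the speaker preset is one of Bark's stock Japanese voices, check the repo's speaker library for the current names):

```python
# Minimal sketch: generating Japanese speech with Bark.
from bark import SAMPLE_RATE, generate_audio, preload_models
from scipy.io.wavfile import write as write_wav

preload_models()  # downloads and caches the Bark models on first run

text = "こんにちは、今日は何を注文しますか？"
# "v2/ja_speaker_1" is one of Bark's stock Japanese voice presets.
audio = generate_audio(text, history_prompt="v2/ja_speaker_1")
write_wav("reply.wav", SAMPLE_RATE, audio)
```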
But you might have some luck with the new coqui_tts extension, which is under active development; hopefully they will fix the error I have been having with multi-language support. It is already built in, so you would just need to install its requirements.txt. https://github.com/oobabooga/text-generation-webui/tree/main/extensions/coqui_tts
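If you want to test Coqui on its own before wiring it into the webui, the library can also be called directly. A minimal sketch assuming the multilingual XTTS v2 model, which clones a voice from a short reference clip (speaker.wav is a placeholder path):

```python
# Minimal sketch: multilingual speech with Coqui TTS (XTTS v2).
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
# XTTS clones the voice from a short reference clip; speaker.wav is a
# placeholder path to any few-second sample of a voice.
tts.tts_to_file(
    text="いらっしゃいませ、ご注文をどうぞ。",
    speaker_wav="speaker.wav",
    language="ja",
    file_path="reply.wav",
)
```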