this post was submitted on 23 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


Hi.

Anyone got experience using (a set of) local LLMs for practicing a new language? (Spanish, not Python.) Curious about experiences and lessons learned.

And, extending that thought, what 'scaffolding' would be required around a set of LLMs to be able to do the following (a rough sketch of the tracking side follows the list):

  • assess a student's current proficiency
  • set up some kind of study guide
  • provide assignments (vocab training, writing prompts, reading comprehension, speaking exercises, listening exercises)
  • evaluate responses to assignments
  • give feedback on responses
  • keep track of progress over time and adjust assignments accordingly
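
For concreteness, here is a minimal sketch of what the progress-tracking side of that scaffolding could look like. Everything in it (LearnerProfile, the skill names, the blending weights) is hypothetical, not an existing library:

```python
# Hypothetical sketch of progress tracking around a tutor LLM; none of
# these names come from an existing library.
from dataclasses import dataclass, field

@dataclass
class LearnerProfile:
    level: str = "A1"                           # CEFR-style proficiency estimate
    scores: dict = field(default_factory=dict)  # skill -> rolling average, 0..1

    def record(self, skill: str, score: float) -> None:
        """Blend a new assignment score into the rolling average for a skill."""
        prev = self.scores.get(skill, score)
        self.scores[skill] = 0.7 * prev + 0.3 * score

    def next_assignment(self) -> str:
        """Pick the weakest skill so the study guide adapts over time."""
        skills = ["vocab", "writing", "reading", "speaking", "listening"]
        return min(skills, key=lambda s: self.scores.get(s, 0.0))

profile = LearnerProfile()
profile.record("vocab", 0.9)
profile.record("listening", 0.4)
print(profile.next_assignment())  # -> "writing" (untried skills surface first)
```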

I *assume* something like this would require multiple models, in order to handle text-to-speech and automatic speech recognition. Is Whisper (for example) useful for evaluating (and giving feedback on) pronunciation?
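
On the Whisper question: it transcribes rather than grading pronunciation directly, but a rough proxy would be to transcribe the learner's recording and compare it against the sentence they were asked to read. A minimal sketch, assuming the openai-whisper package and a pre-recorded attempt.wav:

```python
# Rough pronunciation proxy: transcribe the learner's recording with Whisper
# and measure how close the transcript is to the prompted sentence.
# Assumes `pip install openai-whisper` and an existing recording attempt.wav.
import difflib
import whisper

target = "¿Dónde está la biblioteca?"

model = whisper.load_model("small")             # larger models transcribe better
result = model.transcribe("attempt.wav", language="es")
heard = result["text"].strip()

similarity = difflib.SequenceMatcher(None, target.lower(), heard.lower()).ratio()
print(f"Whisper heard: {heard!r} (similarity {similarity:.0%})")
```

This only catches words Whisper mishears outright, not subtler accent problems, so treat it as a coarse signal.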

top 7 comments
[–] SomeOddCodeGuy@alien.top 1 points 10 months ago (1 children)

The most capable multilingual model I'm aware of is OpenBuddy 70B. I use it as a foreign-language tutor, and it does an OK job. I constantly check it against Google Translate, and it hasn't let me down yet, but YMMV. I don't use it a ton.

I think the problem is that, in general, technology hasn't been great at foreign-language translation. Google Translate is the state of the art in that realm, and even it isn't perfect. I'm not sure I'd trust it in a real production sense, but I do trust it enough to help me learn just enough to get by.

So with that said, you could likely get fairly far mixing any LLM with a handful of tools. For example, I believe SillyTavern has a Google Translate module built in, so you could use Google to do the translations. Then, having multiple speech-to-text/text-to-speech modules, one for each language, might give you that flexibility of input and output.
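
As a rough sketch of that cross-checking idea, assuming text-generation-webui's OpenAI-compatible API on its default port and the deep-translator package standing in for a Google Translate module:

```python
# Sketch: ask a local LLM for a translation, then sanity-check it against
# Google Translate. Assumes text-generation-webui's OpenAI-compatible API on
# its default port and `pip install deep-translator`.
import requests
from deep_translator import GoogleTranslator

phrase = "I would like to order the fish, please."

resp = requests.post(
    "http://127.0.0.1:5000/v1/chat/completions",
    json={"messages": [{"role": "user",
                        "content": f"Translate into Spanish: {phrase}"}]},
    timeout=60,
)
llm_version = resp.json()["choices"][0]["message"]["content"].strip()

reference = GoogleTranslator(source="en", target="es").translate(phrase)
print(f"LLM:    {llm_version}\nGoogle: {reference}")
```

If the two disagree badly, flag the exchange for review rather than trusting either side.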

Essentially, I would imagine that 90% of the work will be developing tooling around any decent LLM, regardless of its language abilities, and then using external tooling to support that. I could be wrong, though.

[–] Blkwinz@alien.top 1 points 10 months ago (1 children)

Rather than translating, are you aware of any that are capable of independently interpreting and giving comprehensible responses to prompts in multiple languages? Other than that OpenBuddy model, that is; there's no way my hardware can run a 70B.

[–] SomeOddCodeGuy@alien.top 1 points 10 months ago

Hmm... I'm afraid I'm not sure of the answer to that personally, though I do recommend checking out these tests, as Wolfram has the models work back and forth between German and English.

https://www.reddit.com/r/LocalLLaMA/comments/17vcr9d/llm_comparisontest_2x_34b_yi_dolphin_nous/

[–] tinykidtoo@alien.top 1 points 10 months ago (1 children)

I had some success making a workflow out of Whisper so I could speak my target language, an LLM trained in the target language, and a TTS capable of producing my target language.

This allowed me to practice conversations. Some were hit or miss; I suspect this is because I am very new to my target language. But it was useful and let me practice things like ordering food.
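
The shape of that loop, as a skeleton; the three helpers are placeholders for whichever Whisper, LLM, and TTS backends you actually wire in:

```python
# Skeleton of the conversation-practice loop described above. The three
# helpers are placeholders; wire them to your own Whisper / LLM / TTS stack.
def transcribe(audio_path: str) -> str:
    raise NotImplementedError("e.g. whisper.load_model(...).transcribe(...)")

def reply(history: list[dict]) -> str:
    raise NotImplementedError("e.g. a call to your local LLM's chat endpoint")

def speak(text: str) -> None:
    raise NotImplementedError("e.g. Bark or Coqui TTS synthesis plus playback")

# A system prompt pins the scenario, e.g. ordering food.
history = [{"role": "system",
            "content": "You are a patient waiter in a small restaurant. "
                       "Answer only in simple Japanese."}]

while True:
    said = transcribe("mic_capture.wav")        # learner speaks
    history.append({"role": "user", "content": said})
    answer = reply(history)                     # model answers in target language
    history.append({"role": "assistant", "content": answer})
    speak(answer)                               # learner hears the response
```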

The language app Memrise has a similar system, though of course for a price.

[–] ethertype@alien.top 1 points 10 months ago (1 children)

Would you care to describe your setup in more detail? Do you have any notes suitable for publishing on GitHub or similar?

[–] tinykidtoo@alien.top 1 points 10 months ago

I am using the text-generation-webui by oobabooga https://github.com/oobabooga/text-generation-webui

One of the built-in extensions is whisper_stt; you will need to enable it in the webui's settings (or launch with --extensions whisper_stt). https://github.com/oobabooga/text-generation-webui/tree/main/extensions/whisper_stt

I have been using ELYZA-japanese-Llama-2-7b. Other models specific to your target language should work. https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b

Lastly, I created my own plugin, which is unfortunately no longer maintained. Using a Python script similar to the silero_tts extension, I swapped in calls to Bark TTS. I only chose Bark because it had a Japanese model.
https://github.com/suno-ai/bark
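
For reference, basic Bark synthesis follows the suno-ai/bark README; the Japanese sentence here is just a stand-in example:

```python
# Minimal Bark synthesis, following the suno-ai/bark README.
from bark import SAMPLE_RATE, generate_audio, preload_models
from scipy.io.wavfile import write as write_wav

preload_models()                               # downloads models on first run
audio = generate_audio("すみません、メニューをお願いします。")
write_wav("bark_out.wav", SAMPLE_RATE, audio)  # audio is a numpy array
```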

But you might have some luck with the new coqui_tts extension, which is under development. Hopefully they will fix the error I have been having with multi-language support. It's built in; you would just need to install its requirements.txt. https://github.com/oobabooga/text-generation-webui/tree/main/extensions/coqui_tts
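
If you want to try Coqui directly rather than through the extension, the library call looks roughly like this; XTTS v2 is its multilingual model and needs a short reference clip for the voice (file names here are placeholders):

```python
# Minimal Coqui TTS synthesis with the multilingual XTTS v2 model.
# Assumes `pip install TTS`; reference_voice.wav is a placeholder clip.
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
tts.tts_to_file(
    text="すみません、メニューをお願いします。",
    speaker_wav="reference_voice.wav",  # a few seconds of any speaker
    language="ja",
    file_path="coqui_out.wav",
)
```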

[–] _Lee_B_@alien.top 1 points 10 months ago

One day they will be excellent at this. Right now, I think hallucinations are too much of a concern to rely on them for education in a language you don't know.