this post was submitted on 23 Nov 2023
1 point (100.0% liked)
LocalLLaMA
3 readers
1 user here now
Community to discuss Llama, the family of large language models created by Meta AI.
founded 1 year ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
The most capable multilingual model I'm aware of is OpenBuddy 70B. I use it as a foreign-language tutor, and it does an OK job. I constantly check it against Google Translate, and it hasn't let me down yet, but YMMV. I don't use it a ton.
I think the problem is that, in general, technology hasn't been great at foreign-language translation. Google Translate is SOTA in that realm, and even it isn't perfect. I'm not sure I'd trust it in a real production setting, but I do trust it enough to help me learn just enough to get by.
With that said, you could likely get pretty far by mixing any LLM with a handful of tools. For example, I believe SillyTavern has a Google Translate module built in, so you could let Google handle the translations. Then, having multiple speech-to-text/text-to-speech modules, one for each language, might give you that flexibility of input and output.
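To make that concrete, here's a rough sketch of what that glue could look like. It's a minimal Python outline under my own assumptions: the helpers speech_to_text, translate, llm_complete, and text_to_speech are hypothetical placeholders, not real library calls, and you'd wire them up to whatever STT/TTS engines, translation service, and local LLM backend you actually use.

```python
# Hypothetical per-language pipeline around a local LLM.
# Every helper below is a placeholder, not a real API; replace each one
# with your chosen STT/TTS engine, translation service, and LLM client.

def speech_to_text(audio_in: bytes, lang: str) -> str:
    """Placeholder: transcribe audio with a per-language STT model."""
    raise NotImplementedError

def translate(text: str, src: str, dest: str) -> str:
    """Placeholder: call a translation service (Google Translate or similar)."""
    raise NotImplementedError

def llm_complete(prompt: str) -> str:
    """Placeholder: send the prompt to your local LLM and return its reply."""
    raise NotImplementedError

def text_to_speech(text: str, lang: str) -> bytes:
    """Placeholder: synthesize audio with a per-language TTS voice."""
    raise NotImplementedError

def converse(audio_in: bytes, user_lang: str, llm_lang: str = "en") -> bytes:
    # 1. Transcribe the user's speech in their own language.
    user_text = speech_to_text(audio_in, lang=user_lang)

    # 2. Translate the request into the language the LLM handles best.
    prompt = translate(user_text, src=user_lang, dest=llm_lang)

    # 3. Let the LLM do the actual reasoning and answering.
    reply = llm_complete(prompt)

    # 4. Translate the answer back and speak it in the user's language.
    reply_back = translate(reply, src=llm_lang, dest=user_lang)
    return text_to_speech(reply_back, lang=user_lang)
```

The point of the sketch is that the LLM itself only ever sees one language; everything else is handled by the tooling around it.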
Essentially, I would imagine that 90% of the work will be building tooling around any decent LLM, regardless of its language abilities, and letting that external tooling carry the rest. I could be wrong, though.
Rather than translating, are you aware of any models that can independently interpret and give comprehensible responses to prompts in multiple languages? Other than that OpenBuddy model, that is; there's no way my hardware can run a 70B.
Hmm... I'm afraid I'm personally not sure of the answer to that, though I do recommend checking out these tests, where Wolfram has the models work back and forth between German and English.
https://www.reddit.com/r/LocalLLaMA/comments/17vcr9d/llm_comparisontest_2x_34b_yi_dolphin_nous/