I tested the 3B model on Romanian, Russian, French, and German translations of "The sun rises in the East and sets in the West.", and it works 100%: it gets 10/10 from ChatGPT.
LocalLLaMA
Community to discuss Llama, the family of large language models created by Meta AI.
Nice, thank you!! Tried it in the space; works well for me. Noob question: since it's GGUF, can I run this with llama.cpp? Can I download it and run it locally?
Nice, I will check MADLAD later. I thought SeamlessM4T was the best translation model from Meta; I didn't even know NLLB existed. Has anyone used both and can point out the differences? SeamlessM4T seemed amazingly good in my experience, but it supports fewer languages, perhaps, idk.
koboldcpp 1.46.1 (from October) says "ERROR: Detected unimplemented GGUF Arch." It's best to get the newest version of the backend.
I've been relying on Claude AI to translate Korean texts to English. I'm excited to use a local version if the context window is large enough.
I haven't tested it, but I'm surprised to see LLMs good enough to translate multiple languages running locally. I expected to see one-to-one language translation models before this, like a model dedicated to Chinese-English translation, another dedicated to Korean-French, etc.
Sorry to be pedantic, but the translation models they released are not LLMs. They are T5 seq2seq models with cross-encoding, as in the original Transformer paper. They also released an LM that's a decoder-only T5. They tried few-shot learning with it, but it performs much worse than the MT models.
I think that the first multilingual Neural Machine Translation model is from 2016: https://arxiv.org/abs/1611.04558. However, specialized models for pairs of languages are still popular. For example: https://huggingface.co/Helsinki-NLP/opus-mt-de-en
I've been relying on Claude AI to translate Korean texts to English.
So did I, with Korean novel chapters, but since yesterday it started to either refuse to translate, stop at 1/6 of the text, or write summaries instead of translations.
Why the shitty name?
Gibberish names have been a thing since the 90s. It's hard coming up with a name when everyone is racing to create the next Big Thing. Also, I think techies are more tolerant of cumbersome names/domains.
Does anyone know how it compares with Google Translate and DeepL? I'm guessing, since Google released it, it will work worse than Google Translate 🤷♂️
Meta's NLLB is supposed to be the best translator model, right? But it's for non-commercial use only. How does MADLAD compare to NLLB?
NLLB has horrible performance. I've done extensive testing with it and wouldn't even translate a children's book with it. Google Translate does a much better job, and that's saying something, lol.
The MADLAD-400 paper has a bunch of comparisons with NLLB. MADLAD beats NLLB in some benchmarks, is quite close in others, and loses in some. But the largest MADLAD is 5x smaller than the original NLLB, and it supports 2x more languages.
n00b here. can it run in oobabooga?
It should. Support for T5 based models was added in https://github.com/oobabooga/text-generation-webui/pull/1535
Yes, it indeed works. I managed to run the 10B model on CPU; it uses 40GB of RAM, but somehow I felt your 3B space gave me a better translation.
How do you load the model? I pasted jbochi/madlad400-3b-mt in the download model field and used the "transformers" model loader, but it can't handle it: OSError: It looks like the config file at 'models/model.safetensors' is not a valid JSON file.
I think I did exactly like you say, so I have no idea why you got an error.
Most people only need a few languages, such as EN, CN, and JP. If there were versions for specific language combinations, I would use them to develop my own translation application.
Check the OPUS models by Helsinki-NLP: https://huggingface.co/Helsinki-NLP?sort_models=downloads#models
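For what it's worth, the OPUS checkpoints follow a predictable per-pair naming scheme and can be used through the transformers translation pipeline. A minimal sketch, assuming the Helsinki-NLP naming convention (the de-en pair is just an example):

```python
def opus_model_name(src: str, tgt: str) -> str:
    """Helsinki-NLP publishes one OPUS-MT checkpoint per language pair."""
    return f"Helsinki-NLP/opus-mt-{src}-{tgt}"


def translate_pair(text: str, src: str, tgt: str) -> str:
    # transformers is imported lazily so opus_model_name stays dependency-free
    from transformers import pipeline

    translator = pipeline("translation", model=opus_model_name(src, tgt))
    return translator(text)[0]["translation_text"]
```

Not every pair exists on the Hub, so it's worth checking the model page before wiring a pair into an app.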
u/jbochi, when I try to load your Hugging Face model (madlad400-3b-mt), I get this ValueError while loading the tokenizer. Can you please tell me how to resolve it?
ValueError Traceback (most recent call last)
ValueError: Non-consecutive added token '' found. Should have index 256100
I don't think it's working.
Sorry, but what is not working?
I wrote incomplete text to see how it would translate it, and the result is a continuation of my text, not a translation.
How are you running it? Did you prepend a "<2xx>" token for the target language? For example, "<2fr> hello" will translate "hello" into French. If you are using the space, you can select the target language in the dropdown.
I am using the code of the space.
Got it. Can you please share the full prompt?
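For anyone following along, the "<2xx>" prefixing described above can be sketched with transformers. This is a minimal sketch assuming the jbochi/madlad400-3b-mt checkpoint; the 128-token generation cap is an illustrative choice:

```python
def build_prompt(text: str, target_lang: str) -> str:
    """MADLAD expects a '<2xx>' target-language token before the source text."""
    return f"<2{target_lang}> {text}"


def translate(text: str, target_lang: str,
              model_name: str = "jbochi/madlad400-3b-mt") -> str:
    # transformers is imported lazily so build_prompt stays dependency-free
    from transformers import T5ForConditionalGeneration, T5Tokenizer

    tokenizer = T5Tokenizer.from_pretrained(model_name)
    model = T5ForConditionalGeneration.from_pretrained(model_name)
    inputs = tokenizer(build_prompt(text, target_lang), return_tensors="pt")
    out = model.generate(inputs.input_ids, max_new_tokens=128)
    return tokenizer.decode(out[0], skip_special_tokens=True)
```

Without the prefix, the model has no target language to condition on, which would explain it continuing the text instead of translating it.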
this is nice. I'm doing some translation work with some sophisticated Arabic words (Arabic is sometimes ranked as the most complicated language; we call the ones who master it scientists, lol).
how can I run this on my Mac, in layman's terms?
Thanks a lot for converting and quantizing these. I have a couple of questions.
How does it compare to ALMA (13B)?
Is it capable of translating more than 1 sentence at a time?
Is there a way to specify source language or does it always detect it on its own?
Thanks!
- I'm not familiar with ALMA, but it seems similar to MADLAD-400. Both are smaller than NLLB-54B but competitive with it. Because ALMA is an LLM and not a seq2seq model with cross-encoding, I'd guess it's faster.
- You can translate up to 128 tokens at a time.
- You can only specify the target language, not the source language.
If anything needed some minimalist app, this would be it.
Hi, I get the following error when trying to run it from transformers, copying the code provided on Hugging Face:
Traceback (most recent call last):
File "/home/XXX/project/translation/translateMADLAD.py", line 10, in
tokenizer = T5Tokenizer.from_pretrained('jbochi/madlad400-3b-mt')
File "/home/lXXX/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1841, in from_pretrained
return cls._from_pretrained(
File "/home/lXXX/.local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2060, in _from_pretrained
raise ValueError(
ValueError: Non-consecutive added token '' found. Should have index 256100 but has index 256000 in saved vocabulary.
What would be the equivalent models that are open source and free for commercial use?
Not sure if this has been asked yet, but how good are the translations from this model compared to normal GPT-3.5 and Claude?
Thanks.
Good question. ALMA compares itself against NLLB and GPT-3.5, and the 13B barely surpasses GPT-3.5. MADLAD-400 probably beats GPT-3.5 on lower-resource languages only.
I tested two sentences: one from Hindi to English, which it translated fine. The other was romanized Hindi, which it couldn't handle. Input: "Sir mera dhaan ka fasal hai"; the output was the same as the input. Both ChatGPT and Google Translate can handle this.
Could you please convert the other versions as well, or release the code you used?
@jbochi, is it possible to run the cargo example with batch inputs?
cargo run --example t5 --release --features cuda -- \
    --model-id "jbochi/madlad400-3b-mt" \
    --prompt "<2de> How are you, my friend?" \
    --temperature 0
Thanks
Yes, I would be interested to know if this is possible
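I don't know whether the candle t5 example supports batching, but on the transformers side batching is straightforward with padding. A hedged sketch, again assuming the jbochi/madlad400-3b-mt checkpoint:

```python
def build_batch_prompts(texts: list[str], target_lang: str) -> list[str]:
    """MADLAD selects the target language via a '<2xx>' prefix token."""
    return [f"<2{target_lang}> {t}" for t in texts]


def translate_batch(texts: list[str], target_lang: str,
                    model_name: str = "jbochi/madlad400-3b-mt") -> list[str]:
    # heavy deps imported lazily so build_batch_prompts stays dependency-free
    import torch
    from transformers import T5ForConditionalGeneration, T5Tokenizer

    tokenizer = T5Tokenizer.from_pretrained(model_name)
    model = T5ForConditionalGeneration.from_pretrained(model_name)
    batch = tokenizer(build_batch_prompts(texts, target_lang),
                      return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model.generate(**batch, max_new_tokens=128)
    return tokenizer.batch_decode(out, skip_special_tokens=True)
```

Padding to the longest prompt in the batch wastes some compute, so sorting inputs by length before batching usually helps throughput.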
Btw, is the inference time of MADLAD-400 much slower compared to opus-mt?