Good question. ALMA compares itself against NLLB and GPT-3.5, and its 13B model barely surpasses GPT-3.5. MADLAD-400 probably beats GPT-3.5 only on lower-resource languages.
Got it. Can you please share the full prompt?
Thanks!
- I'm not familiar with ALMA, but it seems to be similar to MADLAD-400. Both are smaller than NLLB-54B but competitive with it. Because ALMA is an LLM and not a seq2seq model with cross-attention, I'd guess it's faster.
- You can translate up to 128 tokens at a time.
- You can only specify the target language, not the source language.
How are you running it? Did you prepend a "<2xx>" token for the target language? For example, "<2fr> hello" will translate "hello" to French. If you are using this space, you can select the target language in the dropdown.
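If you're running it locally with transformers, here's a minimal sketch (the `jbochi/madlad400-3b-mt` checkpoint name and the generation settings are just assumptions; adapt them to whichever checkpoint you actually use):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Assumed checkpoint name; swap in the model size you actually use.
checkpoint = "jbochi/madlad400-3b-mt"
tokenizer = T5Tokenizer.from_pretrained(checkpoint)
model = T5ForConditionalGeneration.from_pretrained(checkpoint)

# Prepend the <2xx> token for the target language (here: French).
text = "<2fr> hello"
input_ids = tokenizer(text, return_tensors="pt").input_ids
outputs = model.generate(input_ids=input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```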
Sorry, but what is not working?
It should. Support for T5-based models was added in https://github.com/oobabooga/text-generation-webui/pull/1535
Are there models for specific language pairs, such as en, cn, jp? If there are versions for multiple pair combinations, I will use them to develop my own translation application.
Check the OPUS models by Helsinki-NLP: https://huggingface.co/Helsinki-NLP?sort_models=downloads#models
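As a rough sketch of using one of the pair-specific OPUS-MT models with the transformers pipeline (the en-zh pair here is only an example; the exact pair names vary, so browse the list for the one you need):

```python
from transformers import pipeline

# Example pair-specific OPUS-MT model (English -> Chinese); pick the pair you need.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-zh")
print(translator("hello", max_length=128)[0]["translation_text"])
```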
Sorry to be pedantic, but the translation models they released are not LLMs. They are T5 seq2seq models with cross-attention, as in the original Transformer paper. They did also release an LM that's a decoder-only T5. They tried few-shot learning with it, but it performs much worse than the MT models.
I think that the first multilingual Neural Machine Translation model is from 2016: https://arxiv.org/abs/1611.04558. However, specialized models for pairs of languages are still popular. For example: https://huggingface.co/Helsinki-NLP/opus-mt-de-en
The MADLAD-400 paper has a bunch of comparisons with NLLB. MADLAD beats NLLB on some benchmarks, is quite close on others, and loses on some. But the largest MADLAD is 5x smaller than the original NLLB. It also supports more than 2x as many languages.
thanks