this post was submitted on 22 Nov 2023

LocalLLaMA


Community for discussing Llama, the family of large language models created by Meta AI.


We're proud to introduce Rocket-3B 🦝, a state-of-the-art 3 billion parameter model!


🌌 Size vs. Performance: Rocket-3B may be smaller with its 3 billion parameters, but it punches way above its weight. In head-to-head benchmarks like MT-Bench and AlpacaEval, it consistently outperforms models up to 20 times larger.

https://preview.redd.it/fxmz9sl1ls1c1.png?width=1273&format=png&auto=webp&s=63c3838cf4f01f7efcad9ec92b97c1e493111842

๐Ÿ” Benchmark Breakdown: In MT-Bench, Rocket-3B achieved an average score of 6.56, excelling in various conversation scenarios. In AlpacaEval, it notched a near 80% win rate, showcasing its ability to produce detailed and relevant responses.

https://preview.redd.it/rpgaknn3ls1c1.png?width=1280&format=png&auto=webp&s=6d2d7543f1459ceae7f96ad05ea064e8f8076517

๐Ÿ› ๏ธ Training: The model is fine-tuned from Stability AI's StableLM-3B-4e1t, employing Direct Preference Optimization (DPO) for enhanced performance.

📚 Training Data: We've combined multiple public datasets to ensure a comprehensive and diverse training base. This approach equips Rocket-3B with a wide-ranging understanding and response capability.

๐Ÿ‘ฉโ€๐Ÿ’ป Chat format: Rocket-3B follows the ChatML format.

For an in-depth look at Rocket-3B, visit Rocket-3B's Hugging Face page.

[–] paryska99@alien.top 1 points 11 months ago

Oh wow, this seems almost too good to be true