Exciting News!
Thrilled to announce the release of LLM.js v1.0.2!
LLM.js lets you play around with language models right in your browser, thanks to WebAssembly.
In this latest release, here's what's in store:
1️⃣ Expanded Format Support: GGUF/GGML formats are now fully supported, thanks to the latest llama.cpp patch! This opens the door to models like Mistral, Llama2, Bloom, and more!
2️⃣ Playground Fun: Explore and test different models seamlessly in the playground demo, even models loaded straight from Hugging Face (HF)!
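As a side note on the format support above: GGUF and legacy GGML files can be told apart by their leading magic bytes, which is handy when you're not sure which format a downloaded model uses. Here's a minimal sketch; `detectModelFormat` is a hypothetical helper for illustration, not part of the LLM.js API.

```javascript
// Sketch: distinguish GGUF from legacy GGML model files by magic bytes.
// GGUF files start with the ASCII bytes "GGUF". Older llama.cpp magics
// (e.g. "ggml", "ggjt") were written as little-endian uint32s, so they
// appear byte-reversed on disk ("ggjt" -> bytes "tjgg").
// detectModelFormat is a hypothetical helper, not an LLM.js function.
function detectModelFormat(bytes) {
  const fwd = String.fromCharCode(...bytes.slice(0, 4));
  if (fwd === "GGUF") return "gguf";
  const rev = fwd.split("").reverse().join("");
  if (["ggml", "ggjt", "ggla", "ggmf", "ggsn"].includes(rev)) return "ggml";
  return "unknown";
}

// Example: first bytes of a GGUF header.
const header = new Uint8Array([0x47, 0x47, 0x55, 0x46, 0x03, 0x00]);
console.log(detectModelFormat(header)); // "gguf"
```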
Feel free to check it out and share your thoughts!
LLM.js Playground: https://rahuldshetty.github.io/ggml.js-examples/playground.html
LLM.js: https://rahuldshetty.github.io/llm.js/
https://i.redd.it/ge16xwia512c1.gif
I've had success with 7B Llama2 across multiple prompt scenarios. Make sure you define the objective clearly.
At first, after reading your post, I thought you were talking about something even smaller (phi-1/TinyLlama).