AnonymousD3vil

joined 10 months ago
[–] AnonymousD3vil@alien.top 1 points 10 months ago

I've had success with 7B Llama 2 across multiple prompt scenarios. Make sure you define the objective clearly in the prompt.
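
For instance, here's a minimal sketch of what a clearly defined objective looks like, assuming llama-cpp-python and a local GGUF file (the model path, quantization, and task are placeholders, not the commenter's actual setup):

```python
# Sketch only: assumes llama-cpp-python and a local 7B Llama 2 GGUF file (placeholder path).
from llama_cpp import Llama

llm = Llama(model_path="./llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)

# Vague objective (small models tend to ramble): "Tell me about this review."
# Clearly defined objective: state the task, the input, and the expected output format.
prompt = (
    "Classify the sentiment of the following product review as exactly one of "
    "'positive', 'negative', or 'neutral'. Reply with the label only.\n\n"
    "Review: The battery died after two days and support never answered.\n"
    "Sentiment:"
)

out = llm(prompt, max_tokens=8, temperature=0.0, stop=["\n"])
print(out["choices"][0]["text"].strip())  # expected: negative
```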

At first, after reading your post, I thought you were talking about something even smaller (phi-1/TinyLlama).

 

🚀 Exciting News! 🚀

Thrilled to announce the release of LLM.js v1.0.2! 🎉✨

๐ŸŒ LLM.js lets you play around with language models right in your browser, thanks to WebAssembly.

In this latest release, here's what's in store:

1๏ธโƒฃ Expanded Format Support: Now GGUF/GGML formats are fully supported, thanks to the latest llama.cpp patch! ๐Ÿฆ™ This opens up doors for various models like Mistral, Llama2, Bloom, and more!

2๏ธโƒฃ Playground Fun: Explore and test different models seamlessly in playground demo! ๐ŸŽฎ๐Ÿ’ฌ Even from HF.

Feel free to check it out and share your thoughts! 🚀

LLM.js Playground: https://rahuldshetty.github.io/ggml.js-examples/playground.html

LLM.js: https://rahuldshetty.github.io/llm.js/

https://i.redd.it/ge16xwia512c1.gif

[–] AnonymousD3vil@alien.top 1 points 10 months ago

You could do audio transcription followed by TTS to achieve similar results with the Whisper and Coqui TTS models.

https://github.com/coqui-ai/TTS
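
A rough sketch of that pipeline, assuming the openai-whisper and Coqui TTS Python packages (the file names and the TTS model choice are placeholders, not anything the commenter specified):

```python
# Sketch only: speech-to-text with Whisper, then text-to-speech with Coqui TTS.
import whisper
from TTS.api import TTS

# Transcribe the input audio (model size is a placeholder; "base" is a small default).
stt = whisper.load_model("base")
text = stt.transcribe("input.wav")["text"]

# Synthesize the transcribed text back to speech with a Coqui TTS model.
tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")
tts.tts_to_file(text=text, file_path="output.wav")
```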