this post was submitted on 18 Nov 2023

LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.

If you don't know what those are, refer to the two Reddit posts about Marx 3B V3 and Akins 3B; the unquantized model weights are available on Hugging Face. Link to Marx 3B V3 and Akins 3B.

As StableLM support for llama.cpp has just recently been added, u/The-Bloke (thank you so much!) quantized my StableLM models to GGUF, since a lot of people want to try the models in that format. You can find the GGUF conversions for Marx 3B V3 and Akins 3B. Again, credit to u/The-Bloke for quantizing the models, thank you!

By the way, I don't know what dataset to finetune on next. If you know a good one, let me know and I will look into it. That said, I can probably only finetune on datasets of fewer than 5k conversations, maybe 10k at most.
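For a dataset that is larger than that budget, one option is to randomly subsample it down to the cap before finetuning. A minimal sketch using only the standard library (the function name and conversation structure here are my own, not from any particular finetuning stack):

```python
import random

def subsample_conversations(conversations, max_size=5000, seed=42):
    """Randomly subsample a conversation list to fit a small finetuning budget.

    Returns the list unchanged if it is already within the budget; otherwise
    draws a reproducible random sample of max_size conversations.
    """
    if len(conversations) <= max_size:
        return list(conversations)
    rng = random.Random(seed)  # fixed seed keeps the subsample reproducible
    return rng.sample(conversations, max_size)
```

Seeding the sampler means repeated runs train on the same subset, which makes comparisons between finetuning runs fairer.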

bot-333@alien.top 2 years ago

Can you try my new IS-LM? GGUF: https://huggingface.co/UmbrellaCorp/IS-LM-3B_GGUF. I found it really good. Thanks.