this post was submitted on 28 Nov 2023

LocalLLaMA


A community for discussing Llama, the family of large language models created by Meta AI.


https://huggingface.co/NurtureAI/Starling-LM-11B-alpha-v1

This is Berkeley's Starling-LM-7B-alpha, scaled up from 7B to 11B parameters.
Special thanks to user Undi95 for their Mistral passthrough-merge explanation using cg123's mergekit, to Berkeley of course for Starling-LM-7B-alpha, and to everyone contributing to open-source AI development.
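For reference, a Mistral passthrough self-merge with mergekit is typically described by a YAML config along these lines. The layer ranges below are an assumption based on common 7B-to-11B recipes (two overlapping copies of the base model's 32 layers), not the exact config used for this model:

```yaml
# Hypothetical mergekit passthrough config for a 7B -> ~11B self-merge.
# Layer ranges are illustrative; the actual recipe may differ.
slices:
  - sources:
      - model: berkeley-nest/Starling-LM-7B-alpha
        layer_range: [0, 24]
  - sources:
      - model: berkeley-nest/Starling-LM-7B-alpha
        layer_range: [8, 32]   # overlapping copy of the upper layers
merge_method: passthrough
dtype: float16
```

A config like this is run with mergekit's `mergekit-yaml` CLI, producing a deeper model whose duplicated layers start out redundant until further fine-tuning.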

Together we are strong!

The performance of this model should improve substantially as the newly added layers are further fine-tuned.

AWQ and GGUF versions coming soon!

[–] perlthoughts@alien.top 1 points 11 months ago (1 children)

I updated the LM Studio configuration again in the GGUF repo on Hugging Face.

[–] AdTotal4035@alien.top 1 points 11 months ago

This is my latest output after reinstalling the entire program. I re-downloaded your model and used the q4_k version this time rather than q4_k_m. I'm still running into this weird issue where, after it's done, I get another answer, but this time it's from GPT4. I have no idea what that even means. I tried to highlight my settings in red.

https://preview.redd.it/4lup96l36f3c1.png?width=1940&format=png&auto=webp&s=74a91c79a6c4935711de0822aa232a46f3db3ad2
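One possible explanation for the stray "GPT4" reply (an assumption, not confirmed by the thread): Starling-LM-7B-alpha inherits OpenChat 3.5's prompt template, which literally labels turns "GPT4 Correct User" / "GPT4 Correct Assistant". If the client is not configured to stop on the `<|end_of_turn|>` token, the model keeps sampling past its answer and emits a new "GPT4 Correct ..." turn, which looks like a second answer "from GPT4". A minimal sketch of that template (check the model card for the authoritative version):

```python
# Sketch of the OpenChat 3.5-style prompt format that Starling-LM-7B-alpha
# is reported to use. Template details are an assumption; verify against
# the model card before relying on them.
END_OF_TURN = "<|end_of_turn|>"

def build_prompt(turns):
    """Format alternating (role, text) turns. Roles are rendered as
    'GPT4 Correct User' / 'GPT4 Correct Assistant'."""
    parts = []
    for role, text in turns:
        label = "GPT4 Correct User" if role == "user" else "GPT4 Correct Assistant"
        parts.append(f"{label}: {text}{END_OF_TURN}")
    # Trailing assistant label cues the model to respond.
    parts.append("GPT4 Correct Assistant:")
    return "".join(parts)

prompt = build_prompt([("user", "Hello!")])
# If the client does not treat END_OF_TURN as a stop string, generation
# continues past the answer and can produce extra "GPT4 Correct ..." turns.
```

Setting `<|end_of_turn|>` as a stop string in the client (e.g. in LM Studio's preset) should prevent the extra turn from being generated.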