this post was submitted on 24 Nov 2023

LocalLLaMA


Community for discussing Llama, the family of large language models created by Meta AI.


I'm using ollama and I have an RTX 3060 Ti, so I'm only running 7B models.

I tested Mistral 7B, Mistral-OpenOrca, and Zephyr, and they all had the same problem: after some amount of chatting, they start repeating themselves or rambling randomly.

What could it be? Temperature? VRAM? ollama?
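If it turns out to be a sampling issue rather than VRAM, ollama exposes the relevant knobs through a Modelfile. A minimal sketch (the parameter names are ollama's; the specific values here are illustrative guesses, not known fixes for this problem):

```
# Modelfile — hypothetical tuning to reduce repetition
FROM mistral

# Penalize tokens that appeared recently; >1.0 discourages repetition
PARAMETER repeat_penalty 1.15

# How far back the repeat penalty looks (in tokens)
PARAMETER repeat_last_n 128

# A moderate temperature keeps output varied without going random
PARAMETER temperature 0.7

# Larger context window, so long chats don't fall off the edge
PARAMETER num_ctx 4096
```

You'd then build and run it with `ollama create mistral-tuned -f Modelfile` followed by `ollama run mistral-tuned`, and see whether the repetition persists.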

[–] LienniTa@alien.top 1 points 10 months ago

Goliath 120B would fit in 64 GB of RAM, though. It doesn't have the repeating problem...