this post was submitted on 13 Nov 2023

LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.

I'm currently running a GTX 1650 4GB, an AMD Ryzen 5 5600, and 32GB RAM.

I've got some spare cash to throw at learning more about local LLMs.

Should I get:

A. 64GB RAM (2 × 32GB)
B. RTX 3060 12GB
C. Intel Arc A770 16GB

I'm using OpenHermes 2.5 Mistral 7B (Q5_K_M GGUF), which gives OK-ish performance for SillyTavern with koboldcpp. But once context goes above 3k, it crawls.
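For reference, before buying anything I've been experimenting with partial GPU offload and an explicit context cap, since the 4GB card can only hold some of the layers. Here's a minimal sketch using llama-cpp-python (the same llama.cpp engine koboldcpp wraps); the model path, layer count, and prompt are just illustrative values you'd tune to your own setup:

```python
# Minimal sketch, assuming llama-cpp-python installed with CUDA support.
# The model path and n_gpu_layers value below are illustrative, not exact.
from llama_cpp import Llama

llm = Llama(
    model_path="openhermes-2.5-mistral-7b.Q5_K_M.gguf",  # hypothetical local file
    n_gpu_layers=10,  # offload only as many layers as fit in 4GB VRAM
    n_ctx=4096,       # cap the context so the KV cache stays within budget
    n_threads=6,      # Ryzen 5 5600 has 6 physical cores
)

out = llm("Write a one-line greeting.", max_tokens=32)
print(out["choices"][0]["text"])
```

koboldcpp exposes the same knobs directly (GPU layers and context size in its launcher/flags), so the trade-off is identical there: every layer kept on the CPU slows things down, which is why more VRAM is tempting.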

Please advise which option you think I should take first. Thanks a bunch.

[–] iamkucuk@alien.top 1 points 2 years ago

The 3060 would cause fewer headaches; the A770 would be better once you get things working. Btw, the A770 has somewhat mature support for AI workloads too.