I'm currently using a GTX 1650 4GB, an AMD Ryzen 5 5600, and 32GB RAM.

I've got some spare cash to spend on learning more about local LLMs.

Should I get:
A. 64GB RAM (2 × 32GB)
B. RTX 3060 12GB
C. Intel Arc A770 16GB

I'm using OpenHermes 2.5 Mistral 7B (Q5_K_M GGUF), and performance is OK-ish in SillyTavern with koboldcpp. But once the context goes above 3k tokens, it crawls.
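For a rough sense of why a 4GB card chokes here, a back-of-the-envelope sketch helps. Assumptions (mine, not from the thread): Q5_K_M averages roughly 5.7 bits per weight, the KV cache is fp16, and Mistral 7B's published shape is 32 layers, 8 KV heads (GQA), and 128-dim heads. The figures are estimates, not exact measurements.

```python
# Back-of-the-envelope memory estimate for a quantized 7B model plus KV cache.
# All figures are approximations; real usage adds runtime/activation overhead.

def model_weights_gb(n_params_b: float, bits_per_weight: float) -> float:
    """Approximate size of quantized weights in GiB."""
    return n_params_b * 1e9 * bits_per_weight / 8 / 1024**3

def kv_cache_gb(n_tokens: int, n_layers: int, n_kv_heads: int,
                head_dim: int, bytes_per_elem: int = 2) -> float:
    """KV cache: two tensors (K and V) per layer per token, fp16 by default."""
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * n_tokens / 1024**3

# Mistral 7B (per its model card): 32 layers, 8 KV heads (GQA), head_dim 128.
weights = model_weights_gb(7.2, 5.7)       # Q5_K_M at ~5.7 bits/weight effective
kv_4k   = kv_cache_gb(4096, 32, 8, 128)    # KV cache at 4k context

print(f"weights  ~{weights:.1f} GB")          # ~4.8 GB -- already over a 4GB card
print(f"KV @ 4k  ~{kv_4k:.1f} GB")            # ~0.5 GB on top of that
print(f"total    ~{weights + kv_4k:.1f} GB")  # fits in 12GB (3060) or 16GB (A770)
```

If these estimates are in the right ballpark, the weights alone don't fit on the current 4GB card, so most layers stay in system RAM and run on the CPU, which would explain the slowdown at longer contexts. Options B or C could hold the whole model and cache, while option A would only speed up the CPU-bound path.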

Please advise which option you think I should take first. Thanks a bunch.

twisted7ogic@alien.top 1 point 1 year ago

I'm of the same mind. Nvidia is the best choice right now, but it's also a hugely overpriced brand that really skimps on VRAM and is acting pretty toxic toward its end users.

Long term, Intel (and AMD) might be the better options: possibly, probably, eventually.