LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.


Hello, I'm a student delving into the study of large language models. I recently acquired a new PC equipped with a Core i7 14th Gen processor, RTX 4070 Ti graphics, and 32GB DDR5 RAM. Could you suggest a language model that would perform well on my machine?

[–] opi098514@alien.top 1 points 11 months ago (2 children)

You're soon going to realize that your PC is unfortunately not as cutting edge as you think. Your main constraint is VRAM: the 4070 Ti only has 12 GB, so you'll be limited to 7B and 13B models if you want them fully on the GPU. You can spill over into system RAM, but your speeds plummet. Mistral 7B is a good option to start with.
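
If it helps, here's a minimal sketch of what that looks like with the llama-cpp-python bindings (the GGUF file name is a placeholder for whatever quantized Mistral 7B you download; a Q4_K_M 7B is roughly 4-5 GB of weights, so it fits entirely in 12 GB of VRAM):

```python
# Minimal sketch: run a quantized Mistral 7B fully on a 12 GB GPU.
# Assumes llama-cpp-python installed with CUDA support and a GGUF
# file downloaded locally (the path below is a placeholder).
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # ~4.4 GB of weights
    n_gpu_layers=-1,  # -1 = offload every layer to the GPU
    n_ctx=4096,       # context window; the KV cache also uses VRAM
)

out = llm("Explain VRAM in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```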

[–] PacmanIncarnate@alien.top 1 points 11 months ago (1 children)

Even a 24 GB GPU is limited to fitting a 13B fully in VRAM. His PC is a great one: not the highest end, but perfectly fine to run anything up to a 70B in llama.cpp by offloading part of the model to system RAM.
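
For illustration, a partial-offload sketch with the same llama-cpp-python bindings (the model file and layer split are assumptions; a 70B quant of this size only barely fits in 32 GB of RAM plus 12 GB of VRAM, and it will be far slower than a fully offloaded 7B):

```python
# Sketch: split a large model between a 12 GB GPU and system RAM.
# Path and layer count are illustrative; lower n_gpu_layers until
# the model loads without exhausting VRAM.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-2-70b-chat.Q3_K_M.gguf",  # ~34 GB; most layers stay in RAM
    n_gpu_layers=16,  # Llama 2 70B has 80 layers; only these 16 go to the GPU
    n_ctx=2048,       # keep the context small to save memory
)

print(llm("Hello!", max_tokens=32)["choices"][0]["text"])
```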

[–] opi098514@alien.top 1 points 11 months ago

I didn’t say it wasn’t. But getting into LLMs really just shows you how much better your PC could be, and you will never be as cutting edge as you think or want.