this post was submitted on 25 Nov 2023
LocalLLaMA
Community to discuss about Llama, the family of large language models created by Meta AI.
I can't speak for the desktop 3080 Ti, but I have the laptop card, and it's roughly equivalent in performance to my desktop 4060 Ti.
Would you mind running a few tests so we have real-world numbers for the laptop version? What kind of speeds are you getting on a 7B Q6 and a 13B Q6? They should fit fully in VRAM.
That's odd, considering the desktop 4060 Ti has 8 GB of VRAM. Are you talking about speed only, or can you also run larger-parameter LLMs on your laptop that your desktop couldn't handle?
I have the 16 GB version of the 4060 Ti, so the two cards have nearly identical capabilities.
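As a rough sanity check on the "should fit fully in VRAM" claim, here is a minimal back-of-the-envelope sketch. It assumes Q6_K-style quantization at roughly 6.5625 effective bits per weight (an approximation for llama.cpp's Q6_K format) and only counts model weights, not the KV cache or runtime overhead; the helper name is hypothetical, not from any library.

```python
# Rough VRAM estimate for quantized model weights.
# Assumption: ~6.5625 effective bits/weight for Q6_K-style quantization.
# This counts weights only; KV cache and runtime overhead add more on top.
BITS_PER_WEIGHT_Q6 = 6.5625

def weights_vram_gib(params_billions: float,
                     bits_per_weight: float = BITS_PER_WEIGHT_Q6) -> float:
    """Approximate weight memory in GiB for a quantized model."""
    total_bytes = params_billions * 1e9 * bits_per_weight / 8
    return total_bytes / 2**30

for size_b in (7, 13):
    print(f"{size_b}B at Q6 ≈ {weights_vram_gib(size_b):.1f} GiB of weights")
```

Under these assumptions, a 7B Q6 model needs roughly 5–6 GiB and a 13B Q6 model roughly 10 GiB for weights alone, so both leave headroom on a 16 GB card, while 13B Q6 would be tight-to-impossible on the 8 GB 4060 Ti variant.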