this post was submitted on 25 Nov 2023

LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.

Title sums it up.

__SlimeQ__@alien.top 1 point 10 months ago

I can't speak for the desktop 3080 Ti, but I have the laptop card, and it's roughly equivalent in performance to my 4060 Ti desktop card.

No_Afternoon_4260@alien.top 1 point 10 months ago

Mind running a few tests to get real-world numbers for the laptop version? Like, what kind of speeds are you getting for a 7B Q6 and a 13B Q6? They should fully fit in VRAM.
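
A minimal sketch of one way to run that kind of test, assuming the model is a GGUF quant loaded through the llama-cpp-python bindings; the model path and prompt here are placeholders:

```python
import time

from llama_cpp import Llama  # pip install llama-cpp-python

# Hypothetical path to a 7B Q6_K GGUF quant; swap in your own model file.
MODEL_PATH = "models/llama-2-7b.Q6_K.gguf"

# n_gpu_layers=-1 offloads every layer to the GPU, so the model sits fully in VRAM.
llm = Llama(model_path=MODEL_PATH, n_gpu_layers=-1, n_ctx=2048, verbose=False)

prompt = "Explain the difference between a laptop GPU and a desktop GPU in one paragraph."

start = time.perf_counter()
out = llm(prompt, max_tokens=256)
elapsed = time.perf_counter() - start

generated = out["usage"]["completion_tokens"]
print(f"{generated} tokens in {elapsed:.2f}s -> {generated / elapsed:.1f} tok/s")
```

llama.cpp's bundled `llama-bench` tool reports prompt-processing and generation speeds directly if you'd rather not script it.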

hysterian@alien.top 1 point 10 months ago

That’s odd, considering the desktop 4060 Ti has 8 GB of VRAM. But are you talking just speed, or can you run larger-parameter LLMs on the laptop than your desktop could handle?
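
For rough context, Q6_K works out to about 6.56 bits per weight, so a quick estimate (weights only, ignoring the KV cache and compute buffers) shows why 13B Q6 would be tight on an 8 GB card:

```python
# Back-of-envelope VRAM estimate for Q6_K GGUF quants (~6.56 bits per weight).
# Real usage is higher: the KV cache and compute buffers add memory on top.
BITS_PER_WEIGHT_Q6K = 6.56

def weights_gib(params_billions: float) -> float:
    bytes_total = params_billions * 1e9 * BITS_PER_WEIGHT_Q6K / 8
    return bytes_total / 1024**3

for size in (7, 13):
    print(f"{size}B Q6_K ~ {weights_gib(size):.1f} GiB of weights")

# 7B  Q6_K ~ 5.3 GiB -> fits on an 8 GB card, tightly
# 13B Q6_K ~ 9.9 GiB -> needs a 12-16 GB card once the KV cache is added
```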

__SlimeQ__@alien.top 1 point 10 months ago

I have the 16 GB version of the 4060 Ti, so the two cards have nearly identical capabilities.
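
If you're unsure which variant a card is, one quick check, assuming PyTorch with CUDA support is installed:

```python
import torch

# Prints total VRAM for each visible CUDA device; the 16 GB 4060 Ti
# reports roughly 16 GiB here, the 8 GB variant roughly 8 GiB.
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"{props.name}: {props.total_memory / 1024**3:.1f} GiB")
```

`nvidia-smi` reports the same total without any Python.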