this post was submitted on 09 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


Finally I finished my thesis defense and got the chance to upgrade my laptop RAM to 20 GB, which is the best I can do for now. I'm currently running Mistral 7B with koboldcpp, but the speed is... kinda slow: 0.3 tokens per second, sometimes peaking at 0.8. What's wrong here? Or should I try oobabooga instead, or gpt4free?
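For reference, here is a minimal sketch of the usual CPU-only setup for this kind of hardware, using llama-cpp-python rather than koboldcpp's launcher (the model filename and thread count are assumptions; use whatever GGUF quant you actually downloaded). With ~20 GB of RAM, the quantization level and the number of threads are typically what decide tokens/sec:

```python
# Minimal sketch: 4-bit quantized Mistral 7B on CPU via llama-cpp-python.
# The model path below is hypothetical; point it at your own GGUF file.
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-7b-instruct-v0.1.Q4_K_M.gguf",  # 4-bit quant, roughly 4 GB on disk
    n_ctx=2048,     # smaller context window keeps RAM use down
    n_threads=4,    # match physical cores, not logical threads
)

out = llm("Q: What is the capital of France? A:", max_tokens=32)
print(out["choices"][0]["text"])
```

If you're seeing well under 1 token/s with a 4-bit 7B quant, the bottleneck is usually swapping (model not fully in RAM) or an oversubscribed thread count rather than the frontend, so switching from koboldcpp to oobabooga alone is unlikely to change much.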

1 comment
MerryAInfluence@alien.top · 1 point · 10 months ago

The real solution here is a new laptop, buddy.

Or using the cloud.