this post was submitted on 24 Nov 2023

LocalLLaMA


Community for discussing Llama, the family of large language models created by Meta AI.


I want to begin by saying my specs are an RTX 4080 with 16GB of VRAM plus 32GB of system RAM.
I've managed to run the Chronoboros 33B model pretty smoothly, even if a tad slow.
Yet I've run into what I think are hardware issues trying to run TheBloke/Capybara-Tess-Yi-34B-200K-GPTQ and Panchovix/WizardLM-33B-V1.0-Uncensored-SuperHOT-8k (I tried both AWQ and GPTQ). Is there a reason models with a similar number of parameters won't run?

[–] FlishFlashman@alien.top 1 points 10 months ago (1 children)

What are you using to run them?

In any case, models with larger context windows require *a lot* more RAM/VRAM, because the KV cache grows linearly with the context length.
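
A rough back-of-the-envelope sketch of why, in Python. The config values below (60 layers, 8 KV heads via GQA, head dim 128) are illustrative assumptions for a Yi-34B-style model, not numbers taken from the model card:

```python
# Minimal KV-cache sizing sketch; layer/head counts are assumed, not
# read from the actual Yi-34B config.
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, dtype_bytes=2):
    # Factor of 2 accounts for storing both the K and V tensors per layer.
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * dtype_bytes

full_ctx = kv_cache_bytes(60, 8, 128, 200_000) / 2**30   # ~45.8 GiB
short_ctx = kv_cache_bytes(60, 8, 128, 4_096) / 2**30    # ~0.9 GiB
print(f"200K context: ~{full_ctx:.1f} GiB; 4K context: ~{short_ctx:.1f} GiB")
```

Under those assumptions, a full 200K-token cache at fp16 alone dwarfs 16GB of VRAM, while a 4K-context model of similar parameter count fits comfortably; capping the context length at load time brings the requirement back down.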

[–] Mobile-Bandicoot-553@alien.top 1 points 10 months ago

I'm using ooba; I haven't bothered much with KoboldCPP because I'm not really running GGUF models.