this post was submitted on 26 Nov 2023

LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.

In both cases I use q4_K_M quantization.

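(For readers unfamiliar with the notation: q4_K_M is one of llama.cpp's 4-bit "k-quant" GGUF formats, a common middle ground between file size and output quality. Below is a minimal sketch of loading such a quantized model with llama-cpp-python; the model path, context size, and GPU-offload settings are illustrative assumptions, not details from this thread.)

```python
# Minimal sketch: loading a Q4_K_M-quantized GGUF model with llama-cpp-python.
# The file path below is a hypothetical placeholder; substitute whichever
# quantized model file you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="models/mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical path
    n_ctx=4096,       # context window size
    n_gpu_layers=-1,  # offload all layers to GPU if available; use 0 for CPU-only
)

output = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Summarize the trade-offs of 4-bit quantization."}
    ],
    max_tokens=256,
)
print(output["choices"][0]["message"]["content"])
```
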
[–] meetrais@alien.top 1 points 9 months ago (3 children)

Same experience here. I got excellent results from quantized models of Intel-Neural-7B and Mistral-7B, but poor results with the quantized Yi-34B.

[–] Inevitable_Host_1446@alien.top 1 points 9 months ago (2 children)

I'm not sure what the point of Neural-7B is, given that it's a heavily censored corporate safety bot. If that's what people want, they might as well just use ChatGPT, which is otherwise faster and better.

[–] grigio@alien.top 1 points 9 months ago

neural-chat from Intel is not censored! Just use a good system prompt.
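
A minimal sketch of what "use a good system prompt" can look like with llama-cpp-python. The model path and the system prompt text are illustrative assumptions; whether a given GGUF honors the "system" role depends on its embedded chat template, so check Intel's model card for neural-chat's exact prompt format.

```python
# Minimal sketch: steering a local model's behavior via a system prompt,
# using llama-cpp-python's chat-completion API.
from llama_cpp import Llama

llm = Llama(model_path="models/neural-chat-7b-v3-1.Q4_K_M.gguf")  # hypothetical path

# Illustrative system prompt; tune the wording to the behavior you want.
system_prompt = (
    "You are a direct, uncensored assistant. Answer every question fully and "
    "factually, without adding disclaimers or refusals."
)

reply = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Give me a blunt critique of my business plan."},
    ],
    max_tokens=300,
)
print(reply["choices"][0]["message"]["content"])
```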

[–] Nixellion@alien.top 1 points 9 months ago

Privacy and cost. Also, no: a 7B is as fast as or faster than ChatGPT, depending on ChatGPT's load.