
A few people here tried the Goliath-120B model I released a while back, and it looks like TheBloke has now released the quantized versions. So far, the reception has been largely positive.

https://huggingface.co/TheBloke/goliath-120b-GPTQ

https://huggingface.co/TheBloke/goliath-120b-GGUF

https://huggingface.co/TheBloke/goliath-120b-AWQ

The fact that the model turned out well was completely unexpected. Every LM researcher I've spoken to about this in the past few days has been completely baffled. The plan moving forward, in my opinion, is to finetune this model (preferably a full finetune) so that the stitched layers get to know each other better. Hopefully I can find the compute to do that soon :D
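
For anyone curious what "stitching" means mechanically, here's a minimal sketch of the frankenmerge idea in plain transformers: build a deeper model by concatenating decoder-layer ranges from two parent 70B models. The repo IDs and layer ranges below are illustrative placeholders, not the actual goliath-120b recipe.

```python
import torch
from transformers import AutoModelForCausalLM, LlamaConfig, LlamaForCausalLM

# Load the two parent models (IDs are placeholders for two Llama-2-70B
# finetunes; this needs a lot of CPU RAM, so treat it as a sketch).
model_a = AutoModelForCausalLM.from_pretrained(
    "Xwin-LM/Xwin-LM-70B-V0.1", torch_dtype=torch.float16)
model_b = AutoModelForCausalLM.from_pretrained(
    "Sao10K/Euryale-1.3-L2-70B", torch_dtype=torch.float16)

# Alternating slices of decoder layers: (source, start, end). These
# ranges are made up purely to illustrate the interleaving pattern.
slices = [(model_a, 0, 40), (model_b, 20, 60), (model_a, 40, 80)]

layers = []
for src, start, end in slices:
    layers.extend(src.model.layers[start:end])

# New config with the stitched depth; everything else stays Llama-70B.
config = LlamaConfig.from_pretrained("Xwin-LM/Xwin-LM-70B-V0.1")
config.num_hidden_layers = len(layers)

merged = LlamaForCausalLM(config)
merged.model.layers = torch.nn.ModuleList(layers)
# Embeddings, final norm, and lm_head are taken from one parent.
merged.model.embed_tokens = model_a.model.embed_tokens
merged.model.norm = model_a.model.norm
merged.lm_head = model_a.lm_head
merged.save_pretrained("goliath-style-merge")
```

In practice this is done shard-by-shard on disk (e.g. with mergekit's passthrough merge) rather than holding everything in memory at once, but the layer interleaving is the whole trick.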

On a related note, I've been working on LLM-Shearing lately, which would essentially let us shear a transformer down to much smaller sizes while preserving accuracy. Goliath-120B came to be as an experiment in moving in the opposite direction from shearing. I'm now wondering if we can shear a finetuned Goliath-120B back down to ~70B and end up with a much better 70B model than the existing ones. This would of course be prohibitively expensive, as we'd need to do continued pretraining after the shearing/pruning process. A more likely approach, I believe, is shearing Mistral-7B down to ~1.3B and performing continued pretraining on about 100B tokens.
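
To make the shearing direction concrete, here's a toy sketch that only drops whole decoder layers. The actual LLM-Shearing method (Xia et al.) learns structured pruning masks over layers, heads, and hidden dimensions; the kept-layer indices below are arbitrary placeholders.

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16)

# Keep every other decoder layer: 32 -> 16. A real method would pick
# layers (and heads/dims) by learned importance, not a fixed stride.
keep = list(range(0, 32, 2))
model.model.layers = torch.nn.ModuleList(
    [model.model.layers[i] for i in keep])
model.config.num_hidden_layers = len(keep)

model.save_pretrained("mistral-sheared-toy")
# The pruned model is badly degraded on its own; the continued
# pretraining on ~100B tokens is what's meant to recover the quality.
```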

If anyone has suggestions, please let me know. Cheers!

[–] Available-Appeal6460@alien.top 1 points 1 year ago (2 children)

Who are you talking about? The main repo has like 15 downloads and TheBloke's quants have 0. We are talking about maybe 2-5 people who downloaded it.

I've seen this model talked about a few times here, and I feel like the person who created it uses smurf accounts to promote it for some reason.

It doesn't have any officially done benchmarks either, so it's not like people can even say it's better than anything.

I've seen a few franken-models, and every one of them is a marginal upgrade due to just the sheer amount of parameters. A proper finetune should beat it easily.

[–] FullOf_Bad_Ideas@alien.top 1 points 1 year ago (1 children)

Don't trust Hugging Face download stats at all; they're garbage.

[–] Aaaaaaaaaeeeee@alien.top 1 points 1 year ago

Agreed, it depends on how you download: whether you're logged in, whether you use the official download tool, etc.
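
For what it's worth, you can pull the numbers the Hub reports via huggingface_hub; a minimal sketch (the point above being that these undercount some download paths, so treat them as a rough signal at best):

```python
from huggingface_hub import HfApi

api = HfApi()
for repo in ("TheBloke/goliath-120b-GPTQ",
             "TheBloke/goliath-120b-GGUF",
             "TheBloke/goliath-120b-AWQ"):
    # .downloads is the Hub's own (lossy) rolling download counter.
    print(repo, api.model_info(repo).downloads)
```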

[–] AlpinDale@alien.top 1 points 1 year ago (1 children)

It's up on Kobold Horde, so you can give it a try yourself. Select the model from the AI menu. I think it's gonna be up for the weekend.

[–] P00PY-PANTS@alien.top 1 points 1 year ago

I've been using it all morning on the Horde since there are a ton of slots open. So far it's been giving me awesome results across a dozen or so different character templates I've tried.

The only exception is that it sometimes gets stuck repeating the same response even if I refresh a dozen times, or keeps tacking the same description of a person/scene onto the end of its replies.

For reference though, both the Xwin and Euryale 70B models do that to me sometimes too, so it might be my settings or something.
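
If it is sampler-side, the usual first knobs look something like this in transformers terms (values are illustrative; Horde frontends expose equivalents under similar names):

```python
from transformers import GenerationConfig

gen_config = GenerationConfig(
    do_sample=True,
    temperature=0.9,
    top_p=0.9,
    repetition_penalty=1.1,    # penalize recently generated tokens
    no_repeat_ngram_size=3,    # hard-block exact 3-gram repeats
)
```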