this post was submitted on 28 Nov 2023

LocalLLaMA


Community to discuss about Llama, the family of large language models created by Meta AI.

https://huggingface.co/deepnight-research

I'm not affiliated with this group at all, I was just randomly looking for any new big merges and found these.

100B model: https://huggingface.co/deepnight-research/saily_100B

220B model: https://huggingface.co/deepnight-research/Saily_220B

600B model: https://huggingface.co/deepnight-research/ai1

They make some big claims about their models' capabilities, but the two best ones are unavailable for download. Maybe we can help convince them to release them publicly?

[–] You_Wen_AzzHu@alien.top 1 points 9 months ago (3 children)

We need some 4090s modded in China with 500GB of VRAM, if possible.

[–] mpasila@alien.top 1 points 9 months ago (2 children)

The devs mentioned that the 600B model alone takes about 1.3TB of space.
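
For context, that's roughly what the unquantized weights would come to. A minimal back-of-the-envelope sketch, assuming fp16 storage (the precision is my assumption, not something the devs stated):

```python
# Rough check: 600B parameters stored in fp16 (2 bytes per parameter).
params = 600e9
size_tb = params * 2 / 1e12   # bytes -> terabytes
print(f"~{size_tb:.1f} TB")   # -> ~1.2 TB, in the ballpark of the quoted 1.3TB
```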

[–] 9wR8xO@alien.top 1 points 9 months ago

Quantize it to 0.01 bpw and it will fit in a good ol' 3090.

[–] MannowLawn@alien.top 1 points 9 months ago (1 children)

Give the Mac Studio five years. Next year it'll be 256GB, and it will go up real quick.

[–] BangkokPadang@alien.top 1 points 9 months ago

Honestly, a 4-bit quantized version of the 220B model should run on a 192GB M2 Studio, assuming these models even work with a current transformers/loader stack.
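
As a rough sanity check (a minimal sketch; the effective bits-per-weight and KV-cache allowance below are my assumptions, not published figures):

```python
# Back-of-the-envelope memory estimate for a 4-bit quantized 220B model.
params = 220e9                 # 220B parameters
bits_per_weight = 4.5          # 4-bit quant plus scales/zero-points (assumed)
weights_gb = params * bits_per_weight / 8 / 1e9
kv_cache_gb = 10               # rough allowance for KV cache at modest context (assumed)
total_gb = weights_gb + kv_cache_gb
print(f"weights ~{weights_gb:.0f} GB, total ~{total_gb:.0f} GB")
# -> weights ~124 GB, total ~134 GB: fits in a 192GB M2 Studio's unified memory
```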

[–] LocoMod@alien.top 1 points 9 months ago

We need some hero to develop an app that downloads more GPU memory, like those apps back in the '90s. /s

[–] iCantHack@alien.top 1 points 9 months ago (1 children)

I wonder if there's enough real demand for even 48GB 4090s to incentivize somebody to do it. I bet the hardware/electronics part of it is trivial, though.

[–] BangkokPadang@alien.top 1 points 9 months ago

If people started doing this with any regularity, Nvidia would intentionally bork the drivers.