
LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


If I have multiple 7B models, where each model is trained on one specific topic (e.g. roleplay, math, coding, history, politics...), and an interface that decides, depending on the context, which model to use, could this outperform bigger models while being faster?
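A minimal sketch of what such a router could look like (the model names and keyword heuristic below are hypothetical; a real router might use an embedding classifier or a small LLM instead):

```python
# Sketch of a context-based router over specialized local models.
# Model names and the keyword heuristic are made up for illustration.
import re

# Hypothetical mapping from topic to a fine-tuned 7B checkpoint.
SPECIALISTS = {
    "math": "my-7b-math",
    "coding": "my-7b-code",
    "roleplay": "my-7b-roleplay",
    "general": "my-7b-general",  # fallback
}

TOPIC_KEYWORDS = {
    "math": re.compile(r"\b(integral|equation|proof|solve|derivative)\b", re.I),
    "coding": re.compile(r"\b(python|bug|function|compile|stack trace)\b", re.I),
    "roleplay": re.compile(r"\b(character|roleplay|in-character|scene)\b", re.I),
}

def route(prompt: str) -> str:
    """Pick the specialist whose keywords match the prompt, else fall back."""
    for topic, pattern in TOPIC_KEYWORDS.items():
        if pattern.search(prompt):
            return SPECIALISTS[topic]
    return SPECIALISTS["general"]

if __name__ == "__main__":
    print(route("Can you solve this integral for me?"))  # -> my-7b-math
    print(route("Why does my Python function crash?"))   # -> my-7b-code
```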

[–] remghoost7@alien.top 1 points 10 months ago (6 children)

I believe this is what GPT-4 actually is.

I remember reading somewhere that it's actually a mixture of 8 different models, and that it routes your question to one of them depending on the context.

Would be neat to implement on a local level though. Haven't seen many people on the local side talk about doing this.

[–] feynmanatom@alien.top 1 points 10 months ago (1 children)

Lots of rumors, but tbh I think it's highly unlikely they're using an MoE. MoEs work well at batch size = 1 (you can take advantage of sparsity), but not at larger batch sizes, where different requests activate different experts. You would need so much RAM to keep every expert loaded and would miss out on the point of using an MoE.
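A rough sketch of that batching argument (the expert counts below are arbitrary, not GPT-4's actual configuration): with top-k routing, each token activates only a few experts, but different tokens in a batch pick different ones, so a large batch ends up touching most of them anyway.

```python
# Illustration of why MoE sparsity savings shrink as batch size grows:
# each request only uses TOP_K experts, but a batch of requests collectively
# activates most of them, so all experts must stay resident in memory.
import random

NUM_EXPERTS = 8
TOP_K = 2

def experts_touched(batch_size: int) -> int:
    """Count how many distinct experts a random batch activates."""
    active = set()
    for _ in range(batch_size):
        active.update(random.sample(range(NUM_EXPERTS), TOP_K))
    return len(active)

for bs in (1, 4, 16, 64):
    print(f"batch size {bs:3d} -> ~{experts_touched(bs)} of {NUM_EXPERTS} experts active")
```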

[–] remghoost7@alien.top 1 points 10 months ago

> Lots of rumors...

Very true.

We honestly have no clue what's going on behind ClosedAI's doors.

I don't know enough about MoEs to say one way or the other, so I'll take your word for it. I'll have to do more research on them.
