I had some crazy thoughts today while I was in physical therapy (funny what some electrodes on the base of your skull will make you think of...) -- but I feel like there's a chance I might be on to something and wanted to share it. Let me know what you guys think... or, in the more likely scenario, if I'm just crazy, or this is already known and just impractical in some way, let me know that too! 😅

...

But I was thinking about LoRAs today and about how we use them for finetuning transformer models... -- and I thought:

... why not train a second model* to produce LoRAs as an output that can be consumed by an LLM (or other transformer) as an input? (*: either a regular ML model or a transformer model -- but not an LLM -- whose output is a LoRA.)

I know it could be hard to train without a specialized strategy, but... it seems like LoRAs have the ability to slide the model's latent space around through different windows & lenses; and with a sliding latent space, you get a lot of extra horsepower for "free" (i.e. you only invoke the higher-level network when it needs to nudge the model in a certain direction -- which the model can be trained to do by outputting special token/vector pairs).
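To make that concrete, here's a rough sketch of what I'm imagining, in PyTorch -- all the names here (LoRAHypernet, DynamicLoRALinear, etc.) are made up for illustration, not an existing API. A small hypernetwork takes a conditioning vector and emits the A/B matrices of a LoRA, which get applied on top of a frozen linear layer as W + (alpha/r) * B @ A:

```python
import torch
import torch.nn as nn


class LoRAHypernet(nn.Module):
    """Maps a conditioning vector to the A and B matrices of a LoRA."""

    def __init__(self, cond_dim: int, in_features: int, out_features: int, rank: int = 8):
        super().__init__()
        self.in_features, self.out_features, self.rank = in_features, out_features, rank
        self.to_a = nn.Linear(cond_dim, rank * in_features)
        self.to_b = nn.Linear(cond_dim, out_features * rank)

    def forward(self, cond: torch.Tensor):
        a = self.to_a(cond).view(self.rank, self.in_features)
        b = self.to_b(cond).view(self.out_features, self.rank)
        return a, b


class DynamicLoRALinear(nn.Module):
    """A frozen linear layer whose effective weight is W + (alpha/r) * B @ A."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # the base model stays frozen
        self.scale = alpha / rank
        self.a = None  # filled in by the hypernetwork
        self.b = None

    def set_lora(self, a: torch.Tensor, b: torch.Tensor):
        self.a, self.b = a, b

    def forward(self, x: torch.Tensor):
        y = self.base(x)
        if self.a is not None and self.b is not None:
            y = y + self.scale * (x @ self.a.T @ self.b.T)
        return y


# Usage: the hypernetwork is conditioned on some vector (random here, just to
# show the plumbing) and its output is pushed into the wrapped layer.
base = nn.Linear(512, 512)
layer = DynamicLoRALinear(base, rank=8)
hypernet = LoRAHypernet(cond_dim=64, in_features=512, out_features=512, rank=8)

cond = torch.randn(64)
layer.set_lora(*hypernet(cond))
out = layer(torch.randn(2, 512))
```

The point being: the LoRA for that layer is no longer a fixed checkpoint on disk, it's whatever the hypernetwork spits out for the current conditioning vector.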

...

So, you could do some really interesting things... like instead of using a mixture of experts (MoE) where one expert knows programming and another knows healthcare, you could train the LLM to recognize when it needs to make minor changes to its current LoRA and output a special token & vector, which gets fed into the dynamic LoRA offset model, which then makes minor modifications to the current LoRA that is applied via the PEFT adapter.
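The control loop I have in mind would look something like this (again, a sketch reusing the made-up DynamicLoRALinear from above -- ADJUST_TOKEN_ID, offset_net, and the llm(...) signature are all assumptions, not a real library API):

```python
import torch

ADJUST_TOKEN_ID = 32001  # hypothetical special token meaning "nudge my LoRA"

def generate_with_dynamic_lora(llm, layer, offset_net, input_ids, max_new_tokens=64):
    # Assumes batch size 1 and that llm(...) returns (hidden_states, logits).
    for _ in range(max_new_tokens):
        hidden, logits = llm(input_ids)
        next_id = int(logits[:, -1].argmax(dim=-1))
        if next_id == ADJUST_TOKEN_ID:
            # The hidden state at this position is the "vector" half of the
            # token/vector pair; the offset model turns it into a LoRA tweak.
            delta_a, delta_b = offset_net(hidden[:, -1].squeeze(0))
            a = delta_a if layer.a is None else layer.a + delta_a
            b = delta_b if layer.b is None else layer.b + delta_b
            layer.set_lora(a, b)
            continue  # the control token is consumed, not emitted as text
        next_tok = torch.tensor([[next_id]], dtype=input_ids.dtype)
        input_ids = torch.cat([input_ids, next_tok], dim=-1)
    return input_ids
```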

I feel like if you took a bunch of LoRAs (assuming the tokens were not retrained) and labeled them, you could train the higher-level dynamic LoRA tensor network to blend between the LoRAs it knows about, and you could train the LLM to output the special tokens when it recognizes it needs to shift between latent spaces...
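Something like this, maybe -- a bank of known LoRAs for one target layer, and a small "router" (my stand-in for the higher-level network; all names hypothetical) that maps the LLM's special vector to blend weights over them:

```python
import torch
import torch.nn as nn


class LoRABank(nn.Module):
    """Holds k pre-trained LoRAs (A/B pairs) for one layer and blends them."""

    def __init__(self, loras):  # loras: list of (A, B) tensors with matching shapes
        super().__init__()
        self.a = nn.Parameter(torch.stack([a for a, _ in loras]), requires_grad=False)  # (k, r, in)
        self.b = nn.Parameter(torch.stack([b for _, b in loras]), requires_grad=False)  # (k, out, r)

    def blend(self, weights: torch.Tensor):  # weights: (k,)
        w = torch.softmax(weights, dim=-1)
        a = torch.einsum("k,kri->ri", w, self.a)
        b = torch.einsum("k,kor->or", w, self.b)
        return a, b


# The router is the "higher level dynamic LoRA tensor network": it turns the
# LLM's special vector into blend weights over the k labeled adapters.
k, rank, d = 4, 8, 512
bank = LoRABank([(torch.randn(rank, d) * 0.01, torch.zeros(d, rank)) for _ in range(k)])
router = nn.Linear(64, k)

special_vector = torch.randn(64)           # emitted alongside the special token
a, b = bank.blend(router(special_vector))  # blended LoRA to apply to the layer
```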

...

And for that matter you could take that in a couple of interesting directions... For example:

- This one is kind of boring, but you could try using a model with a smaller context window (even 4k or 8k), and when it got near the end of its context, you could have the pre-trained dynamic LoRA tensor network evaluate the current context/tokens and spit out modifications to the current LoRA that embed some of that context into the latent space, allowing you to heavily compress and summarize the current context window and free up a bunch of token/attention space... (rough sketch after this list)

OR

- You could go in a completely different direction and stack these on top of each other (as a LoRA can be applied to any tensor network, including one which produces downstream LoRAs) and have a dynamically sized network... basically giving you a dynamically sized LLM that only uses as much brain power as it thinks it needs for each problem (i.e. it could use a 4-bit 7b LLM at the base, with a bunch of 3b-parameter dynamic LoRA layers above it, so that for simple problems it just invokes the base network, but when it thinks it needs more "brain power" it could spit out a special token/vector pair that signified that the next layer up needed to be involved -- and that layer would turn on and build a LoRA for the layer below).
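Here's the rough sketch for the first (context-compression) option -- it leans on the hypothetical hypernet/layer objects from earlier, and pooling the old hidden states with a mean is obviously the crudest possible summary, just to show the shape of the idea:

```python
import torch

def compress_context_into_lora(hidden_states, hypernet, layer, keep_last=512):
    """hidden_states: (seq_len, d) activations for the tokens currently in context."""
    old, recent = hidden_states[:-keep_last], hidden_states[-keep_last:]
    if old.numel() == 0:
        return recent                # nothing worth compressing yet
    cond = old.mean(dim=0)           # crude summary of the context being evicted
    a, b = hypernet(cond)            # a LoRA that "remembers" the old context
    layer.set_lora(a, b)             # swap it in on the wrapped layer(s)
    return recent                    # only the recent tokens stay in the window
```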

...

Heck, you could even do both of those scenarios, separately, with a third network that dynamically blended between the two in some way... (much like how LoRAs are statically blended together today -- but this would be dynamic, and the blend could change when special trained tokens are encountered).

...

There are a lot of other weird things I can think of that you could do with dynamic LoRAs (like make any LLM multi-modal...) -- but first I just want to figure out how crazy this is, and what people think about it... -- or maybe it's already been done and I'm just late to the party...

...

So, what do you guys think?

naivesantonym@alien.top · 1 point · 10 months ago

This is basically one of the main reasons to use LoRAs. Someone posted this in the machine learning Reddit about 2 months ago, but the idea is still a solid one. Train an intermediate model to determine which expert or LoRA to use, then use that LoRA for that task. It's better than a mixture of experts because you get much better control over which expert or LoRA will receive the request.

leaderof13@alien.top · 1 point · 10 months ago

Which expert are you talking about? Machine learning noob here.

naivesantonym@alien.top · 1 point · 10 months ago

A mixture of experts is a technique that GPT-4 is rumored to be using.

From Wikipedia:

Mixture of experts (MoE) is a machine learning technique where multiple expert networks (learners) are used to divide a problem space into homogeneous regions. It differs from ensemble techniques in that typically only one or a few expert models will be run, rather than combining results from all models.

In basic terms, there are networks that get really good at one type of thing and compete to provide input when a question comes in during inference. There may be a network really good at science, or math, or literature, which has a better understanding of that field or subject than the other "experts", so it provides the response instead of the others.
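A toy top-1 gated version in PyTorch, just to make that concrete (made-up names, and not how GPT-4 actually does it -- a gate scores the experts and only the best-scoring one runs):

```python
import torch
import torch.nn as nn


class TopOneMoE(nn.Module):
    def __init__(self, d_model: int, num_experts: int = 4):
        super().__init__()
        self.gate = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor):          # x: (batch, d_model)
        scores = self.gate(x)                    # (batch, num_experts)
        best = scores.argmax(dim=-1)             # winning expert per example
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = best == i
            if mask.any():
                out[mask] = expert(x[mask])      # only the chosen expert runs
        return out


moe = TopOneMoE(d_model=256, num_experts=4)
y = moe(torch.randn(8, 256))
```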