this post was submitted on 13 Nov 2023

LocalLLaMA

Community to discuss about Llama, the family of large language models created by Meta AI.

Obviously, building a big, high-dimensional language model is hard, yes, okay.

But once we have one, can't we just jiggle the weights and run tests? Why can't I just download a program to "evolve" my language model?
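The "jiggle weights and run tests" idea is essentially random-search hill climbing. A toy sketch of it on a tiny two-parameter model (not an LLM — the task, seed, and step size here are purely illustrative) shows both why it works in low dimensions and why it's hopeless at billions of parameters, where every "test" is a full, expensive evaluation:

```python
import numpy as np

# A minimal sketch of "jiggle weights and run tests": hill-climbing
# random search on a toy 2-parameter linear model. Real LLMs have
# billions of weights, so a random jiggle almost never improves the
# model, and each "test" is a full (expensive) evaluation.

rng = np.random.default_rng(0)

# Toy task: recover y = 3x + 1 from noisy samples.
x = rng.uniform(-1, 1, size=64)
y = 3 * x + 1 + rng.normal(0, 0.05, size=64)

def loss(w, b):
    return float(np.mean((w * x + b - y) ** 2))

w, b = 0.0, 0.0
best = loss(w, b)
for _ in range(2000):
    dw, db = rng.normal(0, 0.1, size=2)  # "jiggle" the weights
    trial = loss(w + dw, b + db)         # "run a test"
    if trial < best:                     # keep only improvements
        w, b, best = w + dw, b + db, trial

print(f"w ≈ {w:.2f}, b ≈ {b:.2f}")  # close to the true 3 and 1
```

In two dimensions, a random step improves the loss often enough to converge. As the dimension grows, the fraction of random directions that go downhill collapses toward zero, which is why gradient-based training (which computes the downhill direction directly) won out.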

Am I just stupid, or is this too trivially easy to not already be a program?

peace

[–] LuluViBritannia@alien.top 1 points 1 year ago

Well, first of all, this is something you'd do while running the model. Sure, it's the same model, but they're still two different processes to run in parallel.

Then, from what I gather, it's closer to finetuning than it is to inference. And if you look up the figures, finetuning requires a lot more power and VRAM. As I said, it's rewriting the neural network's weights, which is the definition of finetuning.

So, to get a more specific answer, we should look up why finetuning requires so much more than inference.
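A rough back-of-the-envelope comparison shows where the gap comes from: inference only needs the weights, while full finetuning with a mixed-precision Adam setup also keeps gradients, fp32 master weights, and two optimizer moments per parameter. (The 7B size and per-weight byte counts below are illustrative assumptions, and activation memory, batch size, and sequence length are ignored entirely.)

```python
# Back-of-the-envelope VRAM for a 7B-parameter model.
params = 7e9

# Inference: just the weights, in 16-bit precision (2 bytes each).
inference_gb = params * 2 / 1e9

# Full finetuning, mixed-precision Adam: fp16 weights (2) + fp16
# gradients (2) + fp32 master weights (4) + two fp32 Adam moments
# (4 + 4) = 16 bytes per parameter.
finetune_gb = params * (2 + 2 + 4 + 4 + 4) / 1e9

print(f"inference: ~{inference_gb:.0f} GB, finetuning: ~{finetune_gb:.0f} GB")
# inference: ~14 GB, finetuning: ~112 GB
```

That ~8x multiplier (before activations) is why techniques like LoRA, which update only a small set of extra weights, make finetuning feasible on consumer GPUs while full finetuning is not.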