this post was submitted on 24 Nov 2023

LocalLLaMA

Community to discuss about Llama, the family of large language models created by Meta AI.

I guess the question is: what order of magnitude are we talking about before you need to step up to more parameters? I understand it's measured in billions of parameters, and that they're basically the weights learned from the data the model was trained on, used to predict words (I think of it as a big weight map), so you'd expect "sharp sword" much more often than "aspirin sword."
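That "big weight map" intuition can be sketched with the simplest possible language model, a bigram table, where each weight is just how often one word follows another in the training text. This is a toy illustration of the idea (the tiny corpus is made up), not how a real LLM stores its billions of parameters:

```python
from collections import Counter, defaultdict

def bigram_probs(corpus):
    """Count word pairs, then turn counts into next-word probabilities."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return {
        prev: {w: c / sum(nxts.values()) for w, c in nxts.items()}
        for prev, nxts in counts.items()
    }

# Tiny made-up corpus: "sword" often follows "sharp", never "aspirin".
corpus = [
    "the sharp sword cut deep",
    "a sharp sword gleamed",
    "a sharp knife",
    "take an aspirin tablet",
]
probs = bigram_probs(corpus)
print(probs["sharp"]["sword"])            # 2/3 -- "sword" followed "sharp" in 2 of 3 pairs
print(probs["aspirin"].get("sword", 0.0)) # 0.0 -- never seen in this corpus
```

A real model replaces this lookup table with billions of learned weights so it can generalize to word pairs it never saw verbatim, but the "weights encode which continuations are likely" picture is the same.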

Is there a limit to the dataset size used to train the model, a point where you hit a plateau? Like, I imagine training against Shakespeare would be harder than Poe because of all the made-up words Shakespeare uses. I'd probably train on Shakespeare with his works + wikis and discussions of his work.

I know that's kind of all over the place, I'm kind of fumbling at the topic trying to get a grasp so I can start prying it open.

[–] CKtalon@alien.top 1 points 11 months ago (1 children)

You are probably talking about fine-tuning rather than (pre)training a model. There are models that were trained for coding, like CodeLlama and all its variants. You could probably train on the library's code, but I doubt you'd get much out of it. Perhaps the best way is to create some instruction data based on the library (either manually or synthetically) and fine-tune on that.
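Instruction data like this is usually just prompt/response pairs. A minimal sketch of what that might look like, written out as the JSON-lines format many fine-tuning scripts accept (the library name `mylib` and the examples are invented for illustration):

```python
import json

# Hypothetical hand-written instruction pairs about a made-up library, "mylib".
examples = [
    {
        "instruction": "How do I open a connection with mylib?",
        "response": "Call mylib.connect(url); it returns a Connection object.",
    },
    {
        "instruction": "Write a snippet that lists tables using mylib.",
        "response": "conn = mylib.connect(url)\nprint(conn.list_tables())",
    },
]

# One JSON object per line (JSONL) is the common interchange format
# for instruction-tuning datasets.
with open("instructions.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

A few hundred to a few thousand pairs in this shape can then be fed to whatever fine-tuning tooling you use.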

[–] paradigm11235@alien.top 1 points 11 months ago

I'm glad I goofed in my question, because your response was super helpful, but I now realize I was missing the terminology when I posted. I was talking about fine-tuning an existing model with a specific goal in mind (re: poetry).