
LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


I guess the question is: what order of magnitude are we talking about before you need to step up to more parameters? I understand it's measured in billions of parameters, and that the parameters are basically the weights learned from the training data that the model uses to predict words (I think of it as a big weight map), so you'd expect "sharp sword" far more often than "aspirin sword."
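To make that intuition concrete, here's a minimal sketch (assuming the Hugging Face transformers library and GPT-2, neither of which is mentioned in the thread; the helper `next_word_prob` is hypothetical) that asks a small model how likely "sword" is after two different contexts:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# GPT-2 chosen only as a small, freely available example model.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def next_word_prob(context: str, word: str) -> float:
    """Probability the model assigns to `word` coming next after `context`."""
    inputs = tokenizer(context, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # scores for the next token
    probs = torch.softmax(logits, dim=-1)
    # First token of the word; the leading space is a GPT-2 BPE quirk.
    word_id = tokenizer.encode(" " + word)[0]
    return probs[word_id].item()

print(next_word_prob("He drew his sharp", "sword"))
print(next_word_prob("He swallowed an aspirin", "sword"))
```

Neither printed probability is guaranteed to be large, but the first should come out noticeably higher than the second, which is all the "big weight map" intuition claims.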

Is there a limit to the amount of data used to train the model, to the point that you hit a plateau? For example, I imagine training on Shakespeare would be harder than on Poe because of all the words Shakespeare made up. I'd probably train a Shakespeare model on his works plus wikis and discussions of his work.

I know that's kind of all over the place; I'm fumbling around the topic, trying to get a grasp on it so I can start prying it open.

tgredditfc@alien.top 1 points 11 months ago

If I can run them all, I'll just pick the biggest one.