AdamEgrate

joined 2 years ago
[–] AdamEgrate@alien.top 1 points 2 years ago (1 children)

How do you know that? And how can you be so confident about it?

[–] AdamEgrate@alien.top 1 points 2 years ago

Yeah. They want people to believe that if it’s made by a human it is fair use for training models, but if it’s made by an AI it’s not.

[–] AdamEgrate@alien.top 1 points 2 years ago (1 children)

Scaling laws suggest that you can reduce parameter count by increasing the number of training tokens. There is a limit, however, and it seems to be at around 32% of the original model size: https://www.harmdevries.com/post/model-size-vs-compute-overhead/

So that would put the resulting model at around 56B parameters. Not sure how they got it down further; maybe through quantization.
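The arithmetic behind that estimate can be sketched as follows. Note the ~175B starting size is an assumption on my part (GPT-3 scale), not something stated above:

```python
# Back-of-envelope for the ~56B figure.
# Assumption: the "original" model is ~175B parameters (GPT-3 scale).
original_params_b = 175     # billions of parameters (assumed)
min_size_fraction = 0.32    # ~32% floor suggested by the linked scaling-law analysis

compressed_params_b = original_params_b * min_size_fraction
print(f"~{compressed_params_b:.0f}B parameters")  # ~56B
```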

[–] AdamEgrate@alien.top 1 points 2 years ago

There is strong evidence in the literature that you can reduce parameter count by increasing the number of training tokens (and compute time). Not saying that’s what they did here, but I also wouldn’t be surprised, given how important efficient inference is for them.
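That tradeoff is commonly summarized by the Chinchilla parametric loss fit from Hoffmann et al. (2022): a smaller model trained on more tokens can match the loss of a larger, under-trained one. A minimal sketch using the published fitted constants (the specific N/D values below are illustrative, not from the comment above):

```python
# Chinchilla parametric loss fit: L(N, D) = E + A / N**alpha + B / D**beta
# Constants are the published fits from Hoffmann et al. (2022).
E, A, B = 1.69, 406.4, 410.7
alpha, beta = 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for N parameters trained on D tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Illustrative comparison: a ~56B model trained on 2T tokens vs. a
# 175B model trained on only 300B tokens.
big_undertrained = loss(175e9, 300e9)
small_overtrained = loss(56e9, 2_000e9)
print(big_undertrained, small_overtrained)  # the smaller, longer-trained model wins
```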