Nkingsy

joined 10 months ago
[–] Nkingsy@alien.top 1 points 9 months ago

Or the more undertrained a model is, the more fat can be trimmed.
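
A minimal sketch of what "trimming fat" can mean in practice (my illustration, not from the comment): magnitude pruning zeroes out the smallest-magnitude weights, and the intuition above is that an undertrained model tolerates a higher sparsity level before quality drops. All names here are hypothetical.

```python
# Illustration only: magnitude pruning of a single weight matrix.
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the `sparsity` fraction of weights with the smallest |w|."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

rng = np.random.default_rng(0)
w = rng.normal(size=(1024, 1024))        # stand-in for one layer's weights
pruned = magnitude_prune(w, sparsity=0.5)
print(f"zeroed: {np.mean(pruned == 0):.0%} of weights")
```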

[–] Nkingsy@alien.top 1 points 10 months ago

I think LLaMA 1 had more interesting training data, but it can't hold a plot very well.

[–] Nkingsy@alien.top 1 points 10 months ago

Trained on a larger number of tokens. All the LLaMA models appear to be undertrained, especially the 70B.
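
A quick back-of-the-envelope calculation (my illustration, not from the comment) of why the 70B is the most undertrained relative to its size: the LLaMA 2 models were all reported as trained on roughly the same ~2T tokens, so tokens seen per parameter falls off sharply as the model grows.

```python
# Illustration only: tokens per parameter as a rough proxy for how
# thoroughly each model size is trained on a fixed ~2T-token budget.
def tokens_per_param(train_tokens: float, params: float) -> float:
    """Ratio of training tokens to model parameters."""
    return train_tokens / params

TRAIN_TOKENS = 2e12  # ~2T tokens, as reported for LLaMA 2

for name, params in [("7B", 7e9), ("13B", 1.3e10), ("70B", 7e10)]:
    ratio = tokens_per_param(TRAIN_TOKENS, params)
    print(f"LLaMA 2 {name}: ~{ratio:.0f} tokens per parameter")

# Prints roughly: 7B -> ~286, 13B -> ~154, 70B -> ~29 tokens per parameter,
# i.e. the 70B sees an order of magnitude less data per weight.
```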