this post was submitted on 23 Nov 2023
LocalLLaMA
Community to discuss Llama, the family of large language models created by Meta AI.
I think that's a bit out of date. My guess is it's building on this work:
https://openai.com/research/improving-mathematical-reasoning-with-process-supervision
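For anyone who wants the mechanical picture: the idea in that paper is to train a process reward model (PRM) that scores every intermediate reasoning step, then rerank sampled solutions by the product of the step scores, instead of only judging the final answer. Here's a minimal toy sketch of that reranking loop; `score_step` is a made-up stand-in for the trained PRM, not anything OpenAI released:

```python
def score_step(question: str, steps_so_far: list[str], step: str) -> float:
    """Stand-in for a trained process reward model (PRM).

    A real PRM predicts the probability that `step` is a correct
    continuation given the question and the steps before it. This toy
    heuristic just likes steps that show arithmetic, purely so the
    sketch runs end to end.
    """
    return 0.9 if any(c.isdigit() for c in step) else 0.5


def process_reward(question: str, solution: str) -> float:
    """Score a whole solution as the product of its per-step scores,
    so one bad step sinks the chain (process supervision), rather than
    grading only the final answer (outcome supervision)."""
    steps = [s.strip() for s in solution.split("\n") if s.strip()]
    prefix: list[str] = []
    total = 1.0
    for step in steps:
        total *= score_step(question, prefix, step)
        prefix.append(step)
    return total


# Best-of-n reranking: sample a few chains, keep the one the PRM scores highest.
question = "What is 12 * 7 + 5?"
candidates = [
    "12 * 7 = 84\n84 + 5 = 89\nThe answer is 89.",
    "Twelve times seven is about ninety\nThe answer is ninety-five.",
]
best = max(candidates, key=lambda s: process_reward(question, s))
print(best)  # prints the step-by-step chain
```

In the actual paper the step scores come from a model trained on human step-level labels (that's the PRM800K dataset); the heuristic above is only there so the example executes.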
This definitely sounds like the paper. 100% worth the read; I'm surprised I hadn't heard much about it until this whole ordeal.
I'll definitely be asking my GPT to read this paper to me as my bedtime story.
PRM8k made the rounds maybe 6+ months ago, but they never publicly released the model.
I've only recently gotten into LLMs. Have you tried these math models? They seem to follow math-related instructions reasonably well:
- wizard-math:13b-q6_K
- MathLLM-MathCoder-CL-7B.Q8_0.gguf
- metamath-mistral-7b.Q5_K_M.gguf
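The last two of those are GGUF files, so they'll run locally with llama-cpp-python; the first looks like an Ollama tag (i.e. `ollama run wizard-math:13b-q6_K`). A rough sketch of loading one; the file path, context size, and prompt template are my assumptions, so check the model card for the exact template:

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Placeholder path: point this at wherever you downloaded the GGUF file.
llm = Llama(
    model_path="./metamath-mistral-7b.Q5_K_M.gguf",
    n_ctx=2048,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if available; 0 = CPU only
)

# Alpaca-style template (roughly what the MetaMath card suggests).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWhat is 12 * 7 + 5? Show your steps.\n\n"
    "### Response: Let's think step by step."
)

out = llm(prompt, max_tokens=256, temperature=0.2)
print(out["choices"][0]["text"])
```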
Strange, I thought they would naturally be rewarding the process by rewarding each token the sequence-to-sequence model generates, rather than just the final answer. Maybe they over-optimised and skipped training on all of the output.
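That intuition is half right: ordinary next-token training already puts a loss on every token, it just measures "did you match the reference text", not "is this step correct". The process-supervision work adds that second, step-level judgment on top. A tiny PyTorch illustration of the per-token part (shapes and values invented):

```python
import torch
import torch.nn.functional as F

# Pretend outputs of a tiny LM: batch 1, sequence of 4 tokens, vocab of 10.
logits = torch.randn(1, 4, 10)          # model's score for each vocab entry
targets = torch.randint(0, 10, (1, 4))  # the reference next tokens

# Standard LM training: every token position gets its own loss term,
# so in that sense each generated "word" is already supervised.
per_token_loss = F.cross_entropy(
    logits.view(-1, 10), targets.view(-1), reduction="none"
).view(1, 4)
print(per_token_loss)  # one loss value per token position

# What this does NOT capture is whether a reasoning step is actually
# correct; that separate, step-level signal is what the process reward
# model in the paper above is trained to provide.
```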
yea, that seems to be what a few news articles have referenced.