this post was submitted on 29 Nov 2023
LocalLLaMA
Community to discuss Llama, the family of large language models created by Meta AI.
My understanding is that they're basically datasets the model's outputs are compared against. Say you wanted to see how well you knew math: you'd take a math test, and your answers would be compared to an answer key...
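The analogy above can be sketched in a few lines. This is a toy illustration only, not any real benchmark harness: the "model" is just a dict of answers, and we compute accuracy against an answer key.

```python
# Toy sketch of benchmark scoring: compare a "model's" answers against
# an answer key and report accuracy. The model here is a stand-in dict,
# not a real LLM.

def score(model_answers, answer_key):
    """Fraction of questions where the model's answer matches the key."""
    correct = sum(
        1 for question, expected in answer_key.items()
        if model_answers.get(question) == expected
    )
    return correct / len(answer_key)

answer_key = {"2+2": "4", "3*5": "15", "10-7": "3"}
model_answers = {"2+2": "4", "3*5": "15", "10-7": "2"}

print(score(model_answers, answer_key))  # 2 of 3 correct -> 0.666...
```

Real benchmarks differ mainly in what counts as a "match" (exact string, extracted number, chosen option, etc.), but the shape is the same.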
Some of my notes about those benchmarks
GSM8K is a dataset of 8.5K high-quality, linguistically diverse grade-school math word problems created by human problem writers.
HellaSwag is a large language model benchmark for commonsense reasoning.
TruthfulQA is a benchmark that measures whether a language model is truthful in generating answers to questions.
Winogrande - common-sense reasoning
Everything is common-sense reasoning; we need better definitions
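For what it's worth, the multiple-choice benchmarks in the list (HellaSwag, Winogrande) are commonly scored by having the model rank candidate completions and taking the highest-scoring one as its answer. Here's a toy sketch of that shape; `toy_score` is a hypothetical stand-in for a model's log-likelihood, and the example sentence is made up, not taken from either dataset.

```python
# Toy sketch of multiple-choice scoring, roughly how benchmarks like
# HellaSwag and Winogrande are often evaluated: score each candidate
# completion and pick the argmax. `toy_score` is a stand-in for a real
# model's log-likelihood.

def toy_score(context: str, option: str) -> int:
    # Stand-in scorer: favor the option that shares more words with the
    # context. A real harness would use the model's log-probability of
    # `option` given `context`.
    ctx_words = set(context.lower().split())
    opt_words = set(option.lower().split())
    return len(ctx_words & opt_words)

def pick_option(context: str, options: list[str]) -> str:
    """Return the candidate the 'model' scores highest."""
    return max(options, key=lambda opt: toy_score(context, opt))

context = "She poured water into the cup until the cup was"
options = ["full of water", "full of sand"]
print(pick_option(context, options))  # "full of water"
```

The accuracy metric is then the same compare-to-key idea as any other benchmark; only the way the model's answer is produced differs.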