this post was submitted on 13 Nov 2023

LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.

I see a fair amount of laughing -- by no means an overabundance, but enough to "trigger" me -- at some of the "obvious" research that gets posted here.

One example from a week or two ago that's been rattling around in my head: someone replying to a paper with (paraphrased):

That's just RAG with extra steps.

Exactly. But what were those steps attempting? Did they make RAG better?

If yes: great, let's keep pulling the thread.

If no: ok, let's let others know that pulling the thread in this direction has been tried, so they can take a different approach; maybe it can be pulled in a different direction.

We are on the cusp of a cultural and technical shift. Let's not shame the people sharing their work with the community.

RonLazer@alien.top 10 months ago

Because real research is supposed to be peer-reviewed, and journals offer peer review by panels of experts. arXiv was supposed to circumvent that by allowing review by an open group of peers, but the cycle for new research is so short nowadays that it basically means "review by Twitter".