this post was submitted on 13 Nov 2023

LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.

I see a fair amount of laughing -- by no means an overabundance, but enough to "trigger" me -- at some of the "obvious" research that gets posted here.

One example from a week or two ago that's been rattling around in my head: someone replied to a paper with (paraphrased):

That's just RAG with extra steps.

Exactly. But what were those extra steps attempting? Did they make RAG better?

Yes. Great, let's continue pulling the thread.

No. OK, let's let others know that pulling the thread in this direction has already been tried, so they can try pulling it in a different one.

We are at the cusp of a cultural and technical shift. Let's not shame the people sharing their work with the community.

[–] ArtifartX@alien.top 1 points 11 months ago

I've definitely seen a few of those.