this post was submitted on 13 Nov 2023 to LocalLLaMA, a community to discuss Llama, the family of large language models created by Meta AI.

I see a fair amount of laughing -- by no means an overabundance, but enough to "trigger" me -- at some of the "obvious" research that gets posted here.

One example from a week or two ago that's been rattling around in my head was someone replying to a paper with (paraphrased):

That's just RAG with extra steps.

Exactly. But what were those steps attempting? Did they make RAG better?

Yes? Great, let's continue pulling the thread.

No? OK, let's let others know that pulling this thread in this direction has been tried and that they should take a different approach; maybe it can be pulled in a different direction.
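Since the exchange hinges on what "plain RAG" even is, here is a minimal sketch of the baseline being referenced -- retrieve, augment, generate -- so it's clearer where a paper's "extra steps" would slot in. The `embed` and `generate` functions below are hypothetical placeholders standing in for a real embedding model and a real LLM call; they are not any particular library's API.

```python
import hashlib
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: a deterministic pseudo-random vector derived
    # from the text. In practice you would call a real embedding model here.
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    return np.random.default_rng(seed).standard_normal(8)

def generate(prompt: str) -> str:
    # Placeholder: in practice you would call an LLM here.
    return f"[model answer conditioned on]\n{prompt}"

def rag_answer(question: str, documents: list[str], k: int = 2) -> str:
    # 1. Retrieve: rank documents by cosine similarity to the question.
    q = embed(question)

    def score(doc: str) -> float:
        d = embed(doc)
        return float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d)))

    top_docs = sorted(documents, key=score, reverse=True)[:k]

    # 2. Augment: stuff the retrieved context into the prompt.
    context = "\n".join(top_docs)

    # 3. Generate: let the model answer with the context in view.
    return generate(f"Context:\n{context}\n\nQuestion: {question}")
```

Whatever "extra steps" a given paper adds would typically wrap or replace one of those three stages, which is why "did it make RAG better?" is the natural follow-up question.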

We are at the cusp of a cultural and technical shift. Let's not shame the people sharing their work with the community.

ArtifartX@alien.top 1 point 1 year ago

I agree with your sentiment here. But you can't deny the influx of papers that take something extremely simple or inconsequential and deliberately dress it up to look as complex as possible just to get published. Much as I agree with you otherwise, those kinds of papers are not good, and we'd all be better off without them. I think there is a place for shame for certain types of papers, and I'd disagree with the idea that shame is always bad or shouldn't be used as a tool.

FPham@alien.top 1 point 1 year ago

Oh boy, preaching to the choir. Sometimes a 10-page paper can just be summarized as a shrug emoji.

ArtifartX@alien.top 1 point 11 months ago

I've definitely seen a few of those.
