this post was submitted on 13 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


I see a fair amount of laughing -- by no means an overabundance, but enough to "trigger" me -- at some of the "obvious" research that gets posted here.

One example from a week or two ago that's been rattling around in my head: someone said, in reply to a paper (paraphrased):

That's just RAG with extra steps.

Exactly. But what were those steps attempting? Did they make RAG better?

Yes? Great, let's keep pulling the thread.

No? Okay, then let's let others know that pulling the thread in this direction has been tried and that they should take a different approach; maybe it can be pulled in a different direction.

We are at the cusp of a shift in our technical culture. Let's not shame the people sharing their work with the community.

[–] ArtifartX@alien.top 1 points 10 months ago (3 children)

I agree with your sentiment here. But you can't deny the influx of papers that take something extremely simple or inconsequential and deliberately dress it up to look as complex as possible just to get published. Regardless of your sentiment (which, again, I mostly agree with), those kinds of papers are not good, and we'd all be better off without them. I think there is a place for shame for certain types of papers, and I would disagree with the idea that shame is always bad or shouldn't be used as a tool.

[–] frozen_tuna@alien.top 1 points 10 months ago

10/10. I'm literally working on filing a patent at the moment and trying to make it as hyper-specific as possible so that a.) it doesn't overlap with anyone else's patent, and b.) it pretty much only applies to what we're doing at the company.

I'm sure there are people in similar situations, but we're heavily incentivized to patent/publish something.
