this post was submitted on 27 Mar 2024

Technology


[–] Kichae@lemmy.ca 16 points 5 months ago (7 children)

Searching

Literally the worst possible use case. They're syntax generators, not search engines, and not founts of knowledge.

[–] Creesch@beehaw.org 9 points 5 months ago (5 children)

I don't know how to say this in a less direct way: if this is your take, then you should probably get a bit more informed about what LLMs can do. Specifically, what they can do if you combine them with some code to fill the gaps.

Things LLMs can do quite well:

  • Generate useful search queries.
  • Dig through provided text to determine what it contains.
  • Summarize text.

These are all the building blocks for searching the internet. If you are talking about local documents and such, retrieval-augmented generation (RAG) can be pretty damn useful.
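To make the RAG idea concrete, here is a toy sketch of just the retrieval step: score local documents against a query and hand the best match to the LLM as context. This is a bag-of-words cosine similarity, purely illustrative; a real RAG setup would use an embedding model for scoring, and all names here (`tokenize`, `retrieve`, the sample docs) are made up for the example.

```python
from collections import Counter
import math

def tokenize(text):
    # crude tokenizer: lowercase words, punctuation stripped from the ends
    return [w.lower().strip(".,!?") for w in text.split()]

def cosine(a, b):
    # cosine similarity between two token-count vectors
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    denom = (math.sqrt(sum(v * v for v in a.values()))
             * math.sqrt(sum(v * v for v in b.values())))
    return num / denom if denom else 0.0

def retrieve(query, docs, k=2):
    # rank documents by similarity to the query, return the top k
    qv = Counter(tokenize(query))
    scored = sorted(docs, key=lambda d: cosine(qv, Counter(tokenize(d))),
                    reverse=True)
    return scored[:k]

docs = [
    "LLMs can summarize long articles into short digests.",
    "Bees communicate through dances.",
    "Retrieval augmented generation grounds LLM answers in source text.",
]
top = retrieve("how does retrieval augmented generation help LLMs", docs, k=1)
```

The retrieved text would then be pasted into the LLM's prompt ("answer using only the following documents: ..."), which is what keeps the summarization step grounded in real sources rather than the model's memory.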

[–] Dark_Arc@social.packetloss.gg 9 points 5 months ago (2 children)

That's not entirely fair either though... They can incorrectly summarize, omit important information, or just make stuff up.

[–] Creesch@beehaw.org 8 points 5 months ago (1 children)

True, though that isn't all that different from people posting knee-jerk responses on the internet...

I am not claiming they are perfect, but for the steps I described, a human aware of the limitations is perfectly able to validate the outcome, while still saving a bunch of time and effort on the initial search pass.

All I am saying is that it is fine to be critical of LLM and AI claims in general as there is a lot of hype going on. But some people seem to lean towards the "they just suck, period" extreme end of the spectrum. Which is no longer being critical but just being a reverse fanboy/girl/person.

[–] Dark_Arc@social.packetloss.gg 4 points 5 months ago

> All I am saying is that it is fine to be critical of LLM and AI claims in general as there is a lot of hype going on. But some people seem to lean towards the "they just suck, period" extreme end of the spectrum. Which is no longer being critical but just being a reverse fanboy/girl/person.

Fair, nuance is an endangered species.
