this post was submitted on 09 Apr 2024
151 points (94.2% liked)

A prototype is available, though it's Chrome-only and English-only at the moment. It works like this: you select some text and click the extension, which will try to "return the relevant quote and inference for the user, along with links to article and quality signals".

Under the hood, it uses ChatGPT to generate a search query, hits Wikipedia's search API to pull relevant article text, and then uses ChatGPT again to extract the relevant part.
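Sketched out in code, that pipeline looks roughly like the following. This is just my illustration of the described flow, not the extension's actual source; the model name, prompts, and helper function names are assumptions.

```python
# Rough sketch of the described pipeline: ChatGPT -> Wikipedia search -> ChatGPT.
# Model name, prompts, and function names are illustrative, not the extension's code.
import requests
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
WIKI_API = "https://en.wikipedia.org/w/api.php"

def generate_search_query(selected_text: str) -> str:
    # Step 1: ask the LLM to turn the highlighted text into a search query.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Write a short Wikipedia search query for: {selected_text}"}],
    )
    return resp.choices[0].message.content.strip()

def search_wikipedia(query: str) -> str:
    # Step 2: use Wikipedia's public search API to find a candidate article.
    params = {"action": "query", "list": "search", "srsearch": query,
              "format": "json", "srlimit": 1}
    hit = requests.get(WIKI_API, params=params).json()["query"]["search"][0]
    # Fetch a plain-text extract of the top hit.
    extract = requests.get(WIKI_API, params={
        "action": "query", "prop": "extracts", "explaintext": 1,
        "titles": hit["title"], "format": "json"}).json()
    page = next(iter(extract["query"]["pages"].values()))
    return page.get("extract", "")

def extract_relevant_quote(selected_text: str, article_text: str) -> str:
    # Step 3: ask the LLM to pull out the passage relevant to the selected claim.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Claim: {selected_text}\n\nArticle:\n{article_text[:8000]}\n\n"
                              "Quote the most relevant passage and say what it implies about the claim."}],
    )
    return resp.choices[0].message.content

selection = "Example claim highlighted by the user"
print(extract_relevant_quote(selection, search_wikipedia(generate_search_query(selection))))
```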

[–] vhstape@lemmy.sdf.org 19 points 7 months ago (5 children)

Is it that hard to fact-check things?? Not to mention, a quick web search uses much less power/resources compared to AI inference...

[–] swordsmanluke@programming.dev 2 points 7 months ago (3 children)

a quick web search uses much less power/resources compared to AI inference

Do you have a source for that? Not that I'm doubting you, just curious. I read once that the internet infrastructure required to support a cellphone uses about the same amount of electricity as an average US home.

Thinking about it, I know that LeGoog has yuge data centers to support its search engine. A simple web search is going to hit their massive distributed DB to return answers in subsecond time. Whereas an LLM query (NOT training, which is admittedly cuckoo bananas energy intensive) would execute on a single GPU, albeit a hefty one.

So on one hand you'll have a query hitting multiple (comparatively) lightweight machines to look up results - and all the networking gear in between. On the other, a beefy single-GPU machine.

(All of this is from the perspective of handling a single request, of course. I'm not suggesting that Wikipedia would run this service on only one machine.)

[–] sheogorath@lemmy.world 7 points 7 months ago (1 children)

Based on this article, it seems that on average an LLM query costs about 10x as much as a search engine query.

[–] swordsmanluke@programming.dev 1 points 7 months ago

Man - that's wild. Thank you for coming through with a citation - I appreciate it!
