this post was submitted on 07 Oct 2023
312 points (97.0% liked)

[–] praise_idleness@sh.itjust.works 73 points 1 year ago* (last edited 1 year ago) (19 children)
  • works 24/7
  • no emotional damage
  • easy to train
  • cheap as hell
  • concurrent, fast service possible

This was pretty much the very first thing to be replaced by AI. I'm pretty sure it'd be a way nicer experience for customers.

[–] applebusch@lemmy.world 63 points 1 year ago (9 children)

Doubt. These large language models can't produce anything outside their dataset. Everything they do is derivative, pretty much by definition. Maybe they can mix and match things they were trained on but at the end of the day they are stupid text predictors, like an advanced version of the autocomplete on your phone. If the information they need to solve your problem isn't in their dataset they can't help, just like all those cheap Indian call centers operating off a script. It's just a bigger script. They'll still need people to help with outlier problems. All this does is add another layer of annoying unhelpful bullshit between a person with a problem and the person who can actually help them. Which just makes people more pissed and abusive. At best it's an upgrade for their shit automated call systems.
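For anyone wondering what "stupid text predictor" means concretely, here's a toy sketch of the autocomplete idea: predict the next word from what most often followed the previous word in the training text. The corpus and function name are made up for illustration, and a real LLM uses a neural network over tokens rather than a lookup table of word pairs, but the "if it's not in the data, the model is stuck" point is the same.

```python
from collections import Counter, defaultdict

# Tiny made-up "training data" standing in for the model's dataset.
corpus = (
    "please restart your router and try again . "
    "please restart your phone and try again . "
    "please check your cables and try again ."
).split()

# Count which word most often follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(prompt, max_words=6):
    """Greedily append the most likely next word, like phone autocomplete."""
    words = prompt.split()
    for _ in range(max_words):
        candidates = following.get(words[-1])
        if not candidates:
            break  # nothing like this in the "dataset" -> no help available
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(autocomplete("please restart"))
# -> "please restart your router and try again ." (given this tiny corpus)
```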

[–] SirGolan@lemmy.sdf.org 7 points 1 year ago

Check out this recent paper that finds some evidence that LLMs aren't just stochastic parrots. They actually develop internal models of things.
