this post was submitted on 15 Jun 2024
34 points (60.1% liked)

[–] Hackworth@lemmy.world 1 points 6 months ago (1 children)

Humans are really bad at determining whether a chat is with a human or a bot

At 22%, Eliza is not indistinguishable from a human.
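For context, the percentage being argued over is a pass rate: the share of judges who labeled the bot "human" after chatting with it. A minimal sketch of how such a figure is computed, with illustrative verdicts rather than the study's raw data:

```python
# Sketch of a Turing-test "pass rate": the fraction of judges who
# labeled the machine "human". Numbers below are illustrative only,
# not the actual study data.

def pass_rate(verdicts):
    """verdicts: list of True (judge said 'human') / False (judge said 'bot')."""
    return sum(verdicts) / len(verdicts)

# A 22% figure means roughly 22 of every 100 judges mistook the bot
# for a human -- the other 78 correctly identified it.
eliza_verdicts = [True] * 22 + [False] * 78
print(pass_rate(eliza_verdicts))  # 0.22
```

So a low pass rate means judges usually *did* tell the bot apart from a human, which is the point of contention in the replies below.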

Passing the Turing test stood largely out of reach for 70 years precisely because humans are pretty good at spotting counterfeit humans.

This is a monumental achievement.

[–] dustyData@lemmy.world 0 points 6 months ago* (last edited 6 months ago) (1 children)

First, that is not how that statistic works; you are reading it entirely wrong.

Second, this test is intentionally designed to be misleading. Comparing ChatGPT to Eliza is the equivalent of me claiming that the Chevy Bolt is the fastest car to ever enter a highway by comparing it to a 1908 Ford Model T. It completely ignores a huge history of technological developments. There were chatbots just as successful before ChatGPT; they just weren't LLMs, and they were measured by other methods and systematic trials. The Turing test is not actually a scientific test of anything, so it isn't standardized in any way. Anyone is free to claim they ran a Turing test, whenever and however they like, with almost no controls. It is meaningless and proves nothing.