[–] dkc@lemmy.world 45 points 5 months ago (6 children)

I wonder if all these companies rolling out AI before it's ready will have a widespread impact on how people perceive AI. If you learn early on that AI answers can't be trusted, will people be less likely to use it, even if it improves to a useful point?

[–] RGB3x3@lemmy.world 18 points 5 months ago

Personally, that's exactly what's happening to me. I've seen enough to know that AI can't be trusted to give a correct answer, so I don't use it for anything important. It's a novelty, like Siri and Google Assistant were when they first came out (and honestly still are), where the best use is getting it to tell a joke or answer very narrow trivia questions.

There must be a lot of people thinking the same. AI currently feels unhelpful and wrong; we'll see if it just becomes another passing fad.

[–] MBM@lemmings.world 17 points 5 months ago (1 children)

If so, companies rolling out blatantly wrong AI are doing the world a service and protecting us against subtly wrong AI.

[–] dubyakay@lemmy.ca 4 points 5 months ago

Google were the good guys after all????

[–] Psythik@lemmy.world 8 points 5 months ago* (last edited 5 months ago)

To be fair, you should fact-check everything you read on the internet, no matter the source (though I admit that's getting more difficult in this era of shitty search engines). AI can be a very powerful knowledge-acquisition tool if you take everything it tells you with a grain of salt, just like with everything else.

This is one of the reasons I only use AI implementations that cite their sources (edit: not Google's), because you can check the sources it used and see for yourself how much is accurate and how much is hallucinated bullshit. Hell, I've had AI cite an AI-generated webpage as its source on far too many occasions.

Going back to what I said at the start: have you ever read an article or watched a video on a subject you're knowledgeable about, just for fun, to count the number of inaccuracies in the content? Real eye-opening shit. Even before the age of AI language models, misinformation was everywhere online.

[–] kent_eh@lemmy.ca 5 points 5 months ago

"will have a widespread impact on how people perceive AI"

Here's hoping.

[–] xanu@lemmy.world 0 points 5 months ago (1 children)

I'm no defender of AI, and the way it just blatantly makes up fake stories is ridiculous. However, in the long term, as long as it does eventually get better, I don't see this period of low-to-no trust lasting.

Remember how bad autocorrect was when it first rolled out? People were always complaining about it and cracking jokes about how dumb it was. Then it slowly got better and better, and now, for the most part, everyone just trusts their phone to fix any spelling mistakes they make, as long as it's close enough.

[–] PiratePanPan@lemmy.dbzer0.com 1 points 5 months ago

There's a big difference between my phone changing "caulk" to "cock" and my phone telling me to make pizza with Elmer's glue.