this post was submitted on 27 Sep 2024
1342 points (99.4% liked)
Technology
Wasn't it shown that an AI was getting amazing results because it noticed the cancer screening images had the doctor's signature at the bottom? Or did they do another run with the signatures hidden?
More than one system has been shown to "cheat" because of biased training material. One model, if I remember correctly, learned to tell ducks and chickens apart because it was trained with pictures of ducks on water and chickens on sandy ground.
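The background shortcut described above can be sketched with a toy example (all data, features, and names here are invented for illustration): when the background correlates perfectly with the label in the training set, a naive learner has no reason to prefer the real feature, and it falls apart on unbiased test data.

```python
# Toy sketch of "shortcut learning". Each fake image is reduced to two
# binary features: (on_water, round_beak). Label: 1 = duck, 0 = chicken.
# In the biased training set the background correlates perfectly with
# the label, so a single-feature learner can latch onto it.

def best_single_feature(data):
    """Pick the feature index whose simple rule x[i] == y (or its
    inverse) fits the most training examples. Ties keep the lowest
    index, which here means the background feature wins."""
    n_features = len(data[0][0])
    best_index, best_correct = 0, -1
    for i in range(n_features):
        correct = sum(1 for x, y in data if x[i] == y)
        correct = max(correct, len(data) - correct)  # allow inverted rule
        if correct > best_correct:
            best_index, best_correct = i, correct
    return best_index

# Biased training set: every duck is on water, every chicken on sand,
# so BOTH features look equally predictive.
train = [((1, 1), 1), ((1, 1), 1), ((0, 0), 0), ((0, 0), 0)]
shortcut = best_single_feature(train)  # picks feature 0: the background

# Unbiased test set: a duck on sand and a chicken on water.
test = [((0, 1), 1), ((1, 0), 0)]
accuracy = sum(1 for x, y in test if x[shortcut] == y) / len(test)
print(shortcut, accuracy)  # background feature chosen; 0.0 on the fair test
```

The point is that nothing in the biased training data distinguishes the genuine feature (the beak) from the spurious one (the background); only evaluation on data that breaks the correlation exposes the shortcut.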
Since multiple medical image recognition systems are in development, I can't imagine they're all ~~this faulty~~ trained with unsuitable materials.
They are not 'faulty'; they were fed the wrong training data.
This is the most important aspect of any AI: it's only as good as its training dataset. If you don't know the dataset, you know nothing about the AI.
That's why every claim of a 'super-efficient AI' needs to be investigated more deeply. But that goes against the line-goes-up principle, so don't expect it to happen often.