this post was submitted on 20 Feb 2024
164 points (94.6% liked)
you are viewing a single comment's thread
To clarify what OP meant by their 'AI' statement:
The researchers noticed that if someone attempted to remove a tag from a product, it would slightly alter the metal-particle-infused glue, making the signature differ slightly from the original. To account for this, they trained a model.
It's a good use case for an ML model.
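To illustrate the general idea (this is a sketch of my own, not the researchers' actual method, and the signature vectors and threshold are made up): the enrolled glue signature is compared against a fresh reading, and a learned tolerance band separates "same tag, slightly aged glue" from "tag peeled off and reattached".

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two signature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical threshold a trained model might learn: a removal attempt
# scrambles the glue's signature far more than ordinary aging does, so
# an exact-match check would flag everything, but a tolerance band can
# separate minor drift from a real removal.
TAMPER_THRESHOLD = 0.95

def is_tampered(enrolled, measured, threshold=TAMPER_THRESHOLD):
    return cosine_similarity(enrolled, measured) < threshold

enrolled = [0.9, 0.1, 0.4, 0.7]
slightly_aged = [0.88, 0.12, 0.41, 0.69]  # same tag, minor glue change
peeled_and_moved = [0.2, 0.8, 0.6, 0.1]   # signature scrambled by removal

print(is_tampered(enrolled, slightly_aged))     # False: within tolerance
print(is_tampered(enrolled, peeled_and_moved))  # True: flagged as tampering
```

The real system presumably learns that tolerance band from data rather than using a fixed cosine threshold, which is exactly why the training step mentioned above matters.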
In my opinion, this should only be relied on to keep identifying the product itself.
The danger I can see with this product is that management may decide they can rely on it to detect tampering without considering other factors.
The use case provided in the article was for something like a car wash sticker placed on a customer's car.
If the customer tried to peel it off and reattach it to a different car, the business could detect that as tampering.
However, in my opinion, there are a number of other ways this model could falsely accuse someone of tampering:
In the end, most management won't really understand this device beyond statements like, "You can detect tampering with more than 99 percent accuracy!" And unless the business explains how the anti-tampering works, customers won't understand why they're being accused of tampering with the sticker.
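To put rough numbers on why "99 percent accuracy" can still mean many false accusations (these figures are illustrative, not from the article):

```python
# Back-of-envelope base-rate check with assumed numbers: when actual
# tampering is rare, even a highly accurate detector accuses mostly
# honest customers.
customers = 100_000
tamper_rate = 0.001   # assume 1 in 1,000 stickers is actually moved
sensitivity = 0.99    # real tampering flagged 99% of the time
specificity = 0.99    # honest stickers cleared 99% of the time

tamperers = customers * tamper_rate           # 100 real tamperers
honest = customers - tamperers                # 99,900 honest customers
true_alarms = tamperers * sensitivity         # ~99 correct flags
false_alarms = honest * (1 - specificity)     # ~999 false accusations

share_false = false_alarms / (true_alarms + false_alarms)
print(f"{false_alarms:.0f} honest customers accused")
print(f"{share_false:.0%} of all tamper flags are false")
```

Under these assumptions, roughly nine out of ten tamper flags would point at customers who did nothing, which is exactly the failure mode management won't anticipate from the headline accuracy number.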