I really have a hard time deciding whether this is the scandal the article makes it out to be (although there is some backpedaling going on). The crucial point is: 8% of the decisions turn out to be wrong or misjudged. The article seems to want us to think that the use of the algorithm is to blame. Yet is it? Is there evidence that a human would have judged those cases differently? Is there evidence that the algorithm does a worse job than humans? If not, the article devolves into blatant fear mongering, and the message turns from "algorithm is to blame for deaths" into "algorithm unable to predict the future in 100% of cases", which of course it can't...
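To make that concrete: an 8% miss rate only means something next to a baseline. Here's a rough back-of-the-envelope sketch in Python, with completely invented numbers (the article gives no human comparison, which is exactly the problem), of the kind of check you'd want before blaming the algorithm:

```python
# Hypothetical sketch: all counts below are made up for illustration.
from math import sqrt

def two_proportion_z(errors_a, total_a, errors_b, total_b):
    """Two-proportion z-test: is error rate A significantly different from B?"""
    p_a = errors_a / total_a
    p_b = errors_b / total_b
    p_pool = (errors_a + errors_b) / (total_a + total_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Invented numbers: the algorithm misjudges 80 of 1000 cases (8%),
# a hypothetical human reviewer misjudges 95 of 1000 (9.5%).
z = two_proportion_z(80, 1000, 95, 1000)
print(f"z = {z:.2f}")  # |z| < 1.96: no evidence either is better at the 5% level
```

With numbers like these the difference is well within noise, which is why the missing human comparison matters so much.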
The article mentions that one woman (Stefany González Escarraman) sought a restraining order the day after the system deemed her at "low risk", and the judge denied it, citing the VioGen score.
It also says:
You could argue that the problem isn't so much the algorithm itself as the level of reliance upon it. The algorithm isn't unproblematic, though. The fact that it just spits out a simple score ("negligible", "low", "medium", "high", "extreme") is, IMO, an indicator that someone is trying to conflate far too many factors into a single dimension. I have a really hard time believing that anyone knowledgeable in criminal psychology and/or domestic abuse would agree that 35 yes-or-no questions are anywhere near sufficient to evaluate the risk of repeated abuse. (I know nothing about domestic abuse or criminal psychology, so I could be completely wrong.)
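To illustrate what that kind of compression looks like, here is a toy sketch. The weights, thresholds, and band names are all invented; this is not the real VioGen model, just the general shape of "sum up 35 yes/no answers and bucket the total":

```python
# Invented bands: (minimum score, label). NOT the real VioGen scoring.
from typing import Sequence

BANDS = [(0, "negligible"), (5, "low"), (10, "medium"), (18, "high"), (27, "extreme")]

def risk_band(answers: Sequence[bool], weights: Sequence[int]) -> str:
    """Sum weighted yes/no answers and map the total onto one coarse band."""
    assert len(answers) == len(weights) == 35
    score = sum(w for a, w in zip(answers, weights) if a)
    label = BANDS[0][1]
    for threshold, name in BANDS:
        if score >= threshold:
            label = name
    return label

weights = [1] * 30 + [3] * 5            # invented: a handful of items weigh more
answers = [False] * 33 + [True, True]   # only two "yes" answers, both heavy items
print(risk_band(answers, weights))      # -> "low"

# Every one of the 2**35 possible answer patterns collapses into one of five
# words; that compression is the part I find hard to accept.
```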
Apart from that, I also find this highly problematic:
From those quotes, it looks like Idiocracy.
I could say a lot in response to your comment about the benefits and shortcomings of algorithms (or, put another way, screening tools or assessments), but I'm tired.
I will just point out this, for anyone reading.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2573025/
I am exceedingly troubled that something commonly regarded as indicating very high risk when working with victims of domestic violence was ignored in the cited case (disclaimer: I haven't read the article). If the algorithm fails to consider a history of strangulation, it's garbage. If the user of the algorithm did not include that information (and it was disclosed to them), or keyed it incorrectly, they made an egregious error or omission.
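For what it's worth, many lethality screens treat items like prior strangulation as automatic escalators rather than just another point in an additive total. I have no idea whether VioGen does this; the sketch below only illustrates that design choice, and the item names are made up:

```python
# Invented item names; whether VioGen has such an override is not stated in
# the article or this thread. This only sketches the escalation idea.
CRITICAL_ITEMS = {"prior_strangulation", "threats_with_weapon"}

ORDER = ["negligible", "low", "medium", "high", "extreme"]

def adjusted_band(additive_band: str, flagged_items: set) -> str:
    """Escalate to at least 'high' if any critical item was answered 'yes'."""
    if flagged_items & CRITICAL_ITEMS and ORDER.index(additive_band) < ORDER.index("high"):
        return "high"
    return additive_band

print(adjusted_band("low", {"prior_strangulation"}))  # -> "high"
```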
I suppose, without getting into it, I would add: 35 questions (i.e., established statistical risk factors) is a good amount. Large categories are fine. No screening tool is totally accurate, because we can't predict the future or have a total and complete understanding of complex situations. Tools are only useful to people trained to use them, and only with accurate data and inputs. Screening tools and algorithms must find a balance between accurately capturing risk and avoiding false positives.
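That balance is ultimately a threshold choice, and the trade-off shows up even on synthetic data. Everything in the sketch below is made up; it only demonstrates that lowering the cut-off buys sensitivity at the cost of specificity:

```python
# Synthetic population, invented numbers: 10% genuinely high-risk cases whose
# scores skew higher. Not real data, just the shape of the trade-off.
import random

random.seed(0)
cases = [(random.gauss(22, 5), True) if random.random() < 0.10
         else (random.gauss(10, 5), False) for _ in range(10_000)]

for cutoff in (12, 16, 20):
    tp = sum(1 for s, risky in cases if risky and s >= cutoff)
    fn = sum(1 for s, risky in cases if risky and s < cutoff)
    fp = sum(1 for s, risky in cases if not risky and s >= cutoff)
    tn = sum(1 for s, risky in cases if not risky and s < cutoff)
    print(f"cutoff {cutoff}: sensitivity {tp/(tp+fn):.2f}, specificity {tn/(tn+fp):.2f}")
```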
The judge should be in jail for that, and if the judge thinks the "system" can do his job, then he should quit, as he is clearly useless.