The Danish welfare authority, Udbetaling Danmark (UDK), risks discriminating against people with disabilities, low-income individuals, migrants, refugees, and marginalized racial groups through its use of artificial intelligence (AI) tools to flag individuals for social benefits fraud investigations, Amnesty International said today in a new report. 

The report, Coded Injustice: Surveillance and Discrimination in Denmark’s Automated Welfare State, details how the sweeping use of fraud detection algorithms, paired with mass surveillance practices, has led people to unwillingly, or even unknowingly, forfeit their right to privacy, and has created an atmosphere of fear.

“People in non-traditional living arrangements — such as those with disabilities who are married but who live apart due to their disabilities; older people in relationships who live apart; or those living in a multi-generational household, a common arrangement in migrant communities — are all at risk of being targeted by the Really Single algorithm for further investigation into social benefits fraud,” said Hellen Mukiri-Smith of Amnesty International.

UDK and ATP also use inputs related to “foreign affiliation” in their algorithmic models. (...) The research finds that this approach discriminates against people based on factors such as national origin and migration status.

Amnesty International also urges the European Commission to clarify, in its AI Act guidance, which AI practices count as social scoring, addressing concerns [raised by civil society](https://www.hrw.org/news/2023/10/09/eu-artificial-intelligence-regulation-should-ban-social-scoring).
