this post was submitted on 04 Oct 2023
148 points (97.4% liked)

Los Angeles is using AI to predict who might become homeless and help before they do

top 29 comments
[–] qooqie@lemmy.world 44 points 1 year ago (3 children)

This is what state-run AI models should be doing, not any of that other wack-ass shit

[–] Piecemakers3Dprints@lemmy.world 27 points 1 year ago (1 children)

Color me skeptical, considering this city specifically has the single most notoriously corrupt and violent police force in the history of the nation. Yeah, that model is being trained to "help".

[–] LibertyLizard@slrpnk.net 8 points 1 year ago (1 children)

If you’ve never worked with local governments, you may not realize how independent these departments can be. And also that the people who go into this line of work usually really want to help people. I can’t speak to the situation in LA directly but I seriously doubt they would be sharing their tools with the police unless there was political pressure to do so. Which I think is unlikely in LA.

[–] Piecemakers3Dprints@lemmy.world 2 points 1 year ago (2 children)

There's a non-zero chance that LLM is safely secured and incapable of being used for unethical reasons, no matter how "independent" the political groups are.

[–] LibertyLizard@slrpnk.net 1 points 1 year ago

They can just build their own model if they want to. It's not that hard, especially since the police have a lot of money. So the question is more whether we allow this than whether they'd somehow steal it from another, unrelated program.

[–] Touching_Grass@lemmy.world 1 points 1 year ago

They're going to get it regardless. The question is whether we will.

Definitely. Policy should be made on the basis of what's proven to be effective, not ideology.

AI could be more effective, provided that what's been fed into it is not garbage.

[–] ShakeThatYam@lemmy.world 3 points 1 year ago (1 children)

No thanks. If this is remotely successful, these fucks will next use it to Minority Report us.

[–] Kolanaki@yiffit.net 19 points 1 year ago* (last edited 1 year ago) (2 children)

Who needs AI for that?!

At the current rate of inflation vs. rent: everyone. There is no "might."

[–] Alchemy@lemmy.world 7 points 1 year ago

Yeah, I’ve got red flags all over my “minority report”.

Yeah, LA has pretty easy calculus for this. Everyone in the sub-million-per-year earning bracket is high risk! Would be really interesting if some millionaires do start popping up on this thing and it turns out it's a good predictor of soon-to-crash stocks or doomed tech startups!

[–] MyOpinion@lemm.ee 19 points 1 year ago (2 children)

What could also help is a department of housing that anyone could walk into, one that provides them with temporary housing leading to full-time housing if needed.

[–] trk@aussie.zone 1 points 1 year ago

Just give this more funding:

https://epath.org/

[–] chuckleslord@lemmy.world 10 points 1 year ago

Maybe, and this might be a bit out there but hear me out, maybe we should bin housing-last policies and switch to housing-first. Since it's been, ya know, proven to reduce costs and help people.

[–] autotldr@lemmings.world 2 points 1 year ago

This is the best summary I could come up with:


The call was from the Los Angeles County Department of Health Services, part of a first-of-its-kind experiment to try to curb homelessness numbers, which keep going up despite massive spending.

The program tracks data from seven county agencies, including emergency room visits, crisis care for mental health, substance abuse disorder diagnosis, arrests and sign-ups for public benefits like food aid.

She's used the allocated money for payday loan debt, appliances, laptops and, recently, an e-bike for someone whose mental illness made it difficult to take public transportation.

Theus ticks off a list of needs: car repairs, paying back due rent and utilities, restoring food aid for the boys.

But there aren't nearly enough federal housing vouchers to meet the need, and Theus says wait times have gotten longer as cities try to help the growing number of people who are unhoused.

Depending on its long-term results, Los Angeles' proactive approach could add much needed evidence for what works to prevent homelessness, says Beth Shinn, an expert on the issue at Vanderbilt University and also an adviser to the L.A. program.


The original article contains 1,774 words, the summary contains 180 words. Saved 90%. I'm a bot and I'm open source!

[–] sbv@sh.itjust.works 1 points 1 year ago

I have a lot of concerns about AI, but if it's getting people help and preventing them from ending up on the streets, I'm all for it.

[–] AceFuzzLord@lemm.ee 1 points 1 year ago (2 children)

The biggest problem I have with this idea comes from my recent experiences over the past few days with GPT-3.5 in particular.

Things like not being able to remember previous responses or prompts, just making up facts, or having outdated training data (September 2021 for GPT-3.5) that needs to be updated. Until issues like that are less of a problem, I don't foresee it being actually usable for cities, or really anything outside of maybe generating nonsense or random code snippets.

Also, I have concerns you'd see this being taken and used against minorities to discriminate against them. Whether that's intentional or not I can't say.

[–] Zeth0s@lemmy.world 2 points 1 year ago

The AI they're talking about is most likely completely different from ChatGPT.

They're likely labeling people "at risk" with some very reliable old-school ML algorithm, such as XGBoost.
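
For illustration, here's a minimal sketch of what that kind of risk labeling could look like. This is a toy built on assumptions, not the county's actual system: the feature names, the synthetic data, and the choice of XGBoost itself are just the commenter's guess made concrete.

```python
# Toy "at risk" labeling with a gradient-boosted classifier.
# All feature names and data are invented; the real program's
# inputs and model are not public.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)

# Hypothetical per-person counts over some lookback window, echoing the
# record types the article mentions: ER visits, mental-health crisis
# contacts, arrests, and benefit sign-ups.
X = rng.poisson(lam=[2.0, 1.0, 1.0, 3.0], size=(1000, 4)).astype(float)

# Synthetic stand-in for the real outcome label (1 = later lost housing).
y = (X.sum(axis=1) + rng.normal(0.0, 2.0, size=1000) > 9.0).astype(int)

model = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
model.fit(X, y)

# Score everyone and flag the highest-risk cases for proactive outreach.
risk = model.predict_proba(X)[:, 1]
flagged = np.argsort(risk)[::-1][:50]  # indices of the top 50 risk scores
print(f"highest risk score: {risk[flagged[0]]:.2f}")
```

The point is that the model consumes tabular counts per person and emits a risk score that caseworkers can triage, which is a very different workflow from prompting a chatbot.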

Biases are clearly a problem, but they're more manageable than human biases, because the mathematical form makes it easier to find and remove them. This is why, for instance, EU regulations push for mathematical models in many areas to replace "human intuition": mathematical models are better for customers.
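
To make the "finding and removing biases" point concrete, here's a small sketch of the kind of audit a mathematical model permits: comparing flag rates across a protected attribute. The data, the group labels, and the 0.8 cutoff (the common "four-fifths rule") are illustrative only, not anything from the article.

```python
# Toy bias audit: compare "at risk" flag rates across groups.
import numpy as np

def disparate_impact(flags: np.ndarray, group: np.ndarray) -> float:
    """Ratio of the lowest group's flag rate to the highest group's."""
    rates = [flags[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)

flags = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # model decisions per person
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

ratio = disparate_impact(flags, group)
print(f"disparate impact ratio: {ratio:.2f}")  # below ~0.8 warrants review
```

You can't run a check like this on a human caseworker's gut feeling, which is the commenter's argument in miniature.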

They aren't doing anything new, just calling it AI.

Based on the sparse information in the article, they're training the model on actual data points, not just feeding the data in human-readable format to an LLM.
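
To illustrate that distinction, here's a tiny sketch contrasting structured data points with the human-readable text an LLM would consume. The field names are hypothetical, loosely echoing the record types the article lists.

```python
# The same hypothetical case two ways: structured features for a tabular
# model vs. prose for an LLM. Field names are invented for illustration.
structured = {
    "er_visits_12mo": 3,
    "mh_crisis_contacts_12mo": 1,
    "arrests_12mo": 0,
    "benefit_signups_12mo": 2,
}
feature_vector = list(structured.values())  # what a tabular model consumes

human_readable = (
    "This person visited the ER three times in the past year, had one "
    "mental-health crisis contact, and recently signed up for food aid."
)  # what an LLM prompt would look like -- not, per the article, what LA does
```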