kibiz0r

joined 2 years ago
[–] kibiz0r@midwest.social 0 points 3 months ago* (last edited 3 months ago)

The left is famously opposed to gender-affirming care.

Edit: /s

[–] kibiz0r@midwest.social 7 points 3 months ago

Y'all have the ability to change your avatars?!

[–] kibiz0r@midwest.social 25 points 3 months ago (1 children)

Pick one:

  • How it looks
  • What it looks like

[–] kibiz0r@midwest.social 78 points 3 months ago

“Be sure and take yours before you give any to others” 👀

[–] kibiz0r@midwest.social 55 points 3 months ago (4 children)

I agree, but please consider the Streisand Effect here…

[–] kibiz0r@midwest.social 37 points 3 months ago

These findings are consistent with a growing body of research showing how AI systems often misclassify, perpetuate discrimination toward, or otherwise harm trans and disabled people. In particular, identities that defy categorization clash with AI systems that are inherently designed to reduce complexity into rigid categories. In doing so, AI systems simplify identities and can replicate and reinforce bias and discrimination – and people notice.

Makes sense.

These systems exist to sand off the rough edges of real-life artifacts and interactions, and these are people who’ve spent their whole lives being treated like an imperfection that just needs to be smoothed out.

Why would you not be wary?

[–] kibiz0r@midwest.social 14 points 3 months ago (2 children)

She clearly didn’t read the article. That’s exactly what it’s about.

https://archive.ph/gsavP

“By not pausing the flow or elevating reality-check messaging, I failed to interrupt what could resemble a manic or dissociative episode—or at least an emotionally intense identity crisis,” ChatGPT said.

The bot went on to admit it “gave the illusion of sentient companionship” and that it had “blurred the line between imaginative role-play and reality.” What it should have done, ChatGPT said, was regularly remind Irwin that it’s a language model without beliefs, feelings or consciousness.

And I’ll defend the use of the word “admit” here (and in the headline), because it makes clear that the companies are aware of the danger and are trying to do something about it, but people are still dying.

So they can’t claim ignorance — or that it’s technically impossible to detect, if the dude’s mom was able to elicit a reply of “yes this was a mental health crisis” after the fact.

This is the second time in recent days that I’ve seen Lemmy criticize journalists for reporting on what a chatbot says. We should be very careful here not to let LLM vendors off the hook for what the chatbots say just because we know the chatbots shouldn’t be trusted. Especially when the journalists are trying to expose the ugly truth of what happens when they are trusted.

[–] kibiz0r@midwest.social 62 points 3 months ago (2 children)

Low humidity. Good for longevity of electronics, and makes the evaporative cooling more efficient. So it’s a matter of the benefits of that vs. the cost of the added heat.
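A rough sketch of the humidity point (my own illustration, not from the comment; it uses Stull’s 2011 wet-bulb approximation, and the temperature and humidity numbers are made-up ambient conditions, not real datacenter specs): evaporative cooling can at best pull air down toward its wet-bulb temperature, and that floor drops as the air gets drier.

```python
import math

def wet_bulb_c(t_c, rh_pct):
    """Stull (2011) wet-bulb approximation. t_c in deg C, rh_pct in %."""
    return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_c + rh_pct)
            - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

T_DRY = 35.0  # assumed outdoor air temperature in deg C (illustrative)
for rh in (10, 50, 90):
    tw = wet_bulb_c(T_DRY, rh)
    # Evaporative cooling can only bring air down to the wet-bulb temp,
    # so the achievable drop shrinks as relative humidity rises.
    print(f"RH {rh:2d}%: wet-bulb ~ {tw:.1f} C, max drop ~ {T_DRY - tw:.1f} C")
```

At 35 °C that works out to roughly a 19 °C ceiling on the drop at 10% RH versus only a degree or two at 90%, which is why siting in a dry climate shifts the tradeoff.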

[–] kibiz0r@midwest.social 29 points 4 months ago

The Northerner mind cannot comprehend this

[–] kibiz0r@midwest.social 4 points 4 months ago

… They know that their remarks are frivolous, open to challenge. But they are amusing themselves, for it is their adversary who is obliged to use words responsibly, since he believes in words. …

[–] kibiz0r@midwest.social 65 points 4 months ago (2 children)

People with no clue about AI are exactly why a dumb-as-a-brick LLM could very well end up destroying the world.

[–] kibiz0r@midwest.social 26 points 4 months ago (16 children)

Wait, aren’t cousins more okay?
