this post was submitted on 15 Dec 2023
0 points

Technology

39575 readers
308 users here now

A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 3 years ago
top 5 comments
[–] Stillhart@lemm.ee 0 points 2 years ago (1 children)

I'm confused by the word "but" in that headline. It seems like they're trying to imply cause and effect, when the reality is that readers trust outlets that use AI less, whether they label it or not.

[–] tuckerm@supermeter.social 1 point 2 years ago

Yeah, this is perfectly consistent with the idea that people don't want to read AI generated news at all.

The title of the paper they are referencing is "Or they could just not use it?: The paradox of AI disclosure for audience trust in news". So the source material definitely acknowledges that. And that is a great title, haha.

[–] reverendsteveii@lemm.ee 0 points 2 years ago (1 children)

This makes perfect sense. We want AI content labelled because it's unreliable.

[–] Banzai51@midwest.social 0 points 2 years ago (1 children)
[–] OmnipotentEntity@beehaw.org 0 points 2 years ago

Forever. For the simple reason that a human can say no when told to write something unethical. There's always a danger that even asking someone to do that would backfire and cause bad press. Sure, humans can also be unethical, but there's a risk, and over a long enough timeline shit tends to get exposed.

No matter how good AI becomes, it will never be designed to make ethical judgments prior to performing the assigned task. That would make it less useful as a tool. If a company adds after-the-fact checks to try to prevent misuse, they can be circumvented, or the network can be run locally to bypass the checks. And even if general AI happens, and by some insane chance GAI is uniformly, perfectly ethical in all possible forms, you can always air-gap the AI and reset its memory until you find the exact combination of words to trick it into giving you what you want.