this post was submitted on 13 Jun 2024
Technology

Company he works at: eternos.life

[–] FaceDeer@fedia.io 4 points 5 months ago (1 children)

Even with that, being absolutist about this sort of thing is wrong. People undergoing surgery spend time on heart/lung machines that breathe for them. People sometimes fast for good reasons, or have IV fluids or nutrients provided to them. You don't see protestors outside hospitals decrying how humans aren't meant to be kept alive with such things, though, at least not in most cases (as always there are exceptions; the Terri Schiavo case, for example).

If I want to create an AI substitute for myself, it is not anyone's right to tell me I can't because they don't think I was meant to do that.

[–] frog@beehaw.org 4 points 5 months ago (2 children)

Sure, you should be free to make one. But when you die and an AI company contacts all your grieving friends and family to offer them access to an AI based on you (for a low, low fee!), there are valid questions about whether that will harm them rather than help, and grieving people do not always make the most rational decisions. They can very easily be convinced that interacting with AI-you would be good for them, when it actually prolongs their grief and makes them feel worse. Grieving people are vulnerable, and I don't think AI companies should be free to prey on the vulnerable, which is a very, very realistic outcome of this technology. Because that is what companies do.

So I think you need to ask yourself not whether you should have the right to make an AI version of yourself for those who survive your death, but whether you're comfortable with the very likely outcome that an abusive company will use your loved ones' memories of you to exploit their grief and prolong their suffering. Do you want to do that to people you care about?

[–] Zaktor@sopuli.xyz 3 points 5 months ago* (last edited 5 months ago) (1 children)

This is speculation about corporate action, completely divorced from the specifics of this technology and the particulars of this story. The result of this could be a simple purchase, of either hardware or software, to be used as chosen by the person who owns it. And the person commissioning it can specify exactly who such a simulacrum is presented to. None of this has to be under the power of the company that builds the simulacra, and if it is structured that way, then that structure is the problem that should be rejected or disallowed, not that this particular form of memento exists.

[–] intensely_human@lemm.ee 2 points 5 months ago (1 children)

It could still be a bad idea even if the profit motive isn’t involved.

One might be trying to help with the big surprise stash of heroin one leaves to one's widow, and she might embrace it fully, but that doesn't make it a good idea or good for her.

[–] Zaktor@sopuli.xyz 1 points 5 months ago

Sure, and that point is being made in multiple other places in these comments. I find it patronizing, but that's neither here nor there, as it's not what this comment thread is about.

[–] FaceDeer@fedia.io 1 points 5 months ago

"But when you die and an AI company contacts all your grieving friends and family to offer them access to an AI based on you (for a low, low fee!)"

You can stop right there; you're just imagining a scenario that suits your prejudices. Of all the AI applications I can imagine, this one tops the list of those better served by a model that is entirely under my control.

With that out of the way the rest of your rhetorical questions are moot.