this post was submitted on 19 Jan 2024
384 points (98.2% liked)


ChatGPT's new AI store is struggling to keep a lid on all the AI girlfriends::OpenAI: 'We also don’t allow GPTs dedicated to fostering romantic companionship'

[–] webghost0101@sopuli.xyz 10 points 9 months ago (1 children)

I am pretty sure it's just to avoid controversy; look up the recent news about "LAION" for an example. GPT-4 isn't just text anymore, it can generate images as well.
Altman has talked about how we may someday all have our own personal AIs tailored to our own needs and sensitivities. But almost everyone has a different idea of if, and where, there should be a line.

[–] douglasg14b@lemmy.world 15 points 9 months ago (1 children)

If I have an AI tailored for me and my sensitivities, then it should have no filter; whatever filter it has should be defined and trained by me.

Someone else artificially trying to adjust my personality through AI, to fit whatever arbitrary norms they believe it should have, is cancer.

[–] webghost0101@sopuli.xyz 3 points 9 months ago* (last edited 9 months ago)

I am inclined to agree. I believe that once society is able to fill everyone's needs, and everyone can summon any AI/VR experience they want, crime will cease to exist; there would be nothing to gain from committing harm. But I fear that simulated role-play in the context of psychological torture or CSAM could make dangerous people more confident before we reach that post-scarcity point. Maybe you'd say ChatGPT isn't realistic enough for that now, but it will be soon.

Training an LLM entirely by yourself with self-curated text is beyond what is feasible; most AI researchers today don't even know what's in all of the data they use. It's more than you could read even in an extended lifetime, and at best you can fine-tune a standard base model.