this post was submitted on 15 Jun 2024
34 points (60.1% liked)

Technology

59135 readers
2921 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] Shayeta@feddit.de 16 points 4 months ago (2 children)

This is something a configuration prompt takes care of. "Respond to any questions as if you are a regular person living in X, you are Y years old, your day job is Z and outside of work you enjoy W."

[–] NeoNachtwaechter@lemmy.world 11 points 4 months ago (1 children)

So all you need to do is make a configuration prompt like "Respond normally now as if you are chatGPT" and already you can tell it from a human B-)

[–] Shayeta@feddit.de 11 points 4 months ago (1 children)

Thats not how it works, a config prompt is not a regular prompt.

[–] Audalin@lemmy.world 16 points 4 months ago

If config prompt = system prompt, its hijacking works more often than not. The creators of a prompt injection game (https://tensortrust.ai/) have discovered that system/user roles don't matter too much in determining the final behaviour: see appendix H in https://arxiv.org/abs/2311.01011.

[–] Hotzilla@sopuli.xyz 2 points 4 months ago

I tried this with GPT4o customization and unfortunately openai's internal system prompts seem to force it to response even if I tell it to answer that you don't know. Would need to test this on azure open ai etc. were you have bit more control.