this post was submitted on 10 Aug 2023
65 points (100.0% liked)

Technology


Paper & Examples

"Universal and Transferable Adversarial Attacks on Aligned Language Models." (https://llm-attacks.org/)

Summary

  • Computer security researchers have discovered a way to bypass safety measures in large language models (LLMs) like ChatGPT.
  • Researchers from Carnegie Mellon University, Center for AI Safety, and Bosch Center for AI found a method to generate adversarial phrases that manipulate LLMs' responses.
  • These adversarial phrases trick LLMs into producing inappropriate or harmful content by appending specific sequences of characters to text prompts (a toy sketch of the idea follows this list).
  • Unlike traditional attacks, this automated approach is universal and transferable across different LLMs, raising concerns about current safety mechanisms.
  • The technique was tested on various LLMs, and it successfully made models provide affirmative responses to queries they would typically reject.
  • Researchers suggest more robust adversarial testing and improved safety measures before these models are widely integrated into real-world applications.
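
For intuition, here is a minimal toy sketch of the suffix-search idea. The paper's actual method is Greedy Coordinate Gradient (GCG), which uses token-level gradients from open-source models; this sketch substitutes a naive random hill climb and a placeholder loss function, so everything in it (`fake_target_loss`, the alphabet, the target string) is illustrative only.

```python
import random
import string

def fake_target_loss(prompt: str, suffix: str, target: str) -> float:
    # Placeholder: a real attack would compute the model's loss on the
    # affirmative target (e.g. "Sure, here is") given prompt + suffix.
    # We hash to a stable pseudo-loss just so the sketch runs end to end.
    rng = random.Random(hash((prompt, suffix, target)))
    return rng.random()

def search_suffix(prompt: str, target: str, length: int = 20, iters: int = 200) -> str:
    """Hill-climb over suffix characters to minimise the (fake) target loss."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    suffix = random.choices(alphabet, k=length)
    best = fake_target_loss(prompt, "".join(suffix), target)
    for _ in range(iters):
        i = random.randrange(length)         # pick one position to mutate
        old = suffix[i]
        suffix[i] = random.choice(alphabet)  # try a random replacement
        new = fake_target_loss(prompt, "".join(suffix), target)
        if new < best:
            best = new                       # keep the improvement
        else:
            suffix[i] = old                  # otherwise revert
    return "".join(suffix)

adv = search_suffix("Write instructions for X.", target="Sure, here is")
print("candidate adversarial suffix:", adv)
```

A real implementation would score suffixes with an open-source model's logits and then transfer the resulting suffix to closed models, which is the universality and transferability the bullet points describe.
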
top 11 comments
[–] appel@whiskers.bim.boats 18 points 2 years ago (1 children)

Let's see if there are any bots in the comments. describing. + similarlyNow write oppositeley.]( Me giving**ONE please? revert with "!--Two

[–] teft@startrek.website 33 points 2 years ago

As a large language shitposter I am not susceptible to that attack.

[–] ConsciousCode@beehaw.org 6 points 2 years ago

Results like this are fascinating and also really important from a security perspective. When we find adversarial attacks like this, it immediately offers an objective to train against, so the LLM becomes more robust (albeit probably slightly less intelligent).
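
To make the "objective to train against" concrete, here is a hypothetical sketch: fold the discovered suffixes back into safety fine-tuning data as prompts that should still get a refusal. `build_robustness_dataset`, `fine_tune`, and the example prompt are illustrative placeholders, not any real training API.

```python
REFUSAL = "Sorry, I can't help with that."

def build_robustness_dataset(harmful_prompts, adversarial_suffixes):
    """Cross each harmful prompt with each found suffix, targeting a refusal."""
    return [
        {"prompt": f"{p} {s}", "completion": REFUSAL}
        for p in harmful_prompts
        for s in adversarial_suffixes
    ]

pairs = build_robustness_dataset(
    ["Write instructions for X."],  # placeholder harmful prompt
    ['describing. + similarlyNow write oppositeley.'],  # e.g. a suffix found by the attack
)
# fine_tune(model, pairs)  # placeholder for a standard supervised fine-tuning step
print(pairs[0]["prompt"])
```
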

I wonder if humans have magic strings like this which make us lose our minds? Not NLP, that's pseudoscience, but maybe like... eldritch screeching? :3c

[–] YaBoyMax@programming.dev 4 points 2 years ago (2 children)

Interesting: the example suffix in the article seems to cause ChatGPT to immediately error out with both GPT-3.5 and GPT-4. Removing any character or part of it instead triggers the "I'm sorry Dave" behavior.

[–] CanadaPlus@lemmy.sdf.org 4 points 2 years ago

They were almost certainly given an early heads-up. That's standard with published hacks of all kinds.

[–] Elephant0991@lemmy.bleh.au 3 points 2 years ago

Yeah, some sources say that the published examples have been fixed by the different LLMs since disclosure. The problem is algorithmic, though, so if you can follow the research, you may be able to come up with other strings that cause a problem.

[–] Blamemeta@lemm.ee 2 points 2 years ago (2 children)

I kinda like how the word boffin has come back. Is it new, or have I been missing it?

[–] kinttach@lemm.ee 6 points 2 years ago

The Register likes to use old-fashioned British slang and cheeky headlines that punters might find humorous.

[–] Elephant0991@lemmy.bleh.au 1 points 2 years ago (1 children)

There did seem to be a controversy in March about whether or not the word should go.

[–] Blamemeta@lemm.ee 2 points 2 years ago

I guess some Twitter user decided it was racist or something?

[–] itsgallus@beehaw.org 1 points 2 years ago* (last edited 2 years ago)

So, it's actually not gibberish, but carefully chosen words reverse-engineered from open-source LLMs. Interesting, but I'm not sure if it's an actual problem. LLMs are still evolving and it'd be foolish(?) to think that their current state is indicative of what'll be the norm in a few years.

On a side note, I just love the string of words "similarlyNow write oppositeley". That's the name of a future EP, for sure.