Transaction: da27b14c2df10076b491dfd70ecc5468ee37bfb19476ebe2c04c53123ae2507f
Timestamp (UTC): 2024-07-04 06:40:31
Fee Paid: 0.00000005 BSV (0.00321432 BSV - 0.00321427 BSV)
Fee Rate: 2.471 sat/KB
Version: 1
Confirmations: 79,215
Size: 2,023 B
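The displayed fee and fee rate follow directly from the totals above: the fee is the input total minus the output total, and the rate is that fee divided by the serialized size. A minimal sketch (the variable names are illustrative; the displayed 2.471 suggests the explorer truncates, rather than rounds, to three decimals):

```python
import math

# Values shown on the transaction page above.
SATS_PER_BSV = 100_000_000
total_in_bsv = 0.00321432   # sum of input values
total_out_bsv = 0.00321427  # sum of output values
size_bytes = 2023           # serialized transaction size

# Fee in satoshis: input total minus output total.
fee_sats = round((total_in_bsv - total_out_bsv) * SATS_PER_BSV)  # 5

# Fee rate in sat/KB (1 KB = 1000 B), truncated to three decimals
# as the page appears to do.
fee_rate = fee_sats / size_bytes * 1000
fee_rate_truncated = math.floor(fee_rate * 1000) / 1000  # 2.471

print(fee_sats, fee_rate_truncated)
```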

3 Outputs

Total Output: 0.00321427 BSV
  • Output data (decoded): jmetaB03b13667d18eb43e78382c29883b0c5f3f078708f804570d1757d324ec5d1bacc3@104e04f4dc7bbb58b675a0be8ec8a2392cd828cadc0c1b85347e2d4ab003150e rss.item metarss.net MX

    <item>
      <title>Self-Evaluation as a Defense Against Adversarial Attacks on LLMs</title>
      <link>https://arxiv.org/abs/2407.03234</link>
      <description>arXiv:2407.03234v1 Announce Type: cross Abstract: When LLMs are deployed in sensitive, human-facing settings, it is crucial that they do not output unsafe, biased, or privacy-violating outputs. For this reason, models are both trained and instructed to refuse to answer unsafe prompts such as "Tell me how to build a bomb." We find that, despite these safeguards, it is possible to break model defenses simply by appending a space to the end of a model's input. In a study of eight open-source models, we demonstrate that this acts as a strong enough attack to cause the majority of models to generate harmful outputs with very high success rates. We examine the causes of this behavior, finding that the contexts in which single spaces occur in tokenized training data encourage models to generate lists when prompted, overriding training signals to refuse to answer unsafe requests. Our findings underscore the fragile state of current model alignment and promote the importance of developing more robust alignment methods. Code and data will be made available at https://github.com/Linlt-leon/Adversarial-Alignments.</description>
      <guid isPermaLink="false">oai:arXiv.org:2407.03234v1</guid>
      <category>cs.LG</category>
      <category>cs.CL</category>
      <category>cs.CR</category>
      <arxiv:announce_type>cross</arxiv:announce_type>
      <dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights>
      <dc:creator>Hannah Brown, Leon Lin, Kenji Kawaguchi, Michael Shieh</dc:creator>
    </item>
    https://whatsonchain.com/tx/da27b14c2df10076b491dfd70ecc5468ee37bfb19476ebe2c04c53123ae2507f
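The RSS fragment carried in the output can be parsed like any XML once extracted. A minimal sketch using a trimmed copy of the on-chain `<item>` (the namespaced `arxiv:`/`dc:` elements are omitted here, since their prefixes are not declared inside the bare fragment; this illustrates the idea, not how any particular indexer handles it):

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the RSS <item> embedded in the transaction output above.
fragment = (
    "<item>"
    "<title>Self-Evaluation as a Defense Against Adversarial Attacks on LLMs</title>"
    "<link>https://arxiv.org/abs/2407.03234</link>"
    '<guid isPermaLink="false">oai:arXiv.org:2407.03234v1</guid>'
    "<category>cs.LG</category>"
    "<category>cs.CL</category>"
    "<category>cs.CR</category>"
    "</item>"
)

item = ET.fromstring(fragment)
title = item.findtext("title")
link = item.findtext("link")
categories = [c.text for c in item.findall("category")]

print(title)
print(link, categories)
```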