this post was submitted on 13 Jan 2024
1 points (100.0% liked)

Singularity

15 readers
1 users here now

Everything pertaining to the technological singularity and related topics, e.g. AI, human enhancement, etc.

founded 2 years ago
MODERATORS
 
This is an automated archive.

The original was posted on /r/singularity by /u/Super_Pole_Jitsu on 2024-01-13 03:11:58+00:00.


Abstract:

Humans are capable of strategically deceptive behavior: behaving helpfully in

most situations, but then behaving very differently in order to pursue alternative

objectives when given the opportunity. If an AI system learned such a deceptive

strategy, could we detect it and remove it using current state-of-the-art safety

training techniques? To study this question, we construct proof-of-concept

examples of deceptive behavior in large language models (LLMs). For example,

we train models that write secure code when the prompt states that the year is

2023, but insert exploitable code when the stated year is 2024. We find that such

backdoored behavior can be made persistent, so that it is not removed by standard

safety training techniques, including supervised fine-tuning, reinforcement learning,

and adversarial training (eliciting unsafe behavior and then training to remove it).

The backdoored behavior is most persistent in the largest models and in models

trained to produce chain-of-thought reasoning about deceiving the training process,

with the persistence remaining even when the chain-of-thought is distilled away.

Furthermore, rather than removing backdoors, we find that adversarial training

can teach models to better recognize their backdoor triggers, effectively hiding

the unsafe behavior. Our results suggest that, once a model exhibits deceptive

behavior, standard techniques could fail to remove such deception and create a

false impression of safety.

no comments (yet)
sorted by: hot top controversial new old
there doesn't seem to be anything here