this post was submitted on 07 Jul 2023

science


Discuss the latest science news, original research papers, and connect with fellow enthusiasts.


OpenAI's Superalignment team will address the core technical challenges of controlling superintelligent AI systems and ensuring their alignment with human values and goals.

To accomplish this, they are developing a 'human-level automated alignment researcher,' which itself is an AI. This automated researcher will utilize human feedback and assist in evaluating other AI systems, playing a critical role in advancing alignment research. The ultimate aim is to build AI systems that can conceive, implement, and develop improved alignment techniques.

OpenAI's hypothesis is that AI systems can make faster and more effective progress in alignment research compared to humans. Through collaboration between AI systems and human researchers, continuous improvements will be made to ensure AI alignment with human values.

So, using AI to control other AI; what do you think?

top 2 comments
[–] mackwinston 3 points 1 year ago (1 children)

Given that human values can be pretty screwed up, what could possibly go wrong?

[–] Tatters 2 points 1 year ago

A lot of politics seems to be a clash of diametrically opposed value systems, so I wonder which ones the AI will have. Will they, by any chance, be close to the researchers' own?