Deep Reinforcement Learning from Human Preferences


This paper presents a novel method for training reinforcement learning agents using feedback from human observers. The main idea is to train a reward model from human comparisons of different trajectories, and then use this model to guide the reinforcement learning agent.

The process can be divided into three main steps:

Step 1: Initial demonstrations: A human demonstrator provides initial trajectories by playing the game or task; these trajectories serve as the starting data for the loop.

Step 2: Reward model training: The agent collects new trajectories, and from each of these a random segment is chosen and paired with a random segment from another trajectory. A human evaluator then compares the two segments and indicates which one is better. Using these comparisons, a reward model is trained to predict the human's preferences, via a standard supervised learning approach.
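Concretely, the comparison-collection step might look something like the following sketch. The segment length of 25 steps, the trajectory representation, and the `ask_human` callback are illustrative assumptions, not details taken from the paper:

```python
import random

def sample_segment(trajectory, length=25):
    """Pick a random fixed-length clip from a trajectory (a list of steps)."""
    start = random.randint(0, len(trajectory) - length)
    return trajectory[start:start + length]

def collect_comparison(traj_a, traj_b, ask_human):
    """Show the human one random clip from each trajectory and record the label.

    ask_human(seg_a, seg_b) is assumed to return 1.0 if the first clip is
    preferred and 0.0 if the second is.
    """
    seg_a = sample_segment(traj_a)
    seg_b = sample_segment(traj_b)
    return seg_a, seg_b, ask_human(seg_a, seg_b)
```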

Given two trajectory segments $s_i$ and $s_j$, the probability that the human evaluator prefers $s_i$ over $s_j$ is modeled as

$$ P(s_i \succ s_j) = \frac{1}{1 + \exp\!\big(-(f_{\theta}(s_i) - f_{\theta}(s_j))\big)} $$

where $f_{\theta}(s)$ is the score the reward model assigns to segment $s$ (the sum of its predicted per-step rewards).
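Since this is a logistic (Bradley-Terry) model of preferences, the reward model can be fit by minimizing the cross-entropy between predicted and observed preferences. Below is a minimal sketch in PyTorch, assuming a `reward_model` that maps a segment (a tensor of steps) to per-step rewards; the interface and names are assumptions for illustration, not the paper's code:

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_model, seg_i, seg_j, label):
    """Cross-entropy loss for a single human comparison.

    label is 1.0 if the human preferred seg_i and 0.0 if they preferred seg_j.
    """
    score_i = reward_model(seg_i).sum()   # f_theta(s_i): sum of per-step rewards
    score_j = reward_model(seg_j).sum()   # f_theta(s_j)
    # P(s_i > s_j) = sigmoid(f_theta(s_i) - f_theta(s_j)), matching the formula above
    logit = score_i - score_j
    return F.binary_cross_entropy_with_logits(logit, torch.tensor(label))
```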

Step 3: Proximal Policy Optimization: The agent is then trained with Proximal Policy Optimization (PPO) using the reward model from Step 2 as the reward signal. This generates new trajectories that are then used in Step 2 to update the reward model, and the process is repeated.
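To make the loop concrete, here is a rough sketch of how the learned reward could stand in for the environment reward during PPO training. `policy.act`, `reward_model.predict`, and `ppo_update` are hypothetical placeholders for whatever policy, reward model, and PPO implementation are in use, not APIs from the paper:

```python
def train_with_learned_reward(env, policy, reward_model, ppo_update, iterations=1000):
    """Outer training loop: roll out the policy, relabel rewards with the
    learned reward model, then apply a standard PPO update."""
    for _ in range(iterations):
        batch = []
        obs, done = env.reset(), False
        while not done:
            action = policy.act(obs)
            next_obs, _, done, _ = env.step(action)          # the true reward is discarded
            batch.append((obs, action, reward_model.predict(obs, action)))
            obs = next_obs
        ppo_update(policy, batch)
        # the rollouts in `batch` also go back to Step 2, where fresh human
        # comparisons are collected and the reward model is updated
    return policy
```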

Here's an overall schematic of the approach:

Human Demonstrator -----> Initial Trajectories ----> RL Agent
                     |                                  |
                     |                                  |
                     v                                  v
             Comparisons of trajectory segments  <---- New Trajectories
                     |                                  ^
                     |                                  |
                     v                                  |
                Reward Model -----------------------> Proximal Policy Optimization

The model used for making reward predictions in the paper is a deep neural network. It assigns a score to each trajectory segment by summing the predicted per-step rewards over the segment, and the predicted probability that the human prefers one segment over the other is computed from the difference between the two scores, as in the formula above.
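As an illustration, a reward network for low-dimensional observations could look like the sketch below. The layer sizes and the choice of observation-action inputs are assumptions for this example rather than the architectures used in the paper (which differ between the Atari and robotics tasks):

```python
import torch.nn as nn

class RewardNet(nn.Module):
    """Small MLP that assigns a scalar reward to each (observation, action) step."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, segment):
        # segment: tensor of shape (T, obs_dim + act_dim) -> per-step rewards of shape (T,)
        return self.net(segment).squeeze(-1)

def segment_score(model, segment):
    """f_theta(s): the segment's score is the sum of its predicted per-step rewards."""
    return model(segment).sum()
```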

One of the key insights from the paper is that the reward model does not need to recover the environment's true reward in order to train a successful agent. It is enough that the learned reward ranks trajectories by quality consistently with the human's judgments. This lets the agent learn effectively from human feedback even when that feedback is noisy or incomplete.
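A toy illustration of this point (with made-up numbers): a learned reward that gets the absolute returns wrong but preserves their ordering still ranks trajectories the same way, which is all the policy optimization step needs:

```python
# Two reward estimates that disagree on magnitudes but agree on ordering.
true_returns    = {"traj_a": 10.0, "traj_b": 4.0, "traj_c": 7.5}
learned_returns = {"traj_a": 2.3,  "traj_b": 0.1, "traj_c": 1.8}   # wrong scale, same ordering

rank = lambda returns: sorted(returns, key=returns.get, reverse=True)
assert rank(true_returns) == rank(learned_returns)    # ['traj_a', 'traj_c', 'traj_b']
```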

The authors conducted several experiments to validate their approach, testing the method on a range of tasks that included several Atari games and simulated robot locomotion. In most of these tasks, the agent was able to learn effectively from a comparatively small amount of human feedback and reach good performance.

In terms of implications, this work is a significant step toward reinforcement learning algorithms that learn effectively from human feedback. It could make it easier to train AI systems on complex tasks where a detailed reward function is hard to specify, and it may help address some of the safety and ethical concerns associated with AI systems. However, the authors note that further research is needed to improve the efficiency and reliability of the method and to explore its applicability to a wider range of tasks.

I hope this gives you a good understanding of the paper. Please let me know if you have any questions or would like more details on any aspect.



Tags: AI Safety, 2017