Introduction
The goal of reinforcement learning is to learn a policy that maximizes the expected return. One direct approach is to search the policy space for the best policy, which is called policy search. Policy search is essentially an optimization problem and can be divided into gradient-based and gradient-free optimization. Compared with value-function-based methods, policy search needs no value function and optimizes the policy directly. A parameterized policy can handle continuous states and actions and can learn a stochastic policy directly.
Policy Gradient is a gradient-based reinforcement learning method. Assume the policy $\pi_\theta$ is a continuously differentiable function of $\theta$ (for example, a neural network); we can then use gradient ascent to optimize the parameters $\theta$ so that the objective function $J(\theta)$, the expected return, is maximized.
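As a warm-up for "optimize $\theta$ by gradient ascent", the toy sketch below (not from the original post) maximizes $J(\theta) = -(\theta-3)^2$ with PyTorch autograd; policy gradient replaces this toy objective with the expected return.

import torch

# Toy objective J(theta) = -(theta - 3)^2, maximized at theta = 3.
theta = torch.tensor(0.0, requires_grad=True)
lr = 0.1

for step in range(100):
    J = -(theta - 3.0) ** 2      # objective to maximize
    J.backward()                 # compute dJ/dtheta
    with torch.no_grad():
        theta += lr * theta.grad # gradient *ascent* step
    theta.grad.zero_()

print(theta.item())  # ~= 3.0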
Model Construction
We build $\pi_\theta$, a continuously differentiable function of $\theta$, as a neural network:
Input of the network: the agent's observation (state), represented as a vector or matrix.
Output of the network: one neuron per action in the output layer, giving the probability of taking that action.
Using the policy $\pi_\theta$ we play one episode of the game and obtain a trajectory $\tau = \{s_1, a_1, s_2, a_2, \dots, s_T, a_T\}$. The total return of this trajectory is $R(\tau) = \sum_{t=1}^{T} r_t$. Because the network outputs a probability for each action, the sampled trajectory $\tau$ differs from run to run even with the same network. We therefore define $\bar{R}_\theta = \mathbb{E}_{\tau \sim p_\theta(\tau)}[R(\tau)]$, the expectation of $R(\tau)$; its size measures how good the policy is, and we want this expectation to be as large as possible.
For the trajectory of one episode, $\tau = \{s_1, a_1, s_2, a_2, \dots, s_T, a_T\}$, the probability of this $\tau$ occurring is
$$p_\theta(\tau) = p(s_1) \prod_{t=1}^{T} \pi_\theta(a_t \mid s_t)\, p(s_{t+1} \mid s_t, a_t),$$
where $\pi_\theta(a_t \mid s_t)$ comes from the policy network, while $p(s_1)$ and $p(s_{t+1} \mid s_t, a_t)$ are determined by the environment and do not depend on $\theta$.
Every time we run an episode, each trajectory $\tau$ may be sampled; with the network parameters fixed, the probability of obtaining trajectory $\tau$ is $p_\theta(\tau)$ (also written $p(\tau \mid \theta)$). Enumerating all trajectories is impossible, so the expectation is approximated by Monte Carlo sampling: we run the policy $\pi_\theta$ for $N$ episodes, obtaining $N$ trajectories $\tau^1, \tau^2, \dots, \tau^N$, and average their returns,
$$\bar{R}_\theta \approx \frac{1}{N} \sum_{n=1}^{N} R(\tau^n).$$
As $N$ tends to infinity, the average converges to the expectation.
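As an illustration of this Monte Carlo estimate, the sketch below rolls out $N$ episodes and averages their returns. It assumes the classic gym API used later in this post (reset() returns the observation, step() returns four values) and uses a random policy as a stand-in for $\pi_\theta$.

import gym
import numpy as np

env = gym.make('CartPole-v1')
N = 100  # number of sampled trajectories tau^1 ... tau^N

def sample_return(env):
    """Roll out one episode and return R(tau) = sum of its rewards."""
    state, done, R = env.reset(), False, 0.0
    while not done:
        action = env.action_space.sample()      # stand-in for a ~ pi_theta(a|s)
        state, reward, done, _ = env.step(action)
        R += reward
    return R

# bar{R}_theta ~= (1/N) * sum_n R(tau^n)
returns = [sample_return(env) for _ in range(N)]
print(np.mean(returns))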
Our goal is to maximize the expected return
$$\bar{R}_\theta = \sum_{\tau} R(\tau)\, p_\theta(\tau),$$
which we can solve by gradient ascent.
(1) Given $\bar{R}_\theta = \sum_{\tau} R(\tau)\, p_\theta(\tau)$, how do we compute $\nabla \bar{R}_\theta$?
$$\nabla \bar{R}_\theta = \sum_{\tau} R(\tau)\, \nabla p_\theta(\tau) = \sum_{\tau} R(\tau)\, p_\theta(\tau)\, \nabla \log p_\theta(\tau) \approx \frac{1}{N} \sum_{n=1}^{N} R(\tau^n)\, \nabla \log p_\theta(\tau^n).$$
Note the identity $\nabla p_\theta(\tau) = p_\theta(\tau)\, \nabla \log p_\theta(\tau)$, which follows from $\nabla \log f(x) = \nabla f(x) / f(x)$; also, $R(\tau)$ itself does not depend on $\theta$.
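This identity can be checked numerically. The small illustration below (not part of the original derivation) compares $\nabla p_\theta$ with $p_\theta \nabla \log p_\theta$ using torch autograd for a softmax-parameterized probability; the 3-dimensional theta and the fixed choice a = 0 are arbitrary.

import torch

theta = torch.randn(3, requires_grad=True)

def p(theta, a=0):
    # p_theta(a) for the fixed outcome a
    return torch.softmax(theta, dim=0)[a]

# Left-hand side: grad of p_theta(a)
grad_p, = torch.autograd.grad(p(theta), theta)

# Right-hand side: p_theta(a) * grad of log p_theta(a)
grad_log_p, = torch.autograd.grad(torch.log(p(theta)), theta)
rhs = p(theta).detach() * grad_log_p

print(torch.allclose(grad_p, rhs))  # True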
(2) Next we derive $\nabla \log p_\theta(\tau)$. From the expression for $p_\theta(\tau)$ above,
$$\log p_\theta(\tau) = \log p(s_1) + \sum_{t=1}^{T} \big(\log \pi_\theta(a_t \mid s_t) + \log p(s_{t+1} \mid s_t, a_t)\big),$$
and since only $\pi_\theta$ depends on $\theta$,
$$\nabla \log p_\theta(\tau) = \sum_{t=1}^{T} \nabla \log \pi_\theta(a_t \mid s_t).$$
Substituting back, we obtain
$$\nabla \bar{R}_\theta \approx \frac{1}{N} \sum_{n=1}^{N} \sum_{t=1}^{T_n} R(\tau^n)\, \nabla \log \pi_\theta(a_t^n \mid s_t^n).$$
(3) Finally we update the parameters $\theta$ by gradient ascent:
$$\theta \leftarrow \theta + \eta\, \nabla \bar{R}_\theta.$$
To summarize: the update increases the probability of actions that appear in trajectories with positive return and decreases the probability of actions in trajectories with negative return, and the weight is the return of the whole trajectory, not the immediate reward.
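In practice the gradient is not assembled by hand: we build a surrogate loss whose gradient equals $-\nabla \bar{R}_\theta$ and let autograd differentiate it, which is exactly what the REINFORCE code below does. Here is a minimal sketch of one ascent step for a single sampled trajectory; the tiny linear policy, the random states, and the return value $R(\tau)=5$ are made-up placeholders.

import torch
from torch.distributions import Categorical

# Tiny linear policy pi_theta(a|s) over 2 actions for 4-dim states (illustrative only).
theta = torch.nn.Linear(4, 2)

# Hypothetical rollout data: states visited and the total return R(tau).
states = torch.randn(3, 4)
R_tau = 5.0

dist = Categorical(logits=theta(states))
actions = dist.sample()

# Surrogate loss: its gradient is -R(tau) * sum_t grad log pi_theta(a_t|s_t),
# so minimizing it performs gradient ascent on bar{R}_theta.
loss = -dist.log_prob(actions).sum() * R_tau
loss.backward()

# One gradient-ascent step with learning rate eta = 1e-2
with torch.no_grad():
    for p in theta.parameters():
        p -= 1e-2 * p.grad  # minus sign: the loss already carries the sign flip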
The REINFORCE Algorithm
Because the reward collected before time step $t$ does not depend on the action $a_t$, the whole-trajectory return $R(\tau)$ in the gradient can be replaced by the discounted return from $t$ onward, $G_t = \sum_{t'=t}^{T} \gamma^{t'-t} r_{t'}$, giving
$$\nabla \bar{R}_\theta \approx \frac{1}{N} \sum_{n=1}^{N} \sum_{t=1}^{T_n} \gamma^t G_t^n\, \nabla \log \pi_\theta(a_t^n \mid s_t^n).$$
Combining this with stochastic gradient ascent, we can sample a single trajectory per update, compute the gradient at every time step, and update the parameters; this is called the REINFORCE algorithm [Williams, 1992]. The per-step update is
$$\theta \leftarrow \theta + \eta\, \gamma^t G_t\, \nabla \log \pi_\theta(a_t \mid s_t).$$
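The returns $G_t$ are usually computed with the backward recursion $G_t = r_t + \gamma G_{t+1}$; a minimal sketch with made-up rewards (the same recursion appears in finish_episode in the code below):

gamma = 0.99
rewards = [1.0, 1.0, 1.0, 1.0]  # hypothetical per-step rewards of one episode

returns, G = [], 0.0
for r in reversed(rewards):     # walk the episode backwards
    G = r + gamma * G           # G_t = r_t + gamma * G_{t+1}
    returns.insert(0, G)
print(returns)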
Code Implementation
import argparse
import gym
import numpy as np
from itertools import count
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.distributions import Categorical
parser = argparse.ArgumentParser(description='PyTorch REINFORCE example')
parser.add_argument('--gamma', type=float, default=0.99, metavar='G',
                    help='discount factor (default: 0.99)')
parser.add_argument('--seed', type=int, default=543, metavar='N',
                    help='random seed (default: 543)')
parser.add_argument('--render', action='store_true',
                    help='render the environment')
parser.add_argument('--log-interval', type=int, default=10, metavar='N',
                    help='interval between training status logs (default: 10)')
args = parser.parse_args()
env = gym.make('CartPole-v1')
env.seed(args.seed)
torch.manual_seed(args.seed)
class Policy(nn.Module):
    # Network structure: [1, 4] --> [1, 128] --> [1, 2] --> softmax
    def __init__(self):
        super(Policy, self).__init__()
        self.affine1 = nn.Linear(4, 128)
        self.dropout = nn.Dropout(p=0.6)
        self.affine2 = nn.Linear(128, 2)
        self.saved_log_probs = []  # log pi_theta(a_t|s_t) for each step
        self.rewards = []          # r_t for each step

    def forward(self, x):
        x = self.affine1(x)
        x = self.dropout(x)
        x = F.relu(x)
        action_scores = self.affine2(x)
        return F.softmax(action_scores, dim=1)  # action probabilities
policy = Policy()
optimizer = optim.Adam(policy.parameters(), lr=1e-2)
eps = np.finfo(np.float32).eps.item()
def select_action(state):
    state = torch.from_numpy(state).float().unsqueeze(0)  # convert the observation to a [1, 4] tensor
    probs = policy(state)
    m = Categorical(probs)
    action = m.sample()  # sample an action according to the output probabilities
    policy.saved_log_probs.append(m.log_prob(action))  # m.log_prob(action) is the log pi_theta(a_t|s_t) term in the formula
    return action.item()
def finish_episode():
    R = 0
    policy_loss = []
    returns = []
    # compute the discounted return G_t received from each time step onward
    for r in policy.rewards[::-1]:
        R = r + args.gamma * R
        returns.insert(0, R)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + eps)  # standardize the returns
    for log_prob, R in zip(policy.saved_log_probs, returns):
        policy_loss.append(-log_prob * R)
    optimizer.zero_grad()
    policy_loss = torch.cat(policy_loss).sum()
    policy_loss.backward()
    optimizer.step()
    del policy.rewards[:]
    del policy.saved_log_probs[:]
def main():
    running_reward = 10
    for i_episode in count(1):
        state, ep_reward = env.reset(), 0
        for t in range(1, 10000):  # don't infinite loop while learning
            action = select_action(state)
            state, reward, done, _ = env.step(action)
            if args.render:
                env.render()
            policy.rewards.append(reward)  # record the reward of every step
            ep_reward += reward            # accumulate the episode reward
            if done:                       # the episode ends when done is True
                break
        # exponential moving average of the episode reward
        running_reward = 0.05 * ep_reward + (1 - 0.05) * running_reward
        finish_episode()
        if i_episode % args.log_interval == 0:
            print('Episode {}\tLast reward: {:.2f}\tAverage reward: {:.2f}'.format(
                  i_episode, ep_reward, running_reward))
        if running_reward > env.spec.reward_threshold:
            print("Solved! Running reward is now {} and "
                  "the last episode runs to {} time steps!".format(running_reward, t))
            break

if __name__ == '__main__':
    main()
$ python temp.py
Episode 10 Last reward: 26.00 Average reward: 16.00
Episode 20 Last reward: 16.00 Average reward: 14.85
Episode 30 Last reward: 49.00 Average reward: 20.77
Episode 40 Last reward: 45.00 Average reward: 27.37
Episode 50 Last reward: 44.00 Average reward: 30.80
Episode 60 Last reward: 111.00 Average reward: 42.69
Episode 70 Last reward: 141.00 Average reward: 70.65
Episode 80 Last reward: 138.00 Average reward: 100.27
Episode 90 Last reward: 30.00 Average reward: 86.27
Episode 100 Last reward: 114.00 Average reward: 108.18
Episode 110 Last reward: 175.00 Average reward: 156.48
Episode 120 Last reward: 141.00 Average reward: 143.86
Episode 130 Last reward: 101.00 Average reward: 132.91
Episode 140 Last reward: 19.00 Average reward: 89.99
Episode 150 Last reward: 30.00 Average reward: 63.52
Episode 160 Last reward: 27.00 Average reward: 49.30
Episode 170 Last reward: 79.00 Average reward: 47.18
Episode 180 Last reward: 77.00 Average reward: 51.38
Episode 190 Last reward: 80.00 Average reward: 63.45
Episode 200 Last reward: 55.00 Average reward: 62.21
Episode 210 Last reward: 38.00 Average reward: 57.16
Episode 220 Last reward: 50.00 Average reward: 54.56
Episode 230 Last reward: 31.00 Average reward: 53.13
Episode 240 Last reward: 115.00 Average reward: 67.47
Episode 250 Last reward: 204.00 Average reward: 137.96
Episode 260 Last reward: 195.00 Average reward: 191.86
Episode 270 Last reward: 214.00 Average reward: 218.05
Episode 280 Last reward: 500.00 Average reward: 294.96
Episode 290 Last reward: 358.00 Average reward: 350.75
Episode 300 Last reward: 327.00 Average reward: 319.03
Episode 310 Last reward: 93.00 Average reward: 260.24
Episode 320 Last reward: 500.00 Average reward: 258.35
Episode 330 Last reward: 83.00 Average reward: 290.17
Episode 340 Last reward: 201.00 Average reward: 259.37
Episode 350 Last reward: 482.00 Average reward: 270.48
Episode 360 Last reward: 500.00 Average reward: 337.46
Episode 370 Last reward: 91.00 Average reward: 375.76
Episode 380 Last reward: 500.00 Average reward: 402.21
Episode 390 Last reward: 260.00 Average reward: 377.44
Episode 400 Last reward: 242.00 Average reward: 327.93
Episode 410 Last reward: 500.00 Average reward: 318.99
Episode 420 Last reward: 390.00 Average reward: 339.36
Episode 430 Last reward: 476.00 Average reward: 364.77
Episode 440 Last reward: 500.00 Average reward: 368.53
Episode 450 Last reward: 82.00 Average reward: 370.17
Episode 460 Last reward: 500.00 Average reward: 404.95
Episode 470 Last reward: 500.00 Average reward: 443.09
Episode 480 Last reward: 500.00 Average reward: 465.92
Solved! Running reward is now 476.20365101578085 and the last episode runs to 500 time steps!
References
Hung-yi Lee, Reinforcement Learning lectures
Xipeng Qiu, Neural Networks and Deep Learning (《神经网络与深度学习》)