Navigating Soft Actor-Critic Reinforcement Learning | by Mohammed AbuSadeh | Dec, 2024

The code used in this article is taken from the following GitHub repository (quantumiracle, 2023):

pip install gymnasium torch

SAC relies on environments with continuous action spaces, so the simulation presented mostly uses the 'Reacher' robotic-arm environment, along with the Pendulum-v1 environment from the gymnasium package.

The Pendulum environment was run on a different repository that implements the same algorithm but with fewer deprecated libraries, given by (MrSyee, 2020):
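Before training, it can help to confirm that the environment really does expose a continuous action space, since that is what SAC requires. The short sketch below is not from the repository; it is standard gymnasium usage for Pendulum-v1:

import gymnasium as gym

# Create the Pendulum environment; its action space is a Box (continuous),
# which is the kind of action space SAC requires.
env = gym.make("Pendulum-v1")
print(env.action_space)  # Box(-2.0, 2.0, (1,), float32)

obs, info = env.reset(seed=0)
action = env.action_space.sample()  # a random continuous action
obs, reward, terminated, truncated, info = env.step(action)
env.close()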

In terms of the network architectures, as mentioned in the Concept Explanation, there are three main components:

Policy Network: implements a Gaussian actor network that computes the mean and log standard deviation of the action distribution.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyNetwork(nn.Module):
    def __init__(self, state_dim, action_dim, hidden_dim):
        super(PolicyNetwork, self).__init__()
        self.fc1 = nn.Linear(state_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, hidden_dim)
        self.mean = nn.Linear(hidden_dim, action_dim)
        self.log_std = nn.Linear(hidden_dim, action_dim)

    def forward(self, state):
        x = F.relu(self.fc1(state))
        x = F.relu(self.fc2(x))
        mean = self.mean(x)
        log_std = torch.clamp(self.log_std(x), -20, 2)  # limit log_std to prevent instability
        return mean, log_std
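The update step shown later calls an evaluate method on the policy to obtain an action and its log-probability. The repository implements this with the reparameterization trick and a tanh squashing correction; the following is only a minimal sketch of that idea (the method name and return signature are assumed to match how it is called below), to be placed inside PolicyNetwork:

    def evaluate(self, state, epsilon=1e-6):
        mean, log_std = self.forward(state)
        std = log_std.exp()
        normal = torch.distributions.Normal(mean, std)
        z = normal.rsample()            # reparameterized sample, keeps gradients
        action = torch.tanh(z)          # squash the action into [-1, 1]
        # Change-of-variables correction for the tanh squashing
        log_prob = normal.log_prob(z) - torch.log(1 - action.pow(2) + epsilon)
        log_prob = log_prob.sum(dim=-1, keepdim=True)
        return action, log_prob, z, mean, log_std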

Soft Q-Network: estimates the expected future reward of a given state-action pair under the defined optimal policy.

class SoftQNetwork(nn.Module):
    def __init__(self, state_dim, action_dim, hidden_dim):
        super(SoftQNetwork, self).__init__()
        self.fc1 = nn.Linear(state_dim + action_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, 1)

    def forward(self, state, action):
        x = torch.cat([state, action], dim=-1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.out(x)

Value Network: estimates the state value.

class ValueNetwork(nn.Module):
    def __init__(self, state_dim, hidden_dim):
        super(ValueNetwork, self).__init__()
        self.fc1 = nn.Linear(state_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, 1)

    def forward(self, state):
        x = F.relu(self.fc1(state))
        x = F.relu(self.fc2(x))
        return self.out(x)
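The update function below assumes that the networks, their target copy, the optimizers, and a replay buffer already exist as module-level objects. The setup sketch below is an assumption consistent with how those names are used, not the repository's exact code; the dimensions and hyperparameters are placeholders (the state/action sizes shown match Pendulum-v1), and the simple ReplayBuffer stands in for the repository's version:

import random
from collections import deque

import numpy as np
import torch.optim as optim

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

state_dim, action_dim, hidden_dim = 3, 1, 256   # Pendulum-v1 sizes as an example
alpha = 0.2                                     # entropy temperature

soft_q_net1 = SoftQNetwork(state_dim, action_dim, hidden_dim).to(device)
soft_q_net2 = SoftQNetwork(state_dim, action_dim, hidden_dim).to(device)
value_net = ValueNetwork(state_dim, hidden_dim).to(device)
target_value_net = ValueNetwork(state_dim, hidden_dim).to(device)
target_value_net.load_state_dict(value_net.state_dict())  # start as an exact copy
policy_net = PolicyNetwork(state_dim, action_dim, hidden_dim).to(device)

soft_q_optimizer1 = optim.Adam(soft_q_net1.parameters(), lr=3e-4)
soft_q_optimizer2 = optim.Adam(soft_q_net2.parameters(), lr=3e-4)
value_optimizer = optim.Adam(value_net.parameters(), lr=3e-4)
policy_optimizer = optim.Adam(policy_net.parameters(), lr=3e-4)

class ReplayBuffer:
    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        batch = random.sample(self.buffer, batch_size)
        state, action, reward, next_state, done = map(np.stack, zip(*batch))
        return state, action, reward, next_state, done

    def __len__(self):
        return len(self.buffer)

replay_buffer = ReplayBuffer()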

The following snippet shows the key steps for updating the different variables according to the SAC algorithm. It starts by sampling a batch from the replay buffer for experience replay. Before the gradients are computed, they are reset to zero so that gradients from previous batches are not accumulated; backpropagation is then performed and the network weights are updated during training. The targets and losses are computed for the Q-networks first, and the same zero-grad, backward, and step pattern is then applied to the value network and the policy network.

def update(batch_size, reward_scale, gamma=0.99, soft_tau=1e-2):
    # Sample a batch from the replay buffer
    state, action, reward, next_state, done = replay_buffer.sample(batch_size)
    state, next_state, action = map(lambda x: torch.FloatTensor(x).to(device),
                                    [state, next_state, action])
    reward = torch.FloatTensor(reward).unsqueeze(1).to(device)          # shape (batch, 1)
    done = torch.FloatTensor(np.float32(done)).unsqueeze(1).to(device)  # shape (batch, 1)
    # reward_scale is kept for parity with the original signature but not applied in this excerpt

    # Sample fresh actions from the current policy for the value and policy updates
    new_action, log_prob, _, _, _ = policy_net.evaluate(state)

    # Update Q-networks
    target_value = target_value_net(next_state)
    target_q = reward + (1 - done) * gamma * target_value
    q1_loss = F.mse_loss(soft_q_net1(state, action), target_q.detach())
    q2_loss = F.mse_loss(soft_q_net2(state, action), target_q.detach())

    soft_q_optimizer1.zero_grad()
    q1_loss.backward()
    soft_q_optimizer1.step()

    soft_q_optimizer2.zero_grad()
    q2_loss.backward()
    soft_q_optimizer2.step()

    # Update Value Network (use the minimum of the two Q-estimates)
    predicted_q = torch.min(soft_q_net1(state, new_action), soft_q_net2(state, new_action))
    value_loss = F.mse_loss(value_net(state), (predicted_q - alpha * log_prob).detach())
    value_optimizer.zero_grad()
    value_loss.backward()
    value_optimizer.step()

    # Update Policy Network
    policy_loss = (alpha * log_prob - predicted_q).mean()
    policy_optimizer.zero_grad()
    policy_loss.backward()
    policy_optimizer.step()

    # Soft-update the target value network
    for target_param, param in zip(target_value_net.parameters(), value_net.parameters()):
        target_param.data.copy_(soft_tau * param.data + (1 - soft_tau) * target_param.data)
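To put the update step in context, a simplified training loop might look like the sketch below. The loop structure and hyperparameter values are assumptions rather than the repository's exact script, and the tanh-squashed action is used directly without rescaling to the environment's action bounds:

env = gym.make("Pendulum-v1")   # gymnasium imported earlier
max_episodes, max_steps, batch_size = 100, 200, 128

for episode in range(max_episodes):
    state, info = env.reset()
    episode_reward = 0.0
    for step in range(max_steps):
        # Sample an action from the current policy
        state_t = torch.FloatTensor(state).unsqueeze(0).to(device)
        action, _, _, _, _ = policy_net.evaluate(state_t)
        action = action.detach().cpu().numpy()[0]

        next_state, reward, terminated, truncated, info = env.step(action)
        done = terminated or truncated
        replay_buffer.push(state, action, reward, next_state, done)

        state = next_state
        episode_reward += reward
        if len(replay_buffer) > batch_size:
            update(batch_size, reward_scale=10.0)
        if done:
            break
    print(f"Episode {episode}: reward {episode_reward:.1f}")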

Finally, to run the code in the sac.py file, simply run the following commands:

python sac.py --train
python sac.py --test