---
pipeline_tag: reinforcement-learning
library_name: pytorch
language:
  - en
tags:
  - reinforcement-learning
  - deep-reinforcement-learning
  - pytorch
  - gymnasium
  - collision-avoidance
  - navigation
  - self-driving
  - autonomous-vehicle
model-index:
  - name: sac_v2-230704203226
    results:
      - task:
          type: reinforcement-learning
          name: reinforcement-learning
        dataset:
          name: urban-road-v0
          type: RoadEnv
        metrics:
          - type: mean-reward
            value: 0.53 - 0.72
            name: mean-reward
  - name: sac_v2_lstm-230706072839
    results:
      - task:
          type: reinforcement-learning
          name: reinforcement-learning
        dataset:
          name: urban-road-v0
          type: RoadEnv
        metrics:
          - type: mean-reward
            value: 0.62 - 0.76
            name: mean-reward
---

This repository contains model weights for agents trained on the RoadEnv urban road environment (`urban-road-v0`).

## Models

| Model | Environment | Mean reward |
|---|---|---|
| `sac_v2-230704203226` | `urban-road-v0` | 0.53 - 0.72 |
| `sac_v2_lstm-230706072839` | `urban-road-v0` | 0.62 - 0.76 |
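
The checkpoint files can be fetched with `huggingface_hub`. Below is a minimal sketch, assuming the repository id `kengboon/rsac-RoadEnv` and a placeholder checkpoint filename; check the repository file listing for the actual names.

```python
# Sketch: download a checkpoint file from this repository.
# repo_id and filename are assumptions, not confirmed by this card.
from huggingface_hub import hf_hub_download

checkpoint_path = hf_hub_download(
    repo_id="kengboon/rsac-RoadEnv",     # assumed repository id
    filename="sac_v2-230704203226.pth",  # placeholder: use the actual file name
)
print(checkpoint_path)
```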

## Usage

```python
# Register environment
from road_env import register_road_envs
register_road_envs()

# Make environment
import gymnasium as gym
env = gym.make('urban-road-v0', render_mode='rgb_array')

# Configure parameters (example)
env.configure({
    "random_seed": None,
    "duration": 60,
})

obs, info = env.reset()

# Graphic display
import matplotlib.pyplot as plt
plt.imshow(env.render())

# Execution
done = truncated = False
while not (done or truncated):
    action = ... # Your agent code here
    obs, reward, done, truncated, info = env.step(action)
    env.render() # Update graphic (returns the current frame in 'rgb_array' mode)
```
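
To run one of the released checkpoints in place of the `action = ...` placeholder, load the weights into the policy network used during training. The architecture is not documented in this card, so the `GaussianPolicy` class below is only a stand-in; substitute the actual policy class from the training code that produced these weights. The sketch reuses `checkpoint_path` from the download example above.

```python
# Minimal sketch, assuming a flat continuous observation/action space and a
# simple MLP policy. GaussianPolicy is a placeholder; the real checkpoint
# expects whatever network architecture it was trained with.
import numpy as np
import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):  # placeholder architecture, not the actual one
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean = nn.Linear(hidden, act_dim)

    def forward(self, obs):
        # Deterministic (mean) action for evaluation
        return torch.tanh(self.mean(self.net(obs)))

obs_dim = int(np.prod(env.observation_space.shape))
act_dim = int(np.prod(env.action_space.shape))
policy = GaussianPolicy(obs_dim, act_dim)
policy.load_state_dict(torch.load(checkpoint_path, map_location="cpu"))
policy.eval()

obs, info = env.reset()
done = truncated = False
while not (done or truncated):
    with torch.no_grad():
        obs_t = torch.as_tensor(obs, dtype=torch.float32).flatten().unsqueeze(0)
        action = policy(obs_t).squeeze(0).numpy()
    obs, reward, done, truncated, info = env.step(action)
```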