Overview

Stargunner is a scrolling shooter for the Atari 2600 written by Alex Leavens and published by Telesys in 1982.

Description from Wikipedia

State of the Art

Human Starts

Evaluation episodes begin from start states sampled from human play, which makes it harder for an agent to score well by memorizing a single deterministic trajectory.

| Result | Method | Type | Score from |
| --- | --- | --- | --- |
| 432958.0 | ApeX DQN | DQN | Distributed Prioritized Experience Replay |
| 164766.0 | A3C LSTM | PG | Asynchronous Methods for Deep Reinforcement Learning |
| 138218.0 | A3C FF (4 days) | PG | Asynchronous Methods for Deep Reinforcement Learning |
| 127073.0 | DuelingPERDQN | DQN | Dueling Network Architectures for Deep Reinforcement Learning |
| 123853.0 | RainbowDQN | DQN | Rainbow: Combining Improvements in Deep Reinforcement Learning |
| 90804.0 | DuelingDDQN | DQN | Dueling Network Architectures for Deep Reinforcement Learning |
| 67054.5 | DistributionalDQN | DQN | Rainbow: Combining Improvements in Deep Reinforcement Learning |
| 64393.0 | A3C FF (1 day) | PG | Asynchronous Methods for Deep Reinforcement Learning |
| 61582.0 | PERDDQN (rank) | DQN | Prioritized Experience Replay |
| 58946.0 | PERDQN (rank) | DQN | Prioritized Experience Replay |
| 58365.0 | DDQN | DQN | Deep Reinforcement Learning with Double Q-learning |
| 52970.0 | DQN2015 | DQN | Dueling Network Architectures for Deep Reinforcement Learning |
| 51959.0 | PERDDQN (prop) | DQN | Prioritized Experience Replay |
| 34081.0 | DQN2015 | DQN | Massively Parallel Methods for Deep Reinforcement Learning |
| 31864.5 | NoisyNetDQN | DQN | Rainbow: Combining Improvements in Deep Reinforcement Learning |
| 14919.25 | GorilaDQN | DQN | Massively Parallel Methods for Deep Reinforcement Learning |
| 9528.0 | Human | Human | Massively Parallel Methods for Deep Reinforcement Learning |
| 697.0 | Random | Random | Massively Parallel Methods for Deep Reinforcement Learning |
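Papers in this lineage often report results in human-normalized form, where 0 corresponds to the random policy and 1 to human performance. A minimal sketch, using the Human and Random baselines from the human-starts table above (the function name is illustrative):

```python
# Human-normalized score as used throughout the DQN literature:
# 0.0 = random play, 1.0 = human play. The default baselines are the
# Human (9528.0) and Random (697.0) rows of the human-starts table.

def human_normalized(score, random_score=697.0, human_score=9528.0):
    """Rescale a raw game score against random and human baselines."""
    return (score - random_score) / (human_score - random_score)

# ApeX DQN's 432958.0 works out to roughly 49x human performance.
print(round(human_normalized(432958.0), 2))  # 48.95
```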

No-op Starts

Evaluation episodes begin with a random number (up to 30) of no-op actions, following the protocol introduced by the original DQN papers to randomize the start state.

| Result | Method | Type | Score from |
| --- | --- | --- | --- |
| 434342.5 | ApeX DQN | DQN | Distributed Prioritized Experience Replay |
| 127029.0 | RainbowDQN | DQN | Rainbow: Combining Improvements in Deep Reinforcement Learning |
| 125117.0 | DuelingPERDQN | DQN | Dueling Network Architectures for Deep Reinforcement Learning |
| 89238.0 | DuelingDDQN | DQN | Dueling Network Architectures for Deep Reinforcement Learning |
| 82920.0 | ACKTR | PG | Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation |
| 75867.0 | NoisyNet-DuelingDQN | DQN | Noisy Networks for Exploration |
| 70264.0 | DuelingDQN | DQN | Noisy Networks for Exploration |
| 69306.5 | DistributionalDQN | DQN | Rainbow: Combining Improvements in Deep Reinforcement Learning |
| 65188.0 | DDQN | DQN | Deep Reinforcement Learning with Double Q-learning |
| 63302.0 | PER | DQN | Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation |
| 63302.0 | PERDDQN (rank) | DQN | Dueling Network Architectures for Deep Reinforcement Learning |
| 60142.0 | DDQN | DQN | Dueling Network Architectures for Deep Reinforcement Learning |
| 57997.0 | DQN2015 | DQN | Human-level control through deep reinforcement learning |
| 56641.0 | PERDDQN (prop) | DQN | Rainbow: Combining Improvements in Deep Reinforcement Learning |
| 54282.0 | DQN2015 | DQN | Dueling Network Architectures for Deep Reinforcement Learning |
| 49156.0 | A3C | PG | Noisy Networks for Exploration |
| 49095.0 | C51 | Misc | A Distributional Perspective on Reinforcement Learning |
| 47133.0 | NoisyNet-DQN | DQN | Noisy Networks for Exploration |
| 45008.0 | NoisyNet-A3C | PG | Noisy Networks for Exploration |
| 40934.0 | DQN | DQN | Noisy Networks for Exploration |
| 34504.5 | NoisyNetDQN | DQN | Rainbow: Combining Improvements in Deep Reinforcement Learning |
| 19144.99 | GorilaDQN | DQN | Massively Parallel Methods for Deep Reinforcement Learning |
| 10250.0 | Human | Human | Human-level control through deep reinforcement learning |
| 1070.0 | Linear | Misc | Human-level control through deep reinforcement learning |
| 664.0 | Random | Random | Human-level control through deep reinforcement learning |
| 589.0 | DDQN+PopArt | DQN | Learning values across many orders of magnitude |
| 9.4 | Contingency | Misc | Human-level control through deep reinforcement learning |
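The no-op protocol above can be sketched as a thin environment wrapper: after each reset, force up to 30 "do nothing" actions before the agent may act. This is a minimal self-contained illustration, not any particular library's API; `ToyEnv`, `NoopStartEnv`, and `NOOP_ACTION` are all illustrative names (in ALE, action 0 happens to be NOOP).

```python
import random

NOOP_ACTION = 0   # in ALE, action 0 is the no-op
MAX_NOOPS = 30    # the "up to 30 no-ops" convention from the DQN papers

class ToyEnv:
    """Stand-in environment that just counts the actions it receives."""
    def __init__(self):
        self.steps_taken = 0
    def reset(self):
        self.steps_taken = 0
        return 0  # dummy observation
    def step(self, action):
        self.steps_taken += 1
        return 0, 0.0, False  # obs, reward, done

class NoopStartEnv:
    """Wrap an env so every episode begins with 1..MAX_NOOPS no-ops."""
    def __init__(self, env, rng=None):
        self.env = env
        self.rng = rng or random.Random()
    def reset(self):
        obs = self.env.reset()
        for _ in range(self.rng.randint(1, MAX_NOOPS)):
            obs, _, done = self.env.step(NOOP_ACTION)
            if done:  # rare, but re-reset if the env ends mid-no-ops
                obs = self.env.reset()
        return obs
    def step(self, action):
        return self.env.step(action)

env = NoopStartEnv(ToyEnv(), rng=random.Random(42))
env.reset()
print(1 <= env.env.steps_taken <= MAX_NOOPS)  # True
```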

Normal Starts

Scores are episode returns from ordinary environment resets, with no start-state randomization, as reported in the PPO paper.

| Result | Method | Type | Score from |
| --- | --- | --- | --- |
| 49817.7 | ACER | PG | Proximal Policy Optimization Algorithms |
| 32689.0 | PPO | PG | Proximal Policy Optimization Algorithms |
| 26204.0 | A2C | PG | Proximal Policy Optimization Algorithms |