Overview

The player controls a small blue spacecraft. The game starts in a fictional solar system with several planets to explore. If the player moves the ship into a planet, they are taken to a side-view landscape. Unlike many other shooting games, gravity plays a significant role in Gravitar: the ship is slowly pulled toward the deadly star in the overworld, and downward in the side-view levels.

The player has five buttons: two to rotate the ship left or right, one to shoot, one to activate the thruster, and one for both a tractor beam and a force field. Gravitar, Asteroids, Asteroids Deluxe, and Space Duel all use a similar five-button control scheme.
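
The reinforcement-learning results below are obtained on the ALE/gym version of the game, where these controls are exposed as a single discrete action set rather than separate buttons. A minimal sketch for inspecting that action set, assuming the classic gym API with the Atari extras (ale-py) and the Gravitar ROM installed:

```python
import gym  # classic gym API; assumes the Atari extras (ale-py) and the Gravitar ROM are installed

env = gym.make("GravitarNoFrameskip-v4")
print(env.action_space)                     # discrete actions combining rotate/thrust/fire
print(env.unwrapped.get_action_meanings())  # human-readable names such as NOOP, FIRE, UP, LEFT, RIGHT
env.close()
```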

In the side-view levels, the player has to destroy red bunkers that shoot constantly, and can also use the tractor beam to pick up blue fuel tanks. Once all of the bunkers are destroyed, the planet blows up and the player earns a bonus. Once all planets are destroyed, the player moves on to another solar system.

The player loses a life by crashing into the terrain or getting hit by an enemy shot, and the game ends immediately if fuel runs out.

Gravitar has 12 different planets. The Red Planet appears in all three phases of the universe and contains a reactor. Shooting the reactor core activates a link; escaping the reactor successfully moves the player to the next phase of planets and awards bonus points and 7,500 units of fuel. The reactor escape time shortens after each phase, until the escape eventually becomes virtually impossible.

After completing all 11 planets (or, alternatively, completing the reactor three times), the player enters the second universe, where gravity is reversed: instead of dragging the ship toward the planet surface, it pushes the ship away. In the third universe the landscape becomes invisible and gravity is positive again. The final, fourth universe has an invisible landscape and reverse gravity. After completing the fourth universe the game starts over; however, the reactor escape time never resets back to its initial, more forgiving values.

Description from Wikipedia

State of the Art
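
The scores below are grouped by evaluation protocol. Under Human Starts, evaluation episodes begin from states sampled from human play (as in Massively Parallel Methods for Deep Reinforcement Learning); under No-op Starts, each episode begins with a random number of no-op actions, up to 30 (as in Human-level control through deep reinforcement learning); Normal Starts simply uses the environment's default resets. As a rough illustration, here is a minimal sketch of the no-op starts protocol, assuming the classic gym API (pre-0.26 step signature) with the Atari extras installed; the environment id and the policy argument are illustrative:

```python
import random

import gym  # classic gym API (4-tuple step); assumes ale-py / Atari extras and the Gravitar ROM


def evaluate_noop_starts(policy=None, episodes=30, max_noops=30, seed=0):
    """Average episode return under the no-op starts protocol: each episode
    begins with 1..max_noops no-op actions before the policy takes over."""
    env = gym.make("GravitarNoFrameskip-v4")
    env.seed(seed)
    rng = random.Random(seed)
    act = policy if policy is not None else (lambda obs: env.action_space.sample())

    returns = []
    for _ in range(episodes):
        obs, done, episode_return = env.reset(), False, 0.0
        # Random no-ops at the start of the episode (action 0 is NOOP in the ALE).
        for _ in range(rng.randint(1, max_noops)):
            obs, reward, done, _ = env.step(0)
            episode_return += reward
            if done:
                obs, done = env.reset(), False
        # Hand control to the evaluated policy for the rest of the episode.
        while not done:
            obs, reward, done, _ = env.step(act(obs))
            episode_return += reward
        returns.append(episode_return)

    env.close()
    return sum(returns) / len(returns)


if __name__ == "__main__":
    # With no policy supplied, this scores a uniform-random baseline.
    print(evaluate_noop_starts())
```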

Human Starts

| Result | Method | Type | Score from |
| --- | --- | --- | --- |
| 3116.0 | Human | Human | Massively Parallel Methods for Deep Reinforcement Learning |
| 662.0 | ApeX DQN | DQN | Distributed Prioritized Experience Replay |
| 567.5 | RainbowDQN | DQN | Rainbow: Combining Improvements in Deep Reinforcement Learning |
| 538.37 | GorilaDQN | DQN | Massively Parallel Methods for Deep Reinforcement Learning |
| 422.0 | DistributionalDQN | DQN | Rainbow: Combining Improvements in Deep Reinforcement Learning |
| 351.0 | PERDQN (rank) | DQN | Prioritized Experience Replay |
| 320.0 | A3C LSTM | PG | Asynchronous Methods for Deep Reinforcement Learning |
| 303.5 | A3C FF (4 days) | PG | Asynchronous Methods for Deep Reinforcement Learning |
| 298.0 | DQN2015 | DQN | Dueling Network Architectures for Deep Reinforcement Learning |
| 297.0 | DuelingDDQN | DQN | Dueling Network Architectures for Deep Reinforcement Learning |
| 269.5 | A3C FF (1 day) | PG | Asynchronous Methods for Deep Reinforcement Learning |
| 269.5 | PERDDQN (rank) | DQN | Prioritized Experience Replay |
| 250.5 | NoisyNetDQN | DQN | Rainbow: Combining Improvements in Deep Reinforcement Learning |
| 245.5 | Random | Random | Massively Parallel Methods for Deep Reinforcement Learning |
| 218.0 | PERDDQN (prop) | DQN | Prioritized Experience Replay |
| 216.5 | DQN2015 | DQN | Massively Parallel Methods for Deep Reinforcement Learning |
| 200.5 | DDQN | DQN | Deep Reinforcement Learning with Double Q-learning |
| 167.0 | DuelingPERDQN | DQN | Dueling Network Architectures for Deep Reinforcement Learning |

No-op Starts

| Result | Method | Type | Score from |
| --- | --- | --- | --- |
| 3351.4 | Human | Human | Dueling Network Architectures for Deep Reinforcement Learning |
| 2796.1 | DuelingPERDDQN | DQN | Deep Q-Learning from Demonstrations |
| 2672 | Human | Human | Human-level control through deep reinforcement learning |
| 2209.0 | NoisyNet-DuelingDQN | DQN | Noisy Networks for Exploration |
| 1685.1 | DQfD | Imitation | Deep Q-Learning from Demonstrations |
| 1682.0 | DuelingDQN | DQN | Noisy Networks for Exploration |
| 1598.5 | ApeX DQN | DQN | Distributed Prioritized Experience Replay |
| 1419.3 | RainbowDQN | DQN | Rainbow: Combining Improvements in Deep Reinforcement Learning |
| 1054.58 | GorilaDQN | DQN | Massively Parallel Methods for Deep Reinforcement Learning |
| 681.0 | DistributionalDQN | DQN | Rainbow: Combining Improvements in Deep Reinforcement Learning |
| 588.0 | DuelingDDQN | DQN | Dueling Network Architectures for Deep Reinforcement Learning |
| 548.5 | PERDDQN (rank) | DQN | Dueling Network Architectures for Deep Reinforcement Learning |
| 483.5 | DDQN+PopArt | DQN | Learning values across many orders of magnitude |
| 473.0 | DQN2015 | DQN | Dueling Network Architectures for Deep Reinforcement Learning |
| 447.0 | NoisyNet-DQN | DQN | Noisy Networks for Exploration |
| 443.5 | NoisyNetDQN | DQN | Rainbow: Combining Improvements in Deep Reinforcement Learning |
| 440.0 | C51 | Misc | A Distributional Perspective on Reinforcement Learning |
| 429 | Contingency | Misc | Human-level control through deep reinforcement learning |
| 412.0 | DDQN | DQN | Dueling Network Architectures for Deep Reinforcement Learning |
| 387.7 | Linear | Misc | Human-level control through deep reinforcement learning |
| 379.0 | A3C | PG | Noisy Networks for Exploration |
| 366.0 | DQN | DQN | Noisy Networks for Exploration |
| 330.5 | PERDDQN (prop) | DQN | Rainbow: Combining Improvements in Deep Reinforcement Learning |
| 314.0 | NoisyNet-A3C | PG | Noisy Networks for Exploration |
| 306.7 | DQN2015 | DQN | Human-level control through deep reinforcement learning |
| 238.0 | DuelingPERDQN | DQN | Dueling Network Architectures for Deep Reinforcement Learning |
| 173 | Random | Random | Human-level control through deep reinforcement learning |
| 170.5 | DDQN | DQN | Deep Reinforcement Learning with Double Q-learning |

Normal Starts

| Result | Method | Type | Score from |
| --- | --- | --- | --- |
| 737.2 | PPO | PG | Proximal Policy Optimization Algorithms |
| 225.3 | ACER | PG | Proximal Policy Optimization Algorithms |
| 194.0 | A2C | PG | Proximal Policy Optimization Algorithms |