Overview

The player assumes the role of a pilot of a futuristic fighter jet, trying to rescue fellow pilots trapped in different time eras. In each level, the player must fight off hordes of enemy craft and then defeat a much stronger enemy ship. The player’s plane always remains in the center of the screen.

The player travels through five time periods, rescuing stranded fellow pilots. The player must fight off droves of enemy craft while picking up parachuting friendly pilots. Once 56 enemy craft are defeated (initially 25 on the MSX platform, increasing by 5 after each completed game cycle, i.e. after finishing the last battle against the UFOs), the player must defeat the mothership for that time period. Once it is destroyed, any remaining enemy craft are also eliminated and the player time-travels to the next level. All the levels have a blue sky and clouds as the background except the last level, which has space and asteroids instead. The specific eras visited, the common enemies, and the motherships are the following:

  1. 1910: biplanes and a blimp
  2. 1940: WWII monoplanes and a B-25
  3. 1970: helicopters and a large, blue CH-46
  4. 1982 (Konami version)/1983 (Centuri version): jets and a B-52
  5. 2001: UFOs

The mothership is destroyed with seven direct hits. Once all the eras have been visited, the levels start over again but are harder and faster. The Game Boy Advance version of Time Pilot in Konami Collector's Series: Arcade Advanced includes a hidden sixth era, 1,000,000 BC, where the player must destroy vicious pterodactyls in order to return to the early 20th century.

Description from Wikipedia

State of the Art
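
The tables below are grouped by evaluation protocol, following the cited papers: human starts begin each episode from a state sampled from human play; no-op starts begin each episode with a random number (up to 30) of no-op actions; normal starts are plain environment resets, as reported in the PPO paper. As a minimal sketch of the no-op-starts protocol — assuming recent gymnasium and ale-py packages, with a random policy standing in for a trained agent (note that the ALE/TimePilot-v5 id applies ALE defaults such as sticky actions, which differ from the deterministic NoFrameskip setup used by many of these papers):

```python
import random

import gymnasium as gym
import ale_py

gym.register_envs(ale_py)  # make the ALE/* environment ids available

def evaluate_noop_starts(env_id="ALE/TimePilot-v5", episodes=10,
                         max_noops=30, seed=0):
    """Average episode return when each episode starts with 1..max_noops no-ops."""
    env = gym.make(env_id)
    rng = random.Random(seed)
    returns = []
    for ep in range(episodes):
        obs, info = env.reset(seed=seed + ep)
        total = 0.0
        # Action 0 is NOOP in the ALE action set.
        for _ in range(rng.randint(1, max_noops)):
            obs, reward, terminated, truncated, info = env.step(0)
            total += reward
            if terminated or truncated:
                obs, info = env.reset()
        while True:
            action = env.action_space.sample()  # stand-in for a trained policy
            obs, reward, terminated, truncated, info = env.step(action)
            total += reward
            if terminated or truncated:
                break
        returns.append(total)
    env.close()
    return sum(returns) / len(returns)
```

The papers additionally cap evaluation episodes (typically at 5 or 30 minutes of emulator time) and evaluate DQN-family agents with a small ε-greedy exploration term; those details are omitted here for brevity.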

Human Starts

| Result | Method | Type | Score from |
|---|---|---|---|
| 71543.0 | ApeX DQN | DQN | Distributed Prioritized Experience Replay |
| 27202.0 | A3C LSTM | PG | Asynchronous Methods for Deep Reinforcement Learning |
| 12679.0 | A3C FF (4 days) | PG | Asynchronous Methods for Deep Reinforcement Learning |
| 11190.5 | RainbowDQN | DQN | Rainbow: Combining Improvements in Deep Reinforcement Learning |
| 8267.8 | GorilaDQN | DQN | Massively Parallel Methods for Deep Reinforcement Learning |
| 7684.5 | DistributionalDQN | DQN | Rainbow: Combining Improvements in Deep Reinforcement Learning |
| 7448.0 | PERDDQN (prop) | DQN | Prioritized Experience Replay |
| 6608.0 | DDQN | DQN | Deep Reinforcement Learning with Double Q-learning |
| 6601.0 | DuelingDDQN | DQN | Dueling Network Architectures for Deep Reinforcement Learning |
| 5963.0 | PERDDQN (rank) | DQN | Prioritized Experience Replay |
| 5825.0 | A3C FF (1 day) | PG | Asynchronous Methods for Deep Reinforcement Learning |
| 5650.0 | Human | Human | Massively Parallel Methods for Deep Reinforcement Learning |
| 5640.0 | DQN2015 | DQN | Massively Parallel Methods for Deep Reinforcement Learning |
| 5391.0 | PERDQN (rank) | DQN | Prioritized Experience Replay |
| 5311.0 | NoisyNetDQN | DQN | Rainbow: Combining Improvements in Deep Reinforcement Learning |
| 4871.0 | DuelingPERDQN | DQN | Dueling Network Architectures for Deep Reinforcement Learning |
| 4786.0 | DQN2015 | DQN | Dueling Network Architectures for Deep Reinforcement Learning |
| 3273.0 | Random | Random | Massively Parallel Methods for Deep Reinforcement Learning |

No-op Starts

| Result | Method | Type | Score from |
|---|---|---|---|
| 87085.0 | ApeX DQN | DQN | Distributed Prioritized Experience Replay |
| 22286.0 | ACKTR | PG | Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation |
| 17301.0 | NoisyNet-DuelingDQN | DQN | Noisy Networks for Exploration |
| 14094.0 | DuelingDQN | DQN | Noisy Networks for Exploration |
| 12926.0 | RainbowDQN | DQN | Rainbow: Combining Improvements in Deep Reinforcement Learning |
| 11666.0 | DuelingDDQN | DQN | Dueling Network Architectures for Deep Reinforcement Learning |
| 11448.0 | PERDDQN (prop) | DQN | Rainbow: Combining Improvements in Deep Reinforcement Learning |
| 11124.0 | NoisyNet-A3C | PG | Noisy Networks for Exploration |
| 10659.33 | GorilaDQN | DQN | Massively Parallel Methods for Deep Reinforcement Learning |
| 10294.0 | A3C | PG | Noisy Networks for Exploration |
| 9197.0 | PER | DQN | Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation |
| 9197.0 | PERDDQN (rank) | DQN | Dueling Network Architectures for Deep Reinforcement Learning |
| 8339.0 | DDQN | DQN | Dueling Network Architectures for Deep Reinforcement Learning |
| 8329.0 | C51 | Misc | A Distributional Perspective on Reinforcement Learning |
| 7964.0 | DDQN | DQN | Deep Reinforcement Learning with Double Q-learning |
| 7875.0 | DistributionalDQN | DQN | Rainbow: Combining Improvements in Deep Reinforcement Learning |
| 7553.0 | DuelingPERDQN | DQN | Dueling Network Architectures for Deep Reinforcement Learning |
| 7035.0 | NoisyNet-DQN | DQN | Noisy Networks for Exploration |
| 6167.0 | DQN | DQN | Noisy Networks for Exploration |
| 6157.0 | NoisyNetDQN | DQN | Rainbow: Combining Improvements in Deep Reinforcement Learning |
| 5947 | DQN2015 | DQN | Human-level control through deep reinforcement learning |
| 5925 | Human | Human | Human-level control through deep reinforcement learning |
| 5229.2 | Human | Human | Dueling Network Architectures for Deep Reinforcement Learning |
| 4870.0 | DQN2015 | DQN | Dueling Network Architectures for Deep Reinforcement Learning |
| 4870.0 | DDQN+PopArt | DQN | Learning values across many orders of magnitude |
| 3741 | Linear | Misc | Human-level control through deep reinforcement learning |
| 3568 | Random | Random | Human-level control through deep reinforcement learning |
| 24.9 | Contingency | Misc | Human-level control through deep reinforcement learning |
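
Because this table includes both Human and Random reference rows, the scores can be placed on the human-normalized scale used in most of these papers, 100 × (score − random) / (human − random). A quick check against the values above, using a hypothetical helper (an illustration, not code from any of the cited papers):

```python
# Human-normalized score as defined in "Human-level control through
# deep reinforcement learning": 100 * (agent - random) / (human - random).
def human_normalized(agent, human=5925.0, random_score=3568.0):
    return 100.0 * (agent - random_score) / (human - random_score)

print(round(human_normalized(87085.0)))  # ApeX DQN: ~3543% of human
print(round(human_normalized(5947.0)))   # DQN2015: ~101% of human
```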

Normal Starts

| Result | Method | Type | Score from |
|---|---|---|---|
| 4342.0 | PPO | PG | Proximal Policy Optimization Algorithms |
| 4175.7 | ACER | PG | Proximal Policy Optimization Algorithms |
| 2898.0 | A2C | PG | Proximal Policy Optimization Algorithms |