Overview

The game generally follows the plot of the movie, and takes place on four separate screens. The first level begins with the player, as Colwyn, at his wedding to Lyssa, which is interrupted by the extraterrestrial Slayers. The game continues to generate new Slayers for the player to fight until he is overwhelmed and Lyssa is abducted to the Black Fortress.

The player then rides a Fire Mare across the Iron Desert, stocking up on Colwyn’s magical throwing weapon, the Glaive (the film features only one), by pressing the button each time the Fire Mare passes over one.

The next level takes place in the lair of the Widow of the Web. The player must jump between moving threads of web, working his way upward towards the Widow at the top of the screen while avoiding a giant spider. Once this task is complete, the Widow reveals the location of the Black Fortress, and the player again rides a Fire Mare across the Iron Desert to reach it. If the player fails to arrive at the given location at the correct time of day, indicated by a timer at the top of the screen, he loses a life and must return to the Widow to learn the Fortress’s new location.

Upon reaching the Black Fortress, the player must penetrate the energy barrier surrounding Lyssa with the Glaive (of which the player has a limited number), while the Beast attempts to block the player’s shots and hit him with fireballs. If the Glaive hits the Beast, or is not caught on the rebound by the player, that Glaive is lost. If all of the player’s Glaives are lost, he is expelled from the Fortress and must return to the Widow of the Web level, discover the new location of the Black Fortress, and traverse the Iron Desert again.

If the player manages to break through the barrier surrounding Lyssa, she transforms into a fireball which the player can throw at the Beast. If the fireball hits, the player wins, and the game starts over at a higher level of difficulty.

Description from Wikipedia

State of the Art

Human Starts

| Result | Method | Type | Score from |
| --- | --- | --- | --- |
| 11209.5 | PERDQN (rank) | DQN | Prioritized Experience Replay |
| 8592.0 | ApeX DQN | DQN | Distributed Prioritized Experience Replay |
| 8066.6 | A3C FF (1 day) | PG | Asynchronous Methods for Deep Reinforcement Learning |
| 8051.6 | DuelingDDQN | DQN | Dueling Network Architectures for Deep Reinforcement Learning |
| 7658.6 | DuelingPERDQN | DQN | Dueling Network Architectures for Deep Reinforcement Learning |
| 7406.5 | PERDDQN (prop) | DQN | Prioritized Experience Replay |
| 6872.8 | PERDDQN (rank) | DQN | Prioritized Experience Replay |
| 6833.5 | NoisyNetDQN | DQN | Rainbow: Combining Improvements in Deep Reinforcement Learning |
| 6796.1 | DDQN | DQN | Deep Reinforcement Learning with Double Q-learning |
| 6757.8 | DistributionalDQN | DQN | Rainbow: Combining Improvements in Deep Reinforcement Learning |
| 6715.5 | RainbowDQN | DQN | Rainbow: Combining Improvements in Deep Reinforcement Learning |
| 6363.09 | GorilaDQN | DQN | Massively Parallel Methods for Deep Reinforcement Learning |
| 6206.0 | DQN2015 | DQN | Dueling Network Architectures for Deep Reinforcement Learning |
| 5911.4 | A3C LSTM | PG | Asynchronous Methods for Deep Reinforcement Learning |
| 5560.0 | A3C FF (4 days) | PG | Asynchronous Methods for Deep Reinforcement Learning |
| 3864.0 | DQN2015 | DQN | Massively Parallel Methods for Deep Reinforcement Learning |
| 2109.1 | Human | Human | Massively Parallel Methods for Deep Reinforcement Learning |
| 1151.9 | Random | Random | Massively Parallel Methods for Deep Reinforcement Learning |

No-op Starts

| Result | Method | Type | Score from |
| --- | --- | --- | --- |
| 22849.0 | NoisyNet-A3C | PG | Noisy Networks for Exploration |
| 11741.4 | ApeX DQN | DQN | Distributed Prioritized Experience Replay |
| 11451.9 | DuelingDDQN | DQN | Dueling Network Architectures for Deep Reinforcement Learning |
| 10754.0 | NoisyNet-DuelingDQN | DQN | Noisy Networks for Exploration |
| 10733.0 | DuelingDQN | DQN | Noisy Networks for Exploration |
| 10374.4 | DuelingPERDQN | DQN | Dueling Network Architectures for Deep Reinforcement Learning |
| 10344.4 | DuelingPERDDQN | DQN | Deep Q-Learning from Demonstrations |
| 10263.1 | PERDDQN (prop) | DQN | Rainbow: Combining Improvements in Deep Reinforcement Learning |
| 9885.9 | DistributionalDQN | DQN | Rainbow: Combining Improvements in Deep Reinforcement Learning |
| 9825.3 | DQfD | Imitation | Deep Q-Learning from Demonstrations |
| 9745.1 | DDQN+PopArt | DQN | Learning values across many orders of magnitude |
| 9735.0 | C51 | Misc | A Distributional Perspective on Reinforcement Learning |
| 9728.0 | PER | DQN | Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation |
| 9728.0 | PERDDQN (rank) | DQN | Dueling Network Architectures for Deep Reinforcement Learning |
| 9686.9 | ACKTR | PG | Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation |
| 9422.0 | A3C | PG | Noisy Networks for Exploration |
| 9061.9 | NoisyNetDQN | DQN | Rainbow: Combining Improvements in Deep Reinforcement Learning |
| 8805.0 | NoisyNet-DQN | DQN | Noisy Networks for Exploration |
| 8741.5 | RainbowDQN | DQN | Rainbow: Combining Improvements in Deep Reinforcement Learning |
| 8422.3 | DQN2015 | DQN | Dueling Network Architectures for Deep Reinforcement Learning |
| 8343.0 | DQN | DQN | Noisy Networks for Exploration |
| 7920.5 | DDQN | DQN | Dueling Network Architectures for Deep Reinforcement Learning |
| 7882.0 | GorilaDQN | DQN | Massively Parallel Methods for Deep Reinforcement Learning |
| 4396.7 | DDQN | DQN | Deep Reinforcement Learning with Double Q-learning |
| 3805 | DQN2015 | DQN | Human-level control through deep reinforcement learning |
| 3372 | Linear | Misc | Human-level control through deep reinforcement learning |
| 3341 | Contingency | Misc | Human-level control through deep reinforcement learning |
| 2665.5 | Human | Human | Dueling Network Architectures for Deep Reinforcement Learning |
| 2395 | Human | Human | Human-level control through deep reinforcement learning |
| 1598 | Random | Random | Human-level control through deep reinforcement learning |

Normal Starts

| Result | Method | Type | Score from |
| --- | --- | --- | --- |
| 8367.4 | A2C | PG | Proximal Policy Optimization Algorithms |
| 7942.3 | PPO | PG | Proximal Policy Optimization Algorithms |
| 7268.4 | ACER | PG | Proximal Policy Optimization Algorithms |
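
The three tables follow the standard Atari evaluation protocols used in the cited papers: human starts begin evaluation episodes from states sampled from human play, no-op starts begin each episode with a random number (up to 30) of do-nothing actions, and normal starts use ordinary environment resets. The sketch below is not taken from any of the cited papers; it shows one way to run a no-op-start evaluation on Krull, assuming gymnasium and ale-py are installed and that the `ALE/Krull-v5` environment id is available. The `evaluate_noop_starts` helper and the random policy are illustrative only.

```python
# Minimal sketch (assumption: gymnasium + ale-py, env id "ALE/Krull-v5").
import random

import gymnasium as gym
import ale_py  # noqa: F401  # importing ale_py registers the ALE/* environments


def evaluate_noop_starts(policy, episodes=10, max_noops=30, seed=0):
    """Average raw episode return under the no-op-starts protocol."""
    env = gym.make("ALE/Krull-v5")
    returns = []
    for ep in range(episodes):
        obs, info = env.reset(seed=seed + ep)
        # No-op starts: issue a random number of "do nothing" actions
        # (action 0) so each episode begins from a slightly different state.
        for _ in range(random.randint(1, max_noops)):
            obs, _, terminated, truncated, info = env.step(0)
            if terminated or truncated:
                obs, info = env.reset()
        terminated = truncated = False
        total = 0.0
        while not (terminated or truncated):
            obs, reward, terminated, truncated, info = env.step(policy(obs))
            total += reward
        returns.append(total)
    env.close()
    return sum(returns) / len(returns)


if __name__ == "__main__":
    # A uniformly random policy, roughly the "Random" baseline in the tables.
    probe = gym.make("ALE/Krull-v5")
    n_actions = probe.action_space.n
    probe.close()
    print(evaluate_noop_starts(lambda obs: random.randrange(n_actions), episodes=3))
```

The returns it reports are raw game scores, the same kind of number the tables above list, although the published agents are of course trained rather than random.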