This paper is published in Volume-5, Issue-1, 2019
Area
Artificial Intelligence and Machine Learning
Author
Abhi Savaliya, Chirag Ahuja, Chirayu Shah, Sagar Parikh
Org/Univ
Simon Fraser University, Burnaby, Canada
Pub. Date
05 February, 2019
Paper ID
V5I1-1254
Publisher
Keywords
Machine learning, Artificial Intelligence, Deep learning, Keras RL, Reinforcement learning

Citations

IEEE
Abhi Savaliya, Chirag Ahuja, Chirayu Shah, Sagar Parikh, "Improving generalization in reinforcement learning on Atari 2600 games," International Journal of Advance Research, Ideas and Innovations in Technology, vol. 5, no. 1, 2019, www.IJARIIT.com.

APA
Abhi Savaliya, Chirag Ahuja, Chirayu Shah, Sagar Parikh (2019). Improving generalization in reinforcement learning on Atari 2600 games. International Journal of Advance Research, Ideas and Innovations in Technology, 5(1). www.IJARIIT.com.

MLA
Abhi Savaliya, Chirag Ahuja, Chirayu Shah, Sagar Parikh. "Improving generalization in reinforcement learning on Atari 2600 games." International Journal of Advance Research, Ideas and Innovations in Technology 5.1 (2019). www.IJARIIT.com.

Abstract

Deep Reinforcement Learning (DRL) is poised to revolutionize the field of artificial intelligence (AI) and represents a crucial step towards building autonomous systems with a higher-level understanding of the world around them. In particular, deep reinforcement learning has changed the landscape of autonomous agents by achieving superhuman performance on the board game Go, a significant milestone in AI research. In this project, we attempt to train a deep RL network on Demon Attack, an Atari 2600 game, and test the model on different game environments to investigate the feasibility of applying transfer learning to environments with the same action space but a slightly different state space. We further extend the project to use established reinforcement learning techniques such as DQN, Dueling DQN, and SARSA to examine whether RL agents can be generalized to unfamiliar environments by fine-tuning the hyperparameters. Finally, we borrow classic regularization techniques, namely L2 regularization and dropout, from the world of supervised learning and probe whether these techniques, which have received very limited attention in the domain of reinforcement learning, are effective in reducing overfitting of deep RL networks. Deep networks are expensive to train; complex models can take weeks to train even on costly GPUs. We find that the use of the above techniques prevents the network from overfitting on the current environment and gives satisfactory results when tested on slightly different environments, thus enabling substantial savings in training time and resources.
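The regularization idea the abstract borrows from supervised learning can be sketched in miniature. The following is a toy illustration only, not the paper's actual deep network or Keras-RL code: tabular SARSA on a hypothetical five-state chain environment, with an L2-style shrinkage term added to each Q-value update. All environment details, sizes, and hyperparameters here are assumptions made for the sketch.

```python
import random

# Toy sketch (hypothetical environment, not the paper's setup): SARSA on a
# 5-state chain where action 1 moves right, action 0 moves left, and
# reaching the last state gives reward 1. An L2 penalty shrinks each
# Q-value toward zero on every update, mirroring the weight-decay idea
# the abstract borrows from supervised learning.
N_STATES, ACTIONS = 5, (0, 1)
ALPHA, GAMMA, EPSILON, L2 = 0.5, 0.9, 0.3, 1e-3

def step(s, a):
    """Deterministic chain dynamics; the last state is terminal."""
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    done = s2 == N_STATES - 1
    return s2, (1.0 if done else 0.0), done

def epsilon_greedy(Q, s, rng):
    """Explore with probability EPSILON, otherwise act greedily."""
    if rng.random() < EPSILON:
        return rng.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(s, a)])

def train(episodes=300, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        a = epsilon_greedy(Q, s, rng)
        while not done:
            s2, r, done = step(s, a)
            a2 = epsilon_greedy(Q, s2, rng)
            target = r + (0.0 if done else GAMMA * Q[(s2, a2)])
            # Standard SARSA TD update plus an L2 shrinkage term
            Q[(s, a)] += ALPHA * (target - Q[(s, a)]) - L2 * Q[(s, a)]
            s, a = s2, a2
    return Q
```

With these settings the learned values favor moving right near the goal (e.g. `Q[(3, 1)]` ends up well above `Q[(3, 0)]`), while the L2 term keeps every estimate slightly below its unregularized fixed point — the same bias-toward-simplicity effect the paper probes at the scale of deep Q-networks.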