This study presents a novel approach for improving the sample efficiency of reinforcement learning (RL) control of dynamic systems by utilizing autoencoders. The main objective is to investigate how effectively autoencoders can enhance the learning process and improve the resulting policies in RL control problems. In the literature, most applications use only the autoencoder's latent space during learning. This can cause loss of information, a latent space that is difficult to interpret, difficulty in handling dynamic environments, and outdated representations. The approach proposed in this study overcomes these problems and improves sample efficiency by using both the raw states and their latent representations during learning. The methodology consists of two main steps. First, a denoising-contractive autoencoder is developed and implemented for RL control problems, with a specific focus on its applicability to state representation and feature extraction. Second, a deep reinforcement learning algorithm is trained on the augmented states generated by the autoencoder. The algorithm is compared against a baseline Deep Q-Network (DQN) in the LunarLander environment, where observations from the environment are subject to Gaussian noise.
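The augmentation step described above can be made concrete with a short sketch. The snippet below is a minimal, illustrative PyTorch implementation, not the authors' code: the latent dimension, noise level, and penalty weight `lam` are assumptions, and the closed-form contractive penalty assumes a single sigmoid encoder layer (the Rifai et al. formulation), which may differ from the architecture used in the study.

```python
import torch
import torch.nn as nn

class DenoisingContractiveAE(nn.Module):
    """Single-hidden-layer autoencoder with a sigmoid encoder, so the
    contractive (Jacobian) penalty has a simple closed form."""
    def __init__(self, state_dim=8, latent_dim=4):  # 8 = LunarLander obs size; latent_dim is assumed
        super().__init__()
        self.enc = nn.Linear(state_dim, latent_dim)
        self.dec = nn.Linear(latent_dim, state_dim)

    def forward(self, x):
        h = torch.sigmoid(self.enc(x))   # latent code
        return self.dec(h), h

def dcae_loss(model, x_clean, noise_std=0.05, lam=1e-3):
    # Denoising objective: reconstruct the clean state from a corrupted copy.
    x_noisy = x_clean + noise_std * torch.randn_like(x_clean)
    x_hat, h = model(x_noisy)
    recon = nn.functional.mse_loss(x_hat, x_clean)
    # Contractive penalty ||dh/dx||_F^2 for a sigmoid encoder:
    # J = diag(h * (1 - h)) @ W, so ||J||_F^2 = sum_j (h_j(1-h_j))^2 * ||W_j||^2.
    dh = (h * (1.0 - h)) ** 2                   # (batch, latent)
    w_sq = (model.enc.weight ** 2).sum(dim=1)   # (latent,)
    contract = (dh * w_sq).sum(dim=1).mean()
    return recon + lam * contract

def augment(model, obs):
    """Concatenate the raw observation with its latent code, forming the
    augmented state on which the DQN is trained."""
    with torch.no_grad():
        _, h = model(obs)
    return torch.cat([obs, h], dim=-1)
```

With these assumed sizes, the augmented state returned by `augment` has dimension `state_dim + latent_dim` (12 for the 8-dimensional LunarLander observation with a 4-unit latent code) and simply replaces the raw observation as the DQN's input; the rest of the DQN training loop is unchanged.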
| Primary Language | English |
|---|---|
| Subjects | Artificial Intelligence (Other), Control Engineering, Mechatronics and Robotics (Other) |
| Journal Section | Research Articles |
| Authors | |
| Publication Date | July 20, 2024 |
| Submission Date | May 9, 2024 |
| Acceptance Date | May 15, 2024 |
| Published in Issue | Year 2024, Volume 1, Issue 1 |
ITU Computer Science AI and Robotics