In mobile robotics, navigation is considered one of the primary tasks, and it becomes more
challenging in local navigation when the environment is unknown. The robot must therefore
explore using its sensory information. Reinforcement learning (RL), a biologically inspired
learning paradigm, has attracted wide attention because of its capability to learn
autonomously in an unknown environment. However, the randomized exploration behavior common
in RL increases computation time and cost, making it less appealing for real-world scenarios.
This paper proposes an informed-biased softmax regression (iBSR) learning process that
introduces a heuristic-based cost function to ensure faster convergence. Here, action
selection is not treated as a random process; rather, it is based on the maximum probability
computed using softmax regression. The strength of the proposed approach is tested through
simulated navigation scenarios, and, for comparison and analysis, the iBSR learning process
is evaluated against two benchmark algorithms.
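The abstract describes action selection as picking the maximum-probability action from a softmax distribution whose scores are biased by a heuristic cost. The sketch below illustrates that idea only; the temperature `tau`, the bias weight `beta`, and the Euclidean distance-to-goal heuristic are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def heuristic_cost(next_states, goal):
    """Assumed heuristic: Euclidean distance from each candidate next state to the goal."""
    return np.linalg.norm(next_states - goal, axis=1)

def biased_softmax_action(q_values, next_states, goal, tau=1.0, beta=1.0):
    """Select the action with the highest softmax probability after the value
    estimates are penalized by the heuristic cost of the resulting state."""
    scores = (q_values - beta * heuristic_cost(next_states, goal)) / tau
    scores -= scores.max()                      # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return int(np.argmax(probs)), probs         # deterministic: maximum-probability action

# Example: four candidate moves from the current cell of a grid map toward goal (5, 5).
q = np.array([0.2, 0.5, 0.4, 0.1])
candidates = np.array([[1, 0], [0, 1], [1, 1], [-1, 0]], dtype=float)
goal = np.array([5.0, 5.0])
action, p = biased_softmax_action(q, candidates, goal)
print(action, p.round(3))
```

Because the heuristic term lowers the score of actions leading away from the goal, the exploration is informed rather than uniformly random, which is the mechanism the abstract credits for faster convergence.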
Keywords: Reinforcement learning, mobile robots, navigation, autonomous, unknown environment
| Primary Language | English |
|---|---|
| Subjects | Artificial Intelligence, Electrical Engineering |
| Section | Research Article |
| Authors | |
| Publication Date | 30 July 2019 |
| Published Issue | Year 2019, Volume: 7, Issue: 3 |