In this research, we model an active learning method on real robots that can visually learn from each other. For this purpose, we first design an experimental scenario in which a teacher robot presents a simple classification task to a learner robot, requiring the learner to discriminate between different colors based on a predefined lexicon. It is shown that, with passive learning, the learner robot achieves the given task only partially. Afterwards, we design an active learning procedure in which the learner robot manifests what it has understood from the presented information. Based on this manifestation, the teacher robot determines which parts of the classification system were misunderstood and rephrases those parts. It is shown that, with the help of the active learning procedure, the robots achieve a higher success rate on the simple classification task. In this way, we qualitatively analyze how active learning works and why it enhances learning.
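The teacher-learner exchange described above can be read as a feedback loop: present the lexicon, let the learner manifest its understanding, and re-teach only the misunderstood parts. The sketch below is a minimal illustration of that loop, not the authors' implementation; the class names, the color lexicon, and the distance-based matching rule are assumptions introduced for clarity.

```python
# Illustrative sketch of the teacher-learner active learning loop.
# All names (Teacher, Learner, COLOR_LEXICON, the similarity threshold)
# are hypothetical and not taken from the paper.

# Hypothetical color lexicon: label -> reference RGB value.
COLOR_LEXICON = {
    "red": (255, 0, 0),
    "green": (0, 255, 0),
    "blue": (0, 0, 255),
}


def distance(a, b):
    """Squared Euclidean distance between two RGB triples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))


class Learner:
    """Learner robot: stores whatever color-label mapping it has picked up."""

    def __init__(self):
        self.belief = {}  # label -> RGB value the learner associates with it

    def observe(self, label, rgb):
        # Passive learning: store the teacher's presentation as-is.
        self.belief[label] = rgb

    def manifest(self, label):
        # Active learning: show what the learner currently understands.
        return self.belief.get(label)

    def classify(self, rgb):
        # Pick the known label whose stored value is closest to the query.
        return min(self.belief, key=lambda lab: distance(self.belief[lab], rgb))


class Teacher:
    """Teacher robot: presents the lexicon and corrects misunderstandings."""

    def present(self, learner):
        for label, rgb in COLOR_LEXICON.items():
            learner.observe(label, rgb)

    def correct(self, learner, tolerance=30 ** 2):
        # Ask the learner to manifest each concept; rephrase (re-teach) the
        # ones that do not match the lexicon closely enough.
        for label, rgb in COLOR_LEXICON.items():
            shown = learner.manifest(label)
            if shown is None or distance(shown, rgb) > tolerance:
                learner.observe(label, rgb)


if __name__ == "__main__":
    teacher, learner = Teacher(), Learner()
    teacher.present(learner)                 # passive phase
    teacher.correct(learner)                 # active phase: manifest, check, rephrase
    print(learner.classify((250, 10, 10)))   # expected: "red"
```

In this toy version the "manifestation" is simply the stored value for each label and the "rephrasing" is a repeated presentation; in the actual experiments the exchange is visual and carried out between physical robots.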
| Primary Language | English |
| --- | --- |
| Subjects | Artificial Intelligence |
| Journal Section | Research Articles |
| Authors | |
| Publication Date | June 1, 2020 |
| Submission Date | January 28, 2020 |
| Acceptance Date | March 25, 2020 |
| Published in Issue | Year 2020 |
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.