Research Article

Investigating MAML and ProtoNet Algorithms for Few-shot Learning Problems

Year 2021, Issue: 21, 113 - 121, 31.01.2021
https://doi.org/10.31590/ejosat.834647

Abstract

Deep neural networks have proven to be very effective for image-related problems; however, their success is largely attributable to the large-scale annotated datasets used to train them. Convolutional neural networks, a special type of neural network, have achieved very good results on visual recognition problems and have therefore become the standard tool for these tasks. The use of large-scale annotated datasets such as ImageNet has further improved the results obtained by these networks. However, creating an annotated dataset of that scale is very costly, and in some cases such large datasets are impossible to obtain even when resources are sufficient. It has been shown that neural networks cannot be trained well without enough training data. Since these networks require large amounts of annotated data to generalize well, it is very important to develop models that can be trained well even when training data is scarce. The meta-learning paradigm addresses this few-shot learning problem by proposing models that exploit experience from previous tasks to learn new tasks; meta-learning algorithms gain their fast-adaptation ability from the meta-data obtained on those earlier tasks. The meta-learning concept has regained popularity after the success of several deep-neural-network-based meta-learning algorithms on few-shot image classification problems. In this study, two meta-learning algorithms, Model-Agnostic Meta-Learning (MAML) and Prototypical Networks (ProtoNet), are applied to few-shot learning problems and their performance is evaluated. The MiniImageNet and CIFAR100 few-shot image classification datasets are used as the test bed, and the two algorithms are evaluated under different meta-learning and algorithm hyper-parameter settings.
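To make the fast-adaptation idea behind MAML concrete, the sketch below implements a first-order variant (FOMAML) on a toy scalar regression problem. The scalar model, the two synthetic tasks, and the learning rates are illustrative assumptions for exposition only, not the experimental setup of the study: the point is only that the meta-update moves a shared initialization to a point from which each task can be solved in few gradient steps.

```python
def loss_grad(w, xs, ys):
    """Gradient of the mean squared error for the scalar model y_hat = w * x."""
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

def fomaml_step(w, tasks, inner_lr=0.1, outer_lr=0.1):
    """One first-order MAML meta-update: adapt on each task's support set,
    then move the shared initialization using the post-adaptation query gradients."""
    meta_grad = 0.0
    for (xs_s, ys_s), (xs_q, ys_q) in tasks:
        w_adapted = w - inner_lr * loss_grad(w, xs_s, ys_s)  # inner (task) step
        meta_grad += loss_grad(w_adapted, xs_q, ys_q)        # query-set gradient
    return w - outer_lr * meta_grad / len(tasks)

# Two toy regression tasks y = a * x with slopes a = 2 and a = 4;
# support and query sets coincide here for simplicity.
xs = [-1.0, -0.5, 0.5, 1.0]
tasks = [((xs, [2 * x for x in xs]), (xs, [2 * x for x in xs])),
         ((xs, [4 * x for x in xs]), (xs, [4 * x for x in xs]))]
w = 0.0
for _ in range(300):
    w = fomaml_step(w, tasks)
print(round(w, 2))  # → 3.0
```

The initialization settles between the two task optima (2 and 4): no single weight solves both tasks, but from w = 3 one inner gradient step moves close to either optimum, which is exactly what the meta-objective rewards.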
The results suggest that MAML achieves better classification accuracy than ProtoNet when the number of shots is 1, while ProtoNet achieves better accuracy as the number of shots increases. The main reason is that MAML searches for a common weight initialization from which all classes can easily be distinguished, whereas ProtoNet learns a separate prototype for each class, and additional shots directly increase the representational power of each prototype.
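The prototype mechanism described above can be sketched in a few lines, following Snell et al. (2017): each class prototype is the mean of that class's support-set embeddings, and a query is assigned to the nearest prototype. The 2-way, 2-shot toy episode and the hand-picked 2-D "embeddings" below are illustrative assumptions (a real ProtoNet would produce the embeddings with a learned convolutional encoder); note how averaging more shots per class would make each prototype a better estimate of the class center.

```python
def prototypes(support, labels, n_classes):
    """One prototype per class: the mean of that class's support embeddings."""
    protos = []
    for c in range(n_classes):
        members = [e for e, lab in zip(support, labels) if lab == c]
        protos.append([sum(dim) / len(members) for dim in zip(*members)])
    return protos

def classify(query, protos):
    """Label of the nearest prototype under squared Euclidean distance."""
    dists = [sum((q - p) ** 2 for q, p in zip(query, proto)) for proto in protos]
    return dists.index(min(dists))

# 2-way, 2-shot toy episode with hand-picked 2-D "embeddings"
support = [[0.0, 0.0], [0.2, 0.0],   # class 0
           [1.0, 1.0], [1.2, 1.0]]   # class 1
labels = [0, 0, 1, 1]
protos = prototypes(support, labels, n_classes=2)
print(classify([0.1, 0.1], protos))  # → 0
```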

References

  • Alkan, M. (2020). Az Örnekle Öğrenme Problemlerinde Derin Öğrenme Temelli Meta-Öğrenme Algoritmalarının Karşılaştırılması [Comparison of deep learning based meta-learning algorithms in few-shot learning problems] (Master's thesis, Fatih Sultan Mehmet Vakıf Üniversitesi, Lisansüstü Eğitim Enstitüsü).
  • Antoniou, A., Edwards, H., & Storkey, A. (2019). How to train your MAML. 7th International Conference on Learning Representations, ICLR 2019. http://arxiv.org/abs/1810.09502
  • Chen, W.-Y., Liu, Y.-C., Kira, Z., Wang, Y.-C. F., & Huang, J.-B. (2019). A Closer Look at Few-shot Classification. http://arxiv.org/abs/1904.04232
  • Deleu, T., Würfl, T., Samiei, M., Cohen, J. P., & Bengio, Y. (2019). Torchmeta: A Meta-Learning library for PyTorch. http://arxiv.org/abs/1909.06576
  • Finn, C., Abbeel, P., & Levine, S. (2017). Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. 34th International Conference on Machine Learning, ICML 2017, 3, 1856–1868. http://arxiv.org/abs/1703.03400
  • He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep Residual Learning for Image Recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770–778. https://doi.org/10.1109/CVPR.2016.90
  • Ioffe, S., & Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. 32nd International Conference on Machine Learning, ICML.
  • Krizhevsky, A. (2009). Learning Multiple Layers of Features from Tiny Images. Technical report, University of Toronto.
  • Lake, B. M., Salakhutdinov, R., Gross, J., & Tenenbaum, J. B. (2011). One shot learning of simple visual concepts. In Proceedings of the 33rd Annual Conference of the Cognitive Science Society.
  • Le, Y., & Yang, X. (2015). Tiny ImageNet Visual Recognition Challenge.
  • Oreshkin, B. N., Rodriguez, P., & Lacoste, A. (2018). TADAM: Task dependent adaptive metric for improved few-shot learning. Advances in Neural Information Processing Systems. http://arxiv.org/abs/1805.10123
  • Ravi, S., & Larochelle, H. (2017). Optimization as a Model for Few-Shot Learning. Proceedings of the 5th International Conference on Learning Representations (ICLR 2017), 1–11.
  • Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., & Fei-Fei, L. (2015). ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision. https://doi.org/10.1007/s11263-015-0816-y
  • Snell, J., Swersky, K., & Zemel, R. S. (2017). Prototypical Networks for Few-shot Learning. Advances in Neural Information Processing Systems. http://arxiv.org/abs/1703.05175
  • Vinyals, O., Blundell, C., Lillicrap, T., Kavukcuoglu, K., & Wierstra, D. (2016). Matching networks for one shot learning. Advances in Neural Information Processing Systems, 3637–3645.

Az Örnekle Öğrenme Problemleri için MAML ve ProtoNet Algoritmalarının İncelenmesi


Details

Primary Language Turkish
Subjects Engineering
Journal Section Articles
Authors

Ayla Gülcü 0000-0003-3258-8681

Muhammet Alkan 0000-0001-5188-2742

Publication Date January 31, 2021
Published in Issue Year 2021 Issue: 21

Cite

APA Gülcü, A., & Alkan, M. (2021). Az Örnekle Öğrenme Problemleri için MAML ve ProtoNet Algoritmalarının İncelenmesi. Avrupa Bilim Ve Teknoloji Dergisi(21), 113-121. https://doi.org/10.31590/ejosat.834647