Deep neural networks have proven highly effective for image-related problems, but their success is largely attributable to the large-scale annotated datasets used to train them. Convolutional neural networks, a specialized class of neural networks, have achieved strong results on visual recognition problems and have therefore become the standard tool for these tasks. Training on large-scale annotated datasets such as ImageNet has further improved the results obtained by these networks. However, creating an annotated dataset of that scale is costly, and in some cases such data simply cannot be obtained even when resources are available. Neural networks are known to train poorly when training data is scarce; because they require large amounts of annotated data to generalize well, it is important to develop models that can be trained effectively even when training data is limited. The meta-learning paradigm addresses this few-shot learning problem by proposing models that exploit experience from previous tasks to learn new ones; meta-learning algorithms gain their fast-adaptation ability from the meta-data collected across those tasks. The meta-learning concept has regained popularity following the success of deep neural network-based meta-learning algorithms on few-shot image classification problems. In this study, two meta-learning algorithms, namely Model-Agnostic Meta-Learning (MAML) and Prototypical Networks (ProtoNet), are applied to few-shot learning problems and their performance is evaluated. The MiniImageNet and CIFAR100 few-shot image classification datasets serve as the test bed, and the two algorithms are evaluated under different meta-learning and algorithm hyper-parameter settings. The results suggest that MAML yields better classification accuracy than ProtoNet in the 1-shot setting, while ProtoNet yields better accuracy as the number of shots increases. The main reason is that MAML seeks a common weight initialization from which all classes can be distinguished after a few gradient steps, whereas ProtoNet learns a separate prototype for each class, and additional shots directly increase the representational power of each prototype.
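For concreteness, the following is a minimal PyTorch sketch (not the paper's implementation) contrasting the two adaptation mechanisms described above on a toy 5-way episode with random data; the linear embedding network, the classifier head, and all shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

n_way, k_shot, n_query, dim = 5, 5, 15, 64
embed = torch.nn.Linear(3 * 32 * 32, dim)   # stand-in for a conv embedding network
head = torch.nn.Linear(dim, n_way)          # classifier head used by the MAML branch

support_x = torch.randn(n_way * k_shot, 3 * 32 * 32)
support_y = torch.arange(n_way).repeat_interleave(k_shot)
query_x = torch.randn(n_query, 3 * 32 * 32)

# --- ProtoNet: one prototype per class = mean of its support embeddings;
# queries are scored by negative squared Euclidean distance to each prototype.
z_s, z_q = embed(support_x), embed(query_x)
prototypes = torch.stack([z_s[support_y == c].mean(0) for c in range(n_way)])
proto_logits = -torch.cdist(z_q, prototypes).pow(2)   # (n_query, n_way)

# --- MAML: adapt the shared initialization with one inner gradient step on the
# support set; the adapted weights are then evaluated on the query set.
inner_lr = 0.01
loss = F.cross_entropy(head(embed(support_x)), support_y)
grads = torch.autograd.grad(loss, list(head.parameters()), create_graph=True)
adapted = [w - inner_lr * g for w, g in zip(head.parameters(), grads)]
maml_logits = F.linear(embed(query_x), adapted[0], adapted[1])  # adapted head on queries
```

Note how the ProtoNet branch adapts only by averaging support embeddings, so each extra shot sharpens the prototype, while the MAML branch adapts via gradient steps from a shared initialization, which remains feasible even with a single support example per class; this is consistent with the accuracy pattern reported above.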
Primary Language | Turkish
---|---
Subjects | Engineering
Journal Section | Articles
Publication Date | January 31, 2021
Published in Issue | Year 2021, Issue 21