Conventional training mechanisms often encounter limited classification performance due to the need for large numbers of training samples. To counter this issue, the field of meta-learning has shown great potential for fine-tuning and generalizing to new tasks using small datasets. Adapting to the scarcity of samples during training is an essential trait for improving the learning capability of a deep learning model on specific tasks. Meta-learning, also known as a "learning to learn" mechanism, is an alternative solution that trains a network with fewer examples to achieve accurate task performance using metadata. It applies metadata in a two-loop mechanism to guide training efficiently, so that patterns can be learned from the fewest training samples. Great strides in meta-learning have been made in the field of few-shot learning through emerging models and training methods such as Siamese networks, matching networks, and memory-augmented networks. One well-known variant is Model-Agnostic Meta-Learning (MAML).

As a variant derived from the concept of MAML, a one-step MAML incorporating a two-phase switching optimization strategy is proposed in this paper to improve performance using fewer iterations. One-step MAML conducts training with two loops, known as the inner and the outer loop. During the inner loop, the gradient update is performed only once per task. In the outer loop, the gradient is updated based on the losses accumulated on the evaluation set during each inner loop. Several experiments using the BERT-Tiny model are conducted to analyze and compare the performance of one-step MAML on five benchmark datasets. The evaluation shows that the best loss and accuracy are achieved when one-step MAML is coupled with the two-phase switching optimizer. It is also observed that this combination reaches its peak accuracy in the fewest number of steps.
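The abstract does not give implementation details, but the two-loop structure it describes (one gradient step per task in the inner loop, an outer update from accumulated evaluation losses) can be sketched on a toy problem. The following is a minimal, first-order sketch using a scalar linear model with analytic gradients; the task setup, learning rates, and function names here are illustrative assumptions, not the paper's actual BERT-Tiny experiments or its two-phase switching optimizer.

```python
import numpy as np

def mse_loss(w, x, y):
    # mean squared error of the linear model y_hat = w * x
    return np.mean((w * x - y) ** 2)

def mse_grad(w, x, y):
    # analytic gradient of mse_loss with respect to w
    return np.mean(2 * (w * x - y) * x)

def one_step_maml(tasks, w, alpha=0.01, beta=0.01, meta_iters=300):
    """First-order sketch of one-step MAML for a scalar linear model.

    tasks: list of (support_x, support_y, eval_x, eval_y) tuples.
    alpha: inner-loop (per-task) learning rate.
    beta:  outer-loop (meta) learning rate.
    """
    for _ in range(meta_iters):
        meta_grad = 0.0
        for sx, sy, ex, ey in tasks:
            # Inner loop: exactly one gradient update per task.
            w_task = w - alpha * mse_grad(w, sx, sy)
            # Outer loop: accumulate the gradient of the evaluation-set
            # loss (first-order approximation, taken at adapted weights).
            meta_grad += mse_grad(w_task, ex, ey)
        # Meta-update from the losses accumulated across all tasks.
        w -= beta * meta_grad
    return w

# Hypothetical task family: y = slope * x with a different slope per task.
x = np.linspace(-1.0, 1.0, 20)
tasks = [(x[:10], s * x[:10], x[10:], s * x[10:]) for s in (1.5, 2.0, 2.5)]

w0 = 0.0
w = one_step_maml(tasks, w0)  # meta-learned initialization
```

The first-order approximation drops the second-derivative term of full MAML; it is used here only to keep the sketch self-contained, and the meta-learned initialization ends up near the mean task slope, from which a single inner step adapts quickly to any one task.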