Supervised Learning on Small Datasets: Few-Shot Approaches and Generalization
Abstract: In artificial intelligence, supervised learning has become a dominant paradigm, enabling advances in fields such as natural language processing, speech recognition, and image classification. Its success, however, generally depends on the availability of large labeled datasets, which are often expensive or impractical to obtain in real-world settings, particularly in domains such as security, bioinformatics, and healthcare. Few-shot learning techniques, which seek to enable models to generalize effectively from a limited number of training examples, were developed in response to this challenge of learning from scarce data. The present study examines the three main few-shot learning strategies, transfer learning, meta-learning, and data augmentation, as solutions to the problems supervised learning faces on small datasets. Transfer learning reuses knowledge acquired on large-scale tasks to fine-tune models for new, smaller tasks. Meta-learning, also known as "learning to learn," enables models to adapt quickly to new tasks with little data. Data augmentation techniques artificially expand small datasets to improve robustness and generalization. We examine how these approaches improve the generalization capacity of supervised models, minimize overfitting, and reduce variance.
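To make the few-shot setting concrete, the sketch below implements a nearest-prototype classifier in the spirit of Prototypical Networks (Snell et al., 2017), simplified to operate on raw feature vectors rather than learned embeddings. The function name, the toy 2-way/2-shot episode, and the feature values are all illustrative, not taken from the paper.

```python
import math

def prototype_classify(support, query):
    """Classify `query` by the nearest class prototype.

    support: dict mapping class label -> list of feature vectors
             (the few labeled "shots" per class)
    query:   a single feature vector to classify
    """
    def mean_vector(vectors):
        # The prototype is the per-dimension mean of a class's support vectors.
        n = len(vectors)
        return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

    def euclidean(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    prototypes = {label: mean_vector(vs) for label, vs in support.items()}
    # Predict the label whose prototype lies closest to the query.
    return min(prototypes, key=lambda label: euclidean(prototypes[label], query))

# A toy 2-way, 2-shot episode: two labeled examples per class.
support = {
    "cat": [[0.9, 0.1], [1.1, 0.0]],
    "dog": [[0.1, 0.9], [0.0, 1.1]],
}
print(prototype_classify(support, [1.0, 0.2]))  # -> cat
```

In the full method the vectors would be embeddings produced by a trained network; the nearest-prototype decision rule shown here is what lets the model classify new classes from only a handful of labeled examples.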
This paper details the advantages, disadvantages, and applications of each approach through a thorough review of prior studies and a comparative analysis. The results further show that hybrid approaches combining these strategies perform better, particularly in fields where labeled data is scarce. Ultimately, few-shot learning paves the way for a more efficient and equitable application of AI in resource-constrained settings.
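The data-augmentation strategy discussed above can be sketched minimally as noise injection: jittered copies of each labeled example are added to artificially expand a small training set. The function name and the `copies`/`sigma` hyperparameters are illustrative assumptions, not values from the paper.

```python
import random

def augment_with_noise(dataset, copies=3, sigma=0.05, seed=0):
    """Expand a small labeled dataset with Gaussian-jittered copies.

    dataset: list of (feature_vector, label) pairs
    copies:  number of jittered copies to add per original example
    sigma:   standard deviation of the additive Gaussian noise
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    augmented = list(dataset)  # keep the original examples
    for features, label in dataset:
        for _ in range(copies):
            jittered = [x + rng.gauss(0.0, sigma) for x in features]
            augmented.append((jittered, label))  # label is preserved
    return augmented

tiny = [([0.9, 0.1], "cat"), ([0.1, 0.9], "dog")]
expanded = augment_with_noise(tiny)
print(len(expanded))  # 2 originals + 2 * 3 jittered copies = 8
```

In practice, domain-appropriate transforms (crops, flips, or paraphrases rather than raw noise) serve the same purpose: each synthetic example exposes the model to plausible variation, reducing overfitting on the original few examples.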

This work is licensed under a Creative Commons Attribution 4.0 International License.
All articles published in our journal are licensed under CC-BY 4.0, which permits authors to retain copyright of their work. This license allows for unrestricted use, sharing, and reproduction of the articles, provided that proper credit is given to the original authors and the source.