Hands-On Few-Shot and Prompt-Based Learning
Presented by: Brayan Kai Mwanyumba
Machine learning has been highly successful in data-intensive applications but often struggles when only a small dataset is available. Few-shot learning (FSL) was proposed to tackle this problem: it enables a model to make predictions from a limited number of examples with supervised information, that is, with few training samples. FSL is used across fields from computer vision to NLP and beyond. Meta-learning has been the most common framework for FSL in recent years. The goal of this framework is not to have the model memorize the images in the training set and then generalize to the test set; instead, the goal is to "learn to learn": by training on many small related tasks, the model acquires the ability to adapt quickly to a new task from only a few examples. Another area where FSL is used is prompt-based learning.
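To make the "learn to learn" setup concrete, meta-learning for FSL is typically organized into episodes: each episode is a tiny N-way K-shot classification task with a small labelled support set and a held-out query set. The sketch below is a minimal, hypothetical episode sampler (the function name, toy dataset, and parameters are illustrative, not from the talk):

```python
import random

def sample_episode(dataset, n_way=3, k_shot=2, q_queries=2, seed=0):
    """Sample one N-way K-shot episode from a {label: [examples]} dataset.

    Each episode is a miniature classification task: the learner sees only
    k_shot labelled examples per class (the support set) and must classify
    the remaining query examples. Training over many such episodes is what
    lets the model "learn to learn".
    """
    rng = random.Random(seed)
    classes = rng.sample(sorted(dataset), n_way)
    support, query = [], []
    for label in classes:
        examples = rng.sample(dataset[label], k_shot + q_queries)
        support += [(x, label) for x in examples[:k_shot]]
        query += [(x, label) for x in examples[k_shot:]]
    return support, query

# Toy dataset: topic labels mapped to lists of example words.
toy = {
    "sports":  ["match", "goal", "league", "coach", "referee"],
    "finance": ["stock", "bond", "market", "hedge", "broker"],
    "science": ["atom", "cell", "quark", "gene", "laser"],
    "travel":  ["visa", "hotel", "flight", "tour", "cruise"],
}

support, query = sample_episode(toy, n_way=3, k_shot=2, q_queries=2)
print(len(support), len(query))  # 6 6
```

A meta-learner would loop over thousands of such episodes, so that at test time a genuinely new class can be recognized from just K labelled examples.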
In this talk attendees will learn the fundamentals of FSL, including popular network architectures, the loss functions used with them, and the application of FSL techniques in Natural Language Understanding (NLU) and Computer Vision. I'll walk through a code implementation of a Siamese neural network with a pairwise loss on text data to perform topic classification. Attendees will also gain familiarity with multi-modal FSL. Further, they will get deeper insight into the prompt-based learning paradigm in the context of Large Language Models and the OpenPrompt framework. By the end of the talk, attendees will have a good grasp of few-shot learning techniques and their applications, and a better understanding of deep learning overall.
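The core idea behind the Siamese approach can be sketched briefly: two inputs pass through the same encoder (shared weights), and a pairwise contrastive loss pulls embeddings of same-topic pairs together while pushing different-topic pairs apart by at least a margin. The NumPy snippet below is a minimal illustration under assumed details (a one-layer linear+ReLU encoder and random vectors standing in for text features), not the talk's actual implementation:

```python
import numpy as np

def encode(x, W):
    """Shared encoder: one linear layer + ReLU. Both twins use the same W."""
    return np.maximum(0.0, x @ W)

def pairwise_contrastive_loss(e1, e2, same, margin=1.0):
    """Contrastive (pairwise) loss over a batch of embedding pairs.

    same == 1: pull the pair together (penalize squared distance)
    same == 0: push the pair apart until distance exceeds the margin
    """
    d = np.linalg.norm(e1 - e2, axis=1)
    pos = same * d ** 2
    neg = (1 - same) * np.maximum(0.0, margin - d) ** 2
    return float(np.mean(pos + neg))

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))            # shared weights for both branches
x1 = rng.normal(size=(6, 8))           # stand-ins for text feature vectors
x2 = rng.normal(size=(6, 8))
same = np.array([1, 1, 1, 0, 0, 0])    # 1 = same topic, 0 = different topic

loss = pairwise_contrastive_loss(encode(x1, W), encode(x2, W), same)
print(loss)
```

At classification time, a new text is assigned the topic of its nearest labelled example in embedding space, which is what makes the setup usable with only a handful of examples per topic.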