- Paper Link: arXiv
- Author's code: https://github.com/VeritasYin/STGCN_IJCAI-18
- Reference blogs:
- Reference lecture:
What is Meta Learning?
- Train on a series of similar tasks to obtain an algorithm that can produce a task-solving function from only a small amount of data for a new task.
What's the difference between Machine Learning and Meta Learning?
- Machine learning learns a function f that maps inputs to outputs for a single task; meta learning learns the learning algorithm F that produces such an f from a task's data.
What's the difference between Pre-training and Meta Learning?
- Pre-training optimizes parameters to perform well on the training tasks themselves; meta learning (e.g. MAML) optimizes the initialization to perform well *after* fine-tuning on a new task.
The paper presents a model-agnostic meta-learning algorithm for fast adaptation of deep networks. The goal of meta-learning is to train a model on a variety of learning tasks so that it can solve new tasks using only a small number of training samples. The proposed algorithm explicitly trains the model's parameters to be easily fine-tuned, allowing for fast adaptation. The algorithm is compatible with any model trained with gradient descent and applicable to various learning problems, including classification, regression, and reinforcement learning. The paper demonstrates that the algorithm achieves state-of-the-art performance on few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning.
Main points:
- Meta-learning algorithm for fast adaptation of deep networks
- Model-agnostic and compatible with any model trained with gradient descent
- Applicable to various learning problems, including classification, regression, and reinforcement learning
- Trains model parameters to be easily fine-tuned for fast adaptation
- Achieves state-of-the-art performance on few-shot image classification benchmarks
- Produces good results on few-shot regression
- Accelerates fine-tuning for policy gradient reinforcement learning.
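The core idea above (learning an initialization that adapts in one gradient step) can be sketched with the first-order approximation of MAML (FOMAML) on toy linear-regression tasks. The task distribution, the linear model, and all hyperparameters below are illustrative assumptions, not the paper's sine-wave or Omniglot setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # Toy regression task y = a*x + b with random a, b (stand-in for the
    # paper's task distribution).
    a, b = rng.uniform(-1.0, 1.0, size=2)
    def batch(k):
        x = rng.uniform(-1.0, 1.0, size=(k, 1))
        return x, a * x + b
    return batch

def loss_and_grad(w, x, y):
    # MSE of the linear model pred = w[0]*x + w[1], and its gradient w.r.t. w.
    err = w[0] * x + w[1] - y
    loss = float(np.mean(err ** 2))
    grad = np.array([np.mean(2.0 * err * x), np.mean(2.0 * err)])
    return loss, grad

def fomaml(meta_steps=200, tasks_per_step=4, k=10, inner_lr=0.1, meta_lr=0.05):
    w = np.zeros(2)  # the meta-initialization that MAML learns
    for _ in range(meta_steps):
        meta_grad = np.zeros(2)
        for _ in range(tasks_per_step):
            batch = sample_task()
            xs, ys = batch(k)                     # support set
            _, g = loss_and_grad(w, xs, ys)
            w_adapted = w - inner_lr * g          # one inner gradient step
            xq, yq = batch(k)                     # query set
            _, gq = loss_and_grad(w_adapted, xq, yq)
            meta_grad += gq                       # first-order: drop the Hessian term
        w -= meta_lr * meta_grad / tasks_per_step
    return w
```

Full MAML would backpropagate through the inner update (a second-order term); the first-order variant shown here simply uses the query-set gradient at the adapted parameters, which the paper reports works nearly as well in practice.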
Meta learning:
- Step 1: define a set of learning algorithms F
  - How F is defined:
    - design a model architecture
    - Insight: each choice of initial parameters is one algorithm, so the set of all initializations forms the set of candidate F
- Step 2: measure the goodness of a learning algorithm F (based on a loss function)
  - Define a loss function over tasks:
    - Train tasks: a series of tasks, each task = [support-set, query-set]
    - Test tasks: a series of tasks, each task = [support-set, query-set]
  - $L(F) = \sum_{t=1}^{n} l_t$, where $l_t$ is the query-set loss of F on training task $t$
- Step 3: pick the best learning algorithm:
  - $F^* = \arg\min_F L(F)$
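The three steps can be illustrated with an exaggeratedly simple sketch: the "set of algorithms" is a grid of candidate initializations for a one-parameter model, goodness is the summed post-adaptation loss over training tasks, and the best algorithm is the argmin. All tasks, models, and numbers here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_task():
    # Toy task: fit y = a*x, where a varies from task to task.
    a = rng.uniform(0.5, 1.5)
    x = rng.uniform(-1.0, 1.0, size=8)
    return x, a * x

def adapted_loss(w0, x, y, lr=0.2):
    # Run the "learning algorithm": one gradient step of y_hat = w*x
    # starting from init w0, then return the per-task loss l_t.
    g = np.mean(2.0 * (w0 * x - y) * x)
    w = w0 - lr * g
    return np.mean((w * x - y) ** 2)

train_tasks = [make_task() for _ in range(20)]
candidates = np.linspace(-2.0, 2.0, 41)    # Step 1: the set of algorithms F
L = [sum(adapted_loss(w0, x, y) for x, y in train_tasks)
     for w0 in candidates]                 # Step 2: L(F) = sum of l_t
best_init = candidates[int(np.argmin(L))]  # Step 3: F* = argmin_F L(F)
```

Exhaustive search over initializations is of course infeasible for a deep network; MAML's contribution is to perform this argmin by gradient descent on the initialization itself.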
- Benchmark: Omniglot
  - n-way: each episode contains n classes
  - k-shot: k labelled samples per class in the support set
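An n-way k-shot episode can be sampled as follows; `class_pool` (a dict mapping class name to its examples) and the parameter names are hypothetical, not Omniglot's actual data layout:

```python
import random

def sample_episode(class_pool, n_way=5, k_shot=1, k_query=5):
    # class_pool: dict mapping class name -> list of examples (assumed layout).
    # Pick n_way classes, then for each class put k_shot examples in the
    # support set and k_query disjoint examples in the query set.
    classes = random.sample(sorted(class_pool), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        examples = random.sample(class_pool[cls], k_shot + k_query)
        support += [(x, label) for x in examples[:k_shot]]
        query += [(x, label) for x in examples[k_shot:]]
    return support, query
```

The model adapts on `support` (n_way * k_shot examples) and is evaluated on `query`, matching the support-set/query-set split described above.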