The following snippets provide a sneak peek at the functionalities of learn2learn.
### High-level Wrappers
<details>
<summary><b>Few-Shot Learning with MAML</b></summary>
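The MAML snippet itself is collapsed in the rendered README. As a rough illustration of the pattern such wrappers automate — clone the model, adapt it on a task, then differentiate through the adaptation — here is a toy MAML-style loop on scalar quadratic tasks. This is a hypothetical pure-Python sketch, not the learn2learn API; all names (`maml_step`, `centers`) are made up for illustration.

```python
# Toy MAML-style meta-learning on scalar quadratic tasks f_t(w) = (w - c_t)^2.
# Illustrates the clone/adapt/meta-update pattern only; not the learn2learn API.

def maml_step(w, centers, inner_lr=0.1, outer_lr=0.05):
    """One meta-update of w over a batch of tasks (one center per task)."""
    meta_grad = 0.0
    for c in centers:
        # Inner adaptation: one gradient step on the task loss.
        grad = 2.0 * (w - c)             # d/dw (w - c)^2
        w_adapted = w - inner_lr * grad  # "clone + adapt"
        # Outer gradient differentiates *through* the inner step:
        # d f(w_adapted) / dw = 2 (w_adapted - c) * (1 - 2 * inner_lr)
        meta_grad += 2.0 * (w_adapted - c) * (1.0 - 2.0 * inner_lr)
    return w - outer_lr * meta_grad / len(centers)

w = 0.0
centers = [1.0, 3.0]
for _ in range(200):
    w = maml_step(w, centers)
print(round(w, 3))  # converges toward the task mean, 2.0
```

The meta-objective is minimized at the mean of the task optima, so the initialization drifts to a point from which one inner step does well on every task — the core idea that `MAML` and the other wrappers implement for full `nn.Module`s with autograd.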
For more algorithms (ProtoNets, ANIL, Meta-SGD, Reptile, Meta-Curvature, KFO) refer to the [examples](https://github.com/learnables/learn2learn/tree/master/examples/vision) folder.
Most of them can be implemented with the `GBML` wrapper ([documentation](http://learn2learn.net/docs/learn2learn.algorithms/#gbml)).

</details>

<details>
<summary><b>Meta-Descent with Hypergradient</b></summary>
Learn any kind of optimization algorithm with the `LearnableOptimizer`. ([example](https://github.com/learnables/learn2learn/tree/master/examples/optimization) and [documentation](http://learn2learn.net/docs/learn2learn.optim/#learnableoptimizer))
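As a rough sketch of the hypergradient idea behind meta-descent (a hypothetical pure-Python example, not the `LearnableOptimizer` API), the learning rate can itself be updated with the gradient of the loss with respect to the learning rate, which reduces to the dot product of consecutive gradients:

```python
# Hypergradient descent on f(w) = w**2: the step size lr is adapted online
# using the gradient of the loss w.r.t. lr. Illustrative sketch only.

def hypergradient_descent(w=5.0, lr=0.01, beta=0.001, steps=100):
    """Minimize f(w) = w**2 while adapting lr with its hypergradient."""
    prev_grad = 0.0
    for _ in range(steps):
        grad = 2.0 * w              # d/dw w**2
        # d loss / d lr = -grad_t . grad_{t-1}, so increase lr
        # while consecutive gradients point the same way.
        lr += beta * grad * prev_grad
        w -= lr * grad
        prev_grad = grad
    return w, lr

final_w, final_lr = hypergradient_descent()
assert abs(final_w) < 0.1   # converged near the minimum
assert final_lr > 0.01      # the step size adapted upward
```

`LearnableOptimizer` generalizes this: instead of one hand-derived scalar rule, the update rule is a parameterized module trained by backpropagation.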
~~~python
updates = learned_update(  # similar API as torch.autograd.grad
    error,
    clone.parameters(),
    create_graph=True,
)
l2l.update_module(clone, updates=updates)
loss(clone(X), y).backward()  # Gradients w.r.t model.parameters() and learned_update.parameters()
~~~
</details>
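The clone-then-update bookkeeping used above can be illustrated with a pure-Python analogue. These are hypothetical helpers on plain dicts; the real `l2l.clone_module` / `l2l.update_module` operate on `torch.nn.Module`s and preserve the autograd graph through the update.

```python
# Hypothetical dict-based analogue of the clone/update pattern.
# Mirrors the bookkeeping only, not the autograd behavior.

def clone_module(params):
    """Copy the parameters so adaptation leaves the originals untouched."""
    return dict(params)

def update_module(params, updates):
    """Apply one additive update per parameter, in place."""
    for name, delta in updates.items():
        params[name] = params[name] + delta
    return params

model = {"w": 1.0, "b": 0.5}
clone = clone_module(model)
update_module(clone, updates={"w": -0.5, "b": 0.5})
assert model == {"w": 1.0, "b": 0.5}  # original parameters unchanged
assert clone == {"w": 0.5, "b": 1.0}  # clone received the updates
```

Keeping the original parameters intact while adapting a clone is what lets the outer loop later backpropagate through the adaptation into `model.parameters()`.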
## Changelog
You can also use the following BibTeX entry.
### Acknowledgements & Friends
1. [TorchMeta](https://github.com/tristandeleu/pytorch-meta) is a similar library, with a focus on datasets for supervised meta-learning.
2. [higher](https://github.com/facebookresearch/higher) is a PyTorch library that enables differentiating through optimization inner-loops. While they monkey-patch `nn.Module` to be stateless, learn2learn retains the stateful PyTorch look-and-feel. For more information, refer to [their ArXiv paper](https://arxiv.org/abs/1910.01727).
3. We are thankful to the following open-source implementations which helped guide the design of learn2learn: