Commit 3bf8b5f

README: Hide snippets behind details tags.

1 parent f1037cd, commit 3bf8b5f

1 file changed (README.md): 29 additions & 9 deletions
@@ -46,10 +46,12 @@ The following snippets provide a sneak peek at the functionalities of learn2learn
 
 ### High-level Wrappers
 
-**Few-Shot Learning with MAML**
+<details>
+<summary><b>Few-Shot Learning with MAML</b></summary>
 
 For more algorithms (ProtoNets, ANIL, Meta-SGD, Reptile, Meta-Curvature, KFO) refer to the [examples](https://github.com/learnables/learn2learn/tree/master/examples/vision) folder.
 Most of them can be implemented with the `GBML` wrapper. ([documentation](http://learn2learn.net/docs/learn2learn.algorithms/#gbml))
+
 ~~~python
 maml = l2l.algorithms.MAML(model, lr=0.1)
 opt = torch.optim.SGD(maml.parameters(), lr=0.001)
@@ -62,10 +64,13 @@ for iteration in range(10):
     evaluation_loss.backward()  # gradients w.r.t. maml.parameters()
     opt.step()
 ~~~
+</details>
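The MAML snippet above is truncated by the diff hunks. As a self-contained toy illustration of what its inner (adapt) and outer (meta-update) loops compute, here is a 1-D version with hand-derived gradients; it uses no PyTorch or learn2learn, and every name and constant is invented for illustration:

```python
# Toy MAML sketch (hypothetical, plain Python): the "model" is a single
# scalar theta, and each task a has loss (theta - a)**2.
# Inner loop:  adapted = theta - alpha * d/dtheta (theta - a)**2
# Outer loop:  differentiate the POST-adaptation loss back to theta.

def maml_step(theta, tasks, alpha=0.1, beta=0.05):
    """One meta-update of theta over a batch of quadratic tasks."""
    meta_grad = 0.0
    for a in tasks:
        adapted = theta - alpha * 2.0 * (theta - a)             # inner (adapt) step
        meta_grad += 2.0 * (adapted - a) * (1.0 - 2.0 * alpha)  # chain rule through the step
    return theta - beta * meta_grad / len(tasks)                # outer (meta) step

theta = 5.0
for _ in range(200):
    theta = maml_step(theta, tasks=[-1.0, 1.0])
# theta drifts toward 0.0, the initialization from which one adaptation
# step does equally well on both tasks
```

In learn2learn the analogous roles are played by the wrapper's clone/adapt calls, with autograd handling the chain rule that is written out by hand here.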
 
-**Meta-Descent with Hypergradient**
-
+<details>
+<summary><b>Meta-Descent with Hypergradient</b></summary>
+
 Learn any kind of optimization algorithm with the `LearnableOptimizer`. ([example](https://github.com/learnables/learn2learn/tree/master/examples/optimization) and [documentation](http://learn2learn.net/docs/learn2learn.optim/#learnableoptimizer))
+
 ~~~python
 linear = nn.Linear(784, 10)
 transform = l2l.optim.ModuleTransform(l2l.nn.Scale)
@@ -79,13 +84,16 @@ error.backward()
 opt.step()      # update metaopt
 metaopt.step()  # update linear
 ~~~
+</details>
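The `LearnableOptimizer` snippet is likewise cut short by the hunks. One classical member of the meta-descent family it generalizes is hypergradient descent, where the learning rate itself is updated from the alignment of successive gradients. A minimal hand-rolled sketch, with invented names and constants (not the learn2learn API):

```python
# Hypergradient-descent sketch (hypothetical, plain Python) on f(w) = (w - 3)**2:
# the learning rate alpha grows while successive gradients point the same
# way, and shrinks when they disagree.

def train_with_hypergradient(w=10.0, alpha=0.01, kappa=1e-5, steps=100):
    grad = lambda x: 2.0 * (x - 3.0)    # analytic gradient of f
    prev_grad = 0.0
    for _ in range(steps):
        g = grad(w)
        alpha += kappa * g * prev_grad  # hypergradient update of the lr itself
        w -= alpha * g                  # ordinary SGD step with the learned lr
        prev_grad = g
    return w, alpha

w, alpha = train_with_hypergradient()
# w approaches 3.0 while alpha grows past its initial value
```

`LearnableOptimizer` takes this idea further: instead of a single scalar `alpha`, the entire update rule is a trainable module.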
 
 ### Learning Domains
 
-**Custom Few-Shot Dataset**
+<details>
+<summary><b>Custom Few-Shot Dataset</b></summary>
 
 Many standardized datasets (Omniglot, mini-/tiered-ImageNet, FC100, CIFAR-FS) are readily available in `learn2learn.vision.datasets`.
 ([documentation](http://learn2learn.net/docs/learn2learn.vision/#learn2learnvisiondatasets))
+
 ~~~python
 dataset = l2l.data.MetaDataset(MyDataset())  # any PyTorch dataset
 transforms = [  # Easy to define your own transform
@@ -98,11 +106,15 @@ for task in taskset:
     X, y = task
     # Meta-train on the task
 ~~~
+</details>
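The task-generation snippet above is cut off mid-list by the diff. What such a pipeline produces is easy to state in plain Python: sample `ways` classes, then `shots` examples per class, and remap the labels to `0..ways-1`. The helper below is a hypothetical sketch of that logic, not the l2l.data implementation:

```python
import random

def sample_task(dataset, ways=2, shots=3, rng=random):
    """dataset: iterable of (x, label) pairs. Returns one few-shot task."""
    by_class = {}
    for x, label in dataset:
        by_class.setdefault(label, []).append(x)
    classes = rng.sample(sorted(by_class), ways)   # choose `ways` classes
    X, y = [], []
    for new_label, c in enumerate(classes):        # remap labels to 0..ways-1
        for x in rng.sample(by_class[c], shots):   # choose `shots` examples
            X.append(x)
            y.append(new_label)
    return X, y

data = [(i, i % 5) for i in range(50)]  # toy dataset: 5 classes, 10 items each
X, y = sample_task(data, ways=2, shots=3)
# a task always holds ways * shots = 6 examples with labels {0, 1}
```

In learn2learn, each stage of this helper corresponds to a composable transform in the `transforms` list that the snippet above begins to define.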
 
-**Environments and Utilities for Meta-RL**
+
+<details>
+<summary><b>Environments and Utilities for Meta-RL</b></summary>
 
 Parallelize your own meta-environments with `AsyncVectorEnv`, or use the standardized ones.
 ([documentation](http://learn2learn.net/docs/learn2learn.gym/#metaenv))
+
 ~~~python
 def make_env():
     env = l2l.gym.HalfCheetahForwardBackwardEnv()
@@ -116,13 +128,16 @@ for task_config in env.sample_tasks(20):
     action = my_policy(env)
     env.step(action)
 ~~~
+</details>
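The Meta-RL snippet shows the task-sampling contract but not an environment's internals. The toy class below imitates that contract in plain Python; its dynamics, names, and reward are invented for illustration and only loosely mirror the `sample_tasks`/`set_task` interface of `l2l.gym` meta-environments:

```python
import random

# Hypothetical toy analogue of a forward/backward locomotion meta-env:
# sample_tasks() draws task configs, set_task() flips the reward direction.

class ToyDirectionEnv:
    def __init__(self):
        self.position = 0.0
        self.direction = 1.0

    def sample_tasks(self, num_tasks):
        return [{'direction': random.choice([-1.0, 1.0])} for _ in range(num_tasks)]

    def set_task(self, task):
        self.direction = task['direction']

    def reset(self):
        self.position = 0.0
        return self.position

    def step(self, action):
        self.position += action
        reward = self.direction * action  # rewarded for moving with the task's direction
        return self.position, reward, False, {}

env = ToyDirectionEnv()
for task in env.sample_tasks(5):
    env.set_task(task)
    env.reset()
    _, reward, _, _ = env.step(1.0)  # reward is +1.0 or -1.0 depending on the task
```

A meta-RL agent must adapt its policy per task, since the same action is rewarded under one task configuration and penalized under another.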
 
 ### Low-Level Utilities
 
-**Differentiable Optimization**
+<details>
+<summary><b>Differentiable Optimization</b></summary>
 
 Learn and differentiate through updates of PyTorch Modules.
 ([documentation](http://learn2learn.net/docs/learn2learn.optim/#parameterupdate))
+
 ~~~python
 model = MyModel()
@@ -139,6 +154,7 @@ updates = learned_update(  # similar API as torch.autograd.grad
 l2l.update_module(clone, updates=updates)
 loss(clone(X), y).backward()  # Gradients w.r.t. model.parameters() and learned_update.parameters()
 ~~~
+</details>
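The snippet above differentiates a loss through a parameter update, so gradients reach both the original parameters and the update rule's own parameters. In one dimension the chain rule involved is small enough to write out by hand; the example below is purely illustrative (analytic derivatives, no autograd, no learn2learn):

```python
# Hypothetical 1-D sketch of differentiating through a parameter update:
# theta is updated with a *learned* scaled gradient, and the post-update
# loss (clone - a)**2 is differentiated w.r.t. both theta and scale.

def post_update_grads(theta, scale, a, alpha=0.5):
    g = 2.0 * (theta - a)              # gradient of (theta - a)**2 at theta
    clone = theta - alpha * scale * g  # the differentiable update
    d_loss = 2.0 * (clone - a)         # gradient of the post-update loss
    d_theta = d_loss * (1.0 - 2.0 * alpha * scale)  # chain rule through the update
    d_scale = d_loss * (-alpha * g)                 # gradient w.r.t. the update's parameter
    return clone, d_theta, d_scale

clone, d_theta, d_scale = post_update_grads(theta=2.0, scale=1.0, a=0.0)
```

With modules in place of scalars, autograd computes exactly these two gradient paths when `backward()` is called after `update_module`.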
 
 ## Changelog
 
@@ -169,6 +185,10 @@ You can also use the following Bibtex entry.
 
 ### Acknowledgements & Friends
 
-1. The RL environments are adapted from Tristan Deleu's [implementations](https://github.com/tristandeleu/pytorch-maml-rl) and from the ProMP [repository](https://github.com/jonasrothfuss/ProMP/). Both shared with permission, under the MIT License.
-2. [TorchMeta](https://github.com/tristandeleu/pytorch-meta) is a similar library, with a focus on datasets for supervised meta-learning.
-3. [higher](https://github.com/facebookresearch/higher) is a PyTorch library that enables differentiating through optimization inner-loops. While they monkey-patch `nn.Module` to be stateless, learn2learn retains the stateful PyTorch look-and-feel. For more information, refer to [their arXiv paper](https://arxiv.org/abs/1910.01727).
+1. [TorchMeta](https://github.com/tristandeleu/pytorch-meta) is a similar library, with a focus on datasets for supervised meta-learning.
+2. [higher](https://github.com/facebookresearch/higher) is a PyTorch library that enables differentiating through optimization inner-loops. While they monkey-patch `nn.Module` to be stateless, learn2learn retains the stateful PyTorch look-and-feel. For more information, refer to [their arXiv paper](https://arxiv.org/abs/1910.01727).
+3. We are thankful to the following open-source implementations, which helped guide the design of learn2learn:
+    * Tristan Deleu's [pytorch-maml-rl](https://github.com/tristandeleu/pytorch-maml-rl)
+    * Jonas Rothfuss' [ProMP](https://github.com/jonasrothfuss/ProMP/)
+    * Kwonjoon Lee's [MetaOptNet](https://github.com/kjunelee/MetaOptNet)
+    * Han-Jia Ye's and Hexiang Hu's [FEAT](https://github.com/Sha-Lab/FEAT)
