<html>
<head>
<link href="style.css" media="all" rel="stylesheet"/>
</head>
<body>
<h1>CSBDeep</h1>
<div class="csbdeep">
<div>
<h2>CSBDeep for users</h2>
Deep learning solutions based on CSBDeep:
<div class="block">
<h3>CARE networks</h3>
<h4>Related publications / credits</h4>
<div class="block">
<p>
<strong>Please see the paper in <a href="http://dx.doi.org/10.1038/s41592-018-0216-7">Nature Methods</a>.</strong>
(Preprint on <a href="https://biorxiv.org/content/early/2018/07/03/236463">bioRxiv</a>)
</p>
<p>
Supplementary material can be downloaded <a href="https://www.biorxiv.org/highwire/filestream/109407/field_highwire_adjunct_files/0/236463-1.pdf">here</a>.
</p>
<h5>Authors and Contributors</h5>
<p>
Martin Weigert<sup>1,2,*</sup>,
Uwe Schmidt<sup>1,2</sup>,
Tobias Boothe<sup>2</sup>,
Andreas Müller<sup>8,9,10</sup>,
Alexandr Dibrov<sup>1,2</sup>,
Akanksha Jain<sup>2</sup>,
Benjamin Wilhelm<sup>1,6</sup>,
Deborah Schmidt<sup>1</sup>,
Coleman Broaddus<sup>1,2</sup>,
Siân Culley<sup>4,5</sup>,
Mauricio Rocha-Martins<sup>1,2</sup>,
Fabián Segovia-Miranda<sup>2</sup>,
Caren Norden<sup>2</sup>,
Ricardo Henriques<sup>4,5</sup>,
Marino Zerial<sup>1,2</sup>,
Michele Solimena<sup>2,8,9,10</sup>,
Jochen Rink<sup>2</sup>,
Pavel Tomancak<sup>2</sup>,
Loic Royer<sup>1,2,7,*</sup>,
Florian Jug<sup>1,2,*</sup>
& Eugene W. Myers<sup>1,2,3</sup>
<br><br>
<sup>1</sup> Center for Systems Biology Dresden (CSBD), Dresden, Germany<br>
<sup>2</sup> Max-Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany<br>
<sup>3</sup> Department of Computer Science, Technical University Dresden<br>
<sup>4</sup> MRC Laboratory for Molecular Cell Biology, University College London, London, UK<br>
<sup>5</sup> The Francis Crick Institute, London, UK<br>
<sup>6</sup> University of Konstanz, Konstanz, Germany<br>
<sup>7</sup> CZ Biohub, San Francisco, USA<br>
<sup>8</sup> Molecular Diabetology, University Hospital and Faculty of Medicine Carl Gustav Carus, TU Dresden, Dresden, Germany<br>
<sup>9</sup> Paul Langerhans Institute Dresden (PLID) of the Helmholtz Center Munich at the University Hospital Carl Gustav Carus and Faculty of Medicine of the TU Dresden, Dresden, Germany<br>
<sup>10</sup> German Center for Diabetes Research (DZD e.V.), Neuherberg, Germany<br>
<sup>*</sup> Co-corresponding authors.
</p>
</div>
<h4>Acknowledgements</h4>
<div class="block">
The authors thank Philipp Keller (Janelia), who provided the Drosophila data.
We thank Suzanne Eaton (MPI-CBG), Franz Gruber and Romina Piscitello for sharing their expertise in fly imaging and for providing fly lines. We thank Anke Sönmez for the cell culture work.
We thank Marija Matejcic (MPI-CBG) for generating and sharing the LAP2B transgenic line Tg(bactin:eGFP-LAP2B). We thank Benoit Lombardot from the Scientific Computing Facility (MPI-CBG).
We thank the following Services and Facilities of the MPI-CBG for their support: Computer Department, Light Microscopy Facility (LMF) and Fish Facility.
This work was supported by the German Federal Ministry of Research and Education (BMBF) under the codes 031L0102 (de.NBI) and 031L0044 (Sysbio II).
M.S. was supported by the German Center for Diabetes Research (DZD e.V.).
R.H. and S.C. were supported by grants from the UK BBSRC (BB/M022374/1; BB/P027431/1; BB/R000697/1), UK MRC (MR/K015826/1) and the Wellcome Trust (203276/Z/16/Z).
</div>
<h4><a href="">Gallery</a></h4>
<h4><a href="">Videos</a></h4>
<h4>Source code</h4>
<div class="block">
<h5><a href="https://github.com/CSBDeep/CSBDeep">CSBDeep in Python</a></h5>
<h5><a href="https://github.com/CSBDeep/CSBDeep">CSBDeep in Java / Fiji</a></h5>
</div>
<h4>How to use the CARE networks in Python</h4>
<div class="block">
<h5><a href="https://csbdeep.bioimagecomputing.com/doc/install.html">Installation</a></h5>
</div>
<div class="block">
<h5><a href="https://csbdeep.bioimagecomputing.com/doc/models.html">Model overview</a></h5>
</div>
<div class="block">
<h5><a href="https://csbdeep.bioimagecomputing.com/doc/datagen.html">Training data generation</a></h5>
</div>
<div class="block">
<h5><a href="https://csbdeep.bioimagecomputing.com/doc/training.html">Training</a></h5>
</div>
<div class="block">
<h5><a href="https://csbdeep.bioimagecomputing.com/doc/prediction.html">Prediction</a></h5>
</div>
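<div class="block">
Before prediction, CARE inputs are intensity-normalized via image percentiles (a low percentile maps to 0, a high one to 1); see <code>csbdeep.utils.normalize</code> in the linked docs. A minimal numpy sketch of the idea — the percentile values 3 and 99.8 here are illustrative, not guaranteed defaults:
<pre><code>
import numpy as np

def normalize_percentile(x, pmin=3, pmax=99.8, eps=1e-20):
    # Map the pmin percentile of x to 0 and the pmax percentile to 1.
    lo, hi = np.percentile(x, pmin), np.percentile(x, pmax)
    return (x - lo) / (hi - lo + eps)

img = np.random.default_rng(0).poisson(lam=30, size=(64, 64)).astype(np.float32)
norm = normalize_percentile(img)
</code></pre>
</div>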
<h4>How to export trained CARE networks from Python for Fiji and KNIME</h4>
<div class="block">
After training, call this method to export the model as a ZIP file that is compatible with <a href="">running prediction in Fiji</a>:
<pre><code>
model.export_TF()
</code></pre>
</div>
<h4>How to use CARE networks in Fiji</h4>
<div class="block">
<h5>Installation</h5>
<ol>
<li>Open the updater and enable update site <code>CSBDeep</code></li>
</ol>
</div>
<div class="block">
<h5>Prediction</h5>
</div>
<h4>How to use CARE networks in KNIME</h4>
<div class="block">
<h5>Installation</h5>
</div>
<div class="block">
<h5>Prediction</h5>
</div>
</div>
<div class="block">
<h3>StarDist</h3>
<h4><a href="https://github.com/mpicbg-csbd/stardist#how-to-cite">Related publications / credits</a></h4>
<!-- <h4>Gallery</h4>-->
<!-- <h4>Videos</h4>-->
<h4>Source code</h4>
<div class="block">
<h5><a href="https://github.com/mpicbg-csbd/stardist">StarDist in Python</a></h5>
<h5><a href="https://github.com/mpicbg-csbd/stardist-imagej">StarDist in Java / Fiji</a></h5>
</div>
<h4>How to use StarDist in Python</h4>
<div class="block">
<h5><a href="https://github.com/mpicbg-csbd/stardist#installation">Installation</a></h5>
</div>
<div class="block">
<h5><a href="https://github.com/mpicbg-csbd/stardist#annotating-images">Data annotation</a></h5>
</div>
<div class="block">
<h5><a href="https://github.com/mpicbg-csbd/stardist#usage">Training &amp; prediction</a></h5>
</div>
<h4>How to export a trained StarDist model from Python for Fiji</h4>
<div class="block">
After training, call this method to export the model as a ZIP file that is compatible with <a href="">running prediction in Fiji</a>:
<pre><code>
model.export_TF()
</code></pre>
</div>
<h4><a href="https://imagej.net/StarDist">How to use StarDist in Fiji</a></h4>
</div>
<div class="block">
<h3>N2V</h3>
<h4>Related publications / credits</h4>
<h4>Acknowledgements</h4>
<h4>Gallery</h4>
<h4>Videos</h4>
<h4>How to use N2V in Python</h4>
<div class="block">
<h5>Installation</h5>
</div>
<div class="block">
<h5>Training</h5>
</div>
<div class="block">
<h5>Prediction</h5>
</div>
<h4>How to export a trained N2V model from Python for Fiji</h4>
<h4>How to use N2V in Fiji</h4>
<div class="block">
<h5>Installation</h5>
</div>
<div class="block">
<h5>Training</h5>
</div>
<div class="block">
<h5>Prediction</h5>
</div>
<h4>FAQ</h4>
<div class="block">
<h5>How long do I have to train?</h5>
Longer than you might think. For example, 100 epochs with 300 steps each.
<h5>How much data do I need for training?</h5>
Don't go much smaller than about 5 million pixels in total, e.g. 2000x3000 or 1000x1000x5. The more the merrier! If you use the Fiji plugin, you can place multiple images in the same folder and run the "train on folder" command pointing to it. You can use the same folder for training and validation; the plugin splits the data automatically, using 90% for training and 10% for validation.
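As a quick sanity check of that guideline (pure arithmetic, using the example shapes above):
<pre><code>
import math

# Total pixel count for the example image shapes.
print(math.prod((2000, 3000)))     # 6000000
print(math.prod((1000, 1000, 5)))  # 5000000 -- both meet the ~5 million guideline
</code></pre>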
<h5>What about SEM / TEM / CMOS?</h5>
<h5>Do training and test data need to be of the same dimensions?</h5>
No. You can train on bigger images and run prediction on smaller ones. The training data is internally split into patches, and a random subset of them is fed to the network in each batch. You can also train on stacks; they are likewise split into patches internally.
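The internal patching can be pictured with a small numpy sketch (not the actual N2V implementation, just the idea that images of any size are reduced to fixed-size random patches):
<pre><code>
import numpy as np

def random_patches(img, patch_size=(64, 64), n=8, seed=None):
    # Cut n random patches of patch_size out of a larger 2D image.
    rng = np.random.default_rng(seed)
    H, W = img.shape
    ph, pw = patch_size
    ys = rng.integers(0, H - ph + 1, size=n)
    xs = rng.integers(0, W - pw + 1, size=n)
    return np.stack([img[y:y + ph, x:x + pw] for y, x in zip(ys, xs)])

batch = random_patches(np.zeros((256, 512), dtype=np.float32))
print(batch.shape)  # (8, 64, 64)
</code></pre>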
</div>
</div>
</div>
<div>
<h2>FAQ</h2>
<div class="block">
<h3>GPU support</h3>
<h4>GPU support in Python (Windows, Linux)</h4>
<h4>GPU support in Java (Windows)</h4>
<h4>GPU support in Java (Linux)</h4>
</div>
<h2>CSBDeep for developers</h2>
<div class="block">
<h3>How to use CSBDeep in Python</h3>
</div>
<div class="block">
<h3>How to use CSBDeep in Java</h3>
</div>
</div>
</div>
</body>
</html>