Commit 613bc7e (parent 38e4311)

[DOC] Update documentation

2 files changed: 8 additions & 5 deletions

docs/experiment.rst (4 additions & 5 deletions)

@@ -1,6 +1,10 @@
 Experiments
 ===========
 
+.. warning::
+
+   This page will be updated once all experiments are complete. We apologize for the delay, which is caused by the prohibitively long time required to run all experiments listed below.
+
 The running scripts for all experiments are available in the folder `experiments <https://github.com/xuyxu/Ensemble-Pytorch/tree/master/experiments>`__.
 
 Performance Comparison
@@ -28,8 +32,3 @@ We have collected four different configurations of the datasets and base estimators
 * Data augmentation was applied to the **CIFAR-10** and **CIFAR-100** datasets.
 * For **LeNet-5**, the ``Adam`` optimizer with learning rate ``1e-3`` and weight decay ``5e-4`` was used.
 * For **ResNet-18**, the ``SGD`` optimizer with learning rate ``1e-1`` and weight decay ``5e-4``, along with a cosine annealing scheduler on the learning rate, was used.
-
-The figures below present the trends in performance as the number of base estimators increases, for different ensemble methods. Details are also available in the tables below.
-
-Regression
-~~~~~~~~~~
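As a rough illustration, the ResNet-18 settings listed in the bullets above can be written out in plain PyTorch. The linear stand-in model and the ``T_max`` value are placeholder assumptions, not taken from the actual experiment scripts:

```python
import torch.nn as nn
import torch.optim as optim
from torch.optim.lr_scheduler import CosineAnnealingLR

# Stand-in model; the experiments use ResNet-18 on CIFAR images.
model = nn.Linear(32 * 32 * 3, 10)

# SGD with learning rate 1e-1 and weight decay 5e-4, as listed above.
optimizer = optim.SGD(model.parameters(), lr=1e-1, weight_decay=5e-4)

# Cosine annealing on the learning rate; T_max (the annealing horizon,
# in epochs) is an assumed value here, not taken from the scripts.
scheduler = CosineAnnealingLR(optimizer, T_max=100)

# In a training loop, scheduler.step() would be called once per epoch
# after the optimizer updates.
print(optimizer.param_groups[0]["lr"])  # 0.1
```

The authoritative versions of these settings are the scripts in the ``experiments`` folder linked above.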

docs/guide.rst (4 additions & 0 deletions)

@@ -43,3 +43,7 @@ Over-fit
 
 Large Training Costs
 --------------------
+
+Training an ensemble of large deep learning models can take a prohibitively long time and easily run out of memory. If you suffer from large training costs when using Ensemble-PyTorch, the recommended ensemble method is :class:`Snapshot Ensemble`, whose training costs are approximately the same as those of training a single base estimator. Please refer to the related section in `Introduction <./introduction.html>`__ for details on :class:`Snapshot Ensemble`.
+
+However, :class:`Snapshot Ensemble` does not work well for all deep learning models. To reduce the costs of the other parallel ensemble methods (i.e., :class:`Voting`, :class:`Bagging`, and :class:`Adversarial Training`), you can set ``n_jobs`` to ``None`` or ``1``, which disables the parallelization conducted internally.
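A minimal configuration sketch of the ``n_jobs`` setting, assuming the ``n_jobs`` argument is accepted by the ensemble constructor as in torchensemble's ``VotingClassifier``; the ``MLP`` base estimator here is a hypothetical placeholder:

```python
import torch.nn as nn
from torchensemble import VotingClassifier

class MLP(nn.Module):
    """Hypothetical small base estimator for illustration only."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10)
        )

    def forward(self, x):
        return self.net(x.view(x.size(0), -1))

# n_jobs=1 disables the internal parallelization across base
# estimators, trading training speed for lower memory pressure.
ensemble = VotingClassifier(estimator=MLP, n_estimators=10, cuda=False, n_jobs=1)
```

With ``n_jobs=1`` the base estimators are fitted sequentially, so only one model's activations and gradients occupy memory at a time.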
