   :align: center
   :width: 400

Ensemble PyTorch Documentation
==============================

Ensemble PyTorch is a unified ensemble framework for PyTorch that makes it easy to improve the performance and robustness of your deep learning model. It provides:

* |:arrow_up_small:| Easy ways to improve the performance and robustness of your deep learning model.
* |:eyes:| Easy-to-use APIs for training and evaluating the ensemble.
* |:zap:| High training efficiency with parallelization.

Guidepost
---------

* To get started, please refer to the `Quick Start <./quick_start.html>`__;
* To learn more about the ensemble methods supported, please refer to the `Introduction <./introduction.html>`__;
* If you are unsure which ensemble method to use, our `experiments <./experiment.html>`__ and the instructions in the `guidance <./guide.html>`__ may be helpful.

Example
-------

.. code:: python

   from torchensemble import VotingClassifier  # voting is a classic ensemble strategy

   # Load data
   train_loader = DataLoader(...)
   test_loader = DataLoader(...)

   # Define the ensemble
   model = VotingClassifier(estimator=base_estimator,  # your deep learning model
                            n_estimators=10)           # the number of base estimators

   # Set the optimizer
   model.set_optimizer("Adam",                      # type of parameter optimizer
                       lr=learning_rate,            # learning rate of the optimizer
                       weight_decay=weight_decay)   # weight decay of the optimizer

   # Set the learning rate scheduler (optional)
   model.set_scheduler("CosineAnnealingLR", T_max=epochs)

   # Train the ensemble
   model.fit(train_loader,
             epochs=epochs)  # the number of training epochs

   # Evaluate the ensemble
   acc = model.predict(test_loader)  # testing accuracy

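To build intuition for what a voting ensemble does, here is a rough, framework-free sketch of hard majority voting in plain Python, using hypothetical toy predictions. Note that this only illustrates the general idea: ``VotingClassifier`` itself operates on predicted class probabilities rather than on hard labels.

.. code:: python

   from collections import Counter

   def majority_vote(predictions):
       """Combine per-estimator class predictions by majority vote.

       ``predictions`` is a list with one inner list of class labels per
       base estimator; all inner lists have one label per sample.
       """
       n_samples = len(predictions[0])
       combined = []
       for i in range(n_samples):
           votes = Counter(est[i] for est in predictions)
           combined.append(votes.most_common(1)[0][0])
       return combined

   # Three hypothetical base estimators, four samples each
   preds = [
       [0, 1, 1, 2],
       [0, 1, 2, 2],
       [1, 1, 1, 2],
   ]
   print(majority_vote(preds))  # -> [0, 1, 1, 2]

Because the estimators disagree on individual samples but agree on the majority, the combined prediction can be more robust than any single estimator's output.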
Content
-------

.. toctree::
   :maxdepth: 1

   Quick Start <quick_start>
   Introduction <introduction>