**cuda-specific-examples/README.md** (+2 −2)

@@ -1,8 +1,8 @@

## Eclipse Deeplearning4j: CUDA Specific Examples

- Switching from a CPU only backend to a GPU backend is as simple as changing one dependency - one line in the pom.xml file for Maven users. Instead of specifying the nd4j-native-platform module specify the nd4j-cuda-X-platform where X indicated the version of CUDA. It is recommended to install cuDNN for better GPU performance. Runs will log warnings if cuDNN is not found. For more information, please refer to documentation [here](https://deeplearning4j.org/docs/latest/deeplearning4j-config-cudnn)
+ Switching from a CPU-only backend to a GPU backend is as simple as changing one dependency - one line in the pom.xml file for Maven users. Instead of specifying the nd4j-native-platform module, specify nd4j-cuda-X-platform, where X indicates the CUDA version. It is recommended to install cuDNN for better GPU performance; runs will log warnings if cuDNN is not found. For more information, please refer to the documentation [here](https://deeplearning4j.konduit.ai/config/backends/config-cudnn#using-deeplearning-4-j-with-cudnn)
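As a sketch of that one-line swap in pom.xml (the CUDA version in the artifact name and the version property are illustrative; match them to your installed CUDA version and DL4J release):

```xml
<!-- CPU backend: remove this dependency... -->
<dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>nd4j-native-platform</artifactId>
    <version>${dl4j-master.version}</version>
</dependency>

<!-- ...and replace it with the CUDA backend, e.g. for CUDA 11.6: -->
<dependency>
    <groupId>org.nd4j</groupId>
    <artifactId>nd4j-cuda-11.6-platform</artifactId>
    <version>${dl4j-master.version}</version>
</dependency>
```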

- Users with acces to multiple gpus systems can use DL4J to further speed up the training process by training the models in parallel on them. Ideally these GPUs have the same speed and networking capabilities. This project contains a set of examples that demonstrate how to leverage performance from a multiple gpus setup. More documentation can be found [here](https://deeplearning4j.konduit.ai/getting-started/tutorials/using-multiple-gpus)
+ Users with access to systems with multiple GPUs can use DL4J to further speed up training by training models on them in parallel. Ideally these GPUs have the same speed and networking capabilities. This project contains a set of examples that demonstrate how to leverage performance from a multi-GPU setup. More documentation can be found [here](https://deeplearning4j.konduit.ai/getting-started/tutorials/using-multiple-gpus)
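A minimal sketch of the multi-GPU training pattern these examples demonstrate, using DL4J's `ParallelWrapper` (builder values are illustrative, and `model`/`trainData` are assumed to be an already-configured network and iterator; running this requires the DL4J dependencies on the classpath):

```java
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.parallelism.ParallelWrapper;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;

public class MultiGpuTrainingSketch {
    // Trains `model` across GPUs by fitting copies in parallel
    // and periodically averaging their parameters.
    public static void trainOnMultipleGpus(MultiLayerNetwork model, DataSetIterator trainData) {
        ParallelWrapper wrapper = new ParallelWrapper.Builder<>(model)
            .prefetchBuffer(24)              // async minibatch prefetch per worker
            .workers(4)                      // typically one worker per GPU
            .averagingFrequency(3)           // average parameters every 3 minibatches
            .reportScoreAfterAveraging(true)
            .build();
        wrapper.fit(trainData);
        wrapper.shutdown();                  // release workers when training is done
    }
}
```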
[Go back](../README.md) to the main repository page to explore other features/functionality of the **Eclipse Deeplearning4J** ecosystem. File an issue [here](https://github.com/eclipse/deeplearning4j-examples/issues) to request new features.
**cuda-specific-examples/src/main/java/org/deeplearning4j/examples/multigpu/advanced/charmodelling/CharacterIterator.java** (+0 −1)

@@ -163,7 +163,6 @@ public DataSet next(int num) {
// dimension 0 = number of examples in minibatch
// dimension 1 = size of each vector (i.e., number of characters)
// dimension 2 = length of each time series/example
- //Why 'f' order here? See http://deeplearning4j.org/usingrnns.html#data section "Alternative: Implementing a custom DataSetIterator"
**cuda-specific-examples/src/main/java/org/deeplearning4j/examples/multigpu/advanced/charmodelling/GenerateTxtModel.java** (+1 −3)

@@ -58,9 +58,7 @@
from Project Gutenberg. Training on other text sources should be relatively easy to implement.

For more details on RNNs in DL4J, see the following:
**cuda-specific-examples/src/main/java/org/deeplearning4j/examples/multigpu/advanced/transferlearning/vgg16/README.md** (+1 −1)

@@ -1,5 +1,5 @@

##### TransferLearning
- Demonstrates use of the dl4j transfer learning API which allows users to construct a model based off an existing model by modifying the architecture, freezing certain parts selectively and then fine tuning parameters. Read the documentation for the Transfer Learning API at [https://deeplearning4j.org/transfer-learning](https://deeplearning4j.org/transfer-learning).
+ Demonstrates use of the DL4J transfer learning API, which allows users to construct a model based on an existing model by modifying the architecture, selectively freezing certain parts, and then fine-tuning parameters. Read the documentation for the Transfer Learning API at [https://deeplearning4j.konduit.ai/tuning-and-training/transfer-learning](https://deeplearning4j.konduit.ai/tuning-and-training/transfer-learning).
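A hedged sketch of that API pattern (layer names such as "fc2" and "predictions" follow the VGG16 zoo model; `vgg16` and `numClasses` are assumed inputs, and hyperparameter values are illustrative):

```java
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.deeplearning4j.nn.transferlearning.FineTuneConfiguration;
import org.deeplearning4j.nn.transferlearning.TransferLearning;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.learning.config.Nesterovs;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class TransferLearningSketch {
    // Rebuilds the head of a pretrained VGG16 graph for `numClasses` outputs,
    // freezing everything up to (and including) layer "fc2".
    public static ComputationGraph adapt(ComputationGraph vgg16, int numClasses) {
        FineTuneConfiguration ftConf = new FineTuneConfiguration.Builder()
            .updater(new Nesterovs(5e-5))                  // small learning rate for fine tuning
            .seed(123)
            .build();
        return new TransferLearning.GraphBuilder(vgg16)
            .fineTuneConfiguration(ftConf)
            .setFeatureExtractor("fc2")                    // freeze up to fc2
            .removeVertexKeepConnections("predictions")    // drop the old 1000-way head
            .addLayer("predictions",
                new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                    .nIn(4096).nOut(numClasses)
                    .activation(Activation.SOFTMAX).build(),
                "fc2")
            .build();
    }
}
```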

For more examples, refer to the section in the dl4j-examples repo [here](../../../../../../../../../../../dl4j-examples/src/main/java/org/deeplearning4j/examples/advanced/features/transferlearning/README.md)
**dl4j-distributed-training-examples/src/main/java/org/deeplearning4j/distributedtrainingexamples/patent/README.md** (+6 −6)

@@ -15,7 +15,7 @@ Number of classes: 398
Number of documents/examples (after preprocessing): approx. 5.7 million (training set) plus approx. 170000 (test set)

Dataset size: approx. 86 GB (zip format), 464 GB raw text. Note the example performs preprocessing from the compressed ZIP format.
- Requires an additional 20GB of storage space for preprocessing
+ Requires an additional 20GB of storage space for preprocessing

**Neural Network**: a CNN classifier for text classification. Approximately 600,000 parameters

@@ -70,7 +70,7 @@ MASTER_IP=...
AZURE_STORAGE_ACCT=...
AZURE_STORAGE_ACCT_KEY=...
AZURE_CONTAINER_ZIPS=patentzips
- AZURE_CONTAINER_PREPROC=patentExamplePreproc
+ AZURE_CONTAINER_PREPROC=patentExamplePreproc
```

Note that some clusters may have the master already configured.

@@ -93,7 +93,7 @@ is pointed to the same value for ```AZURE_CONTAINER_PREPROC```.
**Alternative to setting the storage account**

You can set the storage account credentials in your Hadoop core-site.xml file. See "Configuring Credentials" in this guide for details: [https://hadoop.apache.org/docs/current/hadoop-azure/index.html](https://hadoop.apache.org/docs/current/hadoop-azure/index.html)
-
+

**Second: Run the Script**

@@ -108,7 +108,7 @@ After preprocessing is complete, you will have:
   2. For HTTP access (if enabled): ```https://AZURE_STORAGE_ACCT.blob.core.windows.net/AZURE_CONTAINER_ZIPS/```
2. Preprocessed training and test data (with default sequence length of 1000 and minibatch size of 32)
   1. For Spark access: ```wasbs://AZURE_CONTAINER_PREPROC@AZURE_STORAGE_ACCT.blob.core.windows.net/seqLength1000_mb32/```
-  2. For HTTP access (if enabled): ```https://AZURE_STORAGE_ACCT.blob.core.windows.net/AZURE_CONTAINER_PREPROC/```
+  2. For HTTP access (if enabled): ```https://AZURE_STORAGE_ACCT.blob.core.windows.net/AZURE_CONTAINER_PREPROC/```

Note that the preprocessed directory will have ```train``` and ```test``` subdirectories.
The format of the files in those train/test directories is a custom format designed to be loaded

@@ -131,7 +131,7 @@ Set the following required arguments to the same values used for the preprocessing
MASTER_IP=...
AZURE_STORAGE_ACCT=...
AZURE_STORAGE_ACCT_KEY=...
- AZURE_CONTAINER_PREPROC=patentExamplePreproc
+ AZURE_CONTAINER_PREPROC=patentExamplePreproc
```

The following configuration options also need to be set:

@@ -142,7 +142,7 @@ LOCAL_SAVE_DIR
Your network mask should be set to the network used for Spark communication. For example, [10.0.0.0/16]
See the following links for further details:
- * [DL4J Distributed Training - Netmask](https://deeplearning4j.org/distributed#netmask)
+ * [DL4J Distributed Training - Netmask](https://deeplearning4j.konduit.ai/distributed-deep-learning/parameter-server#netmask)
* [How to Find the IP Address, Subnet Mask & Gateway of a Computer](https://yourbusiness.azcentral.com/ip-address-subnet-mask-gateway-computer-14563.html)
* [What is a Subnet Mask](https://www.iplocation.net/subnet-mask)
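As an illustration of what the netmask setting means (plain Java, no DL4J; the class name is hypothetical), a node's IP matches a configured netmask such as 10.0.0.0/16 when its masked address equals the masked network address:

```java
public class SubnetCheck {
    // Returns true if the IPv4 address lies within the given CIDR block, e.g. "10.0.0.0/16".
    public static boolean inSubnet(String ip, String cidr) {
        String[] parts = cidr.split("/");
        int prefix = Integer.parseInt(parts[1]);
        int mask = prefix == 0 ? 0 : -1 << (32 - prefix);   // e.g. /16 -> 0xFFFF0000
        return (toInt(ip) & mask) == (toInt(parts[0]) & mask);
    }

    // Packs a dotted-quad IPv4 string into a 32-bit int.
    private static int toInt(String ip) {
        int v = 0;
        for (String octet : ip.split("\\.")) {
            v = (v << 8) | Integer.parseInt(octet);
        }
        return v;
    }

    public static void main(String[] args) {
        System.out.println(inSubnet("10.0.3.4", "10.0.0.0/16"));    // true
        System.out.println(inSubnet("192.168.1.5", "10.0.0.0/16")); // false
    }
}
```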
**dl4j-distributed-training-examples/src/main/java/org/deeplearning4j/distributedtrainingexamples/tinyimagenet/TrainSpark.java** (+2 −2)

@@ -78,7 +78,7 @@
 * a larger network, better selection of hyperparameters, and more epochs.
 *
 * For further details on DL4J's Spark implementation, see the "Distributed Deep Learning" pages at:

      .unicastPort(port)           // Should be open for IN/OUT communications on all Spark nodes
-     .networkMask(networkMask)    // Local network mask - for example, 10.0.0.0/16 - see https://deeplearning4j.org/docs/latest/deeplearning4j-scaleout-parameter-server
+     .networkMask(networkMask)    // Local network mask - for example, 10.0.0.0/16 - see https://deeplearning4j.konduit.ai/distributed-deep-learning/parameter-server#netmask
      .controllerAddress(masterIP) // IP address of the master/driver node
**dl4j-examples/README.md** (+1 −1)

@@ -166,7 +166,7 @@ Trace where data from each example comes from and get metadata on prediction errors
Train a MultiLayerNetwork where the errors come from an external source, instead of using an Output layer and a labels array.

##### TransferLearning
- Demonstrates use of the dl4j transfer learning API which allows users to construct a model based off an existing model by modifying the architecture, freezing certain parts selectively and then fine tuning parameters. Read the documentation for the Transfer Learning API at [https://deeplearning4j.org/transfer-learning](https://deeplearning4j.org/transfer-learning).
+ Demonstrates use of the DL4J transfer learning API, which allows users to construct a model based on an existing model by modifying the architecture, selectively freezing certain parts, and then fine-tuning parameters. Read the documentation for the Transfer Learning API at [https://deeplearning4j.konduit.ai/tuning-and-training/transfer-learning](https://deeplearning4j.konduit.ai/tuning-and-training/transfer-learning).
Save time on the forward pass during multiple epochs by "featurizing" the datasets. FeaturizedPreSave saves the output at the last frozen layer and FitFromFeaturize fits to the presaved data so you can iterate quicker with different learning parameters.
**dl4j-examples/src/main/java/org/deeplearning4j/examples/advanced/modelling/charmodelling/generatetext/GenerateTxtCharCompGraphModel.java** (+1 −1)

@@ -38,7 +38,7 @@
/**
 * This example is almost identical to the LSTMCharModellingExample, except that it utilizes the ComputationGraph
 * architecture instead of the MultiLayerNetwork architecture. See the javadoc in that example for details.
- * For more details on the ComputationGraph architecture, see http://deeplearning4j.org/compgraph
+ * For more details on the ComputationGraph architecture, see https://deeplearning4j.konduit.ai/models/computationgraph
 *
 * In addition to the use of the ComputationGraph, this version has skip connections between the first and output layers,
 * in order to show how this configuration is done. In practice, this means we have the following types of connections:
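The skip connection described above can be sketched with the graph builder API (layer names and hyperparameter values here are illustrative, not the example's exact configuration):

```java
import org.deeplearning4j.nn.conf.ComputationGraphConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.LSTM;
import org.deeplearning4j.nn.conf.layers.RnnOutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class SkipConnectionSketch {
    // Builds a two-LSTM-layer graph where the output layer receives input from
    // BOTH the first and second LSTM layers (the skip connection).
    public static ComputationGraphConfiguration build(int nIn, int hidden, int nOut) {
        return new NeuralNetConfiguration.Builder()
            .seed(12345)
            .graphBuilder()
            .addInputs("input")
            .addLayer("first", new LSTM.Builder()
                .nIn(nIn).nOut(hidden).activation(Activation.TANH).build(), "input")
            .addLayer("second", new LSTM.Builder()
                .nIn(hidden).nOut(hidden).activation(Activation.TANH).build(), "first")
            // The output layer sees activations from "first" AND "second", merged
            // along the feature dimension, hence nIn = 2 * hidden.
            .addLayer("output", new RnnOutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                .activation(Activation.SOFTMAX).nIn(2 * hidden).nOut(nOut).build(),
                "first", "second")
            .setOutputs("output")
            .build();
    }
}
```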