Commit 38f0d30

fealho and csala authored
Remove self.trained_epochs (#134)
* Expose hyperparameters/change cuda logic
* Fix set_device/update documentation
* Remove self from discriminator
* Fix optimizers
* Remove self from discriminator
* Remove "_" from variables
* Remove self.trained_epochs variable

Co-authored-by: Carles Sala <carles@pythiac.com>
1 parent e93e00d commit 38f0d30

1 file changed

Lines changed: 0 additions & 2 deletions

File tree

ctgan/synthesizers/ctgan.py

@@ -152,7 +152,6 @@ def __init__(self, embedding_dim=128, generator_dim=(256, 256), discriminator_di
         self._log_frequency = log_frequency
         self._verbose = verbose
         self._epochs = epochs
-        self.trained_epochs = 0
         self.pac = pac

         if not cuda or not torch.cuda.is_available():
@@ -330,7 +329,6 @@ def fit(self, train_data, discrete_columns=tuple(), epochs=None):

        steps_per_epoch = max(len(train_data) // self._batch_size, 1)
        for i in range(epochs):
-            self.trained_epochs += 1
            for id_ in range(steps_per_epoch):

                for n in range(self._discriminator_steps):
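After this change, `fit` no longer accumulates a `trained_epochs` counter on the instance; the training loop is driven purely by the local loop variables. A minimal standalone sketch of the resulting loop structure (using hypothetical stand-in names for the CTGAN internals `train_data`, `batch_size`, and `discriminator_steps` rather than the real class):

```python
def fit_loop(train_data, batch_size, epochs, discriminator_steps):
    """Mirror the nesting of CTGAN.fit after this commit: epochs ->
    steps per epoch -> discriminator steps, with no instance-level
    trained-epochs counter. Returns the visited (epoch, step, d_step)
    triples so the structure can be inspected."""
    steps_per_epoch = max(len(train_data) // batch_size, 1)
    visited = []
    for i in range(epochs):
        # Previously: self.trained_epochs += 1 lived here; now the
        # epoch index `i` is the only epoch bookkeeping.
        for id_ in range(steps_per_epoch):
            for n in range(discriminator_steps):
                visited.append((i, id_, n))
    return visited

# 10 rows with batch size 4 gives max(10 // 4, 1) = 2 steps per epoch.
schedule = fit_loop(list(range(10)), batch_size=4, epochs=3,
                    discriminator_steps=2)
```

This is only a structural sketch; the real method interleaves generator and discriminator optimizer steps inside these loops.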

0 commit comments
