# Mixed Precision DCGAN Training in PyTorch
`main_amp.py` is based on https://github.com/pytorch/examples/tree/master/dcgan. It implements Automatic Mixed Precision (Amp) training of the DCGAN example for different datasets. Command-line flags forwarded to `amp.initialize` are used to easily manipulate and switch between various pure and mixed precision "optimization levels," or `opt_level`s. For a detailed explanation of `opt_level`s, see the updated API guide.
We introduce these changes to the PyTorch DCGAN example as described in the Multiple models/optimizers/losses section of the documentation:
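The changes follow the pattern sketched below. This is a minimal, self-contained illustration using toy linear models and a hard-coded `opt_level`; the real `main_amp.py` builds the DCGAN networks and reads the opt level from the command line, so the model definitions and batch sizes here are assumptions, not the exact script contents:

```python
import torch
import torch.nn as nn
from apex import amp

# Toy stand-ins for the DCGAN generator and discriminator; the real script
# builds convolutional networks, but only the Amp wiring matters here.
netG = nn.Linear(100, 64).cuda()
netD = nn.Linear(64, 1).cuda()   # note: no final nn.Sigmoid()
optimizerG = torch.optim.Adam(netG.parameters(), lr=2e-4)
optimizerD = torch.optim.Adam(netD.parameters(), lr=2e-4)

# Register both models and both optimizers with Amp in a single call.
# num_losses=3 tells Amp to keep a separate loss scaler for each loss.
[netD, netG], [optimizerD, optimizerG] = amp.initialize(
    [netD, netG], [optimizerD, optimizerG], opt_level="O1", num_losses=3)

criterion = nn.BCEWithLogitsLoss()
real = torch.randn(8, 64, device="cuda")
noise = torch.randn(8, 100, device="cuda")

# Discriminator loss on real data: backward() is wrapped in amp.scale_loss,
# with loss_id selecting which of the three loss scalers to use.
optimizerD.zero_grad()
errD_real = criterion(netD(real), torch.ones(8, 1, device="cuda"))
with amp.scale_loss(errD_real, optimizerD, loss_id=0) as scaled_loss:
    scaled_loss.backward()

# Discriminator loss on generated (fake) data, using the second loss scaler.
fake = netG(noise)
errD_fake = criterion(netD(fake.detach()), torch.zeros(8, 1, device="cuda"))
with amp.scale_loss(errD_fake, optimizerD, loss_id=1) as scaled_loss:
    scaled_loss.backward()
optimizerD.step()

# Generator loss, using the third loss scaler.
optimizerG.zero_grad()
errG = criterion(netD(fake), torch.ones(8, 1, device="cuda"))
with amp.scale_loss(errG, optimizerG, loss_id=2) as scaled_loss:
    scaled_loss.backward()
optimizerG.step()
```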
Note that we use a different `loss_scaler` for each computed loss. Using a separate loss scaler per loss is optional, not required.
To improve numerical stability, we replaced `nn.Sigmoid() + nn.BCELoss()` with `nn.BCEWithLogitsLoss()`.
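For illustration only (this snippet is not taken from `main_amp.py`): `nn.BCEWithLogitsLoss` fuses the sigmoid with the binary cross-entropy term in a numerically stable way, so the discriminator can output raw logits instead of ending in `nn.Sigmoid()`:

```python
import torch
import torch.nn as nn

logits = torch.randn(4, 1)   # raw discriminator outputs, no Sigmoid layer
targets = torch.ones(4, 1)

# Old formulation: explicit sigmoid followed by BCELoss.
loss_unfused = nn.BCELoss()(torch.sigmoid(logits), targets)

# New formulation: fused sigmoid + BCE, numerically more stable.
loss_fused = nn.BCEWithLogitsLoss()(logits, targets)

print(loss_unfused.item(), loss_fused.item())  # the two agree for well-scaled logits
```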
With the new Amp API you never need to explicitly convert your model, or the input data, to `half()`.
"Pure FP32" training:
Recommended mixed precision training:
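Likewise a sketch, assuming `O1` is the recommended mixed precision level:

```bash
python main_amp.py --opt_level O1
```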
Have a look at the original DCGAN example for more information about the arguments used.
To enable mixed precision training, we introduce the `--opt_level` argument.