A month ago, the NVIDIA research team published a paper, Progressive Growing of GANs for Improved Quality, Stability, and Variation, along with the source code on GitHub.

I went through some trial and error to get the code running properly, so I want to make it easier for you. I think this post will be helpful because the GitHub page does not support posting issues, so there is no place to ask questions and get answers.

The source code for the progressive growing of generative adversarial networks (a.k.a. pGAN) expects its data in h5 (HDF5) format. The Python file h5tool.py converts a few well-known image datasets into h5 files. Using its HDF5Exporter(h5_filename) class, we can also pack our own image arrays into an h5 file. For example, if you have an array of 256x256 grayscale images, you can build the h5 file as follows.

import h5tool
import numpy as np

# Assume `images` already holds 10,000 grayscale 256x256 images.
images = images.reshape(10000, 1, 256, 256)   # (num_images, channels, height, width)
images = np.float32(images)

h5_filename = 'example-256x256.h5'             # same name used in config.py below
h5 = h5tool.HDF5Exporter(h5_filename, 256, 1)  # resolution 256, 1 channel
h5.add_images(images)
h5.close()
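In case it is useful, here is a rough sketch of how the images array used above might be assembled. The ./images directory and the PNG format are just assumptions for illustration; any loader that yields an array of shape (10000, 256, 256) works.

import glob
import numpy as np
from PIL import Image

# Hypothetical loader: read 256x256 grayscale PNGs from a local ./images folder.
paths = sorted(glob.glob('images/*.png'))
images = np.stack([np.asarray(Image.open(p).convert('L')) for p in paths])  # (N, 256, 256)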

Put the resulting h5 file into the directory ../datasets. The Python file config.py contains the configuration of the neural network model. It is not set automatically, so you need to set it up yourself; in particular, change the resolution of the dataset and the filename of your h5 file in the datasets directory. For example,

if 1:
    run_desc = 'example'
    dataset = dict(h5_path='example-256x256.h5', resolution=256, max_labels=0, max_images=20000)  # your h5 file in ../datasets
    train.update(lod_training_kimg=800, lod_transition_kimg=800, total_kimg=20000, minibatch_overrides={})
    G.update(fmap_base=2048)  # generator
    D.update(fmap_base=2048)  # discriminator

This if statement always executes and updates three of the model's configuration dictionaries: train, the generator G, and the discriminator D. config.py contains several such if statements, and you should enable exactly one of them. fmap_base should be a multiple of the resolution.
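To make the "enable exactly one" rule concrete, here is a hypothetical pair of neighboring blocks; the celeba entry is just a placeholder showing how a block is disabled by changing its condition to 0.

if 0:  # disabled: this configuration is skipped entirely
    run_desc = 'celeba'
    dataset = dict(h5_path='celeba-128x128.h5', resolution=128, max_labels=0)

if 1:  # enabled: this configuration takes effect
    run_desc = 'example'
    dataset = dict(h5_path='example-256x256.h5', resolution=256, max_labels=0, max_images=20000)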

The Python file train.py contains the settings for resuming the network. Training a network like this takes a lot of time, so it is essential to save checkpoints and resume training from them. Theano saves the network in Python's pkl (pickle) format; each snapshot contains G, D, and the state of the network at the moment of the checkpoint. The default parameters of the training function are as follows.

def train_gan(....,
        resume_network_pkl =  None,
        resume_kimg        = 0.0,
        resume_time        = 0.0):
        .....

These parameters control how the network is initialized. If you stopped training and want to rerun it, set resume_network_pkl to the path of the pkl file you want to resume from, including the full directory path. The name of the pkl file looks like "../network-snapshot-005600.pkl"; here 5600 is the resume_kimg. Changing resume_time is not essential, but if you want the total training time to be reported correctly, look up the elapsed seconds at the resuming point in the log (found in the log directory) and set resume_time to match the resume_network_pkl file.
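As an illustration, resuming from the snapshot above might look roughly like this; the results directory in the path and the time value are assumptions, so substitute your own.

# Hypothetical resume settings matching network-snapshot-005600.pkl.
resume_network_pkl = '../results/000-example/network-snapshot-005600.pkl'  # assumed full path
resume_kimg        = 5600.0    # matches the 005600 in the snapshot filename
resume_time        = 86400.0   # elapsed seconds read from the log (this value is made up)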

Next time, we will try to understand the structure of the network.