In the first post of this series, we took a look at a simple autoencoder: it took an image and transformed it back into an image. Then we focused in on the discriminator portion of the model, where we took an image and transformed it into a label. Now we focus in on the generator portion of the model and do the inverse operation: we transform a label into an image. To recap:
Autoencoder: image -> image
Discriminator: image -> label
Generator: label -> image (This is what we are doing now!)
Still Need Data of Course
Nothing changes here. We are still using the MNIST handwritten digit set and have an input and an output to our model.
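For reference, a minimal sketch of that data setup, carried over from the earlier posts, might look like the following. The file paths and the provide-data-desc/provide-label-desc calls here are assumptions; the important part is that we keep the same train-data iterator along with its data-desc and label-desc descriptors, which are used below when binding.

;; Sketch of the MNIST data setup from the earlier posts (paths are assumptions).
(def batch-size 100)

(def train-data
  (mx-io/mnist-iter {:image "data/train-images-idx3-ubyte"
                     :label "data/train-labels-idx1-ubyte"
                     :input-shape [784]
                     :flat true
                     :batch-size batch-size
                     :shuffle true}))

;; Descriptors for the image data and the digit label, reused when binding the model.
(def data-desc (first (mx-io/provide-data-desc train-data)))
(def label-desc (first (mx-io/provide-label-desc train-data)))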
The model does change to one-hot encode the label for the number. Other than that, it's pretty much the exact same as the second half of the autoencoder model.
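As a rough sketch of what that generator symbol could look like (the layer names, sizes, and the one-hot operator call are assumptions, not the exact code from the earlier posts): the single digit label comes in, gets one-hot encoded into a vector of 10, and then passes through decoder layers like the ones that ended the autoencoder, finishing with a 784-unit sigmoid output for the image.

;; Sketch of the generator symbol, assuming the autoencoder's decoder half
;; ended in a 784 unit sigmoid layer. Layer names and sizes are guesses.
(defn get-symbol []
  (as-> (sym/variable "input") data
    ;; one-hot encode the single digit label into a vector of 10
    (sym/one-hot "onehot" {:indices data :depth 10})
    ;; decoder half, mirroring the second half of the autoencoder
    (sym/fully-connected "decode" {:data data :num-hidden 50})
    (sym/activation "sigmoid1" {:data data :act-type "sigmoid"})
    (sym/fully-connected "out" {:data data :num-hidden 784})
    (sym/activation "out-act" {:data data :act-type "sigmoid"})))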
When binding the shapes to the model, we now need to specify that the input data shape is the label instead of the image, and that the output of the model is going to be the image.
(def model
  ;;; change data shapes to label shapes
  (-> (m/module (get-symbol) {:data-names ["input"] :label-names ["input_"]})
      (m/bind {:data-shapes [(assoc label-desc :name "input")]
               :label-shapes [(assoc data-desc :name "input_")]})
      (m/init-params {:initializer (initializer/uniform 1)})
      (m/init-optimizer {:optimizer (optimizer/adam {:learning-rate 0.001})})))

(def my-metric (eval-metric/mse))
Training
The training of the model is pretty straightforward. Just be mindful that we are using the batch-label (the number label) as the input and validating with the batch-data (the image).
(defn train [num-epochs]
  (doseq [epoch-num (range 0 num-epochs)]
    (println "starting epoch " epoch-num)
    (mx-io/do-batches
     train-data
     (fn [batch]
       ;;; change input to be the label
       (-> model
           (m/forward {:data (mx-io/batch-label batch)
                       :label (mx-io/batch-data batch)})
           (m/update-metric my-metric (mx-io/batch-data batch))
           (m/backward)
           (m/update))))
    (println "result for epoch " epoch-num " is " (eval-metric/get-and-reset my-metric))))
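Once trained, generating images is just a forward pass with labels and no backward pass. Here is a rough usage sketch, not from the original code: it assumes an ndarray alias for org.apache.clojure-mxnet.ndarray and a batch size of 100 matching the label shape the model was bound with.

(comment
  (train 3)

  ;; A full batch of digit labels, cycling 0-9, shaped to match label-desc.
  (def my-labels
    (ndarray/array (vec (take 100 (cycle (range 10)))) [100]))

  ;; Forward pass only: the first output holds the generated 784-pixel images.
  (def generated-images
    (-> model
        (m/forward {:data [my-labels]})
        (m/outputs)
        (ffirst))))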