Recurrent Nets
Tags: ai, lisp
Date: 2015-01-19
I've been cleaning up and documenting MGL for quite some time now, and while it's nowhere near done, a good portion of the code has been overhauled in the process. There are new additions such as the Adam optimizer (there is a sketch of driving it at the end of this post) and Recurrent Neural Nets. My efforts were mainly focused on the backprop stuff, and I think the definition of feed-forward:
(build-fnn (:class 'digit-fnn)
  (input (->input :size *n-inputs*))
  ;; An affine transformation (weights and biases) of INPUT.
  (hidden-activation (->activation input :size n-hiddens))
  ;; Rectified linear non-linearity.
  (hidden (->relu hidden-activation))
  (output-activation (->activation hidden :size *n-outputs*))
  ;; Softmax fused with its cross-entropy loss.
  (output (->softmax-xe-loss :x output-activation)))
and recurrent nets:
(build-rnn ()
  ;; The body is executed to construct the network for each time step,
  ;; with weights shared across steps.
  (build-fnn (:class 'sum-sign-fnn)
    (input (->input :size 1))
    ;; An LSTM layer: gates and cell state packaged into one lump.
    (h (->lstm input :size n-hiddens))
    (prediction (->softmax-xe-loss
                 (->activation h :name 'prediction :size *n-outputs*)))))
is fairly straightforward already. There is still much code that needs to accompany such a network definition, mostly having to do with how to give inputs and prediction targets to the network, and also with monitoring training. See the full examples for feed-forward and recurrent nets in the documentation. To give a taste of that accompanying code, here are two rough sketches for the feed-forward net above; check the details against the documentation.
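Giving inputs and targets to the network comes down to specializing the SET-INPUT generic function. This is a minimal sketch along the lines of the digit example in the documentation; treat the details (FIND-CLUMP, the TARGET accessor of the softmax lump, MREF and FILL! from MGL-MAT) as things to double-check there.

;; A sketch of giving inputs and targets to DIGIT-FNN, modelled on the
;; example in the documentation. DIGITS is a list with one digit per
;; stripe (that is, per element of the batch).
(defmethod set-input (digits (fnn digit-fnn))
  (let ((input (nodes (find-clump 'input fnn)))
        (output-lump (find-clump 'output fnn)))
    ;; One-hot encode each digit into the rows of the input matrix.
    (fill! 0 input)
    (loop for stripe upfrom 0
          for digit in digits
          do (setf (mref input stripe digit) 1))
    ;; ->SOFTMAX-XE-LOSS takes its targets as class indices.
    (setf (target output-lump) digits)))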
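Optimization itself goes through MINIMIZE, and the new Adam optimizer plugs in where the gradient descent variants do. Another rough sketch, in which the :LEARNING-RATE and :BATCH-SIZE initargs of ADAM-OPTIMIZER and the periodic monitoring setup are my assumptions, and LOG-TEST-ERROR is a hypothetical logging function:

;; A sketch of training DIGIT-FNN with the new Adam optimizer. The
;; initargs and LOG-TEST-ERROR are assumptions; see the full examples
;; in the documentation for the real thing.
(defun train-digit-fnn (fnn)
  (let ((optimizer (make-instance 'adam-optimizer
                                  :learning-rate 0.001
                                  :batch-size 100)))
    (minimize
     ;; Arrange for the (hypothetical) LOG-TEST-ERROR to be called
     ;; every 1000 instances.
     (monitor-optimization-periodically
      optimizer '((:fn log-test-error :period 1000)))
     (make-instance 'bp-learner :bpn fnn)
     ;; Train on 10000 random digits; SET-INPUT above turns each batch
     ;; of them into inputs and targets.
     :dataset (make-instance 'function-sampler
                             :max-n-samples 10000
                             :generator (lambda () (random 10))))))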