Where exactly is a piecewise linear maxout unit used in a convnet?

I know about the structure of a convolutional neural net and I have even applied it to the MNIST digit recognition problem.

I was looking at a cloud-based deep learning API, and it asks me to specify the number of piecewise linear maxout units in my convolutional neural network.

As far as I understand, a typical convnet has a series of convolution kernels, each followed by a sub-sampling (optional pooling) layer, before the next convolutional layer starts. Then, at the end, we have a fully connected hidden layer (of, say, sigmoid neurons) followed by the output layer (which could be a softmax).
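To make the sub-sampling step concrete, here is a minimal numpy sketch of 2x2 max pooling on a single feature map, as I understand it (the function name and even-dimension assumption are just my illustration):

```python
import numpy as np

def max_pool_2x2(fmap):
    """2x2 max pooling: the sub-sampling step after a convolution.
    fmap: (H, W) feature map; H and W assumed even for simplicity."""
    h, w = fmap.shape
    # Group pixels into non-overlapping 2x2 blocks, take the max of each.
    return fmap.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fm = np.arange(16.0).reshape(4, 4)
print(max_pool_2x2(fm))  # [[ 5.  7.] [13. 15.]]
```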

Where in the above structure does a piecewise linear maxout unit fit? I know what a piecewise linear maxout unit does mathematically, but where does it go in this structure? Does it replace one of the above components, or is it an addition at a particular stage?
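To be concrete about the math I mean, here is a minimal numpy sketch of a single maxout unit as I understand it (the shapes and names are just my own illustration): it computes k affine transforms of the input and takes their elementwise maximum.

```python
import numpy as np

def maxout(x, W, b):
    """Piecewise linear maxout unit.
    x: (d,) input vector
    W: (k, m, d) weights for k linear pieces, each producing m outputs
    b: (k, m) biases
    Returns (m,): elementwise max over the k affine pieces."""
    z = np.einsum('kmd,d->km', W, x) + b  # k affine maps of x, shape (k, m)
    return z.max(axis=0)                  # pointwise max -> piecewise linear

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
W = rng.standard_normal((3, 4, 8))  # k=3 pieces, m=4 outputs, d=8 inputs
b = rng.standard_normal((3, 4))
print(maxout(x, W, b).shape)  # (4,)
```

With k=1 this degenerates to a plain linear layer, which is how I picture the "pieces" fitting together.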

Category: machine learning | Date: 2018-03-12


Copyright (C) dskims.com, All Rights Reserved.
