# coding: utf-8
"""Basic neural network layers."""
__all__ = ['Activation', 'LeakyReLU', 'PReLU', 'ELU', 'SELU', 'Swish']

from ... import initializer
from ..block import HybridBlock


class Activation(HybridBlock):
    r"""Applies an activation function to input.

    Parameters
    ----------
    activation : str
        Name of activation function to use.
        See :func:`~mxnet.ndarray.Activation` for available choices.

    Inputs:
        - **data**: input tensor with arbitrary shape.

    Outputs:
        - **out**: output tensor with the same shape as `data`.
    """
    def __init__(self, activation, **kwargs):
        self._act_type = activation
        super(Activation, self).__init__(**kwargs)

    def _alias(self):
        return self._act_type

    def hybrid_forward(self, F, x):
        return F.Activation(x, act_type=self._act_type, name='fwd')

    def __repr__(self):
        s = '{name}({_act_type})'
        return s.format(name=self.__class__.__name__, **self.__dict__)


class LeakyReLU(HybridBlock):
    r"""Leaky version of a Rectified Linear Unit.

    It allows a small gradient when the unit is not active

    .. math::

        f\left(x\right) = \left\{
            \begin{array}{lr}
               \alpha x & : x \lt 0 \\
                      x & : x \geq 0 \\
            \end{array}
        \right.\\

    Parameters
    ----------
    alpha : float
        Slope coefficient for the negative half axis. Must be >= 0.

    Inputs:
        - **data**: input tensor with arbitrary shape.

    Outputs:
        - **out**: output tensor with the same shape as `data`.
    """
    def __init__(self, alpha, **kwargs):
        assert alpha >= 0, "Slope coefficient for LeakyReLU must be no less than 0."
        super(LeakyReLU, self).__init__(**kwargs)
        self._alpha = alpha

    def hybrid_forward(self, F, x):
        return F.LeakyReLU(x, act_type='leaky', slope=self._alpha, name='fwd')

    def __repr__(self):
        s = '{name}({alpha})'
        return s.format(name=self.__class__.__name__, alpha=self._alpha)


class PReLU(HybridBlock):
    r"""Parametric leaky version of a Rectified Linear Unit, from the
    `"Delving Deep into Rectifiers" <https://arxiv.org/abs/1502.01852>`_ paper.

    It learns a gradient when the unit is not active

    .. math::

        f\left(x\right) = \left\{
            \begin{array}{lr}
               \alpha x & : x \lt 0 \\
                      x & : x \geq 0 \\
            \end{array}
        \right.\\

    where alpha is a learned parameter.

    Parameters
    ----------
    alpha_initializer : Initializer
        Initializer for the learnable `alpha` parameter.

    Inputs:
        - **data**: input tensor with arbitrary shape.

    Outputs:
        - **out**: output tensor with the same shape as `data`.
    """
    def __init__(self, alpha_initializer=initializer.Constant(0.25), **kwargs):
        super(PReLU, self).__init__(**kwargs)
        with self.name_scope():
            self.alpha = self.params.get('alpha', shape=(1,),
                                         init=alpha_initializer)

    def hybrid_forward(self, F, x, alpha):
        return F.LeakyReLU(x, gamma=alpha, act_type='prelu', name='fwd')


class ELU(HybridBlock):
    r"""
    Exponential Linear Unit (ELU)
        "Fast and Accurate Deep Network Learning by Exponential Linear Units",
        Clevert et al, 2016
        https://arxiv.org/abs/1511.07289
        Published as a conference paper at ICLR 2016

    Parameters
    ----------
    alpha : float
        The alpha parameter as described by Clevert et al, 2016

    Inputs:
        - **data**: input tensor with arbitrary shape.

    Outputs:
        - **out**: output tensor with the same shape as `data`.
    """
    def __init__(self, alpha=1.0, **kwargs):
        super(ELU, self).__init__(**kwargs)
        self._alpha = alpha

    def hybrid_forward(self, F, x):
        return F.where(x > 0, x, self._alpha * (F.exp(x) - 1.0))


class SELU(HybridBlock):
    r"""
    Scaled Exponential Linear Unit (SELU)
        "Self-Normalizing Neural Networks", Klambauer et al, 2017
        https://arxiv.org/abs/1706.02515

    Inputs:
        - **data**: input tensor with arbitrary shape.

    Outputs:
        - **out**: output tensor with the same shape as `data`.
    """
    def __init__(self, **kwargs):
        super(SELU, self).__init__(**kwargs)

    def hybrid_forward(self, F, x):
        return F.LeakyReLU(x, act_type='selu', name='fwd')


class Swish(HybridBlock):
    r"""
    Swish Activation function
        https://arxiv.org/pdf/1710.05941.pdf

    Parameters
    ----------
    beta : float
        swish(x) = x * sigmoid(beta*x)

    Inputs:
        - **data**: input tensor with arbitrary shape.

    Outputs:
        - **out**: output tensor with the same shape as `data`.
    """
    def __init__(self, beta=1.0, **kwargs):
        super(Swish, self).__init__(**kwargs)
        self._beta = beta

    def hybrid_forward(self, F, x):
        return x * F.sigmoid(self._beta * x, name='fwd')
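# The blocks above delegate the actual computation to mxnet operators
# (F.LeakyReLU, F.where, F.sigmoid, ...). As a rough sketch of the scalar
# math each block computes -- plain Python, no mxnet, illustrative only;
# the function names are hypothetical and the SELU scale/alpha constants
# are the published self-normalizing values from Klambauer et al.:
import math


def _leaky_relu(x, alpha=0.01):
    """f(x) = x for x >= 0, alpha * x otherwise."""
    return x if x >= 0 else alpha * x


def _elu(x, alpha=1.0):
    """f(x) = x for x > 0, alpha * (exp(x) - 1) otherwise."""
    return x if x > 0 else alpha * (math.exp(x) - 1.0)


def _selu(x):
    """SELU is a scaled ELU with fixed constants."""
    scale, alpha = 1.0507009873554805, 1.6732632423543772
    return scale * _elu(x, alpha)


def _swish(x, beta=1.0):
    """f(x) = x * sigmoid(beta * x)."""
    return x / (1.0 + math.exp(-beta * x))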