"""Basic neural network layers."""
__all__ = ['Sequential', 'HybridSequential', 'Dense', 'Dropout', 'Embedding',
           'BatchNorm', 'InstanceNorm', 'LayerNorm', 'Flatten', 'Lambda', 'HybridLambda']

import warnings
import numpy as np

from .activations import Activation
from ..block import Block, HybridBlock
from ..utils import _indent
from ... import nd, sym


class Sequential(Block):
    """Stacks Blocks sequentially.

    Example::

        net = nn.Sequential()
        # use net's name_scope to give child Blocks appropriate names.
        with net.name_scope():
            net.add(nn.Dense(10, activation='relu'))
            net.add(nn.Dense(20))
    """
    def __init__(self, prefix=None, params=None):
        super(Sequential, self).__init__(prefix=prefix, params=params)

    def add(self, *blocks):
        """Adds block on top of the stack."""
        for block in blocks:
            self.register_child(block)

    def forward(self, x):
        for block in self._children.values():
            x = block(x)
        return x

    def __repr__(self):
        s = '{name}(\n{modstr}\n)'
        modstr = '\n'.join(['  ({key}): {block}'.format(key=key,
                                                        block=_indent(block.__repr__(), 2))
                            for key, block in self._children.items()])
        return s.format(name=self.__class__.__name__, modstr=modstr)

    def __getitem__(self, key):
        layers = list(self._children.values())[key]
        if isinstance(layers, list):
            net = type(self)(prefix=self._prefix)
            with net.name_scope():
                net.add(*layers)
            return net
        return layers

    def __len__(self):
        return len(self._children)

    def hybridize(self, active=True, **kwargs):
        """Activates or deactivates `HybridBlock` children recursively. Has no
        effect on non-hybrid children.

        Parameters
        ----------
        active : bool, default True
            Whether to turn hybrid on or off.
        **kwargs : string
            Additional flags for hybridized operator.
        """
        if self._children and all(isinstance(c, HybridBlock) for c in self._children.values()):
            warnings.warn(
                "All children of this Sequential layer '%s' are HybridBlocks. Consider "
                "using HybridSequential for the best performance." % self.prefix,
                stacklevel=2)
        super(Sequential, self).hybridize(active, **kwargs)
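# Illustrative sketch (not part of the upstream module): indexing a Sequential with an
# int returns the child Block itself, while slicing returns a new Sequential wrapping
# the selected children (see `__getitem__` above). The helper name
# `_example_sequential_indexing` is hypothetical; it assumes `mxnet` is installed.
def _example_sequential_indexing():
    from mxnet.gluon import nn
    net = nn.Sequential()
    with net.name_scope():
        net.add(nn.Dense(10, activation='relu'), nn.Dense(20), nn.Dense(2))
    first = net[0]    # a single Dense block
    tail = net[1:]    # a new Sequential containing the last two Dense blocks
    return first, tail, len(net)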
class HybridSequential(HybridBlock):
    """Stacks HybridBlocks sequentially.

    Example::

        net = nn.HybridSequential()
        # use net's name_scope to give child Blocks appropriate names.
        with net.name_scope():
            net.add(nn.Dense(10, activation='relu'))
            net.add(nn.Dense(20))
        net.hybridize()
    """
    def __init__(self, prefix=None, params=None):
        super(HybridSequential, self).__init__(prefix=prefix, params=params)

    def add(self, *blocks):
        """Adds block on top of the stack."""
        for block in blocks:
            self.register_child(block)

    def hybrid_forward(self, F, x):
        for block in self._children.values():
            x = block(x)
        return x

    def __repr__(self):
        s = '{name}(\n{modstr}\n)'
        modstr = '\n'.join(['  ({key}): {block}'.format(key=key,
                                                        block=_indent(block.__repr__(), 2))
                            for key, block in self._children.items()])
        return s.format(name=self.__class__.__name__, modstr=modstr)

    def __getitem__(self, key):
        layers = list(self._children.values())[key]
        if isinstance(layers, list):
            net = type(self)(prefix=self._prefix)
            with net.name_scope():
                net.add(*layers)
            return net
        return layers

    def __len__(self):
        return len(self._children)
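# Illustrative sketch (not part of the upstream module): a HybridSequential runs
# imperatively until `hybridize()` is called, after which subsequent calls reuse a
# cached symbolic graph. `_example_hybridize` is a hypothetical helper and assumes
# `mxnet` is installed.
def _example_hybridize():
    import mxnet as mx
    from mxnet.gluon import nn
    net = nn.HybridSequential()
    with net.name_scope():
        net.add(nn.Dense(64, activation='relu'), nn.Dense(10))
    net.initialize()
    net.hybridize()                      # later calls go through the cached graph
    return net(mx.nd.ones((2, 20)))      # output shape (2, 10)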
class Dense(HybridBlock):
    r"""Just your regular densely-connected NN layer.

    `Dense` implements the operation:
    `output = activation(dot(input, weight) + bias)`
    where `activation` is the element-wise activation function
    passed as the `activation` argument, `weight` is a weights matrix
    created by the layer, and `bias` is a bias vector created by the layer
    (only applicable if `use_bias` is `True`).

    Note: the input must be a tensor with rank 2. Use `flatten` to convert it
    to rank 2 manually if necessary.

    Parameters
    ----------
    units : int
        Dimensionality of the output space.
    activation : str
        Activation function to use. See help on `Activation` layer.
        If you don't specify anything, no activation is applied
        (i.e. "linear" activation: `a(x) = x`).
    use_bias : bool
        Whether the layer uses a bias vector.
    flatten : bool
        Whether the input tensor should be flattened.
        If true, all but the first axis of input data are collapsed together.
        If false, all but the last axis of input data are kept the same, and
        the transformation applies on the last axis.
    dtype : str or np.dtype, default 'float32'
        Data type of the weight and bias parameters.
    weight_initializer : str or `Initializer`
        Initializer for the `kernel` weights matrix.
    bias_initializer : str or `Initializer`
        Initializer for the bias vector.
    in_units : int, optional
        Size of the input data. If not specified, initialization will be
        deferred to the first time `forward` is called and `in_units`
        will be inferred from the shape of input data.
    prefix : str or None
        See document of `Block`.
    params : ParameterDict or None
        See document of `Block`.

    Inputs:
        - **data**: if `flatten` is True, `data` should be a tensor with shape
          `(batch_size, x1, x2, ..., xn)`, where x1 * x2 * ... * xn is equal to
          `in_units`. If `flatten` is False, `data` should have shape
          `(x1, x2, ..., xn, in_units)`.

    Outputs:
        - **out**: if `flatten` is True, `out` will be a tensor with shape
          `(batch_size, units)`. If `flatten` is False, `out` will have shape
          `(x1, x2, ..., xn, units)`.
    """
    def __init__(self, units, activation=None, use_bias=True, flatten=True,
                 dtype='float32', weight_initializer=None, bias_initializer='zeros',
                 in_units=0, **kwargs):
        super(Dense, self).__init__(**kwargs)
        self._flatten = flatten
        with self.name_scope():
            self._units = units
            self._in_units = in_units
            self.weight = self.params.get('weight', shape=(units, in_units),
                                          init=weight_initializer, dtype=dtype,
                                          allow_deferred_init=True)
            if use_bias:
                self.bias = self.params.get('bias', shape=(units,),
                                            init=bias_initializer, dtype=dtype,
                                            allow_deferred_init=True)
            else:
                self.bias = None
            if activation is not None:
                self.act = Activation(activation, prefix=activation + '_')
            else:
                self.act = None

    def hybrid_forward(self, F, x, weight, bias=None):
        act = F.FullyConnected(x, weight, bias, no_bias=bias is None,
                               num_hidden=self._units, flatten=self._flatten,
                               name='fwd')
        if self.act is not None:
            act = self.act(act)
        return act

    def __repr__(self):
        s = '{name}({layout}, {act})'
        shape = self.weight.shape
        return s.format(name=self.__class__.__name__,
                        act=self.act if self.act else 'linear',
                        layout='{0} -> {1}'.format(shape[1] if shape[1] else None,
                                                   shape[0]))


class Dropout(HybridBlock):
    """Applies Dropout to the input.

    Dropout consists of randomly setting a fraction `rate` of input units
    to 0 at each update during training time, which helps prevent overfitting.

    Parameters
    ----------
    rate : float
        Fraction of the input units to drop. Must be a number between 0 and 1.
    axes : tuple of int, default ()
        The axes on which the dropout mask is shared. If empty, regular dropout
        is applied.

    Inputs:
        - **data**: input tensor with arbitrary shape.

    Outputs:
        - **out**: output tensor with the same shape as `data`.

    References
    ----------
        Dropout: A Simple Way to Prevent Neural Networks from Overfitting
    """
    def __init__(self, rate, axes=(), **kwargs):
        super(Dropout, self).__init__(**kwargs)
        self._rate = rate
        self._axes = axes

    def hybrid_forward(self, F, x):
        return F.Dropout(x, p=self._rate, axes=self._axes, name='fwd')

    def __repr__(self):
        s = '{name}(p = {_rate}, axes={_axes})'
        return s.format(name=self.__class__.__name__, **self.__dict__)
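# Illustrative sketch (not part of the upstream module): contrasts the output shapes of
# `Dense` with `flatten=True` (the default) and `flatten=False`, matching the shapes
# described in the Dense docstring above. `_example_dense_flatten` is a hypothetical
# helper and assumes `mxnet` is installed.
def _example_dense_flatten():
    import mxnet as mx
    from mxnet.gluon import nn
    x = mx.nd.ones((2, 3, 8))             # (batch, sequence, features)
    flat = nn.Dense(4)                    # collapses all but the first axis
    last = nn.Dense(4, flatten=False)     # acts on the last axis only
    flat.initialize()
    last.initialize()
    return flat(x).shape, last(x).shape   # (2, 4) and (2, 3, 4)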
class BatchNorm(HybridBlock):
    """Batch normalization layer (Ioffe and Szegedy, 2014).

    Normalizes the input at each batch, i.e. applies a transformation
    that maintains the mean activation close to 0 and the activation
    standard deviation close to 1.

    Parameters
    ----------
    axis : int, default 1
        The axis that should be normalized. This is typically the channels
        (C) axis. For instance, after a `Conv2D` layer with `layout='NCHW'`,
        set `axis=1` in `BatchNorm`. If `layout='NHWC'`, then set `axis=3`.
    momentum : float, default 0.9
        Momentum for the moving average.
    epsilon : float, default 1e-5
        Small float added to variance to avoid dividing by zero.
    center : bool, default True
        If True, add offset of `beta` to the normalized tensor.
        If False, `beta` is ignored.
    scale : bool, default True
        If True, multiply by `gamma`. If False, `gamma` is not used.
        When the next layer is linear (also e.g. `nn.relu`), this can be
        disabled since the scaling will be done by the next layer.
    use_global_stats : bool, default False
        If True, use global moving statistics instead of local batch-norm,
        which turns batch-norm into a scale-shift operator.
        If False, use local batch-norm.
    beta_initializer : str or `Initializer`, default 'zeros'
        Initializer for the beta weight.
    gamma_initializer : str or `Initializer`, default 'ones'
        Initializer for the gamma weight.
    running_mean_initializer : str or `Initializer`, default 'zeros'
        Initializer for the running (moving) mean.
    running_variance_initializer : str or `Initializer`, default 'ones'
        Initializer for the running (moving) variance.
    in_channels : int, default 0
        Number of channels (feature maps) in input data. If not specified,
        initialization will be deferred to the first time `forward` is called
        and `in_channels` will be inferred from the shape of input data.

    Inputs:
        - **data**: input tensor with arbitrary shape.

    Outputs:
        - **out**: output tensor with the same shape as `data`.
    """
    def __init__(self, axis=1, momentum=0.9, epsilon=1e-5, center=True, scale=True,
                 use_global_stats=False, beta_initializer='zeros',
                 gamma_initializer='ones', running_mean_initializer='zeros',
                 running_variance_initializer='ones', in_channels=0, **kwargs):
        super(BatchNorm, self).__init__(**kwargs)
        self._kwargs = {'axis': axis, 'eps': epsilon, 'momentum': momentum,
                        'fix_gamma': not scale, 'use_global_stats': use_global_stats}
        if in_channels != 0:
            self.in_channels = in_channels

        self.gamma = self.params.get('gamma', grad_req='write' if scale else 'null',
                                     shape=(in_channels,), init=gamma_initializer,
                                     allow_deferred_init=True, differentiable=scale)
        self.beta = self.params.get('beta', grad_req='write' if center else 'null',
                                    shape=(in_channels,), init=beta_initializer,
                                    allow_deferred_init=True, differentiable=center)
        self.running_mean = self.params.get('running_mean', grad_req='null',
                                            shape=(in_channels,),
                                            init=running_mean_initializer,
                                            allow_deferred_init=True,
                                            differentiable=False)
        self.running_var = self.params.get('running_var', grad_req='null',
                                           shape=(in_channels,),
                                           init=running_variance_initializer,
                                           allow_deferred_init=True,
                                           differentiable=False)

    def cast(self, dtype):
        if np.dtype(dtype).name == 'float16':
            dtype = 'float32'
        super(BatchNorm, self).cast(dtype)

    def hybrid_forward(self, F, x, gamma, beta, running_mean, running_var):
        return F.BatchNorm(x, gamma, beta, running_mean, running_var,
                           name='fwd', **self._kwargs)

    def __repr__(self):
        s = '{name}({content}'
        in_channels = self.gamma.shape[0]
        s += ', in_channels={0}'.format(in_channels if in_channels else None)
        s += ')'
        return s.format(name=self.__class__.__name__,
                        content=', '.join(['='.join([k, v.__repr__()])
                                           for k, v in self._kwargs.items()]))
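# Illustrative sketch (not part of the upstream module): shows the `axis` argument
# selecting the channel axis, as described in the BatchNorm docstring (axis=1 for NCHW
# data, axis=3 for NHWC). `_example_batchnorm_axis` is a hypothetical helper and assumes
# `mxnet` is installed.
def _example_batchnorm_axis():
    import mxnet as mx
    from mxnet.gluon import nn
    nchw = mx.nd.ones((2, 8, 4, 4))       # channels on axis 1
    nhwc = mx.nd.ones((2, 4, 4, 8))       # channels on axis 3
    bn_nchw = nn.BatchNorm(axis=1)
    bn_nhwc = nn.BatchNorm(axis=3)
    bn_nchw.initialize()
    bn_nhwc.initialize()
    return bn_nchw(nchw).shape, bn_nhwc(nhwc).shape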
class Embedding(HybridBlock):
    r"""Turns non-negative integers (indexes/tokens) into dense vectors
    of fixed size, e.g. [4, 20] -> [[0.25, 0.1], [0.6, -0.2]].

    Note: if `sparse_grad` is set to True, the gradient w.r.t. weight will be
    sparse. Only a subset of optimizers support sparse gradients, including SGD,
    AdaGrad and Adam. By default lazy updates are turned on, which may perform
    differently from standard updates. For more details, please check the
    Optimization API at:
    https://mxnet.incubator.apache.org/api/python/optimization/optimization.html

    Parameters
    ----------
    input_dim : int
        Size of the vocabulary, i.e. maximum integer index + 1.
    output_dim : int
        Dimension of the dense embedding.
    dtype : str or np.dtype, default 'float32'
        Data type of output embeddings.
    weight_initializer : Initializer
        Initializer for the `embeddings` matrix.
    sparse_grad : bool
        If True, gradient w.r.t. weight will be a 'row_sparse' NDArray.

    Inputs:
        - **data**: (N-1)-D tensor with shape: `(x1, x2, ..., xN-1)`.

    Output:
        - **out**: N-D tensor with shape: `(x1, x2, ..., xN-1, output_dim)`.
    """
    def __init__(self, input_dim, output_dim, dtype='float32',
                 weight_initializer=None, sparse_grad=False, **kwargs):
        super(Embedding, self).__init__(**kwargs)
        grad_stype = 'row_sparse' if sparse_grad else 'default'
        self._kwargs = {'input_dim': input_dim, 'output_dim': output_dim,
                        'dtype': dtype, 'sparse_grad': sparse_grad}
        self.weight = self.params.get('weight', shape=(input_dim, output_dim),
                                      init=weight_initializer, dtype=dtype,
                                      allow_deferred_init=True, grad_stype=grad_stype)

    def hybrid_forward(self, F, x, weight):
        return F.Embedding(x, weight, name='fwd', **self._kwargs)

    def __repr__(self):
        s = '{block_name}({input_dim} -> {output_dim}, {dtype})'
        return s.format(block_name=self.__class__.__name__, **self._kwargs)


class Flatten(HybridBlock):
    r"""Flattens the input to two dimensional.

    Inputs:
        - **data**: input tensor with arbitrary shape `(N, x1, x2, ..., xn)`

    Output:
        - **out**: 2D tensor with shape: `(N, x1 \cdot x2 \cdot ... \cdot xn)`
    """
    def __init__(self, **kwargs):
        super(Flatten, self).__init__(**kwargs)

    def hybrid_forward(self, F, x):
        return F.Flatten(x)

    def __repr__(self):
        return self.__class__.__name__
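# Illustrative sketch (not part of the upstream module): an Embedding maps integer
# indices of shape (x1, ..., xN-1) to vectors of shape (x1, ..., xN-1, output_dim),
# as described in the Embedding docstring above. `_example_embedding` is a hypothetical
# helper and assumes `mxnet` is installed.
def _example_embedding():
    import mxnet as mx
    from mxnet.gluon import nn
    emb = nn.Embedding(input_dim=1000, output_dim=16)
    emb.initialize()
    tokens = mx.nd.array([[4, 20, 7], [1, 2, 3]])   # shape (2, 3)
    return emb(tokens).shape                        # (2, 3, 16)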
class InstanceNorm(HybridBlock):
    r"""Applies instance normalization to the n-dimensional input array.

    This operator takes an n-dimensional input array (n > 2) and normalizes
    the input using the following formula:

    .. math::

      \bar{C} = \{i \mid i \neq 0, i \neq axis\}

      out = \frac{x - mean[data, \bar{C}]}{ \sqrt{Var[data, \bar{C}]} + \epsilon}
       * gamma + beta

    Parameters
    ----------
    axis : int, default 1
        The axis that will be excluded in the normalization process. This is
        typically the channels (C) axis. For instance, after a `Conv2D` layer
        with `layout='NCHW'`, set `axis=1` in `InstanceNorm`. If `layout='NHWC'`,
        then set `axis=3`. Data will be normalized along axes excluding the
        first axis and the axis given.
    epsilon : float, default 1e-5
        Small float added to variance to avoid dividing by zero.
    center : bool, default True
        If True, add offset of `beta` to the normalized tensor.
        If False, `beta` is ignored.
    scale : bool, default False
        If True, multiply by `gamma`. If False, `gamma` is not used.
        When the next layer is linear (also e.g. `nn.relu`), this can be
        disabled since the scaling will be done by the next layer.
    beta_initializer : str or `Initializer`, default 'zeros'
        Initializer for the beta weight.
    gamma_initializer : str or `Initializer`, default 'ones'
        Initializer for the gamma weight.
    in_channels : int, default 0
        Number of channels (feature maps) in input data. If not specified,
        initialization will be deferred to the first time `forward` is called
        and `in_channels` will be inferred from the shape of input data.

    Inputs:
        - **data**: input tensor with arbitrary shape.

    Outputs:
        - **out**: output tensor with the same shape as `data`.

    References
    ----------
        Instance Normalization: The Missing Ingredient for Fast Stylization

    Examples
    --------
    >>> # Input of shape (2,1,2)
    >>> x = mx.nd.array([[[ 1.1,  2.2]],
    ...                  [[ 3.3,  4.4]]])
    >>> # Instance normalization is calculated with the above formula
    >>> layer = InstanceNorm()
    >>> layer.initialize(ctx=mx.cpu(0))
    >>> layer(x)
    [[[-0.99998355  0.99998331]]
     [[-0.99998319  0.99998361]]]
    <NDArray 2x1x2 @cpu(0)>
    """
    def __init__(self, axis=1, epsilon=1e-5, center=True, scale=False,
                 beta_initializer='zeros', gamma_initializer='ones',
                 in_channels=0, **kwargs):
        super(InstanceNorm, self).__init__(**kwargs)
        self._kwargs = {'eps': epsilon, 'axis': axis, 'center': center, 'scale': scale}
        self._axis = axis
        self._epsilon = epsilon
        self.gamma = self.params.get('gamma', grad_req='write' if scale else 'null',
                                     shape=(in_channels,), init=gamma_initializer,
                                     allow_deferred_init=True)
        self.beta = self.params.get('beta', grad_req='write' if center else 'null',
                                    shape=(in_channels,), init=beta_initializer,
                                    allow_deferred_init=True)

    def hybrid_forward(self, F, x, gamma, beta):
        if self._axis == 1:
            return F.InstanceNorm(x, gamma, beta, name='fwd', eps=self._epsilon)
        x = x.swapaxes(1, self._axis)
        return F.InstanceNorm(x, gamma, beta, name='fwd',
                              eps=self._epsilon).swapaxes(1, self._axis)

    def __repr__(self):
        s = '{name}({content}'
        in_channels = self.gamma.shape[0]
        s += ', in_channels={0}'.format(in_channels)
        s += ')'
        return s.format(name=self.__class__.__name__,
                        content=', '.join(['='.join([k, v.__repr__()])
                                           for k, v in self._kwargs.items()]))


class LayerNorm(HybridBlock):
    r"""Applies layer normalization to the n-dimensional input array.

    This operator takes an n-dimensional input array and normalizes
    the input using the given axis:

    .. math::

      out = \frac{x - mean[data, axis]}{ \sqrt{Var[data, axis]} + \epsilon} * gamma + beta

    Parameters
    ----------
    axis : int, default -1
        The axis that should be normalized. This is typically the axis of the
        channels.
    epsilon : float, default 1e-5
        Small float added to variance to avoid dividing by zero.
    center : bool, default True
        If True, add offset of `beta` to the normalized tensor.
        If False, `beta` is ignored.
    scale : bool, default True
        If True, multiply by `gamma`. If False, `gamma` is not used.
    beta_initializer : str or `Initializer`, default 'zeros'
        Initializer for the beta weight.
    gamma_initializer : str or `Initializer`, default 'ones'
        Initializer for the gamma weight.
    in_channels : int, default 0
        Number of channels (feature maps) in input data. If not specified,
        initialization will be deferred to the first time `forward` is called
        and `in_channels` will be inferred from the shape of input data.

    Inputs:
        - **data**: input tensor with arbitrary shape.

    Outputs:
        - **out**: output tensor with the same shape as `data`.

    References
    ----------
        Layer Normalization

    Examples
    --------
    >>> # Input of shape (2, 5)
    >>> x = mx.nd.array([[1, 2, 3, 4, 5], [1, 1, 2, 2, 2]])
    >>> # Layer normalization is calculated with the above formula
    >>> layer = LayerNorm()
    >>> layer.initialize(ctx=mx.cpu(0))
    >>> layer(x)
    [[-1.41421    -0.707105    0.          0.707105    1.41421   ]
     [-1.2247195  -1.2247195   0.81647956  0.81647956  0.81647956]]
    <NDArray 2x5 @cpu(0)>
    """
    def __init__(self, axis=-1, epsilon=1e-5, center=True, scale=True,
                 beta_initializer='zeros', gamma_initializer='ones',
                 in_channels=0, prefix=None, params=None):
        super(LayerNorm, self).__init__(prefix=prefix, params=params)
        self._kwargs = {'eps': epsilon, 'axis': axis, 'center': center, 'scale': scale}
        self._axis = axis
        self._epsilon = epsilon
        self._center = center
        self._scale = scale
        self.gamma = self.params.get('gamma', grad_req='write' if scale else 'null',
                                     shape=(in_channels,), init=gamma_initializer,
                                     allow_deferred_init=True)
        self.beta = self.params.get('beta', grad_req='write' if center else 'null',
                                    shape=(in_channels,), init=beta_initializer,
                                    allow_deferred_init=True)

    def hybrid_forward(self, F, data, gamma, beta):
        norm_data = F.LayerNorm(data, gamma=gamma, beta=beta,
                                axis=self._axis, eps=self._epsilon)
        return norm_data

    def __repr__(self):
        s = '{name}({content}'
        in_channels = self.gamma.shape[0]
        s += ', in_channels={0}'.format(in_channels)
        s += ')'
        return s.format(name=self.__class__.__name__,
                        content=', '.join(['='.join([k, v.__repr__()])
                                           for k, v in self._kwargs.items()]))
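# Illustrative sketch (not part of the upstream module): recomputes layer normalization
# by hand along the last axis, out = (x - mean) / sqrt(var + eps) * gamma + beta, so the
# result can be compared with the layer's output (gamma is all ones and beta all zeros
# right after initialization). `_example_layernorm_by_hand` is a hypothetical helper and
# assumes `mxnet` is installed; the epsilon placement here follows the underlying operator.
def _example_layernorm_by_hand():
    import mxnet as mx
    from mxnet.gluon import nn
    x = mx.nd.array([[1, 2, 3, 4, 5], [1, 1, 2, 2, 2]])
    layer = nn.LayerNorm()
    layer.initialize()
    mean = x.mean(axis=-1, keepdims=True)
    var = ((x - mean) ** 2).mean(axis=-1, keepdims=True)
    by_hand = (x - mean) / (var + 1e-5).sqrt()
    return layer(x), by_hand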
class Lambda(Block):
    r"""Wraps an operator or an expression as a Block object.

    Parameters
    ----------
    function : str or function
        Function used in lambda must be one of the following:
        1) The name of an operator that is available in ndarray. For example::

            block = Lambda('tanh')

        2) A function that conforms to ``def function(*args)``. For example::

            block = Lambda(lambda x: nd.LeakyReLU(x, slope=0.1))

    Inputs:
        - ** *args **: one or more input data. Their shapes depend on the function.

    Output:
        - ** *outputs **: one or more output data. Their shapes depend on the function.
    """
    def __init__(self, function, prefix=None):
        super(Lambda, self).__init__(prefix=prefix)
        if isinstance(function, str):
            assert hasattr(nd, function), \
                "Function name %s is not found in ndarray." % function
            self._func_impl = getattr(nd, function)
        elif callable(function):
            self._func_impl = function
        else:
            raise ValueError(
                "Unrecognized function in lambda: {} of type {}".format(
                    function, type(function)))

    def forward(self, *args):
        return self._func_impl(*args)

    def __repr__(self):
        return '{name}({function})'.format(name=self.__class__.__name__,
                                           function=self._func_impl.__name__)


class HybridLambda(HybridBlock):
    r"""Wraps an operator or an expression as a HybridBlock object.

    Parameters
    ----------
    function : str or function
        Function used in lambda must be one of the following:
        1) The name of an operator that is available in both symbol and ndarray.
           For example::

            block = HybridLambda('tanh')

        2) A function that conforms to ``def function(F, data, *args)``. For example::

            block = HybridLambda(lambda F, x: F.LeakyReLU(x, slope=0.1))

    Inputs:
        - ** *args **: one or more input data. First argument must be symbol or
          ndarray. Their shapes depend on the function.

    Output:
        - ** *outputs **: one or more output data. Their shapes depend on the function.
    """
    def __init__(self, function, prefix=None):
        super(HybridLambda, self).__init__(prefix=prefix)
        if isinstance(function, str):
            assert hasattr(nd, function) and hasattr(sym, function), \
                "Function name %s is not found in symbol/ndarray." % function
            func_dict = {sym: getattr(sym, function), nd: getattr(nd, function)}
            self._func = lambda F, *args: func_dict[F](*args)
            self._func_name = function
        elif callable(function):
            self._func = function
            self._func_name = function.__name__
        else:
            raise ValueError(
                "Unrecognized function in lambda: {} of type {}".format(
                    function, type(function)))

    def hybrid_forward(self, F, x, *args):
        return self._func(F, x, *args)

    def __repr__(self):
        return '{name}({function})'.format(name=self.__class__.__name__,
                                           function=self._func_name)
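# Illustrative sketch (not part of the upstream module): wrapping an operator name and a
# plain function with Lambda / HybridLambda, as described in the docstrings above.
# `_example_lambda_blocks` is a hypothetical helper and assumes `mxnet` is installed.
def _example_lambda_blocks():
    import mxnet as mx
    from mxnet.gluon import nn
    tanh_block = nn.Lambda('tanh')                                   # looked up in mxnet.nd
    leaky = nn.HybridLambda(lambda F, x: F.LeakyReLU(x, slope=0.1))  # works for nd and sym
    x = mx.nd.array([[-1.0, 0.0, 1.0]])
    return tanh_block(x), leaky(x)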