"""Custom neural network layers in model_zoo."""
__all__ = ['Concurrent', 'HybridConcurrent', 'Identity', 'SparseEmbedding',
           'SyncBatchNorm']

import warnings
from .... import nd, test_utils
from ...block import HybridBlock, Block
from ...nn import Sequential, HybridSequential, BatchNorm


class Concurrent(Sequential):
    """Lays `Block`s concurrently.

    This block feeds its input to all children blocks and produces the output
    by concatenating all the children blocks' outputs on the specified axis.

    Example::

        net = Concurrent()
        # use net's name_scope to give children blocks appropriate names.
        with net.name_scope():
            net.add(nn.Dense(10, activation='relu'))
            net.add(nn.Dense(20))
            net.add(Identity())

    Parameters
    ----------
    axis : int, default -1
        The axis on which to concatenate the outputs.
    """
    def __init__(self, axis=-1, prefix=None, params=None):
        super(Concurrent, self).__init__(prefix=prefix, params=params)
        self.axis = axis

    def forward(self, x):
        out = []
        for block in self._children.values():
            out.append(block(x))
        out = nd.concat(*out, dim=self.axis)
        return out
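
# ---------------------------------------------------------------------------
# Illustrative sketch (added for documentation, not part of the upstream
# module): it shows that `Concurrent` feeds one input to every child block and
# concatenates their outputs. The layer sizes, the input shape and the helper
# name `_concurrent_example` are assumptions chosen only for this demo.
# ---------------------------------------------------------------------------
def _concurrent_example():
    from mxnet.gluon import nn
    net = Concurrent(axis=1)
    with net.name_scope():
        net.add(nn.Dense(10, activation='relu'))
        net.add(nn.Dense(20))
    net.initialize()
    x = nd.random.uniform(shape=(2, 32))
    out = net(x)          # children produce (2, 10) and (2, 20)
    return out.shape      # (2, 30) after concatenation on axis 1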


class HybridConcurrent(HybridSequential):
    """Lays `HybridBlock`s concurrently.

    This block feeds its input to all children blocks and produces the output
    by concatenating all the children blocks' outputs on the specified axis.

    Example::

        net = HybridConcurrent()
        # use net's name_scope to give children blocks appropriate names.
        with net.name_scope():
            net.add(nn.Dense(10, activation='relu'))
            net.add(nn.Dense(20))
            net.add(Identity())

    Parameters
    ----------
    axis : int, default -1
        The axis on which to concatenate the outputs.
    """
    def __init__(self, axis=-1, prefix=None, params=None):
        super(HybridConcurrent, self).__init__(prefix=prefix, params=params)
        self.axis = axis

    def hybrid_forward(self, F, x):
        out = []
        for block in self._children.values():
            out.append(block(x))
        out = F.concat(*out, dim=self.axis)
        return out


class Identity(HybridBlock):
    """Block that passes through the input directly.

    This block can be used in conjunction with the HybridConcurrent block
    for residual connections.

    Example::

        net = HybridConcurrent()
        # use net's name_scope to give child Blocks appropriate names.
        with net.name_scope():
            net.add(nn.Dense(10, activation='relu'))
            net.add(nn.Dense(20))
            net.add(Identity())
    """
    def __init__(self, prefix=None, params=None):
        super(Identity, self).__init__(prefix=prefix, params=params)

    def hybrid_forward(self, F, x):
        return x


class SparseEmbedding(Block):
    """Turns non-negative integers (indexes/tokens) into dense vectors
    of fixed size, e.g. [4, 20] -> [[0.25, 0.1], [0.6, -0.2]].

    This sparse block is designed for distributed training with an extremely
    large input dimension. Both the weight and the gradient w.r.t. the weight
    are `RowSparseNDArray`.

    Note: if `sparse_grad` is set to True, the gradient w.r.t. the weight will
    be sparse. Only a subset of optimizers support sparse gradients, including
    SGD, AdaGrad and Adam. By default, lazy updates are turned on, which may
    perform differently from standard updates.
    For more details, please check the Optimization API at:
    https://mxnet.incubator.apache.org/api/python/optimization/optimization.html

    Parameters
    ----------
    input_dim : int
        Size of the vocabulary, i.e. maximum integer index + 1.
    output_dim : int
        Dimension of the dense embedding.
    dtype : str or np.dtype, default 'float32'
        Data type of output embeddings.
    weight_initializer : Initializer
        Initializer for the `embeddings` matrix.

    Inputs:
        - **data**: (N-1)-D tensor with shape: `(x1, x2, ..., xN-1)`.

    Output:
        - **out**: N-D tensor with shape: `(x1, x2, ..., xN-1, output_dim)`.
    """
    def __init__(self, input_dim, output_dim, dtype='float32',
                 weight_initializer=None, **kwargs):
        super(SparseEmbedding, self).__init__(**kwargs)
        self._kwargs = {'input_dim': input_dim, 'output_dim': output_dim,
                        'dtype': dtype, 'sparse_grad': True}
        self.weight = self.params.get('weight', shape=(input_dim, output_dim),
                                      init=weight_initializer, dtype=dtype,
                                      grad_stype='row_sparse', stype='row_sparse')

    def forward(self, x):
        weight = self.weight.row_sparse_data(x)
        return nd.Embedding(x, weight, name='fwd', **self._kwargs)

    def __repr__(self):
        s = '{block_name}({input_dim} -> {output_dim}, {dtype})'
        return s.format(block_name=self.__class__.__name__, **self._kwargs)
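
# ---------------------------------------------------------------------------
# Illustrative sketch (added for documentation, not part of the upstream
# module): it shows the typical SparseEmbedding workflow, where a Trainer with
# a sparse-capable optimizer is created before the first forward pass so the
# row-sparse weight rows can be pulled for the indices in the batch. The
# vocabulary size, embedding size and token indices below are assumptions.
# ---------------------------------------------------------------------------
def _sparse_embedding_example():
    from mxnet import autograd, gluon
    layer = SparseEmbedding(input_dim=1000, output_dim=16)
    layer.initialize()
    # SGD supports sparse gradients; the Trainer is needed for row_sparse_data.
    trainer = gluon.Trainer(layer.collect_params(), 'sgd')
    tokens = nd.array([4, 20, 7])
    with autograd.record():
        emb = layer(tokens)          # shape: (3, 16)
        emb.sum().backward()
    trainer.step(batch_size=1)
    # Only the rows indexed by `tokens` carry gradient; the gradient array
    # is stored in row-sparse format.
    return emb.shape, layer.weight.grad().stype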


class SyncBatchNorm(BatchNorm):
    """Cross-GPU Synchronized Batch normalization (SyncBN)

    Standard BN [1]_ implementations only normalize the data within each
    device. SyncBN normalizes the input within the whole mini-batch.
    We follow the sync-once implementation described in the paper [2]_.

    Parameters
    ----------
    in_channels : int, default 0
        Number of channels (feature maps) in input data. If not specified,
        initialization will be deferred to the first time `forward` is called
        and `in_channels` will be inferred from the shape of input data.
    num_devices : int, default number of visible GPUs
    momentum : float, default 0.9
        Momentum for the moving average.
    epsilon : float, default 1e-5
        Small float added to variance to avoid dividing by zero.
    center : bool, default True
        If True, add offset of `beta` to normalized tensor.
        If False, `beta` is ignored.
    scale : bool, default True
        If True, multiply by `gamma`. If False, `gamma` is not used.
        When the next layer is linear (also e.g. `nn.relu`), this can be
        disabled since the scaling will be done by the next layer.
    use_global_stats : bool, default False
        If True, use global moving statistics instead of local batch-norm,
        which forces batch-norm into a scale-shift operator.
        If False, use local batch-norm.
    beta_initializer : str or `Initializer`, default 'zeros'
        Initializer for the beta weight.
    gamma_initializer : str or `Initializer`, default 'ones'
        Initializer for the gamma weight.
    moving_mean_initializer : str or `Initializer`, default 'zeros'
        Initializer for the moving mean.
    moving_variance_initializer : str or `Initializer`, default 'ones'
        Initializer for the moving variance.

    Inputs:
        - **data**: input tensor with arbitrary shape.

    Outputs:
        - **out**: output tensor with the same shape as `data`.

    Reference:
        .. [1] Ioffe, Sergey, and Christian Szegedy. "Batch normalization:
            Accelerating deep network training by reducing internal covariate
            shift." *ICML 2015*
        .. [2] Hang Zhang, Kristin Dana, Jianping Shi, Zhongyue Zhang,
            Xiaogang Wang, Ambrish Tyagi, and Amit Agrawal. "Context Encoding
            for Semantic Segmentation." *CVPR 2018*
    """
    def __init__(self, in_channels=0, num_devices=None, momentum=0.9,
                 epsilon=1e-5, center=True, scale=True, use_global_stats=False,
                 beta_initializer='zeros', gamma_initializer='ones',
                 running_mean_initializer='zeros',
                 running_variance_initializer='ones', **kwargs):
        super(SyncBatchNorm, self).__init__(
            1, momentum, epsilon, center, scale, use_global_stats,
            beta_initializer, gamma_initializer, running_mean_initializer,
            running_variance_initializer, in_channels, **kwargs)
        num_devices = self._get_num_devices() if num_devices is None else num_devices
        self._kwargs = {'eps': epsilon, 'momentum': momentum,
                        'fix_gamma': not scale,
                        'use_global_stats': use_global_stats,
                        'ndev': num_devices, 'key': self.prefix}

    def _get_num_devices(self):
        warnings.warn('Caution using SyncBatchNorm: '
                      'if not using all the GPUs, please manually set num_devices',
                      UserWarning)
        num_devices = len(test_utils.list_gpus())
        num_devices = num_devices if num_devices > 0 else 1
        return num_devices

    def hybrid_forward(self, F, x, gamma, beta, running_mean, running_var):
        return F.contrib.SyncBatchNorm(x, gamma, beta, running_mean, running_var,
                                       name='fwd', **self._kwargs)
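
# ---------------------------------------------------------------------------
# Illustrative sketch (added for documentation, not part of the upstream
# module): it shows how SyncBatchNorm might be wired across all visible GPUs
# by passing `num_devices` and splitting one batch over the contexts. It only
# attempts a forward pass when GPUs are detected, since the block relies on
# the contrib SyncBatchNorm operator; shapes and the helper name are
# assumptions chosen for the demo.
# ---------------------------------------------------------------------------
def _sync_batchnorm_example():
    import mxnet as mx
    from mxnet import gluon
    gpus = test_utils.list_gpus()
    if not gpus:
        return None
    ctxs = [mx.gpu(i) for i in gpus]
    net = HybridSequential()
    with net.name_scope():
        net.add(SyncBatchNorm(num_devices=len(ctxs)))
    net.initialize(ctx=ctxs)
    # Split one NCHW batch across the devices; during training the batch
    # statistics are synchronized across all of these contexts.
    data = nd.random.uniform(shape=(len(ctxs) * 2, 3, 8, 8))
    parts = gluon.utils.split_and_load(data, ctxs)
    outs = [net(x) for x in parts]   # inference-mode forward on each device
    nd.waitall()
    return [o.shape for o in outs]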