# coding: utf-8
"""Convolutional neural network layers."""
__all__ = ['Conv1D', 'Conv2D', 'Conv3D',
           'Conv1DTranspose', 'Conv2DTranspose', 'Conv3DTranspose',
           'MaxPool1D', 'MaxPool2D', 'MaxPool3D',
           'AvgPool1D', 'AvgPool2D', 'AvgPool3D',
           'GlobalMaxPool1D', 'GlobalMaxPool2D', 'GlobalMaxPool3D',
           'GlobalAvgPool1D', 'GlobalAvgPool2D', 'GlobalAvgPool3D',
           'ReflectionPad2D']

from ..block import HybridBlock
from ... import symbol
from ...base import numeric_types
from .activations import Activation


def _infer_weight_shape(op_name, data_shape, kwargs):
    # Partially infer the weight/bias shapes of the underlying symbolic
    # operator from a (possibly incomplete) input data shape.
    op = getattr(symbol, op_name)
    sym = op(symbol.var('data', shape=data_shape), **kwargs)
    return sym.infer_shape_partial()[0]


class _Conv(HybridBlock):
    """Abstract nD convolution layer (private, used as implementation base).

    This layer creates a convolution kernel that is convolved with the
    layer input to produce a tensor of outputs. If `use_bias` is `True`,
    a bias vector is created and added to the outputs. Finally, if
    `activation` is not `None`, it is applied to the outputs as well.

    Parameters
    ----------
    channels : int
        The dimensionality of the output space, i.e. the number of output
        channels in the convolution.
    kernel_size : int or tuple/list of n ints
        Specifies the dimensions of the convolution window.
    strides : int or tuple/list of n ints
        Specifies the strides of the convolution.
    padding : int or tuple/list of n ints
        If padding is non-zero, the input is implicitly zero-padded on
        both sides for `padding` number of points.
    dilation : int or tuple/list of n ints
        Specifies the dilation rate to use for dilated convolution.
    groups : int
        Controls the connections between inputs and outputs. At groups=1,
        all inputs are convolved to all outputs. At groups=2, the operation
        becomes equivalent to having two convolution layers side by side,
        each seeing half the input channels and producing half the output
        channels, with both subsequently concatenated.
    layout : str
        Dimension ordering of data and weight. Can be 'NCW', 'NWC', 'NCHW',
        'NHWC', 'NCDHW', 'NDHWC', etc. 'N', 'C', 'H', 'W', 'D' stand for
        batch, channel, height, width and depth dimensions respectively.
        Convolution is performed over the 'D', 'H' and 'W' dimensions.
    in_channels : int, default 0
        The number of input channels to this layer. If not specified,
        initialization is deferred to the first time `forward` is called
        and `in_channels` is inferred from the shape of the input data.
    activation : str
        Activation function to use. See :func:`~mxnet.ndarray.Activation`.
        If you don't specify anything, no activation is applied
        (i.e. "linear" activation: `a(x) = x`).
    use_bias : bool
        Whether the layer uses a bias vector.
    weight_initializer : str or `Initializer`
        Initializer for the `weight` weights matrix.
    bias_initializer : str or `Initializer`
        Initializer for the bias vector.
    """
    def __init__(self, channels, kernel_size, strides, padding, dilation,
                 groups, layout, in_channels=0, activation=None, use_bias=True,
                 weight_initializer=None, bias_initializer='zeros',
                 op_name='Convolution', adj=None, prefix=None, params=None):
        super(_Conv, self).__init__(prefix=prefix, params=params)
        with self.name_scope():
            self._channels = channels
            self._in_channels = in_channels
            # Broadcast scalar hyper-parameters to one entry per spatial dim.
            if isinstance(strides, numeric_types):
                strides = (strides,) * len(kernel_size)
            if isinstance(padding, numeric_types):
                padding = (padding,) * len(kernel_size)
            if isinstance(dilation, numeric_types):
                dilation = (dilation,) * len(kernel_size)
            self._op_name = op_name
            self._kwargs = {
                'kernel': kernel_size, 'stride': strides, 'dilate': dilation,
                'pad': padding, 'num_filter': channels, 'num_group': groups,
                'no_bias': not use_bias, 'layout': layout}
            if adj is not None:
                self._kwargs['adj'] = adj

            # Infer the weight/bias shapes from a dummy data shape; unknown
            # dimensions (in_channels == 0) stay deferred until first forward.
            dshape = [0] * (len(kernel_size) + 2)
            dshape[layout.find('N')] = 1
            dshape[layout.find('C')] = in_channels
            wshapes = _infer_weight_shape(op_name, dshape, self._kwargs)
            self.weight = self.params.get('weight', shape=wshapes[1],
                                          init=weight_initializer,
                                          allow_deferred_init=True)
            if use_bias:
                self.bias = self.params.get('bias', shape=wshapes[2],
                                            init=bias_initializer,
                                            allow_deferred_init=True)
            else:
                self.bias = None

            if activation is not None:
                self.act = Activation(activation, prefix=activation + '_')
            else:
                self.act = None

    def hybrid_forward(self, F, x, weight, bias=None):
        if bias is None:
            act = getattr(F, self._op_name)(x, weight, **self._kwargs)
        else:
            act = getattr(F, self._op_name)(x, weight, bias, **self._kwargs)
        if self.act is not None:
            act = self.act(act)
        return act

    def _alias(self):
        return 'conv'

    def __repr__(self):
        s = '{name}({mapping}, kernel_size={kernel}, stride={stride}'
        len_kernel_size = len(self._kwargs['kernel'])
        if self._kwargs['pad'] != (0,) * len_kernel_size:
            s += ', padding={pad}'
        if self._kwargs['dilate'] != (1,) * len_kernel_size:
            s += ', dilation={dilate}'
        if hasattr(self, 'out_pad') and self.out_pad != (0,) * len_kernel_size:
            s += ', output_padding={out_pad}'.format(out_pad=self.out_pad)
        if self._kwargs['num_group'] != 1:
            s += ', groups={num_group}'
        if self.bias is None:
            s += ', bias=False'
        if self.act:
            s += ', {}'.format(self.act)
        s += ')'
        shape = self.weight.shape
        return s.format(name=self.__class__.__name__,
                        mapping='{0} -> {1}'.format(shape[1] if shape[1] else None,
                                                    shape[0]),
                        **self._kwargs)
class Conv1D(_Conv):
    r"""1D convolution layer (e.g. temporal convolution).

    This layer creates a convolution kernel that is convolved with the
    layer input over a single spatial (or temporal) dimension to produce
    a tensor of outputs. If `use_bias` is True, a bias vector is created
    and added to the outputs. Finally, if `activation` is not `None`, it
    is applied to the outputs as well.

    If `in_channels` is not specified, `Parameter` initialization will be
    deferred to the first time `forward` is called and `in_channels` will
    be inferred from the shape of input data.

    Parameters are as documented for `_Conv`, with `kernel_size`,
    `strides`, `padding` and `dilation` given as an int or a tuple/list
    of 1 int. Only the 'NCW' layout is supported for now; convolution is
    applied on the 'W' (time) dimension.

    Inputs:
        - **data**: 3D input tensor with shape
          `(batch_size, in_channels, width)` when `layout` is `NCW`.
          For other layouts shape is permuted accordingly.

    Outputs:
        - **out**: 3D output tensor with shape
          `(batch_size, channels, out_width)` when `layout` is `NCW`.
          out_width is calculated as::

              out_width = floor((width+2*padding-dilation*(kernel_size-1)-1)/stride)+1
    """
    def __init__(self, channels, kernel_size, strides=1, padding=0, dilation=1,
                 groups=1, layout='NCW', activation=None, use_bias=True,
                 weight_initializer=None, bias_initializer='zeros',
                 in_channels=0, **kwargs):
        assert layout == 'NCW', "Only supports 'NCW' layout for now"
        if isinstance(kernel_size, numeric_types):
            kernel_size = (kernel_size,)
        assert len(kernel_size) == 1, "kernel_size must be a number or a list of 1 ints"
        super(Conv1D, self).__init__(
            channels, kernel_size, strides, padding, dilation, groups, layout,
            in_channels, activation, use_bias, weight_initializer,
            bias_initializer, **kwargs)
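The `out_width` formula in the Conv1D docstring can be exercised directly; the following is an illustrative pure-Python sketch (the helper name `conv1d_out_width` is ours, not part of the Gluon API):

```python
import math


def conv1d_out_width(width, kernel_size, stride=1, padding=0, dilation=1):
    # out_width = floor((width + 2*padding - dilation*(kernel_size-1) - 1)/stride) + 1
    return int(math.floor(
        (width + 2 * padding - dilation * (kernel_size - 1) - 1) / float(stride))) + 1


# A stride-1, kernel-3 convolution with padding 1 preserves the width:
print(conv1d_out_width(224, kernel_size=3, stride=1, padding=1))  # -> 224
# Stride 2 without padding roughly halves it:
print(conv1d_out_width(10, kernel_size=3, stride=2))  # -> 4
```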
class Conv2D(_Conv):
    r"""2D convolution layer (e.g. spatial convolution over images).

    This layer creates a convolution kernel that is convolved with the
    layer input to produce a tensor of outputs. If `use_bias` is True, a
    bias vector is created and added to the outputs. Finally, if
    `activation` is not `None`, it is applied to the outputs as well.

    If `in_channels` is not specified, `Parameter` initialization will be
    deferred to the first time `forward` is called and `in_channels` will
    be inferred from the shape of input data.

    Parameters are as documented for `_Conv`, with `kernel_size`,
    `strides`, `padding` and `dilation` given as an int or a tuple/list
    of 2 ints. Only 'NCHW' and 'NHWC' layouts are supported for now;
    convolution is applied on the 'H' and 'W' dimensions.

    Inputs:
        - **data**: 4D input tensor with shape
          `(batch_size, in_channels, height, width)` when `layout` is
          `NCHW`. For other layouts shape is permuted accordingly.

    Outputs:
        - **out**: 4D output tensor with shape
          `(batch_size, channels, out_height, out_width)` when `layout`
          is `NCHW`. out_height and out_width are calculated as::

              out_height = floor((height+2*padding[0]-dilation[0]*(kernel_size[0]-1)-1)/stride[0])+1
              out_width = floor((width+2*padding[1]-dilation[1]*(kernel_size[1]-1)-1)/stride[1])+1
    """
    def __init__(self, channels, kernel_size, strides=(1, 1), padding=(0, 0),
                 dilation=(1, 1), groups=1, layout='NCHW', activation=None,
                 use_bias=True, weight_initializer=None,
                 bias_initializer='zeros', in_channels=0, **kwargs):
        assert layout in ('NCHW', 'NHWC'), \
            "Only supports 'NCHW' and 'NHWC' layout for now"
        if isinstance(kernel_size, numeric_types):
            kernel_size = (kernel_size,) * 2
        assert len(kernel_size) == 2, "kernel_size must be a number or a list of 2 ints"
        super(Conv2D, self).__init__(
            channels, kernel_size, strides, padding, dilation, groups, layout,
            in_channels, activation, use_bias, weight_initializer,
            bias_initializer, **kwargs)
class Conv3D(_Conv):
    r"""3D convolution layer (e.g. spatial convolution over volumes).

    This layer creates a convolution kernel that is convolved with the
    layer input to produce a tensor of outputs. If `use_bias` is `True`,
    a bias vector is created and added to the outputs. Finally, if
    `activation` is not `None`, it is applied to the outputs as well.

    If `in_channels` is not specified, `Parameter` initialization will be
    deferred to the first time `forward` is called and `in_channels` will
    be inferred from the shape of input data.

    Parameters are as documented for `_Conv`, with `kernel_size`,
    `strides`, `padding` and `dilation` given as an int or a tuple/list
    of 3 ints. Only 'NCDHW' and 'NDHWC' layouts are supported for now;
    convolution is applied on the 'D', 'H' and 'W' dimensions.

    Inputs:
        - **data**: 5D input tensor with shape
          `(batch_size, in_channels, depth, height, width)` when `layout`
          is `NCDHW`. For other layouts shape is permuted accordingly.

    Outputs:
        - **out**: 5D output tensor with shape
          `(batch_size, channels, out_depth, out_height, out_width)` when
          `layout` is `NCDHW`. out_depth, out_height and out_width are
          calculated as::

              out_depth = floor((depth+2*padding[0]-dilation[0]*(kernel_size[0]-1)-1)/stride[0])+1
              out_height = floor((height+2*padding[1]-dilation[1]*(kernel_size[1]-1)-1)/stride[1])+1
              out_width = floor((width+2*padding[2]-dilation[2]*(kernel_size[2]-1)-1)/stride[2])+1
    """
    def __init__(self, channels, kernel_size, strides=(1, 1, 1),
                 padding=(0, 0, 0), dilation=(1, 1, 1), groups=1,
                 layout='NCDHW', activation=None, use_bias=True,
                 weight_initializer=None, bias_initializer='zeros',
                 in_channels=0, **kwargs):
        assert layout in ('NCDHW', 'NDHWC'), \
            "Only supports 'NCDHW' and 'NDHWC' layout for now"
        if isinstance(kernel_size, numeric_types):
            kernel_size = (kernel_size,) * 3
        assert len(kernel_size) == 3, "kernel_size must be a number or a list of 3 ints"
        super(Conv3D, self).__init__(
            channels, kernel_size, strides, padding, dilation, groups, layout,
            in_channels, activation, use_bias, weight_initializer,
            bias_initializer, **kwargs)
class Conv1DTranspose(_Conv):
    r"""Transposed 1D convolution layer (sometimes called Deconvolution).

    The need for transposed convolutions generally arises from the desire
    to use a transformation going in the opposite direction of a normal
    convolution, i.e., from something that has the shape of the output of
    some convolution to something that has the shape of its input while
    maintaining a connectivity pattern that is compatible with said
    convolution.

    If `in_channels` is not specified, `Parameter` initialization will be
    deferred to the first time `forward` is called and `in_channels` will
    be inferred from the shape of input data.

    Parameters are as documented for `_Conv`, given as an int or a
    tuple/list of 1 int, plus `output_padding` (int or tuple/list of
    1 int), which controls the amount of implicit zero-padding on both
    sides of the output. Only the 'NCW' layout is supported for now.

    Inputs:
        - **data**: 3D input tensor with shape
          `(batch_size, in_channels, width)` when `layout` is `NCW`.
          For other layouts shape is permuted accordingly.

    Outputs:
        - **out**: 3D output tensor with shape
          `(batch_size, channels, out_width)` when `layout` is `NCW`.
          out_width is calculated as::

              out_width = (width-1)*strides-2*padding+kernel_size+output_padding
    """
    def __init__(self, channels, kernel_size, strides=1, padding=0,
                 output_padding=0, dilation=1, groups=1, layout='NCW',
                 activation=None, use_bias=True, weight_initializer=None,
                 bias_initializer='zeros', in_channels=0, **kwargs):
        assert layout == 'NCW', "Only supports 'NCW' layout for now"
        if isinstance(kernel_size, numeric_types):
            kernel_size = (kernel_size,)
        if isinstance(output_padding, numeric_types):
            output_padding = (output_padding,)
        assert len(kernel_size) == 1, "kernel_size must be a number or a list of 1 ints"
        assert len(output_padding) == 1, "output_padding must be a number or a list of 1 ints"
        super(Conv1DTranspose, self).__init__(
            channels, kernel_size, strides, padding, dilation, groups, layout,
            in_channels, activation, use_bias, weight_initializer,
            bias_initializer, op_name='Deconvolution', adj=output_padding,
            **kwargs)
        self.outpad = output_padding
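The transposed-convolution output-width formula can likewise be checked numerically; an illustrative sketch (the helper name `conv1d_transpose_out_width` is ours, not part of the Gluon API):

```python
def conv1d_transpose_out_width(width, kernel_size, stride=1, padding=0,
                               output_padding=0):
    # out_width = (width-1)*strides - 2*padding + kernel_size + output_padding
    return (width - 1) * stride - 2 * padding + kernel_size + output_padding


# With stride 2, kernel 4 and padding 1 a transposed conv exactly doubles
# the width, which is why this configuration is popular for upsampling:
print(conv1d_transpose_out_width(7, kernel_size=4, stride=2, padding=1))  # -> 14
```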
class Conv2DTranspose(_Conv):
    r"""Transposed 2D convolution layer (sometimes called Deconvolution).

    The need for transposed convolutions generally arises from the desire
    to use a transformation going in the opposite direction of a normal
    convolution, i.e., from something that has the shape of the output of
    some convolution to something that has the shape of its input while
    maintaining a connectivity pattern that is compatible with said
    convolution.

    If `in_channels` is not specified, `Parameter` initialization will be
    deferred to the first time `forward` is called and `in_channels` will
    be inferred from the shape of input data.

    Parameters are as documented for `_Conv`, given as an int or a
    tuple/list of 2 ints, plus `output_padding` (int or tuple/list of
    2 ints), which controls the amount of implicit zero-padding on both
    sides of the output. Only 'NCHW' and 'NHWC' layouts are supported
    for now.

    Inputs:
        - **data**: 4D input tensor with shape
          `(batch_size, in_channels, height, width)` when `layout` is
          `NCHW`. For other layouts shape is permuted accordingly.

    Outputs:
        - **out**: 4D output tensor with shape
          `(batch_size, channels, out_height, out_width)` when `layout`
          is `NCHW`. out_height and out_width are calculated as::

              out_height = (height-1)*strides[0]-2*padding[0]+kernel_size[0]+output_padding[0]
              out_width = (width-1)*strides[1]-2*padding[1]+kernel_size[1]+output_padding[1]
    """
    def __init__(self, channels, kernel_size, strides=(1, 1), padding=(0, 0),
                 output_padding=(0, 0), dilation=(1, 1), groups=1,
                 layout='NCHW', activation=None, use_bias=True,
                 weight_initializer=None, bias_initializer='zeros',
                 in_channels=0, **kwargs):
        assert layout in ('NCHW', 'NHWC'), \
            "Only supports 'NCHW' and 'NHWC' layout for now"
        if isinstance(kernel_size, numeric_types):
            kernel_size = (kernel_size,) * 2
        if isinstance(output_padding, numeric_types):
            output_padding = (output_padding,) * 2
        assert len(kernel_size) == 2, "kernel_size must be a number or a list of 2 ints"
        assert len(output_padding) == 2, "output_padding must be a number or a list of 2 ints"
        super(Conv2DTranspose, self).__init__(
            channels, kernel_size, strides, padding, dilation, groups, layout,
            in_channels, activation, use_bias, weight_initializer,
            bias_initializer, op_name='Deconvolution', adj=output_padding,
            **kwargs)
        self.outpad = output_padding
class Conv3DTranspose(_Conv):
    r"""Transposed 3D convolution layer (sometimes called Deconvolution).

    The need for transposed convolutions generally arises from the desire
    to use a transformation going in the opposite direction of a normal
    convolution, i.e., from something that has the shape of the output of
    some convolution to something that has the shape of its input while
    maintaining a connectivity pattern that is compatible with said
    convolution.

    If `in_channels` is not specified, `Parameter` initialization will be
    deferred to the first time `forward` is called and `in_channels` will
    be inferred from the shape of input data.

    Parameters are as documented for `_Conv`, given as an int or a
    tuple/list of 3 ints, plus `output_padding` (int or tuple/list of
    3 ints), which controls the amount of implicit zero-padding on both
    sides of the output. Only 'NCDHW' and 'NDHWC' layouts are supported
    for now.

    Inputs:
        - **data**: 5D input tensor with shape
          `(batch_size, in_channels, depth, height, width)` when `layout`
          is `NCDHW`. For other layouts shape is permuted accordingly.

    Outputs:
        - **out**: 5D output tensor with shape
          `(batch_size, channels, out_depth, out_height, out_width)` when
          `layout` is `NCDHW`. out_depth, out_height and out_width are
          calculated as::

              out_depth = (depth-1)*strides[0]-2*padding[0]+kernel_size[0]+output_padding[0]
              out_height = (height-1)*strides[1]-2*padding[1]+kernel_size[1]+output_padding[1]
              out_width = (width-1)*strides[2]-2*padding[2]+kernel_size[2]+output_padding[2]
    """
    def __init__(self, channels, kernel_size, strides=(1, 1, 1),
                 padding=(0, 0, 0), output_padding=(0, 0, 0),
                 dilation=(1, 1, 1), groups=1, layout='NCDHW',
                 activation=None, use_bias=True, weight_initializer=None,
                 bias_initializer='zeros', in_channels=0, **kwargs):
        assert layout in ('NCDHW', 'NDHWC'), \
            "Only supports 'NCDHW' and 'NDHWC' layout for now"
        if isinstance(kernel_size, numeric_types):
            kernel_size = (kernel_size,) * 3
        if isinstance(output_padding, numeric_types):
            output_padding = (output_padding,) * 3
        assert len(kernel_size) == 3, "kernel_size must be a number or a list of 3 ints"
        assert len(output_padding) == 3, "output_padding must be a number or a list of 3 ints"
        super(Conv3DTranspose, self).__init__(
            channels, kernel_size, strides, padding, dilation, groups, layout,
            in_channels, activation, use_bias, weight_initializer,
            bias_initializer, op_name='Deconvolution', adj=output_padding,
            **kwargs)
        self.outpad = output_padding


class _Pooling(HybridBlock):
    """Abstract class for different pooling layers."""
    def __init__(self, pool_size, strides, padding, ceil_mode, global_pool,
                 pool_type, **kwargs):
        super(_Pooling, self).__init__(**kwargs)
        if strides is None:
            strides = pool_size
        if isinstance(strides, numeric_types):
            strides = (strides,) * len(pool_size)
        if isinstance(padding, numeric_types):
            padding = (padding,) * len(pool_size)
        self._kwargs = {
            'kernel': pool_size, 'stride': strides, 'pad': padding,
            'global_pool': global_pool, 'pool_type': pool_type,
            'pooling_convention': 'full' if ceil_mode else 'valid'}

    def _alias(self):
        return 'pool'

    def hybrid_forward(self, F, x):
        return F.Pooling(x, **self._kwargs)


# MaxPool1D/2D/3D, AvgPool1D/2D/3D and the GlobalMaxPool/GlobalAvgPool
# variants listed in __all__ subclass _Pooling with the appropriate
# pool_type and global_pool settings (bodies not recovered here).


class ReflectionPad2D(HybridBlock):
    r"""Pads the input tensor using the reflection of the input boundary.

    Parameters
    ----------
    padding : int
        An integer padding width applied to the 'H' and 'W' dimensions.

    Examples
    --------
    >>> m = nn.ReflectionPad2D(3)
    >>> input = mx.nd.random.normal(shape=(16, 3, 224, 224))
    >>> output = m(input)
    """
    def __init__(self, padding=0, **kwargs):
        super(ReflectionPad2D, self).__init__(**kwargs)
        if isinstance(padding, numeric_types):
            padding = (0, 0, 0, 0, padding, padding, padding, padding)
        assert len(padding) == 8
        self._padding = padding

    def hybrid_forward(self, F, x):
        return F.pad(x, mode='reflect', pad_width=self._padding)
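Reflection padding mirrors the input around its boundary without repeating the edge value. A minimal pure-Python sketch of the semantics along one dimension (the helper name `reflect_pad_1d` is ours, not part of the Gluon API; it assumes `pad <= len(row) - 1`):

```python
def reflect_pad_1d(row, pad):
    # 'reflect' mode mirrors the values next to the edge, excluding the
    # edge itself: [1, 2, 3] padded by 2 -> [3, 2, 1, 2, 3, 2, 1]
    left = row[1:pad + 1][::-1]
    right = row[-pad - 1:-1][::-1]
    return left + row + right


print(reflect_pad_1d([1, 2, 3], 2))  # -> [3, 2, 1, 2, 3, 2, 1]
```

ReflectionPad2D applies this mirroring independently along the 'H' and 'W' axes of an NCHW tensor.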