σ šΔοYc@s dZddlmZddlmZddlmZddlmZd„Z d efd „ƒYZ d e fd „ƒYZ d e fd„ƒYZ de fd„ƒYZ de fd„ƒYZde fd„ƒYZde fd„ƒYZdefd„ƒYZdefd„ƒYZdefd„ƒYZdefd„ƒYZdefd „ƒYZd!efd"„ƒYZd#efd$„ƒYZd%efd&„ƒYZd'efd(„ƒYZd)efd*„ƒYZd+efd,„ƒYZd-efd.„ƒYZd/efd0„ƒYZd1S(2s$Convolutional neural network layers.i(t HybridBlocki(tsymbol(t numeric_typesi(t ActivationcCs;tt|ƒ}|tjdd|ƒ|}|jƒdS(Ntdatatshapei(tgetattrRtvartinfer_shape_partial(top_namet data_shapetkwargstoptsym((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyt_infer_weight_shapest_Convc BsPeZdZddeddddddd„ Zdd„Zd„Zd„ZRS( sP Abstract nD convolution layer (private, used as implementation base). This layer creates a convolution kernel that is convolved with the layer input to produce a tensor of outputs. If `use_bias` is `True`, a bias vector is created and added to the outputs. Finally, if `activation` is not `None`, it is applied to the outputs as well. Parameters ---------- channels : int The dimensionality of the output space i.e. the number of output channels in the convolution. kernel_size : int or tuple/list of n ints Specifies the dimensions of the convolution window. strides: int or tuple/list of n ints, Specifies the strides of the convolution. padding : int or tuple/list of n ints, If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points dilation: int or tuple/list of n ints, Specifies the dilation rate to use for dilated convolution. groups : int Controls the connections between inputs and outputs. At groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to having two convolution layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated. layout : str, Dimension ordering of data and weight. Can be 'NCW', 'NWC', 'NCHW', 'NHWC', 'NCDHW', 'NDHWC', etc. 'N', 'C', 'H', 'W', 'D' stands for batch, channel, height, width and depth dimensions respectively. 
Convolution is performed over 'D', 'H', and 'W' dimensions. in_channels : int, default 0 The number of input channels to this layer. If not specified, initialization will be deferred to the first time `forward` is called and `in_channels` will be inferred from the shape of input data. activation : str Activation function to use. See :func:`~mxnet.ndarray.Activation`. If you don't specify anything, no activation is applied (ie. "linear" activation: `a(x) = x`). use_bias: bool Whether the layer uses a bias vector. weight_initializer : str or `Initializer` Initializer for the `weight` weights matrix. bias_initializer: str or `Initializer` Initializer for the bias vector. itzerost Convolutionc Cs tt|ƒjd|d|ƒ|jƒΨ||_||_t|tƒrc|ft|ƒ}nt|tƒrˆ|ft|ƒ}nt|tƒr­|ft|ƒ}n| |_ i|d6|d6|d6|d6|d6|d6| d 6|d 6|_ |dk r||j d {1}(i(i(i( R*R,thasattrRGtformatR"R-t __class__t__name__R(R'(R2tstlen_kernel_size((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyt__repr__‡s$  "     N( RMt __module__t__doc__R-R0R%RDRFRP(((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyR!s0 & tConv1Dc Bs5eZdZddddddedddd„ ZRS(s¬ 1D convolution layer (e.g. temporal convolution). This layer creates a convolution kernel that is convolved with the layer input over a single spatial (or temporal) dimension to produce a tensor of outputs. If `use_bias` is True, a bias vector is created and added to the outputs. Finally, if `activation` is not `None`, it is applied to the outputs as well. If `in_channels` is not specified, `Parameter` initialization will be deferred to the first time `forward` is called and `in_channels` will be inferred from the shape of input data. Parameters ---------- channels : int The dimensionality of the output space, i.e. the number of output channels (filters) in the convolution. kernel_size :int or tuple/list of 1 int Specifies the dimensions of the convolution window. strides : int or tuple/list of 1 int, Specify the strides of the convolution. 
padding : int or a tuple/list of 1 int, If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points dilation : int or tuple/list of 1 int Specifies the dilation rate to use for dilated convolution. groups : int Controls the connections between inputs and outputs. At groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated. layout: str, default 'NCW' Dimension ordering of data and weight. Can be 'NCW', 'NWC', etc. 'N', 'C', 'W' stands for batch, channel, and width (time) dimensions respectively. Convolution is applied on the 'W' dimension. in_channels : int, default 0 The number of input channels to this layer. If not specified, initialization will be deferred to the first time `forward` is called and `in_channels` will be inferred from the shape of input data. activation : str Activation function to use. See :func:`~mxnet.ndarray.Activation`. If you don't specify anything, no activation is applied (ie. "linear" activation: `a(x) = x`). use_bias : bool Whether the layer uses a bias vector. weight_initializer : str or `Initializer` Initializer for the `weight` weights matrix. bias_initializer : str or `Initializer` Initializer for the bias vector. Input shape: This depends on the `layout` parameter. Input is 3D array of shape (batch_size, in_channels, width) if `layout` is `NCW`. Output shape: This depends on the `layout` parameter. Output is 3D array of shape (batch_size, channels, out_width) if `layout` is `NCW`. 
out_width is calculated as:: out_width = floor((width+2*padding-dilation*(kernel_size-1)-1)/stride)+1 iitNCWRc Kswt|tƒr|f}nt|ƒdks9tdƒ‚tt|ƒj|||||||| || | | |  dS(Nis0kernel_size must be a number or a list of 1 ints(R)RR*tAssertionErrorR$RSR%(R2R3R4R5R6R7R8RR:R;R<R=R9R ((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyR%ήs  N(RMRQRRR-R0R%(((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyRSœs A  tConv2Dc Bs5eZdZdddddd ed ddd„ ZRS( s5 2D convolution layer (e.g. spatial convolution over images). This layer creates a convolution kernel that is convolved with the layer input to produce a tensor of outputs. If `use_bias` is True, a bias vector is created and added to the outputs. Finally, if `activation` is not `None`, it is applied to the outputs as well. If `in_channels` is not specified, `Parameter` initialization will be deferred to the first time `forward` is called and `in_channels` will be inferred from the shape of input data. Parameters ---------- channels : int The dimensionality of the output space, i.e. the number of output channels (filters) in the convolution. kernel_size :int or tuple/list of 2 int Specifies the dimensions of the convolution window. strides : int or tuple/list of 2 int, Specify the strides of the convolution. padding : int or a tuple/list of 2 int, If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points dilation : int or tuple/list of 2 int Specifies the dilation rate to use for dilated convolution. groups : int Controls the connections between inputs and outputs. At groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated. layout : str, default 'NCHW' Dimension ordering of data and weight. Can be 'NCHW', 'NHWC', etc. 
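The `out_width` formula above can be checked with a small pure-Python sketch (plain arithmetic, no MXNet required; the helper name `conv_out_width` is made up for illustration):

```python
import math

def conv_out_width(width, kernel_size, stride=1, padding=0, dilation=1):
    """Output width of a 1D convolution, per the docstring formula:
    out_width = floor((width + 2*padding - dilation*(kernel_size-1) - 1)/stride) + 1
    """
    return math.floor((width + 2 * padding - dilation * (kernel_size - 1) - 1) / stride) + 1

# A 'same'-style case: width 32, kernel 3, padding 1 keeps the width at 32.
print(conv_out_width(32, kernel_size=3, padding=1))            # 32
# Stride 2 roughly halves the width: floor((32+2-2-1)/2)+1 = 16.
print(conv_out_width(32, kernel_size=3, padding=1, stride=2))  # 16
```

The same per-axis formula governs Conv2D and Conv3D, applied independently to each spatial dimension.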
'N', 'C', 'H', 'W' stands for batch, channel, height, and width dimensions respectively. Convolution is applied on the 'H' and 'W' dimensions. in_channels : int, default 0 The number of input channels to this layer. If not specified, initialization will be deferred to the first time `forward` is called and `in_channels` will be inferred from the shape of input data. activation : str Activation function to use. See :func:`~mxnet.ndarray.Activation`. If you don't specify anything, no activation is applied (ie. "linear" activation: `a(x) = x`). use_bias : bool Whether the layer uses a bias vector. weight_initializer : str or `Initializer` Initializer for the `weight` weights matrix. bias_initializer : str or `Initializer` Initializer for the bias vector. Input shape: This depends on the `layout` parameter. Input is 4D array of shape (batch_size, in_channels, height, width) if `layout` is `NCHW`. Output shape: This depends on the `layout` parameter. Output is 4D array of shape (batch_size, channels, out_height, out_width) if `layout` is `NCHW`. out_height and out_width are calculated as:: out_height = floor((height+2*padding[0]-dilation[0]*(kernel_size[0]-1)-1)/stride[0])+1 out_width = floor((width+2*padding[1]-dilation[1]*(kernel_size[1]-1)-1)/stride[1])+1 iitNCHWRc Ks{t|tƒr|fd}nt|ƒdks=tdƒ‚tt|ƒj|||||||| || | | |  dS(Nis0kernel_size must be a number or a list of 2 ints(R)RR*RUR$RVR%(R2R3R4R5R6R7R8RR:R;R<R=R9R ((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyR%-s (ii(ii(iiN(RMRQRRR-R0R%(((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyRVκs B  tConv3Dc Bs5eZdZdddddd ed ddd„ ZRS( sΣ 3D convolution layer (e.g. spatial convolution over volumes). This layer creates a convolution kernel that is convolved with the layer input to produce a tensor of outputs. If `use_bias` is `True`, a bias vector is created and added to the outputs. Finally, if `activation` is not `None`, it is applied to the outputs as well. 
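The indexed `out_height`/`out_width` formulas above apply the 1D rule independently per spatial axis. A minimal sketch (pure Python; the helper name `conv2d_out_shape` is illustrative, not part of the Gluon API):

```python
import math

def conv2d_out_shape(in_shape, kernel_size, strides=(1, 1), padding=(0, 0), dilation=(1, 1)):
    """(out_height, out_width) of an NCHW Conv2D, applying the docstring
    formula to the H and W axes with per-axis tuple parameters."""
    return tuple(
        math.floor((n + 2 * p - d * (k - 1) - 1) / s) + 1
        for n, k, s, p, d in zip(in_shape, kernel_size, strides, padding, dilation)
    )

# 224x224 input, 7x7 kernel, stride 2, padding 3 (a common first ResNet layer):
print(conv2d_out_shape((224, 224), (7, 7), strides=(2, 2), padding=(3, 3)))  # (112, 112)
```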
If `in_channels` is not specified, `Parameter` initialization will be deferred to the first time `forward` is called and `in_channels` will be inferred from the shape of input data. Parameters ---------- channels : int The dimensionality of the output space, i.e. the number of output channels (filters) in the convolution. kernel_size :int or tuple/list of 3 int Specifies the dimensions of the convolution window. strides : int or tuple/list of 3 int, Specify the strides of the convolution. padding : int or a tuple/list of 3 int, If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points dilation : int or tuple/list of 3 int Specifies the dilation rate to use for dilated convolution. groups : int Controls the connections between inputs and outputs. At groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated. layout : str, default 'NCDHW' Dimension ordering of data and weight. Can be 'NCDHW', 'NDHWC', etc. 'N', 'C', 'H', 'W', 'D' stands for batch, channel, height, width and depth dimensions respectively. Convolution is applied on the 'D', 'H' and 'W' dimensions. in_channels : int, default 0 The number of input channels to this layer. If not specified, initialization will be deferred to the first time `forward` is called and `in_channels` will be inferred from the shape of input data. activation : str Activation function to use. See :func:`~mxnet.ndarray.Activation`. If you don't specify anything, no activation is applied (ie. "linear" activation: `a(x) = x`). use_bias : bool Whether the layer uses a bias vector. weight_initializer : str or `Initializer` Initializer for the `weight` weights matrix. bias_initializer : str or `Initializer` Initializer for the bias vector. Input shape: This depends on the `layout` parameter. 
Input is 5D array of shape (batch_size, in_channels, depth, height, width) if `layout` is `NCDHW`. Output shape: This depends on the `layout` parameter. Output is 5D array of shape (batch_size, channels, out_depth, out_height, out_width) if `layout` is `NCDHW`. out_depth, out_height and out_width are calculated as:: out_depth = floor((depth+2*padding[0]-dilation[0]*(kernel_size[0]-1)-1)/stride[0])+1 out_height = floor((height+2*padding[1]-dilation[1]*(kernel_size[1]-1)-1)/stride[1])+1 out_width = floor((width+2*padding[2]-dilation[2]*(kernel_size[2]-1)-1)/stride[2])+1 iitNCDHWRc Ks{t|tƒr|fd}nt|ƒdks=tdƒ‚tt|ƒj|||||||| || | | |  dS(Nis0kernel_size must be a number or a list of 3 ints(R)RR*RUR$RXR%(R2R3R4R5R6R7R8RR:R;R<R=R9R ((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyR%~s (iii(iii(iiiN(RMRQRRR-R0R%(((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyRX9s D  tConv1DTransposec Bs8eZdZdddddddedddd„ ZRS(sγ Transposed 1D convolution layer (sometimes called Deconvolution). The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e., from something that has the shape of the output of some convolution to something that has the shape of its input while maintaining a connectivity pattern that is compatible with said convolution. If `in_channels` is not specified, `Parameter` initialization will be deferred to the first time `forward` is called and `in_channels` will be inferred from the shape of input data. Parameters ---------- channels : int The dimensionality of the output space, i.e. the number of output channels (filters) in the convolution. kernel_size :int or tuple/list of 3 int Specifies the dimensions of the convolution window. strides : int or tuple/list of 3 int, Specify the strides of the convolution. 
padding : int or a tuple/list of 3 int, If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points dilation : int or tuple/list of 3 int Specifies the dilation rate to use for dilated convolution. groups : int Controls the connections between inputs and outputs. At groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated. layout : str, default 'NCW' Dimension ordering of data and weight. Can be 'NCW', 'NWC', etc. 'N', 'C', 'W' stands for batch, channel, and width (time) dimensions respectively. Convolution is applied on the 'W' dimension. in_channels : int, default 0 The number of input channels to this layer. If not specified, initialization will be deferred to the first time `forward` is called and `in_channels` will be inferred from the shape of input data. activation : str Activation function to use. See :func:`~mxnet.ndarray.Activation`. If you don't specify anything, no activation is applied (ie. "linear" activation: `a(x) = x`). use_bias : bool Whether the layer uses a bias vector. weight_initializer : str or `Initializer` Initializer for the `weight` weights matrix. bias_initializer : str or `Initializer` Initializer for the bias vector. Input shape: This depends on the `layout` parameter. Input is 3D array of shape (batch_size, in_channels, width) if `layout` is `NCW`. Output shape: This depends on the `layout` parameter. Output is 3D array of shape (batch_size, channels, out_width) if `layout` is `NCW`. 
out_width is calculated as:: out_width = (width-1)*strides-2*padding+kernel_size+output_padding iiRTRcKsΕt|tƒr|f}nt|tƒr6|f}nt|ƒdksTtdƒ‚t|ƒdksrtdƒ‚tt|ƒj|||||||| | | | | ddd|| ||_dS(Nis0kernel_size must be a number or a list of 1 intss3output_padding must be a number or a list of 1 intsR t DeconvolutionR(R)RR*RUR$RZR%toutpad(R2R3R4R5R6toutput_paddingR7R8RR:R;R<R=R9R ((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyR%Μs   N(RMRQRRR-R0R%(((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyRZŠs A tConv2DTransposec Bs8eZdZdddd ddd ed ddd„ ZRS( s Transposed 2D convolution layer (sometimes called Deconvolution). The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e., from something that has the shape of the output of some convolution to something that has the shape of its input while maintaining a connectivity pattern that is compatible with said convolution. If `in_channels` is not specified, `Parameter` initialization will be deferred to the first time `forward` is called and `in_channels` will be inferred from the shape of input data. Parameters ---------- channels : int The dimensionality of the output space, i.e. the number of output channels (filters) in the convolution. kernel_size :int or tuple/list of 3 int Specifies the dimensions of the convolution window. strides : int or tuple/list of 3 int, Specify the strides of the convolution. padding : int or a tuple/list of 3 int, If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points dilation : int or tuple/list of 3 int Specifies the dilation rate to use for dilated convolution. groups : int Controls the connections between inputs and outputs. At groups=1, all inputs are convolved to all outputs. 
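The transposed-convolution `out_width` formula above inverts the forward convolution's shape arithmetic; `output_padding` resolves the ambiguity when the forward stride discarded a remainder. A pure-Python sketch (the helper name `deconv_out_width` is made up for illustration):

```python
def deconv_out_width(width, kernel_size, stride=1, padding=0, output_padding=0):
    """Output width of a 1D transposed convolution, per the docstring formula:
    out_width = (width-1)*strides - 2*padding + kernel_size + output_padding
    """
    return (width - 1) * stride - 2 * padding + kernel_size + output_padding

# Inverting a stride-2, kernel-3, padding-1 convolution: a width-16 input maps
# to 31; output_padding=1 selects 32 instead (both forward-map back to 16).
print(deconv_out_width(16, kernel_size=3, stride=2, padding=1))                    # 31
print(deconv_out_width(16, kernel_size=3, stride=2, padding=1, output_padding=1))  # 32
```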
At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated. layout : str, default 'NCHW' Dimension ordering of data and weight. Can be 'NCHW', 'NHWC', etc. 'N', 'C', 'H', 'W' stands for batch, channel, height, and width dimensions respectively. Convolution is applied on the 'H' and 'W' dimensions. in_channels : int, default 0 The number of input channels to this layer. If not specified, initialization will be deferred to the first time `forward` is called and `in_channels` will be inferred from the shape of input data. activation : str Activation function to use. See :func:`~mxnet.ndarray.Activation`. If you don't specify anything, no activation is applied (ie. "linear" activation: `a(x) = x`). use_bias : bool Whether the layer uses a bias vector. weight_initializer : str or `Initializer` Initializer for the `weight` weights matrix. bias_initializer : str or `Initializer` Initializer for the bias vector. Input shape: This depends on the `layout` parameter. Input is 4D array of shape (batch_size, in_channels, height, width) if `layout` is `NCHW`. Output shape: This depends on the `layout` parameter. Output is 4D array of shape (batch_size, channels, out_height, out_width) if `layout` is `NCHW`. 
out_height and out_width are calculated as:: out_height = (height-1)*strides[0]-2*padding[0]+kernel_size[0]+output_padding[0] out_width = (width-1)*strides[1]-2*padding[1]+kernel_size[1]+output_padding[1] iiRWRcKsΝt|tƒr|fd}nt|tƒr>|fd}nt|ƒdks\tdƒ‚t|ƒdksztdƒ‚tt|ƒj|||||||| | | | | ddd|| ||_dS(Nis0kernel_size must be a number or a list of 2 intss3output_padding must be a number or a list of 2 intsR R[R(R)RR*RUR$R^R%R\(R2R3R4R5R6R]R7R8RR:R;R<R=R9R ((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyR%"s (ii(ii(ii(iiN(RMRQRRR-R0R%(((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyR^έs D  tConv3DTransposec Bs8eZdZdddd ddd ed ddd„ ZRS( s Transposed 3D convolution layer (sometimes called Deconvolution). The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e., from something that has the shape of the output of some convolution to something that has the shape of its input while maintaining a connectivity pattern that is compatible with said convolution. If `in_channels` is not specified, `Parameter` initialization will be deferred to the first time `forward` is called and `in_channels` will be inferred from the shape of input data. Parameters ---------- channels : int The dimensionality of the output space, i.e. the number of output channels (filters) in the convolution. kernel_size :int or tuple/list of 3 int Specifies the dimensions of the convolution window. strides : int or tuple/list of 3 int, Specify the strides of the convolution. padding : int or a tuple/list of 3 int, If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points dilation : int or tuple/list of 3 int Specifies the dilation rate to use for dilated convolution. groups : int Controls the connections between inputs and outputs. At groups=1, all inputs are convolved to all outputs. 
At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated. layout : str, default 'NCDHW' Dimension ordering of data and weight. Can be 'NCDHW', 'NDHWC', etc. 'N', 'C', 'H', 'W', 'D' stands for batch, channel, height, width and depth dimensions respectively. Convolution is applied on the 'D', 'H', and 'W' dimensions. in_channels : int, default 0 The number of input channels to this layer. If not specified, initialization will be deferred to the first time `forward` is called and `in_channels` will be inferred from the shape of input data. activation : str Activation function to use. See :func:`~mxnet.ndarray.Activation`. If you don't specify anything, no activation is applied (ie. "linear" activation: `a(x) = x`). use_bias : bool Whether the layer uses a bias vector. weight_initializer : str or `Initializer` Initializer for the `weight` weights matrix. bias_initializer : str or `Initializer` Initializer for the bias vector. Input shape: This depends on the `layout` parameter. Input is 5D array of shape (batch_size, in_channels, depth, height, width) if `layout` is `NCDHW`. Output shape: This depends on the `layout` parameter. Output is 5D array of shape (batch_size, channels, out_depth, out_height, out_width) if `layout` is `NCDHW`. 
out_depth, out_height and out_width are calculated as:: out_depth = (depth-1)*strides[0]-2*padding[0]+kernel_size[0]+output_padding[0] out_height = (height-1)*strides[1]-2*padding[1]+kernel_size[1]+output_padding[1] out_width = (width-1)*strides[2]-2*padding[2]+kernel_size[2]+output_padding[2] iiRYRcKsΝt|tƒr|fd}nt|tƒr>|fd}nt|ƒdks\tdƒ‚t|ƒdksztdƒ‚tt|ƒj|||||||| | | | | ddd|| ||_dS(Nis0kernel_size must be a number or a list of 3 intss3output_padding must be a number or a list of 3 intsR R[R(R)RR*RUR$R_R%R\(R2R3R4R5R6R]R7R8RR:R;R<R=R9R ((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyR%xs(iii(iii(iii(iiiN(RMRQRRR-R0R%(((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyR_3s D  t_PoolingcBs2eZdZd„Zd„Zd„Zd„ZRS(s,Abstract class for different pooling layers.cKsΈtt|ƒj||dkr+|}nt|tƒrP|ft|ƒ}nt|tƒru|ft|ƒ}ni|d6|d6|d6|d6|d6|r§dndd6|_dS( NRRRt global_poolt pool_typetfulltvalidtpooling_convention(R$R`R%R-R)RR*R,(R2t pool_sizeR5R6t ceil_modeRaRbR ((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyR%‹s  cCsdS(Ntpool((R2((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyRF™scCs|j|dd|jS(NR@RA(tPoolingR,(R2RBRC((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyRDœscCs5d}|jd|jjd|jddk|jS(NsL{name}(size={kernel}, stride={stride}, padding={pad}, ceil_mode={ceil_mode})R@RgReRc(RKRLRMR,(R2RN((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyRPŸs(RMRQRRR%RFRDRP(((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyR`‰s    t MaxPool1DcBs&eZdZdddded„ZRS(s"Max pooling operation for one dimensional data. Parameters ---------- pool_size: int Size of the max pooling windows. strides: int, or None Factor by which to downscale. E.g. 2 will halve the input size. If `None`, it will default to `pool_size`. padding: int If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points. layout : str, default 'NCW' Dimension ordering of data and weight. Can be 'NCW', 'NWC', etc. 
'N', 'C', 'W' stands for batch, channel, and width (time) dimensions respectively. Pooling is applied on the W dimension. ceil_mode : bool, default False When `True`, will use ceil instead of floor to compute the output shape. Input shape: This depends on the `layout` parameter. Input is 3D array of shape (batch_size, channels, width) if `layout` is `NCW`. Output shape: This depends on the `layout` parameter. Output is 3D array of shape (batch_size, channels, out_width) if `layout` is `NCW`. out_width is calculated as:: out_width = floor((width+2*padding-pool_size)/strides)+1 When `ceil_mode` is `True`, ceil will be used instead of floor in this equation. iiRTcKs}|dkstdƒ‚t|tƒr3|f}nt|ƒdksQtdƒ‚tt|ƒj||||td|dS(NRTs Only supports NCW layout for nowis.pool_size must be a number or a list of 1 intstmax(RUR)RR*R$RjR%tFalse(R2RfR5R6RRgR ((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyR%Λs  N(RMRQRRR-RlR%(((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyRj¦s$ t MaxPool2DcBs&eZdZdddded„ZRS(sMax pooling operation for two dimensional (spatial) data. Parameters ---------- pool_size: int or list/tuple of 2 ints, Size of the max pooling windows. strides: int, list/tuple of 2 ints, or None. Factor by which to downscale. E.g. 2 will halve the input size. If `None`, it will default to `pool_size`. padding: int or list/tuple of 2 ints, If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points. layout : str, default 'NCHW' Dimension ordering of data and weight. Can be 'NCHW', 'NHWC', etc. 'N', 'C', 'H', 'W' stands for batch, channel, height, and width dimensions respectively. padding is applied on 'H' and 'W' dimension. ceil_mode : bool, default False When `True`, will use ceil instead of floor to compute the output shape. Input shape: This depends on the `layout` parameter. Input is 4D array of shape (batch_size, channels, height, width) if `layout` is `NCHW`. 
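The pooling `out_width` formula above, including the `ceil_mode` switch, can be sketched in pure Python (the helper name `pool_out_width` is illustrative; `strides` defaults to `pool_size`, matching the layer's behavior):

```python
import math

def pool_out_width(width, pool_size, stride=None, padding=0, ceil_mode=False):
    """Output width of 1D pooling per the docstring formula; ceil_mode
    swaps floor for ceil so a partial trailing window is kept."""
    if stride is None:
        stride = pool_size
    rnd = math.ceil if ceil_mode else math.floor
    return rnd((width + 2 * padding - pool_size) / stride) + 1

# pool_size=2 halves an even width; with an odd width the trailing element is
# dropped under floor but pooled as a partial window under ceil.
print(pool_out_width(32, pool_size=2))                  # 16
print(pool_out_width(33, pool_size=2))                  # 16
print(pool_out_width(33, pool_size=2, ceil_mode=True))  # 17
```

The 2D and 3D pooling layers apply the same rule independently per spatial axis.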
Output shape: This depends on the `layout` parameter. Output is 4D array of shape (batch_size, channels, out_height, out_width) if `layout` is `NCHW`. out_height and out_width are calculated as:: out_height = floor((height+2*padding[0]-pool_size[0])/strides[0])+1 out_width = floor((width+2*padding[1]-pool_size[1])/strides[1])+1 When `ceil_mode` is `True`, ceil will be used instead of floor in this equation. iiRWcKs|dkstdƒ‚t|tƒr7|fd}nt|ƒdksUtdƒ‚tt|ƒj||||td|dS(NRWs!Only supports NCHW layout for nowis.pool_size must be a number or a list of 2 intsRk(RUR)RR*R$RmR%Rl(R2RfR5R6RRgR ((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyR%ϋs (iiN(RMRQRRR-RlR%(((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyRmΥs% t MaxPool3DcBs&eZdZdddedd„ZRS(sšMax pooling operation for 3D data (spatial or spatio-temporal). Parameters ---------- pool_size: int or list/tuple of 3 ints, Size of the max pooling windows. strides: int, list/tuple of 3 ints, or None. Factor by which to downscale. E.g. 2 will halve the input size. If `None`, it will default to `pool_size`. padding: int or list/tuple of 3 ints, If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points. layout : str, default 'NCDHW' Dimension ordering of data and weight. Can be 'NCDHW', 'NDHWC', etc. 'N', 'C', 'H', 'W', 'D' stands for batch, channel, height, width and depth dimensions respectively. padding is applied on 'D', 'H' and 'W' dimension. ceil_mode : bool, default False When `True`, will use ceil instead of floor to compute the output shape. Input shape: This depends on the `layout` parameter. Input is 5D array of shape (batch_size, channels, depth, height, width) if `layout` is `NCDHW`. Output shape: This depends on the `layout` parameter. Output is 5D array of shape (batch_size, channels, out_depth, out_height, out_width) if `layout` is `NCDHW`. 
out_depth, out_height and out_width are calculated as :: out_depth = floor((depth+2*padding[0]-pool_size[0])/strides[0])+1 out_height = floor((height+2*padding[1]-pool_size[1])/strides[1])+1 out_width = floor((width+2*padding[2]-pool_size[2])/strides[2])+1 When `ceil_mode` is `True`, ceil will be used instead of floor in this equation. iiRYcKs|dkstdƒ‚t|tƒr7|fd}nt|ƒdksUtdƒ‚tt|ƒj||||td|dS(NRYs"Only supports NCDHW layout for nowis.pool_size must be a number or a list of 3 intsRk(RUR)RR*R$RnR%Rl(R2RfR5R6RgRR ((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyR%.s (iiiN(RMRQRRR-RlR%(((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyRns( t AvgPool1DcBs&eZdZdddded„ZRS(sAverage pooling operation for temporal data. Parameters ---------- pool_size: int Size of the max pooling windows. strides: int, or None Factor by which to downscale. E.g. 2 will halve the input size. If `None`, it will default to `pool_size`. padding: int If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points. layout : str, default 'NCW' Dimension ordering of data and weight. Can be 'NCW', 'NWC', etc. 'N', 'C', 'W' stands for batch, channel, and width (time) dimensions respectively. padding is applied on 'W' dimension. ceil_mode : bool, default False When `True`, will use ceil instead of floor to compute the output shape. Input shape: This depends on the `layout` parameter. Input is 3D array of shape (batch_size, channels, width) if `layout` is `NCW`. Output shape: This depends on the `layout` parameter. Output is 3D array of shape (batch_size, channels, out_width) if `layout` is `NCW`. out_width is calculated as:: out_width = floor((width+2*padding-pool_size)/strides)+1 When `ceil_mode` is `True`, ceil will be used instead of floor in this equation. 
iiRTcKs}|dkstdƒ‚t|tƒr3|f}nt|ƒdksQtdƒ‚tt|ƒj||||td|dS(NRTs Only supports NCW layout for nowis.pool_size must be a number or a list of 1 intstavg(RUR)RR*R$RoR%Rl(R2RfR5R6RRgR ((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyR%\s  N(RMRQRRR-RlR%(((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyRo8s# t AvgPool2DcBs&eZdZdddedd„ZRS(sσAverage pooling operation for spatial data. Parameters ---------- pool_size: int or list/tuple of 2 ints, Size of the max pooling windows. strides: int, list/tuple of 2 ints, or None. Factor by which to downscale. E.g. 2 will halve the input size. If `None`, it will default to `pool_size`. padding: int or list/tuple of 2 ints, If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points. layout : str, default 'NCHW' Dimension ordering of data and weight. Can be 'NCHW', 'NHWC', etc. 'N', 'C', 'H', 'W' stands for batch, channel, height, and width dimensions respectively. padding is applied on 'H' and 'W' dimension. ceil_mode : bool, default False When True, will use ceil instead of floor to compute the output shape. Input shape: This depends on the `layout` parameter. Input is 4D array of shape (batch_size, channels, height, width) if `layout` is `NCHW`. Output shape: This depends on the `layout` parameter. Output is 4D array of shape (batch_size, channels, out_height, out_width) if `layout` is `NCHW`. out_height and out_width are calculated as:: out_height = floor((height+2*padding[0]-pool_size[0])/strides[0])+1 out_width = floor((width+2*padding[1]-pool_size[1])/strides[1])+1 When `ceil_mode` is `True`, ceil will be used instead of floor in this equation. 
iiRWcKs|dkstdƒ‚t|tƒr7|fd}nt|ƒdksUtdƒ‚tt|ƒj||||td|dS(NRWs!Only supports NCHW layout for nowis.pool_size must be a number or a list of 2 intsRp(RUR)RR*R$RqR%Rl(R2RfR5R6RgRR ((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyR%‹s (iiN(RMRQRRR-RlR%(((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyRqfs$ t AvgPool3DcBs&eZdZdddedd„ZRS(s›Average pooling operation for 3D data (spatial or spatio-temporal). Parameters ---------- pool_size: int or list/tuple of 3 ints, Size of the max pooling windows. strides: int, list/tuple of 3 ints, or None. Factor by which to downscale. E.g. 2 will halve the input size. If `None`, it will default to `pool_size`. padding: int or list/tuple of 3 ints, If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points. layout : str, default 'NCDHW' Dimension ordering of data and weight. Can be 'NCDHW', 'NDHWC', etc. 'N', 'C', 'H', 'W', 'D' stands for batch, channel, height, width and depth dimensions respectively. padding is applied on 'D', 'H' and 'W' dimension. ceil_mode : bool, default False When True, will use ceil instead of floor to compute the output shape. Input shape: This depends on the `layout` parameter. Input is 5D array of shape (batch_size, channels, depth, height, width) if `layout` is `NCDHW`. Output shape: This depends on the `layout` parameter. Output is 5D array of shape (batch_size, channels, out_depth, out_height, out_width) if `layout` is `NCDHW`. out_depth, out_height and out_width are calculated as :: out_depth = floor((depth+2*padding[0]-pool_size[0])/strides[0])+1 out_height = floor((height+2*padding[1]-pool_size[1])/strides[1])+1 out_width = floor((width+2*padding[2]-pool_size[2])/strides[2])+1 When `ceil_mode` is `True,` ceil will be used instead of floor in this equation. 
iiRYcKs|dkstdƒ‚t|tƒr7|fd}nt|ƒdksUtdƒ‚tt|ƒj||||td|dS(NRYs"Only supports NCDHW layout for nowis.pool_size must be a number or a list of 3 intsRp(RUR)RR*R$RrR%Rl(R2RfR5R6RgRR ((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyR%½s (iiiN(RMRQRRR-RlR%(((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyRr•s' tGlobalMaxPool1DcBseZdZdd„ZRS(s/Global max pooling operation for temporal data.RTcKsD|dkstdƒ‚tt|ƒjdddttd|dS(NRTs Only supports NCW layout for nowiiRk(i(RUR$RsR%R-R0(R2RR ((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyR%Ιs(RMRQRRR%(((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyRsΗstGlobalMaxPool2DcBseZdZdd„ZRS(s.Global max pooling operation for spatial data.RWcKsD|dkstdƒ‚tt|ƒjdddttd|dS(NRWs Only supports NCW layout for nowiiRk(ii(RUR$RtR%R-R0(R2RR ((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyR%Ρs(RMRQRRR%(((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyRtΟstGlobalMaxPool3DcBseZdZdd„ZRS(s)Global max pooling operation for 3D data.RYcKsD|dkstdƒ‚tt|ƒjdddttd|dS(NRYs Only supports NCW layout for nowiiRk(iii(RUR$RuR%R-R0(R2RR ((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyR%Ψs(RMRQRRR%(((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyRuΦstGlobalAvgPool1DcBseZdZdd„ZRS(s3Global average pooling operation for temporal data.RTcKsD|dkstdƒ‚tt|ƒjdddttd|dS(NRTs Only supports NCW layout for nowiiRp(i(RUR$RvR%R-R0(R2RR ((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyR%ΰs(RMRQRRR%(((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyRvήstGlobalAvgPool2DcBseZdZdd„ZRS(s2Global average pooling operation for spatial data.RWcKsD|dkstdƒ‚tt|ƒjdddttd|dS(NRWs Only supports NCW layout for nowiiRp(ii(RUR$RwR%R-R0(R2RR ((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyR%θs(RMRQRRR%(((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyRwζstGlobalAvgPool3DcBseZdZdd„ZRS(s)Global max pooling operation for 3D 
data.RYcKsD|dkstdƒ‚tt|ƒjdddttd|dS(NRYs Only supports NCW layout for nowiiRp(iii(RUR$RxR%R-R0(R2RR ((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyR%πs(RMRQRRR%(((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyRxξsN(RRtblockRtRtbaseRt basic_layersRRRRSRVRXRZR^R_R`RjRmRnRoRqRrRsRtRuRvRwRx(((s:build/bdist.linux-armv7l/egg/mxnet/gluon/nn/conv_layers.pyts2 {NOQSVV/03./2