# coding: utf-8
"""losses for training neural networks"""
from __future__ import absolute_import

__all__ = ['Loss', 'L2Loss', 'L1Loss',
           'SigmoidBinaryCrossEntropyLoss', 'SigmoidBCELoss',
           'SoftmaxCrossEntropyLoss', 'SoftmaxCELoss', 'KLDivLoss',
           'CTCLoss', 'HuberLoss', 'HingeLoss', 'SquaredHingeLoss',
           'LogisticLoss', 'TripletLoss']

from .. import ndarray
from ..base import numeric_types
from .block import HybridBlock


def _apply_weighting(F, loss, weight=None, sample_weight=None):
    """Apply weighting to loss.

    Parameters
    ----------
    loss : Symbol
        The loss to be weighted.
    weight : float or None
        Global scalar weight for loss.
    sample_weight : Symbol or None
        Per sample weighting. Must be broadcastable to
        the same shape as loss. For example, if loss has
        shape (64, 10) and you want to weight each sample
        in the batch separately, `sample_weight` should have
        shape (64, 1).

    Returns
    -------
    loss : Symbol
        Weighted loss
    """
    if sample_weight is not None:
        loss = F.broadcast_mul(loss, sample_weight)

    if weight is not None:
        assert isinstance(weight, numeric_types), "weight must be a number"
        loss = loss * weight

    return loss


def _reshape_like(F, x, y):
    """Reshapes x to the same shape as y."""
    return x.reshape(y.shape) if F is ndarray else F.reshape_like(x, y)


class Loss(HybridBlock):
    """Base class for loss.

    Parameters
    ----------
    weight : float or None
        Global scalar weight for loss.
    batch_axis : int, default 0
        The axis that represents mini-batch.
    """
    def __init__(self, weight, batch_axis, **kwargs):
        super(Loss, self).__init__(**kwargs)
        self._weight = weight
        self._batch_axis = batch_axis

    def __repr__(self):
        s = '{name}(batch_axis={_batch_axis}, w={_weight})'
        return s.format(name=self.__class__.__name__, **self.__dict__)

    def hybrid_forward(self, F, x, *args, **kwargs):
        """Override to construct the symbolic graph for this `Block`.

        Parameters
        ----------
        x : Symbol or NDArray
            The first input tensor.
        *args : list of Symbol or list of NDArray
            Additional input tensors.
        """
        raise NotImplementedError


class L2Loss(Loss):
    r"""Calculates the mean squared error between `pred` and `label`.

    .. math:: L = \frac{1}{2} \sum_i \vert {pred}_i - {label}_i \vert^2.

    `pred` and `label` can have arbitrary shape as long as they have the same
    number of elements.

    Parameters
    ----------
    weight : float or None
        Global scalar weight for loss.
    batch_axis : int, default 0
        The axis that represents mini-batch.

    Inputs:
        - **pred**: prediction tensor with arbitrary shape
        - **label**: target tensor with the same size as pred.
        - **sample_weight**: element-wise weighting tensor. Must be broadcastable
          to the same shape as pred. For example, if pred has shape (64, 10)
          and you want to weigh each sample in the batch separately,
          sample_weight should have shape (64, 1).

    Outputs:
        - **loss**: loss tensor with shape (batch_size,). Dimensions other than
          batch_axis are averaged out.
    """
    def __init__(self, weight=1., batch_axis=0, **kwargs):
        super(L2Loss, self).__init__(weight, batch_axis, **kwargs)

    def hybrid_forward(self, F, pred, label, sample_weight=None):
        label = _reshape_like(F, label, pred)
        loss = F.square(pred - label)
        loss = _apply_weighting(F, loss, self._weight / 2, sample_weight)
        return F.mean(loss, axis=self._batch_axis, exclude=True)


class L1Loss(Loss):
    r"""Calculates the mean absolute error between `pred` and `label`.

    .. math:: L = \sum_i \vert {pred}_i - {label}_i \vert.

    `pred` and `label` can have arbitrary shape as long as they have the same
    number of elements.

    Parameters
    ----------
    weight : float or None
        Global scalar weight for loss.
    batch_axis : int, default 0
        The axis that represents mini-batch.

    Inputs:
        - **pred**: prediction tensor with arbitrary shape
        - **label**: target tensor with the same size as pred.
        - **sample_weight**: element-wise weighting tensor. Must be broadcastable
          to the same shape as pred. For example, if pred has shape (64, 10)
          and you want to weigh each sample in the batch separately,
          sample_weight should have shape (64, 1).

    Outputs:
        - **loss**: loss tensor with shape (batch_size,). Dimensions other than
          batch_axis are averaged out.
    """
    def __init__(self, weight=None, batch_axis=0, **kwargs):
        super(L1Loss, self).__init__(weight, batch_axis, **kwargs)

    def hybrid_forward(self, F, pred, label, sample_weight=None):
        label = _reshape_like(F, label, pred)
        loss = F.abs(pred - label)
        loss = _apply_weighting(F, loss, self._weight, sample_weight)
        return F.mean(loss, axis=self._batch_axis, exclude=True)
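
# A minimal usage sketch (illustrative, not part of the upstream module):
# computes the per-sample L2 loss on toy data. The `_demo_l2_loss` helper is
# hypothetical; it assumes the MXNet 1.2.x Gluon API, where a loss block is
# called like any other `HybridBlock`.
def _demo_l2_loss():
    from mxnet import nd
    loss_fn = L2Loss()
    pred = nd.array([[0.5, 1.5], [2.0, 0.0]])
    label = nd.array([[1.0, 1.0], [2.0, 1.0]])
    # Per sample: 0.5 * mean((pred - label)**2) over non-batch axes,
    # giving [0.125, 0.25] for this data.
    return loss_fn(pred, label)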
class SigmoidBinaryCrossEntropyLoss(Loss):
    r"""The cross-entropy loss for binary classification. (alias: SigmoidBCELoss)

    BCE loss is useful when training logistic regression. If `from_sigmoid`
    is False (default), this loss computes:

    .. math::

        prob = \frac{1}{1 + \exp(-{pred})}

        L = - \sum_i {label}_i * \log({prob}_i) +
            (1 - {label}_i) * \log(1 - {prob}_i)

    If `from_sigmoid` is True, this loss computes:

    .. math::

        L = - \sum_i {label}_i * \log({pred}_i) +
            (1 - {label}_i) * \log(1 - {pred}_i)

    `pred` and `label` can have arbitrary shape as long as they have the same
    number of elements.

    Parameters
    ----------
    from_sigmoid : bool, default is `False`
        Whether the input is from the output of sigmoid. Setting this to False
        will make the loss calculate sigmoid and BCE together, which is more
        numerically stable through the log-sum-exp trick.
    weight : float or None
        Global scalar weight for loss.
    batch_axis : int, default 0
        The axis that represents mini-batch.

    Inputs:
        - **pred**: prediction tensor with arbitrary shape
        - **label**: target tensor with values in range `[0, 1]`. Must have
          the same size as `pred`.
        - **sample_weight**: element-wise weighting tensor. Must be broadcastable
          to the same shape as pred. For example, if pred has shape (64, 10)
          and you want to weigh each sample in the batch separately,
          sample_weight should have shape (64, 1).

    Outputs:
        - **loss**: loss tensor with shape (batch_size,). Dimensions other than
          batch_axis are averaged out.
    """
    def __init__(self, from_sigmoid=False, weight=None, batch_axis=0, **kwargs):
        super(SigmoidBinaryCrossEntropyLoss, self).__init__(
            weight, batch_axis, **kwargs)
        self._from_sigmoid = from_sigmoid

    def hybrid_forward(self, F, pred, label, sample_weight=None):
        label = _reshape_like(F, label, pred)
        if not self._from_sigmoid:
            # Numerically stable form: max(x, 0) - x*z + log(1 + exp(-|x|))
            loss = F.relu(pred) - pred * label + \
                F.Activation(-F.abs(pred), act_type='softrelu')
        else:
            loss = -(F.log(pred + 1e-12) * label
                     + F.log(1. - pred + 1e-12) * (1. - label))
        loss = _apply_weighting(F, loss, self._weight, sample_weight)
        return F.mean(loss, axis=self._batch_axis, exclude=True)


SigmoidBCELoss = SigmoidBinaryCrossEntropyLoss


class SoftmaxCrossEntropyLoss(Loss):
    r"""Computes the softmax cross entropy loss. (alias: SoftmaxCELoss)

    If `sparse_label` is `True` (default), label should contain integer
    category indicators:

    .. math::

        \DeclareMathOperator{softmax}{softmax}

        p = \softmax({pred})

        L = -\sum_i \log p_{i,{label}_i}

    `label`'s shape should be `pred`'s shape with the `axis` dimension removed,
    i.e. for `pred` with shape (1,2,3,4) and `axis = 2`, `label`'s shape should
    be (1,2,4).

    If `sparse_label` is `False`, `label` should contain a probability
    distribution and `label`'s shape should be the same as `pred`:

    .. math::

        p = \softmax({pred})

        L = -\sum_i \sum_j {label}_j \log p_{ij}

    Parameters
    ----------
    axis : int, default -1
        The axis to sum over when computing softmax and entropy.
    sparse_label : bool, default True
        Whether label is an integer array instead of probability distribution.
    from_logits : bool, default False
        Whether input is a log probability (usually from log_softmax) instead
        of unnormalized numbers.
    weight : float or None
        Global scalar weight for loss.
    batch_axis : int, default 0
        The axis that represents mini-batch.

    Inputs:
        - **pred**: the prediction tensor, where the `batch_axis` dimension
          ranges over batch size and `axis` dimension ranges over the number
          of classes.
        - **label**: the truth tensor. When `sparse_label` is True, `label`'s
          shape should be `pred`'s shape with the `axis` dimension removed,
          i.e. for `pred` with shape (1,2,3,4) and `axis = 2`, `label`'s shape
          should be (1,2,4) and values should be integers between 0 and 2.
          If `sparse_label` is False, `label`'s shape must be the same as
          `pred` and values should be floats in the range `[0, 1]`.
        - **sample_weight**: element-wise weighting tensor. Must be broadcastable
          to the same shape as label. For example, if label has shape (64, 10)
          and you want to weigh each sample in the batch separately,
          sample_weight should have shape (64, 1).

    Outputs:
        - **loss**: loss tensor with shape (batch_size,). Dimensions other than
          batch_axis are averaged out.
    """
    def __init__(self, axis=-1, sparse_label=True, from_logits=False,
                 weight=None, batch_axis=0, **kwargs):
        super(SoftmaxCrossEntropyLoss, self).__init__(
            weight, batch_axis, **kwargs)
        self._axis = axis
        self._sparse_label = sparse_label
        self._from_logits = from_logits

    def hybrid_forward(self, F, pred, label, sample_weight=None):
        if not self._from_logits:
            pred = F.log_softmax(pred, self._axis)
        if self._sparse_label:
            loss = -F.pick(pred, label, axis=self._axis, keepdims=True)
        else:
            label = _reshape_like(F, label, pred)
            loss = -F.sum(pred * label, axis=self._axis, keepdims=True)
        loss = _apply_weighting(F, loss, self._weight, sample_weight)
        return F.mean(loss, axis=self._batch_axis, exclude=True)


SoftmaxCELoss = SoftmaxCrossEntropyLoss


class KLDivLoss(Loss):
    r"""The Kullback-Leibler divergence loss.

    KL divergence measures the distance between two distributions. It can be
    used to minimize information loss when approximating a distribution.
    If `from_logits` is True (default), loss is defined as:

    .. math::

        L = \sum_i {label}_i * \big[\log({label}_i) - {pred}_i\big]

    If `from_logits` is False, loss is defined as:

    .. math::

        \DeclareMathOperator{softmax}{softmax}

        prob = \softmax({pred})

        L = \sum_i {label}_i * \big[\log({label}_i) - \log({prob}_i)\big]

    `pred` and `label` can have arbitrary shape as long as they have the same
    number of elements.

    Parameters
    ----------
    from_logits : bool, default is `True`
        Whether the input is log probability (usually from log_softmax)
        instead of unnormalized numbers.
    axis : int, default -1
        The dimension along which to compute softmax. Only used when
        `from_logits` is False.
    weight : float or None
        Global scalar weight for loss.
    batch_axis : int, default 0
        The axis that represents mini-batch.

    Inputs:
        - **pred**: prediction tensor with arbitrary shape. If `from_logits`
          is True, `pred` should be log probabilities. Otherwise, it should be
          unnormalized predictions, i.e. from a dense layer.
        - **label**: truth tensor with values in range `(0, 1)`. Must have
          the same size as `pred`.
        - **sample_weight**: element-wise weighting tensor. Must be broadcastable
          to the same shape as pred. For example, if pred has shape (64, 10)
          and you want to weigh each sample in the batch separately,
          sample_weight should have shape (64, 1).

    Outputs:
        - **loss**: loss tensor with shape (batch_size,). Dimensions other than
          batch_axis are averaged out.

    References
    ----------
    `Kullback-Leibler divergence
    <https://en.wikipedia.org/wiki/Kullback-Leibler_divergence>`_
    """
    def __init__(self, from_logits=True, axis=-1, weight=None, batch_axis=0,
                 **kwargs):
        super(KLDivLoss, self).__init__(weight, batch_axis, **kwargs)
        self._from_logits = from_logits
        self._axis = axis

    def hybrid_forward(self, F, pred, label, sample_weight=None):
        if not self._from_logits:
            pred = F.log_softmax(pred, self._axis)
        loss = label * (F.log(label + 1e-12) - pred)
        loss = _apply_weighting(F, loss, self._weight, sample_weight)
        return F.mean(loss, axis=self._batch_axis, exclude=True)
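
# A minimal usage sketch (illustrative, not part of the upstream module):
# softmax cross entropy with sparse integer labels. The `_demo_softmax_ce`
# helper is hypothetical and assumes the MXNet 1.2.x Gluon API.
def _demo_softmax_ce():
    from mxnet import nd
    loss_fn = SoftmaxCrossEntropyLoss()
    # Two samples, three classes; `label` holds class indices because
    # `sparse_label` defaults to True.
    pred = nd.array([[2.0, 0.5, 0.1], [0.3, 0.2, 2.5]])
    label = nd.array([0, 2])
    # Returns one loss value per sample: -log(softmax(pred)[i, label[i]]).
    return loss_fn(pred, label)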
class CTCLoss(Loss):
    r"""Connectionist Temporal Classification Loss.

    Parameters
    ----------
    layout : str, default 'NTC'
        Layout of prediction tensor. 'N', 'T', 'C' stands for batch size,
        sequence length, and alphabet_size respectively.
    label_layout : str, default 'NT'
        Layout of the labels. 'N', 'T' stands for batch size, and sequence
        length respectively.
    weight : float or None
        Global scalar weight for loss.

    Inputs:
        - **pred**: unnormalized prediction tensor (before softmax).
          Its shape depends on `layout`. If `layout` is 'TNC', pred
          should have shape `(sequence_length, batch_size, alphabet_size)`.
          Note that in the last dimension, index `alphabet_size-1` is reserved
          for internal use as blank label. So `alphabet_size` is one plus the
          actual alphabet size.
        - **label**: zero-based label tensor. Its shape depends on
          `label_layout`. If `label_layout` is 'TN', `label` should have
          shape `(label_sequence_length, batch_size)`.
        - **pred_lengths**: optional (default None), used for specifying the
          length of each entry when different `pred` entries in the same batch
          have different lengths. `pred_lengths` should have shape
          `(batch_size,)`.
        - **label_lengths**: optional (default None), used for specifying the
          length of each entry when different `label` entries in the same
          batch have different lengths. `label_lengths` should have shape
          `(batch_size,)`.

    Outputs:
        - **loss**: output loss has shape `(batch_size,)`.

    **Example**: suppose the vocabulary is `[a, b, c]`, and in one batch we
    have three sequences 'ba', 'cbb', and 'abac'. We can index the labels as
    `{'a': 0, 'b': 1, 'c': 2, blank: 3}`. Then `alphabet_size` should be 4,
    where label 3 is reserved for internal use by `CTCLoss`. We then need to
    pad each sequence with `-1` to make a rectangular `label` tensor::

        [[1, 0, -1, -1],
         [2, 1, 1, -1],
         [0, 1, 0, 2]]

    References
    ----------
    Graves et al. *Connectionist Temporal Classification: Labelling
    Unsegmented Sequence Data with Recurrent Neural Networks*. ICML 2006.
    """
    def __init__(self, layout='NTC', label_layout='NT', weight=None, **kwargs):
        assert layout in ['NTC', 'TNC'], \
            "Only 'NTC' and 'TNC' layouts for pred are supported. Got: %s" % layout
        assert label_layout in ['NT', 'TN'], \
            "Only 'NT' and 'TN' layouts for label are supported. Got: %s" % label_layout
        self._layout = layout
        self._label_layout = label_layout
        batch_axis = label_layout.find('N')
        super(CTCLoss, self).__init__(weight, batch_axis, **kwargs)

    def hybrid_forward(self, F, pred, label,
                       pred_lengths=None, label_lengths=None, sample_weight=None):
        if self._layout == 'NTC':
            pred = F.swapaxes(pred, 0, 1)
        if self._batch_axis == 1:
            label = F.swapaxes(label, 0, 1)
        loss = F.contrib.CTCLoss(pred, label, pred_lengths, label_lengths,
                                 use_data_lengths=pred_lengths is not None,
                                 use_label_lengths=label_lengths is not None,
                                 blank_label='last')
        return _apply_weighting(F, loss, self._weight, sample_weight)


class HuberLoss(Loss):
    r"""Calculates smoothed L1 loss that is equal to L1 loss if absolute error
    exceeds rho but is equal to L2 loss otherwise. Also called SmoothedL1 loss.

    .. math::
        L = \sum_i \begin{cases} \frac{1}{2 {rho}} ({pred}_i - {label}_i)^2 &
                           \text{ if } |{pred}_i - {label}_i| < {rho} \\
                           |{pred}_i - {label}_i| - \frac{{rho}}{2} &
                           \text{ otherwise }
            \end{cases}

    `pred` and `label` can have arbitrary shape as long as they have the same
    number of elements.

    Parameters
    ----------
    rho : float, default 1
        Threshold for trimmed mean estimator.
    weight : float or None
        Global scalar weight for loss.
    batch_axis : int, default 0
        The axis that represents mini-batch.

    Inputs:
        - **pred**: prediction tensor with arbitrary shape
        - **label**: target tensor with the same size as pred.
        - **sample_weight**: element-wise weighting tensor. Must be broadcastable
          to the same shape as pred. For example, if pred has shape (64, 10)
          and you want to weigh each sample in the batch separately,
          sample_weight should have shape (64, 1).

    Outputs:
        - **loss**: loss tensor with shape (batch_size,). Dimensions other than
          batch_axis are averaged out.
    """
    def __init__(self, rho=1, weight=None, batch_axis=0, **kwargs):
        super(HuberLoss, self).__init__(weight, batch_axis, **kwargs)
        self._rho = rho

    def hybrid_forward(self, F, pred, label, sample_weight=None):
        label = _reshape_like(F, label, pred)
        loss = F.abs(pred - label)
        loss = F.where(loss > self._rho, loss - 0.5 * self._rho,
                       (0.5 / self._rho) * F.square(loss))
        loss = _apply_weighting(F, loss, self._weight, sample_weight)
        return F.mean(loss, axis=self._batch_axis, exclude=True)


class HingeLoss(Loss):
    r"""Calculates the hinge loss function often used in SVMs:

    .. math::
        L = \sum_i \max(0, {margin} - {pred}_i \cdot {label}_i)

    where `pred` is the classifier prediction and `label` is the target tensor
    containing values -1 or 1. `pred` and `label` must have the same number of
    elements.

    Parameters
    ----------
    margin : float
        The margin in hinge loss. Defaults to 1.0
    weight : float or None
        Global scalar weight for loss.
    batch_axis : int, default 0
        The axis that represents mini-batch.

    Inputs:
        - **pred**: prediction tensor with arbitrary shape.
        - **label**: truth tensor with values -1 or 1. Must have the same size
          as pred.
        - **sample_weight**: element-wise weighting tensor. Must be broadcastable
          to the same shape as pred. For example, if pred has shape (64, 10)
          and you want to weigh each sample in the batch separately,
          sample_weight should have shape (64, 1).

    Outputs:
        - **loss**: loss tensor with shape (batch_size,). Dimensions other than
          batch_axis are averaged out.
    """
    def __init__(self, margin=1, weight=None, batch_axis=0, **kwargs):
        super(HingeLoss, self).__init__(weight, batch_axis, **kwargs)
        self._margin = margin

    def hybrid_forward(self, F, pred, label, sample_weight=None):
        label = _reshape_like(F, label, pred)
        loss = F.relu(self._margin - pred * label)
        loss = _apply_weighting(F, loss, self._weight, sample_weight)
        return F.mean(loss, axis=self._batch_axis, exclude=True)


class SquaredHingeLoss(Loss):
    r"""Calculates the soft-margin loss function used in SVMs:

    .. math::
        L = \sum_i \max(0, {margin} - {pred}_i \cdot {label}_i)^2

    where `pred` is the classifier prediction and `label` is the target tensor
    containing values -1 or 1. `pred` and `label` can have arbitrary shape as
    long as they have the same number of elements.

    Parameters
    ----------
    margin : float
        The margin in hinge loss. Defaults to 1.0
    weight : float or None
        Global scalar weight for loss.
    batch_axis : int, default 0
        The axis that represents mini-batch.

    Inputs:
        - **pred**: prediction tensor with arbitrary shape
        - **label**: truth tensor with values -1 or 1. Must have the same size
          as pred.
        - **sample_weight**: element-wise weighting tensor. Must be broadcastable
          to the same shape as pred. For example, if pred has shape (64, 10)
          and you want to weigh each sample in the batch separately,
          sample_weight should have shape (64, 1).

    Outputs:
        - **loss**: loss tensor with shape (batch_size,). Dimensions other than
          batch_axis are averaged out.
    """
    def __init__(self, margin=1, weight=None, batch_axis=0, **kwargs):
        super(SquaredHingeLoss, self).__init__(weight, batch_axis, **kwargs)
        self._margin = margin

    def hybrid_forward(self, F, pred, label, sample_weight=None):
        label = _reshape_like(F, label, pred)
        loss = F.square(F.relu(self._margin - pred * label))
        loss = _apply_weighting(F, loss, self._weight, sample_weight)
        return F.mean(loss, axis=self._batch_axis, exclude=True)
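
# A minimal usage sketch (illustrative, not part of the upstream module):
# CTC loss over a toy batch, following the padding convention from the
# `CTCLoss` docstring above. The `_demo_ctc` helper is hypothetical and
# assumes the MXNet 1.2.x Gluon API.
def _demo_ctc():
    from mxnet import nd
    loss_fn = CTCLoss(layout='NTC', label_layout='NT')
    # batch_size=3, sequence_length=20, alphabet_size=4 (index 3 = blank).
    pred = nd.random.uniform(shape=(3, 20, 4))
    # 'ba', 'cbb', 'abac' indexed with {'a': 0, 'b': 1, 'c': 2}, padded
    # with -1 to a rectangular tensor.
    label = nd.array([[1, 0, -1, -1],
                      [2, 1, 1, -1],
                      [0, 1, 0, 2]])
    return loss_fn(pred, label)   # shape (3,)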
class LogisticLoss(Loss):
    r"""Calculates the logistic loss (for binary losses only):

    .. math::

        L = \sum_i \log(1 + \exp(- {pred}_i \cdot {label}_i))

    where `pred` is the classifier prediction and `label` is the target tensor
    containing values -1 or 1 (0 or 1 if `label_format` is binary).
    `pred` and `label` can have arbitrary shape as long as they have the same
    number of elements.

    Parameters
    ----------
    weight : float or None
        Global scalar weight for loss.
    batch_axis : int, default 0
        The axis that represents mini-batch.
    label_format : str, default 'signed'
        Can be either 'signed' or 'binary'. If the label_format is 'signed',
        all label values should be either -1 or 1. If the label_format is
        'binary', all label values should be either 0 or 1.

    Inputs:
        - **pred**: prediction tensor with arbitrary shape.
        - **label**: truth tensor with values -1/1 (label_format is 'signed')
          or 0/1 (label_format is 'binary'). Must have the same size as pred.
        - **sample_weight**: element-wise weighting tensor. Must be broadcastable
          to the same shape as pred. For example, if pred has shape (64, 10)
          and you want to weigh each sample in the batch separately,
          sample_weight should have shape (64, 1).

    Outputs:
        - **loss**: loss tensor with shape (batch_size,). Dimensions other than
          batch_axis are averaged out.
    """
    def __init__(self, weight=None, batch_axis=0, label_format='signed', **kwargs):
        super(LogisticLoss, self).__init__(weight, batch_axis, **kwargs)
        self._label_format = label_format
        if self._label_format not in ["signed", "binary"]:
            raise ValueError("label_format can only be signed or binary, received %s."
                             % label_format)

    def hybrid_forward(self, F, pred, label, sample_weight=None):
        label = _reshape_like(F, label, pred)
        if self._label_format == 'signed':
            label = (label + 1.0) / 2.0  # Transform label to be either 0 or 1
        # Same numerically stable softrelu form as sigmoid BCE.
        loss = F.relu(pred) - pred * label + \
            F.Activation(-F.abs(pred), act_type='softrelu')
        loss = _apply_weighting(F, loss, self._weight, sample_weight)
        return F.mean(loss, axis=self._batch_axis, exclude=True)


class TripletLoss(Loss):
    r"""Calculates triplet loss given three input tensors and a positive
    margin. Triplet loss measures the relative similarity between prediction,
    a positive example and a negative example:

    .. math::

        L = \sum_i \max(\Vert {pred}_i - {pos_i} \Vert_2^2 -
                        \Vert {pred}_i - {neg_i} \Vert_2^2 + {margin}, 0)

    `pred`, `positive` and `negative` can have arbitrary shape as long as they
    have the same number of elements.

    Parameters
    ----------
    margin : float
        Margin of separation between correct and incorrect pair.
    weight : float or None
        Global scalar weight for loss.
    batch_axis : int, default 0
        The axis that represents mini-batch.

    Inputs:
        - **pred**: prediction tensor with arbitrary shape
        - **positive**: positive example tensor with arbitrary shape. Must
          have the same size as pred.
        - **negative**: negative example tensor with arbitrary shape. Must
          have the same size as pred.

    Outputs:
        - **loss**: loss tensor with shape (batch_size,).
    """
    def __init__(self, margin=1, weight=None, batch_axis=0, **kwargs):
        super(TripletLoss, self).__init__(weight, batch_axis, **kwargs)
        self._margin = margin

    def hybrid_forward(self, F, pred, positive, negative):
        positive = _reshape_like(F, positive, pred)
        negative = _reshape_like(F, negative, pred)
        loss = F.sum(F.square(pred - positive) - F.square(pred - negative),
                     axis=self._batch_axis, exclude=True)
        loss = F.relu(loss + self._margin)
        return _apply_weighting(F, loss, self._weight, None)
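
# A minimal usage sketch (illustrative, not part of the upstream module):
# triplet loss on toy embeddings. The `_demo_triplet` helper is hypothetical
# and assumes the MXNet 1.2.x Gluon API.
def _demo_triplet():
    from mxnet import nd
    loss_fn = TripletLoss(margin=1)
    anchor = nd.array([[0.0, 0.0], [1.0, 1.0]])
    positive = nd.array([[0.1, 0.0], [0.9, 1.0]])   # close to the anchor
    negative = nd.array([[1.0, 1.0], [0.0, 0.0]])   # far from the anchor
    # Per sample: max(||a - p||^2 - ||a - n||^2 + margin, 0); here the
    # negatives are far enough away that both losses clamp to 0.
    return loss_fn(anchor, positive, negative)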