# coding: utf-8
"""Online evaluation metric module."""
from __future__ import absolute_import

import math
from collections import OrderedDict

import numpy

from .base import numeric_types, string_types
from . import ndarray
from . import registry


def check_label_shapes(labels, preds, wrap=False, shape=False):
    """Helper function for checking shape of label and prediction

    Parameters
    ----------
    labels : list of `NDArray`
        The labels of the data.
    preds : list of `NDArray`
        Predicted values.
    wrap : boolean
        If True, wrap labels/preds in a list if they are single NDArray
    shape : boolean
        If True, check the shape of labels and preds;
        Otherwise only check their length.
    """
    if not shape:
        label_shape, pred_shape = len(labels), len(preds)
    else:
        label_shape, pred_shape = labels.shape, preds.shape

    if label_shape != pred_shape:
        raise ValueError("Shape of labels {} does not match shape of "
                         "predictions {}".format(label_shape, pred_shape))

    if wrap:
        if isinstance(labels, ndarray.ndarray.NDArray):
            labels = [labels]
        if isinstance(preds, ndarray.ndarray.NDArray):
            preds = [preds]

    return labels, preds


class EvalMetric(object):
    """Base class for all evaluation metrics.

    .. note::

        This is a base class that provides common metric interfaces.
        One should not use this class directly, but instead create new
        metric classes that extend it.

    Parameters
    ----------
    name : str
        Name of this metric instance for display.
    output_names : list of str, or None
        Name of predictions that should be used when updating with update_dict.
        By default include all predictions.
    label_names : list of str, or None
        Name of labels that should be used when updating with update_dict.
        By default include all labels.
    """
    def __init__(self, name, output_names=None,
                 label_names=None, **kwargs):
        self.name = str(name)
        self.output_names = output_names
        self.label_names = label_names
        self._has_global_stats = kwargs.pop("has_global_stats", False)
        self._kwargs = kwargs
        self.reset()

    def __str__(self):
        return "EvalMetric: {}".format(dict(self.get_name_value()))

    def get_config(self):
        """Save configurations of metric. Can be recreated
        from configs with metric.create(``**config``)
        """
        config = self._kwargs.copy()
        config.update({
            'metric': self.__class__.__name__,
            'name': self.name,
            'output_names': self.output_names,
            'label_names': self.label_names})
        return config

    def update_dict(self, label, pred):
        """Update the internal evaluation with named label and pred

        Parameters
        ----------
        labels : OrderedDict of str -> NDArray
            name to array mapping for labels.
        preds : OrderedDict of str -> NDArray
            name to array mapping of predicted outputs.
        """
        if self.output_names is not None:
            pred = [pred[name] for name in self.output_names]
        else:
            pred = list(pred.values())

        if self.label_names is not None:
            label = [label[name] for name in self.label_names]
        else:
            label = list(label.values())

        self.update(label, pred)

    def update(self, labels, preds):
        """Updates the internal evaluation result.

        Parameters
        ----------
        labels : list of `NDArray`
            The labels of the data.
        preds : list of `NDArray`
            Predicted values.
        """
        raise NotImplementedError()

    def reset(self):
        """Resets the internal evaluation result to initial state."""
        self.num_inst = 0
        self.sum_metric = 0.0
        self.global_num_inst = 0
        self.global_sum_metric = 0.0

    def reset_local(self):
        """Resets the local portion of the internal evaluation results to initial state."""
        self.num_inst = 0
        self.sum_metric = 0.0

    def get(self):
        """Gets the current evaluation result.

        Returns
        -------
        names : list of str
            Name of the metrics.
        values : list of float
            Value of the evaluations.
        """
        if self.num_inst == 0:
            return (self.name, float('nan'))
        else:
            return (self.name, self.sum_metric / self.num_inst)

    def get_global(self):
        """Gets the current global evaluation result.

        Returns
        -------
        names : list of str
            Name of the metrics.
        values : list of float
            Value of the evaluations.
        """
        if self._has_global_stats:
            if self.global_num_inst == 0:
                return (self.name, float('nan'))
            else:
                return (self.name, self.global_sum_metric / self.global_num_inst)
        else:
            return self.get()

    def get_name_value(self):
        """Returns zipped name and value pairs.

        Returns
        -------
        list of tuples
            A (name, value) tuple list.
        """
        name, value = self.get()
        if not isinstance(name, list):
            name = [name]
        if not isinstance(value, list):
            value = [value]
        return list(zip(name, value))

    def get_global_name_value(self):
        """Returns zipped name and value pairs for global results.

        Returns
        -------
        list of tuples
            A (name, value) tuple list.
        """
        if self._has_global_stats:
            name, value = self.get_global()
            if not isinstance(name, list):
                name = [name]
            if not isinstance(value, list):
                value = [value]
            return list(zip(name, value))
        else:
            return self.get_name_value()

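# Illustrative sketch (not part of the original API): a minimal EvalMetric
# subclass showing the contract that `update` accumulates into `sum_metric`
# and `num_inst` (plus the global counters when `has_global_stats` is set),
# while the inherited `get` reports the running average. The class name and
# behaviour below are hypothetical and only meant as an example.
class _ExampleErrorCount(EvalMetric):
    """Toy metric sketch: counts label/prediction mismatches per sample."""

    def __init__(self, name='example-error-count'):
        super(_ExampleErrorCount, self).__init__(name, has_global_stats=True)

    def update(self, labels, preds):
        labels, preds = check_label_shapes(labels, preds, True)
        for label, pred in zip(labels, preds):
            # compare hard label indices; a real metric would handle
            # probability outputs the way Accuracy below does
            errors = (label.asnumpy().astype('int32') !=
                      pred.asnumpy().astype('int32')).sum()
            self.sum_metric += errors
            self.global_sum_metric += errors
            self.num_inst += label.size
            self.global_num_inst += label.size
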
# pylint: disable=invalid-name
register = registry.get_register_func(EvalMetric, 'metric')
alias = registry.get_alias_func(EvalMetric, 'metric')
_create = registry.get_create_func(EvalMetric, 'metric')
# pylint: enable=invalid-name


def create(metric, *args, **kwargs):
    """Creates evaluation metric from metric names or instances of EvalMetric
    or a custom metric function.

    Parameters
    ----------
    metric : str or callable
        Specifies the metric to create.
        This argument must be one of the below:

        - Name of a metric.
        - An instance of `EvalMetric`.
        - A list, each element of which is a metric or a metric name.
        - An evaluation function that computes custom metric for a given batch of
          labels and predictions.
    *args : list
        Additional arguments to metric constructor.
        Only used when metric is str.
    **kwargs : dict
        Additional arguments to metric constructor.
        Only used when metric is str.

    Examples
    --------
    >>> def custom_metric(label, pred):
    ...     return np.mean(np.abs(label - pred))
    ...
    >>> metric1 = mx.metric.create('acc')
    >>> metric2 = mx.metric.create(custom_metric)
    >>> metric3 = mx.metric.create([metric1, metric2, 'rmse'])
    """
    if callable(metric):
        return CustomMetric(metric, *args, **kwargs)
    elif isinstance(metric, list):
        composite_metric = CompositeEvalMetric()
        for child_metric in metric:
            composite_metric.add(create(child_metric, *args, **kwargs))
        return composite_metric

    return _create(metric, *args, **kwargs)


@register
@alias('composite')
class CompositeEvalMetric(EvalMetric):
    """Manages multiple evaluation metrics.

    Parameters
    ----------
    metrics : list of EvalMetric
        List of child metrics.
    name : str
        Name of this metric instance for display.
    output_names : list of str, or None
        Name of predictions that should be used when updating with update_dict.
    label_names : list of str, or None
        Name of labels that should be used when updating with update_dict.

    Examples
    --------
    >>> predicts = [mx.nd.array([[0.3, 0.7], [0, 1.], [0.4, 0.6]])]
    >>> labels   = [mx.nd.array([0, 1, 1])]
    >>> eval_metrics_1 = mx.metric.Accuracy()
    >>> eval_metrics_2 = mx.metric.F1()
    >>> eval_metrics = mx.metric.CompositeEvalMetric()
    >>> for child_metric in [eval_metrics_1, eval_metrics_2]:
    >>>     eval_metrics.add(child_metric)
    >>> eval_metrics.update(labels = labels, preds = predicts)
    >>> print eval_metrics.get()
    (['accuracy', 'f1'], [0.6666666666666666, 0.8])
    """

    def __init__(self, metrics=None, name='composite',
                 output_names=None, label_names=None):
        super(CompositeEvalMetric, self).__init__(
            name, output_names=output_names, label_names=label_names,
            has_global_stats=True)
        if metrics is None:
            metrics = []
        self.metrics = [create(i) for i in metrics]

    def add(self, metric):
        """Adds a child metric.

        Parameters
        ----------
        metric
            A metric instance.
        """
        self.metrics.append(create(metric))
    def get_metric(self, index):
        """Returns a child metric.

        Parameters
        ----------
        index : int
            Index of child metric in the list of metrics.
        """
        try:
            return self.metrics[index]
        except IndexError:
            return ValueError("Metric index {} is out of range 0 and {}".format(
                index, len(self.metrics)))

    def update_dict(self, labels, preds):
        if self.label_names is not None:
            labels = OrderedDict([i for i in labels.items()
                                  if i[0] in self.label_names])
        if self.output_names is not None:
            preds = OrderedDict([i for i in preds.items()
                                 if i[0] in self.output_names])

        for metric in self.metrics:
            metric.update_dict(labels, preds)

    def update(self, labels, preds):
        """Updates the internal evaluation result of each child metric."""
        for metric in self.metrics:
            metric.update(labels, preds)

    def reset(self):
        """Resets the internal evaluation result to initial state."""
        try:
            for metric in self.metrics:
                metric.reset()
        except AttributeError:
            pass

    def reset_local(self):
        """Resets the local portion of the internal evaluation results to initial state."""
        try:
            for metric in self.metrics:
                metric.reset_local()
        except AttributeError:
            pass

    def get(self):
        """Returns the current evaluation result.

        Returns
        -------
        names : list of str
            Name of the metrics.
        values : list of float
            Value of the evaluations.
        """
        names = []
        values = []
        for metric in self.metrics:
            name, value = metric.get()
            if isinstance(name, string_types):
                name = [name]
            if isinstance(value, numeric_types):
                value = [value]
            names.extend(name)
            values.extend(value)
        return (names, values)

    def get_global(self):
        """Returns the current global evaluation result.

        Returns
        -------
        names : list of str
            Name of the metrics.
        values : list of float
            Value of the evaluations.
        """
        names = []
        values = []
        for metric in self.metrics:
            name, value = metric.get_global()
            if isinstance(name, string_types):
                name = [name]
            if isinstance(value, numeric_types):
                value = [value]
            names.extend(name)
            values.extend(value)
        return (names, values)

    def get_config(self):
        config = super(CompositeEvalMetric, self).get_config()
        config.update({'metrics': [i.get_config() for i in self.metrics]})
        return config


@register
@alias('acc')
class Accuracy(EvalMetric):
    r"""Computes accuracy classification score.

    The accuracy score is defined as

    .. math::
        \text{accuracy}(y, \hat{y}) = \frac{1}{n} \sum_{i=0}^{n-1}
        \text{1}(\hat{y_i} == y_i)

    Parameters
    ----------
    axis : int, default=1
        The axis that represents classes
    name : str
        Name of this metric instance for display.
    output_names : list of str, or None
        Name of predictions that should be used when updating with update_dict.
    label_names : list of str, or None
        Name of labels that should be used when updating with update_dict.

    Examples
    --------
    >>> predicts = [mx.nd.array([[0.3, 0.7], [0, 1.], [0.4, 0.6]])]
    >>> labels   = [mx.nd.array([0, 1, 1])]
    >>> acc = mx.metric.Accuracy()
    >>> acc.update(preds = predicts, labels = labels)
    >>> print acc.get()
    ('accuracy', 0.6666666666666666)
    """
    def __init__(self, axis=1, name='accuracy',
                 output_names=None, label_names=None):
        super(Accuracy, self).__init__(
            name, axis=axis,
            output_names=output_names, label_names=label_names,
            has_global_stats=True)
        self.axis = axis

    def update(self, labels, preds):
        """Updates the internal evaluation result.

        Labels hold class indices, one per sample; each prediction can either be
        the class index or a vector of likelihoods for all classes.
        """
        labels, preds = check_label_shapes(labels, preds, True)

        for label, pred_label in zip(labels, preds):
            if pred_label.shape != label.shape:
                pred_label = ndarray.argmax(pred_label, axis=self.axis)
            pred_label = pred_label.asnumpy().astype('int32')
            label = label.asnumpy().astype('int32')
            # flatten before checking shapes to avoid shape mismatch
            label = label.flat
            pred_label = pred_label.flat

            check_label_shapes(label, pred_label)

            num_correct = (pred_label == label).sum()
            self.sum_metric += num_correct
            self.global_sum_metric += num_correct
            self.num_inst += len(pred_label)
            self.global_num_inst += len(pred_label)


@register
@alias('top_k_accuracy', 'top_k_acc')
class TopKAccuracy(EvalMetric):
    """Computes top k predictions accuracy.

    `TopKAccuracy` differs from Accuracy in that it considers the prediction
    to be ``True`` as long as the ground truth label is in the top K
    predicted labels.

    If `top_k` = ``1``, then `TopKAccuracy` is identical to `Accuracy`.

    Parameters
    ----------
    top_k : int
        Whether targets are in top k predictions.
    name : str
        Name of this metric instance for display.
    output_names : list of str, or None
        Name of predictions that should be used when updating with update_dict.
    label_names : list of str, or None
        Name of labels that should be used when updating with update_dict.

    Examples
    --------
    >>> np.random.seed(999)
    >>> top_k = 3
    >>> labels = [mx.nd.array([2, 6, 9, 2, 3, 4, 7, 8, 9, 6])]
    >>> predicts = [mx.nd.array(np.random.rand(10, 10))]
    >>> acc = mx.metric.TopKAccuracy(top_k=top_k)
    >>> acc.update(labels, predicts)
    >>> print acc.get()
    ('top_k_accuracy', 0.3)
    """
    def __init__(self, top_k=1, name='top_k_accuracy',
                 output_names=None, label_names=None):
        super(TopKAccuracy, self).__init__(
            name, top_k=top_k,
            output_names=output_names, label_names=label_names,
            has_global_stats=True)
        self.top_k = top_k
        assert self.top_k > 1, 'Please use Accuracy if top_k is no more than 1'
        self.name += '_%d' % self.top_k

    def update(self, labels, preds):
        """Updates the internal evaluation result."""
        labels, preds = check_label_shapes(labels, preds, True)

        for label, pred_label in zip(labels, preds):
            assert len(pred_label.shape) <= 2, 'Predictions should be no more than 2 dims'
            # argpartition is enough here because only membership in the top k
            # matters, not the ordering of the top k entries
            pred_label = numpy.argpartition(pred_label.asnumpy().astype('float32'), -self.top_k)
            label = label.asnumpy().astype('int32')
            check_label_shapes(label, pred_label)
            num_samples = pred_label.shape[0]
            num_dims = len(pred_label.shape)
            if num_dims == 1:
                num_correct = (pred_label.flat == label.flat).sum()
                self.sum_metric += num_correct
                self.global_sum_metric += num_correct
            elif num_dims == 2:
                num_classes = pred_label.shape[1]
                top_k = min(num_classes, self.top_k)
                for j in range(top_k):
                    num_correct = (pred_label[:, num_classes - 1 - j].flat == label.flat).sum()
                    self.sum_metric += num_correct
                    self.global_sum_metric += num_correct
            self.num_inst += num_samples
            self.global_num_inst += num_samples


class _BinaryClassificationMetrics(object):
    """Private container class for classification metric statistics.

    True/false positive and true/false negative counts are sufficient statistics
    for various classification metrics. This class provides the machinery to
    track those statistics across mini-batches of (label, prediction) pairs.
    """

    def __init__(self):
        self.true_positives = 0
        self.false_negatives = 0
        self.false_positives = 0
        self.true_negatives = 0
        self.global_true_positives = 0
        self.global_false_negatives = 0
        self.global_false_positives = 0
        self.global_true_negatives = 0

    def update_binary_stats(self, label, pred):
        """Update various binary classification counts for a single (label, pred) pair.

        Parameters
        ----------
        label : `NDArray`
            The labels of the data.
        pred : `NDArray`
            Predicted values.
        """
        pred = pred.asnumpy()
        label = label.asnumpy().astype('int32')
        pred_label = numpy.argmax(pred, axis=1)

        check_label_shapes(label, pred)
        if len(numpy.unique(label)) > 2:
            raise ValueError("%s currently only supports binary classification."
                             % self.__class__.__name__)
        pred_true = (pred_label == 1)
        pred_false = 1 - pred_true
        label_true = (label == 1)
        label_false = 1 - label_true

        true_pos = (pred_true * label_true).sum()
        false_pos = (pred_true * label_false).sum()
        false_neg = (pred_false * label_true).sum()
        true_neg = (pred_false * label_false).sum()
        self.true_positives += true_pos
        self.global_true_positives += true_pos
        self.false_positives += false_pos
        self.global_false_positives += false_pos
        self.false_negatives += false_neg
        self.global_false_negatives += false_neg
        self.true_negatives += true_neg
        self.global_true_negatives += true_neg
    @property
    def precision(self):
        if self.true_positives + self.false_positives > 0:
            return float(self.true_positives) / (self.true_positives + self.false_positives)
        else:
            return 0.

    @property
    def global_precision(self):
        if self.global_true_positives + self.global_false_positives > 0:
            return float(self.global_true_positives) / \
                   (self.global_true_positives + self.global_false_positives)
        else:
            return 0.

    @property
    def recall(self):
        if self.true_positives + self.false_negatives > 0:
            return float(self.true_positives) / (self.true_positives + self.false_negatives)
        else:
            return 0.

    @property
    def global_recall(self):
        if self.global_true_positives + self.global_false_negatives > 0:
            return float(self.global_true_positives) / \
                   (self.global_true_positives + self.global_false_negatives)
        else:
            return 0.

    @property
    def fscore(self):
        if self.precision + self.recall > 0:
            return 2 * self.precision * self.recall / (self.precision + self.recall)
        else:
            return 0.

    @property
    def global_fscore(self):
        if self.global_precision + self.global_recall > 0:
            return 2 * self.global_precision * self.global_recall / \
                   (self.global_precision + self.global_recall)
        else:
            return 0.

    def matthewscc(self, use_global=False):
        """Calculate the Matthews Correlation Coefficient."""
        if use_global:
            if not self.global_total_examples:
                return 0.
            true_pos = float(self.global_true_positives)
            false_pos = float(self.global_false_positives)
            false_neg = float(self.global_false_negatives)
            true_neg = float(self.global_true_negatives)
        else:
            if not self.total_examples:
                return 0.
            true_pos = float(self.true_positives)
            false_pos = float(self.false_positives)
            false_neg = float(self.false_negatives)
            true_neg = float(self.true_negatives)

        terms = [(true_pos + false_pos),
                 (true_pos + false_neg),
                 (true_neg + false_pos),
                 (true_neg + false_neg)]
        denom = 1.
        # zero terms in the denominator are replaced by 1
        for t in filter(lambda t: t != 0., terms):
            denom *= t
        return ((true_pos * true_neg) - (false_pos * false_neg)) / math.sqrt(denom)

    @property
    def total_examples(self):
        return self.false_negatives + self.false_positives + \
               self.true_negatives + self.true_positives

    @property
    def global_total_examples(self):
        return self.global_false_negatives + self.global_false_positives + \
               self.global_true_negatives + self.global_true_positives

    def local_reset_stats(self):
        self.false_positives = 0
        self.false_negatives = 0
        self.true_positives = 0
        self.true_negatives = 0

    def reset_stats(self):
        self.false_positives = 0
        self.false_negatives = 0
        self.true_positives = 0
        self.true_negatives = 0
        self.global_false_positives = 0
        self.global_false_negatives = 0
        self.global_true_positives = 0
        self.global_true_negatives = 0
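# A small worked example (illustrative only, not part of the original module)
# of the quantities tracked above. With true_positives=4, false_positives=1,
# false_negatives=2 and true_negatives=3:
#
#     precision  = 4 / (4 + 1)                        = 0.8
#     recall     = 4 / (4 + 2)                        ~ 0.667
#     fscore     = 2 * 0.8 * 0.667 / (0.8 + 0.667)    ~ 0.727
#     matthewscc = (4*3 - 1*2) / sqrt(5 * 6 * 4 * 5)  ~ 0.408
#
# which is what the properties and `matthewscc()` return when the four
# counters hold those values.
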
@register
class F1(EvalMetric):
    """Computes the F1 score of a binary classification problem.

    The F1 score is equivalent to the harmonic mean of the precision and recall,
    where the best value is 1.0 and the worst value is 0.0. The formula for F1 score is::

        F1 = 2 * (precision * recall) / (precision + recall)

    The formula for precision and recall is::

        precision = true_positives / (true_positives + false_positives)
        recall    = true_positives / (true_positives + false_negatives)

    .. note::

        This F1 score only supports binary classification.

    Parameters
    ----------
    name : str
        Name of this metric instance for display.
    output_names : list of str, or None
        Name of predictions that should be used when updating with update_dict.
    label_names : list of str, or None
        Name of labels that should be used when updating with update_dict.
    average : str, default 'macro'
        Strategy to be used for aggregating across mini-batches.
            "macro": average the F1 scores for each batch.
            "micro": compute a single F1 score across all batches.

    Examples
    --------
    >>> predicts = [mx.nd.array([[0.3, 0.7], [0., 1.], [0.4, 0.6]])]
    >>> labels   = [mx.nd.array([0., 1., 1.])]
    >>> f1 = mx.metric.F1()
    >>> f1.update(preds = predicts, labels = labels)
    >>> print f1.get()
    ('f1', 0.8)
    """

    def __init__(self, name='f1',
                 output_names=None, label_names=None, average="macro"):
        self.average = average
        self.metrics = _BinaryClassificationMetrics()
        EvalMetric.__init__(self, name=name,
                            output_names=output_names, label_names=label_names,
                            has_global_stats=True)

    def update(self, labels, preds):
        """Updates the internal evaluation result."""
        labels, preds = check_label_shapes(labels, preds, True)

        for label, pred in zip(labels, preds):
            self.metrics.update_binary_stats(label, pred)

        if self.average == "macro":
            self.sum_metric += self.metrics.fscore
            self.global_sum_metric += self.metrics.global_fscore
            self.num_inst += 1
            self.global_num_inst += 1
            self.metrics.reset_stats()
        else:
            self.sum_metric = self.metrics.fscore * self.metrics.total_examples
            self.global_sum_metric = self.metrics.global_fscore * self.metrics.global_total_examples
            self.num_inst = self.metrics.total_examples
            self.global_num_inst = self.metrics.global_total_examples

    def reset(self):
        """Resets the internal evaluation result to initial state."""
        self.sum_metric = 0.
        self.num_inst = 0.
        self.global_sum_metric = 0.
        self.global_num_inst = 0.
        self.metrics.reset_stats()

    def reset_local(self):
        """Resets the local portion of the internal evaluation results to initial state."""
        self.sum_metric = 0.
        self.num_inst = 0.
        self.metrics.local_reset_stats()


@register
class MCC(EvalMetric):
    r"""Computes the Matthews Correlation Coefficient of a binary classification problem.

    While slower to compute than F1 the MCC can give insight that F1 or Accuracy
    cannot. For instance, if the network always predicts the same result
    then the MCC will immediately show this. The MCC is also symmetric with respect
    to positive and negative categorization, however, there needs to be both
    positive and negative examples in the labels or it will always return 0.
    MCC of 0 is uncorrelated, 1 is completely correlated, and -1 is negatively correlated.

    .. math::
        \text{MCC} = \frac{ TP \times TN - FP \times FN }
        {\sqrt{ (TP + FP) ( TP + FN ) ( TN + FP ) ( TN + FN ) } }

    where 0 terms in the denominator are replaced by 1.

    .. note::

        This version of MCC only supports binary classification. See PCC.

    Parameters
    ----------
    name : str
        Name of this metric instance for display.
    output_names : list of str, or None
        Name of predictions that should be used when updating with update_dict.
    label_names : list of str, or None
        Name of labels that should be used when updating with update_dict.
    average : str, default 'macro'
        Strategy to be used for aggregating across mini-batches.
            "macro": average the MCC for each batch.
            "micro": compute a single MCC across all batches.

    Examples
    --------
    >>> # In this example the network almost always predicts positive
    >>> false_positives = 1000
    >>> false_negatives = 1
    >>> true_positives = 10000
    >>> true_negatives = 1
    >>> predicts = [mx.nd.array(
    ...     [[.3, .7]]*false_positives +
    ...     [[.7, .3]]*true_negatives +
    ...     [[.7, .3]]*false_negatives +
    ...     [[.3, .7]]*true_positives)]
    >>> labels = [mx.nd.array(
    ...     [0.]*(false_positives + true_negatives) +
    ...     [1.]*(false_negatives + true_positives))]
    >>> f1 = mx.metric.F1()
    >>> f1.update(preds = predicts, labels = labels)
    >>> mcc = mx.metric.MCC()
    >>> mcc.update(preds = predicts, labels = labels)
    >>> print f1.get()
    ('f1', 0.95233560306652054)
    >>> print mcc.get()
    ('mcc', 0.01917751877733392)
    """

    def __init__(self, name='mcc',
                 output_names=None, label_names=None, average="macro"):
        self._average = average
        self._metrics = _BinaryClassificationMetrics()
        EvalMetric.__init__(self, name=name,
                            output_names=output_names, label_names=label_names,
                            has_global_stats=True)

    def update(self, labels, preds):
        """Updates the internal evaluation result."""
        labels, preds = check_label_shapes(labels, preds, True)

        for label, pred in zip(labels, preds):
            self._metrics.update_binary_stats(label, pred)

        if self._average == "macro":
            self.sum_metric += self._metrics.matthewscc()
            self.global_sum_metric += self._metrics.matthewscc(use_global=True)
            self.num_inst += 1
            self.global_num_inst += 1
            self._metrics.reset_stats()
        else:
            self.sum_metric = self._metrics.matthewscc() * self._metrics.total_examples
            self.global_sum_metric = self._metrics.matthewscc(use_global=True) * \
                                     self._metrics.global_total_examples
            self.num_inst = self._metrics.total_examples
            self.global_num_inst = self._metrics.global_total_examples

    def reset(self):
        """Resets the internal evaluation result to initial state."""
        self.sum_metric = 0.
        self.num_inst = 0.
        self.global_sum_metric = 0.
        self.global_num_inst = 0.
        self._metrics.reset_stats()

    def reset_local(self):
        """Resets the local portion of the internal evaluation results to initial state."""
        self.sum_metric = 0.
        self.num_inst = 0.
        self._metrics.local_reset_stats()


@register
class Perplexity(EvalMetric):
    r"""Computes perplexity.

    Perplexity is a measurement of how well a probability distribution
    or model predicts a sample.
    A low perplexity indicates the model is good at predicting the sample.

    The perplexity of a model q is defined as

    .. math::
        b^{\big(-\frac{1}{N} \sum_{i=1}^N \log_b q(x_i) \big)}
        = \exp \big(-\frac{1}{N} \sum_{i=1}^N \log q(x_i)\big)

    where we let `b = e`. :math:`q(x_i)` is the predicted value of its ground truth
    label on sample :math:`x_i`.

    For example, we have three samples :math:`x_1, x_2, x_3` and their labels are
    :math:`[0, 1, 1]`.
    Suppose our model predicts :math:`q(x_1) = p(y_1 = 0 | x_1) = 0.3`
    and :math:`q(x_2) = 1.0`, :math:`q(x_3) = 0.6`. The perplexity of model q is
    :math:`exp\big(-(\log 0.3 + \log 1.0 + \log 0.6) / 3\big) = 1.77109762852`.

    Parameters
    ----------
    ignore_label : int or None
        Index of invalid label to ignore when counting. By default, sets to -1.
        If set to `None`, it will include all entries.
    axis : int (default -1)
        The axis from prediction that was used to compute softmax. By default
        use the last axis.
    name : str
        Name of this metric instance for display.
    output_names : list of str, or None
        Name of predictions that should be used when updating with update_dict.
    label_names : list of str, or None
        Name of labels that should be used when updating with update_dict.

    Examples
    --------
    >>> predicts = [mx.nd.array([[0.3, 0.7], [0, 1.], [0.4, 0.6]])]
    >>> labels   = [mx.nd.array([0, 1, 1])]
    >>> perp = mx.metric.Perplexity(ignore_label=None)
    >>> perp.update(labels, predicts)
    >>> print perp.get()
    ('Perplexity', 1.7710976285155853)
    """
    def __init__(self, ignore_label, axis=-1, name='perplexity',
                 output_names=None, label_names=None):
        super(Perplexity, self).__init__(
            name, ignore_label=ignore_label,
            output_names=output_names, label_names=label_names,
            has_global_stats=True)
        self.ignore_label = ignore_label
        self.axis = axis

    def update(self, labels, preds):
        """Updates the internal evaluation result."""
        assert len(labels) == len(preds)
        loss = 0.
        num = 0
        for label, pred in zip(labels, preds):
            assert label.size == pred.size/pred.shape[-1], \
                "shape mismatch: %s vs. %s"%(label.shape, pred.shape)
            label = label.as_in_context(pred.context).reshape((label.size,))
            pred = ndarray.pick(pred, label.astype(dtype='int32'), axis=self.axis)
            if self.ignore_label is not None:
                ignore = (label == self.ignore_label).astype(pred.dtype)
                num -= ndarray.sum(ignore).asscalar()
                pred = pred*(1-ignore) + ignore
            loss -= ndarray.sum(ndarray.log(ndarray.maximum(1e-10, pred))).asscalar()
            num += pred.size
        self.sum_metric += loss
        self.global_sum_metric += loss
        self.num_inst += num
        self.global_num_inst += num

    def get(self):
        """Returns the current evaluation result as a (name, perplexity) tuple."""
        if self.num_inst == 0:
            return (self.name, float('nan'))
        else:
            return (self.name, math.exp(self.sum_metric/self.num_inst))

    def get_global(self):
        """Returns the current global evaluation result as a (name, perplexity) tuple."""
        if self.global_num_inst == 0:
            return (self.name, float('nan'))
        else:
            return (self.name, math.exp(self.global_sum_metric/self.global_num_inst))
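# A minimal numpy check (illustrative only, not part of the original module) of
# the perplexity formula used by ``Perplexity``: accumulate the negative log of
# the probability assigned to each true label, then exponentiate the mean.
#
#     import numpy as np
#     q = np.array([0.3, 1.0, 0.6])           # q(x_i) for the true labels
#     perplexity = np.exp(-np.log(q).mean())  # ~ 1.7711, matching the docstring example
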
@register
class MAE(EvalMetric):
    r"""Computes Mean Absolute Error (MAE) loss.

    The mean absolute error is given by

    .. math::
        \frac{\sum_i^n |y_i - \hat{y}_i|}{n}

    Parameters
    ----------
    name : str
        Name of this metric instance for display.
    output_names : list of str, or None
        Name of predictions that should be used when updating with update_dict.
    label_names : list of str, or None
        Name of labels that should be used when updating with update_dict.

    Examples
    --------
    >>> predicts = [mx.nd.array(np.array([3, -0.5, 2, 7]).reshape(4,1))]
    >>> labels   = [mx.nd.array(np.array([2.5, 0.0, 2, 8]).reshape(4,1))]
    >>> mean_absolute_error = mx.metric.MAE()
    >>> mean_absolute_error.update(labels = labels, preds = predicts)
    >>> print mean_absolute_error.get()
    ('mae', 0.5)
    """

    def __init__(self, name='mae',
                 output_names=None, label_names=None):
        super(MAE, self).__init__(
            name, output_names=output_names, label_names=label_names,
            has_global_stats=True)

    def update(self, labels, preds):
        """Updates the internal evaluation result."""
        labels, preds = check_label_shapes(labels, preds, True)

        for label, pred in zip(labels, preds):
            label = label.asnumpy()
            pred = pred.asnumpy()

            if len(label.shape) == 1:
                label = label.reshape(label.shape[0], 1)
            if len(pred.shape) == 1:
                pred = pred.reshape(pred.shape[0], 1)

            mae = numpy.abs(label - pred).mean()
            self.sum_metric += mae
            self.global_sum_metric += mae
            self.num_inst += 1
            self.global_num_inst += 1


@register
class MSE(EvalMetric):
    r"""Computes Mean Squared Error (MSE) loss.

    The mean squared error is given by

    .. math::
        \frac{\sum_i^n (y_i - \hat{y}_i)^2}{n}

    Parameters
    ----------
    name : str
        Name of this metric instance for display.
    output_names : list of str, or None
        Name of predictions that should be used when updating with update_dict.
    label_names : list of str, or None
        Name of labels that should be used when updating with update_dict.

    Examples
    --------
    >>> predicts = [mx.nd.array(np.array([3, -0.5, 2, 7]).reshape(4,1))]
    >>> labels   = [mx.nd.array(np.array([2.5, 0.0, 2, 8]).reshape(4,1))]
    >>> mean_squared_error = mx.metric.MSE()
    >>> mean_squared_error.update(labels = labels, preds = predicts)
    >>> print mean_squared_error.get()
    ('mse', 0.375)
    """
    def __init__(self, name='mse',
                 output_names=None, label_names=None):
        super(MSE, self).__init__(
            name, output_names=output_names, label_names=label_names,
            has_global_stats=True)

    def update(self, labels, preds):
        """Updates the internal evaluation result."""
        labels, preds = check_label_shapes(labels, preds, True)

        for label, pred in zip(labels, preds):
            label = label.asnumpy()
            pred = pred.asnumpy()

            if len(label.shape) == 1:
                label = label.reshape(label.shape[0], 1)
            if len(pred.shape) == 1:
                pred = pred.reshape(pred.shape[0], 1)

            mse = ((label - pred)**2.0).mean()
            self.sum_metric += mse
            self.global_sum_metric += mse
            self.num_inst += 1
            self.global_num_inst += 1


@register
class RMSE(EvalMetric):
    r"""Computes Root Mean Squared Error (RMSE) loss.

    The root mean squared error is given by

    .. math::
        \sqrt{\frac{\sum_i^n (y_i - \hat{y}_i)^2}{n}}

    Parameters
    ----------
    name : str
        Name of this metric instance for display.
    output_names : list of str, or None
        Name of predictions that should be used when updating with update_dict.
    label_names : list of str, or None
        Name of labels that should be used when updating with update_dict.

    Examples
    --------
    >>> predicts = [mx.nd.array(np.array([3, -0.5, 2, 7]).reshape(4,1))]
    >>> labels   = [mx.nd.array(np.array([2.5, 0.0, 2, 8]).reshape(4,1))]
    >>> root_mean_squared_error = mx.metric.RMSE()
    >>> root_mean_squared_error.update(labels = labels, preds = predicts)
    >>> print root_mean_squared_error.get()
    ('rmse', 0.612372457981)
    """
    def __init__(self, name='rmse',
                 output_names=None, label_names=None):
        super(RMSE, self).__init__(
            name, output_names=output_names, label_names=label_names,
            has_global_stats=True)
    def update(self, labels, preds):
        """Updates the internal evaluation result."""
        labels, preds = check_label_shapes(labels, preds, True)

        for label, pred in zip(labels, preds):
            label = label.asnumpy()
            pred = pred.asnumpy()

            if len(label.shape) == 1:
                label = label.reshape(label.shape[0], 1)
            if len(pred.shape) == 1:
                pred = pred.reshape(pred.shape[0], 1)

            rmse = numpy.sqrt(((label - pred)**2.0).mean())
            self.sum_metric += rmse
            self.global_sum_metric += rmse
            self.num_inst += 1
            self.global_num_inst += 1


@register
@alias('ce')
class CrossEntropy(EvalMetric):
    r"""Computes Cross Entropy loss.

    The cross entropy over a batch of sample size :math:`N` is given by

    .. math::
        -\sum_{n=1}^{N}\sum_{k=1}^{K}t_{nk}\log (y_{nk}),

    where :math:`t_{nk}=1` if and only if sample :math:`n` belongs to class :math:`k`.
    :math:`y_{nk}` denotes the probability of sample :math:`n` belonging to
    class :math:`k`.

    Parameters
    ----------
    eps : float
        Cross entropy loss is undefined when a predicted value is exactly 0 or 1,
        so the small constant `eps` is added to the predicted values.
    name : str
        Name of this metric instance for display.
    output_names : list of str, or None
        Name of predictions that should be used when updating with update_dict.
    label_names : list of str, or None
        Name of labels that should be used when updating with update_dict.

    Examples
    --------
    >>> predicts = [mx.nd.array([[0.3, 0.7], [0, 1.], [0.4, 0.6]])]
    >>> labels   = [mx.nd.array([0, 1, 1])]
    >>> ce = mx.metric.CrossEntropy()
    >>> ce.update(labels, predicts)
    >>> print ce.get()
    ('cross-entropy', 0.57159948348999023)
    """
    def __init__(self, eps=1e-12, name='cross-entropy',
                 output_names=None, label_names=None):
        super(CrossEntropy, self).__init__(
            name, eps=eps,
            output_names=output_names, label_names=label_names,
            has_global_stats=True)
        self.eps = eps

    def update(self, labels, preds):
        """Updates the internal evaluation result."""
        labels, preds = check_label_shapes(labels, preds, True)

        for label, pred in zip(labels, preds):
            label = label.asnumpy()
            pred = pred.asnumpy()

            label = label.ravel()
            assert label.shape[0] == pred.shape[0]

            prob = pred[numpy.arange(label.shape[0]), numpy.int64(label)]
            cross_entropy = (-numpy.log(prob + self.eps)).sum()
            self.sum_metric += cross_entropy
            self.global_sum_metric += cross_entropy
            self.num_inst += label.shape[0]
            self.global_num_inst += label.shape[0]
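# A minimal numpy sketch (illustrative only, not part of the original module)
# of the indexing trick used in ``CrossEntropy.update``: pick each sample's
# predicted probability for its true class with integer indexing, then average
# the negative logs.
#
#     import numpy as np
#     pred = np.array([[0.3, 0.7], [0.0, 1.0], [0.4, 0.6]])
#     label = np.array([0, 1, 1])
#     prob = pred[np.arange(label.shape[0]), np.int64(label)]  # [0.3, 1.0, 0.6]
#     ce = (-np.log(prob + 1e-12)).sum() / label.shape[0]      # ~ 0.5716
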
@register
@alias('nll_loss')
class NegativeLogLikelihood(EvalMetric):
    r"""Computes the negative log-likelihood loss.

    The negative log-likelihood loss over a batch of sample size :math:`N` is given by

    .. math::
        -\sum_{n=1}^{N}\sum_{k=1}^{K}t_{nk}\log (y_{nk}),

    where :math:`K` is the number of classes, :math:`y_{nk}` is the predicted
    probability for :math:`k`-th class for :math:`n`-th sample.
    :math:`t_{nk}=1` if and only if sample :math:`n` belongs to class :math:`k`.

    Parameters
    ----------
    eps : float
        Negative log-likelihood loss is undefined when a predicted value is 0,
        so the small constant `eps` is added to the predicted values.
    name : str
        Name of this metric instance for display.
    output_names : list of str, or None
        Name of predictions that should be used when updating with update_dict.
    label_names : list of str, or None
        Name of labels that should be used when updating with update_dict.

    Examples
    --------
    >>> predicts = [mx.nd.array([[0.3, 0.7], [0, 1.], [0.4, 0.6]])]
    >>> labels   = [mx.nd.array([0, 1, 1])]
    >>> nll_loss = mx.metric.NegativeLogLikelihood()
    >>> nll_loss.update(labels, predicts)
    >>> print nll_loss.get()
    ('nll-loss', 0.57159948348999023)
    """
    def __init__(self, eps=1e-12, name='nll-loss',
                 output_names=None, label_names=None):
        super(NegativeLogLikelihood, self).__init__(
            name, eps=eps,
            output_names=output_names, label_names=label_names,
            has_global_stats=True)
        self.eps = eps

    def update(self, labels, preds):
        """Updates the internal evaluation result."""
        labels, preds = check_label_shapes(labels, preds, True)

        for label, pred in zip(labels, preds):
            label = label.asnumpy()
            pred = pred.asnumpy()

            label = label.ravel()
            num_examples = pred.shape[0]
            assert label.shape[0] == num_examples, (label.shape[0], num_examples)
            prob = pred[numpy.arange(num_examples, dtype=numpy.int64), numpy.int64(label)]
            nll = (-numpy.log(prob + self.eps)).sum()
            self.sum_metric += nll
            self.global_sum_metric += nll
            self.num_inst += num_examples
            self.global_num_inst += num_examples


@register
@alias('pearsonr')
class PearsonCorrelation(EvalMetric):
    r"""Computes Pearson correlation.

    The pearson correlation is given by

    .. math::
        \frac{cov(y, \hat{y})}{\sigma{y}\sigma{\hat{y}}}

    Parameters
    ----------
    name : str
        Name of this metric instance for display.
    output_names : list of str, or None
        Name of predictions that should be used when updating with update_dict.
    label_names : list of str, or None
        Name of labels that should be used when updating with update_dict.

    Examples
    --------
    >>> predicts = [mx.nd.array([[0.3, 0.7], [0, 1.], [0.4, 0.6]])]
    >>> labels   = [mx.nd.array([[1, 0], [0, 1], [0, 1]])]
    >>> pr = mx.metric.PearsonCorrelation()
    >>> pr.update(labels, predicts)
    >>> print pr.get()
    ('pearson-correlation', 0.42163704544016178)
    """
    def __init__(self, name='pearsonr',
                 output_names=None, label_names=None):
        super(PearsonCorrelation, self).__init__(
            name, output_names=output_names, label_names=label_names,
            has_global_stats=True)

    def update(self, labels, preds):
        """Updates the internal evaluation result."""
        labels, preds = check_label_shapes(labels, preds, True)
        for label, pred in zip(labels, preds):
            check_label_shapes(label, pred, False, True)
            label = label.asnumpy()
            pred = pred.asnumpy()
            pearson_corr = numpy.corrcoef(pred.ravel(), label.ravel())[0, 1]
            self.sum_metric += pearson_corr
            self.global_sum_metric += pearson_corr
            self.num_inst += 1
            self.global_num_inst += 1


@register
class PCC(EvalMetric):
    r"""PCC is a multiclass equivalent for the Matthews correlation coefficient
    derived from a discrete solution to the Pearson correlation coefficient.

    .. math::
        \text{PCC} = \frac {\sum _{k}\sum _{l}\sum _{m}C_{kk}C_{lm}-C_{kl}C_{mk}}
        {{\sqrt {\sum _{k}(\sum _{l}C_{kl})(\sum _{k'|k'\neq k}\sum _{l'}C_{k'l'})}}
         {\sqrt {\sum _{k}(\sum _{l}C_{lk})(\sum _{k'|k'\neq k}\sum _{l'}C_{l'k'})}}}

    defined in terms of a K x K confusion matrix C.

    When there are more than two labels the PCC will no longer range between -1 and +1.
    Instead the minimum value will be between -1 and 0 depending on the true
    distribution. The maximum value is always +1.

    Parameters
    ----------
    name : str
        Name of this metric instance for display.
    output_names : list of str, or None
        Name of predictions that should be used when updating with update_dict.
    label_names : list of str, or None
        Name of labels that should be used when updating with update_dict.

    Examples
    --------
    >>> # In this example the network almost always predicts positive
    >>> false_positives = 1000
    >>> false_negatives = 1
    >>> true_positives = 10000
    >>> true_negatives = 1
    >>> predicts = [mx.nd.array(
    ...     [[.3, .7]]*false_positives +
    ...     [[.7, .3]]*true_negatives +
    ...     [[.7, .3]]*false_negatives +
    ...     [[.3, .7]]*true_positives)]
    >>> labels = [mx.nd.array(
    ...     [0]*(false_positives + true_negatives) +
    ...     [1]*(false_negatives + true_positives))]
    >>> f1 = mx.metric.F1()
    >>> f1.update(preds = predicts, labels = labels)
    >>> pcc = mx.metric.PCC()
    >>> pcc.update(preds = predicts, labels = labels)
    >>> print f1.get()
    ('f1', 0.95233560306652054)
    >>> print pcc.get()
    ('pcc', 0.01917751877733392)
    """
    def __init__(self, name='pcc',
                 output_names=None, label_names=None,
                 has_global_stats=True):
        self.k = 2
        super(PCC, self).__init__(
            name=name, output_names=output_names, label_names=label_names,
            has_global_stats=has_global_stats)

    def _grow(self, inc):
        # grow the local and global confusion matrices by `inc` rows/columns of zeros
        self.lcm = numpy.pad(self.lcm, ((0, inc), (0, inc)), 'constant', constant_values=(0))
        self.gcm = numpy.pad(self.gcm, ((0, inc), (0, inc)), 'constant', constant_values=(0))
        self.k += inc

    def _calc_mcc(self, cmat):
        # discrete Pearson correlation computed from a confusion matrix
        n = cmat.sum()
        x = cmat.sum(axis=1)
        y = cmat.sum(axis=0)
        cov_xx = numpy.sum(x * (n - x))
        cov_yy = numpy.sum(y * (n - y))
        if cov_xx == 0 or cov_yy == 0:
            return float('nan')
        i = cmat.diagonal()
        cov_xy = numpy.sum(i * n - x * y)
        return cov_xy / (cov_xx * cov_yy) ** 0.5

    def update(self, labels, preds):
        """Updates the internal evaluation result."""
        labels, preds = check_label_shapes(labels, preds, True)

        # update the confusion matrices
        for label, pred in zip(labels, preds):
            label = label.astype('int32', copy=False).asnumpy()
            pred = pred.asnumpy().argmax(axis=1)
            n = max(pred.max(), label.max())
            if n >= self.k:
                self._grow(n + 1 - self.k)
            bcm = numpy.zeros((self.k, self.k))
            for i, j in zip(pred, label):
                bcm[i, j] += 1
            self.lcm += bcm
            self.gcm += bcm

        self.num_inst += 1
        self.global_num_inst += 1

    @property
    def sum_metric(self):
        return self._calc_mcc(self.lcm) * self.num_inst

    @property
    def global_sum_metric(self):
        return self._calc_mcc(self.gcm) * self.global_num_inst

    def reset(self):
        """Resets the internal evaluation result to initial state."""
        self.global_num_inst = 0.
        self.gcm = numpy.zeros((self.k, self.k))
        self.reset_local()

    def reset_local(self):
        """Resets the local portion of the internal evaluation results to initial state."""
        self.num_inst = 0.
        self.lcm = numpy.zeros((self.k, self.k))


@register
class Loss(EvalMetric):
    """Dummy metric for directly printing loss.

    Parameters
    ----------
    name : str
        Name of this metric instance for display.
    output_names : list of str, or None
        Name of predictions that should be used when updating with update_dict.
    label_names : list of str, or None
        Name of labels that should be used when updating with update_dict.
    """
    def __init__(self, name='loss',
                 output_names=None, label_names=None):
        super(Loss, self).__init__(
            name, output_names=output_names, label_names=label_names,
            has_global_stats=True)

    def update(self, _, preds):
        if isinstance(preds, ndarray.ndarray.NDArray):
            preds = [preds]

        for pred in preds:
            loss = ndarray.sum(pred).asscalar()
            self.sum_metric += loss
            self.global_sum_metric += loss
            self.num_inst += pred.size
            self.global_num_inst += pred.size


@register
class Torch(Loss):
    """Dummy metric for torch criterions."""
    def __init__(self, name='torch',
                 output_names=None, label_names=None):
        super(Torch, self).__init__(
            name, output_names=output_names, label_names=label_names)


@register
class Caffe(Loss):
    """Dummy metric for caffe criterions."""
    def __init__(self, name='caffe',
                 output_names=None, label_names=None):
        super(Caffe, self).__init__(
            name, output_names=output_names, label_names=label_names)


@register
class CustomMetric(EvalMetric):
    """Computes a customized evaluation metric.

    The `feval` function can return a `tuple` of (sum_metric, num_inst) or return
    an `int` sum_metric.

    Parameters
    ----------
    feval : callable(label, pred)
        Customized evaluation function.
    name : str, optional
        The name of the metric. (the default is None).
    allow_extra_outputs : bool, optional
        If true, the prediction outputs can have extra outputs.
        This is useful in RNN, where the states are also produced
        in outputs for forwarding. (the default is False).
    output_names : list of str, or None
        Name of predictions that should be used when updating with update_dict.
    label_names : list of str, or None
        Name of labels that should be used when updating with update_dict.

    Examples
    --------
    >>> predicts = [mx.nd.array(np.array([3, -0.5, 2, 7]).reshape(4,1))]
    >>> labels   = [mx.nd.array(np.array([2.5, 0.0, 2, 8]).reshape(4,1))]
    >>> feval = lambda x, y : (x + y).mean()
    >>> eval_metrics = mx.metric.CustomMetric(feval=feval)
    >>> eval_metrics.update(labels, predicts)
    >>> print eval_metrics.get()
    ('custom(<lambda>)', 6.0)
    """
    def __init__(self, feval, name=None, allow_extra_outputs=False,
                 output_names=None, label_names=None):
        if name is None:
            name = feval.__name__
            if name.find('<') != -1:
                name = 'custom(%s)' % name
        super(CustomMetric, self).__init__(
            name, feval=feval,
            allow_extra_outputs=allow_extra_outputs,
            output_names=output_names, label_names=label_names,
            has_global_stats=True)
        self._feval = feval
        self._allow_extra_outputs = allow_extra_outputs

    def update(self, labels, preds):
        """Updates the internal evaluation result."""
        if not self._allow_extra_outputs:
            labels, preds = check_label_shapes(labels, preds, True)

        for pred, label in zip(preds, labels):
            label = label.asnumpy()
            pred = pred.asnumpy()

            reval = self._feval(label, pred)
            if isinstance(reval, tuple):
                (sum_metric, num_inst) = reval
                self.sum_metric += sum_metric
                self.global_sum_metric += sum_metric
                self.num_inst += num_inst
                self.global_num_inst += num_inst
            else:
                self.sum_metric += reval
                self.global_sum_metric += reval
                self.num_inst += 1
                self.global_num_inst += 1

    def get_config(self):
        raise NotImplementedError("CustomMetric cannot be serialized")


# pylint: disable=invalid-name
def np(numpy_feval, name=None, allow_extra_outputs=False):
    """Creates a custom evaluation metric that receives its inputs as numpy arrays.

    Parameters
    ----------
    numpy_feval : callable(label, pred)
        Custom evaluation function that receives labels and predictions for a
        minibatch as numpy arrays and returns the corresponding custom metric
        as a floating point number.
    name : str, optional
        Name of the custom metric.
    allow_extra_outputs : bool, optional
        Whether prediction output is allowed to have extra outputs. This is useful
        in cases like RNN where states are also part of output which can then be
        fed back to the RNN in the next step. By default, extra outputs are not allowed.

    Returns
    -------
    float
        Custom metric corresponding to the provided labels and predictions.

    Example
    -------
    >>> def custom_metric(label, pred):
    ...     return np.mean(np.abs(label - pred))
    ...
    >>> metric = mx.metric.np(custom_metric)
    """
    def feval(label, pred):
        """Internal eval function."""
        return numpy_feval(label, pred)

    feval.__name__ = numpy_feval.__name__
    return CustomMetric(feval, name, allow_extra_outputs)
# pylint: enable=invalid-name
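# A short usage sketch of this module (illustrative only; assumes an mxnet
# install exposing these classes as ``mx.metric``):
#
#     import mxnet as mx
#     metric = mx.metric.create(['acc', 'ce'])   # composite metric from names
#     labels = [mx.nd.array([0, 1, 1])]
#     preds = [mx.nd.array([[0.3, 0.7], [0., 1.], [0.4, 0.6]])]
#     metric.update(labels=labels, preds=preds)
#     print(metric.get())  # (['accuracy', 'cross-entropy'], [0.666..., 0.571...])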