import numpy as np
from tensorflow import keras
from tensorflow.keras.callbacks import Callback
import torch

from .torch_callbacks import torch_callback_dict


def get_callbacks(framework, config):
    """Load callbacks based on a config file for a specific framework.

    Usage
    -----
    Note that this function is primarily intended for use with Keras. PyTorch
    does not use the same object-oriented training approach as Keras, and
    therefore doesn't generally have the same checkpointing objects to pass to
    model compilers - instead these are defined in model training code. See
    solaris.nets.train for examples of this. The only torch callback
    instantiated here is a learning rate scheduler.

    Arguments
    ---------
    framework : str
        Deep learning framework used for the model. Options are
        ``['keras', 'torch']`` .
    config : dict
        Configuration dict generated from the YAML config file.

    Returns
    -------
    callbacks : list
        A `list` of callbacks to pass to the compiler (Keras) or to wrap the
        optimizer (torch learning rate scheduling) for model training.
    """
    callbacks = []

    if framework == 'keras':
        for callback, params in config['training']['callbacks'].items():
            if callback == 'lr_schedule':
                callbacks.append(get_lr_schedule(framework, config))
            else:
                callbacks.append(keras_callbacks[callback](**params))
    elif framework == 'torch':
        for callback, params in config['training']['callbacks'].items():
            if callback == 'lr_schedule':
                callbacks.append(get_lr_schedule(framework, config))
            else:
                callbacks.append(torch_callback_dict[callback](**params))

    return callbacks


class KerasTerminateOnMetricNaN(Callback):
    """Callback to stop training if a metric has value NaN or infinity.

    Notes
    -----
    Instantiate as you would any other keras callback. For example, to end
    training if a validation metric called `f1_score` reaches value NaN::

        m = Model(inputs, outputs)
        m.compile()
        m.fit(X, y, callbacks=[KerasTerminateOnMetricNaN('val_f1_score')])

    Attributes
    ----------
    metric : str, optional
        Name of the metric being monitored.
    checkpoint : str, optional
        One of ``['epoch', 'batch']``: Should the metric be checked at the end
        of every epoch (default) or every batch?

    Methods
    -------
    on_epoch_end : operations to complete at the end of each epoch.
    on_batch_end : operations to complete at the end of each batch.
    """

    def __init__(self, metric=None, checkpoint='epoch'):
        """
        Parameters
        ----------
        metric (str): The name of the metric to be tested for NaN value.
        checkpoint (['epoch', 'batch']): Should the metric be checked at the
            end of every epoch (default) or every batch?
        """
        super(KerasTerminateOnMetricNaN, self).__init__()
        self.metric = metric
        self.ckpt = checkpoint

    def on_epoch_end(self, epoch, logs=None):
        if self.ckpt == 'epoch':
            logs = logs or {}
            metric_score = logs.get(self.metric)
            if metric_score is not None:
                if np.isnan(metric_score) or np.isinf(metric_score):
                    print('Epoch {}: Invalid score {} for metric {},'
                          ' terminating training'.format(
                              epoch, metric_score, self.metric))
                    self.model.stop_training = True

    def on_batch_end(self, batch, logs=None):
        if self.ckpt == 'batch':
            logs = logs or {}
            metric_score = logs.get(self.metric)
            if metric_score is not None:
                if np.isnan(metric_score) or np.isinf(metric_score):
                    print('Batch {}: Invalid score {} for metric {},'
                          ' terminating training'.format(
                              batch, metric_score, self.metric))
                    self.model.stop_training = True


keras_callbacks = {
    'lr_schedule': keras.callbacks.LearningRateScheduler,
    'terminate_on_nan': keras.callbacks.TerminateOnNaN,
    'terminate_on_metric_nan': KerasTerminateOnMetricNaN,
    'model_checkpoint': keras.callbacks.ModelCheckpoint,
    'early_stopping': keras.callbacks.EarlyStopping,
    'reduce_lr_on_plateau': keras.callbacks.ReduceLROnPlateau,
    'csv_logger': keras.callbacks.CSVLogger
}


def get_lr_schedule(framework, config):
    """Get a learning rate schedule for model training.

    Arguments
    ---------
    framework : str
        Deep learning framework used for the model. Options are
        ``['keras', 'torch']`` .
    config : dict
        Configuration dict generated from the YAML config file.

    Returns
    -------
    For Keras, a ``keras.callbacks.LearningRateScheduler`` instance; for
    torch, a ``torch.optim.lr_scheduler`` *class* (not an instance), since the
    scheduler can only be instantiated once the optimizer exists in training
    code.
    """
    lr_config = config['training']['callbacks']['lr_schedule']
    schedule_type = lr_config['schedule_type']
    initial_lr = config['training']['lr']
    update_frequency = lr_config.get('update_frequency', 1)
    factor = lr_config.get('factor', 0)
    schedule_dict = lr_config.get('schedule_dict', None)

    if framework == 'keras':
        lr_sched_func = keras_lr_schedule(schedule_type, initial_lr,
                                          update_frequency, factor,
                                          schedule_dict)
        lr_schedule = keras.callbacks.LearningRateScheduler(lr_sched_func)
    elif framework == 'torch':
        # just get the class itself to use; don't instantiate until the
        # optimizer has been created in training code.
        if schedule_type == 'arbitrary':
            lr_schedule = torch.optim.lr_scheduler.MultiStepLR
        elif schedule_type == 'exponential':
            lr_schedule = torch.optim.lr_scheduler.ExponentialLR
        elif schedule_type == 'linear':
            raise NotImplementedError(
                'Linear learning rate schedules are not currently'
                ' implemented for PyTorch.')

    return lr_schedule


def keras_lr_schedule(schedule_type, initial_lr=0.001, update_frequency=1,
                      factor=0, schedule_dict=None):
    r"""Create a learning rate schedule for Keras from a schedule dict.

    Arguments
    ---------
    schedule_type : str
        Type of learning rate schedule to use. Options are:
        ``['arbitrary', 'exponential', 'linear']`` .
    initial_lr : float, optional
        The initial learning rate to use. Defaults to ``0.001`` .
    update_frequency : int, optional
        How frequently should learning rate be reduced (or increased)?
        Defaults to ``1`` (every epoch). Has no effect if
        ``schedule_type='arbitrary'``.
    factor : float, optional
        The magnitude by which learning rate should be changed at each update.
        Use a positive number to increase learning rate and a negative number
        to decrease learning rate. See Usage for more details.
    schedule_dict : dict, optional
        A dictionary with ``{epoch: learning rate}`` pairs. The learning rate
        defined in each pair will be used beginning at the specified epoch and
        continuing until the next highest epoch number is reached during
        training.

    Returns
    -------
    lr_schedule : func
        a function that takes epoch number integers as an argument and returns
        a learning rate.

    Usage
    -----
    ``schedule_type='arbitrary'`` usage is documented in the arguments above.
    For ``schedule_type='exponential'``, the following equation is applied to
    determine the learning rate at each epoch:

    .. math::

        lr = initial\_lr \times e^{factor \times
            \lfloor epoch/update\_frequency \rfloor}

    For ``schedule_type='linear'``, the following equation is applied:

    .. math::

        lr = initial\_lr \times (1 + factor \times
            \lfloor epoch/update\_frequency \rfloor)

    """
    if schedule_type == 'arbitrary':
        if schedule_dict is None:
            raise ValueError('If using an arbitrary schedule, an epoch: lr'
                             ' dict must be provided.')
        # pre-compute a lookup table mapping each epoch to the learning rate
        # defined by the largest schedule_dict key <= that epoch.
        lookup_dict = {}
        epoch_vals = np.array(list(schedule_dict.keys()))
        for e in range(0, epoch_vals.max()):
            if e < epoch_vals.min():
                lookup_dict[e] = initial_lr
            else:
                lower_epochs = epoch_vals[epoch_vals <= e]
                lookup_dict[e] = schedule_dict[lower_epochs.max()]

        def lr_schedule(epoch):
            if epoch < epoch_vals.min():
                return initial_lr
            elif epoch > epoch_vals.max():
                return schedule_dict[epoch_vals.max()]
            else:
                return lookup_dict[epoch]

    elif schedule_type == 'exponential':
        def lr_schedule(epoch):
            if not epoch:
                return initial_lr
            else:
                return initial_lr*np.exp(
                    factor*np.floor(epoch/update_frequency))

    elif schedule_type == 'linear':
        def lr_schedule(epoch):
            return initial_lr*(1 + factor*np.floor(epoch/update_frequency))

    return lr_schedule
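# The exponential and linear schedule modes described above reduce to simple
# closed-form updates. A minimal standalone sketch of that math (illustrative
# helper names, not part of this module's API; uses math rather than numpy so
# it runs without dependencies):

```python
import math


def exponential_lr(epoch, initial_lr=0.001, update_frequency=1, factor=0.0):
    # lr = initial_lr * exp(factor * floor(epoch / update_frequency));
    # epoch 0 always returns the initial learning rate.
    if not epoch:
        return initial_lr
    return initial_lr * math.exp(factor * math.floor(epoch / update_frequency))


def linear_lr(epoch, initial_lr=0.001, update_frequency=1, factor=0.0):
    # lr = initial_lr * (1 + factor * floor(epoch / update_frequency))
    return initial_lr * (1 + factor * math.floor(epoch / update_frequency))
```

# A negative factor decays the rate (e.g. factor=-0.5, update_frequency=2
# multiplies the LR by exp(-0.5) every two epochs); a positive factor grows it.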
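# For schedule_type='arbitrary', the schedule_dict maps starting epochs to
# learning rates, each applying until the next higher epoch key. A standalone
# sketch of that lookup rule (hypothetical helper, not this module's API):

```python
def arbitrary_lr(epoch, schedule_dict, initial_lr=0.001):
    # Use initial_lr before the first epoch key; otherwise use the rate for
    # the largest epoch key that is <= the current epoch. The final rate
    # persists for all epochs past the last key.
    keys = sorted(schedule_dict)
    if epoch < keys[0]:
        return initial_lr
    return schedule_dict[max(k for k in keys if k <= epoch)]
```

# e.g. with {2: 0.01, 5: 0.001}: epochs 0-1 use initial_lr, 2-4 use 0.01,
# and 5 onward use 0.001.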