"""Model definition for a convolutional Variational Autoencoder (VAE).

Reconstructed source of src/model_def.py from
VAEforAnomalyDetectionWithTensorflowOnSageMaker.
"""

import tensorflow as tf
from tensorflow import keras
import tensorflow.keras.backend as K
from tensorflow.keras import layers

# Start from a fresh graph/session state.
K.clear_session()


def encode(x):
    """Convolutional encoder: map an input image to a flat feature vector."""
    x = layers.Conv2D(filters=32, kernel_size=(3, 3), strides=(2, 2),
                      padding='same', activation='relu')(x)
    x = layers.Conv2D(filters=64, kernel_size=(3, 3), strides=(2, 2),
                      padding='same', activation='relu')(x)
    x = layers.Flatten()(x)
    return x


def sample_z(args):
    """Reparameterization trick: z = mu + exp(log_sigma / 2) * eps, eps ~ N(0, 1)."""
    mu, log_sigma = args
    eps = keras.backend.random_normal(shape=keras.backend.shape(mu),
                                      mean=0.0, stddev=1.0)
    return mu + keras.backend.exp(log_sigma / 2) * eps


def sampler(latent_dim, x):
    """Project features to the latent distribution parameters and sample z."""
    z_mean = layers.Dense(latent_dim)(x)
    z_log_var = layers.Dense(latent_dim)(x)
    z = layers.Lambda(sample_z)([z_mean, z_log_var])
    return z_mean, z_log_var, z


def decode(x):
    """Transposed-convolutional decoder: map a latent vector back to an image.

    Sizes assume 28x28x1 inputs (e.g., MNIST): two stride-2 convolutions in
    the encoder reduce 28 -> 7, so decoding starts from a 7x7 feature map.
    """
    x = layers.Dense(units=7 * 7 * 32, activation=tf.nn.relu)(x)
    x = layers.Reshape(target_shape=(7, 7, 32))(x)
    x = layers.Conv2DTranspose(filters=64, kernel_size=3, strides=(2, 2),
                               padding='same', activation='relu')(x)
    x = layers.Conv2DTranspose(filters=32, kernel_size=3, strides=(2, 2),
                               padding='same', activation='relu')(x)
    x = layers.Conv2DTranspose(filters=1, kernel_size=3, strides=(1, 1),
                               padding='same', activation='sigmoid')(x)
    return x


@tf.function
def compute_vae_loss(encoder_mean, encoder_lgvar, vae, x):
    """Compute the loss function of Variational Autoencoders.

    PARAMETERS
    ----------
    encoder_mean : model part that outputs the means of the latent distribution
    encoder_lgvar : model part that outputs the log-variances of the latent distribution
    vae : the Variational Autoencoder model
    x : input data

    RETURNS
    -------
    Variational Autoencoder loss = reconstruction loss + KL loss
    for each data point in the minibatch.
    """
    z_mean = encoder_mean(x)
    z_lgvar = encoder_lgvar(x)
    x_pred = vae(x)

    # Reconstruction term: pixel-wise binary cross-entropy, summed over
    # height, width, and channels.
    cross_ent = K.binary_crossentropy(x, x_pred)
    recon = tf.reduce_sum(cross_ent, axis=[1, 2, 3])

    # KL divergence between the approximate posterior N(mu, sigma^2) and
    # the standard normal prior N(0, I).
    kl = -0.5 * K.sum(1 + z_lgvar - K.square(z_mean) - K.exp(z_lgvar), axis=1)

    return recon, recon + kl


@tf.function
def compute_apply_gradients(encoder_mean, encoder_lgvar, vae, x, optimizer):
    """Compute the gradients and apply them with the optimizer.

    PARAMETERS
    ----------
    encoder_mean : model part that outputs the means of the latent distribution
    encoder_lgvar : model part that outputs the log-variances of the latent distribution
    vae : the Variational Autoencoder model
    x : tensors
    optimizer : tensorflow optimizer object

    RETURNS
    -------
    None, but the weights of the model are updated.
    """
    with tf.GradientTape() as tape:
        recon_loss, loss = compute_vae_loss(encoder_mean, encoder_lgvar, vae, x)
    gradients = tape.gradient(loss, vae.trainable_variables)
    optimizer.apply_gradients(zip(gradients, vae.trainable_variables))
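

# ---------------------------------------------------------------------------
# Usage sketch: one plausible way to wire the pieces above into a trainable
# VAE, not part of the original module. The 28x28x1 input shape, the
# `latent_dim` value, the Adam optimizer, and the random `x_batch` standing
# in for real data are all assumptions made for this example.
# ---------------------------------------------------------------------------
if __name__ == "__main__":
    latent_dim = 2

    # Build the functional graph:
    # image -> features -> (z_mean, z_log_var, z) -> reconstruction.
    inputs = keras.Input(shape=(28, 28, 1))
    features = encode(inputs)
    z_mean, z_log_var, z = sampler(latent_dim, features)
    outputs = decode(z)

    # The full VAE plus the two encoder heads that compute_vae_loss expects.
    vae = keras.Model(inputs, outputs)
    encoder_mean = keras.Model(inputs, z_mean)
    encoder_lgvar = keras.Model(inputs, z_log_var)

    optimizer = keras.optimizers.Adam(1e-4)

    # One training step on a random batch, as a smoke test.
    x_batch = tf.random.uniform((8, 28, 28, 1))
    compute_apply_gradients(encoder_mean, encoder_lgvar, vae, x_batch, optimizer)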