# Entry point for the DeepForest model: a pytorch-lightning module for
# training, predicting and evaluating tree crown detection in RGB imagery.
import os

import numpy as np
import pandas as pd
import pytorch_lightning as pl
import torch
from PIL import Image
from torch import optim
from pytorch_lightning.callbacks import LearningRateMonitor

from libs.deepforest import dataset
from libs.deepforest import evaluate as evaluate_iou
from libs.deepforest import get_data
from libs.deepforest import model
from libs.deepforest import predict
from libs.deepforest import utilities


class deepforest(pl.LightningModule):
    """Class for training and predicting tree crowns in RGB images"""

    def __init__(self,
                 num_classes=1,
                 label_dict={"Tree": 0},
                 transforms=None,
                 config_file="deepforest_config.yml"):
        """
        Args:
            num_classes (int): number of classes in the model
            label_dict (dict): mapping of class name to numeric id, one entry per class
            transforms: optional transform factory, defaults to dataset.get_transform
            config_file (str): path to deepforest config file

        Returns:
            self: a deepforest pytorch lightning module
        """
        super().__init__()

        # Device used by the ad hoc predict/evaluate methods; pytorch-lightning
        # manages devices itself during training.
        if torch.cuda.is_available():
            self.current_device = torch.device("cuda")
        else:
            self.current_device = torch.device("cpu")

        # Read config file. Prefer a local file, otherwise fall back to the
        # copy shipped with the package.
        if os.path.exists(config_file):
            config_path = config_file
        else:
            try:
                config_path = get_data("deepforest_config.yml")
            except Exception as e:
                raise ValueError(
                    "No config file provided and deepforest_config.yml not found either "
                    "in local directory or in installed package location. {}".format(e))

        print("Reading config file: {}".format(config_path))
        self.config = utilities.read_config(config_path)

        # Release version id, set when a prebuilt release is loaded
        self.__release_version__ = None

        self.num_classes = num_classes
        self.create_model()

        # Label encoder and decoder
        assert len(label_dict) == num_classes, (
            "label_dict {} does not match requested number of classes {}, please supply "
            "a label_dict argument {{\"label1\":0, \"label2\":1, \"label3\":2 ... etc}} "
            "for each label in the dataset".format(label_dict, num_classes))

        self.label_dict = label_dict
        self.numeric_to_label_dict = {v: k for k, v in label_dict.items()}

        if transforms is None:
            self.transforms = dataset.get_transform
        else:
            self.transforms = transforms

        self.save_hyperparameters()

    def use_release(self, check_release=True):
        """Use the latest DeepForest model release from github and load model.
        Optionally download if release doesn't exist.

        Args:
            check_release (logical): whether to check github for a recent model release.
                In cases where you are hitting the github API rate limit, set to False and
                any local model will be used. If no model has been downloaded an error
                will raise.

        Returns:
            model (object): A trained PyTorch model
        """
        # Download the latest tree crown model from the github release
        release_tag, self.release_state_dict = utilities.use_release(
            check_release=check_release)
        self.model.load_state_dict(
            torch.load(self.release_state_dict, map_location=self.device))

        # Record which release tag is loaded
        self.__release_version__ = release_tag
        print("Loading pre-built model: {}".format(release_tag))

    def use_bird_release(self, check_release=True):
        """Use the latest DeepForest bird model release from github and load model.
        Optionally download if release doesn't exist.

        Args:
            check_release (logical): whether to check github for a recent model release.
                In cases where you are hitting the github API rate limit, set to False and
                any local model will be used. If no model has been downloaded an error
                will raise.

        Returns:
            model (object): A trained pytorch model
        """
        # Download the latest bird detection model from the github release
        release_tag, self.release_state_dict = utilities.use_bird_release(
            check_release=check_release)
        self.model.load_state_dict(torch.load(self.release_state_dict))

        self.__release_version__ = release_tag
        print("Loading pre-built model: {}".format(release_tag))
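    # A minimal usage sketch (kept as comments, so nothing runs on import),
    # assuming network access or a previously cached release so that
    # use_release() can load the prebuilt weights:
    #
    #   m = deepforest()                      # single "Tree" class by default
    #   m.use_release()                       # load the released tree crown weights
    #   b = deepforest(num_classes=2, label_dict={"Alive": 0, "Dead": 1})  # custom classes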
    def create_model(self):
        """Define a deepforest retinanet architecture"""
        self.model = model.create_model(self.num_classes, self.config["nms_thresh"],
                                        self.config["score_thresh"])

    def create_trainer(self, logger=None, callbacks=[], **kwargs):
        """Create a pytorch lightning trainer by reading config files

        Args:
            callbacks (list): a list of pytorch-lightning callback classes
        """
        # If validation data and a logger are supplied, also track the learning rate
        if self.config["validation"]["csv_file"] is not None and logger is not None:
            lr_monitor = LearningRateMonitor(logging_interval="epoch")
            callbacks.append(lr_monitor)

        self.trainer = pl.Trainer(logger=logger,
                                  max_epochs=self.config["train"]["epochs"],
                                  gpus=self.config["gpus"],
                                  enable_checkpointing=False,
                                  accelerator=self.config["distributed_backend"],
                                  fast_dev_run=self.config["train"]["fast_dev_run"],
                                  callbacks=callbacks,
                                  **kwargs)

    def save_model(self, path):
        """Save the trainer checkpoint in a user defined path, in order to access it in the future

        Args:
            path: path at which to save the model checkpoint
        """
        self.trainer.save_checkpoint(path)

    def load_dataset(self,
                     csv_file,
                     root_dir=None,
                     augment=False,
                     shuffle=True,
                     batch_size=1,
                     train=False):
        """Create a tree dataset for inference.
        The csv file format has the columns "image_path", "xmin", "ymin", "xmax", "ymax"
        for the image name and bounding box position. image_path is the relative filename,
        not the absolute path, within the root_dir directory. One bounding box per line.

        Args:
            csv_file: path to csv file
            root_dir: directory of images. If None, uses "image_dir" in config
            augment: whether to create a training dataset, this activates data augmentations

        Returns:
            data_loader: a pytorch DataLoader over the dataset
        """
        ds = dataset.TreeDataset(csv_file=csv_file,
                                 root_dir=root_dir,
                                 transforms=self.transforms(augment=augment),
                                 label_dict=self.label_dict,
                                 preload_images=self.config["train"]["preload_images"])

        data_loader = torch.utils.data.DataLoader(
            ds,
            batch_size=batch_size,
            shuffle=shuffle,
            collate_fn=utilities.collate_fn,
            num_workers=self.config["workers"])

        return data_loader

    def train_dataloader(self):
        """Train loader using the configurations

        Returns:
            loader
        """
        loader = self.load_dataset(csv_file=self.config["train"]["csv_file"],
                                   root_dir=self.config["train"]["root_dir"],
                                   augment=True,
                                   shuffle=True,
                                   batch_size=self.config["batch_size"])

        return loader

    def val_dataloader(self):
        """Create a val data loader only if specified in config

        Returns:
            loader or None
        """
        loader = None
        if self.config["validation"]["csv_file"] is not None:
            loader = self.load_dataset(
                csv_file=self.config["validation"]["csv_file"],
                root_dir=self.config["validation"]["root_dir"],
                augment=False,
                shuffle=False,
                batch_size=self.config["batch_size"])

        return loader

    def predict_image(self, image=None, path=None, return_plot=False, color=None, thickness=1):
        """Predict a single image with a deepforest model

        Args:
            image: a float32 numpy array of an RGB image in channels-last format
            path: optional path to read image from disk instead of passing the image arg
            return_plot: return the image with plotted detections
            color: color of the bounding box as a tuple of BGR color, e.g. orange annotations is (0, 165, 255)
            thickness: thickness of the rectangle border line in px

        Returns:
            boxes: A pandas dataframe of predictions (Default)
            img: The input image with predictions overlaid (Optional)
        """
        if isinstance(image, str):
            raise ValueError("Path provided instead of image. If you want to predict "
                             "an image from disk, use the path argument")

        if path:
            if not isinstance(path, str):
                raise ValueError("Path expects a string path to image on disk")
            image = np.array(Image.open(path).convert("RGB")).astype("float32")

        # Sanity check on the input image type
        if not type(image) == np.ndarray:
            raise TypeError("Input image is of type {}, expected numpy, if reading "
                            "from PIL, wrap in np.array(image).astype(float32)".format(
                                type(image)))

        # Move the model to GPU if one is available
        if self.current_device.type == "cuda":
            self.model = self.model.to("cuda")

        self.model.eval()
        self.model.score_thresh = self.config["score_thresh"]

        result = predict.predict_image(model=self.model,
                                       image=image,
                                       return_plot=return_plot,
                                       device=self.current_device,
                                       iou_threshold=self.config["nms_thresh"],
                                       color=color,
                                       thickness=thickness)

        # Convert numeric class codes back to character labels when a dataframe is returned
        if not return_plot:
            if result is not None:
                result["label"] = result.label.apply(
                    lambda x: self.numeric_to_label_dict[x])

        return result

    def predict_file(self, csv_file, root_dir, savedir=None, color=None, thickness=1):
        """Create a dataset and predict an entire annotation file.
        The csv file format has the columns "image_path", "xmin", "ymin", "xmax", "ymax"
        for the image name and bounding box position. image_path is the relative filename,
        not the absolute path, within the root_dir directory. One bounding box per line.

        Args:
            csv_file: path to csv file
            root_dir: directory of images. If None, uses "image_dir" in config
            savedir: Optional. Directory to save image plots.
            color: color of the bounding box as a tuple of BGR color, e.g. orange annotations is (0, 165, 255)
            thickness: thickness of the rectangle border line in px

        Returns:
            df: pandas dataframe with bounding boxes, label and scores for each image in the csv file
        """
        self.model.eval()
        self.model.score_thresh = self.config["score_thresh"]

        result = predict.predict_file(model=self.model,
                                      csv_file=csv_file,
                                      root_dir=root_dir,
                                      savedir=savedir,
                                      device=self.current_device,
                                      iou_threshold=self.config["nms_thresh"],
                                      color=color,
                                      thickness=thickness)

        # Convert numeric class codes back to character labels
        result["label"] = result.label.apply(lambda x: self.numeric_to_label_dict[x])

        return result
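    # A minimal single-image prediction sketch (as comments), assuming a
    # deepforest instance `m` with release weights loaded and a hypothetical
    # local file "tree_image.png":
    #
    #   boxes = m.predict_image(path="tree_image.png")                    # dataframe of boxes
    #   img = m.predict_image(path="tree_image.png", return_plot=True)    # numpy image with boxes drawn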
    def predict_tile(self,
                     raster_path=None,
                     image=None,
                     patch_size=400,
                     patch_overlap=0.05,
                     iou_threshold=0.15,
                     return_plot=False,
                     mosaic=True,
                     use_soft_nms=False,
                     sigma=0.5,
                     thresh=0.001,
                     color=None,
                     thickness=1):
        """For images too large to input into the model, predict_tile cuts the image
        into overlapping windows, predicts trees on each window and reassembles the
        results into a single array.

        Args:
            raster_path: path to image on disk
            image (array): numpy image array in BGR channel order following the openCV convention
            patch_size: patch size, default 400
            patch_overlap: patch overlap, default 0.05
            iou_threshold: minimum iou overlap among predictions between windows to be suppressed, default 0.15. Lower values suppress more boxes at edges.
            return_plot: should the image be returned with the predictions drawn?
            mosaic: return a single prediction dataframe (True) or a tuple of image crops and predictions (False)
            use_soft_nms: whether to perform Gaussian Soft NMS or not; if False, default NMS is performed.
            sigma: variance of the Gaussian function used in Gaussian Soft NMS
            thresh: the score threshold used to filter bboxes after soft-nms is performed
            color: color of the bounding box as a tuple of BGR color, e.g. orange annotations is (0, 165, 255)
            thickness: thickness of the rectangle border line in px

        Returns:
            boxes (array): if return_plot, an image. Otherwise a pandas dataframe of predicted bounding boxes, scores and labels
        """
        self.model.eval()
        self.model.score_thresh = self.config["score_thresh"]
        self.model.nms_thresh = self.config["nms_thresh"]

        result = predict.predict_tile(model=self.model,
                                      raster_path=raster_path,
                                      image=image,
                                      patch_size=patch_size,
                                      patch_overlap=patch_overlap,
                                      iou_threshold=iou_threshold,
                                      return_plot=return_plot,
                                      mosaic=mosaic,
                                      use_soft_nms=use_soft_nms,
                                      sigma=sigma,
                                      thresh=thresh,
                                      device=self.current_device,
                                      color=color,
                                      thickness=thickness)

        if result is None:
            print("No predictions made, returning None")
            return None

        # Convert numeric class codes back to character labels
        if not return_plot and mosaic:
            result["label"] = result.label.apply(
                lambda x: self.numeric_to_label_dict[x])
        elif not mosaic:
            for df, crop in result:
                df["label"] = df.label.apply(
                    lambda x: self.numeric_to_label_dict[x])

        return result

    def training_step(self, batch, batch_idx):
        """Train on a loaded dataset"""
        path, images, targets = batch

        loss_dict = self.model.forward(images, targets)

        # Sum of regression and classification losses
        losses = sum([loss for loss in loss_dict.values()])

        return losses

    def validation_step(self, batch, batch_idx):
        """Compute and log losses on a validation batch"""
        try:
            path, images, targets = batch
        except:
            print("Empty batch encountered, skipping")
            return None

        # Keep the model in train mode so the retinanet returns losses rather than detections
        self.model.train()
        with torch.no_grad():
            loss_dict = self.model.forward(images, targets)

        # Sum of regression and classification losses
        losses = sum([loss for loss in loss_dict.values()])

        # Log each component loss
        for key, value in loss_dict.items():
            self.log("val_{}".format(key), value, on_epoch=True)

        return losses

    def on_epoch_end(self):
        # Evaluate against the validation file at the configured interval
        if self.config["validation"]["csv_file"] is not None:
            if (self.current_epoch + 1) % self.config["validation"]["val_accuracy_interval"] == 0:
                results = self.evaluate(csv_file=self.config["validation"]["csv_file"],
                                        root_dir=self.config["validation"]["root_dir"])
                self.log("box_recall", results["box_recall"])
                self.log("box_precision", results["box_precision"])

                if results["class_recall"] is not None:
                    for index, row in results["class_recall"].iterrows():
                        label_name = self.numeric_to_label_dict[row["label"]]
                        self.log("{}_Recall".format(label_name), row["recall"])
                        self.log("{}_Precision".format(label_name), row["precision"])
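    # A minimal training sketch (as comments), assuming the csv_file and
    # root_dir entries of the config point at hypothetical local annotations:
    #
    #   m.config["train"]["csv_file"] = "path/to/train.csv"
    #   m.config["train"]["root_dir"] = "path/to/images"
    #   m.config["validation"]["csv_file"] = "path/to/val.csv"
    #   m.config["validation"]["root_dir"] = "path/to/images"
    #   m.create_trainer()
    #   m.trainer.fit(m)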
    def configure_optimizers(self):
        # SGD with a plateau scheduler; the scheduler is only monitored when validation data exists
        optimizer = optim.SGD(self.model.parameters(),
                              lr=self.config["train"]["lr"],
                              momentum=0.9)

        scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer,
                                                               mode="min",
                                                               factor=0.1,
                                                               patience=10,
                                                               verbose=True,
                                                               threshold=0.0001,
                                                               threshold_mode="rel",
                                                               cooldown=0,
                                                               min_lr=0,
                                                               eps=1e-08)

        if self.config["validation"]["csv_file"] is not None:
            return {
                "optimizer": optimizer,
                "lr_scheduler": scheduler,
                "monitor": "val_classification"
            }
        else:
            return optimizer

    def evaluate(self, csv_file, root_dir, iou_threshold=None, savedir=None):
        """Compute intersection-over-union and precision/recall for a given iou_threshold

        Args:
            csv_file: location of a csv file with columns "name","xmin","ymin","xmax","ymax","label", each box in a row
            root_dir: location of files in the dataframe 'name' column
            iou_threshold: float [0,1], intersection-over-union threshold between an annotation and a prediction for it to be scored as a true positive
            savedir: optional directory path to save evaluation images

        Returns:
            results: dict of ("results", "precision", "recall") for a given threshold
        """
        # Move the model to GPU if one is available
        if self.current_device.type == "cuda":
            self.model = self.model.to("cuda")

        self.model.eval()
        self.model.score_thresh = self.config["score_thresh"]

        predictions = predict.predict_file(model=self.model,
                                           csv_file=csv_file,
                                           root_dir=root_dir,
                                           savedir=savedir,
                                           device=self.current_device,
                                           iou_threshold=self.config["nms_thresh"])

        ground_df = pd.read_csv(csv_file)
        ground_df["label"] = ground_df.label.apply(lambda x: self.label_dict[x])

        # Remove empty frames (rows whose boxes are all zero) from the ground truth
        ground_df = ground_df[~((ground_df.xmin == 0) & (ground_df.xmax == 0))]

        if iou_threshold is None:
            iou_threshold = self.config["validation"]["iou_threshold"]

        results = evaluate_iou.evaluate(predictions=predictions,
                                        ground_df=ground_df,
                                        root_dir=root_dir,
                                        iou_threshold=iou_threshold,
                                        savedir=savedir)

        # Convert numeric class codes back to character labels
        if not results["results"].empty:
            results["results"]["predicted_label"] = results["results"]["predicted_label"].apply(
                lambda x: x if pd.isnull(x) else self.numeric_to_label_dict[x])
            results["results"]["true_label"] = results["results"]["true_label"].apply(
                lambda x: self.numeric_to_label_dict[x])
            results["predictions"] = predictions

        return results
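# A minimal end-to-end sketch, guarded so that importing this module has no
# side effects. The file paths are hypothetical placeholders; substitute a real
# raster and annotation csv before running.
if __name__ == "__main__":
    m = deepforest()
    m.use_release()

    # Tiled prediction for an image too large for a single forward pass
    boxes = m.predict_tile(raster_path="path/to/large_image.tif",
                           patch_size=400,
                           patch_overlap=0.05)
    if boxes is not None:
        print(boxes.head())

    # Box recall/precision against hand annotations at an IoU threshold of 0.4
    results = m.evaluate(csv_file="path/to/annotations.csv",
                         root_dir="path/to/images",
                         iou_threshold=0.4)
    print(results["box_recall"], results["box_precision"])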