Browsing by Author "CAROLA ANDREA FIGUEROA FLORES"
Showing 1 - 7 of 7
- Publication: Comprehensive Analysis of Model Errors in Blueberry Detection and Maturity Classification: Identifying Limitations and Proposing Future Improvements in Agricultural Monitoring (Agriculture-Basel, 2023)
  CAROLA ANDREA FIGUEROA FLORES; CRISTHIAN ALEJANDRO AGUILERA CARRASCO
  In blueberry farming, accurately assessing maturity is critical to efficient harvesting. Deep learning solutions, which are increasingly popular in this area, often undergo evaluation through metrics like mean average precision (mAP). However, these metrics may only partially capture the actual performance of the models, especially in resource-limited settings such as agricultural drones or robots. To address this, our study evaluates deep learning models, such as YOLOv7, RT-DETR, and Mask R-CNN, for detecting and classifying blueberries. We perform these evaluations on both powerful computers and embedded systems. Using Type-Influence Detector Error (TIDE) analysis, we closely examine the accuracy of these models. Our research reveals that partial occlusions commonly cause errors, and that optimizing these models for embedded devices can increase their speed without losing precision. This work improves the understanding of object detection models for blueberry detection and maturity estimation.
- Publication: Deep Learning for Chilean Native Flora Classification: A Comparative Analysis (Frontiers in Plant Science, 2023)
  CAROLA ANDREA FIGUEROA FLORES
  The limited availability of information on Chilean native flora has resulted in a lack of knowledge among the general public, and the classification of these plants poses challenges without extensive expertise. This study evaluates the performance of several deep learning (DL) models, namely InceptionV3, VGG19, ResNet152, and MobileNetV2, in classifying images of Chilean native flora. The models are pre-trained on ImageNet. A dataset containing 500 images for each of 10 classes of native Chilean flowers was curated, resulting in a total of 5,000 images. The DL models were applied to this dataset, and their performance was compared based on accuracy and other relevant metrics. The findings highlight the potential of DL models to accurately classify images of Chilean native flora. The results contribute to enhancing the understanding of these plant species and to fostering awareness among the general public. Further improvements and applications of DL in ecology and biodiversity research are discussed.
- Publication: Hallucinating Saliency Maps for Fine-Grained Image Classification for Limited Data Domains (Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, 2021)
  CAROLA ANDREA FIGUEROA FLORES
  Most saliency methods are evaluated on their ability to generate saliency maps, and not on their functionality in a complete vision pipeline such as, for instance, image classification. In the current paper, we propose an approach that does not require explicit saliency maps to improve image classification; instead, they are learned implicitly during the training of an end-to-end image classification task. We show that our approach obtains results similar to the case when the saliency maps are provided explicitly. Combining RGB data with saliency maps represents a significant advantage for object recognition, especially when training data is limited. We validate our method on several datasets for fine-grained classification tasks (flowers, birds, and cars). In addition, we show that our saliency estimation method, which is trained without any saliency ground-truth data, obtains competitive results on a real-image saliency benchmark (Toronto) and outperforms deep saliency models on synthetic images (SID4VAM).
- Publication: Saliency Detection from Subitizing Processing (Vision Sensors - Recent Advances (IntechOpen), 2023)
  CAROLA ANDREA FIGUEROA FLORES
  Most saliency methods are evaluated for their ability to generate saliency maps, and not for their functionality in a complete vision pipeline, for instance, image classification or salient object subitizing. In this work, we introduce saliency subitizing as the weak supervision. This task is inspired by the ability of people to quickly and accurately identify the number of items within the subitizing range (e.g., 1 to 4 different types of things). This means that the subitizing information tells us the number of featured objects in a given image. To this end, we propose a saliency subitizing process (SSP) as a first approximation to learn saliency detection, without the need for any unsupervised methods or random seeds. We conduct extensive experiments on two benchmark datasets (Toronto and SID4VAM). The experimental results show that our method outperforms other weakly supervised methods and, as a first approximation, even performs comparably to some fully supervised methods.
- Publication: Saliency Detection from Subitizing Processing: First Approximation (2022 XLVIII Latin American Computer Conference (CLEI), 2022)
  CAROLA ANDREA FIGUEROA FLORES
  Most saliency methods are evaluated on their ability to generate saliency maps, and not on their functionality in a complete vision pipeline such as, for instance, image classification or salient object subitizing. In this paper, we study the problem of salient object subitizing, i.e., predicting the number of salient objects in synthetic images (SID4VAM and Toronto). This task is inspired by the ability of people to quickly and accurately identify the number of items within the subitizing range (1-4). This means that the subitizing information tells us the number of featured objects in a given image, from which the location or appearance information of those objects can subsequently be obtained, all within a weakly supervised configuration.
- Publication: Saliency for Fine-Grained Object Recognition in Domains with Scarce Training Data (Pattern Recognition, 2019)
  CAROLA ANDREA FIGUEROA FLORES
  This paper investigates the role of saliency in improving the classification accuracy of a convolutional neural network (CNN) when scarce training data is available. Our approach consists in adding a saliency branch to an existing CNN architecture, which is used to modulate the standard bottom-up visual features from the original image input, acting as an attentional mechanism that guides the feature extraction process. The main aim of the proposed approach is to enable the effective training of a fine-grained recognition model with limited training samples and to improve performance on the task, thereby alleviating the need to annotate a large dataset. The vast majority of saliency methods are evaluated on their ability to generate saliency maps, and not on their functionality in a complete vision pipeline. Our proposed pipeline allows us to evaluate saliency methods on the high-level task of object recognition. We perform extensive experiments on various fine-grained datasets (flowers, birds, cars, and dogs) under different conditions and show that saliency can considerably improve the network's performance, especially in the case of scarce training data. Furthermore, our experiments show that saliency methods that obtain improved saliency maps (as measured by traditional saliency benchmarks) also translate to saliency methods that yield improved performance gains when applied in an object recognition pipeline.
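The saliency-branch idea above (a saliency map acting as an attentional signal over bottom-up CNN features) can be illustrated with a minimal modulation layer. This is a hypothetical sketch for illustration only, not the paper's exact architecture; `SaliencyModulation` and the residual `features * (1 + saliency)` form are assumptions.

```python
# Hypothetical sketch (assumed, not the paper's design): modulate CNN feature
# maps with a one-channel saliency map used as an attention signal.
import torch
import torch.nn as nn


class SaliencyModulation(nn.Module):
    def forward(self, features: torch.Tensor, saliency: torch.Tensor) -> torch.Tensor:
        # features: (N, C, H, W); saliency: (N, 1, H, W) with values in [0, 1],
        # broadcast across the channel dimension. The residual form keeps
        # unsalient regions intact while amplifying salient ones.
        return features * (1.0 + saliency)


feats = torch.rand(2, 64, 8, 8)  # dummy backbone activations
sal = torch.rand(2, 1, 8, 8)     # dummy saliency map
out = SaliencyModulation()(feats, sal)
print(out.shape)  # torch.Size([2, 64, 8, 8])
```

A multiplicative gate like this is one common way to inject spatial attention; the paper's branch could equally concatenate or learn the weighting end-to-end.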
- Publication: Saliency for Free: Saliency Prediction as a Side-Effect of Object Recognition (Pattern Recognition Letters, 2021)
  CAROLA ANDREA FIGUEROA FLORES
  Saliency is the perceptual capacity of our visual system to focus our attention (i.e., gaze) on relevant objects instead of the background. So far, computational methods for saliency estimation have required the explicit generation of a saliency map, a process usually achieved via eye-tracking experiments on still images. This is a tedious process that needs to be repeated for each new dataset. In the current paper, we demonstrate that it is possible to automatically generate saliency maps without ground truth. In our approach, saliency maps are learned as a side effect of object recognition. Extensive experiments carried out on both real and synthetic datasets demonstrate that our approach is able to generate accurate saliency maps, achieving competitive results when compared with supervised methods.