Exploring the Potential of Reconstructed Multispectral Images for Urban Tree Segmentation in Street View Images

dc.catalogador gjm
dc.contributor.author Arévalo Ramírez, Tito
dc.contributor.author Alfaro, Analí
dc.contributor.author Saavedra, José M.
dc.contributor.author Recabarren, Matías
dc.contributor.author Ponce-Donoso, Mauricio
dc.contributor.author Delpiano, José
dc.date.accessioned 2024-07-18T14:11:39Z
dc.date.available 2024-07-18T14:11:39Z
dc.date.issued 2024
dc.description.abstract Deep learning has gained popularity in recent years for reconstructing hyperspectral and multispectral images, offering cost-effective solutions and promising results. Research on hyperspectral image reconstruction feeds deep learning models with images at specific wavelengths and outputs images in other spectral bands. Despite the encouraging results of previous works, it remains to be determined to what extent the reconstructed information offers an advantage over the captured images. In this context, the present work examines whether reconstructed spectral images add relevant information to segmentation networks and thereby improve urban tree identification. Specifically, we generate red-edge (ReD) and near-infrared (NIR) images from RGB images using a conditional Generative Adversarial Network (cGAN). Training and validation are carried out with 5770 multispectral images obtained through a custom data augmentation process applied to an urban hyperspectral dataset. The testing outcomes reveal that ReD and NIR images can be generated with average structural similarity index measures of 0.93 and 0.88, respectively. Next, the cGAN generates ReD and NIR information for two RGB-based urban tree datasets (i.e., Jekyll, 3949 samples, and Arbocensus, 317 samples). Subsequently, DeepLabV3 and SegFormer segmentation networks are trained, validated, and tested using RGB, RGB+ReD, and RGB+NIR images from the Jekyll and Arbocensus datasets. The experiments show that the reconstructed multispectral images might not add information that enhances segmentation performance. Specifically, p-values from a t-test show no significant difference between the performances of the segmentation networks (see the sketch after the record fields below).
dc.fechaingreso.objetodigital 2024-07-18
dc.fuente.origen ORCID
dc.identifier.doi 10.1109/JSTARS.2024.3419127
dc.identifier.uri https://doi.org/10.1109/JSTARS.2024.3419127
dc.identifier.uri https://repositorio.uc.cl/handle/11534/87105
dc.information.autoruc Escuela de Ingeniería; Arévalo Ramírez, Tito; 0000-0003-2542-6545; 1300544
dc.language.iso en
dc.nota.acceso full content
dc.pagina.final 12122
dc.pagina.inicio 12112
dc.revista IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
dc.rights open access
dc.rights.license CC BY-NC-ND 4.0 Attribution-NonCommercial-NoDerivatives 4.0 International Deed
dc.rights.uri https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject Image-to-image translation
dc.subject Multispectral features
dc.subject Neural networks
dc.subject Semantic segmentation
dc.subject Urban trees
dc.title Exploring the Potential of Reconstructed Multispectral Images for Urban Tree Segmentation in Street View Images
dc.type article
dc.volumen 17
sipa.codpersvinculados 1300544
sipa.trazabilidad ORCID;2024-07-15
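
The significance check reported in the abstract can be illustrated with a minimal sketch; this is not the authors' code. It assumes a paired t-test (the abstract does not specify the variant) over hypothetical per-image IoU scores of an RGB-only segmentation model versus an RGB+NIR variant; the score arrays, sample size, and value range below are placeholders.

import numpy as np
from scipy import stats

# Hypothetical per-image IoU scores for the two input configurations
# (random placeholders, not results from the paper).
rng = np.random.default_rng(0)
iou_rgb = rng.uniform(0.6, 0.9, size=317)      # RGB-only model on a test set
iou_rgb_nir = rng.uniform(0.6, 0.9, size=317)  # RGB+NIR model on the same images

# Paired t-test over the same test images; p > 0.05 indicates no
# significant difference, in line with the abstract's conclusion.
t_stat, p_value = stats.ttest_rel(iou_rgb, iou_rgb_nir)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")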
Files
Original bundle
Name: Exploring_the_Potential_of_Reconstructed_Multispectral_Images_for_Urban_Tree_Segmentation_in_Street_View_Images.pdf
Size: 17.95 MB
Format: Adobe Portable Document Format