Comparison of Two Deep Learning Models to Determine Burned Forest Areas from Sentinel-2 Imagery
- 1. Yildiz Technical University, Graduate School of Science and Engineering, Data Science and Big Data Program, Istanbul, Türkiye
- 2. Yildiz Technical University, Department of Geomatic Engineering, Istanbul, Türkiye
Description
Precise mapping of burned forest areas is essential for monitoring the effects of wildfires and supporting forest management efforts, particularly given the increasing frequency of wildfires driven by climate change. In this study, two image datasets were generated from Sentinel-2 imagery using RGB (Red, Green, Blue) and RGNIR (Red, Green, Near-Infrared) band combinations to evaluate the effectiveness of these spectral bands for semantic segmentation of burned areas. The U-Net and Feature Pyramid Network (FPN) models were compared for binary segmentation of burned regions using Sentinel-2 satellite data and the Satellite Burned Area Dataset. The U-Net model with the RGNIR band combination outperformed the FPN model, achieving an Intersection over Union (IoU) score of 0.7601 and an overall accuracy of 90.92% across 138 test images. These findings underscore U-Net's capacity to extract sufficient spectral and spatial features even with limited training data, providing an efficient method for large-scale mapping of burned areas.
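The IoU score reported above measures the overlap between the predicted burned-area mask and the ground-truth mask. As a minimal sketch (not the authors' code), IoU for a binary segmentation can be computed with NumPy as follows; the function name and the toy masks are illustrative assumptions:

```python
import numpy as np

def compute_iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union for binary masks (1 = burned, 0 = unburned)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # Convention: if both masks are empty, the prediction is a perfect match.
    return float(intersection) / float(union) if union > 0 else 1.0

# Toy 2x2 masks: one pixel of overlap, three pixels in the union.
pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [1, 0]])
print(compute_iou(pred, target))  # 1 / 3 ≈ 0.3333
```

In practice such a score is averaged over all test images (here, 138), and the model's sigmoid output is thresholded (commonly at 0.5) to obtain the binary prediction before computing IoU.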
Files
- 10-22364-bjmc-2024-12-4-06.pdf (716.7 kB, md5:07bf8b4f461695a2fc3a953462ab793b)