The Crop Journal 2022, 10(5)

Temporal sequence Object-based CNN (TS-OCNN) for crop classification from fine resolution remote sensing image time-series

Authors:
Huapeng Li; Yajun Tian; Ce Zhang; Shuqing Zhang; Peter M. Atkinson
Affiliations:
Northeast Institute of Geography and Agroecology, Chinese Academy of Sciences, Changchun 130012, Jilin, China; Faculty of Science and Technology, Lancaster University, Lancaster LA1 4YR, UK; Lancaster Environment Centre, Lancaster University, Lancaster LA1 4YQ, UK
Keywords:
Convolutional neural network; Multi-temporal imagery; Object-based image analysis (OBIA); Crop classification; Fine spatial resolution imagery
Abstract:
Accurate crop distribution mapping is required for crop yield prediction and field management. Due to rapid progress in remote sensing technology, fine spatial resolution (FSR) remotely sensed imagery now offers great opportunities for mapping crop types in great detail. However, within-class variance can hamper attempts to discriminate crop classes at fine resolutions. Multi-temporal FSR remotely sensed imagery provides a means of increasing crop classification accuracy, although current methods do not exploit the available information fully. In this research, a novel Temporal Sequence Object-based Convolutional Neural Network (TS-OCNN) was proposed to classify agricultural crop type from FSR image time-series. An object-based CNN (OCNN) model was adopted in the TS-OCNN to classify images at the object level (i.e., segmented objects or crop parcels), thus maintaining the precise boundary information of crop parcels. The combination of image time-series was first utilized as the input to the OCNN model to produce an 'original' or baseline classification. Then the single-date images were fed automatically into the deep learning model scene-by-scene, in order of image acquisition date, to increase successively the crop classification accuracy. By doing so, the joint information in the FSR multi-temporal observations and the unique individual information from the single-date images were exploited comprehensively for crop classification. The effectiveness of the proposed approach was investigated using multi-temporal SAR and optical imagery, respectively, over two heterogeneous agricultural areas.
The experimental results demonstrated that the newly proposed TS-OCNN approach consistently increased crop classification accuracy, and achieved the greatest accuracies (82.68% and 87.40%) in comparison with state-of-the-art benchmark methods, including the object-based CNN (OCNN) (81.63% and 85.88%), object-based image analysis (OBIA) (78.21% and 84.83%), and standard pixel-wise CNN (79.18% and 82.90%). The proposed approach is the first known attempt to explore simultaneously the joint information from image time-series with the unique information from single-date images for crop classification using a deep learning framework. The TS-OCNN, therefore, represents a new approach for agricultural landscape classification from multi-temporal FSR imagery. Moreover, it is readily generalizable to other landscapes (e.g., forest landscapes), with broad application prospects. (c) 2022 Crop Science Society of China and Institute of Crop Science, CAAS. Production and hosting by Elsevier B.V. on behalf of KeAi Communications Co., Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
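The abstract describes a two-stage control flow: a baseline classification from the stacked image time-series, followed by scene-by-scene refinement using single-date images in acquisition-date order. The sketch below illustrates only that control flow with a toy linear scorer standing in for the OCNN; all function names, array shapes, and the score-accumulation rule are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def ocnn_scores(features, w):
    # Toy linear stand-in for the object-based CNN: per-object class scores.
    # (Hypothetical helper; the paper's OCNN is a trained deep network.)
    return features @ w

def ts_ocnn(single_date_feats, dates, w_joint, w_dates):
    # Step 1: stack all dates and classify jointly -> baseline scores.
    # Step 2: feed single-date scenes one by one, in acquisition-date
    #         order, accumulating each scene's evidence into the scores.
    order = np.argsort(dates)                       # acquisition-date order
    stacked = np.concatenate([single_date_feats[i] for i in order], axis=1)
    scores = ocnn_scores(stacked, w_joint)          # baseline classification
    for i in order:                                 # scene-by-scene refinement
        scores = scores + ocnn_scores(single_date_feats[i], w_dates[i])
    return np.argmax(scores, axis=1)                # final class per object

# Synthetic demo: 5 crop objects, 3 acquisition dates, 3 features, 4 classes.
feats = [rng.normal(size=(5, 3)) for _ in range(3)]
dates = np.array([120, 30, 200])          # day-of-year, deliberately unsorted
w_joint = rng.normal(size=(9, 4))         # 3 dates x 3 features, stacked
w_dates = [rng.normal(size=(3, 4)) for _ in range(3)]
labels = ts_ocnn(feats, dates, w_joint, w_dates)
```

In the paper, each refinement step retrains/updates the deep model rather than summing fixed linear scores; the sketch only mirrors the ordering and data-flow logic.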
