Injecting Domain Knowledge Into Deep Neural Networks for Tree Crown Delineation

Cited by: 5
Authors
Harmon, Ira [1]
Marconi, Sergio [2]
Weinstein, Ben [2]
Graves, Sarah [3]
Wang, Daisy Zhe [1]
Zare, Alina [4]
Bohlman, Stephanie [5]
Singh, Aditya [6]
White, Ethan [2]
Affiliations
[1] Univ Florida, Dept Comp & Informat Sci & Engn, Gainesville, FL 32611 USA
[2] Univ Florida, Dept Wildlife Ecol & Conservat, Gainesville, FL 32611 USA
[3] Univ Wisconsin Madison, Dept Environm Studies, Madison, WI 53706 USA
[4] Univ Florida, Dept Elect & Comp Engn, Gainesville, FL 32611 USA
[5] Univ Florida, Sch Forest Resources & Conservat, Gainesville, FL 32611 USA
[6] Univ Florida, Dept Agr & Biol Engn, Gainesville, FL 32611 USA
Funding
U.S. National Science Foundation;
Keywords
Convolutional neural network (CNN); forest ecology; neuro-symbolics; remote sensing; tree crown delineation; DATA FUSION; SEGMENTATION; ALLOMETRY; DIAMETER; FORESTS; HEIGHT; AREA;
DOI
10.1109/TGRS.2022.3216622
Chinese Library Classification (CLC)
P3 [Geophysics]; P59 [Geochemistry];
Subject Classification Codes
0708; 070902;
Abstract
Automated individual tree crown (ITC) delineation plays an important role in forest remote sensing. Accurate ITC delineation benefits biomass estimation, allometry estimation, and species classification, among other forest-related tasks, all of which are used to monitor forest health and make important decisions in forest management. In this article, we introduce neuro-symbolic DeepForest, a convolutional neural network (CNN)-based ITC delineation algorithm that uses a neuro-symbolic framework to inject domain knowledge, represented as rules written in probabilistic soft logic (PSL), into the network. We create rules that encode concepts of competition, allometry, constrained growth, mean ITC area, and crown color. Our results show that the delineation model learns from the annotated training data as well as the rules and that, under some conditions, the injection of rules improves model performance and affects model bias. We then analyze the effects of each rule on its related aspects of model performance. We find that the addition of domain knowledge can improve F1 by as much as four points, reduce the Kullback-Leibler (KL) divergence between ground-truth and predicted area distributions, and reduce the aggregate error in area between ground-truth and predicted delineations.
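The abstract describes rule injection only at a high level. The sketch below illustrates one common way PSL-style rules can be turned into differentiable hinge penalties added to a detection loss, in the spirit of the mean-area and allometry rules named above. It is a minimal illustration under stated assumptions: the prior mean area, the allometric coefficients, the slack values, and the rule weights are hypothetical placeholders, and the code does not use the actual DeepForest API or the paper's implementation.

```python
# Minimal sketch: soft-logic rule penalties added to a detection loss.
# All constants (MEAN_CROWN_AREA_M2, RULE_WEIGHTS, slacks, the allometric
# coefficients, and the toy boxes) are illustrative assumptions, not values
# from the paper or from the DeepForest library.
import torch

MEAN_CROWN_AREA_M2 = 30.0                            # assumed site-level prior
RULE_WEIGHTS = {"mean_area": 0.1, "allometry": 0.1}  # assumed rule weights

def box_areas(boxes: torch.Tensor) -> torch.Tensor:
    """Areas of predicted boxes given as (xmin, ymin, xmax, ymax) in meters."""
    return (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])

def mean_area_penalty(boxes: torch.Tensor) -> torch.Tensor:
    """Soft rule: the batch mean crown area should be near the prior mean.
    Implemented as a hinge on the normalized deviation, so small deviations
    inside the slack cost nothing (a PSL-style relaxation)."""
    deviation = (box_areas(boxes).mean() - MEAN_CROWN_AREA_M2).abs()
    return torch.relu(deviation / MEAN_CROWN_AREA_M2 - 0.1)  # 10% slack

def allometry_penalty(boxes: torch.Tensor, heights: torch.Tensor) -> torch.Tensor:
    """Soft rule: crown diameter should scale with tree height.
    The linear coefficients below are placeholders, not fitted values."""
    diameters = torch.sqrt(4.0 * box_areas(boxes) / torch.pi)
    expected = 0.5 * heights + 1.0  # hypothetical allometric relation
    return torch.relu((diameters - expected).abs() / expected - 0.1).mean()

def total_loss(detection_loss: torch.Tensor,
               boxes: torch.Tensor,
               heights: torch.Tensor) -> torch.Tensor:
    """Standard detection loss plus weighted rule penalties, one objective."""
    return (detection_loss
            + RULE_WEIGHTS["mean_area"] * mean_area_penalty(boxes)
            + RULE_WEIGHTS["allometry"] * allometry_penalty(boxes, heights))

if __name__ == "__main__":
    boxes = torch.tensor([[0.0, 0.0, 5.0, 6.0],
                          [2.0, 2.0, 9.0, 8.0]], requires_grad=True)
    heights = torch.tensor([8.0, 12.0])      # e.g., from a LiDAR height model
    loss = total_loss(torch.tensor(1.0), boxes, heights)
    loss.backward()  # gradients flow through the rule penalties into the boxes
    print(float(loss), boxes.grad)
```

Because each rule is a hinge on a normalized deviation, predictions that already satisfy a rule within its slack contribute no gradient; the rules therefore only push on the network where its outputs disagree with the domain prior, which matches the abstract's observation that the model learns from both the annotations and the rules.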
Pages: 19