Semantic Scene Understanding in Unstructured Environment with Deep Convolutional Neural Network

Cited by: 0
Authors
Baheti, Bhakti [1 ]
Gajre, Suhas [1 ]
Talbar, Sanjay [1 ]
Affiliations
[1] SGGS Inst Engn & Technol, Ctr Excellence Signal & Image Proc, Nanded 431606, Maharashtra, India
Source
PROCEEDINGS OF THE 2019 IEEE REGION 10 CONFERENCE (TENCON 2019): TECHNOLOGY, KNOWLEDGE, AND SOCIETY, 2019
Keywords
Semantic Segmentation; ResNet; Dilated Convolution; DeepLabV3+
DOI
10.1109/tencon.2019.8929376
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline codes
0808 ; 0809 ;
Abstract
The number of road fatalities has been rising continuously over the last few decades all over the world. Advanced driver assistance systems are now being developed to support the driver, and semantic scene understanding is an essential task for them. Convolutional Neural Networks (CNNs) have shown impressive progress in various computer vision tasks, including semantic segmentation. Various architectures have been proposed in the literature, but the loss of spatial acuity during downsampling prevents them from achieving better results, as details of small objects are lost. To overcome this drawback, we propose to use a dilated residual network as the backbone in DeepLabV3+, which preserves the details of smaller objects in the scene without reducing the receptive field. We focus our work on the India Driving Dataset (IDD), which contains data from unstructured traffic scenarios. The proposed architecture proves effective compared to earlier approaches in the literature, achieving 0.618 mIoU.
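The key idea the abstract relies on, replacing stride-2 downsampling with dilated convolutions so the receptive field is kept while spatial resolution is preserved, can be sketched numerically. This is a minimal illustration, not code from the paper; the four-layer configurations below are hypothetical stand-ins for a ResNet stage.

```python
# Sketch (assumption, not the paper's code): effective receptive field of a
# stack of 3x3 convolutions, comparing strided downsampling with the dilated
# variant used in dilated residual networks.
def receptive_field(layers):
    """Each layer is (kernel_size, stride, dilation).
    Returns (receptive field in input pixels, cumulative output stride)."""
    rf, jump = 1, 1
    for k, s, d in layers:
        k_eff = d * (k - 1) + 1        # dilation enlarges the kernel's span
        rf += (k_eff - 1) * jump       # growth scales with the current stride
        jump *= s
    return rf, jump

# Hypothetical ResNet-style stage with two stride-2 layers: output is 4x smaller.
strided = [(3, 2, 1), (3, 1, 1), (3, 2, 1), (3, 1, 1)]
# Dilated conversion: strides set to 1, each layer's dilation set to the
# cumulative stride it originally saw, so resolution is kept.
dilated = [(3, 1, 1), (3, 1, 2), (3, 1, 2), (3, 1, 4)]

print(receptive_field(strided))  # → (19, 4): RF 19, output downsampled 4x
print(receptive_field(dilated))  # → (19, 1): same RF 19, full resolution
```

Both stacks reach the same 19-pixel receptive field, but the dilated stack keeps an output stride of 1, which is why small objects survive in the segmentation map.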
Pages: 790-795
Page count: 6
Related papers
50 records total
  • [21] Understanding and Boosting of Deep Convolutional Neural Network Based on Sample Distribution
    Zheng, Qinghe
    Yang, Mingqiang
    Zhang, Qingrui
    Zhang, Xinxin
    Yang, Jiajie
    PROCEEDINGS OF 2017 IEEE 2ND INFORMATION TECHNOLOGY, NETWORKING, ELECTRONIC AND AUTOMATION CONTROL CONFERENCE (ITNEC), 2017, : 823 - 827
  • [22] Development of an Ensembled Meta-Deep Learning Model for Semantic Road-Scene Segmentation in an Unstructured Environment
    Sivanandham, Sangavi
    Gunaseelan, Dharani Bai
    APPLIED SCIENCES-BASEL, 2022, 12 (23):
  • [23] Deep Convolutional Neural Network
    Zhou, Yu
    Fang, Rui
    Liu, Peng
    Liu, Kai
    2019 PROCEEDINGS OF THE CONFERENCE ON CONTROL AND ITS APPLICATIONS, CT, 2019, : 46 - 51
  • [24] On the contextual aspects of using deep convolutional neural network for semantic image segmentation
    Wang, Chunlai
    Mauch, Lukas
    Saxena, Mehul Manoj
    Yang, Bin
    JOURNAL OF ELECTRONIC IMAGING, 2018, 27 (05)
  • [25] Understanding the Semantic Structures of Tables with a Hybrid Deep Neural Network Architecture
    Nishida, Kyosuke
    Sadamitsu, Kugatsu
    Higashinaka, Ryuichiro
    Matsuo, Yoshihiro
    THIRTY-FIRST AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 168 - 174
  • [26] Scene Classification Based on a Deep Random-Scale Stretched Convolutional Neural Network
    Liu, Yanfei
    Zhong, Yanfei
    Fei, Feng
    Zhu, Qiqi
    Qin, Qianqing
    REMOTE SENSING, 2018, 10 (03)
  • [27] TEXNET: A DEEP CONVOLUTIONAL NEURAL NETWORK MODEL TO RECOGNIZE TEXT IN NATURAL SCENE IMAGES
    KAVITHA, D.
    RADHA, V.
    JOURNAL OF ENGINEERING SCIENCE AND TECHNOLOGY, 2021, 16 (02): : 1782 - 1799
  • [28] A Multi-label Scene Categorization Model Based on Deep Convolutional Neural Network
    Zhao, Gaofeng
    Luo, Wang
    Cui, Yang
    Fan, Qiang
    Peng, Qiwei
    Kong, Zhen
    Zhu, Liang
    Zhang, Tai
    COMMUNICATIONS, SIGNAL PROCESSING, AND SYSTEMS, CSPS 2018, VOL III: SYSTEMS, 2020, 517 : 128 - 135
  • [29] Acoustic Scene Classification Using Deep Convolutional Neural Network via Transfer Learning
    Ye, Min
    Zhong, Hong
    Song, Xiao
    Huang, Shilei
    Cheng, Gang
    PROCEEDINGS OF THE 2019 INTERNATIONAL CONFERENCE ON ASIAN LANGUAGE PROCESSING (IALP), 2019, : 19 - 22
  • [30] Implementation of deep convolutional neural network for classification of multiscaled and multiangled remote sensing scene
    Alegavi, S. S.
    Sedamkar, R. R.
    INTELLIGENT DECISION TECHNOLOGIES-NETHERLANDS, 2020, 14 (01): : 21 - 34