Incremental scene detection in outdoor environment based on hierarchical bag-of-words model

Cited by: 0
Authors
Chen H.-T. [1 ]
Zhang B. [1 ]
Sun F.-C. [2 ]
Huang Y.-L. [2 ]
Yuan J. [3 ]
Affiliations
[1] College of Computer Science, Nankai University, Tianjin
[2] College of Software, Nankai University, Tianjin
[3] College of Artificial Intelligence, Nankai University, Tianjin
Funding
National Natural Science Foundation of China
Keywords
Bag-of-words model; Mobile robots; Outdoor environment; Scene detection; Unsupervised learning;
DOI
10.7641/CTA.2020.90683
Abstract
To carry out tasks autonomously in diverse environments, robots must be able to understand scenes, and scene detection is one of the most important components of this ability. Because a specific scene is continuous in time and space, it is hypothesized that a mobile robot remains in the same scene during a given period of time and that image sequences from the same scene share a similar visual appearance. Therefore, an incremental scene detection method that requires no prior knowledge is proposed. By establishing the connection between images and scenes through a hierarchical bag-of-words (BoW) model, the method makes scene detection more similar to the human cognitive process. First, every image captured by the robot in real time is segmented into blocks. Second, a dynamic clustering algorithm incrementally builds the low-level dictionary, from which the features of the high-level BoW model are extracted. Then a second dynamic clustering algorithm performs incremental scene detection, classifying the current image as either an experienced scene or a previously unexperienced one, in which case a new scene is detected. Experimental results show that the method can effectively perform autonomous scene detection without prior knowledge. © 2020, Editorial Department of Control Theory & Applications, South China University of Technology. All rights reserved.
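As a rough illustration of the pipeline summarized in the abstract (block features, an incrementally grown low-level dictionary, per-image BoW histograms, and a second dynamic clustering stage over those histograms), the following Python sketch shows one way such a two-level incremental scheme could be organized. The per-block descriptor, the leader-style clustering rule, and the radii word_radius and scene_radius are illustrative assumptions, not the authors' exact method.
```python
# Minimal sketch of the incremental pipeline described in the abstract.
# Descriptors, thresholds and update rules are illustrative assumptions.
import numpy as np


def extract_block_features(image, block_size=32):
    """Split an image (H x W x C array) into blocks and return a crude
    per-block descriptor (mean colour); the paper's descriptor is richer."""
    h, w = image.shape[:2]
    feats = []
    for y in range(0, h - block_size + 1, block_size):
        for x in range(0, w - block_size + 1, block_size):
            block = image[y:y + block_size, x:x + block_size]
            feats.append(block.reshape(-1, image.shape[2]).mean(axis=0))
    return np.asarray(feats)


class DynamicClusterer:
    """Incremental (leader-style) clustering: a sample joins the nearest
    centre if it is close enough, otherwise it founds a new cluster."""

    def __init__(self, radius):
        self.radius = radius
        self.centres = []   # list of cluster centres (np.ndarray)
        self.counts = []    # samples absorbed by each centre

    def assign(self, x):
        if self.centres:
            d = [np.linalg.norm(x - c) for c in self.centres]
            k = int(np.argmin(d))
            if d[k] <= self.radius:
                # running-mean update of the matched centre
                self.counts[k] += 1
                self.centres[k] += (x - self.centres[k]) / self.counts[k]
                return k
        self.centres.append(x.astype(float).copy())
        self.counts.append(1)
        return len(self.centres) - 1


class IncrementalSceneDetector:
    """Low-level dictionary over block features, high-level clustering over
    per-image BoW histograms; both dictionaries grow as new data arrive."""

    def __init__(self, word_radius=30.0, scene_radius=0.5):
        self.words = DynamicClusterer(word_radius)    # visual words
        self.scenes = DynamicClusterer(scene_radius)  # scene prototypes

    def process(self, image):
        # 1) block features -> visual-word indices (dictionary may grow)
        word_ids = [self.words.assign(f) for f in extract_block_features(image)]
        # 2) normalised BoW histogram over the current dictionary size
        hist = np.bincount(word_ids, minlength=len(self.words.centres)).astype(float)
        hist /= hist.sum()
        # 3) pad existing scene prototypes if the dictionary grew
        for i, c in enumerate(self.scenes.centres):
            if c.shape[0] < hist.shape[0]:
                self.scenes.centres[i] = np.pad(c, (0, hist.shape[0] - c.shape[0]))
        # 4) assign the image to an experienced scene or open a new one
        return self.scenes.assign(hist)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    detector = IncrementalSceneDetector()
    for t in range(6):
        frame = rng.integers(0, 256, size=(128, 128, 3)).astype(float)
        print(f"frame {t} -> scene {detector.process(frame)}")
```
Each incoming image either reinforces an existing scene prototype or founds a new one, mirroring the "experienced versus unexperienced scene" decision described in the abstract.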
Pages: 1471-1480
Number of pages: 9