Multi-modal wound classification using wound image and location by deep neural network

Authors
D. M. Anisuzzaman
Yash Patel
Behrouz Rostami
Jeffrey Niezgoda
Sandeep Gopalakrishnan
Zeyun Yu
Affiliations
[1] University of Wisconsin-Milwaukee, Department of Computer Science
[2] University of Wisconsin-Milwaukee, Department of Electrical Engineering
[3] Advancing the Zenith of Healthcare (AZH) Wound and Vascular Center
[4] University of Wisconsin-Milwaukee, College of Nursing
[5] University of Wisconsin-Milwaukee, Big Data Analytics and Visualization Laboratory, Department of Biomedical Engineering
Abstract
Wound classification is an essential step of wound diagnosis. An efficient classifier can assist wound specialists in classifying wound types at lower financial and time cost and help them decide on an optimal treatment procedure. This study developed a deep neural network-based multi-modal classifier that uses wound images and their corresponding locations to categorize wounds into multiple classes, including diabetic, pressure, surgical, and venous ulcers. A body map was also developed to prepare the location data, which can help wound specialists tag wound locations more efficiently. Three datasets containing images and their corresponding location information were curated with the help of wound specialists. The multi-modal network was built by concatenating the outputs of an image-based classifier and a location-based classifier, along with further architectural modifications. The maximum accuracy on mixed-class classifications (including background and normal skin) ranges from 82.48% to 100% across experiments. The maximum accuracy on wound-class classifications (only diabetic, pressure, surgical, and venous) ranges from 72.95% to 97.12%. The proposed multi-modal network also shows a significant improvement over results reported in the previous literature.
Related papers (50 in total)
  • [1] Multi-modal wound classification using wound image and location by deep neural network
    Anisuzzaman, D. M.
    Patel, Yash
    Rostami, Behrouz
    Niezgoda, Jeffrey
    Gopalakrishnan, Sandeep
    Yu, Zeyun
    SCIENTIFIC REPORTS, 2022, 12 (01)
  • [2] Multi-modal medical image classification using deep residual network and genetic algorithm
    Abid, Muhammad Haris
    Ashraf, Rehan
    Mahmood, Toqeer
    Faisal, C. M. Nadeem
    PLOS ONE, 2023, 18 (06)
  • [3] Deep Convolutional Neural Network for Multi-Modal Image Restoration and Fusion
    Deng, Xin
    Dragotti, Pier Luigi
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2021, 43 (10) : 3333 - 3348
  • [4] Wound image segmentation using deep convolutional neural network
    Kang, Hyunyoung
    Seo, Kyungdeok
    Lee, Sena
    Oh, Byung Ho
    Yang, Sejung
    PHOTONICS IN DERMATOLOGY AND PLASTIC SURGERY 2023, 2023, 12352
  • [5] Multi-modal image segmentation using a modified Hopfield neural network
    Rout, S
    Seethalakshmy
    Srivastava, P
    Majumdar, J
    PATTERN RECOGNITION, 1998, 31 (06) : 743 - 750
  • [6] Multi-modal multi-concept-based deep neural network for automatic image annotation
    Xu, Haijiao
    Huang, Changqin
    Huang, Xiaodi
    Huang, Muxiong
    MULTIMEDIA TOOLS AND APPLICATIONS, 2019, 78 (21) : 30651 - 30675
  • [7] A deep multi-modal neural network for informative Twitter content classification during emergencies
    Kumar, Abhinav
    Singh, Jyoti Prakash
    Dwivedi, Yogesh K.
    Rana, Nripendra P.
    ANNALS OF OPERATIONS RESEARCH, 2022, 319 (1) : 791 - 822