A vision-based fully automated approach to robust image cropping detection

Cited by: 12
Authors
Fanfani, Marco [1 ]
Iuliani, Massimo [1 ,2 ]
Bellavia, Fabio [1 ]
Colombo, Carlo [1 ]
Piva, Alessandro [1 ,2 ]
Affiliations
[1] Univ Florence, Dept Informat Engn, Florence, Italy
[2] Univ Florence, FORLAB Multimedia Forens Lab, Prato, Italy
Keywords
Multimedia forensics; Robust computer vision; Cropping detection; Image content analysis; CAMERA CALIBRATION; FORGERY DETECTION;
DOI
10.1016/j.image.2019.115629
Chinese Library Classification
TM (Electrotechnics); TN (Electronics and Communication Technology);
Discipline Classification Codes
0808; 0809;
Abstract
The definition of valid and robust methodologies for assessing the authenticity of digital information is nowadays critical for countering social manipulation through the media. A key research topic in multimedia forensics is the development of methods for detecting tampered content in large image collections without any human intervention. This paper introduces AMARCORD (Automatic Manhattan-scene AsymmetRically CrOpped imageRy Detector), a fully automated detector for exposing evidence of asymmetrical image cropping in Manhattan-World scenes. The proposed solution estimates and exploits the camera principal point, i.e., a physical feature extracted directly from the image content that is largely insensitive to image processing operations, such as compression and resizing, typical of social media platforms. Robust computer vision techniques are employed throughout, so as to cope with large sources of noise in the data and improve detection performance. The method leverages a novel metric based on robust statistics, and is also capable of deciding autonomously whether the image at hand is tractable. The results of an extensive experimental evaluation covering several cropping scenarios demonstrate the effectiveness and robustness of our approach.
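The core idea in the abstract can be illustrated with a minimal sketch. Under the standard assumptions of zero skew and unit aspect ratio, the principal point of a camera viewing a Manhattan-World scene coincides with the orthocenter of the triangle formed by the three orthogonal vanishing points; symmetric images place it near the image center, while asymmetric cropping shifts it away. The function names, the relative-tolerance threshold, and the decision rule below are illustrative assumptions, not the paper's actual metric (which is based on robust statistics):

```python
import numpy as np

def principal_point_from_vps(v1, v2, v3):
    """Estimate the principal point as the orthocenter of the triangle
    formed by three mutually orthogonal vanishing points (assumes zero
    skew and unit aspect ratio)."""
    A, B, C = (np.asarray(v, dtype=float) for v in (v1, v2, v3))
    # Altitude from A is perpendicular to edge BC; altitude from B to CA.
    # Solve (B - C) . (P - A) = 0 and (C - A) . (P - B) = 0 for P.
    M = np.vstack([B - C, C - A])
    b = np.array([np.dot(B - C, A), np.dot(C - A, B)])
    return np.linalg.solve(M, b)

def looks_cropped(pp, width, height, rel_tol=0.1):
    """Hypothetical decision rule: flag asymmetric cropping when the
    estimated principal point deviates from the image center by more
    than rel_tol times the image diagonal."""
    center = np.array([width / 2.0, height / 2.0])
    diagonal = np.hypot(width, height)
    return np.linalg.norm(pp - center) > rel_tol * diagonal
```

In practice, vanishing points extracted from real images are noisy, which is why the paper relies on robust estimation throughout rather than the single closed-form solve shown here.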
Pages: 13
Related Papers
50 records in total
  • [31] Robust Vision-based Indoor Localization
    Clark, Ronald
    Trigoni, Niki
    Markham, Andrew
    IPSN'15: PROCEEDINGS OF THE 14TH INTERNATIONAL SYMPOSIUM ON INFORMATION PROCESSING IN SENSOR NETWORKS, 2015, : 378 - 379
  • [32] Vision-based Robust Localization for Vehicles
    Moro, F.
    Fontanelli, D.
    Palopoli, L.
    2012 IEEE INTERNATIONAL INSTRUMENTATION AND MEASUREMENT TECHNOLOGY CONFERENCE (I2MTC), 2012, : 553 - 558
  • [33] A Robust and Automated Vision-Based Human Fall Detection System Using 3D Multi-Stream CNNs with an Image Fusion Technique
    Alanazi, Thamer
    Babutain, Khalid
    Muhammad, Ghulam
    APPLIED SCIENCES-BASEL, 2023, 13 (12):
  • [34] New hybrid vision-based control approach for automated guided vehicles
    Miljkovic, Zoran
    Vukovic, Najdan
    Mitic, Marko
    Babic, Bojan
    INTERNATIONAL JOURNAL OF ADVANCED MANUFACTURING TECHNOLOGY, 2013, 66 (1-4): : 231 - 249
  • [35] A Deep Learning-based Approach for Vision-based Weeds Detection
    Wang, Yan
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2023, 14 (12) : 75 - 82
  • [36] A Vision-Based Approach to UAV Detection and Tracking in Cooperative Applications
    Opromolla, Roberto
    Fasano, Giancarmine
    Accardo, Domenico
    SENSORS, 2018, 18 (10)
  • [37] An Improved Approach for Vision-Based Lane Marking Detection and Tracking
    Lu, Wenjie
    Rodriguez, Sergio A. F.
    Seignez, Emmanuel
    Reynaud, Roger
    INTERNATIONAL CONFERENCE ON ELECTRICAL, CONTROL AND AUTOMATION ENGINEERING (ECAE 2013), 2013, : 382 - 386
  • [38] A Lidar and vision-based approach for pedestrian and vehicle detection and tracking
    Premebida, Cristiano
    Monteiro, Goncalo
    Nunes, Urbano
    Peixoto, Paulo
    2007 IEEE INTELLIGENT TRANSPORTATION SYSTEMS CONFERENCE, VOLS 1 AND 2, 2007, : 83 - 88
  • [39] A sonar approach to obstacle detection for a vision-based autonomous wheelchair
    Del Castillo, Guillermo
    Skaar, Steven
    Cardenas, Antonio
    Fehr, Linda
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2006, 54 (12) : 967 - 981
  • [40] Vision-based Contingency Detection
    Lee, Jinhan
    Kiser, Jeffrey F.
    Bobick, Aaron F.
    Thomaz, Andrea L.
    PROCEEDINGS OF THE 6TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTIONS (HRI 2011), 2011, : 297 - 304