DeepGemini: Verifying Dependency Fairness for Deep Neural Networks

Cited by: 0
Authors:
Xie, Xuan [1]
Zhang, Fuyuan [2]
Hu, Xinwen [3]
Ma, Lei [1,2,4]
Affiliations:
[1] Univ Alberta, Edmonton, AB, Canada
[2] Kyushu Univ, Fukuoka, Japan
[3] Hunan Normal Univ, Changsha, Peoples R China
[4] Univ Tokyo, Tokyo, Japan
Funding:
Natural Sciences and Engineering Research Council of Canada (NSERC)
Keywords: (none listed)
DOI: (not available)
CLC classification: TP18 [Theory of Artificial Intelligence]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract
Deep neural networks (DNNs) have been widely adopted in many industrial decision-making applications. Their fairness, i.e., whether a DNN harbors unintended biases, has become a critical concern: biased decisions can directly harm daily life and potentially undermine the fairness of our society, especially as DNNs are deployed at an unprecedented pace. Recently, early attempts have been made to provide fairness assurance for DNNs, such as fairness testing, which empirically searches for discriminatory samples, and fairness certification, which develops sound but incomplete analyses to certify the fairness of DNNs. Nevertheless, how to formally compute discriminatory samples and fairness scores (i.e., the percentage of the input space that is fair) remains largely uninvestigated. In this paper, we propose DeepGemini, a novel formal fairness analysis technique for DNNs with two key components: discriminatory sample discovery and fairness score computation. To uncover discriminatory samples, we encode the fairness of a DNN as safety properties and search for discriminatory samples using state-of-the-art DNN verification techniques; this reduction makes DeepGemini the first approach to formally compute discriminatory samples. To compute the fairness score, we develop a counterexample-guided fairness analysis that uses four heuristics to efficiently approximate a lower bound on the fairness score. Extensive experimental evaluations on commonly used benchmarks demonstrate the effectiveness and efficiency of DeepGemini, which outperforms state-of-the-art DNN fairness certification approaches in both efficiency and scalability.
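The core reduction described in the abstract — treating dependency (individual) fairness as a safety property over pairs of inputs that differ only in a protected attribute — can be illustrated with a minimal sketch. This is not the paper's implementation: the toy network, its weights, and the brute-force grid search (standing in for a real DNN verifier such as an SMT- or abstraction-based tool) are all illustrative assumptions.

```python
# Minimal sketch: dependency fairness of a tiny ReLU network as a safety
# property. A pair of inputs agreeing on all non-protected features but
# differing in the protected attribute must receive the same decision;
# a violating pair is a discriminatory sample. A verifier would search
# symbolically; here we search a grid by brute force for illustration.

def relu(z):
    return max(z, 0.0)

def net(a, v):
    """Toy 2-input ReLU network: a = protected attribute (0/1),
    v = non-protected feature. Weights are made up for illustration."""
    h1 = relu(1.0 * a + 0.5 * v + 0.1)
    h2 = relu(-2.0 * a + 1.0 * v - 0.2)
    h3 = relu(0.5 * a - 1.0 * v)
    return h1 - h2 + 0.5 * h3

def classify(a, v):
    return int(net(a, v) > 0.0)   # binary decision

def find_discriminatory_sample(grid):
    # Safety property: classify(0, v) == classify(1, v) for all v.
    # Any v that violates it witnesses a fairness violation.
    for v in grid:
        if classify(0, v) != classify(1, v):
            return v              # counterexample to fairness
    return None

grid = [i * 0.05 - 2.0 for i in range(81)]   # v ranging over [-2, 2]
witness = find_discriminatory_sample(grid)
```

A real verifier replaces the grid loop with an exhaustive symbolic search, so an "unsat" result certifies fairness over the whole input region rather than just the sampled points; counterexamples returned by the solver play the role of `witness` here.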
Pages: 15251-15259 (9 pages)