To Authenticity, and Beyond! Building Safe and Fair Generative AI Upon the Three Pillars of Provenance

Cited by: 1
Authors
Collomosse, John [1 ]
Parsons, Andy [2 ]
Affiliations
[1] Adobe Research, San Jose, CA 95110, USA
[2] Adobe Inc., Content Authenticity Initiative, New York, NY 10011, USA
Funding
UK Engineering and Physical Sciences Research Council (EPSRC)
Keywords
Training; Visualization; Generative AI; Biological system modeling; Voting; Buildings; Watermarking;
DOI
10.1109/MCG.2024.3380168
Chinese Library Classification
TP31 [Computer Software]
Subject Classification Codes
081202; 0835
Abstract
Provenance facts, such as who made an image and how, can provide valuable context for users to make trust decisions about visual content. Against a backdrop of inexorable progress in generative AI for computer graphics, over two billion people will vote in public elections this year. Emerging standards and provenance-enhancing tools promise to play an important role in fighting fake news and the spread of misinformation. In this article, we contrast three provenance-enhancing technologies (metadata, fingerprinting, and watermarking) and discuss how to build upon their complementary strengths to provide robust trust signals that support the stories told by both real and generative images. Beyond authenticity, we describe how provenance can also underpin new models for value creation in the age of generative AI. In doing so, we address further risks arising with generative AI, such as ensuring training consent and properly attributing credit to the creatives whose work is used to train generative models. We show that provenance may be combined with distributed ledger technology to develop novel solutions for recognizing and rewarding creative endeavor in the age of generative AI.
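To make the fingerprinting pillar contrasted in the abstract concrete, the following is a minimal Python sketch of a perceptual hash: a fingerprint computed from pixel content alone, which can match an image back to a stored provenance record even after embedded metadata has been stripped. The average-hash (aHash) used here is a textbook technique chosen purely for illustration; it is an assumption for this sketch, not the fingerprinting method described by the authors or specified by C2PA.

def average_hash(pixels):
    """64-bit aHash from an 8x8 grayscale grid (values 0-255):
    each bit records whether a pixel is at or above the grid mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance flags a near-duplicate."""
    return bin(h1 ^ h2).count("1")

# Toy check: a uniform brightness shift leaves the fingerprint intact,
# whereas the same edit (or a simple re-save) would strip metadata and
# defeat a metadata-only trust signal.
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
shifted = [[v + 3 for v in row] for row in original]
assert hamming_distance(average_hash(original), average_hash(shifted)) == 0

In a deployment along the lines the abstract suggests, such fingerprints would index a database of provenance manifests, complementing embedded metadata (which is fragile) and watermarks (which require modifying the asset).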
Pages: 82 - 90
Page count: 9