Detecting dark patterns in shopping websites - a multi-faceted approach using Bidirectional Encoder Representations From Transformers (BERT)

Cited: 0
Authors
Vedhapriyavadhana, R. [1 ]
Bharti, Priyanshu [2 ]
Chidambaranathan, Senthilnathan [3 ]
Affiliations
[1] Univ West Scotland, Sch Comp Engn & Phys Sci, Import Bldg,2 Clove Crescent, London E14 2BE, England
[2] Vellore Inst Technol, Sch Comp Sci & Engn, Chennai, India
[3] Virtusa, Dept Architecture & Design, Piscataway, NJ USA
Keywords
Dark patterns; multi-class text classification; natural language processing; BERT; user experience; user interfaces
DOI
10.1080/17517575.2025.2457961
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812 ;
Abstract
Dark patterns are elements of user interface and user experience design intended to deceive, manipulate, confuse, or pressure users of a platform or website into making decisions they would not knowingly have made. Many companies have begun implementing dark patterns on their websites, employing carefully crafted language and design elements to manipulate their users. Numerous studies have examined this subject and developed classification systems for these patterns, and governments worldwide have taken action to restrict their use. This proposed work establishes a fundamental framework for a browser extension that extracts text from a given shopping website and employs Bidirectional Encoder Representations from Transformers (BERT), an open-source natural language processing (NLP) model, to identify and expose dark patterns to users who may be unaware of them. Such a tool has the potential to create a more equitable environment and to help individuals improve their awareness of these practices. The work also examines the issues and challenges of detecting dark patterns, including the strategies companies use to make detection harder by carefully modifying the design of their websites and applications. The proposed BERT-based approach improves detection accuracy to 97%, compared with 95.4% for Random Forest and 95.8% for a support vector machine (SVM). The work further aims to facilitate future research and improvements so that the tool keeps pace with constantly changing tactics. Finally, it introduces a novel approach for safeguarding users from dark patterns through a machine-learning detection Chromium extension, and it provides insights beyond the technical complexities that could support further development of the application.
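The abstract compares the BERT model against classical baselines (Random Forest and SVM). A minimal sketch of such a classical multi-class text-classification baseline is shown below, assuming TF-IDF features and a linear SVM; the example phrases and category labels are illustrative assumptions, not the paper's dataset.

```python
# Hedged sketch of a classical baseline (TF-IDF + linear SVM) of the kind the
# paper compares BERT against. The phrases and labels below are hypothetical
# examples of dark-pattern categories, not drawn from the paper's data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "Only 2 left in stock - order soon!",       # scarcity/urgency cue
    "Hurry, sale ends in 10 minutes",           # scarcity/urgency cue
    "9 other people are viewing this item",     # social-proof cue
    "Bought by 5,000 happy customers today",    # social-proof cue
    "Free shipping on all orders",              # ordinary marketing copy
    "Add to cart",                              # ordinary UI text
]
labels = ["urgency", "urgency", "social_proof", "social_proof",
          "benign", "benign"]

# TF-IDF unigrams/bigrams feeding a linear SVM, analogous to the
# classical models the paper reports 95-96% accuracy for.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(texts, labels)

# Classify a new snippet scraped from a product page.
print(clf.predict(["Hurry, only 2 left!"])[0])
```

In the paper's pipeline, the same classification step is instead performed by a fine-tuned BERT model on text extracted by the browser extension; the sketch above only illustrates the baseline side of the comparison.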
Pages: 33