Limitations of risk-based artificial intelligence regulation: a structuration theory approach

Authors
Lily Ballot Jones [1 ]
Julia Thornton [1 ]
Daswin De Silva [1 ]
Affiliation
[1] La Trobe University, Centre for Data Analytics and Cognition
Keywords
AI risks; AI regulation; Risk-based regulation; Structuration theory; Trustworthy AI
DOI
10.1007/s44163-025-00233-9
Abstract
Artificial Intelligence (AI) is transforming the way we live and work. The disruptive impact and risks of Generative AI have accelerated the global transition from voluntary AI ethics guidelines to mandatory AI regulation. The European Union AI Act, the world’s first horizontal and standalone law governing AI, came into force in August 2024, just as other jurisdictions, countries, and states are navigating possible modes of regulation. Starting with the EU AI Act, most current regulatory efforts follow a risk-based classification approach. While this approach is prescriptive and application-focused, it overlooks the complex circular impacts of AI and carries inherent limitations: the difficulty of measuring risk, an overemphasis on high-risk classification, reliance on the perceived trustworthiness of AI, and the geopolitical power imbalance of AI. This article contributes an overview of the current landscape of AI regulation, followed by a detailed assessment of these limitations and of potential means of addressing them through a structuration theory approach. In summary, this approach recognises AI systems as agents that actively participate in the duality of structure and the subsequent shaping of society. It acknowledges the agency directly negotiated with and granted to machines, alongside their ability to construct an understanding from given inputs, which qualifies AI as an active participant in the recursive structuration of society. This agentic view of AI within structuration theory complements ongoing efforts to develop comprehensive and balanced AI regulation.