Artificial Intelligence (AI) is transforming the way we live and work. The disruptive impact and risks of Generative AI have accelerated the global transition from voluntary AI ethics guidelines to mandatory AI regulation. The European Union’s AI Act, which came into force in August 2024, is the world’s first horizontal and standalone law governing AI, arriving just as other jurisdictions, countries, and states are weighing possible modes of regulation. Following the EU AI Act’s lead, most current regulatory efforts adopt a risk-based classification approach. While this approach is prescriptive and application-focused, it overlooks the complex circular impacts of AI and suffers from inherent limitations: the difficulty of measuring risk, an overemphasis on high-risk classification, assumptions about the perceived trustworthiness of AI, and the geopolitical power imbalances surrounding AI. This article contributes an overview of the current landscape of AI regulation, followed by a detailed assessment of these limitations and of potential means of addressing them through a structuration theory approach. In summary, this approach recognises AI systems as agents that actively participate in the duality of structure and in the subsequent shaping of society. It acknowledges both the agency directly negotiated and granted to machines and their ability to derive an understanding from given inputs, which together qualify AI as an active participant in the recursive structuration of society. This agentic view of AI within the structuration theory approach complements ongoing efforts to develop comprehensive and balanced AI regulation.