Artificial intelligence (AI) has experienced a boom in the past few years. Although the term “AI” has been around since the 1950s, it is only with recent technological developments that it has become a part of everyday life, with ChatGPT, Gemini, and Grok becoming household names.
AI is being used in almost every industry, from entertainment and finance to hospitality and healthcare. International casino gaming platforms use AI-driven personalization for game recommendations and bonuses, delivering more engaging gambling sessions and better customer service chatbots; manufacturers use AI to streamline supply chains; financial services use it to simplify cross-border transactions; and everyday users turn to it for everything from filing taxes to creating recipes.
But the prevalence of AI raises a big question: how can it be regulated? AI development is outpacing both national and international regulation, and policymakers are trying to minimize the technology's potential harms without curtailing its many benefits.
Development Moving Faster Than Regulation
One of the biggest challenges to regulating AI is the rapid pace of development. Although some jurisdictions have created regulatory frameworks, such as the EU AI Act, those frameworks require constant revision as new capabilities emerge, as the sudden rise of generative AI demonstrated.
Passing a law can take months or years, and by then the technology may have transformed completely, leaving gaps in the legislation. This is known as the “pacing problem,” and it arises when regulating any fast-moving technology, including AI, crypto, and autonomous vehicles.
There are two solutions to this challenge, although neither seems feasible at a global level:
- Slow down the pace of innovation.
- Improve the legal system so that timely regulations can be proposed and passed.
The first is nearly impossible, since countries and developers are locked in fierce competition. The second is scarcely easier: many legal systems have been in place for decades and are slow to change.
Lack Of Universal Definition
AI has become an umbrella term covering many tools: automation software, machine learning applications, and generative AI. Some applications are low-risk, like a tool that filters spam out of email. Others are high-risk, like AI that automates drones in war zones.
This lack of a universal definition of what constitutes “artificial intelligence” poses a challenge for regulators. The pace of development compounds the problem, since any definition can become obsolete as new technologies emerge.
Technologies that were once considered “intelligent” have lost that status as they became normalized, a phenomenon sometimes called the “AI effect.” Without a solid, globally recognized definition, regulating AI consistently will be almost impossible.
Pushback From Tech Giants
Regulatory efforts have met strong pushback from tech giants. In Canada, for example, Microsoft and Amazon lobbied publicly against proposed AI legislation and even hinted that they would withhold some product launches in the country if the rules proved burdensome or vague.
In the US, California's 2024 AI safety bill (SB 1047) would have enforced safety measures for AI software, but the governor vetoed it following lobbying from AI developers. OpenAI wrote a letter claiming the proposed legislation would slow innovation and drive technology entrepreneurs out of California toward states and countries without tight legislative controls.
Even the EU AI Act faced strong opposition from tech companies. Meta's Mark Zuckerberg claimed the regulatory framework would stifle innovation and economic growth, while OpenAI threatened to leave Europe if regulations became too restrictive.
Responsibility And Liability
Then there is the question of who is liable when AI causes harm. Regulators are struggling to answer it, since an AI system has no legal rights and cannot itself be held accountable.
Various organizations and people are also involved in the creation and use of AI. The developer, the deployer, and the user all play a role, and it is tough to pin down who should bear responsibility for potential harms. The European Commission states: “While the developers of AI may be best placed to address risks arising from the development phase, their ability to control risks during the use phase may be more limited. In that case, the deployer should be subject to the relevant obligation.”
Some regulators have even considered granting AI systems a legal status as “electronic persons,” an idea the European Parliament floated as early as 2017.
International Regulation Differences
There is little global coherence in AI legislation. In the EU, the 2024 AI Act imposes strict rules, such as requiring labels for AI-generated content. The US lacks a comprehensive federal AI law, although some states are attempting to pass bills that protect users without limiting innovation. Outside the West, countries are also moving to regulate AI: Chile is enacting a comprehensive law that takes a similar approach to the EU's, and South Korea has passed an AI Basic Act that highlights AI's role in economic growth while also addressing the risks involved.
Global regulations will be tough to develop, as local laws and cultures must be considered.
Final Thoughts
Legislating AI in a global market presents many challenges, mainly because of the rapid pace of innovation. It is nearly impossible to build a legal framework around a tool whose capabilities and applications change by the day.
Although nations will have to work together on an overarching legal framework, individual countries must also step up and create regulations that protect users and assign liability to developers and deployers. The United Nations has created a High-Level Advisory Body on AI that analyzes the changing landscape and provides recommendations for global governance; still, governments and stakeholders will have to work together to protect human rights while fostering development.
This article was written in cooperation with esportsinsider.