Artificial Intelligence (AI), as a defining competitive advantage of the 21st century, is rapidly gaining momentum on the world stage. Its accelerating capabilities and potential are so significant that it will transform nearly all aspects of our societies, including the economy, education, healthcare, the law, national security, and beyond. It is becoming a driver of growth, competitiveness, and innovation. Non-adoption poses a critical risk for businesses and governments: organizations that fail to adapt to the fast-changing AI environment could quickly find their business models becoming obsolete.
The new generation of AI is advancing rapidly and materially into territories once thought to be reserved for humans. AI is moving well beyond organizing and providing information into the realms of generating knowledge and, to some extent, exhibiting creativity.
Driven by these potentials, benefits, opportunities, and risks, a global race to be at the forefront of AI development and application has already begun. The United States was the first country to implement a comprehensive AI research and development strategic plan, in May 2016. In February 2019, the US President signed Executive Order 13859, Maintaining American Leadership in Artificial Intelligence. In July 2017, China released its Next Generation AI Development Plan, aiming to become the world leader in the field by 2030. In April 2018, the European Union set out its European AI Strategy, which aims at making the EU a world-class hub for AI and ensuring that AI is human-centric and trustworthy.
The AI race is not limited to AI development and application; it also extends to AI governance. The importance of AI regulations lies in the significant role they will play in shaping AI’s future and in determining who will emerge as the true leaders in the field. Today, the frontrunners in AI regulation and governance are the United States, China, and the EU. The European Union is taking a proactive approach, introducing strict rules aimed at protecting citizens’ privacy and ensuring the ethical use of AI. China is also taking a proactive and leading approach; its 2022 regulation on recommendation algorithms aims to give regulators meaningful insight into the functioning of algorithms and to ensure that they perform within acceptable bounds. The United States is taking a more laissez-faire approach, seeking to create a more favorable environment for AI innovation.
Where We Stand Today
In general, the basic approach to AI regulation and governance focuses on the risks of AI’s underlying technology, i.e., machine-learning algorithms, at the level of data input, algorithm testing, and the decision model. The core set of guiding principles comprises privacy, accountability, safety and security, transparency, explainability, fairness and non-discrimination, human control of technology, professional responsibility, and respect for human values.
The proposed EU AI Act is modeled on a risk-based approach intended to reach beyond Europe’s borders. It is anticipated that the European Parliament will vote on the proposed text of the AI Act by March 2023. The aim of the EU’s new laws is not merely to ensure that the rights of EU citizens are upheld in the digital space but also to ensure that European companies and organizations have a better opportunity to compete against large foreign and international firms.
On the other hand, China’s Internet Information Service Algorithmic Recommendation Management Provisions, which went into effect in March 2022, constitute the first regulation of their kind worldwide. The law gives users new rights, including the ability to opt out of recommendation algorithms and to delete their user data. It also creates greater transparency regarding where and how recommender engines are used.
There is as yet no comparable regulatory framework in the United States. However, there are tentative steps toward articulating a rights-based regulatory approach with the Biden Administration’s Blueprint for an AI Bill of Rights. In addition, in January 2023, the National Institute of Standards and Technology (NIST) released the AI Risk Management Framework (AI RMF 1.0), a voluntary set of standards to help incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
Key Challenges to Regulators
Regulating and establishing an effective governance framework for AI presents a long list of major obstacles to regulators and policymakers, starting from the very basics of agreeing on a definition of AI. AI’s rapid advancement and unpredictability are another obstacle, as articulated in a white paper published by the World Economic Forum in November 2016: “Given the Fourth Industrial Revolution’s extraordinarily fast technological and social change, relying only on government legislation and incentives to ensure the right outcomes is ill-advised. These are likely to be out-of-date or redundant by the time they are implemented.”

Transparency is another challenge. AI systems can be opaque, making it difficult to understand how they arrive at certain decisions. This lack of transparency can make it challenging to assess the fairness and accuracy of AI-generated outcomes. That leads to yet another challenge: liability. In cases of harm or error, determining who is liable for the actions of an AI system can be difficult, and as AI systems become more complex and autonomous, this challenge will only grow. Among the many other challenges are AI personhood, intellectual property rights, data privacy and security, bias and discrimination, and ethical and moral concerns.
The Way Forward
One of the biggest concerns about AI regulation is conflicting laws and regulations across jurisdictions, since AI is by nature cross-border and global. Where different jurisdictions have conflicting AI regulations, the resulting lack of harmony in the global regulatory landscape poses significant challenges for businesses and organizations that operate in multiple jurisdictions.
AI governance and regulation is certainly a global issue, and in-depth international coordination to regulate and govern AI is a necessity. A lack of coordination will create barriers to innovation and could affect national security. At the same time, such coordination will always be limited by the differing approaches and priorities of the global AI leaders. Regulators should therefore engage in ongoing dialogue with their international counterparts to share best practices and to develop consistent standards and regulations for AI governance. This will require strong partnerships with global organizations and stakeholders such as the Organisation for Economic Co-operation and Development, the UN, the EU, the US, and China.
Mais Haddad holds a Doctorate of Juridical Science from the University of Pittsburgh School of Law and has over 15 years of legal, risk, and compliance advisory and consultancy experience.
Suggested citation: Mais Haddad, The Race for AI Governance: Navigating the International Regulatory Landscape of Artificial Intelligence, JURIST – Professional Commentary, March 17, 2023, https://www.jurist.org/commentary/2023/03/mais-haddad-international-regulations-artificial-intelligence/.
This article was prepared for publication by Hayley Behal, JURIST Commentary Co-Managing Editor. Please direct any questions or comments to her at email@example.com