Will Washington Ever Get Around to Regulating Artificial Intelligence?

On November 4, 2021, US Senators Rob Portman (R-OH) and Jacky Rosen (D-NV) announced the introduction of S. 3175, the Advancing American Artificial Intelligence Innovation Act of 2021, in the 117th Congress. Although the stated goal of S. 3175 is to ensure that private companies have access to accurate data so they can better meet Department of Defense (DoD) needs, S. 3175 is undoubtedly an attempt by US lawmakers to reassert US leadership over AI governance at a time when other countries appear to be taking the lead.

Recall that back in April, the European Union (EU) Commission unveiled the Artificial Intelligence Act, a comprehensive piece of legislation that, if passed, would lay down harmonized rules on AI and is expected to reverberate around the globe. The China factor cannot be ignored either. China’s race for AI supremacy is not lost on US policymakers. S. 3175 comes at a time when concerns over China’s potential to outpace the US in AI deployments are at an all-time high. In a recent report to Congress, the DoD highlighted China’s intent to prioritize and produce AI technologies to empower its military and economy, and the threat that this poses for the US. If the past is anything to go by, S. 3175 will not see the light of day.

Compared to the comprehensive piece of legislation pending in the EU, a hodgepodge of AI-related bills floats around Congress, most with very little chance of becoming law. Although there appears to be growing concern in Washington regarding the unlawful, irresponsible, and/or unethical development and deployment of AI, US lawmakers are either reluctant to act or simply confused when it comes to AI regulation.

Unfortunately, while Washington sleeps, AI development and deployment continues at a dizzying pace, with major consequences for the United States and the rest of the world. To be clear, Washington has not been completely silent on the issue of AI and AI governance. Lacking, however, is a comprehensive and coherent piece of legislation aimed at maximizing the benefits of AI while minimizing its negative impacts. This article introduces S. 3175, surveys the current legal landscape for AI in the US, and discusses why AI regulation is long overdue.

S. 3175: An Overview

S. 3175 authorizes the Secretary of Defense to “carry out a pilot program to assess the feasibility and advisability of establishing data libraries for developing and enhancing [AI] capabilities to ensure that the [DoD] is able to procure optimal [AI] and machine learning software capabilities to meet Department requirements and technology development goals.” In carrying out a pilot program, the Secretary “may establish data libraries containing Department data sets relevant to the development of artificial intelligence software and technology,” and may “allow appropriate public and private sector organizations to access such data libraries for the purposes of developing artificial intelligence models and other technical software solutions.” Clearly, S. 3175 is a very narrow piece of legislation that does not address, or purport to address, the myriad issues raised by AI and other emerging technologies.

Unlike the EU’s Artificial Intelligence Act, S. 3175 does not address the risks associated with AI systems, does not offer a framework for addressing those risks, and does not address questions of accountability with regard to the development, deployment, or use of AI systems.

US and AI Regulation: A History of Weak and Tentative Steps

S. 3175 is far from the first AI-related bill to be introduced in Congress in recent years.

In May 2021, Senator Edward J. Markey (D-Mass.) and Congresswoman Doris Matsui (CA-06) introduced the Algorithmic Justice and Online Platform Transparency Act of 2021 to prohibit harmful algorithms, increase transparency into websites’ content amplification and moderation practices, and commission a cross-government investigation into discriminatory algorithmic processes throughout the economy.

Back in 2017, the Fundamentally Understanding the Usability and Realistic Evolution of Artificial Intelligence Act of 2017 was introduced, with the aim of establishing a federal advisory committee to examine and wrestle with the economic opportunities and impacts that emerging AI technologies would have on many aspects of American life.

Arguably the most extensive legislative proposal on AI to date was the Algorithmic Accountability Act of 2019, introduced in 2019 by US Senators Cory Booker (D-NJ) and Ron Wyden (D-OR), along with Rep. Yvette D. Clarke (D-NY). Reportedly the first federal legislative effort to regulate AI systems across industries in the US, the Algorithmic Accountability Act of 2019 would have authorized the Federal Trade Commission (FTC) to issue and enforce regulations that target high-risk AI systems.

Among other AI-related bills that have been introduced in Congress in recent years are:

  • S. 1353: Advancing American AI Act (April 22, 2021),
  • H.R. 8132: American COMPETE Act (September 29, 2020),
  • H.R. 8346: Academic Research Protection Act (September 22, 2020),
  • H.Con.Res. 116: Expressing the sense of Congress with respect to the principles that should guide the national artificial intelligence strategy of the United States (September 16, 2020),
  • H.R. 2575: AI in Government Act (September 14, 2020),
  • H.R. 8230: The Integrating New Technologies to Empower Law Enforcement at Our Borders Act (September 11, 2020),
  • H.R. 8183: The ADAPT Act (September 8, 2020), and
  • S. 1558: The Artificial Intelligence Initiative Act or AI-IA (May 21, 2019).

And not all AI bills are derailed at the introductory phase; there are a number of legislative success stories.

On January 1, 2021, the National AI Initiative Act of 2020 became law as part of the National Defense Authorization Act of 2021. On February 1, 2021, the House Armed Services Committee created a new Subcommittee on Cyber, Innovative Technologies, and Information Systems. The National Defense Authorization Act of 2019 mandated the establishment of a 15-member National Security Commission on Artificial Intelligence (NSCAI). Formed in 2017, the Congressional Artificial Intelligence Caucus exists to, among other things, inform policymakers of the impacts of advances in AI.

The Executive Branch, including the White House and some federal agencies, has also taken steps to address AI.

For example, in 2020 the White House released Guidance for Regulation of Artificial Intelligence Applications. On January 12, 2021, the White House Office of Science and Technology Policy formally established the National AI Initiative Office. On January 27, 2021, President Joe Biden signed the Executive Order on the President’s Council of Advisors on Science and Technology, establishing the President’s Council of Advisors on Science and Technology (PCAST). The FTC is using its mandate to address AI-related issues. In a January 10, 2021, blog post, “Aiming for truth, fairness, and equity in your company’s use of AI,” the FTC announced the Commission’s plan to start bringing enforcement actions related to “biased algorithms.” In May 2021, the FTC finalized a settlement with photo app developer Everalbum, Inc. related to alleged misuse of facial recognition technology.

All things considered, Washington has taken some tentative steps towards addressing AI, including by supporting investments in AI, facilitating AI R&D, aggressively protecting American intellectual property, and committing to train an AI-ready workforce. That said, the US has yet to present a comprehensive vision for AI, and there is growing fear that “[w]ithout any such vision, other governments will fill the void.”

Should the US Regulate AI?

There is a growing list of reasons why comprehensive but balanced AI regulation is not only needed but long overdue. Below, I will zero in on three:

First, there is an emerging consensus that the US urgently needs AI legislation. There are calls for an AI Bill of Rights and calls for US lawmakers to address the use of AI by tech giants. Although businesses are far from united on this issue, a growing number of industry insiders have actually called for some form of regulation. Sundar Pichai, head of Google and its parent company Alphabet, has called for “sensible regulation of AI.” Tech-industry billionaire Elon Musk has, on numerous occasions, called for AI to be regulated in order to manage associated risks. “I have exposure to the very cutting edge AI, and I think people should be really concerned about it,” Musk told attendees at the National Governors Association summer meeting in 2017. Satya Nadella, Microsoft’s CEO, has also called for the regulation of facial recognition technologies and for global rules around AI.

Second, in the age of frontier and disruptive technologies that are driven, in part, by US companies, it could be argued that the US has an obligation to write rules to ensure that AI development is carried out ethically and responsibly. Rules are necessary to ensure that the benefits of AI are maximized and shared equitably. Conversely, rules are equally important to ensure that the numerous risks and dangers associated with AI are minimized or, where possible, prevented. Rules are also necessary to ensure that core values are protected and fundamental rights respected. Studies show that “neutral” technology can produce very negative outcomes, particularly for the most vulnerable and marginalized in a society. Negative outcomes associated with AI range from widespread discrimination and algorithmic bias to the potential violation of a broad spectrum of civil and political rights, as well as economic, social, and cultural rights. As health care, financial services, education, and other industries turn to automation for efficiency and increasingly rely on algorithms to make important decisions, more and more lives will be affected and, possibly, upended.

Finally, if Congress does not act, state and city legislators and regulators are likely to step in to fill the gap, and this could make an already confusing legal landscape even more bewildering. The end result could be an odd patchwork of state and municipal legislation. Already, state and city laws relating to AI systems are on the rise, with some imposing outright bans on facial recognition technology. Washington state’s Act Relating to the Use of Facial Recognition Services took effect on July 1, 2021. In May 2019, San Francisco became the first city in the US to ban the police and city government agencies from using facial recognition technology. In January 2021, Portland became the first city to prohibit private entities from using facial recognition technologies in places of “public accommodation” with the entry into force of Ordinance No. 190114. Several Massachusetts cities have also embraced regulation, including Boston, Brookline, Cambridge, Somerville, Springfield, and Northampton.

Where Do We Go From Here?

Although AI has been on Washington’s radar for decades, the US has yet to present a comprehensive vision for AI or pass a coherent and holistic piece of legislation on the matter.

There is a growing sense that Washington is not prepared to regulate AI and may be bent on discouraging other governments from regulating it as well. “Europe and our allies should avoid heavy-handed innovation-killing models, and instead consider a similar regulatory approach,” the White House warned in January 2020. The reality, of course, is that with the superpowers battling for AI supremacy, the priority now is bolstering relevant R&D, not regulation. The problem with this, as argued in a 2020 paper, is that the race for technological supremacy creates a complex ecology of choices that could push stakeholders to underestimate or even ignore ethical and safety procedures.

AI systems are ubiquitous and are increasingly used across all areas of public life. The question is no longer whether to regulate AI but rather what the scope of regulation should be.

In this context, it is important to carefully address some basic questions: How should AI be regulated? Who should make the rules? Which AI-supported technologies require regulation and which do not? A balance must be struck. As Maria Axente, responsible AI lead at PwC, told the BBC: “The question is how can it be done in a way that doesn’t kill innovation, as well as continue to balance the benefits of AI with the risks it poses, as AI becomes more embedded in our lives?”

As this debate continues, the benefits of regulation should not be forgotten, nor should the problems regulation can create. Regulation can go a long way toward promoting the trust of the American people in the development and deployment of AI-related technologies. The relationship between regulatory reform and innovation is complex and not fully understood. However, there is broad agreement that under the right circumstances, regulation can drive innovation and economic growth by, among other things, injecting much-needed certainty into the marketplace. Regulation can also foster competition and discourage anti-competitive business practices. Finally, regulation can help ensure policy and regulatory coherence in a sector that is still very new, very complex, and not completely understood by many.

Regulating an emerging, disruptive, and rapidly evolving technology is not easy. AI and other emerging technologies “are creating a sea change in today’s regulatory environment,” and pose significant challenges for regulators who strive to maintain a balance between competing policy objectives. The fact that AI is “dual-use, often open-source, and diffusing rapidly” makes regulation particularly challenging.

The sheer breadth and depth of AI and its advances also make regulation a daunting exercise. There are many important decisions to be made: whether to use current laws to govern the use of AI or draft new ones; whether to adopt a one-size-fits-all regulatory framework or a targeted, sectoral framework; how to enshrine values (e.g., liberties, equality, non-discrimination, and sustainable development) without stifling innovation and competition; and how to avoid creating overly complex or conflicting legal obligations.

There is also always the risk that regulation could stifle innovation, undermine competition, and make already powerful tech companies even more powerful by imposing too high a cost on small and medium-sized enterprises. Some analysts have argued that the EU’s General Data Protection Regulation actually helped Google and Facebook consolidate their dominance of the European advertising market.

One problem that is likely to hamper the development of comprehensive AI legislation is that although everyone, from government agencies and civil society groups to investors and tech leaders, agrees that the development of AI needs to be done responsibly, ethically, and carefully, there is little agreement on what this means in practical terms. Likewise, while most people agree that effective AI regulation must be sensible, balanced, future-proof, and comprehensive, what these adjectives mean in practical terms is also not always very clear.

To make a complicated landscape even more complicated, while Americans feel that AI should be regulated, “they are unsure who the regulators should be,” a study by the Center for the Governance of AI found; when asked who should decide how AI systems are designed and deployed, “half of Americans indicated they do not know or refused to answer.”

AI is developing rapidly and, in the United States, is being met with a wide array of federal AI policies and no consistent regulatory approach. This is problematic. But there is at least one bright spot on the horizon: the recently appointed Director of the National Artificial Intelligence Initiative Office, Lynne Parker, appears to be open to regulation and believes the US should have a vision for the regulation of AI, similar to the EU’s General Data Protection Regulation. “There’s a growing recognition that if we have just a patchwork of regulatory approaches, it’s not helping innovation at all,” Parker has been quoted as saying. Whether Parker’s vision can translate into timely and comprehensive AI regulation remains to be seen.

Between no regulation or deregulation on the one hand, and a complete ban on the other, an appropriate, balanced, smart, future-proof, and proactive regulatory framework can be found. The goal must be to protect important public interests and preserve core values while promoting innovation, competition, and economic growth. To that end, lawmakers and stakeholders must accept and embrace the unique regulatory challenges posed by AI and other digital-age technologies and respond accordingly.

As succinctly put in an article in Deloitte Insights: “[t]he assumption that regulations can be crafted slowly and deliberately, and then remain in place, unchanged, for long periods of time, has been upended in today’s environment. As new business models and services emerge … government agencies are challenged with creating or modifying regulations, enforcing them, and communicating them to the public at a previously undreamed-of pace. And they must do this while working within legacy frameworks and attempting to foster innovation.”

Uche Ewelukwa Ofodile (SJD, Harvard) is a Senior Fellow of the Mossavar-Rahmani Center for Business and Government at the Kennedy School of Government, Harvard University. She is the author of many monographs and articles on intellectual property rights, technology and the law, international trade and investment law, China in Africa, and global governance. She holds the E.J. Ball Endowed Chair at the University of Arkansas School of Law. She has previously written for JURIST on EU efforts to regulate Artificial Intelligence.