US government watchdog finds federal use of artificial intelligence poses threat to federal agencies and public

The US Government Accountability Office (GAO) released a public report Tuesday stating that most federal agencies that use facial recognition technology systems are unaware of the “privacy and accuracy-related risks” that such systems pose to federal agencies and the American public.

After holding a forum on AI oversight, the GAO developed an artificial intelligence (AI) accountability framework focused on “governance, data, performance, and monitoring—to help federal agencies and others use AI responsibly.”

Of the 42 federal agencies that the GAO surveyed, 20 reported owning or using facial recognition technology systems. The GAO confirmed that most of these agencies are unaware of which systems their employees use, remarking that they have “not fully assessed the potential risks of using these systems, such as risks related to privacy and accuracy.” Consequently, the GAO noted that the use of these AI systems can pose “[n]umerous risks to federal agencies and the public.”

The GAO, which has provided objective, non-partisan information on government operations for a century, said:

AI is a transformative technology with applications in medicine, agriculture, manufacturing, transportation, defense, and many other areas. It also holds substantial promise for improving government operations. Federal guidance has focused on ensuring AI is responsible, equitable, traceable, reliable, and governable. Third-party assessments and audits are important to achieving these goals. However, AI systems pose unique challenges to such oversight because their inputs and operations are not always visible.

In March, the American Civil Liberties Union (ACLU) requested information on how intelligence agencies use AI for national security purposes. In its request, the ACLU warned that AI systems can be biased against marginalized communities and may pose a risk to civil rights.