US Mandates AI Transparency, Safeguards for Government Applications

The US is addressing AI transparency by requiring federal agencies to conduct risk assessments and oversee their AI operations.

On Thursday, the White House announced that federal agencies must adopt stringent measures governing their use of artificial intelligence (AI), with a deadline of December 1. These measures aim to protect American citizens’ rights and ensure safety amid the broadening role of AI across government functions.

The directive from the Office of Management and Budget instructs federal agencies to diligently monitor, evaluate, and validate AI’s effects on the public. It stresses the need to counter algorithmic bias and to enhance public insight into the government’s use of AI. It also requires agencies to perform risk assessments and establish benchmarks for operation and oversight.

The administration has underscored the need for concrete safeguards in AI applications that may affect the civil liberties or well-being of U.S. residents, including comprehensive public reporting that clarifies how AI is used within federal operations.

In an effort to manage AI’s potential risks to national security, economic stability, public health, and safety, President Joe Biden signed an executive order in October. The order requires AI developers to disclose safety evaluation results to U.S. authorities before releasing their systems to the public.

Among the newly announced safeguards is a provision allowing air passengers to opt out of the facial recognition technology used by the Transportation Security Administration without facing screening delays. In federal health care, human oversight is mandated for AI-driven diagnostic tools to verify the accuracy of their outputs.

The advent of generative AI, capable of producing text, imagery, and videos from broad prompts, has sparked both enthusiasm and apprehension, raising concerns about job displacement, electoral disruption, and fears that AI could ultimately overpower humans.

To foster transparency, federal agencies are now obliged to publish inventories of their AI applications and operational metrics and, where security is not compromised, to share government-held AI resources.

The directive references current AI applications within the federal framework, such as the Federal Emergency Management Agency’s AI-enabled damage evaluations post-hurricanes, the Centers for Disease Control and Prevention’s AI use in forecasting disease spread and opioid detection, and the Federal Aviation Administration’s AI utilization in optimizing metropolitan air traffic to enhance travel efficiency.

The administration’s initiative includes the recruitment of 100 AI specialists to promote the safe use of AI and requires federal agencies to appoint chief AI officers within the next 60 days.

In January, the Biden administration proposed a policy that would require U.S. cloud service providers to determine whether foreign entities are accessing American data centers to train AI models, in line with “know your customer” regulations.
