Why Biden’s Robust AI Executive Order May Fall Short in 2024

Experts Say Agencies Lack Clear Guidance and Capabilities to Enforce Key Measures
What will the U.S. federal government do with the red-team reports it receives under President Joe Biden's October AI executive order? (Image: Shutterstock)

The federal government lacks critical capabilities necessary to enforce a key component of the administration's executive order on artificial intelligence, which directs advanced AI developers to share the results of red-team safety tests, experts told Information Security Media Group.

President Joe Biden signed a sweeping executive order in late October that invoked Cold War-era executive powers, requiring companies developing AI models that potentially pose serious risks to national security, national economic security or national public health and safety to share their test results with the federal government (see: White House Issues Sweeping Executive Order to Secure AI).

"Companies must tell the government about the large-scale AI systems they're developing and share rigorous independent test results to prove they pose no national security or safety risk to the American people," Biden said, adding: "In the wrong hands, AI can make it easier for hackers to exploit vulnerabilities in the software that makes our society run."

The order tasks the National Institute of Standards and Technology with setting rigorous standards for red-team testing before new foundation models are publicly deployed, and it directs the Department of Homeland Security to apply those standards across critical infrastructure sectors. But the order does not specify which federal agencies will be tasked with reviewing the results of those tests, and it does not provide specific mitigation processes for instances in which red-team safety tests reveal significant risks.

Even if the guidance did provide those details, "it is unlikely that the government will have time to police all but the largest of firms," said Rob Mason, chief technology officer of the software security firm Applause. "Establishing fair and effective certification processes and enforcement methods could take a considerable amount of time."

In November, the Office of Management and Budget released implementation guidance for the order, directing federal agencies to designate chief AI officers, establish AI governance boards and comply with testing, training and monitoring safeguards to ensure the responsible development and deployment of government AI initiatives.

The guidance instructs agencies to publicly report the use of AI "and their plans to achieve consistency" with the executive order. But it does not provide clear directions for agencies and federal contractors on how they should share the results of their safety tests on AI systems, experts said.

The Office of Management and Budget did not respond to a request for comment.

"It's likely the initial strategy may involve organizations self-reporting their adherence to guidelines and consumers and users becoming the enforcers," Mason said.

The order has received criticism from some industry groups, including NetChoice, a technology trade association funded by Google and Meta, which said in a statement that the guidelines "will result in stifling new companies and competitors from entering the marketplace and significantly expanding the power of the federal government over American innovation."

The U.S. Chamber of Commerce - the largest lobbying group in the country, representing over 3 million organizations - in a statement praised the administration for outlining specific priorities in the executive order, including "attracting highly skilled workers, bolstering resources needed for intra-government coordination and speeding up the development of standards."

But the group also warned that "substantive and process problems still exist" throughout the guidance, such as "short overlapping timelines for agency-required action," which can lead to "ill-informed rule-making and degrading intra-government cooperation."

As the administration continues to map out a plan for implementing the executive order, it will also face the challenge of ensuring federal agencies have the technical staff and knowledge to assess safety test results, enforce key components and ensure the responsible development and deployment of AI systems.

The federal government remains hampered in its AI and cyber capabilities by a growing global cyber talent gap. The nonprofit International Information System Security Certification Consortium, known as ISC2, found in its 2023 Cybersecurity Workforce Study a record-breaking gap of 4 million between global demand and cyber workforce capacity in 2023.

According to the study, the top three skills gaps across cybersecurity organizations are in cloud computing security, zero trust implementation and AI or machine learning.

Federal agencies are further constrained by rigid pay scales and salaries that lag comparable positions outside of government, and they suffer from hiring time frames more than double those of the private sector, according to research from the nonprofit Partnership for Public Service.


About the Author

Chris Riotta

Managing Editor, GovInfoSecurity

Riotta is a journalist based in Washington, D.C. He earned his master's degree from the Columbia University Graduate School of Journalism, where he served as 2021 class president. His reporting has appeared in NBC News, Nextgov/FCW, Newsweek Magazine, The Independent and more.



