White House Says Agencies on Track to Meet January AI Goals

Official Says Administration Taking 'Aggressive Set of Actions' to Meet Deadlines
White House Special Advisor for Artificial Intelligence Ben Buchanan said federal agencies are meeting deadlines given in an October 2023 executive order on AI. (Image: Aspen Institute)

Federal agencies are making significant headway in achieving a series of critical milestones included in a sweeping executive order on artificial intelligence the president signed in October 2023, according to White House Special Advisor for AI Ben Buchanan.

The executive order gives federal agencies until Jan. 28 to implement key cybersecurity components, including completing assessments of potential cybersecurity risks associated with the use of AI in critical infrastructure sectors. Some experts cast doubt on how effectively federal agencies are able to implement the order, given a lack of funding and technical expertise (see: Why Biden's Robust AI Executive Order May Fall Short in 2024).

Many agencies have already achieved significant milestones as the administration aims to establish new standards and regulations around the use of AI systems, Buchanan said Tuesday.

"One of the places where we know we have to move fast is on a lot of the safety and security issues," Buchanan said at an event hosted by Aspen Digital, adding that the National Institute of Standards and Technology recently launched the AI Safety Institute to support the development and deployment of safe and trustworthy AI systems. The order also requires the Department of Health and Human Services to establish its own AI task force by the end of the month.

"We've definitely had an aggressive set of actions here, and the EO is pretty aggressive across the board," he added.

President Joe Biden invoked the Defense Production Act when signing the executive order, requiring organizations developing certain foundation models that can pose national security risks to provide the federal government with the results of red-team testing and other safety evaluations (see: White House Issues Sweeping Executive Order to Secure AI).

The order directs the Department of Commerce to implement those requirements by the end of the month, in addition to proposing new regulations that address the use of U.S. digital infrastructure-as-a-service products by foreign adversaries and malicious actors.

The deadlines included in the order aim to ensure agencies can keep up with the rapid pace of technological advancement while responsibly deploying AI tools and technologies.

Under the guidance, the General Services Administration has until the end of January to develop and issue a framework that aims to prioritize emerging tech offerings in the Federal Risk and Authorization Management Program authorization process. The director of the Office of Personnel Management will be required to establish new hiring measures to ensure the federal government is hiring subject matter experts and using skills-based assessments to recruit AI talent.

While agencies have made progress toward the goals outlined in the order, Buchanan said the guidance has inherent limits on its impact on the private sector, which has been responsible for developing many of the most popular AI models.

"In 2024, 2025, I think we're going to have to decide what [should] AI legislation look like?" he said. "There are things that only Congress can do, and I think we're going to have to have that conversation."


About the Author

Chris Riotta

Managing Editor, GovInfoSecurity

Riotta is a journalist based in Washington, D.C. He earned his master's degree from the Columbia University Graduate School of Journalism, where he served as 2021 class president. His reporting has appeared in NBC News, Nextgov/FCW, Newsweek Magazine, The Independent and more.