Advanced Malware: Why AI Can't Help All Hackers

Howard Marshall of Accenture Security on the AI-Generated Malware Landscape
The fear that generative AI tools such as ChatGPT could turn an unsophisticated "script kiddie-type hacker" into an adversary on the level of a nation-state is unfounded, said Howard Marshall, global intelligence lead at Accenture Security.
Marshall said most hackers lack the expertise and education needed to create sophisticated malware. While AI is evolving and adapting rapidly, the technology is not yet mature enough to automate the intricate process of writing malicious code effectively, he said. Unsophisticated attackers lack the knowledge to properly formulate queries and may not recognize inaccuracies in the generated output.
"In nation states, people with big budgets, folks that know what they're doing, AI will help them create [malware] faster. But there are not nearly as many of those people as there are at the bottom end of the threat spectrum," he said. "For that reason, the initial concern that gen AI will create 10 million more sophisticated hackers that can create eloquent malware - that's not going to happen anytime soon."
In this video interview with Information Security Media Group at Black Hat USA 2023, Marshall discussed:
- How AI is helping sophisticated threat actors;
- The impact of quality control around generative AI for adversaries and defenders;
- The rising interest of adversaries in targeting macOS.
Prior to joining Accenture, Marshall spent more than 20 years working with the FBI before retiring as the deputy assistant director of the agency's Cyber Division. He held six other positions during his tenure, including special agent in charge of the Louisville Division.