Deception: Can AI Tools Build Better Decoys and Honeypots?

Generative AI Has Engineering and Design Promise, Says Lupovis' Xavier Bellekens
Can generative artificial intelligence tools be used to help better deceive attackers? That was a question posed by Xavier Bellekens, CEO of deception-as-a-service platform vendor Lupovis.
"We decided to use generative AI to write the code for us of a decoy or a honeypot and see what it would come up with," Bellekens said.
Deception technology provides early warning of potential hacking attacks by luring bad actors into fake systems or data stores designed to trigger alerts whenever anyone accesses them. Given generative AI's ability to rapidly produce new content, Bellekens' goal was to see whether ChatGPT and similar tools could create convincing decoys, generating realistic-looking text and graphics for everything from remotely accessible CCTV cameras and programmable logic controllers to printers and unprotected databases storing passport data.
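The core mechanic described above, a fake service whose only real job is to raise an alarm when touched, can be sketched in a few dozen lines. This is a minimal, hypothetical low-interaction decoy, not Lupovis' implementation or ChatGPT's output; the banner text, port handling, and in-memory alert list are all illustrative assumptions (a real deployment would feed alerts to a SIEM).

```python
# Minimal sketch of a low-interaction decoy service: log every connection
# attempt as an early-warning signal and present a fake service banner.
# All names and the banner text are illustrative, not a vendor's design.
import socket
import threading

ALERTS = []  # stand-in for a real alerting pipeline (SIEM, webhook, etc.)


def handle(conn, addr):
    """Record the contact, send a decoy banner, capture the first bytes sent."""
    ALERTS.append({"peer": addr[0]})
    try:
        conn.sendall(b"220 printer-svc FTP ready\r\n")  # fake banner (illustrative)
        conn.settimeout(2)
        try:
            ALERTS[-1]["first_bytes"] = conn.recv(1024)
        except socket.timeout:
            pass  # attacker connected but sent nothing; the alert still fired
    finally:
        conn.close()


def run_decoy(host="127.0.0.1", port=0):
    """Start the decoy listener in the background; return (socket, bound_port)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))  # port 0 lets the OS pick a free port
    srv.listen(5)

    def loop():
        while True:
            try:
                conn, addr = srv.accept()
            except OSError:  # listener socket closed; stop accepting
                return
            threading.Thread(target=handle, args=(conn, addr), daemon=True).start()

    threading.Thread(target=loop, daemon=True).start()
    return srv, srv.getsockname()[1]
```

The design point is that the decoy does almost nothing: any inbound connection is by definition suspicious, so a single log entry per contact is already a high-fidelity signal. What generative AI adds, per the interview, is the convincing surface, banners, pages, and responses that make such a listener look like a real printer or PLC.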
In this video interview with Information Security Media Group, Bellekens also discussed:
- Which ChatGPT-generated decoys were top targets for attackers, and why;
- Current limits for building convincing decoys using ChatGPT;
- The potential for future, more capable large language models to power prediction engines.
Bellekens co-founded Lupovis.io, a cyber-deception spinout from Scotland's University of Strathclyde, where he serves as an assistant professor in the Institute for Signals, Sensors and Communications. He also serves as chair of the IEEE U.K. and Ireland's Blockchain Group and vice chair of its Cybersecurity Group. He previously served as a nonresident senior fellow of the Atlantic Council's Cyber Statecraft Initiative.