AI in Cybercrime: Lowering the Barrier for Bad Actors
ReliaQuest Field CISO Rick Holland on How Cybercriminals Are Exploiting AI Tools
Like security practitioners, cybercriminals want AI. But in the AI-versus-AI cyber battle, the barrier for malicious actors "keeps getting lower and lower, while the barrier for defenders is getting more complex and more difficult," said Rick Holland, field CISO, ReliaQuest.
Attackers have new, sophisticated tools at their disposal, and cybercriminal forums now feature dedicated sections for AI tools such as FraudGPT and ChaosGPT. For as little as 500 euros, malicious actors can buy access to such large language models, making sophisticated attacks more accessible and reshaping the threat landscape. "Criminals want their own Copilot, just like we have a Copilot for every security technology out there," Holland said.
For security teams, sticking with the basics still works. "Ransomware is probably on the top of everyone's threat model, or it should be. If you can't defend against that - if you're not patching your VPNs and you don't have multifactor authentication - you're leaving the doors wide open. The light's flashing," he said.
In this video interview with Information Security Media Group at Infosecurity Europe 2024, Holland discussed:
- The challenges cybersecurity defenders face due to AI advancements;
- How AI-driven phishing and social engineering tactics are evolving;
- The rise of business email compromise and email account takeover.
Holland has a background as a practitioner, vendor executive and Forrester industry analyst. At ReliaQuest, he manages the global team responsible for the company's threat intelligence and research efforts.