Expert Gives Another Reason to Fear the Phish: Smart AI Bots
Industry Veteran Peter Cassidy Discusses the Implications of ChatGPT
ChatGPT, an AI-based chatbot developed by OpenAI that specializes in dialogue, is raising concern among security professionals that criminals could use cheap, accessible natural language AI to write convincing phishing emails and pull off nefarious deepfake scams. Peter Cassidy, industry veteran and general secretary of the Anti-Phishing Working Group, explains how ChatGPT could make it easier for cybercriminals to hit their targets with accuracy and at scale.
Tools such as ChatGPT will make it harder for security teams and enterprises to identify phishing emails. "They have to be looking for more esoteric kinds of behaviors without interrupting the user experience," says Cassidy.
To defend against this type of phishing content, Cassidy advises security leaders to instruct their employees to slow down. "Tell your people in all scenarios … if you're given any kind of instruction of urgency, slow down and think twice," he says.
In a video interview with Information Security Media Group, Cassidy discusses:
- How phishing emails and tactics have changed over the years;
- How the new ChatGPT technology is different from previous software tools and the implications of having this technology in the marketplace;
- What organizations should do to defend against this type of phishing content.
Cassidy is a product development consultant, software designer, industrial analyst and widely published writer, speaker and commentator on information security, white-collar crime and electronic crime. He has spent decades investigating the intersection of security technologies, electronic commerce, public policy and financial crime in his many capacities.