Financial institutions globally have invested heavily in anti-financial crimes strategies and tools that report potential risk to regulatory authorities. But so have their adversaries. David Stewart and Keith Swanson discuss how institutions are using AI/ML to create more effective fraud defenses.
ServiceNow wants to apply generative AI to its knowledge around how customer environments are configured to help organizations harden their digital attack surface. Security product leader Lou Fiorello said ServiceNow will use generative AI to leverage its presence across the entire enterprise.
Atlanta-based trust intelligence firm OneTrust has balanced growth and profitability and now plans to use its $150 million funding round to boost its financial controls and processes and recruit a majority independent board to prepare for an eventual initial public offering, said CEO Kabir Barday.
Organizations struggle with governing the data that goes into and informs large language models since it's in documents rather than spreadsheets or SQL databases, said BigID CEO Dimitri Sirota. Companies need a more effective governance framework for managing unstructured data, Sirota said.
In this episode of CyberEd.io's podcast series "Cybersecurity Insights," Morphisec's Michael Gorelik discussed automated moving target defense, or AMTD - a risk-reduction strategy and preventive measure that lowers adversary success rates and provides "the final layer of defense."
Adversaries use artificial intelligence to obtain explosives, advance sextortion schemes and propagate malware through malicious websites that appear legitimate. Intelligence officials grapple with emboldened criminals who use AI for nefarious purposes and nation-state actors such as China.
Natural language models aren't the boon to auditing many in the Web3 community hoped that generative artificial intelligence tools would be. After a burst of optimism, the consensus now is that AI tools generate well-written, perfectly formatted - and completely worthless - bug reports.
Unintended bias in artificial intelligence is a bigger privacy concern than deliberate misuse when facial recognition is used in public areas and the data is handled by AI, according to Harry Boje, data protection and privacy officer at Paydek.
Cybercriminals are using an evil twin of OpenAI's generative artificial intelligence tool ChatGPT. It's called FraudGPT, it's available on criminal forums, and it can be used to write malicious code and create convincing phishing emails. A similar tool called WormGPT is also available.
A startup led by former AWS and Oracle AI executives completed a Series A funding round to strengthen security around ML systems and AI applications. Seattle-based Protect AI plans to use the $35 million investment to expand its AI Radar tool and research unique threats in the AI and ML landscape.
Supply chain compromise, open-source technology and rapid advances in AI capabilities pose significant challenges to safeguarding artificial intelligence systems. The "giant leap" achieved by systems such as ChatGPT makes it tough to discern whether someone is interacting with a human or a machine.
With both excitement and fear swirling around the opportunities and risks offered by emerging AI, seven technology companies - including Microsoft, Amazon, Google and Meta - have promised the White House they would ensure the development of AI products that are safe, secure and trustworthy.
In the latest weekly update, ISMG editors discuss key takeaways from ISMG's recent Healthcare Summit, how the healthcare sector is embracing generative AI tools, and why Microsoft just decided to give all customers access to expanded logging capabilities.
Organizations went from having little information about their security posture to drowning in so many alerts that no human could possibly understand them all. Broadcom has focused on artificial intelligence for IT operations to help companies identify and remediate the root cause of security alerts.
Singapore's Personal Data Protection Commission has released proposed guidelines for the use and processing of personal data for the development of and research into AI systems. The privacy agency said that in some cases organizations may not require prior consent.