In late 2023, the debut of ISO/IEC 42001 marked a major advance in AI standards, offering a systematic framework for AI management. Avani Desai of Schellman sees it as a "paradigm shift" that emphasizes managing AI-specific risks distinct from traditional concerns.
Nearly 1,000 artificial intelligence and technology experts globally have called for regulation around deepfakes to mitigate risks including fraud and political disinformation that could cause "mass confusion." The letter comes on the heels of a 400% spike in deepfake content in the past four years.
In the latest weekly update, Jeremy Grant of Venable LLP joins editors at ISMG to discuss the state of secure identity in 2024, the challenges in developing next-generation remote ID proofing systems, and the potential role generative AI can play in both compromising and protecting identities.
In most organizations, the privacy team plays an important role in artificial intelligence implementation and governance. Tarun Samtani, DPO and privacy program director at International SOS, said privacy principles inherently align with AI technology's demand for responsible data use.
Twenty technology giants including Google and Meta pledged Friday to combat the presence of artificially generated deepfake content meant to deceive voters as more than 4 billion people in more than 70 countries prepare for elections this year.
The U.S. Federal Trade Commission said it's too easy for fraudsters to launch "child in trouble" and romance scams, so it has proposed rule-making that would give the agency new authority to sue in federal court any technology providers that facilitate impersonation fraud.
In the latest weekly update, four ISMG editors discussed the relatively low profile of cyberwarfare in recent international conflicts, the potential revival of a dormant HIPAA compliance audit program and the security implications of sovereign AI development.
The AI industry is exploding with demand for talent that can navigate the maze of machine learning, data analytics and neural networks. But what does this mean for the average IT person looking for a job? Steve King of CyberEd.io discusses finding work in the AI field.
Nation-state hackers including Russian military intelligence and hackers backed by China have used OpenAI large language models for research and to craft phishing emails, the artificial intelligence company disclosed Tuesday in conjunction with major financial backer Microsoft.
Google called on governments across the globe to create a cross-border framework to ensure that artificial intelligence can effectively fight cyberthreats. The company said the technology could offset the inherent advantages attackers have held in cyberspace almost since the start of the internet.
Two key European Parliament committees accepted a political compromise that aims to govern how trading bloc countries develop and deploy artificial intelligence. The measure is set to become the world's first comprehensive AI regulation.
The U.S. federal patent authority aims to provide clarity on how it will analyze AI-assisted inventions. Only humans can be named as inventors on patents, and at least one human must be named as the inventor of any given claim, the U.S. Patent and Trademark Office said Tuesday.
Beyond the hype, AI is transforming cybersecurity by automating threat detection, streamlining incident response and predicting attacker behaviors. Organizations are increasingly deploying AI to protect their data, stay ahead of cybercriminals and build more resilient security systems.
A federal government IT modernization funding program is looking to invest in projects that will help hasten the implementation of artificial intelligence to improve efficiencies and service delivery among government agencies. It will favor proposals with budgets under $6 million.
Large language models may boost the capabilities of novice hackers but are of little use to more experienced threat actors, concludes a British government evaluation. "There may be a limited number of tasks in which use of currently deployed LLMs could increase the capability of a novice."