Generative AI holds great potential for many applications in healthcare, but it's critical to establish a strong framework before deploying it, said Barbee Mooneyhan, vice president of security, IT and privacy at Woebot Health, a provider of AI-driven online mental health services.
AI allows U.S. agencies to address hard problems such as quickly writing secure code, but it comes with risks, including nation-states generating attacks more efficiently. "The cybersecurity element is a great example of the bright and the dark side of AI technology," said White House Office of Science and Technology Policy Director Arati Prabhakar.
U.S. federal agencies are advising organizations to hone their real-time verification capabilities and passive detection techniques to mitigate the impact of deepfakes. The technology's easy accessibility means even less capable malicious actors can exploit deepfakes' growing realism.
The European Union will open up supercomputers to artificial intelligence startups in a bid to boost innovation inside the trading bloc, European Commission President Ursula von der Leyen said Wednesday. She said Europe has a "narrowing window of opportunity" to guide responsible innovation.
As tech companies rush to incorporate AI into their products, artificial intelligence deployed without human supervision risks catastrophe, two tech executives warned a panel of U.S. senators who intend to introduce regulatory legislation later this year.
Adobe, IBM, Nvidia and five other tech giants on Tuesday signed on to a White House-driven initiative for developing secure and trustworthy generative artificial intelligence models. The commitments, at least for now, are the closest approximation of targeted AI regulation in the United States.
California Gov. Gavin Newsom on Wednesday signed an executive order to study the development, use and risks of artificial intelligence, and develop a process to deploy "trustworthy AI" in the state government. The order calls for a staggered implementation over the next two years.
The rise of artificial intelligence makes it easier for adversaries to harm the U.S. and introduces new risks around malicious insiders with loyalties to China, experts said during a Senate hearing. Generative AI can help less technically sophisticated threat actors carry out complex cyberattacks.
Artificial intelligence holds the potential to undermine trust in democracy - but overwrought warnings themselves can erode trust in the system critics seek to preserve, warns a cybersecurity firm. AI is "a long way from massively influencing our perception of reality and political discourse."
The Dutch privacy regulator says imminent artificial intelligence regulation in the European Union may fail to prevent the rollout of dangerous algorithms. Europe is close to finalizing the AI Act, but citizens of the Netherlands "should not expect miracles," the regulator said.
Regulatory scrutiny over artificial intelligence will only mount, warns consultancy KPMG in a report advising companies to proactively set up guardrails to manage risk. Even in the absence of regulatory regimes, "companies must proactively set appropriate risk and compliance guardrails."
The U.K. plans to hold its first-ever global summit on artificial intelligence this November. Goals of the event include detailing AI risks and opportunities, building effective frameworks for using AI safely, and setting international standards to manage AI risks and enforce norms.
Threat actors are manipulating the technology behind large language model chatbots to access confidential information, generate offensive content and "trigger unintended consequences," warns the U.K. National Cyber Security Centre. Prompt injection attacks are "extremely difficult" to mitigate.
It's critical for healthcare sector entities considering - or already using - generative AI applications to create an extensive threat modeling infrastructure and understand all attack vectors, said Mervyn Chapman, principal consultant at consulting and managed services firm Ahead.
British lawmakers are calling on the government to speed up efforts to articulate a comprehensive artificial intelligence policy in the face of challenges ranging from bias to existential risk. Delay could erode Britain's position "as a center of AI research," the lawmakers said.