MalBot, November 15, 2024, 12:55am: An exploit using a poisoned model could enable exfiltration of sensitive fine-tuned LLM adapters.
-
In a recent report, Palo Alto Networks researchers disclosed two critical vulnerabilities in Google’s Vertex AI platform that could expose organizations to serious security risks. Known as ModeLeak, these vulnerabilities enable privilege escalation and model exfiltration, potentially allowing attackers to access sensitive machine learning (ML) and large language model (LLM) data within Vertex AI environments.
-
Executive Summary: In the race to gain a competitive edge, organizations are increasingly training artificial intelligence (AI) models on sensitive data. But what if a seemingly harmless AI model became a gateway for attackers?
-
We’re pleased to announce that Orca Security has been accepted into Google Cloud’s Generative AI Partner Initiative, following the integration of Google Cloud’s Vertex AI into the Orca Cloud Security Platform. Orca’s Vertex AI integration enables customers to automatically generate remediation code and instructions to fix identified cloud risks – multiplying productivity, lowering skill thresholds,…
-
The Sysdig Threat Research Team (Sysdig TRT) recently discovered a new Freejacking campaign abusing Google’s Vertex AI platform for cryptomining. As a SaaS offering, Vertex AI is exposed to a number of attacks, including Freejacking and account takeovers. Freejacking is the abuse of free services, such as free trials, for financial gain.…