SAP AI Core vulnerabilities expose customer data to cyberattacks

July 18, 2024 | Press release | Cloud Security / Enterprise Security

Cybersecurity researchers have uncovered security flaws in SAP AI Core, the cloud-based platform for creating and deploying predictive workflows based on artificial intelligence (AI), that could be exploited to obtain access tokens and customer data.

The five vulnerabilities were collectively dubbed SAPwned by cloud security company Wiz.

“The vulnerabilities we found could have allowed attackers to access customer data and contaminate internal artifacts – and thus spread to related services and other customers’ environments,” security researcher Hillai Ben-Sasson said in a report seen by The Hacker News.

Following responsible disclosure on January 25, 2024, the vulnerabilities were fixed by SAP on May 15, 2024.

In short, the vulnerabilities allow unauthorized access to customers' private artifacts and to credentials for cloud environments such as Amazon Web Services (AWS), Microsoft Azure, and SAP HANA Cloud.

They could also be used to modify Docker images in SAP’s internal container registry, SAP’s Docker images in the Google Container Registry, and artifacts hosted on SAP’s internal Artifactory server, which could lead to a supply chain attack on SAP AI Core services.

Furthermore, the access could be weaponized to gain cluster administrator rights to the SAP AI Core Kubernetes cluster by exploiting the fact that the Helm package manager server is exposed to both read and write operations.
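An exposed Helm v2 server (Tiller) listens on a well-known gRPC port, so the first step of such an attack is simply checking whether that port answers from inside the cluster network. The minimal sketch below is illustrative and not taken from the Wiz report; the `port_open` helper and the host names passed to it are hypothetical, though 44134 is Tiller's documented default port.

```python
import socket

TILLER_PORT = 44134  # default gRPC port for Helm v2's Tiller service


def port_open(host: str, port: int = TILLER_PORT, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the given port succeeds.

    A reachable, unauthenticated Tiller endpoint would let a caller issue
    read *and* write requests -- the condition the researchers describe.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

In practice a defender would run the same check from a workload pod to confirm that network policies actually block access to cluster-management services.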

“With this level of access, an attacker could directly access other customers’ pods and steal sensitive data such as models, datasets and code,” Ben-Sasson explained. “This access also allows attackers to interfere with customers’ pods, falsify AI data and manipulate the conclusions of the models.”

Wiz said the problems stem from the platform allowing malicious AI models and training procedures to be run without appropriate isolation and sandboxing mechanisms.

As a result, a threat actor could build a regular AI application on SAP AI Core, bypass network restrictions, and probe the Kubernetes pod’s internal network to obtain AWS tokens and access customer code and training datasets by exploiting misconfigurations in AWS Elastic File System (EFS) shares.
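Obtaining AWS tokens from inside a pod typically means reaching the EC2 instance metadata service (IMDS) on its link-local address, which hands out temporary role credentials as JSON. The sketch below shows that credential shape; it is a generic illustration of the technique, not code from the research, and the helper names and sample values are placeholders.

```python
import json

# Link-local address of the AWS instance metadata service -- reachable from
# any process on the node unless egress to that range is blocked.
IMDS_BASE = "http://169.254.169.254/latest/meta-data"


def credentials_url(role_name: str) -> str:
    """Build the IMDS path that returns temporary credentials for a role."""
    return f"{IMDS_BASE}/iam/security-credentials/{role_name}"


def parse_credentials(body: str) -> dict:
    """Pull out the fields an attacker would harvest from the IMDS response."""
    doc = json.loads(body)
    return {k: doc[k] for k in ("AccessKeyId", "SecretAccessKey", "Token")}


# Placeholder response matching the JSON shape IMDS returns:
sample = (
    '{"AccessKeyId": "ASIA-EXAMPLE", "SecretAccessKey": "secret-example", '
    '"Token": "token-example", "Expiration": "2024-07-18T00:00:00Z"}'
)
print(parse_credentials(sample)["AccessKeyId"])
```

This is why hardened clusters require IMDSv2 (session-token-based access) and block pod egress to 169.254.169.254 entirely.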

“Training an AI, by definition, requires the execution of arbitrary code, so appropriate safeguards should be in place to ensure that untrusted code is properly separated from internal assets and other tenants,” said Ben-Sasson.

The findings come after Netskope revealed that the increasing use of generative AI in enterprises has prompted them to deploy blocking controls, data loss prevention (DLP) tools, real-time coaching, and other mechanisms to mitigate risk.

“Regulated data (data that organizations are legally required to protect) accounts for more than a third of the sensitive data shared with generative AI (GenAI) applications – putting organizations at potential risk of costly data breaches,” the company said.

Researchers at SentinelOne are also tracking the emergence of a new cybercriminal threat group called NullBulge, which has been targeting AI- and gaming-focused companies since April 2024 with the goal of stealing sensitive data and selling compromised OpenAI API keys on underground forums, all while claiming to be a hacktivist crew protecting "artists around the world" from AI.

“NullBulge targets the software supply chain by weaponizing code in publicly available repositories on GitHub and Hugging Face and tricking victims into importing malicious libraries or using mod packages used by gaming and modeling software,” said Jim Walter, security researcher at SentinelOne.

“The group uses tools such as AsyncRAT and XWorm before delivering LockBit payloads built using the leaked LockBit Black Builder. Groups like NullBulge represent the persistent threat of ransomware with low barriers to entry, combined with the everlasting effect of infections from info-stealers.”
