Resource Hijacking: Cloud Service Hijacking - Bedrock LLM Abuse
AWS Specific Sub-Technique
Other sub-techniques of Resource Hijacking (7)
ID | Name |
---|---|
T1496.004 | Cloud Service Hijacking |
T1496.A007 | Cloud Service Hijacking - Bedrock LLM Abuse |
T1496.A001 | Cloud Service Hijacking - SES Messaging |
T1496.001 | Compute Hijacking |
T1496.A008 | Compute Hijacking - EC2 Use |
T1496.A006 | Compute Hijacking - ECS |
T1496.003 | SMS Pumping |
AWS Specific Content
A prerequisite for this technique is that a threat actor has already gained control of an AWS identity with the permissions to perform the actions in the AWS CloudTrail Event Name(s) section.
Amazon Bedrock is a fully managed service that makes high-performing foundation models (FMs), including large language models (LLMs), from leading AI companies and Amazon available through a unified API. Using this technique, a threat actor can send prompts to LLMs hosted on Amazon Bedrock. The threat actor can then trade or sell access to the LLMs to other entities, while the compromised AWS account holder remains responsible for paying the usage charges.
Detection
AWS Specific Content
When this technique is used, actions taken by the threat actor with the obtained credentials will be logged in CloudTrail. You can use the Event History page in the AWS CloudTrail console to view the last 90 days of management events in an AWS Region for the events listed in the AWS CloudTrail Event Name(s) section, such as bedrock:InvokeModel, bedrock:InvokeModelWithResponseStream, bedrock:Converse, and bedrock:ConverseWithResponseStream.
A separate CloudTrail trail gives you an ongoing record of events in your AWS account beyond 90 days and can be configured to log events in multiple Regions. You can review events using either the console or the AWS CLI.
When looking through Event history for events related to this technique, note that these actions are non-mutating and are therefore listed as readOnly, which means the events will not be visible if filters are set to show only mutating actions.
It is also possible to create a CloudWatch metric filter that watches for specific AWS API calls and sends a notification when they are logged, and additionally to configure CloudWatch to automatically perform an action in response to an alarm.
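As an illustration of the detection logic above, the following minimal sketch scans a list of CloudTrail records for Bedrock model-invocation events. The field names (eventSource, eventName, eventTime) follow the standard CloudTrail record format; the sample records themselves are hypothetical:

```python
# Bedrock data-plane actions associated with this technique
BEDROCK_INVOKE_EVENTS = {
    "InvokeModel",
    "InvokeModelWithResponseStream",
    "Converse",
    "ConverseWithResponseStream",
}

def find_bedrock_invocations(records):
    """Return the CloudTrail records that are Bedrock model-invocation events."""
    return [
        r for r in records
        if r.get("eventSource") == "bedrock.amazonaws.com"
        and r.get("eventName") in BEDROCK_INVOKE_EVENTS
    ]

# Hypothetical sample records in the shape CloudTrail uses
sample = [
    {"eventSource": "bedrock.amazonaws.com", "eventName": "InvokeModel",
     "eventTime": "2024-01-01T00:00:00Z"},
    {"eventSource": "s3.amazonaws.com", "eventName": "GetObject",
     "eventTime": "2024-01-01T00:00:01Z"},
]
hits = find_bedrock_invocations(sample)
```

In practice the records would come from a trail's log files delivered to S3 or from a CloudTrail Lake query rather than an inline list.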
Amazon GuardDuty is a managed threat detection service that continuously monitors for malicious or unauthorized behavior and includes integrated detections for LLM abuse based on anomalous behavior. This helps customers protect their AWS accounts and workloads, and helps identify suspicious activity before it can escalate into an incident. For additional information on how GuardDuty protects AI workloads, click here.
In some cases, a threat actor will also attempt to increase the available quota for individual request types within Amazon Bedrock; for example, Service Quotas increases can be requested to raise the number of on-demand bedrock:InvokeModel requests per minute. If the threat actor issues a quota increase request, a RequestServiceQuotaIncrease event (with event source servicequotas.amazonaws.com) will be logged in CloudTrail, as described in the Modify Cloud Compute Infrastructure > Modify Cloud Compute Configurations technique.
If capturing the prompts sent to, and responses received from, LLMs in your generative AI applications is important, configure Model Invocation Logging in Amazon Bedrock. Note that when threat actors access LLMs in accounts where generative AI applications have not already been configured, they rarely enable Model Invocation Logging themselves. For additional reading on how to respond to security events where generative AI is the source or the target, see the Methodology for incident response on generative AI workloads, which is available here.
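As a sketch of enabling Model Invocation Logging from the AWS CLI, the following command delivers prompt and response text to an S3 bucket; the bucket name and key prefix are placeholders you would replace with your own:

```shell
# Enable Bedrock Model Invocation Logging to a hypothetical S3 bucket
aws bedrock put-model-invocation-logging-configuration \
  --logging-config '{
    "s3Config": {
      "bucketName": "example-bedrock-invocation-logs",
      "keyPrefix": "invocation-logs"
    },
    "textDataDeliveryEnabled": true
  }'
```

The bucket policy must allow the Bedrock service to write to the bucket; delivery to CloudWatch Logs can be configured in the same call via a cloudWatchConfig block.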
Mitigation
AWS Specific Content
Service Control Policies (SCPs) are a type of organization policy that you can use to manage permissions in your organization. SCPs offer central control over the maximum available permissions for the IAM users and IAM roles in your organization and can be used to restrict which Amazon Bedrock actions are available within member accounts. You should also ensure that principals are scoped with the least-privileged permissions necessary to perform their duties, limiting the ability to use Bedrock to only those identities that require it.
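As an illustrative sketch of the SCP approach, the following policy denies Bedrock model invocation across member accounts unless the caller is an approved role; the role name is a placeholder for whatever exception your organization defines:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyBedrockInvocation",
      "Effect": "Deny",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream",
        "bedrock:Converse",
        "bedrock:ConverseWithResponseStream"
      ],
      "Resource": "*",
      "Condition": {
        "ArnNotLike": {
          "aws:PrincipalArn": "arn:aws:iam::*:role/approved-bedrock-role"
        }
      }
    }
  ]
}
```

Because SCPs set the maximum available permissions, this denial applies even to principals whose identity-based policies would otherwise allow the actions.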
You can also use identity-based policies to restrict the use of Amazon Bedrock. The following identity-based policy example denies running inference on a specific model (for a list of model IDs, see Amazon Bedrock model IDs). Note: test policies in a development environment before deploying them in production:
{
  "Version": "2012-10-17",
  "Statement": {
    "Sid": "DenyInference",
    "Effect": "Deny",
    "Action": [
      "bedrock:InvokeModel",
      "bedrock:InvokeModelWithResponseStream",
      "bedrock:Converse",
      "bedrock:ConverseWithResponseStream"
    ],
    "Resource": "arn:aws:bedrock:*::foundation-model/model-id"
  }
}
Additional examples of identity-based policies to restrict the use of Amazon Bedrock are available here.
IAM Access Analyzer can be used to regularly review and verify access and manage permissions across your AWS environment; it highlights AWS identities with excessive permissions and the actions performed by those identities.