Protecting Amazon S3 Data from Ransomware

Posted by Ben Potter on Tuesday, May 25, 2021


Amazon Web Services (AWS) Simple Storage Service (S3) is incredibly durable, secure by default, and feature rich. Like every other storage system on the planet, it is unfortunately not immune to ransomware attacks. In this post I'll cover how ransomware can work in S3, and a few simple steps you can take to help protect your data from it.

Attack Vectors

Ransomware in S3 works differently than it does on a traditional computer or server with a file system. S3 can only be accessed through the S3 API, and every API action (Put, Get, Delete, and so on) must be authenticated. This puts the focus on how you manage credentials (see the next section) and how you configure permissions. Also, S3 does not run on a computer of yours that can be compromised by malware. With ransomware on a traditional file system, the file (called an object in S3) is simply encrypted in place by malware, and the adversary keeps the decryption key (if you're lucky!). In S3, objects cannot be modified in place, only copied or deleted. Malware cannot run in S3; instead, adversaries need to use the API to manipulate your objects and buckets using your credentials.

The adversary could use the CopyObject action to copy each object over itself and encrypt it with a KMS key they control. The problem (for the adversary) is that the KMS key needs to exist in an AWS account, so you could raise an abuse case with AWS reporting that another account has accessed your data without authorization, and AWS would investigate. Also, a single CopyObject operation is limited to objects up to 5 GB in size, so it's unlikely an adversary would use this method.
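For illustration, here's a minimal sketch (using the AWS SDK for Python, boto3, with a hypothetical bucket, object key, and KMS key ARN) of what such a copy-and-re-encrypt call could look like; knowing its shape makes it easier to spot in your logs:

```python
# Illustrative sketch only: copying an object over itself while switching
# its encryption to a KMS key the adversary controls. All names and ARNs
# below are placeholders.
import boto3

s3 = boto3.client("s3")
s3.copy_object(
    Bucket="victim-bucket",                    # hypothetical bucket
    Key="important-data.csv",                  # hypothetical object key
    CopySource={"Bucket": "victim-bucket", "Key": "important-data.csv"},
    ServerSideEncryption="aws:kms",
    # Hypothetical adversary-controlled key in another account
    SSEKMSKeyId="arn:aws:kms:us-east-1:111122223333:key/attacker-key-id",
)
```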

Given the time it takes, the API "noise" involved, and the risk of the adversary's AWS account being discovered through copying and deleting objects, the adversary will more than likely simply use PutBucketLifecycleConfiguration to expire all objects in a bucket, including the delete markers created by object versioning. This assumes they have permission, and of course you have tightly restricted both the PutBucketLifecycleConfiguration and DeleteObjectVersion actions, haven't you? If they do not have that permission, the adversary may simply use the DeleteObjects action, which can delete up to 1,000 objects per request. The question is: have you enabled logging so you know whether the objects were copied first or simply deleted?
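For context, a destructive lifecycle configuration could look roughly like the sketch below (boto3, with a placeholder bucket name); recognizing its shape helps you spot an unexpected PutBucketLifecycleConfiguration event in your logs:

```python
# Illustrative sketch only: a lifecycle rule that expires current versions
# (creating delete markers) and permanently removes noncurrent versions
# after one day, across the entire bucket. Bucket name is a placeholder.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="victim-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-everything",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # matches every object in the bucket
                "Expiration": {"Days": 1},
                "NoncurrentVersionExpiration": {"NoncurrentDays": 1},
            }
        ]
    },
)
```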

Access Management Recommendations

Protecting data in S3 starts with who or what has access; you must explicitly grant access to S3, as no access is granted by default. Review bucket-level access control mechanisms, including bucket policies and access control lists (ACLs). The other access mechanism is the IAM service; both users and roles are principals that can access S3. Check the level of access granted to every IAM user and role, being careful of actions that can cause harm, like DeleteObject, and even bucket-level actions like PutBucketLifecycleConfiguration. Follow the principle of least privilege: only grant access if it's required. You can discover all actions, resources, and conditions for S3 in the authorization reference. You can also check out an official AWS blog I wrote on Techniques for writing least privilege IAM policies.

In a previous blog I wrote about why you should not use IAM user access keys: you could accidentally store them somewhere that's not safe, like a public Git repository, and immediately lose control of them. Access keys are probably the most common way that an adversary could gain unauthorized access to your account, so avoid using them.
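As one hedged example of limiting the dangerous actions, the sketch below (the policy name, bucket name, and action list are my assumptions, not a definitive list) creates an explicit-deny IAM policy; an explicit deny wins even if another policy grants broad s3:* access:

```python
# Minimal sketch, assuming a bucket named "example-bucket": an explicit-deny
# policy for destructive S3 actions. Attach it to users/roles that should
# never be able to purge data or rewrite bucket configuration.
import json
import boto3

deny_destructive_s3 = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyDestructiveS3Actions",
            "Effect": "Deny",
            "Action": [
                "s3:DeleteObjectVersion",
                "s3:PutBucketLifecycleConfiguration",
                "s3:PutBucketPolicy",
                "s3:DeleteBucket",
            ],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="deny-destructive-s3",  # hypothetical policy name
    PolicyDocument=json.dumps(deny_destructive_s3),
)
```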

Protection Recommendations

Bucket versioning allows the automatic creation of multiple versions of an object. When an object is deleted with versioning turned on, it is only marked as deleted with a delete marker but is still retrievable. If an object is overwritten, the previous versions are retained as noncurrent versions. This is a useful protection against ransomware, as the adversary would also need to permanently delete each object version, which means more API operations and more time to ransom your objects. The permissions for S3 are separated between manipulating objects and object versions. For example, the DeleteObject action is different from DeleteObjectVersion. This means you can grant only the DeleteObject permission, which merely inserts a delete marker, while the versions of the object cannot be touched.
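Enabling versioning on a bucket is a single API call; here's a minimal boto3 sketch (the bucket name is a placeholder):

```python
import boto3

s3 = boto3.client("s3")
# Turn on versioning so overwrites and deletes keep recoverable noncurrent
# versions instead of destroying data. Bucket name is a placeholder.
s3.put_bucket_versioning(
    Bucket="example-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)
```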

S3 Object Lock is an option you can enable to prevent objects from being deleted or overwritten for a fixed amount of time. It provides a write-once-read-many (WORM) model and has been assessed against various regulations for safeguarding data. It's probably the best protection against ransomware after access management; however, you should make your own risk-based decision on whether to enable it. You can simply create a new bucket, enable Object Lock (under advanced settings), then apply a retention period and/or legal hold to the objects you want to protect (or a default retention period for the entire bucket).
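Here's a minimal boto3 sketch of that flow (the bucket name, Region assumption, retention mode, and 30-day period are all example choices you should adjust to your own requirements):

```python
import boto3

s3 = boto3.client("s3")

# Object Lock is enabled when the bucket is created.
# (In Regions other than us-east-1, also pass CreateBucketConfiguration.)
s3.create_bucket(
    Bucket="example-locked-bucket",   # hypothetical bucket name
    ObjectLockEnabledForBucket=True,
)

# Apply a default retention period so new objects cannot be deleted or
# overwritten for 30 days. COMPLIANCE mode cannot be shortened or removed
# during the retention period; GOVERNANCE mode is a softer alternative.
s3.put_object_lock_configuration(
    Bucket="example-locked-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)
```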

Monitoring Recommendations

Requests to S3 at the bucket level, e.g. creating or deleting buckets, are included in AWS CloudTrail, which you should enable; keep the CloudTrail logs in a separate AWS account (along with other critical logs) for safety. In addition to bucket-level actions, you should also log actions on the objects inside your buckets: object PUT, GET, and DELETE records help you trace who or what has accessed (or attempted to access) your data. There are two ways to log this activity in S3: CloudTrail object-level logging (data events) and S3 server access logging. CloudTrail logs record the request that was made to S3, the source IP address, who made the request, a timestamp, and some additional details. S3 server access logs provide detailed records which may contain additional details to help you with an investigation. I highly recommend you enable at least one of these logging methods; the CloudTrail option is the easiest to enable.
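As an example of the CloudTrail option, here's a minimal boto3 sketch (the trail and bucket names are placeholders) that adds S3 data events, i.e. object-level logging, to an existing trail:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")
# Add S3 data events (object-level API calls) to an existing trail.
# Trail and bucket names are placeholders; the trailing "/" scopes the
# selector to every object in that bucket.
cloudtrail.put_event_selectors(
    TrailName="my-org-trail",
    EventSelectors=[
        {
            "ReadWriteType": "All",
            "IncludeManagementEvents": True,
            "DataResources": [
                {
                    "Type": "AWS::S3::Object",
                    "Values": ["arn:aws:s3:::example-bucket/"],
                }
            ],
        }
    ],
)
```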

In addition to logs, you should keep an eye on metrics by setting Amazon CloudWatch alarms to alert you when specific request-metric thresholds are exceeded. For example, if your bucket rarely sees delete operations, you could set an alarm on object DELETE requests, which could be an indication of ransomware.
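A minimal boto3 sketch of that idea (the bucket name, alarm name, SNS topic, and threshold are example assumptions): first enable request metrics on the bucket, then alarm on the DeleteRequests metric:

```python
import boto3

s3 = boto3.client("s3")
cloudwatch = boto3.client("cloudwatch")

# Enable request metrics for the whole bucket (these metrics are billed).
s3.put_bucket_metrics_configuration(
    Bucket="example-bucket",
    Id="EntireBucket",
    MetricsConfiguration={"Id": "EntireBucket"},
)

# Alarm when any DELETE requests are seen in a 5-minute window.
cloudwatch.put_metric_alarm(
    AlarmName="s3-unexpected-deletes",     # hypothetical alarm name
    Namespace="AWS/S3",
    MetricName="DeleteRequests",
    Dimensions=[
        {"Name": "BucketName", "Value": "example-bucket"},
        {"Name": "FilterId", "Value": "EntireBucket"},
    ],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    # Hypothetical SNS topic that notifies your security team
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:security-alerts"],
)
```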

Amazon GuardDuty is an intelligent threat detection service that you should enable to help detect threats and anomalies, and it includes S3 protection. S3 protection allows GuardDuty to monitor object-level operations to identify potential security risks for data within your S3 buckets. If you have already enabled GuardDuty, go to the console (in each Region where you have enabled it) and verify that S3 protection is turned on.
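If you prefer to enable it programmatically, here's a minimal boto3 sketch (it assumes a GuardDuty detector already exists in the Region):

```python
import boto3

guardduty = boto3.client("guardduty")

# Find the detector in this Region and enable S3 protection on it.
# Assumes GuardDuty is already enabled; otherwise create a detector first.
detector_id = guardduty.list_detectors()["DetectorIds"][0]
guardduty.update_detector(
    DetectorId=detector_id,
    DataSources={"S3Logs": {"Enable": True}},
)
```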

Access Analyzer for S3 alerts you to S3 buckets that are configured to allow access to anyone on the internet or to other AWS accounts, including AWS accounts outside of your organization. For example, Access Analyzer for S3 might show that a bucket has read or write access provided through a bucket access control list (ACL), a bucket policy, or an access point policy. To use Access Analyzer for S3, you first need to enable IAM Access Analyzer.
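Creating an account-level analyzer is a single API call; here's a minimal boto3 sketch (the analyzer name is a placeholder):

```python
import boto3

accessanalyzer = boto3.client("accessanalyzer")
# Create an account-scoped analyzer; its findings feed Access Analyzer for S3.
accessanalyzer.create_analyzer(
    analyzerName="account-analyzer",  # hypothetical analyzer name
    type="ACCOUNT",
)
```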

In summary, ransomware targeting S3 is more likely to simply delete your data and claim that it's being held for ransom than to actually encrypt it. Following best practices from the AWS Well-Architected Security Pillar will help you protect against, and detect, ransomware attacks. Adversaries continue to find better ways to target you, and to make more money doing so!

Further reading:
AWS Well-Architected Security Pillar
Official Security Best Practices for Amazon S3