S3 access restriction based on local IP
Allowing untrustworthy access to your AWS S3 buckets can lead to unauthorized actions such as viewing, uploading, modifying, or deleting S3 objects. To prevent data exposure, data loss, and unexpected charges on your AWS bill, and to get a central place to manage bucket access using policies, you need to ensure that your S3 buckets are accessible only to a short list of whitelisted IPs.
If you want to allow servers in your network to access internal S3 buckets without making the objects within them open to the internet, whitelisting access with a bucket policy is a simple way to allow downloading files from an internal bucket.
Accessing an S3 Bucket Over the Internet
To secure our files on Amazon S3, we can restrict access to an S3 bucket to specific IP addresses. The simplest way to interact with S3 from Linux is to install the AWS CLI and run commands like get-object
to fetch files directly, or use the API or an SDK for the language of your choice. If you're running on EC2, it's fairly trivial to update the IAM role for the EC2 instance and attach a policy giving it access to the bucket. As long as the AWS CLI is installed, you can use it with the instance role without managing keys.
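In Terraform, granting an instance role read access to a bucket can be sketched like this. It is a minimal sketch, not a definitive setup: the policy and bucket names are placeholders, and it assumes you already have an aws_iam_role named ec2_role attached to the instance.

```hcl
# Sketch: grant an existing EC2 instance role (assumed: aws_iam_role.ec2_role)
# read access to the bucket. Names and ARNs below are placeholders.
resource "aws_iam_role_policy" "s3_read" {
  name = "s3-bucket-read"
  role = aws_iam_role.ec2_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = ["s3:GetObject", "s3:ListBucket"]
      Resource = [
        "arn:aws:s3:::my-tf-test-bucket",    # ListBucket targets the bucket ARN
        "arn:aws:s3:::my-tf-test-bucket/*"   # GetObject targets the object ARNs
      ]
    }]
  })
}
```

With this in place, the AWS CLI on the instance picks up the role credentials automatically, with no access keys to manage.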
If the request comes from the IP address of your server, it will be allowed. This makes it very easy to download files from the bucket's endpoint URL, as if the bucket were running in a private subnet (though traffic still goes over the internet).
We can restrict access to an S3 bucket by adding a bucket policy that allows only requests coming from a specified IP range. We can either add an aws_s3_bucket_policy resource in Terraform or add a bucket policy directly from the AWS Console.
We'll now see how to add a bucket policy using Terraform.
- Make sure an S3 bucket already exists so that we can attach the bucket policy to it.
- Navigate to Terraform's aws_s3_bucket_policy documentation page and check the syntax of the policy.
- Next, we can use the AWS Policy Generator to add the permissions we would like to have in the policy. Choose the policy type, add statements (a statement is the formal description of a single permission), add conditions stating which IP addresses you would like to allow, and finally generate the statement.
- A sample policy allowing only the GetObject permission from a specific VPC ID would look like the example below.
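The steps above can be sketched in Terraform as follows. This is a minimal sketch, not a definitive implementation: the bucket name and VPC ID are placeholders, and the policy relies on the aws:SourceVpc condition key, which is populated for requests arriving through an S3 VPC endpoint.

```hcl
resource "aws_s3_bucket" "my_tf_test_bucket" {
  bucket = "my-tf-test-bucket"   # placeholder name; pick your own
}

resource "aws_s3_bucket_policy" "allow_from_vpc" {
  bucket = aws_s3_bucket.my_tf_test_bucket.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "AllowGetObjectFromVpc"
      Effect    = "Allow"
      Principal = "*"
      Action    = "s3:GetObject"
      Resource  = "arn:aws:s3:::my-tf-test-bucket/*"
      Condition = {
        StringEquals = {
          "aws:SourceVpc" = "vpc-0123456789abcdef0"   # placeholder VPC ID
        }
      }
    }]
  })
}
```

Because there is no Allow for any other source, requests from outside the VPC fall through to S3's default deny.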
Explanation
Create a bucket named my-tf-test-bucket (or any name you prefer).
After creating the bucket, we attach the bucket policy to it. Pass the id of the bucket that was created into the aws_s3_bucket_policy resource so that the policy refers to the bucket that was previously created.
As mentioned earlier, the policy allows the GetObject permission if and only if the request comes from the specified VPC ID, and it denies requests coming from any other endpoint. We add a condition restricting access to that specific VPC ID; if you would like to allow a few more VPC IDs, they can be added to the same condition.
You can append any other actions (s3:ListBucket, s3:PutObject, etc.) that you would like to grant on the bucket.
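For instance, the Action entry in the statement could be extended like this (a sketch; note that s3:ListBucket applies to the bucket ARN itself rather than to object ARNs, so the statement's Resource must cover both):

```hcl
Action = [
  "s3:GetObject",
  "s3:PutObject",   # upload objects; targets arn:aws:s3:::my-tf-test-bucket/*
  "s3:ListBucket"   # list the bucket; targets arn:aws:s3:::my-tf-test-bucket
]
```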
The same bucket policy can be added via the AWS Console by navigating to the S3 service -> choosing the bucket -> going to the Permissions tab. Scroll down to the bucket policy section and add your policy there.
How do we check whether the bucket policy is working?
To validate the bucket policy changes, we can make use of EC2 instances. We will create one EC2 instance within the VPC from which we want to access the S3 bucket objects, and another EC2 instance in any other VPC, which should be denied access to the bucket.
After creating the EC2 instances, we can SSH into them (make sure you have attached an appropriate IAM role to each instance) and try to list the objects in the bucket or cURL objects inside the bucket. If you get a 200 OK status from the whitelisted VPC, and a 403 Forbidden or Access Denied error from either the other VPC or your local terminal, then the bucket policy is working as expected: it denies access from unknown source IP addresses.
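A test instance in the whitelisted VPC could be sketched like this. The AMI ID, subnet, and instance-profile references are placeholders for resources assumed to exist already in your configuration.

```hcl
# Sketch: a throwaway instance inside the whitelisted VPC for testing.
# ami, subnet_id, and iam_instance_profile are placeholder references.
resource "aws_instance" "policy_test" {
  ami                  = "ami-0123456789abcdef0"            # placeholder AMI
  instance_type        = "t3.micro"
  subnet_id            = aws_subnet.whitelisted.id          # subnet in the allowed VPC
  iam_instance_profile = aws_iam_instance_profile.s3_read.name
}
```

From that instance, a command such as `aws s3api list-objects-v2 --bucket my-tf-test-bucket` or a cURL against an object URL should succeed, while the same command from an instance in another VPC should be denied.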
Conclusion
From a security standpoint, the S3 VPC endpoint is a robust solution because you're allowing traffic out only to the S3 service specifically, not to the whole internet. If this fits your use case, the S3 VPC endpoint could be the way to go.