Amazon S3 Encryption


1
New cards

Amazon S3 – Object Encryption

Amazon S3 supports multiple encryption methods to protect data at rest and in transit. Understanding when to use each method is important for the AWS exam.

2
New cards

S3 provides 4 main encryption methods:

1. SSE-S3 (Server-Side Encryption with S3-Managed Keys)

2. SSE-KMS (Server-Side Encryption with KMS Keys)

3. SSE-C (Server-Side Encryption with Customer-Provided Keys)

4. Client-Side Encryption

3
New cards

1. SSE-S3 (Server-Side Encryption with S3-Managed Keys)

  • Enabled by default for new S3 buckets and objects

  • Encryption keys are owned, managed, and handled by AWS

  • You never see or manage the key

  • Uses AES-256 encryption

  • Object is encrypted server-side by Amazon S3

Header must be set to:

(x-amz-server-side-encryption: AES256)

How it works:

  • User uploads object

  • S3 automatically encrypts the object using an AWS-owned key

  • Encrypted object is stored in the bucket

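A minimal sketch of the header from the card above; the bucket and key names in the comment are placeholders, and the boto3 call is shown for orientation only, not executed.

```python
# Sketch: the request header that selects SSE-S3 on upload. Since SSE-S3 is
# now the default for new buckets, uploads are encrypted even without it.
def sse_s3_upload_headers() -> dict:
    return {"x-amz-server-side-encryption": "AES256"}

# boto3 equivalent (placeholder names, not executed here):
#   s3.put_object(Bucket="my-bucket", Key="file.txt", Body=data,
#                 ServerSideEncryption="AES256")

print(sse_s3_upload_headers())  # {'x-amz-server-side-encryption': 'AES256'}
```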
4
New cards

2. SSE-KMS (Server-Side Encryption with KMS Keys)

Encryption keys are managed using AWS KMS

KMS advantages:

  • user has control over keys

  • user can audit key usage using CloudTrail —> anytime someone uses a key in KMS, the call is logged in CloudTrail, the AWS service that records everything that happens in your account.

Object is encrypted server side

Header must be set to:

x-amz-server-side-encryption: aws:kms

How it works:

  • User uploads object and specifies a KMS key

  • S3 uses the KMS key to encrypt the object

  • Encrypted object is stored in the bucket

Access requirements:

  • Permission to access the S3 object

  • Permission to use the KMS key

5
New cards

SSE-KMS Limitation

  • If you use SSE-KMS, you may be impacted by the KMS limits —> because every upload to and download from Amazon S3 now has to leverage a KMS key.

  • When you upload, S3 calls the GenerateDataKey KMS API

  • When you download, S3 calls the Decrypt KMS API

  • Each of these API calls counts towards the KMS quota of API calls per second —> (based on the region, you have between 5,000 and 30,000 requests per second)

  • You can request a quota increase using the Service Quotas Console

  • If you have a very, very high throughput S3 bucket, and everything is encrypted using KMS keys, you may hit KMS throttling.

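The throttling risk above can be sketched with some rough arithmetic, assuming one GenerateDataKey call per upload and one Decrypt call per download, as in the card; the quota value is a placeholder for a region-dependent limit.

```python
def kms_calls_per_second(uploads_per_s: float, downloads_per_s: float) -> float:
    # Each upload -> one GenerateDataKey call; each download -> one Decrypt call
    return uploads_per_s + downloads_per_s

def may_throttle(uploads_per_s: float, downloads_per_s: float,
                 kms_quota: float = 5000) -> bool:
    # kms_quota is region-dependent (roughly 5,000-30,000 req/s per the card)
    return kms_calls_per_second(uploads_per_s, downloads_per_s) > kms_quota

print(may_throttle(3000, 2500))  # True: 5,500 req/s exceeds a 5,000 req/s quota
print(may_throttle(100, 100))    # False: well under the quota
```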
6
New cards

3. SSE-C (Server-Side Encryption with Customer-Provided Keys)

  • Server-side encryption using keys fully managed by the customer outside of AWS

  • Encryption key is provided by the customer

  • Key is not stored by AWS; it is discarded after each encryption/decryption operation

  • Still server-side encryption

  • Must use HTTPS

  • Encryption key must be provided in HTTP headers, for every HTTP request made

How it works:

  • User uploads object + encryption key

  • S3 encrypts the object using the provided key

  • Encrypted object is stored in the bucket

  • To read the object inside the bucket, the user must again provide the key that was used to encrypt that file.

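A sketch of the per-request headers an SSE-C call carries, built from a customer-held 256-bit key; these are the documented SSE-C header names, but treat the function as an illustration, not a full request signer.

```python
import base64
import hashlib

def sse_c_headers(key: bytes) -> dict:
    """Headers an SSE-C request must include (over HTTPS) on every call."""
    assert len(key) == 32  # SSE-C uses a 256-bit (32-byte) key
    return {
        "x-amz-server-side-encryption-customer-algorithm": "AES256",
        # The key itself, base64-encoded, sent with every request
        "x-amz-server-side-encryption-customer-key": base64.b64encode(key).decode(),
        # MD5 digest of the key, used by S3 as an integrity check
        "x-amz-server-side-encryption-customer-key-MD5": base64.b64encode(
            hashlib.md5(key).digest()
        ).decode(),
    }

headers = sse_c_headers(b"\x01" * 32)  # demo key; use a real random key in practice
print(headers["x-amz-server-side-encryption-customer-algorithm"])  # AES256
```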
7
New cards

4. Client-Side Encryption

  • Use client libraries such as Amazon S3 Client-Side Encryption Library

  • Client must encrypt the data themselves before sending to Amazon S3

  • Client must decrypt data themselves when retrieving from Amazon S3

  • Customer fully manages the keys and encryption cycle

How it works:

  • Client encrypts data locally

  • Encrypted file is uploaded to S3

  • Client downloads and decrypts data locally

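The flow above can be sketched end to end with an in-memory "bucket". The XOR transform is a toy stand-in so the example stays self-contained: real client-side encryption (e.g., the Amazon S3 Encryption Client) uses authenticated ciphers such as AES-GCM, never XOR.

```python
# Toy stand-in for real client-side encryption -- do NOT use XOR in practice.
def toy_encrypt(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

toy_decrypt = toy_encrypt  # XOR is its own inverse

bucket = {}                      # stand-in for the S3 bucket
key = b"client-held-secret"      # the key never leaves the client

ciphertext = toy_encrypt(b"report.csv contents", key)  # 1. encrypt locally
bucket["report.csv"] = ciphertext                      # 2. upload ciphertext only
plaintext = toy_decrypt(bucket["report.csv"], key)     # 3. download + decrypt locally
assert plaintext == b"report.csv contents"
```

S3 only ever sees the ciphertext; the customer manages the key and the full encryption cycle.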
8
New cards

Amazon S3 - Encryption in transit (SSL/TLS)

  • Encryption in flight is also called SSL/TLS

Amazon S3 exposes two endpoints:

  • HTTP Endpoint - non encrypted

  • HTTPS Endpoint - encryption in flight

Important Notes:

  • HTTPS is strongly recommended

  • HTTPS is mandatory for SSE-C

9
New cards

How do you go about Forcing Encryption in Transit?

  • You can do so by using a bucket policy

In this case, we attach a bucket policy to the S3 bucket that denies any GetObject operation when the condition "aws:SecureTransport": "false" is met —> aws:SecureTransport is true when the request uses HTTPS and false when it uses HTTP, which is considered insecure.

Effect:

  • Blocks all HTTP requests

  • Allows only HTTPS traffic

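A minimal sketch of such a policy, built as a Python dict and serialized to the JSON that `put_bucket_policy` expects; "examplebucket" is a placeholder name.

```python
import json

# Deny GetObject whenever the request did not use HTTPS
deny_http_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::examplebucket/*",  # placeholder bucket
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}

print(json.dumps(deny_http_policy, indent=2))
```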
10
New cards

Amazon S3 – Default Encryption vs Bucket Policies

Amazon S3 provides two ways to ensure objects are encrypted:

  1. Default Encryption

  2. Bucket Policies (Enforcement)

11
New cards

Default Encryption (Bucket-Level Setting)

Key Points

  • All new S3 buckets now have default encryption enabled

  • Default encryption is SSE-S3:

    • Server-Side Encryption with S3-managed keys

  • Automatically encrypts all new objects

  • Does not require encryption headers from the user

Customizing Default Encryption

You can change the default encryption to:

  • SSE-KMS (using AWS KMS keys)

  • SSE-S3 (default)

Example:

  • Set bucket default encryption to SSE-KMS

  • Any object uploaded without encryption headers will still be encrypted using KMS

Important Limitation

  • Default encryption does not prevent unencrypted PUT requests

  • It only ensures encryption after upload

12
New cards

Bucket Policies (Encryption Enforcement)

Key Points

  • Bucket policies can force encryption

  • They can deny uploads that do not meet encryption requirements

  • Applied before default encryption

What Bucket Policies Can Enforce

  • Require SSE-KMS

  • Require SSE-C

  • Require specific encryption headers

Example Enforcement Logic

  • Deny PutObject if:

    • x-amz-server-side-encryption ≠ aws:kms

  • Deny PutObject if:

    • SSE-C headers are missing

Why Use a Bucket Policy?

  • Prevents users from uploading objects without encryption

  • Ensures compliance and security standards

  • Guarantees encryption before data is stored

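The first enforcement rule above (deny PutObject unless the SSE-KMS header is present) can be sketched as a bucket policy; "examplebucket" is a placeholder, and the condition key `s3:x-amz-server-side-encryption` matches the upload header.

```python
import json

# Deny any PutObject whose x-amz-server-side-encryption header is not aws:kms
require_sse_kms_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::examplebucket/*",  # placeholder bucket
        "Condition": {
            "StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}
        },
    }],
}

print(json.dumps(require_sse_kms_policy, indent=2))
```

Because an explicit Deny in a bucket policy is evaluated before default encryption applies, this blocks non-compliant uploads outright instead of silently re-encrypting them.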
13
New cards

What is CORS?

CORS (Cross-Origin Resource Sharing)

  • CORS is a web browser security mechanism

  • Controls whether a web page from one origin can request resources from another origin

  • Enforced by the browser, not the server

14
New cards

What Is an Origin?

An origin is defined by three components:

  1. Scheme (protocol) —> http or https

  2. Host (domain) —> www.example.com

  3. Port —> 443 for HTTPS (default)

Same Origin

  • Same protocol

  • Same domain

  • Same port

Example:

https://www.example.com
https://www.example.com/index.html

Same origin

Different Origin

https://www.example.com
https://other.example.com

Different origins
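The same-origin rule above can be expressed as a small helper using only the standard library; the port defaults come from the scheme, as in the definition.

```python
from urllib.parse import urlsplit

def origin(url: str) -> tuple:
    """An origin is the (scheme, host, port) triple; default ports are implied."""
    parts = urlsplit(url)
    port = parts.port or {"http": 80, "https": 443}[parts.scheme]
    return (parts.scheme, parts.hostname, port)

def same_origin(a: str, b: str) -> bool:
    return origin(a) == origin(b)

# The two example pairs from the card:
print(same_origin("https://www.example.com",
                  "https://www.example.com/index.html"))  # True
print(same_origin("https://www.example.com",
                  "https://other.example.com"))           # False
```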

15
New cards

Why CORS Exists?

  • Prevents malicious websites from accessing resources on another site

  • Blocks cross-origin requests unless explicitly allowed

16
New cards

How CORS Works (High-Level Flow)

  • User visits Origin A (e.g., example.com)

  • Web page tries to load resources from Origin B (e.g., other.com)

  • Browser sends a preflight request (OPTIONS) to Origin B

  • Origin B responds with CORS headers

  • Browser decides whether to allow or block the request

17
New cards

CORS Headers (Key Concept)

The most important header:

Access-Control-Allow-Origin

Other possible headers:

  • Access-Control-Allow-Methods

  • Access-Control-Allow-Headers

If headers allow the request ➜ browser proceeds
If not ➜ browser blocks the request

18
New cards

CORS and Amazon S3 (VERY IMPORTANT FOR EXAM )

When Is CORS Needed in S3?

  • When a browser loads a website from one S3 bucket and that page requests files from another S3 bucket (a different origin)

  • You enable this by configuring CORS on the target bucket, allowing either a specific origin or * (all origins)

19
New cards

S3 CORS Example (Static Website)

Flow

  1. Browser loads index.html from Bucket 1

  2. index.html references an image in Bucket 2

  3. Browser sends a cross-origin request to Bucket 2

  4. Bucket 2 must allow Bucket 1 via CORS configuration

  5. If allowed ➜ image loads

  6. If not allowed ➜ request is blocked

Remember: CORS is a web-browser security mechanism; configuring it allows images, assets, or files to be retrieved from one S3 bucket when the request originates from another origin.

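A sketch of the CORS configuration Bucket 2 would need, in the shape boto3's `put_bucket_cors` accepts; the origin URL is a placeholder for Bucket 1's website endpoint.

```python
# CORS rules to attach to Bucket 2 so Bucket 1's pages may fetch its images.
cors_configuration = {
    "CORSRules": [{
        "AllowedMethods": ["GET"],
        # Placeholder: Bucket 1's static-website endpoint (or "*" for all origins)
        "AllowedOrigins": ["http://bucket1-website.example.com"],
        "AllowedHeaders": ["*"],
        "MaxAgeSeconds": 3000,  # how long the browser may cache the preflight
    }]
}

# boto3 equivalent (not executed here):
#   s3.put_bucket_cors(Bucket="bucket-2", CORSConfiguration=cors_configuration)
```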
20
New cards

Amazon S3 – MFA Delete

What is MFA Delete?

  • MFA (Multi-Factor Authentication) Delete is a security feature for Amazon S3

  • Requires an MFA code in addition to normal credentials

  • Protects against accidental or malicious permanent deletions

21
New cards

How MFA Works

  • MFA code is generated by:

    • Mobile apps (e.g., Google Authenticator)

    • Hardware MFA devices

  • The MFA code must be provided at the time of the operation

22
New cards

When MFA Delete Is Required

MFA Delete is required for destructive operations:

  1. Permanently deleting an object version

    • Deletes a specific version forever

  2. Suspending versioning on a bucket

These actions are considered high-risk and require extra protection.

23
New cards

When MFA Delete Is NOT Required

MFA is not required for non-destructive actions:

  • Enabling versioning

  • Listing object versions

  • Listing deleted object versions

  • Reading or uploading objects

24
New cards

What needs to be done prior to using MFA Delete?

Versioning must be enabled on the bucket

25
New cards

Who Can Enable or Disable MFA Delete?

  • ONLY the bucket owner (root account) can:

    • Enable MFA Delete

    • Disable MFA Delete

IAM users and roles cannot manage MFA Delete

MFA Delete is an extra protection to prevent against the permanent deletion of specific object versions.

26
New cards

Amazon S3 – Access Logs

What Are S3 Access Logs?

  • S3 Access Logs record all requests made to an S3 bucket

  • Used for:

    • Auditing

    • Security analysis

    • Troubleshooting

  • Logs both successful and denied requests

27
New cards

What Gets Logged

Every request to the bucket, including:

  • GET, PUT, DELETE, LIST

  • Requests from any AWS account

  • Authorized and unauthorized requests

Each request is written as a log file to another S3 bucket.

28
New cards

Where Logs Are Stored

  • Logs are delivered to a target S3 bucket

  • The logging bucket must be in the same AWS region

  • Logs are stored as objects in the logging bucket

29
New cards

Analyzing S3 Access Logs

Access log data can be analyzed using:

  • Amazon Athena

30
New cards

Logging bucket location

The target logging bucket must be in the same AWS region

31
New cards

How S3 Access Logs Work (Flow)

  • Client makes a request to an S3 bucket

  • S3 processes the request (allow or deny)

  • S3 writes a log record

  • Log file is delivered to the logging bucket

32
New cards

CRITICAL Warning (Exam Favorite)

🚫 NEVER use the same bucket for:

  • Source bucket (being monitored)

  • Target logging bucket

Why?

  • Causes an infinite logging loop

  • Each log write generates another log

  • Bucket grows exponentially

  • Results in very high costs

33
New cards

Amazon S3 – Pre-Signed URLs

What Is a Pre-Signed URL?

  • A temporary URL that grants access to a specific S3 object

  • Generated using:

    • AWS Management Console

    • AWS CLI

    • AWS SDK

  • URL includes:

    • Authorization information

    • Expiration time

34
New cards

URL Expiration

S3 Console - 1 min up to 720 mins (12 hours)

AWS CLI - (default 3600 secs, max. 604800 secs —> 168 hours)
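A quick arithmetic check of the CLI limits quoted above:

```python
# The AWS CLI pre-signed URL expiration limits, in seconds
default_s, max_s = 3600, 604800

assert default_s == 1 * 60 * 60        # default: 1 hour
assert max_s == 7 * 24 * 60 * 60       # maximum: 7 days
print(max_s // 3600)                   # 168 (hours)
```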

35
New cards

How Pre-Signed URLs Work

  • The user who generates the URL must have permission to:

    • s3:GetObject (for downloads)

    • s3:PutObject (for uploads)

  • Anyone who receives the URL:

    • Inherits the permissions of the creator

    • Does not need AWS credentials

36
New cards

What is the use case?

Scenario

  • You have a private S3 bucket

  • You want to give external access to one specific file

  • You do not want to make the file public or compromise security

Steps

  1. Generate Pre-Signed URL

    • As the bucket owner or an authorized IAM user

    • URL carries your credentials/permissions (GET or PUT)

    • URL has an expiration time (temporary access)

  2. Share the URL

    • Send the pre-signed URL to the target user

    • User does not need AWS credentials

  3. Access the File

    • User clicks the URL or uses it in a browser/app

    • S3 verifies the signature and permissions

    • File is returned (download or upload) for the limited time allowed

37
New cards

Common Use Cases:

  1. Allow only logged-in users to download a premium video from your S3 bucket

  2. Allow an ever-changing list of users to download files by generating URLs dynamically

  3. Allow a user to temporarily upload a file to a precise location in your S3 bucket

39
New cards

Amazon S3 Glacier Vault Lock

  • S3 Glacier Vault Lock is a security feature used to implement a WORM model
    (Write Once, Read Many).

  • The idea is:

    • You take an object

    • You store it in an S3 Glacier vault

    • You lock it so it can never be modified or deleted

40
New cards

How Glacier Vault Lock Works

  • You create a Vault Lock Policy on the Glacier vault

  • Once the policy is locked, it:

    • Cannot be changed

    • Cannot be deleted

    • Cannot be overridden by anyone

  • After the policy is locked:

    • Any object inserted into the vault can never be deleted

    • This includes administrators, root users, and even AWS

Use Case

  • Compliance and Data Retention

41
New cards

S3 Object Lock (S3 Buckets)

  • S3 Object Lock provides a similar WORM model but for S3 buckets

  • It is more granular than Glacier Vault Lock

  • Versioning must be enabled before Object Lock can be used

Key Difference from Glacier Vault Lock

  • Glacier Vault Lock applies to the entire vault

  • S3 Object Lock applies to individual object versions

42
New cards

S3 Object Retention Modes

1. Compliance Mode

  • Very strict retention mode

  • Object versions:

    • Cannot be overwritten

    • Cannot be deleted

    • Cannot be altered by anyone (including root)

  • Retention period:

    • Cannot be shortened

    • Retention settings cannot be changed

  • Used when absolute compliance is required

2. Governance Mode

  • More flexible than compliance mode

  • Most users:

    • Cannot delete or modify object versions

  • Admin users with special IAM permissions:

    • Can override retention settings

    • Can delete objects

  • Useful when compliance is needed but admins still need control

43
New cards

Retention Period

  • Applies to both compliance and governance modes

  • Defines how long an object is protected

  • Retention period:

    • Can be extended

    • Cannot be shortened in compliance mode

44
New cards

Legal Hold

  • Legal Hold is independent of retention period

  • Protects an object indefinitely

  • Used when an object is required for:

    • Legal investigations

    • Court cases

Managing Legal Holds

  • Requires IAM permission:

    s3:PutObjectLegalHold
    
  • Users with this permission can:

    • Place a legal hold on an object

    • Remove a legal hold once it is no longer needed

45
New cards

What Problem Do Access Points Solve?

  • Large S3 buckets often store multiple types of data (finance, sales, analytics, etc.)

  • Different users and groups need access to different parts of the same bucket

  • Managing access with one large S3 bucket policy becomes complex and hard to scale

46
New cards

What Are S3 Access Points?

  • S3 Access Points provide separate access paths to the same S3 bucket

  • Each access point:

    • Has its own policy

    • Controls access to specific prefixes (folders) in the bucket

  • Policies look just like S3 bucket policies

47
New cards

Example Use Case

  • One S3 bucket contains:

    • /finance

    • /sales

Access Points Created:

  • Finance Access Point

    • Read/Write access to finance prefix

  • Sales Access Point

    • Read/Write access to sales prefix

  • Analytics Access Point

    • Read-only access to both finance and sales

Each access point has its own policy, making security easier to manage.

49
New cards

To Summarize:

  • Access Points simplify security management for S3 Buckets

  • Each Access Point has:

    • its own DNS name (Internet Origin or VPC Origin)

    • you can attach an access point policy (similar to bucket policy) - manage security at scale

50
New cards

S3 VPC-Origin Access Points

  • Used for private access from within a VPC

  • Example: EC2 instances accessing S3 without going through the internet

Requirements:

  • Create an S3 VPC endpoint

  • VPC endpoint policy must allow:

    • Access to the S3 bucket

    • Access to the S3 access points

51
New cards

S3 Object Lambda — Notes

What Is S3 Object Lambda?

  • S3 Object Lambda allows you to modify an object just before it is retrieved by the caller application

  • The original object in the S3 bucket is not changed

  • Uses S3 Access Points + AWS Lambda

52
New cards

How It Works (High-Level Flow)

  • Application requests an object

  • Request goes through an S3 Object Lambda Access Point

  • The access point invokes a Lambda function

  • Lambda function:

    • Retrieves the original object from S3

    • Modifies the data (redact, enrich, transform, etc.)

  • Modified object is returned to the application

53
New cards

Example 1: Analytics Application (Redacted Data)

  • E-commerce app:

    • Direct access to the S3 bucket

    • Can read/write original objects

  • Analytics app:

    • Uses an S3 Object Lambda access point

    • Lambda function redacts sensitive data

  • Result:

    • Analytics app receives redacted objects

    • Original data remains unchanged
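The redaction step the Lambda function applies can be sketched as a pure transform. The field names are hypothetical; a real Object Lambda handler would also fetch the original object via the event's input URL and return the result with `WriteGetObjectResponse`, both omitted here.

```python
import json

SENSITIVE_FIELDS = {"email", "card_number"}  # hypothetical field names

def redact(record_json: str) -> str:
    """The transform an S3 Object Lambda function could apply before
    returning an object to the analytics application."""
    record = json.loads(record_json)
    for field in SENSITIVE_FIELDS & record.keys():
        record[field] = "***REDACTED***"
    return json.dumps(record)

# The original object in the bucket is never changed; only the response is.
original = '{"order_id": 42, "email": "a@b.com"}'
print(redact(original))
```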