AWS Certified Security - Specialty


1

The Security team believes that a former employee may have gained unauthorized access to AWS resources sometime in the past 3 months by using an identified access key.
What approach would enable the Security team to find out what the former employee may have done within AWS?

  • A. Use the AWS CloudTrail console to search for user activity.

  • B. Use the Amazon CloudWatch Logs console to filter CloudTrail data by user.

  • C. Use AWS Config to see what actions were taken by the user.

  • D. Use Amazon Athena to query CloudTrail logs stored in Amazon S3.

The correct approach to determine what actions a former employee may have taken within AWS using an identified access key is:

D. Use Amazon Athena to query CloudTrail logs stored in Amazon S3.

### Explanation:

- AWS CloudTrail records API calls made in your AWS account, including the user, service, and action performed. These logs are stored in an Amazon S3 bucket.

- Amazon Athena is a serverless query service that allows you to analyze data directly from Amazon S3 using standard SQL. By querying the CloudTrail logs stored in S3, you can filter for the specific access key and identify all actions performed by the former employee during the specified time frame.

### Why not the other options?

- A. Use the AWS CloudTrail console to search for user activity: While the CloudTrail console allows you to view events, it is not efficient for searching large volumes of data over a 3-month period. Athena is better suited for this task.

- B. Use the Amazon CloudWatch Logs console to filter CloudTrail data by user: CloudWatch Logs can store and analyze logs, but it is not the most efficient way to query CloudTrail logs for specific access keys over a long period.

- C. Use AWS Config to see what actions were taken by the user: AWS Config tracks resource configuration changes, not API calls or user activity. It is not suitable for this use case.

By using Amazon Athena to query CloudTrail logs, the Security team can efficiently identify all actions performed by the former employee using the identified access key.
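
As a rough illustration, a minimal boto3 sketch of this approach is shown below. It assumes a CloudTrail table (here called cloudtrail_logs) has already been defined over the log bucket (the CloudTrail console can generate that table definition for you); the database, access key, date, and results bucket are placeholder values.

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Placeholder values: table, database, access key, and results bucket are examples only.
query = """
SELECT eventtime, eventsource, eventname, awsregion, sourceipaddress
FROM cloudtrail_logs
WHERE useridentity.accesskeyid = 'AKIAIOSFODNN7EXAMPLE'
  AND eventtime > '2024-01-01T00:00:00Z'
ORDER BY eventtime
"""

response = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("Started Athena query:", response["QueryExecutionId"])
```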

2

A company is storing data in Amazon S3 Glacier. The security engineer implemented a new vault lock policy for 10 TB of data and called the initiate-vault-lock operation 12 hours ago. The audit team identified a typo in the policy that is allowing unintended access to the vault.
What is the MOST cost-effective way to correct this?

  • A. Call the abort-vault-lock operation. Update the policy. Call the initiate-vault-lock operation again.

  • B. Copy the vault data to a new S3 bucket. Delete the vault. Create a new vault with the data.

  • C. Update the policy to keep the vault lock in place.

  • D. Update the policy. Call initiate-vault-lock operation again to apply the new policy.

The **MOST cost-effective way** to correct the typo in the vault lock policy is:

**A. Call the abort-vault-lock operation. Update the policy. Call the initiate-vault-lock operation again.**

### Explanation:

- **Vault Lock Policy**: Once a vault lock policy is initiated, it enters an **in-progress state** for 24 hours. During this window, the policy can still be aborted because it has not yet been locked.

- **Abort Vault Lock**: If the policy has not been locked (i.e., within the 24-hour window), you can call the abort-vault-lock operation to cancel the current policy. This allows you to correct the typo in the policy and re-initiate the vault lock with the updated policy.

- **Cost-Effectiveness**: This approach avoids the need to copy or delete data, which would incur additional storage and transfer costs.

### Why not the other options?

- **B. Copy the vault data to a new S3 bucket. Delete the vault. Create a new vault with the data**: This approach is **not cost-effective** because it involves copying 10TB of data to a new bucket, which incurs data transfer and storage costs. It is also time-consuming.

- **C. Update the policy to keep the vault lock in place**: Once the vault lock policy is locked, it **cannot be updated or changed**. This option is not feasible.

- **D. Update the policy. Call initiate-vault-lock operation again to apply the new policy**: You cannot update the policy or re-initiate the vault lock until the current lock process is aborted. This option is not valid.

By aborting the current vault lock, updating the policy, and re-initiating the lock, the security engineer can correct the typo in the most cost-effective and efficient manner.
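
For reference, a hedged boto3 sketch of the abort-and-reinitiate flow is shown below. The vault name, region, account ID, and policy contents are illustrative only; the key point is that abort_vault_lock cancels the in-progress lock so a corrected policy can be submitted with initiate_vault_lock.

```python
import json
import boto3

glacier = boto3.client("glacier", region_name="us-east-1")

# Cancel the in-progress (not yet completed) vault lock. "-" means the current account.
glacier.abort_vault_lock(accountId="-", vaultName="audit-vault")

# Corrected policy (illustrative): deny deletes for archives younger than one year.
corrected_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "deny-early-deletes",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "glacier:DeleteArchive",
            "Resource": "arn:aws:glacier:us-east-1:123456789012:vaults/audit-vault",
            "Condition": {"NumericLessThan": {"glacier:ArchiveAgeInDays": "365"}},
        }
    ],
}

# Re-initiate the lock with the fixed policy; a new 24-hour completion window starts.
response = glacier.initiate_vault_lock(
    accountId="-",
    vaultName="audit-vault",
    policy={"Policy": json.dumps(corrected_policy)},
)
print("New lock ID:", response["lockId"])
```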

3

A company wants to control access to its AWS resources by using identities and groups that are defined in its existing Microsoft Active Directory.
What must the company create in its AWS account to map permissions for AWS services to Active Directory user attributes?

  • A. AWS IAM groups

  • B. AWS IAM users

  • C. AWS IAM roles

  • D. AWS IAM access keys

The correct answer is:

C. AWS IAM roles

### Explanation:

To control access to AWS resources using identities and groups defined in an existing Microsoft Active Directory, the company must use AWS IAM roles in conjunction with AWS Directory Service (specifically, AWS Managed Microsoft AD or AD Connector). Here's how it works:

1. AWS Directory Service: The company sets up AWS Directory Service to integrate its existing Microsoft Active Directory with AWS. This allows Active Directory users and groups to be recognized in the AWS environment.

2. IAM Roles: The company creates IAM roles in AWS and defines permissions for those roles. These roles are then mapped to Active Directory users or groups using role trust policies.

3. Federation: When Active Directory users log in, they assume the IAM roles based on their Active Directory group memberships or user attributes. This allows them to access AWS resources with the permissions defined in the roles.

### Why not the other options?

- A. AWS IAM groups: IAM groups are used to manage permissions for IAM users within AWS, not for integrating with external identity providers like Active Directory.

- B. AWS IAM users: IAM users are created and managed directly in AWS, not mapped to external identities like Active Directory users.

- D. AWS IAM access keys: Access keys are used for programmatic access to AWS resources and are not related to integrating with Active Directory.

By using IAM roles, the company can map permissions for AWS services to Active Directory user attributes, enabling seamless access control based on existing directory identities.
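
As a sketch only, the role's trust policy for SAML-based federation with Active Directory (for example via AD FS) typically looks like the snippet below; the SAML provider ARN, role name, and attached permissions policy are placeholder values.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy allowing federated AD users (via a SAML provider) to assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Federated": "arn:aws:iam::123456789012:saml-provider/ADFS"},
            "Action": "sts:AssumeRoleWithSAML",
            "Condition": {"StringEquals": {"SAML:aud": "https://signin.aws.amazon.com/saml"}},
        }
    ],
}

role = iam.create_role(
    RoleName="ADAuditors",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Assumed by Active Directory users mapped to the auditors group",
)

# The permissions policy is illustrative; attach whatever the mapped AD group should receive.
iam.attach_role_policy(
    RoleName="ADAuditors",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)
print(role["Role"]["Arn"])
```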

4

A company has contracted with a third party to audit several AWS accounts. To enable the audit, cross-account IAM roles have been created in each account targeted for audit. The Auditor is having trouble accessing some of the accounts.
Which of the following may be causing this problem? (Choose three.)

  • A. The external ID used by the Auditor is missing or incorrect.

  • B. The Auditor is using the incorrect password.

  • C. The Auditor has not been granted sts:AssumeRole for the role in the destination account.

  • D. The Amazon EC2 role used by the Auditor must be set to the destination account role.

  • E. The secret key used by the Auditor is missing or incorrect.

  • F. The role ARN used by the Auditor is missing or incorrect.

The correct answers are:

**A. The external ID used by the Auditor is missing or incorrect.**

**C. The Auditor has not been granted sts:AssumeRole for the role in the destination account.**

**F. The role ARN used by the Auditor is missing or incorrect.**

### Explanation:

When using cross-account IAM roles, the following issues could prevent the Auditor from accessing the target accounts:

1. **A. The external ID used by the Auditor is missing or incorrect**:

- An **external ID** is an optional security feature used to prevent the "confused deputy problem." If the destination account requires an external ID and the Auditor does not provide it or provides an incorrect one, the AssumeRole operation will fail.

2. **C. The Auditor has not been granted sts:AssumeRole for the role in the destination account**:

- The IAM role in the destination account must have a trust policy that allows the Auditor's account to assume the role using the sts:AssumeRole action. If this permission is missing or misconfigured, the Auditor will not be able to assume the role.

3. **F. The role ARN used by the Auditor is missing or incorrect**:

- The **role ARN** (Amazon Resource Name) uniquely identifies the IAM role in the destination account. If the Auditor provides an incorrect or missing role ARN, the AssumeRole operation will fail.

### Why not the other options?

- **B. The Auditor is using the incorrect password**:

- Passwords are not used for cross-account role assumption. The Auditor assumes the role using temporary security credentials, not a password.

- **D. The Amazon EC2 role used by the Auditor must be set to the destination account role**:

- This is irrelevant. The Auditor is not using an EC2 instance role; they are assuming a cross-account IAM role.

- **E. The secret key used by the Auditor is missing or incorrect**:

- Secret keys are not used for cross-account role assumption. The Auditor assumes the role using the AssumeRole API, which generates temporary security credentials.

By addressing the external ID, sts:AssumeRole permissions, and role ARN, the Auditor should be able to successfully assume the cross-account IAM role and access the target accounts.
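
The three checks above map directly onto the parameters of the AssumeRole call the Auditor makes. A minimal boto3 sketch, with a placeholder role ARN and external ID:

```python
import boto3

sts = boto3.client("sts")

# All three failure points show up here: the role ARN, the external ID,
# and whether the caller is allowed sts:AssumeRole on that role.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/AuditorCrossAccountRole",  # placeholder ARN
    RoleSessionName="third-party-audit",
    ExternalId="audit-2024-external-id",  # must match the role's trust policy, if required
)["Credentials"]

# Use the temporary credentials in the destination account.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```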

5

Compliance requirements state that all communications between the company's on-premises hosts and EC2 instances must be encrypted in transit. Hosts use custom proprietary protocols for their communication, and EC2 instances need to be fronted by a load balancer for increased availability.
Which of the following solutions will meet these requirements?

  • A. Offload SSL termination onto an SSL listener on a Classic Load Balancer, and use a TCP connection between the load balancer and the EC2 instances.

  • B. Route all traffic through a TCP listener on a Classic Load Balancer, and terminate the TLS connection on the EC2 instances.

  • C. Create an HTTPS listener using an Application Load Balancer, and route all of the communication through that load balancer.

  • D. Offload SSL termination onto an SSL listener using an Application Load Balancer, and re-spawn an SSL connection between the load balancer and the EC2 instances.

The correct answer is:

D. Offload SSL termination onto an SSL listener using an Application Load Balancer, and re-spawn an SSL connection between the load balancer and the EC2 instances.

### Explanation:

To meet the compliance requirements of encrypting all communications between on-premises hosts and EC2 instances, while also fronting the EC2 instances with a load balancer for increased availability, the solution must ensure end-to-end encryption. Here's why option D is the best choice:

1. SSL Termination at the Load Balancer:

- The Application Load Balancer (ALB) supports SSL termination, meaning it can decrypt incoming HTTPS traffic from on-premises hosts.

2. Re-Spawn SSL Connection to EC2 Instances:

- After decrypting the traffic, the ALB can re-encrypt the traffic using a new SSL/TLS connection between the load balancer and the EC2 instances. This ensures that the communication between the load balancer and the EC2 instances is also encrypted, meeting the requirement for encryption in transit.

3. Custom Proprietary Protocols:

- The ALB operates at the application layer (Layer 7) and handles HTTP/HTTPS traffic, so the custom proprietary protocols must be carried over HTTPS; when they are, the ALB remains compatible with the traffic while keeping it encrypted.

### Why not the other options?

- A. Offload SSL termination onto an SSL listener on a Classic Load Balancer, and use a TCP connection between the load balancer and the EC2 instances:

- This option does not ensure encryption between the load balancer and the EC2 instances, as it uses a TCP connection (unencrypted) for this segment.

- B. Route all traffic through a TCP listener on a Classic Load Balancer, and terminate the TLS connection on the EC2 instances:

- While this ensures encryption between the on-premises hosts and the EC2 instances, it does not offload SSL termination to the load balancer, which could increase the computational load on the EC2 instances.

- C. Create an HTTPS listener using an Application Load Balancer, and route all of the communication through that load balancer:

- This option does not explicitly mention re-encrypting the traffic between the load balancer and the EC2 instances, which is necessary for end-to-end encryption.

By using an Application Load Balancer to terminate SSL and re-encrypt the traffic to the EC2 instances, the solution ensures end-to-end encryption and meets the compliance requirements.
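
A hedged boto3 sketch of the re-encryption setup: the ALB terminates TLS on an HTTPS listener and forwards to a target group that itself uses HTTPS, so the hop to the EC2 instances stays encrypted. The ARNs, names, and VPC ID are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Target group that speaks HTTPS to the instances, so the back-end hop is encrypted.
tg = elbv2.create_target_group(
    Name="app-servers-https",
    Protocol="HTTPS",
    Port=443,
    VpcId="vpc-0123456789abcdef0",  # placeholder
)["TargetGroups"][0]

# HTTPS listener on the ALB terminates TLS from the on-premises hosts.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/example-alb/1234567890abcdef",  # placeholder
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/example"}],  # placeholder
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
```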

6

An application is currently secured using network access control lists and security groups. Web servers are located in public subnets behind an Application Load Balancer (ALB); application servers are located in private subnets.
How can edge security be enhanced to safeguard the Amazon EC2 instances against attack? (Choose two.)

  • A. Configure the application's EC2 instances to use NAT gateways for all inbound traffic.

  • B. Move the web servers to private subnets without public IP addresses.

  • C. Configure AWS WAF to provide DDoS attack protection for the ALB.

  • D. Require all inbound network traffic to route through a bastion host in the private subnet.

  • E. Require all inbound and outbound network traffic to route through an AWS Direct Connect connection.

The correct answers are:

B. Move the web servers to private subnets without public IP addresses.

C. Configure AWS WAF to provide DDoS attack protection for the ALB.

### Explanation:

To enhance edge security for the application and safeguard the EC2 instances against attacks, the following measures are recommended:

1. B. Move the web servers to private subnets without public IP addresses:

- By moving the web servers to private subnets, they are no longer directly exposed to the internet. All inbound traffic must pass through the Application Load Balancer (ALB), which acts as a single point of entry and provides an additional layer of security. This reduces the attack surface and minimizes the risk of direct attacks on the web servers.

2. C. Configure AWS WAF to provide DDoS attack protection for the ALB:

- AWS WAF (Web Application Firewall) can be integrated with the ALB to protect against common web exploits, such as SQL injection and cross-site scripting (XSS). Additionally, AWS WAF can help mitigate Distributed Denial of Service (DDoS) attacks by filtering and monitoring HTTP/HTTPS traffic. This enhances the security of the application at the edge.

### Why not the other options?

- A. Configure the application's EC2 instances to use NAT gateways for all inbound traffic:

- NAT gateways are used for outbound internet access for instances in private subnets, not for inbound traffic. This option is not relevant for enhancing edge security.

- D. Require all inbound network traffic to route through a bastion host in the private subnet:

- A bastion host is used for secure access to instances in private subnets, typically for administrative purposes. It is not suitable for routing all inbound application traffic, as it would create a bottleneck and is not designed for this use case.

- E. Require all inbound and outbound network traffic to route through an AWS Direct Connect connection:

- AWS Direct Connect is used for establishing a dedicated network connection between on-premises infrastructure and AWS. It is not a security measure for protecting against attacks on web applications.

By moving the web servers to private subnets and configuring AWS WAF for the ALB, the application's edge security is significantly enhanced, protecting the EC2 instances from potential attacks.

7

A Security Administrator is restricting the capabilities of company root user accounts. The company uses AWS Organizations and has enabled it for all feature sets, including consolidated billing. The top-level account is used for billing and administrative purposes, not for operational AWS resource purposes.
How can the Administrator restrict usage of member root user accounts across the organization?

  • A. Disable the use of the root user account at the organizational root. Enable multi-factor authentication of the root user account for each organizational member account.

  • B. Configure IAM user policies to restrict root account capabilities for each Organizations member account.

  • C. Create an organizational unit (OU) in Organizations with a service control policy that controls usage of the root user. Add all operational accounts to the new OU.

  • D. Configure AWS CloudTrail to integrate with Amazon CloudWatch Logs and then create a metric filter for RootAccountUsage.

The correct answer is:

C. Create an organizational unit (OU) in Organizations with a service control policy that controls usage of the root user. Add all operational accounts to the new OU.

### Explanation:

To restrict the usage of root user accounts across the organization, the Security Administrator can use AWS Organizations and Service Control Policies (SCPs). Here's how this solution works:

1. Create an Organizational Unit (OU):

- The Administrator creates a new OU within AWS Organizations to group all operational accounts.

2. Create a Service Control Policy (SCP):

- The Administrator creates an SCP that restricts the capabilities of the root user account. For example, the SCP can deny specific actions or limit the root user's permissions.

3. Attach the SCP to the OU:

- The SCP is attached to the OU, and all member accounts within the OU inherit the policy. This ensures that the root user accounts in those member accounts are restricted according to the SCP.

4. Add Operational Accounts to the OU:

- All operational accounts are moved into the new OU, ensuring that the SCP applies to them.

### Why not the other options?

- A. Disable the use of the root user account at the organizational root. Enable multi-factor authentication of the root user account for each organizational member account:

- Disabling the root user account at the organizational root is not possible, as the root user is always available for emergency access. Enabling MFA is a good security practice but does not restrict the capabilities of the root user.

- B. Configure IAM user policies to restrict root account capabilities for each Organizations member account:

- IAM policies cannot be applied to the root user account. They only apply to IAM users, groups, and roles.

- D. Configure AWS CloudTrail to integrate with Amazon CloudWatch Logs and then create a metric filter for RootAccountUsage:

- While this approach can help monitor and detect root user activity, it does not restrict or control the usage of the root user account.

By using SCPs in AWS Organizations, the Administrator can effectively restrict the capabilities of root user accounts across the organization, ensuring better security and compliance.
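
A minimal sketch of such an SCP, based on the example pattern AWS documents for restricting the root user; the OU ID and policy name are placeholders.

```python
import json
import boto3

org = boto3.client("organizations")

# Deny all actions when the caller is an account's root user.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyRootUserActions",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {"StringLike": {"aws:PrincipalArn": "arn:aws:iam::*:root"}},
        }
    ],
}

policy = org.create_policy(
    Name="RestrictRootUser",
    Description="Blocks root user activity in member accounts",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach the SCP to the OU that contains the operational accounts.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-11111111",  # placeholder OU ID
)
```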

8

A Systems Engineer has been tasked with configuring outbound mail through Simple Email Service (SES) and requires compliance with current TLS standards.
The mail application should be configured to connect to which of the following endpoints and corresponding ports?

  • A. email.us-east-1.amazonaws.com over port 8080

  • B. email-pop3.us-east-1.amazonaws.com over port 995

  • C. email-smtp.us-east-1.amazonaws.com over port 587

  • D. email-imap.us-east-1.amazonaws.com over port 993

The correct answer is:

C. email-smtp.us-east-1.amazonaws.com over port 587

### Explanation:

To configure outbound mail through Amazon Simple Email Service (SES) while ensuring compliance with current TLS standards, the mail application must connect to the SMTP endpoint provided by SES. Here's why option C is correct:

1. SMTP Endpoint:

- SES provides an SMTP endpoint for sending emails. The endpoint format is email-smtp.<region>.amazonaws.com, where <region> is the AWS region (e.g., us-east-1).

2. Port 587:

- Port 587 is the standard port for SMTP submission with STARTTLS, which ensures that the connection is encrypted using TLS. This is the recommended port for secure email transmission.

3. TLS Compliance:

- Using port 587 with STARTTLS ensures that the mail application complies with current TLS standards for secure communication.

### Why not the other options?

- A. email.us-east-1.amazonaws.com over port 8080:

- This is not a valid SES endpoint or port. SES does not use port 8080 for email transmission.

- B. email-pop3.us-east-1.amazonaws.com over port 995:

- POP3 (Post Office Protocol) is used for retrieving emails, not sending them. Port 995 is for POP3 over SSL/TLS, which is unrelated to outbound mail through SES.

- D. email-imap.us-east-1.amazonaws.com over port 993:

- IMAP (Internet Message Access Protocol) is used for retrieving emails, not sending them. Port 993 is for IMAP over SSL/TLS, which is unrelated to outbound mail through SES.

By configuring the mail application to connect to email-smtp.us-east-1.amazonaws.com over port 587, the Systems Engineer ensures secure and compliant outbound email transmission through Amazon SES.
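
A short sketch using Python's standard smtplib shows the shape of the connection: open on port 587, upgrade the session with STARTTLS, then authenticate with SES SMTP credentials. The sender, recipient, and credentials below are placeholders, and the From address must be an SES-verified identity.

```python
import smtplib
import ssl
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Test message"
msg["From"] = "sender@example.com"      # must be an SES-verified identity
msg["To"] = "recipient@example.com"
msg.set_content("Sent through Amazon SES over STARTTLS on port 587.")

with smtplib.SMTP("email-smtp.us-east-1.amazonaws.com", 587) as server:
    server.starttls(context=ssl.create_default_context())  # upgrade the connection to TLS
    server.login("SES_SMTP_USERNAME", "SES_SMTP_PASSWORD")  # placeholder credentials
    server.send_message(msg)
```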

9

A threat assessment has identified a risk whereby an internal employee could exfiltrate sensitive data from a production host running inside AWS (Account 1). The threat was documented as follows:
Threat description: A malicious actor could upload sensitive data from Server X by configuring credentials for an AWS account (Account 2) they control and uploading data to an Amazon S3 bucket within their control.
Server X has outbound internet access configured via a proxy server. Legitimate access to S3 is required so that the application can upload encrypted files to an S3 bucket. Server X is currently using an IAM instance role. The proxy server is not able to inspect any of the server communication due to TLS encryption.
Which of the following options will mitigate the threat? (Choose two.)

  • A. Bypass the proxy and use an S3 VPC endpoint with a policy that whitelists only certain S3 buckets within Account 1.

  • B. Block outbound access to public S3 endpoints on the proxy server.

  • C. Configure Network ACLs on Server X to deny access to S3 endpoints.

  • D. Modify the S3 bucket policy for the legitimate bucket to allow access only from the public IP addresses associated with the application server.

  • E. Remove the IAM instance role from the application server and save API access keys in a trusted and encrypted application config file.

The correct answers are:

A. Bypass the proxy and use an S3 VPC endpoint with a policy that whitelists only certain S3 buckets within Account 1.

B. Block outbound access to public S3 endpoints on the proxy server.

### Explanation:

To mitigate the threat of a malicious actor exfiltrating sensitive data from Server X to an S3 bucket in Account 2, the following measures should be taken:

1. A. Bypass the proxy and use an S3 VPC endpoint with a policy that whitelists only certain S3 buckets within Account 1:

- By using an S3 VPC endpoint, traffic between Server X and S3 remains within the AWS network, avoiding the public internet. This ensures that data cannot be exfiltrated to external S3 buckets.

- A VPC endpoint policy can be configured to allow access only to specific S3 buckets within Account 1, preventing Server X from uploading data to unauthorized buckets.

2. B. Block outbound access to public S3 endpoints on the proxy server:

- Blocking outbound access to public S3 endpoints ensures that Server X cannot communicate with S3 buckets outside the VPC, including those in Account 2. This prevents the malicious actor from uploading data to their own S3 bucket.

### Why not the other options?

- C. Configure Network ACLs on Server X to deny access to S3 endpoints:

- Network ACLs are stateless and apply to all traffic, which could disrupt legitimate access to S3. This approach is not granular enough and could break the application's functionality.

- D. Modify the S3 bucket policy for the legitimate bucket to allow access only from the public IP addresses associated with the application server:

- This approach is ineffective because Server X uses an IAM instance role, which does not rely on IP-based restrictions. Additionally, the malicious actor could still upload data to their own S3 bucket in Account 2.

- E. Remove the IAM instance role from the application server and save API access keys in a trusted and encrypted application config file:

- This approach increases security risks because storing static access keys in a config file is less secure than using IAM roles. It also does not prevent the malicious actor from using their own credentials to upload data to Account 2.

By using an S3 VPC endpoint with a restrictive policy and blocking outbound access to public S3 endpoints, the threat of data exfiltration is effectively mitigated while maintaining legitimate access to S3.
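
As an illustration, a gateway VPC endpoint for S3 can be created with a policy that allows access only to the legitimate bucket in Account 1. The bucket name, VPC ID, and route table ID below are placeholders.

```python
import json
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Endpoint policy: only the approved bucket in Account 1 is reachable through the endpoint.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": [
                "arn:aws:s3:::account1-upload-bucket",
                "arn:aws:s3:::account1-upload-bucket/*",
            ],
        }
    ],
}

ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",                 # placeholder
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],       # placeholder
    PolicyDocument=json.dumps(endpoint_policy),
)
```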

10

A company will store sensitive documents in three Amazon S3 buckets based on a data classification scheme of Sensitive, Confidential, and Restricted. The security solution must meet all of the following requirements:
✑ Each object must be encrypted using a unique key.
✑ Items that are stored in the Restricted bucket require two-factor authentication for decryption.
✑ AWS KMS must automatically rotate encryption keys annually.
Which of the following meets these requirements?

  • A. Create a Customer Master Key (CMK) for each data classification type, and enable the rotation of it annually. For the Restricted CMK, define the MFA policy within the key policy. Use S3 SSE-KMS to encrypt the objects.

  • B. Create a CMK grant for each data classification type with EnableKeyRotation and MultiFactorAuthPresent set to true. S3 can then use the grants to encrypt each object with a unique CMK.

  • C. Create a CMK for each data classification type, and within the CMK policy, enable rotation of it annually, and define the MFA policy. S3 can then create DEK grants to uniquely encrypt each object within the S3 bucket.

  • D. Create a CMK with unique imported key material for each data classification type, and rotate them annually. For the Restricted key material, define the MFA policy in the key policy. Use S3 SSE-KMS to encrypt the objects.

The correct answer is:

A. Create a Customer Master Key (CMK) for each data classification type, and enable the rotation of it annually. For the "Restricted" CMK, define the MFA policy within the key policy. Use S3 SSE-KMS to encrypt the objects.

### Explanation:

This solution meets all the requirements as follows:

1. Each object must be encrypted using a unique key:

- When using S3 Server-Side Encryption with AWS KMS (SSE-KMS), each object is encrypted with a unique data encryption key (DEK), which is itself encrypted by the Customer Master Key (CMK). This ensures that each object has a unique encryption key.

2. Items in the "Restricted" bucket require two-factor authentication for decryption:

- The MFA policy can be defined within the key policy of the CMK used for the "Restricted" bucket. This ensures that MFA is required for decryption operations involving this CMK.

3. AWS KMS must automatically rotate encryption keys annually:

- AWS KMS supports automatic key rotation for CMKs. By enabling this feature, the CMKs will be rotated annually, ensuring compliance with the requirement.

### Why not the other options?

- B. Create a CMK grant for each data classification type with EnableKeyRotation and MultiFactorAuthPresent set to true. S3 can then use the grants to encrypt each object with a unique CMK:

- CMK grants are used to delegate permissions to use a CMK, but they do not enable automatic key rotation or MFA policies. This option does not meet the requirements.

- C. Create a CMK for each data classification type, and within the CMK policy, enable rotation of it annually, and define the MFA policy. S3 can then create DEK grants to uniquely encrypt each object within the S3 bucket:

- While this option mentions enabling key rotation and defining an MFA policy, the use of "DEK grants" is incorrect. S3 SSE-KMS automatically handles the creation of unique data encryption keys (DEKs) for each object, so no additional grants are needed.

- D. Create a CMK with unique imported key material for each data classification type, and rotate them annually. For the "Restricted" key material, define the MFA policy in the key policy. Use S3 SSE-KMS to encrypt the objects:

- Importing key material is unnecessary and adds complexity. AWS KMS can automatically generate and manage CMKs without requiring imported key material. This option is overly complex and does not provide any additional benefits over option A.

By creating a CMK for each data classification type, enabling automatic key rotation, defining an MFA policy for the "Restricted" CMK, and using S3 SSE-KMS, the solution meets all the requirements effectively.
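
A minimal boto3 sketch of the moving parts: create a CMK per classification, turn on annual rotation, and upload objects with SSE-KMS so each object receives its own data key. The bucket name is a placeholder, and the MFA statement shown is only an illustrative fragment to merge into the Restricted key's policy.

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# One CMK per data classification, with automatic annual rotation enabled.
key_id = kms.create_key(Description="Restricted data classification key")["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)

# Illustrative key-policy fragment for the Restricted CMK: deny decryption without MFA.
mfa_statement = {
    "Sid": "RequireMFAForDecrypt",
    "Effect": "Deny",
    "Principal": {"AWS": "*"},
    "Action": "kms:Decrypt",
    "Resource": "*",
    "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
}

# SSE-KMS: S3 requests a unique data key per object, encrypted under the CMK.
s3.put_object(
    Bucket="restricted-documents-bucket",  # placeholder
    Key="report.pdf",
    Body=b"...",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId=key_id,
)
```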

11

An organization wants to deploy a three-tier web application whereby the application servers run on Amazon EC2 instances. These EC2 instances need access to credentials that they will use to authenticate their SQL connections to an Amazon RDS DB instance. Also, AWS Lambda functions must issue queries to the RDS database by using the same database credentials.
The credentials must be stored so that the EC2 instances and the Lambda functions can access them. No other access is allowed. The access logs must record when the credentials were accessed and by whom.
What should the Security Engineer do to meet these requirements?

  • A. Store the database credentials in AWS Key Management Service (AWS KMS). Create an IAM role with access to AWS KMS by using the EC2 and Lambda service principals in the role's trust policy. Add the role to an EC2 instance profile. Attach the instance profile to the EC2 instances. Set up Lambda to use the new role for execution.

  • B. Store the database credentials in AWS KMS. Create an IAM role with access to KMS by using the EC2 and Lambda service principals in the role's trust policy. Add the role to an EC2 instance profile. Attach the instance profile to the EC2 instances and the Lambda function.

  • C. Store the database credentials in AWS Secrets Manager. Create an IAM role with access to Secrets Manager by using the EC2 and Lambda service principals in the role's trust policy. Add the role to an EC2 instance profile. Attach the instance profile to the EC2 instances and the Lambda function.

  • D. Store the database credentials in AWS Secrets Manager. Create an IAM role with access to Secrets Manager by using the EC2 and Lambda service principals in the role's trust policy. Add the role to an EC2 instance profile. Attach the instance profile to the EC2 instances. Set up Lambda to use the new role for execution.

The correct answer is:

D. Store the database credentials in AWS Secrets Manager. Create an IAM role with access to Secrets Manager by using the EC2 and Lambda service principals in the role's trust policy. Add the role to an EC2 instance profile. Attach the instance profile to the EC2 instances. Set up Lambda to use the new role for execution.

### Explanation:

This solution meets all the requirements as follows:

1. Store the database credentials in AWS Secrets Manager:

- AWS Secrets Manager is designed to securely store and manage sensitive information, such as database credentials. It provides automatic rotation, access control, and auditing capabilities.

2. Create an IAM role with access to Secrets Manager:

- An IAM role is created with permissions to access the secrets stored in Secrets Manager. The role's trust policy allows both EC2 and Lambda service principals to assume the role.

3. Add the role to an EC2 instance profile and attach it to the EC2 instances:

- The IAM role is added to an EC2 instance profile, which is then attached to the EC2 instances. This allows the EC2 instances to assume the role and access the credentials.

4. Set up Lambda to use the new role for execution:

- The Lambda function is configured to use the same IAM role for execution, allowing it to access the credentials stored in Secrets Manager.

5. Access logs and auditing:

- AWS Secrets Manager automatically logs access to secrets, including when the credentials were accessed and by whom. This meets the requirement for access logging.

### Why not the other options?

- A. Store the database credentials in AWS Key Management Service (AWS KMS). Create an IAM role with access to AWS KMS by using the EC2 and Lambda service principals in the role's trust policy. Add the role to an EC2 instance profile. Attach the instance profile to the EC2 instances. Set up Lambda to use the new role for execution:

- AWS KMS is used for encryption keys, not for storing credentials. This option does not meet the requirement for storing and managing database credentials.

- B. Store the database credentials in AWS KMS. Create an IAM role with access to KMS by using the EC2 and Lambda service principals in the role's trust policy. Add the role to an EC2 instance profile. Attach the instance profile to the EC2 instances and the Lambda function:

- Similar to option A, AWS KMS is not suitable for storing credentials. Additionally, attaching an instance profile to a Lambda function is not valid, as Lambda functions use execution roles.

- C. Store the database credentials in AWS Secrets Manager. Create an IAM role with access to Secrets Manager by using the EC2 and Lambda service principals in the role's trust policy. Add the role to an EC2 instance profile. Attach the instance profile to the EC2 instances and the Lambda function:

- This option incorrectly suggests attaching an instance profile to a Lambda function. Lambda functions use execution roles, not instance profiles.

By using AWS Secrets Manager to store the credentials and configuring IAM roles for both EC2 instances and Lambda functions, the Security Engineer meets all the requirements effectively.
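
Both the EC2 application code and the Lambda handler can then fetch the credentials the same way; the secret name below is a placeholder. Each GetSecretValue call is recorded by AWS CloudTrail, which provides the required access log.

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

# Works unchanged on EC2 (via the instance profile) and in Lambda (via the execution role).
secret = secrets.get_secret_value(SecretId="prod/app/rds-credentials")  # placeholder name
db_credentials = json.loads(secret["SecretString"])

username = db_credentials["username"]
password = db_credentials["password"]
```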

12

A company has a customer master key (CMK) with imported key materials. Company policy requires that all encryption keys must be rotated every year.
What can be done to implement the above policy?

  • A. Enable automatic key rotation annually for the CMK.

  • B. Use AWS Command Line Interface to create an AWS Lambda function to rotate the existing CMK annually.

  • C. Import new key material to the existing CMK and manually rotate the CMK.

  • D. Create a new CMK, import new key material to it, and point the key alias to the new CMK.

The correct answer is:

D. Create a new CMK, import new key material to it, and point the key alias to the new CMK.

### Explanation:

For Customer Master Keys (CMKs) with imported key material, automatic key rotation is not supported. Therefore, to comply with the company's policy of rotating encryption keys annually, the following steps must be taken:

1. Create a new CMK:

- A new CMK is created in AWS KMS to replace the existing one.

2. Import new key material to the new CMK:

- New key material is imported into the new CMK. This ensures that the new key is used for encryption and decryption operations.

3. Point the key alias to the new CMK:

- The key alias (a friendly name for the CMK) is updated to point to the new CMK. This ensures that applications using the alias do not need to be modified, as they will automatically start using the new CMK.

### Why not the other options?

- A. Enable automatic key rotation annually for the CMK:

- Automatic key rotation is not supported for CMKs with imported key material. This option is invalid.

- B. Use AWS Command Line Interface to create an AWS Lambda function to rotate the existing CMK annually:

- While this approach could be used to automate the process of creating a new CMK and importing key material, it is unnecessarily complex. Manually creating a new CMK and updating the alias is simpler and more straightforward.

- C. Import new key material to the existing CMK and manually rotate the CMK:

- Importing new key material into an existing CMK does not rotate the key. The existing CMK continues to use the same key ID, and the new key material does not change the key's metadata or alias. This option does not meet the requirement for key rotation.

By creating a new CMK, importing new key material, and updating the alias, the company can effectively rotate the encryption key annually while complying with its policy.
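
A hedged sketch of the final step: once the replacement CMK has been created with Origin=EXTERNAL and its new key material imported, repointing the alias switches applications over without code changes. The alias name and key ID are placeholders.

```python
import boto3

kms = boto3.client("kms")

# The replacement CMK must be created for imported material before this step, e.g.:
#   kms.create_key(Origin="EXTERNAL", Description="2025 rotation of the data key")
# followed by get_parameters_for_import / import_key_material for the new material.

kms.update_alias(
    AliasName="alias/application-data-key",              # alias the applications already use
    TargetKeyId="1234abcd-12ab-34cd-56ef-1234567890ab",  # placeholder: the new CMK's key ID
)
```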

13

A water utility company uses a number of Amazon EC2 instances to manage updates to a fleet of 2,000 Internet of Things (IoT) field devices that monitor water quality. These devices each have unique access credentials.
An operational safety policy requires that access to specific credentials is independently auditable.
What is the MOST cost-effective way to manage the storage of credentials?

  • A. Use AWS Systems Manager to store the credentials as Secure Strings Parameters. Secure by using an AWS KMS key.

  • B. Use AWS Key Management System to store a master key, which is used to encrypt the credentials. The encrypted credentials are stored in an Amazon RDS instance.

  • C. Use AWS Secrets Manager to store the credentials.

  • D. Store the credentials in a JSON file on Amazon S3 with server-side encryption.

The correct answer is:

C. Use AWS Secrets Manager to store the credentials.

### Explanation:

AWS Secrets Manager is the most cost-effective and secure solution for managing the storage of credentials for the IoT field devices. Here's why:

1. Independent Auditing:

- Secrets Manager provides access logging through integration with AWS CloudTrail. This allows for independent auditing of when and by whom specific credentials were accessed, meeting the operational safety policy requirement.

2. Secure Storage:

- Secrets Manager encrypts secrets at rest using AWS KMS (Key Management Service). This ensures that the credentials are securely stored.

3. Automatic Rotation:

- Secrets Manager supports automatic rotation of credentials, which enhances security by regularly updating the credentials without manual intervention.

4. Cost-Effectiveness:

- Secrets Manager is designed for securely storing and managing secrets, making it more cost-effective than custom solutions like storing encrypted credentials in an RDS instance or S3.

5. Scalability:

- Secrets Manager can handle a large number of secrets (e.g., 2,000 unique credentials for IoT devices) and provides APIs for easy retrieval and management.

### Why not the other options?

- A. Use AWS Systems Manager to store the credentials as Secure Strings Parameters. Secure by using an AWS KMS key:

- While AWS Systems Manager Parameter Store can store SecureString parameters, it lacks the built-in automatic credential rotation that Secrets Manager provides, making it a weaker fit for managing and auditing access to 2,000 frequently changing device credentials.

- B. Use AWS Key Management System to store a master key, which is used to encrypt the credentials. The encrypted credentials are stored in an Amazon RDS instance:

- This approach is overly complex and not cost-effective. Storing encrypted credentials in an RDS instance requires additional infrastructure and management, and it does not provide built-in access logging or rotation capabilities.

- D. Store the credentials in a JSON file on Amazon S3 with server-side encryption:

- While S3 provides server-side encryption, this approach lacks access logging and automatic rotation. Additionally, managing 2,000 unique credentials in a JSON file is cumbersome and error-prone.

By using AWS Secrets Manager, the water utility company can securely store and manage the credentials for its IoT devices while meeting the operational safety policy requirements for independent auditing.

14

An organization is using Amazon CloudWatch Logs with agents deployed on its Linux Amazon EC2 instances. The agent configuration files have been checked and the application log files to be pushed are configured correctly. A review has identified that logging from specific instances is missing.
Which steps should be taken to troubleshoot the issue? (Choose two.)

  • A. Use an EC2 run command to confirm that the awslogs service is running on all instances.

  • B. Verify that the permissions used by the agent allow creation of log groups/streams and to put log events.

  • C. Check whether any application log entries were rejected because of invalid time stamps by reviewing /var/cwlogs/rejects.log.

  • D. Check that the trust relationship grants the service cwlogs.amazonaws.com permission to write objects to the Amazon S3 staging bucket.

  • E. Verify that the time zone on the application servers is in UTC.

The correct answers are:

A. Use an EC2 run command to confirm that the “awslogs” service is running on all instances.

B. Verify that the permissions used by the agent allow creation of log groups/streams and to put log events.

### Explanation:

To troubleshoot missing logging from specific EC2 instances, the following steps should be taken:

1. A. Use an EC2 run command to confirm that the “awslogs” service is running on all instances:

- The CloudWatch Logs agent (the awslogs service) must be running on the instances to push logs to CloudWatch. Using EC2 Run Command, you can remotely check the status of the awslogs service on all instances to ensure it is active.

2. B. Verify that the permissions used by the agent allow creation of log groups/streams and to put log events:

- The IAM role or credentials used by the CloudWatch Logs agent must have the necessary permissions to create log groups/streams and to put log events. If the permissions are missing or incorrect, the agent will not be able to send logs to CloudWatch.

### Why not the other options?

- C. Check whether any application log entries were rejected because of invalid time stamps by reviewing /var/cwlogs/rejects.log:

- While this step can help identify issues with log entries, it is not directly related to the problem of missing logging from specific instances. The rejects.log file is used to track rejected log entries, not missing logs.

- D. Check that the trust relationship grants the service “cwlogs.amazonaws.com” permission to write objects to the Amazon S3 staging bucket:

- This is irrelevant. CloudWatch Logs does not use an S3 staging bucket for log ingestion. The trust relationship should allow the EC2 instances to assume the IAM role with permissions for CloudWatch Logs.

- E. Verify that the time zone on the application servers is in UTC:

- While incorrect time zones can cause issues with log timestamps, they do not prevent logs from being sent to CloudWatch. This step is not directly related to the issue of missing logs.

By confirming that the awslogs service is running and verifying the agent's permissions, you can effectively troubleshoot the issue of missing logging from specific EC2 instances.
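
A quick way to perform check A across the fleet is Systems Manager Run Command with the AWS-RunShellScript document. The instance IDs are placeholders, and the exact service name (awslogs vs. awslogsd) depends on the agent version and OS.

```python
import boto3

ssm = boto3.client("ssm")

# Check the CloudWatch Logs agent service status on the affected instances.
response = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0", "i-0fedcba9876543210"],  # placeholder instance IDs
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["sudo service awslogs status || sudo systemctl status awslogsd"]},
)
print("Command ID:", response["Command"]["CommandId"])
```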

15

A Security Engineer must design a solution that enables the incident Response team to audit for changes to a user's IAM permissions in the case of a security incident.
How can this be accomplished?

  • A. Use AWS Config to review the IAM policy assigned to users before and after the incident.

  • B. Run the GenerateCredentialReport via the AWS CLI, and copy the output to Amazon S3 daily for auditing purposes.

  • C. Copy AWS CloudFormation templates to S3, and audit for changes from the template.

  • D. Use Amazon EC2 Systems Manager to deploy images, and review AWS CloudTrail logs for changes.

The correct answer is:

A. Use AWS Config to review the IAM policy assigned to users before and after the incident.

### Explanation:

To audit changes to a user's IAM permissions, the Security Engineer can use AWS Config as follows:

1. AWS Config:

- AWS Config provides a detailed history of configuration changes for AWS resources, including IAM users, groups, roles, and policies. It tracks changes to IAM policies and permissions over time.

2. Configuration Snapshots:

- AWS Config captures configuration snapshots of IAM resources, allowing you to review the state of IAM policies before and after a security incident.

3. Compliance and Auditing:

- AWS Config rules can be used to evaluate whether IAM configurations comply with security policies. The configuration timeline feature allows you to see exactly when changes were made and what the changes were.

### Why not the other options?

- B. Run the GenerateCredentialReport via the AWS CLI, and copy the output to Amazon S3 daily for auditing purposes:

- The Credential Report provides a snapshot of IAM user credentials (e.g., password and access key usage) but does not track changes to IAM policies or permissions over time. It is not suitable for auditing changes to IAM permissions.

- C. Copy AWS CloudFormation templates to S3, and audit for changes from the template:

- CloudFormation templates are used for infrastructure as code (IaC) and do not provide a history of IAM policy changes. This option is irrelevant for auditing IAM permissions.

- D. Use Amazon EC2 Systems Manager to deploy images, and review AWS CloudTrail logs for changes:

- While CloudTrail logs API calls, including those related to IAM, it does not provide a detailed history of configuration changes like AWS Config. Additionally, EC2 Systems Manager is unrelated to auditing IAM permissions.

By using AWS Config, the Security Engineer can effectively audit changes to IAM permissions and provide the incident response team with the necessary information to investigate security incidents.
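
For example, a boto3 sketch of pulling the configuration timeline for one IAM user from AWS Config is shown below; the resource ID is a placeholder, and AWS Config must already be recording IAM resources.

```python
import boto3

config = boto3.client("config")

# Configuration items show the IAM user's recorded state (attached policies, etc.) over time.
history = config.get_resource_config_history(
    resourceType="AWS::IAM::User",
    resourceId="AIDAEXAMPLEUSERID",  # placeholder: the IAM user's unique resource ID
    limit=10,
)

for item in history["configurationItems"]:
    print(item["configurationItemCaptureTime"], item["configurationItemStatus"])
```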

16

A company has complex connectivity rules governing ingress, egress, and communications between Amazon EC2 instances. The rules are so complex that they cannot be implemented within the limits of the maximum number of security groups and network access control lists (network ACLs).
What mechanism will allow the company to implement all required network rules without incurring additional cost?

  • A. Configure AWS WAF rules to implement the required rules.

  • B. Use the operating system built-in, host-based firewall to implement the required rules.

  • C. Use a NAT gateway to control ingress and egress according to the requirements.

  • D. Launch an EC2-based firewall product from the AWS Marketplace, and implement the required rules in that product.

The correct answer is:

B. Use the operating system built-in, host-based firewall to implement the required rules.

### Explanation:

When the complexity of network rules exceeds the limits of AWS security groups and network ACLs, the most cost-effective solution is to use the operating system's built-in, host-based firewall. Here's why:

1. Host-Based Firewall:

- Operating systems like Linux (e.g., iptables or firewalld) and Windows (e.g., Windows Firewall) provide built-in firewalls that can be configured to implement complex network rules. These firewalls operate at the instance level and are not constrained by the limits of AWS security groups or network ACLs.

2. No Additional Cost:

- Using the host-based firewall does not incur additional costs, as it leverages existing operating system capabilities.

3. Granular Control:

- Host-based firewalls allow for highly granular control over ingress, egress, and inter-instance communication, making them suitable for complex rule sets.

### Why not the other options?

- A. Configure AWS WAF rules to implement the required rules:

- AWS WAF (Web Application Firewall) is designed to protect web applications from common web exploits (e.g., SQL injection, XSS). It is not suitable for controlling general network traffic between EC2 instances.

- C. Use a NAT gateway to control ingress and egress according to the requirements:

- A NAT gateway is used to allow instances in a private subnet to access the internet. It does not provide the granularity or flexibility needed to implement complex network rules.

- D. Launch an EC2-based firewall product from the AWS Marketplace, and implement the required rules in that product:

- While this option can provide advanced firewall capabilities, it incurs additional costs for the EC2 instance and the Marketplace product. It is not a cost-effective solution.

By using the operating system's built-in firewall, the company can implement all required network rules without exceeding AWS limits or incurring additional costs.

17
(The question for this card is an image in the original deck; based on the answer below, it asks which two permissions must be added to an IAM user's policy so that the user can start an EC2 instance that has encrypted Amazon EBS volumes.)

The correct answers are:

B. kms:Decrypt

C. kms:CreateGrant

### Explanation:

To start an EC2 instance with encrypted EBS volumes, the IAM user must have the necessary permissions to interact with the Customer Master Key (CMK) used to encrypt the volumes. The following permissions are required:

1. kms:Decrypt:

- This permission allows the IAM user to decrypt the encrypted EBS volumes attached to the EC2 instance. Without this permission, the instance cannot access the data on the encrypted volumes, causing it to fail to start.

2. kms:CreateGrant:

- This permission allows the IAM user to create a grant, which is necessary for the EC2 service to use the CMK to decrypt the EBS volumes. Grants are used to delegate permissions to AWS services like EC2 to use the CMK on behalf of the user.

### Why not the other options?

- A. kms:GenerateDataKey:

- This permission is used to generate data keys for encryption, not for decrypting existing encrypted volumes. It is not required for starting an EC2 instance with encrypted EBS volumes.

- D. "Condition": { "Bool": { "kms:ViaService": "ec2.us-west-2.amazonaws.com" } }:

- While this condition can restrict the use of the CMK to the EC2 service in a specific region, it is not necessary for the basic functionality of starting an EC2 instance with encrypted EBS volumes.

- E. "Condition": { "Bool": { "kms:GrantIsForAWSResource": true } }:

- This condition ensures that grants are only created for AWS resources, but it is not required for the IAM user to start the EC2 instance.

By adding kms:Decrypt and kms:CreateGrant to the IAM user policy, the user will have the necessary permissions to start the EC2 instance with encrypted EBS volumes.

18

A Security Administrator has a website hosted in Amazon S3. The Administrator has been given the following requirements:
✑ Users may access the website by using an Amazon CloudFront distribution.
✑ Users may not access the website directly by using an Amazon S3 URL.
Which configurations will support these requirements? (Choose two.)

  • A. Associate an origin access identity with the CloudFront distribution.

  • B. Implement a Principal: cloudfront.amazonaws.com condition in the S3 bucket policy.

  • C. Modify the S3 bucket permissions so that only the origin access identity can access the bucket contents.

  • D. Implement security groups so that the S3 bucket can be accessed only by using the intended CloudFront distribution.

  • E. Configure the S3 bucket policy so that it is accessible only through VPC endpoints, and place the CloudFront distribution into the specified VPC.

The correct answers are:

A. Associate an origin access identity with the CloudFront distribution.

C. Modify the S3 bucket permissions so that only the origin access identity can access the bucket contents.

### Explanation:

To ensure that users can access the website only through the CloudFront distribution and not directly via the S3 URL, the following configurations are required:

1. A. Associate an origin access identity with the CloudFront distribution:

- An origin access identity (OAI) is a special CloudFront user that allows CloudFront to access the S3 bucket. By associating an OAI with the CloudFront distribution, you ensure that CloudFront can fetch content from the S3 bucket.

2. C. Modify the S3 bucket permissions so that only the origin access identity can access the bucket contents:

- The S3 bucket policy should be updated to grant access only to the OAI. This ensures that only CloudFront (using the OAI) can access the bucket, and users cannot access the content directly via the S3 URL.

### Why not the other options?

- B. Implement a "Principal": "cloudfront.amazonaws.com" condition in the S3 bucket policy:

- This is incorrect because CloudFront does not use the cloudfront.amazonaws.com principal. Instead, it uses an origin access identity (OAI) to access S3.

- D. Implement security groups so that the S3 bucket can be accessed only by using the intended CloudFront distribution:

- Security groups are used for EC2 instances, not S3 buckets. S3 does not support security groups, so this option is invalid.

- E. Configure the S3 bucket policy so that it is accessible only through VPC endpoints, and place the CloudFront distribution into the specified VPC:

- CloudFront is a global service and cannot be placed into a VPC. Additionally, restricting S3 access to VPC endpoints would prevent CloudFront from accessing the bucket, as CloudFront operates outside the VPC.

By associating an OAI with the CloudFront distribution and restricting S3 bucket access to the OAI, the Security Administrator can ensure that users can access the website only through CloudFront and not directly via the S3 URL.
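
A sketch of the matching bucket policy, using the documented OAI principal format; the OAI ID and bucket name are placeholders.

```python
import json
import boto3

s3 = boto3.client("s3")

# Allow only the CloudFront origin access identity to read objects from the bucket.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontOAIReadOnly",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E2EXAMPLE1XYZ"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-website-bucket/*",
        }
    ],
}

s3.put_bucket_policy(Bucket="example-website-bucket", Policy=json.dumps(bucket_policy))
```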

19
(The question for this card is an image in the original deck; based on the answer below, it describes a Lambda function that runs an Amazon Athena query against CloudTrail logs stored in Amazon S3 and fails with an "Insufficient Permissions" error, and asks for the cause.)

The correct answer is:

D. The Lambda function does not have permissions to access the CloudTrail S3 bucket.

### Explanation:

The error message "Insufficient Permissions" indicates that the Lambda function lacks the necessary permissions to perform its tasks. Here's why option D is correct:

1. Access to CloudTrail Logs in S3:

- The Lambda function runs an Athena query to check CloudTrail logs stored in an S3 bucket. To access these logs, the Lambda function must have permissions to read from the S3 bucket where the CloudTrail logs are stored.

2. Missing S3 Permissions in Lambda Execution Role:

- The Lambda execution role provided in the question does not include any S3 permissions (s3:GetObject, s3:ListBucket, etc.). Without these permissions, the Lambda function cannot access the CloudTrail logs in the S3 bucket, leading to the "Insufficient Permissions" error.

### Why not the other options?

- A. The Lambda function does not have permissions to start the Athena query execution:

- The Lambda execution role includes athena:* permissions, which allow it to start and manage Athena queries. This is not the cause of the error.

- B. The Security Engineer does not have permissions to start the Athena query execution:

- The Security Engineer's permissions include athena:Get* and athena:List*, which are sufficient to start and manage Athena queries. This is not the cause of the error.

- C. The Athena service does not support invocation through Lambda:

- Athena can be invoked through Lambda. This option is incorrect.

By adding the necessary S3 permissions (e.g., s3:GetObject and s3:ListBucket) to the Lambda execution role, the Lambda function will be able to access the CloudTrail logs in the S3 bucket, resolving the "Insufficient Permissions" error.
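
A minimal sketch of the fix, adding an inline policy with the missing S3 read permissions to the Lambda function's execution role; the role, policy, and bucket names are placeholders.

```python
import json
import boto3

iam = boto3.client("iam")

# Grant read access to the CloudTrail log bucket and its objects.
s3_read_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-cloudtrail-logs",
                "arn:aws:s3:::example-cloudtrail-logs/*",
            ],
        }
    ],
}

iam.put_role_policy(
    RoleName="cloudtrail-query-lambda-role",   # placeholder execution role name
    PolicyName="cloudtrail-bucket-read",
    PolicyDocument=json.dumps(s3_read_policy),
)
```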

20

A company requires that IP packet data be inspected for invalid or malicious content.
Which of the following approaches achieve this requirement? (Choose two.)

  • A. Configure a proxy solution on Amazon EC2 and route all outbound VPC traffic through it. Perform inspection within proxy software on the EC2 instance.

  • B. Configure the host-based agent on each EC2 instance within the VPC. Perform inspection within the host-based agent.

  • C. Enable VPC Flow Logs for all subnets in the VPC. Perform inspection from the Flow Log data within Amazon CloudWatch Logs.

  • D. Configure Elastic Load Balancing (ELB) access logs. Perform inspection from the log data within the ELB access log files.

  • E. Configure the CloudWatch Logs agent on each EC2 instance within the VPC. Perform inspection from the log data within CloudWatch Logs.

The correct answers are:

A. Configure a proxy solution on Amazon EC2 and route all outbound VPC traffic through it. Perform inspection within proxy software on the EC2 instance.

B. Configure the host-based agent on each EC2 instance within the VPC. Perform inspection within the host-based agent.

### Explanation:

To inspect IP packet data for invalid or malicious content, the following approaches are effective:

1. A. Configure a proxy solution on Amazon EC2 and route all outbound VPC traffic through it. Perform inspection within proxy software on the EC2 instance:

- A proxy solution can inspect all outbound traffic from the VPC. By routing traffic through the proxy, you can analyze and filter packets for malicious content before they leave the VPC.

2. B. Configure the host-based agent on each EC2 instance within the VPC. Perform inspection within the host-based agent:

- A host-based agent can inspect traffic at the instance level. This allows for granular inspection of IP packets directly on each EC2 instance, ensuring that malicious content is detected and blocked before it enters or leaves the instance.

### Why not the other options?

- C. Enable VPC Flow Logs for all subnets in the VPC. Perform inspection from the Flow Log data within Amazon CloudWatch Logs:

- VPC Flow Logs provide metadata about IP traffic (e.g., source/destination IP, ports, and protocol) but do not capture the actual packet content. Therefore, they cannot be used to inspect for invalid or malicious content within the packets.

- D. Configure Elastic Load Balancing (ELB) access logs. Perform inspection from the log data within the ELB access log files:

- ELB access logs provide information about HTTP/HTTPS requests but do not capture IP packet data. They are not suitable for inspecting IP packets for malicious content.

- E. Configure the CloudWatch Logs agent on each EC2 instance within the VPC. Perform inspection from the log data within CloudWatch Logs:

- The CloudWatch Logs agent collects and sends logs to CloudWatch but does not inspect IP packet data. This option is not suitable for packet-level inspection.

By using a proxy solution or host-based agents, the company can effectively inspect IP packet data for invalid or malicious content, ensuring the security of its network traffic.

New cards
21

An organization has a system in AWS that allows a large number of remote workers to submit data files. File sizes vary from a few kilobytes to several megabytes.
A recent audit highlighted a concern that data files are not encrypted while in transit over untrusted networks.
Which solution would remediate the audit finding while minimizing the effort required?

  • A. Upload an SSL certificate to IAM, and configure Amazon CloudFront with the passphrase for the private key.

  • B. Call KMS.Encrypt() in the client, passing in the data file contents, and call KMS.Decrypt() server-side.

  • C. Use AWS Certificate Manager to provision a certificate on an Elastic Load Balancing in front of the web service's servers.

  • D. Create a new VPC with an Amazon VPC VPN endpoint, and update the web service's DNS record.

The correct answer is:

C. Use AWS Certificate Manager to provision a certificate on an Elastic Load Balancing in front of the web service's servers.

### Explanation:

To remediate the audit finding and ensure that data files are encrypted while in transit over untrusted networks, the following solution is the most effective and requires minimal effort:

1. AWS Certificate Manager (ACM):

- ACM allows you to provision, manage, and deploy SSL/TLS certificates for use with AWS services. These certificates can be used to encrypt data in transit.

2. Elastic Load Balancing (ELB):

- By provisioning an SSL/TLS certificate from ACM and attaching it to an ELB (Application Load Balancer or Network Load Balancer), you can ensure that all data transmitted between remote workers and the web service is encrypted using HTTPS.

3. Minimal Effort:

- This solution requires minimal effort because ACM simplifies the process of obtaining and managing certificates, and ELB automatically handles the encryption and decryption of traffic.

### Why not the other options?

- A. Upload an SSL certificate to IAM, and configure Amazon CloudFront with the passphrase for the private key:

- This option is not suitable because CloudFront is typically used for content delivery, not for handling file uploads from remote workers. Additionally, managing certificates through IAM is more complex than using ACM.

- B. Call KMS.Encrypt() in the client, passing in the data file contents, and call KMS.Decrypt() server-side:

- While this approach would encrypt the data, it is overly complex and requires significant changes to the client and server-side code. It is not a practical solution for encrypting data in transit.

- D. Create a new VPC with an Amazon VPC VPN endpoint, and update the web service's DNS record:

- This option is not practical for a large number of remote workers, as it would require each worker to connect via VPN, which is cumbersome and not scalable. It also does not address the encryption of data in transit over untrusted networks.

By using AWS Certificate Manager to provision a certificate on an Elastic Load Balancer, the organization can ensure that data files are encrypted in transit with minimal effort, addressing the audit finding effectively.
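
A minimal sketch of that setup is below, assuming a hypothetical domain, load balancer, and target group; the DNS validation for the ACM certificate must complete before the listener can use it:

```python
import boto3

acm = boto3.client("acm")
elbv2 = boto3.client("elbv2")

# Request a public certificate (DNS validation must be completed
# before the certificate can be attached to a listener).
cert = acm.request_certificate(
    DomainName="uploads.example.com",   # hypothetical domain
    ValidationMethod="DNS",
)

# Attach the certificate to an HTTPS listener on the load balancer
# in front of the web service (ARNs are placeholders).
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:...:loadbalancer/app/upload-alb/123",
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": cert["CertificateArn"]}],
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/upload-tg/456",
    }],
)
```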

New cards
22

Which option for the use of the AWS Key Management Service (KMS) supports key management best practices that focus on minimizing the potential scope of data exposed by a possible future key compromise?

  • A. Use KMS automatic key rotation to replace the master key, and use this new master key for future encryption operations without re-encrypting previously encrypted data.

  • B. Generate a new Customer Master Key (CMK), re-encrypt all existing data with the new CMK, and use it for all future encryption operations.

  • C. Change the CMK alias every 90 days, and update key-calling applications with the new key alias.

  • D. Change the CMK permissions to ensure that individuals who can provision keys are not the same individuals who can use the keys.

The correct answer is:

B. Generate a new Customer Master Key (CMK), re-encrypt all existing data with the new CMK, and use it for all future encryption operations.

### Explanation:

To minimize the potential scope of data exposed by a possible future key compromise, the following key management best practice should be followed:

1. Generate a New CMK and Re-encrypt Data:

- By generating a new Customer Master Key (CMK) and re-encrypting all existing data with the new CMK, you ensure that any compromised key only exposes a limited amount of data. This approach reduces the risk associated with a key compromise by limiting the scope of data encrypted with the compromised key.

2. Use the New CMK for Future Encryption Operations:

- After re-encrypting the data, the new CMK should be used for all future encryption operations. This ensures that only the new key is used moving forward, further minimizing the risk of exposure.

### Why not the other options?

- A. Use KMS automatic key rotation to replace the master key, and use this new master key for future encryption operations without re-encrypting previously encrypted data:

- While automatic key rotation is a good practice, it does not re-encrypt previously encrypted data. This means that data encrypted with the old key remains vulnerable if the old key is compromised.

- C. Change the CMK alias every 90 days, and update key-calling applications with the new key alias:

- Changing the CMK alias does not provide any security benefit, as it does not change the underlying key or re-encrypt data. It is merely a naming convention and does not minimize the scope of data exposed by a key compromise.

- D. Change the CMK permissions to ensure that individuals who can provision keys are not the same individuals who can use the keys:

- While this is a good practice for separation of duties, it does not address the scope of data exposed by a key compromise. It focuses on access control rather than minimizing the impact of a compromised key.

By generating a new CMK and re-encrypting all existing data, the organization can effectively minimize the potential scope of data exposed by a possible future key compromise, adhering to key management best practices.
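
A minimal sketch of the re-wrap step is shown below. It assumes the data was envelope-encrypted, so the item to re-encrypt is the KMS-produced ciphertext of a data key (the ciphertext value is a placeholder); KMS never exposes the plaintext to the caller during ReEncrypt:

```python
import boto3

kms = boto3.client("kms")

# Create the replacement customer managed key.
new_key = kms.create_key(Description="Replacement CMK for application data")
new_key_id = new_key["KeyMetadata"]["KeyId"]

# Placeholder: ciphertext previously returned by GenerateDataKey under the old CMK.
encrypted_data_key = b"..."

# Re-wrap the encrypted data key under the new CMK.
response = kms.re_encrypt(
    CiphertextBlob=encrypted_data_key,
    DestinationKeyId=new_key_id,
)
rewrapped_data_key = response["CiphertextBlob"]
```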

New cards
23
term image

The correct answer is:

C. Set up an Organizations hierarchy, replace the global FullMSRAccess with the following Service Control Policy at the top level:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "dynamodb:*",
        "rds:*",
        "ec2:*",
        "s3:*",
        "sts:*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

### Explanation:

To enforce the use of only Amazon EC2, Amazon S3, Amazon RDS, Amazon DynamoDB, and AWS STS in specific accounts, the most scalable and efficient approach is to use AWS Organizations with Service Control Policies (SCPs). Here's why:

1. AWS Organizations:

- AWS Organizations allows you to centrally manage and govern multiple AWS accounts. By setting up an Organizations hierarchy, you can apply policies across all accounts in the organization.

2. Service Control Policies (SCPs):

- SCPs are used to restrict the services and actions that can be performed in member accounts. By replacing the FullMSRAccess policy with a custom SCP at the top level, you can enforce the use of only the specified services (EC2, S3, RDS, DynamoDB, and STS) across all governed accounts.

3. Scalability and Efficiency:

- Applying an SCP at the top level of the Organizations hierarchy ensures that the policy is automatically enforced across all member accounts. This approach is scalable and efficient, as it eliminates the need to manually configure policies for each account or user.

### Why not the other options?

- A. Set up an AWS Organization hierarchy, and replace the FullMSRAccess policy with the following Service Control Policy for the governed organization units:

- While this option uses SCPs, it applies the policy to specific organizational units (OUs) rather than the entire organization. This approach is less efficient and scalable compared to applying the policy at the top level.

- B. Create multiple IAM users for the regulated accounts, and attach the following policy statement to restrict services as required:

- This approach is not scalable, as it requires manually creating and managing IAM users and policies for each account. It also does not leverage the centralized management capabilities of AWS Organizations.

- D. Set up all users in the Active Directory for federated access to all accounts in the company. Associate Active Directory groups with IAM groups, and attach the following policy statement to restrict services as required:

- While federated access and IAM groups are useful for managing user permissions, this approach does not provide the centralized control and scalability offered by AWS Organizations and SCPs. It also requires significant manual configuration.

By using AWS Organizations and applying a top-level SCP, the Security Engineer can efficiently and scalably enforce the use of only the specified AWS services across the governed accounts.
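
As an illustration, a sketch of creating that SCP and attaching it at the organization root is shown below (the policy name and description are hypothetical, and the existing full-access policy would still need to be detached separately, as the answer describes):

```python
import json
import boto3

org = boto3.client("organizations")

allowed_services_scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:*", "rds:*", "ec2:*", "s3:*", "sts:*"],
        "Resource": "*",
    }],
}

# Create the SCP and attach it at the root of the organization so that
# it applies to every current and future member account.
policy = org.create_policy(
    Content=json.dumps(allowed_services_scp),
    Description="Allow only EC2, S3, RDS, DynamoDB, and STS",
    Name="AllowApprovedServicesOnly",
    Type="SERVICE_CONTROL_POLICY",
)

root_id = org.list_roots()["Roots"][0]["Id"]
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId=root_id,
)
```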

New cards
24

A company's database developer has just migrated an Amazon RDS database credential to be stored and managed by AWS Secrets Manager. The developer has also enabled rotation of the credential within the Secrets Manager console and set the rotation to change every 30 days.
After a short period of time, a number of existing applications have failed with authentication errors.
What is the MOST likely cause of the authentication errors?

  • A. Migrating the credential to RDS requires that all access come through requests to the Secrets Manager.

  • B. Enabling rotation in Secrets Manager causes the secret to rotate immediately, and the applications are using the earlier credential.

  • C. The Secrets Manager IAM policy does not allow access to the RDS database.

  • D. The Secrets Manager IAM policy does not allow access for the applications.

The correct answer is:

B. Enabling rotation in Secrets Manager causes the secret to rotate immediately, and the applications are using the earlier credential.

### Explanation:

When you enable rotation for a secret in AWS Secrets Manager, the secret is rotated immediately. This means that the old credential is replaced with a new one. If the applications are still using the old credential, they will fail to authenticate with the database, resulting in authentication errors.

### Why not the other options?

- A. Migrating the credential to RDS requires that all access come through requests to the Secrets Manager:

- This is incorrect because migrating the credential to Secrets Manager does not inherently require all access to come through Secrets Manager. Applications can still use the credentials directly if they are configured to do so.

- C. The Secrets Manager IAM policy does not allow access to the RDS database:

- This is incorrect because the IAM policy for Secrets Manager governs access to the secret itself, not the RDS database. The issue described is related to credential rotation, not IAM permissions.

- D. The Secrets Manager IAM policy does not allow access for the applications:

- This is incorrect because the issue described is related to the applications using outdated credentials, not IAM permissions for accessing Secrets Manager.

To resolve the authentication errors, the applications need to be updated to retrieve the latest credentials from Secrets Manager whenever they attempt to authenticate with the database. This ensures that they always use the most current credentials.
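
A minimal sketch of that pattern follows; the secret name and the JSON keys inside the secret are assumptions based on the typical RDS secret layout:

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

def get_db_credentials(secret_id="prod/app/rds-credentials"):  # hypothetical secret name
    """Fetch the current credential version from Secrets Manager.

    Calling this at (re)connect time, instead of caching the password
    indefinitely, keeps the application working across rotations.
    """
    value = secrets.get_secret_value(SecretId=secret_id)
    secret = json.loads(value["SecretString"])
    return secret["username"], secret["password"]
```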

New cards
25

A Security Engineer launches two Amazon EC2 instances in the same Amazon VPC but in separate Availability Zones. Each instance has a public IP address and is able to connect to external hosts on the internet. The two instances are able to communicate with each other by using their private IP addresses, but they are not able to communicate with each other when using their public IP addresses.
Which action should the Security Engineer take to allow communication over the public IP addresses?

  • A. Associate the instances to the same security groups.

  • B. Add 0.0.0.0/0 to the egress rules of the instance security groups.

  • C. Add the instance IDs to the ingress rules of the instance security groups.

  • D. Add the public IP addresses to the ingress rules of the instance security groups.

The correct answer is:

D. Add the public IP addresses to the ingress rules of the instance security groups.

### Explanation:

To allow communication between the two EC2 instances over their public IP addresses, the Security Engineer must ensure that the security group ingress rules permit traffic from the public IP addresses of the instances. Here's why:

1. Security Group Rules:

- Security groups act as virtual firewalls for EC2 instances, controlling inbound (ingress) and outbound (egress) traffic. By default, security groups do not allow inbound traffic unless explicitly permitted.

2. Public IP Communication:

- When instances communicate using their public IP addresses, the traffic is routed through the internet. To allow this communication, the security group must explicitly permit inbound traffic from the public IP addresses of the other instance.

3. Ingress Rules:

- Adding the public IP addresses of the instances to the ingress rules of their respective security groups ensures that traffic from these IP addresses is allowed.

### Why not the other options?

- A. Associate the instances to the same security groups:

- While associating instances with the same security group can simplify rule management, it does not automatically allow communication over public IP addresses. The ingress rules must still explicitly permit traffic from the public IP addresses.

- B. Add 0.0.0.0/0 to the egress rules of the instance security groups:

- This option allows all outbound traffic from the instances, but it does not address the issue of inbound traffic over public IP addresses. The problem is with ingress rules, not egress rules.

- C. Add the instance IDs to the ingress rules of the instance security groups:

- Security group rules do not support instance IDs as a source or destination. Instead, IP addresses or security group IDs must be used.

By adding the public IP addresses to the ingress rules of the instance security groups, the Security Engineer can enable communication between the instances over their public IP addresses.
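
A minimal sketch of one such ingress rule is below; the security group ID, port, and peer public IP are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow the other instance to reach this one over HTTPS via its public IP.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{
            "CidrIp": "203.0.113.25/32",   # public IP of the other instance
            "Description": "Peer instance public IP",
        }],
    }],
)
```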

New cards
26

The Security Engineer is managing a web application that processes highly sensitive personal information. The application runs on Amazon EC2. The application has strict compliance requirements, which instruct that all incoming traffic to the application is protected from common web exploits and that all outgoing traffic from the EC2 instances is restricted to specific whitelisted URLs.
Which architecture should the Security Engineer use to meet these requirements?

  • A. Use AWS Shield to scan inbound traffic for web exploits. Use VPC Flow Logs and AWS Lambda to restrict egress traffic to specific whitelisted URLs.

  • B. Use AWS Shield to scan inbound traffic for web exploits. Use a third-party AWS Marketplace solution to restrict egress traffic to specific whitelisted URLs.

  • C. Use AWS WAF to scan inbound traffic for web exploits. Use VPC Flow Logs and AWS Lambda to restrict egress traffic to specific whitelisted URLs.

  • D. Use AWS WAF to scan inbound traffic for web exploits. Use a third-party AWS Marketplace solution to restrict egress traffic to specific whitelisted URLs.

The correct answer is:

D. Use AWS WAF to scan inbound traffic for web exploits. Use a third-party AWS Marketplace solution to restrict egress traffic to specific whitelisted URLs.

### Explanation:

To meet the compliance requirements for the web application, the Security Engineer should implement the following architecture:

1. AWS WAF (Web Application Firewall):

- AWS WAF is designed to protect web applications from common web exploits such as SQL injection, cross-site scripting (XSS), and other OWASP Top 10 vulnerabilities. By integrating AWS WAF with an Application Load Balancer (ALB) or Amazon CloudFront, the Security Engineer can scan and filter inbound traffic to the application, ensuring protection against web exploits.

2. Third-Party AWS Marketplace Solution for Egress Traffic:

- Restricting egress traffic to specific whitelisted URLs requires a solution that can enforce URL-based filtering. While AWS-native services like VPC Flow Logs and Lambda can monitor traffic, they are not designed for granular URL-based filtering. A third-party AWS Marketplace solution, such as a next-generation firewall (NGFW) or web proxy, can provide the necessary functionality to restrict egress traffic to specific whitelisted URLs.

### Why not the other options?

- A. Use AWS Shield to scan inbound traffic for web exploits. Use VPC Flow Logs and AWS Lambda to restrict egress traffic to specific whitelisted URLs:

- AWS Shield is designed for DDoS protection, not for scanning inbound traffic for web exploits. Additionally, VPC Flow Logs and Lambda are not suitable for enforcing URL-based egress traffic restrictions.

- B. Use AWS Shield to scan inbound traffic for web exploits. Use a third-party AWS Marketplace solution to restrict egress traffic to specific whitelisted URLs:

- AWS Shield is not designed to scan inbound traffic for web exploits. It focuses on DDoS protection, making this option unsuitable for meeting the requirement to protect against common web exploits.

- C. Use AWS WAF to scan inbound traffic for web exploits. Use VPC Flow Logs and AWS Lambda to restrict egress traffic to specific whitelisted URLs:

- While AWS WAF is suitable for scanning inbound traffic, VPC Flow Logs and Lambda are not designed for URL-based egress traffic filtering. They lack the granularity and functionality required to enforce whitelisted URLs.

By using AWS WAF for inbound traffic protection and a third-party AWS Marketplace solution for egress traffic filtering, the Security Engineer can effectively meet the compliance requirements for the web application.
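
For the inbound half, a sketch of attaching an existing web ACL to the Application Load Balancer with the WAFv2 API is shown below; it assumes a web ACL (for example, one using the AWS managed common rule set) already exists, and both ARNs and the region are placeholders:

```python
import boto3

# Regional WAFv2 client for resources such as Application Load Balancers.
wafv2 = boto3.client("wafv2", region_name="us-east-1")  # hypothetical region

# Associate the existing web ACL with the ALB in front of the application.
wafv2.associate_web_acl(
    WebACLArn="arn:aws:wafv2:us-east-1:111122223333:regional/webacl/app-acl/abc123",
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web-alb/def456",
)
```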

New cards
27
term image

The correct answer is:

D. Move all the files to an S3 bucket. Create a CloudFront distribution in front of the bucket and terminate the web server.

### Explanation:

To protect the static website against future DDoS attacks while minimizing ongoing operational overhead, the following approach is the most effective:

1. Move Files to Amazon S3:

- Amazon S3 is designed for storing and serving static content, such as HTML, CSS, and PDF files. By moving the files to an S3 bucket, you eliminate the need for a web server, reducing the attack surface and operational complexity.

2. Create a CloudFront Distribution:

- Amazon CloudFront is a content delivery network (CDN) that provides DDoS protection and improves performance by caching content at edge locations. By creating a CloudFront distribution in front of the S3 bucket, you can serve the static content securely and efficiently.

3. Terminate the Web Server:

- Since the website is static and all files are served from S3 and CloudFront, the web server is no longer needed. Terminating the web server reduces operational overhead and eliminates the risk of DDoS attacks targeting the server.

### Why not the other options?

- A. Move all the files to an Amazon S3 bucket. Have the web server serve the files from the S3 bucket:

- While moving files to S3 reduces storage complexity, the web server is still required to serve the files, leaving it vulnerable to DDoS attacks. This option does not fully address the DDoS protection requirement.

- B. Launch a second Amazon EC2 instance in a new subnet. Launch an Application Load Balancer in front of both instances:

- This approach increases operational overhead by adding more EC2 instances and an Application Load Balancer. It also does not provide robust DDoS protection, as the web servers remain exposed to potential attacks.

- C. Launch an Application Load Balancer in front of the EC2 instance. Create an Amazon CloudFront distribution in front of the Application Load Balancer:

- While this option adds some level of DDoS protection through CloudFront, it still relies on an EC2 instance and an Application Load Balancer, which increases operational overhead and does not fully eliminate the attack surface.

By moving the files to S3, creating a CloudFront distribution, and terminating the web server, the company can effectively protect the static website against DDoS attacks while minimizing operational overhead.

New cards
28

The Information Technology department has stopped using Classic Load Balancers and switched to Application Load Balancers to save costs. After the switch, some users on older devices are no longer able to connect to the website.
What is causing this situation?

  • A. Application Load Balancers do not support older web browsers.

  • B. The Perfect Forward Secrecy settings are not configured correctly.

  • C. The intermediate certificate is installed within the Application Load Balancer.

  • D. The cipher suites on the Application Load Balancers are blocking connections.

The correct answer is:

D. The cipher suites on the Application Load Balancers are blocking connections.

### Explanation:

The issue is likely caused by the cipher suites configured on the Application Load Balancer (ALB). Here's why:

1. Cipher Suites:

- Application Load Balancers use cipher suites to establish secure connections with clients. Older devices may not support the modern cipher suites configured on the ALB, leading to connection failures.

2. Compatibility with Older Devices:

- Older devices often use outdated encryption protocols and cipher suites. If the ALB is configured to only support modern, high-security cipher suites, these older devices will be unable to establish a secure connection.

3. Solution:

- To resolve this issue, the Security Engineer should review and update the cipher suites on the ALB to include support for older, but still secure, cipher suites that are compatible with older devices.

### Why not the other options?

- A. Application Load Balancers do not support older web browsers:

- Application Load Balancers do support older web browsers, but the issue is typically related to the cipher suites and encryption protocols supported by the ALB and the older browsers.

- B. The Perfect Forward Secrecy settings are not configured correctly:

- Perfect Forward Secrecy (PFS) is a feature that enhances security by ensuring that session keys are not compromised even if the server's private key is compromised. While PFS is important for security, it is not the cause of the connection issues with older devices.

- C. The intermediate certificate is installed within the Application Load Balancer:

- Intermediate certificates are used to establish a chain of trust for SSL/TLS certificates. If the intermediate certificate were missing or incorrectly installed, it would cause SSL/TLS handshake failures for all clients, not just older devices.

By adjusting the cipher suites on the Application Load Balancer to include support for older devices, the Information Technology department can ensure that all users, including those on older devices, can connect to the website.
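
In practice this means switching the HTTPS listener to a broader compatibility security policy. A minimal sketch follows; the listener ARN is a placeholder and the policy name is one example of a compatibility policy, so the least-permissive policy that still supports the required clients should be chosen:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Move the HTTPS listener to a broader compatibility security policy so
# that older clients can still negotiate TLS.
elbv2.modify_listener(
    ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/web-alb/abc/def",
    SslPolicy="ELBSecurityPolicy-TLS-1-0-2015-04",
)
```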

New cards
29

A security team is responsible for reviewing AWS API call activity in the cloud environment for security violations. These events must be recorded and retained in a centralized location for both current and future AWS regions.
What is the SIMPLEST way to meet these requirements?

  • A. Enable AWS Trusted Advisor security checks in the AWS Console, and report all security incidents for all regions.

  • B. Enable AWS CloudTrail by creating individual trails for each region, and specify a single Amazon S3 bucket to receive log files for later analysis.

  • C. Enable AWS CloudTrail by creating a new trail and applying the trail to all regions. Specify a single Amazon S3 bucket as the storage location.

  • D. Enable Amazon CloudWatch logging for all AWS services across all regions, and aggregate them to a single Amazon S3 bucket for later analysis.

The correct answer is:

C. Enable AWS CloudTrail by creating a new trail and applying the trail to all regions. Specify a single Amazon S3 bucket as the storage location.

### Explanation:

To meet the requirements of recording and retaining AWS API call activity in a centralized location for both current and future AWS regions, the simplest and most effective solution is to use AWS CloudTrail. Here's why:

1. AWS CloudTrail:

- CloudTrail is a service that records AWS API calls made in your account, including the identity of the API caller, the time of the call, the source IP address, and more. It provides a comprehensive audit trail of activity in your AWS environment.

2. Single Trail for All Regions:

- By creating a new trail and applying it to all regions, you ensure that API calls from all current and future regions are recorded. This simplifies management and ensures consistent logging across your AWS environment.

3. Centralized Storage in Amazon S3:

- Specifying a single Amazon S3 bucket as the storage location for CloudTrail logs centralizes the logs, making it easier to analyze and retain them for future review.

### Why not the other options?

- A. Enable AWS Trusted Advisor security checks in the AWS Console, and report all security incidents for all regions:

- AWS Trusted Advisor provides recommendations for optimizing your AWS environment but does not record API call activity. It is not suitable for meeting the requirement of logging and retaining API calls.

- B. Enable AWS CloudTrail by creating individual trails for each region, and specify a single Amazon S3 bucket to receive log files for later analysis:

- While this option uses CloudTrail, creating individual trails for each region is more complex and less efficient than creating a single trail that applies to all regions.

- D. Enable Amazon CloudWatch logging for all AWS services across all regions, and aggregate them to a single Amazon S3 bucket for later analysis:

- CloudWatch logs are used for monitoring and logging application and system logs, not for recording AWS API calls. This option does not meet the requirement of logging API activity.

By enabling AWS CloudTrail with a single trail applied to all regions and specifying a centralized S3 bucket for log storage, the security team can efficiently meet the requirements for recording and retaining AWS API call activity.
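
A minimal sketch of creating such a trail is below; the trail and bucket names are placeholders, and the bucket must already have a policy that allows CloudTrail to write to it:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# One trail that applies to all current and future regions, delivering
# log files to a single central bucket.
cloudtrail.create_trail(
    Name="org-wide-api-audit",
    S3BucketName="central-cloudtrail-logs",
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="org-wide-api-audit")
```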

New cards
30

A Security Administrator is performing a log analysis as a result of a suspected AWS account compromise. The Administrator wants to analyze suspicious AWS CloudTrail log files but is overwhelmed by the volume of audit logs being generated.
What approach enables the Administrator to search through the logs MOST efficiently?

  • A. Implement a write-only CloudTrail event filter to detect any modifications to the AWS account resources.

  • B. Configure Amazon Macie to classify and discover sensitive data in the Amazon S3 bucket that contains the CloudTrail audit logs.

  • C. Configure Amazon Athena to read from the CloudTrail S3 bucket and query the logs to examine account activities.

  • D. Enable Amazon S3 event notifications to trigger an AWS Lambda function that sends an email alarm when there are new CloudTrail API entries.

The correct answer is:

C. Configure Amazon Athena to read from the CloudTrail S3 bucket and query the logs to examine account activities.

### Explanation:

To efficiently search through a large volume of AWS CloudTrail log files, the Security Administrator should use Amazon Athena. Here's why:

1. Amazon Athena:

- Athena is an interactive query service that allows you to analyze data directly from Amazon S3 using standard SQL. By configuring Athena to read from the S3 bucket containing CloudTrail logs, the Administrator can efficiently query and analyze the logs.

2. Efficient Log Analysis:

- Athena enables the Administrator to run SQL queries on the CloudTrail logs, making it easier to search for specific events, filter results, and identify suspicious activities. This approach is much more efficient than manually sifting through log files.

3. Scalability:

- Athena is designed to handle large datasets, making it suitable for analyzing the high volume of CloudTrail logs generated in an AWS account.

### Why not the other options?

- A. Implement a "write-only" CloudTrail event filter to detect any modifications to the AWS account resources:

- While event filters can help narrow down the scope of logs, they do not provide a way to efficiently search through existing logs. This option is not suitable for analyzing a large volume of logs.

- B. Configure Amazon Macie to classify and discover sensitive data in the Amazon S3 bucket that contains the CloudTrail audit logs:

- Amazon Macie is designed to discover and classify sensitive data, not to analyze CloudTrail logs. This option is not relevant to the task of searching through CloudTrail logs for suspicious activities.

- D. Enable Amazon S3 event notifications to trigger an AWS Lambda function that sends an email alarm when there are new CloudTrail API entries:

- While this approach can alert the Administrator to new log entries, it does not help with searching through existing logs. It is not suitable for analyzing a large volume of historical logs.

By configuring Amazon Athena to query the CloudTrail logs stored in an S3 bucket, the Security Administrator can efficiently search through the logs and identify suspicious activities, making the log analysis process more manageable.
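
As a minimal sketch, assuming a table has already been defined over the CloudTrail bucket, the Administrator could run a query like the one below; the database, table, query filter, and results location are all placeholders:

```python
import boto3

athena = boto3.client("athena")

# Example: surface API calls that were denied, a common sign of probing.
query = """
SELECT eventtime, eventname, sourceipaddress, useridentity.arn
FROM cloudtrail_logs
WHERE errorcode = 'AccessDenied'
ORDER BY eventtime
"""

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "security_audit"},
    ResultConfiguration={"OutputLocation": "s3://athena-results-example/"},
)
print(execution["QueryExecutionId"])
```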

New cards
31

During a recent security audit, it was discovered that multiple teams in a large organization have placed restricted data in multiple Amazon S3 buckets, and the data may have been exposed. The auditor has requested that the organization identify all possible objects that contain personally identifiable information (PII) and then determine whether this information has been accessed.
What solution will allow the Security team to complete this request?

  • A. Using Amazon Athena, query the impacted S3 buckets by using the PII query identifier function. Then, create a new Amazon CloudWatch metric for Amazon S3 object access to alert when the objects are accessed.

  • B. Enable Amazon Macie on the S3 buckets that were impacted, then perform data classification. For identified objects that contain PII, use the research function for auditing AWS CloudTrail logs and S3 bucket logs for GET operations.

  • C. Enable Amazon GuardDuty and enable the PII rule set on the S3 buckets that were impacted, then perform data classification. Using the PII findings report from GuardDuty, query the S3 bucket logs by using Athena for GET operations.

  • D. Enable Amazon Inspector on the S3 buckets that were impacted, then perform data classification. For identified objects that contain PII, query the S3 bucket logs by using Athena for GET operations.

The correct answer is:

B. Enable Amazon Macie on the S3 buckets that were impacted, then perform data classification. For identified objects that contain PII, use the research function for auditing AWS CloudTrail logs and S3 bucket logs for GET operations.

### Explanation:

To identify all possible objects containing personally identifiable information (PII) and determine whether this information has been accessed, the Security team should use Amazon Macie. Here's why:

1. Amazon Macie:

- Amazon Macie is a data security and privacy service that uses machine learning and pattern matching to discover and classify sensitive data, such as PII, in Amazon S3 buckets. By enabling Macie on the impacted S3 buckets, the Security team can automatically identify objects that contain PII.

2. Data Classification:

- Macie performs data classification to identify and categorize sensitive data, making it easier to locate objects that contain PII.

3. Auditing Access:

- After identifying objects containing PII, the Security team can use Macie's research function to audit AWS CloudTrail logs and S3 bucket logs for GET operations. This helps determine whether the PII has been accessed.

### Why not the other options?

- A. Using Amazon Athena, query the impacted S3 buckets by using the PII query identifier function. Then, create a new Amazon CloudWatch metric for Amazon S3 object access to alert when the objects are accessed:

- Amazon Athena does not have a built-in "PII query identifier function." Additionally, creating CloudWatch metrics for S3 object access does not help identify PII or audit access logs. This option is not suitable for the task.

- C. Enable Amazon GuardDuty and enable the PII rule set on the S3 buckets that were impacted, then perform data classification. Using the PII findings report from GuardDuty, query the S3 bucket logs by using Athena for GET operations:

- Amazon GuardDuty is a threat detection service, not a data classification service. It does not have a "PII rule set" or the capability to classify data. This option is not relevant to identifying PII.

- D. Enable Amazon Inspector on the S3 buckets that were impacted, then perform data classification. For identified objects that contain PII, query the S3 bucket logs by using Athena for GET operations:

- Amazon Inspector is an automated security assessment service for EC2 instances and applications, not for S3 buckets. It does not perform data classification or identify PII. This option is not suitable for the task.

By enabling Amazon Macie, performing data classification, and auditing access logs, the Security team can effectively identify PII and determine whether it has been accessed, meeting the auditor's request.
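
A sketch of starting the classification step with the Macie API (as exposed by boto3's macie2 client) is shown below; the account ID and bucket names are placeholders, and Macie must already be enabled in the account:

```python
import boto3

macie = boto3.client("macie2")

# One-time sensitive data discovery job over the impacted buckets.
# Findings will identify which objects contain PII.
macie.create_classification_job(
    jobType="ONE_TIME",
    name="pii-audit-impacted-buckets",
    s3JobDefinition={
        "bucketDefinitions": [{
            "accountId": "111122223333",
            "buckets": ["team-a-data", "team-b-data"],
        }]
    },
)
```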

New cards
32

During a recent internal investigation, it was discovered that all API logging was disabled in a production account, and the root user had created new API keys that appear to have been used several times.
What could have been done to detect and automatically remediate the incident?

  • A. Using Amazon Inspector, review all of the API calls and configure the inspector agent to leverage SNS topics to notify security of the change to AWS CloudTrail, and revoke the new API keys for the root user.

  • B. Using AWS Config, create a config rule that detects when AWS CloudTrail is disabled, as well as any calls to the root user create-api-key. Then use a Lambda function to re-enable CloudTrail logs and deactivate the root API keys.

  • C. Using Amazon CloudWatch, create a CloudWatch event that detects AWS CloudTrail deactivation and a separate Amazon Trusted Advisor check to automatically detect the creation of root API keys. Then use a Lambda function to enable AWS CloudTrail and deactivate the root API keys.

  • D. Using Amazon CloudTrail, create a new CloudTrail event that detects the deactivation of CloudTrail logs, and a separate CloudTrail event that detects the creation of root API keys. Then use a Lambda function to enable CloudTrail and deactivate the root API keys.

The correct answer is:

B. Using AWS Config, create a config rule that detects when AWS CloudTrail is disabled, as well as any calls to the root user create-api-key. Then use a Lambda function to re-enable CloudTrail logs and deactivate the root API keys.

### Explanation:

To detect and automatically remediate the incident where API logging was disabled and new API keys were created by the root user, the following approach is the most effective:

1. AWS Config:

- AWS Config provides a detailed view of the configuration of AWS resources and tracks changes over time. By creating a config rule, you can detect when AWS CloudTrail is disabled and when API keys are created by the root user.

2. Config Rule for CloudTrail and API Keys:

- A config rule can be created to monitor the status of AWS CloudTrail and detect if it is disabled. Additionally, the rule can detect API calls related to the creation of API keys by the root user.

3. Lambda Function for Remediation:

- A Lambda function can be triggered by the config rule to automatically re-enable AWS CloudTrail logs and deactivate the root API keys. This ensures that the incident is remediated promptly.

### Why not the other options?

- A. Using Amazon Inspector, review all of the API calls and configure the inspector agent to leverage SNS topics to notify security of the change to AWS CloudTrail, and revoke the new API keys for the root user:

- Amazon Inspector is used for assessing the security and compliance of applications, not for monitoring AWS CloudTrail or API key creation. This option is not suitable for the task.

- C. Using Amazon CloudWatch, create a CloudWatch event that detects AWS CloudTrail deactivation and a separate Amazon Trusted Advisor check to automatically detect the creation of root API keys. Then use a Lambda function to enable AWS CloudTrail and deactivate the root API keys:

- While CloudWatch events can detect changes to AWS CloudTrail, Amazon Trusted Advisor does not have the capability to detect the creation of root API keys. This option is not feasible.

- D. Using Amazon CloudTrail, create a new CloudTrail event that detects the deactivation of CloudTrail logs, and a separate CloudTrail event that detects the creation of root API keys. Then use a Lambda function to enable CloudTrail and deactivate the root API keys:

- If CloudTrail is disabled, it cannot log events or trigger actions. Therefore, this approach would not work in the scenario where CloudTrail is already disabled.

By using AWS Config to detect changes and a Lambda function for remediation, the Security team can effectively detect and automatically remediate the incident, ensuring that API logging is re-enabled and unauthorized API keys are deactivated.
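
A minimal sketch of the remediation Lambda is below; the trail name is a placeholder, and note that root access keys can only be managed with root credentials, so that part of the remediation typically becomes an alert for manual action:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

TRAIL_NAME = "org-wide-api-audit"   # placeholder trail name

def lambda_handler(event, context):
    """Hypothetical handler triggered by the Config rule when logging stops."""
    status = cloudtrail.get_trail_status(Name=TRAIL_NAME)
    if not status["IsLogging"]:
        # Turn API logging back on immediately.
        cloudtrail.start_logging(Name=TRAIL_NAME)
    # Deactivating the root user's access keys cannot be done through the
    # IAM API with non-root credentials, so notify security for manual action.
    return {"restarted_logging": not status["IsLogging"]}
```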

New cards
33

An application has a requirement to be resilient across not only Availability Zones within the application's primary region but also be available within another region altogether.
Which of the following supports this requirement for AWS resources that are encrypted by AWS KMS?

  • A. Copy the application's AWS KMS CMK from the source region to the target region so that it can be used to decrypt the resource after it is copied to the target region.

  • B. Configure AWS KMS to automatically synchronize the CMK between regions so that it can be used to decrypt the resource in the target region.

  • C. Use AWS services that replicate data across regions, and re-wrap the data encryption key created in the source region by using the CMK in the target region so that the target region's CMK can decrypt the database encryption key.

  • D. Configure the target region's AWS service to communicate with the source region's AWS KMS so that it can decrypt the resource in the target region.

The correct answer is:

C. Use AWS services that replicate data across regions, and re-wrap the data encryption key created in the source region by using the CMK in the target region so that the target region's CMK can decrypt the database encryption key.

### Explanation:

To ensure that an application is resilient across Availability Zones and regions, and that encrypted resources can be accessed in another region, the following approach is the most effective:

1. Replicate Data Across Regions:

- Use AWS services that support cross-region replication (e.g., Amazon S3, Amazon RDS, or DynamoDB Global Tables) to replicate data from the source region to the target region.

2. Re-wrap the Data Encryption Key:

- When data is encrypted in the source region, it uses a data encryption key (DEK) that is encrypted by the Customer Master Key (CMK) in the source region. To decrypt the data in the target region, the DEK must be re-encrypted (re-wrapped) using the CMK in the target region. This ensures that the target region's CMK can decrypt the DEK, which in turn decrypts the data.

3. Cross-Region Key Management:

- This approach allows the application to use the CMK in the target region to decrypt the data, ensuring that the application remains functional and secure in both regions.

### Why not the other options?

- A. Copy the application's AWS KMS CMK from the source region to the target region so that it can be used to decrypt the resource after it is copied to the target region:

- AWS KMS CMKs are region-specific and cannot be copied directly between regions. This option is not feasible.

- B. Configure AWS KMS to automatically synchronize the CMK between regions so that it can be used to decrypt the resource in the target region:

- AWS KMS does not support automatic synchronization of CMKs between regions. This option is not valid.

- D. Configure the target region's AWS service to communicate with the source region's AWS KMS so that it can decrypt the resource in the target region:

- AWS services in one region cannot directly communicate with AWS KMS in another region to decrypt resources. This approach is not supported.

By using AWS services that replicate data across regions and re-wrapping the data encryption key with the target region's CMK, the application can achieve cross-region resilience while maintaining encryption security.
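
A minimal sketch of the re-wrap step follows, assuming envelope encryption: the encrypted data key (placeholder bytes here) is decrypted with the source-region CMK and immediately re-encrypted under the target-region CMK, whose ARN is also a placeholder:

```python
import boto3

kms_source = boto3.client("kms", region_name="us-east-1")   # primary region
kms_target = boto3.client("kms", region_name="us-west-2")   # secondary region

# Placeholder: ciphertext of the data key produced by the source-region CMK.
encrypted_data_key = b"..."

# Decrypt with the source-region CMK, then re-encrypt the plaintext key
# under the target-region CMK so the replicated data can be decrypted locally.
plaintext_key = kms_source.decrypt(CiphertextBlob=encrypted_data_key)["Plaintext"]
rewrapped_key = kms_target.encrypt(
    KeyId="arn:aws:kms:us-west-2:111122223333:key/target-cmk-id",   # placeholder
    Plaintext=plaintext_key,
)["CiphertextBlob"]
```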

New cards
34

An organization policy states that all encryption keys must be automatically rotated every 12 months.
Which AWS Key Management Service (KMS) key type should be used to meet this requirement?

  • A. AWS managed Customer Master Key (CMK)

  • B. Customer managed CMK with AWS generated key material

  • C. Customer managed CMK with imported key material

  • D. AWS managed data key

The correct answer is:

B. Customer managed CMK with AWS generated key material

### Explanation:

To meet the organization's policy of automatically rotating encryption keys every 12 months, the following key type should be used:

1. Customer Managed CMK with AWS Generated Key Material:

- Customer managed CMKs allow you to control the key management policies, including enabling automatic key rotation. When the key material is generated by AWS, you can enable automatic key rotation, which ensures that the CMK is rotated every 12 months as required.

2. Automatic Key Rotation:

- AWS KMS supports automatic key rotation for customer managed CMKs with AWS generated key material. This feature automatically generates new cryptographic material for the CMK every 12 months, ensuring compliance with the organization's policy.

### Why not the other options?

- A. AWS managed Customer Master Key (CMK):

- AWS managed CMKs are automatically created and managed by AWS for use with specific AWS services. While they are automatically rotated, the rotation schedule is controlled by AWS and may not align with the organization's 12-month requirement.

- C. Customer managed CMK with imported key material:

- Customer managed CMKs with imported key material do not support automatic key rotation. You must manually rotate these keys, which does not meet the requirement for automatic rotation every 12 months.

- D. AWS managed data key:

- AWS managed data keys are used for encrypting data but are not CMKs. They are ephemeral keys generated by AWS services and do not support key rotation policies.

By using a customer managed CMK with AWS generated key material and enabling automatic key rotation, the organization can ensure that all encryption keys are automatically rotated every 12 months, meeting the policy requirement.
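
A minimal sketch of enabling and verifying rotation on such a key is below; the key ID is a placeholder:

```python
import boto3

kms = boto3.client("kms")

# Customer managed CMK with AWS-generated key material (placeholder ID).
key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"

# Turn on automatic annual rotation and confirm it is in effect.
kms.enable_key_rotation(KeyId=key_id)
print(kms.get_key_rotation_status(KeyId=key_id)["KeyRotationEnabled"])
```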

New cards
35

A Security Engineer received an AWS Abuse Notice listing EC2 instance IDs that are reportedly abusing other hosts.
Which action should the Engineer take based on this situation? (Choose three.)

  • A. Use AWS Artifact to capture an exact image of the state of each instance.

  • B. Create EBS Snapshots of each of the volumes attached to the compromised instances.

  • C. Capture a memory dump.

  • D. Log in to each instance with administrative credentials to restart the instance.

  • E. Revoke all network ingress and egress except for to/from a forensics workstation.

  • F. Run Auto Recovery for Amazon EC2.

The correct answers are:

B. Create EBS Snapshots of each of the volumes attached to the compromised instances.

C. Capture a memory dump.

E. Revoke all network ingress and egress except for to/from a forensics workstation.

### Explanation:

When dealing with compromised EC2 instances, the Security Engineer should take the following actions to investigate and mitigate the issue:

1. B. Create EBS Snapshots of each of the volumes attached to the compromised instances:

- Creating EBS snapshots preserves the state of the volumes for forensic analysis. This ensures that evidence is not lost and can be examined to determine the cause and extent of the compromise.

2. C. Capture a memory dump:

- Capturing a memory dump of the compromised instances can provide valuable information about running processes, network connections, and other volatile data that may be lost if the instance is restarted or terminated.

3. E. Revoke all network ingress and egress except for to/from a forensics workstation:

- Restricting network access prevents further abuse and limits the attacker's ability to exfiltrate data or maintain control of the compromised instances. Allowing access only to a forensics workstation ensures that the investigation can proceed securely.

### Why not the other options?

- A. Use AWS Artifact to capture an exact image of the state of each instance:

- AWS Artifact is used for accessing AWS compliance reports and does not provide functionality for capturing instance images or snapshots. This option is not relevant to the situation.

- D. Log in to each instance with administrative credentials to restart the instance:

- Restarting the instance can result in the loss of volatile data, such as memory contents, which is crucial for forensic analysis. This action should be avoided until after evidence has been preserved.

- F. Run Auto Recovery for Amazon EC2:

- Auto Recovery is used to recover instances that become impaired due to underlying hardware issues. It is not relevant to addressing compromised instances or conducting a forensic investigation.

By creating EBS snapshots, capturing memory dumps, and revoking network access, the Security Engineer can effectively investigate and mitigate the compromise while preserving critical evidence.
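
A minimal sketch of the snapshot-and-isolate steps is below; the instance ID and isolation security group are placeholders, and the memory dump itself is taken with OS-level tooling on the instance rather than an AWS API:

```python
import boto3

ec2 = boto3.client("ec2")

INSTANCE_ID = "i-0abc1234def567890"        # reported instance (placeholder)
ISOLATION_SG = "sg-0isolationexample00"    # allows only the forensics workstation

instance = ec2.describe_instances(InstanceIds=[INSTANCE_ID])["Reservations"][0]["Instances"][0]

# Preserve evidence: snapshot every attached EBS volume.
for mapping in instance.get("BlockDeviceMappings", []):
    volume_id = mapping["Ebs"]["VolumeId"]
    ec2.create_snapshot(
        VolumeId=volume_id,
        Description=f"Forensic snapshot of {volume_id} from {INSTANCE_ID}",
    )

# Isolate the instance: replace its security groups with one that only
# permits traffic to/from the forensics workstation.
ec2.modify_instance_attribute(InstanceId=INSTANCE_ID, Groups=[ISOLATION_SG])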

New cards
36

A Security Administrator is configuring an Amazon S3 bucket and must meet the following security requirements:
✑ Encryption in transit
✑ Encryption at rest
✑ Logging of all object retrievals in AWS CloudTrail
Which of the following meet these security requirements? (Choose three.)

  • A. Specify “aws:SecureTransport” within a condition in the S3 bucket policy.

  • B. Enable a security group for the S3 bucket that allows port 443, but not port 80.

  • C. Set up default encryption for the S3 bucket.

  • D. Enable Amazon CloudWatch Logs for the AWS account.

  • E. Enable API logging of data events for all S3 objects.

  • F. Enable S3 object versioning for the S3 bucket.

The correct answers are:

A. Specify "aws:SecureTransport": "true" within a condition in the S3 bucket policy.

C. Set up default encryption for the S3 bucket.

E. Enable API logging of data events for all S3 objects.

### Explanation:

To meet the security requirements for the Amazon S3 bucket, the following configurations are necessary:

1. A. Specify "aws:SecureTransport": "true" within a condition in the S3 bucket policy:

- This condition ensures that all requests to the S3 bucket must use HTTPS (TLS), enforcing encryption in transit.

2. C. Set up default encryption for the S3 bucket:

- Enabling default encryption ensures that all objects stored in the S3 bucket are encrypted at rest. You can choose to use either SSE-S3 (Amazon S3-managed keys) or SSE-KMS (AWS KMS-managed keys) for encryption.

3. E. Enable API logging of data events for all S3 objects:

- Enabling data event logging in AWS CloudTrail logs all object-level API operations, including object retrievals. This meets the requirement for logging all object retrievals.

### Why not the other options?

- B. Enable a security group for the S3 bucket that allows port 443, but not port 80:

- Security groups are used for EC2 instances, not S3 buckets. S3 does not support security groups, so this option is invalid.

- D. Enable Amazon CloudWatch Logs for the AWS account:

- While CloudWatch Logs can be used to monitor and log various AWS services, it does not specifically log S3 object retrievals. This option does not meet the requirement for logging object retrievals in CloudTrail.

- F. Enable S3 object versioning for the S3 bucket:

- Object versioning helps protect against accidental deletion or overwrites but does not address encryption in transit, encryption at rest, or logging of object retrievals. This option is not relevant to the security requirements.

By specifying the "aws:SecureTransport" condition, setting up default encryption, and enabling API logging of data events, the Security Administrator can meet the security requirements for the S3 bucket.
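
A sketch of all three controls is shown below, assuming hypothetical bucket and trail names. It uses the common pattern of denying requests where aws:SecureTransport is false, which enforces the same requirement as the condition named in the answer:

```python
import json
import boto3

s3 = boto3.client("s3")
cloudtrail = boto3.client("cloudtrail")

BUCKET = "example-secure-bucket"   # placeholder names throughout
TRAIL = "org-wide-api-audit"

# 1. Encryption in transit: deny any request not made over TLS.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))

# 2. Encryption at rest: default encryption with S3-managed keys.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)

# 3. Object-level logging: record S3 data events on an existing trail.
cloudtrail.put_event_selectors(
    TrailName=TRAIL,
    EventSelectors=[{
        "ReadWriteType": "All",
        "IncludeManagementEvents": True,
        "DataResources": [{
            "Type": "AWS::S3::Object",
            "Values": [f"arn:aws:s3:::{BUCKET}/"],
        }],
    }],
)
```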

New cards
37
term image

The correct answer is:

C. The CMK is to be used for encrypting and decrypting only when the principal is ExampleUser and the request comes from WorkMail or SES in the specified region.

### Explanation:

The key policy attached to the Customer Master Key (CMK) specifies the following:

1. Principal:

- The policy allows the IAM user ExampleUser in the account 111122223333 to perform specific KMS actions (kms:Encrypt, kms:Decrypt, kms:GenerateDataKey*, kms:CreateGrant, kms:ListGrants).

2. Condition:

- The policy includes a condition that restricts these actions to requests coming from the Amazon WorkMail and Amazon SES services in the us-west-2 region. This is specified by the kms:ViaService condition key.

3. Scope of Use:

- The CMK can only be used for encrypting and decrypting when the request is made by ExampleUser and originates from the specified AWS services (WorkMail or SES) in the us-west-2 region.

### Why not the other options?

- A. The Amazon WorkMail and Amazon SES services have delegated KMS encrypt and decrypt permissions to the ExampleUser principal in the 111122223333 account:

- This is incorrect because the policy does not delegate permissions to WorkMail or SES. Instead, it allows ExampleUser to use the CMK when the request comes from these services.

- B. The ExampleUser principal can transparently encrypt and decrypt email exchanges specifically between ExampleUser and AWS:

- This is incorrect because the policy does not specify that the encryption and decryption are limited to email exchanges. It allows ExampleUser to use the CMK for any data, provided the request comes from WorkMail or SES.

- D. The key policy allows WorkMail or SES to encrypt or decrypt on behalf of the user for any CMK in the account:

- This is incorrect because the policy does not grant WorkMail or SES the ability to encrypt or decrypt on behalf of the user. It allows ExampleUser to perform these actions when the request comes from these services.

By specifying the kms:ViaService condition, the key policy ensures that the CMK is used only when the request comes from the specified AWS services in the specified region, providing a secure and controlled use of the CMK.
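
The statement below is a reconstruction of the kind of policy described above (the actual policy is shown in the question image); the Sid is hypothetical, while the account, user, actions, and kms:ViaService values follow the scenario:

```python
import json

via_service_statement = {
    "Sid": "AllowUseViaWorkMailAndSES",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:user/ExampleUser"},
    "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:GenerateDataKey*",
        "kms:CreateGrant",
        "kms:ListGrants",
    ],
    "Resource": "*",
    "Condition": {
        "StringEquals": {
            "kms:ViaService": [
                "workmail.us-west-2.amazonaws.com",
                "ses.us-west-2.amazonaws.com",
            ]
        }
    },
}
print(json.dumps(via_service_statement, indent=2))
```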

New cards
38
term image

The correct answer is:

D. Only principals from account 111122223333 that have an IAM policy applied that grants access to this key to use the key.

### Explanation:

The key policy statement allows the following:

1. Principal:

- The policy specifies the arn:aws:iam::111122223333:root as the principal. This means that the policy applies to the entire AWS account 111122223333.

2. Action:

- The policy allows all KMS actions (kms:*), meaning any action related to the key can be performed.

3. Resource:

- The policy specifies "Resource": "*", which in a key policy refers to the CMK that the policy is attached to.

4. IAM Policies:

- While the key policy allows all principals in the account 111122223333 to use the key, actual access is still controlled by IAM policies. Only principals (users, roles, or groups) that have an IAM policy granting them access to the key can use it.

### Why not the other options?

- A. All principals from all AWS accounts to use the key:

- This is incorrect because the policy only applies to the account 111122223333. It does not grant access to principals from other AWS accounts.

- B. Only the root user from account 111122223333 to use the key:

- This is incorrect because the principal arn:aws:iam::111122223333:root represents the entire account, not just the root user. The policy allows any principal in the account to use the key, provided they have the necessary IAM permissions.

- C. All principals from account 111122223333 to use the key but only on Amazon S3:

- This is incorrect because the policy does not restrict the use of the key to Amazon S3. The key can be used for any service that supports AWS KMS, not just S3.

By specifying the arn:aws:iam::111122223333:root principal, the key policy allows any principal in the account 111122223333 to use the key, provided they have an IAM policy that grants them access to the key.

New cards
39

A Software Engineer wrote a customized reporting service that will run on a fleet of Amazon EC2 instances. The company security policy states that application logs for the reporting service must be centrally collected.
What is the MOST efficient way to meet these requirements?

  • A. Write an AWS Lambda function that logs into the EC2 instance to pull the application logs from the EC2 instance and persists them into an Amazon S3 bucket.

  • B. Enable AWS CloudTrail logging for the AWS account, create a new Amazon S3 bucket, and then configure Amazon CloudWatch Logs to receive the application logs from CloudTrail.

  • C. Create a simple cron job on the EC2 instances that synchronizes the application logs to an Amazon S3 bucket by using rsync.

  • D. Install the Amazon CloudWatch Logs Agent on the EC2 instances, and configure it to send the application logs to CloudWatch Logs.

The correct answer is:

D. Install the Amazon CloudWatch Logs Agent on the EC2 instances, and configure it to send the application logs to CloudWatch Logs.

### Explanation:

To centrally collect application logs from a fleet of Amazon EC2 instances, the most efficient and scalable solution is to use Amazon CloudWatch Logs. Here's why:

1. Amazon CloudWatch Logs Agent:

- The CloudWatch Logs Agent can be installed on the EC2 instances to automatically collect and send application logs to CloudWatch Logs. This eliminates the need for manual log collection and ensures that logs are centrally stored.

2. Centralized Log Collection:

- CloudWatch Logs provides a centralized location for storing and analyzing logs. It supports real-time log streaming, making it easy to monitor and troubleshoot applications.

3. Scalability and Efficiency:

- Using CloudWatch Logs is scalable and efficient, as it automatically handles log collection from multiple instances and provides features like log retention, search, and alarms.

### Why not the other options?

- A. Write an AWS Lambda function that logs into the EC2 instance to pull the application logs from the EC2 instance and persists them into an Amazon S3 bucket:

- This approach is overly complex and inefficient. It requires managing SSH access to EC2 instances and writing custom code to pull logs, which is not scalable for a fleet of instances.

- B. Enable AWS CloudTrail logging for the AWS account, create a new Amazon S3 bucket, and then configure Amazon CloudWatch Logs to receive the application logs from CloudTrail:

- AWS CloudTrail logs API calls made in the AWS account, not application logs from EC2 instances. This option is not relevant to collecting application logs.

- C. Create a simple cron job on the EC2 instances that synchronizes the application logs to an Amazon S3 bucket by using rsync:

- While this approach can work, it is less efficient and scalable compared to using CloudWatch Logs. It requires manual setup and management of cron jobs and does not provide real-time log collection or centralized monitoring.

By installing the Amazon CloudWatch Logs Agent on the EC2 instances and configuring it to send logs to CloudWatch Logs, the Software Engineer can efficiently meet the company's security policy requirements for centrally collecting application logs.

New cards
40

A Security Engineer is trying to determine whether the encryption keys used in an AWS service are in compliance with certain regulatory standards.
Which of the following actions should the Engineer perform to get further guidance?

  • A. Read the AWS Customer Agreement.

  • B. Use AWS Artifact to access AWS compliance reports.

  • C. Post the question on the AWS Discussion Forums.

  • D. Run AWS Config and evaluate the configuration outputs.

The correct answer is:

B. Use AWS Artifact to access AWS compliance reports.

### Explanation:

To determine whether the encryption keys used in an AWS service comply with regulatory standards, the Security Engineer should use AWS Artifact. Here's why:

1. AWS Artifact:

- AWS Artifact is a service that provides on-demand access to AWS compliance reports, including those related to encryption and key management. These reports detail how AWS services comply with various regulatory standards, such as GDPR, HIPAA, and SOC.

2. Compliance Reports:

- By accessing the relevant compliance reports through AWS Artifact, the Security Engineer can verify whether the encryption keys and AWS services meet the required regulatory standards.

3. Guidance and Assurance:

- AWS Artifact provides authoritative and up-to-date information directly from AWS, ensuring that the Security Engineer has accurate and reliable guidance.

### Why not the other options?

- A. Read the AWS Customer Agreement:

- The AWS Customer Agreement outlines the terms and conditions of using AWS services but does not provide specific details about compliance with regulatory standards for encryption keys.

- C. Post the question on the AWS Discussion Forums:

- While the AWS Discussion Forums can be a useful resource for general questions, they do not provide official or authoritative compliance information. The Engineer should rely on official AWS documentation and services for compliance guidance.

- D. Run AWS Config and evaluate the configuration outputs:

- AWS Config is used to assess, audit, and evaluate the configuration of AWS resources. While it can help ensure that resources are configured correctly, it does not provide specific compliance reports or guidance on regulatory standards for encryption keys.

By using AWS Artifact to access compliance reports, the Security Engineer can obtain the necessary guidance to determine whether the encryption keys used in AWS services comply with regulatory standards.

New cards
41

The Development team receives an error message each time the team members attempt to encrypt or decrypt a Secure String parameter from the SSM Parameter Store by using an AWS KMS customer managed key (CMK).
Which CMK-related issues could be responsible? (Choose two.)

  • A. The CMK specified in the application does not exist.

  • B. The CMK specified in the application is currently in use.

  • C. The CMK specified in the application is using the CMK KeyID instead of CMK Amazon Resource Name.

  • D. The CMK specified in the application is not enabled.

  • E. The CMK specified in the application is using an alias.

The correct answers are:

A. The CMK specified in the application does not exist.

D. The CMK specified in the application is not enabled.

### Explanation:

When the Development team encounters errors while trying to encrypt or decrypt a Secure String parameter using an AWS KMS customer managed key (CMK), the following issues could be responsible:

1. A. The CMK specified in the application does not exist:

- If the CMK specified in the application does not exist, the encryption or decryption operation will fail. The application must reference a valid CMK.

2. D. The CMK specified in the application is not enabled:

- If the CMK is disabled, it cannot be used for encryption or decryption operations. The CMK must be enabled to perform these actions.

### Why not the other options?

- B. The CMK specified in the application is currently in use:

- A CMK being in use does not prevent it from being used for encryption or decryption. This is not a valid reason for the error.

- C. The CMK specified in the application is using the CMK KeyID instead of CMK Amazon Resource Name:

- AWS KMS accepts both the KeyID and the Amazon Resource Name (ARN) to reference a CMK. Using the KeyID instead of the ARN should not cause an error.

- E. The CMK specified in the application is using an alias:

- Using an alias to reference a CMK is a valid and common practice. This should not cause an error unless the alias itself is incorrect or points to a non-existent or disabled CMK.

By ensuring that the CMK exists and is enabled, the Development team can resolve the errors and successfully encrypt or decrypt the Secure String parameter.
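A quick way to rule out both failure modes is to inspect the key's metadata. The sketch below is a minimal boto3 check; the key reference is a hypothetical alias and would be replaced with whatever the application passes to Parameter Store.

```python
import boto3
from botocore.exceptions import ClientError

# Hypothetical key reference; an alias, key ID, or full ARN all work here.
KEY_REF = "alias/parameter-store-key"

kms = boto3.client("kms")
try:
    metadata = kms.describe_key(KeyId=KEY_REF)["KeyMetadata"]
except ClientError as err:
    if err.response["Error"]["Code"] == "NotFoundException":
        print("The CMK does not exist (option A).")
    else:
        raise
else:
    if not metadata["Enabled"]:
        print(f"CMK {metadata['KeyId']} is not enabled (option D); state: {metadata['KeyState']}")
    else:
        print("The CMK exists and is enabled; look elsewhere for the error.")
```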

New cards
42

An application has been written that publishes custom metrics to Amazon CloudWatch. Recently, IAM changes have been made on the account and the metrics are no longer being reported.
Which of the following is the LEAST permissive solution that will allow the metrics to be delivered?

  • A. Add a statement to the IAM policy used by the application to allow logs:putLogEvents and logs:createLogStream

  • B. Modify the IAM role used by the application by adding the CloudWatchFullAccess managed policy.

  • C. Add a statement to the IAM policy used by the application to allow cloudwatch:putMetricData.

  • D. Add a trust relationship to the IAM role used by the application for cloudwatch.amazonaws.com.

The correct answer is:

C. Add a statement to the IAM policy used by the application to allow cloudwatch:putMetricData.

### Explanation:

To allow the application to publish custom metrics to Amazon CloudWatch, the least permissive solution is to grant the specific permission required for this action:

1. cloudwatch:putMetricData:

- This permission allows the application to publish custom metrics to CloudWatch. It is the minimum required permission for the application to deliver metrics.

2. Least Permissive Principle:

- The least permissive principle dictates that only the necessary permissions should be granted to perform a specific task. Adding only cloudwatch:putMetricData to the IAM policy ensures that the application has the minimum required permissions without granting unnecessary access.

### Why not the other options?

- A. Add a statement to the IAM policy used by the application to allow logs:putLogEvents and logs:createLogStream:

- These permissions are related to Amazon CloudWatch Logs, not CloudWatch Metrics. They are not relevant to publishing custom metrics.

- B. Modify the IAM role used by the application by adding the CloudWatchFullAccess managed policy:

- The CloudWatchFullAccess managed policy grants extensive permissions for CloudWatch, including actions that are not necessary for publishing custom metrics. This approach violates the least permissive principle.

- D. Add a trust relationship to the IAM role used by the application for cloudwatch.amazonaws.com:

- A trust relationship defines which entities (e.g., AWS services) can assume the IAM role. It does not grant permissions to perform actions like publishing metrics. This option is not relevant to the issue.

By adding the cloudwatch:putMetricData permission to the IAM policy used by the application, the Security Engineer can ensure that the application can deliver metrics to CloudWatch while adhering to the least permissive principle.
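A minimal inline policy granting only this permission might look like the following sketch; the role name and policy name are hypothetical, and the namespace condition mentioned in the comment is optional.

```python
import json
import boto3

# Hypothetical role and policy names; adjust to the application's actual role.
ROLE_NAME = "reporting-app-role"

minimal_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "cloudwatch:PutMetricData",
            # PutMetricData does not support resource-level restrictions,
            # so the resource is "*"; scope can be narrowed further with a
            # cloudwatch:namespace condition if desired.
            "Resource": "*",
        }
    ],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName="AllowPutMetricData",
    PolicyDocument=json.dumps(minimal_policy),
)
```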

New cards
43

A Developer's laptop was stolen. The laptop was not encrypted, and it contained the SSH key used to access multiple Amazon EC2 instances. A Security Engineer has verified that the key has not been used, and has blocked port 22 to all EC2 instances while developing a response plan.
How can the Security Engineer further protect currently running instances?

  • A. Delete the key-pair key from the EC2 console, then create a new key pair.

  • B. Use the modify-instance-attribute API to change the key on any EC2 instance that is using the key.

  • C. Use the EC2 RunCommand to modify the authorized_keys file on any EC2 instance that is using the key.

  • D. Update the key pair in any AMI used to launch the EC2 instances, then restart the EC2 instances.

The correct answer is:

C. Use the EC2 RunCommand to modify the authorized_keys file on any EC2 instance that is using the key.

### Explanation:

To protect currently running EC2 instances after a stolen laptop containing the SSH key, the Security Engineer should take the following steps:

1. EC2 RunCommand:

- AWS Systems Manager RunCommand allows you to remotely execute commands on EC2 instances. The Engineer can use RunCommand to modify the authorized_keys file on each instance, removing the compromised SSH key and adding a new one.

2. Modify authorized_keys File:

- The authorized_keys file on each EC2 instance contains the public keys that are allowed to connect via SSH. By removing the compromised key and adding a new one, the Engineer ensures that the stolen key can no longer be used to access the instances.

3. Immediate Action:

- Using RunCommand allows the Engineer to quickly and efficiently update the authorized_keys file on multiple instances without needing to manually log in to each one.

### Why not the other options?

- A. Delete the key-pair key from the EC2 console, then create a new key pair:

- Deleting the key pair from the EC2 console does not remove the public key from the authorized_keys file on the instances. The stolen key can still be used to access the instances until the authorized_keys file is updated.

- B. Use the modify-instance-attribute API to change the key on any EC2 instance that is using the key:

- The modify-instance-attribute API is used to change instance attributes like instance type or security groups, not SSH keys. It does not update the authorized_keys file on the instances.

- D. Update the key pair in any AMI used to launch the EC2 instances, then restart the EC2 instances:

- Updating the key pair in an AMI affects new instances launched from that AMI, not currently running instances. Restarting instances without updating the authorized_keys file does not prevent the stolen key from being used.

By using EC2 RunCommand to modify the authorized_keys file, the Security Engineer can effectively protect currently running instances from unauthorized access using the stolen SSH key.
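As a sketch of what that remediation could look like with boto3 (the instance IDs and the key comment used to match the compromised entry are hypothetical, and the instances must already be managed by Systems Manager):

```python
import boto3

# Hypothetical values: the compromised key's comment and the target instances.
COMPROMISED_KEY_COMMENT = "stolen-laptop-key"
INSTANCE_IDS = ["i-0123456789abcdef0", "i-0fedcba9876543210"]

ssm = boto3.client("ssm")
response = ssm.send_command(
    InstanceIds=INSTANCE_IDS,
    DocumentName="AWS-RunShellScript",
    Comment="Remove compromised SSH public key",
    Parameters={
        "commands": [
            # Strip the compromised public key from the default user's authorized_keys.
            f"sed -i '/{COMPROMISED_KEY_COMMENT}/d' /home/ec2-user/.ssh/authorized_keys",
        ]
    },
)
print(response["Command"]["CommandId"])
```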

New cards
44

An organization has tens of applications deployed on thousands of Amazon EC2 instances. During testing, the Application team needs information to let them know whether the network access control lists (network ACLs) and security groups are working as expected.
How can the Application team's requirements be met?

  • A. Turn on VPC Flow Logs, send the logs to Amazon S3, and use Amazon Athena to query the logs.

  • B. Install an Amazon Inspector agent on each EC2 instance, send the logs to Amazon S3, and use Amazon EMR to query the logs.

  • C. Create an AWS Config rule for each network ACL and security group configuration, send the logs to Amazon S3, and use Amazon Athena to query the logs.

  • D. Turn on AWS CloudTrail, send the trails to Amazon S3, and use AWS Lambda to query the trails.

The correct answer is:

A. Turn on VPC Flow Logs, send the logs to Amazon S3, and use Amazon Athena to query the logs.

### Explanation:

To determine whether the network access control lists (network ACLs) and security groups are working as expected, the Application team should use VPC Flow Logs. Here's why:

1. VPC Flow Logs:

- VPC Flow Logs capture information about the IP traffic going to and from network interfaces in a VPC. This includes details about allowed and denied traffic based on network ACLs and security groups.

2. Amazon S3:

- Sending VPC Flow Logs to Amazon S3 provides a centralized and durable storage solution for the logs. This allows for easy access and analysis.

3. Amazon Athena:

- Amazon Athena is an interactive query service that allows you to analyze data directly from S3 using standard SQL. By querying the VPC Flow Logs, the Application team can determine whether the network ACLs and security groups are functioning as intended.

### Why not the other options?

- B. Install an Amazon Inspector agent on each EC2 instance, send the logs to Amazon S3, and use Amazon EMR to query the logs:

- Amazon Inspector is used for assessing the security and compliance of applications, not for monitoring network traffic. This option is not relevant to the task of verifying network ACLs and security groups.

- C. Create an AWS Config rule for each network ACL and security group configuration, send the logs to Amazon S3, and use Amazon Athena to query the logs:

- AWS Config tracks configuration changes and compliance but does not provide detailed information about network traffic. This option is not suitable for verifying the functionality of network ACLs and security groups.

- D. Turn on AWS CloudTrail, send the trails to Amazon S3, and use AWS Lambda to query the trails:

- AWS CloudTrail logs API calls made in the AWS account, not network traffic. This option is not relevant to the task of verifying network ACLs and security groups.

By turning on VPC Flow Logs, sending the logs to Amazon S3, and using Amazon Athena to query the logs, the Application team can effectively verify whether the network ACLs and security groups are working as expected.
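Assuming an Athena table has already been defined over the flow log data in S3 (the database, table, and results bucket names below are hypothetical), a query for rejected traffic might look like this sketch:

```python
import boto3

# Hypothetical names: the Athena database/table over the flow logs and the results bucket.
DATABASE = "vpc_logs"
QUERY = """
SELECT srcaddr, dstaddr, dstport, protocol, action, COUNT(*) AS flows
FROM vpc_flow_logs
WHERE action = 'REJECT'
GROUP BY srcaddr, dstaddr, dstport, protocol, action
ORDER BY flows DESC
LIMIT 50;
"""

athena = boto3.client("athena")
execution = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": DATABASE},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/flow-logs/"},
)
print(execution["QueryExecutionId"])
```

Comparing the REJECT entries against the intended security group and network ACL rules shows whether traffic is being blocked (or allowed) as expected.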

New cards
45

An application outputs logs to a text file. The logs must be continuously monitored for security incidents.
Which design will meet the requirements with MINIMUM effort?

  • A. Create a scheduled process to copy the component's logs into Amazon S3. Use S3 events to trigger a Lambda function that updates Amazon CloudWatch metrics with the log data. Set up CloudWatch alerts based on the metrics.

  • B. Install and configure the Amazon CloudWatch Logs agent on the application's EC2 instance. Create a CloudWatch metric filter to monitor the application logs. Set up CloudWatch alerts based on the metrics.

  • C. Create a scheduled process to copy the application log files to AWS CloudTrail. Use S3 events to trigger Lambda functions that update CloudWatch metrics with the log data. Set up CloudWatch alerts based on the metrics.

  • D. Create a file watcher that copies data to Amazon Kinesis when the application writes to the log file. Have Kinesis trigger a Lambda function to update Amazon CloudWatch metrics with the log data. Set up CloudWatch alerts based on the metrics.

The correct answer is:

B. Install and configure the Amazon CloudWatch Logs agent on the application's EC2 instance. Create a CloudWatch metric filter to monitor the application logs. Set up CloudWatch alerts based on the metrics.

### Explanation:

To continuously monitor application logs for security incidents with minimal effort, the following approach is the most efficient:

1. Amazon CloudWatch Logs Agent:

- Installing and configuring the CloudWatch Logs agent on the EC2 instance allows the application logs to be automatically sent to CloudWatch Logs. This eliminates the need for manual log collection and ensures real-time monitoring.

2. CloudWatch Metric Filter:

- A metric filter can be created in CloudWatch Logs to monitor the application logs for specific patterns or keywords that indicate security incidents. This allows for automated detection and alerting.

3. CloudWatch Alerts:

- Setting up CloudWatch alerts based on the metrics generated by the metric filter ensures that the team is notified immediately when a security incident is detected.

### Why not the other options?

- A. Create a scheduled process to copy the component's logs into Amazon S3. Use S3 events to trigger a Lambda function that updates Amazon CloudWatch metrics with the log data. Set up CloudWatch alerts based on the metrics:

- This approach involves multiple steps and additional services (S3, Lambda), making it more complex and less efficient than using the CloudWatch Logs agent directly.

- C. Create a scheduled process to copy the application log files to AWS CloudTrail. Use S3 events to trigger Lambda functions that update CloudWatch metrics with the log data. Set up CloudWatch alerts based on the metrics:

- AWS CloudTrail is used for logging API calls, not application logs. This approach is not relevant to the task of monitoring application logs for security incidents.

- D. Create a file watcher that copies data to Amazon Kinesis when the application writes to the log file. Have Kinesis trigger a Lambda function to update Amazon CloudWatch metrics with the log data. Set up CloudWatch alerts based on the metrics:

- This approach is overly complex and involves multiple services (Kinesis, Lambda) and custom code, making it less efficient than using the CloudWatch Logs agent.

By installing the CloudWatch Logs agent, creating a metric filter, and setting up CloudWatch alerts, the team can efficiently and continuously monitor the application logs for security incidents with minimal effort.
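As a sketch of the metric filter step (the log group name, keyword pattern, and metric namespace are hypothetical and depend on what the application actually writes):

```python
import boto3

# Hypothetical names for the application's log group and the resulting metric.
LOG_GROUP = "/app/reporting-service"

logs = boto3.client("logs")
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="security-incident-filter",
    filterPattern='"SECURITY_INCIDENT"',  # match a keyword the application emits
    metricTransformations=[
        {
            "metricName": "SecurityIncidentCount",
            "metricNamespace": "App/Security",
            "metricValue": "1",
        }
    ],
)
```

A CloudWatch alarm on App/Security:SecurityIncidentCount (with an SNS action) then completes the alerting path.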

New cards
46

The Security Engineer for a mobile game has to implement a method to authenticate users so that they can save their progress. Because most of the users are part of the same OpenID-Connect compatible social media website, the Security Engineer would like to use that as the identity provider.
Which solution is the SIMPLEST way to allow the authentication of users using their social media identities?

  • A. Amazon Cognito

  • B. AssumeRoleWithWebIdentity API

  • C. Amazon Cloud Directory

  • D. Active Directory (AD) Connector

The correct answer is:

A. Amazon Cognito

### Explanation:

To authenticate users using their social media identities in a simple and efficient manner, the Security Engineer should use Amazon Cognito. Here's why:

1. Amazon Cognito:

- Amazon Cognito is a service that provides user authentication and management. It supports OpenID Connect (OIDC) and allows users to sign in using their social media identities (e.g., Facebook, Google, etc.). This makes it easy to integrate with the social media website that is already being used by most of the users.

2. User Pools:

- Amazon Cognito User Pools allow you to create a user directory that supports social identity providers. Users can sign in using their social media credentials, and Cognito handles the authentication process.

3. Simplicity:

- Amazon Cognito provides a straightforward way to integrate social media authentication without the need for complex custom code or infrastructure. It also handles user management, including sign-up, sign-in, and access control.

### Why not the other options?

- B. AssumeRoleWithWebIdentity API:

- This API is used to obtain temporary security credentials for users who have been authenticated by a web identity provider (e.g., Facebook, Google). While it can be used for authentication, it requires more manual implementation and management compared to Amazon Cognito.

- C. Amazon Cloud Directory:

- Amazon Cloud Directory is a managed directory service that is not specifically designed for user authentication. It is more suited for hierarchical data management and does not provide built-in support for social media authentication.

- D. Active Directory (AD) Connector:

- AD Connector is used to connect on-premises Active Directory to AWS services. It is not designed for social media authentication and does not support OpenID Connect.

By using Amazon Cognito, the Security Engineer can easily and efficiently implement user authentication using social media identities, providing a seamless experience for the mobile game users.

New cards
47

A Software Engineer is trying to figure out why network connectivity to an Amazon EC2 instance does not appear to be working correctly. Its security group allows inbound HTTP traffic from 0.0.0.0/0, and the outbound rules have not been modified from the default. A custom network ACL associated with its subnet allows inbound HTTP traffic from 0.0.0.0/0 and has no outbound rules.
What would resolve the connectivity issue?

  • A. The outbound rules on the security group do not allow the response to be sent to the client on the ephemeral port range.

  • B. The outbound rules on the security group do not allow the response to be sent to the client on the HTTP port.

  • C. An outbound rule must be added to the network ACL to allow the response to be sent to the client on the ephemeral port range.

  • D. An outbound rule must be added to the network ACL to allow the response to be sent to the client on the HTTP port.

The correct answer is:

C. An outbound rule must be added to the network ACL to allow the response to be sent to the client on the ephemeral port range.

### Explanation:

To resolve the network connectivity issue, the following steps should be taken:

1. Network ACL Outbound Rules:

- Network ACLs (NACLs) are stateless, meaning that they do not automatically allow return traffic. Both inbound and outbound rules must be explicitly defined. In this case, the custom network ACL allows inbound HTTP traffic but has no outbound rules, which means that the response traffic from the EC2 instance to the client is being blocked.

2. Ephemeral Port Range:

- When a client initiates a connection to the EC2 instance on port 80 (HTTP), the response from the EC2 instance is sent back to the client on an ephemeral port (typically in the range 1024-65535). Therefore, an outbound rule must be added to the network ACL to allow traffic to the client on the ephemeral port range.

### Why not the other options?

- A. The outbound rules on the security group do not allow the response to be sent to the client on the ephemeral port range:

- Security groups are stateful, meaning that they automatically allow return traffic. If the inbound rule allows traffic, the corresponding outbound traffic is automatically allowed, regardless of the outbound rules. This option is incorrect.

- B. The outbound rules on the security group do not allow the response to be sent to the client on the HTTP port:

- Security groups are stateful, so they automatically allow return traffic. This option is incorrect.

- D. An outbound rule must be added to the network ACL to allow the response to be sent to the client on the HTTP port:

- The response from the EC2 instance to the client is sent on an ephemeral port, not the HTTP port (port 80). This option is incorrect.

By adding an outbound rule to the network ACL to allow traffic to the client on the ephemeral port range, the connectivity issue can be resolved, ensuring that the response from the EC2 instance reaches the client.
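A sketch of the missing NACL entry, created with boto3 (the network ACL ID and rule number are placeholders):

```python
import boto3

# Hypothetical network ACL ID; rule number 100 is just an example slot.
NACL_ID = "acl-0123456789abcdef0"

ec2 = boto3.client("ec2")
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID,
    RuleNumber=100,
    Protocol="6",            # TCP
    RuleAction="allow",
    Egress=True,             # outbound rule
    CidrBlock="0.0.0.0/0",   # responses may go back to any client
    PortRange={"From": 1024, "To": 65535},  # ephemeral port range
)
```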

New cards
48

A Security Engineer has been asked to create an automated process to disable IAM user access keys that are more than three months old.
Which of the following options should the Security Engineer use?

  • A. In the AWS Console, choose the IAM service and select Users. Review the Access Key Age column.

  • B. Define an IAM policy that denies access if the key age is more than three months and apply to all users.

  • C. Write a script that uses the GenerateCredentialReport, GetCredentialReport, and UpdateAccessKey APIs.

  • D. Create an Amazon CloudWatch alarm to detect aged access keys and use an AWS Lambda function to disable the keys older than 90 days.

The correct answer is:

C. Write a script that uses the GenerateCredentialReport, GetCredentialReport, and UpdateAccessKey APIs.

### Explanation:

To create an automated process to disable IAM user access keys that are more than three months old, the Security Engineer should use the following approach:

1. GenerateCredentialReport API:

- This API generates a credential report that contains information about all IAM users in the account, including the age of their access keys.

2. GetCredentialReport API:

- This API retrieves the generated credential report, which can then be parsed to identify access keys that are more than three months old.

3. UpdateAccessKey API:

- This API can be used to disable the identified access keys that are older than three months.

4. Automation:

- By writing a script that uses these APIs, the Security Engineer can automate the process of identifying and disabling aged access keys. This script can be scheduled to run periodically (e.g., using a cron job or AWS Lambda) to ensure continuous compliance.

### Why not the other options?

- A. In the AWS Console, choose the IAM service and select "Users". Review the "Access Key Age" column:

- This approach is manual and not automated. It requires the Security Engineer to manually review and disable access keys, which is not scalable or efficient.

- B. Define an IAM policy that denies access if the key age is more than three months and apply to all users:

- IAM policies do not support conditions based on the age of access keys. This option is not feasible.

- D. Create an Amazon CloudWatch alarm to detect aged access keys and use an AWS Lambda function to disable the keys older than 90 days:

- CloudWatch alarms are used for monitoring metrics and triggering actions based on thresholds. They do not have built-in capabilities to detect the age of IAM access keys. This option is not suitable for the task.

By writing a script that uses the GenerateCredentialReport, GetCredentialReport, and UpdateAccessKey APIs, the Security Engineer can create an automated process to disable IAM user access keys that are more than three months old, ensuring continuous compliance with the security policy.
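A minimal sketch of such a script is shown below. It uses the three APIs named in the answer plus list_access_keys, because the credential report identifies users and key ages but does not contain the access key IDs needed to disable a key; the 90-day threshold and user handling are assumptions for illustration.

```python
import csv
import io
import time
from datetime import datetime, timedelta, timezone

import boto3

MAX_AGE = timedelta(days=90)
iam = boto3.client("iam")

# 1. Generate the credential report and wait until it is ready.
while iam.generate_credential_report()["State"] != "COMPLETE":
    time.sleep(2)

# 2. Retrieve and parse the report (the CSV content is returned as bytes).
report = iam.get_credential_report()["Content"].decode("utf-8")
rows = csv.DictReader(io.StringIO(report))

# 3. Disable any active access key older than 90 days.
for row in rows:
    user = row["user"]
    if user == "<root_account>":
        continue
    # The report does not include key IDs, so list the user's keys to act on them.
    for key in iam.list_access_keys(UserName=user)["AccessKeyMetadata"]:
        age = datetime.now(timezone.utc) - key["CreateDate"]
        if key["Status"] == "Active" and age > MAX_AGE:
            iam.update_access_key(UserName=user, AccessKeyId=key["AccessKeyId"], Status="Inactive")
            print(f"Disabled {key['AccessKeyId']} for {user}")
```

Scheduling this script (for example from a cron job or a periodic Lambda invocation) makes the process fully automated.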

New cards
49

The InfoSec team has mandated that in the future only approved Amazon Machine Images (AMIs) can be used.
How can the InfoSec team ensure compliance with this mandate?

  • A. Terminate all Amazon EC2 instances and relaunch them with approved AMIs.

  • B. Patch all running instances by using AWS Systems Manager.

  • C. Deploy AWS Config rules and check all running instances for compliance.

  • D. Define a metric filter in Amazon CloudWatch Logs to verify compliance.

The correct answer is:

C. Deploy AWS Config rules and check all running instances for compliance.

### Explanation:

To ensure compliance with the mandate that only approved Amazon Machine Images (AMIs) can be used, the InfoSec team should use AWS Config. Here's why:

1. AWS Config Rules:

- AWS Config allows you to define rules that check the configuration of your AWS resources. You can create a custom rule that checks whether EC2 instances are using approved AMIs.

2. Compliance Monitoring:

- AWS Config continuously monitors the configuration of your resources and evaluates them against the defined rules. If an EC2 instance is launched with an unapproved AMI, AWS Config will flag it as non-compliant.

3. Automated Remediation:

- AWS Config can be integrated with AWS Lambda to automatically remediate non-compliant resources. For example, you can create a Lambda function that terminates instances using unapproved AMIs.

### Why not the other options?

- A. Terminate all Amazon EC2 instances and relaunch them with approved AMIs:

- This approach is disruptive and does not provide a mechanism to prevent the use of unapproved AMIs in the future. It is not a sustainable solution for ensuring ongoing compliance.

- B. Patch all running instances by using AWS Systems Manager:

- Patching instances ensures that they are up-to-date with security patches but does not address the requirement to use only approved AMIs. This option is not relevant to the mandate.

- D. Define a metric filter in Amazon CloudWatch Logs to verify compliance:

- CloudWatch Logs and metric filters are used for monitoring and analyzing log data, not for enforcing compliance with AMI usage. This option is not suitable for the task.

By deploying AWS Config rules, the InfoSec team can ensure that only approved AMIs are used, providing continuous compliance monitoring and automated remediation if necessary.
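One way to implement this is with the AWS managed Config rule for approved AMIs. The sketch below assumes the managed rule identifier APPROVED_AMIS_BY_ID and its amiIds parameter as listed in the AWS Config managed rules documentation; the AMI IDs are placeholders.

```python
import json
import boto3

# Hypothetical list of approved AMI IDs.
APPROVED_AMIS = ["ami-0abc1234def567890", "ami-0fedcba9876543210"]

config = boto3.client("config")
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "approved-amis-by-id",
        "Description": "Checks that running EC2 instances use approved AMIs.",
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Instance"]},
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "APPROVED_AMIS_BY_ID",  # AWS managed rule
        },
        "InputParameters": json.dumps({"amiIds": ",".join(APPROVED_AMIS)}),
    }
)
```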

New cards
50

A pharmaceutical company has digitized versions of historical prescriptions stored on premises. The company would like to move these prescriptions to AWS and perform analytics on the data in them. Any operation with this data requires that the data be encrypted in transit and at rest.
Which application flow would meet the data protection requirements on AWS?

  • A. Digitized files -> Amazon Kinesis Data Analytics

  • B. Digitized files -> Amazon Kinesis Data Firehose -> Amazon S3 -> Amazon Athena

  • C. Digitized files -> Amazon Kinesis Data Streams -> Kinesis Client Library consumer -> Amazon S3 -> Athena

  • D. Digitized files -> Amazon Kinesis Data Firehose -> Amazon Elasticsearch

The correct answer is:

B. Digitized files -> Amazon Kinesis Data Firehose -> Amazon S3 -> Amazon Athena

### Explanation:

To meet the data protection requirements of encrypting data in transit and at rest while performing analytics, the following application flow is the most suitable:

1. Amazon Kinesis Data Firehose:

- Kinesis Data Firehose can ingest the digitized files and deliver them to Amazon S3. It supports encryption in transit using HTTPS and can encrypt data at rest in S3 using AWS KMS.

2. Amazon S3:

- Amazon S3 provides server-side encryption (SSE) to encrypt data at rest. You can use SSE-S3 (Amazon S3-managed keys) or SSE-KMS (AWS KMS-managed keys) to ensure that the data is encrypted.

3. Amazon Athena:

- Amazon Athena allows you to query data stored in S3 using standard SQL. Athena supports encryption in transit (TLS) and can query encrypted data in S3 without needing to decrypt it first.

### Why not the other options?

- A. Digitized files -> Amazon Kinesis Data Analytics:

- Kinesis Data Analytics is used for real-time analytics on streaming data. It does not provide a mechanism for storing historical data or querying it using SQL, which is required for analyzing historical prescriptions.

- C. Digitized files -> Amazon Kinesis Data Streams -> Kinesis Client Library consumer -> Amazon S3 -> Athena:

- While this option also uses S3 and Athena, it involves more components (Kinesis Data Streams and Kinesis Client Library) than necessary, making it more complex. Kinesis Data Firehose is a simpler and more efficient solution for ingesting and delivering data to S3.

- D. Digitized files -> Amazon Kinesis Data Firehose -> Amazon Elasticsearch:

- Amazon Elasticsearch is used for search and analytics but does not provide the same level of SQL-based querying capabilities as Amazon Athena. Additionally, this option does not mention encryption at rest in S3, which is a key requirement.

By using Amazon Kinesis Data Firehose to ingest and deliver data to Amazon S3, and then querying the data with Amazon Athena, the pharmaceutical company can meet the data protection requirements while performing the necessary analytics on the historical prescriptions.
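For the encryption-at-rest requirement, default bucket encryption can be enforced on the Firehose destination bucket. The sketch below uses hypothetical bucket and KMS key identifiers.

```python
import boto3

# Hypothetical bucket and KMS key; SSE-KMS makes encryption at rest the default.
BUCKET = "prescription-archive"
KMS_KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"

s3 = boto3.client("s3")
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": KMS_KEY_ARN,
                }
            }
        ]
    },
)
```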

New cards
51
(Question presented as an image: an AWS KMS key policy for account 111122223333 that grants kms:* to the account principal. Which two statements describe the effect of this key policy?)

The correct answers are:

A. The policy allows access for the AWS account 111122223333 to manage key access through IAM policies.

B. The policy allows all IAM users in account 111122223333 to have full access to the KMS key.

### Explanation:

The key policy attached to the AWS KMS key has the following effects:

1. A. The policy allows access for the AWS account 111122223333 to manage key access through IAM policies:

- The key policy grants the AWS account 111122223333 the ability to manage access to the KMS key through IAM policies. This means that IAM policies within the account can be used to grant or restrict access to the key.

2. B. The policy allows all IAM users in account 111122223333 to have full access to the KMS key:

- The key policy grants the kms:* action to the AWS account 111122223333, which includes all IAM users in the account. This means that all IAM users in the account have full access to the KMS key unless restricted by IAM policies.

### Why not the other options?

- C. The policy allows the root user in account 111122223333 to have full access to the KMS key:

- While the root user does have full access to the KMS key, the policy is not limited to the root user. It grants access to the entire AWS account, including all IAM users.

- D. The policy allows the KMS service-linked role in account 111122223333 to have full access to the KMS key:

- The policy does not specifically mention the KMS service-linked role. It grants access to the entire AWS account, not just the service-linked role.

- E. The policy allows all IAM roles in account 111122223333 to have full access to the KMS key:

- The policy grants access to the entire AWS account, including IAM roles, but it is not limited to IAM roles. It includes all IAM users as well.

By specifying the AWS account 111122223333 as the principal and granting the kms:* action, the key policy allows the account to manage key access through IAM policies and grants all IAM users in the account full access to the KMS key.

New cards
52

A company uses AWS Organizations to manage 50 AWS accounts. The finance staff members log in as AWS IAM users in the FinanceDept AWS account. The staff members need to read the consolidated billing information in the MasterPayer AWS account. They should not be able to view any other resources in the MasterPayer AWS account. IAM access to billing has been enabled in the MasterPayer account.
Which of the following approaches grants the finance staff the permissions they require without granting any unnecessary permissions?

  • A. Create an IAM group for the finance users in the FinanceDept account, then attach the AWS managed ReadOnlyAccess IAM policy to the group.

  • B. Create an IAM group for the finance users in the MasterPayer account, then attach the AWS managed ReadOnlyAccess IAM policy to the group.

  • C. Create an AWS IAM role in the FinanceDept account with the ViewBilling permission, then grant the finance users in the MasterPayer account the permission to assume that role.

  • D. Create an AWS IAM role in the MasterPayer account with the ViewBilling permission, then grant the finance users in the FinanceDept account the permission to assume that role.

The correct answer is:

D. Create an AWS IAM role in the MasterPayer account with the ViewBilling permission, then grant the finance users in the FinanceDept account the permission to assume that role.

### Explanation:

To grant the finance staff the necessary permissions to read consolidated billing information in the MasterPayer account without granting unnecessary permissions, the following approach should be used:

1. IAM Role in the MasterPayer Account:

- Create an IAM role in the MasterPayer account with the ViewBilling permission. This role will allow access to the consolidated billing information.

2. Assume Role Permission:

- Grant the finance users in the FinanceDept account the permission to assume the IAM role created in the MasterPayer account. This allows the finance staff to switch to the role and access the billing information without having direct access to other resources in the MasterPayer account.

3. Least Privilege:

- This approach adheres to the principle of least privilege by granting only the necessary permissions to access billing information and nothing more.

### Why not the other options?

- A. Create an IAM group for the finance users in the FinanceDept account, then attach the AWS managed ReadOnlyAccess IAM policy to the group:

- The ReadOnlyAccess policy grants read access to all AWS resources, which is more than what the finance staff need. This option does not meet the requirement of granting only the necessary permissions.

- B. Create an IAM group for the finance users in the MasterPayer account, then attach the AWS managed ReadOnlyAccess IAM policy to the group:

- This option places the finance users in the MasterPayer account, which is not necessary and could grant them access to other resources in the MasterPayer account. It also uses the ReadOnlyAccess policy, which is overly permissive.

- C. Create an AWS IAM role in the FinanceDept account with the ViewBilling permission, then grant the finance users in the MasterPayer account the permission to assume that role:

- This option is incorrect because the ViewBilling permission needs to be in the MasterPayer account, not the FinanceDept account. The finance users in the MasterPayer account do not need to assume a role in the FinanceDept account.

By creating an IAM role in the MasterPayer account with the ViewBilling permission and allowing the finance users in the FinanceDept account to assume that role, the company can ensure that the finance staff have the necessary access to billing information without granting unnecessary permissions.
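A sketch of the role setup in the MasterPayer account is shown below; the FinanceDept account ID, role name, and policy name are hypothetical, and the finance users would additionally need an IAM policy in their own account allowing sts:AssumeRole on this role's ARN.

```python
import json
import boto3

# Hypothetical values; run this in the MasterPayer account.
FINANCE_DEPT_ACCOUNT = "222233334444"
ROLE_NAME = "FinanceBillingReadRole"

# Trust policy: only principals from the FinanceDept account may assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{FINANCE_DEPT_ACCOUNT}:root"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# Permission policy: billing read access only, nothing else in the MasterPayer account.
billing_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "aws-portal:ViewBilling", "Resource": "*"}
    ],
}

iam = boto3.client("iam")
iam.create_role(RoleName=ROLE_NAME, AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName="ViewBillingOnly",
    PolicyDocument=json.dumps(billing_policy),
)
```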

New cards
53

A Solutions Architect is designing a web application that uses Amazon CloudFront, an Elastic Load Balancing Application Load Balancer, and an Auto Scaling group of Amazon EC2 instances. The load balancer and EC2 instances are in the US West (Oregon) region. It has been decided that encryption in transit is necessary by using a customer-branded domain name from the client to CloudFront and from CloudFront to the load balancer.
Assuming that AWS Certificate Manager is used, how many certificates will need to be generated?

  • A. One in the US West (Oregon) region and one in the US East (Virginia) region.

  • B. Two in the US West (Oregon) region and none in the US East (Virginia) region.

  • C. One in the US West (Oregon) region and none in the US East (Virginia) region.

  • D. Two in the US East (Virginia) region and none in the US West (Oregon) region.

The correct answer is:

A. One in the US West (Oregon) region and one in the US East (Virginia) region.

### Explanation:

To enable encryption in transit for the web application using Amazon CloudFront, an Elastic Load Balancing Application Load Balancer (ALB), and an Auto Scaling group of Amazon EC2 instances, the following certificates are required:

1. Certificate for CloudFront:

- CloudFront requires the SSL/TLS certificate to be issued in the US East (N. Virginia) region. This is because CloudFront is a global service, and all certificates used with CloudFront must be managed in the US East (N. Virginia) region.

2. Certificate for the Application Load Balancer:

- The Application Load Balancer (ALB) is region-specific and requires the SSL/TLS certificate to be issued in the same region where the ALB is deployed, which is US West (Oregon) in this case.

### Why not the other options?

- B. Two in the US West (Oregon) region and none in the US East (Virginia) region:

- This is incorrect because CloudFront requires the certificate to be issued in the US East (N. Virginia) region. Certificates for CloudFront cannot be issued in the US West (Oregon) region.

- C. One in the US West (Oregon) region and none in the US East (Virginia) region:

- This is incorrect because CloudFront requires a certificate issued in the US East (N. Virginia) region. Only having a certificate in the US West (Oregon) region would not suffice for CloudFront.

- D. Two in the US East (Virginia) region and none in the US West (Oregon) region:

- This is incorrect because the Application Load Balancer requires a certificate issued in the same region where it is deployed (US West (Oregon)). Certificates issued in the US East (N. Virginia) region cannot be used for the ALB in the US West (Oregon) region.

By generating one certificate in the US West (Oregon) region for the ALB and one certificate in the US East (N. Virginia) region for CloudFront, the Solutions Architect can ensure that encryption in transit is properly configured for the web application.
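A minimal sketch of requesting both certificates with ACM (the domain name is hypothetical):

```python
import boto3

DOMAIN = "game.example.com"  # hypothetical customer-branded domain name

# Certificate for CloudFront must be requested in us-east-1 (N. Virginia).
acm_us_east_1 = boto3.client("acm", region_name="us-east-1")
cf_cert = acm_us_east_1.request_certificate(DomainName=DOMAIN, ValidationMethod="DNS")

# Certificate for the ALB must be requested in the ALB's region, us-west-2 (Oregon).
acm_us_west_2 = boto3.client("acm", region_name="us-west-2")
alb_cert = acm_us_west_2.request_certificate(DomainName=DOMAIN, ValidationMethod="DNS")

print(cf_cert["CertificateArn"], alb_cert["CertificateArn"])
```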

New cards
54

A Security Engineer has been asked to troubleshoot inbound connectivity to a web server. This single web server is not receiving inbound connections from the internet, whereas all other web servers are functioning properly.
The architecture includes network ACLs, security groups, and a virtual security appliance. In addition, the Development team has implemented Application Load Balancers (ALBs) to distribute the load across all web servers. It is a requirement that traffic between the web servers and the internet flow through the virtual security appliance.
The Security Engineer has verified the following:
1. The rule set in the Security Groups is correct
2. The rule set in the network ACLs is correct
3. The rule set in the virtual appliance is correct
Which of the following are other valid items to troubleshoot in this scenario? (Choose two.)

  • A. Verify that the 0.0.0.0/0 route in the route table for the web server subnet points to a NAT gateway.

  • B. Verify which Security Group is applied to the particular web server's elastic network interface (ENI).

  • C. Verify that the 0.0.0.0/0 route in the route table for the web server subnet points to the virtual security appliance.

  • D. Verify the registered targets in the ALB.

  • E. Verify that the 0.0.0.0/0 route in the public subnet points to a NAT gateway.

The correct answers are:

B. Verify which Security Group is applied to the particular web server's elastic network interface (ENI).

C. Verify that the 0.0.0.0/0 route in the route table for the web server subnet points to the virtual security appliance.

### Explanation:

To troubleshoot the inbound connectivity issue for the single web server, the Security Engineer should focus on the following:

1. B. Verify which Security Group is applied to the particular web server's elastic network interface (ENI):

- Even though the Security Engineer has verified that the rule set in the Security Groups is correct, it is important to ensure that the correct Security Group is actually applied to the ENI of the problematic web server. Misconfiguration at this level could prevent inbound connections.

2. C. Verify that the 0.0.0.0/0 route in the route table for the web server subnet points to the virtual security appliance:

- The requirement states that traffic between the web servers and the internet must flow through the virtual security appliance. If the route table for the web server subnet does not correctly route traffic through the virtual security appliance, inbound connections may fail.

### Why not the other options?

- A. Verify that the 0.0.0.0/0 route in the route table for the web server subnet points to a NAT gateway:

- A NAT gateway is used for outbound internet traffic from private subnets, not for inbound traffic. This option is not relevant to the issue of inbound connectivity.

- D. Verify the registered targets in the ALB:

- While verifying the registered targets in the ALB is important for load balancing, the issue is specific to a single web server not receiving inbound connections. This option is less relevant to the specific problem described.

- E. Verify that the 0.0.0.0/0 route in the public subnet points to a NAT gateway:

- This option is not relevant to the issue of inbound connectivity to the web server. The public subnet's route table configuration does not directly affect the web server's ability to receive inbound connections.

By verifying the Security Group applied to the ENI and ensuring the route table correctly routes traffic through the virtual security appliance, the Security Engineer can effectively troubleshoot the inbound connectivity issue for the web server.
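Both checks can be scripted; the sketch below uses hypothetical ENI and subnet IDs for the problematic web server.

```python
import boto3

# Hypothetical identifiers for the problematic web server and its subnet.
ENI_ID = "eni-0123456789abcdef0"
SUBNET_ID = "subnet-0123456789abcdef0"

ec2 = boto3.client("ec2")

# Check B: which security groups are actually attached to this server's ENI?
eni = ec2.describe_network_interfaces(NetworkInterfaceIds=[ENI_ID])["NetworkInterfaces"][0]
print([group["GroupId"] for group in eni["Groups"]])

# Check C: does the subnet's route table send 0.0.0.0/0 through the virtual appliance?
tables = ec2.describe_route_tables(
    Filters=[{"Name": "association.subnet-id", "Values": [SUBNET_ID]}]
)["RouteTables"]
for table in tables:
    for route in table["Routes"]:
        if route.get("DestinationCidrBlock") == "0.0.0.0/0":
            print(route.get("NetworkInterfaceId") or route.get("GatewayId"))
```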

New cards
55

Which approach will generate automated security alerts should too many unauthorized AWS API requests be identified?

  • A. Create an Amazon CloudWatch metric filter that looks for API call error codes and then implement an alarm based on that metric's rate.

  • B. Configure AWS CloudTrail to stream event data to Amazon Kinesis. Configure an AWS Lambda function on the stream to alarm when the threshold has been exceeded.

  • C. Run an Amazon Athena SQL query against CloudTrail log files. Use Amazon QuickSight to create an operational dashboard.

  • D. Use the Amazon Personal Health Dashboard to monitor the account's use of AWS services, and raise an alert if service error rates increase.

The correct answer is:

A. Create an Amazon CloudWatch metric filter that looks for API call error codes and then implement an alarm based on that metric's rate.

### Explanation:

To generate automated security alerts for unauthorized AWS API requests, the following approach is the most effective:

1. Amazon CloudWatch Metric Filter:

- CloudWatch metric filters can be used to search for specific patterns in log data, such as API call error codes (e.g., AccessDenied). By creating a metric filter that identifies these error codes, you can track the rate of unauthorized API requests.

2. CloudWatch Alarm:

- Once the metric filter is in place, you can create a CloudWatch alarm based on the metric's rate. The alarm can be configured to trigger when the rate of unauthorized API requests exceeds a specified threshold, sending notifications via Amazon SNS or other alerting mechanisms.

### Why not the other options?

- B. Configure AWS CloudTrail to stream event data to Amazon Kinesis. Configure an AWS Lambda function on the stream to alarm when the threshold has been exceeded:

- While this approach can work, it is more complex and involves additional services (Kinesis and Lambda). It is less efficient compared to using CloudWatch metric filters and alarms.

- C. Run an Amazon Athena SQL query against CloudTrail log files. Use Amazon QuickSight to create an operational dashboard:

- This approach is useful for analyzing historical data and creating dashboards but does not provide real-time alerts. It is not suitable for generating automated security alerts.

- D. Use the Amazon Personal Health Dashboard to monitor the account's use of AWS services, and raise an alert if service error rates increase:

- The Personal Health Dashboard provides information about the health of AWS services but does not monitor or alert on unauthorized API requests. This option is not relevant to the task.

By creating a CloudWatch metric filter to identify API call error codes and setting up a CloudWatch alarm based on the metric's rate, the Security Engineer can effectively generate automated security alerts for unauthorized AWS API requests.
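Assuming the CloudTrail trail already delivers events to a CloudWatch Logs log group (the log group name, namespace, threshold, and SNS topic below are hypothetical), the filter and alarm might be set up as follows:

```python
import boto3

# Hypothetical names and threshold.
LOG_GROUP = "CloudTrail/DefaultLogGroup"
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:security-alerts"

logs = boto3.client("logs")
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="unauthorized-api-calls",
    # Match AccessDenied / UnauthorizedOperation error codes in CloudTrail events.
    filterPattern='{ ($.errorCode = "*UnauthorizedOperation") || ($.errorCode = "AccessDenied*") }',
    metricTransformations=[
        {
            "metricName": "UnauthorizedAPICalls",
            "metricNamespace": "Security/CloudTrail",
            "metricValue": "1",
        }
    ],
)

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_alarm(
    AlarmName="unauthorized-api-calls",
    Namespace="Security/CloudTrail",
    MetricName="UnauthorizedAPICalls",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=5,  # example definition of "too many" unauthorized calls
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[SNS_TOPIC_ARN],
)
```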

New cards
56

A company has multiple production AWS accounts. Each account has AWS CloudTrail configured to log to a single Amazon S3 bucket in a central account. Two of the production accounts have trails that are not logging anything to the S3 bucket.
Which steps should be taken to troubleshoot the issue? (Choose three.)

  • A. Verify that the log file prefix is set to the name of the S3 bucket where the logs should go.

  • B. Verify that the S3 bucket policy allows access for CloudTrail from the production AWS account IDs.

  • C. Create a new CloudTrail configuration in the account, and configure it to log to the account's S3 bucket.

  • D. Confirm in the CloudTrail Console that each trail is active and healthy.

  • E. Open the global CloudTrail configuration in the master account, and verify that the storage location is set to the correct S3 bucket.

  • F. Confirm in the CloudTrail Console that the S3 bucket name is set correctly.

The correct answers are:

B. Verify that the S3 bucket policy allows access for CloudTrail from the production AWS account IDs.

D. Confirm in the CloudTrail Console that each trail is active and healthy.

F. Confirm in the CloudTrail Console that the S3 bucket name is set correctly.

### Explanation:

To troubleshoot the issue of CloudTrail not logging to the S3 bucket in the central account, the following steps should be taken:

1. B. Verify that the S3 bucket policy allows access for CloudTrail from the production AWS account IDs:

- The S3 bucket policy must grant the necessary permissions for CloudTrail from the production accounts to write logs to the bucket. If the bucket policy is not correctly configured, CloudTrail will not be able to log to the bucket.

2. D. Confirm in the CloudTrail Console that each trail is active and healthy:

- Check the CloudTrail Console to ensure that each trail is active and healthy. If a trail is not active or is misconfigured, it will not log events to the S3 bucket.

3. F. Confirm in the CloudTrail Console that the S3 bucket name is set correctly:

- Ensure that the S3 bucket name specified in the CloudTrail configuration is correct. If the bucket name is incorrect, CloudTrail will not be able to write logs to the intended bucket.

### Why not the other options?

- A. Verify that the log file prefix is set to the name of the S3 bucket where the logs should go:

- The log file prefix is used to organize log files within the S3 bucket, not to specify the bucket itself. This option is not relevant to the issue of logs not being written to the bucket.

- C. Create a new CloudTrail configuration in the account, and configure it to log to the account's S3 bucket:

- This option suggests creating a new CloudTrail configuration that logs to the account's own S3 bucket, which does not address the requirement of logging to a central S3 bucket. It is not a valid troubleshooting step for the described issue.

- E. Open the global CloudTrail configuration in the master account, and verify that the storage location is set to the correct S3 bucket:

- CloudTrail configurations are account-specific, and there is no "global CloudTrail configuration" in the master account. This option is not applicable.

By verifying the S3 bucket policy, confirming that each trail is active and healthy, and ensuring the S3 bucket name is set correctly, the Security Engineer can effectively troubleshoot the issue of CloudTrail not logging to the central S3 bucket.
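For reference, the bucket policy for central delivery typically contains the standard CloudTrail ACL-check and write statements, with one object prefix per source account. The sketch below uses hypothetical bucket and account IDs.

```python
import json
import boto3

# Hypothetical bucket name and production account IDs.
BUCKET = "central-cloudtrail-logs"
PRODUCTION_ACCOUNTS = ["111111111111", "222222222222"]

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
        {
            "Sid": "AWSCloudTrailWrite",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            # One log prefix per production account that delivers to this bucket.
            "Resource": [
                f"arn:aws:s3:::{BUCKET}/AWSLogs/{account}/*"
                for account in PRODUCTION_ACCOUNTS
            ],
            "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}},
        },
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(bucket_policy))
```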

New cards
57

Amazon CloudWatch Logs agent is successfully delivering logs to the CloudWatch Logs service. However, logs stop being delivered after the associated log stream has been active for a specific number of hours.
What steps are necessary to identify the cause of this phenomenon? (Choose two.)

  • A. Ensure that file permissions for monitored files that allow the CloudWatch Logs agent to read the file have not been modified.

  • B. Verify that the OS Log rotation rules are compatible with the configuration requirements for agent streaming.

  • C. Configure an Amazon Kinesis producer to first put the logs into Amazon Kinesis Streams.

  • D. Create a CloudWatch Logs metric to isolate a value that changes at least once during the period before logging stops.

  • E. Use AWS CloudFormation to dynamically create and maintain the configuration file for the CloudWatch Logs agent.

The correct answers are:

A. Ensure that file permissions for monitored files that allow the CloudWatch Logs agent to read the file have not been modified.

B. Verify that the OS Log rotation rules are compatible with the configuration requirements for agent streaming.

### Explanation:

To identify the cause of logs stopping after the associated log stream has been active for a specific number of hours, the following steps should be taken:

1. A. Ensure that file permissions for monitored files that allow the CloudWatch Logs agent to read the file have not been modified:

- If the file permissions for the monitored log files are changed, the CloudWatch Logs agent may no longer be able to read the files, causing logs to stop being delivered. Ensuring that the file permissions remain correct is crucial for continuous log delivery.

2. B. Verify that the OS Log rotation rules are compatible with the configuration requirements for agent streaming:

- Log rotation rules can affect the availability and accessibility of log files. If the log rotation rules are not compatible with the CloudWatch Logs agent's configuration, the agent may lose access to the log files after rotation, causing logs to stop being delivered.

### Why not the other options?

- C. Configure an Amazon Kinesis producer to first put the logs into Amazon Kinesis Streams:

- This option introduces an additional service (Kinesis) and does not address the root cause of the issue. It is not a troubleshooting step but rather a potential alternative solution.

- D. Create a CloudWatch Logs metric to isolate a value that changes at least once during the period before logging stops:

- While creating metrics can help monitor log delivery, it does not directly identify the cause of logs stopping. This option is more about monitoring than troubleshooting.

- E. Use AWS CloudFormation to dynamically create and maintain the configuration file for the CloudWatch Logs agent:

- This option is about managing the configuration of the CloudWatch Logs agent but does not directly address the issue of logs stopping after a specific period. It is not a troubleshooting step.

By ensuring that file permissions are correct and verifying that log rotation rules are compatible with the CloudWatch Logs agent's configuration, the Security Engineer can identify and resolve the issue of logs stopping after a specific number of hours.

New cards
58

A company has deployed a custom DNS server in AWS. The Security Engineer wants to ensure that Amazon EC2 instances cannot use the Amazon-provided DNS.
How can the Security Engineer block access to the Amazon-provided DNS in the VPC?

  • A. Deny access to the Amazon DNS IP within all security groups.

  • B. Add a rule to all network access control lists that deny access to the Amazon DNS IP.

  • C. Add a route to all route tables that black holes traffic to the Amazon DNS IP.

  • D. Disable DNS resolution within the VPC configuration.

Below is a detailed explanation of why “Disable DNS resolution within the VPC configuration” (Option D) is the correct answer and why the other options are not viable solutions.


Correct Answer: D. Disable DNS resolution within the VPC configuration

Explanation:

In AWS, each VPC has two primary DNS attributes:

  1. enableDnsSupport (called DNS resolution in the console)

  2. enableDnsHostnames (called DNS hostnames in the console)

When you disable DNS resolution (enableDnsSupport = false), the Amazon-provided DNS server (the address at the VPC CIDR base plus two, such as 10.0.0.2 for a 10.0.0.0/16 VPC, also reachable at 169.254.169.253) stops resolving DNS queries from instances in that VPC. Hence, your Amazon EC2 instances in that VPC will not be able to send DNS queries to the Amazon-provided resolver, effectively forcing them to use your custom DNS server (typically distributed through a custom DHCP options set).

This is the recommended and most straightforward approach to ensuring that instances cannot use the Amazon-provided DNS.
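A minimal sketch of this change with boto3 (the VPC ID is a placeholder):

```python
import boto3

VPC_ID = "vpc-0123456789abcdef0"  # hypothetical VPC ID

ec2 = boto3.client("ec2")
# Turn off the Amazon-provided DNS resolver for the whole VPC.
ec2.modify_vpc_attribute(VpcId=VPC_ID, EnableDnsSupport={"Value": False})

# Verify the attribute took effect.
attr = ec2.describe_vpc_attribute(VpcId=VPC_ID, Attribute="enableDnsSupport")
print(attr["EnableDnsSupport"]["Value"])
```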


Why the Other Answers Are Incorrect

A. Deny access to the Amazon DNS IP within all security groups

  • Security groups in AWS are stateful and designed to allow traffic based on “allow” rules; they do not have explicit “deny” rules.

  • Even if you attempted to prevent outbound DNS lookups to the .2 address, security groups are not typically the tool used to block traffic to internal AWS services. They simply allow or disallow traffic based on allowed rules.

  • This approach is not reliable for ensuring the Amazon DNS is completely blocked, especially because you cannot add a “deny” rule in a security group—it either “allows” traffic that matches a rule or drops traffic if there is no matching allow rule.

B. Add a rule to all network access control lists (NACLs) that deny access to the Amazon DNS IP

  • Network ACLs are stateless and can have explicit deny rules, but this is still a complex and less reliable approach:

    • You would need to carefully manage both inbound and outbound rules for DNS (UDP/TCP port 53) and the Amazon DNS IP (the .2 address).

    • Misconfiguration can disrupt other critical VPC functionality.

  • Moreover, even if you try to block 10.0.0.2 (or the .2 address in your VPC) at the NACL level, certain AWS internal services might still rely on that resolver, and blocking it could introduce unexpected behavior.

  • Blocking DNS at the NACL layer is not the standard AWS-recommended solution.

C. Add a route to all route tables that black holes traffic to the Amazon DNS IP

  • The .2 address falls within the VPC CIDR, so it is covered by the “local” route that is present in every route table and cannot be overridden by custom route entries.

  • AWS does not allow you to add a route that black holes the .2 address. The local route for the VPC CIDR always takes precedence, so a broader custom route (for example, a 0.0.0.0/0 blackhole) will never apply to that traffic.

  • Attempting to do so is not supported and will not reliably ensure traffic never reaches the Amazon DNS.


Summary

  • Disabling DNS resolution in the VPC is the correct and recommended method.

  • It ensures that EC2 instances cannot use the Amazon-provided DNS endpoint (x.x.x.2) at all.

  • Blocking via Security Groups or NACLs is either not possible (security groups cannot explicitly “deny”) or not recommended (NACLs can be cumbersome and risk breaking internal VPC functionality).

  • Route tables cannot override the “local” link to the .2 address.

Hence, Option D is correct, and the other options are not suitable or recommended solutions.

New cards
59

An employee accidentally exposed an AWS access key and secret access key during a public presentation. The company Security Engineer immediately disabled the key.
How can the Engineer assess the impact of the key exposure and ensure that the credentials were not misused? (Choose two.)

  • A. Analyze AWS CloudTrail for activity.

  • B. Analyze Amazon CloudWatch Logs for activity.

  • C. Download and analyze the IAM Use report from AWS Trusted Advisor.

  • D. Analyze the resource inventory in AWS Config for IAM user activity.

  • E. Download and analyze a credential report from IAM.

When investigating whether newly exposed credentials were misused, the two most direct and comprehensive data sources in AWS are:

  1. CloudTrail – which logs API calls across nearly all AWS services.

  2. IAM’s credential report – which provides last-used timestamps and details for access keys.

Hence, the best answer choices (and the reasons) are:


Correct Choices
A. Analyze AWS CloudTrail for activity.

  • Why?

    • AWS CloudTrail records all API calls (including sign-in attempts and resource-level actions) made using the compromised credentials.

    • By reviewing CloudTrail logs, you can see exactly which API calls were made, by whom, at what time, and from which source IP address.

    • This is the primary and most complete source for detecting any unauthorized API calls.

E. Download and analyze a credential report from IAM.

  • Why?

    • IAM provides a credential report that shows, among other things, the “Access key last used” date and time, as well as which AWS service and region the key accessed.

    • If the credential was used after being exposed (and before being disabled), it should show a “last used” timestamp in the report, giving you a quick overview of possible misuse.

    • You can then compare that timestamp with internal records (e.g., CloudTrail logs) to see if those calls were expected or malicious.


Incorrect/Non-Ideal Choices

B. Analyze Amazon CloudWatch Logs for activity.

  • Why not optimal?

    • CloudWatch Logs can contain logs from services (like Lambda logs, application logs, VPC Flow Logs, etc.) if specifically configured.

    • It does not automatically capture all AWS API calls or changes at the IAM level.

    • While it can provide supplemental data, it’s not the primary place to see full credential usage across AWS.

C. Download and analyze the IAM Use report from AWS Trusted Advisor.

  • Why not available?

    • Trusted Advisor offers checks and recommendations (e.g., for exposed keys or overly permissive policies), but there is no dedicated “IAM Use report” you can download that is comparable in detail to a CloudTrail log or the IAM credential report.

    • It will not give you granular API call records.

D. Analyze the resource inventory in AWS Config for IAM user activity.

  • Why not sufficient?

    • AWS Config primarily tracks configuration changes to AWS resources (including some IAM changes like attaching policies), but it does not log every API call.

    • For example, if the exposed key was used to read S3 data or spin up instances (without changing configs), AWS Config would not necessarily reflect those actions.

    • It’s useful for compliance and resource tracking but not for complete activity/usage auditing.


Summary

To be sure that exposed credentials weren’t misused, you’ll need to investigate what calls (if any) were made. The most direct sources for that are:

  • CloudTrail (A) for detailed API-level logs,

  • IAM credential report (E) to see last-used times and confirm whether the key was indeed invoked after exposure.
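
As a rough illustration of both steps, the boto3 sketch below looks up CloudTrail events for the exposed key and pulls the IAM credential report; the access key ID is a made-up placeholder.

```python
import csv
import io
import boto3

# Hypothetical exposed access key ID, for illustration only.
EXPOSED_KEY_ID = "AKIAEXAMPLEEXPOSED"

# 1) CloudTrail: list recent API calls made with the exposed key.
cloudtrail = boto3.client("cloudtrail")
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "AccessKeyId", "AttributeValue": EXPOSED_KEY_ID}],
    MaxResults=50,
)
for event in events["Events"]:
    print(event["EventTime"], event["EventName"], event.get("Username"))

# 2) IAM credential report: check the access key "last used" columns.
iam = boto3.client("iam")
iam.generate_credential_report()        # kicks off report generation
report = iam.get_credential_report()    # may need to retry until the report is ready
rows = csv.DictReader(io.StringIO(report["Content"].decode("utf-8")))
for row in rows:
    print(row["user"], row["access_key_1_last_used_date"], row["access_key_2_last_used_date"])
```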

New cards
60

Which of the following minimizes the potential attack surface for applications?

  • A. Use security groups to provide stateful firewalls for Amazon EC2 instances at the hypervisor level.

  • B. Use network ACLs to provide stateful firewalls at the VPC level to prevent access to any specific AWS resource.

  • C. Use AWS Direct Connect for secure trusted connections between EC2 instances within private subnets.

  • D. Design network security in a single layer within the perimeter network (also known as DMZ, demilitarized zone, and screened subnet) to facilitate quicker responses to threats.

Option A is the best solution

Let’s review each choice against the goal of minimizing the potential attack surface for applications:


A. Use security groups to provide stateful firewalls for Amazon EC2 instances at the hypervisor level.

  • Security groups are recommended best practice in AWS:

    • They are stateful: responses to allowed inbound traffic are automatically allowed back out.

    • They operate at the instance (hypervisor) level, so each instance can be locked down precisely.

    • They minimize the exposed surface by allowing only explicitly permitted traffic.

This approach aligns with AWS best practices for least-privileged, micro-level control over ingress/egress. So, this is a strong contender.


B. Use network ACLs to provide stateful firewalls at the VPC level to prevent access to any specific AWS resource.

  • Network ACLs (NACLs) are stateless, not stateful.

  • They operate at the subnet level, not per instance.

  • Because they are stateless, you must create both inbound and outbound rules to permit return traffic; they do not inherently “remember” or track connections.

  • NACLs are often a secondary layer of defense, but by themselves do not provide the fine-grained, stateful controls of security groups.

  • The statement that they “provide stateful firewalls” is incorrect.

Hence, this is not correct for minimizing attack surface in the way the question is framed, especially given the inaccuracy about stateful functionality.


C. Use AWS Direct Connect for secure trusted connections between EC2 instances within private subnets.

  • AWS Direct Connect provides a dedicated network connection from on-premises to AWS.

  • It does not inherently address reducing the application’s external attack surface in a typical VPC scenario.

  • It’s more about secure WAN/enterprise connectivity, not about locking down or filtering inbound traffic from the public internet to your instances.

This choice isn’t related to minimizing the application’s public attack surface.


D. Design network security in a single layer within the perimeter network (DMZ) to facilitate quicker responses to threats.

  • Relying on a single-layer “perimeter” approach (a single DMZ) doesn’t align with AWS’s recommended defense-in-depth approach.

  • Usually, you’d have multiple layers of network segmentation (private subnets, security groups, NACLs, etc.) instead of a single-layer DMZ.

  • A single perimeter layer is more vulnerable and does not minimize the attack surface effectively.

Hence, this is not the correct approach for minimizing attack surface.


Conclusion

Option A is the best solution to minimize the potential attack surface: using security groups for stateful, hypervisor-level firewalls on each Amazon EC2 instance is an AWS best practice and significantly reduces exposure.

New cards
61

A distributed web application is installed across several EC2 instances in public subnets residing in two Availability Zones. Apache logs show several intermittent brute-force attacks from hundreds of IP addresses at the layer 7 level over the past six months.
What would be the BEST way to reduce the potential impact of these attacks in the future?

  • A. Use custom route tables to prevent malicious traffic from routing to the instances.

  • B. Update security groups to deny traffic from the originating source IP addresses.

  • C. Use network ACLs.

  • D. Install intrusion prevention software (IPS) on each instance.

Answer: D. Install intrusion prevention software (IPS) on each instance.


Why This Is the Correct Answer

  • Layer-7 Attacks: The question specifies a “layer-7” brute-force attack coming from hundreds of IPs. An IPS (intrusion prevention system) can inspect traffic at the application (HTTP) layer, recognize suspicious/brute-force patterns, and block them in real time—even when attackers frequently change source IPs.

  • Dynamic and Distributed Threats: Because these attacks are intermittent and come from numerous IP addresses, a system that can dynamically learn and apply detection rules (such as an IPS) is far more effective than static blocking methods.


Why the Other Answers Are Wrong

  1. A. Use custom route tables to prevent malicious traffic from routing to the instances

    • Route tables direct traffic to destinations (e.g., internet gateway, NAT gateway), but they do not perform IP-based filtering at the application layer.

    • You cannot simply “blackhole” or block specific IP addresses at layer 7 by changing route tables.

  2. B. Update security groups to deny traffic from the originating source IP addresses

    • Security groups are stateful and support only allow rules; you cannot specify an explicit “deny.” You would have to remove IP ranges from allow rules instead.

    • Managing hundreds (or thousands) of IPs is difficult, and attackers often rotate addresses quickly. This approach doesn’t scale for a distributed brute-force attack.

  3. C. Use network ACLs

    • Network ACLs are stateless and operate at the subnet boundary, typically at layers 3 and 4. They cannot inspect or block patterns at layer 7.

    • Keeping up with constantly changing IP addresses is cumbersome, and there’s no built-in intelligence to detect complex brute-force patterns.

New cards
62

A company plans to move most of its IT infrastructure to AWS. They want to leverage their existing on-premises Active Directory as an identity provider for AWS.
Which combination of steps should a Security Engineer take to federate the company's on-premises Active Directory with AWS? (Choose two.)

  • A. Create IAM roles with permissions corresponding to each Active Directory group.

  • B. Create IAM groups with permissions corresponding to each Active Directory group.

  • C. Configure Amazon Cloud Directory to support a SAML provider.

  • D. Configure Active Directory to add relying party trust between Active Directory and AWS.

  • E. Configure Amazon Cognito to add relying party trust between Active Directory and AWS.

Answer: A and D.

  1. Create IAM roles with permissions corresponding to each AD group.

  2. Configure Active Directory (via AD FS or another SAML-capable solution) to add a relying party trust between AD and AWS.


Why These Are Correct

  1. Create IAM Roles Matching AD Groups

    • When federating via SAML, AWS IAM roles are used to define which AWS permissions a particular AD group (or set of users) should receive.

    • Each AD group is mapped to a corresponding IAM role so that, upon successful authentication and SAML assertion, users assume the relevant IAM role for their group.

  2. Configure AD for a Relying Party Trust

    • You need a SAML 2.0–compliant IdP (e.g., AD FS) to establish a trust relationship with AWS.

    • In Active Directory (using AD FS), this is configured by creating a “relying party trust” that represents AWS. This setup enables users who authenticate with AD to receive SAML tokens they can use to log in to AWS.


Why the Other Answers Are Incorrect

  • B. Create IAM groups with permissions corresponding to each Active Directory group

    • SAML federation is primarily role-based in AWS, not group-based. You typically do not map on-prem AD groups to IAM groups; instead, you map them to IAM roles.

  • C. Configure Amazon Cloud Directory to support a SAML provider

    • Cloud Directory is different from IAM SAML federation. You configure the SAML provider directly in IAM, not in Cloud Directory.

  • E. Configure Amazon Cognito to add relying party trust between AD and AWS

    • Cognito is often used for web or mobile app authentication scenarios, especially for user pools or federation with social IdPs.

    • For directly federating on-prem AD users to the AWS Management Console or services, the standard approach is SAML federation with IAM. Cognito is not required for that.
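
A minimal boto3 sketch of the AWS-side setup might look like the following; the provider name, role name, and metadata file are hypothetical, and the relying party trust itself is configured on the on-premises AD FS side.

```python
import json
import boto3

iam = boto3.client("iam")

# Hypothetical AD FS federation metadata exported from the on-premises IdP.
with open("adfs-federation-metadata.xml") as f:
    metadata = f.read()

provider = iam.create_saml_provider(
    SAMLMetadataDocument=metadata,
    Name="OnPremADFS",
)
provider_arn = provider["SAMLProviderArn"]

# Trust policy: only principals federated through the SAML provider may
# assume this role, and only via the AWS sign-in SAML endpoint.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Federated": provider_arn},
        "Action": "sts:AssumeRoleWithSAML",
        "Condition": {
            "StringEquals": {"SAML:aud": "https://signin.aws.amazon.com/saml"}
        },
    }],
}

# One role per AD group; permissions policies are attached separately.
iam.create_role(
    RoleName="ADGroup-Developers",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
```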

New cards
63

A security alert has been raised for an Amazon EC2 instance in a customer account that is exhibiting strange behavior. The Security Engineer must first isolate the
EC2 instance and then use tools for further investigation.
What should the Security Engineer use to isolate and research this event? (Choose three.)

  • A. AWS CloudTrail

  • B. Amazon Athena

  • C. AWS Key Management Service (AWS KMS)

  • D. VPC Flow Logs

  • E. AWS Firewall Manager

  • F. Security groups

Answer: A, D, and F

  1. AWS CloudTrail (A)

    • Provides a history of AWS API calls for the account, including those made to and from the instance. This is invaluable for investigating unusual behavior at the AWS API level (e.g., suspicious launches, security group changes, snapshots, etc.).

  2. VPC Flow Logs (D)

    • Captures information about the IP traffic going to and from network interfaces in the VPC. By reviewing these logs, you can investigate unusual inbound or outbound network traffic patterns from the suspicious EC2 instance.

  3. Security Groups (F)

    • You can isolate the EC2 instance by modifying its security group to restrict all inbound/outbound traffic, except for administrative or forensic channels (e.g., allowing only your forensic workstation's IP). This prevents further malicious activity while enabling incident response.


Why the Other Options Are Incorrect

  • Amazon Athena (B):

    • While Athena can be used to query VPC Flow Logs and CloudTrail logs stored in Amazon S3, it is not directly required to isolate or investigate. You certainly could use Athena for more advanced analysis, but given you can only choose three, the direct log sources (CloudTrail, Flow Logs) and the means to quarantine the instance (Security Groups) are more essential.

  • AWS Key Management Service (C):

    • KMS is for managing cryptographic keys and does not help directly isolate or investigate suspicious EC2 instance activity.

  • AWS Firewall Manager (E):

    • Firewall Manager is a service to centrally configure and manage AWS WAF and security group rules across multiple accounts. It does not directly isolate a single instance nor provide logs for deeper investigation.
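
As an illustration of the isolation step, the boto3 sketch below creates a quarantine security group and swaps it onto the suspect instance; all IDs and the forensic CIDR are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical identifiers for illustration only.
VPC_ID = "vpc-0123456789abcdef0"
INSTANCE_ID = "i-0123456789abcdef0"
FORENSICS_CIDR = "203.0.113.10/32"   # workstation allowed to reach the instance

# 1) Create a quarantine security group. A new group has no inbound rules,
#    so only the forensic workstation rule added below is permitted in.
sg = ec2.create_security_group(
    GroupName="quarantine-sg",
    Description="Isolate instance during incident response",
    VpcId=VPC_ID,
)
sg_id = sg["GroupId"]

ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
        "IpRanges": [{"CidrIp": FORENSICS_CIDR}],
    }],
)

# Optionally remove the default allow-all egress rule as well.
ec2.revoke_security_group_egress(
    GroupId=sg_id,
    IpPermissions=[{"IpProtocol": "-1", "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
)

# 2) Replace all security groups on the suspect instance with the quarantine group.
ec2.modify_instance_attribute(InstanceId=INSTANCE_ID, Groups=[sg_id])
```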

New cards
64

A financial institution has the following security requirements:
✑ Cloud-based users must be contained in a separate authentication domain.
✑ Cloud-based users cannot access on-premises systems.
As part of standing up a cloud environment, the financial institution is creating a number of Amazon managed databases and Amazon EC2 instances. An Active Directory service exists on-premises that has all the administrator accounts, and these must be able to access the databases and instances.
How would the organization manage its resources in the MOST secure manner? (Choose two.)

  • A. Configure an AWS Managed Microsoft AD to manage the cloud resources.

  • B. Configure an additional on-premises Active Directory service to manage the cloud resources.

  • C. Establish a one-way trust relationship from the existing Active Directory to the new Active Directory service.

  • D. Establish a one-way trust relationship from the new Active Directory to the existing Active Directory service.

  • E. Establish a two-way trust between the new and existing Active Directory services.

Answer: A and D.

  1. Configure an AWS Managed Microsoft AD to manage the cloud resources

  2. Establish a one-way trust relationship from the new (cloud) Active Directory to the existing (on-prem) Active Directory service


Why These Two Are Correct

  1. AWS Managed Microsoft AD for the cloud domain (A)

    • To keep cloud-based users separate from on-premises users (as required), you deploy a distinct Active Directory domain in the cloud using AWS Managed Microsoft AD.

    • This ensures the cloud user accounts and credentials are isolated and managed independently from your on-prem AD.

  2. One-Way Trust from New AD to Existing AD (D)

    • In a one-way trust, the trusting domain (the new AD in AWS) accepts logins from the trusted domain (the existing on-prem AD).

    • Practically, this means your on-prem AD user accounts (including admin accounts) can authenticate to and manage resources in the AWS domain.

    • Because the trust is one-way (and specifically from the new AD to the on-prem AD), cloud-based accounts cannot traverse the trust to access on-premises systems.

    • This setup satisfies the requirement that “cloud-based users cannot access on-premises systems,” while still allowing on-prem admins to access cloud resources.


Why the Other Options Are Incorrect

B. Configure an additional on-premises Active Directory service to manage the cloud resources

  • This would still be an on-prem AD, not a separate cloud domain. It doesn’t satisfy the requirement to keep cloud-based users contained in a distinct authentication domain that is isolated from on-premises.

C. Establish a one-way trust relationship from the existing AD to the new AD

  • This direction (on-prem AD trusts cloud AD) would allow cloud-based accounts to access on-prem resources, which violates the requirement that “cloud-based users cannot access on-premises systems.”

E. Establish a two-way trust between the new and existing Active Directory services

  • A two-way trust allows authentication in both directions, meaning cloud users could also access on-premises resources, violating the security requirement.

New cards
65

An organization wants to be alerted when an unauthorized Amazon EC2 instance in its VPC performs a network port scan against other instances in the VPC.
When the Security team performs its own internal tests in a separate account by using pre-approved third-party scanners from the AWS Marketplace, the Security team also then receives multiple Amazon GuardDuty events from Amazon CloudWatch alerting on its test activities.
How can the Security team suppress alerts about authorized security tests while still receiving alerts about the unauthorized activity?

  • A. Use a filter in AWS CloudTrail to exclude the IP addresses of the Security team's EC2 instances.

  • B. Add the Elastic IP addresses of the Security team's EC2 instances to a trusted IP list in Amazon GuardDuty.

  • C. Install the Amazon Inspector agent on the EC2 instances that the Security team uses.

  • D. Grant the Security team's EC2 instances a role with permissions to call Amazon GuardDuty API operations.

Answer: B. Add the Elastic IP addresses of the Security team’s EC2 instances to a trusted IP list in Amazon GuardDuty.


Why This Is the Correct Answer

  • Amazon GuardDuty allows you to specify known “trusted IP lists.”

  • Once you add the scanning instances’ Elastic IP addresses to that list, GuardDuty will not generate findings for traffic originating from those IP addresses (e.g., port scans by your authorized security scanners).

  • Thus, you still receive alarms for unauthorized scanning activity, but scanning from your pre-approved IPs will be suppressed.
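
A minimal boto3 sketch of registering a trusted IP list is shown below; the detector ID and the S3 location of the IP list file are placeholders.

```python
import boto3

# Hypothetical values for illustration only.
DETECTOR_ID = "12abc34d567e8fa901bc2d34e56789f0"
TRUSTED_LIST_URL = "https://s3.amazonaws.com/example-security-bucket/trusted-scanner-ips.txt"

guardduty = boto3.client("guardduty")

# Register the scanners' Elastic IPs (one IP/CIDR per line in the TXT file)
# as a trusted IP list; GuardDuty stops generating findings for this traffic.
guardduty.create_ip_set(
    DetectorId=DETECTOR_ID,
    Name="authorized-security-scanners",
    Format="TXT",
    Location=TRUSTED_LIST_URL,
    Activate=True,
)
```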


Why the Other Options Are Incorrect

A. Use a filter in AWS CloudTrail to exclude the IP addresses of the Security team's EC2 instances

  • CloudTrail records API calls to AWS services, not network-based events.

  • GuardDuty findings come from network traffic and other data sources (like VPC Flow Logs, DNS logs), so filtering CloudTrail data will not stop GuardDuty from generating port-scan findings.

C. Install the Amazon Inspector agent on the EC2 instances that the Security team uses

  • Amazon Inspector is a vulnerability assessment service that evaluates the security state of your instances.

  • Installing Inspector does not prevent or suppress GuardDuty alerts about port scanning. It’s simply unrelated to how GuardDuty detects suspicious network activity.

D. Grant the Security team's EC2 instances a role with permissions to call Amazon GuardDuty API operations

  • Even if the scanners have permission to call GuardDuty APIs, that does not automatically suppress findings for port scans.

  • GuardDuty continuously monitors traffic sources and patterns—it does not rely on the instance’s role to decide whether traffic is authorized or not.

New cards
66

An organization is moving non-business-critical applications to AWS while maintaining a mission-critical application in an on-premises data center. An on-premises application must share limited confidential information with the applications in AWS. The internet performance is unpredictable.
Which configuration will ensure continued connectivity between sites MOST securely?

  • A. VPN and a cached storage gateway

  • B. AWS Snowball Edge

  • C. VPN Gateway over AWS Direct Connect

  • D. AWS Direct Connect

Answer: C. VPN Gateway over AWS Direct Connect


Why This Is Correct

  1. Stable Connectivity: AWS Direct Connect provides a dedicated network connection from on-premises to AWS, bypassing the public internet. This helps mitigate concerns about unpredictable internet performance.

  2. Enhanced Security (VPN Over Direct Connect):

    • Direct Connect alone does not inherently encrypt traffic.

    • By running a VPN tunnel over Direct Connect, you can ensure data in transit is encrypted.

    • This meets the requirement to share “limited confidential information” securely.

In other words, you get both the reliability and bandwidth benefits of Direct Connect and the cryptographic protection of a VPN.


Why the Other Options Are Incorrect

  • A. VPN and a Cached Storage Gateway

    • A standard VPN uses the public internet, which can have unpredictable performance.

    • A storage gateway is primarily for hybrid storage use cases, not continuous, secure, predictable connectivity for an application.

  • B. AWS Snowball Edge

    • Snowball Edge is a physical data transfer device for migrating large amounts of data offline.

    • It does not provide the real-time connectivity that the application needs.

  • D. AWS Direct Connect

    • While Direct Connect provides dedicated, predictable bandwidth, it does not include encryption by default.

    • For “most secure” communication, especially when dealing with confidential data, you typically add a VPN layer on top of Direct Connect or use another encryption method.

New cards
67

An application has been built with Amazon EC2 instances that retrieve messages from Amazon SQS. Recently, IAM changes were made and the instances can no longer retrieve messages.
What actions should be taken to troubleshoot the issue while maintaining least privilege? (Choose two.)

  • A. Configure and assign an MFA device to the role used by the instances.

  • B. Verify that the SQS resource policy does not explicitly deny access to the role used by the instances.

  • C. Verify that the access key attached to the role used by the instances is active.

  • D. Attach the AmazonSQSFullAccess managed policy to the role used by the instances.

  • E. Verify that the role attached to the instances contains policies that allow access to the queue.

Answer: B and E.

  1. Verify that the SQS resource policy does not explicitly deny access (B)

  2. Verify that the role attached to the instances contains policies that allow access (E)


Why These Are Correct

  1. Check SQS Resource Policy (B)

    • An SQS resource policy can explicitly allow or deny access based on principals (roles, users, etc.).

    • If there is an “explicit deny” condition for the role, even valid IAM permissions in the role’s policy would be overridden.

  2. Check the Role’s Policies (E)

    • The EC2 instance gets its permissions from the IAM role it is running under.

    • Confirm the role’s attached policies explicitly allow the sqs:ReceiveMessage, sqs:DeleteMessage, sqs:ChangeMessageVisibility, etc., actions on the relevant queue.


Why the Other Options Are Incorrect

  • A. Configure and assign an MFA device to the role used by the instances

    • EC2 instance roles typically use temporary credentials from the Instance Metadata Service. MFA is not involved in automated requests to AWS resources.

  • C. Verify that the access key attached to the role used by the instances is active

    • Again, EC2 instance roles do not utilize a static access key and secret. They use temporary STS credentials. You don’t attach a long-term access key to an IAM role.

  • D. Attach the AmazonSQSFullAccess managed policy to the role used by the instances

    • This would grant far more permissions than necessary, violating the least privilege principle. Instead, ensure the minimum required actions for SQS access are allowed.
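
To make the troubleshooting steps concrete, the boto3 sketch below reads the queue's resource policy and simulates the role's identity policies against the required SQS actions; the queue URL, queue ARN, and role ARN are placeholders.

```python
import boto3

# Hypothetical identifiers for illustration only.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/app-queue"
QUEUE_ARN = "arn:aws:sqs:us-east-1:123456789012:app-queue"
ROLE_ARN = "arn:aws:iam::123456789012:role/app-instance-role"

# 1) Check the SQS resource policy for an explicit Deny against the role.
sqs = boto3.client("sqs")
attrs = sqs.get_queue_attributes(QueueUrl=QUEUE_URL, AttributeNames=["Policy"])
print(attrs.get("Attributes", {}).get("Policy", "<no resource policy attached>"))

# 2) Simulate whether the role's identity policies allow the needed actions.
iam = boto3.client("iam")
result = iam.simulate_principal_policy(
    PolicySourceArn=ROLE_ARN,
    ActionNames=["sqs:ReceiveMessage", "sqs:DeleteMessage"],
    ResourceArns=[QUEUE_ARN],
)
for evaluation in result["EvaluationResults"]:
    print(evaluation["EvalActionName"], evaluation["EvalDecision"])
```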

New cards
68

A company has a forensic logging use case whereby several hundred applications running on Docker on EC2 need to send logs to a central location. The Security Engineer must create a logging solution that is able to perform real-time analytics on the log files, grants the ability to replay events, and persists data.
Which AWS Services, together, can satisfy this use case? (Choose two.)

  • A. Amazon Elasticsearch

  • B. Amazon Kinesis

  • C. Amazon SQS

  • D. Amazon CloudWatch

  • E. Amazon Athena

Answer: A (Amazon Elasticsearch) and B (Amazon Kinesis).


Why These Two Are Correct

  1. Amazon Kinesis

    • Real-Time Streaming: Kinesis Data Streams can collect and process large streams of data in real time.

    • Replay: Kinesis supports retention windows (default 24 hours, extendable to 7 days or longer with extended retention), allowing you to replay events for forensic analysis or reprocessing as needed.

    • Durable Storage: Data is stored across multiple Availability Zones for fault tolerance within the retention window.

  2. Amazon Elasticsearch (Amazon OpenSearch Service)

    • Real-Time Analytics: By streaming log data from Kinesis to Elasticsearch, you can index and query data in near real time.

    • Centralized Logging: Elasticsearch is well-suited for log analytics, text-based searches, and dashboards (often via Kibana/OpenSearch Dashboards).

    • Persistence: Data indexed in Elasticsearch remains there for however long you configure, meeting the requirement of persistent storage for forensic logs.

Together, Kinesis + Elasticsearch provides a powerful, low-latency solution for ingestion, real-time analytics, replay, and ongoing storage.
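
As a rough sketch of the ingestion side (assuming a hypothetical stream name and an OpenSearch/Elasticsearch destination wired up downstream, for example via a consumer or Kinesis Data Firehose), a container log forwarder could push events like this:

```python
import json
import time
import boto3

# Hypothetical stream name for illustration only.
STREAM_NAME = "docker-forensic-logs"

kinesis = boto3.client("kinesis")

def ship_log(container_id: str, message: str) -> None:
    """Push one log event into the stream; the partition key spreads
    records from different containers across shards."""
    record = {"container_id": container_id, "ts": time.time(), "message": message}
    kinesis.put_record(
        StreamName=STREAM_NAME,
        Data=json.dumps(record).encode("utf-8"),
        PartitionKey=container_id,
    )

ship_log("web-01", "GET /login 401 from 198.51.100.7")
```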


Why the Other Options Are Not the Best Fit

  • C. Amazon SQS

    • While SQS can queue messages, it is not typically used for real-time analytics or replaying events over time. Once a message is consumed, it’s gone (unless visibility timeout or re-drive policies are used, but that’s still not ideal for replay).

    • It lacks the robust replay capabilities of Kinesis Data Streams.

  • D. Amazon CloudWatch

    • CloudWatch Logs can capture log data, but it’s not specifically designed for robust real-time analytics at scale (it’s more about storage, basic search, and alerting). It does not naturally provide an event replay mechanism.

    • For deep, long-term analytics and replay, Kinesis + Elasticsearch is more suitable.

  • E. Amazon Athena

    • Athena queries data stored in Amazon S3. It’s great for ad hoc queries and historical analysis, but not for near real-time streaming analytics.

    • It does not inherently provide a mechanism to replay events the way Kinesis Data Streams does.

Hence, Kinesis (for ingestion + replay) and Elasticsearch (for indexing + real-time analytics + long-term storage) best satisfy the requirements.

New cards
69

Which of the following is the most efficient way to automate the encryption of AWS CloudTrail logs using a Customer Master Key (CMK) in AWS KMS?

  • A. Use the KMS direct encrypt function on the log data every time a CloudTrail log is generated.

  • B. Use the default Amazon S3 server-side encryption with S3-managed keys to encrypt and decrypt the CloudTrail logs.

  • C. Configure CloudTrail to use server-side encryption using KMS-managed keys to encrypt and decrypt CloudTrail logs.

  • D. Use encrypted API endpoints so that all AWS API calls generate encrypted CloudTrail log entries using the TLS certificate from the encrypted API call.

Answer: C. Configure CloudTrail to use server-side encryption with KMS-managed keys to encrypt and decrypt CloudTrail logs.


Why This Is the Correct Answer

  • Built-In Automation: AWS CloudTrail natively supports server-side encryption with AWS KMS (using a customer-managed CMK). You simply enable this option in the CloudTrail configuration, and CloudTrail will automatically encrypt logs before storing them in the S3 bucket.

  • Least Operational Overhead: This method is fully managed. AWS handles the encryption and decryption, and you don’t have to manually call Encrypt or Decrypt APIs each time logs are generated.

  • Granular Key Management: You can control access to the CMK (e.g., via key policies), enabling fine-grained control over who can decrypt or manage the logs.
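
A minimal boto3 sketch of enabling this on an existing trail is shown below; the trail name and key ARN are placeholders, and the CMK's key policy must already permit CloudTrail to use the key.

```python
import boto3

# Hypothetical names for illustration only.
TRAIL_NAME = "management-events-trail"
CMK_ARN = "arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555"

cloudtrail = boto3.client("cloudtrail")

# Once set, CloudTrail encrypts every new log file it delivers to S3 with this CMK.
cloudtrail.update_trail(Name=TRAIL_NAME, KmsKeyId=CMK_ARN)
```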


Why the Other Options Are Incorrect

  1. A. Use the KMS direct encrypt function on the log data every time a CloudTrail log is generated

    • This would require manual, per-log calls to Encrypt, adding significant overhead and complexity. It does not leverage the automatic encryption features that CloudTrail provides.

  2. B. Use the default Amazon S3 server-side encryption with S3-managed keys

    • S3-managed keys (SSE-S3) do not use a KMS CMK. This would encrypt the logs, but not with your specific CMK in KMS. Hence, it does not meet the requirement of using a CMK.

  3. D. Use encrypted API endpoints so that all AWS API calls generate encrypted CloudTrail log entries using the TLS certificate

    • While TLS does encrypt data in transit, it does not address at-rest encryption of CloudTrail logs in S3. Additionally, using encrypted API endpoints does not automatically encrypt the log files themselves with a CMK.

New cards
70

An organization is using AWS CloudTrail, Amazon CloudWatch Logs, and Amazon CloudWatch to send alerts when new access keys are created. However, the alerts are no longer appearing in the Security Operations mail box.
Which of the following actions would resolve this issue?

  • A. In CloudTrail, verify that the trail logging bucket has a log prefix configured.

  • B. In Amazon SNS, determine whether the Account spend limit has been reached for this alert.

  • C. In SNS, ensure that the subscription used by these alerts has not been deleted.

  • D. In CloudWatch, verify that the alarm threshold consecutive periods value is equal to, or greater than 1.

Answer: C. In Amazon SNS, ensure that the subscription used by these alerts has not been deleted.


Why This Is the Correct Answer

  • When CloudTrail detects the creation of a new access key and sends that event to CloudWatch (Logs + Alarm), the final step is for the CloudWatch alarm to use SNS to deliver the alert to the Security Operations mailbox.

  • If the SNS subscription to that mailbox was deleted or became unsubscribed, no emails would arrive, even if the CloudWatch alarm is still being triggered correctly.

  • Therefore, verifying (and re-creating, if necessary) that the SNS email subscription is still active will restore the alerts to the mailbox.


Why the Other Options Are Incorrect

  1. A. In CloudTrail, verify that the trail logging bucket has a log prefix configured.

    • A missing prefix alone typically does not stop CloudTrail logs from being delivered. It can be set to “none.” If CloudTrail was still writing logs, changing the prefix would not affect whether the notifications were sent to an email address.

  2. B. In Amazon SNS, determine whether the “Account spend limit” has been reached.

    • SNS spend limits apply mostly to SMS messages, not email. Emails would not suddenly stop because of a “spend limit” on SMS or some other cost-based cap.

  3. D. In CloudWatch, verify that the alarm threshold ‘consecutive periods’ value is equal to, or greater than 1.

    • While a misconfiguration in CloudWatch alarms could cause an alarm never to trigger, it’s less likely to be the culprit if everything was working correctly before and no changes were made there.

    • Even with a multi-period threshold, you would typically see at least some attempts or a different alarm state. The more common and direct cause for alerts no longer appearing is an issue with the SNS subscription.

New cards
71

A Security Engineer must add additional protection to a legacy web application by adding the following HTTP security headers:
-Content-Security-Policy
-X-Frame-Options
-X-XSS-Protection
The Engineer does not have access to the source code of the legacy web application.
Which of the following approaches would meet this requirement?

  • A. Configure an Amazon Route 53 routing policy to send all web traffic that does not include the required headers to a black hole.

  • B. Implement an AWS Lambda@Edge origin response function that inserts the required headers.

  • C. Migrate the legacy application to an Amazon S3 static website and front it with an Amazon CloudFront distribution.

  • D. Construct an AWS WAF rule to replace existing HTTP headers with the required security headers by using regular expressions.

Answer: B. Implement an AWS Lambda@Edge origin response function that inserts the required headers.


Why This Is Correct

  • Lambda@Edge can modify HTTP response headers. By attaching a Lambda@Edge function at the origin response (or viewer response) phase of a CloudFront distribution, you can insert or overwrite security headers in the outbound response without modifying the legacy application’s source code.

  • No application changes required. The function runs within CloudFront, so you don’t need to alter the application itself.

  • Automated header injection. Once set up, all responses passing through CloudFront will include the desired security headers.
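
A minimal Python Lambda@Edge sketch for the origin-response trigger might look like the following; the header values are placeholders to adapt for the real application.

```python
# Lambda@Edge origin-response handler: CloudFront passes the origin's response
# in the event; we add the security headers before it is cached and returned.
def lambda_handler(event, context):
    response = event["Records"][0]["cf"]["response"]
    headers = response["headers"]

    # Placeholder policy values; tighten these for the real application.
    headers["content-security-policy"] = [
        {"key": "Content-Security-Policy", "value": "default-src 'self'"}
    ]
    headers["x-frame-options"] = [
        {"key": "X-Frame-Options", "value": "DENY"}
    ]
    headers["x-xss-protection"] = [
        {"key": "X-XSS-Protection", "value": "1; mode=block"}
    ]

    return response
```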


Why the Other Options Are Incorrect

  1. A. Configure an Amazon Route 53 routing policy to send all web traffic that does not include the required headers to a black hole.

    • Route 53 routing policies control DNS resolution and cannot modify HTTP response headers.

    • Sending traffic to a black hole merely drops it; it does not meet the requirement to add security headers.

  2. C. Migrate the legacy application to an Amazon S3 static website and front it with an Amazon CloudFront distribution.

    • Converting a legacy (likely dynamic) application into a purely static S3 website is typically not feasible without rewriting the application.

    • Even then, you still need a method to insert those additional security headers.

  3. D. Construct an AWS WAF rule to replace existing HTTP headers with the required security headers by using regular expressions.

    • AWS WAF can inspect traffic for malicious patterns, but it does not allow rewriting response headers. It primarily handles request filtering (and optionally response inspection with certain features), not dynamic modification of responses.

New cards
72

During a security event, it is discovered that some Amazon EC2 instances have not been sending Amazon CloudWatch logs.
Which steps can the Security Engineer take to troubleshoot this issue? (Choose two.)

  • A. Connect to the EC2 instances that are not sending the appropriate logs and verify that the CloudWatch Logs agent is running.

  • B. Log in to the AWS account and select CloudWatch Logs. Check for any monitored EC2 instances that are in the Alerting state and restart them using the EC2 console.

  • C. Verify that the EC2 instances have a route to the public AWS API endpoints.

  • D. Connect to the EC2 instances that are not sending logs. Use the command prompt to verify that the right permissions have been set for the Amazon SNS topic.

  • E. Verify that the network access control lists and security groups of the EC2 instances have the access to send logs over SNMP.

Answer: A and C.

  1. Connect to the EC2 instances and verify that the CloudWatch Logs agent is running (A)

    • If the agent is stopped, misconfigured, or not installed, the instances will not send their logs to CloudWatch.

  2. Verify that the EC2 instances have a route to the public AWS API endpoints (C)

    • The CloudWatch Logs agent must be able to reach the CloudWatch Logs service endpoints (typically via HTTPS). If there is no valid network path—due to routing, DNS, or firewall issues—the logs will not be delivered.


Why the Other Choices Are Incorrect

  • B. Check “Alerting” state in CloudWatch Logs and restart instances

    • CloudWatch Logs does not track EC2 instance states as “Alerting,” and restarting the instances is not a standard troubleshooting step for missing logs.

  • D. Verify permissions for an SNS topic

    • Sending logs to CloudWatch does not involve SNS permissions. The CloudWatch Logs agent writes logs directly to the CloudWatch Logs service, so you would need to check IAM permissions for the CloudWatch Logs agent, not an SNS topic.

  • E. Ensure that the instances can send logs over SNMP

    • CloudWatch Logs uses HTTPS (port 443) to the CloudWatch Logs service, not SNMP. Verifying SNMP settings is unrelated to CloudWatch log delivery.

New cards
73

A Security Engineer discovers that developers have been adding rules to security groups that allow SSH and RDP traffic from 0.0.0.0/0 instead of the organization firewall IP.
What is the most efficient way to remediate the risk of this activity?

  • A. Delete the internet gateway associated with the VPC.

  • B. Use network access control lists to block source IP addresses matching 0.0.0.0/0.

  • C. Use a host-based firewall to prevent access from all but the organization's firewall IP.

  • D. Use AWS Config rules to detect 0.0.0.0/0 and invoke an AWS Lambda function to update the security group with the organization's firewall IP.

Answer: D. Use AWS Config rules to detect 0.0.0.0/0 and invoke an AWS Lambda function to update the security group with the organization's firewall IP.


Why This Is Correct

  • Automated Detection: AWS Config can monitor security group rules in near-real time.

  • Automated Remediation: An AWS Lambda function can be triggered by a Config rule violation (i.e., when an ingress rule allows traffic from 0.0.0.0/0 on SSH/RDP ports) and immediately remediate the configuration by updating the offending rule to the organization’s firewall IP.

  • Scalable and Minimal Disruption: This approach is hands-off for developers (who might inadvertently create insecure rules) and ensures any incorrect security group rules are automatically replaced with the correct source IP restrictions.
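
As a rough sketch of the remediation function (assuming the offending security group ID is passed in by the Config remediation wiring and that the firewall CIDR shown is a placeholder), the Lambda handler could look like this:

```python
import boto3

# Assumption: the organization's firewall egress IP.
FIREWALL_CIDR = "198.51.100.23/32"
REMEDIATED_PORTS = (22, 3389)

ec2 = boto3.client("ec2")

def lambda_handler(event, context):
    # Assumption: the non-compliant security group ID is passed in by the
    # Config rule / remediation wiring, e.g. {"group_id": "sg-0123..."}.
    group_id = event["group_id"]
    group = ec2.describe_security_groups(GroupIds=[group_id])["SecurityGroups"][0]

    for permission in group.get("IpPermissions", []):
        if permission.get("FromPort") not in REMEDIATED_PORTS:
            continue
        open_ranges = [r for r in permission.get("IpRanges", []) if r.get("CidrIp") == "0.0.0.0/0"]
        if not open_ranges:
            continue

        # Remove the world-open SSH/RDP rule...
        ec2.revoke_security_group_ingress(
            GroupId=group_id,
            IpPermissions=[{
                "IpProtocol": permission["IpProtocol"],
                "FromPort": permission["FromPort"],
                "ToPort": permission["ToPort"],
                "IpRanges": open_ranges,
            }],
        )
        # ...and replace it with one scoped to the corporate firewall.
        ec2.authorize_security_group_ingress(
            GroupId=group_id,
            IpPermissions=[{
                "IpProtocol": permission["IpProtocol"],
                "FromPort": permission["FromPort"],
                "ToPort": permission["ToPort"],
                "IpRanges": [{"CidrIp": FIREWALL_CIDR}],
            }],
        )
```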


Why the Other Options Are Less Suitable

  1. A. Delete the internet gateway associated with the VPC

    • This would block all inbound and outbound traffic via the Internet. It’s far too drastic and would break legitimate use cases that require internet connectivity.

  2. B. Use network access control lists to block source IP addresses matching 0.0.0.0/0

    • An NACL rule that denies 0.0.0.0/0 would effectively deny all inbound traffic.

    • Additionally, NACLs are stateless. You must allow both inbound and outbound flows carefully. This is not a fine-grained solution and would block legitimate external traffic.

  3. C. Use a host-based firewall to prevent access from all but the organization's firewall IP

    • Although possible, it requires configuring and maintaining firewalls on every instance.

    • This is more complex and error-prone compared to using AWS-native features like Config and Lambda to enforce proper rules at the security group level.

New cards
74

In response to the past DDoS attack experiences, a Security Engineer has set up an Amazon CloudFront distribution for an Amazon S3 bucket. There is concern that some users may bypass the CloudFront distribution and access the S3 bucket directly.
What must be done to prevent users from accessing the S3 objects directly by using URLs?

  • A. Change the S3 bucket/object permission so that only the bucket owner has access.

  • B. Set up a CloudFront origin access identity (OAI), and change the S3 bucket/object permission so that only the OAI has access.

  • C. Create IAM roles for CloudFront, and change the S3 bucket/object permission so that only the IAM role has access.

  • D. Redirect S3 bucket access to the corresponding CloudFront distribution.

Answer: B. Set up a CloudFront origin access identity (OAI), and change the S3 bucket/object permission so that only the OAI has access.


Why This Is Correct

  • Origin Access Identity (OAI):

    • An OAI is a special CloudFront principal used to restrict direct access to the S3 bucket.

    • You update the S3 bucket policy to only allow the OAI to read objects (instead of allowing public or anonymous access).

    • As a result, only CloudFront can read from the S3 bucket, preventing end users from bypassing CloudFront and accessing the S3 objects directly.

  • Secures the Bucket:

    • By blocking all public access except for the OAI, end users have no direct path to S3.

    • They must use the CloudFront distribution URL to retrieve objects.
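
A minimal boto3 sketch of the bucket policy change is shown below; the bucket name and OAI ID are placeholders, and the CloudFront distribution's S3 origin must reference the same OAI.

```python
import json
import boto3

# Hypothetical names for illustration only.
BUCKET = "example-protected-content"
OAI_ID = "E2EXAMPLEOAI"

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontOAIReadOnly",
        "Effect": "Allow",
        "Principal": {
            "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {OAI_ID}"
        },
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}

# With this policy (and public access blocked), only CloudFront can read the
# objects; direct S3 URLs return Access Denied.
boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```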


Why the Other Options Are Incorrect

  1. A. Change the S3 bucket/object permission so that only the bucket owner has access

    • This will block all external requests, including those from CloudFront (unless using the bucket owner’s credentials), effectively breaking the CloudFront distribution.

  2. C. Create IAM roles for CloudFront, and change the S3 bucket/object permission so that only the IAM role has access

    • CloudFront does not assume an IAM role to access an S3 bucket. Instead, it uses the OAI mechanism for restricting bucket access.

  3. D. Redirect S3 bucket access to the corresponding CloudFront distribution

    • A redirect alone does not enforce access control. An attacker can ignore the redirect and still attempt to access the S3 bucket directly.

    • Only an explicit block of public bucket access (via OAI or S3 bucket policy) prevents unauthorized direct access.

New cards
75

A company plans to move most of its IT infrastructure to AWS. The company wants to leverage its existing on-premises Active Directory as an identity provider for AWS.
Which steps should be taken to authenticate to AWS services using the company's on-premises Active Directory? (Choose three.)

  • A. Create IAM roles with permissions corresponding to each Active Directory group.

  • B. Create IAM groups with permissions corresponding to each Active Directory group.

  • C. Create a SAML provider with IAM.

  • D. Create a SAML provider with Amazon Cloud Directory.

  • E. Configure AWS as a trusted relying party for the Active Directory

  • F. Configure IAM as a trusted relying party for Amazon Cloud Directory.

Answer: A, C, and E.

  1. Create IAM roles with permissions corresponding to each Active Directory group.

    • In AWS, you typically map on-prem AD groups to IAM roles. Each role grants a specific set of permissions, and when your AD users authenticate, they assume the corresponding role.

  2. Create a SAML provider with IAM.

    • You must register the company’s on-premises AD FS (or other SAML 2.0–compliant IdP) in AWS IAM as a SAML provider so that AWS knows how to trust and process the SAML assertions.

  3. Configure AWS as a trusted relying party for the Active Directory.

    • On the on-prem AD side (usually via AD FS), you configure AWS (through its SAML endpoints) as a “relying party.” This means AD FS will issue SAML tokens to AWS when users authenticate, allowing them to assume IAM roles.


Why the Other Options Are Incorrect

  • B. Create IAM groups with permissions corresponding to each Active Directory group

    • SAML federation relies on IAM roles, not groups. You cannot directly map an AD group to an IAM group for the purpose of federation.

  • D. Create a SAML provider with Amazon Cloud Directory

    • SAML federation to AWS is configured in IAM, not in Amazon Cloud Directory.

    • Cloud Directory is different from IAM’s SAML provider configuration.

  • F. Configure IAM as a trusted relying party for Amazon Cloud Directory

    • This is reversed. You need to configure AWS (IAM) as a relying party in AD (or AD FS), not the other way around.

    • You do not rely on Cloud Directory for the trust relationship with on-premises AD.

New cards
76

A Security Analyst attempted to troubleshoot the monitoring of suspicious security group changes. The Analyst was told that there is an Amazon CloudWatch alarm in place for these AWS CloudTrail log events. The Analyst tested the monitoring setup by making a configuration change to the security group but did not receive any alerts.
Which of the following troubleshooting steps should the Analyst perform?

  • A. Ensure that CloudTrail and S3 bucket access logging is enabled for the Analyst's AWS account.

  • B. Verify that a metric filter was created and then mapped to an alarm. Check the alarm notification action.

  • C. Check the CloudWatch dashboards to ensure that there is a metric configured with an appropriate dimension for security group changes.

  • D. Verify that the Analyst's account is mapped to an IAM policy that includes permissions for cloudwatch: GetMetricStatistics and Cloudwatch: ListMetrics.

Answer: B. Verify that a metric filter was created and then mapped to an alarm. Check the alarm notification action.


Why This Is the Correct Troubleshooting Step

  1. CloudTrail → CloudWatch Logs → Metric Filter → Alarm

    • For CloudWatch to raise an alarm about suspicious security group changes, CloudTrail events must first be sent to CloudWatch Logs.

    • A metric filter must be configured in CloudWatch Logs to look for specific API calls (e.g., AuthorizeSecurityGroupIngress, AuthorizeSecurityGroupEgress, RevokeSecurityGroupIngress, RevokeSecurityGroupEgress).

    • That metric filter must then be attached to a CloudWatch alarm with a proper threshold and notification action (e.g., SNS topic subscription).

  2. Most Common Failure Point

    • Often, the reason no alerts appear is that the metric filter either:

      • Was not created at all,

      • Is not matching the relevant API calls,

      • Was not linked correctly to the alarm, or

      • The alarm has no valid notification action (e.g., an SNS topic with no subscriptions).

  3. Verifying the Metric Filter and Alarm

    • Checking that the metric filter is actually capturing security group changes, and that the alarm is firing with a working SNS or email notification, is the logical first step in troubleshooting missing alerts.
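
As an illustration of what a working setup looks like, the boto3 sketch below creates the metric filter and the alarm; the log group name and SNS topic ARN are placeholders.

```python
import boto3

# Hypothetical names for illustration only.
LOG_GROUP = "CloudTrail/DefaultLogGroup"
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:security-ops-alerts"

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Metric filter that matches security group change API calls in CloudTrail logs.
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="SecurityGroupChanges",
    filterPattern=(
        '{ ($.eventName = AuthorizeSecurityGroupIngress) || '
        '($.eventName = AuthorizeSecurityGroupEgress) || '
        '($.eventName = RevokeSecurityGroupIngress) || '
        '($.eventName = RevokeSecurityGroupEgress) }'
    ),
    metricTransformations=[{
        "metricName": "SecurityGroupEventCount",
        "metricNamespace": "CloudTrailMetrics",
        "metricValue": "1",
    }],
)

# Alarm that notifies the Security Operations SNS topic on any matching event.
cloudwatch.put_metric_alarm(
    AlarmName="SecurityGroupChangesAlarm",
    MetricName="SecurityGroupEventCount",
    Namespace="CloudTrailMetrics",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=[SNS_TOPIC_ARN],
    TreatMissingData="notBreaching",
)
```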


Why the Other Choices Are Less Relevant

  • A. Ensure that CloudTrail and S3 bucket access logging is enabled

    • While CloudTrail must be enabled and delivering logs, S3 bucket access logging is unrelated to detecting changes in a security group. You specifically need the CloudTrail → CloudWatch Logs → metric filter → alarm chain, not S3 bucket access logs.

  • C. Check the CloudWatch dashboards to ensure that there is a metric with an appropriate dimension

    • Although CloudWatch dashboards can display metrics, merely viewing a dashboard is not necessarily how you confirm the alarm is configured properly. The critical piece is the metric filter for the relevant API calls and a functioning alarm.

  • D. Verify that the Analyst's account has permissions for cloudwatch:GetMetricStatistics and cloudwatch:ListMetrics

    • Even without these permissions, the alarm itself can still trigger and send notifications. Lack of these permissions might prevent the Analyst from viewing metrics in the console, but it would not prevent CloudWatch from raising an alarm on the event.

New cards
77

Example.com hosts its internal document repository on Amazon EC2 instances. The application runs on EC2 instances and previously stored the documents on encrypted Amazon EBS volumes. To optimize the application for scale, example.com has moved the files to Amazon S3. The security team has mandated that all the files are securely deleted from the EBS volume, and it must certify that the data is unreadable before releasing the underlying disks.
Which of the following methods will ensure that the data is unreadable by anyone else?

  • A. Change the volume encryption on the EBS volume to use a different encryption mechanism. Then, release the EBS volumes back to AWS.

  • B. Release the volumes back to AWS. AWS immediately wipes the disk after it is deprovisioned.

  • C. Delete the encryption key used to encrypt the EBS volume. Then, release the EBS volumes back to AWS.

  • D. Delete the data by using the operating system delete commands. Run Quick Format on the drive and then release the EBS volumes back to AWS.

Answer: C. Delete the encryption key used to encrypt the EBS volume. Then release the EBS volumes back to AWS.


Why This Is the Correct Answer

  • Cryptographic Erasure

    • When an EBS volume is encrypted with AWS KMS, the data at rest is tied to a specific CMK (Customer Master Key).

    • Deleting or disabling that CMK makes any data encrypted with it unreadable because decryption is no longer possible.

    • This approach is often referred to as “cryptographic erasure” and is the recommended method for ensuring that data is permanently inaccessible.
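
A minimal boto3 sketch of the cryptographic-erasure step is shown below; the key ARN is a placeholder, and the key should only be deleted once nothing else depends on it.

```python
import boto3

# Hypothetical CMK that encrypted the EBS volumes; nothing else should depend on it.
CMK_ID = "arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555"

kms = boto3.client("kms")

# Optionally disable the key immediately so no further decrypt calls succeed.
kms.disable_key(KeyId=CMK_ID)

# Schedule deletion; KMS enforces a 7-30 day waiting period before the key
# material is destroyed, after which the EBS data is cryptographically erased.
response = kms.schedule_key_deletion(KeyId=CMK_ID, PendingWindowInDays=7)
print("Key deletion scheduled for:", response["DeletionDate"])
```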


Why the Other Answers Are Incorrect

  1. A. Change the volume encryption to another mechanism

    • Simply changing the encryption key does not invalidate data already encrypted with the original key. Someone with the original key could still decrypt it (assuming the original key still exists).

  2. B. Rely on AWS wiping the disk after deprovisioning

    • AWS does wipe the physical storage of EBS volumes when they are re-provisioned to another customer, but that is an internal process.

    • You do not receive any certification from AWS that the data was securely wiped. If your internal compliance requires proof of data destruction, relying solely on the AWS deprovisioning process is insufficient.

  3. D. Delete the data via the OS and run a Quick Format

    • A simple OS-level delete or quick format does not guarantee that data can’t be recovered.

    • “Quick format” leaves the underlying bits largely intact on the volume, so it does not meet a strict standard for secure deletion.

Thus, deleting the CMK (Answer C) is the most efficient and reliable way to ensure the encrypted data is rendered completely unreadable.

New cards
78
[Image not included: an S3 bucket policy with "Principal": "*", "Action": "s3:*", and "Resource": ["arn:aws:s3:::Bucket"]. The question asks which change will allow users to access objects in the bucket.]

The correct answer is:

A. Change the "Resource" from "arn:aws:s3:::Bucket" to "arn:aws:s3:::Bucket/*".

Explanation:

  • The current bucket policy specifies "Resource": ["arn:aws:s3:::Bucket"], which only grants permissions on the bucket itself, but not on objects within the bucket.

  • To allow access to objects, the policy needs to include "arn:aws:s3:::Bucket/*", which specifies all objects within the bucket.

Why Not the Other Options?

  • B. Change the "Principal" from "*" to "AWS:arn:aws:iam::account-number:user/username":

    • The "Principal": "*" is valid and allows all IAM users in the account. This is not the issue.

  • C. Change the "Version" from "2012-10-17" to the last revised date of the policy:

    • "2012-10-17" is the latest supported version for AWS policies and does not need to be changed.

  • D. Change the "Action" from "s3:*" to ["s3:GetObject", "s3:ListBucket"]:

    • "s3:*" already grants all S3 actions, including "s3:GetObject" and "s3:ListBucket", so this is not the issue.

Solution:

Modify the Resource in the policy:

"Resource": ["arn:aws:s3:::Bucket", "arn:aws:s3:::Bucket/*"]

This ensures that both the bucket and its objects are accessible.
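
As a small illustration, the boto3 sketch below patches an existing policy to add the object-level ARN; the bucket name "Bucket" is kept from the question and would be the real bucket name in practice.

```python
import json
import boto3

# The bucket name "Bucket" comes from the policy shown in the question;
# substitute the real bucket name in practice.
BUCKET = "Bucket"

s3 = boto3.client("s3")

# Fetch the existing policy, add the object-level ARN to every statement's
# Resource list, and put the corrected policy back on the bucket.
policy = json.loads(s3.get_bucket_policy(Bucket=BUCKET)["Policy"])
for statement in policy["Statement"]:
    resources = statement["Resource"]
    if isinstance(resources, str):
        resources = [resources]
    object_arn = f"arn:aws:s3:::{BUCKET}/*"
    if object_arn not in resources:
        resources.append(object_arn)
    statement["Resource"] = resources

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```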

Conclusion:

Option A is the correct answer.

New cards
79

The Security Engineer has discovered that a new application that deals with highly sensitive data is storing Amazon S3 objects with the following key pattern, which itself contains highly sensitive data.
Pattern:
"randomID_datestamp_PII.csv"
Example:
"1234567_12302017_000-00-0000 csv"
The bucket where these objects are being stored is using server-side encryption (SSE).
Which solution is the most secure and cost-effective option to protect the sensitive data?

  • A. Remove the sensitive data from the object name, and store the sensitive data using S3 user-defined metadata.

  • B. Add an S3 bucket policy that denies the action s3:GetObject

  • C. Use a random and unique S3 object key, and create an S3 metadata index in Amazon DynamoDB using client-side encrypted attributes.

  • D. Store all sensitive objects in Binary Large Objects (BLOBS) in an encrypted Amazon RDS instance.

Answer: C. Use a random and unique S3 object key, and maintain the “sensitive” part of the identifier in an encrypted index (e.g., in DynamoDB).


Why This Works Best

  • Remove PII from the Key: Storing sensitive data directly in the S3 object name can inadvertently expose it in logs, URLs, or bucket listings. Instead, generate a random key (for example a UUID) that does not reveal PII.

  • Encrypt Sensitive Metadata Separately: If you must store or index sensitive fields (like SSNs or birthdates), keep them in a separate data store (e.g., DynamoDB) under client‐side encryption or KMS‐managed encryption. That way, even if someone gains read access to the DynamoDB table, those attributes are encrypted at rest.

  • Preserve S3 as Inexpensive Storage: You still use Amazon S3 for the file contents (which can also be SSE‐encrypted), but any personally identifiable information about which file belongs to which user is handled outside the bucket name or S3 metadata, reducing the chance of accidental exposure.
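
A rough boto3 sketch of this pattern (with hypothetical bucket, table, and key names) is shown below: the object gets a random key, the file is stored with SSE, and only a KMS-encrypted copy of the sensitive identifier goes into the DynamoDB index.

```python
import uuid
import boto3

# Hypothetical resource names for illustration only.
BUCKET = "example-sensitive-docs"
INDEX_TABLE = "document-index"
CMK_ID = "alias/document-index-key"

s3 = boto3.client("s3")
kms = boto3.client("kms")
dynamodb = boto3.resource("dynamodb")

def store_document(local_path: str, pii_identifier: str, datestamp: str) -> str:
    # 1) Random, non-identifying S3 key (no PII in the object name).
    object_key = str(uuid.uuid4())
    with open(local_path, "rb") as f:
        s3.put_object(Bucket=BUCKET, Key=object_key, Body=f, ServerSideEncryption="aws:kms")

    # 2) Encrypt the sensitive identifier client-side before indexing it.
    ciphertext = kms.encrypt(KeyId=CMK_ID, Plaintext=pii_identifier.encode("utf-8"))["CiphertextBlob"]

    # 3) Store the lookup record in DynamoDB; only the ciphertext is persisted.
    dynamodb.Table(INDEX_TABLE).put_item(Item={
        "object_key": object_key,
        "datestamp": datestamp,
        "encrypted_identifier": ciphertext,
    })
    return object_key
```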


Why the Other Choices Fall Short

  • A. Remove PII from the object name, store it in S3 user‐defined metadata

    • Although this removes PII from the key itself, the metadata is still readily visible to anyone with HEAD or GetObject permissions on the object. It also stays in logs (e.g., CloudTrail) in plain text.

  • B. Add an S3 bucket policy that denies s3:GetObject

    • This would block all reads of the object, rendering the bucket useless for normal access.

  • D. Store the files as BLOBs in an encrypted Amazon RDS instance

    • Pushing large files into a relational database is typically more expensive and less performant than using S3.

    • It also makes the design more complex to manage for large‐scale or high‐throughput file operations.

New cards
80

AWS CloudTrail is being used to monitor API calls in an organization. An audit revealed that CloudTrail is failing to deliver events to Amazon S3 as expected.
What initial actions should be taken to allow delivery of CloudTrail events to S3? (Choose two.)

  • A. Verify that the S3 bucket policy allows CloudTrail to write objects.

  • B. Verify that the IAM role used by CloudTrail has access to write to Amazon CloudWatch Logs.

  • C. Remove any lifecycle policies on the S3 bucket that are archiving objects to Amazon Glacier.

  • D. Verify that the S3 bucket defined in CloudTrail exists.

  • E. Verify that the log file prefix defined in CloudTrail exists in the S3 bucket.

Answer: A and D.

  1. Verify that the S3 bucket policy allows CloudTrail to write objects.

    • CloudTrail must have permission to PutObject in your bucket. If the S3 bucket policy lacks the correct Principal or Action, CloudTrail will fail to deliver logs.

  2. Verify that the S3 bucket defined in CloudTrail exists.

    • If the bucket name configured in CloudTrail does not exist, or was deleted/renamed, CloudTrail cannot write logs.
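
For reference, a minimal bucket policy of this shape can be applied with boto3 as sketched below; the bucket name and account ID are placeholders, and the statements mirror the standard CloudTrail delivery policy rather than anything specific to one environment.

import json
import boto3

s3 = boto3.client("s3")

BUCKET = "example-central-cloudtrail-logs"   # hypothetical bucket name
ACCOUNT_ID = "111122223333"                  # hypothetical account ID

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # CloudTrail first checks the bucket ACL...
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
        {   # ...then writes log objects under AWSLogs/<account-id>/.
            "Sid": "AWSCloudTrailWrite",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/AWSLogs/{ACCOUNT_ID}/*",
            "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}},
        },
    ],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))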


Why the Other Options Are Not Required for Initial Actions

  • B. Verify that the IAM role used by CloudTrail has access to write to Amazon CloudWatch Logs

    • This is relevant only if you’re sending CloudTrail events to CloudWatch Logs. It doesn’t affect CloudTrail’s ability to deliver logs to an S3 bucket.

  • C. Remove any lifecycle policies on the S3 bucket that are archiving objects to Amazon Glacier

    • A lifecycle policy that moves objects to Glacier would not prevent CloudTrail from initially writing logs into the bucket. It only affects object storage after they are already there.

  • E. Verify that the log file prefix defined in CloudTrail exists in the S3 bucket

    • CloudTrail will create its prefixes automatically if they do not exist, so you don’t need to pre-create them for successful delivery.

New cards
81

Due to new compliance requirements, a Security Engineer must enable encryption with customer-provided keys on corporate data that is stored in DynamoDB.
The company wants to retain full control of the encryption keys.
Which DynamoDB feature should the Engineer use to achieve compliance?

  • A. Use AWS Certificate Manager to request a certificate. Use that certificate to encrypt data prior to uploading it to DynamoDB.

  • B. Enable S3 server-side encryption with the customer-provided keys. Upload the data to Amazon S3, and then use S3Copy to move all data to DynamoDB

  • C. Create a KMS master key. Generate per-record data keys and use them to encrypt data prior to uploading it to DynamoDB. Dispose of the cleartext and encrypted data keys after encryption without storing.

  • D. Use the DynamoDB Java encryption client to encrypt data prior to uploading it to DynamoDB.

Answer: D. Use the DynamoDB Java encryption client to encrypt data prior to uploading it to DynamoDB.


Why This Is the Correct Approach

  1. Client-Side Encryption

    • The DynamoDB Encryption Client allows you to encrypt data on the client before it is sent to DynamoDB. In doing so, you can use keys that you control, ensuring that the plaintext data—and your encryption keys—never leave your trusted boundary.

  2. "Customer-Provided Keys" Means You Control the Key Material

    • If your compliance requirements dictate that AWS must not have direct access to your encryption keys, client‐side encryption is the way to go.

    • With the DynamoDB Encryption Client, you can bring your own keys (stored in your on-premises HSMs or another key management solution) or integrate with AWS KMS in a way that still ensures you have exclusive administrative control of the keys.

  3. Full Control Over Encryption Lifecycle

    • By handling the encryption client‐side, you decide when and how to rotate keys, restrict key usage, or revoke them entirely.

    • DynamoDB simply stores encrypted blobs, thus it plays no role in decrypting your data.
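
The real implementation would use the DynamoDB Encryption Client library itself; the sketch below only illustrates the underlying idea with plain boto3 and the cryptography package (table name, attribute names, and the key source are hypothetical), encrypting an attribute with locally held key material before the item ever reaches DynamoDB.

import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

dynamodb = boto3.client("dynamodb")
TABLE = "example-corporate-data"   # hypothetical table name

# Key material the company controls; in practice it would come from an
# on-premises HSM or key manager, not os.urandom.
data_key = os.urandom(32)

def put_encrypted_item(record_id, sensitive_value):
    nonce = os.urandom(12)
    ciphertext = AESGCM(data_key).encrypt(nonce, sensitive_value.encode(), None)
    dynamodb.put_item(
        TableName=TABLE,
        Item={
            "record_id": {"S": record_id},
            "nonce": {"B": nonce},
            "payload": {"B": ciphertext},   # DynamoDB only ever sees ciphertext
        },
    )

def get_decrypted_item(record_id):
    item = dynamodb.get_item(TableName=TABLE, Key={"record_id": {"S": record_id}})["Item"]
    return AESGCM(data_key).decrypt(item["nonce"]["B"], item["payload"]["B"], None).decode()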


Why the Other Choices Are Not Suitable

  • A. Use AWS Certificate Manager to request a certificate

    • ACM provides SSL/TLS certificates for in‐transit encryption. It does not provide a mechanism to encrypt the data at rest in DynamoDB, nor does it handle storing “customer‐provided” encryption keys for data-level encryption.

  • B. Use S3 server‐side encryption with customer-provided keys and then move data to DynamoDB

    • Even if you used SSE-C (server-side encryption with customer-provided keys) on S3, once you copy or move data into DynamoDB, you lose that SSE-C context. DynamoDB would store the data unencrypted (unless you do additional encryption steps). This also introduces extra complexity and does not ensure that DynamoDB data is encrypted with keys you fully control.

  • C. Create a KMS master key and generate per-record data keys yourself

    • While this sounds like a do-it-yourself encryption approach, you’d still be using AWS KMS to manage your master key (unless you have an external key store and are using AWS KMS for wrapping keys—but that’s not natively “customer-provided” keys in the sense that the company fully owns the key material).

    • If your compliance requirement is that AWS cannot access the key material, storing keys in KMS typically does not satisfy that strict requirement.

    • The DynamoDB Encryption Client can integrate with KMS as well, but it allows you a more controlled approach and can be pointed at external key sources as well.

Hence, to satisfy “customer-provided keys” and “full control,” Option D is your most secure and compliant path forward.

New cards
82

A Security Engineer must design a system that can detect whether a file on an Amazon EC2 host has been modified. The system must then alert the Security Engineer of the modification.
What is the MOST efficient way to meet these requirements?

  • A. Install antivirus software and ensure that signatures are up-to-date. Configure Amazon CloudWatch alarms to send alerts for security events.

  • B. Install host-based IDS software to check for file integrity. Export the logs to Amazon CloudWatch Logs for monitoring and alerting.

  • C. Export system log files to Amazon S3. Parse the log files using an AWS Lambda function that will send alerts of any unauthorized system login attempts through Amazon SNS.

  • D. Use Amazon CloudWatch Logs to detect file system changes. If a change is detected, automatically terminate and recreate the instance from the most recent AMI. Use Amazon SNS to send notification of the event.

Answer: B. Install a host-based IDS solution that performs file integrity monitoring and export its logs to CloudWatch for alerting.


Why This Works

  1. File Integrity Monitoring (FIM): A host-based IDS (e.g., Tripwire, OSSEC) continuously scans the file system for changes by comparing current file states to known “good” baselines or hashes. This is precisely what’s needed to detect modifications.

  2. Integration with CloudWatch Logs and Alerts: After detecting a file change, the IDS writes to local logs, which you forward to CloudWatch Logs. From there, you can create metric filters and alarms that notify you (for instance, via an SNS topic) whenever the IDS reports a file modification.

  3. Efficient and Purpose-Built: This approach leverages standard security tools designed for integrity monitoring, avoids manual overhead, and uses AWS-native logging and alerting services.
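
The alerting half can be wired up roughly as follows (Python with boto3; the log group, filter pattern, and SNS topic are hypothetical and depend entirely on how the chosen IDS writes its alerts):

import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

LOG_GROUP = "/ids/ossec/alerts"   # hypothetical log group fed by the IDS agent
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:security-alerts"   # hypothetical topic

# Count IDS log lines that report a file integrity change.
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="FileIntegrityChange",
    filterPattern='"integrity checksum changed"',   # depends on the IDS log format
    metricTransformations=[{
        "metricName": "FileIntegrityChanges",
        "metricNamespace": "Security/FIM",
        "metricValue": "1",
    }],
)

# Notify the Security Engineer as soon as one such event appears.
cloudwatch.put_metric_alarm(
    AlarmName="file-integrity-change-detected",
    Namespace="Security/FIM",
    MetricName="FileIntegrityChanges",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=[TOPIC_ARN],
)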


Why the Other Options Are Less Suitable

  • A. Antivirus Software + CloudWatch Alarms

    • Traditional antivirus primarily focuses on identifying malware signatures, not necessarily arbitrary file changes. It may not detect benign or non-malicious file changes that are still unauthorized.

  • C. Export System Logs to S3 + Lambda

    • Parsing logs for login attempts does not address file modifications. This option solves a different security problem (analyzing login behavior), not file integrity checks.

  • D. Use CloudWatch Logs to detect file system changes and automatically terminate the instance

    • CloudWatch Logs doesn’t have a built-in feature that directly monitors file system changes on an instance.

    • Automatically terminating and recreating the instance each time a file changes is extreme and might be disruptive if the change was authorized.

    • This approach also does not specifically address file integrity monitoring.

New cards
83
term image

The correct answers are:

C. Create a VPN connection from the data center to VPC A. Use an on-premises scanning engine to scan the instances in all three VPCs. Complete the penetration test request form for all three VPCs.
E. Create a VPN connection from the data center to each of the three VPCs. Use an on-premises scanning engine to scan the instances in each VPC. Complete the penetration test request form for all three VPCs.

Explanation:

  • VPC Peering allows connectivity between the VPCs, but AWS does not support transitive routing natively. This means that a scanning engine in one VPC will not automatically have access to the other VPCs unless a proper routing setup (such as a VPN) is configured.

  • AWS requires that penetration testing be approved through a request form for ethical hacking activities. This is necessary to avoid violating AWS security policies.

  • A VPN connection from the data center to VPC A (option C) allows an on-premises scanning engine to reach all three VPCs, assuming routing is correctly configured.

  • A VPN connection to each VPC (option E) ensures direct connectivity from on-premises scanning tools to all three VPCs without relying on VPC Peering.

Why Not the Other Options?

  • A and B are incorrect because AWS requires customers to complete a penetration test request form before performing penetration testing. These options explicitly state that the form is not completed.

  • D is incorrect because, while a VPN connection to all three VPCs enables scanning, penetration testing without AWS approval is against policy.

Conclusion:

To perform penetration testing while complying with AWS security policies, a VPN connection from the data center (C or E) must be used, and the penetration test request form must be completed.

New cards
84

The Security Engineer is managing a traditional three-tier web application that is running on Amazon EC2 instances. The application has become the target of increasing numbers of malicious attacks from the Internet.
What steps should the Security Engineer take to check for known vulnerabilities and limit the attack surface? (Choose two.)

  • A. Use AWS Certificate Manager to encrypt all traffic between the client and application servers.

  • B. Review the application security groups to ensure that only the necessary ports are open.

  • C. Use Elastic Load Balancing to offload Secure Sockets Layer encryption.

  • D. Use Amazon Inspector to periodically scan the backend instances.

  • E. Use AWS Key Management Services to encrypt all the traffic between the client and application servers.

Answer: B and D.

  1. Review the application security groups (B)

    • Restricting inbound traffic to only the necessary ports and source IP addresses significantly reduces the attack surface.

    • Ensuring no unintended open ports or overly broad access is a fundamental step in securing a three-tier web architecture (a quick audit sketch follows this list).

  2. Use Amazon Inspector to periodically scan the backend instances (D)

    • Amazon Inspector provides automated security assessments for Amazon EC2 instances.

    • It can detect known vulnerabilities, potential misconfigurations, and deviations from best practices, helping you stay ahead of emerging threats.
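
For the security group review, a small audit script along these lines (Python with boto3; purely illustrative) can flag any rule that is open to the world:

import boto3

ec2 = boto3.client("ec2")

def find_world_open_rules():
    paginator = ec2.get_paginator("describe_security_groups")
    for page in paginator.paginate():
        for sg in page["SecurityGroups"]:
            for perm in sg["IpPermissions"]:
                for ip_range in perm.get("IpRanges", []):
                    if ip_range.get("CidrIp") == "0.0.0.0/0":
                        print(
                            f"{sg['GroupId']} ({sg.get('GroupName', '')}) allows "
                            f"{perm.get('IpProtocol')} "
                            f"{perm.get('FromPort', 'all')}-{perm.get('ToPort', 'all')} "
                            "from 0.0.0.0/0"
                        )

if __name__ == "__main__":
    find_world_open_rules()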


Why the Other Options Are Less Appropriate

  • A. Use AWS Certificate Manager (ACM) to encrypt all traffic

    • While encrypting traffic (HTTPS/TLS) is a best practice for protecting data in transit, it does not directly address vulnerabilities on the instances themselves nor limit open ports.

    • Encryption is important, but in the context of “checking for known vulnerabilities and limiting attack surface,” it’s not as direct a solution as reviewing security groups and conducting vulnerability scans.

  • C. Use Elastic Load Balancing to offload SSL encryption

    • Similar to ACM, offloading SSL/TLS at the load balancer is a design choice that can improve performance or simplify certificate management.

    • It does not inherently reduce the number of potential entry points or help identify known vulnerabilities on the instances.

  • E. Use AWS Key Management Service (KMS) to encrypt all traffic

    • KMS manages encryption keys at rest or for certain integrated AWS services.

    • It is not used for encrypting traffic between clients and servers over the network. TLS certificates (via ACM or another certificate authority) handle transport encryption, not KMS.

New cards
85

For compliance reasons, an organization limits the use of resources to three specific AWS regions. It wants to be alerted when any resources are launched in unapproved regions.
Which of the following approaches will provide alerts on any resources launched in an unapproved region?

  • A. Develop an alerting mechanism based on processing AWS CloudTrail logs.

  • B. Monitor Amazon S3 Event Notifications for objects stored in buckets in unapproved regions.

  • C. Analyze Amazon CloudWatch Logs for activities in unapproved regions.

  • D. Use AWS Trusted Advisor to alert on all resources being created.

Answer: A. Develop an alerting mechanism based on processing AWS CloudTrail logs.


Why This Is Correct

  • CloudTrail Logs All Resource Creations:
    AWS CloudTrail records all API calls, including those that create resources in any AWS Region. By collecting and analyzing these logs (either by direct processing, sending them to Amazon CloudWatch Logs, or using AWS Lambda), you can identify any calls that create resources in unapproved Regions.

  • Flexibility and Coverage:
    This approach works for all AWS services that are logged by CloudTrail, not just S3 (as in option B) or any single service. You can set up automated alerts (e.g., via CloudWatch alarms or Amazon SNS) whenever a resource is created in a region outside your approved list.
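
One hedged sketch of such a mechanism is a Lambda function that receives CloudTrail events (for example through an EventBridge rule) and raises an SNS alert when the call was made outside the approved Regions; the region list, topic ARN, and event shape below are assumptions for illustration.

import json
import boto3

sns = boto3.client("sns")

APPROVED_REGIONS = {"us-east-1", "eu-west-1", "ap-southeast-2"}   # hypothetical approved list
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:unapproved-region-alerts"   # hypothetical

def lambda_handler(event, context):
    # Assumes invocation via an EventBridge rule that forwards CloudTrail events,
    # where the API call details sit under the "detail" key.
    detail = event.get("detail", {})
    region = detail.get("awsRegion")
    if region and region not in APPROVED_REGIONS:
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject="Resource activity in unapproved region",
            Message=json.dumps({
                "region": region,
                "eventName": detail.get("eventName"),
                "userIdentity": detail.get("userIdentity"),
            }, default=str),
        )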


Why the Other Choices Are Less Suitable

  1. B. Monitor Amazon S3 Event Notifications

    • This only tracks S3 object-level events (e.g., object creation). It does not cover other AWS services/resources that might be launched in an unapproved region, so it doesn’t provide complete coverage.

  2. C. Analyze Amazon CloudWatch Logs for activities in unapproved regions

    • While you can send CloudTrail logs to CloudWatch Logs and perform a similar analysis, you still need CloudTrail as the data source. Simply “analyzing CloudWatch Logs” is incomplete if CloudTrail is not configured to deliver logs there in the first place.

    • In practice, this ends up being the same underlying solution as A (CloudTrail logs → CloudWatch Logs), but A is more direct about the root cause (CloudTrail captures resource creation events).

  3. D. Use AWS Trusted Advisor to alert on all resources being created

    • Trusted Advisor does not provide real-time or near-real-time alerts on new resource creation in unapproved regions. It focuses on cost optimization, performance, security best practices, and fault tolerance checks rather than continuous region-based resource monitoring.

New cards
86

A company runs an application on AWS that needs to be accessed only by employees. Most employees work from the office, but others work remotely or travel.
How can the Security Engineer protect this workload so that only employees can access it?

  • A. Add each employee's home IP address to the security group for the application so that only those users can access the workload.

  • B. Create a virtual gateway for VPN connectivity for each employee, and restrict access to the workload from within the VPC.

  • C. Use a VPN appliance from the AWS Marketplace for users to connect to, and restrict workload access to traffic from that appliance.

  • D. Route all traffic to the workload through AWS WAF. Add each employee's home IP address into an AWS WAF rule, and block all other traffic.

Answer: C. Use a VPN appliance (or VPN service) that all employees connect through, and restrict access to the workload to traffic from that appliance.


Why This Is the Best Solution

  1. Centralized, Secure Access for All Employees

    • A VPN appliance (or a managed client VPN) in AWS allows employees to connect from anywhere—office, home, or while traveling—without having to hard-code IP addresses.

    • All traffic to the protected application then originates from the VPN’s IP addresses, which you can allow in the workload’s security group or network ACL.

  2. Avoids the Complexity of Tracking Changing IPs

    • Home IP addresses and traveling users often have dynamic or changing IPs, making it impractical to whitelist them directly.

    • A single VPN endpoint address is far easier to maintain than adding or removing each user’s IP.

  3. Scalable and Extensible

    • You can add new users simply by creating new VPN credentials or integrating with corporate authentication (e.g., Active Directory).

    • You don’t have to modify the application’s security configuration for each user.


Why the Other Choices Are Not Ideal

  • A. Add each employee’s home IP address to the security group

    • IP addresses can be dynamic, especially for remote or traveling users. Maintaining an up-to-date list quickly becomes unmanageable.

  • B. Create a virtual gateway for VPN connectivity for each employee

    • Virtual Private Gateways in AWS are typically used for site-to-site (i.e., connecting an entire corporate network to a VPC). They are not designed for individual end users on the go.

    • This would be overly complex and not practical for employees scattered in various locations.

  • D. Route all traffic through AWS WAF and add employees’ IP addresses to a whitelist

    • Again, tracking employees’ changing IP addresses does not scale well.

    • WAF is best for layer-7 traffic inspection and filtering (e.g., blocking malicious requests), not for enforcing user-based remote access with dynamic addresses.

New cards
87

A Systems Engineer is troubleshooting the connectivity of a test environment that includes a virtual security appliance deployed inline. In addition to using the virtual security appliance, the Development team wants to use security groups and network ACLs to accomplish various security requirements in the environment.
What configuration is necessary to allow the virtual security appliance to route the traffic?

  • A. Disable network ACLs.

  • B. Configure the security appliance's elastic network interface for promiscuous mode.

  • C. Disable the Network Source/Destination check on the security appliance's elastic network interface

  • D. Place the security appliance in the public subnet with the internet gateway

Answer: C. Disable the Network Source/Destination check on the security appliance’s elastic network interface.


Why This Is the Correct Configuration

  • By default, an Amazon EC2 instance’s network interface performs source/destination checks—it expects that traffic to or from the instance must have the instance’s own IP address as the source or destination.

  • For a virtual appliance that is routing or forwarding traffic on behalf of other hosts, you must disable this source/destination check so that the instance can pass traffic that isn’t addressed to it specifically.

  • Once the source/destination check is disabled, the instance can properly act as an inline security appliance or firewall.
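
The change itself is a single API call, for example with boto3 (the ENI ID is a placeholder):

import boto3

ec2 = boto3.client("ec2")

# Turn off the source/destination check so the appliance can forward traffic
# that is not addressed to its own IP.
ec2.modify_network_interface_attribute(
    NetworkInterfaceId="eni-0123456789abcdef0",   # placeholder ENI ID
    SourceDestCheck={"Value": False},
)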


Why the Other Options Are Not Necessary or Correct

  1. A. Disable network ACLs

    • Network ACLs can still be used in conjunction with a virtual security appliance. Disabling them outright isn’t required and removes a layer of security.

  2. B. Configure the security appliance’s ENI for promiscuous mode

    • AWS does not support “promiscuous mode” on ENIs in the same way traditional on-premises networks may allow it. Traffic forwarding in AWS requires disabling the source/dest check, not setting an interface to promiscuous mode.

  3. D. Place the security appliance in the public subnet with the internet gateway

    • Although you might place a security appliance in a public subnet (especially if it’s handling inbound traffic from the internet), doing so alone does not resolve the fundamental source/destination check issue. The appliance would still need to have that check disabled to forward traffic.

New cards
88

A Security Architect is evaluating managed solutions for storage of encryption keys. The requirements are:
-Storage is accessible by using only VPCs.
-Service has tamper-evident controls.
-Access logging is enabled.
-Storage has high availability.
Which of the following services meets these requirements?

  • A. Amazon S3 with default encryption

  • B. AWS CloudHSM

  • C. Amazon DynamoDB with server-side encryption

  • D. AWS Systems Manager Parameter Store

Answer: B. AWS CloudHSM


Why This Meets the Requirements

  1. Accessible only via VPC:
    AWS CloudHSM runs within your Amazon VPC, providing dedicated (bare metal) HSM instances. You connect to your HSM cluster privately through your VPC.

  2. Tamper-evident controls:
    CloudHSM devices are designed to meet stringent hardware security module (HSM) standards, including physical tamper detection. If tampering is detected, the device can zeroize keys and log the event.

  3. Access logging:
    CloudHSM provides logs and audit trails (e.g., syslog). You can configure logging to capture administrative operations and cryptographic operations.

  4. High availability:
    You can cluster multiple HSMs across different Availability Zones, ensuring continuous operation even if one HSM becomes unavailable.


Why the Other Options Are Not Suitable

  • A. Amazon S3 with default encryption

    • S3 is a storage service, not a dedicated key-management or key-storage solution. It doesn’t provide tamper-evident hardware protections for encryption keys.

  • C. Amazon DynamoDB with server-side encryption

    • DynamoDB is a NoSQL database service; while it does support SSE, it’s not intended for storing encryption keys and does not offer hardware-based tamper detection.

  • D. AWS Systems Manager Parameter Store

    • Parameter Store can store configuration data and secrets, but it does not provide the physical tamper-evidence and dedicated HSM hardware required by many compliance standards. It also isn’t strictly limited to VPC traffic by default (you typically access it over the public AWS endpoints unless you configure PrivateLink).

New cards
89
term image

The correct answer is:

C. Both bucket1 and bucket2

Explanation: Bucket1 Access Analysis

  1. Bucket Policy for bucket1 explicitly allows user alice full access (s3:*) to bucket1 and all objects inside it (arn:aws:s3:::bucket1, arn:aws:s3:::bucket1/*).

  2. Since bucket policies are evaluated along with IAM policies, and there is an explicit allow, alice can access bucket1.

Bucket2 Access Analysis

  1. The IAM policy attached to alice allows s3:* (all S3 actions) on bucket2 (arn:aws:s3:::bucket2, arn:aws:s3:::bucket2/*).

  2. Since there is no bucket policy for bucket2 that denies access, IAM permission alone is enough for alice to access bucket2.

Final Verdict:

  • alice has explicit allow permissions on bucket1 via the bucket policy.

  • alice has explicit allow permissions on bucket2 via the IAM policy.

  • Since IAM policies and bucket policies work together and there are no explicit denies, alice can access both bucket1 and bucket2.

Thus, the correct choice is:
C. Both bucket1 and bucket2.

New cards
90

An organization has three applications running on AWS, each accessing the same data on Amazon S3. The data on Amazon S3 is server-side encrypted by using an AWS KMS Customer Master Key (CMK).
What is the recommended method to ensure that each application has its own programmatic access control permissions on the KMS CMK?

  • A. Change the key policy permissions associated with the KMS CMK for each application when it must access the data in Amazon S3.

  • B. Have each application assume an IAM role that provides permissions to use the AWS Certificate Manager CMK.

  • C. Have each application use a grant on the KMS CMK to add or remove specific access controls on the KMS CMK.

  • D. Have each application use an IAM policy in a user context to have specific access permissions on the KMS CMK.

Answer: C. Have each application use a grant on the KMS CMK to add or remove specific access controls on the KMS CMK.


Why This Is the Recommended Method

  • Granular, Programmatic Control
    Grants in AWS KMS allow you to provide (or revoke) fine-grained, temporary, and auditable permissions to use a CMK. Each application can be given a separate grant that specifies which operations (e.g., Encrypt, Decrypt, ReEncrypt) it can perform with the CMK (a short example follows this list).

  • No Need to Constantly Update Key Policies
    Without grants, you would have to modify the key policy or IAM roles repeatedly to add/remove permissions each time requirements change. Grants enable you to manage this more dynamically—granting or revoking permissions as needed—without rewriting the entire key policy.

  • Tailored to Each Application
    Since each application uses a unique grant, you can precisely specify which operations it can perform, how long it can use the key, and even restrict usage to particular encryption contexts. That ensures each application has its own distinct access controls.
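
A grant for one application's role might look like the sketch below (Python with boto3; the key ARN, role ARN, grant name, and encryption context are hypothetical):

import boto3

kms = boto3.client("kms")

KEY_ID = "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"   # placeholder
APP_ROLE_ARN = "arn:aws:iam::111122223333:role/app1-role"   # placeholder application role

# Give application 1 only the operations it needs, scoped to its own encryption context.
grant = kms.create_grant(
    KeyId=KEY_ID,
    GranteePrincipal=APP_ROLE_ARN,
    Operations=["Encrypt", "Decrypt", "GenerateDataKey"],
    Constraints={"EncryptionContextSubset": {"app": "app1"}},
    Name="app1-s3-data-access",
)

# Later, remove that application's access without touching the key policy.
kms.revoke_grant(KeyId=KEY_ID, GrantId=grant["GrantId"])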


Why the Other Options Are Less Suitable

  1. A. Change the key policy permissions... when it must access the data

    • Constantly updating the key policy is cumbersome and error-prone. Key policies are meant to define broad, static rules about who can manage and use the key. Fine-grained, short-term, or application-specific permissions are better handled by grants.

  2. B. Have each application assume an IAM role that provides permissions to use the AWS Certificate Manager CMK

    • AWS Certificate Manager (ACM) is unrelated to encrypting objects in S3 with KMS CMKs. ACM issues SSL/TLS certificates and does not manage KMS CMKs. Even if you used an IAM role for each application, you would still need either a key policy or grants to define usage of the CMK for encryption/decryption.

  3. D. Use an IAM policy in a user context to have specific access permissions on the KMS CMK

    • While you do need IAM policies or roles for the applications, this alone does not give you the fine-grained, per-application control of encryption/decryption operations that grants provide. You would still need a KMS key policy allowing IAM policy usage—and it would be less flexible for dynamic updates.

New cards
91

The Security Engineer is given the following requirements for an application that is running on Amazon EC2 and managed by using AWS CloudFormation templates with EC2 Auto Scaling groups:
-Have the EC2 instances bootstrapped to connect to a backend database.
-Ensure that the database credentials are handled securely.
-Ensure that retrievals of database credentials are logged.
Which of the following is the MOST efficient way to meet these requirements?

  • A. Pass database credentials to EC2 by using CloudFormation stack parameters with the NoEcho property set to true. Ensure that the instance is configured to log to Amazon CloudWatch Logs.

  • B. Store database passwords in AWS Systems Manager Parameter Store by using SecureString parameters. Set the IAM role for the EC2 instance profile to allow access to the parameters.

  • C. Create an AWS Lambda that ingests the database password and persists it to Amazon S3 with server-side encryption. Have the EC2 instances retrieve the S3 object on startup, and log all script invocations to syslog.

  • D. Write a script that is passed in as UserData so that it is executed upon launch of the EC2 instance. Ensure that the instance is configured to log to Amazon CloudWatch Logs.

Answer: B. Store database passwords in AWS Systems Manager Parameter Store using SecureString, and grant EC2 instances permission to retrieve them.


Why This Is Correct

  1. Secure Storage of Credentials

    • AWS Systems Manager Parameter Store (with SecureString parameters) securely encrypts the database credentials, typically using AWS KMS.

    • This ensures that the credentials are not exposed in plain text in CloudFormation templates, user data, or source code.

  2. Access Control Through IAM Roles

    • You can attach an IAM role to the EC2 instances that grants read-only access to the Parameter Store values.

    • Only the instances with that role can retrieve the credentials, satisfying the security requirements.

  3. Retrieval Logging

    • Parameter Store retrievals are logged in AWS CloudTrail. This provides an auditable record of when, and by which resources, the credentials were accessed.

  4. Automation and Efficiency

    • The EC2 Auto Scaling bootstrapping can automatically run a simple script (via user data or a configuration management tool) that queries Parameter Store for the database credentials.

    • This approach seamlessly scales with your Auto Scaling group.
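
On the instance, the bootstrap step reduces to something like the snippet below (the parameter name is a placeholder); the call is authorized by the instance profile's IAM role and is recorded by CloudTrail as a GetParameter event.

import boto3

ssm = boto3.client("ssm")

# Runs at boot (e.g., from user data or cfn-init). The instance profile's role
# must allow ssm:GetParameter (and kms:Decrypt for the key protecting the parameter).
response = ssm.get_parameter(
    Name="/example/app/db_password",   # placeholder SecureString parameter
    WithDecryption=True,
)
db_password = response["Parameter"]["Value"]
# ...use db_password to open the database connection; never write it to disk or logs.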


Why the Other Options Are Less Suitable

  • A. Pass credentials via CloudFormation stack parameters

    • Even with “NoEcho” parameters, the credentials would still exist in the CloudFormation stack (though masked in some places). This is less secure and can risk accidental exposure or logging of sensitive values.

    • Additionally, retrieval is not automatically logged the way Parameter Store retrievals are logged.

  • C. Lambda → S3 with server-side encryption

    • Storing secrets in S3 is possible but not the recommended best practice when AWS offers dedicated solutions like Parameter Store or Secrets Manager.

    • Managing and rotating secrets would become more cumbersome, and the auditing is not as straightforward as Parameter Store retrieval logs.

  • D. Write a script in EC2 User Data

    • This might place credentials in plain text if included in user data or script parameters.

    • Also, it does not address how the credentials are securely stored and accessed in the first place.

Therefore, Option B is the most secure and efficient approach, giving you encrypted storage, fine-grained IAM access control, and audit logging for credential retrieval.

New cards
92
term image

The correct answer is:

C. Move the bastion host to the VPC with VPN connectivity. Create a cross-account trust relationship between the bastion VPC and Aurora VPC, and update the Aurora security group for the relationship.

Explanation:

  • The current setup is insecure because the bastion host is in a public subnet, making it exposed to potential attacks.

  • Since adding another VPN connection is not an option, a more secure approach is required to access the Aurora database in the private subnet.

Why Option C is Correct:

  1. Move the Bastion Host to the VPN-Connected VPC

    • This ensures secure access from the corporate network and removes public internet exposure.

  2. Establish a Cross-Account Trust Relationship

    • Since the Aurora database is in a different AWS account, a cross-account trust will allow secure access from the bastion host.

  3. Update the Aurora Security Group

    • To allow access, the Aurora security group should allow inbound traffic from the trusted bastion VPC.

Why Other Options Are Incorrect:

  • A. Move the bastion to the VPN VPC and use VPC Peering:
    Incorrect because while VPC peering allows connectivity, it does not solve the cross-account access issue. Also, Aurora requires security group updates.

  • B. Use SSH Port Forwarding from Developer Workstations:
    Incorrect because this still requires a public-facing bastion host, which is insecure.

  • D. Use AWS Direct Connect between the corporate network and the Aurora VPC:
    Incorrect because Direct Connect is a costly and complex solution for this scenario when a VPC-to-VPC secure trust is sufficient.

Conclusion:

By moving the bastion host to the VPC with VPN access, using a cross-account trust, and updating the Aurora security group, the solution becomes secure without requiring public exposure. Hence, option C is the best approach.

New cards
93

An organization operates a web application that serves users globally. The application runs on Amazon EC2 instances behind an Application Load Balancer.
There is an Amazon CloudFront distribution in front of the load balancer, and the organization uses AWS WAF. The application is currently experiencing a volumetric attack whereby the attacker is exploiting a bug in a popular mobile game.
The application is being flooded with HTTP requests from all over the world with the User-Agent set to the following string: Mozilla/5.0 (compatible; ExampleCorp; ExampleGame/1.22; Mobile/1.0)
What mitigation can be applied to block attacks resulting from this bug while continuing to service legitimate requests?

  • A. Create a rule in AWS WAF rules with conditions that block requests based on the presence of ExampleGame/1.22 in the User-Agent header

  • B. Create a geographic restriction on the CloudFront distribution to prevent access to the application from most geographic regions

  • C. Create a rate-based rule in AWS WAF to limit the total number of requests that the web application services.

  • D. Create an IP-based blacklist in AWS WAF to block the IP addresses that are originating from requests that contain ExampleGame/1.22 in the User-Agent header.

Correct Answer:

A. Create a rule in AWS WAF rules with conditions that block requests based on the presence of ExampleGame/1.22 in the User-Agent header.


Explanation:

  • The attack is specifically exploiting a bug in a mobile game that is identified in the User-Agent header (ExampleGame/1.22).

  • The most targeted and efficient way to mitigate the attack without blocking legitimate traffic is to create a WAF rule that blocks requests with this specific User-Agent string.

  • This ensures only malicious requests are blocked while allowing other legitimate traffic to continue accessing the application.
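
Expressed against the current AWS WAFv2 API, the blocking rule would look roughly like the Python dict below (rule name, priority, and metric name are illustrative; attaching it still requires an update_web_acl call with the web ACL's lock token):

# Sketch of a WAFv2 rule that blocks any request whose User-Agent header
# contains the offending game identifier.
block_example_game_rule = {
    "Name": "BlockExampleGame122",
    "Priority": 0,
    "Action": {"Block": {}},
    "Statement": {
        "ByteMatchStatement": {
            "SearchString": b"ExampleGame/1.22",
            "FieldToMatch": {"SingleHeader": {"Name": "user-agent"}},
            "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
            "PositionalConstraint": "CONTAINS",
        }
    },
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "BlockExampleGame122",
    },
}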


Why Other Options Are Incorrect:

B. Create a geographic restriction on the CloudFront distribution to prevent access to the application from most geographic regions.

  • This blocks traffic from entire regions, which may include legitimate users.

  • The attack is not geographically isolated but rather global.

  • Blocking entire regions could lead to overblocking and poor user experience.

C. Create a rate-based rule in AWS WAF to limit the total number of requests that the web application services.

  • Rate-limiting can help against DDoS-style floods, but in this case, it will also limit legitimate users.

  • The attack is not necessarily high in volume per IP but is rather exploiting a specific vulnerability.

  • A User-Agent filter is a more precise solution.

D. Create an IP-based blacklist in AWS WAF to block the IP addresses that are originating from requests that contain ExampleGame/1.22 in the User-Agent header.

  • The attack is global and involves many IPs.

  • Maintaining an IP-based blacklist is inefficient because:

    • Attackers can use dynamic IPs, VPNs, or botnets.

    • Legitimate users may share similar IPs.

  • Blocking based on User-Agent is more precise and scalable.


Final Recommendation:

  • Use AWS WAF to create a rule that blocks requests with "ExampleGame/1.22" in the User-Agent header.

  • This effectively stops the attack while ensuring minimal impact on legitimate traffic.

New cards
94

Some highly sensitive analytics workloads are to be moved to Amazon EC2 hosts. Threat modeling has found that a risk exists where a subnet could be maliciously or accidentally exposed to the internet.
Which of the following mitigations should be recommended?

  • A. Use AWS Config to detect whether an Internet Gateway is added and use an AWS Lambda function to provide auto-remediation.

  • B. Within the Amazon VPC configuration, mark the VPC as private and disable Elastic IP addresses.

  • C. Use IPv6 addressing exclusively on the EC2 hosts, as this prevents the hosts from being accessed from the internet.

  • D. Move the workload to a Dedicated Host, as this provides additional network security controls and monitoring.

Correct Answer:

A. Use AWS Config to detect whether an Internet Gateway is added and use an AWS Lambda function to provide auto-remediation.


Explanation:

  • Threat Modeling Concern: The risk is that a subnet could be accidentally or maliciously exposed to the internet, which could lead to unauthorized access.

  • Best Mitigation Approach: Use AWS Config to continuously monitor the VPC configuration.

    • AWS Config detects when an Internet Gateway (IGW) is added.

    • An AWS Lambda function can be triggered for auto-remediation, which can:

      • Remove the Internet Gateway

      • Send alerts

      • Revert changes

This proactive monitoring and automatic correction prevent exposure before it leads to security issues.
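
A simplified remediation function might look like this (Python with boto3; it assumes the caller, for example the Lambda handler behind an AWS Config rule, has already extracted the VPC ID from the evaluation event, and the SNS topic is a placeholder):

import boto3

ec2 = boto3.client("ec2")
sns = boto3.client("sns")

TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:vpc-exposure-alerts"   # placeholder topic

def remediate_igw(vpc_id):
    # Find and detach any internet gateway attached to the protected VPC.
    igws = ec2.describe_internet_gateways(
        Filters=[{"Name": "attachment.vpc-id", "Values": [vpc_id]}]
    )["InternetGateways"]
    for igw in igws:
        igw_id = igw["InternetGatewayId"]
        ec2.detach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject="Internet gateway removed from protected VPC",
            Message=f"Detached {igw_id} from {vpc_id}",
        )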


Why Other Options Are Incorrect:

B. Within the Amazon VPC configuration, mark the VPC as private and disable Elastic IP addresses.

  • AWS does not provide an option to "mark a VPC as private."

  • Disabling Elastic IPs does not prevent exposure, as instances can still access the internet via a NAT Gateway or a misconfigured route table.

  • Internet access is controlled by route tables, security groups, and internet gateways, not just Elastic IPs.

C. Use IPv6 addressing exclusively on the EC2 hosts, as this prevents the hosts from being accessed from the internet.

  • This is incorrect because:

    • IPv6 does not prevent exposure; it can be publicly routable if misconfigured.

    • An IPv6-enabled subnet can still be accidentally exposed.

    • VPC IPv6 addresses are globally unique and become publicly reachable as soon as a route to an internet gateway (or egress-only internet gateway) exists, so IPv6 alone is not a safeguard against exposure.

D. Move the workload to a Dedicated Host, as this provides additional network security controls and monitoring.

  • Dedicated Hosts mainly provide:

    • Compliance benefits (e.g., BYOL licensing, dedicated physical hardware)

    • No additional network security features beyond standard VPC controls.

  • The risk of internet exposure is not mitigated by using a Dedicated Host.


Final Recommendation:

  • Best approach is to continuously monitor and auto-remediate VPC configurations with AWS Config and Lambda to prevent accidental or malicious internet exposure.

New cards
95

A Developer who is following AWS best practices for secure code development requires an application to encrypt sensitive data to be stored at rest, locally in the application, using AWS KMS. What is the simplest and MOST secure way to decrypt this data when required?

  • A. Request KMS to provide the stored unencrypted data key and then use the retrieved data key to decrypt the data.

  • B. Keep the plaintext data key stored in Amazon DynamoDB protected with IAM policies. Query DynamoDB to retrieve the data key to decrypt the data

  • C. Use the Encrypt API to store an encrypted version of the data key with another customer managed key. Decrypt the data key and use it to decrypt the data when required.

  • D. Store the encrypted data key alongside the encrypted data. Use the Decrypt API to retrieve the data key to decrypt the data when required.

Answer: D. Store the encrypted data key alongside the encrypted data. Use the KMS Decrypt API when needed to retrieve the plaintext data key in memory and decrypt the data locally.


Why This Approach Is Both Simple and Secure

  1. Local Encryption Workflow with KMS:

    • You generate a data key by calling KMS’s GenerateDataKey API.

    • KMS returns two copies of the key:

      • A plaintext data key (used briefly in memory to encrypt your data).

      • An encrypted data key (ciphertext blob), which you store alongside your encrypted data.

  2. Decrypt on Demand:

    • When you need to decrypt your data, you pass the encrypted data key to KMS’s Decrypt API.

    • KMS returns the plaintext data key (in memory).

    • Your application uses that plaintext data key to decrypt the data locally.

  3. Security Benefits:

    • The plaintext data key is never stored anywhere—only used briefly in memory.

    • The encryption key itself is protected by KMS, and only your application (with the right IAM permissions) can request decryption.

    • Access is fully logged in AWS CloudTrail, providing an audit trail.
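
A minimal envelope-encryption sketch of this flow (Python with boto3 and the cryptography package; the CMK alias is a placeholder) looks like this:

import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms")
KEY_ID = "alias/example-app-key"   # placeholder CMK alias

def encrypt_locally(plaintext: bytes):
    # Ask KMS for a data key: a plaintext copy for immediate use and an
    # encrypted copy to store alongside the data.
    dk = kms.generate_data_key(KeyId=KEY_ID, KeySpec="AES_256")
    nonce = os.urandom(12)
    ciphertext = AESGCM(dk["Plaintext"]).encrypt(nonce, plaintext, None)
    return {"ciphertext": ciphertext, "nonce": nonce, "encrypted_key": dk["CiphertextBlob"]}

def decrypt_locally(record):
    # Recover the plaintext data key in memory only, then decrypt locally.
    plaintext_key = kms.decrypt(CiphertextBlob=record["encrypted_key"])["Plaintext"]
    return AESGCM(plaintext_key).decrypt(record["nonce"], record["ciphertext"], None)

Only the encrypted data key is persisted alongside the ciphertext; the plaintext key is used briefly in memory and never stored.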


Why the Other Options Are Less Effective

  • A. Request KMS to provide the stored unencrypted data key

    • This is incomplete. It does not explain how the data key remains protected at rest, nor does it describe the recommended practice of storing and retrieving an encrypted copy of the data key.

  • B. Keep the plaintext data key in DynamoDB protected by IAM

    • Storing the plaintext key anywhere persistent (like DynamoDB) is not a best practice. This risks accidental disclosure if DynamoDB access permissions are misconfigured or if logs reveal the key.

  • C. Re-encrypt the data key with another CMK and then decrypt

    • This adds unnecessary complexity by introducing multiple CMKs just to protect the same data key. The standard Envelope Encryption approach requires only the single CMK that generated the data key in the first place.

New cards
96

A Security Administrator at a university is configuring a fleet of Amazon EC2 instances. The EC2 instances are shared among students, and non-root SSH access is allowed. The Administrator is concerned about students attacking other AWS account resources by using the EC2 instance metadata service.
What can the Administrator do to protect against this potential attack?

  • A. Disable the EC2 instance metadata service.

  • B. Log all student SSH interactive session activity.

  • C. Implement iptables-based restrictions on the instances.

  • D. Install the Amazon Inspector agent on the instances.

Answer: C. Implement iptables-based restrictions on the instances.


Explanation

When multiple (potentially untrusted) users have shell access to an EC2 instance, any user could attempt to query the instance metadata service at the well-known address 169.254.169.254 to retrieve temporary credentials if the instance is assigned an IAM role. This would allow them to act with the permissions of that role in other parts of AWS, which is a security risk.

Key points:

  1. Why Blocking Metadata Access Helps

    • By default, the instance metadata service (IMDS) will return temporary AWS credentials to anyone on the instance who can reach 169.254.169.254.

    • If you do not want non-root users to access the IMDS, you can use iptables (or another host-based firewall) to block or limit traffic to 169.254.169.254 for non-root processes. This way, students cannot retrieve credentials from IMDS, but privileged processes (such as root) can still access IMDS if required (a short iptables sketch appears below, after this list).

  2. Why Not Simply Disable the IMDS

    • Option A (disabling IMDS entirely) can break legitimate functionality if the instance (or software on it) needs to use the assigned IAM role—for example, to upload logs to S3 or read from DynamoDB.

    • If the instance truly does not need any AWS credentials, you can disable IMDS at launch or afterward with the ModifyInstanceMetadataOptions API, but that is often too restrictive in practice.

  3. Why Logging SSH Sessions Isn’t Enough

    • Merely logging student activity (option B) does not prevent credential theft. It might help with after-the-fact forensic analysis, but not proactive prevention.

  4. Why Amazon Inspector (option D) Isn’t the Fix

    • Installing the Amazon Inspector agent can help assess security posture and vulnerabilities, but it does not block or restrict access to the IMDS. It’s not designed to solve this particular issue of unprivileged access to instance credentials.

Therefore, the best solution among the provided choices is to implement iptables (or a similar host-based firewall mechanism) to restrict access to the instance metadata service, ensuring only root or privileged processes can access credentials, thus preventing unprivileged students from retrieving and misusing them.
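
As a rough sketch (run as root at instance boot; the exact rule may need adapting per distribution, and IPv6 and IMDSv2 hop-limit settings are out of scope here), the restriction could be applied like this, rejecting IMDS traffic from every user except root:

import subprocess

# Reject traffic to the IMDS address (169.254.169.254) for all non-root users;
# root-owned processes (e.g., agents that use the instance role) can still reach it.
subprocess.run(
    [
        "iptables", "-A", "OUTPUT",
        "-d", "169.254.169.254",
        "-m", "owner", "!", "--uid-owner", "root",
        "-j", "REJECT",
    ],
    check=True,
)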

New cards
97

An organization receives an alert that indicates that an EC2 instance behind an ELB Classic Load Balancer has been compromised.
What techniques will limit lateral movement and allow evidence gathering?

  • A. Remove the instance from the load balancer and terminate it.

  • B. Remove the instance from the load balancer, and shut down access to the instance by tightening the security group.

  • C. Reboot the instance and check for any Amazon CloudWatch alarms.

  • D. Stop the instance and make a snapshot of the root EBS volume.

Answer: B. Remove the instance from the load balancer, and shut down access to the instance by tightening the security group.


Why This Approach Is Best

  1. Limit Lateral Movement:

    • By removing the instance from the load balancer and restricting its security group, you effectively isolate it from inbound and outbound traffic. That prevents further compromise of other systems (lateral movement) while still keeping the instance running.

  2. Preserve Forensic Data:

    • Leaving the instance powered on (rather than terminating, stopping, or rebooting) maintains its current memory state and disk state for forensic analysis.

    • You avoid losing ephemeral memory data (e.g., running processes, in-memory malware indicators) that might vanish if you stop or reboot.

  3. Facilitate Evidence Gathering:

    • With the instance isolated, you can connect via a controlled forensic process (e.g., attaching a specialized forensic security group or using a bastion host with strict access).

    • You can capture disk images, log data, process info, and other artifacts to analyze the nature of the compromise.


Why the Other Options Are Not Ideal

  • A. Remove the instance from the load balancer and terminate it

    • Immediate termination destroys ephemeral evidence (memory, temp files). You lose valuable forensic data.

  • C. Reboot the instance and check for any Amazon CloudWatch alarms

    • A reboot erases volatile memory, which may contain crucial evidence of the attack. Checking CloudWatch alarms is helpful but does not isolate or preserve the compromised host.

  • D. Stop the instance and make a snapshot of the root EBS volume

    • Stopping the instance also destroys in-memory evidence. While snapshots preserve disk data, you lose any volatile artifacts that might be critical for understanding the compromise.

Hence, Option B balances the need to block an attacker from spreading while preserving the live forensic evidence in memory and on disk.

New cards
98

A Development team has asked for help configuring the IAM roles and policies in a new AWS account. The team using the account expects to have hundreds of master keys and therefore does not want to manage access control for customer master keys (CMKs).
Which of the following will allow the team to manage AWS KMS permissions in IAM without the complexity of editing individual key policies?

  • A. The account's CMK key policy must allow the account's IAM roles to perform kms:EnableKey.

  • B. Newly created CMKs must have a key policy that allows the root principal to perform all actions.

  • C. Newly created CMKs must allow the root principal to perform the kms:CreateGrant API operation.

  • D. Newly created CMKs must mirror the IAM policy of the KMS key administrator.

Answer: B. Newly created CMKs must have a key policy that allows the root principal to perform all actions.


Why This Works

By default, KMS key policies control who can manage and use a CMK. To simplify permissions management across many CMKs, you can delegate access control to IAM instead of editing each key policy individually. AWS documentation refers to this as “enabling IAM policies for KMS.”

To do so, you include a statement in each CMK’s key policy that grants the AWS account root principal (arn:aws:iam::<ACCOUNT_ID>:root) permission to perform all KMS operations on the CMK (e.g., "Action": "kms:*"). An example statement is:

{
  "Sid": "Enable IAM Policies",
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::<ACCOUNT_ID>:root" },
  "Action": "kms:*",
  "Resource": "*"
}

Once that statement is in the key policy, IAM policies attached to users, groups, and roles in the same account can control who can use or manage each CMK, without you having to maintain access in each individual key policy.


Why the Other Options Are Not Correct

  • A. CMK key policy must allow the account's IAM roles to perform KMS EnableKey.

    • Merely granting EnableKey does not delegate the full scope of permissions needed to manage keys via IAM. You need to allow "kms:*" to the root principal so the key policy defers all permission management to IAM.

  • C. CMK key policy must allow the root principal to perform the kms:CreateGrant API operation.

    • Granting just CreateGrant is too narrow and does not cover all the other KMS operations (e.g., Decrypt, Encrypt, DescribeKey, TagResource, etc.) that might need to be controlled via IAM.

  • D. Newly created CMKs must mirror the IAM policy of the KMS key administrator.

    • This reintroduces complexity—if you try to replicate IAM policy details in each key policy, you must edit each key policy whenever there is a change. The recommended approach is to allow the account root principal full permission in the key policy so that IAM policies can manage the fine-grained rules.

New cards
99

An Amazon EC2 instance is part of an EC2 Auto Scaling group that is behind an Application Load Balancer (ALB). It is suspected that the EC2 instance has been compromised.
Which steps should be taken to investigate the suspected compromise? (Choose three.)

  • A. Detach the elastic network interface from the EC2 instance.

  • B. Initiate an Amazon Elastic Block Store volume snapshot of all volumes on the EC2 instance.

  • C. Disable any Amazon Route 53 health checks associated with the EC2 instance.

  • D. De-register the EC2 instance from the ALB and detach it from the Auto Scaling group.

  • E. Attach a security group that has restrictive ingress and egress rules to the EC2 instance.

  • F. Add a rule to an AWS WAF to block access to the EC2 instance.

Answer: B, D, and E.

  1. Initiate an Amazon EBS volume snapshot (B).

    • Taking snapshots of all attached EBS volumes preserves disk data in its current state for forensic analysis. This step ensures you have an unaltered copy of the instance’s storage at the time of suspected compromise.

  2. De-register the instance from the ALB and detach it from the ASG (D).

    • Removing the instance from active service prevents further exposure to production traffic, stops the ALB from directing user requests to it, and avoids the Auto Scaling group terminating or replacing the instance prematurely.

  3. Attach a restrictive security group (E).

    • A highly restrictive security group (allowing inbound access only from a known trusted forensic source) prevents further malicious activity while still allowing controlled access to the instance. Keeping the instance running (with minimal network access) allows you to capture volatile memory data if needed.
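
Assuming the instance, target group, Auto Scaling group, and isolation security group IDs are already known (all placeholders below), the three steps map to a handful of API calls:

import boto3

ec2 = boto3.client("ec2")
elbv2 = boto3.client("elbv2")
autoscaling = boto3.client("autoscaling")

INSTANCE_ID = "i-0123456789abcdef0"    # placeholder compromised instance
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/app/abc123"   # placeholder
ASG_NAME = "example-asg"               # placeholder Auto Scaling group
ISOLATION_SG = "sg-0123456789abcdef0"  # placeholder restrictive security group

# B. Snapshot every attached EBS volume to preserve disk state for forensics.
volumes = ec2.describe_volumes(
    Filters=[{"Name": "attachment.instance-id", "Values": [INSTANCE_ID]}]
)["Volumes"]
for vol in volumes:
    ec2.create_snapshot(VolumeId=vol["VolumeId"], Description=f"Forensics {INSTANCE_ID}")

# D. Take the instance out of service without terminating it.
elbv2.deregister_targets(TargetGroupArn=TARGET_GROUP_ARN, Targets=[{"Id": INSTANCE_ID}])
autoscaling.detach_instances(
    InstanceIds=[INSTANCE_ID],
    AutoScalingGroupName=ASG_NAME,
    ShouldDecrementDesiredCapacity=True,
)

# E. Swap in a restrictive security group so only the forensic workstation can reach it.
ec2.modify_instance_attribute(InstanceId=INSTANCE_ID, Groups=[ISOLATION_SG])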


Why the Other Options Are Less Appropriate

  • A. Detach the elastic network interface

    • Completely detaching the ENI would remove all network connectivity, which can make forensic analysis (e.g., remote memory capture) more difficult. Usually, you attach a locked-down security group rather than removing the interface.

  • C. Disable any Route 53 health checks

    • Health checks are generally used for public endpoints behind DNS routing policies. Disabling them does not meaningfully assist with isolating or gathering forensic data on the compromised instance.

  • F. Add a rule to AWS WAF to block access

    • AWS WAF can help block specific traffic patterns at the web application layer, but it does not isolate or preserve the instance or its data for forensic analysis. De-registering it from the load balancer and restricting the instance’s security group is more direct and comprehensive for this scenario.

New cards
100

A company has five AWS accounts and wants to use AWS CloudTrail to log API calls. The log files must be stored in an Amazon S3 bucket that resides in a new account specifically built for centralized services with a unique top-level prefix for each trail. The configuration must also enable detection of any modification to the logs.
Which of the following steps will implement these requirements? (Choose three.)

  • A. Create a new S3 bucket in a separate AWS account for centralized storage of CloudTrail logs, and enable Log File Validation on all trails.

  • B. Use an existing S3 bucket in one of the accounts, apply a bucket policy to the new centralized S3 bucket that permits the CloudTrail service to use the "s3:PutObject" action and the "s3:GetBucketAcl" action, and specify the appropriate resource ARNs for the CloudTrail trails.

  • C. Apply a bucket policy to the new centralized S3 bucket that permits the CloudTrail service to use the "s3:PutObject" action and the "s3:GetBucketAcl" action, and specify the appropriate resource ARNs for the CloudTrail trails.

  • D. Use unique log file prefixes for trails in each AWS account.

  • E. Configure CloudTrail in the centralized account to log all accounts to the new centralized S3 bucket.

  • F. Enable encryption of the log files by using AWS Key Management Service

Answer: A, C, and D.

  1. A. Create a new S3 bucket in a separate AWS account for centralized storage of CloudTrail logs, and enable “Log File Validation” on all trails.

    • This sets up a dedicated bucket in a new “central services” account and enables log file validation, which ensures tamper‐evident logging (detecting modifications).

  2. C. Apply a bucket policy to the new centralized S3 bucket that permits CloudTrail to use the "s3:PutObject" and "s3:GetBucketAcl" actions, specifying the correct resource ARNs for the trails.

    • Each AWS account must have permission to write its CloudTrail logs to the new bucket. The bucket policy must allow CloudTrail (the service principal) to put objects from each of the five AWS accounts, as well as read the bucket ACL to verify permissions.

  3. D. Use unique log file prefixes for trails in each AWS account.

    • The requirement is to have a unique top-level prefix for each account’s logs. This also helps keep logs organized and makes it easier to manage or query them later.
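
On each member account, the trail can then be pointed at the central bucket with its own prefix and log file validation turned on, roughly as follows (trail name, bucket, and prefix are placeholders):

import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.create_trail(
    Name="account-a-trail",                           # placeholder trail name
    S3BucketName="example-central-cloudtrail-logs",   # bucket in the central account
    S3KeyPrefix="account-a",                          # unique top-level prefix per account
    IsMultiRegionTrail=True,
    EnableLogFileValidation=True,                     # digest files enable tamper detection
)
cloudtrail.start_logging(Name="account-a-trail")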


Why the Other Choices Are Not Part of the Correct Combination

  • B. Use an existing S3 bucket...

    • The requirement is to store logs in a new account dedicated to centralized services, not to reuse an existing bucket in one of the current accounts.

  • E. Configure CloudTrail in the centralized account to log all accounts...

    • Usually, each account configures its own CloudTrail that writes to the central S3 bucket. You don’t have to set up a single “global” CloudTrail in the new account. Instead, CloudTrail in each of the five accounts can write cross-account to the bucket.

  • F. Enable encryption of the log files by using AWS Key Management Service

    • While encryption can be a best practice, it’s not explicitly required here, nor is it necessary for detecting modifications to logs. Log file validation (built into CloudTrail) provides the integrity checks.

New cards
