AWS-Certified-Security-Specialty Exam - Amazon AWS Certified Security - Specialty


NEW QUESTION 1
A company uses AWS Signer with all of the company’s AWS Lambda functions. A developer recently stopped working for the company. The company wants to ensure that all the code that the developer wrote can no longer be deployed to the Lambda functions.
Which solution will meet this requirement?

  • A. Revoke all versions of the signing profile assigned to the developer.
  • B. Examine the developer’s IAM roles. Remove all permissions that grant access to Signer.
  • C. Re-encrypt all source code with a new AWS Key Management Service (AWS KMS) key.
  • D. Use Amazon CodeGuru to profile all the code that the Lambda functions use.

Answer: A

Explanation:
The correct answer is A. Revoke all versions of the signing profile assigned to the developer.
According to the AWS documentation1, AWS Signer is a fully managed code-signing service that helps you ensure the trust and integrity of your code. You can use Signer to sign code artifacts, such as Lambda deployment packages, with code-signing certificates that you control and manage.
A signing profile is a collection of settings that Signer uses to sign your code artifacts. A signing profile includes information such as the following:
  • The type of signature that you want to create (for example, a code-signing signature).
  • The signing algorithm that you want Signer to use to sign your code.
  • The code-signing certificate and its private key that you want Signer to use to sign your code.
You can create multiple versions of a signing profile, each with a different code-signing certificate. You can also revoke a version of a signing profile if you no longer want to use it for signing code artifacts.
In this case, the company wants to ensure that all the code that the developer wrote can no longer be deployed to the Lambda functions. One way to achieve this is to revoke all versions of the signing profile that was assigned to the developer. This will prevent Signer from using that signing profile to sign any new code artifacts, and also invalidate any existing signatures that were created with that signing profile. This way, the company can ensure that only trusted and authorized code can be deployed to the Lambda functions.
The other options are incorrect because:
  • B. Examining the developer’s IAM roles and removing all permissions that grant access to Signer may not be sufficient to prevent the deployment of the developer’s code. The developer may have already signed some code artifacts with a valid signing profile before leaving the company, and those signatures may still be accepted by Lambda unless the signing profile is revoked.
  • C. Re-encrypting all source code with a new AWS Key Management Service (AWS KMS) key may not be effective or practical. AWS KMS is a service that lets you create and manage encryption keys for your data. However, Lambda does not require encryption keys for deploying code artifacts, only valid signatures from Signer. Therefore, re-encrypting the source code may not prevent the deployment of the developer’s code if it has already been signed with a valid signing profile. Moreover, re-encrypting all source code may be time-consuming and disruptive for other developers who are working on the same code base.
  • D. Using Amazon CodeGuru to profile all the code that the Lambda functions use may not help with preventing the deployment of the developer’s code. Amazon CodeGuru is a service that provides intelligent recommendations to improve your code quality and identify an application’s most expensive lines of code. However, CodeGuru does not perform any security checks or validations on your code artifacts, nor does it interact with Signer or Lambda in any way. Therefore, using CodeGuru may not prevent unauthorized or untrusted code from being deployed to the Lambda functions.
References:
1: What is AWS Signer? - AWS Signer
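The revocation step described above maps to a single Signer API call. Below is a minimal boto3-style sketch; the profile name, version, and date are hypothetical placeholders, and the actual API call is left commented out because it requires valid AWS credentials:

```python
import datetime

# Hypothetical signing profile details -- substitute the real values
# returned by `aws signer list-signing-profiles`.
PROFILE_NAME = "developer_profile"   # placeholder name
PROFILE_VERSION = "rUSnmB2HkF"       # placeholder version ID

def build_revoke_request(profile_name, profile_version, reason):
    """Build the keyword arguments for signer.revoke_signing_profile.

    Revoking the profile version invalidates signatures created with it,
    so Lambda (with code signing enforcement enabled) rejects those
    deployment packages.
    """
    return {
        "profileName": profile_name,
        "profileVersion": profile_version,
        "reason": reason,
        # Signatures generated at or after this time are treated as revoked.
        "effectiveTime": datetime.datetime(2024, 1, 1, tzinfo=datetime.timezone.utc),
    }

request = build_revoke_request(PROFILE_NAME, PROFILE_VERSION, "Developer left the company")

# With credentials configured, the actual call would be:
# import boto3
# boto3.client("signer").revoke_signing_profile(**request)
```

Repeat the call for each version of the profile to revoke all of them, as the answer requires.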

NEW QUESTION 2
A company has a set of EC2 instances hosted in AWS. The EC2 instances have EBS volumes that are used to store critical information. There is a business continuity requirement to ensure high availability for the EBS volumes. How can you achieve this?

  • A. Use lifecycle policies for the EBS volumes
  • B. Use EBS Snapshots
  • C. Use EBS volume replication
  • D. Use EBS volume encryption

Answer: B

Explanation:
Data stored in Amazon EBS volumes is redundantly stored in multiple physical locations as part of normal operation of those services and at no additional charge. However, Amazon EBS replication is stored within the same Availability Zone, not across multiple zones; therefore, it is highly recommended that you conduct regular snapshots to Amazon S3 for long-term data durability.
Option A is invalid because there is no lifecycle policy for EBS volumes.
Option C is invalid because there is no EBS volume replication.
Option D is invalid because EBS volume encryption will not ensure business continuity.
For information on security for compute resources, please visit the below URL: https://d1.awsstatic.com/whitepapers/Security/Security_Compute_Services_Whitepaper.pdf
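The snapshot recommendation above can be sketched as a boto3-style request. The volume ID and tag values are placeholders, and the actual call is commented out because it needs AWS credentials:

```python
# Placeholder volume ID -- substitute a real one.
VOLUME_ID = "vol-0abc1234def567890"

# Keyword arguments for ec2.create_snapshot: a point-in-time copy of
# the EBS volume, stored durably in Amazon S3.
create_snapshot_kwargs = {
    "VolumeId": VOLUME_ID,
    "Description": "Scheduled snapshot for business continuity",
    # Tag the snapshot so it is easy to find and manage later.
    "TagSpecifications": [
        {
            "ResourceType": "snapshot",
            "Tags": [{"Key": "Purpose", "Value": "bcp"}],
        }
    ],
}

# With credentials configured:
# import boto3
# boto3.client("ec2").create_snapshot(**create_snapshot_kwargs)
```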

NEW QUESTION 3
A company has an AWS account that hosts a production application. The company receives an email notification that Amazon GuardDuty has detected an Impact:lAMUser/AnomalousBehavior finding in the account. A security engineer needs to run the investigation playbook for this security incident and must collect and analyze the information without affecting the application.
Which solution will meet these requirements MOST quickly?

  • A. Log in to the AWS account by using read-only credentials. Review the GuardDuty finding for details about the IAM credentials that were used. Use the IAM console to add a DenyAll policy to the IAM principal.
  • B. Log in to the AWS account by using read-only credentials. Review the GuardDuty finding to determine which API calls initiated the finding. Use Amazon Detective to review the API calls in context.
  • C. Log in to the AWS account by using administrator credentials. Review the GuardDuty finding for details about the IAM credentials that were used. Use the IAM console to add a DenyAll policy to the IAM principal.
  • D. Log in to the AWS account by using read-only credentials. Review the GuardDuty finding to determine which API calls initiated the finding. Use AWS CloudTrail Insights and AWS CloudTrail Lake to review the API calls in context.

Answer: B

Explanation:
This answer is correct because logging in with read-only credentials minimizes the risk of accidental or malicious changes to the AWS account. Reviewing the GuardDuty finding can help identify which API calls initiated the finding and which IAM principal was involved. Using Amazon Detective can help analyze and visualize the API calls in context, such as which resources were affected, which IP addresses were used, and how the activity deviated from normal patterns. Amazon Detective can also help identify related findings from other sources, such as AWS Config or AWS Audit Manager.

NEW QUESTION 4
A Development team has built an experimental environment to test a simple static web application. It has built an isolated VPC with a private and a public subnet. The public subnet holds only an Application Load Balancer, a NAT gateway, and an internet gateway. The private subnet holds all of the Amazon EC2 instances.
There are 3 different types of servers. Each server type has its own security group that limits access to only required connectivity. The security groups have both inbound and outbound rules applied. Each subnet has both inbound and outbound network ACLs applied to limit access to only required connectivity.
Which of the following should the team check if a server cannot establish an outbound connection to the internet? (Select THREE.)

  • A. The route tables and the outbound rules on the appropriate private subnet security group
  • B. The outbound network ACL rules on the private subnet and the Inbound network ACL rules on the public subnet
  • C. The outbound network ACL rules on the private subnet and both the inbound and outbound rules on the public subnet
  • D. The rules on any host-based firewall that may be applied on the Amazon EC2 instances
  • E. The Security Group applied to the Application Load Balancer and NAT gateway
  • F. That the 0.0.0.0/0 route in the private subnet route table points to the NAT gateway in the public subnet

Answer: CEF

Explanation:
These are the factors that could affect the outbound connection to the internet from a server in a private subnet. The outbound network ACL rules on the private subnet and both the inbound and outbound rules on the public subnet must allow the traffic to pass through. The security group applied to the Application Load Balancer and NAT gateway must also allow the traffic from the private subnet. The 0.0.0.0/0 route in the private subnet route table must point to the NAT gateway in the public subnet, not the internet gateway. The other options are either irrelevant or incorrect for troubleshooting the outbound connection issue.
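The route-table check can be illustrated with a small sketch. The entries below are sample data shaped like `aws ec2 describe-route-tables` output (the IDs are made up); the 0.0.0.0/0 entry from a private subnet should target a NAT gateway (`nat-…`), not the internet gateway (`igw-…`):

```python
# Sample route-table entries for a private subnet (IDs are made up).
private_subnet_routes = [
    {"DestinationCidrBlock": "10.0.0.0/16", "GatewayId": "local"},
    {"DestinationCidrBlock": "0.0.0.0/0", "NatGatewayId": "nat-0abc1234def567890"},
]

def default_route_target(routes):
    """Return the target of the 0.0.0.0/0 route, or None if missing."""
    for route in routes:
        if route.get("DestinationCidrBlock") == "0.0.0.0/0":
            # EC2 reports the target under different keys by target type.
            return route.get("NatGatewayId") or route.get("GatewayId")
    return None

target = default_route_target(private_subnet_routes)
# For a private subnet, the default route should go through NAT.
is_correct = target is not None and target.startswith("nat-")
```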

NEW QUESTION 5
A company needs a forensic-logging solution for hundreds of applications running in Docker on Amazon EC2. The solution must perform real-time analytics on the logs, must support the replay of messages, and must persist the logs.
Which AWS services should be used to meet these requirements? (Select TWO.)

  • A. Amazon Athena
  • B. Amazon Kinesis
  • C. Amazon SQS
  • D. Amazon Elasticsearch
  • E. Amazon EMR

Answer: BD

Explanation:
Amazon Kinesis and Amazon Elasticsearch are both suitable for forensic-logging solutions. Amazon Kinesis can collect, process, and analyze streaming data in real time3. Amazon Elasticsearch can store, search, and analyze log data using the popular open-source tool Elasticsearch. The other options are not designed for forensic-logging purposes. Amazon Athena is a query service that can analyze data in S3, Amazon SQS is a message queue service that can decouple and scale microservices, and Amazon EMR is a big data platform that can run Apache Spark and Hadoop clusters.

NEW QUESTION 6
A company wants to remove all SSH keys permanently from a specific subset of its Amazon Linux 2 Amazon EC2 instances that are using the same IAM instance profile. However, three individuals who have IAM user accounts will need to access these instances by using an SSH session to perform critical duties.
How can a security engineer provide the access to meet these requirements?

  • A. Assign an IAM policy to the instance profile to allow the EC2 instances to be managed by AWS Systems Manager. Provide the IAM user accounts with permission to use Systems Manager. Remove the SSH keys from the EC2 instances. Use Systems Manager Inventory to select the EC2 instance and connect.
  • B. Assign an IAM policy to the IAM user accounts to provide permission to use AWS Systems Manager Run Command. Remove the SSH keys from the EC2 instances. Use Run Command to open an SSH connection to the EC2 instance.
  • C. Assign an IAM policy to the instance profile to allow the EC2 instances to be managed by AWS Systems Manager. Provide the IAM user accounts with permission to use Systems Manager. Remove the SSH keys from the EC2 instances. Use Systems Manager Session Manager to select the EC2 instance and connect.
  • D. Assign an IAM policy to the IAM user accounts to provide permission to use the EC2 service in the AWS Management Console. Remove the SSH keys from the EC2 instances. Connect to the EC2 instance as the ec2-user through the AWS Management Console's EC2 SSH client method.

Answer: C

Explanation:
To provide access to the three individuals who have IAM user accounts to access the Amazon Linux 2 Amazon EC2 instances that are using the same IAM instance profile, the most appropriate solution would be to assign an IAM policy to the instance profile to allow the EC2 instances to be managed by AWS Systems Manager, provide the IAM user accounts with permission to use Systems Manager, remove the SSH keys from the EC2 instances, and use Systems Manager Session Manager to select the EC2 instance and connect.
References: AWS Systems Manager Session Manager - AWS Systems Manager
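The user-side permissions in option C can be sketched as an identity-based policy for the three IAM users. This is a minimal illustration only, not a hardened policy: the account ID, Region, and tag key/value are placeholders, and it assumes the instance profile already carries Systems Manager permissions (for example, the AmazonSSMManagedInstanceCore managed policy):

```python
import json

# Placeholder identifiers -- substitute real values.
ACCOUNT_ID = "111122223333"
REGION = "us-east-1"

# Identity-based policy for the IAM users: allow starting a Session
# Manager session only on instances carrying a specific (hypothetical) tag,
# plus managing the user's own sessions.
user_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ssm:StartSession",
            "Resource": f"arn:aws:ec2:{REGION}:{ACCOUNT_ID}:instance/*",
            "Condition": {
                "StringEquals": {"aws:ResourceTag/SSMAccess": "critical-duties"}
            },
        },
        {
            "Effect": "Allow",
            "Action": ["ssm:TerminateSession", "ssm:ResumeSession"],
            "Resource": f"arn:aws:ssm:*:{ACCOUNT_ID}:session/${{aws:username}}-*",
        },
    ],
}

policy_json = json.dumps(user_policy, indent=2)
```

With this in place, the SSH keys can be removed from the instances entirely; sessions go through the Systems Manager service rather than inbound port 22.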

NEW QUESTION 7
An application team wants to use AWS Certificate Manager (ACM) to request public certificates to ensure that data is secured in transit. The domains that are being used are not currently hosted on Amazon Route 53.
The application team wants to use an AWS managed distribution and caching solution to optimize requests to its systems and provide better points of presence to customers. The distribution solution will use a primary domain name that is customized. The distribution solution also will use several alternative domain names. The certificates must renew automatically over an indefinite period of time.
Which combination of steps should the application team take to deploy this architecture? (Select THREE.)

  • A. Request a certificate from ACM in the us-west-2 Region. Add the domain names that the certificate will secure.
  • B. Send an email message to the domain administrators to request validation of the domains for ACM.
  • C. Request validation of the domains for ACM through DNS. Insert CNAME records into each domain's DNS zone.
  • D. Create an Application Load Balancer for the caching solution. Select the newly requested certificate from ACM to be used for secure connections.
  • E. Create an Amazon CloudFront distribution for the caching solution. Enter the main CNAME record as the Origin Name. Enter the subdomain names or alternate names in the Alternate Domain Names Distribution Settings. Select the newly requested certificate from ACM to be used for secure connections.
  • F. Request a certificate from ACM in the us-east-1 Region. Add the domain names that the certificate will secure.

Answer: CEF

Explanation:
CloudFront is the AWS managed distribution and caching solution (E), certificates used with CloudFront must be requested in the us-east-1 Region (F), and DNS validation with CNAME records allows ACM to renew the certificates automatically for as long as the records remain in place (C).

NEW QUESTION 8
A company recently had a security audit in which the auditors identified multiple potential threats. These potential threats can cause usage pattern changes such as DNS access peak, abnormal instance traffic, abnormal network interface traffic, and unusual Amazon S3 API calls. The threats can come from different sources and can occur at any time. The company needs to implement a solution to continuously monitor its system and identify all these incoming threats in near-real time.
Which solution will meet these requirements?

  • A. Enable AWS CloudTrail logs, VPC flow logs, and DNS logs. Use Amazon CloudWatch Logs to manage these logs from a centralized account.
  • B. Enable AWS CloudTrail logs, VPC flow logs, and DNS logs. Use Amazon Macie to monitor these logs from a centralized account.
  • C. Enable Amazon GuardDuty from a centralized account. Use GuardDuty to manage AWS CloudTrail logs, VPC flow logs, and DNS logs.
  • D. Enable Amazon Inspector from a centralized account. Use Amazon Inspector to manage AWS CloudTrail logs, VPC flow logs, and DNS logs.

Answer: C

Explanation:
Q: Which data sources does GuardDuty analyze? GuardDuty analyzes CloudTrail management event logs, CloudTrail S3 data event logs, VPC Flow Logs, DNS query logs, and Amazon EKS audit logs. GuardDuty can also scan EBS volume data for possible malware when GuardDuty Malware Protection is enabled and identifies suspicious behavior indicative of malicious software in EC2 instance or container workloads. The service is optimized to consume large data volumes for near real-time processing of security detections. GuardDuty gives you access to built-in detection techniques developed and optimized for the cloud, which are maintained and continuously improved upon by GuardDuty engineering.

NEW QUESTION 9
A security engineer needs to implement a write-once-read-many (WORM) model for data that a company will store in Amazon S3 buckets. The company uses the S3 Standard storage class for all of its S3 buckets. The security engineer must en-sure that objects cannot be overwritten or deleted by any user, including the AWS account root user.
Which solution will meet these requirements?

  • A. Create new S3 buckets with S3 Object Lock enabled in compliance mode. Place objects in the S3 buckets.
  • B. Use S3 Glacier Vault Lock to attach a Vault Lock policy to new S3 buckets. Wait 24 hours to complete the Vault Lock process. Place objects in the S3 buckets.
  • C. Create new S3 buckets with S3 Object Lock enabled in governance mode. Place objects in the S3 buckets.
  • D. Create new S3 buckets with S3 Object Lock enabled in governance mode. Add a legal hold to the S3 buckets. Place objects in the S3 buckets.

Answer: A

Explanation:
S3 Object Lock in compliance mode enforces a WORM model in which a protected object version cannot be overwritten or deleted by any user, including the AWS account root user, until its retention period expires. Governance mode (options C and D) can be bypassed by users who have the s3:BypassGovernanceRetention permission, and S3 Glacier Vault Lock (option B) applies to S3 Glacier vaults, not to buckets using the S3 Standard storage class.
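A minimal boto3-style sketch of the bucket setup in option A. The bucket name and retention period are placeholders, and the actual calls are commented out because they require AWS credentials. Note that Object Lock can only be enabled when the bucket is created:

```python
# Placeholder bucket name.
BUCKET = "example-worm-audit-logs"

# Object Lock must be enabled at bucket creation time.
create_bucket_kwargs = {
    "Bucket": BUCKET,
    "ObjectLockEnabledForBucket": True,
}

# Default retention: COMPLIANCE mode cannot be shortened or removed by
# any user, including the account root user, until it expires.
object_lock_config = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {
        "DefaultRetention": {
            "Mode": "COMPLIANCE",
            "Years": 7,   # example retention period
        }
    },
}

# With credentials configured:
# import boto3
# s3 = boto3.client("s3")
# s3.create_bucket(**create_bucket_kwargs)
# s3.put_object_lock_configuration(
#     Bucket=BUCKET, ObjectLockConfiguration=object_lock_config)
```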

NEW QUESTION 10
A security engineer receives a notice from the AWS Abuse team about suspicious activity from a Linux-based Amazon EC2 instance that uses Amazon Elastic Block Store (Amazon EBS)-based storage. The instance is making connections to known malicious addresses.
The instance is in a development account within a VPC that is in the us-east-1 Region. The VPC contains an internet gateway and has a subnet in us-east-1a and us-east-1b. Each subnet is associated with a route table that uses the internet gateway as a default route. Each subnet also uses the default network ACL. The suspicious EC2 instance runs within the us-east-1b subnet. During an initial investigation, a security engineer discovers that the suspicious instance is the only instance that runs in the subnet.
Which response will immediately mitigate the attack and help investigate the root cause?

  • A. Log in to the suspicious instance and use the netstat command to identify remote connections. Use the IP addresses from these remote connections to create deny rules in the security group of the instance. Install diagnostic tools on the instance for investigation. Update the outbound network ACL for the subnet in us-east-1b to explicitly deny all connections as the first rule during the investigation of the instance.
  • B. Update the outbound network ACL for the subnet in us-east-1b to explicitly deny all connections as the first rule. Replace the security group with a new security group that allows connections only from a diagnostics security group. Update the outbound network ACL for the us-east-1b subnet to remove the deny all rule. Launch a new EC2 instance that has diagnostic tools. Assign the new security group to the new EC2 instance. Use the new EC2 instance to investigate the suspicious instance.
  • C. Ensure that the Amazon Elastic Block Store (Amazon EBS) volumes that are attached to the suspicious EC2 instance will not delete upon termination. Terminate the instance. Launch a new EC2 instance in us-east-1a that has diagnostic tools. Mount the EBS volumes from the terminated instance for investigation.
  • D. Create an AWS WAF web ACL that denies traffic to and from the suspicious instance. Attach the AWS WAF web ACL to the instance to mitigate the attack. Log in to the instance and install diagnostic tools to investigate the instance.

Answer: B

Explanation:
This option suggests updating the outbound network ACL for the subnet in us-east-1b to explicitly deny all connections as the first rule, replacing the security group with a new one that only allows connections from a diagnostics security group, and launching a new EC2 instance with diagnostic tools to investigate the suspicious instance. This option will immediately mitigate the attack and provide the necessary tools for investigation.

NEW QUESTION 11
An ecommerce company is developing new architecture for an application release. The company needs to implement TLS for incoming traffic to the application. Traffic for the application will originate from the internet. TLS does not have to be implemented in an end-to-end configuration because the company is concerned about impacts on performance. The incoming traffic types will be HTTP and HTTPS. The application uses ports 80 and 443.
What should a security engineer do to meet these requirements?

  • A. Create a public Application Load Balancer. Create two listeners: one listener on port 80 and one listener on port 443. Create one target group. Create a rule to forward traffic from port 80 to the listener on port 443. Provision a public TLS certificate in AWS Certificate Manager (ACM). Attach the certificate to the listener on port 443.
  • B. Create a public Application Load Balancer. Create two listeners: one listener on port 80 and one listener on port 443. Create one target group. Create a rule to forward traffic from port 80 to the listener on port 443. Provision a public TLS certificate in AWS Certificate Manager (ACM). Attach the certificate to the listener on port 80.
  • C. Create a public Network Load Balancer. Create two listeners: one listener on port 80 and one listener on port 443. Create one target group. Create a rule to forward traffic from port 80 to the listener on port 443. Set the protocol for the listener on port 443 to TLS.
  • D. Create a public Network Load Balancer. Create a listener on port 443. Create one target group. Create a rule to forward traffic from port 443 to the target group. Set the protocol for the listener on port 443 to TLS.

Answer: A

Explanation:
An Application Load Balancer (ALB) is a type of load balancer that operates at the application layer (layer 7) of the OSI model. It can distribute incoming traffic based on the content of the request, such as the host header, path, or query parameters. An ALB can also terminate TLS connections and decrypt requests from clients before sending them to the targets.
To implement TLS for incoming traffic to the application, the following steps are required:
  • Create a public ALB in a public subnet and register the EC2 instances as targets in a target group.
  • Create two listeners for the ALB, one on port 80 for HTTP traffic and one on port 443 for HTTPS traffic.
  • Create a rule for the listener on port 80 to redirect HTTP requests to HTTPS using the same host, path, and query parameters.
  • Provision a public TLS certificate in AWS Certificate Manager (ACM) for the domain name of the application. ACM is a service that lets you easily provision, manage, and deploy public and private SSL/TLS certificates for use with AWS services and your internal connected resources.
  • Attach the certificate to the listener on port 443 and configure the security policy to negotiate secure connections between clients and the ALB.
  • Configure the security groups for the ALB and the EC2 instances to allow inbound traffic on ports 80 and 443 from the internet and outbound traffic on any port to the EC2 instances.
This solution will meet the requirements of implementing TLS for incoming traffic without impacting performance or requiring end-to-end encryption. The ALB will handle the TLS termination and decryption, while forwarding unencrypted requests to the EC2 instances.
Verified References:
  • https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html
  • https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html
  • https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html
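The two listeners described in the steps above can be sketched as request payloads shaped like the ELBv2 CreateListener API input. All ARNs below are placeholders:

```python
# Placeholder ARNs -- substitute real ones.
ALB_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/example/abc123"
CERT_ARN = "arn:aws:acm:us-east-1:111122223333:certificate/example-cert-id"
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/example/def456"

# HTTP listener on port 80: redirect everything to HTTPS on 443,
# preserving host, path, and query string by default.
http_listener = {
    "LoadBalancerArn": ALB_ARN,
    "Protocol": "HTTP",
    "Port": 80,
    "DefaultActions": [
        {
            "Type": "redirect",
            "RedirectConfig": {
                "Protocol": "HTTPS",
                "Port": "443",
                "StatusCode": "HTTP_301",
            },
        }
    ],
}

# HTTPS listener on port 443: terminate TLS with the ACM certificate
# and forward the decrypted traffic to the target group.
https_listener = {
    "LoadBalancerArn": ALB_ARN,
    "Protocol": "HTTPS",
    "Port": 443,
    "Certificates": [{"CertificateArn": CERT_ARN}],
    "DefaultActions": [{"Type": "forward", "TargetGroupArn": TARGET_GROUP_ARN}],
}
```

Because TLS terminates at the ALB, the targets receive plain HTTP, which satisfies the requirement that TLS not be end-to-end.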

NEW QUESTION 12
A Security Engineer is asked to update an AWS CloudTrail log file prefix for an existing trail. When attempting to save the change in the CloudTrail console, the Security Engineer receives the following error message: "There is a problem with the bucket policy."
What will enable the Security Engineer to save the change?

  • A. Create a new trail with the updated log file prefix, and then delete the original trail.
  • B. Update the existing bucket policy in the Amazon S3 console to allow the Security Engineer's Principal to perform PutBucketPolicy, and then update the log file prefix in the CloudTrail console.
  • C. Update the existing bucket policy in the Amazon S3 console with the new log file prefix, and then update the log file prefix in the CloudTrail console.
  • D. Update the existing bucket policy in the Amazon S3 console to allow the Security Engineer's Principal to perform GetBucketPolicy, and then update the log file prefix in the CloudTrail console.

Answer: C

Explanation:
The correct answer is C. Update the existing bucket policy in the Amazon S3 console with the new log file prefix, and then update the log file prefix in the CloudTrail console.
According to the AWS documentation1, a bucket policy is a resource-based policy that you can use to grant access permissions to your Amazon S3 bucket and the objects in it. Only the bucket owner can associate a policy with a bucket. The permissions attached to the bucket apply to all of the objects in the bucket that are owned by the bucket owner.
When you create a trail in CloudTrail, you can specify an existing S3 bucket or create a new one to store your log files. CloudTrail automatically creates a bucket policy for your S3 bucket that grants CloudTrail write-only access to deliver log files to your bucket. The bucket policy also grants read-only access to AWS services that you can use to view and analyze your log data, such as Amazon Athena, Amazon CloudWatch Logs, and Amazon QuickSight.
If you want to update the log file prefix for an existing trail, you must also update the existing bucket policy in the S3 console with the new log file prefix. The log file prefix is part of the resource ARN that identifies the objects in your bucket that CloudTrail can access. If you don’t update the bucket policy with the new log file prefix, CloudTrail will not be able to deliver log files to your bucket, and you will receive an error message when you try to save the change in the CloudTrail console.
The other options are incorrect because:
  • A. Creating a new trail with the updated log file prefix, and then deleting the original trail is not necessary and may cause data loss or inconsistency. You can simply update the existing trail and its associated bucket policy with the new log file prefix.
  • B. Updating the existing bucket policy in the S3 console to allow the Security Engineer’s Principal to perform PutBucketPolicy is not relevant to this issue. The PutBucketPolicy action allows you to create or replace a policy on a bucket, but it does not affect CloudTrail’s ability to deliver log files to your bucket. You still need to update the existing bucket policy with the new log file prefix.
  • D. Updating the existing bucket policy in the S3 console to allow the Security Engineer’s Principal to perform GetBucketPolicy is not relevant to this issue. The GetBucketPolicy action allows you to retrieve a policy on a bucket, but it does not affect CloudTrail’s ability to deliver log files to your bucket. You still need to update the existing bucket policy with the new log file prefix.
References:
1: Using bucket policies - Amazon Simple Storage Service
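The dependency between the trail's log file prefix and the bucket policy is visible in the resource ARN of the CloudTrail write statement. Below is a minimal sketch of such a policy; the bucket name, account ID, and prefix are placeholders:

```python
import json

def cloudtrail_bucket_policy(bucket, account_id, prefix):
    """Build the statements CloudTrail needs on its S3 bucket.

    The log file prefix is part of the s3:PutObject resource ARN, so
    changing the trail's prefix without regenerating this policy
    breaks log delivery -- which is why the console reports
    "There is a problem with the bucket policy."
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AWSCloudTrailAclCheck",
                "Effect": "Allow",
                "Principal": {"Service": "cloudtrail.amazonaws.com"},
                "Action": "s3:GetBucketAcl",
                "Resource": f"arn:aws:s3:::{bucket}",
            },
            {
                "Sid": "AWSCloudTrailWrite",
                "Effect": "Allow",
                "Principal": {"Service": "cloudtrail.amazonaws.com"},
                "Action": "s3:PutObject",
                "Resource": f"arn:aws:s3:::{bucket}/{prefix}/AWSLogs/{account_id}/*",
                "Condition": {
                    "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
                },
            },
        ],
    }

# Updating the trail's prefix means regenerating the policy with it:
policy = cloudtrail_bucket_policy("example-trail-bucket", "111122223333", "new-prefix")
policy_json = json.dumps(policy, indent=2)
```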

NEW QUESTION 13
A company has two AWS accounts. One account is for development workloads. The other account is for production workloads. For compliance reasons the production account contains all the AWS Key Management. Service (AWS KMS) keys that the company uses for encryption.
The company applies an IAM role to an AWS Lambda function in the development account to allow secure access to AWS resources. The Lambda function must access a specific KMS customer managed key that exists in the production account to encrypt the Lambda function's data.
Which combination of steps should a security engineer take to meet these requirements? (Select TWO.)

  • A. Configure the key policy for the customer managed key in the production account to allow access to the Lambda service.
  • B. Configure the key policy for the customer managed key in the production account to allow access to the IAM role of the Lambda function in the development account.
  • C. Configure a new IAM policy in the production account with permissions to use the customer managed key. Apply the IAM policy to the IAM role that the Lambda function in the development account uses.
  • D. Configure a new key policy in the development account with permissions to use the customer managed key. Apply the key policy to the IAM role that the Lambda function in the development account uses.
  • E. Configure the IAM role for the Lambda function in the development account by attaching an IAM policy that allows access to the customer managed key in the production account.

Answer: BE

Explanation:
To allow a Lambda function in one AWS account to access a KMS customer managed key in another AWS account, the following steps are required:
  • Configure the key policy for the customer managed key in the production account to allow access to the IAM role of the Lambda function in the development account. A key policy is a resource-based policy that defines who can use or manage a KMS key. To grant cross-account access to a KMS key, you must specify the AWS account ID and the IAM role ARN of the external principal in the key policy statement. For more information, see Allowing users in other accounts to use a KMS key.
  • Configure the IAM role for the Lambda function in the development account by attaching an IAM policy that allows access to the customer managed key in the production account. An IAM policy is an identity-based policy that defines what actions an IAM entity can perform on which resources. To allow an IAM role to use a KMS key in another account, you must specify the KMS key ARN and the kms:Encrypt action (or any other action that requires access to the KMS key) in the IAM policy statement. For more information, see Using IAM policies with AWS KMS.
This solution will meet the requirements of allowing secure access to a KMS customer managed key across AWS accounts.
The other options are incorrect because they either do not grant cross-account access to the KMS key (A, C), or do not use a valid policy type for KMS keys (D).
Verified References:
  • https://docs.aws.amazon.com/kms/latest/developerguide/iam-policies.html
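The two policy documents from options B and E can be sketched as follows. The account IDs, role name, key ID, and action list are illustrative placeholders; both sides must allow the use for the cross-account call to succeed:

```python
# Placeholder identifiers -- substitute real values.
PROD_ACCOUNT = "111122223333"
DEV_ACCOUNT = "444455556666"
KEY_ARN = f"arn:aws:kms:us-east-1:{PROD_ACCOUNT}:key/example-key-id"
LAMBDA_ROLE_ARN = f"arn:aws:iam::{DEV_ACCOUNT}:role/example-lambda-role"

# Key policy statement in the PRODUCTION account: trust the Lambda
# function's execution role from the development account.
key_policy_statement = {
    "Sid": "AllowDevLambdaUse",
    "Effect": "Allow",
    "Principal": {"AWS": LAMBDA_ROLE_ARN},
    "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey"],
    "Resource": "*",   # in a key policy, "*" refers to this key itself
}

# IAM policy in the DEVELOPMENT account, attached to the Lambda role:
# grants the role permission to use the specific production key.
iam_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey"],
            "Resource": KEY_ARN,
        }
    ],
}
```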

NEW QUESTION 14
A company has implemented AWS WAF and Amazon CloudFront for an application. The application runs on Amazon EC2 instances that are part of an Auto Scaling group. The Auto Scaling group is behind an Application Load Balancer (ALB).
The AWS WAF web ACL uses an AWS Managed Rules rule group and is associated with the CloudFront distribution. AWS WAF inspects the requests that CloudFront receives, and CloudFront uses the ALB as the distribution's origin.
During a security review, a security engineer discovers that the infrastructure is susceptible to a large, layer 7 DDoS attack.
How can the security engineer improve the security at the edge of the solution to defend against this type of attack?

  • A. Configure the CloudFront distribution to use the Lambda@Edge feature. Create an AWS Lambda function that imposes a rate limit on CloudFront viewer requests. Block the request if the rate limit is exceeded.
  • B. Configure the AWS WAF web ACL so that the web ACL has more capacity units to process all AWS WAF rules faster.
  • C. Configure AWS WAF with a rate-based rule that imposes a rate limit that automatically blocks requests when the rate limit is exceeded.
  • D. Configure the CloudFront distribution to use AWS WAF as its origin instead of the ALB.

Answer: C

Explanation:
To improve the security at the edge of the solution to defend against a large, layer 7 DDoS attack, the security engineer should do the following:
AWS-Certified-Security-Specialty dumps exhibit Configure AWS WAF with a rate-based rule that imposes a rate limit that automatically blocks requests when the rate limit is exceeded. This allows the security engineer to use a rule that tracks the number of requests from a single IP address and blocks subsequent requests if they exceed a specified threshold within a specified time period.
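A rate-based rule of this kind can be sketched as a WAFv2 rule definition. The rule name, priority, and the 2,000-request limit below are illustrative assumptions, not values from the question:

```python
# Hypothetical WAFv2 rate-based rule: blocks any single IP address that
# exceeds the limit within WAF's 5-minute evaluation window.
rate_based_rule = {
    "Name": "LimitRequestsPerIP",
    "Priority": 1,
    "Statement": {
        "RateBasedStatement": {
            "Limit": 2000,             # max requests per 5 minutes, per IP
            "AggregateKeyType": "IP",  # count requests per source IP
        }
    },
    "Action": {"Block": {}},           # automatically block when exceeded
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "LimitRequestsPerIP",
    },
}
# With boto3, a dict like this could be included in the Rules list passed
# to the wafv2 client's update_web_acl call for the CloudFront-scoped ACL.
```

Because the blocking is automatic once the threshold is crossed, no custom Lambda@Edge logic is needed at the edge.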

NEW QUESTION 15
A developer at a company uses an SSH key to access multiple Amazon EC2 instances. The company discovers that the SSH key has been posted on a public GitHub repository. A security engineer verifies that the key has not been used recently.
How should the security engineer prevent unauthorized access to the EC2 instances?

  • A. Delete the key pair from the EC2 console. Create a new key pair.
  • B. Use the ModifyInstanceAttribute API operation to change the key on any EC2 instance that is using the key.
  • C. Restrict SSH access in the security group to only known corporate IP addresses.
  • D. Update the key pair in any AMI that is used to launch the EC2 instances. Restart the EC2 instances.

Answer: C

Explanation:
To prevent unauthorized access to the EC2 instances, the security engineer should do the following:
AWS-Certified-Security-Specialty dumps exhibit Restrict SSH access in the security group to only known corporate IP addresses. This allows the security engineer to use a virtual firewall that controls inbound and outbound traffic for their EC2 instances, and limit SSH access to only trusted sources.
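The restricted ingress rule can be sketched as the IpPermissions structure a security group update would take. The CIDR range below is a documentation-range placeholder standing in for the company's real corporate addresses:

```python
# Hypothetical corporate range (RFC 5737 documentation block).
CORPORATE_CIDR = "203.0.113.0/24"

# Ingress rule allowing SSH (TCP 22) only from the corporate range.
ssh_ingress_rule = {
    "IpProtocol": "tcp",
    "FromPort": 22,
    "ToPort": 22,
    "IpRanges": [
        {"CidrIp": CORPORATE_CIDR, "Description": "Corporate offices only"}
    ],
}

# With boto3, this could be applied roughly as:
#   ec2 = boto3.client("ec2")
#   ec2.revoke_security_group_ingress(GroupId=sg_id, IpPermissions=[old_open_rule])
#   ec2.authorize_security_group_ingress(GroupId=sg_id, IpPermissions=[ssh_ingress_rule])
```

Removing any existing 0.0.0.0/0 rule on port 22 is the step that actually closes the exposure; the new rule then reopens SSH only to trusted sources.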

NEW QUESTION 16
A company needs to store multiple years of financial records. The company wants to use Amazon S3 to store copies of these documents. The company must implement a solution to prevent the documents from being edited, replaced, or deleted for 7 years after the documents are stored in Amazon S3. The solution must also encrypt the documents at rest.
A security engineer creates a new S3 bucket to store the documents. What should the security engineer do next to meet these requirements?

  • A. Configure S3 server-side encryption. Create an S3 bucket policy that has an explicit deny rule for all users for s3:DeleteObject and s3:PutObject API calls. Configure S3 Object Lock to use governance mode with a retention period of 7 years.
  • B. Configure S3 server-side encryption. Configure S3 Versioning on the S3 bucket. Configure S3 Object Lock to use compliance mode with a retention period of 7 years.
  • C. Configure S3 Versioning. Configure S3 Intelligent-Tiering on the S3 bucket to move the documents to S3 Glacier Deep Archive storage. Use S3 server-side encryption immediately. Expire the objects after 7 years.
  • D. Set up S3 Event Notifications and use S3 server-side encryption. Configure S3 Event Notifications to target an AWS Lambda function that will review any S3 API call to the S3 bucket and deny the s3:DeleteObject and s3:PutObject API calls. Remove the S3 event notification after 7 years.

Answer: B
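The compliance-mode retention and default encryption in answer B can be sketched as the two configuration documents the bucket would need. The KMS choice for the encryption algorithm is an assumption; SSE-S3 (AES256) would also satisfy the encryption-at-rest requirement:

```python
# Object Lock in compliance mode: the retention cannot be shortened or
# removed by any user, including the root user, for the full 7 years.
object_lock_configuration = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {
        "DefaultRetention": {
            "Mode": "COMPLIANCE",
            "Years": 7,
        }
    },
}

# Default server-side encryption for the bucket (aws:kms assumed here).
encryption_configuration = {
    "Rules": [
        {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
    ]
}

# With boto3 these would be applied roughly as:
#   s3.put_object_lock_configuration(Bucket=bucket,
#       ObjectLockConfiguration=object_lock_configuration)
#   s3.put_bucket_encryption(Bucket=bucket,
#       ServerSideEncryptionConfiguration=encryption_configuration)
```

Versioning is a prerequisite for Object Lock, which is why answer B includes it; governance mode (answer A) can be bypassed by users with s3:BypassGovernanceRetention, so it does not guarantee 7-year immutability.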

NEW QUESTION 17
A company has deployed Amazon GuardDuty and now wants to implement automation for potential threats. The company has decided to start with RDP brute force attacks that come from Amazon EC2 instances in the company’s AWS environment. A security engineer needs to implement a solution that blocks the detected communication from a suspicious instance until investigation and potential remediation can occur.
Which solution will meet these requirements?

  • A. Configure GuardDuty to send the event to an Amazon Kinesis data stream. Process the event with an Amazon Kinesis Data Analytics for Apache Flink application that sends a notification to the company through Amazon Simple Notification Service (Amazon SNS). Add rules to the network ACL to block traffic to and from the suspicious instance.
  • B. Configure GuardDuty to send the event to Amazon EventBridge (Amazon CloudWatch Events). Deploy an AWS WAF web ACL. Process the event with an AWS Lambda function that sends a notification to the company through Amazon Simple Notification Service (Amazon SNS) and adds a web ACL rule to block traffic to and from the suspicious instance.
  • C. Enable AWS Security Hub to ingest GuardDuty findings and send the event to Amazon EventBridge (Amazon CloudWatch Events). Deploy AWS Network Firewall. Process the event with an AWS Lambda function that adds a rule to a Network Firewall firewall policy to block traffic to and from the suspicious instance.
  • D. Enable AWS Security Hub to ingest GuardDuty findings. Configure an Amazon Kinesis data stream as an event destination for Security Hub. Process the event with an AWS Lambda function that replaces the security group of the suspicious instance with a security group that does not allow any connections.

Answer: C

Explanation:
This pattern ingests GuardDuty findings through AWS Security Hub into Amazon EventBridge, and an AWS Lambda function responds by adding a blocking rule to an AWS Network Firewall policy, as described in:
https://aws.amazon.com/blogs/security/automatically-block-suspicious-traffic-with-aws-network-firewall-and-a
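The EventBridge trigger for this flow can be sketched as an event pattern. The finding type string is the one GuardDuty uses for RDP brute force findings; the rule name and Lambda target wiring are omitted here:

```python
import json

# Hypothetical EventBridge event pattern: matches only GuardDuty
# RDP brute force findings so the Lambda remediation runs on exactly
# the threat class the company chose to automate first.
event_pattern = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
    "detail": {"type": ["UnauthorizedAccess:EC2/RDPBruteForce"]},
}

# With boto3, this could be registered roughly as:
#   events = boto3.client("events")
#   events.put_rule(Name="rdp-bruteforce-to-lambda",
#                   EventPattern=json.dumps(event_pattern))
```

The Lambda target would then parse the instance details from the finding and add the corresponding block rule to the Network Firewall policy.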

NEW QUESTION 18
A security engineer recently rotated all IAM access keys in an AWS account. The security engineer then configured AWS Config and enabled the following AWS Config managed rules: mfa-enabled-for-iam-console-access, iam-user-mfa-enabled, access-keys-rotated, and iam-user-unused-credentials-check.
The security engineer notices that all resources are displaying as noncompliant after the IAM GenerateCredentialReport API operation is invoked.
What could be the reason for the noncompliant status?

  • A. The IAM credential report was generated within the past 4 hours.
  • B. The security engineer does not have the GenerateCredentialReport permission.
  • C. The security engineer does not have the GetCredentialReport permission.
  • D. The AWS Config rules have a MaximumExecutionFrequency value of 24 hours.

Answer: D

Explanation:
The correct answer is D. The AWS Config rules have a MaximumExecutionFrequency value of 24 hours. According to the AWS documentation1, the MaximumExecutionFrequency parameter specifies the maximum frequency with which AWS Config runs evaluations for a rule. For AWS Config managed rules, this value can be one of the following:
AWS-Certified-Security-Specialty dumps exhibit One_Hour
AWS-Certified-Security-Specialty dumps exhibit Three_Hours
AWS-Certified-Security-Specialty dumps exhibit Six_Hours
AWS-Certified-Security-Specialty dumps exhibit Twelve_Hours
AWS-Certified-Security-Specialty dumps exhibit TwentyFour_Hours
If the rule is triggered by configuration changes, it will still run evaluations when AWS Config delivers the configuration snapshot. However, if the rule is triggered periodically, it will not run evaluations more often than the specified frequency.
In this case, the security engineer enabled four AWS Config managed rules that are triggered periodically. Therefore, these rules will only run evaluations every 24 hours, regardless of when the IAM credential report is generated. This means that the resources will display as noncompliant until the next evaluation cycle, which could take up to 24 hours after the IAM access keys are rotated.
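As a sketch, one of these managed rules could be redefined with a shorter evaluation frequency. The rule name, source identifier, and 90-day parameter below follow the AWS managed rule for access key rotation, but treat the exact values as assumptions:

```python
import json

# Hypothetical ConfigRule definition: evaluates hourly instead of the
# 24-hour frequency described above, so compliance status updates sooner
# after keys are rotated.
config_rule = {
    "ConfigRuleName": "access-keys-rotated",
    "Source": {
        "Owner": "AWS",  # AWS managed rule
        "SourceIdentifier": "ACCESS_KEYS_ROTATED",
    },
    "InputParameters": json.dumps({"maxAccessKeyAge": "90"}),
    "MaximumExecutionFrequency": "One_Hour",
}

# With boto3, this could be applied roughly as:
#   config = boto3.client("config")
#   config.put_config_rule(ConfigRule=config_rule)
```

Shortening MaximumExecutionFrequency is the lever that reduces the lag between rotating the keys and the rules reporting compliance.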
The other options are incorrect because:
AWS-Certified-Security-Specialty dumps exhibit A. The IAM credential report can be generated at any time, but it will not affect the compliance status of the resources until the next evaluation cycle of the AWS Config rules.
AWS-Certified-Security-Specialty dumps exhibit B. The security engineer was able to invoke the IAM GenerateCredentialReport API operation, which means they have the GenerateCredentialReport permission. This permission is required to generate a credential report that lists all IAM users in an AWS account and their credential status2.
AWS-Certified-Security-Specialty dumps exhibit C. The security engineer does not need the GetCredentialReport permission to enable or evaluate AWS Config rules. This permission is required to retrieve a credential report that was previously generated by using the GenerateCredentialReport operation2.
References:
1: AWS::Config::ConfigRule - AWS CloudFormation
2: IAM: Generate and retrieve IAM credential reports

NEW QUESTION 19
A security engineer wants to forward custom application-security logs from an Amazon EC2 instance to Amazon CloudWatch. The security engineer installs the CloudWatch agent on the EC2 instance and adds the path of the logs to the CloudWatch configuration file. However, CloudWatch does not receive the logs. The security engineer verifies that the awslogs service is running on the EC2 instance.
What should the security engineer do next to resolve the issue?

  • A. Add AWS CloudTrail to the trust policy of the EC2 instance. Send the custom logs to CloudTrail instead of CloudWatch.
  • B. Add Amazon S3 to the trust policy of the EC2 instance. Configure the application to write the custom logs to an S3 bucket that CloudWatch can use to ingest the logs.
  • C. Add Amazon Inspector to the trust policy of the EC2 instance. Use Amazon Inspector instead of the CloudWatch agent to collect the custom logs.
  • D. Attach the CloudWatchAgentServerPolicy AWS managed policy to the EC2 instance role.

Answer: D

Explanation:
The correct answer is D. Attach the CloudWatchAgentServerPolicy AWS managed policy to the EC2 instance role.
According to the AWS documentation1, the CloudWatch agent is a software agent that you can install on your EC2 instances to collect system-level metrics and logs. To use the CloudWatch agent, you need to attach an IAM role or user to the EC2 instance that grants permissions for the agent to perform actions on your behalf. The CloudWatchAgentServerPolicy is an AWS managed policy that provides the necessary permissions for the agent to write metrics and logs to CloudWatch2. By attaching this policy to the EC2 instance role, the security engineer can resolve the issue of CloudWatch not receiving the custom application-security logs.
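The fix can be sketched as attaching the managed policy to the instance role. The policy ARN is the real AWS managed policy ARN; the role name is a hypothetical placeholder for whatever role the instance profile uses:

```python
# AWS managed policy that grants the CloudWatch agent permission to
# put metrics and log events.
POLICY_ARN = "arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy"

# Hypothetical role already attached to the EC2 instance profile.
ROLE_NAME = "ec2-cloudwatch-agent-role"

# With boto3, the attachment could be done roughly as:
#   iam = boto3.client("iam")
#   iam.attach_role_policy(RoleName=ROLE_NAME, PolicyArn=POLICY_ARN)
```

Once the role carries this policy, the already-running agent can deliver the custom log files it was configured to watch, with no change to the agent configuration itself.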
The other options are incorrect for the following reasons:
AWS-Certified-Security-Specialty dumps exhibit A. Adding AWS CloudTrail to the trust policy of the EC2 instance is not relevant, because CloudTrail is a service that records API activity in your AWS account, not custom application logs3. Sending the custom logs to CloudTrail instead of CloudWatch would not meet the requirement of forwarding them to CloudWatch.
AWS-Certified-Security-Specialty dumps exhibit B. Adding Amazon S3 to the trust policy of the EC2 instance is not necessary, because S3 is a storage service that does not require any trust relationship with EC2 instances4. Configuring the application to write the custom logs to an S3 bucket that CloudWatch can use to ingest the logs would be an alternative solution, but it would be more complex and costly than using the CloudWatch agent directly.
AWS-Certified-Security-Specialty dumps exhibit C. Adding Amazon Inspector to the trust policy of the EC2 instance is not helpful, because Inspector is a service that scans EC2 instances for software vulnerabilities and unintended network exposure, not custom application logs5. Using Amazon Inspector instead of the CloudWatch agent would not meet the requirement of forwarding them to CloudWatch.
References:
1: Collect metrics, logs, and traces with the CloudWatch agent - Amazon CloudWatch
2: CloudWatchAgentServerPolicy - AWS Managed Policy
3: What Is AWS CloudTrail? - AWS CloudTrail
4: Amazon S3 FAQs - Amazon Web Services
5: Automated Software Vulnerability Management - Amazon Inspector - AWS

NEW QUESTION 20
......

100% Valid and Newest Version AWS-Certified-Security-Specialty Questions & Answers shared by Downloadfreepdf.net, Get Full Dumps HERE: https://www.downloadfreepdf.net/AWS-Certified-Security-Specialty-pdf-download.html (New 372 Q&As)