AWS-Solution-Architect-Associate Exam - Amazon AWS Certified Solutions Architect - Associate

certleader.com

Exam Code: AWS-Solution-Architect-Associate (Practice Exam Latest Test Questions VCE PDF)
Exam Name: Amazon AWS Certified Solutions Architect - Associate
Certification Provider: Amazon

Free demo questions for Amazon AWS-Solution-Architect-Associate Exam Dumps Below:

NEW QUESTION 1

A company has an on-premises MySQL database that handles transactional data. The company is migrating the database to the AWS Cloud. The migrated database must maintain compatibility with the company's applications that use the database. The migrated database also must scale automatically during periods of increased demand.
Which migration solution will meet these requirements?

  • A. Use native MySQL tools to migrate the database to Amazon RDS for MySQL. Configure elastic storage scaling.
  • B. Migrate the database to Amazon Redshift by using the mysqldump utility. Turn on Auto Scaling for the Amazon Redshift cluster.
  • C. Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon Aurora. Turn on Aurora Auto Scaling.
  • D. Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon DynamoDB. Configure an Auto Scaling policy.

Answer: C

Explanation:
To migrate a MySQL database to AWS with compatibility and scalability, Amazon Aurora is a suitable option. Aurora is compatible with MySQL and can scale automatically with Aurora Auto Scaling. AWS Database Migration Service (AWS DMS) can be used to migrate the database from on-premises to Aurora with minimal downtime. References:
✑ What Is Amazon Aurora?
✑ Using Amazon Aurora Auto Scaling with Aurora Replicas
✑ What Is AWS Database Migration Service?
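As the explanation notes, Aurora Auto Scaling adjusts the number of Aurora Replicas through Application Auto Scaling. The sketch below builds the parameters that would be passed to the `register_scalable_target` call of boto3's `application-autoscaling` client; the cluster name and capacity limits are hypothetical:

```python
# Sketch of the Application Auto Scaling parameters used to enable
# Aurora Auto Scaling on a cluster's read replicas. The cluster name
# and min/max limits are made up for illustration; the real call would
# be made with boto3's "application-autoscaling" client.

def aurora_scaling_target(cluster_id, min_replicas, max_replicas):
    """Build RegisterScalableTarget parameters for Aurora replica scaling."""
    return {
        "ServiceNamespace": "rds",
        "ResourceId": f"cluster:{cluster_id}",
        "ScalableDimension": "rds:cluster:ReadReplicaCount",
        "MinCapacity": min_replicas,
        "MaxCapacity": max_replicas,
    }

params = aurora_scaling_target("my-aurora-cluster", 1, 8)
print(params["ResourceId"])  # cluster:my-aurora-cluster
```

A target-tracking scaling policy (for example on average CPU or average connections of the replicas) would then be attached to this scalable target.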

NEW QUESTION 2

A company is developing a mobile gaming app in a single AWS Region. The app runs on multiple Amazon EC2 instances in an Auto Scaling group. The company stores the app data in Amazon DynamoDB. The app communicates by using TCP traffic and UDP traffic between the users and the servers. The application will be used globally. The company wants to ensure the lowest possible latency for all users.
Which solution will meet these requirements?

  • A. Use AWS Global Accelerator to create an accelerator. Create an Application Load Balancer (ALB) behind an accelerator endpoint that uses Global Accelerator integration and listens on the TCP and UDP ports. Update the Auto Scaling group to register instances on the ALB.
  • B. Use AWS Global Accelerator to create an accelerator. Create a Network Load Balancer (NLB) behind an accelerator endpoint that uses Global Accelerator integration and listens on the TCP and UDP ports. Update the Auto Scaling group to register instances on the NLB.
  • C. Create an Amazon CloudFront content delivery network (CDN) endpoint. Create a Network Load Balancer (NLB) behind the endpoint that listens on the TCP and UDP ports. Update the Auto Scaling group to register instances on the NLB. Update CloudFront to use the NLB as the origin.
  • D. Create an Amazon CloudFront content delivery network (CDN) endpoint. Create an Application Load Balancer (ALB) behind the endpoint that listens on the TCP and UDP ports. Update the Auto Scaling group to register instances on the ALB. Update CloudFront to use the ALB as the origin.

Answer: B

Explanation:
AWS Global Accelerator is a networking service that improves the performance and availability of applications for global users. It uses the AWS global network to route user traffic to the optimal endpoint based on performance and health. It also provides static IP addresses that act as a fixed entry point to the applications and support both TCP and UDP protocols1. By using AWS Global Accelerator, the solution can ensure the lowest possible latency for all users.
* A. Use AWS Global Accelerator to create an accelerator. Create an Application Load Balancer (ALB) behind an accelerator endpoint that uses Global Accelerator integration and listens on the TCP and UDP ports. Update the Auto Scaling group to register instances on the ALB. This solution will not work, as ALB does not support the UDP protocol2.
* C. Create an Amazon CloudFront content delivery network (CDN) endpoint. Create a Network Load Balancer (NLB) behind the endpoint that listens on the TCP and UDP ports. Update the Auto Scaling group to register instances on the NLB. Update CloudFront to use the NLB as the origin. This solution will not work, as CloudFront does not support the UDP protocol3.
* D. Create an Amazon CloudFront content delivery network (CDN) endpoint. Create an Application Load Balancer (ALB) behind the endpoint that listens on the TCP and UDP ports. Update the Auto Scaling group to register instances on the ALB. Update CloudFront to use the ALB as the origin. This solution will not work, as CloudFront and ALB do not support the UDP protocol23.
Reference URL: https://aws.amazon.com/global-accelerator/

NEW QUESTION 3

A company provides a Voice over Internet Protocol (VoIP) service that uses UDP connections. The service consists of Amazon EC2 instances that run in an Auto Scaling group. The company has deployments across multiple AWS Regions.
The company needs to route users to the Region with the lowest latency. The company also needs automated failover between Regions.
Which solution will meet these requirements?

  • A. Deploy a Network Load Balancer (NLB) and an associated target group. Associate the target group with the Auto Scaling group. Use the NLB as an AWS Global Accelerator endpoint in each Region.
  • B. Deploy an Application Load Balancer (ALB) and an associated target group. Associate the target group with the Auto Scaling group. Use the ALB as an AWS Global Accelerator endpoint in each Region.
  • C. Deploy a Network Load Balancer (NLB) and an associated target group. Associate the target group with the Auto Scaling group. Create an Amazon Route 53 latency record that points to aliases for each NLB. Create an Amazon CloudFront distribution that uses the latency record as an origin.
  • D. Deploy an Application Load Balancer (ALB) and an associated target group. Associate the target group with the Auto Scaling group. Create an Amazon Route 53 weighted record that points to aliases for each ALB. Deploy an Amazon CloudFront distribution that uses the weighted record as an origin.

Answer: A

Explanation:
https://aws.amazon.com/global-accelerator/faqs/
HTTP/HTTPS - ALB; TCP and UDP - NLB. Lowest-latency routing, higher throughput, automated failover, and Anycast IP addressing - AWS Global Accelerator. Caching at edge locations - CloudFront.
AWS Global Accelerator automatically checks the health of your applications and routes user traffic only to healthy application endpoints. If the health status changes or you make configuration updates, AWS Global Accelerator reacts instantaneously to route your users to the next available endpoint.

NEW QUESTION 4

A company has an application that generates a large number of files, each approximately 5 MB in size. The files are stored in Amazon S3. Company policy requires the files to be stored for 4 years before they can be deleted. Immediate accessibility is always required because the files contain critical business data that is not easy to reproduce. The files are frequently accessed in the first 30 days after object creation but are rarely accessed after the first 30 days.
Which storage solution is MOST cost-effective?

  • A. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Glacier 30 days after object creation. Delete the files 4 years after object creation.
  • B. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) 30 days after object creation. Delete the files 4 years after object creation.
  • C. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days after object creation. Delete the files 4 years after object creation.
  • D. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days after object creation. Move the files to S3 Glacier 4 years after object creation.

Answer: C

Explanation:
https://aws.amazon.com/s3/storage-classes/
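A lifecycle rule of the kind the options describe, transitioning after 30 days and deleting after 4 years, can be sketched as a plain Python dictionary. The rule ID and the 4 x 365-day retention arithmetic are illustrative assumptions; the rule would be applied with S3's `put_bucket_lifecycle_configuration`:

```python
# Hypothetical S3 lifecycle rule for this scenario: transition objects
# to S3 Standard-IA 30 days after creation, then expire (delete) them
# after 4 years. The rule ID is made up; in boto3 this dictionary would
# go inside the "Rules" list of put_bucket_lifecycle_configuration.

DAYS_IN_4_YEARS = 4 * 365  # 1460 days, ignoring leap days

lifecycle_rule = {
    "ID": "ia-after-30d-delete-after-4y",
    "Status": "Enabled",
    "Filter": {"Prefix": ""},  # apply to every object in the bucket
    "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
    "Expiration": {"Days": DAYS_IN_4_YEARS},
}

print(lifecycle_rule["Expiration"]["Days"])  # 1460
```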

NEW QUESTION 5

A company is designing a cloud communications platform that is driven by APIs. The application is hosted on Amazon EC2 instances behind a Network Load Balancer (NLB). The company uses Amazon API Gateway to provide external users with access to the application through APIs. The company wants to protect the platform against web exploits like SQL injection and also wants to detect and mitigate large, sophisticated DDoS attacks.
Which combination of solutions provides the MOST protection? (Select TWO.)

  • A. Use AWS WAF to protect the NLB.
  • B. Use AWS Shield Advanced with the NLB.
  • C. Use AWS WAF to protect Amazon API Gateway.
  • D. Use Amazon GuardDuty with AWS Shield Standard.
  • E. Use AWS Shield Standard with Amazon API Gateway.

Answer: BC

Explanation:
AWS Shield Advanced provides expanded DDoS attack protection for your Amazon EC2 instances, Elastic Load Balancing load balancers, CloudFront distributions, Route 53 hosted zones, and AWS Global Accelerator standard accelerators.
AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to your protected web application resources. You can protect the following resource types:
  • Amazon CloudFront distribution
  • Amazon API Gateway REST API
  • Application Load Balancer
  • AWS AppSync GraphQL API
  • Amazon Cognito user pool
https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html

NEW QUESTION 6

A company wants to rearchitect a large-scale web application to a serverless microservices architecture. The application uses Amazon EC2 instances and is written in Python.
The company selected one component of the web application to test as a microservice. The component supports hundreds of requests each second. The company wants to create and test the microservice on an AWS solution that supports Python. The solution must also scale automatically and require minimal infrastructure and minimal operational support.
Which solution will meet these requirements?

  • A. Use a Spot Fleet with auto scaling of EC2 instances that run the most recent Amazon Linux operating system.
  • B. Use an AWS Elastic Beanstalk web server environment that has high availability configured.
  • C. Use Amazon Elastic Kubernetes Service (Amazon EKS). Launch Auto Scaling groups of self-managed EC2 instances.
  • D. Use an AWS Lambda function that runs custom developed code.

Answer: D

Explanation:
AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers. You can use Lambda to create and test microservices that are written in Python or other supported languages. Lambda scales automatically to handle the number of requests per second. You only pay for the compute time you consume. Lambda also integrates with other AWS services, such as Amazon API Gateway, Amazon S3, Amazon DynamoDB, and Amazon SQS, to enable event-driven architectures. Lambda has minimal infrastructure and operational overhead, as you do not need to manage servers, operating systems, patches, or scaling policies.
The other options are not serverless solutions and require more infrastructure and operational support. They also do not scale automatically to handle the number of requests per second. A Spot Fleet is a collection of EC2 instances that run on spare capacity at low prices. However, Spot Instances can be interrupted by AWS at any time, which can affect the availability and performance of your microservice. AWS Elastic Beanstalk is a service that automates the deployment and management of web applications on EC2 instances. However, you still need to provision, configure, and monitor the underlying EC2 instances and load balancers. Amazon EKS is a service that runs Kubernetes on AWS. However, you still need to create, configure, and manage the EC2 instances that form the Kubernetes cluster and nodes. You also need to install and update the Kubernetes software and tools. References:
✑ What is AWS Lambda?
✑ Building Lambda functions with Python
✑ Create a layer for a Lambda Python function
✑ AWS Lambda – Function in Python
✑ How do I call my AWS Lambda function from a local python script?
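As a minimal illustration of the Lambda approach, here is a stateless Python handler. The event shape and field names are made up for the example; in the real architecture the function would typically sit behind Amazon API Gateway:

```python
# Minimal sketch of a stateless Python Lambda handler for the
# microservice component. The event fields are hypothetical.
import json

def lambda_handler(event, context):
    """Handle one request and return an API-Gateway-style response."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello {name}"}),
    }

# Local invocation for testing (the context argument is unused here):
resp = lambda_handler({"name": "aws"}, None)
print(resp["statusCode"])  # 200
```

Because each invocation is independent, Lambda can run many copies concurrently, which is how it absorbs hundreds of requests per second without any capacity planning.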

NEW QUESTION 7

A company wants to move a multi-tiered application from on premises to the AWS Cloud to improve the application's performance. The application consists of application tiers that communicate with each other by way of RESTful services. Transactions are dropped when one tier becomes overloaded. A solutions architect must design a solution that resolves these issues and modernizes the application.
Which solution meets these requirements and is the MOST operationally efficient?

  • A. Use Amazon API Gateway and direct transactions to the AWS Lambda functions as the application layer. Use Amazon Simple Queue Service (Amazon SQS) as the communication layer between application services.
  • B. Use Amazon CloudWatch metrics to analyze the application performance history to determine the servers' peak utilization during the performance failures. Increase the size of the application servers' Amazon EC2 instances to meet the peak requirements.
  • C. Use Amazon Simple Notification Service (Amazon SNS) to handle the messaging between application servers running on Amazon EC2 in an Auto Scaling group. Use Amazon CloudWatch to monitor the SNS queue length and scale up and down as required.
  • D. Use Amazon Simple Queue Service (Amazon SQS) to handle the messaging between application servers running on Amazon EC2 in an Auto Scaling group. Use Amazon CloudWatch to monitor the SQS queue length and scale up when communication failures are detected.

Answer: A

Explanation:
https://aws.amazon.com/getting-started/hands-on/build-serverless-web-app-lambda-apigateway-s3-dynamodb-cognito/module-4/
Build a Serverless Web Application with AWS Lambda, Amazon API Gateway, AWS Amplify, Amazon DynamoDB, and Amazon Cognito. This tutorial shows a setup similar to the one in this question.

NEW QUESTION 8

A media company is evaluating the possibility of moving its systems to the AWS Cloud. The company needs at least 10 TB of storage with the maximum possible I/O performance for video processing, 300 TB of very durable storage for storing media content, and 900 TB of storage to meet requirements for archival media that is no longer in use.
Which set of services should a solutions architect recommend to meet these requirements?

  • A. Amazon EBS for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage
  • B. Amazon EBS for maximum performance, Amazon EFS for durable data storage, and Amazon S3 Glacier for archival storage
  • C. Amazon EC2 instance store for maximum performance, Amazon EFS for durable data storage, and Amazon S3 for archival storage
  • D. Amazon EC2 instance store for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage

Answer: A

Explanation:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html

NEW QUESTION 9

A company has an on-premises data center that is running out of storage capacity. The company wants to migrate its storage infrastructure to AWS while minimizing bandwidth costs. The solution must allow for immediate retrieval of data at no additional cost.
How can these requirements be met?

  • A. Deploy Amazon S3 Glacier Vault and enable expedited retrieval. Enable provisioned retrieval capacity for the workload.
  • B. Deploy AWS Storage Gateway using cached volumes. Use Storage Gateway to store data in Amazon S3 while retaining copies of frequently accessed data subsets locally.
  • C. Deploy AWS Storage Gateway using stored volumes to store data locally. Use Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3.
  • D. Deploy AWS Direct Connect to connect with the on-premises data center. Configure AWS Storage Gateway to store data locally. Use Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3.

Answer: B

Explanation:
The solution that will meet the requirements is to deploy AWS Storage Gateway using cached volumes and use Storage Gateway to store data in Amazon S3 while retaining copies of frequently accessed data subsets locally. This solution will allow the company to migrate its storage infrastructure to AWS while minimizing bandwidth costs, as it will only transfer data that is not cached locally. The solution will also allow for immediate retrieval of data at no additional cost, as the cached volumes will provide low-latency access to the most recently used data. The data stored in Amazon S3 will be durable, scalable, and secure.
The other solutions are not as effective as the first one because they either do not meet the requirements or introduce additional costs or complexity.

Deploying Amazon S3 Glacier Vault and enabling expedited retrieval will not meet the requirements, as it will incur additional costs for both storage and retrieval. Amazon S3 Glacier is a low-cost storage service for data archiving and backup, but it has longer retrieval times than Amazon S3. Expedited retrieval is a feature that allows faster access to data, but it charges a higher fee per GB retrieved. Provisioned retrieval capacity is a feature that reserves dedicated capacity for expedited retrievals, but it also charges a monthly fee per provisioned capacity unit.

Deploying AWS Storage Gateway using stored volumes to store data locally, and using Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3, will not meet the requirements, as it will not migrate the storage infrastructure to AWS but only create backups. Stored volumes store the primary data locally and back up snapshots to Amazon S3. This solution will not reduce the storage capacity needed on-premises, nor will it leverage the benefits of cloud storage.

Deploying AWS Direct Connect to connect with the on-premises data center, configuring AWS Storage Gateway to store data locally, and asynchronously backing up point-in-time snapshots to Amazon S3 will likewise only create backups. AWS Direct Connect establishes a dedicated network connection between the on-premises data center and AWS, which can reduce network costs and increase bandwidth. However, this solution will also not reduce the storage capacity needed on-premises, nor will it leverage the benefits of cloud storage.
References:
✑ AWS Storage Gateway
✑ Cached volumes - AWS Storage Gateway
✑ Amazon S3 Glacier
✑ Retrieving archives from Amazon S3 Glacier vaults - Amazon Simple Storage Service
✑ Stored volumes - AWS Storage Gateway
✑ AWS Direct Connect

NEW QUESTION 10

A company is building a RESTful serverless web application on AWS by using Amazon API Gateway and AWS Lambda. The users of this web application will be geographically distributed, and the company wants to reduce the latency of API requests to these users.
Which type of endpoint should a solutions architect use to meet these requirements?

  • A. Private endpoint
  • B. Regional endpoint
  • C. Interface VPC endpoint
  • D. Edge-optimized endpoint

Answer: D

Explanation:
An edge-optimized API endpoint is best for geographically distributed clients, as it routes the API requests to the nearest CloudFront Point of Presence (POP). This reduces the latency and improves the performance of the API. Edge-optimized endpoints are the default type for API Gateway REST APIs1.
A regional API endpoint is intended for clients in the same region as the API, and it does not use CloudFront to route the requests. A private API endpoint is an API endpoint that can only be accessed from a VPC using an interface VPC endpoint. A regional or private endpoint would not meet the requirement of reducing the latency for geographically distributed users1.
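The endpoint type is chosen when the API is created. Below is a small sketch of the parameters that would select an edge-optimized endpoint via boto3's `apigateway.create_rest_api`; the API name is hypothetical:

```python
# Sketch of the create_rest_api parameters that select an API Gateway
# endpoint type. "EDGE" is the edge-optimized (CloudFront-fronted)
# option; the API name below is made up for the example.

def rest_api_params(name, endpoint_type="EDGE"):
    """Build the parameter dict for apigateway.create_rest_api."""
    # The documented endpoint types for REST APIs:
    assert endpoint_type in {"EDGE", "REGIONAL", "PRIVATE"}
    return {
        "name": name,
        "endpointConfiguration": {"types": [endpoint_type]},
    }

params = rest_api_params("global-game-api")
print(params["endpointConfiguration"]["types"])  # ['EDGE']
```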

NEW QUESTION 11

A company wants to share accounting data with an external auditor. The data is stored in an Amazon RDS DB instance that resides in a private subnet. The auditor has its own AWS account and requires its own copy of the database.
What is the MOST secure way for the company to share the database with the auditor?

  • A. Create a read replica of the database. Configure IAM standard database authentication to grant the auditor access.
  • B. Export the database contents to text files. Store the files in an Amazon S3 bucket. Create a new IAM user for the auditor. Grant the user access to the S3 bucket.
  • C. Copy a snapshot of the database to an Amazon S3 bucket. Create an IAM user. Share the user's keys with the auditor to grant access to the object in the S3 bucket.
  • D. Create an encrypted snapshot of the database. Share the snapshot with the auditor. Allow access to the AWS Key Management Service (AWS KMS) encryption key.

Answer: D

Explanation:
This answer is correct because it meets the requirements of sharing the database with the auditor in a secure way. You can create an encrypted snapshot of the database by using AWS Key Management Service (AWS KMS) to encrypt the snapshot
with a customer managed key. You can share the snapshot with the auditor by modifying the permissions of the snapshot and specifying the AWS account ID of the auditor. You can also allow access to the AWS KMS encryption key by adding a key policy statement that grants permissions to the auditor’s account. This way, you can ensure that only the auditor can access and restore the snapshot in their own AWS account.
References:
✑ https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ShareSnapsh ot.html
✑ https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html#key- policy-default-allow-root-enable-iam
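The key policy statement described above can be sketched as follows. The auditor's account ID is a placeholder, and the exact action list is an assumption based on the common cross-account snapshot-sharing pattern:

```python
# Hedged sketch of a KMS key policy statement that lets an external
# (auditor) account use the customer managed key that encrypted the
# shared RDS snapshot. The account ID is a placeholder.
import json

def auditor_key_policy_statement(auditor_account_id):
    """Build one policy statement granting the auditor's account key use."""
    return {
        "Sid": "AllowAuditorUseOfKey",
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{auditor_account_id}:root"},
        # Typical permissions needed to copy/restore an encrypted snapshot:
        "Action": ["kms:Decrypt", "kms:DescribeKey", "kms:CreateGrant"],
        "Resource": "*",
    }

stmt = auditor_key_policy_statement("111122223333")
print(json.dumps(stmt, indent=2))
```

This statement would be merged into the key's existing policy; the snapshot itself is shared separately via the RDS snapshot-sharing permissions.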

NEW QUESTION 12

A company offers a food delivery service that is growing rapidly. Because of the growth, the company’s order processing system is experiencing scaling problems during peak traffic hours. The current architecture includes the following:
• A group of Amazon EC2 instances that run in an Amazon EC2 Auto Scaling group to collect orders from the application
• Another group of EC2 instances that run in an Amazon EC2 Auto Scaling group to fulfill orders
The order collection process occurs quickly, but the order fulfillment process can take longer. Data must not be lost because of a scaling event.
A solutions architect must ensure that the order collection process and the order fulfillment process can both scale properly during peak traffic hours. The solution must optimize
utilization of the company’s AWS resources. Which solution meets these requirements?

  • A. Use Amazon CloudWatch metrics to monitor the CPU usage of each instance in the Auto Scaling groups. Configure each Auto Scaling group's minimum capacity according to peak workload values.
  • B. Use Amazon CloudWatch metrics to monitor the CPU usage of each instance in the Auto Scaling groups. Configure a CloudWatch alarm to invoke an Amazon Simple Notification Service (Amazon SNS) topic that creates additional Auto Scaling groups on demand.
  • C. Provision two Amazon Simple Queue Service (Amazon SQS) queues: one for order collection and another for order fulfillment. Configure the EC2 instances to poll their respective queue. Scale the Auto Scaling groups based on notifications that the queues send.
  • D. Provision two Amazon Simple Queue Service (Amazon SQS) queues: one for order collection and another for order fulfillment. Configure the EC2 instances to poll their respective queue. Create a metric based on a backlog per instance calculation. Scale the Auto Scaling groups based on this metric.

Answer: D

Explanation:
The number of instances in your Auto Scaling group can be driven by how long it takes to process a message and the acceptable amount of latency (queue delay). The solution is to use a backlog per instance metric with the target value being the acceptable backlog per instance to maintain.
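The backlog-per-instance arithmetic described above can be sketched directly. The latency and processing-time figures below are illustrative assumptions, not values from the question:

```python
# Sketch of the backlog-per-instance calculation: the acceptable
# backlog per instance is (acceptable latency) / (time to process one
# message), and desired capacity follows from the current queue depth.
import math

def desired_capacity(queue_depth, acceptable_latency_s, process_time_s):
    """Instances needed so each one's backlog stays within the target."""
    acceptable_backlog_per_instance = acceptable_latency_s / process_time_s
    return math.ceil(queue_depth / acceptable_backlog_per_instance)

# Example: 1500 queued orders, 10 minutes of acceptable delay,
# 1 second to fulfill each order:
print(desired_capacity(1500, 600, 1.0))  # 3
```

A target-tracking policy on a custom CloudWatch metric (ApproximateNumberOfMessages divided by running instances) keeps the groups sized this way automatically.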

NEW QUESTION 13

A solutions architect is designing the cloud architecture for a new application being deployed on AWS. The process should run in parallel while adding and removing application nodes as needed based on the number of jobs to be processed. The processor application is stateless. The solutions architect must ensure that the application is loosely coupled and the job items are durably stored.
Which design should the solutions architect use?

  • A. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch configuration that uses the AMI. Create an Auto Scaling group using the launch configuration. Set the scaling policy for the Auto Scaling group to add and remove nodes based on CPU usage.
  • B. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch configuration that uses the AMI. Create an Auto Scaling group using the launch configuration. Set the scaling policy for the Auto Scaling group to add and remove nodes based on network usage.
  • C. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue.
  • D. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of messages published to the SNS topic.

Answer: C

Explanation:
"Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue."
In this case we need a durable and loosely coupled solution for storing jobs. Amazon SQS is ideal for this use case and can be configured to scale dynamically based on the number of jobs waiting in the queue. To configure this scaling, use the backlog per instance metric, with the target value being the acceptable backlog per instance to maintain. To calculate your backlog per instance, start with the ApproximateNumberOfMessages queue attribute to determine the length of the SQS queue.

NEW QUESTION 14

A company hosts a website on Amazon EC2 instances behind an Application Load Balancer (ALB). The website serves static content. Website traffic is increasing, and the company is concerned about a potential increase in cost.
What should a solutions architect do to reduce the cost of the website?

  • A. Create an Amazon CloudFront distribution to cache static files at edge locations.
  • B. Create an Amazon ElastiCache cluster. Connect the ALB to the ElastiCache cluster to serve cached files.
  • C. Create an AWS WAF web ACL and associate it with the ALB. Add a rule to the web ACL to cache static files.
  • D. Create a second ALB in an alternative AWS Region. Route user traffic to the closest Region to minimize data transfer costs.

Answer: A

Explanation:
Amazon CloudFront is a content delivery network (CDN) that can improve the performance and reduce the cost of serving static content from a website. CloudFront
can cache static files at edge locations closer to the users, reducing the latency and data transfer costs. CloudFront can also integrate with Amazon S3 as the origin for the static content, eliminating the need for EC2 instances to host the website. CloudFront meets all the requirements of the question, while the other options do not. References:
✑ https://aws.amazon.com/blogs/architecture/architecting-a-low-cost-web-content-publishing-system/
✑ https://nodeployfriday.com/posts/static-website-hosting/
✑ https://aws.amazon.com/cloudfront/

NEW QUESTION 15

A company needs to minimize the cost of its 1 Gbps AWS Direct Connect connection. The company's average connection utilization is less than 10%. A solutions architect must recommend a solution that will reduce the cost without compromising security.
Which solution will meet these requirements?

  • A. Set up a new 1 Gbps Direct Connect connection. Share the connection with another AWS account.
  • B. Set up a new 200 Mbps Direct Connect connection in the AWS Management Console.
  • C. Contact an AWS Direct Connect Partner to order a 1 Gbps connection. Share the connection with another AWS account.
  • D. Contact an AWS Direct Connect Partner to order a 200 Mbps hosted connection for an existing AWS account.

Answer: D

Explanation:
The company needs a cheaper (200 Mbps) connection. Option B is incorrect because dedicated connections can be ordered only at port speeds of 1, 10, or 100 Gbps. For more flexibility, a hosted connection from an AWS Direct Connect Partner supports port speeds between 50 Mbps and 10 Gbps. https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect.html

NEW QUESTION 16

A company has implemented a self-managed DNS service on AWS. The solution consists of the following:
• Amazon EC2 instances in different AWS Regions
• Endpoints of a standard accelerator in AWS Global Accelerator
The company wants to protect the solution against DDoS attacks. What should a solutions architect do to meet this requirement?

  • A. Subscribe to AWS Shield Advanced Add the accelerator as a resource to protect
  • B. Subscribe to AWS Shield Advanced Add the EC2 instances as resources to protect
  • C. Create an AWS WAF web ACL that includes a rate-based rule Associate the web ACL with the accelerator
  • D. Create an AWS WAF web ACL that includes a rate-based rule Associate the web ACL with the EC2 instances

Answer: A

Explanation:
AWS Shield is a managed service that provides protection against Distributed Denial of Service (DDoS) attacks for applications running on AWS. AWS Shield Standard is automatically enabled for all AWS customers at no additional cost. AWS Shield Advanced is an optional paid service that provides additional protections against more sophisticated and larger attacks for your applications running on Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, and Route 53. https://docs.aws.amazon.com/waf/latest/developerguide/ddos-event-mitigation-logic-gax.html

NEW QUESTION 17

A company has an ecommerce checkout workflow that writes an order to a database and calls a service to process the payment. Users are experiencing timeouts during the checkout process. When users resubmit the checkout form, multiple unique orders are created for the same desired transaction.
How should a solutions architect refactor this workflow to prevent the creation of multiple orders?

  • A. Configure the web application to send an order message to Amazon Kinesis Data Firehose. Set the payment service to retrieve the message from Kinesis Data Firehose and process the order.
  • B. Create a rule in AWS CloudTrail to invoke an AWS Lambda function based on the logged application path request. Use Lambda to query the database, call the payment service, and pass in the order information.
  • C. Store the order in the database. Send a message that includes the order number to Amazon Simple Notification Service (Amazon SNS). Set the payment service to poll Amazon SNS, retrieve the message, and process the order.
  • D. Store the order in the database. Send a message that includes the order number to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Set the payment service to retrieve the message and process the order. Delete the message from the queue.

Answer: D

Explanation:
This approach keeps order creation and payment processing as separate, atomic steps. By sending the order information to an SQS FIFO queue, the payment service can process orders one at a time, in the order they were received. If the payment service is unable to process an order, the message can be retried later, preventing the creation of multiple orders. Deleting the message from the queue after it is processed prevents the same message from being processed more than once.
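The deduplication behavior described above can be sketched with a small in-memory stand-in for an SQS FIFO queue. This is not the boto3 API; the class and method names are illustrative, modeling only the idea that a repeated deduplication ID within the dedup window is silently dropped.

```python
# Minimal in-memory sketch of SQS FIFO deduplication. A resubmitted
# checkout form carries the same deduplication ID, so only one order
# message ever lands on the queue.

class FifoQueueSketch:
    def __init__(self):
        self.messages = []
        self.seen_dedup_ids = set()

    def send(self, body: str, dedup_id: str) -> bool:
        # Within the dedup window, a repeated dedup_id is silently dropped.
        if dedup_id in self.seen_dedup_ids:
            return False
        self.seen_dedup_ids.add(dedup_id)
        self.messages.append(body)
        return True

    def receive(self):
        return self.messages[0] if self.messages else None

    def delete(self, body: str):
        # Deleting after processing prevents reprocessing the same order.
        self.messages.remove(body)

queue = FifoQueueSketch()
# The user times out and resubmits the same checkout form.
queue.send("order-1001", dedup_id="order-1001")
queue.send("order-1001", dedup_id="order-1001")  # duplicate, dropped

msg = queue.receive()       # payment service sees exactly one order
queue.delete(msg)
print(len(queue.messages))  # 0 -- no duplicate order remains
```

The real service gives you this for free via content-based deduplication or an explicit `MessageDeduplicationId` on `SendMessage`.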

NEW QUESTION 18

A company has hundreds of Amazon EC2 Linux-based instances in the AWS Cloud. Systems administrators have used shared SSH keys to manage the instances After a recent audit, the company's security team is mandating the removal of all shared keys. A solutions architect must design a solution that provides secure access to the EC2 instances.
Which solution will meet this requirement with the LEAST amount of administrative overhead?

  • A. Use AWS Systems Manager Session Manager to connect to the EC2 instances.
  • B. Use AWS Security Token Service (AWS STS) to generate one-time SSH keys on demand.
  • C. Allow shared SSH access to a set of bastion instances. Configure all other instances to allow only SSH access from the bastion instances.
  • D. Use an Amazon Cognito custom authorizer to authenticate users. Invoke an AWS Lambda function to generate a temporary SSH key.

Answer: A

Explanation:
Session Manager is a fully managed AWS Systems Manager capability. With Session Manager, you can manage your Amazon Elastic Compute Cloud (Amazon EC2) instances, edge devices, on-premises servers, and virtual machines (VMs). You can use either an interactive one-click browser-based shell or the AWS Command Line Interface (AWS CLI). Session Manager provides secure and auditable node management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys. Session Manager also allows you to comply with corporate policies that require controlled access to managed nodes, strict security practices, and fully auditable logs with node access details, while providing end users with simple one-click cross-platform access to your managed nodes. https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html
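In practice, granting an administrator Session Manager access comes down to an identity-based IAM policy rather than distributing keys. The fragment below is a minimal sketch modeled on the AWS quickstart examples; the exact resource scoping (specific instance ARNs, session-name conditions) would be tightened for a real deployment, and the instances themselves still need the SSM Agent plus an instance profile with the `AmazonSSMManagedInstanceCore` managed policy.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ssm:StartSession"],
      "Resource": "arn:aws:ec2:*:*:instance/*"
    },
    {
      "Effect": "Allow",
      "Action": ["ssm:TerminateSession", "ssm:ResumeSession"],
      "Resource": "arn:aws:ssm:*:*:session/${aws:username}-*"
    }
  ]
}
```

Because access is governed entirely by IAM, revoking an administrator is a policy detach rather than a fleet-wide key rotation.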

NEW QUESTION 19

A company has a multi-tier application that runs six front-end web servers in an Amazon EC2 Auto Scaling group in a single Availability Zone behind an Application Load Balancer (ALB). A solutions architect needs to modify the infrastructure to be highly available without modifying the application.
Which architecture should the solutions architect choose that provides high availability?

  • A. Create an Auto Scaling group that uses three instances across each of two Regions.
  • B. Modify the Auto Scaling group to use three instances across each of two Availability Zones.
  • C. Create an Auto Scaling template that can be used to quickly create more instances in another Region.
  • D. Change the ALB in front of the Amazon EC2 instances in a round-robin configuration to balance traffic to the web tier.

Answer: B

Explanation:
High availability can be enabled for this architecture quite simply by modifying the existing Auto Scaling group to use multiple Availability Zones. The ASG automatically balances instances across the configured zones, so you don't need to specify an instance count per AZ.
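The even-spread behavior can be illustrated with a toy function. The real AZRebalance logic is internal to the Auto Scaling service; this sketch only models the idea that desired capacity is divided as evenly as possible across the configured zones.

```python
# Illustrative sketch of how an Auto Scaling group spreads its desired
# capacity evenly across configured Availability Zones. Not an AWS API.

def distribute(desired_capacity: int, azs: list) -> dict:
    base, extra = divmod(desired_capacity, len(azs))
    # The first `extra` AZs each get one additional instance.
    return {az: base + (1 if i < extra else 0) for i, az in enumerate(azs)}

# Six instances in one AZ: a single AZ outage takes down the whole web tier.
print(distribute(6, ["us-east-1a"]))                # {'us-east-1a': 6}
# The same capacity across two AZs survives the loss of either zone.
print(distribute(6, ["us-east-1a", "us-east-1b"]))  # 3 and 3
```

With three instances per zone, losing either Availability Zone still leaves half the web tier serving traffic behind the ALB, and the ASG then launches replacements in the surviving zone.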

NEW QUESTION 20

A company has an Amazon S3 data lake that is governed by AWS Lake Formation. The company wants to create a visualization in Amazon QuickSight by joining the data in the data lake with operational data that is stored in an Amazon Aurora MySQL database. The company wants to enforce column-level authorization so that the company's marketing team can access only a subset of columns in the database.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Use Amazon EMR to ingest the data directly from the database to the QuickSight SPICE engine. Include only the required columns.
  • B. Use AWS Glue Studio to ingest the data from the database to the S3 data lake. Attach an IAM policy to the QuickSight users to enforce column-level access control. Use Amazon S3 as the data source in QuickSight.
  • C. Use AWS Glue Elastic Views to create a materialized view for the database in Amazon S3. Create an S3 bucket policy to enforce column-level access control for the QuickSight users. Use Amazon S3 as the data source in QuickSight.
  • D. Use a Lake Formation blueprint to ingest the data from the database to the S3 data lake. Use Lake Formation to enforce column-level access control for the QuickSight users. Use Amazon Athena as the data source in QuickSight.

Answer: D

Explanation:
Enforce column-level authorization with Amazon QuickSight and AWS Lake Formation https://aws.amazon.com/blogs/big-data/enforce-column-level-authorization-with- amazon-quicksight-and-aws-lake-formation/
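The effect of a column-level Lake Formation grant can be pictured with a toy filter. Lake Formation enforces this server-side for Athena (and therefore for QuickSight queries); the sketch below, with made-up role and column names, only shows the resulting behavior, not the real API.

```python
# Toy illustration of column-level authorization: a query result is
# restricted to the columns granted to the caller's role. The grants
# table and role names here are hypothetical.

GRANTS = {"marketing": {"customer_id", "region", "campaign"}}

def filter_columns(role: str, rows: list) -> list:
    allowed = GRANTS.get(role, set())
    return [{k: v for k, v in row.items() if k in allowed} for row in rows]

rows = [{"customer_id": 1, "region": "EU",
         "campaign": "spring", "email": "x@example.com"}]
print(filter_columns("marketing", rows))
# email is withheld; only the granted column subset is visible
```

Because the filtering happens in the governance layer rather than in each consumer, the marketing team's QuickSight dashboards need no per-dataset IAM or bucket-policy plumbing, which is what makes option D the lowest-overhead choice.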

NEW QUESTION 21
......
