2022 Latest AWS SAP 100 Practice Questions (with Explanations)

Q1. A company runs a legacy system on a single m4.2xlarge Amazon EC2 instance with Amazon EBS storage. The EC2 instance runs both the web server and a self-managed Oracle database. A snapshot is made of the EBS volume every 12 hours, and an AMI was created from the fully configured EC2 instance. A recent event that terminated the EC2 instance led to several hours of downtime. The application was successfully launched from the AMI, but the age of the EBS snapshot and the repair of the database resulted in the loss of 8 hours of data. The system was also down for 4 hours while the Systems Operators manually performed these processes. What architectural changes will minimize downtime and reduce the chance of lost data?

A. Create an Amazon CloudWatch alarm to automatically recover the instance. Create a script that will check and repair the database upon reboot. Subscribe the Operations team to the Amazon SNS message generated by the CloudWatch alarm.

B. Run the application on m4.xlarge EC2 instances behind an Elastic Load Balancer/Application Load Balancer. Run the EC2 instances in an Auto Scaling group across multiple Availability Zones with a minimum instance count of two. Migrate the database to an Amazon RDS Oracle Multi-AZ DB instance.

C. Run the application on m4.2xlarge EC2 instances behind an Elastic Load Balancer/Application Load Balancer. Run the EC2 instances in an Auto Scaling group across multiple Availability Zones with a minimum instance count of one. Migrate the database to an Amazon RDS Oracle Multi-AZ DB instance.

D. Increase the web server instance count to two m4.xlarge instances and use Amazon Route 53 round-robin load balancing to spread the load. Enable Route 53 health checks on the web servers. Migrate the database to an Amazon RDS Oracle Multi-AZ DB instance.

Answer:B

Analyze:

A. Not highly available.
C. A minimum instance count of one is still not highly available.
D. Route 53 does not have a round-robin routing policy as such (weighted records with a 50/50 split at best), and without Auto Scaling this is not really scalable.

Q2. A Solutions Architect is working with a company that operates a standard three-tier web application in AWS. The web and application tiers run on Amazon EC2 and the database tier runs on Amazon RDS. The company is redesigning the web and application tiers to use Amazon API Gateway and AWS Lambda, and the company intends to deploy the new application within 6 months. The IT Manager has asked the Solutions Architect to reduce costs in the interim. Which solution will be MOST cost effective while maintaining reliability?

A. Use Spot Instances for the web tier, On-Demand Instances for the application tier, and Reserved Instances for the database tier.

B. Use On-Demand Instances for the web and application tiers, and Reserved Instances for the database tier.

C. Use Spot Instances for the web and application tiers, and Reserved Instances for the database tier.

D. Use Reserved Instances for the web, application, and database tiers.

Answer:B

Analyze:

A. Spot Instances can be interrupted.
C. Spot Instances can be interrupted.
D. Reserved Instances require at least a 1-year term, which wastes money after the 6-month migration.

Q3. A company uses Amazon S3 to store documents that may only be accessible to an Amazon EC2 instance in a certain virtual private cloud (VPC). The company fears that a malicious insider with access to this instance could also set up an EC2 instance in another VPC to access these documents. Which of the following solutions will provide the required protection?

A. Use an S3 VPC endpoint and an S3 bucket policy to limit access to this VPC endpoint.

B. Use EC2 instance profiles and an S3 bucket policy to limit access to the role attached to the instance profile.

C. Use S3 client-side encryption and store the key in the instance metadata.

D. Use S3 server-side encryption and protect the key with an encryption context.

Answer:A

Analyze:

B. The same role can be attached to another EC2 instance in another VPC.
C. Instance metadata is not a safe place to store a key.
D. Another EC2 instance can use the same encryption context as well.
A works because endpoint connections cannot be extended out of a VPC: resources on the other side of a VPN connection, VPC peering connection, AWS Direct Connect connection, or ClassicLink connection to your VPC cannot use the endpoint to communicate with resources in the endpoint's VPC.
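
To make option A concrete, here is a minimal sketch (Python/boto3) of a bucket policy that denies every request that does not arrive through the VPC endpoint, using the documented aws:SourceVpce condition key. The bucket name and endpoint ID are hypothetical placeholders.

```python
import json
import boto3

s3 = boto3.client("s3")

BUCKET = "example-documents-bucket"        # hypothetical bucket name
VPCE_ID = "vpce-0123456789abcdef0"         # hypothetical endpoint ID

# Deny all S3 actions on the bucket unless the request comes in
# through the specified VPC endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnlessFromVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {"StringNotEquals": {"aws:SourceVpce": VPCE_ID}},
        }
    ],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```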

Q4. The Solutions Architect manages a serverless application that consists of multiple API gateways, AWS Lambda functions, Amazon S3 buckets, and Amazon DynamoDB tables. Customers say that a few application components are slow while loading dynamic images, and some are timing out with the "504 Gateway Timeout" error. While troubleshooting the scenario, the Solutions Architect confirms that DynamoDB monitoring metrics are at acceptable levels. Which of the following steps would be optimal for debugging these application issues? (Choose two.)

A. Parse HTTP logs in Amazon API Gateway for HTTP errors to determine the root cause of the errors.

B. Parse Amazon CloudWatch Logs to determine processing times for requested images at specified intervals.

C. Parse VPC Flow Logs to determine if there is packet loss between the Lambda function and S3.

D. Parse AWS X-Ray traces and analyze HTTP methods to determine the root cause of the HTTP errors.

E. Parse S3 access logs to determine if objects being accessed are from specific IP addresses to narrow the scope to geographic latency issues.

Answer:BD

Analyze:

A. API Gateway HTTP logs (CloudWatch) won't help find the root cause.
C. S3 is not VPC-based (unless a VPC endpoint is used), and the Lambda functions are not said to be VPC-enabled here, so there are no relevant VPC Flow Logs.
E. Dynamic images most likely go through a Lambda function, and S3 accessed by Lambda should not have geographic latency issues.

Q5. A Solutions Architect is designing the storage layer for a recently purchased application. The application will be running on Amazon EC2 instances and has the following layers and requirements: * Data layer: A POSIX file system shared across many systems. * Service layer: Static file content that requires block storage with more than 100k IOPS. Which combination of AWS services will meet these needs? (Choose two.)

A. Data layer - Amazon S3

B. Data layer - Amazon EC2 Ephemeral Storage

C. Data layer - Amazon EFS

D. Service layer - Amazon EBS volumes with Provisioned IOPS

E. Service layer - Amazon EC2 Ephemeral Storage

Answer:CE

Analyze:

A. Not POSIX.
B. Not persistent.
D. The maximum EBS IOPS is 64,000, below the 100k requirement.

Q6. A company has an application that runs a web service on Amazon EC2 instances and stores .jpg images in Amazon S3. The web traffic has a predictable baseline, but often demand spikes unpredictably for short periods of time. The application is loosely coupled and stateless. The .jpg images stored in Amazon S3 are accessed frequently for the first 15 to 20 days; they are seldom accessed thereafter but always need to be immediately available. The CIO has asked to find ways to reduce costs. Which of the following options will reduce costs? (Choose two.)

A. Purchase Reserved Instances for baseline capacity requirements and use On-Demand Instances for the demand spikes.

B. Configure a lifecycle policy to move the .jpg images on Amazon S3 to S3 IA after 30 days.

C. Use On-Demand Instances for baseline capacity requirements and use Spot Fleet instances for the demand spikes.

D. Configure a lifecycle policy to move the .jpg images on Amazon S3 to Amazon Glacier after 30 days.

E. Create a script that checks the load on all web servers and terminates unnecessary On-Demand Instances.

Answer:AB

Analyze:

C. Spot Instances for the spikes are not good, as Spot can be interrupted.
D. Glacier can take up to hours to retrieve data, but the images must stay immediately available.
E. An Auto Scaling group should be used instead of a custom script.
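
For option B, the lifecycle transition is a one-call configuration. A minimal boto3 sketch, assuming a hypothetical bucket name and an images/ prefix:

```python
import boto3

s3 = boto3.client("s3")

# Transition .jpg images to S3 Standard-IA 30 days after creation,
# matching option B. Objects remain immediately available in IA.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-images-bucket",          # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "jpg-to-standard-ia",
                "Filter": {"Prefix": "images/"},  # hypothetical prefix
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                ],
            }
        ]
    },
)
```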

Q7. A hybrid network architecture must be used during a company's multi-year data center migration from multiple private data centers to AWS. The current data centers are linked together with private fiber. Due to unique legacy applications, NAT cannot be used. During the migration period, many applications will need access to other applications in both the data centers and AWS. Which option offers a hybrid network architecture that is secure and highly available, that allows for high bandwidth and a multi-region deployment post-migration?

A. Use AWS Direct Connect to each data center from different ISPs, and configure routing to fail over to the other data center's Direct Connect if one fails. Ensure that no VPC CIDR blocks overlap one another or the on-premises network.

B. Use multiple hardware VPN connections to AWS from the on-premises data center. Route different subnet traffic through different VPN connections. Ensure that no VPC CIDR blocks overlap one another or the on-premises network.

C. Use a software VPN with clustering both in AWS and the on-premises data center, and route traffic through the cluster. Ensure that no VPC CIDR blocks overlap one another or the on-premises network.

D. Use AWS Direct Connect and a VPN as backup, and configure both to use the same virtual private gateway and BGP. Ensure that no VPC CIDR blocks overlap one another or the on-premises network.

Answer:A

Analyze:

B. Not high bandwidth.
C. One VPN connection is not HA (a cluster still has only one connection), and VPN is not high bandwidth.
D. As a backup, a VPN is not sufficient for the high bandwidth requirement. Also, what happens if the region that hosts the virtual private gateway fails?

Q8. A company is currently running a production workload on AWS that is very I/O intensive. Its workload consists of a single tier with 10 c4.8xlarge instances, each with a 2 TB gp2 volume. The number of processing jobs has recently increased, and latency has increased as well. The team realizes that they are constrained on the IOPS. For the application to perform efficiently, they need to increase the IOPS by 3,000 for each of the instances. Which of the following designs will meet the performance goal MOST cost effectively?

A. Change the type of Amazon EBS volume from gp2 to io1 and set provisioned IOPS to 9,000.

B. Increase the size of the gp2 volumes in each instance to 3 TB.

C. Create a new Amazon EFS file system and move all the data to this new file system. Mount this file system to all 10 instances.

D. Create a new Amazon S3 bucket and move all the data to this new bucket. Allow each instance to access this S3 bucket and use it for storage.

Answer:B

Analyze:

A. Cost would be roughly 3,000 GB * $0.125 + 9,000 IOPS * $0.065 per month.
B. Cost would be roughly 3,000 GB * $0.10 per month; gp2 provides 3 IOPS per GB, so a 3-TB volume delivers 9,000 IOPS.
C. EFS has higher latency than EBS Provisioned IOPS (https://docs.aws.amazon.com/efs/latest/ug/performance.html).
D. S3 is not as fast as EBS in terms of I/O.
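
A quick check of the arithmetic above, using the per-GB and per-IOPS list prices quoted in the Analyze text (treat the prices themselves as assumptions):

```python
# Rough monthly cost per instance for the two volume designs.
gp2_gb = 0.10      # $/GB-month for gp2 (assumed list price)
io1_gb = 0.125     # $/GB-month for io1 (assumed list price)
io1_iops = 0.065   # $/provisioned-IOPS-month (assumed list price)

# Option B: grow gp2 to 3 TB -> 3,000 GB * 3 IOPS/GB = 9,000 IOPS baked in.
cost_b = 3000 * gp2_gb                        # $300/month

# Option A: io1 volume with 9,000 provisioned IOPS.
cost_a = 3000 * io1_gb + 9000 * io1_iops      # $960/month

print(f"io1: ${cost_a:.0f}/month  gp2: ${cost_b:.0f}/month")
```

Either way the same 9,000 IOPS is reached, but the gp2 resize is roughly a third of the io1 price, which is why B wins.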

Q9. A company's data center is connected to the AWS Cloud over a minimally used 10-Gbps AWS Direct Connect connection with a private virtual interface to its virtual private cloud (VPC). The company's internet connection is 200 Mbps, and the company has a 150-TB dataset that is created each Friday. The data must be transferred and available in Amazon S3 on Monday morning. Which is the LEAST expensive way to meet the requirements while allowing for data transfer growth?

A. Order two 80-TB AWS Snowball appliances. Offload the data to the appliances and ship them to AWS. AWS will copy the data from the Snowball appliances to Amazon S3.

B. Create a VPC endpoint for Amazon S3. Copy the data to Amazon S3 by using the VPC endpoint, forcing the transfer to use the Direct Connect connection.

C. Create a VPC endpoint for Amazon S3. Set up a reverse proxy farm behind a Classic Load Balancer in the VPC. Copy the data to Amazon S3 using the proxy.

D. Create a public virtual interface on a Direct Connect connection, and copy the data to Amazon S3 over the connection.

Answer:D

Analyze:

A. Won't be fast enough (shipping appliances over the weekend is unrealistic).
B. The S3 VPC endpoint is a gateway endpoint, and it cannot be extended across Direct Connect (https://docs.amazonaws.cn/en_us/vpc/latest/userguide/vpce-gateway.html#Gateway-Endpoint-Limitations).
C. A proxy farm is more expensive than D.

Q10. A company has created an account for individual Development teams, resulting in a total of 200 accounts. All accounts have a single virtual private cloud (VPC) in a single region with multiple microservices running in Docker containers that need to communicate with microservices in other accounts. The Security team requirements state that these microservices must not traverse the public internet, and only certain internal services should be allowed to call other individual services. If there is any denied network traffic for a service, the Security team must be notified of any denied requests, including the source IP. How can connectivity be established between services while meeting the security requirements?

A. Create a VPC peering connection between the VPCs. Use security groups on the instances to allow traffic from the security group IDs that are permitted to call the microservice. Apply network ACLs that allow traffic from the local VPC and peered VPCs only. Within the task definition in Amazon ECS for each of the microservices, specify a log configuration by using the awslogs driver. Within Amazon CloudWatch Logs, create a metric filter and alarm off of the number of HTTP 403 responses. Create an alarm when the number of messages exceeds a threshold set by the Security team.

B. Ensure that no CIDR ranges are overlapping, and attach a virtual private gateway (VGW) to each VPC. Provision an IPsec tunnel between each VGW and enable route propagation on the route table. Configure security groups on each service to allow the CIDR ranges of the VPCs in the other accounts. Enable VPC Flow Logs, and use an Amazon CloudWatch Logs subscription filter for rejected traffic. Create an IAM role and allow the Security team to call the AssumeRole action for each account.

C. Deploy a transit VPC by using third-party marketplace VPN appliances running on Amazon EC2, with dynamically routed VPN connections between the VPN appliances and the virtual private gateways (VGWs) attached to each VPC within the region. Adjust network ACLs to allow traffic from the local VPC only. Apply security groups to the microservices to allow traffic from the VPN appliances only. Install the awslogs agent on each VPN appliance, and configure logs to forward to Amazon CloudWatch Logs in the security account for the Security team to access.

D. Create a Network Load Balancer (NLB) for each microservice. Attach the NLB to a PrivateLink endpoint service and whitelist the accounts that will be consuming this service. Create an interface endpoint in the consumer VPC and associate a security group that allows only the security group IDs of the services authorized to call the producer service. On the producer side, create security groups for each microservice that allow only the CIDR range of the allowed services. Create VPC Flow Logs on each VPC to capture rejected traffic that will be delivered to an Amazon CloudWatch Logs group. Create a CloudWatch Logs subscription that streams the log data to a security account.

Answer:D

Analyze:

A. HTTP 403 responses won't capture denied requests, because denied traffic never reaches ECS. Note that VPC peering preserves the original source IP (which is why no CIDR overlap is allowed).
B. Logging into multiple accounts is not best practice. Moreover, if only one of two services in a VPC should access a particular microservice, this won't work, since the security groups allow the whole VPC's CIDR. A VPN also keeps the original IP unless NAT is applied before traffic enters the tunnel.
C. Although the VPN tunnels are encrypted, all traffic goes through the same VPN appliances, so individual service-to-service access cannot actually be blocked. Also, if the network ACLs allow the local VPC only, the transit VPC cannot function.
D. The wording is vague but this is the only option that meets all the requirements. PrivateLink presents the service behind an interface endpoint with a private IP local to the consumer VPC, so the producer does not see the consumer's original source IP (https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-privatelink.html); "only the CIDR range of the allowed services" is best read as the allowed services within the producer's own VPC. When a request is rejected on the consumer side, the VPC Flow Logs there do contain the source IP, which satisfies the notification requirement.

Q11. A company runs a dynamic mission-critical web application that has an SLA of 99.99%. Global application users access the application 24/7. The application is currently hosted on premises and routinely fails to meet its SLA, especially when millions of users access the application concurrently. Remote users complain of latency. How should this application be redesigned to be scalable and allow for automatic failover at the lowest cost?

A. Use Amazon Route 53 failover routing with geolocation-based routing. Host the website on automatically scaled Amazon EC2 instances behind an Application Load Balancer with an additional Application Load Balancer and EC2 instances for the application layer in each region. Use a Multi-AZ deployment with MySQL as the data layer.

B. Use Amazon Route 53 round robin routing to distribute the load evenly to several regions with health checks. Host the website on automatically scaled Amazon ECS with AWS Fargate technology containers behind a Network Load Balancer, with an additional Network Load Balancer and Fargate containers for the application layer in each region. Use Amazon Aurora replicas for the data layer.

C. Use Amazon Route 53 latency-based routing to route to the nearest region with health checks. Host the website in Amazon S3 in each region and use Amazon API Gateway with AWS Lambda for the application layer. Use Amazon DynamoDB global tables as the data layer with Amazon DynamoDB Accelerator (DAX) for caching.

D. Use Amazon Route 53 geolocation-based routing. Host the website on automatically scaled AWS Fargate containers behind a Network Load Balancer with an additional Network Load Balancer and Fargate containers for the application layer in each region. Use Amazon Aurora Multi-Master for Aurora MySQL as the data layer.

Answer:C

Analyze:

A. This will be more expensive than C.
B. Route 53 "round robin routing" is not an actual routing policy, and NLB does not support sticky sessions, which a web application will most likely need.
C. Using managed services is the best practice; S3, Lambda, and DynamoDB are much cheaper than EC2 and RDS.
D. Sticky sessions are not supported on NLB, and Aurora Multi-Master cannot span regions.

Q12. A company manages more than 200 separate internet-facing web applications. All of the applications are deployed to AWS in a single AWS Region. The fully qualified domain names (FQDNs) of all of the applications are made available through HTTPS using Application Load Balancers (ALBs). The ALBs are configured to use public SSL/TLS certificates. A Solutions Architect needs to migrate the web applications to a multi-region architecture. All HTTPS services should continue to work without interruption. Which approach meets these requirements?

A. Request a certificate for each FQDN using AWS KMS. Associate the certificates with the ALBs in the primary AWS Region. Enable cross-region availability in AWS KMS for the certificates and associate the certificates with the ALBs in the secondary AWS Region.

B. Generate the key pairs and certificate requests for each FQDN using AWS KMS. Associate the certificates with the ALBs in both the primary and secondary AWS Regions.

C. Request a certificate for each FQDN using AWS Certificate Manager. Associate the certificates with the ALBs in both the primary and secondary AWS Regions.

D. Request certificates for each FQDN in both the primary and secondary AWS Regions using AWS Certificate Manager. Associate the certificates with the corresponding ALBs in each AWS Region.

Answer:D

Analyze:

A. KMS is not for certificates.
B. KMS is not for certificates.
C. An ACM certificate cannot be used with an ELB in another region; a certificate must be requested in each region (https://aws.amazon.com/certificate-manager/faqs/).

Q13. An e-commerce company is revamping its IT infrastructure and is planning to use AWS services. The company's CIO has asked a Solutions Architect to design a simple, highly available, and loosely coupled order processing application. The application is responsible for receiving and processing orders before storing them in an Amazon DynamoDB table. The application has a sporadic traffic pattern and should be able to scale during marketing campaigns to process the orders with minimal delays. Which of the following is the MOST reliable approach to meet the requirements?

A. Receive the orders in an Amazon EC2-hosted database and use EC2 instances to process them.

B. Receive the orders in an Amazon SQS queue and trigger an AWS Lambda function to process them.

C. Receive the orders using the AWS Step Functions program and trigger an Amazon ECS container to process them.

D. Receive the orders in Amazon Kinesis Data Streams and use Amazon EC2 instances to process them.

Answer:B

Analyze:

A. Really bad: self-managed, not loosely coupled, and hard to scale.
B. An SQS queue with a Lambda function is the most reliable and scalable option.
C. This is not what Step Functions is for.
D. EC2 needs Auto Scaling to be configured, and Kinesis does not have item-level acknowledgement.
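
A minimal sketch of option B's consumer side: a Lambda handler wired to the SQS queue through an event source mapping. process_order is a hypothetical helper; a record that raises an exception is returned to the queue and retried automatically.

```python
import json

def handler(event, context):
    # An SQS event source mapping delivers a batch of queue messages
    # in event["Records"]; each body holds one serialized order.
    for record in event["Records"]:
        order = json.loads(record["body"])
        process_order(order)

def process_order(order):
    # Hypothetical placeholder for validation plus the DynamoDB
    # put_item that stores the order.
    print("processing order", order.get("orderId"))
```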

Q14. A company has an application written using an in-house software framework. The framework installation takes 30 minutes and is performed with a user data script. Company Developers deploy changes to the application frequently. The framework installation is becoming a bottleneck in this process. Which of the following would speed up this process?

A. Create a pipeline to build a custom AMI with the framework installed and use this AMI as a baseline for application deployments.

B. Employ a user data script to install the framework but compress the installation files to make them smaller.

C. Create a pipeline to parallelize the installation tasks and call this pipeline from a user data script.

D. Configure an AWS OpsWorks cookbook that installs the framework instead of employing user data. Use this cookbook as a base for all deployments.

Answer:A

Analyze:

B. Compressing the installation files will not meaningfully shorten a 30-minute installation.
C. The installation tasks cannot simply be parallelized.
D. A cookbook is a collection of recipes (the option probably means a recipe here); either way, this still runs the installation on every deployment and won't shorten the time.

Q15. A company wants to ensure that the workloads for each of its business units have complete autonomy and a minimal blast radius in AWS. The Security team must be able to control access to the resources and services in the account to ensure that particular services are not used by the business units. How can a Solutions Architect achieve the isolation requirements?

A. Create individual accounts for each business unit and add the accounts to an OU in AWS Organizations. Modify the OU to ensure that the particular services are blocked. Federate each account with an IdP, and create separate roles for the business units and the Security team.

B. Create individual accounts for each business unit. Federate each account with an IdP and create separate roles and policies for business units and the Security team.

C. Create one shared account for the entire company. Create separate VPCs for each business unit. Create individual IAM policies and resource tags for each business unit. Federate each account with an IdP, and create separate roles for the business units and the Security team.

D. Create one shared account for the entire company. Create individual IAM policies and resource tags for each business unit. Federate the account with an IdP, and create separate roles for the business units and the Security team.

Answer:A

Analyze:

A. Best practice: separate accounts give minimal blast radius and autonomy, and the OU lets the Security team block particular services centrally.
B. Without AWS Organizations there is no central way to block particular services across the accounts.

Q16. A company is migrating a subset of its application APIs from Amazon EC2 instances to run on a serverless infrastructure. The company has set up Amazon API Gateway, AWS Lambda, and Amazon DynamoDB for the new application. The primary responsibility of the Lambda function is to obtain data from a third-party Software as a Service (SaaS) provider. For consistency, the Lambda function is attached to the same virtual private cloud (VPC) as the original EC2 instances. Test users report an inability to use this newly moved functionality, and the company is receiving 5xx errors from API Gateway. Monitoring reports from the SaaS provider show that the requests never made it to its systems. The company notices that Amazon CloudWatch Logs are being generated by the Lambda functions. When the same functionality is tested against the EC2 systems, it works as expected. What is causing the issue?

A. Lambda is in a subnet that does not have a NAT gateway attached to it to connect to the SaaS provider.

B. The end-user application is misconfigured to continue using the endpoint backed by EC2 instances.

C. The throttle limit set on API Gateway is too low and the requests are not making their way through.

D. API Gateway does not have the necessary permissions to invoke Lambda.

Answer:A

Analyze:

B. There are Lambda logs, so the new endpoint is being used.
C. If throttling were the cause, some of the requests would still get through.
D. There are Lambda logs, so API Gateway is invoking Lambda successfully.

Q17. A Solutions Architect is working with a company that is extremely sensitive to its IT costs and wishes to implement controls that will result in a predictable AWS spend each month. Which combination of steps can help the company control and monitor its monthly AWS usage to achieve a cost that is as close as possible to the target amount? (Choose three.)

A. Implement an IAM policy that requires users to specify a 'workload' tag for cost allocation when launching Amazon EC2 instances.

B. Contact AWS Support and ask that they apply limits to the account so that users are not able to launch more than a certain number of instance types.

C. Purchase all upfront Reserved Instances that cover 100% of the account's expected Amazon EC2 usage.

D. Place conditions in the users' IAM policies that limit the number of instances they are able to launch.

E. Define 'workload' as a cost allocation tag in the AWS Billing and Cost Management console.

F. Set up AWS Budgets to alert and notify when a given workload is expected to exceed a defined cost.

Answer:AEF

Analyze:

A. Enforce the tag with the aws:RequestTag/tag-key condition key.
B. Bad practice.
C. Not going to work, as this may end up costing more.
D. IAM does not support limiting the number of instances a user can launch (https://forums.aws.amazon.com/thread.jspa?threadID=174503).
E. Defining 'workload' as a cost allocation tag makes the per-workload spend visible in Billing and Cost Management.
F. AWS Budgets provides the alerting against the target amount.
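
Option A relies on the aws:RequestTag condition key noted above. A sketch of such a policy created with boto3; the policy name is hypothetical, and the Null condition is the documented pattern for requiring a tag at launch time.

```python
import json
import boto3

iam = boto3.client("iam")

# Deny launching an EC2 instance unless a 'workload' tag is supplied
# in the RunInstances request itself.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RequireWorkloadTagOnLaunch",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {"Null": {"aws:RequestTag/workload": "true"}},
        }
    ],
}

iam.create_policy(
    PolicyName="require-workload-tag",   # hypothetical name
    PolicyDocument=json.dumps(policy),
)
```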

Q18. A large global company wants to migrate a stateless mission-critical application to AWS. The application is based on IBM WebSphere (application and integration middleware), IBM MQ (messaging middleware), and IBM DB2 (database software) on a z/OS operating system. How should the Solutions Architect migrate the application to AWS?

A. Re-host WebSphere-based applications on Amazon EC2 behind a load balancer with Auto Scaling. Re-platform the IBM MQ to an Amazon EC2-based MQ. Re-platform the z/OS-based DB2 to Amazon RDS DB2.

B. Re-host WebSphere-based applications on Amazon EC2 behind a load balancer with Auto Scaling. Re-platform the IBM MQ to Amazon MQ. Re-platform z/OS-based DB2 to Amazon EC2-based DB2.

C. Orchestrate and deploy the application by using AWS Elastic Beanstalk. Re-platform the IBM MQ to Amazon SQS. Re-platform z/OS-based DB2 to Amazon RDS DB2.

D. Use the AWS Server Migration Service to migrate the IBM WebSphere and IBM DB2 to an Amazon EC2-based solution. Re-platform the IBM MQ to an Amazon MQ.

Answer:B

Analyze:

A. RDS does not support DB2.
C. RDS does not support DB2.
D. Server Migration Service works with VMs, and nothing about VMs is mentioned; SMS only supports Linux and Windows (https://docs.aws.amazon.com/server-migration-service/latest/userguide/prereqs.html#os_prereqs).

Q19. A media storage application uploads user photos to Amazon S3 for processing. End users are reporting that some uploaded photos are not being processed properly. The Application Developers trace the logs and find that AWS Lambda is experiencing execution issues when thousands of users are on the system simultaneously. Issues are caused by: * Limits around concurrent executions. * The performance of Amazon DynamoDB when saving data. Which actions can be taken to increase the performance and reliability of the application? (Choose two.)

A. Evaluate and adjust the read capacity units (RCUs) for the DynamoDB tables.

B. Evaluate and adjust the write capacity units (WCUs) for the DynamoDB tables.

C. Add an Amazon ElastiCache layer to increase the performance of Lambda functions.

D. Configure a dead letter queue that will reprocess failed or timed-out Lambda functions.

E. Use S3 Transfer Acceleration to provide lower-latency access to end users.

Answer:BD

Analyze:

B. Saving data to DynamoDB is the write path, so raising the write capacity units addresses the throttling.
D. A dead letter queue captures invocations that fail or time out when the concurrency limit is hit, so they can be reprocessed instead of being lost.

Q20. A company operates a group of imaging satellites. The satellites stream data to one of the company's ground stations where processing creates about 5 GB of images per minute. This data is added to network-attached storage, where 2 PB of data are already stored. The company runs a website that allows its customers to access and purchase the images over the Internet. This website is also running in the ground station. Usage analysis shows that customers are most likely to access images that have been captured in the last 24 hours. The company would like to migrate the image storage and distribution system to AWS to reduce costs and increase the number of customers that can be served. Which AWS architecture and migration strategy will meet these requirements?

A. Use multiple AWS Snowball appliances to migrate the existing imagery to Amazon S3. Create a 1-Gb AWS Direct Connect connection from the ground station to AWS, and upload new data to Amazon S3 through the Direct Connect connection. Migrate the data distribution website to Amazon EC2 instances. By using Amazon S3 as an origin, have this website serve the data through Amazon CloudFront by creating signed URLs.

B. Create a 1-Gb Direct Connect connection from the ground station to AWS. Use the AWS Command Line Interface to copy the existing data and upload new data to Amazon S3 over the Direct Connect connection. Migrate the data distribution website to EC2 instances. By using Amazon S3 as an origin, have this website serve the data through CloudFront by creating signed URLs.

C. Use multiple Snowball appliances to migrate the existing images to Amazon S3. Upload new data by regularly using Snowball appliances to upload data from the network-attached storage. Migrate the data distribution website to EC2 instances. By using Amazon S3 as an origin, have this website serve the data through CloudFront by creating signed URLs.

D. Use multiple Snowball appliances to migrate the existing images to an Amazon EFS file system. Create a 1-Gb Direct Connect connection from the ground station to AWS, and upload new data by mounting the EFS file system over the Direct Connect connection. Migrate the data distribution website to EC2 instances. By using webservers in EC2 that mount the EFS file system as the origin, have this website serve the data through CloudFront by creating signed URLs.

Answer:A

Analyze:

A. Correct.
B. A 1-Gb link for the existing 2 PB would be far too slow.
C. Shipping Snowball appliances cannot make the last 24 hours of data available in time.
D. EFS is expensive in this case.

Q21. A company ingests and processes streaming market data. The data rate is constant. A nightly process that calculates aggregate statistics is run, and each execution takes about 4 hours to complete. The statistical analysis is not mission critical to the business, and previous data points are picked up on the next execution if a particular run fails. The current architecture uses a pool of Amazon EC2 Reserved Instances with 1-year reservations running full time to ingest and store the streaming data in attached Amazon EBS volumes. On-Demand EC2 instances are launched each night to perform the nightly processing, accessing the stored data from NFS shares on the ingestion servers, and terminating the nightly processing servers when complete. The Reserved Instance reservations are expiring, and the company needs to determine whether to purchase new reservations or implement a new design. Which is the most cost-effective design?

A. Update the ingestion process to use Amazon Kinesis Data Firehose to save data to Amazon S3. Use a fleet of On-Demand EC2 instances that launches each night to perform the batch processing of the S3 data and terminates when the processing completes.

B. Update the ingestion process to use Amazon Kinesis Data Firehose to save data to Amazon S3. Use AWS Batch to perform nightly processing with a Spot market bid of 50% of the On-Demand price.

C. Update the ingestion process to use a fleet of EC2 Reserved Instances behind a Network Load Balancer with 3-year leases. Use Batch with Spot Instances with a maximum bid of 50% of the On-Demand price for the nightly processing.

D. Update the ingestion process to use Amazon Kinesis Data Firehose to save data to Amazon Redshift. Use an AWS Lambda function scheduled to run nightly with Amazon CloudWatch Events to query Amazon Redshift to generate the daily statistics.

Answer:B

Analyze:

A. More expensive than B.
B. As the processing is not mission critical and can pick up from the previous data point, Spot Instances make sense.
C. With EBS, each instance has its own volume and the data is hard to aggregate; always-on EC2 is expensive as well.
D. Lambda has a 15-minute execution limit, and the job takes about 4 hours.

Q22. A three-tier web application runs on Amazon EC2 instances. Cron daemons are used to trigger scripts that collect the web server, application, and database logs and send them to a centralized location every hour. Occasionally, scaling events or unplanned outages have caused the instances to stop before the latest logs were collected, and the log files were lost. Which of the following options is the MOST reliable way of collecting and preserving the log files?

A. Update the cron to run every 5 minutes instead of every hour to reduce the possibility of log messages being lost in an outage.

B. Use Amazon CloudWatch Events to trigger Amazon Systems Manager Run Command to invoke the log collection scripts more frequently to reduce the possibility of log messages being lost in an outage.

C. Use the Amazon CloudWatch Logs agent to stream log messages directly to CloudWatch Logs. Configure the agent with a batch count of 1 to reduce the possibility of log messages being lost in an outage.

D. Use Amazon CloudWatch Events to trigger AWS Lambda to SSH into each running instance and invoke the log collection scripts more frequently to reduce the possibility of log messages being lost in an outage.

Answer:C

Analyze:

C. Streaming directly with the CloudWatch Logs agent has almost no delay, and with a batch count of 1 each log message is shipped as it is produced, making this the most reliable option.

Q23. A company stores sales transaction data in Amazon DynamoDB tables. To detect anomalous behaviors and respond quickly, all changes to the items stored in the DynamoDB tables must be logged within 30 minutes. Which solution meets the requirements?

A. Copy the DynamoDB tables into Apache Hive tables on Amazon EMR every hour and analyze them for anomalous behaviors. Send Amazon SNS notifications when anomalous behaviors are detected.

B. Use AWS CloudTrail to capture all the APIs that change the DynamoDB tables. Send SNS notifications when anomalous behaviors are detected using CloudTrail event filtering.

C. Use Amazon DynamoDB Streams to capture and send updates to AWS Lambda. Create a Lambda function to output records to Amazon Kinesis Data Streams. Analyze any anomalies with Amazon Kinesis Data Analytics. Send SNS notifications when anomalous behaviors are detected.

D. Use event patterns in Amazon CloudWatch Events to capture DynamoDB API call events with an AWS Lambda function as a target to analyze behavior. Send SNS notifications when anomalous behaviors are detected.

Answer:C

Analyze:

B. We want to track item changes, not table-level API changes.
C. Best practice.
D. DynamoDB item changes are not a supported CloudWatch Events source; you would need CloudTrail, and it still only captures API-level activity.
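
A minimal sketch of option C's middle hop: a Lambda function subscribed to the DynamoDB stream that forwards each item-level change into a Kinesis data stream. The stream name is hypothetical, and batches are assumed to stay under the 500-record limit of put_records.

```python
import json
import boto3

kinesis = boto3.client("kinesis")
STREAM_NAME = "table-change-stream"   # hypothetical Kinesis stream

def handler(event, context):
    # Each record carries the item-level change (keys, old/new images)
    # captured by DynamoDB Streams.
    records = [
        {
            "Data": json.dumps(r["dynamodb"], default=str).encode("utf-8"),
            "PartitionKey": r["eventID"],
        }
        for r in event["Records"]
    ]
    if records:
        kinesis.put_records(StreamName=STREAM_NAME, Records=records)
```

From there, Kinesis Data Analytics reads the stream and publishes to SNS when an anomaly is detected, comfortably within the 30-minute window.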

Q24. A company is running multiple applications on Amazon EC2. Each application is deployed and managed by multiple business units. All applications are deployed on a single AWS account but on different virtual private clouds (VPCs). The company uses a separate VPC in the same account for test and development purposes. Production applications suffered multiple outages when users accidentally terminated and modified resources that belonged to another business unit. A Solutions Architect has been asked to improve the availability of the company applications while allowing the Developers access to the resources they need. Which option meets the requirements with the LEAST disruption?

A. Create an AWS account for each business unit. Move each business unit's instances to its own account and set up a federation to allow users to access their business unit's account.

B. Set up a federation to allow users to use their corporate credentials, and lock the users down to their own VPC. Use a network ACL to block each VPC from accessing other VPCs.

C. Implement a tagging policy based on business units. Create an IAM policy so that each user can terminate instances belonging to their own business units only.

D. Set up role-based access for each user and provide limited permissions based on individual roles and the services for which each user is responsible.

Answer:C

Analyze:

The key is tag-based access control on the principal: use the aws:PrincipalTag/key-name condition key so that a request is allowed only when the required tags are attached to the caller's IAM user or role (https://docs.aws.amazon.com/IAM/latest/UserGuide/access_iam-tags.html).
A. Moving instances across accounts is too disruptive, and AWS Organizations should be used in that case.
B. Blocking VPCs from each other would stop legitimate inter-service communication, and the question does not say the business units map cleanly onto VPCs.
C. With a tagging policy, each instance carries a business-unit tag and an IAM policy restricts termination to instances with the caller's own tag. For example, an instance tagged Development can be terminated by the Developer group, while the same action on an instance tagged Production is denied.
D. Per-user role-based access is too much effort and disruption.
Arguably the answer could be D, since setting up least-privilege roles and policies causes no disruption to users, and C as written only covers terminating instances, while the outages were caused by terminating AND modifying resources. After much debate, the accepted answer is C.

Q25. An enterprise runs 103 line-of-business applications on virtual machines in an on-premises data center. Many of the applications are simple PHP, Java, or Ruby web applications, are no longer actively developed, and serve little traffic. Which approach should be used to migrate these applications to AWS with the LOWEST infrastructure costs?

A. Deploy the applications to single-instance AWS Elastic Beanstalk environments without a load balancer.

B. Use AWS SMS to create AMIs for each virtual machine and run them in Amazon EC2.

C. Convert each application to a Docker image and deploy to a small Amazon ECS cluster behind an Application Load Balancer.

D. Use VM Import/Export to create AMIs for each virtual machine and run them in single-instance AWS Elastic Beanstalk environments by configuring a custom image.

Answer:C

Analyze:

A. Still needs 103 EC2 instances.
B. 103 EC2 instances.
C. All the containers can be packed onto a small ECS cluster with an ALB routing to each service, which can be really cheap.
D. 103 EC2 instances.

Q26. A Solutions Architect must create a cost-effective backup solution for a company's 500MB source code repository of proprietary and sensitive applications. The repository runs on Linux and backs up daily to tape. Tape backups are stored for 1 year. The current solution is not meeting the company's needs because it is a manual process that is prone to error, expensive to maintain, and does not meet the need for a Recovery Point Objective (RPO) of 1 hour or Recovery Time Objective (RTO) of 2 hours. The new disaster recovery requirement is for backups to be stored offsite and to be able to restore a single file if needed. Which solution meets the customer's needs for RTO, RPO, and disaster recovery with the LEAST effort and expense?

A. Replace local tapes with an AWS Storage Gateway virtual tape library to integrate with current backup software. Run backups nightly and store the virtual tapes on Amazon S3 standard storage in US-EAST-1. Use cross-region replication to create a second copy in US-WEST-2. Use Amazon S3 lifecycle policies to perform automatic migration to Amazon Glacier and deletion of expired backups after 1 year.

B. Configure the local source code repository to synchronize files to an AWS Storage Gateway file gateway to store backup copies in an Amazon S3 Standard bucket. Enable versioning on the Amazon S3 bucket. Create Amazon S3 lifecycle policies to automatically migrate old versions of objects to Amazon S3 Standard-Infrequent Access, then Amazon Glacier, then delete backups after 1 year.

C. Replace the local source code repository storage with a Storage Gateway stored volume. Change the default snapshot frequency to 1 hour. Use Amazon S3 lifecycle policies to archive snapshots to Amazon Glacier and remove old snapshots after 1 year. Use cross-region replication to create a copy of the snapshots in US-WEST-2.

D. Replace the local source code repository storage with a Storage Gateway cached volume. Create a snapshot schedule to take hourly snapshots. Use an Amazon CloudWatch Events schedule expression rule to run an hourly AWS Lambda task to copy snapshots from US-EAST-1 to US-WEST-2.

Answer:B

Analyze:

A. Nightly backups cannot meet an RPO of 1 hour.
C. A stored volume gateway takes volume snapshots, which do not allow restoring a single file (https://aws.amazon.com/storagegateway/faqs/).

Q27. A company CFO recently analyzed the company's AWS monthly bill and identified an opportunity to reduce the cost for AWS Elastic Beanstalk environments in use. The CFO has asked a Solutions Architect to design a highly available solution that will spin up an Elastic Beanstalk environment in the morning and terminate it at the end of the day. The solution should be designed with minimal operational overhead and to minimize costs. It should also be able to handle the increased use of Elastic Beanstalk environments among different teams, and must provide a one-stop scheduler solution for all teams to keep the operational costs low. What design will meet these requirements?

A. Set up a Linux EC2 Micro instance. Configure an IAM role to allow the start and stop of the Elastic Beanstalk environment and attach it to the instance. Create scripts on the instance to start and stop the Elastic Beanstalk environment. Configure cron jobs on the instance to execute the scripts.

B. Develop AWS Lambda functions to start and stop the Elastic Beanstalk environment. Configure a Lambda execution role granting Elastic Beanstalk environment start/stop permissions, and assign the role to the Lambda functions. Configure cron expression Amazon CloudWatch Events rules to trigger the Lambda functions.

C. Develop an AWS Step Functions state machine with "wait" as its type to control the start and stop time. Use the activity task to start and stop the Elastic Beanstalk environment. Create a role for Step Functions to allow it to start and stop the Elastic Beanstalk environment. Invoke Step Functions daily.

D. Configure a time-based Auto Scaling group. In the morning, have the Auto Scaling group scale up an Amazon EC2 instance and put the Elastic Beanstalk environment start command in the EC2 instance user data. At the end of the day, scale down the instance number to 0 to terminate the EC2 instance.

Answer:B

Analyze:

A. Needs an EC2 instance running all the time.
B. The recommended solution (https://aws.amazon.com/premiumsupport/knowledge-center/start-stop-lambda-cloudwatch/).
C. Step Functions is not meant for this, and the Step Functions role would not help the activity worker.
D. The EC2 instance would need to run during the daytime, and this is not really a good solution.
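
A sketch of option B under one possible interpretation: since Elastic Beanstalk has no native stop/start, "stop" is implemented here as terminating the environment and "start" as rebuilding it. The environment name and the {"action": ...} event shape are assumptions of this sketch, and whether RebuildEnvironment applies depends on how recently the environment was terminated.

```python
import boto3

eb = boto3.client("elasticbeanstalk")
ENV_NAME = "team-dev-env"   # hypothetical environment name

def handler(event, context):
    # Two CloudWatch Events cron rules invoke this function, passing
    # {"action": "stop"} in the evening and {"action": "start"} in
    # the morning (assumed event shape).
    if event.get("action") == "stop":
        eb.terminate_environment(EnvironmentName=ENV_NAME)
    else:
        # RebuildEnvironment recreates the resources of a recently
        # terminated environment.
        eb.rebuild_environment(EnvironmentName=ENV_NAME)
```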

Q28. A company plans to move regulated and security-sensitive businesses to AWS. The Security team is developing a framework to validate the adoption of AWS best practices and industry-recognized compliance standards. The AWS Management Console is the preferred method for teams to provision resources. Which strategies should a Solutions Architect use to meet the business requirements and continuously assess, audit, and monitor the configurations of AWS resources? (Choose two.)

A. Use AWS Config rules to periodically audit changes to AWS resources and monitor the compliance of the configuration. Develop AWS Config custom rules using AWS Lambda to establish a test-driven development approach, and further automate the evaluation of configuration changes against the required controls.

B. Use the Amazon CloudWatch Logs agent to collect all the AWS SDK logs. Search the log data using a pre-defined set of filter patterns that match mutating API calls. Send notifications using Amazon CloudWatch alarms when unintended changes are performed. Archive log data by using a batch export to Amazon S3 and then Amazon Glacier for long-term retention and auditability.

C. Use AWS CloudTrail events to assess management activities of all AWS accounts. Ensure that CloudTrail is enabled in all accounts and available AWS services. Enable trails, encrypt CloudTrail event log files with an AWS KMS key, and monitor recorded activities with CloudWatch Logs.

D. Use the Amazon CloudWatch Events near-real-time capabilities to monitor system event patterns, and trigger AWS Lambda functions to automatically revert non-authorized changes in AWS resources. Also, target Amazon SNS topics to enable notifications and improve the response time of incident responses.

E. Use CloudTrail integration with Amazon SNS to automatically notify of unauthorized API activities. Ensure that CloudTrail is enabled in all accounts and available AWS services. Evaluate the usage of Lambda functions to automatically revert non-authorized changes in AWS resources.

Answer:AC

Analyze:

A. Correct.
B. The Management Console does not go through the SDK, so SDK logs would miss console activity.
C. Correct.
D. CloudTrail is needed to get resource changes into CloudWatch; CloudWatch Events alone does not capture them.
E. CloudTrail-to-SNS has no filtering, so notifications would be sent for every log delivery (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/configure-sns-notifications-for-cloudtrail.html#configure-cloudtrail-to-send-notifications).

Q29. A company is running a high-user-volume media-sharing application on premises. It currently hosts about 400 TB of data with millions of video files. The company is migrating this application to AWS to improve reliability and reduce costs. The Solutions Architecture team plans to store the videos in an Amazon S3 bucket and use Amazon CloudFront to distribute videos to users. The company needs to migrate this application to AWS within 10 days with the least amount of downtime possible. The company currently has 1 Gbps connectivity to the Internet with 30 percent free capacity. Which of the following solutions would enable the company to migrate the workload to AWS and meet all of the requirements?

A. Use a multi-part upload in an Amazon S3 client to parallel-upload the data to the Amazon S3 bucket over the Internet. Use the throttling feature to ensure that the Amazon S3 client does not use more than 30 percent of available Internet capacity.

B. Request an AWS Snowmobile with 1 PB capacity to be delivered to the data center. Load the data into the Snowmobile and send it back to have AWS download that data to the Amazon S3 bucket. Sync the new data that was generated while the migration was in flight.

C. Use an Amazon S3 client to transfer data from the data center to the Amazon S3 bucket over the Internet. Use the throttling feature to ensure the Amazon S3 client does not use more than 30 percent of available Internet capacity.

D. Request multiple AWS Snowball devices to be delivered to the data center. Load the data concurrently into these devices and send them back. Have AWS download that data to the Amazon S3 bucket. Sync the new data that was generated while the migration was in flight.

Answer:D

Analyze:

A. Would take about 123 days; parallel uploads still have the internet connection as the bottleneck.
B. Snowmobile is recommended for more than 10 PB.
C. Would take about 123 days.
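
The 123-day figure can be verified directly:

```python
# 400 TB pushed over 30% of a 1 Gbps internet link.
data_bits = 400e12 * 8            # 400 TB expressed in bits
throughput = 1e9 * 0.30           # 30% of 1 Gbps, in bits per second
days = data_bits / throughput / 86400
print(f"{days:.0f} days")         # ~123 days, far beyond the 10-day window
```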

Q30. A company has developed a new billing application that will be released in two weeks. Developers are testing the application running on 10 EC2 instances managed by an Auto Scaling group in subnet 172.31.0.0/24 within VPC A with CIDR block 172.31.0.0/16. The Developers noticed connection timeout errors in the application logs while connecting to an Oracle database running on an Amazon EC2 instance in the same region within VPC B with CIDR block 172.50.0.0/16. The IP of the database instance is hard-coded in the application instances. Which recommendations should a Solutions Architect present to the Developers to solve the problem in a secure way with minimal maintenance and overhead?

A. Disable the SrcDestCheck attribute for all instances running the application and the Oracle database. Change the default route of VPC A to point to the ENI of the Oracle database that has an IP address assigned within the range of 172.50.0.0/26.

B. Create and attach internet gateways for both VPCs. Configure default routes to the internet gateways for both VPCs. Assign an Elastic IP for each Amazon EC2 instance in VPC A.

C. Create a VPC peering connection between the two VPCs and add a route to the routing table of VPC A that points to the IP address range of 172.50.0.0/16.

D. Create an additional Amazon EC2 instance for each VPC as a customer gateway; create one virtual private gateway (VGW) for each VPC, configure an end-to-end VPN, and advertise the routes for 172.50.0.0/16.

Answer:C

Analyze:

A. Disabling source/destination checks is for NAT instances; it will not help here, since the destination is the database and the source is the application EC2 instances.
B. A database connection should not go through the internet.
D. A self-managed VPN between the VPCs is far too much trouble for this.
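
A boto3 sketch of option C with hypothetical resource IDs; note that a mirror route for 172.31.0.0/16 is also needed in VPC B's route table so return traffic can flow.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical IDs for VPC A (172.31.0.0/16), VPC B (172.50.0.0/16),
# and VPC A's route table.
VPC_A = "vpc-0aaaa1111bbbb2222"
VPC_B = "vpc-0cccc3333dddd4444"
RTB_A = "rtb-0eeee5555ffff6666"

# Peer the two VPCs (same account and region, so self-accept works).
pcx = ec2.create_vpc_peering_connection(VpcId=VPC_A, PeerVpcId=VPC_B)
pcx_id = pcx["VpcPeeringConnection"]["VpcPeeringConnectionId"]
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Route VPC B's CIDR through the peering connection.
ec2.create_route(
    RouteTableId=RTB_A,
    DestinationCidrBlock="172.50.0.0/16",
    VpcPeeringConnectionId=pcx_id,
)
```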

Q31. A Solutions Architect has been asked to look at a company's Amazon Redshift cluster, which has quickly become an integral part of its technology and supports key business processes. The Solutions Architect is to increase the reliability and availability of the cluster and provide options to ensure that if an issue arises, the cluster can either operate or be restored within four hours. Which of the following solution options BEST addresses the business need in the most cost-effective manner?

A. Ensure that the Amazon Redshift cluster has been set up to make use of Auto Scaling groups with the nodes in the cluster spread across multiple Availability Zones.

B. Ensure that the Amazon Redshift cluster creation has been templated using AWS CloudFormation so it can easily be launched in another Availability Zone and data populated from the automated Redshift backups stored in Amazon S3.

C. Use Amazon Kinesis Data Firehose to collect the data ahead of ingestion into Amazon Redshift and create clusters using AWS CloudFormation in another region and stream the data to both clusters.

D. Create two identical Amazon Redshift clusters in different regions (one as the primary, one as the secondary). Use Amazon S3 cross-region replication from the primary to the secondary region, which triggers an AWS Lambda function to populate the cluster in the secondary region.

Answer:B

Analyze:

A. A Redshift cluster is single-AZ (https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html#az-considerations).
B. Best practice: a 4-hour recovery window leaves time to relaunch from the CloudFormation template and restore from the automated backups in S3.
C. With a 4-hour recovery window, a second, continuously fed cluster is unnecessary cost.
D. Does not make sense, and the Lambda function would probably time out.

Q32. A company prefers to limit running Amazon EC2 instances to those that were launched from AMIs pre-approved by the Information Security department. The Development team has an agile continuous integration and deployment process that cannot be stalled by the solution. Which method enforces the required controls with the LEAST impact on the development process? (Choose two.)

A. Use IAM policies to restrict the ability of users or other automated entities to launch EC2 instances based on a specific set of pre-approved AMIs, such as those tagged in a specific way by Information Security.

B. Use regular scans within Amazon Inspector with a custom assessment template to determine if the EC2 instance that the Amazon Inspector Agent is running on is based upon a pre-approved AMI. If it is not, shut down the instance and inform Information Security by email that this occurred.

C. Only allow launching of EC2 instances using a centralized DevOps team, which is given work packages via notifications from an internal ticketing system. Users make requests for resources using this ticketing tool, which has manual information security approval steps to ensure that EC2 instances are only launched from approved AMIs.

D. Use AWS Config rules to spot any launches of EC2 instances based on non-approved AMIs, trigger an AWS Lambda function to automatically terminate the instance, and publish a message to an Amazon SNS topic to inform Information Security that this occurred.

E. Use a scheduled AWS Lambda function to scan through the list of running instances within the virtual private cloud (VPC) and determine if any of these are based on unapproved AMIs. Publish a message to an SNS topic to inform Information Security that this occurred and then shut down the instance.

Answer:AD

Analyze:

B. Amazon Inspector is used to find security vulnerabilities, not to check which AMI an instance came from.
C. Not agile.
E. A "scheduled Lambda" on its own is not a thing; a CloudWatch Events schedule rule is needed to trigger the Lambda function, and polling is slower than D.
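
A sketch of the remediation half of option D. How the flagged instance ID reaches the function depends on the Config rule wiring; here it is assumed to arrive as event["instance_id"], and the AMI whitelist and SNS topic ARN are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2")
sns = boto3.client("sns")

APPROVED_AMIS = {"ami-0aaaa1111bbbb2222"}                  # hypothetical
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:infosec"   # hypothetical

def handler(event, context):
    instance_id = event["instance_id"]   # assumed event shape
    resp = ec2.describe_instances(InstanceIds=[instance_id])
    image_id = resp["Reservations"][0]["Instances"][0]["ImageId"]
    if image_id not in APPROVED_AMIS:
        # Terminate the offending instance and notify Information Security.
        ec2.terminate_instances(InstanceIds=[instance_id])
        sns.publish(
            TopicArn=TOPIC_ARN,
            Message=f"Terminated {instance_id}: unapproved AMI {image_id}",
        )
```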

Q33. A company had a security event in which an Amazon S3 bucket with sensitive information was made public. Company policy is to never have public S3 objects, and the Compliance team must be informed immediately when any public objects are identified. How can the presence of a public S3 object be detected, set to trigger alarm notifications, and automatically remediated in the future? (Choose two.)

A. Turn on object-level logging for Amazon S3. Turn on Amazon S3 event notifications to notify by using an Amazon SNS topic when a PutObject API call is made with a public-read permission.

B. Configure an Amazon CloudWatch Events rule that invokes an AWS Lambda function to secure the S3 bucket.

C. Use the S3 bucket permissions check in AWS Trusted Advisor and configure a CloudWatch event to notify by using Amazon SNS.

D. Turn on object-level logging for Amazon S3. Configure a CloudWatch event to notify by using an SNS topic when a PutObject API call with public-read permission is detected in the AWS CloudTrail logs.

E. Schedule a recursive Lambda function to regularly change all object permissions inside the S3 bucket.

Answer:BD

Analyze:

Triggering the remediation Lambda function from a CloudWatch Events rule is the more efficient pattern.
A. S3 event notifications may be lost in some cases and can take minutes to arrive (https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html), and the S3 event message does not contain permission information (https://docs.aws.amazon.com/AmazonS3/latest/dev/notification-content-structure.html).
C. Trusted Advisor can flag open bucket permissions, but it is advisory only and does not drive the required remediation.
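
One possible remediation body for option B's Lambda function: applying an S3 public access block to the offending bucket. The CloudTrail-via-CloudWatch-Events event shape used here is an assumption of this sketch.

```python
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # A CloudWatch Events rule matching S3 CloudTrail events delivers
    # the bucket name under detail.requestParameters (assumed shape).
    bucket = event["detail"]["requestParameters"]["bucketName"]
    # Block any public ACLs or policies on the offending bucket.
    s3.put_public_access_block(
        Bucket=bucket,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
```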

Q34. A company is using an Amazon CloudFront distribution to distribute both static and dynamic content from a web application running behind an Application Load Balancer. The web application requires user authorization and session tracking for dynamic content. The CloudFront distribution has a single cache behavior configured to whitelist and forward the Authorization, Host, and User-Agent HTTP headers and a session cookie to the origin. All other cache behavior settings are set to their default value. A valid ACM certificate is applied to the CloudFront distribution with a matching CNAME in the distribution settings. The ACM certificate is also applied to the HTTPS listener for the Application Load Balancer. The CloudFront origin protocol policy is set to HTTPS only. Analysis of the cache statistics report shows that the miss rate for this distribution is very high. What can the Solutions Architect do to improve the cache hit rate for this distribution without causing the SSL/TLS handshake between CloudFront and the Application Load Balancer to fail?

A. Create two cache behaviors for static and dynamic content. Remove the User-Agent and Host HTTP headers from the whitelist headers section on both of the cache behaviors. Remove the session cookie from the whitelist cookies section and the Authorization HTTP header from the whitelist headers section for the cache behavior configured for static content.

B. Remove the User-Agent and Authorization HTTP headers from the whitelist headers section of the cache behavior. Then update the cache behavior to use presigned cookies for authorization.

C. Remove the Host HTTP header from the whitelist headers section and remove the session cookie from the whitelist cookies section for the default cache behavior. Enable automatic object compression and use Lambda@Edge viewer request events for user authorization.

D. Create two cache behaviors for static and dynamic content. Remove the User-Agent HTTP header from the whitelist headers section on both of the cache behaviors. Remove the session cookie from the whitelist cookies section and the Authorization HTTP header from the whitelist headers section for the cache behavior configured for static content.

Answer:D

Analyze:

A. The Host header must still be forwarded: CloudFront and the origin use the same certificate, so the certificate's list of domains may not match the Origin Domain Name, and the Host header is then required to avoid a 502 (https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/http-502-bad-gateway.html).
B. Static content performs better without the session cookie, which this option does not remove.
C. The Host header is needed.

Q35. An organization has a write-intensive mobile application that uses Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. The application has scaled well; however, costs have increased exponentially because of higher than anticipated Lambda costs. The application's use is unpredictable, but there has been a steady 20% increase in utilization every month. While monitoring the current Lambda functions, the Solutions Architect notices that the execution time averages 4.5 minutes. Most of the wait time is the result of a high-latency network call to a 3-TB MySQL database server that is on-premises. A VPN is used to connect to the VPC, so the Lambda functions have been configured with a five-minute timeout. How can the Solutions Architect reduce the cost of the current architecture?

A. Replace the VPN with AWS Direct Connect to reduce the network latency to the on-premises MySQL database. Enable local caching in the mobile application to reduce the Lambda function invocation calls. Monitor the Lambda function performance; gradually adjust the timeout and memory properties to lower values while maintaining an acceptable execution time. Offload the frequently accessed records from DynamoDB to Amazon ElastiCache.

B. Replace the VPN with AWS Direct Connect to reduce the network latency to the on-premises MySQL database. Cache the API Gateway results in Amazon CloudFront. Use Amazon EC2 Reserved Instances instead of Lambda. Enable Auto Scaling on EC2, and use Spot Instances during peak times. Enable DynamoDB Auto Scaling to manage target utilization.

C. Migrate the MySQL database server into a Multi-AZ Amazon RDS for MySQL. Enable caching of the Amazon API Gateway results in Amazon CloudFront to reduce the number of Lambda function invocations. Monitor the Lambda function performance; gradually adjust the timeout and memory properties to lower values while maintaining an acceptable execution time. Enable DynamoDB Accelerator for frequently accessed records, and enable the DynamoDB Auto Scaling feature.

D. Migrate the MySQL database server into a Multi-AZ Amazon RDS for MySQL. Enable API caching on API Gateway to reduce the number of Lambda function invocations. Continue to monitor the AWS Lambda function performance; gradually adjust the timeout and memory properties to lower values while maintaining an acceptable execution time. Enable Auto Scaling in DynamoDB.

Answer:D

Analyze:

A. This will not help if the latency comes from the on-premises network itself (i.e., the on-premises network is simply very slow).
B. EC2 is more expensive, and Direct Connect is not cheap either.
C. As the application already scales well, DAX and CloudFront add unnecessary cost; moreover, with DAX all requests go through the DAX cluster first, so it cannot be enabled for just some records.

Q36. A company runs a video processing platform. Files are uploaded by users who connect to a web server, which stores them on an Amazon EFS share. This web server is running on a single Amazon EC2 instance. A different group of instances, running in an Auto Scaling group, scans the EFS share directory structure for new files to process and generates new videos (thumbnails, different resolution, compression, etc.) according to the instructions file, which is uploaded along with the video files. A different application running on a group of instances managed by an Auto Scaling group processes the video files and then deletes them from the EFS share. The results are stored in an S3 bucket. Links to the processed video files are emailed to the customer. The company has recently discovered that as they add more instances to the Auto Scaling Group, many files are processed twice, so image processing spe
