A) Encrypt the Amazon S3 bucket where the logs are stored by using AWS Key Management Service (AWS KMS). Copy the data into the Amazon Redshift cluster from Amazon S3 on a daily basis. Query the data as required.
B) Disable encryption on the Amazon Redshift cluster, configure audit logging, and encrypt the Amazon Redshift cluster. Use Amazon Redshift Spectrum to query the data as required.
C) Enable default encryption on the Amazon S3 bucket where the logs are stored by using AES-256 encryption. Copy the data into the Amazon Redshift cluster from Amazon S3 on a daily basis. Query the data as required.
D) Enable default encryption on the Amazon S3 bucket where the logs are stored by using AES-256 encryption. Use Amazon Redshift Spectrum to query the data as required.
Correct Answer
verified
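A minimal, hypothetical sketch of how the bucket encryption in options A and C and the daily COPY load could be wired up with boto3; the bucket, cluster, role, and table names are placeholders, not from the question:

import boto3

# Default encryption on the log bucket (AES-256 here; swap in "aws:kms" plus a
# KMSMasterKeyID for the SSE-KMS variant in option A).
boto3.client("s3").put_bucket_encryption(
    Bucket="example-log-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)

# Daily load into the cluster, for example from a scheduled job, via the
# Redshift Data API.
boto3.client("redshift-data").execute_statement(
    ClusterIdentifier="example-cluster",
    Database="dev",
    DbUser="awsuser",
    Sql=(
        "COPY logs FROM 's3://example-log-bucket/daily/' "
        "IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole' "
        "FORMAT AS JSON 'auto';"
    ),
)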
Multiple Choice
A) Convert the log files to Apache Avro format.
B) Add a key prefix of the form date=year-month-day/ to the S3 objects to partition the data.
C) Convert the log files to Apache Parquet format.
D) Add a key prefix of the form year-month-day/ to the S3 objects to partition the data.
E) Drop and recreate the table with the PARTITIONED BY clause. Run the ALTER TABLE ADD PARTITION statement.
F) Drop and recreate the table with the PARTITIONED BY clause. Run the MSCK REPAIR TABLE statement.
Correct Answer
verified
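A minimal, hypothetical sketch of the partitioning approach in options B, E, and F, assuming placeholder database, table, and bucket names; the Athena DDL declares the date partition key and MSCK REPAIR TABLE registers the existing date=YYYY-MM-DD/ prefixes:

import boto3

athena = boto3.client("athena")

ddl = """
CREATE EXTERNAL TABLE IF NOT EXISTS web_logs (
    request_time string,
    status_code  int,
    uri          string
)
PARTITIONED BY (`date` string)
STORED AS PARQUET
LOCATION 's3://example-log-bucket/logs/'
"""

# Recreate the table with PARTITIONED BY, then load the partitions.
for statement in (ddl, "MSCK REPAIR TABLE web_logs"):
    athena.start_query_execution(
        QueryString=statement,
        QueryExecutionContext={"Database": "example_db"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )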
Multiple Choice
A) Migrate the validation process from Lambda to AWS Glue.
B) Migrate the Lambda consumers from standard data stream iterators to an HTTP/2 stream consumer.
C) Increase the number of shards in the Kinesis data stream.
D) Send the posts stream to Amazon Managed Streaming for Apache Kafka instead of the Kinesis data stream.
Correct Answer
verified
Multiple Choice
A) Use enhanced fan-out in Kinesis Data Streams.
B) Increase the number of shards for the Kinesis data stream.
C) Reduce the propagation delay by overriding the KCL default settings.
D) Develop consumers by using Amazon Kinesis Data Firehose.
Correct Answer
verified
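A minimal, hypothetical sketch of the enhanced fan-out registration from option A, with placeholder stream and consumer names; each registered consumer gets its own 2 MB/s per shard pushed over HTTP/2 instead of sharing the polled GetRecords limit:

import boto3

kinesis = boto3.client("kinesis")
response = kinesis.register_stream_consumer(
    StreamARN="arn:aws:kinesis:us-east-1:123456789012:stream/example-stream",
    ConsumerName="example-fanout-consumer",
)

# KCL 2.x (or a direct SubscribeToShard call) would then read with this ARN.
print(response["Consumer"]["ConsumerARN"])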
Multiple Choice
A) Use multiple COPY commands to load the data into the Amazon Redshift cluster.
B) Use S3DistCp to load multiple files into the Hadoop Distributed File System (HDFS) and use an HDFS connector to ingest the data into the Amazon Redshift cluster.
C) Use LOAD commands equal to the number of Amazon Redshift cluster nodes and load the data in parallel into each node.
D) Use a single COPY command to load the data into the Amazon Redshift cluster.
Correct Answer
verified
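A minimal, hypothetical sketch of option D, with placeholder names: one COPY over a common key prefix lets Amazon Redshift split the matching files across all slices in parallel, so no per-node LOAD commands are needed:

import boto3

boto3.client("redshift-data").execute_statement(
    ClusterIdentifier="example-cluster",
    Database="dev",
    DbUser="awsuser",
    Sql=(
        # A single COPY; Redshift distributes the matching files across slices.
        "COPY sales FROM 's3://example-bucket/sales/part-' "
        "IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole' "
        "FORMAT AS CSV;"
    ),
)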
Multiple Choice
A) An EVEN distribution style for both tables
B) A KEY distribution style for both tables
C) An ALL distribution style for the product table and an EVEN distribution style for the transactions table
D) An EVEN distribution style for the product table and a KEY distribution style for the transactions table
Correct Answer
verified
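A minimal, hypothetical sketch of option C, using placeholder table definitions: the small product dimension is replicated to every node with DISTSTYLE ALL, while the large transactions fact table is spread evenly across slices:

import boto3

ddl = [
    """
    CREATE TABLE product (
        product_id   INT,
        product_name VARCHAR(100)
    ) DISTSTYLE ALL;
    """,
    """
    CREATE TABLE transactions (
        transaction_id BIGINT,
        product_id     INT,
        amount         DECIMAL(10,2),
        tx_date        DATE
    ) DISTSTYLE EVEN;
    """,
]

boto3.client("redshift-data").batch_execute_statement(
    ClusterIdentifier="example-cluster",  # placeholder
    Database="dev",
    DbUser="awsuser",
    Sqls=ddl,
)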
Multiple Choice
A) Select Amazon Elasticsearch Service (Amazon ES) as the endpoint for Kinesis Data Firehose. Set up a Kibana dashboard using the data in Amazon ES with the desired analyses and visualizations.
B) Select Amazon S3 as the endpoint for Kinesis Data Firehose. Read data into an Amazon SageMaker Jupyter notebook and carry out the desired analyses and visualizations.
C) Select Amazon Redshift as the endpoint for Kinesis Data Firehose. Connect Amazon QuickSight with SPICE to Amazon Redshift to create the desired analyses and visualizations.
D) Select Amazon S3 as the endpoint for Kinesis Data Firehose. Use AWS Glue to catalog the data and Amazon Athena to query it. Connect Amazon QuickSight with SPICE to Athena to create the desired analyses and visualizations.
Correct Answer
verified
Multiple Choice
A) Use an AWS Glue ML transform to create a forecast and then use Amazon QuickSight to visualize the data.
B) Use Amazon QuickSight to visualize the data and then use ML-powered forecasting to forecast the key business metrics.
C) Use a pre-built ML AMI from the AWS Marketplace to create forecasts and then use Amazon QuickSight to visualize the data.
D) Use calculated fields to create a new forecast and then use Amazon QuickSight to visualize the data.
Correct Answer
verified
Multiple Choice
A) Use Amazon QuickSight with Amazon Athena as the data source. Use heat maps as the visual type.
B) Use Amazon QuickSight with Amazon S3 as the data source. Use heat maps as the visual type.
C) Use Amazon QuickSight with Amazon Athena as the data source. Use pivot tables as the visual type.
D) Use Amazon QuickSight with Amazon S3 as the data source. Use pivot tables as the visual type.
Correct Answer
verified
Multiple Choice
A) Create an EMR security configuration and ensure the security configuration is associated with the EMR clusters when they are created.
B) Check the security group of the EMR clusters regularly to ensure it does not allow inbound traffic from IPv4 0.0.0.0/0 or IPv6 ::/0.
C) Enable the block public access setting for Amazon EMR at the account level before any EMR cluster is created.
D) Use AWS WAF to block public internet access to the EMR clusters across the board.
Correct Answer
verified
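A minimal, hypothetical sketch of option C: enabling EMR block public access for the account so clusters cannot launch with security groups open to 0.0.0.0/0 or ::/0; the port-22 exception is shown only as an example:

import boto3

boto3.client("emr").put_block_public_access_configuration(
    BlockPublicAccessConfiguration={
        "BlockPublicSecurityGroupRules": True,
        # Optional: ports that may still allow public inbound rules, e.g. SSH.
        "PermittedPublicSecurityGroupRuleRanges": [
            {"MinRange": 22, "MaxRange": 22}
        ],
    }
)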
Multiple Choice
A) Use Amazon Aurora as the data catalog. Create AWS Lambda functions that will connect and gather the metadata information from multiple sources and update the data catalog in Aurora. Schedule the Lambda functions periodically.
B) Use the AWS Glue Data Catalog as the central metadata repository. Use AWS Glue crawlers to connect to multiple data stores and update the Data Catalog with metadata changes. Schedule the crawlers periodically to update the metadata catalog.
C) Use Amazon DynamoDB as the data catalog. Create AWS Lambda functions that will connect and gather the metadata information from multiple sources and update the DynamoDB catalog. Schedule the Lambda functions periodically.
D) Use the AWS Glue Data Catalog as the central metadata repository. Extract the schema for RDS and Amazon Redshift sources and build the Data Catalog. Use AWS Glue crawlers for data stored in Amazon S3 to infer the schema and automatically update the Data Catalog.
Correct Answer
verified
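A minimal, hypothetical sketch of option B, with placeholder names, paths, role, and connection: a Glue crawler covering an S3 prefix and a JDBC source, scheduled daily to keep the Data Catalog current:

import boto3

glue = boto3.client("glue")
glue.create_crawler(
    Name="example-metadata-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",
    DatabaseName="example_catalog_db",
    Targets={
        "S3Targets": [{"Path": "s3://example-data-lake/raw/"}],
        "JdbcTargets": [
            {"ConnectionName": "example-rds-connection", "Path": "salesdb/%"}
        ],
    },
    Schedule="cron(0 2 * * ? *)",  # run daily at 02:00 UTC
)
glue.start_crawler(Name="example-metadata-crawler")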
Multiple Choice
A) Create an Amazon Managed Streaming for Apache Kafka cluster to ingest the data, and use an Apache Spark Streaming with Apache Kafka consumer API in an automatically scaled Amazon EMR cluster to process the incoming data. Use the Spark Streaming application to detect the known event sequence and send the SNS message.
B) Create a REST-based web service using Amazon API Gateway in front of an AWS Lambda function. Create an Amazon RDS for PostgreSQL database with sufficient Provisioned IOPS (PIOPS). In the Lambda function, store incoming events in the RDS database and query the latest data to detect the known event sequence and send the SNS message.
C) Create an Amazon Kinesis Data Firehose delivery stream to capture the incoming sensor data. Use an AWS Lambda transformation function to detect the known event sequence and send the SNS message.
D) Create an Amazon Kinesis data stream to capture the incoming sensor data and create another stream for alert messages. Set up AWS Application Auto Scaling on both. Create a Kinesis Data Analytics for Java application to detect the known event sequence, and add a message to the message stream. Configure an AWS Lambda function to poll the message stream and publish to the SNS topic.
Correct Answer
verified
Multiple Choice
A) Ingest the data stream with Amazon Kinesis Data Streams. Have an AWS Lambda consumer evaluate the stream, collect the number of status codes, and evaluate the data against a previously trained RCF model. Persist the source and results as a time series to Amazon DynamoDB.
B) Ingest the data stream with Amazon Kinesis Data Streams. Have a Kinesis Data Analytics application evaluate the stream over a 5-minute window using the RCF function and summarize the count of status codes. Persist the source and results to Amazon S3 through output delivery to Kinesis Data Firehose.
C) Ingest the data stream with Amazon Kinesis Data Firehose with a delivery frequency of 1 minute or 1 MB into Amazon S3. Ensure Amazon S3 triggers an event to invoke an AWS Lambda consumer that evaluates the batch data, collects the number of status codes, and evaluates the data against a previously trained RCF model. Persist the source and results as a time series to Amazon DynamoDB.
D) Ingest the data stream with Amazon Kinesis Data Firehose with a delivery frequency of 5 minutes or 1 MB into Amazon S3. Have a Kinesis Data Analytics application evaluate the stream over a 1-minute window using the RCF function and summarize the count of status codes. Persist the results to Amazon S3 through a Kinesis Data Analytics output to an AWS Lambda integration.
Correct Answer
verified
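A minimal, hypothetical sketch of the RCF scoring in option B, held in a Python string only for presentation; in practice this SQL would go into the Kinesis Data Analytics application editor, and the column names, windowing, and summary counts are simplified placeholders:

RCF_APPLICATION_SQL = """
CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" (
    status_code   INTEGER,
    anomaly_score DOUBLE
);

CREATE OR REPLACE PUMP "STREAM_PUMP" AS
INSERT INTO "DESTINATION_SQL_STREAM"
SELECT STREAM "status_code", "ANOMALY_SCORE"
FROM TABLE(
    RANDOM_CUT_FOREST(
        CURSOR(SELECT STREAM "status_code" FROM "SOURCE_SQL_STREAM_001")
    )
);
"""
# The application output would then be delivered to Amazon S3 through a
# Kinesis Data Firehose destination, as option B describes.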
Multiple Choice
A) Create an S3 bucket for each given use case, create an S3 bucket policy that grants permissions to appropriate individual IAM users, and apply the S3 bucket policy to the S3 bucket.
B) Create an Athena workgroup for each given use case, apply tags to the workgroup, and create an IAM policy using the tags to apply appropriate permissions to the workgroup.
C) Create an IAM role for each given use case, assign appropriate permissions to the role for the given use case, and associate the role with Athena.
D) Create an AWS Glue Data Catalog resource policy for each given use case that grants permissions to appropriate individual IAM users, and apply the resource policy to the specific tables used by Athena.
Correct Answer
verified
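A minimal, hypothetical sketch of option B, with placeholder tag key, value, actions, and ARNs: an IAM policy that permits query actions only on Athena workgroups carrying a matching tag:

import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "athena:StartQueryExecution",
                "athena:GetQueryExecution",
                "athena:GetQueryResults",
            ],
            "Resource": "arn:aws:athena:us-east-1:123456789012:workgroup/*",
            "Condition": {
                "StringEquals": {"aws:ResourceTag/use-case": "marketing"}
            },
        }
    ],
}

boto3.client("iam").create_policy(
    PolicyName="AthenaMarketingWorkgroupAccess",
    PolicyDocument=json.dumps(policy_document),
)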
Multiple Choice
A) Upload the individual files to Amazon S3 and run the COPY command as soon as the files become available.
B) Split the number of files so they are equal to a multiple of the number of slices in the Amazon Redshift cluster. Gzip and upload the files to Amazon S3. Run the COPY command on the files.
C) Split the number of files so they are equal to a multiple of the number of compute nodes in the Amazon Redshift cluster. Gzip and upload the files to Amazon S3. Run the COPY command on the files.
D) Apply sharding by breaking up the files so the distkey columns with the same values go to the same file. Gzip and upload the sharded files to Amazon S3. Run the COPY command on the files.
Correct Answer
verified
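A minimal, hypothetical sketch of option B, with placeholder names and role: check the slice count (the number of files should be a multiple of it), then load the gzipped parts with a single COPY:

import boto3

client = boto3.client("redshift-data")

# Each slice loads one file at a time, so the slice count drives the file split.
client.execute_statement(
    ClusterIdentifier="example-cluster",
    Database="dev",
    DbUser="awsuser",
    Sql="SELECT COUNT(*) AS slices FROM stv_slices;",
)

# One COPY over the prefix of gzipped parts loads them in parallel.
client.execute_statement(
    ClusterIdentifier="example-cluster",
    Database="dev",
    DbUser="awsuser",
    Sql=(
        "COPY sales FROM 's3://example-bucket/sales/part_' "
        "IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole' GZIP CSV;"
    ),
)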
Multiple Choice
A) Set up a trusted connection with HSM using a client and server certificate with automatic key rotation.
B) Modify the cluster with an HSM encryption option and automatic key rotation.
C) Create a new HSM-encrypted Amazon Redshift cluster and migrate the data to the new cluster.
D) Enable HSM with key rotation through the AWS CLI.
E) Enable Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) encryption in the HSM.
Correct Answer
verified
Multiple Choice
A) There are multiple shards in the stream, and order is maintained only within each shard. The data analyst needs to make sure there is only a single shard in the stream and that no stream resize runs.
B) The hash key generation process for the records is not working correctly. The data analyst should generate an explicit hash key on the producer side so the records are directed to the appropriate shard accurately.
C) The records are not being received by Kinesis Data Streams in order. The producer should use the PutRecords API call instead of the PutRecord API call with the SequenceNumberForOrdering parameter.
D) The consumer is not processing the parent shard completely before processing the child shards after a stream resize. The data analyst should process the parent shard completely first before processing the child shards.
Correct Answer
verified
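A minimal, hypothetical sketch of the per-key ordering mechanics touched on in options B and C, with placeholder stream and key names: a fixed partition key keeps related records on one shard, and SequenceNumberForOrdering chains successive PutRecord calls for that key:

import boto3

kinesis = boto3.client("kinesis")

sequence_number = None
for payload in (b'{"event": 1}', b'{"event": 2}', b'{"event": 3}'):
    kwargs = {
        "StreamName": "example-stream",
        "Data": payload,
        "PartitionKey": "customer-42",  # same key, same shard, ordered
    }
    if sequence_number is not None:
        # Guarantees strictly increasing sequence numbers for this partition key.
        kwargs["SequenceNumberForOrdering"] = sequence_number
    response = kinesis.put_record(**kwargs)
    sequence_number = response["SequenceNumber"]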
Multiple Choice
A) Use Amazon Kinesis Data Firehose to store the data in Amazon S3. Use Amazon QuickSight to build the dashboard.
B) Use Apache Spark Streaming on Amazon EMR to read the data in near-real time. Develop a custom application for the dashboard by using D3.js.
C) Use Amazon Kinesis Data Firehose to push the data into an Amazon Elasticsearch Service (Amazon ES) cluster. Visualize the data by using a Kibana dashboard.
D) Use AWS Glue streaming ETL to store the data in Amazon S3. Use Amazon QuickSight to build the dashboard.
Correct Answer
verified
Multiple Choice
A) A geospatial color-coded chart of sales volume data across the country.
B) A pivot table of sales volume data summed up at the state level.
C) A drill-down layer for state-level sales volume data.
D) A drill through to other dashboards containing state-level sales volume data.
Correct Answer
verified
Multiple Choice
A) Create a customer master key (CMK) in AWS KMS. Assign the CMK an alias. Use the AWS Encryption SDK, providing it with the key alias to encrypt and decrypt the data.
B) Create a customer master key (CMK) in AWS KMS. Assign the CMK an alias. Enable server-side encryption on the Kinesis data stream using the CMK alias as the KMS master key.
C) Create a customer master key (CMK) in AWS KMS. Create an AWS Lambda function to encrypt and decrypt the data. Set the KMS key ID in the function's environment variables.
D) Enable server-side encryption on the Kinesis data stream using the default KMS key for Kinesis Data Streams.
Correct Answer
verified
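A minimal, hypothetical sketch of option B, with placeholder alias and stream names: create the CMK, give it an alias, and enable server-side encryption on the stream using that alias as the key ID:

import boto3

kms = boto3.client("kms")
key_id = kms.create_key(Description="Key for Kinesis stream encryption")["KeyMetadata"]["KeyId"]
kms.create_alias(AliasName="alias/example-stream-key", TargetKeyId=key_id)

boto3.client("kinesis").start_stream_encryption(
    StreamName="example-stream",
    EncryptionType="KMS",
    KeyId="alias/example-stream-key",  # the alias is accepted as the key ID
)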