AWS-Certified-Machine-Learning-Specialty Exam - AWS Certified Machine Learning - Specialty

certleader.com

Our pass rate is as high as 98.9%, and the similarity between our AWS-Certified-Machine-Learning-Specialty study guide and the real exam is 90%, based on our seven years of educating experience. Do you want to pass the Amazon AWS-Certified-Machine-Learning-Specialty exam in just one try? Try the latest Amazon AWS-Certified-Machine-Learning-Specialty practice questions and answers first.

Free AWS-Certified-Machine-Learning-Specialty Demo Online For Amazon Certification:

NEW QUESTION 1
A Machine Learning Specialist observes several performance problems with the training portion of a machine learning solution on Amazon SageMaker. The solution uses a large training dataset that is 2 TB in size and uses the SageMaker k-means algorithm. The observed issues include an unacceptably long time before the training job launches and poor I/O throughput while the model trains.
What should the Specialist do to address the performance issues with the current solution?

  • A. Use the SageMaker batch transform feature
  • B. Compress the training data into Apache Parquet format.
  • C. Ensure that the input mode for the training job is set to Pipe.
  • D. Copy the training dataset to an Amazon EFS volume mounted on the SageMaker instance.

Answer: C
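Pipe input mode streams records from Amazon S3 directly into the training container, which removes the upfront dataset download and improves I/O throughput. A minimal sketch of how the input mode appears in a CreateTrainingJob request body (the bucket name and image URI below are illustrative placeholders):

```python
# Fields of a CreateTrainingJob request that select Pipe mode.
algorithm_spec = {
    "TrainingImage": "<kmeans-built-in-image-uri>",  # region-specific built-in image
    "TrainingInputMode": "Pipe",  # stream from S3 instead of copying to local disk
}
train_channel = {
    "ChannelName": "train",
    "DataSource": {"S3DataSource": {
        "S3DataType": "S3Prefix",
        "S3Uri": "s3://example-bucket/kmeans/train/",  # placeholder bucket
    }},
}
```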

NEW QUESTION 2
A Machine Learning Specialist is training a model to identify the make and model of vehicles in images. The Specialist wants to use transfer learning and an existing model trained on images of general objects. The Specialist collated a large custom dataset of pictures containing different vehicle makes and models.
What should the Specialist do to initialize the model to retrain it with the custom data?

  • A. Initialize the model with random weights in all layers including the last fully connected layer
  • B. Initialize the model with pre-trained weights in all layers and replace the last fully connected layer.
  • C. Initialize the model with random weights in all layers and replace the last fully connected layer
  • D. Initialize the model with pre-trained weights in all layers including the last fully connected layer

Answer: B
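The transfer-learning recipe can be sketched in plain Python (the layer names and weight shapes here are invented for illustration): keep the pre-trained weights as a feature extractor and re-initialize only the final fully connected layer for the new vehicle classes.

```python
import random

def build_transfer_model(pretrained, num_new_classes, seed=0):
    """Copy pre-trained weights for every layer except the final fully
    connected layer, which is re-initialized for the new task."""
    rng = random.Random(seed)
    model = {name: weights for name, weights in pretrained.items()}
    in_features = len(pretrained["fc"][0])  # width of the old output layer
    # Replace the last fully connected layer with small random weights
    model["fc"] = [[rng.uniform(-0.01, 0.01) for _ in range(in_features)]
                   for _ in range(num_new_classes)]
    return model

# Toy "pre-trained" model: a conv layer kept as-is, a 3-class output head
pretrained = {
    "conv1": [[0.5, -0.2], [0.1, 0.3]],
    "fc":    [[0.7, 0.1], [0.2, 0.9], [0.4, 0.4]],
}
model = build_transfer_model(pretrained, num_new_classes=40)
```

In a real framework the same idea is "load pre-trained weights, swap the classification head, fine-tune"; only the head starts from random weights.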

NEW QUESTION 3
An online reseller has a large, multi-column dataset with one column missing 30% of its data. A Machine Learning Specialist believes that certain columns in the dataset could be used to reconstruct the missing data.
Which reconstruction approach should the Specialist use to preserve the integrity of the dataset?

  • A. Listwise deletion
  • B. Last observation carried forward
  • C. Multiple imputation
  • D. Mean substitution

Answer: C
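A minimal sketch of regression-based multiple imputation, assuming one complete column predicts the incomplete one: fit a simple linear model on the observed rows, draw several noisy completions, and pool them. Real implementations (e.g., MICE) iterate over many columns; this toy version shows the core idea.

```python
import random
import statistics

def multiple_imputation(xs, ys, m=5, seed=0):
    """Impute missing ys from a complete column xs via simple linear
    regression, drawing m noisy completions and pooling them."""
    rng = random.Random(seed)
    obs = [(x, y) for x, y in zip(xs, ys) if y is not None]
    mx = statistics.fmean(x for x, _ in obs)
    my = statistics.fmean(y for _, y in obs)
    slope = (sum((x - mx) * (y - my) for x, y in obs)
             / sum((x - mx) ** 2 for x, _ in obs))
    intercept = my - slope * mx
    resid_sd = statistics.pstdev(y - (intercept + slope * x) for x, y in obs)
    completions = []
    for _ in range(m):
        # Each completion adds residual noise, preserving uncertainty
        filled = [y if y is not None
                  else intercept + slope * x + rng.gauss(0, resid_sd)
                  for x, y in zip(xs, ys)]
        completions.append(filled)
    # Pool: average each cell across the m completions
    return [statistics.fmean(c[i] for c in completions) for i in range(len(xs))]

xs = [1, 2, 3, 4, 5, 6]
ys = [2.1, 3.9, None, 8.1, None, 12.0]  # roughly y = 2x with two gaps
pooled = multiple_imputation(xs, ys)
```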

NEW QUESTION 4
A Machine Learning Specialist is building a prediction model for a large number of features using linear models, such as linear regression and logistic regression. During exploratory data analysis, the Specialist observes that many features are highly correlated with each other. This may make the model unstable.
What should be done to reduce the impact of having such a large number of features?

  • A. Perform one-hot encoding on highly correlated features
  • B. Use matrix multiplication on highly correlated features.
  • C. Create a new feature space using principal component analysis (PCA)
  • D. Apply the Pearson correlation coefficient

Answer: C
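PCA replaces correlated features with uncorrelated components. A pure-Python sketch using power iteration on the covariance matrix of two highly correlated toy features (real work would use a library such as scikit-learn):

```python
import random

def leading_component(data, iters=200):
    """Power iteration on the 2-D covariance matrix to find the first
    principal component (unit-length direction of maximum variance)."""
    n = len(data)
    mx = sum(p[0] for p in data) / n
    my = sum(p[1] for p in data) / n
    centered = [(x - mx, y - my) for x, y in data]
    cxx = sum(x * x for x, _ in centered) / n
    cyy = sum(y * y for _, y in centered) / n
    cxy = sum(x * y for x, y in centered) / n
    v = (1.0, 0.0)
    for _ in range(iters):
        # Multiply by the covariance matrix and renormalize
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)
    return v

rng = random.Random(42)
# Two highly correlated features: y is x plus small noise
data = [(x, x + rng.gauss(0, 0.1)) for x in range(50)]
pc1 = leading_component(data)  # close to the diagonal direction (1,1)/sqrt(2)
```

Projecting onto the first few such components yields a smaller, decorrelated feature space for the linear model.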

NEW QUESTION 5
A Machine Learning Specialist built an image classification deep learning model. However, the Specialist ran into an overfitting problem in which the training and testing accuracies were 99% and 75%, respectively.
How should the Specialist address this issue and what is the reason behind it?

  • A. The learning rate should be increased because the optimization process was trapped at a local minimum.
  • B. The dropout rate at the flatten layer should be increased because the model is not generalized enough.
  • C. The dimensionality of dense layer next to the flatten layer should be increased because the model is not complex enough.
  • D. The epoch number should be increased because the optimization process was terminated before it reached the global minimum.

Answer: B
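Increasing dropout regularizes the network by randomly zeroing activations during training, which discourages co-adaptation and narrows the train/test gap. A minimal sketch of inverted dropout, which rescales surviving units so the expected activation is unchanged:

```python
import random

def dropout(activations, rate, rng, training=True):
    """Inverted dropout: zero a fraction `rate` of units during training
    and scale survivors by 1/(1-rate) so the expectation is preserved."""
    if not training or rate == 0.0:
        return list(activations)
    keep = 1.0 - rate
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

rng = random.Random(0)
acts = [1.0] * 10000
out = dropout(acts, rate=0.5, rng=rng)
mean_out = sum(out) / len(out)  # close to 1.0: expectation preserved
```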

NEW QUESTION 6
When submitting Amazon SageMaker training jobs using one of the built-in algorithms, which common parameters MUST be specified? (Select THREE.)

  • A. The training channel identifying the location of training data on an Amazon S3 bucket.
  • B. The validation channel identifying the location of validation data on an Amazon S3 bucket.
  • C. The IAM role that Amazon SageMaker can assume to perform tasks on behalf of the users.
  • D. Hyperparameters in a JSON array as documented for the algorithm used.
  • E. The Amazon EC2 instance class specifying whether training will be run using CPU or GPU.
  • F. The output path specifying where on an Amazon S3 bucket the trained model will persist.

Answer: CEF
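A skeleton of a CreateTrainingJob request showing where those parameters live (the role ARN, bucket names, and image URI are placeholders, and the hyperparameters and validation channel are the optional parts):

```python
# Minimal CreateTrainingJob request body; field names follow the
# SageMaker API, values are illustrative placeholders.
request = {
    "TrainingJobName": "kmeans-demo",
    "AlgorithmSpecification": {
        "TrainingImage": "<built-in-algorithm-image-uri>",
        "TrainingInputMode": "File",
    },
    "RoleArn": "arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    "InputDataConfig": [{
        "ChannelName": "train",
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://example-bucket/train/",
        }},
    }],
    "OutputDataConfig": {"S3OutputPath": "s3://example-bucket/output/"},
    "ResourceConfig": {          # instance class decides CPU vs GPU training
        "InstanceType": "ml.m5.xlarge",
        "InstanceCount": 1,
        "VolumeSizeInGB": 50,
    },
    "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
}
```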

NEW QUESTION 7
A retail company uses a machine learning (ML) model for daily sales forecasting. The company’s brand manager reports that the model has provided inaccurate results for the past 3 weeks.
At the end of each day, an AWS Glue job consolidates the input data that is used for the forecasting with the actual daily sales data and the predictions of the model. The AWS Glue job stores the data in Amazon S3. The company’s ML team is using an Amazon SageMaker Studio notebook to gain an understanding about the source of the model's inaccuracies.
What should the ML team do on the SageMaker Studio notebook to visualize the model's degradation MOST accurately?

  • A. Create a histogram of the daily sales over the last 3 weeks. In addition, create a histogram of the daily sales from before that period.
  • B. Create a histogram of the model errors over the last 3 weeks. In addition, create a histogram of the model errors from before that period.
  • C. Create a line chart with the weekly mean absolute error (MAE) of the model.
  • D. Create a scatter plot of daily sales versus model error for the last 3 weeks. In addition, create a scatter plot of daily sales versus model error from before that period.

Answer: C
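The weekly MAE series behind such a line chart takes only a few lines to compute; the daily error values below are invented to show a degradation trend over three weeks:

```python
def weekly_mae(daily_errors, days_per_week=7):
    """Collapse daily prediction errors into one MAE per week, the
    series you would plot as a line chart to see drift over time."""
    weeks = [daily_errors[i:i + days_per_week]
             for i in range(0, len(daily_errors), days_per_week)]
    return [sum(abs(e) for e in w) / len(w) for w in weeks]

errors = [1, -1, 2, -2, 1, -1, 2,   # week 1: small errors
          3, -3, 4, -4, 3, -3, 4,   # week 2: growing
          6, -6, 7, -7, 6, -6, 7]   # week 3: degradation visible
mae_by_week = weekly_mae(errors)    # rising sequence of three values
```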

NEW QUESTION 8
A retail chain has been ingesting purchasing records from its network of 20,000 stores to Amazon S3 using Amazon Kinesis Data Firehose. To support training an improved machine learning model, training records will require new but simple transformations, and some attributes will be combined. The model needs to be retrained daily.
Given the large number of stores and the legacy data ingestion, which change will require the LEAST amount of development effort?

  • A. Require that the stores switch to capturing their data locally on AWS Storage Gateway for loading into Amazon S3, then use AWS Glue to do the transformation
  • B. Deploy an Amazon EMR cluster running Apache Spark with the transformation logic, and have the cluster run each day on the accumulating records in Amazon S3, outputting new/transformed records to Amazon S3
  • C. Spin up a fleet of Amazon EC2 instances with the transformation logic, have them transform the data records accumulating on Amazon S3, and output the transformed records to Amazon S3.
  • D. Insert an Amazon Kinesis Data Analytics stream downstream of the Kinesis Data Firehose stream that transforms raw record attributes into simple transformed values using SQL.

Answer: D

NEW QUESTION 9
A data scientist is using an Amazon SageMaker notebook instance and needs to securely access data stored in a specific Amazon S3 bucket.
How should the data scientist accomplish this?

  • A. Add an S3 bucket policy allowing GetObject, PutObject, and ListBucket permissions to the AmazonSageMaker notebook ARN as principal.
  • B. Encrypt the objects in the S3 bucket with a custom AWS Key Management Service (AWS KMS) key that only the notebook owner has access to.
  • C. Attach the policy to the IAM role associated with the notebook that allows GetObject, PutObject, and ListBucket operations to the specific S3 bucket.
  • D. Use a script in a lifecycle configuration to configure the AWS CLI on the instance with an access key ID and secret.

Answer: C
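A sketch of the scoped-down policy to attach to the notebook's execution role (the bucket name is a placeholder). Note that ListBucket applies to the bucket ARN itself, while the object-level actions apply to the objects under it:

```python
# IAM policy document as a Python dict; "example-bucket" is a placeholder.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Bucket-level permission: listing the bucket contents
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::example-bucket"],
        },
        {   # Object-level permissions: reading and writing objects
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": ["arn:aws:s3:::example-bucket/*"],
        },
    ],
}
```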

NEW QUESTION 10
An ecommerce company is automating the categorization of its products based on images. A data scientist has trained a computer vision model using the Amazon SageMaker image classification algorithm. The images for each product are classified according to specific product lines. The accuracy of the model is too low when categorizing new products. All of the product images have the same dimensions and are stored within an Amazon S3 bucket. The company wants to improve the model so it can be used for new products as soon as possible.
Which steps would improve the accuracy of the solution? (Choose three.)

  • A. Use the SageMaker semantic segmentation algorithm to train a new model to achieve improved accuracy.
  • B. Use the Amazon Rekognition DetectLabels API to classify the products in the dataset.
  • C. Augment the images in the dataset. Use open source libraries to crop, resize, flip, rotate, and adjust the brightness and contrast of the images.
  • D. Use a SageMaker notebook to implement the normalization of pixels and scaling of the images. Store the new dataset in Amazon S3.
  • E. Use Amazon Rekognition Custom Labels to train a new model.
  • F. Check whether there are class imbalances in the product categories, and apply oversampling or undersampling as required. Store the new dataset in Amazon S3.

Answer: CDF
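Image augmentation can be sketched in plain Python on a toy 2-D "image" of pixel values; a production pipeline would use a library such as Pillow or OpenCV for cropping, resizing, and rotation as well:

```python
import random

def hflip(img):
    """Horizontal flip of an image stored as rows of pixel values."""
    return [list(reversed(row)) for row in img]

def adjust_brightness(img, delta):
    """Shift every pixel by delta, clamped to the 0-255 range."""
    return [[max(0, min(255, p + delta)) for p in row] for row in img]

def augment(img, rng):
    """Randomly flip and brighten/darken one image: a tiny sketch of a
    crop/resize/flip/rotate/brightness augmentation pipeline."""
    out = hflip(img) if rng.random() < 0.5 else img
    return adjust_brightness(out, rng.randint(-30, 30))

img = [[10, 20, 30], [40, 50, 60]]
rng = random.Random(7)
augmented = [augment(img, rng) for _ in range(4)]  # 4 extra training samples
```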

NEW QUESTION 11
A company wants to predict the sale prices of houses based on available historical sales data. The target variable in the company’s dataset is the sale price. The features include parameters such as the lot size, living area measurements, non-living area measurements, number of bedrooms, number of bathrooms, year built, and postal code. The company wants to use multi-variable linear regression to predict house sale prices. Which step should a machine learning specialist take to remove features that are irrelevant for the analysis and reduce the model’s complexity?

  • A. Plot a histogram of the features and compute their standard deviation. Remove features with high variance.
  • B. Plot a histogram of the features and compute their standard deviation. Remove features with low variance.
  • C. Build a heatmap showing the correlation of the dataset against itself. Remove features with low mutual correlation scores.
  • D. Run a correlation check of all features against the target variable. Remove features with low target variable correlation scores.

Answer: D
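The target-correlation check can be sketched with a hand-made toy dataset (the feature names and values are invented): compute the Pearson correlation of each feature against the sale price and keep only features above a threshold.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy rows: living area tracks price; a noise feature does not.
living_area = [80, 120, 100, 150, 90]
noise = [4, 5, 3, 2, 1]
price = [160, 240, 205, 300, 175]

corrs = {"living_area": pearson(living_area, price),
         "noise": pearson(noise, price)}
kept = [f for f, c in corrs.items() if abs(c) >= 0.3]  # drop weak features
```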

NEW QUESTION 12
A company has set up and deployed its machine learning (ML) model into production with an endpoint using Amazon SageMaker hosting services. The ML team has configured automatic scaling for its SageMaker instances to support workload changes. During testing, the team notices that additional instances are being launched before the new instances are ready. This behavior needs to change as soon as possible.
How can the ML team solve this issue?

  • A. Decrease the cooldown period for the scale-in activity. Increase the configured maximum capacity of instances.
  • B. Replace the current endpoint with a multi-model endpoint using SageMaker.
  • C. Set up Amazon API Gateway and AWS Lambda to trigger the SageMaker inference endpoint.
  • D. Increase the cooldown period for the scale-out activity.

Answer: D
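In Application Auto Scaling terms, the fix is a longer scale-out cooldown on the endpoint variant's target-tracking policy, so a previous scale-out can take effect before more instances launch. A sketch of the put_scaling_policy parameters (the endpoint name and numeric values are illustrative):

```python
# Target-tracking scaling policy parameters for a SageMaker endpoint variant.
scaling_policy = {
    "PolicyName": "sagemaker-invocations-target-tracking",
    "ServiceNamespace": "sagemaker",
    "ResourceId": "endpoint/my-endpoint/variant/AllTraffic",  # placeholder
    "ScalableDimension": "sagemaker:variant:DesiredInstanceCount",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 1000.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
        },
        "ScaleOutCooldown": 600,  # raised so in-flight launches settle first
        "ScaleInCooldown": 300,
    },
}
```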

NEW QUESTION 13
A company uses a long short-term memory (LSTM) model to evaluate the risk factors of a particular energy sector. The model reviews multi-page text documents to analyze each sentence of the text and categorize it as either a potential risk or no risk. The model is not performing well, even though the Data Scientist has experimented with many different network structures and tuned the corresponding hyperparameters.
Which approach will provide the MAXIMUM performance boost?

  • A. Initialize the words by term frequency-inverse document frequency (TF-IDF) vectors pretrained on a large collection of news articles related to the energy sector.
  • B. Use gated recurrent units (GRUs) instead of LSTM and run the training process until the validation loss stops decreasing.
  • C. Reduce the learning rate and run the training process until the training loss stops decreasing.
  • D. Initialize the words by word2vec embeddings pretrained on a large collection of news articles related to the energy sector.

Answer: D
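The benefit of domain-pretrained embeddings is that related sector terms land close together in vector space, giving the LSTM a head start over randomly initialized words. A toy illustration with invented 3-dimensional vectors and cosine similarity:

```python
def cosine(u, v):
    """Cosine similarity between two vectors of equal length."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

# Invented 3-d "embeddings": in a domain-pretrained space,
# energy-sector risk terms cluster together.
emb = {
    "outage":   [0.9, 0.1, 0.0],
    "blackout": [0.85, 0.2, 0.05],
    "picnic":   [0.0, 0.1, 0.95],
}
risk_pair = cosine(emb["outage"], emb["blackout"])  # high: related terms
unrelated = cosine(emb["outage"], emb["picnic"])    # low: unrelated term
```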

NEW QUESTION 14
A media company with a very large archive of unlabeled images, text, audio, and video footage wishes to index its assets to allow rapid identification of relevant content by the Research team. The company wants to use machine learning to accelerate the efforts of its in-house researchers who have limited machine learning expertise.
Which is the FASTEST route to index the assets?

  • A. Use Amazon Rekognition, Amazon Comprehend, and Amazon Transcribe to tag data into distinct categories/classes.
  • B. Create a set of Amazon Mechanical Turk Human Intelligence Tasks to label all footage.
  • C. Use Amazon Transcribe to convert speech to text. Use the Amazon SageMaker Neural Topic Model (NTM) and Object Detection algorithms to tag data into distinct categories/classes.
  • D. Use the AWS Deep Learning AMI and Amazon EC2 GPU instances to create custom models for audio transcription and topic modeling, and use object detection to tag data into distinct categories/classes.

Answer: A

NEW QUESTION 15
A Machine Learning Specialist is attempting to build a linear regression model.
Given the displayed residual plot only, what is the MOST likely problem with the model?

  • A. Linear regression is inappropriate. The residuals do not have constant variance.
  • B. Linear regression is inappropriate. The underlying data has outliers.
  • C. Linear regression is appropriate. The residuals have a zero mean.
  • D. Linear regression is appropriate. The residuals have constant variance.

Answer: B
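Residual diagnostics like those behind such a plot can be computed directly; a sketch with invented data where one anomalous point shows up as a residual outlier via its z-score:

```python
def residual_checks(y_true, y_pred, z_threshold=2.5):
    """Return the residual mean (should be near zero for a good fit)
    and indices of residuals whose z-score exceeds the threshold."""
    res = [t - p for t, p in zip(y_true, y_pred)]
    n = len(res)
    mean = sum(res) / n
    sd = (sum((r - mean) ** 2 for r in res) / n) ** 0.5
    outliers = [i for i, r in enumerate(res)
                if abs((r - mean) / sd) > z_threshold]
    return mean, outliers

y_true = [10, 12, 14, 16, 18, 20, 22, 24, 26, 60]  # last point is anomalous
y_pred = [10.2, 11.8, 14.1, 15.9, 18.2, 19.8, 22.1, 23.9, 26.2, 28.0]
mean_res, outlier_idx = residual_checks(y_true, y_pred)  # flags index 9
```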

NEW QUESTION 16
A library is developing an automatic book-borrowing system that uses Amazon Rekognition. Images of library members’ faces are stored in an Amazon S3 bucket. When members borrow books, the Amazon Rekognition CompareFaces API operation compares real faces against the stored faces in Amazon S3.
The library needs to improve security by making sure that images are encrypted at rest. Also, when the images are used with Amazon Rekognition, they need to be encrypted in transit. The library also must ensure that the images are not used to improve Amazon Rekognition as a service.
How should a machine learning specialist architect the solution to satisfy these requirements?

  • A. Enable server-side encryption on the S3 bucket. Submit an AWS Support ticket to opt out of allowing images to be used for improving the service, and follow the process provided by AWS Support.
  • B. Switch to using an Amazon Rekognition collection to store the images. Use the IndexFaces and SearchFacesByImage API operations instead of the CompareFaces API operation.
  • C. Switch to using the AWS GovCloud (US) Region for Amazon S3 to store images and for Amazon Rekognition to compare faces. Set up a VPN connection and only call the Amazon Rekognition API operations through the VPN.
  • D. Enable client-side encryption on the S3 bucket. Set up a VPN connection and only call the Amazon Rekognition API operations through the VPN.

Answer: A
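The server-side-encryption half of that answer corresponds to a bucket-level default encryption rule, e.g. the configuration passed to the S3 put_bucket_encryption API (SSE-S3 shown here; SSE-KMS with a customer managed key is an alternative):

```python
# Default-encryption configuration for an S3 bucket, as passed in the
# ServerSideEncryptionConfiguration parameter of put_bucket_encryption.
encryption_config = {
    "Rules": [{
        "ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}
    }]
}
```

Encryption in transit is covered separately: both Amazon S3 and Amazon Rekognition API calls use TLS endpoints.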

NEW QUESTION 17
......

100% Valid and Newest Version AWS-Certified-Machine-Learning-Specialty Questions & Answers shared by Surepassexam, Get Full Dumps HERE: https://www.surepassexam.com/AWS-Certified-Machine-Learning-Specialty-exam-dumps.html (New 307 Q&As)