Machine Learning with R Cognitive Class Exam Answers

by IndiaSuccessStories

Introduction to Machine Learning with R

Introduction to Machine Learning with R is a comprehensive guide that introduces readers to the fundamentals of machine learning using the R programming language. In this book, readers will learn how to implement various machine learning algorithms and techniques using practical examples and hands-on exercises.

Key topics covered in “Introduction to Machine Learning with R” typically include:

  1. Introduction to Machine Learning: Basic concepts and principles of machine learning, including supervised and unsupervised learning, model evaluation, and feature selection.
  2. R Programming: Essential R programming skills needed for machine learning, such as data manipulation, visualization, and working with packages like tidyverse and caret.
  3. Supervised Learning Algorithms: Implementation of popular supervised learning algorithms such as linear regression, logistic regression, decision trees, random forests, support vector machines (SVM), and k-nearest neighbors (kNN).
  4. Unsupervised Learning Algorithms: Techniques for unsupervised learning tasks, including clustering (k-means, hierarchical clustering) and dimensionality reduction (principal component analysis, t-SNE).
  5. Model Evaluation and Validation: Methods for evaluating and validating machine learning models, such as cross-validation, ROC curves, and precision-recall curves.
  6. Feature Selection and Engineering: Techniques for selecting relevant features and engineering new features from existing data.
  7. Advanced Topics: Depending on the book, it may cover advanced topics like ensemble methods, deep learning with R (using packages like keras or tensorflow), and natural language processing (NLP) applications.
  8. Case Studies and Projects: Practical examples and case studies to demonstrate how to apply machine learning techniques to real-world datasets.

Overall, “Introduction to Machine Learning with R” aims to provide readers with a solid foundation in both machine learning concepts and practical skills using R. It’s suitable for beginners who are new to machine learning as well as more experienced practitioners looking to expand their knowledge of applying machine learning techniques using R.
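
As a small taste of the workflow such material walks through, here is a minimal sketch of a caret-based train/test pipeline. The iris dataset, the 70/30 split, and the k-nearest-neighbours model are illustrative choices, not taken from the course itself.

```r
# A minimal caret workflow: split the data, train a model with
# cross-validation, then evaluate on the held-out rows.
library(caret)

set.seed(123)
idx       <- createDataPartition(iris$Species, p = 0.7, list = FALSE)
train_set <- iris[idx, ]
test_set  <- iris[-idx, ]

# k-nearest neighbours tuned with 5-fold cross-validation
fit <- train(Species ~ ., data = train_set, method = "knn",
             trControl = trainControl(method = "cv", number = 5))

# Out-of-sample evaluation on the held-out rows
pred <- predict(fit, newdata = test_set)
confusionMatrix(pred, test_set$Species)
```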

Machine Learning with R Cognitive Class Certification Answers

Question 1: Machine Learning was developed shortly after (within the same century as) statistical modelling, therefore adopting many of its practices.

  • True
  • False

Question 2: Supervised learning deals with unlabeled data, while unsupervised learning deals with labelled data.

  • True
  • False

Question 3: Machine Learning is applied in current technologies, such as:

  • Trend Prediction (ex. House Price Trends)
  • Gesture Recognition (ex. Xbox Kinect)
  • Facial Recognition (ex. Snapchat)
  • A and B, but not C
  • All of the above

Question 1: In K-Nearest Neighbors, which of the following is true:

  • A very high value of K (ex. K = 100) produces a model that is better than a very low value of K (ex. K = 1)
  • A very high value of K (ex. K = 100) produces an overly generalised model, while a very low value of k (ex. k = 1) produces a highly complex model.
  • A very low value of K (ex. K = 1) produces an overly generalised model, while a very high value of k (ex. k = 100) produces a highly complex model.
  • All of the Above
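
For intuition on how the value of K changes the model, here is a small sketch assuming the class package and the iris dataset (both illustrative): a very low K produces a highly complex, high-variance boundary, while a very high K over-generalises.

```r
# Effect of K in k-nearest neighbours (class package, iris data).
library(class)

set.seed(42)
idx     <- sample(nrow(iris), 0.7 * nrow(iris))
train_x <- iris[idx, 1:4];  train_y <- iris$Species[idx]
test_x  <- iris[-idx, 1:4]; test_y  <- iris$Species[-idx]

for (k in c(1, 15, 100)) {
  pred <- knn(train_x, test_x, cl = train_y, k = k)
  cat("k =", k, "  test accuracy =", round(mean(pred == test_y), 3), "\n")
}
```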

Question 2: A difficulty that arises from trying to classify out-of-sample data is that the actual classification may not be known, therefore making it hard to produce an accurate result.

  • True
  • False

Question 3: When building a decision tree, we want to split the nodes in a way that decreases entropy and increases information gain.

  • True
  • False
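
As a reference for what decreasing entropy and increasing information gain mean numerically, here is a minimal sketch written from the standard definitions (the entropy and info_gain helpers and the iris example are illustrative, not course code):

```r
# Entropy of a label vector, from the standard definition.
entropy <- function(labels) {
  p <- table(labels) / length(labels)
  p <- p[p > 0]
  -sum(p * log2(p))
}

# Information gain of a binary split of the observations.
info_gain <- function(labels, split) {
  # split: logical vector assigning each observation to a child node
  n <- length(labels)
  children <- (sum(split) / n) * entropy(labels[split]) +
              (sum(!split) / n) * entropy(labels[!split])
  entropy(labels) - children   # a good split lowers entropy, so gain is high
}

# Illustrative split: Petal.Length < 2.5 isolates the setosa class
info_gain(iris$Species, iris$Petal.Length < 2.5)
```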

Question 1: Which of the following is generally true about the evaluation approaches Train and Test on the Same Dataset and Train/Test Split?

  • Train and Test on the Same Dataset has a high training accuracy and high out-of-sample accuracy, while Train/Test Split has a low training accuracy and low out-of-sample accuracy.
  • Train and Test on the Same Dataset has a low training accuracy and high out-of-sample accuracy, while Train/Test Split has a high training accuracy and low out-of-sample accuracy.
  • Train and Test on the Same Dataset has a high training accuracy and low out-of-sample accuracy, while Train/Test Split has a low training accuracy and high out-of-sample accuracy.
  • Train and Test on the Same Dataset has a low training accuracy and low out-of-sample accuracy, while Train/Test Split has a high training accuracy and high out-of-sample accuracy.
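
A small sketch of the contrast this question draws, assuming the rpart package and the iris dataset as illustrative choices: accuracy measured on the training data tends to look optimistic, while accuracy on a held-out split estimates out-of-sample performance.

```r
# Training accuracy vs out-of-sample accuracy for a decision tree.
library(rpart)

set.seed(1)
idx       <- sample(nrow(iris), 0.7 * nrow(iris))
train_set <- iris[idx, ]
test_set  <- iris[-idx, ]

fit <- rpart(Species ~ ., data = train_set)

# Train and test on the same dataset: usually optimistic
mean(predict(fit, train_set, type = "class") == train_set$Species)

# Train/test split: estimates out-of-sample accuracy
mean(predict(fit, test_set, type = "class") == test_set$Species)
```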

Question 2: Which of the following is true about bias and variance?

  • Having a high bias underfits the data and produces a model that is overly complex, while having high variance overfits the data and produces a model that is overly generalized.
  • Having a high bias underfits the data and produces a model that is overly generalized, while having high variance overfits the data and produces a model that is overly complex.
  • Having a high bias overfits the data and produces a model that is overly complex, while having high variance underfits the data and produces a model that is overly generalized.
  • Having a high bias overfits the data and produces a model that is overly generalized, while having high variance underfits the data and produces a model that is overly complex.

Question 3: Root Mean Squared Error is the most popular evaluation metric out of the three discussed, because it produces the same units as the response vector, making it easy to relate information.

  • True
  • False
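
For reference, a minimal sketch of the point about units, using an illustrative lm fit on the built-in mtcars data:

```r
# MSE is in squared units of the response; RMSE is back in the original
# units (miles per gallon here), which is easier to interpret.
fit  <- lm(mpg ~ wt + hp, data = mtcars)
pred <- predict(fit, mtcars)

mse  <- mean((mtcars$mpg - pred)^2)
rmse <- sqrt(mse)
c(MSE = mse, RMSE = rmse)
```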

Question 1: What are some disadvantages that K-means clustering presents?

  • Updating can occur even though there is a possibility of a centroid not having data points in its group
  • K-means clustering is generally slower, compared to many other clustering algorithms
  • There is high bias in the models, due to where the centroids are initiated
  • None of the above
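
As an illustration of the initialisation issue, here is a sketch using base R's kmeans on the scaled iris measurements (an illustrative dataset); the nstart argument is the usual remedy for a poor random start.

```r
# k-means depends on where the centroids are initialised; nstart
# re-runs the algorithm from several random starts and keeps the best.
set.seed(7)
x <- scale(iris[, 1:4])

fit_one  <- kmeans(x, centers = 3, nstart = 1)   # single random start
fit_many <- kmeans(x, centers = 3, nstart = 25)  # best of 25 starts

c(single_start = fit_one$tot.withinss, many_starts = fit_many$tot.withinss)
```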

Question 2: Decision Trees tend to have high bias and low variance, which Random Forests fix.

  • True
  • False

Question 3: A Dendrogram can only be read for Agglomerative Hierarchical Clustering, not Divisive Hierarchical Clustering.

  • True
  • False
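
A minimal sketch of agglomerative hierarchical clustering and its dendrogram, using base R's hclust on the iris measurements (the dataset and the average-linkage choice are illustrative):

```r
# Agglomerative hierarchical clustering with base R.
d   <- dist(scale(iris[, 1:4]))        # pairwise distances
fit <- hclust(d, method = "average")   # "single", "complete", "centroid" also exist

plot(fit, labels = FALSE)              # the dendrogram
clusters <- cutree(fit, k = 3)         # cut the tree into 3 clusters
table(clusters, iris$Species)
```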

Question 1: Filters produce a feature set that does not contain assumptions based on the predictive model, making it a useful tool to expose relationships between features.

  • True
  • False

Question 2: Principal Component Analysis retains all information during the projection of higher-order features onto lower orders.

  • True
  • False
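
For context, a sketch with base R's prcomp showing how much variance each principal component captures; projecting onto a subset of components keeps only part of the total variance (the iris data is an illustrative choice):

```r
# Each principal component captures a share of the total variance;
# a lower-dimensional projection keeps only part of it.
pca <- prcomp(iris[, 1:4], scale. = TRUE)

prop_var <- pca$sdev^2 / sum(pca$sdev^2)
cumsum(prop_var)          # cumulative variance retained per component

reduced <- pca$x[, 1:2]   # 2-dimensional projection of the 4 features
```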

Question 3: Which of the following is not a challenge to a recommendation system that uses collaborative filtering?

  • Diversity Sheep
  • Shilling Attacks
  • Scalability
  • Synonyms

Question 1: Randomness is important in Random Forests because it allows us to have distinct, different trees that are based off of different data.

  • True
  • False
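
For context on where that randomness enters, here is a sketch assuming the randomForest package: each tree is grown on a bootstrap sample of the rows and considers a random subset of features at each split.

```r
# A random forest on the iris data (illustrative settings).
library(randomForest)

set.seed(3)
fit <- randomForest(Species ~ ., data = iris,
                    ntree = 200,   # number of bootstrapped trees
                    mtry  = 2)     # features tried at each split
fit                                # includes the out-of-bag error estimate
```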

Question 2: When building a decision tree, we want to split the nodes in a way that increases entropy and decreases information gain.

  • True
  • False

Question 3: Which of the following is true?

  • A high value of K in KNN creates a model with low bias and high variance
  • An observation must contain values for all features
  • A categorical value cannot be numeric
  • None of the above

Question 4: In terms of Bias and Variance, Variance is the inconsistency of a model due to small changes in the dataset.

  • True
  • False

Question 5: Which is the definition of entropy?

  • The purity of each node in a random forest.
  • Information collected that can increase the level of certainty in a particular prediction.
  • The information that is used to randomly select a subset of data.
  • The amount of information disorder in the data.

Question 6: Which of the following is true about hierarchical linkages?

  • Average linkage is the average distance of each point in one cluster to every point in another cluster
  • Complete linkage is the shortest distance between a point in two clusters
  • Centroid linkage is the distance between two randomly generated centroids in two clusters
  • Single linkage is the distance between any points in two clusters

Question 7: In terms of Bias and Variance, Variance is the inconsistency of a model due to small changes in the dataset.

  • True
  • False

Question 8: Which is true about bootstrapping?

  • All data points must be used when bootstrapping is applied
  • The data points are randomly selected with replacement
  • The data points are randomly selected without replacement
  • It is the same as bagging
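
A minimal base R sketch of bootstrapping as sampling with replacement (the iris data is just an illustrative source of rows):

```r
# Bootstrapping: draw as many rows as the original data, with replacement.
set.seed(10)
n    <- nrow(iris)
idx  <- sample(n, size = n, replace = TRUE)
boot <- iris[idx, ]

sum(duplicated(idx))        # rows drawn more than once
n - length(unique(idx))     # rows never drawn ("out-of-bag")
```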

Question 9: Machine Learning is still in early development and does not have much of an impact on the current society.

  • True
  • False

Question 10: In comparison to supervised learning, unsupervised learning has:

  • Less tests
  • More models
  • A better controlled environment
  • More tests, but less models

Question 11: Outliers are points that are classified by Density-Based Clustering that do not belong to any cluster.

  • True
  • False

Question 12: Which of the following is false about Linear Regression?

  • It does not require tuning parameters
  • It is highly interpretable
  • It is fast
  • It has a low variability on predictive accuracy
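
For context on those properties, here is a minimal sketch with base R's lm on the built-in cars data: fitting is fast, there are no tuning parameters to set, and the coefficients read directly as interpretable effects.

```r
# Simple linear regression: no tuning parameters, fast, interpretable.
fit <- lm(dist ~ speed, data = cars)
coef(fit)                 # intercept and slope: extra stopping distance per mph
summary(fit)$r.squared    # share of variance explained
```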

Question 13: Machine Learning uses algorithms that can learn from data without relying on standard programming practices.

  • True
  • False

Question 14: Which of the following are types of supervised learning?

  • Clustering
  • Regression
  • Classification
  • Both A and B

Question 15: A Bottom-Up version of hierarchical clustering is known as Divisive clustering. It is a more popular method than the agglomerative method.

  • True
  • False

Question 16: Which is NOT a specific outcome of how Dimensionality Reduction improves performance?

  • Highlights the main linear technique called Principal Component Analysis.
  • Creates step-wise regression.
  • Reduces number of features to be considered.
  • Highlights relevant variables only and omits irrelevant ones.

Question 17: Feature Selection is the process of selecting the variables that will be projected from a high-order dimension to a lower one.

  • True
  • False

Question 18: Hierarchical Clustering is one of the three main algorithms for clustering along with K-Means and Density Based Clustering.

  • True
  • False

Question 19: Which one is NOT a feature of Dimensionality Reduction?

  • It can be divided into two subcategories called Feature Selection and Feature Extraction
  • Removal of an “outsider” from the least cohesive cluster.
  • Feature Selection includes Wrappers, Filters, and Embedded.
  • Feature Extraction includes Principal Component Analysis.
  • It reduces the number of variables/features in review.

Question 20: Low bias tends to create overly generalized models, which can cause a loss of relevant relations between the features and target output. When a model has low bias, we say that it “under fits” the data.

  • True
  • False
