Introduction to Deep Learning with TensorFlow
Deep Learning with TensorFlow is an exciting field that combines neural networks with the powerful TensorFlow framework, allowing you to build and train advanced neural network models for various tasks like image classification, natural language processing, and more. Here’s a brief introduction:
What is TensorFlow?
TensorFlow is an open-source machine learning library developed by Google. It allows you to build and train machine learning models, particularly neural networks, efficiently. TensorFlow provides a flexible ecosystem of tools, libraries, and community resources that help researchers and developers build and deploy ML models easily.
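For instance, here is a minimal sketch of what working with tensors looks like, assuming TensorFlow 2.x; the values and shapes are purely illustrative:

```python
# A minimal sketch of basic tensor operations, assuming TensorFlow 2.x.
import tensorflow as tf

# Tensors are multi-dimensional arrays; here we multiply two small matrices.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])   # shape (2, 2)
b = tf.constant([[5.0], [6.0]])             # shape (2, 1)
c = tf.matmul(a, b)                          # matrix product, shape (2, 1)
print(c.numpy())                             # [[17.] [39.]]
```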
What is Deep Learning?
Deep Learning is a subset of machine learning where artificial neural networks, inspired by the human brain’s structure and function, learn from large amounts of data. These networks have multiple layers (hence “deep”) that allow them to learn hierarchical representations of data.
Key Concepts in Deep Learning with TensorFlow:
- TensorFlow Operations (Ops) and Graphs: TensorFlow uses a dataflow graph to represent your computation in terms of the dependencies between individual operations (Ops). These Ops create, manipulate, and destroy tensors (multi-dimensional arrays) during execution (see the sketch after this list).
- Neural Networks: Deep Learning in TensorFlow primarily involves constructing neural network architectures using high-level APIs like Keras (which is integrated into TensorFlow). Neural networks are composed of layers of neurons that process and transform data.
- Training and Optimization: TensorFlow provides optimization algorithms and techniques to train neural networks on large datasets. This includes defining loss functions, selecting appropriate optimizers (e.g., SGD, Adam), and monitoring training performance.
- Deployment: After training, TensorFlow allows you to deploy your models in various environments, from mobile devices to cloud platforms, using TensorFlow Serving or TensorFlow Lite for mobile and embedded devices.
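The short sketch below ties the first three concepts together: a Python function traced into a dataflow graph with tf.function, and a small Keras layer stack compiled with an optimizer and a loss. The shapes and layer sizes are illustrative assumptions, not values prescribed by the course.

```python
# A minimal sketch, assuming TensorFlow 2.x: tracing Python into a dataflow graph
# and defining a small Keras model; shapes and layer sizes are illustrative.
import tensorflow as tf

@tf.function  # traces this Python function into a graph of TensorFlow Ops
def affine_relu(x, w, b):
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.random.normal([4, 3])                 # a batch of 4 examples with 3 features
w = tf.Variable(tf.random.normal([3, 2]))    # trainable weights
b = tf.Variable(tf.zeros([2]))               # trainable bias
print(affine_relu(x, w, b).shape)            # (4, 2)

# The same kind of computation expressed with the high-level Keras API,
# compiled with an optimizer and a loss so it is ready for training.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.Adam(), loss="mse")
```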
Getting Started with TensorFlow:
To start with Deep Learning and TensorFlow, you typically follow these steps:
- Installation: Install TensorFlow and its dependencies (can be done via pip).
- Hello World: Begin with simple examples like training a neural network on the MNIST dataset (handwritten digit classification); a minimal sketch follows this list.
- Exploration: Understand different layers, activation functions, and how to manipulate tensors.
- Advanced Topics: Explore more advanced topics like transfer learning, custom models, and deploying models.
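As a concrete starting point for steps 1 and 2, here is a minimal "hello world" sketch: install TensorFlow via pip and train a small classifier on MNIST with Keras. The layer sizes, dropout rate, and number of epochs are illustrative assumptions.

```python
# A minimal "hello world" sketch: a small MNIST classifier in Keras.
# Assumes TensorFlow 2.x, installed for example with: pip install tensorflow
import tensorflow as tf

# Load the handwritten-digit data and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small fully connected network; layer sizes and dropout rate are illustrative.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```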
Conclusion:
Deep Learning with TensorFlow opens up a world of possibilities in artificial intelligence and machine learning. Whether you’re a beginner or an experienced practitioner, TensorFlow offers the tools and resources to build and deploy sophisticated models effectively. As you dive deeper, you’ll find yourself exploring cutting-edge research and applications in this rapidly evolving field.
Deep Learning with TensorFlow Cognitive Class Certificate Answers
Module 1 – Intro to TensorFlow Quiz Answers
Question 1: Which statement is FALSE about TensorFlow?
- TensorFlow is well suited for handling Deep Learning Problems
- TensorFlow library is not proper for handling Machine Learning Problems
- TensorFlow has a C/C++ backend as well as Python modules
- TensorFlow is an open-source library
- All of the above
Question 2: What is a Data Flow Graph?
- A representation of data dependencies between operations
- A cartesian (x,y) chart
- A graphics user interface
- A flowchart describing an algorithm
- None of the above
Question 3: What are the main reasons for the increasing popularity of Deep Learning?
- The advances in machine learning algorithms and research.
- The availability of massive amounts of data for training computer systems.
- The dramatic increases in computer processing capabilities.
- All of the above
Question 4: Which statement is TRUE about TensorFlow?
- Runs on CPU and GPU
- Runs on CPU only
- Runs on GPU only
Question 5: Why is TensorFlow the proper library for Deep Learning?
- It will benefit from TensorFlow’s auto-differentiation and suite of first-rate optimizers
- It provides a collection of trainable mathematical functions that are useful for neural networks.
- It has extensive built-in support for deep learning
- All of the above
Module 2 – Convolutional Networks Quiz Answers
Question 1: What can be achieved with “convolution” operations on Images?
- Noise Filtering
- Image Smoothing
- Image Blurring
- Edge Detection
- All of the above
Question 2: For convolution, it is better to store images in a TensorFlow Graph as:
- Placeholder
- CSV file
- Numpy array
- Variable
- None of the above
Question 3: Which of the following statements is TRUE about Convolutional Neural Networks (CNNs)?
- CNN can be applied ONLY on Image and Text data
- CNN can be applied on ANY 2D and 3D array of data
- CNN can be applied ONLY on Text and Speech data
- CNN can be applied ONLY on Image data
- All of the above
Question 4: Which of the following Layers can be part of Convolutional Neural Networks (CNNs)?
- Dropout
- Softmax
- Maxpooling
- Relu
- All of the above
Question 5: The objective of the Activation Function is to:
- Increase the Size of the Network
- Handle Non-Linearity in the Network
- Handle Linearity in the Network
- Reduce the Size of the Network
- None of the above
Module 3 – Recurrent Neural Networks Quiz Answers
Question 1: What is a Recurrent Neural Network?
- A Neural Network that can recur to itself and is suitable for handling sequential data
- An infinite layered Neural Network which is proper for handling structured data
- A special kind of Neural Network to predict weather
- A markovian model to handle temporal data
Question 2: What is NOT TRUE about RNNs?
- RNNs are VERY suitable for sequential data.
- RNNs need to keep track of states, which is computationally expensive.
- RNNs are very robust against vanishing gradient problem.
Question 3: What application(s) is(are) suitable for RNNs?
- Estimating temperatures from weather data
- Natural Language Processing
- Video context retriever
- Speech Recognition
- All of the above
Question 4: Why are RNNs susceptible to issues with their gradients?
- Numerical computation of gradients can drive into instabilities
- Gradients can quickly drop and stabilize at near zero
- Propagation of errors due to the recurrent characteristic
- Gradients can grow exponentially
- All of the above
Question 5: What is TRUE about LSTM gates?
- The Read Gate in LSTM determines how much old information to forget.
- The Write Gate in LSTM reads data from the memory cell and sends that data back to the network.
- The Forget Gate in LSTM maintains or deletes data from the information cell.
- The Read Gate in LSTM is responsible for writing data into the memory cell.
Module 4 – Restricted Boltzmann Machine Quiz Answers
Question 1: What is the main application of RBM?
- Data dimensionality reduction
- Feature extraction
- Collaborative filtering
- All of the above
Question 2: How many layers does an RBM (Restricted Boltzmann Machine) have?
- Infinite
- 4
- 2
- 3
- All of the above
Question 3: How does an RBM compare to PCA?
- RBM cannot reduce dimensionality
- PCA cannot generate original data
- PCA is another type of Neural Network
- Both can regenerate input data
- All of the above
Question 4: Which statement is TRUE about RBM?
- It is a Boltzmann machine, but with no connections between nodes in the same layer
- Each node in the first layer has a bias
- The RBM reconstructs data by making several forward and backward passes between the visible and hidden layers
- At the hidden layer’s nodes, X is multiplied by a W (weight matrix) and added to h_bias
- All of the above
Question 5: Which statement is TRUE about an RBM?
- The objective function is to maximize the likelihood of our data being drawn from the reconstructed data distribution
- The Negative phase of an RBM decreases the probability of samples generated by the model
- Contrastive Divergence (CD) is used to approximate the negative phase of an RBM
- The Positive phase of an RBM increases the probability of training data
- All of the above
Module 5 – Autoencoders Quiz Answers
Question 1: What is the difference between Autoencoders and RBMs?
- Autoencoders are used for supervised learning, but RBMs are used for unsupervised learning.
- Autoencoders use a deterministic approach, but RBMs use a stochastic approach.
- Autoencoders have less layers than RBMs.
- All of the above
Question 2: Which of the following problems cannot be solved by Autoencoders:
- Dimensionality Reduction
- Time series prediction
- Image Reconstruction
- Emotion Detection
- All of the above
Question 3: What is TRUE about Autoencoders:
- Help to Reduce the Curse of Dimensionality
- Used to Learn the Most important Features in Data
- Used for Unsupervised Learning
- All of the Above
Question 4: What are Autoencoders:
- A Neural Network that is designed to replace Non-Linear Regression
- A Neural Network that is trained to attempt to copy its input to its output
- A Neural Network that learns all the weights by using labelled data
- A Neural Network where different layer inputs are controlled by gates
- All of the Above
Question 5: What is a Deep Autoencoder:
- An Autoencoder with Multiple Hidden Layers
- An Autoencoder with multiple input and output layers
- An Autoencoder stacked with Multiple Visible Layers
- An Autoencoder stacked with over 1000 layers
- None of the Above
Deep Learning with TensorFlow Final Exam Answers
Question 1: Why use a Data Flow graph to solve Mathematical expressions?
- To create a pipeline of operations and its corresponding values to be parsed
- To represent the expression in a human-readable form
- To show the expression in a GUI
- Because it is the only way to solve mathematical expressions in a digital computer
- None of the above
Question 2: What is an Activation Function?
- A function that triggers a neuron and generates the outputs
- A function that models a phenomenon or process
- A function to normalize the output
- All of the above
- None of the above
Question 3: Why is TensorFlow considered fast and suitable for Deep Learning?
- It is suitable to operate over large multi-dimensional tensors
- It runs on CPU
- Its core is based on C++
- It runs on GPU
- All of the above
Question 4: Can TensorFlow replace Numpy?
- None of the above
- No, whatsoever
- With only Numpy we can’t solve Deep Learning problems, therefore, TensorFlow is required
- Yes, completely
- Partially for some operations on tensors, such as minimization
Question 5: What is FALSE about Convolutional Neural Networks (CNNs)?
- They fully connect to all neurons in all of the layers
- They connect only to neurons in the local region (kernel size) of input images
- They build feature maps hierarchically in every layer
- They are inspired by human visual systems
- None of the above
Question 6: What is the meaning of “Strides” in Maxpooling?
- The number of pixels the kernel should add
- The number of pixels the kernel should move
- The size of the kernel
- The number of pixels the kernel should remove
- None of the above
Question 7: What is TRUE about “Padding” in Convolution?
- Size of the input image is reduced for the “VALID” padding
- Size of the input image is reduced for the “SAME” padding
- Size of the input image is increased for the “SAME” padding
- Size of the input image is increased for the “VALID” padding
- All of the above
Question 8: Which of the following best describes the Relu Function?
- (-1,1)
- (0,5)
- (0, Max)
- (-inf,inf)
- (0,1)
Question 9: Which are types of Recurrent Neural Networks? (Select all that apply)
- LSTM
- Hopfield Network
- Recursive Neural Network
- Deep Belief Network
- Elman Networks and Jordan Networks
Question 10: Which is TRUE about RNNs?
- RNNs can predict the future
- RNNs are VERY suitable for sequential data
- RNNs are NOT suitable for sequential data
- RNNs are ONLY suitable for sequential data
- All of the above
Question 11: What is the problem with RNNs and gradients?
- Numerical computation of gradients can drive into instabilities
- Gradients can quickly drop and stabilize at near zero
- Propagation of errors due to the recurrent characteristic
- Gradients can grow exponentially
- All of the above
Question 12: What type of RNN would you use in an NLP project to predict the next word in a phrase? (only one is correct)
- Bi-directional RNN
- Neural history compressor
- Long Short-Term Memory
- Echo state network
- None of the above
Question 13: Which one does NOT happen in the “forward pass” in RBM?
- Making a deterministic decision about returning values into the network.
- Multiplying inputs by weights, and adding an overall bias, in each hidden unit.
- Applying an activation function on the results in hidden units.
- Feeding the network with the input images converted to binary values.
Question 14: Which one is NOT an example of a CNN application?
- Creating art images using pre-trained models
- Object Detection in images
- Coloring black and white images
- Predicting next word in a sentence
Question 15: Select all possible uses of Autoencoders and RBMs (select all that apply):
- Clustering
- Pattern Recognition
- Dimensionality Reduction
- Predict data in time series
Question 16: Which technique is proper for solving the Collaborative Filtering problem?
- DBN
- RBM
- CNN
- RNN
Question 17: Which statement is TRUE for training Autoencoders?
- The Size of Last Layer must be at least 10% of the Input Layer Dimension
- The size of input and Last Layers must be of the Same Dimensions
- The Last Layer must be Double the size of Input Layer Dimension
- The Last Layer must be half the size of Input Layer Dimension
- None of the Above
Question 18: To Design a Deep Autoencoder Architecture, what factors are to be considered?
- The size of the centre-most layer has to be close to the number of Important Features to be extracted
- The centre-most layer should have the smallest size compared to all other layers
- The Network should have an odd number of layers
- All the layers must be symmetrical with respect to the centre-most layer
- All of the Above
Question 19: Which is TRUE about Back-Propagation?
- It can be used to train LSTMs
- It can be used to train CNNs
- It can be used to train RBMs
- It can be used to train Autoencoders
- All of the Above
Question 20: How can Autoencoders be improved to handle highly non-linear data?
- By using Genetic Algorithms
- By adding more Hidden Layers to the Network
- By using Higher initial Weight Values
- By using Lower initial Weight Values
- All of the Above