Unit 3
1. Artificial neural networks are loosely inspired by: a) The internal structure of the solar
system. b) The interconnected web of neurons in the human brain. c) The
branching patterns of trees. d) The design of electrical circuits.
2. What is the fundamental unit of computation in an artificial neural network? a)
Transistors b) Pixels (in image processing) c) Artificial neurons (inspired by
biological neurons) d) Gigabytes of memory
3. Artificial neurons process information using: a) Complex mathematical equations
beyond human understanding. b) Weighted sums and activation functions. c)
Keyword matching and pattern recognition. d) Storing and retrieving data from a
database.
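The correct answer to question 3 (b) can be sketched in a few lines of Python. This is a minimal illustration, not any library's API; the function name, weights, and inputs are all made up for the example:

```python
# A single artificial neuron: weighted sum of inputs plus a bias,
# passed through an activation function (ReLU here).
def neuron_output(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias  # weighted sum
    return max(0.0, z)  # ReLU activation introduces non-linearity

print(neuron_output([1.0, 2.0], [0.5, -0.25], 0.1))  # prints 0.1
```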
4. Neural networks learn through a process called: a) Hardcoding specific rules and
instructions. b) Adjusting weights based on training data and error correction. c)
Manually fine-tuning millions of parameters. d) Downloading pre-trained models
from the internet.
5. What is the role of an activation function in an artificial neuron? a) To generate
random numbers for weight initialization. b) To introduce non-linearity into the
network's outputs. c) To store the weights associated with each input connection. d)
To determine the number of hidden layers in the network.
6. A common activation function used in neural networks is: a) Linear function (y = mx
+ b) b) Sigmoid function (squashes values between 0 and 1) c) Square root function
d) Absolute value function
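The sigmoid function from question 6 (b) is easy to verify numerically; a short sketch with illustrative sample inputs:

```python
import math

def sigmoid(z):
    # Squashes any real number into the open interval (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

for z in (-10.0, 0.0, 10.0):
    print(round(sigmoid(z), 4))  # near 0.0, exactly 0.5, near 1.0
```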
7. In a neural network architecture, what is a layer? a) A single neuron operating
independently. b) A group of interconnected neurons that process information
together. c) The overall network structure without any internal details. d) The
software program used to train the network.
8. A typical neural network architecture consists of: a) An input layer only, responsible
for receiving data. b) An output layer only, responsible for generating predictions. c)
An input layer, hidden layers, and an output layer. d) There is no standard
architecture; it varies greatly depending on the task.
9. What is the purpose of hidden layers in a neural network? a) To pre-process the input
data before feeding it to the network. b) To learn complex patterns and features
from the data. c) To visualize the results of the network's predictions. d) To store
intermediate calculations during the forward pass.
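The layered architecture from questions 7–9 can be illustrated with a tiny forward pass. All weights and sizes below are arbitrary example values, not a real trained network:

```python
def layer(inputs, weights, biases):
    # Each row of `weights` holds one neuron's incoming connection
    # weights; ReLU is used as the activation.
    return [max(0.0, sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

# Input (2 values) -> hidden layer (3 neurons) -> output layer (1 neuron)
x = [1.0, 0.5]
hidden = layer(x, [[0.2, -0.4], [0.7, 0.1], [-0.3, 0.5]], [0.0, 0.1, 0.0])
output = layer(hidden, [[0.5, 0.2, 0.8]], [0.05])
print(output)
```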
10. Training a neural network involves: a) Manually defining the optimal weights for
each connection. b) Feeding the network with training data and adjusting weights
based on errors. c) Uploading a large dataset and letting the network learn on its
own. d) Choosing the most complex architecture with the most neurons.
11. Overfitting is a potential problem in neural network training. What does it mean? a)
The
network memorizes the training data too well and fails to generalize to unseen data. b) The
network does not learn enough from the training data and performs poorly. c) The network
takes too long to train due to a very complex architecture. d) The network runs out of
memory during the training process.
12. Techniques to prevent overfitting in neural networks include: a) Increasing the
learning rate during training. b) Using dropout (randomly dropping neurons
during training). c) Gathering more and more training data without any limitations.
d) Reducing the number of hidden layers in the network architecture.
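Dropout, the correct answer to question 12 (b), is simple to sketch. This is "inverted" dropout, where surviving activations are rescaled during training; the function name and values are illustrative:

```python
import random

def dropout(activations, p=0.5, training=True):
    # During training, zero each activation with probability p and scale
    # survivors by 1/(1-p); at inference, pass everything through unchanged.
    if not training:
        return list(activations)
    return [0.0 if random.random() < p else a / (1.0 - p)
            for a in activations]

random.seed(0)
print(dropout([0.2, 0.9, 0.4, 0.7]))                  # some zeroed, rest doubled
print(dropout([0.2, 0.9, 0.4, 0.7], training=False))  # unchanged at inference
```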
13. What is a common application of neural networks in computer vision? a) Text
summarization and sentiment analysis b) Image recognition and object detection c)
Speech-to-text conversion and machine translation d) Time series forecasting and
anomaly detection
14. Neural networks can be used for natural language processing tasks like: a) Spam
filtering based on keyword matching only. b) Sentiment analysis to understand the
emotional tone of text. c) Simple text search and retrieval based on exact keyword
matches. d) Optical character recognition (OCR) without any language understanding.
15. Recurrent Neural Networks (RNNs) are a type of neural network architecture
designed for: a) Processing large images with high resolution. b) Analyzing
sequential data like text or time series. c) Performing rule-based logical
inference. d) Storing and retrieving records from a database.
16. Bidirectional RNNs are a variation that can be beneficial for tasks requiring: a)
Processing audio signals, where order doesn't necessarily matter. b) Analyzing text
where understanding context from both directions (past and future) is helpful
(e.g., sentiment analysis of reviews). c) Reducing the number of training epochs
required for the network to converge. d) Extracting features from large images
efficiently.
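The recurrence that makes RNNs suited to sequential data (questions 15 and 16) can be sketched with a single scalar cell. The weights are arbitrary example values; a bidirectional RNN would simply run a second such cell over the sequence in reverse:

```python
import math

def rnn_step(x, h_prev, w_x=0.5, w_h=0.8):
    # The new hidden state mixes the current input with the previous
    # hidden state, so earlier elements influence later ones.
    return math.tanh(w_x * x + w_h * h_prev)

h = 0.0
for x in [1.0, 0.5, -0.5]:  # a short "sequence"
    h = rnn_step(x, h)
print(round(h, 4))  # final state summarizes the whole sequence
```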
17. Choosing the right RNN architecture (standard RNN, LSTM, GRU) depends on
factors like: a) The size of the dataset available for training. b) The computational
resources available for training. c) The length of the sequences being processed. d)
All of the above factors can influence the choice of RNN architecture.
18. RNNs are a powerful tool, but they also have limitations. What is NOT a typical
application of RNNs? a) Text summarization (extracting key points from a
document) b) Music generation (creating new musical pieces based on a style) c)
Image segmentation (separating objects from the background in an image;
better suited for CNNs) d) Anomaly detection in time series data (identifying
unusual patterns)
19. Recurrent Neural Networks are an active area of research. What is one of the ongoing
areas of exploration? a) Developing RNNs that can process data with very high
dimensionality. b) Finding ways to train RNNs with less data to overcome data
scarcity issues. c) Simplifying RNN architectures to make them more interpretable.
d) All of the above are areas of ongoing research in RNNs.
20. Recurrent Neural Networks are a building block for more complex architectures. An
example is: a) Support Vector Machines (SVMs) for classification tasks. b) Decision
Trees for making rule-based predictions. c) Encoder-decoder models commonly
used for machine translation or text summarization. d) K-Nearest Neighbors
(KNN) for finding similar data points.
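The encoder-decoder idea from question 20 (c) can be sketched by reusing a simple recurrent cell twice. Everything here is illustrative (scalar states, made-up weights, a zero placeholder input); real translation models use vector states and learned token embeddings:

```python
import math

def rnn_step(x, h, w_x=0.5, w_h=0.8):
    return math.tanh(w_x * x + w_h * h)

# Encoder: fold the whole input sequence into one final hidden state
h = 0.0
for x in [0.2, 0.7, -0.1]:
    h = rnn_step(x, h)

# Decoder: unroll from that state to emit an output sequence
outputs = []
for _ in range(3):
    h = rnn_step(0.0, h)  # placeholder input at each decoding step
    outputs.append(round(h, 3))
print(outputs)
```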