Unit 2 - MCQ Bank PDF

The document discusses feedback neural networks and competitive learning networks, providing explanations for multiple choice questions on these topics. Key points covered include: - Feedback neural networks are primarily used for pattern storage. Stochastic update can help reduce the effect of false minima, and the mean field approximation is used to speed up Boltzmann learning. - Competitive learning networks combine feedforward and feedback connections between layers; the second layer has self-excitatory feedback weights. Conditions such as non-linear outputs and on-centre off-surround connections allow them to perform pattern clustering and feature mapping. - Basic competitive learning adjusts weight vectors towards the input vector; this update can be represented as w(t + 1) = w(t) + del.w(t).

1. For what purpose are feedback neural networks primarily used?

a) classification
b) feature mapping
c) pattern mapping
d) none of the mentioned
View Answer
Answer: d
Explanation: Feedback neural networks are primarily used for pattern storage.

2. What effect will the presence of false minima have on the probability of error in recall?
a) directly
b) inversely
c) no effect
d) directly or inversely
View Answer
Answer: a
Explanation: Presence of false minima will increase the probability of error in recall.

3. How is the effect of false minima reduced?


a) deterministic update of weights
b) stochastic update of weights
c) deterministic or stochastic update of weights
d) none of the mentioned
View Answer
Answer: b
Explanation: The effect of false minima can be reduced by stochastic update of weights.

4. Is Boltzmann law practical for implementation?


a) yes
b) no
View Answer
Answer: b
Explanation: Boltzmann law is too slow for practical implementation.

5. For practical implementation, what type of approximation is used on Boltzmann law?


a) max field approximation
b) min field approximation
c) hopfield approximation
d) none of the mentioned
View Answer
Answer: d
Explanation: For practical implementation, the mean field approximation is used.

6. What happens when we use the mean field approximation with Boltzmann learning?
a) it slows down
b) it gets sped up
c) nothing happens
d) may speed up or slow down
View Answer
Answer: b
Explanation: Boltzmann learning gets sped up using the mean field approximation.
7. Approximately how many times is Boltzmann learning sped up using the mean field
approximation?
a) 5-10
b) 10-30
c) 30-50
d) 50-70
View Answer
Answer: b
Explanation: Boltzmann learning is sped up by a factor of 10-30 using the mean field approximation.

8. Can the effect of false minima be reduced by deterministic updates?


a) yes
b) no
View Answer
Answer: b
Explanation: The effect of false minima can be reduced by stochastic update, not deterministic update.

9. In Boltzmann learning, which algorithm can be used to arrive at equilibrium?


a) hopfield
b) mean field
c) hebb
d) none of the mentioned
View Answer
Answer: d
Explanation: Metropolis algorithm can be used to arrive at equilibrium.

10. Boltzmann learning is a?


a) fast process
b) steady process
c) slow process
d) none of the mentioned
View Answer
Answer: d
Explanation: Boltzmann learning is a slow process.
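The stochastic update and Metropolis questions above can be illustrated with a short sketch. This is a minimal, illustrative implementation (the annealing schedule, stored pattern, and variable names are my own assumptions, not from the question bank): a Hopfield-type network updated with the Metropolis criterion, where occasional uphill moves let the state escape false minima.

```python
import numpy as np

def energy(W, s):
    # Hopfield-type energy: E = -1/2 * s^T W s
    return -0.5 * s @ W @ s

def metropolis_step(W, s, T, rng):
    """Flip one randomly chosen unit; accept the flip with the
    Metropolis criterion, so uphill moves (which help escape
    false minima) are allowed with probability exp(-dE/T)."""
    i = rng.integers(len(s))
    s_new = s.copy()
    s_new[i] = -s_new[i]
    dE = energy(W, s_new) - energy(W, s)
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        return s_new
    return s

rng = np.random.default_rng(0)
# Store one pattern with a Hebbian outer-product rule (illustrative).
p = np.array([1, -1, 1, -1, 1], dtype=float)
W = np.outer(p, p)
np.fill_diagonal(W, 0)

s = rng.choice([-1.0, 1.0], size=5)    # noisy start state
for T in np.linspace(2.0, 0.05, 400):  # slowly lower the temperature
    s = metropolis_step(W, s, T, rng)
# With high probability s settles into the stored pattern or its mirror.
```

At high temperature the state wanders across minima; as T falls, the acceptance of uphill moves becomes rare and the state freezes into a deep minimum.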

1. How are input layer units connected to second layer in competitive learning networks?
a) feedforward manner
b) feedback manner
c) feedforward and feedback
d) feedforward or feedback
View Answer
Answer: a
Explanation: The output of input layer is given to second layer with adaptive feedforward
weights.

2. Which layer has feedback weights in competitive neural networks?


a) input layer
b) second layer
c) both input and second layer
d) none of the mentioned
View Answer
Answer: b
Explanation: Second layer has weights which gives feedback to the layer itself.

3. What is the nature of general feedback given in competitive neural networks?


a) self excitatory
b) self inhibitory
c) self excitatory or self inhibitory
d) none of the mentioned
View Answer
Answer: a
Explanation: The output of each unit in second layer is fed back to itself in self –
excitatory manner.

4. What do competitive learning neural networks consist of?


a) feedforward paths
b) feedback paths
c) either feedforward or feedback
d) combination of feedforward and feedback
View Answer
Answer: d
Explanation: Competitive learning neural networks are a combination of feedforward and
feedback connection layers, resulting in some kind of competition.

5. What conditions are a must for a competitive network to perform pattern clustering?
a) non linear output layers
b) connection to neighbours is excitatory and to the farther units inhibitory
c) on centre off surround connections
d) none of the mentioned fulfils the whole criteria
View Answer
Answer: d
Explanation: If the output functions of units in the feedback layer are made non-linear,
with fixed-weight on-centre off-surround connections, pattern clustering can be
performed.

6. What conditions are a must for a competitive network to perform feature mapping?
a) non linear output layers
b) connection to neighbours is excitatory and to the farther units inhibitory
c) on centre off surround connections
d) none of the mentioned fulfils the whole criteria
View Answer
Answer: d
Explanation: If the conditions in a, b and c are all met, then feature mapping can be performed.

7. If a competitive network can perform feature mapping, what can that network be
called?
a) self excitatory
b) self inhibitory
c) self organization
d) none of the mentioned
View Answer
Answer: c
Explanation: A competitive network that can perform feature mapping is called a self-organization
network.
8. What is an instar?
a) receives inputs from all others
b) gives output to all others
c) may receive or give input or output to others
d) none of the mentioned
View Answer
Answer: a
Explanation: An instar receives inputs from all other input units.

9. How is weight vector adjusted in basic competitive learning?


a) such that it moves towards the input vector
b) such that it moves away from the input vector
c) such that it moves towards the output vector
d) such that it moves away from the output vector
View Answer
Answer: a
Explanation: Weight vector is adjusted such that it moves towards the input vector.

10. The update in weight vector in basic competitive learning can be represented by?
a) w(t + 1) = w(t) + del.w(t)
b) w(t + 1) = w(t)
c) w(t + 1) = w(t) – del.w(t)
d) none of the mentioned
View Answer
Answer: a
Explanation: The update in weight vector in basic competitive learning can be
represented by w(t + 1) = w(t) + del.w(t).
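The update rule in question 10 can be sketched in code. This is a minimal winner-take-all example, assuming a Euclidean-distance winner and the common choice del.w(t) = eta * (x - w(t)) (the learning rate and data are illustrative, not from the questions):

```python
import numpy as np

def competitive_update(W, x, eta=0.1):
    """One step of basic (winner-take-all) competitive learning.
    The winner is the unit whose weight vector is closest to the
    input x; only its weights move towards x:
        w(t + 1) = w(t) + eta * (x - w(t))
    """
    winner = int(np.argmin(np.linalg.norm(W - x, axis=1)))
    W = W.copy()
    W[winner] += eta * (x - W[winner])
    return W, winner

rng = np.random.default_rng(1)
W = rng.random((3, 2))           # 3 competing units, 2-D inputs
x = np.array([0.9, 0.1])
W_new, k = competitive_update(W, x)
# The winning unit's weight vector moves strictly closer to x;
# all other weight vectors are left unchanged.
```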


1. What kind of learning is involved in pattern clustering task?


a) supervised
b) unsupervised
c) learning with critic
d) none of the mentioned
View Answer
Answer: b
Explanation: Since pattern classes are formed from unlabelled data.

2. In pattern clustering, does the physical location of a unit relative to other units have any
significance?
a) yes
b) no
c) depends on type of clustering
d) none of the mentioned
View Answer
Answer: b
Explanation: The physical location of a unit doesn't affect the output.

3. How is a feature mapping network distinct from a competitive learning network?


a) geometrical arrangement
b) significance attached to neighbouring units
c) nonlinear units
d) none of the mentioned
View Answer
Answer: d
Explanation: Both the geometrical arrangement and significance attached to
neighbouring units make it distinct.

4. What is the objective of feature maps?


a) to capture the features in space of input patterns
b) to capture just the input patterns
c) update weights
d) to capture output patterns
View Answer
Answer: a
Explanation: The objective of feature maps is to capture the features in space of input
patterns.

5. How are weights updated in feature maps?


a) updated for winning unit only
b) updated for neighbours of winner only
c) updated for winning unit and its neighbours
d) none of the mentioned
View Answer
Answer: c
Explanation: Weights are updated in feature maps for winning unit and its neighbours.

6. In feature maps, when weights are updated for the winning unit and its neighbours, what
type of learning is it known as?
a) karnaugt learning
b) boltzman learning
c) kohonen’s learning
d) none of the mentioned
View Answer
Answer: c
Explanation: Updating the winning unit and its neighbours is known as Kohonen's learning; a self-organization network uses this rule.

7. In a self-organizing network, how is the input layer connected to the output layer?


a) some are connected
b) all are one to one connected
c) each input unit is connected to each output unit
d) none of the mentioned
View Answer
Answer: c
Explanation: In a self-organizing network, each input unit is connected to each output unit.
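Kohonen's learning from questions 6 and 7 — updating the winning unit and its neighbours — can be sketched as follows. The Gaussian neighbourhood function, 1-D grid layout, and parameter values are illustrative assumptions:

```python
import numpy as np

def kohonen_update(W, x, winner, positions, eta=0.2, sigma=1.0):
    """Kohonen's learning rule for a feature map: the winner AND its
    neighbours move towards the input, with the step scaled by a
    Gaussian neighbourhood function of grid distance."""
    d2 = np.sum((positions - positions[winner]) ** 2, axis=1)
    h = np.exp(-d2 / (2 * sigma ** 2))   # neighbourhood strengths (1 at winner)
    return W + eta * h[:, None] * (x - W)

# A 1-D map of 5 units on a line (grid positions are illustrative).
positions = np.arange(5, dtype=float)[:, None]
rng = np.random.default_rng(3)
W = rng.random((5, 2))
x = np.array([0.5, 0.5])
winner = int(np.argmin(np.linalg.norm(W - x, axis=1)))
W_new = kohonen_update(W, x, winner, positions)
# Every unit moves towards x; the winner takes the largest relative step.
```

Because neighbouring grid units are pulled towards similar inputs, nearby units end up responding to nearby regions of the input space, which is what makes the map a feature map.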

8. What is true regarding the adaline learning algorithm?


a) uses gradient descent to determine the weight vector that leads to minimal error
b) error is defined as MSE between the neuron's net input and its desired output
c) this technique allows incremental learning
d) all of the mentioned
View Answer
Answer: d
Explanation: Incremental learning means refining the weights as more training
samples are added; the rest are basic statements that define adaline learning.
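The adaline (LMS) properties listed in question 8 can be sketched in code: incremental gradient descent on the squared error of the neuron's net input (not a thresholded output). The data, learning rate, and epoch count below are illustrative assumptions:

```python
import numpy as np

def adaline_fit(X, d, eta=0.01, epochs=50):
    """Adaline / LMS rule: incremental gradient descent on the squared
    error between the neuron's net input w.x + b and the desired
    output d. Each sample updates the weights immediately, which is
    what allows incremental learning."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(X, d):        # per-sample (incremental) updates
            err = t - (w @ x + b)
            w += eta * err * x
            b += eta * err
    return w, b

# Learn a simple noiseless linear target d = 2*x1 - x2 (illustrative).
rng = np.random.default_rng(2)
X = rng.random((100, 2))
d = 2 * X[:, 0] - X[:, 1]
w, b = adaline_fit(X, d, eta=0.1, epochs=200)
# w converges towards [2, -1] and b towards 0.
```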
9. What is true for competitive learning?
a) nodes compete for inputs
b) process leads to most efficient neural representation of input space
c) typical for unsupervised learning
d) all of the mentioned
View Answer
Answer: d
Explanation: All of these statements define competitive learning.

10. Use of nonlinear units in the feedback layer of competitive network leads to concept
of?
a) feature mapping
b) pattern storage
c) pattern classification
d) none of the mentioned
View Answer
Answer: d
Explanation: Use of nonlinear units in the feedback layer of a competitive network leads to
the concept of pattern clustering.

1. What is the use of MLFFNN?


a) to realize structure of MLP
b) to solve pattern classification problem
c) to solve pattern mapping problem
d) to realize an approximation to a MLP
View Answer
Answer: d
Explanation: MLFFNN stands for multilayer feedforward neural network; it is used to realize
an approximation to a multilayer perceptron (MLP).

2. What is the advantage of basis functions over multilayer feedforward neural networks?
a) training of basis function is faster than MLFFNN
b) training of basis function is slower than MLFFNN
c) storing in basis function is faster than MLFFNN
d) none of the mentioned
View Answer
Answer: a
Explanation: The main advantage of basis functions is that their training is
faster than MLFFNN.

3. Why is the training of basis functions faster than MLFFNN?


a) because they are developed specifically for pattern approximation
b) because they are developed specifically for pattern classification
c) because they are developed specifically for pattern approximation or classification
d) none of the mentioned
View Answer
Answer: c
Explanation: Training of basis functions is faster than MLFFNN because they are
developed specifically for pattern approximation or classification.

4. Pattern recall takes more time for?


a) MLFNN
b) Basis function
c) Equal for both MLFNN and basis function
d) None of the mentioned
View Answer
Answer: b
Explanation: The first layer of a basis function network involves additional computations during recall.

5. In which type of network is training completely avoided?


a) GRNN
b) PNN
c) GRNN and PNN
d) None of the mentioned
View Answer
Answer: c
Explanation: In GRNN and PNN networks training is completely avoided.

6. What does GRNN do?


a) function approximation task
b) pattern classification task
c) function approximation and pattern classification task
d) none of the mentioned
View Answer
Answer: a
Explanation: GRNN stands for Generalized Regression Neural Network; it performs function approximation.
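A GRNN prediction can be sketched as a kernel-weighted average of stored targets (the Nadaraya-Watson form commonly used to describe GRNN); because the stored samples themselves act as the pattern units, no training phase is needed, which is why questions 5 and 6 go together. The data and sigma below are illustrative assumptions:

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=0.5):
    """GRNN prediction: a Gaussian-kernel-weighted average of the
    stored targets. There is no training phase; every stored sample
    contributes according to its distance from the query x."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return np.sum(w * y_train) / np.sum(w)

# Approximate y = x^2 from a handful of stored samples (illustrative).
X = np.linspace(-1, 1, 21)[:, None]
y = X[:, 0] ** 2
yhat = grnn_predict(X, y, np.array([0.3]), sigma=0.1)
# yhat approximates 0.3**2 = 0.09, up to a small kernel smoothing bias.
```

Recall is slower than for an MLFFNN because every stored sample must be visited for each prediction, which matches question 4 above.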

7. What does PNN do?


a) function approximation task
b) pattern classification task
c) function approximation and pattern classification task
d) none of the mentioned
View Answer
Answer: b
Explanation: PNN stands for Probabilistic Neural Network; it performs pattern classification.

8. The CPN provides a practical approach for implementing?


a) pattern approximation
b) pattern classification
c) pattern mapping
d) pattern clustering
View Answer
Answer: c
Explanation: CPN, i.e. the counterpropagation network, provides a practical approach for
implementing pattern mapping.

9. What does a basic counterpropagation network consist of?


a) a feedforward network only
b) a feedforward network with hidden layer
c) two feedforward networks with a hidden layer
d) none of the mentioned
View Answer
Answer: c
Explanation: A counterpropagation network consists of two feedforward networks with a
common hidden layer.

10. How does the name counterpropagation signify its architecture?


a) its ability to learn inverse mapping functions
b) its ability to learn forward mapping functions
c) its ability to learn forward and inverse mapping functions
d) none of the mentioned
View Answer
Answer: c
Explanation: A counterpropagation network has the ability to learn forward and inverse
mapping functions.

1. Which of these robot applications can be handled by a single layer feedforward
network?
a) wall climbing
b) rotating arm and legs
c) gesture control
d) wall following
View Answer
Answer: d
Explanation: Wall following is a simple task and doesn't require any feedback.

2. Which is the most direct application of neural networks?


a) vector quantization
b) pattern mapping
c) pattern classification
d) control applications
View Answer
Answer: c
Explanation: It is the most direct application, and multilayer feedforward networks became
popular because of it.

3. What are pros of neural networks over computers?


a) they have the ability to learn by examples
b) they have real time high computational rates
c) they have more tolerance
d) all of the mentioned
View Answer
Answer: d
Explanation: Because of their parallel structure, they have higher computational rates than
conventional computers, so all are true.

4. What is true about single layer associative neural networks?


a) performs pattern recognition
b) can find the parity of a picture
c) can determine whether two or more shapes in a picture are connected or not
d) none of the mentioned
View Answer
Answer: a
Explanation: It can only perform pattern recognition; the rest are not true for a single layer
network.

5. Which of the following is false?


a) neural networks are artificial copy of the human brain
b) neural networks have high computational rates than conventional computers
c) neural networks learn by examples
d) none of the mentioned
View Answer
Answer: d
Explanation: All statements are true for a neural network.

6. For what purpose is the Hamming network suitable?


a) classification
b) association
c) pattern storage
d) none of the mentioned
View Answer
Answer: a
Explanation: The Hamming network performs template matching between stored templates
and inputs.

7. What happens in the upper subnet of the Hamming network?


a) classification
b) storage
c) output
d) none of the mentioned
View Answer
Answer: d
Explanation: In the upper subnet, competitive interaction among units takes place.

8. The competition in the upper subnet of the Hamming network continues till?


a) only one unit remains negative
b) all units are destroyed
c) output of only one unit remains positive
d) none of the mentioned
View Answer
Answer: c
Explanation: The competition in the upper subnet of the Hamming network continues till the
output of only one unit remains positive.

9. What is the activation value of the winner unit indicative of?


a) greater the degradation more is the activation value of winning units
b) greater the degradation less is the activation value of winning units
c) greater the degradation more is the activation value of other units
d) greater the degradation less is the activation value of other units
View Answer
Answer: b
Explanation: Simply, the greater the degradation, the smaller the activation value of the winning unit.
10. What is the matching score at the first layer of the Hamming network
indicative of?
a) dissimilarity of input pattern with patterns stored
b) noise immunity
c) similarity of input pattern with patterns stored
d) none of the mentioned
View Answer
Answer: c
Explanation: The matching score is simply an indication of the similarity of the input pattern
with the stored patterns.
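The Hamming network questions above (matching scores in the lower subnet, competition in the upper subnet until one output stays positive) can be sketched as follows. The MAXNET inhibition constant and stored templates are illustrative assumptions:

```python
import numpy as np

def hamming_match(templates, x):
    """Lower subnet of a Hamming network: the matching score of each
    stored bipolar template is the number of bits that agree with the
    input, computed as (t.x + n) / 2."""
    n = len(x)
    return (templates @ x + n) / 2

def maxnet(scores, eps=0.1, max_iter=100):
    """Upper subnet (MAXNET): units inhibit each other until only one
    unit's output remains positive -- the best-matching class."""
    a = scores.astype(float)
    for _ in range(max_iter):
        a = np.maximum(0.0, a - eps * (a.sum() - a))  # lateral inhibition
        if np.count_nonzero(a) <= 1:
            break
    return int(np.argmax(a))

templates = np.array([[1, 1, 1, -1, -1],
                      [-1, -1, 1, 1, 1]])
x = np.array([1, 1, -1, -1, -1])   # noisy version of template 0
scores = hamming_match(templates, x)
winner = maxnet(scores)
# scores are [4, 0]; the competition leaves template 0 as the winner.
```

Greater degradation of the input lowers the winner's matching score, which is exactly what question 9 states about the winner's activation value.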

1. Can invariances be built as static functions in the structure?


a) yes
b) no
View Answer
Answer: b
Explanation: Invariances have to be dynamically estimated from data.

2. What is the objective of associative memories?


a) to store patterns
b) to recall patterns
c) to store association between patterns
d) none of the mentioned
View Answer
Answer: d
Explanation: The objective of associative memories is to store associations between
patterns for later recall of one pattern given the other.

3. Is it possible to capture an implicit reasoning process by a pattern classification network?


a) yes
b) maybe
c) no
d) cannot be determined
View Answer
Answer: a
Explanation: For example, a neural network for the contract bridge game can capture such reasoning.

4. Are classification methods based on correlation matching using moment features useful
for problems of handwritten characters?
a) yes
b) no
View Answer
Answer: b
Explanation: Because different parts of handwritten characters are deformed differently.
5. Associative memory, if used in a feedback structure of the Hopfield type, can function as?
a) data memory
b) cluster
c) content addressable memory
d) none of the mentioned
View Answer
Answer: c
Explanation: Associative memory, if used in a feedback structure of the Hopfield type, can
function as content addressable memory.

6. In a feedforward network, the associations corresponding to input-output patterns are
stored in?
a) activation state
b) output layer
c) hidden layer
d) none of the mentioned
View Answer
Answer: d
Explanation: In a feedforward network, the associations corresponding to input-output
patterns are stored in the weights of the network.

7. Which is one of the applications of associative memories?


a) direct pattern recall
b) voice signal recall
c) mapping of the signal
d) image pattern recall from noisy clues
View Answer
Answer: d
Explanation: The objective of associative memories is to store associations between
patterns for later recall of one pattern given the other, so noisy versions of the same
image can be recalled.

8. How can optimization be applied in images?


a) by use of simulated annealing
b) by attaching a feedback network
c) by adding an additional hidden layer
d) none of the mentioned
View Answer
Answer: a
Explanation: Optimization can be applied to images by using simulated annealing to
formulate the problem as an energy minimization problem.

9. In control applications, how many ways are there to control a plant?


a) 1
b) 2
c) 4
d) infinite
View Answer
Answer: b
Explanation: Open loop and feedback loop are the two ways.
10. Can neuro-fuzzy systems lead to more powerful neural networks?
a) yes
b) no
c) may be
d) cannot be determined
View Answer
Answer: a
Explanation: If fuzzy logic is incorporated into conventional ANN models, more powerful
systems can be created.
