Unit 2 - MCQ Bank PDF
1. What are feedback neural networks primarily used for?
a) classification
b) feature mapping
c) pattern mapping
d) none of the mentioned
View Answer
Answer: d
Explanation: Feedback neural networks are primarily used for pattern storage.
2. The presence of false minima will have what effect on the probability of error in recall?
a) directly
b) inversely
c) no effect
d) directly or inversely
View Answer
Answer: a
Explanation: The presence of false minima increases the probability of error in recall, so the error varies directly with it.
6. What happens when we use the mean field approximation with Boltzmann learning?
a) it slows down
b) it speeds up
c) nothing happens
d) it may speed up or slow down
View Answer
Answer: b
Explanation: Boltzmann learning is speeded up by using the mean field approximation.
7. Approximately how many times is Boltzmann learning speeded up by the mean field
approximation?
a) 5-10
b) 10-30
c) 30-50
d) 50-70
View Answer
Answer: b
Explanation: Boltzmann learning is speeded up by a factor of roughly 10-30 using the mean field approximation.
1. How are input layer units connected to the second layer in competitive learning networks?
a) feedforward manner
b) feedback manner
c) feedforward and feedback
d) feedforward or feedback
View Answer
Answer: a
Explanation: The output of the input layer is given to the second layer through adaptive
feedforward weights.
5. Which conditions must be met for a competitive network to perform pattern clustering?
a) non linear output layers
b) connection to neighbours is excitatory and to the farther units inhibitory
c) on centre off surround connections
d) none of the mentioned fulfils the whole criteria
View Answer
Answer: d
Explanation: If the output functions of units in the feedback layer are made non-linear,
with fixed-weight on-centre off-surround connections, pattern clustering can be
performed.
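For illustration, here is a minimal Python sketch of building such a fixed-weight on-centre off-surround connection matrix; the function name and the excitation/inhibition strengths are hypothetical, not values given in this material:

```python
import numpy as np

def on_centre_off_surround(n_units, excite=1.0, inhibit=0.2):
    """Fixed-weight lateral connections for a feedback layer:
    each unit excites itself (on-centre) and inhibits all other
    units (off-surround). Strengths are illustrative only."""
    W = -inhibit * np.ones((n_units, n_units))  # inhibitory connections to all other units
    np.fill_diagonal(W, excite)                 # excitatory self-connection on the diagonal
    return W

print(on_centre_off_surround(4))
```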
6. Which conditions must be met for a competitive network to perform feature mapping?
a) non linear output layers
b) connection to neighbours is excitatory and to the farther units inhibitory
c) on centre off surround connections
d) none of the mentioned fulfils the whole criteria
View Answer
Answer: d
Explanation: If the conditions in a, b and c are all met, then feature mapping can be performed.
7. If a competitive network can perform feature mapping, what can that network be
called?
a) self excitatory
b) self inhibitory
c) self organization
d) none of the mentioned
View Answer
Answer: c
Explanation: A competitive network that can perform feature mapping is called a
self-organization network.
8. What is an instar?
a) receives inputs from all others
b) gives output to all others
c) may receive or give input or output to others
d) none of the mentioned
View Answer
Answer: a
Explanation: An instar receives inputs from all other input units.
10. How can the update of the weight vector in basic competitive learning be represented?
a) w(t + 1) = w(t) + Δw(t)
b) w(t + 1) = w(t)
c) w(t + 1) = w(t) – Δw(t)
d) none of the mentioned
View Answer
Answer: a
Explanation: The update of the weight vector in basic competitive learning is
represented by w(t + 1) = w(t) + Δw(t).
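As a concrete illustration of w(t + 1) = w(t) + Δw(t), here is a minimal Python sketch of a basic winner-take-all competitive update, assuming the winner is the unit with the closest weight vector and Δw = η(x − w) for that winner only; the learning rate and toy data are made up:

```python
import numpy as np

def competitive_update(W, x, eta=0.1):
    """One step of basic competitive learning.
    W   : (n_units, n_inputs) weight matrix, one row per output unit
    x   : input vector
    eta : learning rate (illustrative value)
    Only the winning unit (closest weight vector) is updated:
    w(t + 1) = w(t) + eta * (x - w(t)).
    """
    winner = np.argmin(np.linalg.norm(W - x, axis=1))  # closest weight vector wins
    W[winner] += eta * (x - W[winner])                  # move the winner toward the input
    return W, winner

# toy usage: 3 units clustering random 2-D inputs
rng = np.random.default_rng(0)
W = rng.random((3, 2))
for x in rng.random((100, 2)):
    W, _ = competitive_update(W, x)
print(W)
```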
2. In pattern clustering, does the physical location of a unit relative to other units have
any significance?
a) yes
b) no
c) depends on type of clustering
d) none of the mentioned
View Answer
Answer: b
Explanation: The physical location of a unit doesn't affect the output.
6. In feature maps, when weights are updated for the winning unit and its neighbours,
what type of learning is this known as?
a) Karnaugh learning
b) Boltzmann learning
c) Kohonen's learning
d) none of the mentioned
View Answer
Answer: c
Explanation: Updating the winner and its neighbours is self-organization learning, also known as Kohonen learning.
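A minimal Python sketch of a Kohonen-style step on a one-dimensional map, where the winner and its neighbours within a radius are both moved toward the input; the map size, learning rate and radius below are hypothetical:

```python
import numpy as np

def kohonen_update(W, x, eta=0.1, radius=1):
    """One Kohonen (SOM) learning step on a 1-D map.
    W      : (n_units, n_inputs) weights, units arranged on a line
    x      : input vector
    eta    : learning rate (illustrative)
    radius : neighbourhood radius; the winner AND the units within
             this distance on the map are moved toward the input.
    """
    winner = np.argmin(np.linalg.norm(W - x, axis=1))
    for i in range(len(W)):
        if abs(i - winner) <= radius:      # winner and its map neighbours
            W[i] += eta * (x - W[i])
    return W

# toy usage: a 10-unit map trained on random 2-D inputs
rng = np.random.default_rng(1)
W = rng.random((10, 2))
for x in rng.random((200, 2)):
    W = kohonen_update(W, x)
print(W)
```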
10. The use of nonlinear units in the feedback layer of a competitive network leads to the
concept of?
a) feature mapping
b) pattern storage
c) pattern classification
d) none of the mentioned
View Answer
Answer: d
Explanation: The use of nonlinear units in the feedback layer of a competitive network
leads to the concept of pattern clustering.
2. What is the advantage of basis function networks over multilayer feedforward neural networks?
a) training of basis function is faster than MLFFNN
b) training of basis function is slower than MLFFNN
c) storing in basis function is faster than MLFFNN
d) none of the mentioned
View Answer
Answer: a
Explanation: The main advantage of basis function networks is that their training is
faster than that of an MLFFNN.
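To illustrate why basis function training can be fast, here is a minimal sketch of a radial basis function fit in which the Gaussian centres are simply fixed (picked at random from the data, a simplifying assumption), leaving only a linear least-squares problem for the output weights; the function name and parameter values are hypothetical:

```python
import numpy as np

def train_rbf(X, y, n_centres=10, width=1.0, rng=None):
    """Fit a simple RBF network: fixed Gaussian centres + linear output weights.
    Because the hidden layer is fixed, training reduces to linear least squares,
    which is typically much faster than iterative MLFFNN backpropagation."""
    rng = rng or np.random.default_rng(0)
    centres = X[rng.choice(len(X), n_centres, replace=False)]    # centres taken from the data
    dists = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    H = np.exp(-(dists ** 2) / (2 * width ** 2))                 # Gaussian basis activations
    w, *_ = np.linalg.lstsq(H, y, rcond=None)                    # closed-form output weights
    return centres, w

# toy usage: fit y = sin(x) on 50 points
X = np.linspace(0, 2 * np.pi, 50).reshape(-1, 1)
y = np.sin(X).ravel()
centres, w = train_rbf(X, y)
```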
1. Which of these robot applications can be realised with a single layer feedforward
network?
a) wall climbing
b) rotating arm and legs
c) gesture control
d) wall following
View Answer
Answer: d
Explanation: Wall following is a simple task and doesn't require any feedback.
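As an illustrative sketch only (the sensor layout, weights and bias below are hypothetical), a single layer of feedforward weights can map distance-sensor readings directly to a steering command for wall following, with no hidden layer and no feedback:

```python
import numpy as np

def wall_follow_steering(sensors, weights, bias=0.0):
    """Single-layer feedforward controller for wall following:
    steering = tanh(w . sensors + b), no hidden layer, no feedback.
    sensors : distance readings, e.g. [front, left]
    weights : connection weights (hypothetical values in the usage below)
    Returns a steering value in [-1, 1]."""
    return float(np.tanh(np.dot(weights, sensors) + bias))

# toy usage with made-up sensor readings and untuned weights
sensors = np.array([1.2, 0.4])      # [front distance, left distance] in metres
weights = np.array([-0.8, 1.5])     # illustrative weights only
print(wall_follow_steering(sensors, weights, bias=-0.2))
```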