Objective Questions
10) The nonlinear relation between the analog and digital frequencies is called ------------.
a) aliasing b) warping c) prewarping d) antialiasing
11) In the method of steepest descent, ensemble averaging is performed ----------- computing the learning curve.
a) before b) after c) in parallel d) simultaneously
12) The learning curve for the method of steepest descent is --------in nature.
(a) deterministic (b) probabilistic (c) stochastic (d) algebraic
13) In the LMS case, ensemble averaging is performed --------- computing the “noisy” learning curves.
(a) before (b) after (c) in parallel (d) simultaneously
14) The learning curve of the LMS is -----------------in nature.
(a) deterministic (b) probabilistic (c) stochastic (d) algebraic
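Questions 11-14 turn on one distinction: the steepest-descent learning curve is deterministic (the ensemble averaging is built in before the curve is computed), whereas each single-realization LMS learning curve is stochastic ("noisy"), so the smooth LMS learning curve is obtained by ensemble averaging after the noisy curves have been computed. A minimal NumPy sketch of that averaging, assuming an illustrative system-identification setup (the filter length, step size, and noise level below are arbitrary choices, not values from the question bank):

import numpy as np

rng = np.random.default_rng(0)
M, N, runs, mu = 4, 500, 200, 0.01         # illustrative parameters
w_true = rng.standard_normal(M)            # unknown system to identify

J = np.zeros(N)                            # accumulates squared a priori errors
for _ in range(runs):                      # independent realizations of the experiment
    u = rng.standard_normal(N + M)
    w = np.zeros(M)
    for n in range(N):
        x = u[n:n + M][::-1]               # input vector u(n)
        d = w_true @ x + 0.01 * rng.standard_normal()
        e = d - w @ x                      # "noisy" instantaneous error
        w += mu * e * x                    # LMS weight update
        J[n] += e ** 2
J /= runs                                  # ensemble average AFTER computing the noisy curves

A single realization of e^2(n) is erratic; only the average over many independent trials traces the smooth decay that the theory describes.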
16) The LMS algorithm behaves like a low-pass filter with a small cutoff frequency when the step-size parameter is ------------.
(a) large (b) equal (c) zero (d) small
17) The mean-square deviation (MSD) learning curve is a plot of the mean-square deviation versus ---------------------.
(a) adaptation cycle (b) adoptation cycle (c) gain (d) frequency
18) Mean-square error J(n) and mean-square deviation d(n) in the LMS algorithm depend on the ---------------.
(a) adaptation cycle (b) adoptation cycle (c) gain (d) frequency
19) Under Assumption 2, the irreducible estimation error e_o(n) produced by the Wiener filter is statistically ----------- of the input vector u(n).
(a) double (b) three times (c) independent (d) four times
20) Convergence in the mean square is inherently linked to the ensemble-average --------- curves.
(a) gain (b) learning (c) magnitude (d) frequency
10) In the normalized LMS algorithm, the weight controller applies a weight adjustment to the FIR filter in response to the combined action of the ------------ and the error signal.
(a) input vector (b) output vector (c) desired vector (d) noise vector
11) The normalized LMS algorithm may be viewed as an LMS algorithm with a ----------- step-size parameter.
(a) frequency varying (b) constant (c) variable (d) time varying
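The point of question 11 is that the NLMS update divides a fixed constant by the squared norm of the current input vector, so the effective step size changes from one adaptation cycle to the next. A minimal sketch, assuming illustrative constants mu_tilde and delta (the small regularization term guarding against division by a near-zero norm):

import numpy as np

rng = np.random.default_rng(1)
M, N = 8, 1000
mu_tilde, delta = 0.5, 1e-6                # fixed design constants (illustrative)
w_true = rng.standard_normal(M)
u = rng.standard_normal(N + M)
w = np.zeros(M)
for n in range(N):
    x = u[n:n + M][::-1]                   # input vector u(n)
    e = w_true @ x - w @ x                 # error signal
    mu_n = mu_tilde / (delta + x @ x)      # time-varying step size: scales with 1/||u(n)||^2
    w += mu_n * e * x                      # same form as LMS, but with mu_n in place of a fixed mu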
12) In a block-adaptive filter, the incoming data sequence u(n) is sectioned into L-point blocks by means of a --------------- converter.
(a) serial-to-parallel (b) parallel-to-serial (c) parallel-to-parallel (d) serial-to-serial
13) Block adaptive filtering does not include
(a) frequency-domain adaptive filtering (FDAF) (b) self-orthogonalizing adaptive filtering
(c) subband adaptive filtering (d) fixed-weight adaptive filtering
14) In block adaptive filtering, adaptation of the filter proceeds on a ----------- basis.
(a) block-by-block (b) sample-by-sample (c) serial (d) parallel
15) In the block LMS algorithm, the most preferred choice for the block size (L) relative to the length of the adaptive filter (M) is --------------
(a) L < M (b) L > M (c) L = M (d) L ≥ M
16) For M = 1024, the fast block LMS algorithm is roughly --------------- times faster than the traditional LMS algorithm in computational terms.
(a) 32 (b) 64 (c) 8 (d) 16
17) The fast block LMS algorithm provides a computational cost per adaptation cycle --------- than/to that of the traditional LMS algorithm.
(a) equal (b) greater (c) lesser (d) equal or greater
18) Analysis and synthesis sections are the components of ---------------------
(a) LMS (b) block LMS (c) fast block LMS (d) Subband adaptive filters
19) The synthesis filter bank consists of the -------------- connection of a set of digital filters.
(a) series (b) parallel (c) series and parallel (d) mixed
20) The fast block LMS algorithm exploits the computational advantage offered by ---------
(a) overlap-save method (b) overlap-add method (c) overlap-mix method (d) overlap-divide method
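Questions 12-20 together describe the fast block LMS algorithm: the input is sectioned into blocks, the filter adapts block by block with L = M as the usual choice, and the convolution and correlation are carried out with FFTs via the overlap-save method. A compact sketch of the standard constrained frequency-domain form, with all sizes and the step size chosen only for illustration:

import numpy as np

rng = np.random.default_rng(2)
M = 64                                     # filter length; block size L = M
mu, blocks = 0.01, 200
w_true = rng.standard_normal(M)
u = rng.standard_normal((blocks + 1) * M)  # first block just primes the overlap
d = np.convolve(u, w_true)[:len(u)]        # desired response (noise-free plant)

W = np.zeros(2 * M, dtype=complex)         # frequency-domain weight vector (2M-point FFT)
for k in range(1, blocks + 1):
    seg = u[(k - 1) * M:(k + 1) * M]       # previous block + current block (overlap-save)
    U = np.fft.fft(seg)
    y = np.fft.ifft(U * W).real[M:]        # keep only the last M output samples
    e = d[k * M:(k + 1) * M] - y           # block error signal
    E = np.fft.fft(np.concatenate([np.zeros(M), e]))
    phi = np.fft.ifft(np.conj(U) * E).real[:M]   # gradient estimate, constrained to M taps
    W += mu * np.fft.fft(np.concatenate([phi, np.zeros(M)]))

Each adaptation cycle costs a handful of 2M-point FFTs instead of O(M^2) time-domain operations per block of M samples, which is the source of the computational advantage the questions refer to.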
9) A _________ represents a parallel computing network ideally suited for mapping a number of important linear algebra computations, such as matrix multiplication, triangularization, and back substitution.
a) vector space b) systolic array c) both a and b d) none
10) Two basic types of processing elements may be distinguished in a systolic array: __________
a) boundary cells and external cells b) boundary cells and internal cells
c) internal cells and external cells d) all of the above
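To make the boundary-cell/internal-cell distinction of question 10 concrete, here is a small software simulation of a triangular systolic array performing QR triangularization by Givens rotations: each boundary cell generates a rotation from its stored value and the incoming sample, and each internal cell applies that rotation and passes the result downward. A sketch under those assumptions (the cell functions and sizes are illustrative, not a hardware description):

import numpy as np

def boundary_cell(r, x):
    # Generate the Givens rotation that annihilates x against the stored value r.
    d = np.hypot(r, x)
    if d == 0.0:
        return 0.0, (1.0, 0.0)
    return d, (r / d, x / d)

def internal_cell(r, x, cs):
    # Apply the rotation received from the boundary cell; pass the rotated sample on.
    c, s = cs
    return c * r + s * x, c * x - s * r

rng = np.random.default_rng(3)
p = 4
A = rng.standard_normal((20, p))           # rows of data streaming into the array
R = np.zeros((p, p))                       # values stored in the triangular array
for row in A:
    x = row.copy()
    for i in range(p):
        R[i, i], cs = boundary_cell(R[i, i], x[i])            # boundary cell
        for j in range(i + 1, p):
            R[i, j], x[j] = internal_cell(R[i, j], x[j], cs)  # internal cells

# R now agrees (up to row signs) with the triangular factor of a batch QR of A.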
11) Real-time applications of the RLS algorithm include __________
a) signal and data processing b) communications c) control systems d) all of the above
12) The difference between unconstrained and linear-equality constrained RLS solutions lies in the __________
a) initial value and final value b) final value c) initial value d) origin
13) An array version of the fast RLS algorithms is known as the __________
a) Fast Fourier Transform (FFT) b) FTF (Fast Transversal Filter) c) IIR filter d) FIR filter
14) __________ is an adaptive filter algorithm that recursively finds the coefficients that minimize a weighted linear least-squares cost function relating to the input signals.
a) Recursive least squares (RLS) b) Least mean squares (LMS) c) both a and b d) Mean square error
15) The RLS algorithm exhibits extremely __________
a) fast convergence b) slow convergence c) medium convergence d) slow and medium convergence
16) Steady-state value of the averaged squared error produced by the RLS algorithm is __________
a) large b) medium c) small d) extremely large
17) In theory, the RLS algorithm produces __________ misadjustment
a) one b) zero c) three d) five
18) The __________ of the RLS algorithm is relatively insensitive to variations in the eigenvalue spread
a) Misadjustment b) Robustness c) steady state value d) Rate of convergence
19) Convergence of the RLS algorithm is attained in about __________ iterations
a) 2 b) 20 c) 200 d) 2000
20) The rate of convergence is defined by the __________ of the algorithm
a) Time span b) accuracy c) number of iterations d) complexity
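Questions 14-20 summarize the standard properties of RLS: it recursively minimizes an exponentially weighted least-squares cost, converges fast (on the order of 2M adaptation cycles for an M-tap filter), produces zero misadjustment in theory, and its rate of convergence is insensitive to the eigenvalue spread of the input correlation matrix. A minimal sketch of the exponentially weighted RLS recursion, with the forgetting factor and initialization constant chosen only for illustration:

import numpy as np

rng = np.random.default_rng(4)
M, N = 8, 200
lam, delta = 0.99, 0.01                    # forgetting factor, initialization constant
w_true = rng.standard_normal(M)
u = rng.standard_normal(N + M)

w = np.zeros(M)
P = np.eye(M) / delta                      # P(0) = delta^{-1} I
for n in range(N):
    x = u[n:n + M][::-1]                   # input vector u(n)
    d = w_true @ x + 0.01 * rng.standard_normal()
    pi = P @ x
    k = pi / (lam + x @ pi)                # gain vector
    e = d - w @ x                          # a priori estimation error
    w += k * e                             # weight update
    P = (P - np.outer(k, pi)) / lam        # recursion for the inverse correlation matrix

Unlike LMS, each cycle costs O(M^2) operations because of the P update; that extra complexity is the price paid for the faster, eigenvalue-spread-insensitive convergence the questions describe.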
ADAPTIVE SIGNAL PROCESSING (18PE0EC3B)
MULTIPLE CHOICE QUESTIONS
Answers:
Q     Unit-1   Unit-2   Unit-3   Unit-4   Unit-5
1     b        d        c        a        b
2     c        b        d        b        a
3     a        c        b        b        a
4     b        d        d        c        b
5     b        c        a        a        a
6     c        a        b        d        b
7     b        b        d        a        a
8     a        c        d        a        a
9     d        a        a        a        b
10    a        b        a        a        b
11    c        a        d        b        d
12    d        a        a        a        c
13    c        b        d        b        b
14    b        c        a        a        a
15    d        a        c        b        a
16    a        d        d        b        c
17    b        a        c        a        b
18    d        a        d        a        d
19    a        c        b        a        b
20    c        b        a        a        c