
ADAPTIVE SIGNAL PROCESSING (18PE0EC3B)

MULTIPLE CHOICE QUESTIONS


Unit-1
1. A filter is a system that extracts information about a prescribed quantity of interest from --------- data.
a) original b) noisy c) message d) information
2. The three basic kinds of information-processing operations are filtering, smoothing, and ----------.
a) sampling b) quantizing c) predicting d) arranging
3. A filter is said to be linear if the output is a -------- function of the filter input.
a) linear b) 2nd order c) non-linear d) curvilinear
4. The error signal is defined as the difference between the desired response and the actual filter --------.
a) input b) parameter c) condition d) output
5. For ---------- inputs, the resulting solution is commonly known as the Wiener filter.
a) nonstationary b) stationary c) variable d) multiplexed
6. The Wiener filter is said to be optimum in the ----------------- sense.
a) average b) r.m.s. c) mean-square-error d) percentage
7. A plot of the mean-square value of the error signal versus the adjustable parameters of
a linear filter is called ----------------- surface.
a) solution b) error-performance c) performance d) error
8. The minimum point of error-performance surface represents the -------------- solution.
a) Wiener b) Ideal c) Partial d) Non-optimal
9. The Wiener filter is inadequate for dealing with situations in which --------- of the signal and/or noise is intrinsic to the
problem.
a) stationarity b) Integrity c) mean d) nonstationarity
10. In the case of a nonstationary signal, the optimum filter has to assume a ---------------- form.
a) time-varying b) mean-varying c) frequency-varying d) space-varying
11. For the filter to be optimum, the statistical characteristics of the input data should match the a ---------
information on which the design of the filter is based.
a) posteriori b) variable c) priori d) reliable
12. An adaptive filter relies for its operation on a ------------------- algorithm.
a) non-recursive b) non-adjustable c) non-optimal d) recursive
13. The ----------------- algorithm starts from some predetermined set of initial conditions.
a) non- adaptive b) estimate and plug c) adaptive d) perfect
14. The number of adaptation cycles required for the algorithm, in response to stationary inputs,
to converge to the optimum Wiener solution is called -------------.
a) Tracking b) Rate of convergence c) Robustness d) Trend of convergence
15. The deviation of the final value of the mean-square error from the Wiener solution is ------------.
a) Tracking b) Rate of convergence c) Robustness d) Misadjustment
16. In a robust adaptive filter, small disturbances can only result in -------------- estimation errors.
a) small b) large c) infinite d) unpredictable
17. Tracking is needed when an adaptive filter operates in a ---------- environment.
a) stationary b) nonstationary c) suitable d) responsive
18. When an algorithm is implemented numerically, inaccuracies are produced due to --------------- errors.
a) sampling b) mean square c) averaging d) quantization
19. The element r(0) on the main diagonal of a correlation matrix is always ----------- valued.
a) real b) complex c) square d) real or complex
20. The correlation matrix of a stationary discrete-time stochastic process is --------------.
a) Lagrangian b) Null c) Hermitian d) adjoint
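
Worked example (Unit-1): questions 5-8 state that, for stationary inputs, the minimum point of the error-performance surface is the Wiener solution. The Python sketch below illustrates this by estimating R and p from synthetic data and solving the normal equations; the plant taps [0.8, -0.4], filter length, and noise level are illustrative assumptions, not taken from any particular problem.

    import numpy as np

    # Minimal sketch: estimate R and p from data, then solve for the
    # Wiener solution w_o = R^{-1} p (the minimum of the error surface).
    rng = np.random.default_rng(0)
    N, M = 10_000, 2

    u = rng.standard_normal(N)                 # stationary (white) input
    d = 0.8 * u - 0.4 * np.roll(u, 1)          # desired response d(n)
    d += 0.05 * rng.standard_normal(N)         # additive measurement noise

    # Correlation matrix R (Hermitian, cf. question 20) and vector p.
    U = np.column_stack([np.roll(u, k) for k in range(M)])  # [u(n), u(n-1)]
    R = U.T @ U / N
    p = U.T @ d / N

    w_o = np.linalg.solve(R, p)                # normal equations R w = p
    print("estimated Wiener solution:", np.round(w_o, 3))   # ~ [0.8, -0.4]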



Unit-2
1) Wiener filters are a class of --------------optimum discrete-time filters.
(a) Circular (b) curvilinear (c) nonlinear (d) linear
2) In many practical situations, the observables are measured in ---------- form.
(a) modulated (b) baseband (c) overmodulated (d) undermodulated
3) An FIR filter is inherently stable, since its structure involves the use of --------- paths.
(a) feedback (b) feed forward and feedback (c) feed forward (d) jumper
4) ------------ filters are preferred in adaptive filtering.
(a) linear (b) digital (c) IIR (d) FIR
5) The difference between the desired response d(n) and the filter output y(n) is called ----- error.
(a) proportional (b) relative (c) estimation (d) misadjustment
6) At the bottom of the error-performance surface, the cost function J attains its -------- value.
(a) minimum (b) maximum (c) average (d) zero
7) The steepest-descent algorithm is subject to the possibility of becoming unstable due to the presence of --------------.
(a) feedforward (b) feed backward (c) adder (d) delay
8) The steepest-descent algorithm does not depend on ------------.
(a) initial value (b) gradient vector (c) path (d) step-size parameter
9) IIR digital filters are of -------------- nature.
a) Recursive b) Non Recursive c) Reversive d) Non Reversive

10) The nonlinear relation between the analog and digital frequencies is called ----------.
a) aliasing b) warping c) prewarping d) antialiasing

11) In the method of steepest descent, ensemble averaging is performed ----------- computing the learning curve.
a) before b) after c) in parallel d) simultaneously

12) The learning curve for the method of steepest descent is --------in nature.
(a) deterministic (b) probabilistic (c) stochastic (d) algebraic

13) In the LMS case, ensemble averaging is performed --------- computing the “noisy” learning curves.
(a) before (b) after (c) in parallel (d) simultaneously
14) The learning curve of the LMS is -----------------in nature.
(a) deterministic (b) probabilistic (c) stochastic (d) algebraic

15) The LMS algorithm is ------------ compared to the Wiener solution.
(a) suboptimal (b) superoptimal (c) cannot be compared (d) far better

16) The LMS algorithm behaves like a low-pass filter with a small cutoff frequency when step size
parameter is ------------.
(a) large (b) equal (c) zero (d) small
17) The mean-square deviation (MSD) learning curve is a plot of the mean-square deviation versus ---------------------.
(a) adaptation cycle (b) adoption cycle (c) gain (d) frequency

18) The mean-square error J(n) and mean-square deviation d(n) in the LMS algorithm depend on the ----------.
(a) adaptation cycle (b) adoption cycle (c) gain (d) frequency

19) Under Assumption 2, the irreducible estimation error eo(n) produced by the Wiener filter is
statistically -----------of the input vector u(n).
(a) double (b) three times (c) Independent (d) four times

20) Convergence in the mean square is inherently linked to the ensemble-average ---------curves.
(a) gain (b) Learning (c) magnitude (d) frequency
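
Worked example (Unit-2): questions 13-16 concern the LMS algorithm and its stochastic learning curve. A minimal sketch of the real-valued LMS recursion w(n+1) = w(n) + mu*e(n)*u(n) follows; the plant taps, step size, and noise level are assumptions chosen only for illustration.

    import numpy as np

    # One run of LMS produces a single "noisy" learning curve (question 13);
    # ensemble averaging over independent runs is performed afterwards.
    rng = np.random.default_rng(1)
    N, M, mu = 5_000, 4, 0.01
    w_true = np.array([1.0, -0.5, 0.25, 0.1])   # assumed unknown system

    u = rng.standard_normal(N)
    w = np.zeros(M)
    sq_err = np.zeros(N)
    for n in range(M, N):
        u_vec = u[n:n - M:-1]                   # tap-input vector u(n)
        d = w_true @ u_vec + 0.01 * rng.standard_normal()
        e = d - w @ u_vec                       # estimation error e(n)
        w = w + mu * e * u_vec                  # stochastic-gradient update
        sq_err[n] = e ** 2                      # sample of the noisy curve

    print("final LMS weights:", np.round(w, 3))             # ~ w_true
    print("steady-state MSE estimate:", sq_err[-1000:].mean())

As question 15 notes, LMS is suboptimal compared with the Wiener solution; its excess mean-square error shows up as a steady-state MSE slightly above the noise floor.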



Unit-3
1) Misadjustment is defined as the ratio of the ----------value of the excess mean-square error Jex(∞) to
the minimum mean-square error Jmin.
(a) absolute (b) average (c) steady-state (d) expected
2) It is customary to express Misadjustment (M) as ---------------------.
(a)absolute value (b) average value (c) ratio (d) percentage
3) The misadjustment factor of the LMS algorithm is ---------- proportional to the step-size parameter.
(a) inversely (b) directly (c) jointly (d) obliquely
4) For the misadjustment to be small, the step size should be correspondingly kept ----------.
(a) large (b) zero (c) equal (d) small
5) By keeping the step size small, the robustness of the LMS algorithm (with a small misadjustment factor) is
traded off for ----------------------.
(a) efficiency (b) estimation error (c) mean square error (d) input parameter
6) In the traditional LMS algorithm, the adjustment applied to the tap weights is ---------- proportional to the tap-input vector u(n).
(a) inversely (b) directly (c) jointly (d) obliquely
7) When the tap-input vector u(n) is large, the LMS algorithm suffers from a -------- noise amplification
problem.
(a) quantization (b) granular (c) round off (d) gradient
8) The normalized LMS algorithm is preferred to the traditional LMS algorithm to avoid the ------ noise
amplification problem.
(a) quantization (b) granular (c) round off (d) gradient
9) The normalized LMS algorithm finds application in --------------- cancellation.
(a) acoustic echo (b) granular noise (c) round-off noise (d) quantization noise

10) In the normalized LMS algorithm, the weight controller applies a weight adjustment to the FIR filter
in response to the combined action of the ------------and error signal.
(a) input vector (b) output vector (c) desired vector (d) noise vector
11) The normalized LMS algorithm may be viewed as an LMS algorithm with a -----------step-size
parameter.
(a) frequency varying (b) constant (c) variable (d) time varying
12) In a block-adaptive filter, the incoming data sequence u(n) is sectioned into L-point blocks by
means of a ---------------converter.
(a) serial-to-parallel (b) parallel -to- serial (c) parallel -to-parallel (d) serial-to- serial
13) Block adaptive filtering does not include ---------------.
(a) Frequency-domain adaptive filtering (FDAF) (b) Self-orthogonalizing adaptive filtering
(c) Subband adaptive filtering (d) Fixed weight adaptive filtering
14) In Block adaptive filtering, adaptation of the filter proceeds on a -----------basis.
(a) block-by-block (b) sample-by-sample (c) serial (d) parallel

15) In the block LMS algorithm, the most preferred relation between the block size (L) and the length of the
adaptive filter (M) is --------------
(a) L < M (b) L > M (c) L = M (d) L ≥ M

16) For M=1024, the fast block LMS algorithm is roughly --------------- times faster than the traditional
LMS algorithm in computational terms.
(a) 32 (b) 64 (c) 8 (d) 16
17) Compared with that of the traditional LMS algorithm, the computational cost per adaptation cycle of the
fast block LMS algorithm is ---------.
(a) equal (b) greater (c) lesser (d) equal or greater
18) Analysis and synthesis sections are the components of ---------------------
(a) LMS (b) block LMS (c) fast block LMS (d) Subband adaptive filters
19) The synthesis filter bank consists of the -------------- connection of a set of digital filters.
(a) series (b) parallel (c) series and parallel (d) mixed
20) The fast block LMS algorithm exploits the computational advantage offered by the ---------.
(a) overlap-save method (b) overlap-add method (c) overlap-mix method (d) overlap-divide method
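
Worked example (Unit-3): questions 6-11 describe how the normalized LMS algorithm divides the step size by the squared Euclidean norm of the tap-input vector, avoiding the gradient noise amplification that a large u(n) causes in the traditional LMS algorithm. A minimal sketch, with assumed parameter values, follows.

    import numpy as np

    # Normalized LMS: the effective step size mu / (delta + ||u(n)||^2) is
    # time-varying (question 11); delta guards against division by ~zero.
    rng = np.random.default_rng(2)
    N, M, mu, delta = 5_000, 4, 0.5, 1e-6
    w_true = np.array([0.6, -0.3, 0.2, -0.1])   # assumed plant

    u = 10.0 * rng.standard_normal(N)           # deliberately large inputs
    w = np.zeros(M)
    for n in range(M, N):
        u_vec = u[n:n - M:-1]                   # tap-input vector u(n)
        d = w_true @ u_vec                      # desired response
        e = d - w @ u_vec                       # estimation error
        w = w + (mu / (delta + u_vec @ u_vec)) * e * u_vec
    print("NLMS weights:", np.round(w, 3))      # ~ w_true despite large u(n)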



Unit-4
1) The forward prediction error equals the difference between the ____________ and ____________.
a) Input sample, its predicted value b) Lattice, filter c) Vector space, filter
d) Correlation, autocorrelation
2) Hermitian transposition is ____________.
a) transposition combined with variable conjugation b) transposition combined with complex conjugation
c) both d) none
3) The backward prediction error equals the difference between the ____________ and ____________.
a) Input sample, its predicted value b) actual sample value, its predicted value c) both
d) correlation, autocorrelation
4) How many properties of prediction-error filters are available?
a) 4 b) 5 c) 6 d) 7
5) A filter that operates on the set of samples u(n), u(n-1), u(n-2), ..., u(n-M) to produce the forward
prediction error fM(n) at its output is called a ____________?
a) forward prediction error filter b) backward prediction error filter c) equaliser filter d) Prediction filter
6) The method is recursive in nature and makes particular use of the Toeplitz structure of the correlation
matrix of the tap inputs of the filter. It is known as the ____________?
a) Mean Square Error algorithm b) Recursive Least Square algorithm c) Least Mean Square
algorithm d) Levinson–Durbin algorithm
7) The optimum regression-coefficient vector ho is defined by ____________?
a) ho = D⁻¹z b) ho = D⁻¹xyz c) ho = L⁻¹z d) ho = L⁻¹xyz
8) The normalized gradient lattice algorithm has been used extensively in applications such as
____________?
a) echo cancellation b) Slow convergence rate c) Large convergence rate d) All the above
9) The gain term 1/dp(n) would be fairly small ____________?
a) because the adaptive lattice would not respond very rapidly to the high noise environment.
b) because the adaptive lattice would respond very rapidly to the high noise environment
c) because least squares update relations
d) None of the above
10) The stochastic gradient algorithm is also called the ____________?
a) Least mean square algorithm b) Recursive least square algorithm c) Mean square error algorithm
d) None of the above
11) Noise belongs to the category of ____________?
a) Deterministic Signal b) Random signal c) Energy signal d) none of the above
12) Generally, a random process follows a ____________?
a) Gaussian Process b) Fourier Process c) Random process d) all of the above
13) A random process x(n) is said to be stationary if ____________?
a) Mean is variable b) Mean is constant c) Variance of the process is infinite d) None of the above
14) Comparison of two different signals is called ____________?
a) Cross-correlation b) Auto-correlation c) Random process d) all of the above
15) Comparison of a signal with itself is called ____________?
a) Cross-correlation b) Auto-correlation c) Random process d) all of the above
16) A linear prediction filter is designed by using ___________?
a) FIR b) IIR c) Active d) None of the above
17) Solving the normal equations for the prediction coefficients is done by the ___________?
a) Levinson – Durbin Algorithm b) Sampling Theorem c) Dirichlet’s Theorem d) None of the above
18) ___________ analysis is used for the prediction of a signal?
a) Linear predictive b) active predictive c) least square predictive d) None of the above
19) In statistical signal processing, linear prediction models are referred to as ___________?
a) autoregressive process b) random processes c) cell process d) None of the above
20) Which of the following is the application of lattice filter?
a) Digital speech processing b) Adaptive filter c) Electroencephalogram d) All of the above
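
Worked example (Unit-4): questions 6 and 17 refer to the Levinson-Durbin algorithm, which solves the normal equations recursively by exploiting the Toeplitz structure of the correlation matrix. A minimal sketch follows; the autocorrelation lags are an illustrative assumption (those of an AR(1)-like process with coefficient 0.5).

    import numpy as np

    def levinson_durbin(r, M):
        """Order-recursive solution of the normal equations, lags r[0..M]."""
        a = np.zeros(M + 1)          # prediction-error filter coefficients
        a[0] = 1.0
        P = r[0]                     # prediction-error power
        for m in range(1, M + 1):
            k = -(r[1:m + 1][::-1] @ a[:m]) / P   # reflection coefficient
            a[1:m + 1] = a[1:m + 1] + k * a[:m][::-1]
            P *= 1.0 - k ** 2        # error power shrinks with each order
        return a, P

    r = np.array([1.0, 0.5, 0.25, 0.125])    # assumed autocorrelation lags
    a, P = levinson_durbin(r, M=3)
    print("prediction-error filter:", np.round(a, 4))    # [1, -0.5, 0, 0]
    print("final prediction-error power:", round(P, 4))  # 0.75

The reflection coefficients computed along the way are the parameters of the corresponding lattice prediction-error filter (question 20).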



Unit-5
1) Which of the following does not hold true for RLS algorithms?
a) Complex b) Slow convergence rate c) Large convergence rate d) None
2) A concept central to the development of vector space approaches to RLS is that of a subspace of
the ____________?
a) linear vector space b) Perpendicular vector space c) orthogonal vector space d) both b and c
3) The last of the four transversal filters necessary to implement the FTF algorithm will be referred
to as the ____________?
a) gain transversal filter b) grad transversal filter c) Joint transversal filter d) Lattice
transversal filter
4) A reduction of approximately O(N) arithmetic operations occurs by considering a variant of the
basic FTF algorithm that will be called the ___________
a) Power-normalized FTF b) gain-normalized FTF c) filter normalized FTF d) none of the
above
5) When the angle Ø is 90° [i.e., when the input vectors u(n) and u(n - 1) are orthogonal to each
other], the rate of convergence of the normalized LMS algorithm is the ___________
a) Fastest b) Slowest c) Constant d) all the above
6) When the angle Ø is zero or 180° [i.e., when the input vectors u(n) and u(n - 1) point in the same
direction or in opposite directions], the rate of convergence of the normalized LMS algorithm is
the ___________
a) Fastest b) Slowest c) Constant d) all the above
7) Two important square-root adaptive filtering algorithms for RLS estimation are known as the
___________
a) QR-decomposition-based RLS (QRD-RLS) algorithm and inverse QRD-RLS algorithm
b) Mean square error algorithm and Least mean square algorithm
c) inverse QRD-RLS algorithm and Least mean square algorithm
d) QRD-RLS algorithm and Mean square error algorithm
8) The derivation of the QRD-RLS and inverse QRD-RLS algorithms has traditionally relied, in one form
or another, on the use of an orthogonal triangularization process known in matrix algebra as
__________
a) QR-decomposition b) VR-decomposition c) CR-decomposition d) SR-decomposition

9) A _________ represents a parallel computing network ideally suited for mapping a number of
important linear algebra computations, such as matrix multiplication, triangularization, and back
substitution
a) Vector Space b) systolic array c) both a and b d) none
10) Two basic types of processing elements may be distinguished in a systolic array: __________
a) boundary cells and external cells b) boundary cells and internal cells c) internal cells
and external cells d) all the above
11) Real-time applications of the RLS algorithm include __________
a) signal and data processing b) communications c) control systems d) all of the above
12) The difference between unconstrained and linear-equality constrained RLS solutions lies in the
__________
a) initial value and final value b) final value c) initial value d) origin
13) An array version of the fast RLS algorithms is known as __________
a) Fast Fourier Transform (FFT) b) FTF (Fast Transversal Filter) c) IIR filter d) FIR filter
14) __________ is an adaptive filter algorithm that recursively finds the coefficients that minimize
a weighted linear least squares cost function relating to the input signals
a) Recursive least squares (RLS) b) Least mean squares (LMS) c) both a and b d) Mean square
error
15) The RLS algorithm exhibits extremely __________
a) fast convergence b) slow convergence c) medium convergence d) slow and medium
convergence
16) The steady-state value of the averaged squared error produced by the RLS algorithm is
__________
a) large b) medium c) small d) extremely large
17) In theory, the RLS algorithm produces __________ misadjustment
a) one b) zero c) three d) five
18) __________ of the RLS algorithm is relatively insensitive to variations in the eigenvalue spread
a) Misadjustment b) Robustness c) steady state value d) Rate of convergence
19) Convergence of the RLS algorithm is attained in about __________ iterations
a) 2 b) 20 c) 200 d) 2000
20) The rate of convergence is defined by the __________ of the algorithm
a) Time span b) accuracy c) number of iterations d) complexity
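
Worked example (Unit-5): question 14 defines the RLS algorithm as one that recursively finds the coefficients minimizing an exponentially weighted least-squares cost. The sketch below implements the standard RLS recursion using the matrix inversion lemma; the forgetting factor, initialization, and plant taps are illustrative assumptions.

    import numpy as np

    # Standard RLS recursion with forgetting factor lam.
    rng = np.random.default_rng(3)
    N, M, lam = 400, 4, 0.99
    w_true = np.array([0.7, -0.2, 0.1, 0.05])   # assumed plant

    u = rng.standard_normal(N)
    w = np.zeros(M)
    P = 1e3 * np.eye(M)    # inverse correlation matrix, large initial value
    for n in range(M, N):
        u_vec = u[n:n - M:-1]                  # tap-input vector u(n)
        d = w_true @ u_vec + 0.01 * rng.standard_normal()
        Pu = P @ u_vec
        k = Pu / (lam + u_vec @ Pu)            # gain vector k(n)
        e = d - w @ u_vec                      # a priori estimation error
        w = w + k * e                          # coefficient update
        P = (P - np.outer(k, Pu)) / lam        # inverse-correlation update

    print("RLS weights:", np.round(w, 3))      # ~ w_true

Consistent with questions 15 and 19, the weights settle within roughly 2M (here about 20) adaptation cycles, far faster than the LMS sketches in the earlier units.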
Answers:
Unit-1 Unit-2 Unit-3 Unit-4 Unit-5
1. b 1. d 1. c 1. a 1. b
2. c 2. b 2. d 2. b 2. a
3. a 3. c 3. b 3. b 3. a
4. b 4. d 4. d 4. c 4. b
5. b 5. c 5. a 5. a 5. a
6. c 6. a 6. b 6. d 6. b
7. b 7. b 7. d 7. a 7. a
8. a 8. c 8. d 8. a 8. a
9. d 9. a 9. a 9. a 9. b
10. a 10. b 10. a 10. a 10. b
11. c 11. a 11. d 11. b 11. d
12. d 12. a 12. a 12. a 12. c
13. c 13. b 13. d 13. b 13. b
14. b 14. c 14. a 14. a 14. a
15. d 15. a 15. c 15. b 15. a
16. a 16. d 16. d 16. b 16. c
17. b 17. a 17. c 17. a 17. b
18. d 18. a 18. d 18. a 18. d
19. a 19. c 19. b 19. a 19. b
20. c 20. b 20. a 20. a 20. c
