Unit 1 Mid Term

Soft Computing
Multiple Choice Questions: Unit 1 (Nobel Series)

1. Which is the basic building block of a neural network?
(a) Nerve (b) Synapse (c) Neuron (d) Axon

2. The point of communication between two neurons is called:
(a) Synapse (b) Dendrite (c) Axon (d) Node

3. Which part of the neuron receives signals from other neurons?
(a) Soma (b) Axon (c) Dendrite (d) Synapse

4. The structure of a biological neuron most closely resembles the:
(a) Linear regression model (b) Logistic regression model (c) Perceptron model (d) Artificial neural network model

5. The activation function of an artificial neuron is used to:
(a) Sum up the input signals (b) Make predictions (c) Introduce non-linearity (d) Compute the cost function

6. Which activation function is commonly used in binary classification tasks?
(a) ReLU (Rectified Linear Unit) (b) Sigmoid (c) Tanh (Hyperbolic Tangent) (d) Leaky ReLU

7. The main advantage of the ReLU activation function is:
(a) It is computationally efficient (b) It always outputs positive values (c) It works well with negative inputs (d) It is easy to differentiate

8. Which activation function can help mitigate the vanishing gradient problem?
(a) Softmax (b) Tanh (Hyperbolic Tangent) (c) ReLU (Rectified Linear Unit) (d) Sigmoid

9. The activation function that introduces a small slope for negative inputs is called:
(a) Leaky ReLU (b) Parametric ReLU (c) ELU (Exponential Linear Unit) (d) Swish

10. Which type of neural network is designed to process sequences of data, such as text or time series?
(a) Convolutional Neural Network (CNN) (b) Recurrent Neural Network (RNN) (c) Feedforward Neural Network (FNN) (d) Generative Adversarial Network (GAN)

11. What is the process of adjusting the weights of a neural network during training called?
(a) Weight initialization (b) Gradient descent (c) Forward propagation (d) Backpropagation

12. In the context of neural networks, what does the term "epoch" refer to?
(a) The number of hidden layers in the network (b) The number of input features (c) The number of training iterations over the entire dataset (d) The number of neurons in a layer

13. Which neural network architecture is primarily used for image recognition tasks?
(a) Recurrent Neural Network (RNN) (b) Convolutional Neural Network (CNN) (c) Radial Basis Function Network (RBFN) (d) Multilayer Perceptron (MLP)

14. The vanishing gradient problem occurs when:
(a) The learning rate is too high (b) The model is too complex (c) Gradients become extremely small during backpropagation (d) The model converges too quickly

15. Which regularization technique helps to reduce overfitting by adding a penalty term to the loss function?
(a) Dropout (b) L1 regularization (Lasso) (c) L2 regularization (Ridge) (d) Batch normalization

16. The learning rate in neural networks controls:
(a) The number of neurons in each layer (b) The speed at which the model converges (c) The batch size for training (d) The number of epochs for training

17. Which optimization algorithm is commonly used to update the weights of a neural network in mini-batch training?
(a) Stochastic Gradient Descent (SGD) (b) Gradient Descent (c) Adam (Adaptive Moment Estimation) (d) Randomized Least Squares

18. The term "dropout" in neural networks refers to:
(a) Removing a random number of neurons from each layer during training (b) Reducing the learning rate over time (c) Adding noise to the input data (d) Decreasing the number of layers in the network

19. Which neural network architecture is useful for generating new data that is similar to the training dataset?
(a) Convolutional Neural Network (CNN) (b) Autoencoder (c) Long Short-Term Memory (LSTM) network (d) Support Vector Machine (SVM)

20. What is the role of the "bias" in an artificial neuron model?
(a) To introduce non-linearity (b) To reduce overfitting (c) To allow the neuron to ignore certain input features (d) To shift the activation function

Answers:
1. (c)  2. (a)  3. (c)  4. (c)  5. (c)  6. (b)  7. (a)  8. (c)  9. (a)  10. (b)
11. (d)  12. (c)  13. (b)  14. (c)  15. (c)  16. (b)  17. (c)  18. (a)  19. (b)  20. (d)
