Quiz 3
Introduction to deep learning
1. What does the analogy “AI is the new electricity” refer to?
2. Which of these are reasons for Deep Learning recently taking off? (Check the two options that apply.)
3. Recall this diagram of iterating over different ML ideas. Which of the statements below are true? (Check all that
apply.)
Being able to try out ideas quickly allows deep learning engineers to iterate more quickly.
Faster computation can help speed up how long a team takes to iterate to a good idea.
It is faster to train on a big dataset than a small dataset.
Recent progress in deep learning algorithms has allowed us to train good models faster (even without
changing the CPU/GPU hardware).
Note: A bigger dataset generally requires more time to train on the same model.
4. When an experienced deep learning engineer works on a new problem, they can usually use insight from
previous problems to train a good model on the first try, without needing to iterate multiple times through different
models. True/False?
True
False
Note: Some experience may help, but no one can always find the best model or hyperparameters without
iterating.
7. A demographic dataset with statistics on different cities' population, GDP per capita, economic growth is an
example of “unstructured” data because it contains data coming from different sources. True/False?
True
False
8. Why is an RNN (Recurrent Neural Network) used for machine translation, say translating English to French?
(Check all that apply.)
9. In this diagram which we hand-drew in lecture, what do the horizontal axis (x-axis) and vertical axis (y-axis)
represent?
10. Assuming the trends described in the previous question's figure are accurate (and hoping you got the axis labels
right), which of the following are true? (Check all that apply.)
Increasing the training set size generally does not hurt an algorithm’s performance, and it may help
significantly.
Increasing the size of a neural network generally does not hurt an algorithm’s performance, and it
may help significantly.
Decreasing the training set size generally does not hurt an algorithm’s performance, and it may help
significantly.
Decreasing the size of a neural network generally does not hurt an algorithm’s performance, and it
may help significantly.
Note: The output of a neuron is a = g(Wx + b) where g is the activation function (sigmoid, tanh, ReLU, ...).
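A minimal numpy sketch of this formula (the shapes and the choice of sigmoid are illustrative, not from the quiz):

import numpy as np

def sigmoid(z):
    # squashes any real value into (0, 1)
    return 1 / (1 + np.exp(-z))

W = np.random.randn(1, 4)        # weights for one neuron with 4 inputs
x = np.random.randn(4, 1)        # a single input example
b = np.random.randn(1, 1)        # bias
a = sigmoid(np.dot(W, x) + b)    # a = g(Wx + b) with g = sigmoid
print(a.shape)                   # (1, 1)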
3. Suppose img is a (32,32,3) array, representing a 32x32 image with 3 color channels red, green and blue. How do
you reshape this into a column vector?
x = img.reshape((32 * 32 * 3, 1))
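To sanity-check the reshape, here is a quick sketch (the random array just stands in for a real image):

import numpy as np

img = np.random.randn(32, 32, 3)     # stand-in for a 32x32 RGB image
x = img.reshape((32 * 32 * 3, 1))    # flatten into a single column vector
print(x.shape)                       # (3072, 1)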
4. Consider the two following random arrays a and b:
a = np.random.randn(2, 3)
b = np.random.randn(2, 1)
c = a + b
What will be the shape of c?
Note: b (a column vector) is broadcast (copied 3 times) so that it can be added to each column of a. Therefore,
c.shape = (2, 3).
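A quick check of the broadcasting rule described above:

import numpy as np

a = np.random.randn(2, 3)
b = np.random.randn(2, 1)
c = a + b          # b is broadcast across the 3 columns of a
print(c.shape)     # (2, 3)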
"*" operator indicates element-wise multiplication. Element-wise multiplication requires same dimension between
two matrices. It's going to be an error.
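The error can be reproduced directly (a small sketch, not part of the quiz):

import numpy as np

a = np.random.randn(4, 3)
b = np.random.randn(3, 2)
try:
    c = a * b      # (4, 3) and (3, 2) are not broadcastable
except ValueError as err:
    print(err)     # "operands could not be broadcast together ..."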
6. Suppose you have n_x input features per example. Recall that X=[x^(1), x^(2)...x^(m)]. What is the dimension of
X?
(n_x, m)
Note: A quick way to validate this is to use the formula Z^(l) = W^(l)A^(l-1) + b^(l). When l = 1 we have
A^(0) = X
X.shape = (n_x, m)
Z^(1).shape = (n^(1), m)
W^(1).shape = (n^(1), n_x)
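These shapes can also be verified numerically; the sizes below are made up for illustration:

import numpy as np

n_x, m, n_1 = 5, 10, 4           # illustrative sizes
X = np.random.randn(n_x, m)      # A^(0) = X, one column per example
W1 = np.random.randn(n_1, n_x)   # W^(1)
b1 = np.random.randn(n_1, 1)     # b^(1), broadcast over the m columns
Z1 = np.dot(W1, X) + b1          # Z^(1) = W^(1)A^(0) + b^(1)
print(Z1.shape)                  # (4, 10), i.e. (n^(1), m)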
7. Recall that np.dot(a,b) performs a matrix multiplication on a and b, whereas a*b performs an element-wise
multiplication.
8. Consider the following code snippet:
# a.shape = (3,4)
# b.shape = (4,1)
for i in range(3):
    for j in range(4):
        c[i][j] = a[i][j] + b[j]
How do you vectorize this?
Answer: c = a + b.T
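A sketch confirming that the loop and the vectorized form agree (array contents are random):

import numpy as np

a = np.random.randn(3, 4)
b = np.random.randn(4, 1)

c_loop = np.zeros((3, 4))
for i in range(3):
    for j in range(4):
        c_loop[i, j] = a[i, j] + b[j, 0]   # same computation as the quiz loop

c_vec = a + b.T                    # b.T has shape (1, 4) and broadcasts over rows
print(np.allclose(c_loop, c_vec))  # True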
9. Consider the following code:
a = np.random.randn(3, 3)
b = np.random.randn(3, 1)
c = a * b
What will be c?
Note: This invokes broadcasting: b is copied three times to become (3, 3), and * is an element-wise product, so
c.shape = (3, 3).
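The same broadcasting behavior, checked directly:

import numpy as np

a = np.random.randn(3, 3)
b = np.random.randn(3, 1)
c = a * b          # b broadcasts to (3, 3); element-wise product
print(c.shape)     # (3, 3)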
10. Consider a computation graph that computes u = a*b, v = a*c, w = b + c, and J = u + v - w. What is the
output J?
Answer: J = ab + ac - (b + c) = a(b + c) - (b + c) = (a - 1)(b + c), i.e. (a - 1) * (b + c)
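A numeric spot-check of the algebra, assuming the graph structure reconstructed above (test values are arbitrary):

a, b, c = 3.0, 1.0, 2.0          # arbitrary test values
u = a * b
v = a * c
w = b + c
J = u + v - w
print(J, (a - 1) * (b + c))      # 6.0 6.0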