GitHub Repository: leechanwoo-kor/coursera
Path: blob/main/deep-learning-specialization/course-4-convolutional-neural-network/Week 1 Quiz - The Basics of ConvNets.md

Week 1 Quiz - The Basics of ConvNets

1. What do you think applying this filter to a grayscale image will do?

image

  • Detect vertical edges.

  • Detect 45-degree edges.

  • Detect image contrast.

  • Detect horizontal edges.

📌 Notice that there is a high delta between the values in the top left part and the ones in the bottom right part. When convolving this filter on a grayscale image, the edges forming a 45-degree angle with the horizontal will be detected.

2. Suppose your input is a 128 by 128 grayscale image, and you are not using a convolutional network. If the first hidden layer has 256 neurons, each one fully connected to the input, how many parameters does this hidden layer have (including the bias parameters)?

  • 4194304

  • 12583168

  • 12582912

  • 4194560
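
The arithmetic can be sanity-checked in plain Python (not part of the quiz, just a worked check):

```python
# Fully connected layer: each of the 256 neurons connects to
# every one of the 128 * 128 input pixels, plus one bias per neuron.
n_inputs = 128 * 128            # flattened grayscale image: 16384 pixels
n_neurons = 256

weights = n_inputs * n_neurons  # 4,194,304
biases = n_neurons              # 256
total = weights + biases

print(total)  # 4194560
```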

3. Suppose your input is a 300 by 300 color (RGB) image, and you use a convolutional layer with 100 filters that are each 5x5. How many parameters does this hidden layer have (including the bias parameters)?

  • 2501

  • 7600

  • 7500

  • 2600

📌 You have $25 \times 3 = 75$ weights and 1 bias per filter. Given that you have 100 filters, you get 7,600 parameters for this layer.
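
The same count as a quick Python check (my illustration, not part of the quiz):

```python
# Conv layer: each filter spans the full input depth (3 channels),
# so it has f * f * n_c weights plus one bias.
f, n_c, n_filters = 5, 3, 100

params_per_filter = f * f * n_c + 1   # 75 weights + 1 bias = 76
total = params_per_filter * n_filters

print(total)  # 7600
```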

4. You have an input volume that is $127 \times 127 \times 16$, and convolve it with 32 filters of $5 \times 5$, using a stride of 2 and no padding. What is the output volume?

  • $123 \times 123 \times 32$

  • $123 \times 123 \times 16$

  • $62 \times 62 \times 16$

  • $62 \times 62 \times 32$

📌 Using the formula $n_H^{[l]} = \dfrac{n_H^{[l-1]} + 2 \times p - f}{s} + 1$ with $n_H^{[l-1]} = 127$, $p = 0$, $f = 5$, and $s = 2$, we get 62.
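
The formula translates directly into a small helper (a sketch for checking the quiz answers, not production code):

```python
import math

def conv_output_size(n, f, p, s):
    """Spatial output size of a convolution: floor((n + 2p - f) / s) + 1."""
    return math.floor((n + 2 * p - f) / s) + 1

n_out = conv_output_size(127, f=5, p=0, s=2)
print(n_out)  # 62
# The output depth equals the number of filters,
# so the full output volume is 62 x 62 x 32.
```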

5. You have an input volume that is $31 \times 31 \times 32$, and pad it using “pad=1”. What is the dimension of the resulting volume (after padding)?

  • $31 \times 31 \times 34$

  • $33 \times 33 \times 32$

  • $33 \times 33 \times 33$

  • $32 \times 32 \times 32$

📌 If the padding is 1 you add 2 to the height dimension and 2 to the width dimension.
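
In code, the effect of padding on each spatial dimension is just:

```python
# Padding of 1 adds one pixel on each side of height and width;
# the depth (channels) is unchanged.
n, p = 31, 1
padded = n + 2 * p
print(padded)  # 33  -> the padded volume is 33 x 33 x 32
```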

6. You have a volume that is $121 \times 121 \times 32$, and convolve it with 32 filters of $5 \times 5$, and a stride of 1. You want to use a "same" convolution. What is the padding?

  • 2

  • 3

  • 5

  • 0

📌 When using a padding of 2, the output volume has $n_H = \dfrac{121 - 5 + 4}{1} + 1 = 121$, which matches the input size.
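
For stride 1, "same" padding can be computed directly from the filter size (a quick check, assuming an odd filter):

```python
# "Same" convolution with stride 1 requires p = (f - 1) / 2
f = 5
p = (f - 1) // 2
print(p)  # 2

# Verify: with this padding the output size equals the input size.
n = 121
n_out = (n + 2 * p - f) // 1 + 1
assert n_out == n
```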

7. You have an input volume that is $128 \times 128 \times 12$, and apply max pooling with a stride of 4 and a filter size of 4. What is the output volume?

  • $32 \times 32 \times 12$

  • $128 \times 128 \times 3$

  • $32 \times 32 \times 3$

  • $64 \times 64 \times 12$

📌 Using the formula $n_H^{[l]} = \dfrac{n_H^{[l-1]} + 2 \times p - f}{s} + 1$ with $p = 0$, $f = 4$, $s = 4$, and $n_H^{[l-1]} = 128$, we get $n_H^{[l]} = 32$. Pooling does not change the number of channels.
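
Max pooling uses the same size formula, applied per channel (again just a check of the arithmetic):

```python
# Pooling with no padding: each of the 12 channels is pooled independently,
# so the depth stays at 12.
n, f, s = 128, 4, 4
n_out = (n - f) // s + 1
print(n_out)  # 32  -> output volume is 32 x 32 x 12
```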

8. Because pooling layers do not have parameters, they do not affect the backpropagation (derivatives) calculation.

  • True

  • False

📌 Everything that influences the loss should appear in the backpropagation because we are computing derivatives. In fact, pooling layers modify the input by choosing one value out of several values in their input volume. Also, to compute derivatives for the layers that have parameters (Convolutions, Fully-Connected), we still need to backpropagate the gradient through the Pooling layers.
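
The gradient routing through a max-pool can be sketched with NumPy (a toy illustration I added, not part of the original quiz): the backward pass sends the upstream gradient only to the input position that produced the maximum.

```python
import numpy as np

# Toy 2x2 max-pool window: forward picks the max, backward routes
# the incoming gradient only to the position that "won" the max.
window = np.array([[1.0, 3.0],
                   [2.0, 0.5]])

out = window.max()                  # forward: 3.0
mask = (window == out)              # remembers which input was the max

upstream_grad = 1.0                 # d(loss)/d(out)
grad_window = mask * upstream_grad  # gradient w.r.t. the window inputs

print(grad_window)
# [[0. 1.]
#  [0. 0.]]   <- only the max position receives gradient
```

So even though the pooling layer has no parameters of its own, it still shapes which gradients reach the layers below it.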

9. Which of the following are true about convolutional layers? (Check all that apply)

  • It allows parameters learned for one task to be shared even for a different task (transfer learning).

  • It speeds up the training since we don't need to compute the gradient for convolutional layers.

  • Convolutional layers provide sparsity of connections.

📌 This happens since the next activation layer depends only on a small number of activations from the previous layer.

  • It allows a feature detector to be used in multiple locations throughout the whole input volume.

📌 Since convolution involves sliding the filter throughout the whole input volume, the feature detector is computed over the entire volume.
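
Parameter sharing is what makes this cheap: the same filters are reused at every spatial position, so the parameter count is independent of the image size. A rough comparison using the numbers from question 3 (my illustration):

```python
# Conv layer from question 3: 100 filters of 5x5x3, reused everywhere.
f, n_c, n_filters = 5, 3, 100
conv_params = (f * f * n_c + 1) * n_filters   # 7,600

# A fully connected layer mapping the same 300x300x3 input to the
# 296x296x100 conv output would need a separate weight per connection:
fc_params = (300 * 300 * 3) * (296 * 296 * 100)

print(conv_params)  # 7600
print(fc_params)    # orders of magnitude larger
```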

10. The following image depicts the result of a convolution on the right, using a stride of 1; the filter is shown next to it.

image

On which pixels does the circled pixel of the activation at the right depend?

  • It depends on the pixels enclosed by the green square.

  • ...

📌 This is the position of the filter when we move it two pixels down and one pixel to the right.