Hyperparameter tuning, Batch Normalization, Programming Frameworks
- Which of the following are true about hyperparameter search?
- If it is only possible to tune two parameters from the following due to limited computational resources, which two would you choose?
- Using the “Panda” strategy, it is possible to create several models. True/False?
- If you think $\beta$ (hyperparameter for momentum) is between 0.9 and 0.99, which of the following is the recommended way to sample a value for beta?
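The usual recommendation, illustrated in a minimal sketch of my own below (assuming numpy; not part of the quiz), is to sample $1 - \beta$ on a log scale rather than sampling $\beta$ uniformly:

```python
import numpy as np

# Sample beta in [0.9, 0.99] by sampling 1 - beta on a log scale:
# 1 - beta ranges over [0.01, 0.1], i.e. 10^-2 to 10^-1.
r = np.random.uniform(-2, -1)  # exponent drawn uniformly at random
beta = 1 - 10 ** r             # beta lands in [0.9, 0.99]
print(beta)
```

Sampling on the log scale devotes comparable search effort to the region near 0.9 (averaging over roughly 10 values) and the region near 0.99 (averaging over roughly 100 values), where small changes in $\beta$ matter much more.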
- Finding good hyperparameter values is very time-consuming. So typically you should do it once at the start of the project, and try to find very good hyperparameters so that you don’t ever have to tune them again. True or false?
- When using batch normalization it is OK to drop the parameter $W^{[l]}$ from the forward propagation since it will be subtracted out when we compute $\tilde{z}^{[l]}_{\text{normalize}} = \beta^{[l]} \hat{z}^{[l]} + \gamma^{[l]}$. True/False?
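A minimal numpy sketch of the reasoning this question tests (my own illustration, not from the quiz): the mean subtraction in Batch Norm cancels any constant added to $z$, which is why the bias $b^{[l]}$ becomes redundant, while $W^{[l]}$ still produces $z$ from the previous layer's activations and cannot be dropped:

```python
import numpy as np

def batch_norm(z, eps=1e-8):
    # normalize each feature over the mini-batch dimension (axis 0)
    mu = z.mean(axis=0)
    var = z.var(axis=0)
    return (z - mu) / np.sqrt(var + eps)

z = np.random.randn(4, 5)  # pre-activations for a mini-batch of 4
b = 3.0                    # any constant bias

# Adding a constant is cancelled by the mean subtraction, so the
# bias b[l] is the parameter that is "subtracted out"; W[l] is not.
print(np.allclose(batch_norm(z), batch_norm(z + b)))  # True
```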
- In the normalization formula $z_{\text{norm}}^{(i)} = \frac{z^{(i)} - \mu}{\sqrt{\sigma^2 + \varepsilon}}$, why do we use epsilon?
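A small numpy illustration (my own, not from the quiz) of what $\varepsilon$ guards against: when a feature is constant across the mini-batch its variance is exactly zero, and without $\varepsilon$ the division would produce NaN or inf:

```python
import numpy as np

z = np.full(5, 2.0)        # a constant feature: variance is exactly 0
mu, var = z.mean(), z.var()
eps = 1e-8                 # small constant for numerical stability

z_norm = (z - mu) / np.sqrt(var + eps)  # finite thanks to epsilon
print(z_norm)              # all zeros, no division by zero
```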
- **Which of the following statements about $\gamma$ and $\beta$ in Batch Norm are true?**
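For intuition (a sketch of my own, not from the quiz): $\gamma$ and $\beta$ are learned by gradient descent along with the other parameters, and they set the variance and mean of the rescaled $\tilde{z}$:

```python
import numpy as np

z_hat = np.random.randn(100_000)  # normalized: mean ~ 0, variance ~ 1
gamma, beta = 2.0, 0.5            # learnable scale and shift

z_tilde = gamma * z_hat + beta
print(z_tilde.mean(), z_tilde.var())  # ~ 0.5 and ~ 4.0 (= gamma**2)
```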
- A neural network is trained with Batch Norm. At test time, to evaluate the neural network on a new example you should perform the normalization using $\mu$ and $\sigma^2$ estimated using an exponentially weighted average across mini-batches seen during training. True/False?
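A minimal sketch of that test-time procedure (the class, the `momentum` name, and the update rule are my assumptions, mirroring how common frameworks implement it):

```python
import numpy as np

class BatchNorm1D:
    """Toy Batch Norm keeping exponentially weighted running statistics."""

    def __init__(self, n_features, momentum=0.9, eps=1e-8):
        self.momentum, self.eps = momentum, eps
        self.running_mu = np.zeros(n_features)
        self.running_var = np.ones(n_features)

    def train_step(self, z):
        mu, var = z.mean(axis=0), z.var(axis=0)
        # exponentially weighted averages across mini-batches
        self.running_mu = self.momentum * self.running_mu + (1 - self.momentum) * mu
        self.running_var = self.momentum * self.running_var + (1 - self.momentum) * var
        return (z - mu) / np.sqrt(var + self.eps)

    def test_step(self, z):
        # at test time, normalize with the statistics accumulated during training
        return (z - self.running_mu) / np.sqrt(self.running_var + self.eps)
```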
- Which of the following are some recommended criteria to choose a deep learning framework?