1. Explain how a unit in a neural network combines its inputs with a set of weights and an activation function to produce an output.
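One way to ground this question: a single unit computes a weighted sum of its inputs plus a bias, then applies an activation function. A minimal sketch, with illustrative weights and a logistic activation (all values here are made up for the example):

```python
import numpy as np

def sigmoid(z):
    """Logistic activation: squashes any real number into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative inputs, weights, and bias for one unit.
inputs = np.array([1.0, 2.0, 3.0])
weights = np.array([0.5, -0.25, 0.1])
bias = 0.2

# The unit: weighted sum of inputs, plus bias, through the activation.
output = sigmoid(np.dot(weights, inputs) + bias)
```

Here the weighted sum is 0.5, so the output is sigmoid(0.5), roughly 0.62.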
2. What does the ReLU activation function do? What advantages does it offer over a step activation function or a logistic activation function?
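As a quick reference for this question, ReLU is simply max(0, z). Unlike a step function it has a useful gradient (1 for positive inputs), and unlike the logistic function it does not saturate for large positive inputs, which tends to speed up training. A one-line sketch:

```python
import numpy as np

def relu(z):
    """ReLU: passes positive values through, clamps negatives to zero."""
    return np.maximum(0.0, z)

# Negative inputs become 0; positive inputs are unchanged.
activations = relu(np.array([-2.0, 0.0, 3.0]))  # -> [0., 0., 3.]
```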
3. When you use TensorFlow to construct a hidden layer with N inputs and M units, you will typically use the dense() function to construct that layer. dense() constructs a single node in the TensorFlow graph. How does this single node compute a total of M outputs for the M units you wanted in your hidden layer?
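The key idea behind this question can be sketched without TensorFlow at all: a dense layer with N inputs and M units stores its weights as an N-by-M matrix, so one matrix multiplication produces all M unit outputs at once. A NumPy sketch with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 4, 3                       # N inputs per example, M units

X = rng.normal(size=(5, N))       # a batch of 5 input vectors
W = rng.normal(size=(N, M))       # one weight column per unit
b = np.zeros(M)                   # one bias per unit

# A single matrix product computes all M unit outputs for every
# example in the batch; here with a ReLU activation on top.
hidden = np.maximum(0.0, X @ W + b)
# hidden has shape (5, M): M outputs per example from one operation.
```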
4. Explain how you can use gradient descent to train the weights in a neural network.
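A stripped-down illustration of the idea: gradient descent repeatedly nudges each weight opposite the gradient of the loss. This toy example (illustrative data and learning rate) fits a single weight w so that w*x matches y:

```python
# Toy data following y = 2x; gradient descent should recover w = 2.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]

w = 0.0      # initial weight
lr = 0.05    # learning rate

for _ in range(200):
    # Derivative of mean squared error (w*x - y)^2 w.r.t. w
    # is 2*(w*x - y)*x, averaged over the training examples.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad   # step opposite the gradient
# w is now very close to 2.0
```

In a real network the same update is applied to every weight, with the gradients computed by backpropagation.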
5. How do you use an optimizer to perform gradient descent in TensorFlow?
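A minimal sketch of the pattern in the TensorFlow 2 Keras API (assuming `tf.GradientTape` and `tf.keras.optimizers.SGD`; the loss here is a made-up one-variable example): record the loss under a tape, ask the tape for gradients, and let the optimizer apply them.

```python
import tensorflow as tf

w = tf.Variable(0.0)                                  # trainable variable
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

for _ in range(100):
    with tf.GradientTape() as tape:
        loss = (w - 3.0) ** 2     # toy loss, minimized at w = 3
    grads = tape.gradient(loss, [w])
    optimizer.apply_gradients(zip(grads, [w]))
# w.numpy() is now very close to 3.0
```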
6. What is the vanishing gradients problem? How can He initialization and/or batch normalization help with this problem?
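One piece of this question can be checked numerically. He initialization draws weights with standard deviation sqrt(2 / fan_in), chosen so that with ReLU units the mean squared activation stays roughly constant from layer to layer instead of shrinking toward zero. A sketch (sizes and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
fan_in = 1000

x = rng.normal(size=fan_in)   # unit-variance input signal
# He initialization: std = sqrt(2 / fan_in)
W = rng.normal(scale=np.sqrt(2.0 / fan_in), size=(fan_in, fan_in))
h = np.maximum(0.0, x @ W)    # one ReLU layer

# np.mean(h**2) stays near np.mean(x**2) (about 1), so the signal
# is not attenuated layer after layer.
```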
7. Explain how a convolution layer in a neural network works.
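The core mechanism behind this question fits in a few lines: a convolution layer slides a small kernel of shared weights across the input, taking a dot product at each position. A 1-D sketch with an illustrative kernel that responds to upward steps in the signal:

```python
import numpy as np

signal = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0])
kernel = np.array([-1.0, 1.0])   # fires where the signal steps up

# Slide the kernel along the signal: the same two weights are
# reused at every position (weight sharing).
feature_map = np.array([
    np.dot(kernel, signal[i:i + len(kernel)])
    for i in range(len(signal) - len(kernel) + 1)
])
# feature_map: [0, 1, 0, -1, 0] -- peaks where the signal rises/falls.
```

A 2-D convolution layer does the same thing with a small 2-D kernel slid over an image, producing one feature map per kernel.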
8. Explain how you could use a recurrent neural network to predict the next element in a sequence given the previous elements in the sequence as its input.
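The recurrence at the heart of this question can be sketched with scalar weights (untrained, purely illustrative): the hidden state h summarizes the sequence seen so far, is updated at each step from the previous h and the new element, and a readout from h gives the prediction for the next element.

```python
import numpy as np

Wx, Wh, Wy = 0.5, 0.9, 1.0   # input, recurrent, and output weights (illustrative)

h = 0.0                      # hidden state starts empty
for x in [1.0, 2.0, 3.0]:    # feed the observed sequence one element at a time
    h = np.tanh(Wx * x + Wh * h)   # new state depends on old state and input

prediction = Wy * h          # readout; training would fit the weights so this
                             # approximates the next sequence element
```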
9. What is sentiment analysis? How can you use a recurrent neural network to perform this task?
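A forward-pass sketch of the usual architecture for this question: an RNN reads a review word by word (words represented as embedding vectors), and the final hidden state feeds a logistic unit that scores positive sentiment. All weights and "word" vectors below are random stand-ins; in practice they would be trained on labeled reviews.

```python
import numpy as np

rng = np.random.default_rng(1)
d, h_dim = 4, 3                              # embedding size, hidden size

Wx = rng.normal(scale=0.5, size=(h_dim, d))      # input-to-hidden weights
Wh = rng.normal(scale=0.5, size=(h_dim, h_dim))  # hidden-to-hidden weights
wy = rng.normal(scale=0.5, size=h_dim)           # hidden-to-output weights

review = rng.normal(size=(5, d))             # 5 stand-in word embeddings

h = np.zeros(h_dim)
for x in review:                             # read the review word by word
    h = np.tanh(Wx @ x + Wh @ h)

# Logistic readout of the final state: probability of positive sentiment.
score = 1.0 / (1.0 + np.exp(-wy @ h))
```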
10. What is reinforcement learning? How can you use reinforcement learning with a neural network to solve, say, the cart-pole problem?
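The policy-gradient idea behind this question can be shown on a drastically simplified stand-in for cart-pole: a one-state problem with two actions, where action 1 earns reward 1 and action 0 earns nothing. The "network" here is the minimal case, a softmax over two logits; the REINFORCE update nudges the logits toward actions that earned reward. Everything below is illustrative, not the cart-pole environment itself.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)   # one logit per action (a zero-hidden-layer policy net)
lr = 0.1

for _ in range(500):
    probs = np.exp(theta) / np.exp(theta).sum()   # softmax policy
    a = rng.choice(2, p=probs)                    # sample an action
    reward = float(a == 1)                        # action 1 is the good one
    # REINFORCE: grad of log pi(a) w.r.t. theta is one-hot(a) - probs;
    # scale the update by the reward received.
    theta += lr * reward * (np.eye(2)[a] - probs)

probs = np.exp(theta) / np.exp(theta).sum()
# probs[1] climbs toward 1: the policy learns to pick the rewarded action.
```

For cart-pole the same update is used, but the policy network maps the 4-dimensional cart state to action probabilities, and the reward is how long the pole stays up.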