Hong Kong Polytechnic University



EIE6207: Theoretical Fundamentals and Engineering Approaches for Intelligent Signal and Information Processing

Tutorial 1: Neural Networks and Backpropagation

1. The weight update formula for the backpropagation algorithm for a DNN has the form:

\[
w(t+1) = w(t) - \eta \frac{\partial E}{\partial w}
\]

where E is the squared error between the desired outputs and the actual outputs of the DNN, w represents all of the connection weights in the DNN, t is the iteration index, and η is a positive learning rate.
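To make the update rule concrete, the following minimal sketch (Python/NumPy) applies it to a single linear unit trained on one sample; the toy data and the names x, d, w and eta are illustrative assumptions, not part of this tutorial sheet.

import numpy as np

# Assumed toy setup: one linear unit y = w.x with squared error
# E = 0.5 * (y - d)^2, trained on a single sample by gradient descent.
rng = np.random.default_rng(0)
x = rng.normal(size=3)     # input vector
d = 1.0                    # desired output
w = rng.normal(size=3)     # connection weights
eta = 0.1                  # positive learning rate

for t in range(100):
    y = w @ x              # actual output at iteration t
    grad = (y - d) * x     # dE/dw for E = 0.5 * (y - d)^2
    w = w - eta * grad     # w(t+1) = w(t) - eta * dE/dw

Re-running the loop with a negative eta makes E grow, and a very large eta makes the output overshoot and oscillate; this is the behaviour that parts (a) and (b) ask you to explain.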

a) Explain why the learning rate must be positive.

b) Explain why the learning rate η should not be too large.

c) Can the backpropagation algorithm find the global minimum of E? Explain your answer.

2. To use DNNs for classification, we typically put a softmax layer, whose number of nodes equals the number of classes, on top of the last hidden layer.

\[
y_k = \frac{\exp(a_k)}{\sum_{j=1}^{K} \exp(a_j)}, \qquad k = 1, \dots, K,
\]

where a_k is the activation fed into the k-th output node and K is the number of classes.
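A direct transcription of this formula, with the usual max-subtraction trick added for numerical stability (the helper name softmax is ours, not the sheet's):

import numpy as np

def softmax(a):
    # y_k = exp(a_k) / sum_j exp(a_j); subtracting max(a) avoids overflow
    z = np.exp(a - np.max(a))
    return z / np.sum(z)

y = softmax(np.array([2.0, 1.0, 0.1]))
print(y, y.sum())   # entries lie in (0, 1) and sum to 1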

a) Discuss the advantage of using the softmax layer.

b) For classification tasks, we typically use the cross-entropy between the actual output and the target output as the objective function:

\[
E_{\text{CE}} = -\sum_{k=1}^{K} t_k \log y_k,
\]

where t_k is the target output for class k and y_k is the corresponding actual (softmax) output.

What is the advantage of cross-entropy over the mean squared error

\[
E_{\text{MSE}} = \frac{1}{2} \sum_{k=1}^{K} (y_k - t_k)^2
\]

as the objective function when training the DNN for classification?
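To make the comparison concrete, the sketch below evaluates both objective functions on a one-hot target for a confidently correct and a confidently wrong prediction; the probability vectors are assumptions chosen purely for illustration.

import numpy as np

t = np.array([0.0, 1.0, 0.0])            # one-hot target for class 2
y_good = np.array([0.05, 0.90, 0.05])    # confident, correct prediction
y_bad  = np.array([0.90, 0.05, 0.05])    # confident, wrong prediction

def cross_entropy(y, t):
    return -np.sum(t * np.log(y + 1e-12))   # epsilon guards against log(0)

def mse(y, t):
    return 0.5 * np.sum((y - t) ** 2)

for name, y in [("good", y_good), ("bad", y_bad)]:
    print(f"{name}: CE = {cross_entropy(y, t):.3f}, MSE = {mse(y, t):.3f}")

# CE grows without bound as the probability assigned to the true class
# approaches 0, whereas MSE stays bounded, so CE yields much stronger
# gradients on confidently wrong predictions.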

c) As the ultimate goal of the DNN is to classify samples, why not simply use the classification error as the objective function?

3. The following figure shows the decision boundary of a neural network with two inputs, one hidden layer (two hidden nodes), and one output. You may assume that the hidden and output nodes of the network use a sigmoid activation function.

a) Explain why the decision boundary produced by the network comprises two straight lines.

b) What is the purpose of the output neuron (node) in the neural network?

[Figure: decision boundary produced by the two-input, two-hidden-node, one-output sigmoid network of question 3]
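For experimenting with question 3, the sketch below builds a 2-input, 2-hidden-node, 1-output sigmoid network; the specific weight values are assumptions chosen so that the output node acts like an AND of the two hidden nodes, not the weights behind the figure.

import numpy as np

# Assumed weights for a 2-2-1 sigmoid network.
W1 = np.array([[1.0, -1.0],   # row i holds the weights of hidden node i
               [1.0,  1.0]])
b1 = np.array([0.0, -1.0])    # hidden biases
w2 = np.array([5.0, 5.0])     # output weights
b2 = -7.5                     # output bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def classify(x):
    h = sigmoid(W1 @ x + b1)  # hidden node i switches across the line W1[i] @ x + b1[i] = 0
    return sigmoid(w2 @ h + b2) > 0.5

# Evaluating classify over a grid of inputs traces out a decision region
# bounded by (approximately) two straight-line segments, one contributed
# by each hidden node.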
