MLPs with Backpropagation Learning

CS 270 – Backpropagation

Multilayer Nets?

Linear Systems

F(cx) = cF(x)

F(x+y) = F(x) + F(y)

[Figure: input I feeds weight matrix N, whose output feeds weight matrix M, producing output Z]

Z = M(NI) = (MN)I = PI

A cascade of linear layers therefore collapses into the single linear layer P = MN, so a multilayer linear network is no more powerful than a single layer.
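This collapse is easy to check numerically. A minimal NumPy sketch (the matrix sizes are arbitrary choices, not from the slide):

```python
import numpy as np

rng = np.random.default_rng(0)
I = rng.standard_normal(5)        # input vector
N = rng.standard_normal((4, 5))   # first linear layer
M = rng.standard_normal((3, 4))   # second linear layer

P = M @ N                         # single equivalent linear layer
print(np.allclose(M @ (N @ I), P @ I))   # True: two layers = one layer
```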


Early Attempts

Committee Machine

[Figure: randomly connected adaptive TLUs feed a vote-taking TLU (non-adaptive) that outputs the majority vote]

"Least Perturbation Principle"

For each pattern, if the output is incorrect, change just enough weights into internal units to give a majority. Choose those units closest to their threshold (LPP: change the most undecided nodes).
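Stated as a procedure, the LPP can be sketched directly. In the sketch below, the count of units to flip, the tie handling, and the small margin eps are my own assumptions; the slide gives no code:

```python
import numpy as np

def committee_vote(W, x):
    """Committee output: majority vote of its threshold units (TLUs)."""
    return np.sign(np.sign(W @ x).sum())

def lpp_update(W, x, target, eps=1e-3):
    """If the majority vote is wrong, flip only as many hidden TLUs as
    needed, choosing those whose net input is closest to threshold, and
    change each chosen unit's weights by the minimum amount that flips it."""
    nets = W @ x
    votes = np.sign(nets)
    if np.sign(votes.sum()) == target:
        return W                                   # already correct: change nothing
    wrong = np.where(votes != target)[0]
    k = int(np.ceil((abs(votes.sum()) + 1) / 2))   # units needed to flip the majority
    flip = wrong[np.argsort(np.abs(nets[wrong]))[:k]]  # most "undecided" wrong units
    for j in flip:
        # minimum-norm weight change that moves unit j just past threshold
        W[j] += (target * eps - nets[j]) * x / (x @ x)
    return W
```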


Perceptron (Frank Rosenblatt)

Simple Perceptron

[Figure: S-units (Sensor) → A-units (Association) → R-units (Response); connections into the A-units are random with fixed weights, weights into the R-units are adaptive]

Variations on Delta rule learning

Why S-A units?
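A sketch of that adaptive stage may help: fixed random A-units followed by delta rule learning on the A-to-R weights. The task, layer size, and learning rate below are illustrative assumptions, not from the slide:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR task: not linearly separable in the raw (S-unit) inputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([-1.0, 1.0, 1.0, -1.0])

# S -> A: random, fixed-weight association layer of threshold units
n_assoc = 16
A = rng.standard_normal((n_assoc, 3))       # extra column = bias input

def features(x):
    s = np.append(x, 1.0)                   # append bias
    return np.where(A @ s > 0, 1.0, -1.0)   # fixed A-unit outputs

# A -> R: adaptive weights trained with the delta rule
w = np.zeros(n_assoc)
lr = 0.05
for _ in range(200):
    for x, t in zip(X, T):
        a = features(x)
        y = w @ a                           # linear R-unit activation
        w += lr * (t - y) * a               # delta rule: dw = c (t - y) a

print([int(np.sign(w @ features(x))) for x in X])   # ideally [-1, 1, 1, -1]
```

The random fixed S-to-A layer is doing real work here: XOR is not linearly separable in the raw inputs, but it usually is in the random feature space, which is one answer to "Why S-A units?"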


Backpropagation

- Rumelhart (1986), Werbos (1974), …: explosion of neural net interest
- Multi-layer supervised learning
- Able to train multi-layer perceptrons (and other topologies)
- Commonly uses a differentiable sigmoid function, which is the smooth (squashed) version of the threshold function
- Error is propagated back through earlier layers of the network
- Very fast, efficient way to compute gradients!
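The slide describes the mechanism only in words; a minimal one-hidden-layer sketch in NumPy may make the backward pass concrete (layer sizes, learning rate, and the XOR task are my choices, not the slide's):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0.0], [1.0], [1.0], [0.0]])   # XOR targets

W1 = rng.standard_normal((2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.standard_normal((4, 1))   # hidden -> output weights
b2 = np.zeros(1)
lr = 1.0

for _ in range(5000):
    # forward pass through the smooth (sigmoid) units
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)

    # backward pass: error deltas propagate through earlier layers
    d_out = (y - T) * y * (1 - y)            # output delta (squared error)
    d_hid = (d_out @ W2.T) * h * (1 - h)     # hidden delta via chain rule

    # gradient descent step
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0)

print(y.round(2))   # should approach [[0], [1], [1], [0]]
```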

