


CS378 INTRODUCTION TO NEURAL NETWORKS

HOMEWORK 10

NAME: WEI HAIQI

STUDENT ID: 150232

E16.1 Consider Layer 1 of the ART1 network with ε = 0.02. Assume two neurons in Layer 2, two elements in the input vector, and the following weight matrix and input:

W2:1 = [pic], p = [pic].

Also assume that neuron 2 of Layer 2 is active.

i. Find and plot the response n1 if +b1 = 2 and -b1 = 3.

ii. Find and plot the response n1 if +b1 = 4 and -b1 = 5.

iii. Find and plot the response n1 if +b1 = 4 and -b1 = 4.

iv. Check to see that the answers to parts (i) – (iii) satisfy the steady state response predicted by Eq. (16.21). Explain any inconsistencies.

v. Check your answers to parts (i) – (iii) by writing a MATLAB M-file to simulate Layer 1 of the ART1 network. Use the ode45 routine. Plot the response for each case.

Answer:

i. Since neuron 2 of Layer 2 is active, the equations of operation of Layer 1 are

ε dn1_1/dt = -n1_1 + (+b1 - n1_1)(p_1 + w2:1(1,2)) - (n1_1 + (-b1))

ε dn1_2/dt = -n1_2 + (+b1 - n1_2)(p_2 + w2:1(2,2)) - (n1_2 + (-b1))

Substituting ε = 0.02, +b1 = 2, -b1 = 3 and the given weights and input (so that p_1 + w2:1(1,2) = 1 and p_2 + w2:1(2,2) = 2) gives

dn1_1/dt = -150 n1_1 - 50

dn1_2/dt = -200 n1_2 + 50

If we assume that both neurons start with zero initial conditions, the solutions are

n1_1(t) = -(1/3)(1 - exp(-150t))

n1_2(t) = (1/4)(1 - exp(-200t)), which are displayed in the figure below.

clear
n0 = [0;0];                          % zero initial conditions
tf = 0.3;
options = odeset('RelTol',1e-4);
[t,n] = ode45(@E161_1,[0 tf],n0,options);
figure;
plot(t,n(:,1),t,n(:,2),':')
axis([0 0.3 -.4 .3]);
text(.2,0,'+b=2, -b=3');
legend('n1(t)','n2(t)');

% E161_1.m (separate file)
function dn = E161_1(t,n)
% Layer 1 equations for part (i); edit the constants for parts (ii)-(iii)
dn = zeros(2,1);
dn(1) = -150*n(1) - 50;
dn(2) = -200*n(2) + 50;

[Figure: n1_1(t) and n1_2(t) for +b1 = 2, -b1 = 3]

ii. In this case the equations become dn1_1/dt = -150 n1_1 - 50 and dn1_2/dt = -200 n1_2 + 150, so the solutions are

n1_1(t) = -(1/3)(1 - exp(-150t))

n1_2(t) = (3/4)(1 - exp(-200t)), which are displayed in the figure below.

[Figure: n1_1(t) and n1_2(t) for +b1 = 4, -b1 = 5]

iii. In this case the equations become dn1_1/dt = -150 n1_1 and dn1_2/dt = -200 n1_2 + 200, so the solutions are

n1_1(t) = 0 (assuming the initial value is 0)

n1_2(t) = 1 - exp(-200t), which are displayed in the figure below.

[Figure: n1_1(t) and n1_2(t) for +b1 = 4, -b1 = 4]

iv. From the figure for part (i), we can see n1_1(∞) = -1/3 and n1_2(∞) = 1/4, which agrees with the steady state response predicted by Eq. (16.21):

n1_1(∞) = (+b1·1 - (-b1))/(2 + 1) = (2 - 3)/3 = -1/3, n1_2(∞) = (+b1·2 - (-b1))/(2 + 2) = (4 - 3)/4 = 1/4.

Similarly, for part (ii) we have n1_1(∞) = (4 - 5)/3 = -1/3 and n1_2(∞) = (8 - 5)/4 = 3/4,

and for part (iii) we have n1_1(∞) = (4 - 4)/3 = 0 and n1_2(∞) = (8 - 4)/4 = 1.

Therefore the steady state conditions are satisfied for all three cases.
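These steady-state values can also be checked numerically. The sketch below (Python for convenience; `layer1_steady_state` is a hypothetical helper, not part of the exercise) evaluates the Eq. (16.21)-style formula n_i(∞) = (+b1·X_i - (-b1))/(2 + X_i), where the excitatory inputs X_i = p_i + w2:1(i,2) = 1 and 2 are read off the ODE coefficients in the M-file above:

```python
def layer1_steady_state(b_plus, b_minus, excite):
    # Steady state of eps*dn/dt = -n + (+b - n)*X - (n + -b):
    # n(inf) = (+b*X - -b) / (1 + X + 1)
    return [(b_plus * x - b_minus) / (2.0 + x) for x in excite]

# Excitatory inputs p_i + w2:1(i,2) = [1, 2], as implied by the
# coefficients (-150 n1 - 50, -200 n2 + 50) in the M-file above.
X = [1, 2]
print(layer1_steady_state(2, 3, X))  # part (i):  [-1/3, 1/4]
print(layer1_steady_state(4, 5, X))  # part (ii): [-1/3, 3/4]
print(layer1_steady_state(4, 4, X))  # part (iii): [0.0, 1.0]
```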

v. Please see above codes and figures.

E16.3 Consider the orienting Subsystem of the ART1 network with the following parameters:

ε = 0.1, +b0 = -b0 = 2

The inputs to the Orienting Subsystem are

P = [pic] a1 = [pic].

i. Find and plot the response of the Orienting Subsystem n0(t), for α = 0.5 and β = 4.

ii. Find and plot the response of the Orienting Subsystem n0 (t), for [pic]

iii. Verify that the steady state conditions are satisfied in parts (i) and (ii).

iv. Check your answers to parts (i) and (ii) by writing a MATLAB M-file to simulate the Orienting Subsystem.

Answer:

i. The equation of operation of the Orienting Subsystem is

ε dn0/dt = -n0 + (+b0 - n0)(α Σ p_j) - (n0 + (-b0))(β Σ a1_j)

With the given vectors, Σ p_j = 3 and Σ a1_j = 1, so for α = 0.5 and β = 4:

dn0/dt = 10[-n0 + (2 - n0)(1.5) - (n0 + 2)(4)] = -65 n0 - 50

The response is then n0(t) = -(10/13)(1 - exp(-65t)).

clear
n0 = 0;
tf = 0.5;
options = odeset('RelTol',1e-4);
[t,n] = ode45(@E163_1,[0 tf],n0,options);
figure;
plot(t,n)
text(.25,-.5,'alfa = 0.5, beta = 4');
text(.15,-0.72,'n0(t)');

% E163_1.m (separate file)
function dn = E163_1(t,n)
% Orienting Subsystem equation for part (i)
dn = -65*n - 50;

From the plot, we see that n0(t) is negative, so a0 = 0 and a reset signal will not be sent to Layer 2.

[Figure: n0(t) for alpha = 0.5, beta = 4]

ii. In this case, we have [pic][pic], and the solution is

[pic].

[Figure: n0(t) for part (ii)]

iii. For part (i), n0(∞) = -10/13 ≈ -0.769 &lt; 0, therefore a0 = 0, which agrees with the results of (i).

For part (ii), [pic], therefore a0 = 0, which agrees with the results of (ii).

Therefore the steady state conditions are satisfied for both cases.
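The part (i) steady state can be verified with a short Python check (Python for convenience; `orienting_steady_state` is a hypothetical helper, and the sums Σp = 3, Σa1 = 1 are assumptions read off the ODE coefficients -65 n0 - 50 in the M-file above):

```python
def orienting_steady_state(alpha, beta, sum_p, sum_a, b_plus=2.0, b_minus=2.0):
    # Setting dn0/dt = 0 in the Orienting Subsystem equation gives
    # n0(inf) = (+b0*alpha*sum_p - -b0*beta*sum_a) / (1 + alpha*sum_p + beta*sum_a)
    num = b_plus * alpha * sum_p - b_minus * beta * sum_a
    den = 1.0 + alpha * sum_p + beta * sum_a
    return num / den

# Part (i): alpha = 0.5, beta = 4, sum(p) = 3, sum(a1) = 1
n0_inf = orienting_steady_state(0.5, 4, 3, 1)
print(n0_inf)  # -10/13, i.e. about -0.769: negative, so a0 = 0 (no reset)
```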

iv. Please see the above code and plots.

E16.5 Train an ART1 network using the following input vectors:

P1 = [pic], P2 = [pic], P3 = [pic], P4 = [pic]

Use the parameter ζ = 2, and choose S2 = 3 (3 categories).

i. Train the network to convergence using ρ = 0.3.

ii. Repeat part (i) using ρ = 0.6.

iii. Repeat part (i) using ρ = 0.9.

Answer:

i. Since ζ = 2, S1 = 4 and S2 = 3, the initial weights will be:

W21 = ones(4,3), and every element of W12 equal to ζ/(ζ + S1 - 1) = 2/5 = 0.4; then begin the algorithm using the following code:
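The initial-weight formula can be checked with a tiny Python sketch (`art1_initial_weights` is a hypothetical helper used only for illustration):

```python
def art1_initial_weights(S1, S2, zeta=2.0):
    # W21 starts as all ones; every element of W12 starts at
    # zeta / (zeta + S1 - 1), which is 2/(2+4-1) = 0.4 here.
    w12_val = zeta / (zeta + S1 - 1)
    W21 = [[1] * S2 for _ in range(S1)]
    W12 = [[w12_val] * S1 for _ in range(S2)]
    return W21, W12

W21, W12 = art1_initial_weights(S1=4, S2=3)
print(W12[0][0])  # 0.4
```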

clear
S1 = 4; S2 = 3;
rou = 0.3;                               % vigilance parameter rho
P = [0 1 1 1;1 0 1 1;0 0 0 1;1 1 0 1];   % columns: p1, p2, p3, p4
W21 = ones(4,3);
W12 = 0.4*ones(3,4);
for k = 1 : 4
    % reset inhibition applies only while the current pattern is presented
    O_winner = -1 * ones(6,1);
    flag = 0;
    while (flag == 0)
        fprintf('Present input P%d\n', k);
        a1 = P(:,k)
        n2 = W12 * a1;
        for j = 1 : S2
            if (O_winner(j) == 0)
                n2(j) = -10;             % inhibit neurons that caused a reset
            end
        end
        a2 = compet(n2)
        for i = 1 : S2
            if (a2(i) == 1)
                winner = i
            end
        end
        a1 = P(:,k) & W21(:,winner)
        q = (norm(a1))^2/(norm(P(:,k)))^2;   % vigilance ratio
        if (q < rou)
            a0 = 1                       % reset: inhibit winner, try again
            O_winner(winner) = 0;
        else
            a0 = 0                       % resonance: update the weights
            W12(winner,:) = 2*a1'/(2 + (norm(a1))^2 - 1)
            W21(:,winner) = a1
            flag = 1;
        end
        if (O_winner(1:S2,1) == zeros(S2,1))
            % all existing neurons have been reset: add a new neuron
            fprintf('The dimension of weight matrices is modified\n');
            S2 = S2 + 1;
            W12(S2,:) = [0.4 0.4 0.4 0.4]
            W21(:,S2) = [1;1;1;1]
        end
    end
end

The results of each step are listed as follows:

Present input P1
a1 = [0 1 0 1]'
a2 = (1,1) 1
winner = 1
a1 = [0 1 0 1]'
a0 = 0
W12 =
         0    0.6667         0    0.6667
    0.4000    0.4000    0.4000    0.4000
    0.4000    0.4000    0.4000    0.4000
W21 =
     0     1     1
     1     1     1
     0     1     1
     1     1     1

Present input P2
a1 = [1 0 0 1]'
a2 = (2,1) 1
winner = 2
a1 = [1 0 0 1]'
a0 = 0
W12 =
         0    0.6667         0    0.6667
    0.6667         0         0    0.6667
    0.4000    0.4000    0.4000    0.4000
W21 =
     0     1     1
     1     0     1
     0     0     1
     1     1     1

Present input P3
a1 = [1 1 0 0]'
a2 = (3,1) 1
winner = 3
a1 = [1 1 0 0]'
a0 = 0
W12 =
         0    0.6667         0    0.6667
    0.6667         0         0    0.6667
    0.6667    0.6667         0         0
W21 =
     0     1     1
     1     0     1
     0     0     0
     1     1     0

Present input P4
a1 = [1 1 1 1]'
a2 = (1,1) 1
winner = 1
a1 = [0 1 0 1]'
a0 = 0
W12 =
         0    0.6667         0    0.6667
    0.6667         0         0    0.6667
    0.6667    0.6667         0         0
W21 =
     0     1     1
     1     0     1
     0     0     0
     1     1     0
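The listing above can be cross-checked with a compact Python sketch of the same algorithm (a re-implementation for convenience, not the textbook code: it assumes compet-style first-maximum tie-breaking and omits the add-a-neuron step, which ρ = 0.3 never triggers):

```python
import numpy as np

def art1(P, S2, rho, zeta=2.0):
    """Minimal ART1 sketch mirroring the MATLAB script above: first-max
    tie-breaking (like compet), AND for the Layer 1 output, vigilance test,
    and the learning rule W12(winner,:) = zeta*a1/(zeta + |a1|^2 - 1).
    Assumes some neuron always passes vigilance (true for these patterns)."""
    S1 = P.shape[0]
    W21 = np.ones((S1, S2))
    W12 = np.full((S2, S1), zeta / (zeta + S1 - 1))
    categories = []
    for k in range(P.shape[1]):
        inhibited = np.zeros(S2, dtype=bool)
        while True:
            n2 = W12 @ P[:, k]
            n2[inhibited] = -10.0          # knock out reset neurons
            winner = int(np.argmax(n2))    # compet: first maximum wins
            a1 = np.logical_and(P[:, k], W21[:, winner]).astype(float)
            q = a1.sum() / P[:, k].sum()   # vigilance ratio |a1|^2 / |p|^2
            if q < rho:
                inhibited[winner] = True   # reset, try the next neuron
            else:
                W12[winner, :] = zeta * a1 / (zeta + a1.sum() - 1)
                W21[:, winner] = a1
                categories.append(winner + 1)
                break
    return W21, W12, categories

P = np.array([[0,1,1,1],[1,0,1,1],[0,0,0,1],[1,1,0,1]], dtype=float)
W21, W12, cats = art1(P, S2=3, rho=0.3)
print(cats)   # [1, 2, 3, 1]: P4 joins category 1
print(W21)    # columns: p1, p2, p3 (unchanged by P4, since a1 = p1)
```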

ii. For ρ = 0.6, the training proceeds exactly as in part (i) until pattern P4 is presented, so the following gives the results for each step after P4 is first presented. Note that since there is no adequate match between this input and the existing prototypes, a new neuron is added to Layer 2 during the training process:
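Why the resets happen can be seen directly from the vigilance ratios: p4 shares only two of its four elements with each learned prototype. A quick Python check (prototype values taken from the part (i) listing):

```python
import numpy as np

# Prototypes stored in W21 after P1-P3 have been learned (part (i)),
# and the troublesome input p4.
W21 = np.array([[0, 1, 1],
                [1, 0, 1],
                [0, 0, 0],
                [1, 1, 0]], dtype=float)
p4 = np.array([1, 1, 1, 1], dtype=float)

for j in range(3):
    a1 = np.logical_and(p4, W21[:, j]).astype(float)
    q = a1.sum() / p4.sum()
    print(f"neuron {j+1}: q = {q}")   # 0.5 for each neuron
# Every match ratio is 0.5 < rho = 0.6, so each prototype is reset and a
# fourth (all-ones) neuron must be added; p4 then matches it perfectly (q = 1).
```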

Present input P4
a1 = [1 1 1 1]'
a2 = (1,1) 1
winner = 1
a1 = [0 1 0 1]'
a0 = 1

Present input P4
a1 = [1 1 1 1]'
a2 = (2,1) 1
winner = 2
a1 = [1 0 0 1]'
a0 = 1

Present input P4
a1 = [1 1 1 1]'
a2 = (3,1) 1
winner = 3
a1 = [1 1 0 0]'
a0 = 1

The dimension of weight matrices is modified.

W12 =
         0    0.6667         0    0.6667
    0.6667         0         0    0.6667
    0.6667    0.6667         0         0
    0.4000    0.4000    0.4000    0.4000
W21 =
     0     1     1     1
     1     0     1     1
     0     0     0     1
     1     1     0     1

Present input P4
a1 = [1 1 1 1]'
a2 = (4,1) 1
winner = 4
a1 = [1 1 1 1]'
a0 = 0
W12 =
         0    0.6667         0    0.6667
    0.6667         0         0    0.6667
    0.6667    0.6667         0         0
    0.4000    0.4000    0.4000    0.4000
W21 =
     0     1     1     1
     1     0     1     1
     0     0     0     1
     1     1     0     1

iii. For ρ = 0.9, the resets occur exactly as in part (ii); after the dimension of the weight matrices is modified, the new all-ones neuron matches P4 perfectly (q = 1 ≥ ρ), so the results for each step are the same as those in part (ii).

E16.7 Write a MATLAB M-file to implement the ART1 algorithm (with the modification described in Exercise E16.6). Use the M-file to train an ART1 network using the following input vectors (see Problem P16.7):

Present the vectors in the order p1-p2-p3-p1-p4 (i.e., p1 is presented twice in each epoch). Use the parameters ζ = 2 and ρ = 0.9, and choose S2 = 3 (3 categories). Train the network until the weights have converged. Compare your results with Problem P16.7.

Answer:

The training was done using the following code. A total of 15 iterations of the algorithm were performed (three epochs of the sequence P1-P2-P3-P1-P4) before the weights were stable. The results are summarized in the table below. (A complete output of the results can be obtained by running this code.)

Compared to problem P16.7, the vigilance parameter is increased, so a fourth neuron is needed in Layer 2. From the summary table, we can see the 4th neuron was added in the ninth iteration.

clear
S1 = 25; S2 = 3;
rou = 0.9;                           % vigilance parameter rho
P = [1 0 1 1 1;0 0 0 0 0;1 0 1 1 0;0 0 0 0 0;1 0 1 1 1;0 0 0 0 0;
     1 0 1 1 1;1 0 1 1 0;1 0 1 1 1;0 0 0 0 0;1 0 1 1 0;1 0 1 1 0;
     1 0 1 1 0;1 0 1 1 0;1 0 1 1 0;0 0 0 0 0;1 1 0 1 0;1 1 0 1 0;
     1 1 0 1 0;0 0 0 0 0;1 1 0 1 0;0 0 0 0 0;1 1 0 1 0;0 0 0 0 0;
     1 1 0 1 0];                     % columns: p1, p2, p3, p1, p4
W21 = ones(S1,S2);
W12 = 0.0769*ones(S2,S1);            % zeta/(zeta + S1 - 1) = 2/26
for epoch = 1 : 4
    W21_old = W21;
    W12_old = W12;
    for k = 1 : 5
        flag = 0;
        O_winner = -1 * ones(10,1);
        while (flag == 0)
            fprintf('Present input P%d\n', k);
            a1 = P(:,k)
            n2 = W12 * a1;
            for j = 1 : S2
                if (O_winner(j) == 0)
                    n2(j) = -10;     % inhibit neurons that caused a reset
                end
            end
            a2 = compet(n2)
            for i = 1 : S2
                if (a2(i) == 1)
                    winner = i
                end
            end
            a1 = P(:,k) & W21(:,winner)
            q = (norm(a1))^2/(norm(P(:,k)))^2;   % vigilance ratio
            if (q < rou)
                a0 = 1
                O_winner(winner) = 0;
            else
                a0 = 0
                W12(winner,:) = 2*a1'/(2 + (norm(a1))^2 - 1)
                W21(:,winner) = a1
                flag = 1;
            end
            if (O_winner(1:S2,1) == zeros(S2,1))
                % all existing neurons have been reset: add a new neuron
                fprintf('The dimension of weight matrices is modified.\n');
                S2 = S2 + 1;
                W12(S2,:) = 0.0769*ones(1,S1)
                W21(:,S2) = ones(25,1)
                W12_old(S2,:) = 0.0769*ones(1,S1)
                W21_old(:,S2) = ones(25,1)
            end
        end
    end
    if (W21 == W21_old)
        epoch                        % weights stable: report epoch and stop
        break;
    end
end

Summary of the results:

Iteration | Input | Neuron 1 | Neuron 2 | Neuron 3 | Neuron 4
        1 | P1    |    *     |          |          |
        2 | P2    |    *     |          |          |
        3 | P3    |          |    *     |          |
        4 | P1    |    v2    |    v1    |    *     |
        5 | P4    |          |    *     |          |
        6 | P1    |          |          |    *     |
        7 | P2    |    *     |          |          |
        8 | P3    |          |    v1    |    *     |
        9 | P1    |    v2    |    v3    |    v1    |    *
       10 | P4    |          |    *     |          |
       11 | P1    |          |          |          |    *
       12 | P2    |    *     |          |          |
       13 | P3    |          |          |    *     |
       14 | P1    |          |          |          |    *
       15 | P4    |          |    *     |          |

Note: 1. A star (*) indicates the resonance point;

2. A check mark (v) indicates where a reset occurred. When more than one reset occurred in a given iteration, the number beside the check mark indicates the order in which the resets occurred.
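The capture of neuron 1 by P2 at iteration 2 follows from the structure of the patterns: p2 is a subset of p1, so after neuron 1 learns p1 the match for p2 is perfect (q = 1 ≥ ρ = 0.9). A short Python check using the P matrix from the script above:

```python
import numpy as np

# The five columns of P from the script above: p1, p2, p3, p1 (again), p4.
P = np.array([
 [1,0,1,1,1],[0,0,0,0,0],[1,0,1,1,0],[0,0,0,0,0],[1,0,1,1,1],[0,0,0,0,0],
 [1,0,1,1,1],[1,0,1,1,0],[1,0,1,1,1],[0,0,0,0,0],[1,0,1,1,0],[1,0,1,1,0],
 [1,0,1,1,0],[1,0,1,1,0],[1,0,1,1,0],[0,0,0,0,0],[1,1,0,1,0],[1,1,0,1,0],
 [1,1,0,1,0],[0,0,0,0,0],[1,1,0,1,0],[0,0,0,0,0],[1,1,0,1,0],[0,0,0,0,0],
 [1,1,0,1,0]], dtype=float)
p1, p2, p3, p4 = P[:, 0], P[:, 1], P[:, 2], P[:, 4]

# p2 AND p1 equals p2, i.e. p2 is a subset of p1, so once neuron 1 has
# learned p1, presenting p2 gives vigilance ratio q = 1.
print(bool(np.all(np.logical_and(p1, p2) == p2)))   # True
q = np.logical_and(p1, p2).sum() / p2.sum()
print(q)  # 1.0
```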
