University of Illinois at Chicago



1. (a) Name three classification techniques. No need to explain how they work.

(b) (3%) How do you describe overfitting in classification?

(c) (3%) Given the following decision tree, generate all the rules from the tree. Note that we have two classes, Yes and No.

[Decision-tree figure: the extracted labels include tests on Age (< 40 / >= 40), Sex (M / F), and job (y / n), a 50k threshold, and Yes/No leaves; the tree structure itself did not survive extraction.]

(d) List three objective interestingness measures of rules, and list two subjective interestingness measures of rules. No need to explain.

(e) (5%) To build a naïve Bayesian classifier, we can make use of association rule mining. How can we compute P(Ai = aj | C = ck) from association rules, where Ai is an attribute, aj is a value of Ai, and ck is a value of the class attribute C?
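A common reading of (e) is that both support({Ai = aj, C = ck}) and support({C = ck}) are available from the mined itemsets behind the rules, so the conditional probability is simply their ratio. The sketch below illustrates that idea; the support-count dictionary, the item labels, and the function name are hypothetical, not part of the exam.

```python
# Hedged sketch: estimating P(Ai=aj | C=ck) from mined association rules.
# Assumes each mined itemset is stored with its absolute support count;
# the item labels below ("A1=x", "C=yes") are illustrative only.

def conditional_prob(support_counts, attr_value, class_value):
    """P(Ai=aj | C=ck) = count({Ai=aj, C=ck}) / count({C=ck})."""
    joint = support_counts[frozenset([attr_value, class_value])]
    prior = support_counts[frozenset([class_value])]
    return joint / prior

# Toy support counts that rule mining could have produced.
support_counts = {
    frozenset(["A1=x", "C=yes"]): 3,   # itemset {A1=x, C=yes} appears 3 times
    frozenset(["C=yes"]): 5,           # class C=yes appears 5 times
}
print(conditional_prob(support_counts, "A1=x", "C=yes"))  # 0.6
```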

2. (10%) Given the following table with three attributes, a1, a2, and a3:

|a1 |a2 |a3 |
|C |B |H |
|B |F |S |
|A |F |F |
|C |B |H |
|B |F |G |
|B |E |O |

We want to mine all the large (or frequent) itemsets in the data. Assume the minimum support is 30%. Following the Apriori algorithm, give the sets of large itemsets L1, L2, … and the candidate itemsets C2, C3, … (after the join step and the prune step). What additional pruning can be done during candidate generation, and how?
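For cross-checking an answer to Question 2, here is a minimal Apriori-style sketch, assuming each attribute-value pair (e.g. a1=C) is treated as an item and that a 30% minimum support over 6 rows means a minimum count of 2. The join and prune steps are written for clarity rather than efficiency, and the extra pruning asked about at the end is deliberately not included.

```python
from itertools import combinations

# The six rows above, with each attribute-value pair treated as an item.
rows = [("a1=C", "a2=B", "a3=H"), ("a1=B", "a2=F", "a3=S"),
        ("a1=A", "a2=F", "a3=F"), ("a1=C", "a2=B", "a3=H"),
        ("a1=B", "a2=F", "a3=G"), ("a1=B", "a2=E", "a3=O")]
transactions = [frozenset(r) for r in rows]
min_count = 2  # 30% of 6 transactions

def count(itemset):
    return sum(itemset <= t for t in transactions)

items = sorted({i for t in transactions for i in t})
L = {frozenset([i]) for i in items if count(frozenset([i])) >= min_count}
print("L1:", sorted(sorted(s) for s in L))

k = 2
while L:
    # Join step: unions of two frequent (k-1)-itemsets that have exactly k items.
    C = {a | b for a in L for b in L if len(a | b) == k}
    # Prune step: drop candidates that have an infrequent (k-1)-subset.
    C = {c for c in C if all(frozenset(s) in L for s in combinations(c, k - 1))}
    print(f"C{k}:", sorted(sorted(c) for c in C))
    L = {c for c in C if count(c) >= min_count}
    print(f"L{k}:", sorted(sorted(c) for c in L))
    k += 1
```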

3. (10%) In multiple minimum support association rule mining, we can assign a minimum support to each item, called its minimum item support (MIS). We define that an itemset {item1, item2, …} is large (or frequent) if its support is greater than or equal to

min(MIS(item1), MIS(item2), …)

Given the transaction data:

{Beef, Bread}

{Bread, Cloth}

{Bread, Cloth, Milk}

{Cheese, Boots}

{Beef, Bread, Cheese, Shoes}

{Beef, Bread, Cheese, Milk}

{Bread, Milk, Cloth}

Suppose we have the following minimum item support assignments for the items in the transaction data:

MIS(Milk) = 50%
MIS(Bread) = 70%

The MIS values for the rest of the items in the data are all 25%.

Following the MSapriori algorithm, give the sets of large (or frequent) itemsets L1, L2, ….
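The sketch below is not the MSapriori algorithm itself; it is a brute-force check of the definition given above (an itemset's support compared against the minimum of its items' MIS values), which can be used to verify the L1, L2, … produced by hand. The transactions, the stated MIS assignments, and the 25% default come from the problem statement.

```python
from itertools import combinations

transactions = [
    {"Beef", "Bread"}, {"Bread", "Cloth"}, {"Bread", "Cloth", "Milk"},
    {"Cheese", "Boots"}, {"Beef", "Bread", "Cheese", "Shoes"},
    {"Beef", "Bread", "Cheese", "Milk"}, {"Bread", "Milk", "Cloth"},
]
n = len(transactions)
MIS = {"Milk": 0.50, "Bread": 0.70}   # stated assignments
default_mis = 0.25                    # all remaining items

def mis(item):
    return MIS.get(item, default_mis)

def support(itemset):
    return sum(itemset <= t for t in transactions) / n

items = sorted({i for t in transactions for i in t})
for k in range(1, len(items) + 1):
    # An itemset is frequent if its support >= min of its items' MIS values.
    frequent = [c for c in combinations(items, k)
                if support(set(c)) >= min(mis(i) for i in c)]
    if not frequent:
        break
    print(f"L{k}:", frequent)
```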

4. (10%) Given the following training data, which has two attributes A and B and a class C, compute all the probability values required to build a naïve Bayesian classifier. Ignore smoothing.

|A |B |C |
|m |t |y |
|m |s |y |
|g |q |y |
|h |s |y |
|g |q |y |
|g |q |n |
|g |s |n |
|h |t |n |
|h |q |n |
|m |t |n |

Answer:

P(C = y) =

P(C = n) =

P(A=m | C=y) =

P(A=g | C=y) =

P(A=h | C=y) =

P(A=m | C=n) =

P(A=g | C=n) =

P(A=h | C=n) =

P(B=t | C=y) =

P(B=s | C=y) =

P(B=q | C=y) =

P(B=t | C=n) =

P(B=s | C=n) =

P(B=q | C=n) =
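A minimal sketch, assuming maximum-likelihood (frequency-count) estimates with no smoothing as the question states, of how the blanks above could be filled in; the row list is just a transcription of the training table.

```python
from collections import Counter

# Training rows (A, B, C) transcribed from the table above.
rows = [("m", "t", "y"), ("m", "s", "y"), ("g", "q", "y"), ("h", "s", "y"),
        ("g", "q", "y"), ("g", "q", "n"), ("g", "s", "n"), ("h", "t", "n"),
        ("h", "q", "n"), ("m", "t", "n")]

class_counts = Counter(c for _, _, c in rows)
n = len(rows)

# Class priors P(C=c).
for c, cnt in sorted(class_counts.items()):
    print(f"P(C={c}) = {cnt}/{n} = {cnt / n}")

# Conditional probabilities P(attr=value | C=c) without smoothing.
for attr_idx, attr_name in ((0, "A"), (1, "B")):
    values = sorted({r[attr_idx] for r in rows})
    for c in sorted(class_counts):
        for v in values:
            joint = sum(1 for r in rows if r[attr_idx] == v and r[2] == c)
            print(f"P({attr_name}={v} | C={c}) = {joint}/{class_counts[c]}"
                  f" = {joint / class_counts[c]:.2f}")
```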

5. Use agglomerative clustering to cluster the following one-dimensional data: 1, 2, 4, 6, 9, 11, 20, 23, 27, 30, 34, 100, 120, 130. You are required to draw the cluster tree and write the value of the cluster center represented by each node next to the node.
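A sketch of one way to trace the merges by hand, assuming centroid linkage (the question does not fix the linkage criterion); it prints each merged cluster and its center, which are the values to write next to the tree nodes.

```python
# Centroid-linkage agglomerative clustering on the 1-D data, printing each
# merge and the center of the resulting cluster.
data = [1, 2, 4, 6, 9, 11, 20, 23, 27, 30, 34, 100, 120, 130]
clusters = [[x] for x in data]

while len(clusters) > 1:
    # Find the pair of clusters whose centroids are closest.
    best = None
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            ci = sum(clusters[i]) / len(clusters[i])
            cj = sum(clusters[j]) / len(clusters[j])
            d = abs(ci - cj)
            if best is None or d < best[0]:
                best = (d, i, j)
    _, i, j = best
    merged = clusters[i] + clusters[j]
    clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
    print(f"merge -> {sorted(merged)}, center = {sum(merged) / len(merged):.2f}")
```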

6. Given the following positive and negative data points, draw a possible decision tree partition and a possible SVM decision surface respectively.

[Figure: plot of the positive and negative data points, with panels labelled "Draw a possible decision tree partition" and "Draw a SVM decision surface"; the point coordinates did not survive extraction.]

7. In a marketing application, a predictive model is built to score a test database to identify likely customers. After scoring, the following configuration of 10 bins is obtained. Each number in the second row is the number of positive cases in the test data that fall into the corresponding bin. Draw the lift chart for these results. Your drawing should be reasonably accurate.

|Bin 1 |Bin 2 |Bin 3 |Bin 4 |Bin 5 |Bin 6 |Bin 7 |Bin 8 |Bin 9 |Bin 10 |
|240 |120 |40 |30 |20 |20 |10 |8 |6 |6 |
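The lift-chart points can be read off the bin counts. The sketch below computes the cumulative percentage of positives captured after each decile, assuming (as is standard for lift charts) that each bin holds 10% of the scored test data and that the 500 positives summed over the bins are all the positives in the test set.

```python
# Lift-chart points implied by the bin counts above: x-axis is the percentage
# of the test data contacted (10% per bin), y-axis is the cumulative percentage
# of positive cases captured. A random model would follow the diagonal.
bins = [240, 120, 40, 30, 20, 20, 10, 8, 6, 6]
total = sum(bins)  # 500 positive cases in total

captured = 0
for k, b in enumerate(bins, start=1):
    captured += b
    print(f"{10 * k:3d}% of data -> {100 * captured / total:5.1f}% of positives"
          f"  (random: {10 * k}%)")
```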

8. Given the classification results in the following confusion matrix, compute the classification accuracy, and the precision and recall scores for the positive class.

| |Classified Positive |Classified Negative |
|Correct Positive |50 |10 |
|Correct Negative |5 |200 |
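For reference, the three measures follow directly from the four cells of the matrix. The sketch below assumes the rows of the matrix are the correct (actual) classes and the columns the predicted classes, as laid out above.

```python
# Accuracy, precision and recall of the positive class from the matrix above.
TP, FN = 50, 10   # actual positives: classified positive / classified negative
FP, TN = 5, 200   # actual negatives: classified positive / classified negative

accuracy = (TP + TN) / (TP + FN + FP + TN)
precision = TP / (TP + FP)
recall = TP / (TP + FN)
print(f"accuracy = {accuracy:.3f}, precision = {precision:.3f}, recall = {recall:.3f}")
```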

9. Given the following table with three attributes, a1, a2, and a3:

|a1 |a2 |a3 |
|C |B |H |
|B |F |S |
|A |F |F |
|C |B |H |
|B |F |G |
|B |E |O |

we want to mine all the large (or frequent) itemsets using the multiple minimum support technique. Suppose we have the following minimum item support assignments for the items:

MIS(a2=F) = 60%

The MIS values for the rest of the items in the data are all 30%.

Following the MSapriori algorithm, give the sets of large (or frequent) itemsets L1, L2, … and the candidate itemsets C2, C3, … (after the join step and the prune step).




