### CS 548 KNOWLEDGE DISCOVERY AND DATA MINING - Fall 2017 Project 2: Decision Trees, Linear Regression, Model Trees, Regression Trees

#### PROF. CAROLINA RUIZ

DUE DATE: Tuesday October 10th, 2017.
• Slides: Submit via Canvas by 2:00 pm.
• Written report: Hand in a hardcopy by the beginning of class (by 3:59 pm).

### Project Assignment:

1. Study Sections 4.1-4.5 and Appendix D of the textbook in great detail.

2. Study Witten, Frank, and Hall's textbook (available on reserve in the WPI Library), Sections 3.3, 4.6 (linear regression), and 6.6.

3. Study all the materials posted on the course Lecture Notes page:
In particular, you should know the algorithms to construct decision trees, regression trees, and model trees very well, and be able to use these algorithms to construct trees from data by hand during the test. See examples provided in the Lecture Notes linked above. (Note: for model and regression trees, a software tool will be used to obtain the necessary linear regressions.)

4. THOROUGHLY READ AND FOLLOW THE PROJECT GUIDELINES. These guidelines contain detailed information about how to structure your project, how to prepare your written summary, and how to study for the test.

*** You must use the Project 2 Template provided for your written report. Do not exceed the page limits stated in the template nor decrease the font size ***. (If you prefer not to use Word, you can copy and paste this format in a different editor as long as you respect the stated page structure and page limit.)

• Data Mining Technique(s): Run experiments in Weka AND in Python using the following techniques:

• Pre-processing Techniques: Feature selection, feature creation, dimensionality reduction, noise reduction, attribute discretization, ...

• Classification Techniques:
• Zero-R (majority class)
• One-R
• Decision trees: Using Weka (J4.8) and Python.
Since these decision tree implementations can handle continuous attributes and missing values directly, make sure to run some experiments with no pre-processing and some experiments with pre-processing (discretizing continuous attributes and replacing missing values beforehand), and compare the results. A minimal Python sketch is shown below.
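As a rough illustration of the Python side only, the sketch below assumes scikit-learn and a pandas DataFrame `df` whose nominal attributes have already been one-hot encoded and whose class column is named `num`; these names and the library choice are assumptions for illustration, not requirements of the assignment. Note that scikit-learn's DecisionTreeClassifier implements CART rather than C4.5, so its trees will differ from Weka's J4.8 output.

```python
# Minimal sketch, assuming scikit-learn and a pandas DataFrame `df` whose
# nominal attributes are one-hot encoded and whose class column is "num"
# (illustrative names). Depending on your scikit-learn version, missing
# values may need to be imputed before fitting.
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X = df.drop(columns=["num"])
y = df["num"]

tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
scores = cross_val_score(tree, X, y, cv=10)   # 10-fold cross-validated accuracy
print("mean accuracy:", scores.mean())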

• Regression Techniques:
• Linear Regression: Weka (under "functions") and Python.
• Regression Trees: Weka (M5P under "trees") and Python.
• Model Trees: Weka (M5P under "trees") and Python.
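
A corresponding Python sketch for the regression techniques is shown below, assuming scikit-learn, numeric features `X`, and the continuous target `y` (e.g., age); the names are illustrative assumptions. scikit-learn has no built-in model tree, so model trees are obtained in Weka (M5P builds model trees by default and regression trees when its regression-tree option is enabled).

```python
# Minimal sketch, assuming scikit-learn, numeric features X, and the
# continuous target y (illustrative names). DecisionTreeRegressor is a
# CART-style regression tree, not Weka's M5P.
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

models = {
    "linear regression": LinearRegression(),
    "regression tree": DecisionTreeRegressor(max_depth=4, random_state=0),  # depth limit keeps the tree readable
}
for name, model in models.items():
    # scikit-learn reports negated MSE; negate it back before printing
    mse = -cross_val_score(model, X, y, cv=10, scoring="neg_mean_squared_error").mean()
    print(name, "10-fold CV mean squared error:", round(mse, 3))
```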

• Dataset: Use the Heart Disease Data Set. This dataset is available at the UCI Machine Learning Repository.
• Use the dataset in the processed.cleveland.data file.
• See a more detailed description of the dataset in the heart-disease.names file.
• Before you run experiments, transform the dataset as follows:

• For classification experiments:
• All data values are provided as numeric in the dataset, even though most attributes are actually discrete (=nominal). In the classification experiments, for each of the following discrete attributes:
```
2. #4 (sex)
3. #9 (cp)
6. #16 (fbs)
7. #19 (restecg)
9. #38 (exang)
11. #41 (slope)
12. #44 (ca)
13. #51 (thal)
14. #58 (num)
```
replace its values with more readable nominal names, following the description provided in the dataset webpage. For example, for attribute "2. #4 sex", use male instead of value 1 and female instead of value 0. This will make reading and analyzing the models much easier. (One way to do this relabeling in Python is sketched after this list.)

• Keep the remaining numeric attributes as continuous:
```
1. #3 (age)
4. #10 (trestbps)
5. #12 (chol)
8. #32 (thalach)
10. #40 (oldpeak)
```
• Use the discrete "14. #58 (num)" attribute as the classification target.
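
One way (of many) to load the file and relabel the discrete attributes in Python is sketched below; the column names follow the attribute order in heart-disease.names, and the nominal value names are illustrative choices, not prescribed by the assignment.

```python
# Minimal sketch, assuming pandas. Missing values in processed.cleveland.data
# appear as "?" (in the ca and thal columns).
import pandas as pd

cols = ["age", "sex", "cp", "trestbps", "chol", "fbs", "restecg",
        "thalach", "exang", "oldpeak", "slope", "ca", "thal", "num"]
df = pd.read_csv("processed.cleveland.data", header=None, names=cols,
                 na_values="?")

# Relabel a couple of the discrete attributes with readable nominal names;
# the remaining discrete attributes would be handled the same way.
df["sex"] = df["sex"].map({1.0: "male", 0.0: "female"})
df["fbs"] = df["fbs"].map({1.0: "fbs_true", 0.0: "fbs_false"})

# Classification target as a nominal (string) attribute; num has no missing values.
df["num"] = df["num"].astype(int).astype(str)
```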

• For regression experiments:
• Use all the attributes as numeric (=continuous) as originally provided in the dataset.
• Use the continuous "1. #3 (age)" attribute as the regression target.

Run experiments with and without discretizing the predicting attributes; with and without removing attributes that are too closely related to the target or that make the trees too long; and with any other pre-processing and post-processing that produces useful and meaningful models.
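
If you do some of this pre-processing in Python rather than in Weka, one possible discretization step is sketched below, assuming scikit-learn's KBinsDiscretizer and the `df` from the earlier loading sketch; the bin count is an arbitrary illustrative choice.

```python
# Minimal sketch, assuming scikit-learn >= 0.20 and the DataFrame `df` above.
from sklearn.preprocessing import KBinsDiscretizer

# Continuous predicting attributes; for the regression experiments, leave the
# target (age) out of this list.
continuous = ["age", "trestbps", "chol", "thalach", "oldpeak"]

disc = KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="quantile")
df[continuous] = disc.fit_transform(df[continuous])
```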

• Performance Metric(s):
• Use the following metrics or evaluation methods:
1. For classification tasks: classification accuracy, precision, recall, ROC area, and confusion matrices.
For regression tasks: the correlation coefficient AND any subset of the following error metrics that you find appropriate: mean squared error, root mean squared error, mean absolute error, relative squared error, root relative squared error, and relative absolute error. An important part of the data mining evaluation in this project is to try to make sense of these performance metrics and to become familiar with them. (A Python sketch of some of these metric computations follows this list.)
2. size of the tree,
3. readability of the tree, and
4. time it took to construct the tree,
as separate measures to evaluate the "goodness" of your models.
• Compare each accuracy/error you obtained against those of benchmarking techniques such as ZeroR and OneR over the same (sub-)set of data instances you used in the corresponding experiment.
• Remember to experiment with pruning: use pre- and/or post-pruning of the tree in order to increase the classification accuracy, reduce the prediction error, and/or reduce the size of the tree.
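
For the Python experiments, the metric functions below cover most of the required measures, assuming scikit-learn and predictions (`y_test`/`y_pred`, `y_true_reg`/`y_pred_reg`) produced by your own train/test splits or cross-validation folds; ROC area can be computed with roc_auc_score given predicted class probabilities.

```python
# Minimal sketch, assuming scikit-learn and predictions from your own experiments.
import numpy as np
from sklearn.metrics import (accuracy_score, classification_report,
                             confusion_matrix, mean_absolute_error,
                             mean_squared_error)

# Classification metrics (target: num)
print(accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))      # per-class precision and recall

# Regression metrics (target: age)
print(np.corrcoef(y_true_reg, y_pred_reg)[0, 1])  # correlation coefficient
print(mean_absolute_error(y_true_reg, y_pred_reg))
print(np.sqrt(mean_squared_error(y_true_reg, y_pred_reg)))  # root mean squared error
```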

• Advanced Topic(s): Investigate in more depth (experimentally, theoretically, or both) a topic of your choice that is related to decision or model/regression trees and that was not covered already in this project, class lectures, or the textbook. This tree-related topic might be something that was described or mentioned briefly in the textbook or in class, that comes from your own research, or that is related to your interests. Just a few sample ideas are: the prune functions in Python; C4.5; C4.5 pruning methods (for trees or for rules); any of the additional tree classifiers in Weka (DecisionStump, LMT, RandomForest, RandomTree, REPTree); meta-learning applied to decision trees (see Classifier -> Choose -> meta); other useful functionality in Python; an idea from a research paper that you find intriguing; or any other tree-related topic.