

This part of the project consists of three subparts:
You must follow the guidelines below when training your neural nets:
Also, thanks to Nathan Sheldon, here are the lists with the appropriate (not CMU) directory paths. I have also left a copy of that file in /cs/cs4341/Proj4/proj4_lists.txt
Consider the following approach to training a neural net that uses genetic algorithms instead of the error-backpropagation algorithm. For simplicity, assume that we are training a 2-layer, feedforward neural net with 4 inputs, 1 hidden layer with 2 hidden nodes, and 1 output node. We have a collection of, say, n training examples with which to train the net.
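As an illustration of this idea (a minimal sketch, not a required implementation), the chromosome below encodes all 13 parameters of the 4-2-1 net (8 hidden weights, 2 hidden biases, 2 output weights, 1 output bias), fitness is negative mean squared error over the n training examples, and the genetic operators assumed here are tournament selection, uniform crossover, and Gaussian mutation with elitism:

```python
import math
import random

def forward(weights, x):
    # Chromosome layout: genes 0-7 hidden weights, 8-9 hidden biases,
    # 10-11 output weights, 12 output bias.
    h = []
    for j in range(2):
        s = weights[8 + j]
        for i in range(4):
            s += weights[j * 4 + i] * x[i]
        h.append(1.0 / (1.0 + math.exp(-s)))  # sigmoid hidden units
    out = weights[12]
    for j in range(2):
        out += weights[10 + j] * h[j]
    return 1.0 / (1.0 + math.exp(-out))      # sigmoid output unit

def fitness(weights, examples):
    # Negative mean squared error over the n training examples
    return -sum((forward(weights, x) - t) ** 2 for x, t in examples) / len(examples)

def evolve(examples, pop_size=50, generations=200, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(13)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda w: fitness(w, examples), reverse=True)
        next_pop = [scored[0], scored[1]]     # elitism: keep the two best nets
        while len(next_pop) < pop_size:
            # Tournament selection of two parents
            p1 = max(rng.sample(scored, 3), key=lambda w: fitness(w, examples))
            p2 = max(rng.sample(scored, 3), key=lambda w: fitness(w, examples))
            # Uniform crossover, then per-gene Gaussian mutation
            child = [rng.choice(pair) for pair in zip(p1, p2)]
            child = [g + rng.gauss(0, 0.1) if rng.random() < 0.1 else g
                     for g in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=lambda w: fitness(w, examples))
```

Because no gradients are used, the fitness function only needs the net's outputs, which is exactly what makes a GA a drop-in alternative to backpropagation here.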
Solve the following problems:
Consider the following dataset that specifies the type of contact lenses
that is prescribed to a patient based on the patient's age, spectacle
prescription, astigmatism, and tear production rate.
Use information theory, more precisely entropy, to construct a minimal
decision tree that predicts the type of contact lenses that will be
prescribed to a patient based on the patient's attributes.
SHOW EACH STEP OF THE CALCULATIONS and the resulting tree.
Attribute age values: {young, pre-presbyopic, presbyopic}
Attribute spectacle-prescrip values: {myope, hypermetrope}
Attribute astigmatism values: {no, yes}
Attribute tear-prod-rate values: {reduced, normal}
Attribute contact-lenses values: {soft, hard, none}
age spectacle-prescription astigmatism tear-prod-rate contact-lenses
young myope no reduced none
young myope no normal soft
young myope yes reduced none
young myope yes normal hard
young hypermetrope no reduced none
young hypermetrope no normal soft
young hypermetrope yes reduced none
young hypermetrope yes normal hard
pre-presbyopic myope no reduced none
pre-presbyopic myope no normal soft
pre-presbyopic myope yes reduced none
pre-presbyopic myope yes normal hard
pre-presbyopic hypermetrope no reduced none
pre-presbyopic hypermetrope no normal soft
pre-presbyopic hypermetrope yes reduced none
pre-presbyopic hypermetrope yes normal none
presbyopic myope no reduced none
presbyopic myope no normal none
presbyopic myope yes reduced none
presbyopic myope yes normal hard
presbyopic hypermetrope no reduced none
presbyopic hypermetrope no normal soft
presbyopic hypermetrope yes reduced none
presbyopic hypermetrope yes normal none
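To check the hand calculations for the first split, the class entropy of the dataset and the expected remaining entropy after splitting on each attribute can be computed mechanically. The sketch below (the function names are mine, not part of the assignment) uses the standard formulas H(S) = -sum p_i log2 p_i and the weighted average of subset entropies for a split:

```python
from collections import Counter
from math import log2

# The 24 rows of the table: age, spectacle-prescription, astigmatism,
# tear-prod-rate, contact-lenses (the class).
ROWS = """young myope no reduced none
young myope no normal soft
young myope yes reduced none
young myope yes normal hard
young hypermetrope no reduced none
young hypermetrope no normal soft
young hypermetrope yes reduced none
young hypermetrope yes normal hard
pre-presbyopic myope no reduced none
pre-presbyopic myope no normal soft
pre-presbyopic myope yes reduced none
pre-presbyopic myope yes normal hard
pre-presbyopic hypermetrope no reduced none
pre-presbyopic hypermetrope no normal soft
pre-presbyopic hypermetrope yes reduced none
pre-presbyopic hypermetrope yes normal none
presbyopic myope no reduced none
presbyopic myope no normal none
presbyopic myope yes reduced none
presbyopic myope yes normal hard
presbyopic hypermetrope no reduced none
presbyopic hypermetrope no normal soft
presbyopic hypermetrope yes reduced none
presbyopic hypermetrope yes normal none"""
data = [line.split() for line in ROWS.splitlines()]

def entropy(labels):
    # H(S) = -sum_i p_i * log2(p_i) over the class distribution
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def split_entropy(attr_index):
    # Expected class entropy after splitting on the given attribute:
    # sum over values v of |S_v|/|S| * H(S_v)
    n = len(data)
    groups = {}
    for row in data:
        groups.setdefault(row[attr_index], []).append(row[-1])
    return sum(len(g) / n * entropy(g) for g in groups.values())

ATTRS = ["age", "spectacle-prescription", "astigmatism", "tear-prod-rate"]
best = ATTRS[min(range(4), key=split_entropy)]  # -> "tear-prod-rate"
```

With 5 soft, 4 hard, and 15 none examples, the class entropy is about 1.326 bits, and splitting on tear-prod-rate leaves about 0.777 bits (every reduced-tear row is none), so tear-prod-rate is the root attribute.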
Sunglasses (Q1-Q4) 20 points
Obtaining results 3
Q4:
Code Modifications 5
Classification Accuracy 5
# of Epochs 5
Validation set 1
Test set 1
Face Recognition (Q5-Q8) 30 points
Obtaining results 3
Q8: 7
Q7:
code modifications 8
#output nodes and
output convention 8
Classification Accuracy 1
# Epochs 1
Validation set 1
Test set 1
Pose Recognition (Q9-Q11) 30 points
Obtaining results 10
Code modifications 6
Output encoding 6
# epochs 1
Validation set 1
Test set 1
Visualization (Q12-Q13) 20 points
Q13(a) 10
Q13(b) 10
Report 20 points
Q2-Q4 16
Q5 4
Level 1 of the tree (4 attributes): 15 points
For each attribute
- using the right formula and taking into
consideration all values of the attribute
and of the target attribute 2
- right calculations 1
Selecting the right attribute (least entropy): 3
(This selection will be considered correct if the attribute
with least entropy is chosen, even if the calculations of
the entropies are wrong.)
Level 2 of the tree (3 attributes): 12 points
For each attribute (same as above) 2+1=3
Selecting the right attribute (least entropy): 3
Level 3 of the tree (2 attributes): 9 points
For each attribute (same as above) 2+1=3
Selecting the right attribute (least entropy): 3
Level 4 of the tree (1 attribute, in each case): 4 points
Selecting the right attribute (least entropy): 2
This adds up to 40 points: 30 points + 10 extra-credit points
Intuitively, how do you think the results of this "match against templates" approach would compare to the neural-networks approach above? Explain your answer.
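One concrete reading of "match against templates" (an illustration of the idea, not the assignment's definition) is to average the training images of each class into one template per class, then label a new image with the class whose template has the smallest sum of squared pixel differences:

```python
def make_templates(examples):
    # examples: list of (pixels, class_label); the template for each class
    # is the per-pixel average of that class's training images.
    sums, counts = {}, {}
    for pixels, label in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(pixels))
        for i, p in enumerate(pixels):
            acc[i] += p
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(templates, pixels):
    # Pick the class whose template minimizes the sum of squared differences
    def ssd(template):
        return sum((a - b) ** 2 for a, b in zip(template, pixels))
    return min(templates, key=lambda label: ssd(templates[label]))
```

Unlike the trained net, this classifier has no adjustable nonlinear features: it commits to raw pixel distance, which is one starting point for reasoning about how the two approaches should compare.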