WORCESTER POLYTECHNIC INSTITUTE
Computer Science Department

CS4341 ❏ Artificial Intelligence ❏ B'07

Version: Tue Nov 6 20:05:52 EST 2007

PROJECT 1 - RI & IS - Evaluation Criteria

All projects will be graded out of 100 points for convenience, and the scores will later be adjusted to conform to the class grading scheme already provided to you. Grading will be subtractive: you start with 100 points, and points are deducted for each problem found. This produces lower scores and is harder to grade, but it is fairer overall because it is more consistent.



The grading will be divided into consideration of:
* 10 pts  Presentation   (i.e., style, layout, writing, comments, ...)
* 50 pts  Required       (i.e., what the problem description asked for)
* 40 pts  Demonstration  (i.e., the output from the system 
                                -- layout, clarity, completeness, how well tested).


10 Presentation:

        Clear, structured, well-written documentation.
        Good coding standards:
        -- Clear, appropriate comments in the code.
        -- Clear program layout.
        -- Clear functional decomposition of the system.
        -- Clear ordering of functions in the system.
        -- Good naming conventions.
        -- Appropriate use of abstraction (e.g., helper functions) to raise
           the level of the code to the level of the problem and away
           from LISP/Scheme.
        Good choice of data structures.
        Clear, readable, well-laid-out system (RI & IS) output.
        Good choice of action and predicate names.
        Good choice of rule language (readability).
        Good choice of predicate description language.
        Good choice of action description language.
        Good choice of WM description language.
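
The language-design points above might, for instance, lead to an attribute/value style for WM facts and rules like the following. This is purely an illustrative Python sketch (the assignment itself uses LISP/Scheme and file-based loading); all names here ("patient", "flu-rule", etc.) are invented, not taken from the assignment.

```python
# Illustrative only: one possible attribute/value format for WM facts
# (object, attribute, value) and a readable rule representation.
# None of these names come from the assignment.

wm = [
    ("patient", "temperature", "high"),
    ("patient", "throat", "sore"),
]

rules = [
    {
        "name": "flu-rule",
        "if":   [("patient", "temperature", "high"),
                 ("patient", "throat", "sore")],
        "then": [("assert", ("patient", "diagnosis", "flu"))],
    },
]
```

The point being graded is readability: a grader should be able to read a rule aloud and understand it without consulting the code.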


50 Required:	

   20 Part 1:

        Forward chaining implemented.
        Separate loading of...
            rules (from file)
            actions (from file)
            predicates (from file)
            WM (from file)
        Conflict resolution via specificity.
        RI is domain- and task-independent.
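
As a rough illustration of what these requirements amount to, here is a minimal match-select-act loop with specificity-based conflict resolution, sketched in Python. The rule and WM formats are invented for this sketch, and specificity is approximated as "number of conditions"; your LISP/Scheme RI may reasonably define both differently.

```python
# Hypothetical sketch of a forward-chaining interpreter.
# A rule is {"name": str, "if": [facts], "then": [facts]};
# WM is a set of facts. Specificity = number of conditions.

def matches(conditions, wm):
    # A rule matches when every condition is present in WM.
    return all(c in wm for c in conditions)

def forward_chain(rules, wm):
    wm = set(wm)
    fired = []
    while True:
        # Conflict set: matching rules whose conclusions are not all in WM yet.
        conflict_set = [r for r in rules
                        if matches(r["if"], wm) and not set(r["then"]) <= wm]
        if not conflict_set:
            break
        # Conflict resolution via specificity: most conditions wins.
        rule = max(conflict_set, key=lambda r: len(r["if"]))
        wm |= set(rule["then"])
        fired.append(rule["name"])
    return wm, fired
```

Note how domain and task independence falls out naturally: nothing in the loop mentions any particular domain; all domain knowledge lives in the rules, predicates, actions, and WM files.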

	Brief, clear documentation (external to the program) that
	   describes the design of your RI, including the overall
	   architecture, any special algorithms used, special data
	   structures used, assumptions made, and important global
	   variables.

        A description of how you've handled rule selection from the
           conflict set.
        A description of your rule language.
        All the test cases in your demonstration, with the
          corresponding demonstration output, showing that the RI
          is functioning correctly.
        A listing of all the RI test rules, predicates, and actions.
        All the code.

        ...............

   30 Part 2:

        Appropriate problem choice. The problem should:
          - be challenging enough that it cannot be solved in a single step;
          - have intermediate results;
          - use many rules for each solution obtained;
          - require some specialized knowledge to solve;
          - use just forward chaining;
          - use just the attribute/value WM format;
          - not require any (or much) calculation;
          - not require any human intervention or additional input;
          - not require backtracking;
          - require intelligence to solve.

        The system must be an example of one of these types of tasks:
          - Configuration, Criticism, Diagnosis by Classification,
            Evaluation, or Parametric Design.

        Descriptions of your problem and your domain.
        Brief, clear documentation that describes the design of your
           intelligent system.
        A description of any changes you made to your RI for Part 2.
        All the IS test cases, commented.
        The output from the IS demonstration runs (not annotated).
        A listing of all the IS rules, predicates, and actions.


40 Demonstration:

   15   RI is working & correct.
        How well tested (i.e., enough good tests done?)
                        (e.g., when doesn't it work?)
        Elementary syntactic/semantic checking of rules.
        Completeness of RI test demo messages
            (i.e., can we understand what's happening?).
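
"Elementary syntactic/semantic checking" might, for example, verify that every predicate a rule tests and every action it invokes has actually been declared. A hypothetical Python sketch (the rule format here is invented for illustration; your LISP/Scheme RI will differ):

```python
# Hypothetical sketch of elementary semantic rule checking:
# every predicate used in a rule's conditions and every action in its
# conclusions must appear among the declared predicates/actions.
# Rule format is invented for this sketch.

def check_rule(rule, predicates, actions):
    errors = []
    for (pred, *_) in rule["if"]:
        if pred not in predicates:
            errors.append(f"undeclared predicate: {pred}")
    for (act, *_) in rule["then"]:
        if act not in actions:
            errors.append(f"undeclared action: {act}")
    return errors
```

Running such a check at rule-load time catches typos in rule files before a demo, which is exactly when ambiguous demo output would otherwise make the failure hard to diagnose.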

        ..............

   25   IS is working & correct.
        Is clearly an example of a required task (e.g., configuration).
        Uses the Landscape.
        How well tested (i.e., enough good tests done?)
                        (e.g., when doesn't it work?)
        Completeness of IS test demo messages
            (i.e., can we understand what's happening?).
        Knowledge seems sensible (and not "fake").
        Includes at least one "impressive" example
            (i.e., it seems "smart"!).