INSTRUCTIONS FOR A TYPICAL INTERACTION

 

The following is an example walkthrough that highlights some of Mikey’s capabilities. Please execute these steps in order:

 

1.    Begin by filling in the hypothesis template using the drop-down boxes. Set the first to “steepness”, the second to “distance”, and the last to “increase”. The hypothesis should read: “I think changing the steepness from low to high will make the distance increase.”

 

2.    Assume that you do not know what to do next. Type “next” in the Talk to Mikey box and press Enter. Mikey will respond that the next step is to set up an experiment.

 

3.    Set up your first trial as follows: Surface Type = “Smooth”, Steepness = “Low”, Ball Type = “Rubber”, and Run Length = “Half Length”. The ramp values should already default to these values, so nothing needs to be done.

 

4.    Press the Run button and wait for the ball to roll down the ramp.

 

5.    Assume again that you do not know what to do next. Type “next” in the Talk to Mikey box and press Enter. Mikey will respond by telling you to press the Reset button.

 

6.    Press the Reset button. Mikey will log the results of your trial in the data table.

 

7.    Set up your second trial as follows: Surface Type = “Rough”, Steepness = “Low”, Ball Type = “Golf”, and Run Length = “Half Length”.

 

8.    Press the Run button. Note that this is an incorrect trial: we did not change the target variable from the hypothesis (the steepness), and we also changed the Ball Type and Surface Type, so the experiment is confounded as well.

 

9.    Press the Reset button. After some time, Mikey will report in the window that there were problems with the trial and will ask us to set up a trial that adheres to the Control of Variables Strategy.

 

10.  Let us ignore his advice and press the Run button again with the same input.

 

11.  Press the Reset button to have Mikey check our work once more.

 

12.  After some time, Mikey will again report an error, but this time he will highlight the specific ramp options that are causing the incorrect setup.

 

13.  Now set up a “correct” trial as follows: Surface Type = “Smooth”, Steepness = “High”, Ball Type = “Rubber”, and Run Length = “Half Length”.

 

14.  Press the Run button and then the Reset button. Mikey will report that you successfully set up the trials and will also have filled in the trial results in the table.

 

15.  Next, we will form our conclusions. Set up the conclusions so that they read “My experiments show that my hypothesis was incorrect because the distance decreased from low to high.”

 

16.  Press the Submit button in the conclusions box to have Mikey check the results.

 

17.  Mikey will report back that there were two problems with the conclusions. The first error states that the original hypothesis was actually correct, because it explains what occurred in the data table. The second states that, based on the observed data, the distance did not decrease.

 

18.  Fix the errors by changing the conclusions to read “My experiments show that my hypothesis was correct because the distance increased from low to high.”

 

19.  Press the Submit button in the conclusions box. Mikey will report that you successfully completed an entire experiment.

 

 

 

Close and restart the application. To give Mikey a more rigorous test, try the following:

 

·         When you first start the program, type “begin” in the Talk to Mikey box and press Enter. Mikey will automatically create a hypothesis for you. See if you can follow the process through correctly for the new hypothesis.

 

·         Try making different kinds of errors when setting up your second trial. Set up an unconfounded experiment that leaves the hypothesis variable unchanged, or change the hypothesis variable but make the experiment confounded. You will see that Mikey can handle cases where the student makes only one kind of error.

 

·         Instead of following the hypothesis template for setting up trials (for example, steepness from low to high), first create a trial with a high steepness and then one with a low steepness. Mikey should still correctly identify errors in the conclusions even though the trials were created out of order.

 

·         Try other combinations of incorrect conclusions.

 

 

WHAT WORKED

The overall goal of my project was to generate an intelligent tutoring environment based on an existing graphical simulation I created. To that end, this project required me to (1) determine a problem that could be solved by someone using the simulation, (2) create an intelligent tutor that helps that person through the process of solving the problem by adapting its pedagogical strategy, and (3) make the system flexible enough to solve those problems. Some of the successes are listed below:

 

·         Creating a large task model with recursion, optional steps, and deep decompositions. For better or for worse, I tried to put as much functionality as I could into the task model and keep functionality out of the integration component, PedagogicalAgent.java. The main reason was to truly have the intelligence living within the task model so it could be modified independently of the rest of the system. Most of the primitive actions are meant to be executed by the system. The task decompositions encode pedagogical strategies for giving feedback to the user. Finally, prompting tasks exist for the sole purpose of reporting information to the user.

 

·         Creating solution evaluation mechanisms entirely within JavaScript. The intelligent tutoring system should function correctly for any legal hypothesis, trial setup, and conclusion the UI lets the user specify. Even though there is hard coding for the variable names and types (see “What Didn’t Work” for a discussion on this), the code for checking correctness is independent of these values. Also, despite the fact that the process of generating a full solution is rigid, there is still a plethora of combinations of hypotheses, trials, and conclusions to explore in this system.

 

·         Supporting different kinds of feedback. I created two kinds of feedback: verbal feedback and highlighting feedback that can occur while running experiments. Currently, only verbal feedback is supported for conclusions. How and when this feedback is given is determined entirely within the task model; thus, by changing the task model, one can change the pedagogical strategy. The highlighting feedback is particularly interesting since this actually makes callbacks to the integration component. The callbacks are designed to highlight incorrect ramp parameters. This incorrectness is determined JavaScript-side and only the names of the incorrect variables and a highlighting color are passed to the integration component. There is room for improvement in this area; see “What Didn’t Work” for more information.

 

·         Supporting retries. When a user fails to perform a step correctly (e.g., creating a confounded setup), the task model lets the user retry that step. To support retry, the primary task to retry contains two decompositions: the “real” one and another that recursively calls the same task if the first decomposition fails. Supporting feedback while still permitting retry requires some additional task model finagling. For example, one child step of SetupTrialCheckCVS is the CheckCVS task. If this task’s postcondition is satisfied, its decomposition steps are not executed. If the postcondition fails, the decompositions are expanded, and these decompositions yield the corrective feedback. Assuming these tasks do not change the state of the objects used in the postcondition, the postcondition will still fail. Since there are no more decompositions to try, the CheckCVS task fails, which causes the SetupTrialCheckCVS task to take its retry branch, the recursive call (see the retry sketch at the end of this list).

 

·         Making tasks “promptable”. I generated seemingly intelligent prompts by enabling tasks to be defined as “promptable”. This was achieved by associating them with a @prompt = true tag in the .properties file. If my PedagogicalAgent integration code notices this tag, the task is first executed to fill in any missing slot values. Then the formatted text is displayed to the user as spoken by Mikey.

 

·         Creating and passing objects within the task model. The ramp domain is defined using several classes, including IndependentVariable, OutcomeVariable, Hypothesis, Trial, and Conclusion. These objects are constructed within the task model and passed between different tasks. Given elementary input from the user in the form of strings, there are tasks, executed only by the system, that construct objects from this primitive input. Thus, there are a fair number of tasks with scripts involving object construction (see the object-construction sketch at the end of this list).

 

·         Processing system tasks automatically. A large portion of my task model involves actions executed only by the system. To spare the user from having to constantly advance the state, my integration code in PedagogicalAgent.java contains a method for automatically executing live tasks that are deemed “system” tasks. System tasks have scripts and either no preference for their external attribute or external = false.
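
To make the retry mechanism concrete, the fragment below is a minimal, illustrative sketch in the style of the task model XML (Inquiry.xml) for the pattern described in the “Supporting retries” bullet above. Only the SetupTrialCheckCVS and CheckCVS names come from the actual task model; the other task and step names, the postcondition expression, and the omitted setup steps are placeholders, not excerpts from Inquiry.xml.

    <task id="SetupTrialCheckCVS">
       <!-- The "real" decomposition: set up, run, and check the trial. -->
       <subtasks id="setupAndCheck">
          <!-- (setup and run steps omitted) -->
          <step name="check" task="CheckCVS"/>
       </subtasks>
       <!-- The retry decomposition: if the first decomposition fails,
            recursively attempt the same task again. -->
       <subtasks id="retry">
          <step name="again" task="SetupTrialCheckCVS"/>
       </subtasks>
    </task>

    <task id="CheckCVS">
       <!-- Illustrative postcondition: true when the setup follows CVS. -->
       <postcondition> world.setupFollowsCVS() </postcondition>
       <!-- Expanded only when the postcondition fails; its steps give
            corrective feedback but do not change the checked state,
            so the postcondition still fails and CheckCVS fails. -->
       <subtasks id="feedback">
          <step name="explain" task="ReportCVSProblem"/>
       </subtasks>
    </task>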
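
Along the same lines, here is an illustrative sketch of an object-constructing system task, relating to the “Creating and passing objects” and “Processing system tasks automatically” bullets. The task name, slot names, and script body are placeholders rather than actual contents of Inquiry.xml; the point is simply that the task has a script and is not marked external, so the integration code in PedagogicalAgent.java executes it automatically to turn the user’s primitive string input into a Hypothesis object.

    <task id="BuildHypothesis">
       <input name="variable" type="string"/>
       <input name="direction" type="string"/>
       <output name="hypothesis" type="Hypothesis"/>
       <!-- Construct a domain object from the primitive string inputs.
            Because this task has a script and is not external, the
            integration code treats it as a system task. -->
       <script>
          $this.hypothesis = new Hypothesis($this.variable, $this.direction);
       </script>
    </task>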

 

In summary, I believe that Mikey appears intelligent in three ways. First, he can reason over a large number of hypothesis, trial, and conclusion combinations. Second, he can employ different pedagogical strategies to help the student. Third, he can respond in a conversational style when critiquing unique user solutions and answering the user’s questions about the next step.



WHAT DIDN'T WORK

 

There were several shortcomings of my intelligent user interface that I did not overcome:

 

·         Allowing repeated changes to the hypothesis. While a user is creating a hypothesis, she is unable to change previously chosen values for a particular option; in other words, the user cannot undo her choices while formulating the hypothesis. One solution to this problem is obvious: add a Submit button for submitting a completed hypothesis, just like the conclusions. Another option I tried was to make each primitive child action of FormHypothesis repeatable as necessary (maxOccurs = unbounded), as in the FormHypothesis sketch at the end of this list. However, this did not work within my system: after completing a task, the “next” task would always be the most recently executed primitive action. I believe this can be fixed by extending the “next” command to choose required live tasks before optional live tasks.

 

·         Having Mikey say “Try again” before reporting how an experimental setup is incorrect. This part of the task model is ordered as follows: say “there is a problem”, optionally take an action about an unchanged target variable (minOccurs = 0), optionally take an action about a confounded setup, and say “try again”. The current Guide will process the ‘say try again’ action before the optional actions even though they are totally ordered. Though I have not tried this, making the subtasks list unordered (ordered = false) and imposing an ordering using “requires” at each step may help; see the ordered = false sketch at the end of this list.

 

·         Creating a task model for critiquing an experimental setup that is domain independent. Throughout the task model there are assumptions that the domain is an experiment with four boolean variables and two outcomes. For example, the SetupTrial primitive action has four inputs corresponding to each variable. The RecordOutcome task performed by the system assumes a JTable exists with columns in a particular order corresponding to the variables and outcomes. Finally, the JavaScript for processing conclusions assumes that each variable is boolean. Solving this requires more design work and further revision and refactoring of the GUI and task model. I do have a few ideas, though, on how this could be done. First, the integration code (PedagogicalAgent.java) could create all of the key JavaScript objects (Hypothesis, VariableCollection, Conclusions) as input to the primitive actions rather than have steps for their creation. Second, the integration code could register an environment containing information, like variable names and the domains of variables, that is a prerequisite for the task model to work correctly; these values are currently defined in the task model’s initialization script. Finally, rather than interacting with widgets directly, as in the case of RecordOutcome, the task model could call a helper method that abstracts away the specifics of how data is stored in the widget, similar to the approach taken for highlighting. It would then be the responsibility of the integration code to provide the mappings from the task model to the widgets.

 

·         Enabling a more open-ended approach to creating an experiment. Currently the task model is highly ordered: the user first enters the hypothesis, then runs experiments, and finally formulates conclusions. She can never return to a previous main step once it is complete. This prevents actions like changing her hypothesis while running experiments. This kind of functionality is quite difficult to attain with my current approach, because the solution mechanisms, experimental steps, and pedagogical approaches are all tied together in the task model. For example, if the user is running experiments and repeatedly makes mistakes, she may want to change her hypothesis to make the trial correct rather than changing the simulation variables. To support this, the focus would first need to shift back to a previously accomplished task, and the new hypothesis would then need to be propagated through the task model to the previous focus. Even if the FormHypothesis task were infinitely repeatable, I do not believe the changed object would propagate down to the previous focus of setting up a trial. As another example, the current task model also does not let a user redo her first trial. I believe part of the solution lies in not creating and passing data objects like the Hypothesis or a Trial within the task model. Instead, the tasks should operate on objects already defined in the world and leave the management of those objects to a world state manager.
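
To illustrate the first bullet above, this is roughly the shape of the maxOccurs approach I tried for letting the user revise her hypothesis choices. The step and child task names are illustrative placeholders, not excerpts from Inquiry.xml.

    <task id="FormHypothesis">
       <subtasks id="choose">
          <!-- Each choice may be repeated so the user can revise it,
               but in practice the "next" command then kept suggesting
               the most recently executed primitive action. -->
          <step name="chooseVariable"  task="ChooseVariable"  maxOccurs="unbounded"/>
          <step name="chooseDirection" task="ChooseDirection" maxOccurs="unbounded"/>
          <!-- (remaining choices omitted) -->
       </subtasks>
    </task>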
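
The untried ordered = false idea from the “Try again” bullet might look like the following. The task and step names are again placeholders, and I have not verified how the Guide treats optional steps that appear in a requires list.

    <task id="ReportSetupProblems">
       <subtasks id="feedback" ordered="false">
          <step name="problem"    task="SayThereIsAProblem"/>
          <step name="unchanged"  task="ReportUnchangedTarget" minOccurs="0" requires="problem"/>
          <step name="confounded" task="ReportConfoundedSetup" minOccurs="0" requires="problem"/>
          <!-- "Try again" should wait for both optional feedback steps. -->
          <step name="tryAgain"   task="SayTryAgain" requires="unchanged confounded"/>
       </subtasks>
    </task>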



FUTURE DIRECTIONS

In addition to the enhancements stated in the “What Didn’t Work” section and making the user interface easier to use and more presentable, the following could also enhance the existing intelligent tutoring system:

·         Add “why” functionality. Currently, the user can only receive help when she does not know what step to perform next. A “why” mechanism would enable users to explore the reasoning behind the steps of the process. This could be implemented by reporting the purposes of each parent task, similar to Chris Gianfrancesco’s approach.

 

·         Have the user fill in the table. The task model and integration could be extended to make the user log their own trials. The resulting model would look similar to the structures created for analyzing the Control of Variables Strategy and the conclusions.

 

·         Use Sphinx4 to have the user respond to scaffolding or reflection questions. Part of the tutoring process can involve having students respond to questions on the fly. Rather than create widgets for every kind of possible interaction (like the widgets for the hypothesis and conclusions), I could make use of language processing to gain answers to more specific supporting questions like “What is your target variable?” or “Did the distance increase from trial 1 to trial 2?”

 

·         Give feedback when the system does not understand a user’s primitive action. Currently the system does nothing when a step is performed out of order (e.g. trying to run a setup with an incompletely specified hypothesis). The system should report to the user that a step was performed out of order. This can be accomplished by reporting an out-of-order step when a primitive task is performed and that task is not live. A more difficult approach would be to see if a user’s action corresponds to a live step in the decomposition and if that step is correct, report that the step was correct but unexpected.

 

·         Support running repeated trials. The simulation has random elements meaning that a particular setup can return different outcome values each time it is run. To support this functionality, the task model must be extended to enable critiquing of multiple trials. Also, the code for analyzing conclusions must be extended.

 

·         Improved Feedback. Mikey only says a handful of phrases that are defined in the .properties file. One extension to simulate more intelligent responses would be to define a set of responses, formats 1…n, and have the Guide randomly cycle through those formats.
 

 

There are also possible enhancements that could be made to the existing task engine:

 

·         Add a focus stack. Adding a focus stack enables changing the focus to a task not in the current decomposition level. When that task is complete, the stack is popped and the original task is again in focus. This functionality may aid in making Mikey a more open-ended intelligent tutoring system.

 

·         Add support for multi-valued slots. The addition of typed multi-valued slots would alleviate the need for defining a collection class like VariableCollection to pass sets of related objects.

 

·         Allow referencing object attributes within a binding’s value. In some of my tasks, I had to pass an entire object down to a subtask even though the subtask only needed the values of particular attributes of that object. Enabling access to attribute values could lead to improved task cohesion.

 

·         Allow a default value for the ‘external’ slot. I was unable to set the value of the external slot from within a task definition. Currently, the way to achieve this is to bind the external slot of a child task within a decomposition, as in the binding sketch at the end of this list.

 

·         Enable specification of a retry-on-fail attribute. Instead of employing the “retry” decomposition design strategy for handling failed tasks, a task could be marked as retryable when it fails. This could lead to simpler task models. The implementation could be nothing more than syntactic sugar that gets translated into its recursive counterpart, thus alleviating the need to update the CEA Standard (see the retryOnFail sketch at the end of this list).

 

·         Enable specification of decomposition preferences. There are times in my task model when multiple decompositions can be valid and a choice must be made to continue (see CheckConclusions). Since all the child tasks are meant to be executed by the system, I wanted the Guide to greedily choose any decomposition on its own. To support this, I extended the Guide to do just that. However, it would have been nice to have a task-level attribute for stating this.
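
To make the external-slot workaround concrete, the fragment below shows the current approach: the parent decomposition binds the child step’s external slot, because the child task cannot declare the value itself. The task and step names here are illustrative placeholders.

    <task id="RecordResults">
       <subtasks id="record">
          <step name="log" task="LogTrial"/>
          <!-- Workaround: mark the child step as a system task here,
               since "external" cannot be defaulted inside LogTrial. -->
          <binding slot="$log.external" value="false"/>
       </subtasks>
    </task>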
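
The proposed retry-on-fail attribute could be pure syntactic sugar. The retryOnFail attribute below is hypothetical (it is not part of the CEA Standard or the current engine); the engine would expand it into the two-decomposition recursive pattern sketched in the “What Worked” section.

    <!-- Hypothetical shorthand: -->
    <task id="SetupTrialCheckCVS" retryOnFail="true">
       <subtasks id="setupAndCheck">
          <!-- (setup and run steps omitted) -->
          <step name="check" task="CheckCVS"/>
       </subtasks>
       <!-- Expands to an extra decomposition whose single step
            recursively performs SetupTrialCheckCVS again. -->
    </task>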

 

 

CODE REFERENCE

 

The code most relevant to this discussion is found in the following locations within the code base:

 

·         edu/assistment/experiments/inclinedplane/InclinePlane.java

·         edu/assistment/simulation/view/PedagogyJPanel.java

·         edu/assistment/experiments/pedagogy/Guide.java

·         edu/assistment/experiments/pedagogy/PedagogicalAgent.java

·         edu/assistment/experiments/pedagogy/Inquiry.xml

·         edu/assistment/experiments/pedagogy/Inquiry.properties