Detailed Description: Orville

  • Instructions for a typical interaction

    When the application is launched, Orville greets the user and asks them to select an aircraft. Orville then loads the XML task model that corresponds to the selected aircraft; currently the Cessna 172 is supported. At this point, or at any time during the application, the user may ask "why" by clicking the Why? button, and Orville will provide detailed information via speech and text about the task currently selected in the task selection combo box.

    After the aircraft is selected, Orville begins to walk the user through sequences of flight tasks. The highest-level tasks are Conduct Pre-flight, Prepare for Takeoff, Takeoff, and Cruise. After the user selects a task, the appropriate task panel is loaded for that task; for instance, a slider widget is presented for an Adjust Throttle task. Additionally, for any task that Orville has a grounding script for, he immediately offers to perform that action automatically for the user. This collaborative interaction is designed to act as an autopilot-like feature that the user can override. When Orville asks if he can help, the user may respond in any of three modes: manual widget adjustment, text entry, or microphone speech. Throughout the program's execution, red outlines are drawn around the widgets that currently accept input, which makes it easier to see which multimodal inputs are available for a particular task.

    As the user progresses through the task model, Orville explains why certain tasks are being performed. For example, he may tell the user that a series of tasks is about to be performed in order to start the engine, or that a series of tasks fulfilling the Pre-flight procedures has just been completed. This helps the user understand why certain substeps are being performed and which goals they fulfill. Finally, at any time the user may consult the flight history in the right-hand flight history pane, where Orville notes which tasks he performed himself, so the user knows which tasks were automated.
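    As a concrete illustration of this startup flow, here is a minimal sketch of how aircraft selection could drive task-model loading. The TaskEngine interface, its load method, and the model file names are illustrative assumptions, not the actual cetask API.

```java
import java.util.Map;

// Minimal sketch of the startup flow described above. The TaskEngine
// interface and the model file names are hypothetical stand-ins for the
// real cetask classes, shown only to illustrate the per-aircraft design.
public class OrvilleStartup {

    // Hypothetical stand-in for the cetask task engine.
    interface TaskEngine {
        void load(String taskModelXml);   // parse an XML task model
    }

    // One XML task model per supported aircraft; only the Cessna 172
    // model exists in the current implementation.
    private static final Map<String, String> MODELS =
        Map.of("Cessna 172", "models/Cessna172.xml");

    static void onAircraftSelected(String aircraft, TaskEngine engine) {
        String model = MODELS.get(aircraft);
        if (model == null) {
            throw new IllegalArgumentException("Unsupported aircraft: " + aircraft);
        }
        engine.load(model);  // drives all later GUI navigation and dialogue
    }
}
```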


  • What worked

    Overall, the project was a success. I was able to use the task model to drive the GUI, and the collaborative and multimodal features discussed above were successfully implemented. At the heart of the application is the task model, which is unique to the type of aircraft selected by the user. I used the task model to drive GUI navigation, widget displays, speech and text output, and collaboration inquiries. When the application starts, the task model corresponding to the chosen aircraft is loaded into the task engine.

    I implemented an "Orville Guide", which extends the standard Guide issued with the cetask implementation. After the task model is loaded, the Orville Guide instructs my GUI Manager to query the task engine for the available live tasks, as sketched below. The available tasks are stored in the task selection combo box, from which the user may choose. The user can also click the "Why?" button to get more information about the task currently selected in that combo box. This functionality is implemented using Java properties files, which contain a "why" property for each task in the task model.

    After the user clicks on their desired task, I again use my GUI Manager to locate the appropriate widget panel for the current task focus. Additionally, I place a red outline around the available modes for that task, which helps the user identify the available input mechanisms. Tasks that have grounding scripts are immediately offered up for automatic execution. For example, Orville may ask, "Shall I adjust the throttle?" The user may answer "yes" in the text box or with the microphone, or she may choose to manipulate the throttle widget manually. If the user asks Orville to perform the task automatically, he reports back what he sent to the aircraft control system; for instance, he may say and type "I sent a 20% Fuel-Oxygen mixture to the control system". Orville also enters this information in the flight history panel and notes that he performed it.

    After each task is completed, Orville again goes back to the GUI Manager to populate the task selection combo box with the tasks available at that stage of the flight. Throughout the entire program sequence, the same GUI is used, with the exception of the task-specific widget panels. There is one panel for each task type; for example, there is an Adjust Throttle Panel and a Brief Passengers Panel. The appropriate task panel is swapped into the main frame using Swing's CardLayout functionality.

    Finally, I was able to keep the user informed of progress within the flight plan using the primitive "Say" task concept our class discussed. I supplemented certain multi-step tasks with "Say" tasks both before and after the main subtasks. For example, at the beginning of the Start Engine sequence, I notify the user with text and speech that we're beginning the Start Engine process; similarly, at the end, I notify the user that we have completed starting the engine. The task engine was modified to execute the "Say" tasks automatically, so that the user is informed without explicitly asking for information.
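    The live-task query described above might look roughly like the following sketch. GuiManager, TaskEngine, and getLiveTasks are illustrative stand-ins; the real cetask Guide and engine interfaces differ in their details.

```java
import java.util.List;
import javax.swing.JComboBox;

// Illustrative sketch of the Guide-to-GUI handoff described above.
// GuiManager, TaskEngine, and getLiveTasks() are assumed names, not
// the actual cetask API.
class GuiManager {

    interface TaskEngine {
        List<String> getLiveTasks();  // tasks currently eligible to execute
    }

    private final JComboBox<String> taskSelector = new JComboBox<>();

    // Called by the Orville Guide after the model loads, and again after
    // each task completes, so the combo box always shows live tasks.
    void refreshTaskSelector(TaskEngine engine) {
        taskSelector.removeAllItems();
        for (String task : engine.getLiveTasks()) {
            taskSelector.addItem(task);
        }
    }
}
```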

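    The "Why?" lookup itself is ordinary Java properties handling. Here is a sketch, assuming each task ID keys a "<taskId>.why" entry; the file name and key convention are assumptions for illustration.

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// Sketch of the properties-backed "Why?" button. The file name and the
// "<taskId>.why" key convention are assumptions, not the report's exact scheme.
class WhyLookup {
    private final Properties props = new Properties();

    WhyLookup(String propertiesFile) throws IOException {
        try (FileInputStream in = new FileInputStream(propertiesFile)) {
            props.load(in);
        }
    }

    // Returns the explanation spoken and displayed when the user clicks "Why?".
    String whyFor(String taskId) {
        return props.getProperty(taskId + ".why",
                "No additional information is available for this task.");
    }
}
```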

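    Panel swapping uses Swing's standard CardLayout; a minimal sketch, with illustrative panel and card names, might look like this:

```java
import java.awt.CardLayout;
import javax.swing.JPanel;

// Sketch of the CardLayout panel swapping described above. One panel is
// registered per task type; the card names here are illustrative.
class TaskPanelHost {
    private final CardLayout cards = new CardLayout();
    private final JPanel container = new JPanel(cards);

    TaskPanelHost(JPanel adjustThrottlePanel, JPanel briefPassengersPanel) {
        container.add(adjustThrottlePanel, "AdjustThrottle");
        container.add(briefPassengersPanel, "BriefPassengers");
    }

    // Called when the task focus changes, e.g. showPanel("AdjustThrottle").
    void showPanel(String taskType) {
        cards.show(container, taskType);
    }

    JPanel getContainer() { return container; }
}
```
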
  • What didn't

    The only thing that didn't work out cleanly implementation-wise was the concept of default input values. Due to the unique nature of my desired collaborative/autopilot functionality, I wanted Orville to offer grounding script execution immediately when a task was initiated. At the same time, I wanted the user to be able to ignore this offer and manually adjust the task input. Furthermore, I wanted to use constant input slot values to populate the GUI widget settings. For example, if the Adjust Throttle task was initiated through the GUI, I wanted the following sequence of events to occur: Orville reads the constant-valued input slot of the particular Adjust Throttle step encountered in the task model and uses that value for the default widget settings; he then immediately offers to perform the task automatically for the user using this default input value; the user can type or say "yes", or she can choose to adjust the throttle manually. The problem is that input values previously defined as constants cannot be adjusted. All of this is a bit of a stretch for the task engine provided. However, I was able to get the desired behavior by simply using the output slots instead: output slots can start off constant and be changed on the fly, which bought me exactly what I needed. There may have been a better way to do this.
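    A sketch of the output-slot workaround is shown below. The Task interface and slot accessors are hypothetical stand-ins for the engine's slot API; only the overall pattern reflects what was implemented.

```java
// Sketch of the output-slot workaround described above. The Task/slot
// accessors are hypothetical; the real cetask slot API differs.
class AdjustThrottleBinding {

    interface Task {
        Object getSlotValue(String slot);            // read a slot's current value
        void setSlotValue(String slot, Object val);  // overwrite a slot's value
    }

    // Input slots could not be changed once defined as constants, so the
    // default throttle setting lives in an output slot instead: it starts
    // at the constant default but may be overwritten before execution.
    void bind(Task adjustThrottle, javax.swing.JSlider throttleWidget) {
        int dflt = (Integer) adjustThrottle.getSlotValue("throttleSetting");
        throttleWidget.setValue(dflt);  // seed the widget with the default

        // If the user drags the slider instead of accepting Orville's offer,
        // the output slot is updated on the fly with the manual value.
        throttleWidget.addChangeListener(e ->
            adjustThrottle.setSlotValue("throttleSetting",
                                        throttleWidget.getValue()));
    }
}
```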


  • Future directions

    First, I would like to try more aircraft flight plans in order to see how they would fit my architecture. It would be interesting to see whether the same GUI layout, GUI management, and underlying task navigation would support various other aircraft. Secondly, it would be nice to continue the sequence of flight tasks: only a subset of the Cessna 172's flight plan was implemented here. The landing steps were left out, and they could provide a more interesting interaction with less predictable user sequences.

    There are also several aesthetic add-ons I could see implementing. Semi-randomized voice output could help with the monotony of the sequences, and different task selection mechanisms could be used. At the start I anticipated that the user could select from multiple tasks at one time in the combo box; it turned out the flight plan was very sequential, and really only the Cruise task permitted unordered tasking. If I re-implemented the application, I would automatically bring up the task widgets when only one task is available to the user. Perhaps I wouldn't use a combo box at all; I would probably replace it with buttons like those we saw in the Laura application. I would also be very curious to add more speech input to drive the GUI. I allowed the user to respond to "yes" questions with the microphone, but it could add a nice effect if the user could navigate through tasks by voice. Overall I'm very pleased with how the project turned out, and I can see many extensions that could be pursued.