WPI Worcester Polytechnic Institute

Computer Science Department

ISRG Schedule, Fall 2014

September 11, 2pm 
 
  • Speaker: All
  • Title: Warm Up Meeting
  • Where: Beckett Conference Room (2nd Floor, Fuller Labs)
  • Abstract:
  • Cookies: Matt

September 25, 2pm
     
  • Speaker: Jia Wang
  • Title: Coordinated 3D Interaction in Tablet- and HMD-Based Hybrid Virtual Environments
  • Where: Beckett Conference Room (2nd Floor, Fuller Labs)
  • Abstract:

    Traditional 3D User Interfaces (3DUI) in immersive virtual reality can be inefficient in tasks that involve diversity in scale, perspective, reference frame, and dimension. This paper proposes a solution to this problem using a coordinated, tablet- and HMD-based, hybrid virtual environment system. Wearing a non-occlusive HMD, the user is able to view and interact with a tablet mounted on the non-dominant forearm, which provides a multi-touch interaction surface as well as an exocentric God view of the virtual world. To reduce transition gaps across 3D interaction tasks and interfaces, four coordination mechanisms are proposed, two of which were implemented, and one of which was evaluated in a user study featuring complex level-editing tasks. Based on subjective ratings, task performance, interview feedback, and video analysis, we found that having multiple Interaction Contexts (ICs) with complementary benefits can lead to good performance and user experience, despite the complexity of learning and using the hybrid system. The results also suggest keeping 3DUI tasks synchronized across the ICs, as this can help users understand their relationships, smooth within- and between-task IC transitions, and inspire more creative use of the different interfaces.

    (An illustrative code sketch follows this entry.)

  • Cookies: Rob
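
    Illustrative sketch (not from the paper): the synchronization idea above can be pictured as shared task state that every Interaction Context observes, so a change made in one IC is immediately mirrored in the other. The class names, the selection task, and the two ICs below are assumptions for illustration only.

      # Hypothetical sketch: one shared selection task observed by a
      # tablet IC and an HMD IC; selecting in either context mirrors
      # the selection in the other (keeping 3DUI tasks synchronized).

      class TaskState:
          """Shared 3DUI task state observed by every Interaction Context."""

          def __init__(self):
              self.selected_object = None
              self._observers = []

          def attach(self, ic):
              self._observers.append(ic)

          def select(self, obj, source_ic):
              self.selected_object = obj
              # Every other IC mirrors the change, so within- and
              # between-task transitions start from one consistent state.
              for ic in self._observers:
                  if ic is not source_ic:
                      ic.on_selection_changed(obj)

      class InteractionContext:
          def __init__(self, name, state):
              self.name = name
              self.state = state
              state.attach(self)

          def select(self, obj):
              print(f"[{self.name}] user selected {obj}")
              self.state.select(obj, source_ic=self)

          def on_selection_changed(self, obj):
              print(f"[{self.name}] mirroring selection of {obj}")

      state = TaskState()
      tablet = InteractionContext("tablet (exocentric God view)", state)
      hmd = InteractionContext("HMD (egocentric view)", state)
      tablet.select("crate_07")   # the HMD context mirrors the selection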

October 9, 2pm
     
  • Speaker: Benzun Pious Wisely
  • Title: Tight Coupling between Manipulation and Perception using SLAM
  • Where: Beckett Conference Room (2nd Floor, Fuller Labs)
  • Abstract:

    A tight coupling between perception and manipulation is required for dynamic robots to react in a timely and appropriate manner to changes in the world. In conventional robotics, perception transforms visual information into internal models which are used by planning algorithms to generate trajectories for motion. Under this paradigm, it is possible for a plan to become stale if the robot or environment changes configuration before the robot can replan. Perception and actuation are only loosely coupled through planning; there is no rapid feedback or interplay between them. For a statically stable robot in a slowly changing environment, this is an appropriate strategy for manipulating the world. A tightly coupled system, by contrast, connects perception directly to actuation, allowing for rapid feedback. This tight coupling is important for a dynamically unstable robot which engages in active manipulation. In such robots, planning does not fall between perception and manipulation; rather, planning creates the connection between perception and manipulation. We show that Simultaneous Localization and Mapping (SLAM) can be used as a tool to perform the tight coupling for a humanoid robot with numerous proprioceptive and exteroceptive sensors. Three different approaches to generating a motion plan for grabbing a piece of debris are evaluated on the Atlas humanoid robot. Results indicate a higher success rate and accuracy for motion plans that implement tight coupling between perception and manipulation using SLAM.

    (An illustrative code sketch follows this entry.)

  • Cookies: Emmanuel
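
    A minimal sketch of the loose- vs. tight-coupling contrast, under invented assumptions (a simple 3-D servo task, Gaussian sensing noise, a proportional gain): instead of executing a precomputed trajectory that can go stale, the freshest SLAM-style estimates feed the actuation command on every cycle. This is illustration only, not the Atlas software stack.

      # Hypothetical sketch: perception (a SLAM-style pose estimate)
      # feeds actuation directly every cycle, so the reach adapts even
      # if the hand or the debris drifts -- no stale precomputed plan.

      import numpy as np

      rng = np.random.default_rng(0)
      true_hand = np.array([0.0, 0.0, 0.0])      # simulated end effector
      true_debris = np.array([0.6, 0.1, 0.3])    # simulated grasp target

      def slam_update():
          """Stand-in for SLAM: noisy but fresh pose estimates of the
          end effector and the debris in a common world frame."""
          noise = lambda: rng.normal(0.0, 0.002, 3)
          return true_hand + noise(), true_debris + noise()

      for step in range(200):
          hand, debris = slam_update()               # perception
          velocity = 0.5 * (debris - hand)           # actuation command
          true_hand = true_hand + 0.1 * velocity     # robot moves (simulated)
          if np.linalg.norm(debris - hand) < 0.01:   # close enough to grasp
              print(f"reached debris after {step} cycles")
              break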

October 30, 2pm
     
  • Speaker: Supreeth Rao
  • Title: The MOPED framework: Object Recognition and Pose Estimation for Manipulation
  • Where: Beckett Conference Room (2nd Floor, Fuller Labs)
  • Abstract:

    We present MOPED, a framework for Multiple Object Pose Estimation and Detection that seamlessly integrates single-image and multi-image object recognition and pose estimation in one optimized, robust, and scalable framework. We address two main challenges in computer vision for robotics: robust performance in complex scenes, and low latency for real-time operation.

    We achieve robust performance with Iterative Clustering-Estimation (ICE), a novel algorithm that iteratively combines feature clustering with robust pose estimation. Feature clustering quickly partitions the scene and produces object hypotheses. The hypotheses are used to further refine the feature clusters, and the two steps iterate until convergence. ICE is easy to parallelize, and easily integrates single- and multi-camera object recognition and pose estimation. We also introduce a novel object hypothesis scoring function based on M-estimator theory, and a novel pose clustering algorithm that robustly handles recognition outliers. We achieve scalability and low latency with an improved feature matching algorithm for large databases, a GPU/CPU hybrid architecture that exploits parallelism at all levels, and an optimized resource scheduler. We provide extensive experimental results demonstrating state-of-the-art performance in terms of recognition, scalability, and latency in real-world robotic applications.

    (An illustrative code sketch follows this entry.)

  • Cookies: Rob
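
    A schematic of the Iterative Clustering-Estimation (ICE) loop the abstract describes, with deliberately simplified stand-ins: 2-D feature points, nearest-center clustering, and a centroid in place of a full pose estimate. Only the alternate-until-convergence structure reflects the abstract; everything else is invented for brevity.

      # Simplified ICE skeleton: clustering and "pose" estimation refine
      # each other until the assignments stop changing.

      import numpy as np

      def cluster(features, centers):
          """Clustering step: assign each feature to the nearest hypothesis."""
          d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
          return d.argmin(axis=1)

      def estimate(features, labels, centers):
          """Estimation step: re-fit one 'pose' (a centroid here) per cluster."""
          new = centers.copy()
          for j in range(len(centers)):
              members = features[labels == j]
              if len(members):                  # keep old center if cluster empty
                  new[j] = members.mean(axis=0)
          return new

      def ice(features, k=2, iters=20, seed=0):
          rng = np.random.default_rng(seed)
          centers = features[rng.choice(len(features), size=k, replace=False)]
          labels = cluster(features, centers)
          for _ in range(iters):
              centers = estimate(features, labels, centers)  # refine hypotheses
              new_labels = cluster(features, centers)        # refine clusters
              if np.array_equal(new_labels, labels):         # converged
                  break
              labels = new_labels
          return centers, labels

      # Two synthetic 'objects' worth of feature points.
      rng = np.random.default_rng(1)
      pts = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
      centers, labels = ice(pts, k=2)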

December 4, 2pm
     
  • Speaker: Che Sun
  • Title: Real-time Global Illumination - Introduction to a voxel-based hybrid method
  • Where: Beckett Conference Room (2nd Floor, Fuller Labs)
  • Abstract:

    Real-time global illumination has gained increasing interest in recent years due to the rapid evolution of GPU computing power. Several approaches have been developed to simulate global illumination as accurately as possible by implementing robust ray-tracing algorithms using Shader Model 5 features of current GPUs. For example, bidirectional path tracing is employed to resolve the classic singularity issue of instant radiosity. While the rendering speed is acceptable for a certain level of scene complexity, maintaining high frame rates for arbitrary scenes is still challenging. In this presentation, I will give an introduction to our novel hybrid real-time global illumination rendering system that combines bidirectional path tracing with scene voxelization to accelerate virtual point light (VPL) visibility tests and global ray-bundle generation.

    (An illustrative code sketch follows this entry.)

  • Cookies: Rob
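
    One piece of such a system that is easy to sketch is the voxel-accelerated visibility test: a shadow ray between a shaded point and a virtual point light is marched through a boolean occupancy grid instead of being intersected with full scene geometry. The fixed-step CPU march below is an assumption-laden illustration; the presented system runs on the GPU, and a real traversal would use a 3-D DDA.

      # Illustrative only: VPL visibility against a voxelized scene proxy.

      import numpy as np

      def visible(grid, p, q, steps_per_voxel=2):
          """True if the segment p -> q crosses no occupied voxel.
          grid is a 3-D boolean occupancy array; p and q are points in
          voxel coordinates."""
          p, q = np.asarray(p, float), np.asarray(q, float)
          n = int(np.linalg.norm(q - p) * steps_per_voxel) + 1
          for t in np.linspace(0.0, 1.0, n):
              x, y, z = (p + t * (q - p)).astype(int)
              if grid[x, y, z]:
                  return False      # shadow ray blocked: VPL not visible
          return True

      # Tiny scene: an occluding wall of voxels at x == 8.
      grid = np.zeros((16, 16, 16), dtype=bool)
      grid[8, :, :] = True
      print(visible(grid, (2, 8, 8), (14, 8, 8)))  # False: wall blocks the ray
      print(visible(grid, (2, 8, 8), (6, 8, 8)))   # True: same side of the wall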

December 18, 2pm
     
  • Speaker: Kaiyu Zhao
  • Title: Visual Analytics of Time Series Co-movement
  • Where: Beckett Conference Room (2nd Floor, Fuller Labs)
  • Abstract:

    A significant task within data mining is to group or classify objects based on some similarity metric. Unfortunately, no two researchers can agree on which metric to use, and even within a given metric, the configuration space can be quite overwhelming. In this paper we present our work towards the development of a visual analytics tool that supports the building of associations between time series objects by progressively integrating multiple models and using different resolutions to suggest evidence for co-movement relationships. The model analytic space integrates views to support multiple models at multiple granularities. The user can interactively group or isolate objects based on the evidence seen thus far. While exploring the model analytic space, the user can also examine the aggregated relationship in the model consensus view, which shows enhanced or weakened relationships. We demonstrate this process using a case study involving financial stock prices, interactively identifying the critical features for isolating interesting relationships between companies in terms of time series movement.

    (An illustrative code sketch follows this entry.)

  • Cookies: Rob
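
    A minimal sketch of the kind of multi-resolution evidence such a tool could aggregate, assuming rolling return correlation as the similarity metric; the synthetic prices, column names, and window sizes are invented stand-ins for the paper's stock case study.

      # Co-movement evidence at three resolutions: rolling correlations
      # of daily returns over short, medium, and long windows.

      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(7)
      market = rng.normal(0, 0.01, 500)           # shared driver -> co-movement
      prices = pd.DataFrame({
          "AAA": 100 * np.exp(np.cumsum(market + rng.normal(0, 0.005, 500))),
          "BBB": 100 * np.exp(np.cumsum(market + rng.normal(0, 0.005, 500))),
      })
      returns = prices.pct_change().dropna()

      for window in (10, 30, 90):                 # one "model" per resolution
          corr = returns["AAA"].rolling(window).corr(returns["BBB"])
          print(f"window={window:3d}  mean rolling corr={corr.mean():.2f}")

      # Consistently high correlation across resolutions is the sort of
      # aggregated evidence a consensus view could present as co-movement.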