| February 26
| Jingjing Zheng
| Submodular Attribute Selection for Action Recognition in Video
|-
| March 5
===PSPGC: Part-Based Seeds for Parametric Graph-Cuts===
Speaker: [http://www.cs.umd.edu/~bharat/ Bharat Singh] -- Date: February 19, 2015
    
Abstract: PSPGC is a detection-based parametric graph-cut method for accurate image segmentation. Experiments show that seed positioning plays an important role in graph-cut-based methods, so we propose three seed generation strategies that incorporate information about the location and color of object parts, along with size and shape. Combined with low-level regular grid seeds, PSPGC can leverage both low-level and high-level cues about the objects present in the image. Multiple parametric graph-cuts using these seeding strategies are solved to obtain a pool of segments, which has a high rate of recovering the ground-truth segments. Experiments on the challenging PASCAL2010 and 2012 segmentation datasets show that the segmentation hypotheses generated by PSPGC outperform other state-of-the-art methods by up to 3.5% when measured by three different metrics (average overlap, recall, and covering). We also obtain the best average overlap score in 15 out of 20 categories on PASCAL2010. Further, we provide a quantitative evaluation of the efficacy of each seed generation strategy introduced.
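To make the role of seeds concrete, below is a minimal, self-contained sketch of seed-driven, parametric graph-cut segmentation. This is not the PSPGC implementation: the toy grid image, the hard seed constraints, the pairwise weights, and the swept foreground bias <code>lam</code> are all illustrative assumptions; PSPGC's part-based seed generation and its exact energy are described in the paper.

<syntaxhighlight lang="python">
from collections import deque

def max_flow(cap, source, sink, n):
    # Edmonds-Karp max-flow; afterwards the nodes still reachable from the
    # source in the residual graph form the source side of the min cut.
    flow = [[0.0] * n for _ in range(n)]
    while True:
        parent = [-1] * n
        parent[source] = source
        q = deque([source])
        while q and parent[sink] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 1e-12:
                    parent[v] = u
                    q.append(v)
        if parent[sink] == -1:
            break
        v, push = sink, float("inf")
        while v != source:          # find bottleneck along the path
            push = min(push, cap[parent[v]][v] - flow[parent[v]][v])
            v = parent[v]
        v = sink
        while v != source:          # augment
            flow[parent[v]][v] += push
            flow[v][parent[v]] -= push
            v = parent[v]
    reach, q = {source}, deque([source])
    while q:
        u = q.popleft()
        for v in range(n):
            if v not in reach and cap[u][v] - flow[u][v] > 1e-12:
                reach.add(v)
                q.append(v)
    return reach

def segment(img, fg_seeds, bg_seeds, lam):
    # Binary segmentation of a small grayscale grid via s-t min-cut.
    # Sweeping lam (the parametric foreground bias) yields a pool of segments.
    h, w = len(img), len(img[0])
    n = h * w + 2
    S, T = h * w, h * w + 1          # source = foreground, sink = background
    cap = [[0.0] * n for _ in range(n)]
    BIG = 1e9
    for y in range(h):
        for x in range(w):
            p = y * w + x
            if (y, x) in fg_seeds:    # hard constraint: seed is foreground
                cap[S][p] = BIG
            elif (y, x) in bg_seeds:  # hard constraint: seed is background
                cap[p][T] = BIG
            else:                     # unary terms biased by lam
                cap[S][p] = lam
                cap[p][T] = 1.0 - lam
            for dy, dx in ((0, 1), (1, 0)):   # 4-connected pairwise terms
                if y + dy < h and x + dx < w:
                    q2 = (y + dy) * w + (x + dx)
                    wgt = 2.0 / (1.0 + abs(img[y][x] - img[y + dy][x + dx]))
                    cap[p][q2] = cap[q2][p] = wgt
    fg = max_flow(cap, S, T, n)
    return [[int(y * w + x in fg) for x in range(w)] for y in range(h)]

# A 3x3 toy image: sweeping lam gives a pool of segmentation hypotheses.
img = [[0.1, 0.1, 0.9], [0.1, 0.9, 0.9], [0.1, 0.9, 0.9]]
pool = [segment(img, fg_seeds={(2, 2)}, bg_seeds={(0, 0)}, lam=l)
        for l in (0.2, 0.5, 0.8)]
</syntaxhighlight>

The swept <code>lam</code> plays the role of the parametric family: each value produces one hypothesis, and the collection forms the segment pool from which ground-truth-like segments are drawn.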
===Submodular Attribute Selection for Action Recognition in Video===
Speaker: [https://sites.google.com/site/jingjingzhengumd/ Jingjing Zheng] -- Date: February 26, 2015
Abstract: We present an approach to jointly learn a set of view-specific dictionaries and a common dictionary for cross-view action recognition. Each view-specific dictionary is learned for a specific view, while the common dictionary is shared across different views. Our approach represents videos in each view using both the corresponding view-specific dictionary and the common dictionary. More importantly, it encourages videos taken from different views of the same action to have similar sparse representations. In this way, we can align view-specific features in the sparse feature spaces spanned by the view-specific dictionary set and transfer the view-shared features in the sparse feature space spanned by the common dictionary. Meanwhile, the incoherence between the common dictionary and the view-specific dictionary set enables us to exploit the discriminative information encoded in view-specific features and view-shared features separately. In addition, the learned common dictionary not only can represent actions from unseen views, but also makes our approach effective in a semi-supervised setting where no correspondence videos exist and only a few labels exist in the target view. Extensive experiments on the multi-view IXMAS dataset demonstrate that our approach outperforms many recent approaches for cross-view action recognition.
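As a rough illustration of the representation step only (not the authors' learning algorithm), the sketch below codes a feature against the concatenation of a common and a view-specific dictionary, so the sparse code splits into a view-shared part and a view-specific part. The random dictionaries and the plain ISTA solver are placeholder assumptions; in the paper the dictionaries are learned jointly, with incoherence and cross-view similarity constraints.

<syntaxhighlight lang="python">
import numpy as np

def ista(D, x, lam=0.1, n_iters=200):
    # Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 by iterative
    # soft-thresholding (ISTA).
    L = np.linalg.norm(D, 2) ** 2           # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iters):
        grad = D.T @ (D @ a - x)            # gradient of the quadratic term
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return a

rng = np.random.default_rng(0)
d, k_c, k_v = 64, 32, 32
D_common = rng.standard_normal((d, k_c))    # shared across all camera views
D_view = rng.standard_normal((d, k_v))      # one such dictionary per view
D_common /= np.linalg.norm(D_common, axis=0)
D_view /= np.linalg.norm(D_view, axis=0)

x = rng.standard_normal(d)                  # a video feature from this view
a = ista(np.hstack([D_common, D_view]), x)
a_shared, a_specific = a[:k_c], a[k_c:]     # view-shared vs. view-specific code
</syntaxhighlight>

The intuition is that encouraging the <code>a_shared</code> blocks of same-action videos to agree across views is what lets the common dictionary transfer to unseen views, while <code>a_specific</code> keeps the per-view discriminative detail.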
    
==Past Semesters==