| November 14
| Sumit Shekhar
| Joint Sparse Representation for Multimodal Biometric Recognition
|-
| November 21
Discriminative appearance features are effective for recognizing actions in a fixed view, but generalize poorly to changes in viewpoint. We present a method for view-invariant action recognition based on sparse representations using a transferable dictionary pair. A transferable dictionary pair consists of two dictionaries that correspond to the source and target views, respectively. The two dictionaries are learned simultaneously from pairs of videos taken at different views and aim to encourage each video in the pair to have the same sparse representation. Thus, the transferable dictionary pair links features between the two views that are useful for action recognition. Both unsupervised and supervised algorithms are presented for learning transferable dictionary pairs. Using the sparse representation as features, a classifier built in the source view can be directly transferred to the target view. We extend our approach to transferring an action model learned from multiple source views to one target view. We demonstrate the effectiveness of our approach on the multi-view IXMAS data set. Our results compare favorably to the state of the art.
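The shared-representation constraint above can be sketched in a few lines of numpy: stacking paired source/target features and learning a single dictionary over the stack forces both views onto one common sparse code, whose top and bottom blocks are the source and target dictionaries. This is only an illustrative sketch, not the speakers' implementation; the ISTA solver, atom count, and penalty weight `lam` are all assumptions.

```python
import numpy as np

def soft_threshold(v, lam):
    # Elementwise shrinkage: proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def sparse_code(D, X, lam=0.1, n_iter=100):
    # ISTA for min_A 0.5 * ||X - D A||_F^2 + lam * ||A||_1.
    step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1 / Lipschitz constant of the gradient
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        A = soft_threshold(A - step * (D.T @ (D @ A - X)), lam * step)
    return A

def learn_transferable_pair(Xs, Xt, n_atoms=20, lam=0.1, n_outer=10, seed=0):
    # Stack the paired views; one dictionary over the stack means the
    # source block D_s and target block D_t are fit to a *single* code A,
    # i.e. each video pair shares the same sparse representation.
    rng = np.random.default_rng(seed)
    X = np.vstack([Xs, Xt])
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_outer):
        A = sparse_code(D, X, lam)
        # Ridge-regularized least-squares dictionary update, then renormalize atoms.
        D = X @ A.T @ np.linalg.inv(A @ A.T + 1e-6 * np.eye(n_atoms))
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D[: Xs.shape[0]], D[Xs.shape[0] :], A
```

At test time, `sparse_code(Ds, features)` in the source view and `sparse_code(Dt, features)` in the target view produce comparable codes, so a classifier trained on source-view codes can be applied to the target view directly.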
===Joint Sparse Representation for Multimodal Biometric Recognition===
Speaker: [http://www.umiacs.umd.edu/~sshekha/ Sumit Shekhar] -- Date: November 14, 2013
In this talk, I will present work on a feature-level fusion method for multimodal biometric recognition. Traditional methods for combining outputs from different modalities are based on score-level or decision-level fusion. Feature-level fusion can be more discriminative, but it has hardly been explored because the modalities produce heterogeneous features of high dimension. Here, I will present a framework that uses joint sparsity to combine information, and show its application to multimodal biometric recognition, face recognition, and video-based recognition.
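The joint-sparsity idea in this abstract can be illustrated with a small numpy sketch: each modality has its own dictionary, but an l2,1 (row-group) penalty couples the per-modality codes so that the same atoms are active across modalities. This is a hedged sketch of the general technique, not the speaker's method; the proximal-gradient solver, dictionaries, and penalty weight are assumptions made for the example.

```python
import numpy as np

def joint_sparse_code(Ds, xs, lam=0.1, n_iter=200):
    """Joint sparse coding across modalities with an l2,1 penalty.

    Ds: list of per-modality dictionaries (each d_m x n_atoms, shared atom indexing).
    xs: list of per-modality observation vectors.
    Returns an (n_atoms x n_modalities) coefficient matrix whose rows are
    encouraged to be jointly active or jointly zero across modalities.
    """
    n_atoms = Ds[0].shape[1]
    A = np.zeros((n_atoms, len(Ds)))
    # Common step size: 1 / max per-modality Lipschitz constant.
    L = max(np.linalg.norm(D, 2) ** 2 for D in Ds)
    for _ in range(n_iter):
        # Per-modality least-squares gradients, one column per modality.
        grads = [D.T @ (D @ a - x) for D, a, x in zip(Ds, A.T, xs)]
        V = A - np.column_stack(grads) / L
        # Row-wise shrinkage: proximal operator of lam * sum_i ||row_i||_2.
        norms = np.linalg.norm(V, axis=1, keepdims=True)
        scale = np.maximum(1.0 - (lam / L) / np.maximum(norms, 1e-12), 0.0)
        A = scale * V
    return A
```

A residual-based classifier in the spirit of sparse-representation classification would then sum per-class reconstruction errors over modalities and pick the smallest; that step is omitted here for brevity.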
==Past Semesters==