Changes

767 bytes added ,  20:48, 2 July 2011
* Encourage interaction between computer vision students;
* Provide an opportunity for computer vision students to be aware of, and possibly get involved in, the research their peers are conducting;
* Provide an opportunity for computer vision students to receive feedback on their current research;
* Provide speaking opportunities for computer vision students.
| July 7
| Raghuraman Gopalan
| Exploring Context in Unsupervised Object Identification Scenarios
|-
| July 14
Over the years, the spatial resolution of cameras has steadily increased, but the temporal resolution has remained the same. In this talk, I will present my work on converting a regular slow camera into a faster one. We capture and accurately reconstruct fast events with our slower prototype camera by exploiting the temporal redundancy in videos. First, I will show how, by fluttering the shutter during the exposure of a slow 25 fps camera, we can capture and reconstruct a fast periodic video at 2000 fps. Next, I will present a generalization in which per-pixel modulation during exposure, combined with brightness-constancy constraints, allows us to capture a broad class of motions at 200 fps using a 25 fps camera. Both techniques borrow ideas from compressive sensing theory for acquisition and recovery.
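The acquisition model behind coded-exposure capture can be sketched numerically. The toy below is an illustrative simplification of the idea in the abstract, not the speaker's actual system: each slow frame integrates K fast sub-frames under a random binary shutter code, and a temporally sparse signal is recovered per pixel with iterative soft-thresholding. All parameter values here are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 8          # fast sub-frames integrated per slow exposure (speed-up factor)
M = 32         # number of slow frames captured
N = M * K      # fast frames to reconstruct

# Orthonormal DCT-II basis: the fast temporal signal is assumed sparse here.
n = np.arange(N)
D = np.sqrt(2.0 / N) * np.cos(np.pi * (n[:, None] + 0.5) * n[None, :] / N)
D[:, 0] /= np.sqrt(2.0)

# Ground-truth fast signal for one pixel: sparse in the DCT basis.
alpha_true = np.zeros(N)
alpha_true[[2, 7, 15]] = [1.0, -0.6, 0.3]
x_true = D @ alpha_true

# Coded-exposure measurement: each slow frame sums its K sub-frames
# weighted by a random binary shutter code (the "flutter").
code = rng.integers(0, 2, size=(M, K)).astype(float)
A = np.zeros((M, N))
for i in range(M):
    A[i, i * K:(i + 1) * K] = code[i]
y = A @ x_true

# Recover sparse DCT coefficients with ISTA (iterative soft-thresholding).
Phi = A @ D
L = np.linalg.norm(Phi, 2) ** 2     # Lipschitz constant of the data term
lam = 1e-3
alpha = np.zeros(N)
for _ in range(2000):
    grad = Phi.T @ (Phi @ alpha - y)
    z = alpha - grad / L
    alpha = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

x_hat = D @ alpha
print("relative error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The key point is that M slow measurements constrain N = M·K fast unknowns, so recovery is only possible because the signal is sparse in the chosen basis.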
===Exploring Context in Unsupervised Object Identification Scenarios===
The utility of context for supervised object recognition has been acknowledged since the early seventies and has been practically demonstrated by many systems in the last few years. The goal of this talk is to understand the role of context in unsupervised pattern identification scenarios. We consider two problems: clustering a set of unlabelled data points using maximum-margin principles, and adapting a classifier trained on a specific domain to identify instances across novel domain-shifting transformations. For both, we propose contextual sources that provide pertinent information about the identity of the unlabelled data.
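The maximum-margin clustering idea mentioned above can be illustrated with a common alternating simplification: assign binary labels with the current linear separator, refit the separator, and repeat. This sketch is a toy stand-in for the general formulation, not the speaker's method; the data, initialization, and ridge-regularised classifier are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two unlabelled Gaussian blobs.
X = np.vstack([rng.normal(-2.0, 0.5, (50, 2)),
               rng.normal(+2.0, 0.5, (50, 2))])

# Initialise labels from the top principal direction of the data.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
y = np.sign(Xc @ Vt[0])
y[y == 0] = 1.0

A = np.hstack([X, np.ones((len(X), 1))])   # features with a bias column
for _ in range(20):
    # Fit a ridge-regularised linear classifier to the current labels
    # (a least-squares stand-in for the SVM subproblem).
    w = np.linalg.solve(A.T @ A + 1e-2 * np.eye(3), A.T @ y)
    # Reassign each point to the side of the separator it falls on.
    y_new = np.sign(A @ w)
    y_new[y_new == 0] = 1.0
    if np.array_equal(y_new, y):
        break
    y = y_new

print("cluster sizes:", int((y > 0).sum()), int((y < 0).sum()))
```

With well-separated blobs the alternation converges in a few rounds to a split along the large-margin direction; the role of "context" in the talk is precisely to supply extra information when the unlabelled data alone are this ambiguous or worse.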
     