* Topics may include current research, past research, general topic presentations, paper summaries and critiques, or anything else beneficial to the computer vision graduate student community.
==Schedule Spring 2015==
    
All talks take place on Thursdays at 3:30pm in AVW 3450.
! Title
|-
| February 12
| TBD
| TBD
|-
| February 19
| TBD
| TBD
|-
| February 26
| Cancelled
| Cancelled
|-
| March 5
| TBD
| TBD
|-
| March 12
| TBD
| TBD
|-
| March 19
| ''Spring Break, no meeting''
|
|-
| March 26
| TBD
| TBD
|-
| April 2
| TBD
| TBD
|-
| April 9
| TBD
| TBD
|-
| April 16
| ''ICCV deadline, no meeting''
|
|-
| April 23
| ''Post ICCV deadline, no meeting''
|
|-
| April 30
| TBD
| TBD
|-
| May 7
| TBD
| TBD
|-
| May 14
| ''Final Exam, no meeting''
|
|}
==Talk Abstracts Spring 2015==
===TBD===
Speaker: [TBD TBD] -- Date: February 12, 2015

Abstract: TBD
      
==Past Semesters==
 