- 1 Schedule Fall 2014
- 2 Talk Abstracts Fall 2014
- 2.1 Recursive Context Propagation Network for Semantic Scene Labeling
- 2.2 Planar Structure Matching Under Projective Uncertainty for Geolocation
- 2.3 Knowing a Good HOG Filter When You See It: Efficient Selection of Filters for Detection
- 2.4 Growing Regression Forests by Classification: Applications to Object Pose Estimation
- 2.5 Locally Convolutional Neural Network
- 2.6 Shadow-Free Segmentation in Still Images Using Local Density Measure
Schedule Fall 2014
All talks take place on Thursdays at 3:30pm in AVW 3450.
| Date | Speaker | Title |
|---|---|---|
| October 16 | Abhishek Sharma | Recursive Context Propagation Network for Semantic Scene Labeling |
| October 23 | Ang Li | Planar Structure Matching Under Projective Uncertainty for Geolocation |
| November 6 | Ejaz Ahmed | Knowing a Good HOG Filter When You See It: Efficient Selection of Filters for Detection |
| November 13 | | CVPR deadline, no meeting |
| November 20 | Kota Hara | Growing Regression Forests by Classification: Applications to Object Pose Estimation |
| November 27 | | Thanksgiving break, no meeting |
| December 4 | Angjoo Kanazawa | Locally Convolutional Neural Network |
| December 11 | Aleksandrs Ecins | Shadow-Free Segmentation in Still Images Using Local Density Measure |
Talk Abstracts Fall 2014
Recursive Context Propagation Network for Semantic Scene Labeling
Speaker: Abhishek Sharma -- Date: October 16, 2014
Abstract: The talk will briefly touch upon the multi-scale CNN of LeCun and Farabet for extracting pixel-wise features for semantic segmentation, and then discuss the work we did to extend that model into a real-time, accurate pixel-wise labeling pipeline. I will present a deep feed-forward neural network architecture for pixel-wise semantic scene labeling. It uses a novel recursive neural network architecture for context propagation, referred to as rCPN. It first maps the local features into a semantic space, followed by a bottom-up aggregation of local information into a global feature of the entire image. A top-down propagation of the aggregated information then enhances the contextual information of each local feature. As a result, information from every location in the image is propagated to every other location. Experimental results on the Stanford Background and SIFT Flow datasets show that the proposed method outperforms previous approaches in terms of accuracy. It is also orders of magnitude faster than previous methods, taking only 0.07 seconds on a GPU for pixel-wise labeling of a 256 by 256 image starting from raw RGB pixel values, given a super-pixel mask that takes an additional 0.3 seconds to compute with an off-the-shelf implementation.
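The bottom-up/top-down propagation idea can be sketched in a few lines over a toy binary superpixel tree. The weight matrices, feature dimension, and tree below are hypothetical placeholders, not the trained rCPN model:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # semantic feature dimension (hypothetical)

# Hypothetical learned parameters of the recursive net.
W_combine   = rng.standard_normal((D, 2 * D)) * 0.1  # child pair -> parent
W_decombine = rng.standard_normal((D, 2 * D)) * 0.1  # (parent ctx, child) -> child ctx

def combine(left, right):
    """Bottom-up: merge two child semantic features into a parent feature."""
    return np.tanh(W_combine @ np.concatenate([left, right]))

def decombine(parent_ctx, child):
    """Top-down: enhance a child feature with the parent's context."""
    return np.tanh(W_decombine @ np.concatenate([parent_ctx, child]))

# Toy binary parse tree over 4 superpixel features.
leaves = [rng.standard_normal(D) for _ in range(4)]

# Bottom-up pass: aggregate everything into one global feature for the image.
ab = combine(leaves[0], leaves[1])
cd = combine(leaves[2], leaves[3])
root = combine(ab, cd)

# Top-down pass: propagate the global context back to every superpixel.
ab_ctx, cd_ctx = decombine(root, ab), decombine(root, cd)
contextual = [decombine(ab_ctx, leaves[0]), decombine(ab_ctx, leaves[1]),
              decombine(cd_ctx, leaves[2]), decombine(cd_ctx, leaves[3])]

print([c.shape for c in contextual])  # each superpixel feature now carries global context
```

Because the same two small networks are shared across all tree nodes, every superpixel ends up influenced by every other one after a single bottom-up and top-down sweep.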
Planar Structure Matching Under Projective Uncertainty for Geolocation
Speaker: Ang Li -- Date: October 23, 2014
Abstract: Image-based geolocation aims to answer the question: where was this ground photograph taken? We present an approach to geolocating a single image by matching human-delineated line segments in the ground image to automatically detected line segments in ortho images. Our approach is based on distance transform matching. Observing that the uncertainty of line segments is non-linearly amplified by projective transformations, we develop an uncertainty-based representation and incorporate it into a geometric matching framework. We show that our approach is able to rule out a considerable portion of false candidate regions even in a database composed of geographic areas with similar visual appearances.
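The core of distance-transform matching can be illustrated with a generic chamfer-style score (this is a minimal sketch, not the paper's uncertainty-weighted formulation):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_score(query_mask, candidate_mask):
    """Mean distance from each query edge pixel to the nearest candidate
    edge pixel; lower means a better match."""
    # distance_transform_edt measures distance to the nearest zero,
    # so invert the candidate edge mask first.
    dt = distance_transform_edt(~candidate_mask)
    return dt[query_mask].mean()

# Toy example: a short vertical line matched against two candidates.
query = np.zeros((20, 20), bool); query[5:15, 10] = True
good  = np.zeros((20, 20), bool); good[5:15, 11] = True   # 1 px away
bad   = np.zeros((20, 20), bool); bad[2, 2] = True        # far away

print(chamfer_score(query, good) < chamfer_score(query, bad))  # True
```

Precomputing the distance transform once per candidate region makes scoring a query configuration cheap, which is what makes sweeping a large ortho-image database feasible.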
Knowing a Good HOG Filter When You See It: Efficient Selection of Filters for Detection
Speaker: Ejaz Ahmed -- Date: November 6, 2014
Abstract: Collections of filters based on histograms of oriented gradients (HOG) are common to several detection methods, notably poselets and exemplar SVMs. The main bottleneck in training such systems is the selection of a subset of good filters from a large number of possible choices. We show that one can learn a universal model of part “goodness” based on properties that can be computed from the filter itself. The intuition is that good filters across categories exhibit common traits, such as low clutter and gradients that are spatially correlated. This allows us to quickly discard filters that are not promising, thereby speeding up the training procedure. Applied to training the poselet model, our automated selection procedure allows us to improve its detection performance on the PASCAL VOC datasets, while speeding up training by an order of magnitude. Similar results are reported for exemplar SVMs.
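Filter-intrinsic "goodness" cues of the kind described above might be computed along these lines. The specific statistics (energy entropy as a clutter proxy, neighboring-cell similarity as spatial correlation) are illustrative assumptions, not the learned model from the talk:

```python
import numpy as np

def filter_goodness_features(w):
    """Cheap descriptors of a HOG filter w (cells x cells x orientations)
    intended to correlate with detection quality: low clutter and
    spatially coherent gradients."""
    energy = np.linalg.norm(w, axis=-1)            # per-cell gradient energy
    p = energy.ravel() / (energy.sum() + 1e-8)
    clutter = -(p * np.log(p + 1e-8)).sum()        # high entropy = cluttered
    # Spatial coherence: similarity of neighboring cells' orientation profiles.
    dx = (w[:, :-1] * w[:, 1:]).sum(-1).mean()     # horizontal neighbors
    dy = (w[:-1, :] * w[1:, :]).sum(-1).mean()     # vertical neighbors
    return np.array([clutter, dx, dy])

rng = np.random.default_rng(1)
noisy = rng.standard_normal((6, 6, 9))             # cluttered filter
clean = np.zeros((6, 6, 9)); clean[:, 3, 0] = 1.0  # one coherent vertical edge

print(filter_goodness_features(noisy))
print(filter_goodness_features(clean))
```

A ranker trained on such per-filter descriptors can score thousands of candidate filters without ever running them on images, which is where the training speedup comes from.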
Growing Regression Forests by Classification: Applications to Object Pose Estimation
Speaker: Kota Hara -- Date: November 20, 2014
Abstract: In this work, we propose a novel node splitting method for regression trees and incorporate it into the regression forest framework. Unlike traditional binary splitting, where the splitting rule is selected from a predefined set of binary splitting rules via trial-and-error, the proposed node splitting method first finds clusters of the training data which at least locally minimize the empirical loss without considering the input space. Splitting rules which preserve the found clusters as much as possible are then determined by casting the problem into a classification problem. Consequently, our new node splitting method enjoys more freedom in choosing the splitting rules, resulting in more efficient tree structures. In addition to the Euclidean target space, we present a variant which can naturally deal with a circular target space by the proper use of circular statistics. We apply the regression forest employing our node splitting to head pose estimation (Euclidean target space) and car direction estimation (circular target space) and demonstrate that the proposed method significantly outperforms state-of-the-art methods (38.5% and 22.5% error reduction, respectively).
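The two-stage idea (cluster the targets first, then fit a splitting rule that reproduces the clusters) can be sketched as follows. This is a toy version with k-means over a 1-D target and a decision stump as the classifier; the actual method is more general:

```python
import numpy as np

def split_by_classification(X, y, k=2, iters=20, seed=0):
    """Node splitting in two stages: (1) cluster the *targets* y to locally
    minimize empirical loss, ignoring X; (2) fit a splitting rule on X
    (here a simple decision stump) that best reproduces the clusters."""
    rng = np.random.default_rng(seed)
    # Stage 1: k-means on the target space only.
    centers = y[rng.choice(len(y), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.abs(y[:, None] - centers[None, :]), axis=1)
        centers = np.array([y[labels == c].mean() if np.any(labels == c)
                            else centers[c] for c in range(k)])
    # Stage 2: pick the axis-aligned threshold that best predicts the labels.
    best = (0, 0.0, -1.0)  # (feature index, threshold, accuracy)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            pred = (X[:, j] > t).astype(int)
            # Cluster ids are arbitrary, so accept either polarity.
            acc = max((pred == labels).mean(), (pred != labels).mean())
            if acc > best[2]:
                best = (j, t, acc)
    return best, labels

# Toy data: the target depends on feature 0 only.
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, (100, 3))
y = np.where(X[:, 0] > 0.5, 10.0, 0.0) + rng.normal(0, 0.1, 100)
best, labels = split_by_classification(X, y)
print(best)  # recovers a split on feature 0 near threshold 0.5
```

The point of the construction is that the clustering step is free to group targets however best reduces the loss, and only afterwards does the input space constrain what splits are realizable.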
Locally Convolutional Neural Network
Speaker: Angjoo Kanazawa -- Date: December 4, 2014
Abstract: Convolutional Neural Networks (ConvNets) have shown excellent results on many visual classification tasks. With the exception of ImageNet, these datasets are carefully crafted such that objects are well-aligned at similar scales. Naturally, the feature learning problem gets more challenging as the amount of variation in the data increases, as the models have to learn to be invariant to certain changes in appearance. Recent results on the ImageNet dataset show that given enough data, ConvNets can learn such invariances, producing very discriminative features. But could we do more: use fewer parameters and less data, and learn more discriminative features, if certain invariances were built into the learning process? In this paper we present a simple model that allows ConvNets to learn features in a locally scale-invariant manner without increasing the number of model parameters. We show on a modified MNIST dataset that when faced with scale variation, building in scale-invariance allows ConvNets to learn more discriminative features with fewer parameters and less training data.
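A locally scale-invariant convolution can be sketched as applying one shared filter to several rescalings of the input and max-pooling the responses over scale, which adds no parameters. The filter, scale set, and interpolation choices below are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np
from scipy.ndimage import zoom
from scipy.signal import convolve2d

def scale_invariant_conv(image, filt, scales=(0.7, 1.0, 1.4)):
    """Apply one filter at several scales of the input and max-pool the
    responses over scale; the filter is shared, so no extra parameters."""
    h, w = image.shape
    responses = []
    for s in scales:
        scaled = zoom(image, s, order=1)                        # rescale input
        r = convolve2d(scaled, filt, mode="same")               # shared filter
        r = zoom(r, (h / r.shape[0], w / r.shape[1]), order=1)  # undo rescale
        responses.append(r)
    return np.max(np.stack(responses), axis=0)                  # max over scales

img = np.zeros((16, 16)); img[4:12, 4:12] = 1.0   # a square at one scale
filt = np.ones((3, 3)) / 9.0                      # toy averaging filter
out = scale_invariant_conv(img, filt)
print(out.shape)  # (16, 16)
```

Because the max is taken per location, the layer is invariant to *local* scale changes rather than only to a single global rescaling of the whole image.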
Shadow-Free Segmentation in Still Images Using Local Density Measure
Speaker: Aleksandrs Ecins -- Date: December 11, 2014
Abstract: Over the last decades, several approaches have been introduced to deal with cast shadows in background subtraction applications. However, very few algorithms address the same problem for still images. In this paper we propose a figure-ground segmentation algorithm to segment objects in still images affected by shadows. Instead of modeling the shadow directly in the segmentation process, our approach works actively: it first segments an object, then tests the resulting boundary for the presence of shadows, and resegments with modified segmentation parameters. To improve shadow boundary detection, we introduce a novel image preprocessing technique based on the notion of an image density map. This map improves the illumination invariance of classical filterbank-based texture description methods, and we demonstrate that the resulting texture feature improves shadow detection results. The final segmentation algorithm achieves good results on a new figure-ground segmentation dataset with challenging illumination conditions.