Revision as of 19:56, 8 October 2013

Computer Vision Student Seminars

The Computer Vision Student Seminars at the University of Maryland College Park are a student-run series of talks given by current graduate students for current graduate students.

To receive regular information about the Computer Vision Student Seminars, subscribe to our mailing list or our talks list.

Description

The purpose of these talks is to:

  • Encourage interaction between computer vision students;
  • Provide an opportunity for computer vision students to be aware of and possibly get involved in the research their peers are conducting;
  • Provide an opportunity for computer vision students to receive feedback on their current research;
  • Provide speaking opportunities for computer vision students.

The guidelines for the format are:

  • An hour-long weekly meeting, consisting of one 20-40 minute talk followed by discussion and food.
  • The talks are meant to be casual and discussion is encouraged.
  • Topics may include current research, past research, general topic presentations, paper summaries and critiques, or anything else beneficial to the computer vision graduate student community.

Schedule Fall 2013

All talks take place on Thursdays at 4:30pm in AVW 3450.

Date | Speaker | Title
September 19 | Mohammad Rastegari | Fast Image Prior
September 26 | (no meeting) |
October 3 | (MSR talk, no meeting) |
October 10 | Yezhou Yang | A Context-free Manipulation Action Grammar and Manipulation Action Consequences Detection
October 17 | Garrett Warnell | TBA
October 24 | Abhishek Sharma | A Sentence is Worth a Thousand Pixels
October 31 | (CVPR deadline, no meeting) |
November 7 | Jingjing Zheng | TBA
November 14 | Sumit Shekhar | TBA
November 21 | Arunkumar Mohananchettiar | TBA
November 28 | (Thanksgiving, no meeting) |
December 5 | Arijit Biswas | TBA

Talk Abstracts Fall 2013

Fast Image Prior

Speaker: Mohammad Rastegari -- Date: September 19, 2013

In this project we introduce a new method for learning an image prior that can be used in many image reconstruction applications. We learn a generative model on natural image patches, similar to a Gaussian Mixture Model (GMM). The key idea of our approach is to force every component of the generative model to share the same set of basis vectors, which leads to much faster inference at test time. We used image denoising as the test bed for this image prior. Our experimental results show a roughly 30x speedup over the state-of-the-art method, with a slight improvement in denoising accuracy.
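The source of the speedup can be illustrated with a small sketch. All details here (patch dimension, component count, and the diagonal-variance-in-a-shared-basis covariance form) are invented stand-ins, not the talk's actual model: because every zero-mean Gaussian component shares one orthonormal basis W, a patch is projected onto W once, and scoring each component afterwards costs only O(d) diagonal work instead of a per-component matrix solve.

```python
import numpy as np

rng = np.random.default_rng(0)

d, K = 64, 10          # patch dimension (e.g. 8x8) and number of components
W, _ = np.linalg.qr(rng.standard_normal((d, d)))   # shared orthonormal basis
scales = rng.uniform(0.5, 2.0, size=(K, d))        # per-component diagonal variances

def component_log_likelihoods(x):
    """Log-likelihood of patch x under each zero-mean Gaussian with
    covariance W @ diag(scales[k]) @ W.T.  The projection z = W.T @ x is
    computed once and reused by every component."""
    z2 = (W.T @ x) ** 2
    # log N(x; 0, W D_k W^T) = -0.5 * (d log 2pi + sum log D_k + sum z^2 / D_k)
    return -0.5 * (d * np.log(2 * np.pi)
                   + np.log(scales).sum(axis=1)
                   + (z2 / scales).sum(axis=1))

x = rng.standard_normal(d)
ll = component_log_likelihoods(x)          # one score per component
best = int(np.argmax(ll))                  # most responsible component
```

Without the shared basis, evaluating K full-covariance Gaussians requires K separate d-by-d solves; here the per-component cost drops to elementwise operations, which is the flavor of speedup the abstract describes.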

A Context-free Manipulation Action Grammar and Manipulation Action Consequences Detection

Speaker: Yezhou Yang -- Date October 10, 2013

Humanoid robots will need to learn the actions that humans perform: they will need to recognize these actions when they see them, and to perform these actions themselves. In this presentation I will introduce a manipulation grammar for this learning task. Context-free grammars in linguistics provide a simple and precise mechanism for describing how phrases in a natural language are built from smaller blocks, and they capture the basic recursive structure of natural language exactly. Similarly, for manipulation actions, every complex activity is built from smaller blocks involving hands and their movements, as well as objects, tools, and the monitoring of their state. Thus, interpreting an observed action is like understanding language, and executing an action from knowledge in memory is like producing language. Associated with the grammar, a parsing algorithm is proposed, which can be used bottom-up to interpret videos by dynamically creating a semantic tree structure, and top-down to create the motor commands for a robot to execute manipulation actions. Experiments on both tasks, i.e., a robot observing people performing manipulation actions and a robot executing manipulation actions on a simulation platform, validate the proposed formalism.
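The bottom-up direction can be sketched with a toy example. The grammar below is entirely made up for illustration (the talk's actual production rules and terminals are the speaker's own): a tiny context-free grammar in Chomsky normal form describes an action phrase as a hand combined with a verb-object pair, and standard CYK recognition checks whether an observed token sequence derives it.

```python
# Toy CNF grammar for "manipulation actions" -- invented stand-in rules.
BINARY = {
    ("HAND", "ACT"): "AP",     # hand + action phrase -> full action
    ("VERB", "OBJ"): "ACT",    # manipulation verb + object -> action phrase
}
LEXICON = {
    "hand": "HAND",
    "grasp": "VERB",
    "cut": "VERB",
    "knife": "OBJ",
    "bread": "OBJ",
}

def cyk_recognize(tokens, start="AP"):
    """Bottom-up CYK recognition: True iff `tokens` derives `start`."""
    n = len(tokens)
    # table[i][j] holds the nonterminals deriving tokens[i:j]
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, tok in enumerate(tokens):
        if tok in LEXICON:
            table[i][i + 1].add(LEXICON[tok])
    for span in range(2, n + 1):           # grow larger spans from smaller ones
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):      # try every split point
                for a in table[i][k]:
                    for b in table[k][j]:
                        parent = BINARY.get((a, b))
                        if parent:
                            table[i][j].add(parent)
    return start in table[0][n]

recognized = cyk_recognize(["hand", "grasp", "knife"])   # a valid action phrase
rejected = cyk_recognize(["knife", "grasp", "hand"])     # wrong word order
```

The same chart, extended with backpointers, would yield the semantic tree structure the abstract mentions; running the rules generatively (top-down) would produce a command sequence rather than recognize one.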

A Sentence is Worth a Thousand Pixels

Speaker: Abhishek Sharma -- Date: October 24, 2013

We are interested in holistic scene understanding where images are accompanied by text in the form of complex sentential descriptions. We propose a holistic conditional random field model for semantic parsing which reasons jointly about which objects are present in the scene, their spatial extent, and the semantic segmentation, and employs both text and image information as input. We automatically parse the sentences, extract objects and their relationships, and incorporate them into the model, both via potentials and by re-ranking candidate detections. We demonstrate the effectiveness of our approach on the challenging UIUC sentences dataset and show segmentation improvements of 12.5% over the visual-only model and detection improvements of 5% AP over deformable part-based models.
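One ingredient of the abstract, re-ranking candidate detections using the sentence, can be caricatured as follows. This is a deliberately crude heuristic sketch, not the paper's model (which uses a joint CRF with learned potentials): the class names, scores, and boost value are all invented, and "parsing" is reduced to a word-membership test.

```python
MENTION_BOOST = 0.3   # hypothetical additive bonus for text-supported classes

def rerank(detections, sentence):
    """detections: list of (class_name, score) pairs.
    Classes whose name appears in the sentence receive a score bonus;
    the list is returned best-first."""
    words = set(sentence.lower().split())
    boosted = [(cls, score + (MENTION_BOOST if cls in words else 0.0))
               for cls, score in detections]
    return sorted(boosted, key=lambda p: p[1], reverse=True)

ranked = rerank([("sofa", 0.55), ("cat", 0.50), ("dog", 0.40)],
                "A dog sleeping on the sofa")
# the caption lifts "dog" above "cat" even though its detector score is lower
```

The real system goes further, using extracted object relationships as CRF potentials so that segmentation, detection, and text reasoning influence one another jointly rather than through a one-shot score adjustment.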


Past Semesters

Funded By

Current Seminar Series Coordinators

Emails are at umiacs.umd.edu.

Angjoo Kanazawa, kanazawa@ (student of Professor David Jacobs)
Sameh Khamis, sameh@ (student of Professor Larry Davis)
Austin Myers, amyers@ (student of Professor Yiannis Aloimonos)
Raviteja Vemulapalli, raviteja@ (student of Professor Rama Chellappa)

Gone but not forgotten.

Ejaz Ahmed
Anne Jorstad, now at EPFL
Jie Ni, off this semester
Sima Taheri
Ching Lik Teo