MusicPADD


Weekly Plans

Goal: Detect single and beamed notes and provide a smooth, natural presentation (no lag)

Feb 10 - Feb 16

Sunday 10:

Monday 11:

Tuesday 12:

Update Journal
Read PADD
Fix Lag in program
Fix screen scaling

Wednesday 6:

Notes that are connected with horizontal beam

Thursday 7:

Read 1/2 of Hierarchical parsing and recognition of hand-sketched diagrams
Allow for filling in shapes later

Friday 8:

Meeting with Professor Guimbretière
Update Journal 

Saturday 9:

Read 1/2 of SketchREAD: A Multi-Domain Sketch Recognition Engine

TODO LIST

  • Work on detecting notes that are connected with beams
  • Add threads to application
  • Fix scaling of paper to screen
  • Convex Hulls of squares matching to circles
  • Allow user to fill in circles later
  • Work on printing staffs on Anoto paper
  • Detect notes and their values
  • Work on overall MUSIC PADD design (Connecting it with Finale and Batch Mode, the Database)
  • Work on Finale Plugin

List of things to Recognize

* Single Gestures

Notes:
Whole (Whole.png)
Half (Half.png)
Quarter (Quarter.png)
Eighth (Eighth.png)
Sixteenth (Sixteenth.png)
Thirty-second (Thirty-second.png)

Rests:
Whole (Whole rest.png)
Half (Half rest.png)
Quarter (Quarter rest.gif)
Eighth (Eighth rest.gif)
Sixteenth (Sixteenth rest.gif)
Thirty-second (32nd rest.gif)

* Connected Gestures

Connected Eighth Notes
Connected Sixteenth Notes

Connected notes.JPG

  • Articulations (more to come)
Tie
Staccato

Detection Ideas

Automata.PNG

Music PADD Design

MusicPaddDiagram.GIF

Explanation and design to come

Finale Plugin Resources (PDK)

I have a few more links on my Mac OS partition that I need to copy over.

Papers to Read

Papers Read

Weekly Meeting Notes

Monday February 11, 2008

We looked at features of my demo:
1. Detecting duration of notes (whole, 1/2, 1/4, 1/16, 1/32)
2. Detecting the tone of a note depending on where it is on the staff
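
A minimal Python sketch of how feature 2 could be computed, assuming a treble clef and that the staff's top-line y coordinate and line spacing are already known; the names and numbers here are illustrative, not taken from the actual demo:

# Hypothetical mapping from a notehead's vertical position to a pitch name.
# Diatonic steps from the top staff line (F5 in treble clef) going downward.
TREBLE_STEPS_DOWN = ["F5", "E5", "D5", "C5", "B4", "A4", "G4", "F4", "E4"]

def pitch_from_y(note_y, top_line_y, line_spacing):
    """Return a pitch name for a notehead centered at note_y.

    Consecutive lines and spaces are half a line_spacing apart, so the step
    index is the vertical offset divided by line_spacing / 2.
    """
    half_step = line_spacing / 2.0
    index = round((note_y - top_line_y) / half_step)
    if 0 <= index < len(TREBLE_STEPS_DOWN):
        return TREBLE_STEPS_DOWN[index]
    return None  # above or below the staff; ledger lines not handled here

# pitch_from_y(note_y=140, top_line_y=100, line_spacing=20)  ->  "B4" (middle line)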

Things to Work on:
1. There is a lot of lag between each gesture written by the user
 - I should use 2 threads for the application (see the sketch after this list)
 	Thread #1: Gets pen data and puts it in a queue
	Thread #2: Processes the gestures in the queue
		- Must be the same thread as the GUI application

2. The page isn't scaled well on the screen
 - Possibly revert to the landscape orientation

3. The convex hull of a square doesn't match well with square templates
 - This is because there are few points in the hull; we can try to manually resample the hull between segments.

4. It would be nice if the user could draw a circle and fill it in later.

"" 5. Connecting notes with horizontal beams""

Research Journal

* January 15, 2008

I finished the Music Notepad paper and got some good ideas from it including:
 1. The arbitrator concept in the system design
   a. That will be helpful to me when resolving conflicts
   b. With my system I need to analyze how closely gestures are being matched to multiple items
2. Tokens
     a. The idea of encapsulating feature information can be helpful in the later stages of my project
3. The Microsoft Word squiggly line feature
     a. There will be times when someone writes something that is ambiguous.  We can inform the user when they do this instead of having them search for the mistakes.
     b. We can add functionality so that the user can right-click on an ambiguous gesture and have it be defined and added to the recognition library.
4. Distinguishing between types of composition software (sequencing and notation)

It will be good for me to understand Fitts's Law (as it was mentioned in the paper).
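
For reference, the standard (Shannon) formulation of Fitts's Law predicts the time T to move to a target of width W at distance D as T = a + b * log2(D/W + 1), where a and b are empirically fitted constants; larger or closer targets are faster to hit.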

I was having problems detecting filled circles with the $1 recognizer; however, it seems to be working better now.  I am trying to decide whether I should add note strokes that are made without lifting up the pen.  I was writing out the different variations and it looks like there will be a lot of them.  It is something to ask Francois.

Today I did a major code refactoring; it is taking a long time.  I am adding a util class and a one dollar recognizer class.  I am just making changes to logically separate the code and make future work easier.

  • January 22, 2008
One of the important things for me to get right is detecting different kinds of lines.  So far, I can think of 3 different kinds of lines used with notes:

1. The beam of a quarter or half note
2. The slanted lines for eighth, sixteenth and thirty-second notes
3. The horizontal beams that will connect 1/8 and 1/16 notes together.

Currently, I am only detecting general lines.  I then determine, based on a line's context, whether it is a beam or a slanted line (the horizontal beam will be added later).
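
A small sketch of that context step: once a stroke is recognized as a general line, its orientation alone suggests which of the three kinds it is. The threshold and names below are illustrative assumptions, not values from the actual recognizer:

import math

def classify_line(p_start, p_end, slant_tolerance_deg=20):
    """Label a line stroke as 'vertical', 'horizontal', or 'slanted'.

    p_start and p_end are (x, y) endpoints of the detected line.
    """
    dx = p_end[0] - p_start[0]
    dy = p_end[1] - p_start[1]
    angle = abs(math.degrees(math.atan2(dy, dx)))  # 0 = horizontal, 90 = vertical
    if angle > 90:
        angle = 180 - angle
    if angle >= 90 - slant_tolerance_deg:
        return "vertical"      # candidate beam of a quarter/half note
    if angle <= slant_tolerance_deg:
        return "horizontal"    # candidate connecting beam (to be added later)
    return "slanted"           # candidate eighth/sixteenth/thirty-second line

# classify_line((100, 100), (102, 160))  ->  "vertical"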
 
Right now, I have a demo that implements the automaton that I initially posted.  Conditions:
1. The user must lift up the pen after each stroke (circle, line, slanted line, filled-in circle)
2. Before you can draw a slanted line, you must have drawn a vertical beam
3. Currently, you have to intersect the slanted line with the vertical beam for it to be detected.
4. Spatial data is not taken into account
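
A minimal sketch of that stroke-by-stroke automaton as a transition table. The states and legal transitions below are my reading of the conditions above (the intersection check and other spatial checks are left out); the real automaton in Automata.PNG may differ:

TRANSITIONS = {
    # (current state,     stroke type):     next state
    ("empty",             "circle"):        "notehead",         # open notehead
    ("empty",             "filled_circle"): "filled_notehead",
    ("notehead",          "line"):          "note_with_beam",   # vertical beam added
    ("filled_notehead",   "line"):          "note_with_beam",
    ("note_with_beam",    "slanted_line"):  "note_with_flag",   # only legal after a beam
    ("note_with_flag",    "slanted_line"):  "note_with_flag",   # more flags shorten the value
}

def advance(state, stroke_type):
    """Return the next state, or None if the stroke is not legal in this state."""
    return TRANSITIONS.get((state, stroke_type))

# advance("empty", "slanted_line")          ->  None (condition 2: beam comes first)
# advance("note_with_beam", "slanted_line") ->  "note_with_flag"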

I should meet with someone to ensure I am going in the right direction and making okay progress.
  • February 8, 2008
Right now I have a demo that can detect single notes using the bounding box method.  It isn't perfect, but I will continue to work on it.  Some insights that I have discovered over the past couple of weeks:

1. The $1 recognizer, as presented in the paper, cannot detect 1-dimensional shapes; you have to add a check.  This is listed in the limitations section, but I overlooked it.

2. When displaying the shapes in Excel, I need to make sure the scales are even; that is why the figures looked distorted during my last meeting with Francois.

3. For some reason, the convex hulls of squares (and filled squares) are detected as circles by the $1 recognizer.
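
A sketch of the resampling fix proposed in the Feb 11 meeting notes (item 3): a square's hull has only a handful of vertices, so interpolate extra points along each hull edge before handing it to the $1 recognizer. The spacing value and names are illustrative:

import math

def resample_hull(hull, spacing=5.0):
    """Insert evenly spaced points along every edge of a convex hull.

    hull is a list of (x, y) vertices in order; spacing is the target
    distance between consecutive output points.
    """
    dense = []
    n = len(hull)
    for i in range(n):
        (x0, y0), (x1, y1) = hull[i], hull[(i + 1) % n]
        length = math.hypot(x1 - x0, y1 - y0)
        steps = max(1, int(length // spacing))
        for s in range(steps):
            t = s / steps
            dense.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return dense

# A 4-vertex square hull becomes a dense outline, so the template matcher is
# comparing it against square and circle templates point-for-point rather
# than from just its corners.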

During the past couple of weeks, I have improved my code by refactoring and adding better visualizations for debugging.  I need to read more papers.  I need to have the demo for Francois on Monday.

Things that don't work:
My method for detecting filled circles will detect filled circles; however, it will also detect other gestures like a square, a multi-leaved clover (like petals of a flower), and other filled gestures like a filled square.  This is because the algorithm relies on the condition that most of the gesture's points lie inside the convex hull (rather than on it), which is true for filled circles; however, it is also true for gestures like hand-drawn squares.  Most squares drawn by hand do not have exactly straight lines, so the convex hull could contain just the corner points of the square while the rest of the square's points sit just inside the segments of the convex hull.  In addition, the convex hulls of squares seem to match the circle templates more closely than the square templates with the $1 recognizer.

In conclusion, this method will detect filled circles; however, if the user draws a gesture that is not a filled circle but still has most of its points inside its convex hull, there is a good possibility the method will break.
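
For concreteness, a sketch of the heuristic described above and its failure mode: measure what fraction of a stroke's points lie well inside the stroke's own convex hull rather than on its boundary. The hull code is a standard monotone chain; the thresholds and names are illustrative, not the actual implementation:

import math

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def dist_to_segment(p, a, b):
    """Distance from point p to segment ab."""
    (ax, ay), (bx, by), (px, py) = a, b, p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def looks_filled(points, boundary_eps=4.0, interior_ratio=0.6):
    """True if most points sit well inside the hull rather than on its boundary.

    This fires for scribbled-in noteheads, but also for any hand-drawn shape
    whose wobbly sides sag inside the hull (squares, clover shapes, etc.),
    which is exactly the failure mode described above.
    """
    hull = convex_hull(points)
    if len(hull) < 3:
        return False
    def min_dist(p):
        return min(dist_to_segment(p, hull[i], hull[(i + 1) % len(hull)])
                   for i in range(len(hull)))
    interior = sum(1 for p in points if min_dist(p) > boundary_eps)
    return interior / len(points) >= interior_ratio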