CATS-Apr-17-2013

Title

Maximum Entropy Summary Trees

Speaker

After receiving his PhD from UC Berkeley, Howard Karloff taught at the University of Chicago and Georgia Tech before joining AT&T Labs--Research in 1999. An editor of ACM's Transactions on Algorithms and an ACM Fellow, he has served on the program committees of numerous conferences, chaired the program committee of the 1998 Symposium on Discrete Algorithms (SODA), was general chair of the 2012 Symposium on Theory of Computing (STOC 2012), and will be general chair of STOC 2014. He is the author of numerous journal and conference articles and of the Birkhäuser book "Linear Programming." His research interests span theoretical computer science but extend to more applied areas, such as databases and networking.


Abstract

Given a very large, node-weighted, rooted tree on, say, n nodes, if one has only enough space to display a k-node summary of the tree, what is the most informative way to draw it? We define a type of weighted tree that we call a "summary tree" of the original tree, which results from aggregating nodes of the original tree subject to certain constraints. We suggest that the best summary tree to use (among those with a fixed number of nodes) is the one that maximizes the information-theoretic entropy of a natural probability distribution associated with the summary tree, and we provide a (pseudopolynomial-time) dynamic-programming algorithm to compute this maximum entropy summary tree when the weights are integral. The result is an automated way to summarize large trees that retains as much information about them as possible while using (and displaying) only a fraction of the original node set. We also provide an additive approximation algorithm and a greedy heuristic that are faster than the optimal algorithm and that generalize to trees with real-valued weights.
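
To make the entropy criterion concrete, here is a minimal sketch in Python, assuming the "natural probability distribution" of a summary tree is the one obtained by normalizing its aggregate node weights (the abstract does not spell this out, and the function name is illustrative only): the more evenly a k-node summary spreads the original weight, the higher its entropy, and thus the more informative the drawing under this criterion.

 import math
 
 def summary_tree_entropy(weights):
     # Entropy (in bits) of the distribution obtained by normalizing
     # the node weights of a k-node summary tree.
     total = sum(weights)
     h = 0.0
     for w in weights:
         if w > 0:  # zero-weight nodes contribute nothing to the entropy
             p = w / total
             h -= p * math.log2(p)
     return h
 
 # Two hypothetical 4-node summaries of the same weighted tree:
 print(summary_tree_entropy([25, 25, 25, 25]))  # 2.00 bits (balanced aggregation)
 print(summary_tree_entropy([85, 5, 5, 5]))     # ~0.85 bits (lopsided aggregation)

Under this scoring, the maximum entropy summary tree is the feasible k-node aggregation whose induced weight distribution has the largest entropy.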

This is joint work with Ken Shirley and will appear at EuroVis 2013.