Advanced Structured Prediction, pages: 432, Neural Information Processing Series, MIT Press, November 2014 (Book)
The goal of structured prediction is to build machine learning models that predict relational information that itself has structure, such as being composed of multiple interrelated parts. These models, which reflect prior knowledge, task-specific relations, and constraints, are used in fields including computer vision, speech recognition, natural language processing, and computational biology. They can carry out such tasks as predicting a natural language sentence, or segmenting an image into meaningful components.
These models are expressive and powerful, but exact computation is often intractable. A broad research effort in recent years has aimed at designing structured prediction models and approximate inference and learning procedures that are computationally efficient. This volume offers an overview of this recent research in order to make the work accessible to a broader research community. The chapters, by leading researchers in the field, cover a range of topics, including research trends, the linear programming relaxation approach, innovations in probabilistic modeling, recent theoretical progress, and resource-aware learning.
In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops 2009 (CVPR Workshops 2009), ViSU 2009, pages: 22-29, IEEE Service Center, Piscataway, NJ, USA, 1st International Workshop on Visual Scene Understanding, June 2009 (Conference Paper)
In Pattern Recognition: Proceedings of the 30th DAGM Symposium, DAGM 2008, pages: 31-40, (Editors: Rigoll, G.), Springer, Berlin, Germany, 30th Annual Symposium of the German Association for Pattern Recognition, June 2008, Main Award DAGM 2008 (Conference Paper)
Most current methods for multi-class object classification and localization work as independent 1-vs-rest classifiers: they decide whether and where an object is visible in an image purely on a per-class basis. Joint learning of more than one object class would generally be preferable, since it would allow the use of contextual information such as co-occurrence between classes. However, this approach is usually not employed because of its computational cost.
In this paper we propose a method that combines the efficiency of single-class localization with a subsequent decision process that works jointly for all given object classes. By following a multiple kernel learning (MKL) approach, we automatically obtain a sparse dependency graph of relevant object classes on which to base the decision. Experiments on the PASCAL VOC 2006 and 2007 datasets show that the subsequent joint decision step clearly improves the accuracy compared to single-class detection.
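The idea of a sparse dependency graph over object classes can be illustrated with a minimal sketch. The snippet below is not the paper's MKL implementation; it is a hedged analogue in which per-class detector scores (here, synthetic random data) are combined by an L1-regularized linear model, so that the penalty drives most cross-class weights to zero and the surviving nonzero weights play the role of edges in a sparse dependency graph. All names and the data-generating setup are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Hypothetical stand-in for per-class detector scores s_k(x):
# n images, K object classes, one score per class and image.
rng = np.random.default_rng(0)
K, n = 5, 200
scores = rng.normal(size=(n, K))

# Synthetic ground truth for one target class: it co-occurs with
# class 0 and (more weakly) class 2, and is independent of the rest.
labels = (scores[:, 0] + 0.8 * scores[:, 2] > 0).astype(float)

# L1-regularized combination of the per-class scores: the penalty
# zeroes out irrelevant classes, analogous to how sparse MKL keeps
# only the kernels (classes) that help the joint decision.
model = Lasso(alpha=0.05)
model.fit(scores, labels)

# Indices with nonzero weight = edges of the sparse dependency graph.
deps = np.flatnonzero(np.abs(model.coef_) > 1e-6)
print("classes the decision depends on:", deps)
```

In this toy setup the recovered dependencies are the truly informative classes (0 and 2), while the unrelated classes are pruned away; in the paper's setting the analogous sparsity is obtained over kernels rather than raw score columns.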