Tutorials

ECCV 2012 Tutorials

SUNDAY OCTOBER 7, 2012

T1 – Vision Applications on Mobile using OpenCV

Organizers: Gary Bradski (Industrial Perception), Victor Eruhimov (Itseez Corporation), Vadim Pisarevsky (Itseez Corporation)
Duration: half day
Abstract: It is forecast that 450 million camera-equipped smartphones will be sold in 2012, increasing to 650 million units in 2013. Those with interests in commercial applications of computer vision simply cannot afford to ignore this growth in "smart cameras" enabled by mobile devices. This tutorial will get you going in computer vision application development on mobile devices using OpenCV. This tutorial is intended to be hands-on.

T2 – Internet Video Search

Organizers: Cees G.M. Snoek (Univ. of Amsterdam, NL), Arnold W.M. Smeulders (CWI, NL)
Duration: half day
Abstract: In this half-day tutorial we focus on the computer vision challenges in internet video search, present methods to achieve state-of-the-art performance while maintaining efficient execution, and indicate how to obtain spatiotemporal improvements in the near future. Moreover, we give an overview of the latest developments and future trends in the field on the basis of the TRECVID competition – the leading competition for video search engines, run by NIST – in which we have achieved consistent top-2 performance over the years, including the 2008, 2009, 2010 and 2011 editions. This half-day tutorial is especially meant for researchers and practitioners who are new to the field of video search (introductory), people who have started in this direction (intermediate), and people who are interested in a summary of the state of the art in this exciting area (general interest).

T3 – Modern features: advances, applications, and software

Organizers: Andrea Vedaldi (Univ. of Oxford, UK), Jiri Matas, Krystian Mikolajczyk, Tinne Tuytelaars, Cordelia Schmid, Andrew Zisserman (Univ. of Oxford, UK)
Duration: half day
Abstract: This course will introduce local feature detectors and descriptors as foundational tools in a variety of state-of-the-art computer vision applications. The first part of the tutorial will cover popular co-variant detectors (Harris, Laplacian, Hessian corners and blobs, scale and affine adaptation, MSER, SURF, FAST, etc.) and descriptors (SIFT, SURF, BRIEF, LIOP, etc.), with a particular emphasis on recent advances and additions to this set of tools. It will be shown how the various methods achieve different trade-offs in repeatability, speed, geometric accuracy, and applicability to different image contents, in terms of their performance in benchmarks and applications (tracking, reconstruction, retrieval, stitching, text detection in the wild, etc.). The second part of the tutorial will review software for computing local features and evaluating their performance automatically on benchmark data. In particular, two software resources will be introduced to the community for the first time: a novel extension to the popular open-source VLFeat library containing new reference implementations of co-variant feature detectors; and a novel benchmarking software superseding standard packages for the evaluation of co-variant feature detectors and descriptors.
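
As a flavour of the detectors the first part covers, here is a minimal Harris corner response in plain NumPy. This is our own illustrative sketch, not the tutorial's reference code: the 3x3 box smoothing (standing in for a Gaussian window), the constant k = 0.04, and the synthetic test image are all assumptions.

```python
import numpy as np

def box3(a):
    """3x3 box filter (a crude stand-in for Gaussian window smoothing)."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2 per pixel."""
    Iy, Ix = np.gradient(img.astype(float))
    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2

# A white square on a black background: the four corners should dominate
# the response, while edges give negative R and flat regions give zero.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
r, c = np.unravel_index(np.argmax(R), R.shape)
```

In practice one would use a library call (e.g. OpenCV's cv2.cornerHarris); the scale- and affine-adapted variants discussed in the tutorial build on exactly this second-moment-matrix machinery.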

T4 – Multi-View Geometry and Computational Photography using Non-Classical Cameras

Organizers: Srikumar Ramalingam (MERL, USA), Amit Agrawal (MERL, USA)
Duration: half day
Abstract: This tutorial is meant as an introduction to the design, modeling and implementation of non-classical (multi-perspective) cameras for several computer vision and computational photography applications. The tutorial will provide an overall view of developing a complete system (capture, modeling, and synthesis/reconstruction), as well as sufficient details for calibrating and modeling such non-central cameras. We hope to provide enough fundamentals to satisfy the technical specialist, as well as tools and software to aid graphics and vision researchers, including graduate students.

T5 – Embedded Vision on Programmable Processors – CANCELLED

Organizers: Branislav Kisacanin (Texas Instruments, USA), Jagadeesh Sankaran (Texas Instruments, USA)
Duration: full day
Abstract: This tutorial is a focused, vertical introduction to embedded computer vision. It will teach processor choices for embedded computer vision, explain the embedded approach to vision and how it influences algorithm implementation on programmable embedded processors, and cover software optimization techniques and system-level considerations. The tutorial will provide useful resources and discuss emerging applications of this exciting field.

T6 – Sparse and Low-Rank Representation for Computer Vision — Theory, Algorithms, and Applications

Organizers: Yi Ma (Microsoft Research Asia, China), John Wright (Columbia University, USA), Allen Y. Yang (UC Berkeley, USA)
Duration: half day
Abstract: The recent vibrant study of sparse representation and compressive sensing has led to numerous groundbreaking results in signal processing and machine learning. In this tutorial, we will present a series of three talks providing a high-level overview of the theory, the algorithms, and broad applications to computer vision and pattern recognition. We will also point out ready-to-use MATLAB toolboxes with which participants can acquire hands-on experience on these topics.
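
To make the "sparse representation" theme concrete, here is a sketch of ISTA (iterative soft-thresholding) for the l1-regularised least-squares problem min_x 0.5||Ax - b||^2 + lam||x||_1, one of the standard algorithms in this area. The dimensions, step size and random data below are our own illustrative choices, not material from the tutorial (whose examples are in MATLAB):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (element-wise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Iterative Soft-Thresholding for min_x 0.5||Ax-b||^2 + lam||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # Gradient step on the quadratic, then shrinkage for the l1 term.
        x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))          # underdetermined system
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]     # a 3-sparse ground truth
b = A @ x_true
x_hat = ista(A, b, lam=0.05)
```

Despite having only 30 measurements for 60 unknowns, the l1 penalty drives the recovered x_hat toward the sparse generating signal, which is the phenomenon the theory talks formalize.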

T7 – Additive Kernels and Explicit Embeddings for Large Scale Computer Vision Problems

Organizers: Jianxin Wu (Nanyang Technological University, Singapore), Andrea Vedaldi (Univ. of Oxford, UK), Subhransu Maji (TTI Chicago, USA), Florent Perronnin (Xerox Research Center Europe)
Duration: half day
Abstract: It is generally accepted in our community that, in many vision tasks, more training images usually lead to better performance. Furthermore, recent advances have shown that additive kernels and explicit embeddings are the best performers in most visual classification tasks – a fact that has been repeatedly verified by various papers and research-oriented public contests (e.g., the ImageNet Large Scale Visual Recognition Challenge). In this tutorial, we will introduce the theory, applications, algorithms, software, and practical issues of using additive kernels and explicit embeddings in various computer vision domains, especially when the problem scale is very large.
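
As a sketch of what an explicit embedding looks like, the NumPy code below implements our own minimal version of a homogeneous kernel map for the additive chi-squared kernel k(x, y) = 2xy/(x + y), whose spectrum is kappa(w) = sech(pi*w); the sampling period L and order n are illustrative choices, and the code is not the organizers' reference implementation:

```python
import numpy as np

def chi2_feature_map(x, n=3, L=0.5):
    """Finite-dimensional embedding psi(x) with psi(x).psi(y) ~= 2xy/(x+y).

    Sampled closed-form feature map for homogeneous additive kernels;
    kappa(w) = sech(pi*w) is the chi-squared kernel's spectrum.
    Assumes x > 0 (e.g. a histogram bin value).
    """
    x = float(x)
    kappa = lambda w: 1.0 / np.cosh(np.pi * w)
    feats = [np.sqrt(x * L * kappa(0.0))]
    for j in range(1, n + 1):
        scale = np.sqrt(2.0 * x * L * kappa(j * L))
        feats.append(scale * np.cos(j * L * np.log(x)))
        feats.append(scale * np.sin(j * L * np.log(x)))
    return np.array(feats)

# The (2n+1)-dimensional dot product approximates the exact additive
# kernel, so a fast linear SVM on psi(x) mimics a slow chi2-kernel SVM.
x, y = 0.3, 0.7
exact = 2 * x * y / (x + y)
approx = chi2_feature_map(x) @ chi2_feature_map(y)
```

For a full descriptor, the map is applied to each histogram dimension and the results concatenated, which is what makes the kernel "additive" and the resulting linear training large-scale friendly.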

T8 – Using MATLAB for Computer Vision: Computer Vision System Toolbox and More

Organizers: Bruce Tannenbaum (MathWorks), Dima Lisin (MathWorks), Witek Jachimczyk (MathWorks)
Duration: half day
Abstract: In this tutorial, we will share practical information about Computer Vision System Toolbox, as well as other MATLAB products appropriate for computer vision, with the primary focus on Computer Vision System Toolbox. This tutorial assumes some experience with MATLAB and Image Processing Toolbox.

T9 – Similarity-Based Pattern Analysis and Recognition

Organizers: Edwin R. Hancock (Univ. of York, UK), Vittorio Murino (IIT, Italy), Marcello Pelillo (Univ. of Venice, Italy), Richard Wilson (Univ. of York, UK)
Duration: full day
Abstract: The presentation will revolve around two main themes, corresponding to the two fundamental questions that arise when abandoning the realm of vectorial, feature-based representations: How can one obtain suitable similarity information from data representations that are more powerful than, or simply different from, the vectorial? And how can similarity information be used to perform learning and classification tasks? We shall assume no pre-existing knowledge of similarity-based techniques by the audience, thereby making the tutorial self-contained and understandable by a non-expert. The tutorial will commence with a clear overview of the basics of how dissimilarity data arise and how they can be characterized as a prerequisite to analysis. We will focus in detail on the differences between Euclidean and non-Euclidean dissimilarities, and in particular the causes of non-Euclidean artifacts, how to test for them and, when possible, correct for them. With the basic definitions of dissimilarity to hand, we will move on to analysis in the dissimilarity domain: we will show how to derive dissimilarities for non-vectorial data, how to impose geometricity on such data via embedding, and how to learn in the dissimilarity domain. Finally, we will illustrate how these ideas can be utilised in the computer vision domain, with particular emphasis on the dissimilarity representation of shape.
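
The "test for non-Euclidean artifacts" step admits a compact sketch: classical double centering turns squared dissimilarities into a Gram matrix, whose negative eigenvalues reveal (and, by clipping, can correct) non-Euclidean behaviour. Everything below is our own minimal illustration, not the presenters' code:

```python
import numpy as np

def double_center(D):
    """Gram matrix B = -0.5 * J D^2 J from a dissimilarity matrix D.

    If D is Euclidean, B is positive semidefinite; negative eigenvalues
    of B are the classic signature of non-Euclidean artifacts.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n    # centering matrix
    return -0.5 * J @ (D ** 2) @ J

def clip_correction(B):
    """One common fix: clip negative eigenvalues to zero."""
    w, V = np.linalg.eigh(B)
    return (V * np.maximum(w, 0.0)) @ V.T

# A triangle-inequality violation: d(1,2) = d(1,3) = 1 but d(2,3) = 3.
# No point configuration in any Euclidean space realizes these distances.
D_bad = np.array([[0., 1., 1.],
                  [1., 0., 3.],
                  [1., 3., 0.]])
eigs = np.linalg.eigvalsh(double_center(D_bad))
```

The clipped Gram matrix is positive semidefinite and therefore embeddable, at the cost of distorting the original dissimilarities, one of the trade-offs the tutorial examines.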