{"id":1159,"date":"2012-06-07T18:17:50","date_gmt":"2012-06-07T18:17:50","guid":{"rendered":"http:\/\/eccv2012.unifi.it\/?page_id=1159"},"modified":"2012-10-05T06:34:58","modified_gmt":"2012-10-05T06:34:58","slug":"tutorials","status":"publish","type":"page","link":"http:\/\/eccv2012.unifi.it\/program\/tutorials\/","title":{"rendered":"Tutorials"},"content":{"rendered":"

ECCV 2012 Tutorials

SUNDAY OCTOBER 7, 2012

T1 – Vision Applications on Mobile using OpenCV

Organizers: Gary Bradski (Industrial Perception), Victor Eruhimov (Itseez Corporation), Vadim Pisarevsky (Itseez Corporation)
Duration: half day
Abstract: It is forecast that 450 million camera-equipped smartphones will be sold in 2012, rising to 650 million units in 2013. Those with an interest in commercial applications of computer vision simply cannot afford to ignore this growth in "smart cameras" enabled by mobile devices. This tutorial will get you started with computer vision application development on mobile devices using OpenCV, and it is intended to be hands-on.
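As a rough illustration of the kind of OpenCV pipeline the tutorial is about (on phones one would use the Android/iOS bindings rather than Python), here is a minimal, hedged desktop sketch that detects FAST keypoints in a single frame; the file name frame.jpg is a placeholder, not material from the tutorial.

```python
import cv2

# Load one frame (a stand-in for a camera capture on a phone);
# "frame.jpg" is a hypothetical placeholder path.
img = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)
if img is None:
    raise FileNotFoundError("frame.jpg is a placeholder; point this at a real image")

# FAST corners: a detector cheap enough to run per-frame on phone-class hardware.
fast = cv2.FastFeatureDetector_create(threshold=25)
keypoints = fast.detect(img, None)

# Draw the detections for visual inspection.
vis = cv2.drawKeypoints(cv2.cvtColor(img, cv2.COLOR_GRAY2BGR),
                        keypoints, None, color=(0, 255, 0))
cv2.imwrite("frame_keypoints.jpg", vis)
print(f"{len(keypoints)} FAST keypoints detected")
```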

T2 – Internet Video Search

Organizers: Cees G.M. Snoek (Univ. of Amsterdam, NL), Arnold W.M. Smeulders (CWI, NL)
Duration: half day
Abstract: In this half-day tutorial we focus on the computer vision challenges in internet video search, present methods for achieving state-of-the-art performance while maintaining efficient execution, and indicate how to obtain spatiotemporal improvements in the near future. Moreover, we give an overview of the latest developments and future trends in the field on the basis of the TRECVID competition (the leading competition for video search engines, run by NIST), in which we have achieved consistent top-2 performance over the years, including the 2008, 2009, 2010 and 2011 editions. The tutorial is especially meant for researchers and practitioners who are new to the field of video search (introductory), people who have started in this direction (intermediate), and people who are interested in a summary of the state of the art in this exciting area (general interest).

T3 – Modern features: advances, applications, and software

Organizers: Andrea Vedaldi (Univ. of Oxford, UK), Jiri Matas, Krystian Mikolajczyk, Tinne Tuytelaars, Cordelia Schmid, Andrew Zisserman (Univ. of Oxford, UK)
Duration: half day
Abstract: This course will introduce local feature detectors and descriptors as foundational tools in a variety of state-of-the-art computer vision applications. The first part of the tutorial will cover popular covariant detectors (Harris, Laplacian, and Hessian corners and blobs, scale and affine adaptation, MSER, SURF, FAST, etc.) and descriptors (SIFT, SURF, BRIEF, LIOP, etc.), with particular emphasis on recent advances and additions to this set of tools. It will be shown how the various methods achieve different trade-offs in repeatability, speed, geometric accuracy, and applicability to different image content, in terms of their performance in benchmarks and applications (tracking, reconstruction, retrieval, stitching, text detection in the wild, etc.). The second part of the tutorial will review software for computing local features and for evaluating their performance automatically on benchmark data. In particular, two software resources will be introduced to the community for the first time: a novel extension to the popular open-source VLFeat library containing new reference implementations of covariant feature detectors, and a novel benchmarking package superseding the standard packages for the evaluation of covariant feature detectors and descriptors.
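The software the organizers will release (the VLFeat extension and the benchmarking package) is not sketched here; purely to illustrate what a covariant detector and a descriptor produce, the following hedged Python sketch uses OpenCV's implementations of two of the methods listed above (SIFT and MSER). The image path is a placeholder.

```python
import cv2

# Placeholder path; any natural image will do.
img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)
assert img is not None, "scene.jpg is a placeholder; supply a real image"

# Scale-covariant blob detector plus descriptor: SIFT.
sift = cv2.SIFT_create()
kpts, desc = sift.detectAndCompute(img, None)
print(f"SIFT: {len(kpts)} keypoints, descriptor dimension {desc.shape[1]}")

# Affine-covariant region detector: MSER.
mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(img)
print(f"MSER: {len(regions)} extremal regions")
```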

T4 – Multi-View Geometry and Computational Photography using Non-Classical Cameras

Organizers: Srikumar Ramalingam (MERL, USA), Amit Agrawal (MERL, USA)
Duration: half day
Abstract: This tutorial is meant as an introduction to the design, modeling and implementation of non-classical (multi-perspective) cameras for several computer vision and computational photography applications. The tutorial will provide an overall view of developing a complete system (capture, modeling, and synthesis/reconstruction) as well as sufficient detail on calibrating and modeling such non-central cameras. We hope to provide enough fundamentals to satisfy the technical specialist, as well as tools and software to aid graphics and vision researchers, including graduate students.

T5 – Embedded Vision on Programmable Processors – CANCELLED

Organizers: Branislav Kisacanin (Texas Instruments, USA), Jagadeesh Sankaran (Texas Instruments, USA)
Duration: full day – CANCELLED
Abstract: This tutorial is a focused, in-depth introduction to embedded vision. It will cover processor choices for embedded computer vision and explain the embedded approach to vision and how it influences algorithm implementation on programmable embedded processors. Software optimization techniques and system-level considerations will be included as well. The tutorial will provide useful resources and discuss emerging applications of this exciting field.

T6 – Sparse and Low-Rank Representation for Computer Vision: Theory, Algorithms, and Applications

Organizers: Yi Ma (Microsoft Research Asia, China), John Wright (Columbia University, USA), Allen Y. Yang (UC Berkeley, USA)
Duration: half day
Abstract: The recent vibrant study of sparse representation and compressive sensing has led to numerous groundbreaking results in signal processing and machine learning. In this tutorial, we will present a series of three talks giving a high-level overview of the theory, the algorithms, and the broad applications of these techniques in computer vision and pattern recognition. We will also point out ready-to-use MATLAB toolboxes with which participants can gain further hands-on experience with these topics.
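The toolboxes the organizers mention are MATLAB packages; as a language-neutral illustration of the basic sparse-recovery problem behind them (reconstructing a sparse vector from a few random linear measurements via an l1-regularized surrogate), here is a small synthetic sketch using scikit-learn's Lasso solver. All dimensions and parameters are arbitrary choices for the toy example.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Synthetic compressive-sensing setup: a k-sparse signal x0 in R^n,
# observed through m < n random linear measurements y = A @ x0.
n, m, k = 200, 80, 5
x0 = np.zeros(n)
x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x0

# l1-regularized least squares (Lasso) as a convex surrogate for the
# sparsest solution consistent with the measurements.
x_hat = Lasso(alpha=1e-3, max_iter=50_000).fit(A, y).coef_

print("recovered support:", np.nonzero(np.abs(x_hat) > 1e-2)[0])
print("true support:     ", np.nonzero(x0)[0])
```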

T7 – Additive Kernels and Explicit Embeddings for Large Scale Computer Vision Problems

Organizers: Jianxin Wu (Nanyang Technological University, Singapore), Andrea Vedaldi (Univ. of Oxford, UK), Subhransu Maji (TTI Chicago, USA), Florent Perronnin (Xerox Research Center Europe)
Duration: half day
Abstract: It is generally accepted in our community that, in many vision tasks, more training images usually lead to better performance. Furthermore, recent advances have shown that additive kernels and explicit embeddings are the best performers in most visual classification tasks, a fact that has been repeatedly verified by various papers and research-oriented public contests (e.g., the ImageNet Large Scale Visual Recognition Challenge). In this tutorial, we will introduce the theory, applications, algorithms, software, and practical issues of using additive kernels and explicit embeddings in various computer vision domains, especially when the problem scale is very large.
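For a concrete sense of what an explicit embedding buys at scale, here is a hedged sketch using scikit-learn's AdditiveChi2Sampler (an approximate explicit feature map for the additive chi-squared kernel) followed by a linear SVM; the data are synthetic stand-ins for non-negative histogram features, not material from the tutorial.

```python
import numpy as np
from sklearn.kernel_approximation import AdditiveChi2Sampler
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Toy non-negative "histogram" features and a simple synthetic label.
X = rng.random((500, 64))
y = (X[:, :8].sum(axis=1) > 4.0).astype(int)

# Explicit (approximate) feature map for the additive chi-squared kernel,
# followed by a linear SVM: training cost grows roughly linearly with the
# number of images, instead of quadratically as with a kernelized SVM.
clf = make_pipeline(AdditiveChi2Sampler(sample_steps=2),
                    LinearSVC(C=1.0, max_iter=10_000))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```

The same pipeline applies unchanged when X holds millions of bag-of-words histograms, which is the large-scale regime the tutorial targets.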

T8 – Using MATLAB for Computer Vision: Computer Vision System Toolbox and More

Organizers: Bruce Tannenbaum (MathWorks), Dima Lisin (MathWorks), Witek Jachimczyk (MathWorks)
Duration: half day
Abstract: In this tutorial, we will share practical information about Computer Vision System Toolbox as well as other MATLAB products appropriate for computer vision. This tutorial assumes some experience with MATLAB and Image Processing Toolbox. We will focus mostly on Computer Vision System Toolbox.

T9 – Similarity-Based Pattern Analysis and Recognition

Organizers: Edwin R. Hancock (Univ. of York, UK), Vittorio Murino (IIT, Italy), Marcello Pelillo (Univ. of Venice, Italy), Richard Wilson (Univ. of York, UK)
Duration: full day
Abstract: The presentation will revolve around two main themes, which correspond to the two fundamental questions that arise when abandoning the realm of vectorial, feature-based representations: how can one obtain suitable similarity information from data representations that are more powerful than, or simply different from, the vectorial? And how can similarity information be used to perform learning and classification tasks? We shall assume no pre-existing knowledge of similarity-based techniques by the audience, thereby making the tutorial self-contained and understandable by a non-expert. The tutorial will commence with a clear overview of the basics of how dissimilarity data arise and how they can be characterized as a prerequisite to analysis. We will focus in detail on the differences between Euclidean and non-Euclidean dissimilarities, and in particular on the causes of non-Euclidean artifacts, how to test for them and, when possible, how to correct for them. With the basic definitions of dissimilarity in hand, we will move on to analysis in the dissimilarity domain: we will show how to derive dissimilarities for non-vectorial data, how to impose geometricity on such data via embedding, and how to learn in the dissimilarity domain. Finally, we will illustrate how these ideas can be utilised in the computer vision domain, with particular emphasis on the dissimilarity representation of shape.
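One standard way to test a dissimilarity matrix for non-Euclidean artifacts (not necessarily the exact procedure the presenters will use) is to inspect the eigenvalues of the doubly centered squared-dissimilarity matrix, as in classical multidimensional scaling: negative eigenvalues mean no exact Euclidean embedding exists. A minimal NumPy sketch:

```python
import numpy as np

def is_euclidean(D, tol=1e-9):
    """Test whether a symmetric dissimilarity matrix D is Euclidean.

    D embeds isometrically in a Euclidean space iff
    B = -1/2 * J (D*D) J is positive semidefinite, where J is the
    centering matrix; negative eigenvalues of B are the
    'non-Euclidean artifacts' the tutorial discusses.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    eigvals = np.linalg.eigvalsh(B)
    return eigvals.min() >= -tol, eigvals

# Sanity check: pairwise distances of random points are Euclidean by construction.
pts = np.random.default_rng(0).standard_normal((6, 3))
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
ok, spectrum = is_euclidean(D)
print("passes the Euclidean test:", ok)
```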