Organizers: Branislav Kisacanin (Texas Instruments, USA), Jagadeesh Sankaran (Texas Instruments, USA)
Duration: full day
Abstract: This tutorial is a focused, vertical introduction to embedded computer vision. It covers processor choices for embedded vision, explains the embedded approach to vision, and shows how that approach influences algorithm implementation on programmable embedded processors. Software optimization techniques and system-level considerations are covered as well. The tutorial also provides useful resources and discusses emerging applications of this exciting field.
Outline:
- Part I – Presentation
  - Examples of applications of embedded vision
    - Automotive Vision
    - Video Surveillance
    - Body Tracking and Gesture Recognition
  - Technical challenges in embedded vision
    - High computational complexity and data bandwidth
    - Limited resources (memory, power, processing)
    - Extreme diversity of algorithms
    - Real-time constraints
  - Programmable processors for embedded vision
    - Why is programmability so important for embedded vision?
    - Basic programmable processor architecture model (CPU, DMA, internal RAM, internal cache, external RAM)
    - Valuable architecture extensions (SIMD, VLIW)
  - Embedded vision approach: separation of data I/O and computations
  - Examples of the embedded approach to low-level vision
    - Gaussian filter (see the C sketch after this outline)
    - Gaussian pyramid
    - Morphology
    - Median
  - Examples of the embedded approach to mid-level vision
    - Hough transform
    - Edge detection
    - Integral image (see the C sketch after this outline)
  - Software and system-level issues
    - Software optimization (compiler directives, intrinsics, dual buffering, cache optimization; a dual-buffering sketch follows this outline)
    - System-level decisions (exploiting synergies with optics, minimizing power dissipation, choosing an RTOS)
  - Recommended sources
  - Discussion, Q&A
- Part II – Labs
  - Installation of PC simulators
  - Embedded low-level vision examples and comparisons
  - Embedded mid-level vision examples and comparisons
  - Rapid prototyping of an embedded vision application
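
To illustrate the "separation of data I/O and computations" and "Gaussian filter" items above, here is a minimal C sketch of a 3x3 binomial (Gaussian-like) smoothing kernel that works on one output row at a time. It is not the tutorial's implementation: the function name, the unfiltered-border handling, and the assumption that the three input rows are already resident in internal RAM (e.g., placed there by DMA) are choices made purely for illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* 3x3 binomial (Gaussian-like) smoothing of one output row.
 * The caller supplies the row above, the row itself, and the row below,
 * all assumed to already be in fast internal RAM; getting them there
 * (e.g., by DMA) is handled outside this function, so the compute kernel
 * itself contains no data I/O. */
static void smooth_row_3x3(const uint8_t *above, const uint8_t *row,
                           const uint8_t *below, uint8_t *out, int width)
{
    out[0] = row[0];                  /* leave border pixels unfiltered */
    out[width - 1] = row[width - 1];
    for (int x = 1; x < width - 1; x++) {
        int sum =     above[x - 1] + 2 * above[x] +     above[x + 1]
                + 2 * row[x - 1]   + 4 * row[x]   + 2 * row[x + 1]
                +     below[x - 1] + 2 * below[x] +     below[x + 1];
        out[x] = (uint8_t)((sum + 8) >> 4);   /* divide by 16 with rounding */
    }
}

int main(void)
{
    /* Tiny 3x5 test image, just to show how the kernel is called. */
    uint8_t img[3][5] = {
        { 10, 10,  10, 10, 10 },
        { 10, 10, 100, 10, 10 },
        { 10, 10,  10, 10, 10 },
    };
    uint8_t out[5];

    smooth_row_3x3(img[0], img[1], img[2], out, 5);
    for (int x = 0; x < 5; x++)
        printf("%d ", out[x]);
    printf("\n");
    return 0;
}
```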
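
As a sketch of the "Integral image" item, the C code below builds an integral image in one pass and then recovers a rectangular sum with four lookups. The row-major uint8_t input, the 32-bit accumulators (adequate for typical frame sizes), and the helper names are assumptions for this example, not material from the tutorial.

```c
#include <stdint.h>
#include <stdio.h>

/* Build an integral image: ii[y*width + x] holds the sum of all pixels in
 * the rectangle from (0,0) to (x,y) inclusive.  One pass over the input,
 * using a running row sum, so each pixel is read exactly once. */
static void integral_image(const uint8_t *src, uint32_t *ii,
                           int width, int height)
{
    for (int y = 0; y < height; y++) {
        uint32_t row_sum = 0;
        for (int x = 0; x < width; x++) {
            row_sum += src[y * width + x];
            ii[y * width + x] = row_sum
                              + (y > 0 ? ii[(y - 1) * width + x] : 0);
        }
    }
}

/* Sum of the pixels inside the rectangle [x0,x1] x [y0,y1] (inclusive),
 * recovered from the integral image with four lookups. */
static uint32_t rect_sum(const uint32_t *ii, int width,
                         int x0, int y0, int x1, int y1)
{
    uint32_t a = (x0 > 0 && y0 > 0) ? ii[(y0 - 1) * width + (x0 - 1)] : 0;
    uint32_t b = (y0 > 0) ? ii[(y0 - 1) * width + x1] : 0;
    uint32_t c = (x0 > 0) ? ii[y1 * width + (x0 - 1)] : 0;
    uint32_t d = ii[y1 * width + x1];
    return d - b - c + a;
}

int main(void)
{
    uint8_t img[4 * 4] = {
         1,  2,  3,  4,
         5,  6,  7,  8,
         9, 10, 11, 12,
        13, 14, 15, 16,
    };
    uint32_t ii[4 * 4];

    integral_image(img, ii, 4, 4);
    /* Sum of the 2x2 block with corners (1,1)..(2,2): 6+7+10+11 = 34. */
    printf("%u\n", (unsigned)rect_sum(ii, 4, 1, 1, 2, 2));
    return 0;
}
```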
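
The "dual buffering" technique listed under software optimization can be sketched as a ping-pong loop: while the CPU processes one block held in internal RAM, the next block is fetched into the other buffer. In the C skeleton below, dma_start_fetch and dma_wait are hypothetical stand-ins for a vendor-specific DMA API (modeled here with a synchronous memcpy so the program runs on a PC), and the block sizes and the trivial process_block computation are purely illustrative.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define BLOCK  64   /* pixels moved per transfer (illustrative size) */
#define BLOCKS 8

/* Hypothetical stand-ins for a vendor DMA API.  On a real device,
 * dma_start_fetch would kick off an asynchronous transfer from external
 * to internal RAM and return immediately, and dma_wait would block until
 * that transfer completes.  Here they are modeled with a plain memcpy so
 * the skeleton compiles and runs on a PC. */
static void dma_start_fetch(uint8_t *dst, const uint8_t *src, int n)
{
    memcpy(dst, src, (size_t)n);
}

static void dma_wait(void) { /* nothing to wait for in this PC model */ }

/* Placeholder computation: sum the pixels of one block. */
static uint32_t process_block(const uint8_t *buf, int n)
{
    uint32_t sum = 0;
    for (int i = 0; i < n; i++)
        sum += buf[i];
    return sum;
}

int main(void)
{
    static uint8_t external_image[BLOCK * BLOCKS]; /* stands in for external RAM */
    uint8_t ping[BLOCK], pong[BLOCK];              /* two internal-RAM buffers */
    uint8_t *buffers[2] = { ping, pong };
    uint32_t total = 0;

    for (int i = 0; i < BLOCK * BLOCKS; i++)
        external_image[i] = (uint8_t)i;

    /* Prime the pipeline: fetch block 0 into the first buffer. */
    dma_start_fetch(buffers[0], external_image, BLOCK);

    for (int b = 0; b < BLOCKS; b++) {
        uint8_t *current = buffers[b & 1];
        uint8_t *next    = buffers[(b + 1) & 1];

        dma_wait();  /* make sure the current block has arrived */

        /* Start fetching the next block while the CPU works on this one;
         * overlapping I/O with computation is the point of dual buffering. */
        if (b + 1 < BLOCKS)
            dma_start_fetch(next, external_image + (b + 1) * BLOCK, BLOCK);

        total += process_block(current, BLOCK);
    }

    printf("total = %u\n", (unsigned)total);
    return 0;
}
```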
Material:
- Printed lecture notes.