PBVS 2017 : 13th IEEE Workshop on Perception Beyond the Visible Spectrum Call for Papers

PBVS 2017 : 13th IEEE Workshop on Perception Beyond the Visible Spectrum
- In conjunction with CVPR 2017

Link: http://www.otcbvs.com

When Jul 21, 2017
Where Honolulu, Hawaii 
Submission Deadline Mar 19, 2017 
Notification Due May 1, 2017 
Camera Ready Due Jun 4, 2017

-------Call For Papers-------

OBJECTIVE: 

The objective of this workshop is to highlight cutting-edge advances
and state-of-the-art work in the rapidly growing field of PBVS
(previously OTCBVS) along its three main axes: Algorithms, Sensor
Processing, and Applications. The field involves deep theoretical
research in sub-areas of image processing, machine vision, pattern
recognition, machine learning, robotics, and augmented reality within
and beyond the visible spectrum. It also provides a solid framework
for building advanced vision-based systems.

The computer vision community has traditionally focused on developing
vision algorithms for object detection, tracking, and classification
with visible-range sensors in daylight and office-like environments.
In the last decade, infrared (IR), depth, thermal, IMU, and other
non-visible sensors were used mainly in specialized areas such as
medicine and defense. The relatively low interest in these sensors
within computer vision was due in part to their high cost, low
resolution, poor image quality, lack of widely available data sets,
and/or lack of appreciation of the potential advantages of the
non-visible part of the spectrum. These historical objections are
becoming less relevant as sensor technology advances rapidly and
sensor costs drop dramatically. Imaging devices with high dynamic
range and high IR sensitivity have started to appear in a growing
number of applications, ranging from the defense and automotive
domains to home and office security. In addition, mobile hyperspectral
and mm-wave sensors are emerging.

In order to develop robust and accurate vision-based systems that
operate in and beyond the visible spectrum, not only must existing
methods and algorithms originally developed for the visible range be
improved and adapted, but entirely new algorithms that exploit the
potential advantages of non-visible ranges are also required. The
fusion of visible and non-visible modalities, such as radar and IR
images, depth images, IMU information, thermal and visible-spectrum
images, and acoustic images, is another dimension to explore for
higher-performing vision-based systems. Non-visible light is widely
employed in night-vision systems, and many detection and recognition
systems on the market today rely on physiological phenomena captured
at IR and thermal wavelengths. Artificially controlled lighting is
also a practical way to eliminate challenging ambient-light effects.

This 13th IEEE CVPR Workshop on Perception Beyond the Visible Spectrum
(PBVS-2017) creates connections between different communities in the
machine vision world, ranging from public research institutes to
private, defense, and federal laboratories. It brings together
academic pioneers and industrial and defense researchers and engineers
in the fields of computer vision, image analysis, pattern recognition,
machine learning, signal processing, sensors, and human-computer
interaction.



TOPICS OF INTEREST: 


#Sensing/Imaging Technologies 

IR/EO imaging systems 
Underwater sensing 
Hyperspectral/Satellite imaging 
Spectroscopy/Microscopy imaging 
LIDAR/LDV sensing 
Compressive sensing 
RADAR/SAR imaging 
RGBD sensing 


#Applications and Systems 

Surveillance and reconnaissance systems 
Autonomous vehicles 
Autonomous ships 
Autonomous grasping 
Vision-aided navigation 
Night/Shadow vision 
Sensing for agriculture and food safety 
Vision-based autonomous multi-copters 


#Theory and Algorithms 

Imagery/Video exploitation 
Object/Target tracking and recognition 
Feature extraction and matching 
Activity/Pattern learning and recognition 
Deep/Transfer learning, Domain adaptation 
Multimodal/Multi-sensor/INT fusion 
Multimodal Geo-registration 
3D Reconstruction and Shape modeling