Call for Papers: PAMI Special Issue on Egocentric Perception

Important Dates

Submission period: January 1-30, 2020

Author Notification: May 5, 2020

Revised Papers Due: July 20, 2020

Final Notification: September 20, 2020

Aims and Scope

Wearable devices capable of acquiring and processing images and video
from a first-person perspective, often synchronized with other sensory
data such as audio, inertial measurements, and GPS signals, have become
increasingly common in recent years. The past few years have witnessed
a shift from research prototypes to commercial products such as Google
Glass, Microsoft HoloLens, and Magic Leap One. Such commercial devices
promise to unleash the power of Computer Vision and Machine Learning
for user-centric applications and could soon be able to automatically
understand what camera wearers are doing, manipulating, or attending
to. They will also be able to recognize the surrounding scene,
understand gestures and social relationships, and enhance everyday
activities such as sports, education, and entertainment.

While Computer Vision and Machine Learning have made significant
advances, egocentric perception introduces multiple challenges that
need to be addressed by the research community. For instance, wearable
systems must correctly handle the inherently multimodal nature of
egocentric information, which, apart from images and video, may include
depth, gaze, audio, geolocation, and inertial data. Given this huge
quantity of data, questions such as what to interpret and what to
ignore, and how captured information can be turned into useful guidance
or summaries, become central. Progress on these questions has long been
hindered by the lack of large-scale datasets in egocentric vision.

Over the past few years, new datasets have been proposed to address
tasks such as egocentric video summarization, egocentric localization
and place recognition, egocentric object detection, and action
recognition and anticipation. Among these, the introduction of
large-scale datasets such as EgoSum+gaze, EGTEA Gaze+, Charades-Ego,
and EPIC-Kitchens has pushed data-driven learning in egocentric vision
to new heights. In particular, the largest dataset, EPIC-KITCHENS2018,
and the related challenges on egocentric action recognition and
anticipation have attracted significant attention.

The aim of this special issue is to gather recent advances in the
field, bringing together the different communities relevant to
Egocentric Perception, including Computer Vision, Machine Learning,
and Multimedia.

Topics of interest include (but are not limited to):

    Egocentric vision for human action analysis
    Egocentric vision for object/event recognition
    Egocentric vision for summarization
    Egocentric vision for social interaction and human behavior understanding
    Egocentric vision for robotics
    Anticipating future actions, objects, and interactions from egocentric vision
    Head-mounted eye tracking and gaze estimation
    Egocentric perception for attention modelling and next fixation prediction
    Egocentric vision for interactive AR/VR
    Egocentric vision for human-computer interactions
    Egocentric vision for daily life and activity monitoring
    Egocentric vision for augmented human performance
    Multimodal perception for first-person video
    Benchmarking and quantitative evaluation with human subject experiments

Paper Submission and Review

Submitted papers must conform to the author guidelines available on
the PAMI website. Authors should submit full papers online through the
PAMI submission site, selecting the option that indicates this special
issue: "S.I.: Egocentric Perception".

Submissions must be relevant to one of the topics of the Special Issue
and must represent original material that has not been published
elsewhere and is not under review for another refereed publication. If
any portion of your submission has previously appeared in, or will
appear in, a conference proceeding, you should state this at the time
of submission, make sure that the submission references the conference
publication, and supply a copy of the conference version(s). Please
also provide a brief description of the differences between the
submitted manuscript and the preliminary version(s). You must select
the appropriate designation for each file during the submission process
to help the guest editors and reviewers distinguish between the files.

Submissions will be evaluated by at least three independent reviewers
on the basis of relevance to the special issue, originality,
significance of contribution, technical quality, scholarship, and
quality of presentation. The number of papers appearing in the special
issue depends on quality alone; there is no upper limit. The editors
reserve the right to reject without review any submission deemed to be
outside the scope of the special issue. Authors are welcome to contact
the special issue editors with questions about scope before preparing a
submission.

Guest Editors 

Giovanni Maria Farinella
Department of Mathematics and Computer Science
University of Catania, Italy
Email: gfarinella@dmi.unict.it

David Crandall
School of Informatics, Computing, and Engineering
Indiana University, USA
Email: djcran@indiana.edu

Dima Damen
Department of Computer Science
University of Bristol, UK
Email: Dima.Damen@bristol.ac.uk

Antonino Furnari
Department of Mathematics and Computer Science
University of Catania, Italy
Email: furnari@dmi.unict.it

Kristen Grauman
Department of Computer Science
University of Texas at Austin, USA
Email: grauman@cs.utexas.edu