Call for Papers
Eurographics Workshop on Intelligent Cinematography and Film Editing
The expressive use of virtual cameras, mise-en-scène, lighting and
editing (montage) within 3D synthetic environments shows great promise
for extending the communicative power of film and video into the
artificial environments of interactive games and virtual worlds.
At the same time, recent advances in computer-vision-based object,
actor and action recognition make it possible to envision novel
re-cinematography (re-lighting, re-framing) and automatic editing of
live-action footage.
The workshop series is intended to bridge the gap between these two
areas and to confront research being performed in both domains.
One common area of active research is the representation and
understanding of the story to be told and its relation to its
communicative goals. Another area is the extension of traditional film
grammar towards more immersive and interactive experiences, and the
emergence of virtual reality and augmented reality movie making.
This one-day workshop aims to bring together researchers and industry
experts working on all aspects of digital cinematography and film
editing, from fields including 3D graphics,
artificial intelligence, computer vision, visualization, interactive
narrative, cognitive and perceptual psychology, computational
linguistics, computational aesthetics and visual effects.
Drawing upon cutting edge research and technologies regarding both the
production and comprehension of cinematographic art-work, the workshop
seeks to offer a glimpse of the future of cinematography and film
editing, as well as a forum for discussing outstanding research
challenges.
The 6th edition of the workshop will take place at Villa Lumière, 25
rue du Premier Film, in Lyon, France, on April 24, 2017, immediately
before Eurographics 2017. The workshop will include a visit to the
Lumière Museum, which honours the contribution to filmmaking by
Auguste and Louis Lumière - inventors of the cinématographe and
fathers of the cinema.
Researchers are invited to submit one of the following:
* Regular paper (max 8 pages) reporting new work or new ideas in a
relevant research area.
* Short paper (max 4 pages) describing work in progress or a vision of
the near term future of intelligent cinematography.
Submission information will be updated on the workshop site.
Proceedings of the workshop will be published by EG Publishing in the
EG Digital Library.
Topics of interest
* Camera path planning and visibility
* Interactive and automatic camera control
* Automatic video editing
* Movie previsualization
* Game cinematics, cinematic replays, and machinima
* Virtual reality and augmented reality movie making
* Immersive and interactive cinema
* Natural user interfaces for cinematography and editing
* Expressive performance of virtual characters
* Cognitive models of film perception
* Automatic video analysis of movies
* Re-cinematography, re-lighting and re-framing
* Computer-assisted multi-camera production
* Evaluation methodologies and user experience
* Analysis of film style
Important dates
Paper submission: February 10, 2017.
Notification to authors: March 10, 2017.
Camera-ready deadline: March 24, 2017.
Workshop held: April 24, 2017.
The international workshop series is supervised by a steering
committee composed of Magy Seif El-Nasr (Northeastern University),
R. Michael Young (NC State University), Joseph Magliano (Northern
Illinois University), Paolo Burelli (Aalborg University Copenhagen),
Arnav Jhala (UC Santa Cruz), and Remi Ronfard (Inria Grenoble).
This 6th edition of the workshop is co-organized by William Bares
(College of Charleston, South Carolina, USA), Vineet Gandhi (IIIT,
Hyderabad), Quentin Galvane (Technicolor R&D, France) and Rémi Ronfard
(Inria / LJK, France).
Workshop chairs
* William Bares, College of Charleston, Charleston, SC, USA (firstname.lastname@example.org)
* Rémi Ronfard, INRIA / LJK, France (email@example.com)
Program committee (tentative)
* John Bateman, University of Bremen (firstname.lastname@example.org)
* Paolo Burelli, Aalborg University Copenhagen, Denmark (email@example.com)
* Peter Carr, Disney Research, Pittsburgh (firstname.lastname@example.org)
* Brad Cassell, NC State University, USA (email@example.com)
* Yun-Gyung Cheong, ITU Copenhagen, Denmark (firstname.lastname@example.org,email@example.com)
* Marc Christie, U. Rennes and INRIA, France (firstname.lastname@example.org)
* Michael Gleicher, University of Wisconsin, Madison, USA (email@example.com)
* Arnav Jhala, North Carolina State University, USA (firstname.lastname@example.org)
* Tsai-yen Li, National Cheng Chi University, Taiwan (email@example.com)
* Henry Lowood, Stanford University, USA (firstname.lastname@example.org)
* Joseph Magliano, Northern Illinois University, USA (email@example.com)
* Roberto Ranon, University of Udine, Italy (firstname.lastname@example.org)
* Magy Seif El-Nasr, Northeastern University (email@example.com)
* Alexander Sorkine-Hornung, Disney Research Zurich (firstname.lastname@example.org)
* I-Cheng Yeh, Yuan Ze University, Taiwan (email@example.com)
* Michael Young, University of Utah, USA (firstname.lastname@example.org)