----------------------------------------------------------------------------
We invite you to participate in the second workshop on Community Based 3D Content and Its Applications, which will be held on one day during the IEEE International Conference on Multimedia & Expo (ICME), July 9-13, 2012, in Melbourne, Australia.

http://vision.ia.ac.cn/ICMEworkshop/call%20for%20paper.pdf
----------------------------------------------------------------------------

AIMS AND SCOPE:

Community-contributed Internet content, such as Flickr photos and YouTube videos, has touched the lives of billions of people. 3D map content, including panoramic street views and mesh models of buildings and terrain, has greatly enriched the browsing experience of web map services such as Google Maps/Earth and Microsoft Bing Maps. With state-of-the-art computer vision techniques, it is now possible for lay users to create their own 3D content from a set of images or live video input. This emerging capability opens the possibility of augmenting static 3D map content with dynamic, user-generated content that is geometrically registered with 3D maps in a fully automatic manner.

Recent advances in mobile devices that embed cameras, GPS and other sensors show great potential to create a mobile Internet ecosystem in which 3D content is co-created, processed, integrated and consumed in a seamless manner. Given the expected pervasiveness of such mobile devices in the near future, this ecosystem could reach billions of users through applications such as mobile photo/video sharing, 3D scene retrieval, navigation and augmented reality.
AWARD:

A Nokia smartphone will be awarded to the best paper.

TOPICS:

* Devices: 3D cameras; GPS- and/or orientation-sensor-augmented cameras
* 3D Reconstruction: 3D reconstruction techniques for static scenes; 3D tracking and reconstruction from live video; non-static object reconstruction; indexing, searching and alignment of large-scale 3D data; content-based 3D retrieval and recognition; multi-view and multi-sensor imaging
* Visualization: 3D mesh, texture, point, and volume-based representation; object-based representation and segmentation; 3D motion animation; 3D scene browsing on mobile devices; 3D electronic maps
* Applications: augmented reality and virtual environments; mixing of virtual and real worlds; 3D user interaction; automatic video content analysis; web-based 3D map applications; 3D object tracking applications; content-based 3D retrieval and recognition

IMPORTANT DATES:

Paper submission deadline: 5th March, 2012
Notification of acceptance: 26th March, 2012
Camera-ready papers: 9th April, 2012

SUBMISSION:

The layout of submitted papers must adhere to the ICME paper format (see http://www.icme2012.org/PaperSubmission_index.php). Manuscripts should be no longer than 6 pages. Only electronic papers in PDF format are accepted; detailed submission instructions will be announced on the workshop website.

Reviewing is double blind: author names and affiliations must be omitted from the review version of the paper. If the paper cites the authors' own work (and identifies it as such), the references to the cited work should be omitted from the bibliography.
Online submission system: https://cmt.research.microsoft.com/ICMEWS2012/

WORKSHOP CHAIRS:

Lixin Fan, Nokia Research Center, Finland
Yihong Wu, Institute of Automation, Chinese Academy of Sciences, China
Esin Guldogan, Tampere University of Technology, Finland
Peter Sturm, INRIA, France
Jian Zhang, University of Technology Sydney, Australia