********************************************************************
                         CALL FOR PAPERS

   ACM Workshop on Interactive Multimedia on Mobile and Portable
                            Devices
            (In conjunction with ACM Multimedia 2011)
                    Scottsdale, Arizona, USA
********************************************************************

With the development of silicon technologies, mobile and portable electronic devices, such as personal computers, mobile phones, digital cameras, and PDAs, have become ubiquitous in people's daily lives. These devices provide multimedia sources for entertainment, communication, and more. Designing user interfaces for these products that enable natural, intuitive, and fun interaction is one of the main challenges facing the multimedia community. Since mobile and portable devices are usually equipped with multiple sensors (e.g., camera and microphone), how to exploit multimodal information for interaction has recently received much attention in both academia and industry. Nevertheless, interactive multimedia remains an under-explored field. Many challenges arise when moving to multimodal interaction: for example, how to annotate and search the huge volumes of data acquired by multiple sensors, especially in unconstrained end-user environments? How to effectively extract and select representative multimedia features for human behavior recognition? And how to choose a fusion strategy for multimodal data in a given application? To address these challenges, we must adapt existing approaches or find new solutions suitable for multimedia interaction on mobile and portable devices.

This workshop will bring together researchers from both academia and industry in domains including computer vision, audio and speech processing, machine learning, pattern recognition, communications, human-computer interaction, and media technology to share and discuss recent advances in interactive multimedia.
Topics include, but are not limited to:

• Multimedia description and markup
• Multimedia representation and annotation
• Multimedia search and retrieval
• Presence and environment sensing
• Face detection, tracking, and recognition
• Hand detection, tracking, and recognition
• Emotion/mood recognition
• Gesture/action/activity recognition
• Audio-visual recognition and interaction
• Novel interaction (accelerometer, touch screen, haptics, voice, etc.)
• Multimodal data modeling and fusion
• Multimedia content adaptation
• Context-aware services

Important dates
-----------------
Submission deadline: June 19, 2011
Notification of acceptance: July 30, 2011
Camera-ready due: September 5, 2011
Workshop: November 28 - December 1, 2011 (TBD)

Workshop Chairs
-----------------
Jiebo Luo, Kodak Research Laboratories, USA
Caifeng Shan, Philips Research, The Netherlands
Ling Shao, The University of Sheffield, UK
Minoru Etoh, NTT DOCOMO, Japan

Program Committee
-----------------
Xavier Binefa, University of Barcelona, Spain
Andrea Cavallaro, Queen Mary University of London, UK
Berna Erol, Ricoh Innovations, USA
Yun (Raymond) Fu, SUNY at Buffalo, USA
Ling Guan, Ryerson University, Canada
Aki Harma, Philips Research, The Netherlands
Winston Hsu, National Taiwan University, Taiwan
Alejandro Jaimes, Yahoo! Research, Spain
Tae-Kyun Kim, Imperial College London, UK
Qian Lin, HP Labs, USA
Alexander C. Loui, Kodak Research Labs, USA
Xiaoming Liu, GE Global Research, USA
Tao Mei, Microsoft Research Asia, China
Anton Nijholt, University of Twente, The Netherlands
Jean-Marc Odobez, IDIAP Research Institute, Switzerland
Yoichi Sato, University of Tokyo, Japan
Wolfgang Hürst, Utrecht University, The Netherlands
Shihong Lao, Omron, Japan