4th International Workshop on Semantic Learning and Applications in Multimedia
in association with CVPR 2009

Workshop date (full-day): June 21, 2009
Paper submission: 3/16/09
Notification of acceptance: 3/30/09
Receipt of camera-ready copy: 4/13/09

The use of semantic knowledge in multimedia is rapidly becoming more widespread and significant. In areas such as multimedia content analysis and media integration, semantic cues and knowledge are being used to achieve performance that is not attainable by purely bottom-up, data-driven approaches. In many applications, meaningful multimedia content recognition is not possible without contextual, semantic support. However, many fundamental challenges still remain. This workshop will bring together an interdisciplinary group of researchers in computer vision, speech/music recognition, knowledge representation and ontologies, machine learning, natural language, and other areas to examine the issues and recent results in using semantic knowledge to enhance multimedia.

Recent progress in machine learning has enabled the rigorous management of uncertainty in large-scale reasoning problems, and this has stimulated the use of semantic methods and reasoning in multimedia. Simultaneously, the natural language and artificial intelligence communities have developed large computational models and databases of semantic knowledge. The multimedia communities are using both evidential reasoning methods and semantic knowledge bases to fuse multiple data sources for intelligent multimedia content analysis, integration, and delivery.

Papers are solicited in all disciplines related to the central theme, including but not limited to:

o use of knowledge bases/ontologies for multimedia problems
o new ontologies for visual objects, video events, etc.
o new ontologies for audio objects, audio scenes/events, etc.
o user-centric multimedia ontologies
o unsupervised learning of event ontologies
o automatic multimedia concept detection
o semantic representations of spatio-temporal data
o context-based recognition
o high-level event recognition
o semantic image, audio, music, and video annotation
o semantic event-based retrieval of audio/music/video
o content-based queries and use cases
o integration of vision and natural language
o learning vs. prior, structured knowledge
o probabilistic models for dynamic systems
o temporal logic in speech and vision
o multi-agent multi-threaded representations
o situational awareness through audio-visual perception
o intelligent media agents and middleware

PROGRAM

The program will include both invited talks from researchers working in multimedia-related fields and open-submission papers. In addition, the program will feature two keynote speeches and one panel discussion.

ORGANIZATION

General Chairs:
Tom Huang, University of Illinois at Urbana-Champaign
Qiang Ji, Rensselaer Polytechnic Institute
Jiebo Luo, Kodak Research Labs

Program Committee:
Kobus Barnard, University of Arizona
Serge Belongie, UCSD
Matthew Boutell, Rose-Hulman Institute of Technology
Daniel Ellis, Columbia University
Guoliang Fan, Oklahoma State University
Jianping Fan, University of North Carolina at Charlotte
Yun Fu, BBN Technologies
Alan Hanjalic, Delft University of Technology
Anthony Hoogs, Kitware
Xian-Sheng Hua, Microsoft Research Asia
Horace Ip, City University of Hong Kong
Ebroul Izquierdo, University of London
Svetlana Lazebnik, UNC
Jim Little, UBC
Mor Naaman, Rutgers University
Christopher Pal, University of Rochester
Nemanja Petrovic, Google, Inc.
Visvanathan Ramesh, Siemens Corporate Research
Nicu Sebe, University of Amsterdam
Rahul Sukthankar, Intel Research Pittsburgh
Qi Tian, University of Texas at San Antonio/MSRA
Antonio Torralba, MIT
George Tzanetakis, University of Victoria
Yi Wu, Intel Research
Dong Xu, NTU
Shuicheng Yan, NUS
Zhongfei Zhang, SUNY Binghamton
Song-Chun Zhu, UCLA

PAPER SUBMISSION

In keeping with the spirit of a workshop, submitted papers may emphasize intellectual risks and argue for ideas that do not yet have comprehensive experimental support. Papers therefore need not describe fully developed algorithms, methods, or results as would normally be required for acceptance at CVPR. Papers should be at most 8 pages in length, in the same format as CVPR papers. All accepted papers will be included in the electronic CVPR proceedings. Detailed submission information may be found at http://www.ecse.rpi.edu/slam09