
Call for Papers

 

IEEE Transactions on Circuits and Systems for Video Technology

Special Issue on

Learning with Multimodal Data for Biomedical Informatics

 

Fast-growing biomedical and healthcare data of multiple modalities now
span scales ranging from molecules to individuals to populations.
Meanwhile, the heterogeneity and increasing diversity of these data
modalities present major barriers to their understanding, fusion, and
translation into effective clinical actions. Electronic health records
(EHRs), for example, are representative multimodal, multisource data
collections, containing not only traditional medical measurements but
also images, videos, audio, and free text. Other examples include
mobile health for remote patient care, with typical data modalities
such as patient- or caregiver-generated photos, self-reported pain
symptoms, and body temperature. The diversity of such information
sources, together with the increasing amount of medical data produced
by healthcare institutions each year, poses significant challenges for
data-driven biomedical analysis. While biomedical and healthcare
research has traditionally focused on structured measurement data, the
growing availability of novel data modalities has created a compelling
demand for new machine learning, image/video/audio/text processing,
and multi-modal fusion algorithms that tackle the unique challenges of
biomedical and healthcare data and allow decision-makers and
stakeholders to better interpret and exploit the data. This special
issue aims to catalyze synergies among image/video processing,
text/speech understanding, machine learning, multi-modal learning, and
other related fields, with the goals of (1) developing novel
data-driven models that accelerate knowledge discovery in biomedicine
through the seamless integration of medical data collected from
imaging systems, laboratory and wearable devices, and other related
medical devices; (2) promoting the development of new multi-modal
learning systems that enhance healthcare quality and patient safety;
and (3) promoting new applications in biomedical informatics that
leverage or benefit from the integration of multi-modal data and
machine learning.

 

LIST OF TOPICS

We welcome high-quality submissions presenting important new theories,
methods, applications, and insights at the intersection of image/video
processing, text/speech understanding, machine learning, multi-modal
learning, and biomedical informatics. Topics of interest include, but
are not limited to:


    Developing and applying cutting-edge machine learning and
    multi-modal learning techniques to tackle real-world medical and
    healthcare problems.

    Developing new machine learning approaches to improve the
    quantitative representation of high-dimensional medical images and
    videos for knowledge discovery in biomedicine.

    Designing novel data-fusion methods to integrate multiple data
    sources and modalities for enhanced visualization, effective
    biomarker extraction, and optimal medical decision making.

    Addressing challenges and roadblocks in biomedical informatics
    with respect to data-driven machine learning, such as imbalanced
    datasets, weakly structured or unstructured data, noisy and
    ambiguous labeling, and more.

    Other closely related technical advances in image processing,
    video processing, audio processing, text understanding, and
    multi-modal fusion, with application potential in biomedical
    informatics.

The applications of interest may include: (1) Computational Biology,
including the advanced interpretation of critical biological findings
using databases and cutting-edge computational infrastructure; (2)
Clinical Informatics, including the use of computation and data for
health care, spanning medicine, dentistry, nursing, pharmacy, and
allied health; (3) Public Health Informatics, including studies of
patients and populations to improve the public health system and to
elucidate epidemiology; (4) Mobile Health Applications, including the
use of mobile apps and wearable sensors for health management and
wellness promotion; and (5) Cyber-Informatics Applications, including
the use of social media data mining and natural language processing
for clinical insight discovery and medical decision making.

 


IMPORTANT DATES

    Paper Submission: October 15, 2020
    First Notification: December 15, 2020
    Revised Manuscript: January 15, 2021
    Notification of Acceptance: February 15, 2021
    Final Manuscript Due: March 30, 2021
    Tentative Publication Date: June 30, 2021

 

GUEST EDITORS

    Zhangyang (Atlas) Wang, The University of Texas at Austin
    Vishal Patel, Johns Hopkins University
    Bing Yao, Oklahoma State University
    Steve Jiang, University of Texas Southwestern Medical Center
    Huimin Lu, Kyushu Institute of Technology, Japan
    Yang Shen, Texas A&M University

 

SUBMISSION INSTRUCTIONS

    Read the Information for Authors at
    https://ieee-cas.org/pubs/tcsvt/information-authors

    Submit your manuscript at the TCSVT webpage
    (https://mc.manuscriptcentral.com/tcsvt) and follow the submission
    procedure. At Submission Step 1 (i.e., Type, Title, & Abstract),
    please select "Transactions Papers: Special Issue on Learning with
    Multimodal Data for Biomedical Informatics" as your manuscript
    type. Additionally, please clearly indicate in the
    cover letter that the manuscript is submitted to this special
    issue.

    Early submissions are welcome. We will start the review process as
    soon as we receive your valuable contributions.