Call for Papers & Announcing Challenge: Computer Vision for Microscopy
Image Analysis (CVMI 2021), to be held in conjunction with CVPR 2021
https://cvmi2021.github.io/index.html
This workshop aims to increase the visibility of, and interest in, this
challenging yet fruitful field, and to establish a platform for
in-depth idea exchange and collaboration. Authors are invited to
submit original and innovative papers. We aim for a broad scope;
topics of interest include but are not limited to:
Image acquisition
Image calibration
Image enhancement
Object detection
Image segmentation
Image stitching and registration
Event detection
Object tracking
Shape analysis
Texture analysis
Classification
3D image analysis
Big image data to knowledge
Image datasets and benchmarking
Accepted papers will be included in the CVPR proceedings, on IEEE
Xplore, and on the CVF website.
Paper Submission Deadline: March 23rd, 2021, 11:59:59 Pacific
Time. Link to the submission system:
https://cmt3.research.microsoft.com/CVMI2021/Submission/Index
We are pleased to announce the first Cell Tracking and Mitosis
Challenge (CTMC), which will be held as part of CVMI 2021.
CTMC introduces a human-annotated live-cell imaging dataset for cell
tracking that is larger and more diverse than prior cell tracking
datasets. The dataset consists of 86 live-cell imaging videos
representing 14 cell lines of various shapes and sizes, with
each cell annotated using bounding boxes. CTMC will be hosted on the
MOTChallenge platform and aims to bring the cell tracking and
MOTChallenge communities together to build more generalized
algorithms.
* The challenge will be live until Friday, May 21st [5:59 pm Central
Standard Time]
* Results will be announced at the CVMI workshop at CVPR 2021 on
Friday, June 25th
* The dataset with annotations can be downloaded from
https://motchallenge.net/data/CTMC-v1/
We look forward to your participation in advancing the
state of the art in cell tracking! For any questions regarding this
challenge or the dataset, please feel free to contact Samreen Anjum at
samreen@utexas.edu.
Thanks,
Mei