Robust Vision Challenge 2018

In conjunction with CVPR 2018

Robust Vision Challenge

The increasing availability of large annotated datasets such as Middlebury, PASCAL VOC, ImageNet, MS COCO, KITTI and Cityscapes has led to tremendous progress in computer vision and machine learning over the last decade. Public leaderboards make it easy to track the state-of-the-art in the field by comparing the results of dozens of methods side-by-side. While steady progress is made on each individual dataset, many of them are limited to specific domains. KITTI, for example, focuses on real-world urban driving scenarios, while Middlebury considers indoor scenes. Consequently, methods that are state-of-the-art on one dataset often perform worse on a different one or require substantial adaptation of the model parameters.

The goal of this workshop is to foster the development of vision systems that are robust and consequently perform well on a variety of datasets with different characteristics. Towards this goal, we propose the Robust Vision Challenge, in which performance on several tasks (e.g., reconstruction, optical flow, semantic/instance segmentation, single image depth prediction) is measured across a number of challenging benchmarks with different characteristics, e.g., indoors vs. outdoors, real vs. synthetic, sunny vs. bad weather, different sensors. We encourage submissions of novel algorithms, techniques currently under review, and methods that have already been published. The winner and the runner-up of each of the 6 challenges receive a prize and are invited to present their method at our CVPR 2018 workshop and to participate in the workshop dinner. Furthermore, we plan to publish the winning entries in a joint TPAMI paper.

1st Place: $1000, 2nd Place: $500
Presentation at our CVPR 2018 Workshop
Invitation to joint dinner at CVPR 2018
Invitation to co-author joint TPAMI submission

Challenges

ROB 2018 features 6 challenges: stereo, multi-view stereo (MVS), optical flow, single image depth prediction, semantic segmentation and instance segmentation. Participants are free to submit to a single challenge or to multiple challenges. For each challenge, the results of a single model must be submitted to all of its participating benchmarks, listed below. In each challenge, we will award a prize to the winner and the runner-up.

Challenge/benchmark matrix (columns): Stereo, MVS, Flow, Depth, Semantic, Instance
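
To make the single-model rule concrete, here is a minimal Python sketch. All names, datasets and interfaces below are hypothetical placeholders, not part of the official ROB 2018 development kit; the point is only that one fixed model, with identical parameters, produces the results submitted to every benchmark of a challenge, with no per-dataset tuning:

```python
from typing import Callable, Dict, Iterable, List, Tuple

def evaluate_across_benchmarks(
    model: Callable[[object], object],
    benchmarks: Dict[str, Tuple[Iterable[object], Callable[[List[object]], float]]],
) -> Dict[str, float]:
    """Run one fixed model on every benchmark; no per-dataset tuning or retraining."""
    scores = {}
    for name, (samples, metric) in benchmarks.items():
        predictions = [model(x) for x in samples]  # same model, same weights, for every dataset
        scores[name] = metric(predictions)         # each benchmark applies its own metric
    return scores

if __name__ == "__main__":
    # Toy stand-ins: the "model" and "benchmarks" are placeholders for illustration only.
    model = lambda x: 2 * x
    benchmarks = {
        "benchmark_a": ([1, 2, 3], lambda p: sum(p) / len(p)),
        "benchmark_b": ([10, 20], lambda p: float(max(p))),
    }
    print(evaluate_across_benchmarks(model, benchmarks))
```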

Important Dates

Feb 12, 2018 Release of training data and development kit
Mar 1, 2018 Release of test data and online evaluation platform
May 15, 2018 Submission deadline (6pm CET)
June 18, 2018 Robust Vision Challenge 2018 Workshop at CVPR 2018 in Salt Lake City

Sponsors


