Robust Vision Challenge

The Robust Vision Challenge 2018 was a full-day event held in conjunction with CVPR 2018 in Salt Lake City. Our workshop comprised talks by the winning teams of each challenge as well as three invited keynote talks by renowned experts in the field. Videos of the talks are available on YouTube.

Program (June 18, 2018, Room 355 - C)

8:45 Introduction (Andreas Geiger, Oliver Zendel)
9:00 Invited Talk (chair: Oliver Zendel)
   - Judy Hoffman (UC Berkeley): Making our Models Robust to Changing Visual Environments
9:45 Session 1: Stereo (chair: Daniel Scharstein)
   - Yi Zhu (UCM): Stereo Matching Using Multi-scale Feature Constancy
   - Eddy Ilg (Uni Freiburg): Occlusions, Motion and Depth Boundaries with a Generic Network for Optical Flow, Disparity, or Scene Flow
10:30 Coffee Break
11:00 Session 2: Multiview Stereo (chairs: Daniel Scharstein, Johannes Schönberger)
11:30 Session 3: Optical Flow (chair: Torsten Sattler)
   - Deqing Sun (NVIDIA): PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume
   - Daniel Maurer (Uni Stuttgart): ProFlow: Learning to Predict Optical Flow
12:15 Lunch Break
13:30 Invited Talk (chair: Carsten Rother)
   - Uwe Franke (Daimler AG): 30 Years Fighting for Robustness
14:15 Session 4: Single Image Depth Prediction (chair: Angela Dai)
   - Mingming Gong (CMU): Deep Ordinal Regression Network for Monocular Depth Estimation
   - Ruibo Li (HUST): Deep Attention-based Classification Network for Robust Depth Prediction
15:00 Session 5: Semantic Segmentation (chair: Matthias Niessner)
   - Peter Samuel Rota Bulo (Mapillary): In-Place Activated BatchNorm for Memory-Optimized Training of DNNs
   - Marin Oršić (Uni Zagreb): Ladder-DenseNet Architecture for Robust Semantic Segmentation
15:45 Coffee Break
16:00 Invited Talk (chair: Matthias Niessner)
   - Stefan Roth (TU Darmstadt): Robust Scene Analysis: Energy-based models, deep learning, and something in between
16:45 Session 6: Instance Segmentation (chair: Jonas Uhrig)
   - Shou-Yao Roy Tseng (NTHU): Non-local RoI for Instance Segmentation
   - Hsien-Tzu Cheng (NTHU): Improved MaskRCNN
17:30 Discussion and Closing (Andreas Geiger)
18:30 Dinner (Winners, Sponsors, Organizers)

Invited Speakers

Judy Hoffman is a postdoctoral researcher at UC Berkeley. Her research lies at the intersection of computer vision and machine learning, with a specific focus on semi-supervised learning algorithms for domain adaptation and transfer learning. She received a PhD in Electrical Engineering and Computer Science from UC Berkeley in 2016. She is the recipient of the NSF Graduate Research Fellowship, the Rosalie M. Stern Fellowship, and the Arthur M. Hopkin Award. She is also a founder of the Women in Computer Vision (WiCV) workshop, co-located annually with CVPR.


Uwe Franke received his Diploma degree and his PhD degree, both in electrical communications engineering, from Aachen Technical University in 1983 and 1988, respectively. Since 1989 he has been with Daimler Research and Development. He developed Daimler's lane departure warning system (Spurassistent) and has been working on stereo vision since 1996. Since 2000 he has been head of Daimler's image understanding group. The algorithms developed by this group form the basis of Daimler's stereo-camera-based safety systems, which have been commercially available in mid- and upper-class Mercedes-Benz vehicles since 2013.


Stefan Roth received the Diplom degree in Computer Science and Engineering from the University of Mannheim, Germany, in 2001. In 2003 he received the ScM degree in Computer Science from Brown University, and in 2007 the PhD degree in Computer Science from the same institution. Since 2007 he has been on the faculty of Computer Science at Technische Universität Darmstadt, Germany (Juniorprofessor 2007-2013, Professor since 2013). His research interests include probabilistic and deep learning approaches to image modeling, motion estimation and tracking, object recognition, and scene understanding. He has received several awards, including honorable mentions for the Marr Prize at ICCV 2005 (with M. Black) and ICCV 2013 (with C. Vogel and K. Schindler), the Olympus-Prize 2010 of the German Association for Pattern Recognition (DAGM), and the Heinz Maier-Leibnitz Prize 2012 of the German Research Foundation (DFG). In 2013, he was awarded a Starting Grant of the European Research Council (ERC).

Stereo Leaderboard

Rank  Method           Ranks per benchmark  Notes
1     iResNet_ROB      6 / 2 / 2
2     DN-CSS_ROB       3 / 7 / 1
3     DLCB_ROB         5 / 4 / 3
4     NaN_ROB          2 / 5 / 9
4     PSMNet_ROB       10 / 1 / 5
6     NOSS_ROB         1 / 16 / 4           Submitted by Anonymous
7     LALA_ROB         11 / 3 / 8           Submitted by Anonymous
8     CBMV_ROB         4 / 11 / 7
8     XPNet_ROB        8 / 8 / 6            Submitted by Anonymous
10    FBW_ROB          14 / 6 / 12          Submitted by ()
11    SGM_ROB          7 / 13 / 13
12    MSMD_ROB         13 / 9 / 15          Submitted by Anonymous
12    PDISCO_ROB       16 / 10 / 11         Submitted by Anonymous
12    PWCDC_ROB        14 / 12 / 10         Submitted by Anonymous
15    WCMA_ROB         12 / 14 / 13
16    ELAS_ROB         9 / 16 / 16          Baseline - Submitted by Thomas Schöps (ROB Team)
17    DPSimNet_ROB     17 / 15 / 17         Submitted by Anonymous
18    MEDIAN_ROB       19 / 18 / 18         Baseline - Submitted by Thomas Schöps (ROB Team)
19    AVERAGE_ROB      18 / 19 / 19         Baseline - Submitted by Thomas Schöps (ROB Team)

MVS Leaderboard

Flow Leaderboard

Rank  Method           Ranks per benchmark  Notes
1     PWC-Net_ROB      2 / 2 / 2 / 1
2     ProFlow_ROB      1 / 5 / 1 / 3
3     LFNet_ROB        6 / 1 / 5 / 4        Submitted by Anonymous
4     AugFNG_ROB       8 / 3 / 3 / 2        Submitted by Anonymous
4     FF++_ROB         3 / 4 / 4 / 5
6     DMF_ROB          4 / 7 / 6 / 7
6     ResPWCR_ROB      5 / 6 / 7 / 6        Submitted by Anonymous
8     WOLF_ROB         7 / 8 / 8 / 8        Submitted by Anonymous
9     TVL1_ROB         9 / 9 / 9 / 9
10    H+S_ROB          10 / 10 / 10 / 10
11    AVG_FLOW_ROB     12 / 12 / 12 / 12    Baseline - Submitted by Joel Janai (ROB Team)

Depth Leaderboard

Rank  Method           Ranks per benchmark  Notes
1     DORN_ROB         1 / 1
2     DABC_ROB         3 / 1
3     FUSION_ROB       2 / 8                Submitted by Anonymous
3     UReUFo_ROB       5 / 4
5     CSWS_E_ROB       6 / 4
6     APMoE_base_ROB   4 / 9
6     TASKNetV1_ROB    9 / 3
8     BRNet_ROB        8 / 6                Submitted by Anonymous
8     FCRN_ROB         7 / 7
10    robustDepth_ROB  10 / 10              Submitted by Anonymous
11    UNET_depth_ROB   11 / 12              Submitted by Anonymous
11    GoogLeNetV1_ROB  12 / 11              Baseline - Submitted by Jonas Uhrig (ROB Team)

Semantic Segmentation Leaderboard

Rank  Method           Ranks per benchmark  Notes
1     MapillaryAI_ROB  1 / 1 / 1 / 1
2     LDN2_ROB         3 / 2 / 2 / 3
3     IBN-PSP-SA_ROB   2 / 3 / 3 / 4        Submitted by Anonymous
4     AHiSS_ROB        5 / 9 / 5 / 2
5     VENUS_ROB        4 / 4 / 4 / 9        VENUS-Net for RobustVision - Submitted by Anonymous
6     AdapNetv2_ROB    5 / 5 / 6 / 7        Submitted by Anonymous
7     VlocNet++_ROB    7 / 5 / 10 / 5       Submitted by Anonymous
8     BatMAN_ROB       8 / 8 / 9 / 6
9     APMoE_seg_ROB    8 / 7 / 7 / 10
10    GoogLeNetV1_ROB  10 / 10 / 7 / 8      Baseline - Submitted by Jonas Uhrig (ROB Team)

Instance Segmentation Leaderboard

Winners

RVC 2018 Submission Guidelines

The goal of ROB is to foster the development of algorithms that are robust across various datasets. Thus, each participating method must be tested on all datasets involved in the respective challenge. It is not allowed to use different methods or to alter the parameters of a method for individual benchmarks of a challenge. By submitting to ROB you agree to eventually make your source code publicly available and to publish a paper or arXiv report about the technique. Synthetic training data may be used for pre-training models. External training data with annotated real images may be used as long as the dataset is public.

Invalid Submission Examples:

  • A method which is trained separately on each benchmark, resulting in different parameters/weights for each benchmark of the challenge.
  • A method which trains a classifier to detect which benchmark a file comes from (either by image content or metadata, e.g., image dimensions); see the sketch after this list
  • A method which executes separate program paths specifically designed for individual datasets (this excludes pre-processing such as image resizing, which is allowed)
  • A method which explicitly makes use of the knowledge that some output labels (e.g., semantic labels) correspond to one dataset and some output labels correspond to another dataset. However, learned co-occurrence statistics between labels can be used.
  • A method which is trained on non-publicly available datasets
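For illustration, the following hypothetical snippet shows the kind of metadata-based dispatch that the classifier and program-path examples above rule out; the model names are invented stand-ins, not part of any challenge kit:

```python
import numpy as np

# Stand-ins for networks with different, benchmark-specific weights.
kitti_model = sintel_model = generic_model = lambda img: img.mean()

# DISALLOWED pattern (hypothetical): routing inputs to per-benchmark
# models based on image metadata such as the frame size.
def predict(image: np.ndarray):
    h, w = image.shape[:2]
    if (h, w) == (375, 1242):      # typical KITTI frame size
        return kitti_model(image)  # benchmark-specific parameters
    if (h, w) == (436, 1024):      # typical Sintel frame size
        return sintel_model(image)
    return generic_model(image)
```

Any branch of this shape, whether keyed on resolution, file names, or a learned benchmark classifier, makes a submission invalid.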

Valid Submission Examples:

  • A method that is trained on all benchmarks, resulting in a single parameter set/model. This model is then applied to all test sets of the challenge.
  • A method that is trained on all benchmarks, with training samples drawn in equal proportion from each dataset to balance the number of samples during training (see the sketch after this list). The resulting model is then applied to all test sets of the challenge.
  • A method that was trained with all training/validation data available from the individual benchmarks and additional public data (e.g., pre-trained on ImageNet or Mapillary Vistas) but does not contain dataset-specific instructions or training
  • A method fulfilling the above criterion, but which is trained with the ROB training set in a supervised manner and with the ROB test set (or any other dataset) in an unsupervised manner
  • A method not using explicit training data
  • A method trained on only some of the benchmarks while omitting one or more benchmarks completely. The same method must still participate in all individual benchmarks (i.e., submit results for the respective benchmark data to each benchmark).
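To make the balanced-sampling example concrete, here is a minimal PyTorch-style sketch; the three dummy datasets merely stand in for the real per-benchmark training sets. It trains a single model while drawing samples with equal probability from each dataset:

```python
import torch
from torch.utils.data import (ConcatDataset, DataLoader, TensorDataset,
                              WeightedRandomSampler)

# Dummy stand-ins for the per-benchmark training sets (sizes differ on purpose).
benchmark_sets = [
    TensorDataset(torch.randn(200, 3, 64, 64)),  # e.g., a large benchmark
    TensorDataset(torch.randn(30, 3, 64, 64)),   # e.g., a small benchmark
    TensorDataset(torch.randn(15, 3, 64, 64)),   # e.g., a tiny benchmark
]
combined = ConcatDataset(benchmark_sets)

# Weight each sample inversely to the size of its source dataset, so every
# benchmark contributes the same expected number of samples per epoch.
weights = [1.0 / len(ds) for ds in benchmark_sets for _ in range(len(ds))]
sampler = WeightedRandomSampler(weights, num_samples=len(combined),
                                replacement=True)

loader = DataLoader(combined, batch_size=8, sampler=sampler)

# A single model with ONE parameter set is trained on the mixed stream and
# later submitted, unchanged, to every benchmark of the challenge:
# for (batch,) in loader:
#     loss = model(batch)
#     ...
```

Inverse-size weighting is one simple way to satisfy the equal-proportion example; any scheme that yields one model applied unchanged to all test sets would likewise be valid.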

Robust Vision Challenge 2018 Organizing Team


Sponsors

Gold Sponsors


