Please read the following guidelines carefully before submitting. If you do not comply with our guidelines, your method will be removed.
The goal of ROB is to foster the development of algorithms that are robust across various datasets. Thus, each participating method must be tested on all datasets involved in the respective challenge. It is not allowed to use different methods, or to alter a method's parameters, for individual benchmarks of a challenge. By submitting to ROB you agree to eventually make your source code publicly available and to publish a paper or arXiv report about the technique. Synthetic training data may be used for pre-training models. External training data with annotated real images may be used as long as the dataset is public.
Invalid Submission Examples:
- A method which is trained separately on each benchmark, resulting in different parameters/weights for each benchmark of the challenge.
- A method which trains a classifier to detect which benchmark a file comes from (either by image content or metadata, e.g., image dimensions).
- A method which executes separate program paths specifically designed for individual datasets (this does not include pre-processing such as image resizing, which is allowed).
- A method which explicitly makes use of the knowledge that some output labels (e.g., semantic labels) correspond to one dataset and some output labels correspond to another dataset. However, learned co-occurrence statistics between labels may be used.
- A method which is trained on non-public datasets.
Valid Submission Examples:
- A method that is trained on all benchmarks, resulting in a single parameter set/model. This model is then applied to all test sets of the challenge.
- A method that is trained on all benchmarks, with training samples drawn in equal proportion from each dataset to balance the number of samples during training. The resulting model is then applied to all test sets of the challenge.
- A method that was trained with all training/validation data available from the individual benchmarks and additional public data (e.g., pre-trained on ImageNet or Mapillary Vistas), but which does not contain dataset-specific instructions or training.
- A method fulfilling the above criteria, but which is trained with the ROB training set in a supervised manner and with the ROB test set (or any other dataset) in an unsupervised manner.
- A method not using explicit training data
- A method trained on only some of the benchmarks while omitting one or more benchmarks completely. The same method must still participate in all individual benchmarks (i.e., submit results for the respective benchmark data to each benchmark).
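The balanced-sampling strategy mentioned in the valid examples above can be sketched as follows. This is a minimal illustration, not part of the ROB rules: the function name, the toy datasets, and the sampling-with-replacement choice are all assumptions made for the example.

```python
import random

def balanced_batches(datasets, batch_size, num_batches, seed=0):
    """Draw each batch with an equal number of samples per dataset,
    so smaller benchmarks are not underrepresented during training."""
    rng = random.Random(seed)
    per_dataset = batch_size // len(datasets)
    for _ in range(num_batches):
        batch = []
        for data in datasets:
            # Sample with replacement so small datasets can fill their quota.
            batch.extend(rng.choices(data, k=per_dataset))
        rng.shuffle(batch)
        yield batch

# Hypothetical example: three benchmarks of very different sizes.
datasets = [list(range(1000)), list(range(1000, 1050)), list(range(2000, 2200))]
for batch in balanced_batches(datasets, batch_size=6, num_batches=2):
    print(len(batch))  # each batch holds 2 samples from each dataset
```

Because every batch draws the same number of samples from each benchmark, the resulting single model is not biased toward the largest dataset, which is the intent of the valid-example rule above.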