Robust Vision Challenge

Note: The leaderboards are no longer updated, as some of these submissions have been removed from the original leaderboards due to the respective policies (e.g., requiring an associated paper or code).

Stereo Leaderboard

Rank  Method        Subrank 1  Subrank 2  Subrank 3  Notes
1     iResNet_ROB    6          2          2
2     DN-CSS_ROB     3          7          1
3     DLCB_ROB       5          4          3
4     NaN_ROB        2          5          9
4     PSMNet_ROB    10          1          5
6     NOSS_ROB       1         16          4         Submitted by Anonymous
7     LALA_ROB      11          3          8         Submitted by Anonymous
8     CBMV_ROB       4         11          7
8     XPNet_ROB      8          8          6         Submitted by Anonymous
10    FBW_ROB       14          6         12         Submitted by ()
11    SGM_ROB        7         13         13
12    MSMD_ROB      13          9         15         Submitted by Anonymous
12    PDISCO_ROB    16         10         11         Submitted by Anonymous
12    PWCDC_ROB     14         12         10         Submitted by Anonymous
15    WCMA_ROB      12         14         13
16    ELAS_ROB       9         16         16         Baseline - Submitted by Thomas Schöps (ROB Team)
17    DPSimNet_ROB  17         15         17         Submitted by Anonymous

(Subranks 1-3 are the per-dataset rankings; see "Detailed subrankings" below.)
Last update: 2018-12-17 00:13:29 CET
Joining multiple rankings into one in a fair manner is a non-trivial task. In electoral science, the equivalent task of finding one consensus ordering based on many ordered votes has been a central question for many years. The currently most widely accepted solution to this problem is the Schulze Proportional Ranking (PR) method [1] for sorted lists of winners. We use Schulze PR to rank the top ten in each challenge. The software used for this is a fork of the pyvotecore implementation, which you can find here.

[1] Markus Schulze. "A new monotonic, clone-independent, reversal symmetric, and Condorcet-consistent single-winner election method". Social Choice and Welfare, 2011.
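As background, here is a minimal sketch of the single-winner Schulze (beatpath) method from [1]. The proportional-ranking variant used for these leaderboards builds a sorted list of winners via a more involved repeated procedure, which this sketch omits; the function name and data layout are illustrative and not taken from pyvotecore.

```python
def schulze_winner(ballots, candidates):
    """Single-winner Schulze (beatpath) method.

    ballots: list of rankings, each ordering all candidates best-first.
    Returns the set of candidates not beaten by any other candidate.
    """
    # d[x][y]: number of ballots ranking x above y.
    d = {x: {y: 0 for y in candidates} for x in candidates}
    for ballot in ballots:
        pos = {c: i for i, c in enumerate(ballot)}
        for x in candidates:
            for y in candidates:
                if x != y and pos[x] < pos[y]:
                    d[x][y] += 1

    # p[x][y]: strength of the strongest beatpath from x to y,
    # computed with a Floyd-Warshall-style relaxation.
    p = {x: {y: (d[x][y] if d[x][y] > d[y][x] else 0) for y in candidates}
         for x in candidates}
    for i in candidates:
        for j in candidates:
            if j == i:
                continue
            for k in candidates:
                if k == i or k == j:
                    continue
                p[j][k] = max(p[j][k], min(p[j][i], p[i][k]))

    # A winner's path to every rival is at least as strong as the reverse path.
    return {x for x in candidates
            if all(p[x][y] >= p[y][x] for y in candidates if y != x)}
```

For example, with ballots [A,B,C], [A,C,B], [B,A,C], candidate A beats both B and C pairwise, so the method returns {A}.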

We first rank each method according to a representative subset of metrics per dataset and then rank the resulting ranks to yield one global rank. The metrics and rankings per dataset can be viewed by clicking on "Detailed subrankings". Note that these rankings may differ from the rankings on the original websites, as those typically consider only a single metric, while we are looking for robustness and thus take a number of metrics per dataset into account.
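For illustration only, the two-level idea can be sketched with the top five stereo entries: each method carries one subrank per dataset, and these are then aggregated into a global order. Mean subrank is used below purely as a stand-in aggregator; the actual global ranks are computed with Schulze PR, not mean rank.

```python
# Per-dataset subranks for the top stereo entries (from the table above).
subranks = {
    "iResNet_ROB": [6, 2, 2],
    "DN-CSS_ROB":  [3, 7, 1],
    "DLCB_ROB":    [5, 4, 3],
    "NaN_ROB":     [2, 5, 9],
    "PSMNet_ROB":  [10, 1, 5],
}

# Stand-in aggregator: order methods by mean subrank across datasets.
# (The challenge itself uses Schulze PR, not mean rank.)
global_order = sorted(subranks, key=lambda m: sum(subranks[m]) / len(subranks[m]))
print(global_order)
```

On this small subset the mean-rank ordering happens to coincide with the official ordering, but the two aggregators differ in general.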

