Evaluation, Ranking and Prizes

Evaluation

Algorithms will be assessed on their ability to register IHC images to the corresponding H&E images such that the distance between corresponding tissue regions that exist in both images is minimized. For validation and test cases, we will provide a set of manually generated landmarks for the IHC images that are to be registered to the H&E image domain. All landmarks will be placed as uniformly as possible in tissue regions that exist in both images. Challenge participants should then optimize their algorithms so that the output, a set of registered landmarks, minimizes the distance to the human-generated landmarks in the target domain. The landmarks in the target domain will be kept secret from the challenge participants. The landmarks are intended to serve as a proxy for the distance between corresponding tissue in the registered and target image.
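As a rough illustration, the sketch below computes per-landmark registration errors as Euclidean distances between submitted (registered) landmark coordinates and target landmarks, scaled to µm. The function name and the pixel spacing are placeholder assumptions and are not taken from the official evaluation code.

```python
import numpy as np

def landmark_errors_um(registered_xy, target_xy, spacing_um=1.0):
    """Per-landmark Euclidean distance in micrometres.

    registered_xy, target_xy : (N, 2) arrays of landmark coordinates in the
    H&E (target) image domain, in pixels. spacing_um is an assumed isotropic
    pixel spacing used to convert pixel distances to micrometres.
    """
    registered_xy = np.asarray(registered_xy, dtype=float)
    target_xy = np.asarray(target_xy, dtype=float)
    return np.linalg.norm(registered_xy - target_xy, axis=1) * spacing_um
```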

Teams will be able to validate their results on a validation data set prior to the end of the challenge. Teams may submit predictions for the test data as many times as desired; however, results will only be made public for the most recent submission before the submission deadline, during the challenge workshop and through this website thereafter. The code used for the evaluation of submissions will be made publicly available through GitHub.

Ranking

For each set of landmarks in the moving image, there will be at least two sets of corresponding landmarks in the target image, each from a different annotator. The distance between a registered landmark and the corresponding target landmark will be computed for the annotations from each annotator, and the average of these distances in µm will be the registration error for that landmark. For each image pair, we will compute the 90th percentile of the errors across all paired landmarks; the median of these 90th percentiles across all image pairs will be the score for a submission. Missing submitted landmarks within an image pair will be assigned the unregistered moving-image coordinates or image-boundary values when computing the 90th percentile.
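A minimal sketch of this ranking procedure, assuming per-landmark distances have already been computed in µm for each annotator, is shown below. The function names are illustrative only and are not taken from the official evaluation code.

```python
import numpy as np

def landmark_error_um(errors_per_annotator):
    """Average the per-annotator distances (µm) for a single landmark."""
    return float(np.mean(errors_per_annotator))

def submission_score(per_image_landmark_errors):
    """Median over image pairs of the per-pair 90th percentile of landmark errors.

    per_image_landmark_errors : list of 1-D arrays, one per image pair, each
    holding the annotator-averaged error (µm) of one landmark. Errors for
    missing landmarks are assumed to have already been computed from the
    fallback coordinates described above.
    """
    p90_per_pair = [np.percentile(errors, 90) for errors in per_image_landmark_errors]
    return float(np.median(p90_per_pair))
```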

Prizes

In order to be ranked on the test set leaderboard, participants need to submit registered test set landmarks, as well as a link to a publicly available description of their algorithm, e.g. on arxiv.org. Two sets of monetary challenge prizes will be distributed among the participants who are ranked on the test set leaderboard. All participants will be eligible to receive prize money from the first set, whereas only those teams that publish their code will be eligible for prize money from the second set. Participants can receive an award from both the first and second set (e.g. if the highest-ranked team also publishes their code, they will receive the first prize from both sets). The total prize money currently available is 4000€.