Evaluation, Ranking and Prizes

Evaluation

Algorithms will be assessed on their ability to register WSIs of sections from the same tumour block such that the distance between corresponding tissue regions that exist in both images is minimised. For validation and test cases, we will generate a set of landmarks to quantify registration performance. All landmarks will be placed as uniformly as possible in tissue regions that exist in both images. Challenge participants should therefore optimise their algorithms such that the output, a set of registered landmarks, minimises the distance to the human-generated landmarks in the target domain. The landmarks serve as a proxy for the distance between corresponding tissue in the registered and target images.
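As an illustration of this idea, the sketch below computes per-landmark distances in µm between submitted (registered) landmarks and target-image landmarks. It is a minimal example only; the function name, array shapes and pixel spacing argument are assumptions, not the official evaluation interface.

```python
import numpy as np

def landmark_distances_um(registered_xy, target_xy, spacing_um):
    """Euclidean distance in micrometres between registered and target landmarks.

    registered_xy, target_xy: (N, 2) arrays of pixel coordinates in the target image.
    spacing_um: (x, y) pixel spacing of the target image in micrometres (assumed known).
    """
    diff_um = (registered_xy - target_xy) * np.asarray(spacing_um)
    return np.linalg.norm(diff_um, axis=1)  # one distance per landmark
```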

Teams can quantify their algorithm's performance using the validation data and registered validation set landmarks in the Leaderboards section of this website. The code used for the evaluation of submissions is publicly available on GitHub.

Ranking

For each set of landmarks in the moving image, there will be at least two sets of corresponding landmarks in the target image. The distance between the registered and target landmark will be computed for the annotations from each annotator. We will then average the respective distances in µm per landmark to obtain the final registration error for that landmark. Missing submitted landmarks within an image pair will be assigned the unregistered moving-image coordinates or image boundary values when computing the 90th percentile. For the ACROBAT 2022 challenge, the median 90th percentile across image pairs determines the aggregate score on which the ranking is based. In the ACROBAT 2023 challenge, we will use the mean 90th percentile to rank submissions.
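The sketch below illustrates how such a per-image-pair score and the aggregate ranking score could be assembled from the rules above. Function names, array layouts and the handling of missing landmarks via the unregistered moving-image coordinates are assumptions for illustration; the authoritative implementation is the publicly available evaluation code on GitHub.

```python
import numpy as np

def image_pair_p90(registered_xy, annotator_targets, spacing_um, unregistered_xy):
    """90th percentile of per-landmark registration errors for one image pair.

    registered_xy: (N, 2) submitted coordinates; NaN rows mark missing landmarks.
    annotator_targets: list of (N, 2) target-image landmark sets, one per annotator.
    unregistered_xy: (N, 2) original moving-image coordinates, used as a fallback.
    """
    coords = registered_xy.copy()
    missing = np.isnan(coords).any(axis=1)
    coords[missing] = unregistered_xy[missing]  # fallback for missing submissions

    # Distance to each annotator's landmarks, then the per-landmark average.
    per_annotator = [
        np.linalg.norm((coords - target) * np.asarray(spacing_um), axis=1)
        for target in annotator_targets
    ]
    per_landmark = np.mean(per_annotator, axis=0)
    return np.percentile(per_landmark, 90)

# Aggregate over all image pairs:
# p90s = [image_pair_p90(...) for each image pair]
# score_2022 = np.median(p90s)  # ACROBAT 2022 ranking
# score_2023 = np.mean(p90s)    # ACROBAT 2023 ranking
```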

Prizes

In order to be ranked in the test set leaderboard, participants need to submit a Docker container of their algorithm and a link to a publicly available description of their algorithm, e.g. on arxiv.org. Two sets of challenge prizes will be distributed among the participants who are ranked in the test set leaderboard. All participants are eligible to receive prizes from the first set, whereas only teams that publish their code are eligible for prizes from the second set. Participants can receive an award from both sets (e.g. if the highest-ranked team also publishes its code, it will receive the first prize from both lists).