Rules & Policies
Accounts
One verified account per participant. Only verified accounts are allowed to submit to the Light My Cells challenge, and each participant may submit from only a single Grand Challenge account; submitting from multiple accounts is not permitted.
Anonymous participation is not allowed. To qualify for ranking on the validation/testing leaderboards, all participants must display their true names and affiliations [university, institute, or company (if any), and country] accurately on their verified Grand Challenge profiles.
Code sharing
Participants may share scripts privately only within their own team. Any code shared with other participants must be made publicly available through our forum.
Participation
Teams
Teams are allowed. Members of a team may not make individual submissions outside their team, as this would unfairly increase their number of submission chances.
Each participant can only join one team. Any individual participating with multiple or duplicate Grand Challenge profiles will be disqualified.
Participants from the same research group may not register multiple teams. The organizers retain the right to disqualify participants who do so.
Members of the organizers' institutes
Because the organizers are part of national institutions, members of CNRS, France BioImaging, Montpellier University, and LIRMM may participate in the challenge and remain eligible for prizes. However, to avoid potential conflicts of interest, members of the organizing team and anyone involved in the conception or acquisition of the training and test datasets may participate but are not eligible for prizes or for the final ranking in the Final Testing Phase.
Results
Participating teams may decide whether their performance results are disclosed publicly. However, only teams that share open-source code and algorithms qualify for awards.
Submission
This challenge only supports the submission of fully automated methods in Docker containers. It is not possible to submit semi-automated or interactive methods.
All Docker containers submitted to the challenge will be run in an offline setting (i.e. they will not have access to the internet and cannot download/upload any resources). All necessary resources (e.g. pre-trained weights) must be encapsulated in the submitted containers a priori.
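Because containers run offline, every resource must be baked into the image at build time. The sketch below is a hypothetical Dockerfile illustrating this; the base image, file names, and paths are assumptions for illustration, not challenge requirements:

```dockerfile
# Hypothetical sketch: bundle all resources at build time, since the
# container has no internet access and nothing can be fetched at run time.
FROM python:3.11-slim
WORKDIR /opt/algorithm
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Pre-trained weights copied into the image a priori -- no runtime download
COPY model_weights.h5 .
COPY inference.py .
ENTRYPOINT ["python", "inference.py"]
```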
Method
All participants will perform local training (and validation) before submitting to the test phases. For both phases, participants will submit their algorithm as a Docker container to Grand-challenge.org, where metrics will automatically evaluate predictions on secret test datasets and rank participants.
When submitting, participants will have two choices:
- Submit open source code (Recommended)
- or submit only the inference code with the saved trained model (in HDF5 format), in which case the submission is not eligible for prizes
In all cases, algorithms must predict and produce the best fluorescence Z-focus image of all four organelle channels with the same image size as the input image and in OME-TIFF format.
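To illustrate the output requirement, here is a minimal sketch of writing a four-channel prediction as an OME-TIFF with the same XY size as the input. The use of the tifffile library and the channel names chosen here are assumptions for illustration, not a challenge-mandated API.

```python
import numpy as np
import tifffile

def save_prediction(pred: np.ndarray, path: str) -> None:
    """Write a (4, H, W) float array -- one best-Z-focus image per
    organelle channel, same XY size as the input -- as OME-TIFF."""
    assert pred.ndim == 3 and pred.shape[0] == 4, "expected 4 channels"
    tifffile.imwrite(
        path,
        pred.astype(np.float32),
        ome=True,  # embed OME-XML metadata
        metadata={
            "axes": "CYX",
            # Hypothetical channel names, for illustration only
            "Channel": {"Name": ["Nucleus", "Mitochondria",
                                 "Tubulin", "Actin"]},
        },
    )

# Example: a dummy 4-channel prediction matching a 512x512 input image
save_prediction(np.zeros((4, 512, 512), dtype=np.float32),
                "prediction.ome.tiff")
```

The key constraint expressed here is that the spatial dimensions (H, W) of the output must match the input image exactly.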
Public/private statement
Limits to IT resources
We impose a strict limit on computing resources. The average execution time of an algorithm for a submission on the Grand-Challenge backend must be less than or equal to 10 minutes. Submitted solutions that do not respect this limit will not be counted in the ranking. There are several reasons for this. First, to reduce costs, as each model evaluation on Grand-Challenge costs money. Second, for real-world applicability: in many practical applications, computing resources, memory budgets, and algorithm runtimes are subject to significant limitations. Third, to level the playing field for participants who do not have access to vast amounts of computing resources.
Database
The database is licensed under CC-BY, i.e. everyone (including non-participants of the challenge) is free to use the training dataset in their own work, given attribution in any resulting publication.
Participants competing for prizes may use pre-trained AI models based on computer vision and/or medical imaging datasets (e.g. ImageNet, Medical Segmentation Decathlon, JUMP Cell Painting Datasets, etc.). They may also use external datasets to train their AI algorithms. However, such data and/or models must be published under a permissive license (within 3 months of the Open Development Phase deadline) to give all other participants a fair chance to compete on an equal footing. They must also clearly state the use of external data in their submission, via the algorithm name [e.g. "NAME (trained w/ private data)"], the algorithm page, and/or a supporting publication/URL.
Ethics approval
No ethics approval is necessary for the database used in this context.
Publication
No party may publish results based on the challenge until the challenge paper is published (expected by the end of 2024). This applies to participating teams and individuals as well as to entities, such as researchers and companies, seeking to compare their AI models or products without entering the competition.
Use of the database is free, but publishing an article based on it requires that our data article be published first. After publication of both our articles (data and challenge), all parties are free to publish their results, provided they cite the appropriate publication.
As the data will remain public, we plan to produce a follow-up article three years after the publication of the challenge paper, covering the new methods published in the meantime.
Train datasets
Test datasets
Access to the final test images cannot be granted during the competition.
The use of private data is discouraged, and submissions relying on it cannot win prizes. To ensure fairness, entrants must share links to any external data, datasets, or pre-trained models (all freely available) on the challenge forum by 21.03.2024; after this date, no additional data may be used, to keep the comparison fair.
Evaluation
Equal ranking
If multiple participating teams have equal final ranks, ties will be broken by metric-based aggregation according to the mean of all metrics.
Withdrawal
All participants reserve the right to withdraw from the FBI challenge and forego any further participation. However, they cannot retract their prior submissions or any results published up to that point.
Exclusion policy
At any stage of the Light My Cells challenge, the organizers reserve the right to disqualify any participant or team engaging in unfair or dishonest practices, and to exclude from the challenge those breaking any of these rules.
Conflicts of interest
There are no conflicts of interest to declare.