After signing up, you will be asked to submit a decoder executable and a set of files representing the encoded images. The executable can be a Python script; just make sure to include a shebang such as #!/usr/bin/env python3 at the top of the script. If you want to include additional files with your decoder, such as model parameters or other Python packages, you can combine the decoder and the other files into a zip archive and upload that instead. In this case, the zip archive must contain a file called decode, which will be executed on the server after unzipping.
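As a sketch of the packaging step, the snippet below builds such an archive with the standard library; the extra file name model.npz is just a placeholder for whatever your decoder needs. It also preserves the executable bit on the decode entry, which is easy to lose when zipping programmatically:

```python
import os
import zipfile

def build_submission(archive="submission.zip", decoder="decode", extras=("model.npz",)):
    """Package the decoder and any extra files into a zip archive.

    The archive must contain a file named exactly `decode` at its root.
    `extras` lists hypothetical additional files (model parameters, etc.).
    """
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        # Store the decoder under the required name and keep it executable
        # (Unix permissions live in the upper 16 bits of external_attr).
        info = zipfile.ZipInfo("decode")
        info.external_attr = 0o755 << 16
        with open(decoder, "rb") as f:
            zf.writestr(info, f.read())
        for name in extras:
            if os.path.exists(name):
                zf.write(name)
    return archive
```
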
The uploaded data will be placed in the working directory of the decoder. While you can upload multiple files to represent the encoded images (e.g., one file per image), browsers struggle with very large numbers of files, so we recommend combining the images into an archive. Note that, unlike the zip archive containing the decoder, data archives will not be unzipped automatically by the server.
Running the decoder should reproduce a set of PNGs which will be compared to a set of target images. These PNGs can be written to the working directory of the decoder or to any subfolder (e.g., ./images/). In the P-frame challenge, additional images will already be present in the working directory when the decoder is executed and can be used by the decoder. You do not need to upload these context files.
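To make the expected output format concrete, here is a hypothetical skeleton for a decode script. The .bin extension and the raw-RGB payload are invented for illustration; a real decoder would run its actual entropy decoding and reconstruction model instead. The PNG writer uses only the standard library:

```python
#!/usr/bin/env python3
"""Hypothetical `decode` skeleton: read encoded files from the working
directory and write reconstructed images as PNGs into ./images/."""
import os
import struct
import zlib

def write_png(path, pixels, width, height):
    """Write 8-bit RGB pixels (bytes, length = height*width*3) as a PNG."""
    def chunk(tag, data):
        body = tag + data
        return struct.pack(">I", len(data)) + body + struct.pack(">I", zlib.crc32(body))
    # Each scanline is prefixed with filter type 0 (no filtering).
    raw = b"".join(b"\x00" + pixels[y * width * 3:(y + 1) * width * 3]
                   for y in range(height))
    png = (b"\x89PNG\r\n\x1a\n"
           + chunk(b"IHDR", struct.pack(">IIBBBBB", width, height, 8, 2, 0, 0, 0))
           + chunk(b"IDAT", zlib.compress(raw))
           + chunk(b"IEND", b""))
    with open(path, "wb") as f:
        f.write(png)

def decode_all(encoded_dir=".", out_dir="images"):
    os.makedirs(out_dir, exist_ok=True)
    for name in sorted(os.listdir(encoded_dir)):
        if not name.endswith(".bin"):  # hypothetical encoded-file extension
            continue
        # Placeholder "decoder": here the payload is already raw RGB bytes
        # behind a width/height header; a real decoder would run its entropy
        # decoding and reconstruction here.
        with open(os.path.join(encoded_dir, name), "rb") as f:
            w, h = struct.unpack(">II", f.read(8))
            pixels = f.read()
        write_png(os.path.join(out_dir, name[:-4] + ".png"), pixels, w, h)

if __name__ == "__main__":
    decode_all()
```
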
The executable needs to run in one of the provided Docker environments. Depending on the environment you choose, your decoder will be provided with a GPU (P100) or with only two CPUs. If you require another environment, you can request it in our forum. Ideally, this should be a public Docker environment not maintained by you.
We impose the following limitations on the decoders:
After the test set has been released, we will require the teams to use one of the decoders submitted during the validation phase of the challenge. This is to ensure that the decoder has not been overfitted to the test set. Note that we will determine if two decoders are the same by comparing their hash values, so they should not differ even by a single bit.
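The instructions do not specify which hash function the server uses; as an illustration, you can verify for yourself that two submissions are bit-identical by comparing file digests, e.g. with SHA-256:

```python
import hashlib

def file_digest(path, algo="sha256", block=1 << 20):
    """Hash a file in chunks; identical digests mean bit-identical files."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(block), b""):
            h.update(chunk)
    return h.hexdigest()
```

Comparing file_digest(a) == file_digest(b) before re-uploading is a quick local check that your test-phase decoder matches the one you submitted during validation.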
Many authors will be familiar with a multitude of architectures which can act as either the encoder or the decoder, but probably few are familiar with implementing an arithmetic coder/decoder. We therefore release a reference arithmetic coder/decoder, allowing researchers to focus on the parts of the system for which they are experts:
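The snippet below is not the released reference implementation, just a toy illustration of the principle behind arithmetic coding: the message is mapped to a nested interval whose width is the product of the symbol probabilities. It uses exact fractions for clarity; practical coders use fixed-precision integers with renormalization:

```python
from fractions import Fraction

def _cumulative(freqs):
    """Cumulative frequency table over symbols in a fixed (sorted) order."""
    cum, c = {}, 0
    for s in sorted(freqs):
        cum[s] = c
        c += freqs[s]
    return cum, c

def encode(symbols, freqs):
    """Toy arithmetic encoder: shrink [low, low+width) one symbol at a time."""
    cum, total = _cumulative(freqs)
    low, width = Fraction(0), Fraction(1)
    for s in symbols:
        low += width * Fraction(cum[s], total)
        width *= Fraction(freqs[s], total)
    # Any number in [low, low + width) identifies the message.
    return low, width

def decode(code, n, freqs):
    """Recover n symbols from a code point inside the final interval."""
    cum, total = _cumulative(freqs)
    order = sorted(freqs)
    out = []
    low, width = Fraction(0), Fraction(1)
    for _ in range(n):
        # Rescale the code point into [0, 1) and find the matching symbol.
        target = (code - low) / width
        for s in order:
            lo = Fraction(cum[s], total)
            hi = lo + Fraction(freqs[s], total)
            if lo <= target < hi:
                out.append(s)
                low += width * lo
                width *= hi - lo
                break
    return out
```

Note that the decoder needs the message length n and the frequency table, which is why real systems transmit (or fix by convention) the probability model alongside the bitstream.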
To make it easier for you to work with the P-frame data, we also provide data loaders implemented in TensorFlow and PyTorch:
These data loaders are part of a development kit which we will extend with other useful tools over time.
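Independently of those loaders, pairing context and target frames is a small amount of plain Python. The helper below sketches one way to do it; the file-name layout (video_00001.png, video_00002.png, ...) is invented for illustration and may differ from the actual dataset:

```python
import os

def pframe_pairs(root):
    """Group frames of the same video into (reference, target) pairs.

    Assumes hypothetical file names like `video_00001.png`; consecutive
    frame indices within a video form one P-frame training pair.
    """
    by_video = {}
    for name in sorted(os.listdir(root)):
        if not name.endswith(".png"):
            continue
        video, _, idx = name[:-4].rpartition("_")
        by_video.setdefault(video, []).append((int(idx), name))
    pairs = []
    for frames in by_video.values():
        frames.sort()
        # Pair each frame with its successor within the same video.
        for (_, ref), (_, tgt) in zip(frames, frames[1:]):
            pairs.append((ref, tgt))
    return pairs
```

A list of pairs like this can then be wrapped in a framework-specific dataset (e.g., a PyTorch Dataset whose __getitem__ loads both images of a pair).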