This is a mostly platform-independent template. Most scripts are written in CMake, which is required as the build system.
1. Change the project name in the root `CMakeLists.txt`
2. Run CMake, e.g. via `cmake -B cmake-build-debug .` (automatically done by CLion)
3. Run `contests/add_contest NAME`
4. Then, either
   - download `samples-TASK.zip` files to `contests/NAME` and run `contests/NAME/load_tasks`, or
   - run `contests/NAME/add_task TASK`
5. Download
6. Write code (this is the important part)
7. Run `ctest` in the task's cmake binary directory (`cmake-build-debug/contests/NAME/TASK` if configured according to step 2) to test it
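Put together, a first session might look like this (the contest name `abc123` and task name `a` are made up for illustration):

```sh
# Configure the build once (CLion does this automatically)
cmake -B cmake-build-debug .

# Create a contest and a task in it
contests/add_contest abc123
contests/abc123/add_task a

# Re-run CMake so the new targets and tests are generated
cmake -B cmake-build-debug .

# ... write code ...

# Run the samples from the task's binary directory
cd cmake-build-debug/contests/abc123/a
ctest --output-on-failure
```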
In more detail:

- `contests/add_contest NAME` creates a new contest `NAME`. This invokes `scripts/add_contest.cmake` to create a contest folder, using `templates/contest.cmake` and `templates/contest/*`.
- `contests/NAME/add_task TASK` creates a new task `TASK` in contest `NAME`. This invokes `scripts/add_task.cmake` to create a task folder, using `templates/task.cmake`, `templates/template.cpp` and `templates/task/*`.
- `contests/NAME/load_tasks` creates a task for each `samples-TASK.zip` in contest `NAME` and adds the samples contained in the zip file. This invokes `scripts/load_tasks.cmake`, which uses `scripts/add_task.cmake` to create the task folders.
- `contests/NAME/TASK/add_sample NAME` creates a sample for the given task (both `NAME.in` and `NAME.out`). This is just a bash script, but rather simple, so it should be easily portable.
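For example, to add an extra sample `edge` to a (made-up) task `a` in contest `abc123` and fill it in by hand:

```sh
# Creates contests/abc123/a/edge.in and contests/abc123/a/edge.out
contests/abc123/a/add_sample edge

# Fill in the sample data (the contents here are invented)
echo "3 5" > contests/abc123/a/edge.in
echo "8" > contests/abc123/a/edge.out
```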
Run `ctest` in the cmake build directory corresponding to a task (in CLion: `cmake-build-TYPE/contests/NAME/TASK`) to run all samples. Add `--output-on-failure` for more detail (e.g. the solution diff). Add `-j 8` and/or `--progress` if you feel like it.
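For example (build type and names are again made up):

```sh
cd cmake-build-debug/contests/abc123/a
ctest --output-on-failure -j 8 --progress
```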
Each time CMake is run (i.e. whenever the project is reloaded), the ctest tests are generated.
Each task receives a build test which builds the task executable (testing is performed via a script, so no test runner has to be built).
For each `SAMPLE` of the task, a test is created which runs the task executable with `SAMPLE.in` as input and compares the output with `SAMPLE.out`.
The test fails if:

- the execution does not finish within 5 seconds (configurable in `config.cmake`),
- the executable exits with a non-zero exit code (usually a run error; error output is printed to the console when using `--output-on-failure`), or
- the output does not match the desired output (wrong answer; diff output is printed to the console when using `--output-on-failure`).
Program output is saved to `SAMPLE.result`, diff output (if any) is saved to `SAMPLE.result.diff`, and error output (if any) is saved to `SAMPLE.result.err`.
The sample tests are skipped if the build fails.
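Conceptually, each sample test performs roughly the following (a simplified sketch; the actual runner scripts additionally enforce the timeout and check the exit code, and `./a` stands in for the task executable):

```sh
# Feed the sample input to the solution and capture its output
./a < SAMPLE.in > SAMPLE.result 2> SAMPLE.result.err

# Compare against the expected output; a non-empty diff means wrong answer
diff SAMPLE.out SAMPLE.result > SAMPLE.result.diff
```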
There are two test runner scripts: `perform_test.sh` for UNIX and `perform_test.cmake` for other platforms. `perform_test.sh` terminates itself with `SIGSEGV` to make ctest report `Exception` instead of `Failed`, which allows a quick distinction between run errors and wrong answers. `perform_test.cmake` does not have this capability, so both run errors and wrong answers are reported as `Failed`.

`perform_test.cmake` uses `diff` to compare outputs; this might need to be changed depending on the setup. `perform_test.sh` uses `diff`, `head` and `wc` (although the latter two are not strictly required).
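The `SIGSEGV` trick amounts to something like the following (a minimal sketch, not the actual script contents):

```sh
# If the solution exits with a non-zero code (run error), kill this script
# with SIGSEGV so that ctest reports "Exception" instead of "Failed"
./a < SAMPLE.in > SAMPLE.result 2> SAMPLE.result.err || kill -SEGV $$

# A wrong answer, in contrast, fails normally: diff exits non-zero on a
# mismatch, so ctest reports "Failed"
diff SAMPLE.out SAMPLE.result > SAMPLE.result.diff
```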
The simplest way to install this is to clone this repository.
Then you can add this repository as upstream remote (`git remote add upstream REPO_URL`) and change the origin to your repository.
If upstream is set up, you can run `git pull upstream master` to update to the latest version.
The template is structured such that, if at all possible, new features also apply to existing tasks.
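In commands (`REPO_URL` is this template's URL, `YOUR_REPO_URL` a repository of your own; the clone directory name is arbitrary):

```sh
git clone REPO_URL my-solutions
cd my-solutions

# Point origin at your own repository and keep the template as upstream
git remote set-url origin YOUR_REPO_URL
git remote add upstream REPO_URL

# Later, pull in template updates:
git pull upstream master
```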
If you already have an existing repository, you can add this repository to its history:
- (optional) rename/move files you know will conflict
- `git remote add upstream REPO_URL`
- `git pull --allow-unrelated-histories upstream master`. This is almost certain to result in conflicts, especially in `CMakeLists.txt`. Usually you can just pick the remote files in case of conflict, unless you know you should not. If you feel daring, you can specify `-s recursive -X theirs` to automatically pick the remote files during the merge (see the sketch below).
- Incorporate your existing files by creating the appropriate contests and tasks and copying the respective source files.
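The daring variant from the list above, in commands:

```sh
git remote add upstream REPO_URL

# --allow-unrelated-histories permits merging the template's separate history;
# -X theirs resolves conflicts in favor of the template's files
git pull --allow-unrelated-histories -s recursive -X theirs upstream master
```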