Mini-spec for type:scoring #372

Open
thorehusfeldt opened this issue May 4, 2024 · 0 comments
thorehusfeldt commented May 4, 2024

This issue attempts to specify semantics for problems of type:scoring. At the time of writing, it sits somewhere between the semantics of the legacy specification of the problem package format (https://github.com/Kattis/problem-package-format/blob/master/spec/legacy.md) and the evolving 2023-07 draft (https://github.com/Kattis/problem-package-format/blob/master/spec/2023-07-draft.md).

It is biased towards the latter, but avoids aggregation rules.

Specification bapc-scoring-0.1

A problem where type contains scoring is called a scoring problem. Each submission receives a non-negative numerical score (such as 85) rather than a verdict (such as AC). The goal of the submission is to maximize the score.

Scores are determined for test cases, test groups, and the submission itself.

The score of a failed test case is 0. By default, the score of an accepted test case is 1, but this can be overridden in testdata.yaml. If a custom output validator produces a score (by writing to score.txt), that value is multiplied by the test case score.

The score of a test group is determined by its subgroups and test cases. The score of an empty test group is 0. Otherwise, the score is either the sum or the minimum of its children's scores. (The default is sum.)

The submission score is the score of the topmost test group, data.

The scoring behavior is configured by the following flags under scoring in testdata.yaml, which must exist in every test group corresponding to a subtask:

scoring?: { 
    # The score assigned to each accepted testcase in the group. If a scoring output validator is used, this score is multiplied by the score from the validator.
    score?: >= 0 | *1
    aggregation?: *"sum" | "min"
}
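As a reading aid for the schema above (the `*` marks the default in this CUE-style notation), a hedged sketch of applying those defaults to a parsed scoring mapping; the function name is hypothetical:

```python
def scoring_config(scoring=None):
    """Resolve a parsed `scoring` mapping against the schema defaults.

    Missing keys fall back to the defaults from the schema:
    score defaults to 1, aggregation to "sum".
    """
    scoring = scoring or {}
    score = scoring.get("score", 1)
    aggregation = scoring.get("aggregation", "sum")
    if score < 0:
        raise ValueError("score must be >= 0")
    if aggregation not in ("sum", "min"):
        raise ValueError("aggregation must be 'sum' or 'min'")
    return score, aggregation
```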

Example. Here is an example for an IOI-style problem with two subtasks worth 20 and 80 points, where the sample group gives no points:

data:testdata.yaml               = { }  # not needed; defaults give aggregation "sum"
data/sample:testdata.yaml        = { score: 0 }
data/secret:testdata.yaml        = { }  # not needed; defaults give aggregation "sum"
data/secret/group1:testdata.yaml = { score: 20, aggregation: "min" }
data/secret/group2:testdata.yaml = { score: 80, aggregation: "min" }