Hi there! This is a repo that contains an open-source subset of the autograder we'll be using for CS 131 - Fall 2024's course-long project: making an interpreter.
Using this repository / testing locally is entirely optional. It does not directly affect your grade. You are free to only submit to Gradescope!
This repository contains:
- the full source code for the autograder we deploy to Gradescope
- 20% of the test cases we evaluate your code on; these are the test cases that are public on Gradescope
- each version of the project is in a `v*` folder
  - the `tests` subdirectory contains source (`.br`) files for programs that should interpret and run without errors
  - the `fails` subdirectory contains source (`.br`) files for programs that should interpret successfully, but error
This repository does not contain:
- 80% of the test cases we evaluate your code on
- the plagiarism checker, which is closed-source
- the Docker configuration for the deployment; this is managed by Gradescope.
- canonical solutions for the past projects - those are in the project template repo
We'll note that with the current setup, we grant five seconds for each test case to run.
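If you want to mirror that limit while experimenting on your own, a sketch like the one below works. This is not the autograder's actual code, and it assumes (purely for illustration) an interpreter that can be invoked on a source file from the command line - your actual entry point may differ:

```python
import subprocess

# Minimal sketch of a per-test time limit. "interpreterv1.py" and the
# command-line usage shown here are placeholder assumptions about your setup.
try:
    result = subprocess.run(
        ["python", "interpreterv1.py", "v1/tests/test_add1.br"],
        capture_output=True,
        text=True,
        timeout=5,  # mirrors the autograder's five-second cap
    )
    print(result.stdout)
except subprocess.TimeoutExpired:
    print("Test exceeded the five-second limit")
```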
We've made a separate repository for project template code.
- Make sure you're using Python 3.11
- Clone this repo and navigate to its root directory
Now, you're ready to test locally.
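Before you start, if you want a quick sanity check that you're on Python 3.11, a snippet like this works (entirely optional):

```python
import sys

# Optional check that the local interpreter matches what the autograder expects.
if sys.version_info[:2] != (3, 11):
    raise SystemExit(f"Expected Python 3.11, found {sys.version.split()[0]}")
```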
We only supply a limited number of test cases here, so it's important to write your own!
To do so, create a file in the following format:
```
[BREWIN PROGRAM HERE]
/*
*IN*
[DESIRED INPUT FOR PROGRAM]
*IN*
*OUT*
[DESIRED OUTPUT FOR PROGRAM]
*OUT*
*/
```
Here's an example!
```
func main() {
  var a;
  a = inputi("Enter a value: ");
  print(a);
}
/*
*IN*
10
*IN*
*OUT*
Enter a value:
10
*OUT*
*/
```
In the test above, we supply a Brewin program that takes input from the user and prints it. We add 10 to the input section and 10 to the output section, since the program prints whatever is given as input.
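For reference, here's a rough sketch of how a test file in this format could be split into its program, input, and expected-output sections. It's illustrative only - the real parsing in `tester.py` may differ, and `split_test_case` is just a name made up for this example:

```python
from pathlib import Path

def split_test_case(path):
    """Split a test file into (program, stdin lines, expected output lines).

    Illustrative only - tester.py may parse these files differently.
    """
    text = Path(path).read_text()
    program, _, trailer = text.partition("/*")
    stdin_block = trailer.split("*IN*")[1] if "*IN*" in trailer else ""
    expected_block = trailer.split("*OUT*")[1] if "*OUT*" in trailer else ""
    return (
        program.strip(),
        stdin_block.strip().splitlines(),
        expected_block.strip().splitlines(),
    )

# e.g. program, stdin_lines, expected_lines = split_test_case("v1/tests/my_test.br")
```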
Now add this new test file to the test folder for the current project you're working on. If the test you're adding results in an error, add it to `/fails`. Otherwise, add it to `/tests`.

For example, if you're working on a test for Project 1 that does not have any errors, add the file to `v1/tests`.
To test locally, you will additionally need a working implementation of the project version you're trying to test (your interpreter file and any additional files you created that it relies on). Place this in the same directory as `tester.py`. Then, to test Project 1 for example, run:

```
python tester.py 1
```

You should see output along these lines:
```
Running 6 tests...
Running v1/tests/test_add1.br... PASSED
Running v1/tests/test_print_const.br... PASSED
Running v1/tests/test_print_var.br... PASSED
Running v1/fails/test_bad_var1.br... PASSED
Running v1/fails/test_invalid_operands1.br... PASSED
Running v1/fails/test_unknown_func_call.br... PASSED
6/6 tests passed.
Total Score: 100.00%
```
Note: we also write the results shown in the terminal to `results.json`.
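If you want to post-process those results (say, in a script or CI job), something like the snippet below would work. The exact schema of `results.json` isn't documented here, so the keys used (`tests`, `name`, `score`, `max_score`) are assumptions modeled on Gradescope-style results:

```python
import json

# Illustrative only: the keys below are assumptions about results.json's schema.
with open("results.json") as f:
    results = json.load(f)

for test in results.get("tests", []):
    status = "PASSED" if test.get("score") == test.get("max_score") else "FAILED"
    print(f"{test.get('name')}: {status}")
```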
If you're a student and you've found a bug - please let the TAs know (confidentially)! If you're able to provide a minimal reproducible example, we'll buy you a coffee - if not more!
This code is distributed under the MIT License.
Have you used this code? We'd love to hear from you! Submit an issue or send us an email ([email protected]).