# Probably a C testing framework
- Create a testing framework for C which enables me to write unit tests inside the file under test.
- No test code can end up in compiled binaries.
- A C language server should work for the test code (in my case ccls, but others should too).
- Single header file. I wanted the whole framework to be contained in, and used from, a single header file.
- As little code as possible. I like my projects to do something specific well, instead of being OK at every configurable possibility. This has the additional benefit of being much easier to audit and understand.
- No external library dependencies other than the C standard library.
The long and short of it is that the `PACTF_SUITE` macro adds a main function to a C library, enabling you to run it as a binary. This does mean that you can't use pactf in a file that already has a main function. If you have code in a main function you want to test, you'll have to pull it out into a library first.
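To make that concrete, here is a rough sketch of the conditional-compilation trick involved; this is an illustration only, not pactf's actual implementation:

```c
/* Sketch only: illustrates the technique, not pactf's real macro. */
#ifdef PACTF_ENABLE
/* Test builds: emit a main function that runs the test code
   (the real runner also tracks failures and exits non-zero). */
#define PACTF_SUITE(...) \
    int main(void)       \
    {                    \
        __VA_ARGS__      \
        return 0;        \
    }
#else
/* Normal builds: expands to nothing, so no test code is compiled in. */
#define PACTF_SUITE(...)
#endif
```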
Do one of the following:

- Clone the repo, and run `make install` as root to copy the `pactf.h` header file to `/usr/include`.
- Clone the repo, and run `make link` as root to create a symlink to the `pactf.h` header file in `/usr/include`. This option can make it slightly easier to update by simply pulling down the repo.
- Just download the `pactf.h` file and put it wherever you need to for global header files on your system.
You could add this repo as a git submodule to your project, or simply copy `pactf.h` into your project and use that.
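Whichever option you pick, you then include the header in the files under test; both forms below are illustrative, depending on where the header lives:

```c
#include <pactf.h>  /* header installed globally, e.g. in /usr/include */
/* or */
#include "pactf.h"  /* header copied or submoduled into your project */
```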
It was important to me that my LSP could identify the code inside the pactf macros when writing the tests. To achieve this you need to inform your LSP that things are built with the `PACTF_ENABLE` macro defined, even though when you actually build your project, you won't want that macro defined.
## ccls
You can configure ccls to recognise the code inside the pactf macros by doing one of:

- Setting `-DPACTF_ENABLE` when generating your `compile_commands.json`.
- Adding `-DPACTF_ENABLE` to your `.ccls` file. See `examples/basic` for an example.
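For illustration only (the real configuration lives in `examples/basic`), a minimal `.ccls` file might look like this, assuming clang as your compiler driver:

```
clang
-DPACTF_ENABLE
```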
## clangd

WIP

## Writing tests

- Add `PACTF_SUITE` to the bottom of your library. This will contain all of the test code to be executed:
```c
PACTF_SUITE({
});
```
- Inside the braces of `PACTF_SUITE`, add `P_TEST`. This is how you separate and name your test cases:
```c
PACTF_SUITE({
  P_TEST("it should do a thing", {
  });
});
```
- Inside the braces of `P_TEST`, add `P_ASSERT`. This is how you make assertions about your code's behaviour. The argument to `P_ASSERT` should be an expression that evaluates to a boolean:
```c
PACTF_SUITE({
  P_TEST("it should do a thing", {
    P_ASSERT(some_function() == 3);
  });
});
```
- Done! This is the most basic form of a test. A complete file is sketched below.
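Putting the steps together, a complete file under test might look like this sketch; the `add` function is invented for illustration:

```c
/* lib.c -- library code and its unit tests in one file. */
#include "pactf.h" /* adjust the include to wherever pactf.h lives */

/* The function under test (illustrative). */
int add(int a, int b)
{
    return a + b;
}

/* Test code: only compiled when PACTF_ENABLE is defined. */
PACTF_SUITE({
    P_TEST("it should add two numbers", {
        P_ASSERT(add(1, 2) == 3);
    });
});
```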
I've had the most success with code formatters by passing a block `{...}` to the macros that accept one, and ending each macro call with a semicolon. See the macros section below for which macros support a block as an argument and which don't. Sometimes adding blank lines can help too, particularly inside the macros that don't support blocks; YMMV, however.
Unless otherwise stated, all `code` arguments can optionally be wrapped in a block.
| Macro | Explanation |
|---|---|
| `PACTF_SETUP(code)` | `PACTF_SETUP` is an optional macro which enables you to execute setup code at the file root, outside of the main function. This could be function stubs, `P_BEFORE_EACH`, or `P_AFTER_EACH`. You will need to stub any external functions used in this file inside this macro\*. The `code` argument does not support being wrapped in a block. |
| `PACTF_SUITE(code)` | `PACTF_SUITE` is the wrapper that should contain all test code that isn't inside `PACTF_SETUP`. |
\* While it would be possible to use these functions as mocks, I would encourage you not to do so. Instead, I would encourage only using these stubs to ensure this file can compile on its own, and only writing unit tests for functions which do not call external functions. This will likely require structuring your code in a certain way.
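Here is a sketch of that structure; `log_event`, `record_event`, and `add` are invented for illustration, and the `PACTF_SETUP` usage assumes bare (un-blocked) code as noted in the table above:

```c
#include "pactf.h"

void log_event(const char *msg); /* defined in another translation unit */

int add(int a, int b)
{
    return a + b;
}

void record_event(void)
{
    log_event("something happened"); /* external call: not unit tested here */
}

PACTF_SETUP(
    /* A stub, not a mock: just enough for this file to build on its own
       when compiled with -DPACTF_ENABLE. */
    void log_event(const char *msg) { (void)msg; }
);

PACTF_SUITE({
    P_TEST("add should sum its arguments", {
        P_ASSERT(add(2, 2) == 4);
    });
});
```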
| Macro | Explanation |
|---|---|
| `P_FUNCTION(name, code)` | `P_FUNCTION` is an optional wrapper to help organise your tests by function. Use the `name` argument to label the tests contained in the `code` argument. |
| `P_TEST(name, code)` | `P_TEST` is the wrapper for your tests. Each test case should be in its own `P_TEST` macro. Use the `name` argument to describe what behaviour you are testing for in the `code` argument. |
| `P_BEFORE_EACH(code)` | `P_BEFORE_EACH` defines a function which is executed before the `code` argument of each use of `P_TEST`. |
| `P_AFTER_EACH(code)` | `P_AFTER_EACH` defines a function which is executed after the `code` argument of each use of `P_TEST`. |
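A sketch of how these might combine, assuming `P_FUNCTION` takes a string label like `P_TEST` does and that `P_BEFORE_EACH` sits inside `PACTF_SETUP` as described above; the counter is invented for illustration:

```c
#include "pactf.h"

static int counter = 0;

int increment(void)
{
    return ++counter;
}

PACTF_SETUP(
    P_BEFORE_EACH({
        counter = 0; /* reset shared state before every P_TEST */
    });
);

PACTF_SUITE({
    P_FUNCTION("increment", {
        P_TEST("it should return 1 on first call", {
            P_ASSERT(increment() == 1);
        });
        P_TEST("it should see a reset counter in each test", {
            P_ASSERT(increment() == 1);
        });
    });
});
```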
| Macro | Explanation |
|---|---|
| `P_ASSERT(expression)` | `P_ASSERT` is the macro used to make test assertions. The `expression` argument must evaluate to a boolean. |
| Macro | Explanation |
|---|---|
| `P_LOG(...args)` | `P_LOG` is simply a macro helper for `printf` and takes the same `args`. |
| `P_LOG_BOLD(...args)` | `P_LOG_BOLD` is a macro helper for `printf` and takes the same `args`, but wraps the resultant string in the bold ANSI code. |
| `P_LOG_COLOUR(colour, ...args)` | `P_LOG_COLOUR` is a macro helper for `printf` and takes the same `args`, but wraps the resultant string in the provided ANSI colour code. |
| `P_LOG_GREEN(...args)` | `P_LOG_GREEN` is a macro for `P_LOG_COLOUR` with the `colour` hardcoded as green, `"\033[32m"`. |
| `P_LOG_RED(...args)` | `P_LOG_RED` is a macro for `P_LOG_COLOUR` with the `colour` hardcoded as red, `"\033[31m"`. |
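Inside a test, usage might look like this (the messages are invented for illustration):

```c
P_TEST("it should log some diagnostics", {
    P_LOG("plain: value is %d\n", 42);
    P_LOG_BOLD("bold: starting checks\n");
    P_LOG_COLOUR("\033[34m", "blue: custom ANSI colour\n");
    P_LOG_GREEN("green: looks good\n");
    P_LOG_RED("red: something to investigate\n");
    P_ASSERT(1 == 1);
});
```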
| Macro | Explanation |
|---|---|
| `P_STRINGIFY(arg)` | `P_STRINGIFY` is simply a macro helper for the `#` preprocessing operator. |
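Assuming the usual `#` semantics, `P_STRINGIFY` turns its argument into that argument's source text as a string literal, which pairs nicely with the log helpers; this snippet is illustrative:

```c
P_TEST("it should report the expression it checks", {
    int counter = 0;
    /* P_STRINGIFY(counter == 0) expands to the string literal "counter == 0" */
    P_LOG("checking: %s\n", P_STRINGIFY(counter == 0));
    P_ASSERT(counter == 0);
});
```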
In order to run the tests, simply compile each file individually with the flag `-DPACTF_ENABLE`, then execute the resultant binary. For example:

```sh
gcc -DPACTF_ENABLE -o test lib.c
./test
```
You may also need to add include flags to gcc if you're using header files from non-standard places.

If any tests fail, they will be reported on stdout and the main function will return with an exit code of 1.
There are a few options for running all of the binaries:

- The `run_all_examples` recipe in the makefile runs all the test binaries regardless of whether any fail, but this can make it hard to spot failures.
- The `test` recipe errors immediately when any binary fails.
- The matrix strategy used in `.github/workflows/pr.yml` runs all of the binaries in parallel, making it really obvious which one failed, whilst also running all of them regardless of failures.

This gives you a lot of freedom in how you want to run these tests in different situations.
- I may extend this to be able to run integration tests against complete compiled binaries; however, that may end up being a separate project.