diff --git a/70024 - Software Reliability/Concolic Execution.md b/70024 - Software Reliability/Concolic Execution.md
new file mode 100644
index 0000000..e8e0344
--- /dev/null
+++ b/70024 - Software Reliability/Concolic Execution.md
@@ -0,0 +1,8 @@
+## Definition
+A type of [[Symbolic Execution 1]] that obtains a set of test cases hitting all possible branches in a program.
+1. Start with a concrete input.
+2. Run it, and collect the path conditions.
+3. Choose one path condition, and negate it.
+4. Solve the new path conditions to get a new concrete input to try.
+5. Repeat with the new concrete input, recording each input as a test case for the user.
+It is a type of [[White-Box Fuzzing]].
\ No newline at end of file
diff --git a/70024 - Software Reliability/Constraint Solving.md b/70024 - Software Reliability/Constraint Solving.md
new file mode 100644
index 0000000..b831f77
--- /dev/null
+++ b/70024 - Software Reliability/Constraint Solving.md
@@ -0,0 +1 @@
+## Aims
diff --git a/70024 - Software Reliability/Dynamic Analysis.md b/70024 - Software Reliability/Dynamic Analysis.md
index f0161c5..10d5b64 100644
--- a/70024 - Software Reliability/Dynamic Analysis.md
+++ b/70024 - Software Reliability/Dynamic Analysis.md
@@ -1,7 +1,7 @@
 ## Definition
 Analysis of behaviour collected from running programs.
 - Includes [[Fuzzing]], [[Compiler Sanitizers]]
-- [[Symbolic Execution]]
+- [[70024 - Software Reliability/Symbolic Execution]]
 - alternative to [[Static Analysis]]
 
 | Advantage | Description |
diff --git a/70024 - Software Reliability/EXE.md b/70024 - Software Reliability/EXE.md
new file mode 100644
index 0000000..84add31
--- /dev/null
+++ b/70024 - Software Reliability/EXE.md
@@ -0,0 +1,4 @@
+## Definition
+Static instrumentation of source code.
+- Takes in source files and instruments the code by converting operations into calls to symbolic operations.
+- [EXE: Automatically Generating Inputs of Death](https://web.stanford.edu/~engler/exe-ccs-06.pdf)
\ No newline at end of file
diff --git a/70024 - Software Reliability/Fuzzing.md b/70024 - Software Reliability/Fuzzing.md
index eff9654..6c50f1f 100644
--- a/70024 - Software Reliability/Fuzzing.md
+++ b/70024 - Software Reliability/Fuzzing.md
@@ -25,6 +25,8 @@ Whenever using random inputs, it is important to consider their distribution (e.
 - Effective in finding edge-case inputs missed by human written test suites
 - Can automatically increase coverage of a codebase
 *Note: Can be used to find exploitable defects in programs. We consider it an advantage for the developer to find them first!*
+### Disadvantages
+Can miss edge cases, as the inputs that trigger them have a low probability of being randomly generated.
 ## Early Days
 > *_We didn't call it fuzzing back in the 1950s, but it was our standard practice to test programs by inputting decks of punch cards taken from the trash. We also used decks of random number punch cards. We weren't networked in those days, so we weren't much worried about security, but our random/trash decks often turned up undesirable behavior. Every programmer I knew (and there weren't many of us back then, so I knew a great proportion of them) used the trash-deck technique._*
 > **- [Gerald Weinberg's Secrets of Writing and Consulting: Fuzz Testing and Fuzz History (secretsofconsulting.blogspot.com)](http://secretsofconsulting.blogspot.com/2017/02/fuzz-testing-and-fuzz-history.html)
diff --git a/70024 - Software Reliability/KLEE.md b/70024 - Software Reliability/KLEE.md
new file mode 100644
index 0000000..83fe237
--- /dev/null
+++ b/70024 - Software Reliability/KLEE.md
@@ -0,0 +1,24 @@
+## [Website](https://klee.github.io/)
+## Summary
+A symbolic execution tool acting on [[LLVM IR]] (meaning languages that target LLVM, such as C/C++ through `clang` or Rust through `rustc`, can be analysed).
+
+Dynamically instruments the [[LLVM IR]] as it interprets it.
+
+```bash
+clang -c -emit-llvm bpf.c # generate llvm ir
+klee bpf.bc # symbolically execute
+```
+### Paths
+Each path in the program is part of an execution tree. The executor manages a set of states at different points in the tree, and incrementally explores it.
+
+| Name | Definition |
+| ---- | ---- |
+| Infeasible Path | Where the constraints on a symbolic value are unsatisfiable. These paths are not explored as they cannot be reached in execution. |
+| Path Condition | The conjunction of constraints gathered on an execution path. |
+### All-Value Checks
+Can implicitly check for [[Generic Bugs]] such as issues with:
+- pointer dereferencing
+- array indexing
+- division/modulo operations
+It can also check [[Functional Bugs]] such as assert statements.
+
diff --git a/70024 - Software Reliability/Manual Testing.md b/70024 - Software Reliability/Manual Testing.md
new file mode 100644
index 0000000..8fe9c29
--- /dev/null
+++ b/70024 - Software Reliability/Manual Testing.md
@@ -0,0 +1,9 @@
+## Definition
+Humans writing tests.
+### Pros
+- High feature coverage
+- Good oracles (the developer)
+- Can run other analyses on the tests (e.g. [[Compiler Sanitizers]])
+### Cons
+- High effort (developer time is expensive)
+- Missed corner cases (cannot cover cases the developer did not consider)
\ No newline at end of file
diff --git a/70024 - Software Reliability/Path Explosion.md b/70024 - Software Reliability/Path Explosion.md
new file mode 100644
index 0000000..58c6cbd
--- /dev/null
+++ b/70024 - Software Reliability/Path Explosion.md
@@ -0,0 +1,78 @@
+## Definition
+Real-world programs contain many conditions, resulting in a huge number of paths (often exponential in the number of conditions).
+- When using [[Symbolic Execution 1]] to find bugs, we should prioritise the most important paths
+## Solutions
+### Search Heuristics
+
+#### Depth First Search
+Only needs to keep one active state (the current one), but can potentially miss other important code paths (when running under a time limit) by exhaustively going down one very deep path.
+#### Breadth First Search
+Needs to keep the states of many incomplete paths in memory.
+#### Best-First Heuristic
+Used by [[EXE]]. In [[EXE]] each fork is run as a separate process; the *best-first candidate* chosen is the suspended process stopped on the line that has been executed the fewest times.
+The *best-first candidate* is then executed depth-first for some time, before making the *best-first candidate* selection again.
+
+This avoids exploding paths in loops (those instructions are run many times, and hence de-prioritised as *best-first candidates*).
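+A minimal sketch of this selection loop (hypothetical `State` objects with `current_line`, `finished` and `step()`, not [[EXE]]'s actual process-based implementation):
+```python
+import time
+
+def best_first_explore(suspended_states, exec_counts, time_slice=1.0):
+    """Repeatedly pick the suspended state stopped on the least-executed line,
+    then explore it depth-first for a short time slice."""
+    while suspended_states:
+        # Best-first candidate: the state whose current line has run the fewest times.
+        state = min(suspended_states, key=lambda s: exec_counts.get(s.current_line, 0))
+        suspended_states.remove(state)
+        deadline = time.time() + time_slice
+        # Depth-first exploration of this candidate until the time slice expires.
+        while time.time() < deadline and not state.finished:
+            exec_counts[state.current_line] = exec_counts.get(state.current_line, 0) + 1
+            suspended_states.extend(state.step())  # any forks become new suspended states
+        if not state.finished:
+            suspended_states.append(state)  # re-suspend, then re-select
+```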
+#### MD2U
+Available in [[KLEE]]. $MD2U(s)$ is the minimum distance from state $s$ to an uncovered instruction:
+$$MD2U(s) = \min_{i \,\in\, \text{uncovered instructions}} \text{distance}(s, i)$$
+States are selected according to weight $\cfrac{1}{MD2U(s)^2}$ to prioritise states close to uncovered instructions.
+#### Random Path
+Given an execution tree, assume each subtree has an equal probability of hitting uncovered code (subtree size is not considered).
+- Each subtree's weight of being chosen is based on its depth (higher in the tree means a larger weight)
+- For depth $n$ of a binary tree, $weight(n) = 2^{-n}$
+This helps avoid starvation: due to the random nature of path selection, we can never get permanently stuck choosing the same paths.
+#### Round Robin
+Can take multiple heuristics and apply them in a round-robin fashion.
+### Eliminating Redundant Paths
+- If two paths reach the same [[Program Point]] with the same set of constraints, we can merge the states / prune one path.
+- We can discard constraints on memory that is never read again (e.g. freed, or destroyed at end of scope)
+The basic algorithm is as follows:
+```python
+# The set of reduced path conditions previously seen at each program point
+cache_set: dict[ProgramPoint, set[PathCond]] = ...
+
+# Locations read after P when it is reached with path condition PC
+read_set: dict[tuple[ProgramPoint, PathCond], frozenset[Location]] = ...
+# The reduced path condition: the constraints that (transitively) involve a given read set
+trans_cl: dict[tuple[PathCond, ReadSet], PathCond] = ...
+
+# Called when program point p is reached with path condition path_cond
+def reached_with(p: ProgramPoint, path_cond: PathCond) -> None:
+    # Record the locations read from this program point under this path condition
+    read_set[p, path_cond] = compute_read_set(p, path_cond)
+    # Reduce the path condition to the constraints relevant to those locations
+    reduced = trans_cl[path_cond, read_set[p, path_cond]]
+    for pc in cache_set[p]:
+        # An equivalent reduced path condition has already been explored from
+        # this program point, so the current path is redundant
+        if reduced == pc:
+            halt_exploration()
+            return
+    # Otherwise record the reduced path condition and continue exploring
+    cache_set[p].add(reduced)
+```
+### Statically Merging Paths
+Converting branching into an expression.
+```rust
+let x: i32;
+if c {
+    x = A();
+} else {
+    x = B();
+}
+```
+Can be converted to:
+```rust
+let x: i32 = if c { A() } else { B() };
+```
+This is called *phi-node folding*; the branch expressions must be side-effect free.
+- There is no runtime overhead (only a single path remains)
+### Test-Suite-Based Prioritisation
+Many applications already have large test suites.
+- Designed by developers to test features important to them / prioritised
+- Typically have good coverage
+- Often missing corner cases / cases not considered by developers
+These test suites can be used to guide the tool and prioritise which paths to check first.
+
+For example, [ZESTI](https://www.doc.ic.ac.uk/~cristic/papers/zesti-icse-12.pdf) uses an existing regression suite to determine which paths are interesting.
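+A rough sketch of seeding exploration from an existing test suite (hypothetical `State` and `run_concretely` helpers, not ZESTI's actual algorithm):
+```python
+from typing import Callable, Iterable, List
+
+def seed_from_test_suite(
+    test_suite: Iterable[bytes],
+    run_concretely: Callable[[bytes], List["State"]],
+) -> List["State"]:
+    """Run each existing test input concretely and collect the states forked
+    just off its path (the un-taken side of each branch), so the searcher can
+    explore behaviour near developer-tested paths first."""
+    frontier: List["State"] = []
+    for test_input in test_suite:
+        # States that diverge from a tested path are prioritised over the
+        # generic search heuristics above.
+        frontier.extend(run_concretely(test_input))
+    return frontier
+```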
\ No newline at end of file
diff --git a/70024 - Software Reliability/Software Reliability Techniques.md b/70024 - Software Reliability/Software Reliability Techniques.md
index 8e6cab8..a46c6b0 100644
--- a/70024 - Software Reliability/Software Reliability Techniques.md
+++ b/70024 - Software Reliability/Software Reliability Techniques.md
@@ -1,7 +1,10 @@
 ## Human Driven
-### [[Manual Testing]]
 ### Coding Standards
 ### Code Review
 ### Tool Support
+### [[Manual Testing]]
 
-## Automated
\ No newline at end of file
+## Automated
+### [[Fuzzing]]
+### [[Static Analysis]]
+### [[Symbolic Execution 1]]
diff --git a/70024 - Software Reliability/Static Analysis.md b/70024 - Software Reliability/Static Analysis.md
index cbfc5b9..f4d2ec1 100644
--- a/70024 - Software Reliability/Static Analysis.md
+++ b/70024 - Software Reliability/Static Analysis.md
@@ -11,6 +11,9 @@ Reasoning about behaviour of programs without executing them.
 | ---- | ---- |
 | Expensive | Can be precise, but very expensive |
 | Precision | Can be made cheaper by weakening analysis (e.g. [[Abstract Interpretation]]), but at the cost of precision. |
+| No Test Cases | Analysis often does not provide easily applicable test cases for developers. |
+
+Imprecise (but faster) analyses can result in false positives.
 ## Examples
 ### Coverity ([[A Few Billion Lines of Code Later]])
 ### Theory of [[Program Analysis]]
diff --git a/70024 - Software Reliability/Symbolic Execution 1.md b/70024 - Software Reliability/Symbolic Execution 1.md
new file mode 100644
index 0000000..2828056
--- /dev/null
+++ b/70024 - Software Reliability/Symbolic Execution 1.md
@@ -0,0 +1,28 @@
+## Definition
+The program is run on symbolic input (representing any value, and able to be logically constrained); every path through the program is analysed by constraining the symbolic values to represent the set of possible concrete values on that path.
+- Program instructions act on symbolic values.
+- At conditionals, fork the symbolic state and constrain each side by the branch condition.
+- On termination, generate a test case by solving the constraints on the path.
+### Advantages
+| Advantage | Description |
+| ---- | ---- |
+| Automatic | Requires no input test cases, initial seeds, or configuration (as [[Fuzzing]]/[[Swarm Testing]] do). |
+| Systematic | Can reach all possible paths in the program, and reason about all possible values on these paths. |
+| Deep Bugs | Can reason about all values, including extremely rare edge-case values & memory layouts. |
+| [[Functional Bugs]] | Can reason about logical statements given appropriate oracles, e.g. checking for crashes, given that assert statements cause crashes. |
+| Test Cases | Generates test cases for developers to allow them to reproduce and debug. |
+### Disadvantages
+| Disadvantage | Description |
+| ---- | ---- |
+| Complex Toolchain | Requires the source to be available, and compiled to some form the symbolic execution tool can interpret. |
+| Constraint Solving | Constraint solving is expensive. |
+## Mixed Concrete & Symbolic Execution
+Many values are concrete (e.g. constants in the code, or concrete inputs set by the user).
+- Only operations that involve symbolic values need to be symbolically executed; the rest can be executed as normal.
+- Allows interaction with the outside environment (e.g. operating system, un-instrumented libraries, etc.)
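+A minimal sketch of the mixed concrete/symbolic idea (a hypothetical `Sym` wrapper, not the interface of any particular tool):
+```python
+from dataclasses import dataclass
+
+@dataclass
+class Sym:
+    """A symbolic expression, e.g. Sym("x") for the symbolic input x."""
+    expr: str
+
+def add(a, b):
+    # Both operands concrete: execute natively, no symbolic machinery needed.
+    if not isinstance(a, Sym) and not isinstance(b, Sym):
+        return a + b
+    # At least one operand is symbolic: build a symbolic expression instead.
+    lhs = a.expr if isinstance(a, Sym) else str(a)
+    rhs = b.expr if isinstance(b, Sym) else str(b)
+    return Sym(f"({lhs} + {rhs})")
+
+print(add(2, 3))         # 5 (fully concrete, executed as normal)
+print(add(Sym("x"), 3))  # Sym(expr='(x + 3)') (symbolic, tracked for the solver)
+```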
+## Challenges
+### [[Path Explosion]]
+## Examples
+KLEE, CREST, SPF, FuzzBall
+### [[KLEE]]
+### [[EXE]]
diff --git a/70024 - Software Reliability/White-Box Fuzzing.md b/70024 - Software Reliability/White-Box Fuzzing.md
index 941a76f..f64d0d2 100644
--- a/70024 - Software Reliability/White-Box Fuzzing.md
+++ b/70024 - Software Reliability/White-Box Fuzzing.md
@@ -1,6 +1,6 @@
 ## Definition
 [[Program Analysis]] using constraint solving to generate inputs that provably cover different parts of the [[SUT]].
-- Called [[Symbolic Execution]], and is only loosely definable as [[Fuzzing]]
+- Called [[70024 - Software Reliability/Symbolic Execution]], and is only loosely definable as [[Fuzzing]]
 ### Advantages
 Constraint solving ensures a minimal set of inputs (fast to run, slow to generate) that can perfectly cover the entire [[SUT]], including inputs that are extremely unlikely to occur under random testing.
 ### Disadvantages