diff --git a/docs/configuration.md b/docs/configuration.md
index 5635a929a..89ee65cb3 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -18,7 +18,7 @@ make build-certsuite-tool
 To launch the Config Generator:
 
 ```shell
-./tnf generate config
+./certsuite generate config
 ```
 
 Here's an example of how to use the tool:
diff --git a/docs/test-container.md b/docs/test-container.md
deleted file mode 100644
index f452f1a1c..000000000
--- a/docs/test-container.md
+++ /dev/null
@@ -1,142 +0,0 @@
-
-# Test
-
-The tests can be run within a prebuilt container in the OCP cluster.
-
-**Prerequisites for the OCP cluster**
-
-* The cluster should have enough resources to drain nodes and reschedule pods. If that is not the case, then ``lifecycle-pod-recreation`` test should be skipped.
-
-## With quay test container image
-
-### Pull test image
-
-The test image is available at this repository in [quay.io](https://quay.io/repository/testnetworkfunction/cnf-certification-test) and can be pulled using:
-
-```shell
-podman pull quay.io/testnetworkfunction/cnf-certification-test
-```
-
-### Check cluster resources
-
-Some tests suites such as `platform-alteration` require node access to get node configuration like `hugepage`.
-In order to get the required information, the test suite does not `ssh` into nodes, but instead rely on [oc debug tools](https://docs.openshift.com/container-platform/3.7/cli_reference/basic_cli_operations.html#debug). This tool makes it easier to fetch information from nodes and also to debug running pods.
-
-`oc debug tool` will launch a new container ending with **-debug** suffix, and the container will be destroyed once the debug session is done. Ensure that the cluster should have enough resources to create debug pod, otherwise those tests would fail.
-
-!!! note
-
-    It's **recommended** to clean up disk space and make sure there's enough resources to deploy another container image in every node before starting the tests.
-
-### Run the tests
-
-```shell
-./run-tnf-container.sh
-```
-
-**Required arguments**
-
-* `-t` to provide the path of the local directory that contains tnf config files
-* `-o` to provide the path of the local directory where test results (claim.json), the execution logs (certsuite.log), and the results artifacts file (results.tar.gz) will be available from after the container exits.
-
-!!! warning
-
-    This directory must exist in order for the claim file to be written.
-
-**Optional arguments**
-
-* `-l` to list the labels to be run. See [Ginkgo Spec Labels](https://onsi.github.io/ginkgo/#spec-labels) for more information on how to filter tests with labels.
-
-!!! note
-
-    If `-l` is not specified, the tnf will run in 'diagnostic' mode. In this mode, no test case will run: it will only get information from the cluster (PUTs, CRDs, nodes info, etc…) to save it in the claim file. This can be used to make sure the configuration was properly set and the autodiscovery found the right pods/crds…
-
-* `-i` to provide a name to a custom TNF container image. Supports local images, as well as images from external registries.
-
-* `-k` to set a path to one or more kubeconfig files to be used by the container to authenticate with the cluster. Paths must be separated by a colon.
-
-!!! note
-
-    If `-k` is not specified, autodiscovery is performed.
-
-The autodiscovery first looks for paths in the `$KUBECONFIG` environment variable on the host system, and if the variable is not set or is empty, the default configuration stored in `$HOME/.kube/config` is checked.
-
-* `-n` to give the network mode of the container. Defaults set to `host`, which requires selinux to be disabled. Alternatively, `bridge` mode can be used with selinux if TNF_CONTAINER_CLIENT is set to `docker` or running the test as root.
-
-!!! note
-
-    See the [docker run --network parameter reference](https://docs.docker.com/engine/reference/run/#network-settings) for more information on how to configure network settings.
-
-* `-b` to set an external offline DB that will be used to verify the certification status of containers, helm charts and operators. Defaults to the DB included in the TNF container image.
-
-!!! note
-
-    See the [OCT tool](https://github.com/test-network-function/oct) for more information on how to create this DB.
-
-**Command to run**
-
-```shell
-./run-tnf-container.sh -k ~/.kube/config -t ~/tnf/config
--o ~/tnf/output -l "networking,access-control"
-```
-
-See [General tests](test-spec.md#general-tests) for a list of available keywords.
-
-### Run with `docker`
-
-By default, `run-container.sh` utilizes `podman`. However, an alternate container virtualization
-client using `TNF_CONTAINER_CLIENT` can be configured. This is particularly useful for operating systems that do not readily support
-`podman`.
-
-In order to configure the test harness to use `docker`, issue the following prior to
-`run-tnf-container.sh`:
-
-```shell
-export TNF_CONTAINER_CLIENT=docker
-```
-
-### Output tar.gz file with results and web viewer files
-
-After running all the test cases, a compressed file will be created with all the results files and web artifacts to review them.
-
-By default, only the `claim.js`, the `cnf-certification-tests_junit.xml` file and this new tar.gz file are created after the test suite has finished, as this is probably all that normal partners/users will need.
-
-Two env vars allow to control the web artifacts and the the new tar.gz file generation:
-
-* TNF_OMIT_ARTIFACTS_ZIP_FILE=true/false : Defaulted to false in the launch scripts. If set to true, the tar.gz generation will be skipped.
-* TNF_INCLUDE_WEB_FILES_IN_OUTPUT_FOLDER=true/false : Defaulted to false in the launch scripts. If set to true, the web viewer/parser files will also be copied to the output (claim) folder.
-
-## With local test container image
-
-### Build locally
-
-```shell
-podman build -t cnf-certification-test:v5.1.2 \
-  --build-arg TNF_VERSION=v5.1.2 \
-```
-
-* `TNF_VERSION` value is set to a branch, a tag, or a hash of a commit that will be installed into the image
-
-### Build from an unofficial source
-
-The unofficial source could be a fork of the TNF repository.
-
-Use the `TNF_SRC_URL` build argument to override the URL to a source repository.
-
-```shell
-podman build -t cnf-certification-test:v5.1.2 \
-  --build-arg TNF_VERSION=v5.1.2 \
-  --build-arg TNF_SRC_URL=https://github.com/test-network-function/cnf-certification-test .
-```
-
-### Run the tests 2
-
-Specify the custom TNF image using the `-i` parameter.
-
-```shell
-./run-tnf-container.sh -i cnf-certification-test:v5.1.2
--t ~/tnf/config -o ~/tnf/output -l "networking,access-control"
-```
-
- Note: see [General tests](test-spec.md#general-tests) for a list of available keywords.
diff --git a/docs/test-run.md b/docs/test-run.md
new file mode 100644
index 000000000..3c4c0b840
--- /dev/null
+++ b/docs/test-run.md
@@ -0,0 +1,116 @@
+
+# Run the Test Suite
+
+The Test Suite can be run using the Certsuite tool directly or through a container.
+
+To run the Test Suite directly, use:
+
+```shell
+./certsuite run -l <label-filter> -c <config-file> -k <kubeconfig> -o <output-dir> [<other-flags>]
+```
+
+If the _kubeconfig_ is not provided, the value of the `KUBECONFIG` environment variable will be taken by default.
+
+The CLI output will show the following information:
+
+* Details of the Certsuite and claim file versions, the test case filter used and the location of the output files.
+* The results for each test case grouped into test suites (the most recent log line is shown live as each test executes).
+* A table with the number of test cases that have passed/failed or been skipped per test suite.
+* The log lines produced by each test case that has failed.
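To make the flag layout above concrete, here is an illustrative invocation; the label filter, config path, kubeconfig path and output directory are placeholder examples, not defaults:

```shell
# Illustrative values only -- substitute your own label filter and paths.
./certsuite run -l "observability,access-control" \
  -c ./config/tnf_config.yml \
  -k "$HOME/.kube/config" \
  -o ./results
```

Quoting the label filter matters once logical operators such as `&&` or `!` are used, since the shell would otherwise interpret them.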
+
+Once the test run has completed, the test results can be visualized by opening the `results.html` website in a web browser and loading the `claim.json` file.
+
+For more information on how to analyze the results see [Test Output](test-output.md).
+
+## Building the Certsuite tool executable
+
+The Certsuite binary can be built as follows:
+
+```shell
+make build-certsuite-tool
+```
+
+## Test labels
+
+The test cases have several labels to allow for different types of groupings when selecting which to run. These are the following:
+
+* The name of the test case
+* The name of the test suite
+* The category of the test case (common, telco, faredge, extended)
+
+These labels can be combined with logical operators to create label filters that match the desired set of test cases. For example:
+
+* The label filter "observability,access-control" will match the test suites _observability_ and _access-control_.
+* The label filter "operator && !operator-crd-versioning" will match the _operator_ test suite without the _operator-crd-versioning_ test case.
+* To select all the test cases, the _all_ label filter can be used.
+
+To view which test cases will run for a specific label or label filter, use the flag `--list`.
+
+See the [CATALOG.md](CATALOG.md) to find all test labels.
+
+## Selected flags description
+
+The following is a non-exhaustive list of the most common flags that the `certsuite run` command accepts. To see the complete list use the `-h, --help` flag.
+
+* `-l, --label-filter`: Label expression to filter test cases. Can be a test suite or a list of test suites, such as `"observability,access-control"`, or a more complex expression with logical operators such as `"access-control && !access-control-sys-admin-capability"`.
+
+!!! note
+
+    If `-l` is not specified, the Test Suite will run in 'diagnostic' mode. In this mode, no test case will run: it will only get information from the cluster (PUTs, CRDs, nodes info, etc…) to save it in the claim file. This can be used to make sure the configuration was properly set and the autodiscovery found the right pods/crds…
+
+* `-o, --output-dir`: Path of the local directory where the test results (claim.json), the execution logs (certsuite.log) and the results artifacts file (results.tar.gz) will be stored after the test run.
+
+* `-k, --kubeconfig`: Path to the Kubeconfig file of the target cluster.
+
+* `-c, --config-file`: Path to the `tnf_config.yml` file.
+
+* `--preflight-dockerconfig`: Path to the Dockerconfig file to be used by the Preflight test suite.
+
+* `--offline-db`: Path to an offline DB to check the certification status of container images, operators and helm charts. Defaults to the DB included in the test container image.
+
+!!! note
+
+    See the [OCT tool](https://github.com/test-network-function/oct) for more information on how to create this DB.
+
+## Using the container image
+
+The only prerequisite for running the Test Suite in container mode is having Docker or Podman installed.
+
+### Pull the test image
+
+The test image is available at this [repository](https://quay.io/repository/testnetworkfunction/cnf-certification-test) and can be pulled using:
+
+```shell
+docker pull quay.io/testnetworkfunction/cnf-certification-test:<image-tag>
+```
+
+The image tag can be `latest` to select the latest release, `unstable` to fetch the image built with the latest commit in the repository, or any existing version number such as `v5.1.0`.
+
+### Launch the Test Suite
+
+The Test Suite requires three files that must be provided to the test container:
+
+* The _Kubeconfig_ for the target cluster.
+* The _Dockerconfig_ of the local Docker installation (only for the Preflight test suite).
+* The `tnf_config.yml`.
+
+To reduce the number of volumes shared with the test container, in the example below those files have been copied into a folder called "config". Another folder called "results" has been created to hold the output files. The files saved in the output directory after the test run are:
+
+* A `claim.json` file with the test results.
+* A `certsuite.log` file with the execution logs.
+* A `.tar.gz` file with the above two files and an additional `results.html` file to visualize the results in a website.
+
+```shell
+docker run --rm --network host \
+  -v /config:/usr/certsuite/config:Z \
+  -v /results:/usr/certsuite/results:Z \
+  quay.io/testnetworkfunction/cnf-certification-test:latest \
+  certsuite run \
+    --kubeconfig=/usr/certsuite/config/kubeconfig \
+    --preflight-dockerconfig=/usr/certsuite/config/dockerconfig \
+    --config-file=/usr/certsuite/config/tnf_config.yml \
+    --output-dir=/usr/certsuite/results \
+    --label-filter=all
+```
diff --git a/docs/test-standalone.md b/docs/test-standalone.md
deleted file mode 100644
index 608017e5f..000000000
--- a/docs/test-standalone.md
+++ /dev/null
@@ -1,138 +0,0 @@
-
-# Standalone test executable
-
-**Prerequisites**
-
-The repo is cloned and all the commands should be run from the cloned repo.
-
-```shell
-mkdir ~/workspace
-cd ~/workspace
-git clone git@github.com:test-network-function/cnf-certification-test.git
-cd cnf-certification-test
-```
-
-!!! note
-
-    By default, `cnf-certification-test` emits results to `cnf-certification-test/cnf-certification-tests_junit.xml`.
-
-## 1. Install dependencies
-
-Depending on how you want to run the test suite there are different dependencies that will be needed.
-
-If you are planning on running the test suite as a container, the only pre-requisite is Docker or Podman.
-
-If you are planning on running the test suite as a standalone binary, there are pre-requisites that will
-need to be installed in your environment prior to runtime.
-
-Dependency|Minimum Version
----|---
-[GoLang](https://golang.org/dl/)|1.22
-[golangci-lint](https://golangci-lint.run/usage/install/)|1.58.1
-[jq](https://stedolan.github.io/jq/)|1.6
-[OpenShift Client](https://mirror.openshift.com/pub/openshift-v4/clients/ocp/)|4.12
-
-Other binary dependencies required to run tests can be installed using the following command:
-
-!!! note
-
-    * You must also make sure that `$GOBIN` (default `$GOPATH/bin`) is on your `$PATH`.
-    * Efforts to containerise this offering are considered a work in progress.
-
-## 2. Build the Test Suite
-
-In order to build the test executable, first make sure you have satisfied the [dependencies](#dependencies).
-
-```shell
-make build-cnf-tests
-```
-
-*Gotcha:* The `make build*` commands run unit tests where appropriate. They do NOT test the workload.
-
-### 3. Test a workload
-
-A workload is tested by specifying which suites to run using the `run-cnf-suites.sh` helper
-script.
-
-Run any combination of the suites keywords listed at in the [General tests](test-spec.md#general-tests) section, e.g.
-
-```shell
-./run-cnf-suites.sh -l "lifecycle"
-./run-cnf-suites.sh -l "networking,lifecycle"
-./run-cnf-suites.sh -l "operator,networking"
-./run-cnf-suites.sh -l "networking,platform-alteration"
-./run-cnf-suites.sh -l "networking,lifecycle,affiliated-certification,operator"
-```
-
-!!! note
-
-    As with "run-tnf-container.sh", if `-l` is not specified here, the tnf will run in 'diagnostic' mode.
-
-By default the claim file will be output into the same location as the test executable. The `-o` argument for
-`run-cnf-suites.sh` can be used to provide a new location that the output files will be saved to. For more detailed
-control over the outputs, see the output of `cnf-certification-test.test --help`.
-
-```shell
- cd cnf-certification-test && ./cnf-certification-test.test --help
-```
-
-#### Run a single test
-
-All tests have unique labels, which can be used to filter which tests are to be run. This is useful when debugging
-a single test.
-
-To select the test to be executed when running `run-cnf-suites.sh` with the following command-line:
-
-```shell
-./run-cnf-suites.sh -l operator-install-source
-```
-
-!!! note
-
-    The test labels work the same as the suite labels, so you can select more than one test with the filtering mechanism shown before.
-
-### Run all of the tests
-
-You can run all of the tests (including the intrusive tests and the extended suite) with the following commands:
-
-```shell
-./run-cnf-suites.sh -l all
-```
-
-#### Run a subset
-
-You can find all the labels attached to the tests by running the following command:
-
-```shell
-./run-cnf-suites.sh --list
-```
-
-You can also check the [CATALOG.md](CATALOG.md) to find all test labels.
-
-#### Labels for offline environments
-
-Some tests do require connectivity to Red Hat servers to validate certification status.
-To run the tests in an offline environment, skip the tests using the `l` option.
-
-```shell
-./run-cnf-suites.sh -l '!online'
-```
-
-Alternatively, if an offline DB for containers, helm charts and operators is available, there is no need to skip those tests if the environment variable `TNF_OFFLINE_DB` is set to the DB location. This DB can be generated using the [OCT tool](https://github.com/test-network-function/oct).
-
-Note: Only partner certified images are stored in the offline database. If Red Hat images are checked against the offline database, they will show up as not certified. The online database includes both Partner and Redhat images.
-
-#### Output tar.gz file with results and web viewer files
-
-After running all the test cases, a compressed file will be created with all the results files and web artifacts to review them.
-
-By default, only the `claim.js`, the `cnf-certification-tests_junit.xml` file and this new tar.gz file are created after the test suite has finished, as this is probably all that normal partners/users will need.
-
-Two env vars allow to control the web artifacts and the the new tar.gz file generation:
-
-* TNF_OMIT_ARTIFACTS_ZIP_FILE=true/false : Defaulted to false in the launch scripts. If set to true, the tar.gz generation will be skipped.
-* TNF_INCLUDE_WEB_FILES_IN_OUTPUT_FOLDER=true/false : Defaulted to false in the launch scripts. If set to true, the web viewer/parser files will also be copied to the output (claim) folder.
-
-### Build + Test a workload
-
-Refer [Developers' Guide](developers.md)
diff --git a/mkdocs.yml b/mkdocs.yml
index 45ed32e14..14749261f 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -66,9 +66,7 @@ nav:
   - Setup:
     - Test Configuration: "configuration.md"
     - Runtime environment variables: "runtime-env.md"
-    - Build | Execute:
-      - Prebuilt container: "test-container.md"
-      - Standalone test executable: "test-standalone.md"
+    - Run: "test-run.md"
   - Available Test Specs:
     - Test Specs: "test-spec.md"
     - Implementation: "test-spec-implementation.md"