From ea39aa334a40fd2ba6c3c24e616d5b2508edb691 Mon Sep 17 00:00:00 2001
From: sebrandon1
This patch bumps the default debug pod image (the SUPPORT_IMAGE environment variable) from debug-partner:4.5.4 to debug-partner:4.5.5 in the generated documentation.
This repository provides a set of Cloud-Native Network Functions (CNF) test cases and the framework to add more test cases.
CNF
The app (containers/pods/operators) we want to certify according to Telco partner/Red Hat's best practices.
TNF/Certification Test Suite
The tool we use to certify a CNF.
The purpose of the tests and the framework is to test the interaction of CNF with OpenShift Container Platform (OCP).
Info
This test suite is provided for the CNF Developers to test their CNFs' readiness for certification. Please see "CNF Developers" for more information.
Features
The test suite generates a report (claim.json) and saves the test execution log (tnf-execution.log) in a configurable output directory.
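As a quick sanity check after a run, the report can be inspected with standard JSON tooling. A minimal sketch, assuming jq is installed and the claim file sits in the current directory:
# List the top-level entries recorded under the results section of the claim file
# (the exact JSON layout may differ between TNF versions).
jq '.claim.results | keys' claim.json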
The catalog of the existing test cases and test building blocks is available in CATALOG.md
There are 3 building blocks in this framework.
the CNF
represents the CNF to be certified. The certification suite identifies the resources (containers/pods/operators, etc.) belonging to the CNF via labels or static data entries in the config file
the Certification container/exec
is the certification test suite running on the platform or in a container. The executable verifies the CNF under test configuration and its interactions with OpenShift
the Debug
pods are part of a Kubernetes daemonset responsible for running various privileged commands on Kubernetes nodes. Debug pods are useful to run platform tests and test commands (e.g. ping) in container namespaces without changing the container image content. The debug daemonset is instantiated via the privileged-daemonset repository.
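As an illustration, you can check that the debug pods landed on every node once the suite has created them. A sketch, assuming the default cnf-suite namespace described in the configuration section:
# Verify the debug daemonset has one ready pod per node
# (the namespace is configurable; cnf-suite is the documented default).
oc get daemonset,pods -n cnf-suite -o wide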
Developers of CNFs, particularly those targeting CNF Certification with Red Hat on OpenShift, can use this suite to test the interaction of their CNF with OpenShift. If interested in CNF Certification please contact Red Hat.
Requirements
Refer to this documentation: https://github.com/test-network-function/cnfextensions
Reference
The cnf-certification-test-partner repository provides a sample setup to model the test environment.
"},{"location":"configuration/","title":"Test Configuration","text":""},{"location":"configuration/#cnf-certification-configuration","title":"CNF Certification configuration","text":"The CNF Certification Test uses a YAML configuration file to certify a specific CNF workload. This file specifies the CNF resources to be certified, as well as any exceptions or other general configuration options.
By default, a file named tnf_config.yml will be used. Here's an example of the CNF Config File. For a description of each config option see the section CNF Config File options.
"},{"location":"configuration/#cnf-config-generator","title":"CNF Config Generator","text":"The CNF config file can be created using the CNF Config Generator, which is part of the TNF tool shipped with the CNF Certification. The purpose of this particular tool is to help users configuring the CNF Certification providing a logical structure of the available options as well as the information required to make use of them. The result is a CNF config file in YAML format that the CNF Certification will parse to adapt the certification process to a specific CNF workload.
To compile the TNF tool:
make build-tnf-tool
To launch the CNF Config Generator:
./tnf generate config
Here's an example of how to use the tool:
"},{"location":"configuration/#cnf-config-file-options","title":"CNF Config File options","text":""},{"location":"configuration/#cnf-resources","title":"CNF resources","text":"These options allow configuring the workload resources of the CNF to be verified. Only the resources that the CNF uses are required to be configured. The rest can be left empty. Usually a basic configuration includes Namespaces and Pods at least.
"},{"location":"configuration/#targetnamespaces","title":"targetNameSpaces","text":"The namespaces in which the CNF under test will be deployed.
targetNameSpaces:
  - name: tnf
"},{"location":"configuration/#podsundertestlabels","title":"podsUnderTestLabels","text":"The labels that each Pod of the CNF under test must have to be verified by the CNF Certification Suite.
Highly recommended
The labels should be defined in the Pod definition rather than added after the Pod is created, as labels added later on will be lost in case the Pod gets rescheduled. In the case of Pods defined as part of a Deployment, it's best to use the same label as the one defined in the spec.selector.matchLabels section of the Deployment YAML. The prefix field can be used to avoid naming collisions with other labels.
podsUnderTestLabels:
  - "test-network-function.com/generic: target"
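To double-check the labeling before a run, you can list the matching Pods. A sketch using the namespace and label from the examples above:
# Pods must carry this label to be picked up by the certification suite.
oc get pods -n tnf -l test-network-function.com/generic=target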
"},{"location":"configuration/#operatorsundertestlabels","title":"operatorsUnderTestLabels","text":"The labels that each operator\u2019s CSV of the CNF under test must have to be verified by the CNF Certification Suite.
If a new label is used for this purpose, make sure it is added to the CNF operator's CSVs.
operatorsUnderTestLabels:
  - "test-network-function.com/operator: target"
"},{"location":"configuration/#targetcrdfilters","title":"targetCrdFilters","text":"The CRD name suffix used to filter the CNF\u2019s CRDs among all the CRDs present in the cluster. For each CRD it can also be specified if it\u2019s scalable or not in order to avoid some lifecycle test cases.
targetCrdFilters:
  - nameSuffix: "group1.tnf.com"
    scalable: false
  - nameSuffix: "anydomain.com"
    scalable: true
With the config shown above, all CRDs in the cluster whose names have the suffix group1.tnf.com or anydomain.com (e.g. crd1.group1.tnf.com or mycrd.mygroup.anydomain.com) will be tested.
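A quick way to preview which CRDs a given filter would match is to grep the cluster's CRD names. A sketch using the suffixes from the example above:
# Preview the CRDs the nameSuffix filters would select.
oc get crds -o name | grep -E '(group1\.tnf\.com|anydomain\.com)$'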
"},{"location":"configuration/#manageddeployments-managedstatefulsets","title":"managedDeployments / managedStatefulSets","text":"The Deployments/StatefulSets managed by a Custom Resource whose scaling is controlled using the \u201cscale\u201d subresource of the CR.
The CRD defining that CR should be included in the CRD filters with the scalable property set to true. If so, the test case lifecycle-{deployment/statefulset}-scaling will be skipped, otherwise it will fail.
managedDeployments:
  - name: jack
managedStatefulsets:
  - name: jack
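Since these test cases exercise the "scale" subresource, a quick manual check is to scale the CR directly. A sketch with an illustrative CR type and the name from the example above:
# If the scale subresource is wired up, oc can scale the Custom Resource directly
# (mycustomresource is an illustrative CRD kind).
oc scale --replicas=2 mycustomresource/jack -n tnf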
"},{"location":"configuration/#exceptions","title":"Exceptions","text":"These options allow adding exceptions to skip several checks for different resources. The exceptions must be justified in order to pass the CNF Certification.
"},{"location":"configuration/#acceptedkerneltaints","title":"acceptedKernelTaints","text":"The list of kernel modules loaded by the CNF that make the Linux kernel mark itself as tainted but that should skip verification.
Test cases affected: platform-alteration-tainted-node-kernel.
acceptedKernelTaints:
  - module: vboxsf
  - module: vboxguest
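To see why a node reports a tainted kernel before adding an exception here, you can read the kernel taint mask on the node. A sketch with an illustrative node name:
# A non-zero value means the kernel is tainted; the bitmask encodes the reasons.
oc debug node/worker-0 -- chroot /host cat /proc/sys/kernel/tainted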
"},{"location":"configuration/#skiphelmchartlist","title":"skipHelmChartList","text":"The list of Helm charts that the CNF uses whose certification status will not be verified.
If no exception is configured, the certification status for all Helm charts will be checked in the OpenShift Helm Charts repository.
Test cases affected: affiliated-certification-helmchart-is-certified.
skipHelmChartList:
  - name: coredns
"},{"location":"configuration/#validprotocolnames","title":"validProtocolNames","text":"The list of allowed protocol names to be used for container port names.
The name field of a container port must be of the form protocol[-suffix] where protocol must be allowed by default or added to this list. The optional suffix can be chosen by the application. Protocol names allowed by default: grpc, grpc-web, http, http2, tcp, udp.
Test cases affected: manageability-container-port-name-format.
validProtocolNames:
  - "http3"
  - "sctp"
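To audit the container port names the test case will evaluate, you can dump them with a jsonpath query. A sketch over the tnf namespace used in the examples above:
# Print every container port name in the namespace; each must match
# protocol[-suffix] with an allowed protocol.
oc get pods -n tnf -o jsonpath='{range .items[*].spec.containers[*].ports[*]}{.name}{"\n"}{end}'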
"},{"location":"configuration/#servicesignorelist","title":"servicesIgnoreList","text":"The list of Services that will skip verification.
Services included in this list will be filtered out at the autodiscovery stage and will not be subject to checks in any test case.
Test cases affected: networking-dual-stack-service, access-control-service-type.
servicesignorelist:
  - "hazelcast-platform-controller-manager-service"
  - "hazelcast-platform-webhook-service"
  - "new-pro-controller-manager-metrics-service"
"},{"location":"configuration/#skipscalingtestdeployments-skipscalingteststatefulsets","title":"skipScalingTestDeployments / skipScalingTestStatefulSets","text":"The list of Deployments/StatefulSets that do not support scale in/out operations.
Deployments/StatefulSets included in this list will skip any scaling operation check.
Test cases affected: lifecycle-deployment-scaling, lifecycle-statefulset-scaling.
skipScalingTestDeployments:
  - name: deployment1
    namespace: tnf
skipScalingTestStatefulSetNames:
  - name: statefulset1
    namespace: tnf
"},{"location":"configuration/#cnf-certification-settings","title":"CNF Certification settings","text":""},{"location":"configuration/#debugdaemonsetnamespace","title":"debugDaemonSetNamespace","text":"This is an optional field with the name of the namespace where a privileged DaemonSet will be deployed. The namespace will be created in case it does not exist. In case this field is not set, the default namespace for this DaemonSet is cnf-suite.
debugDaemonSetNamespace: cnf-cert
This DaemonSet, called tnf-debug, is deployed and used internally by the CNF Certification tool to issue some shell commands that are needed in certain test cases. Some of these test cases might fail or be skipped in case it wasn't deployed correctly.
"},{"location":"configuration/#other-settings","title":"Other settings","text":"The autodiscovery mechanism will attempt to identify the default network device and all the IP addresses of the Pods it needs for network connectivity tests, though that information can be explicitly set using annotations if needed.
"},{"location":"configuration/#pod-ips","title":"Pod IPs","text":"The label test-network-function.com/skip_connectivity_tests excludes Pods from all connectivity tests.
The label test-network-function.com/skip_multus_connectivity_tests excludes Pods from Multus connectivity tests. Tests on the default interface are still run.
"},{"location":"configuration/#affinity-requirements","title":"Affinity requirements","text":"For CNF workloads that require Pods to use Pod or Node Affinity rules, the label AffinityRequired: true must be included on the Pod YAML. This will ensure that the affinity best practices are tested and prevent any test cases for anti-affinity to fail.
"},{"location":"developers/","title":"Developers","text":""},{"location":"developers/#steps","title":"Steps","text":"To test the newly added test / existing tests locally, follow the steps
Set runtime environment variables as required.
For example, to deploy partner deployments in a custom namespace, set it in the test config:
targetNameSpaces:
  - name: mynamespace
Also, skip intrusive tests
export TNF_NON_INTRUSIVE_ONLY=true
Set K8s config of the cluster where test pods are running
export KUBECONFIG=<<mypath/.kube/config>>\n
Execute the test suite, which will build and run the suite.
For example, to run the networking tests:
./script/development.sh networking
If you have dependencies on other Pull Requests, you can add a comment like this:
Depends-On: <url of the PR>\n
and the dependent PR will automatically be extracted and injected in your change during the GitHub Action CI jobs and the DCI jobs.
"},{"location":"exception/","title":"Exception Process","text":""},{"location":"exception/#exception-process","title":"Exception Process","text":"There may exist some test cases which needs to fail always. The exception raised by the failed tests is published to Red Hat website for that partner.
CATALOG provides the details of such exceptions.
"},{"location":"reference/","title":"Helpful Links","text":"To run the test suite, some runtime environment variables are to be set.
"},{"location":"runtime-env/#ocp-412-labels","title":"OCP >=4.12 Labels","text":"The following labels need to be added to your default namespace in your cluster if you are running OCP >=4.12:
pod-security.kubernetes.io/enforce: privileged
pod-security.kubernetes.io/enforce-version: latest
You can manually label the namespace with:
oc label namespace/default pod-security.kubernetes.io/enforce=privileged
oc label namespace/default pod-security.kubernetes.io/enforce-version=latest
"},{"location":"runtime-env/#disable-intrusive-tests","title":"Disable intrusive tests","text":"To skip intrusive tests which may disrupt cluster operations, issue the following:
export TNF_NON_INTRUSIVE_ONLY=true
Likewise, to enable intrusive tests, set the following:
export TNF_NON_INTRUSIVE_ONLY=false
Intrusive tests are enabled by default.
"},{"location":"runtime-env/#preflight-integration","title":"Preflight Integration","text":"When running the preflight
suite of tests, there are a few environment variables that will need to be set:
PFLT_DOCKERCONFIG is a required variable for running the preflight test suite. It provides credentials to the underlying preflight library for pulling and manipulating images and image bundles for testing.
When running as a container, the docker config is mounted to the container via volume mount.
When running as a standalone binary, the environment variables are consumed directly from your local machine.
See more about this variable here.
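For example, a minimal setup on a local machine might look like this (the path is illustrative; any docker-style config file with registry credentials works):
# Point the preflight library at a docker-style registry credentials file.
export PFLT_DOCKERCONFIG=$HOME/.docker/config.json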
TNF_ALLOW_PREFLIGHT_INSECURE (default: false) must be set to true if you are running against a private container registry that has self-signed certificates.
In a disconnected environment, only specific versions of images are mirrored to the local repo. For those environments, the partner pod image quay.io/testnetworkfunction/cnf-test-partner and the debug pod image quay.io/testnetworkfunction/debug-partner should be mirrored, and TNF_PARTNER_REPO should be set to the local repo, e.g.:
export TNF_PARTNER_REPO=registry.dfwt5g.lab:5000/testnetworkfunction
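A sketch of the mirroring step, assuming skopeo is available and the local registry from the example above (the image tags are illustrative):
# Mirror the partner and debug pod images into the local registry.
skopeo copy docker://quay.io/testnetworkfunction/cnf-test-partner:latest \
  docker://registry.dfwt5g.lab:5000/testnetworkfunction/cnf-test-partner:latest
skopeo copy docker://quay.io/testnetworkfunction/debug-partner:4.5.5 \
  docker://registry.dfwt5g.lab:5000/testnetworkfunction/debug-partner:4.5.5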
Note that you can also specify the debug pod image to use with the SUPPORT_IMAGE environment variable, which defaults to debug-partner:4.5.5.
The tests can be run within a prebuilt container in the OCP cluster.
Prerequisites for the OCP cluster
The lifecycle-pod-recreation test should be skipped.
The test image is available at this repository in quay.io and can be pulled using:
podman pull quay.io/testnetworkfunction/cnf-certification-test
"},{"location":"test-container/#check-cluster-resources","title":"Check cluster resources","text":"Some tests suites such as platform-alteration
require node access to get node configuration like hugepage
. In order to get the required information, the test suite does not ssh
into nodes, but instead rely on oc debug tools. This tool makes it easier to fetch information from nodes and also to debug running pods.
oc debug tool
will launch a new container ending with -debug suffix, and the container will be destroyed once the debug session is done. Ensure that the cluster should have enough resources to create debug pod, otherwise those tests would fail.
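For instance, this is roughly how such node information can be fetched by hand. A sketch with an illustrative node name:
# Read hugepage settings from the node through a debug session instead of ssh;
# the temporary -debug pod is removed when the session ends.
oc debug node/worker-0 -- chroot /host cat /proc/meminfo | grep -i huge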
Note
It's recommended to clean up disk space and make sure there are enough resources to deploy another container image on every node before starting the tests.
"},{"location":"test-container/#run-the-tests","title":"Run the tests","text":"./run-tnf-container.sh\n
Required arguments
-t to provide the path of the local directory that contains tnf config files
-o to provide the path of the local directory where test results (claim.json), the execution logs (tnf-execution.log), and the results artifacts file (results.tar.gz) will be available after the container exits.
Warning
This directory must exist in order for the claim file to be written.
Optional arguments
-l to list the labels to be run. See Ginkgo Spec Labels for more information on how to filter tests with labels.
Note
If -l is not specified, the tnf will run in 'diagnostic' mode. In this mode, no test case will run: it will only get information from the cluster (PUTs, CRDs, nodes info, etc.) to save in the claim file. This can be used to make sure the configuration was properly set and the autodiscovery found the right pods/CRDs.
-i to provide a name to a custom TNF container image. Supports local images, as well as images from external registries.
-k to set a path to one or more kubeconfig files to be used by the container to authenticate with the cluster. Paths must be separated by a colon.
Note
If -k is not specified, autodiscovery is performed.
The autodiscovery first looks for paths in the $KUBECONFIG environment variable on the host system, and if the variable is not set or is empty, the default configuration stored in $HOME/.kube/config is checked.
-n to give the network mode of the container. Defaults to host, which requires SELinux to be disabled. Alternatively, bridge mode can be used with SELinux if TNF_CONTAINER_CLIENT is set to docker or the test is run as root.
Note
See the docker run --network parameter reference for more information on how to configure network settings.
-b to set an external offline DB that will be used to verify the certification status of containers, helm charts and operators. Defaults to the DB included in the TNF container image.
Note
See the OCT tool for more information on how to create this DB.
Command to run
./run-tnf-container.sh -k ~/.kube/config -t ~/tnf/config -o ~/tnf/output -l "networking,access-control"
See General tests for a list of available keywords.
"},{"location":"test-container/#run-with-docker","title":"Run withdocker
","text":"By default, run-container.sh
utilizes podman
. However, an alternate container virtualization client using TNF_CONTAINER_CLIENT
can be configured. This is particularly useful for operating systems that do not readily support podman
.
In order to configure the test harness to use docker, issue the following prior to running run-tnf-container.sh:
export TNF_CONTAINER_CLIENT=docker
"},{"location":"test-container/#output-targz-file-with-results-and-web-viewer-files","title":"Output tar.gz file with results and web viewer files","text":"After running all the test cases, a compressed file will be created with all the results files and web artifacts to review them.
By default, only the claim.json, the cnf-certification-tests_junit.xml file and this new tar.gz file are created after the test suite has finished, as this is probably all that normal partners/users will need.
Two env vars allow controlling the web artifacts and the new tar.gz file generation:
podman build -t cnf-certification-test:v4.5.5 \
  --build-arg TNF_VERSION=v4.5.5 \
TNF_VERSION value is set to a branch, a tag, or a hash of a commit that will be installed into the image.
The unofficial source could be a fork of the TNF repository.
Use the TNF_SRC_URL build argument to override the URL to a source repository.
podman build -t cnf-certification-test:v4.5.5 \
  --build-arg TNF_VERSION=v4.5.5 \
  --build-arg TNF_SRC_URL=https://github.com/test-network-function/cnf-certification-test .
"},{"location":"test-container/#run-the-tests-2","title":"Run the tests 2","text":"Specify the custom TNF image using the -i
parameter.
./run-tnf-container.sh -i cnf-certification-test:v4.5.4\n-t ~/tnf/config -o ~/tnf/output -l \"networking,access-control\"\n
Note: see General tests for a list of available keywords.
"},{"location":"test-output/","title":"Test Output","text":""},{"location":"test-output/#test-output","title":"Test Output","text":""},{"location":"test-output/#claim-file","title":"Claim File","text":"The test suite generates an output file, named claim file. This file is considered as the proof of CNFs test run, evaluated by Red Hat when certified status is considered.
This file describes the following
Files that need to be submitted for certification
When submitting results back to Red Hat for certification, please include the above mentioned claim file, the JUnit file, and any available console logs.
How to add a CNF platform test result to the existing claim file?
go run cmd/tools/cmd/main.go claim-add --claimfile=claim.json --reportdir=/home/$USER/reports
Args:
--claimfile is an existing claim.json file.
--reportdir is the path to the test results that you want to include.
The test result files from the given report dir will be appended under the results section of the claim file, using the file name as the key. The tool will ignore a test result if its key name is already present under the results section of the claim file.
\"results\": {\n \"cnf-certification-tests_junit\": {\n \"testsuite\": {\n \"-errors\": \"0\",\n \"-failures\": \"2\",\n \"-name\": \"CNF Certification Test Suite\",\n \"-tests\": \"14\",\n ...\n
Reference
For more details on the contents of the claim file
The test suite also saves a copy of the execution logs at [test output directory]/tnf-execution.log
"},{"location":"test-output/#results-artifacts-zip-file","title":"Results artifacts zip file","text":"After running all the test cases, a compressed file will be created with all the results files and web artifacts to review them. The file has a UTC date-time prefix and looks like this:
20230620-110654-cnf-test-results.tar.gz
The "20230620-110654" sample prefix means "June 20th 2023, 11:06:54".
This is the content of the tar.gz file:
This file serves two different purposes:
The claimjson.js and classification.js files need to be in the same folder as the HTML files for the results web page to work properly.
A standalone HTML page is available to decode the results. For more details, see: https://github.com/test-network-function/parser
"},{"location":"test-output/#compare-claim-files-from-two-different-cnf-certification-suite-runs","title":"Compare claim files from two different CNF Certification Suite runs","text":"Parters can use the tnf claim compare
tool in order to compare two claim files. The differences are shown in a table per section. This tool can be helpful when the result of some test cases is different between two (consecutive) runs, as it shows configuration differences in both the CNF Cert Suite config and the cluster nodes that could be the root cause for some of the test cases results discrepancy.
All the compared sections, except the test case results, are compared blindly, traversing the whole JSON tree and subtrees to get a list of all the fields and their values. Three tables are shown:
Let's say one of the nodes of the claim.json file contains this struct:
{\n \"field1\": \"value1\",\n \"field2\": {\n \"field3\": \"value2\",\n \"field4\": {\n \"field5\": \"value3\",\n \"field6\": \"value4\"\n }\n }\n}\n
When parsing that JSON struct's fields, the tool will produce a list like this:
/field1=value1
/field2/field3=value2
/field2/field4/field5=value3
/field2/field4/field6=value4
Once this list of field path+value strings has been obtained from both claim files, it is compared in order to find the differences or the fields that only exist in each file.
This is a fake example of a node "clus0-0" whose first CNI (index 0) has a different cniVersion, and the ipMasq flag of its first plugin (also index 0) has changed to false in the second run. Also, the plugin has another "newFakeFlag" config flag in claim 2 that didn't exist in claim file 1.
...
CNIs: Differences
FIELD                        CLAIM 1  CLAIM 2
/clus0-0/0/cniVersion        1.0.0    1.0.1
/clus0-0/0/plugins/0/ipMasq  true     false

CNIs: Only in CLAIM 1
<none>

CNIs: Only in CLAIM 2
/clus0-0/0/plugins/0/newFakeFlag=true
...
Currently, the following sections are compared, in this order:
The tnf tool is located in the repo's cmd/tnf folder. In order to compile it, just run:
make build-tnf-tool
"},{"location":"test-output/#examples","title":"Examples","text":""},{"location":"test-output/#compare-a-claim-file-against-itself-no-differences-expected","title":"Compare a claim file against itself: no differences expected","text":""},{"location":"test-output/#different-test-cases-results","title":"Different test cases results","text":"Let\u2019s assume we have two claim files, claim1.json and claim2.json, obtained from two CNF Certification Suite runs in the same cluster.
During the second run, there was a test case that failed. Let's simulate this by manually modifying the second run's claim file to switch one test case's state from "passed" to "failed".
"},{"location":"test-output/#different-cluster-configurations","title":"Different cluster configurations","text":"First, let\u2019s simulate that the second run took place in a cluster with a different OCP version. As we store the OCP version in the claim file (section claim.versions), we can also modify it manually. The versions section comparison appears at the very beginning of the tnf claim compare
output:
Now, let's simulate that the cluster was a bit different when the second CNF Certification Suite run was performed. First, let's make a manual change in claim2.json to emulate a different CNI version in the first node.
Finally, we'll simulate that, for some reason, the first node had one label removed when the second run was performed:
"},{"location":"test-spec/","title":"Available Test Specs","text":""},{"location":"test-spec/#test-specifications","title":"Test Specifications","text":""},{"location":"test-spec/#available-test-specs","title":"Available Test Specs","text":"There are two categories for CNF tests.
General
These tests are designed to test any commodity CNF running on OpenShift, and include specifications such as Default network connectivity.
CNF-specific
These tests are designed to test that some unique aspects of the CNF under test are behaving correctly. This could include specifications such as issuing a GET request to a web server, or passing traffic through an IPSEC tunnel.
These tests belong to multiple suites that can be run in any combination as is appropriate for the CNFs under test.
Info
Test suites group tests by topic area.
Suite: access-control (minimum OpenShift version 4.6.0)
The access-control test suite is used to test service account, namespace and cluster/pod role binding for the pods under test. It also tests the pods/containers configuration.
Suite: affiliated-certification (minimum OpenShift version 4.6.0)
The affiliated-certification test suite verifies that the containers and operators discovered or listed in the configuration file are certified by Red Hat.
Suite: lifecycle (minimum OpenShift version 4.6.0)
The lifecycle test suite verifies the pods deployment, creation, shutdown and survivability.
Suite: networking (minimum OpenShift version 4.6.0)
The networking test suite contains tests that check connectivity and networking config related best practices.
Suite: operator (minimum OpenShift version 4.6.0)
The operator test suite is designed to test basic Kubernetes Operator functionality.
Suite: platform-alteration (minimum OpenShift version 4.6.0)
The platform-alteration test suite verifies that key platform configuration is not modified by the CNF under test.
Suite: observability (minimum OpenShift version 4.6.0)
The observability test suite contains tests that check that CNF logging follows best practices and that CRDs have status fields.
Info
Please refer to CATALOG.md for more details.
"},{"location":"test-spec/#cnf-specific-tests","title":"CNF-specific tests","text":"TODO
"},{"location":"test-standalone/","title":"Standalone test executable","text":""},{"location":"test-standalone/#standalone-test-executable","title":"Standalone test executable","text":"Prerequisites
The repo is cloned and all the commands should be run from the cloned repo.
mkdir ~/workspace
cd ~/workspace
git clone git@github.com:test-network-function/cnf-certification-test.git
cd cnf-certification-test
Note
By default, cnf-certification-test emits results to cnf-certification-test/cnf-certification-tests_junit.xml.
Depending on how you want to run the test suite, different dependencies will be needed.
If you are planning on running the test suite as a container, the only pre-requisite is Docker or Podman.
If you are planning on running the test suite as a standalone binary, there are pre-requisites that will need to be installed in your environment prior to runtime.
Run the following command to install these dependencies.
make install-tools
Dependency        Minimum Version
GoLang            1.21
golangci-lint     1.55.1
jq                1.6
OpenShift Client  4.12
Other binary dependencies required to run tests can be installed using the following command:
Note
Make sure $GOBIN (default $GOPATH/bin) is on your $PATH.
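One way to satisfy that for the current shell session, assuming a typical Go setup (a sketch; adapt to your shell profile):
# Put Go-installed binaries on PATH, preferring $GOBIN when it is set.
export PATH="${GOBIN:-$(go env GOPATH)/bin}:$PATH"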
In order to build the test executable, first make sure you have satisfied the dependencies.
make build-cnf-tests
Gotcha: The make build* commands run unit tests where appropriate. They do NOT test the CNF.
A CNF is tested by specifying which suites to run using the run-cnf-suites.sh helper script.
Run any combination of the suite keywords listed in the General tests section, e.g.:
./run-cnf-suites.sh -l \"lifecycle\"\n./run-cnf-suites.sh -l \"networking,lifecycle\"\n./run-cnf-suites.sh -l \"operator,networking\"\n./run-cnf-suites.sh -l \"networking,platform-alteration\"\n./run-cnf-suites.sh -l \"networking,lifecycle,affiliated-certification,operator\"\n
Note
As with "run-tnf-container.sh", if -l is not specified here, the tnf will run in 'diagnostic' mode.
By default the claim file will be output into the same location as the test executable. The -o argument for run-cnf-suites.sh can be used to provide a new location that the output files will be saved to. For more detailed control over the outputs, see the output of cnf-certification-test.test --help.
cd cnf-certification-test && ./cnf-certification-test.test --help\n
"},{"location":"test-standalone/#run-a-single-test","title":"Run a single test","text":"All tests have unique labels, which can be used to filter which tests are to be run. This is useful when debugging a single test.
To select the test to be executed, run run-cnf-suites.sh with the following command line:
./run-cnf-suites.sh -l operator-install-source
Note
The test labels work the same as the suite labels, so you can select more than one test with the filtering mechanism shown before.
"},{"location":"test-standalone/#run-all-of-the-tests","title":"Run all of the tests","text":"You can run all of the tests (including the intrusive tests and the extended suite) with the following commands:
./run-cnf-suites.sh -l all
"},{"location":"test-standalone/#run-a-subset","title":"Run a subset","text":"You can find all the labels attached to the tests by running the following command:
./run-cnf-suites.sh --list
You can also check the CATALOG.md to find all test labels.
"},{"location":"test-standalone/#labels-for-offline-environments","title":"Labels for offline environments","text":"Some tests do require connectivity to Red Hat servers to validate certification status. To run the tests in an offline environment, skip the tests using the l
option.
./run-cnf-suites.sh -l '!online'
Alternatively, if an offline DB for containers, helm charts and operators is available, there is no need to skip those tests if the environment variable TNF_OFFLINE_DB is set to the DB location. This DB can be generated using the OCT tool.
Note: Only partner certified images are stored in the offline database. If Red Hat images are checked against the offline database, they will show up as not certified. The online database includes both Partner and Red Hat images.
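A sketch of an offline run using such a DB (the DB path is illustrative):
# Use a pre-generated offline certification DB instead of skipping the online tests.
export TNF_OFFLINE_DB=/var/tnf/offline-db
./run-cnf-suites.sh -l all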
"},{"location":"test-standalone/#output-targz-file-with-results-and-web-viewer-files","title":"Output tar.gz file with results and web viewer files","text":"After running all the test cases, a compressed file will be created with all the results files and web artifacts to review them.
By default, only the claim.json, the cnf-certification-tests_junit.xml file and this new tar.gz file are created after the test suite has finished, as this is probably all that normal partners/users will need.
Two env vars allow controlling the web artifacts and the new tar.gz file generation:
Refer to the Developers' Guide.
"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Home","text":""},{"location":"#overview","title":"Overview","text":"This repository provides a set of Cloud-Native Network Functions (CNF) test cases and the framework to add more test cases.
CNF
The app (containers/pods/operators) we want to certify according Telco partner/Red Hat\u2019s best practices.
TNF/Certification Test Suite
The tool we use to certify a CNF.
The purpose of the tests and the framework is to test the interaction of CNF with OpenShift Container Platform (OCP).
Info
This test suite is provided for the CNF Developers to test their CNFs readiness for certification. Please see \u201cCNF Developers\u201d for more information.
Features
The test suite generates a report (claim.json
) and saves the test execution log (tnf-execution.log
) in a configurable output directory.
The catalog of the existing test cases and test building blocks are available in CATALOG.md
There are 3 building blocks in the above framework.
the CNF
represents the CNF to be certified. The certification suite identifies the resources (containers/pods/operators etc) belonging to the CNF via labels or static data entries in the config file
the Certification container/exec
is the certification test suite running on the platform or in a container. The executable verifies the CNF under test configuration and its interactions with openshift
the Debug
pods are part of a Kubernetes daemonset responsible to run various privileged commands on kubernetes nodes. Debug pods are useful to run platform tests and test commands (e.g. ping) in container namespaces without changing the container image content. The debug daemonset is instantiated via the privileged-daemonset repository.
Developers of CNFs, particularly those targeting CNF Certification with Red Hat on OpenShift, can use this suite to test the interaction of their CNF with OpenShift. If interested in CNF Certification please contact Red Hat.
Requirements
Refer this documentation https://github.com/test-network-function/cnfextensions
Reference
cnf-certification-test-partner repository provides sample example to model the test setup.
"},{"location":"configuration/","title":"Test Configuration","text":""},{"location":"configuration/#cnf-certification-configuration","title":"CNF Certification configuration","text":"The CNF Certification Test uses a YAML configuration file to certify a specific CNF workload. This file specifies the CNF resources to be certified, as well as any exceptions or other general configuration options.
By default a file named tnf_config.yml will be used. Here\u2019s an example of the CNF Config File. For a description of each config option see the section CNF Config File options.
"},{"location":"configuration/#cnf-config-generator","title":"CNF Config Generator","text":"The CNF config file can be created using the CNF Config Generator, which is part of the TNF tool shipped with the CNF Certification. The purpose of this particular tool is to help users configuring the CNF Certification providing a logical structure of the available options as well as the information required to make use of them. The result is a CNF config file in YAML format that the CNF Certification will parse to adapt the certification process to a specific CNF workload.
To compile the TNF tool:
make build-tnf-tool\n
To launch the CNF Config Generator:
./tnf generate config\n
Here\u2019s an example of how to use the tool:
"},{"location":"configuration/#cnf-config-file-options","title":"CNF Config File options","text":""},{"location":"configuration/#cnf-resources","title":"CNF resources","text":"These options allow configuring the workload resources of the CNF to be verified. Only the resources that the CNF uses are required to be configured. The rest can be left empty. Usually a basic configuration includes Namespaces and Pods at least.
"},{"location":"configuration/#targetnamespaces","title":"targetNameSpaces","text":"The namespaces in which the CNF under test will be deployed.
targetNameSpaces:\n - name: tnf\n
"},{"location":"configuration/#podsundertestlabels","title":"podsUnderTestLabels","text":"The labels that each Pod of the CNF under test must have to be verified by the CNF Certification Suite.
Highly recommended
The labels should be defined in Pod definition rather than added after the Pod is created, as labels added later on will be lost in case the Pod gets rescheduled. In the case of Pods defined as part of a Deployment, it\u2019s best to use the same label as the one defined in the spec.selector.matchLabels section of the Deployment YAML. The prefix field can be used to avoid naming collision with other labels.
podsUnderTestLabels:\n - \"test-network-function.com/generic: target\"\n
"},{"location":"configuration/#operatorsundertestlabels","title":"operatorsUnderTestLabels","text":"The labels that each operator\u2019s CSV of the CNF under test must have to be verified by the CNF Certification Suite.
If a new label is used for this purpose make sure it is added to the CNF operator\u2019s CSVs.
operatorsUnderTestLabels:\n - \"test-network-function.com/operator: target\" \n
"},{"location":"configuration/#targetcrdfilters","title":"targetCrdFilters","text":"The CRD name suffix used to filter the CNF\u2019s CRDs among all the CRDs present in the cluster. For each CRD it can also be specified if it\u2019s scalable or not in order to avoid some lifecycle test cases.
targetCrdFilters:\n - nameSuffix: \"group1.tnf.com\"\n scalable: false\n - nameSuffix: \"anydomain.com\"\n scalable: true\n
With the config show above, all CRD names in the cluster whose names have the suffix group1.tnf.com or anydomain.com ( e.g. crd1.group1.tnf.com or mycrd.mygroup.anydomain.com) will be tested.
"},{"location":"configuration/#manageddeployments-managedstatefulsets","title":"managedDeployments / managedStatefulSets","text":"The Deployments/StatefulSets managed by a Custom Resource whose scaling is controlled using the \u201cscale\u201d subresource of the CR.
The CRD defining that CR should be included in the CRD filters with the scalable property set to true. If so, the test case lifecycle-{deployment/statefulset}-scaling will be skipped, otherwise it will fail.
managedDeployments:\n - name: jack\nmanagedStatefulsets:\n - name: jack\n
"},{"location":"configuration/#exceptions","title":"Exceptions","text":"These options allow adding exceptions to skip several checks for different resources. The exceptions must be justified in order to pass the CNF Certification.
"},{"location":"configuration/#acceptedkerneltaints","title":"acceptedKernelTaints","text":"The list of kernel modules loaded by the CNF that make the Linux kernel mark itself as tainted but that should skip verification.
Test cases affected: platform-alteration-tainted-node-kernel.
acceptedKernelTaints:\n - module: vboxsf\n - module: vboxguest\n
"},{"location":"configuration/#skiphelmchartlist","title":"skipHelmChartList","text":"The list of Helm charts that the CNF uses whose certification status will not be verified.
If no exception is configured, the certification status for all Helm charts will be checked in the OpenShift Helms Charts repository.
Test cases affected: affiliated-certification-helmchart-is-certified.
skipHelmChartList:\n - name: coredns\n
"},{"location":"configuration/#validprotocolnames","title":"validProtocolNames","text":"The list of allowed protocol names to be used for container port names.
The name field of a container port must be of the form protocol[-suffix] where protocol must be allowed by default or added to this list. The optional suffix can be chosen by the application. Protocol names allowed by default: grpc, grpc-web, http, http2, tcp, udp.
Test cases affected: manageability-container-port-name-format.
validProtocolNames:\n - \"http3\"\n - \"sctp\"\n
"},{"location":"configuration/#servicesignorelist","title":"servicesIgnoreList","text":"The list of Services that will skip verification.
Services included in this list will be filtered out at the autodiscovery stage and will not be subject to checks in any test case.
Tests cases affected: networking-dual-stack-service, access-control-service-type.
servicesignorelist:\n - \"hazelcast-platform-controller-manager-service\"\n - \"hazelcast-platform-webhook-service\"\n - \"new-pro-controller-manager-metrics-service\"\n
"},{"location":"configuration/#skipscalingtestdeployments-skipscalingteststatefulsets","title":"skipScalingTestDeployments / skipScalingTestStatefulSets","text":"The list of Deployments/StatefulSets that do not support scale in/out operations.
Deployments/StatefulSets included in this list will skip any scaling operation check.
Test cases affected: lifecycle-deployment-scaling, lifecycle-statefulset-scaling.
skipScalingTestDeployments:\n - name: deployment1\n namespace: tnf\nskipScalingTestStatefulSetNames:\n - name: statefulset1\n namespace: tnf\n
"},{"location":"configuration/#cnf-certification-settings","title":"CNF Certification settings","text":""},{"location":"configuration/#debugdaemonsetnamespace","title":"debugDaemonSetNamespace","text":"This is an optional field with the name of the namespace where a privileged DaemonSet will be deployed. The namespace will be created in case it does not exist. In case this field is not set, the default namespace for this DaemonSet is cnf-suite.
debugDaemonSetNamespace: cnf-cert\n
This DaemonSet, called tnf-debug is deployed and used internally by the CNF Certification tool to issue some shell commands that are needed in certain test cases. Some of these test cases might fail or be skipped in case it wasn\u2019t deployed correctly.
"},{"location":"configuration/#other-settings","title":"Other settings","text":"The autodiscovery mechanism will attempt to identify the default network device and all the IP addresses of the Pods it needs for network connectivity tests, though that information can be explicitly set using annotations if needed.
"},{"location":"configuration/#pod-ips","title":"Pod IPs","text":"The label test-network-function.com/skip_connectivity_tests excludes Pods from all connectivity tests.
The label test-network-function.com/skip_multus_connectivity_tests excludes Pods from Multus connectivity tests. Tests on the default interface are still run.
"},{"location":"configuration/#affinity-requirements","title":"Affinity requirements","text":"For CNF workloads that require Pods to use Pod or Node Affinity rules, the label AffinityRequired: true must be included on the Pod YAML. This will ensure that the affinity best practices are tested and prevent any test cases for anti-affinity to fail.
"},{"location":"developers/","title":"Developers","text":""},{"location":"developers/#steps","title":"Steps","text":"To test the newly added test / existing tests locally, follow the steps
Set runtime environment variables, as per the requirement.
For example, to deploy partner deployments in a custom namespace in the test config.
targetNameSpaces:\n - name: mynamespace\n
Also, skip intrusive tests
export TNF_NON_INTRUSIVE_ONLY=true\n
Set K8s config of the cluster where test pods are running
export KUBECONFIG=<<mypath/.kube/config>>\n
Execute test suite, which would build and run the suite
For example, to run networking
tests
./script/development.sh networking\n
If you have dependencies on other Pull Requests, you can add a comment like that:
Depends-On: <url of the PR>\n
and the dependent PR will automatically be extracted and injected in your change during the GitHub Action CI jobs and the DCI jobs.
"},{"location":"exception/","title":"Exception Process","text":""},{"location":"exception/#exception-process","title":"Exception Process","text":"There may exist some test cases which needs to fail always. The exception raised by the failed tests is published to Red Hat website for that partner.
CATALOG provides the details of such exception.
"},{"location":"reference/","title":"Helpful Links","text":"To run the test suite, some runtime environment variables are to be set.
"},{"location":"runtime-env/#ocp-412-labels","title":"OCP >=4.12 Labels","text":"The following labels need to be added to your default namespace in your cluster if you are running OCP >=4.12:
pod-security.kubernetes.io/enforce: privileged\npod-security.kubernetes.io/enforce-version: latest\n
You can manually label the namespace with:
oc label namespace/default pod-security.kubernetes.io/enforce=privileged\noc label namespace/default pod-security.kubernetes.io/enforce-version=latest\n
"},{"location":"runtime-env/#disable-intrusive-tests","title":"Disable intrusive tests","text":"To skip intrusive tests which may disrupt cluster operations, issue the following:
export TNF_NON_INTRUSIVE_ONLY=true\n
Likewise, to enable intrusive tests, set the following:
export TNF_NON_INTRUSIVE_ONLY=false\n
Intrusive tests are enabled by default.
"},{"location":"runtime-env/#preflight-integration","title":"Preflight Integration","text":"When running the preflight
suite of tests, there are a few environment variables that will need to be set:
PFLT_DOCKERCONFIG
is a required variable for running the preflight test suite. This provides credentials to the underlying preflight library for being able to pull/manipulate images and image bundles for testing.
When running as a container, the docker config is mounted to the container via volume mount.
When running as a standalone binary, the environment variables are consumed directly from your local machine.
See more about this variable here.
TNF_ALLOW_PREFLIGHT_INSECURE
(default: false) is required set to true
if you are running against a private container registry that has self-signed certificates.
In a disconnected environment, only specific versions of images are mirrored to the local repo. For those environments, the partner pod image quay.io/testnetworkfunction/cnf-test-partner
and debug pod image quay.io/testnetworkfunction/debug-partner
should be mirrored and TNF_PARTNER_REPO
should be set to the local repo, e.g.:
export TNF_PARTNER_REPO=registry.dfwt5g.lab:5000/testnetworkfunction\n
Note that you can also specify the debug pod image to use with SUPPORT_IMAGE
environment variable, default to debug-partner:4.5.5
.
The tests can be run within a prebuilt container in the OCP cluster.
Prerequisites for the OCP cluster
lifecycle-pod-recreation
test should be skipped.The test image is available at this repository in quay.io and can be pulled using The image can be pulled using :
podman pull quay.io/testnetworkfunction/cnf-certification-test\n
"},{"location":"test-container/#check-cluster-resources","title":"Check cluster resources","text":"Some tests suites such as platform-alteration
require node access to get node configuration like hugepage
. In order to get the required information, the test suite does not ssh
into nodes, but instead rely on oc debug tools. This tool makes it easier to fetch information from nodes and also to debug running pods.
oc debug tool
will launch a new container ending with -debug suffix, and the container will be destroyed once the debug session is done. Ensure that the cluster should have enough resources to create debug pod, otherwise those tests would fail.
Note
It\u2019s recommended to clean up disk space and make sure there\u2019s enough resources to deploy another container image in every node before starting the tests.
"},{"location":"test-container/#run-the-tests","title":"Run the tests","text":"./run-tnf-container.sh\n
Required arguments
-t
to provide the path of the local directory that contains tnf config files-o
to provide the path of the local directory where test results (claim.json), the execution logs (tnf-execution.log), and the results artifacts file (results.tar.gz) will be available from after the container exits.Warning
This directory must exist in order for the claim file to be written.
Optional arguments
-l
to list the labels to be run. See Ginkgo Spec Labels for more information on how to filter tests with labels.Note
If -l
is not specified, the tnf will run in \u2018diagnostic\u2019 mode. In this mode, no test case will run: it will only get information from the cluster (PUTs, CRDs, nodes info, etc\u2026) to save it in the claim file. This can be used to make sure the configuration was properly set and the autodiscovery found the right pods/crds\u2026
-i
to provide a name to a custom TNF container image. Supports local images, as well as images from external registries.
-k
to set a path to one or more kubeconfig files to be used by the container to authenticate with the cluster. Paths must be separated by a colon.
Note
If -k
is not specified, autodiscovery is performed.
The autodiscovery first looks for paths in the $KUBECONFIG
environment variable on the host system, and if the variable is not set or is empty, the default configuration stored in $HOME/.kube/config
is checked.
-n
to give the network mode of the container. Defaults set to host
, which requires selinux to be disabled. Alternatively, bridge
mode can be used with selinux if TNF_CONTAINER_CLIENT is set to docker
or running the test as root.Note
See the docker run \u2013network parameter reference for more information on how to configure network settings.
-b
to set an external offline DB that will be used to verify the certification status of containers, helm charts and operators. Defaults to the DB included in the TNF container image.Note
See the OCT tool for more information on how to create this DB.
Command to run
./run-tnf-container.sh -k ~/.kube/config -t ~/tnf/config\n-o ~/tnf/output -l \"networking,access-control\"\n
See General tests for a list of available keywords.
"},{"location":"test-container/#run-with-docker","title":"Run withdocker
","text":"By default, run-container.sh
utilizes podman
. However, an alternate container virtualization client using TNF_CONTAINER_CLIENT
can be configured. This is particularly useful for operating systems that do not readily support podman
.
In order to configure the test harness to use docker
, issue the following prior to run-tnf-container.sh
:
export TNF_CONTAINER_CLIENT=docker\n
"},{"location":"test-container/#output-targz-file-with-results-and-web-viewer-files","title":"Output tar.gz file with results and web viewer files","text":"After running all the test cases, a compressed file will be created with all the results files and web artifacts to review them.
By default, only the claim.js
, the cnf-certification-tests_junit.xml
file and this new tar.gz file are created after the test suite has finished, as this is probably all that normal partners/users will need.
Two env vars allow to control the web artifacts and the the new tar.gz file generation:
podman build -t cnf-certification-test:v4.5.5 \\\n --build-arg TNF_VERSION=v4.5.5 \\\n
TNF_VERSION
value is set to a branch, a tag, or a hash of a commit that will be installed into the imageThe unofficial source could be a fork of the TNF repository.
Use the TNF_SRC_URL
build argument to override the URL to a source repository.
podman build -t cnf-certification-test:v4.5.5 \\\n --build-arg TNF_VERSION=v4.5.5 \\\n --build-arg TNF_SRC_URL=https://github.com/test-network-function/cnf-certification-test .\n
"},{"location":"test-container/#run-the-tests-2","title":"Run the tests 2","text":"Specify the custom TNF image using the -i
parameter.
./run-tnf-container.sh -i cnf-certification-test:v4.5.5\n-t ~/tnf/config -o ~/tnf/output -l \"networking,access-control\"\n
Note: see General tests for a list of available keywords.
"},{"location":"test-output/","title":"Test Output","text":""},{"location":"test-output/#test-output","title":"Test Output","text":""},{"location":"test-output/#claim-file","title":"Claim File","text":"The test suite generates an output file, named claim file. This file is considered as the proof of CNFs test run, evaluated by Red Hat when certified status is considered.
This file describes the following
Files that need to be submitted for certification
When submitting results back to Red Hat for certification, please include the above mentioned claim file, the JUnit file, and any available console logs.
How to add a CNF platform test result to the existing claim file?
go run cmd/tools/cmd/main.go claim-add --claimfile=claim.json\n--reportdir=/home/$USER/reports\n
Args: --claimfile is an existing claim.json file
--repordir :path to test results that you want to include.
The tests result files from the given report dir will be appended under the result section of the claim file using file name as the key/value pair. The tool will ignore the test result, if the key name is already present under result section of the claim file.
\"results\": {\n \"cnf-certification-tests_junit\": {\n \"testsuite\": {\n \"-errors\": \"0\",\n \"-failures\": \"2\",\n \"-name\": \"CNF Certification Test Suite\",\n \"-tests\": \"14\",\n ...\n
Reference
For more details on the contents of the claim file
The test suite also saves a copy of the execution logs at [test output directory]/tnf-execution.log
"},{"location":"test-output/#results-artifacts-zip-file","title":"Results artifacts zip file","text":"After running all the test cases, a compressed file will be created with all the results files and web artifacts to review them. The file has a UTC date-time prefix and looks like this:
20230620-110654-cnf-test-results.tar.gz
The \u201c20230620-110654\u201d sample prefix means \u201cJune-20th 2023, 11:06:54\u201d
This is the content of the tar.gz file:
This file serves two different purposes:
claimjson.js
and classification.js
files to be in the same folder as the html files to work properly.A standalone HTML page is available to decode the results. For more details, see: https://github.com/test-network-function/parser
"},{"location":"test-output/#compare-claim-files-from-two-different-cnf-certification-suite-runs","title":"Compare claim files from two different CNF Certification Suite runs","text":"Parters can use the tnf claim compare
tool in order to compare two claim files. The differences are shown in a table per section. This tool can be helpful when the result of some test cases is different between two (consecutive) runs, as it shows configuration differences in both the CNF Cert Suite config and the cluster nodes that could be the root cause for some of the test cases results discrepancy.
All the compared sections, except the test cases results are compared blindly, traversing the whole json tree and substrees to get a list of all the fields and their values. Three tables are shown:
Let\u2019s say one of the nodes of the claim.json file contains this struct:
{\n \"field1\": \"value1\",\n \"field2\": {\n \"field3\": \"value2\",\n \"field4\": {\n \"field5\": \"value3\",\n \"field6\": \"value4\"\n }\n }\n}\n
When parsing that json struct fields, it will produce a list of fields like this:
/field1=value1\n/field2/field3=value2\n/field2/field4/field5=value3\n/field2/field4/field6=finalvalue2\n
Once this list of field\u2019s path+value strings has been obtained from both claim files, it is compared in order to find the differences or the fields that only exist on each file.
This is a fake example of a node \u201cclus0-0\u201d whose first CNI (index 0) has a different cniVersion and the ipMask flag of its first plugin (also index 0) has changed to false in the second run. Also, the plugin has another \u201cnewFakeFlag\u201d config flag in claim 2 that didn\u2019t exist in clam file 1.
...\nCNIs: Differences\nFIELD CLAIM 1 CLAIM 2\n/clus0-0/0/cniVersion 1.0.0 1.0.1\n/clus0-1/0/plugins/0/ipMasq true false\n\nCNIs: Only in CLAIM 1\n<none>\n\nCNIs: Only in CLAIM 2\n/clus0-1/0/plugins/0/newFakeFlag=true\n...\n
Currently, the following sections are compared, in this order:
The tnf
tool is located in the repo\u2019s cmd/tnf
folder. In order to compile it, just run:
make build-tnf-tool\n
"},{"location":"test-output/#examples","title":"Examples","text":""},{"location":"test-output/#compare-a-claim-file-against-itself-no-differences-expected","title":"Compare a claim file against itself: no differences expected","text":""},{"location":"test-output/#different-test-cases-results","title":"Different test cases results","text":"Let\u2019s assume we have two claim files, claim1.json and claim2.json, obtained from two CNF Certification Suite runs in the same cluster.
During the second run, there was a test case that failed. Let\u2019s simulate it modifying manually the second run\u2019s claim file to switch one test case\u2019s state from \u201cpassed\u201d to \u201cfailed\u201d.
"},{"location":"test-output/#different-cluster-configurations","title":"Different cluster configurations","text":"First, let\u2019s simulate that the second run took place in a cluster with a different OCP version. As we store the OCP version in the claim file (section claim.versions), we can also modify it manually. The versions section comparison appears at the very beginning of the tnf claim compare
output:
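One way to emulate that change is with jq; the claim.versions section is named above, but the exact key inside it (ocp here) is an assumption, so inspect your claim file for the real field name:
# Hypothetical key name; check the claim.versions section of your claim file first
jq '.claim.versions.ocp = "4.13.0"' claim2.json > claim2_tmp.json && mv claim2_tmp.json claim2.json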
Now, let's simulate that the cluster itself was slightly different when the second CNF Certification Suite run was performed. First, let's make a manual change in claim2.json to emulate a different CNI version in the first node.
Finally, we'll simulate that, for some reason, the first node had one label removed when the second run was performed.
"},{"location":"test-spec/","title":"Available Test Specs","text":""},{"location":"test-spec/#test-specifications","title":"Test Specifications","text":""},{"location":"test-spec/#available-test-specs","title":"Available Test Specs","text":"There are two categories for CNF tests.
These tests are designed to test any commodity CNF running on OpenShift, and include specifications such as Default
network connectivity.
These tests are designed to test some unique aspects of the CNF under test are behaving correctly. This could include specifications such as issuing a GET
request to a web server, or passing traffic through an IPSEC tunnel.
These tests belong to multiple suites that can be run in any combination as is appropriate for the CNFs under test.
Info
Test suites group tests by topic area.
| Suite | Test Spec Description | Minimum OpenShift Version |
|---|---|---|
| access-control | The access-control test suite is used to test service account, namespace and cluster/pod role binding for the pods under test. It also tests the pods/containers configuration. | 4.6.0 |
| affiliated-certification | The affiliated-certification test suite verifies that the containers and operators discovered or listed in the configuration file are certified by Red Hat. | 4.6.0 |
| lifecycle | The lifecycle test suite verifies the pods deployment, creation, shutdown and survivability. | 4.6.0 |
| networking | The networking test suite contains tests that check connectivity and networking-config-related best practices. | 4.6.0 |
| operator | The operator test suite is designed to test basic Kubernetes Operator functionality. | 4.6.0 |
| platform-alteration | The platform-alteration test suite verifies that key platform configuration is not modified by the CNF under test. | 4.6.0 |
| observability | The observability test suite contains tests that check CNF logging follows best practices and that CRDs have status fields. | 4.6.0 |
Info
Please refer to CATALOG.md for more details.
"},{"location":"test-spec/#cnf-specific-tests","title":"CNF-specific tests","text":"TODO
"},{"location":"test-standalone/","title":"Standalone test executable","text":""},{"location":"test-standalone/#standalone-test-executable","title":"Standalone test executable","text":"Prerequisites
The repo is cloned and all the commands should be run from the cloned repo.
mkdir ~/workspace\ncd ~/workspace\ngit clone git@github.com:test-network-function/cnf-certification-test.git\ncd cnf-certification-test\n
Note
By default, cnf-certification-test emits results to cnf-certification-test/cnf-certification-tests_junit.xml.
Depending on how you want to run the test suite, different dependencies are needed.
If you plan to run the test suite as a container, the only prerequisite is Docker or Podman.
If you plan to run the test suite as a standalone binary, some prerequisites need to be installed in your environment prior to runtime.
Run the following command to install them:
make install-tools
| Dependency | Minimum Version |
|---|---|
| GoLang | 1.21 |
| golangci-lint | 1.55.1 |
| jq | 1.6 |
| OpenShift Client | 4.12 |
Other binary dependencies required to run the tests can be installed using the following command:
Note
Ensure that $GOBIN (default $GOPATH/bin) is on your $PATH.
In order to build the test executable, first make sure you have satisfied the dependencies.
make build-cnf-tests
Gotcha: the make build* commands run unit tests where appropriate. They do NOT test the CNF.
A CNF is tested by specifying which suites to run using the run-cnf-suites.sh helper script.
Run any combination of the suite keywords listed in the General tests section, e.g.:
./run-cnf-suites.sh -l \"lifecycle\"\n./run-cnf-suites.sh -l \"networking,lifecycle\"\n./run-cnf-suites.sh -l \"operator,networking\"\n./run-cnf-suites.sh -l \"networking,platform-alteration\"\n./run-cnf-suites.sh -l \"networking,lifecycle,affiliated-certification,operator\"\n
Note
As with "run-tnf-container.sh", if -l is not specified here, the TNF will run in 'diagnostic' mode.
By default, the claim file will be output to the same location as the test executable. The -o argument of run-cnf-suites.sh can be used to provide a different location where the output files will be saved. For more detailed control over the outputs, see the output of cnf-certification-test.test --help.
cd cnf-certification-test && ./cnf-certification-test.test --help
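For example, to run a suite and save the outputs to a separate directory (the path is illustrative):
./run-cnf-suites.sh -l networking -o ~/tnf/output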
"},{"location":"test-standalone/#run-a-single-test","title":"Run a single test","text":"All tests have unique labels, which can be used to filter which tests are to be run. This is useful when debugging a single test.
To select the test to be executed when running run-cnf-suites.sh
with the following command-line:
./run-cnf-suites.sh -l operator-install-source
Note
Test labels work the same way as suite labels, so you can select more than one test with the filtering mechanism shown before.
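For instance, the single-test label above can be combined with one of the suite labels from the General tests section:
./run-cnf-suites.sh -l "operator-install-source,networking"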
"},{"location":"test-standalone/#run-all-of-the-tests","title":"Run all of the tests","text":"You can run all of the tests (including the intrusive tests and the extended suite) with the following commands:
./run-cnf-suites.sh -l all
"},{"location":"test-standalone/#run-a-subset","title":"Run a subset","text":"You can find all the labels attached to the tests by running the following command:
./run-cnf-suites.sh --list
You can also check CATALOG.md to find all the test labels.
"},{"location":"test-standalone/#labels-for-offline-environments","title":"Labels for offline environments","text":"Some tests do require connectivity to Red Hat servers to validate certification status. To run the tests in an offline environment, skip the tests using the l
option.
./run-cnf-suites.sh -l '!online'
Alternatively, if an offline DB for containers, helm charts and operators is available, there is no need to skip those tests, provided the environment variable TNF_OFFLINE_DB is set to the DB location. This DB can be generated using the OCT tool.
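For example (the DB path is illustrative; TNF_OFFLINE_DB is the variable named above):
export TNF_OFFLINE_DB=/var/lib/tnf/offline-db
./run-cnf-suites.sh -l all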
Note: only partner-certified images are stored in the offline database. If Red Hat images are checked against the offline database, they will show up as not certified. The online database includes both partner and Red Hat images.
"},{"location":"test-standalone/#output-targz-file-with-results-and-web-viewer-files","title":"Output tar.gz file with results and web viewer files","text":"After running all the test cases, a compressed file will be created with all the results files and web artifacts to review them.
By default, only the claim.json file, the cnf-certification-tests_junit.xml file and this new tar.gz file are created after the test suite has finished, as this is probably all that most partners/users will need.
Two environment variables control the generation of the web artifacts and the new tar.gz file:
Refer to the Developers' Guide.
"}]} \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz index 711e3054fc7b2f31bc531eb9073d025208c89c73..e4456c8dfa68e19440d31812d8dbcc8741072a2c 100644 GIT binary patch delta 12 Tcmb=gXOr*d;NXs)$W{pe6=MT6 delta 12 Tcmb=gXOr*d;P@Ohk*yK{8RrBX diff --git a/test-container/index.html b/test-container/index.html index ea2f132b9..42ab4ab4f 100644 --- a/test-container/index.html +++ b/test-container/index.html @@ -1166,8 +1166,8 @@podman build -t cnf-certification-test:v4.5.4 \
- --build-arg TNF_VERSION=v4.5.4 \
+podman build -t cnf-certification-test:v4.5.5 \
+ --build-arg TNF_VERSION=v4.5.5 \
TNF_VERSION
value is set to a branch, a tag, or a hash of a commit that will be installed into the image
@@ -1175,13 +1175,13 @@ Build locallyBuild from an unofficial source¶
The unofficial source could be a fork of the TNF repository.
Use the TNF_SRC_URL
build argument to override the URL to a source repository.
-podman build -t cnf-certification-test:v4.5.4 \
- --build-arg TNF_VERSION=v4.5.4 \
+podman build -t cnf-certification-test:v4.5.5 \
+ --build-arg TNF_VERSION=v4.5.5 \
--build-arg TNF_SRC_URL=https://github.com/test-network-function/cnf-certification-test .
Run the tests 2¶
Specify the custom TNF image using the -i
parameter.
-./run-tnf-container.sh -i cnf-certification-test:v4.5.4
+./run-tnf-container.sh -i cnf-certification-test:v4.5.5
-t ~/tnf/config -o ~/tnf/output -l "networking,access-control"
Note: see General tests for a list of available keywords.