From ea39aa334a40fd2ba6c3c24e616d5b2508edb691 Mon Sep 17 00:00:00 2001 From: sebrandon1 Date: Thu, 9 Nov 2023 22:19:57 +0000 Subject: [PATCH] deploy: 789a5dd18cc89be2286498e8e65234f4d57398c6 --- runtime-env/index.html | 2 +- search/search_index.json | 2 +- sitemap.xml.gz | Bin 127 -> 127 bytes test-container/index.html | 10 +++++----- 4 files changed, 7 insertions(+), 7 deletions(-) diff --git a/runtime-env/index.html b/runtime-env/index.html index 9a3b14e0c..e74b67af5 100644 --- a/runtime-env/index.html +++ b/runtime-env/index.html @@ -1009,7 +1009,7 @@

Disconnected environment
export TNF_PARTNER_REPO=registry.dfwt5g.lab:5000/testnetworkfunction
 

Note that you can also specify the debug pod image to use with SUPPORT_IMAGE -environment variable, default to debug-partner:4.5.4.

+environment variable, default to debug-partner:4.5.5.

diff --git a/search/search_index.json b/search/search_index.json index 6c54c64df..bdf1cd9c4 100644 --- a/search/search_index.json +++ b/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Home","text":""},{"location":"#overview","title":"Overview","text":"

This repository provides a set of Cloud-Native Network Functions (CNF) test cases and the framework to add more test cases.

CNF

The app (containers/pods/operators) we want to certify according to Telco partner/Red Hat\u2019s best practices.

TNF/Certification Test Suite

The tool we use to certify a CNF.

The purpose of the tests and the framework is to test the interaction of CNF with OpenShift Container Platform (OCP).

Info

This test suite is provided for the CNF Developers to test their CNFs\u2019 readiness for certification. Please see \u201cCNF Developers\u201d for more information.

Features

  • The test suite generates a report (claim.json) and saves the test execution log (tnf-execution.log) in a configurable output directory.

  • The catalog of the existing test cases and test building blocks is available in CATALOG.md

"},{"location":"#architecture","title":"Architecture","text":"

There are three building blocks in this framework.

  • the CNF represents the CNF to be certified. The certification suite identifies the resources (containers/pods/operators etc) belonging to the CNF via labels or static data entries in the config file

  • the Certification container/exec is the certification test suite running on the platform or in a container. The executable verifies the configuration of the CNF under test and its interactions with OpenShift

  • the Debug pods are part of a Kubernetes daemonset responsible for running various privileged commands on Kubernetes nodes. Debug pods are useful for running platform tests and test commands (e.g. ping) in container namespaces without changing the container image content. The debug daemonset is instantiated via the privileged-daemonset repository.

"},{"location":"cnf-developers/","title":"CNF Developers","text":""},{"location":"cnf-developers/#cnf-developers-guidelines","title":"CNF Developers Guidelines","text":"

Developers of CNFs, particularly those targeting CNF Certification with Red Hat on OpenShift, can use this suite to test the interaction of their CNF with OpenShift. If interested in CNF Certification please contact Red Hat.

Requirements

  • OpenShift 4.10 installation to run the CNFs
  • At least one extra machine to host the test suite
"},{"location":"cnf-developers/#to-add-private-test-cases","title":"To add private test cases","text":"

Refer to this documentation: https://github.com/test-network-function/cnfextensions

Reference

The cnf-certification-test-partner repository provides a sample example for modeling the test setup.

"},{"location":"configuration/","title":"Test Configuration","text":""},{"location":"configuration/#cnf-certification-configuration","title":"CNF Certification configuration","text":"

The CNF Certification Test uses a YAML configuration file to certify a specific CNF workload. This file specifies the CNF resources to be certified, as well as any exceptions or other general configuration options.

By default, a file named tnf_config.yml will be used. Here\u2019s an example of the CNF Config File. For a description of each config option, see the section CNF Config File options.

"},{"location":"configuration/#cnf-config-generator","title":"CNF Config Generator","text":"

The CNF config file can be created using the CNF Config Generator, which is part of the TNF tool shipped with the CNF Certification. The purpose of this tool is to help users configure the CNF Certification by providing a logical structure of the available options as well as the information required to make use of them. The result is a CNF config file in YAML format that the CNF Certification will parse to adapt the certification process to a specific CNF workload.

To compile the TNF tool:

make build-tnf-tool\n

To launch the CNF Config Generator:

./tnf generate config\n

Here\u2019s an example of how to use the tool:

"},{"location":"configuration/#cnf-config-file-options","title":"CNF Config File options","text":""},{"location":"configuration/#cnf-resources","title":"CNF resources","text":"

These options allow configuring the workload resources of the CNF to be verified. Only the resources that the CNF uses are required to be configured. The rest can be left empty. Usually, a basic configuration includes at least Namespaces and Pods.

"},{"location":"configuration/#targetnamespaces","title":"targetNameSpaces","text":"

The namespaces in which the CNF under test will be deployed.

targetNameSpaces:\n  - name: tnf\n
"},{"location":"configuration/#podsundertestlabels","title":"podsUnderTestLabels","text":"

The labels that each Pod of the CNF under test must have to be verified by the CNF Certification Suite.

Highly recommended

The labels should be defined in the Pod definition rather than added after the Pod is created, as labels added later on will be lost in case the Pod gets rescheduled. In the case of Pods defined as part of a Deployment, it\u2019s best to use the same label as the one defined in the spec.selector.matchLabels section of the Deployment YAML. The prefix field can be used to avoid naming collisions with other labels.

podsUnderTestLabels:\n  - \"test-network-function.com/generic: target\"\n
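As a quick sanity check before running the suite, you can verify that the target Pods actually carry the label (a sketch; the tnf namespace comes from the example above):

oc get pods -n tnf -l test-network-function.com/generic=target\n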
"},{"location":"configuration/#operatorsundertestlabels","title":"operatorsUnderTestLabels","text":"

The labels that each operator\u2019s CSV of the CNF under test must have to be verified by the CNF Certification Suite.

If a new label is used for this purpose, make sure it is added to the CNF operator\u2019s CSVs.

operatorsUnderTestLabels:\n  - \"test-network-function.com/operator: target\" \n
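If the operator is already deployed, the label can be added to its CSV in place (a sketch; the CSV name my-operator.v1.0.0 and the tnf namespace are hypothetical):

oc label csv my-operator.v1.0.0 -n tnf test-network-function.com/operator=target\n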
"},{"location":"configuration/#targetcrdfilters","title":"targetCrdFilters","text":"

The CRD name suffix used to filter the CNF\u2019s CRDs among all the CRDs present in the cluster. For each CRD, it can also be specified whether it\u2019s scalable or not in order to avoid some lifecycle test cases.

targetCrdFilters:\n - nameSuffix: \"group1.tnf.com\"\n   scalable: false\n - nameSuffix: \"anydomain.com\"\n   scalable: true\n

With the config shown above, all CRDs in the cluster whose names have the suffix group1.tnf.com or anydomain.com (e.g. crd1.group1.tnf.com or mycrd.mygroup.anydomain.com) will be tested.
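To preview which CRDs a given suffix would match, a simple filter over the cluster\u2019s CRD names can help (a sketch using standard oc and grep):

oc get crds -o name | grep group1.tnf.com\n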

"},{"location":"configuration/#manageddeployments-managedstatefulsets","title":"managedDeployments / managedStatefulSets","text":"

The Deployments/StatefulSets managed by a Custom Resource whose scaling is controlled using the \u201cscale\u201d subresource of the CR.

The CRD defining that CR should be included in the CRD filters with the scalable property set to true. If so, the test case lifecycle-{deployment/statefulset}-scaling will be skipped, otherwise it will fail.

managedDeployments:\n  - name: jack\nmanagedStatefulsets:\n  - name: jack\n
"},{"location":"configuration/#exceptions","title":"Exceptions","text":"

These options allow adding exceptions to skip several checks for different resources. The exceptions must be justified in order to pass the CNF Certification.

"},{"location":"configuration/#acceptedkerneltaints","title":"acceptedKernelTaints","text":"

The list of kernel modules loaded by the CNF that make the Linux kernel mark itself as tainted but that should skip verification.

Test cases affected: platform-alteration-tainted-node-kernel.

acceptedKernelTaints:\n  - module: vboxsf\n  - module: vboxguest\n
"},{"location":"configuration/#skiphelmchartlist","title":"skipHelmChartList","text":"

The list of Helm charts that the CNF uses whose certification status will not be verified.

If no exception is configured, the certification status for all Helm charts will be checked in the OpenShift Helm Charts repository.

Test cases affected: affiliated-certification-helmchart-is-certified.

skipHelmChartList:\n  - name: coredns\n
"},{"location":"configuration/#validprotocolnames","title":"validProtocolNames","text":"

The list of allowed protocol names to be used for container port names.

The name field of a container port must be of the form protocol[-suffix] where protocol must be allowed by default or added to this list. The optional suffix can be chosen by the application. Protocol names allowed by default: grpc, grpc-web, http, http2, tcp, udp.

Test cases affected: manageability-container-port-name-format.

validProtocolNames:\n  - \"http3\"\n  - \"sctp\"\n
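To spot-check the port names a Pod currently exposes (a sketch; the Pod name my-pod and the tnf namespace are hypothetical):

oc get pod my-pod -n tnf -o jsonpath='{.spec.containers[*].ports[*].name}'\n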
"},{"location":"configuration/#servicesignorelist","title":"servicesIgnoreList","text":"

The list of Services that will skip verification.

Services included in this list will be filtered out at the autodiscovery stage and will not be subject to checks in any test case.

Test cases affected: networking-dual-stack-service, access-control-service-type.

servicesignorelist:\n  - \"hazelcast-platform-controller-manager-service\"\n  - \"hazelcast-platform-webhook-service\"\n  - \"new-pro-controller-manager-metrics-service\"\n
"},{"location":"configuration/#skipscalingtestdeployments-skipscalingteststatefulsets","title":"skipScalingTestDeployments / skipScalingTestStatefulSets","text":"

The list of Deployments/StatefulSets that do not support scale in/out operations.

Deployments/StatefulSets included in this list will skip any scaling operation check.

Test cases affected: lifecycle-deployment-scaling, lifecycle-statefulset-scaling.

skipScalingTestDeployments:\n  - name: deployment1\n    namespace: tnf\nskipScalingTestStatefulSetNames:\n  - name: statefulset1\n    namespace: tnf\n
"},{"location":"configuration/#cnf-certification-settings","title":"CNF Certification settings","text":""},{"location":"configuration/#debugdaemonsetnamespace","title":"debugDaemonSetNamespace","text":"

This is an optional field with the name of the namespace where a privileged DaemonSet will be deployed. The namespace will be created if it does not exist. If this field is not set, the default namespace for this DaemonSet is cnf-suite.

debugDaemonSetNamespace: cnf-cert\n

This DaemonSet, called tnf-debug, is deployed and used internally by the CNF Certification tool to issue some shell commands that are needed in certain test cases. Some of these test cases might fail or be skipped if it wasn\u2019t deployed correctly.
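To confirm the DaemonSet was deployed correctly before starting a run (a sketch, assuming the cnf-cert namespace from the example above):

oc get daemonset tnf-debug -n cnf-cert\n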

"},{"location":"configuration/#other-settings","title":"Other settings","text":"

The autodiscovery mechanism will attempt to identify the default network device and all the IP addresses of the Pods it needs for network connectivity tests, though that information can be explicitly set using annotations if needed.

"},{"location":"configuration/#pod-ips","title":"Pod IPs","text":"
  • The k8s.v1.cni.cncf.io/networks-status annotation is checked and all IPs from it are used. This annotation is automatically managed in OpenShift but may not be present in K8s.
  • If it is not present, then only known IPs associated with the Pod are used (the Pod .status.ips field).
"},{"location":"configuration/#network-interfaces","title":"Network Interfaces","text":"
  • The k8s.v1.cni.cncf.io/networks-status annotation is checked and the interface from the first entry found with \u201cdefault\u201d=true is used. This annotation is automatically managed in OpenShift but may not be present in K8s.

The label test-network-function.com/skip_connectivity_tests excludes Pods from all connectivity tests.

The label test-network-function.com/skip_multus_connectivity_tests excludes Pods from Multus connectivity tests. Tests on the default interface are still run.
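Both labels are plain Kubernetes labels, so they can be applied with oc as well, keeping in mind the earlier caveat that labels added after creation are lost if the Pod is rescheduled (a sketch; the Pod name my-pod and the tnf namespace are hypothetical):

oc label pod my-pod -n tnf test-network-function.com/skip_connectivity_tests=true\n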

"},{"location":"configuration/#affinity-requirements","title":"Affinity requirements","text":"

For CNF workloads that require Pods to use Pod or Node Affinity rules, the label AffinityRequired: true must be included in the Pod YAML. This ensures that the affinity best practices are tested and prevents anti-affinity test cases from failing.
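To list the Pods that already declare the label (a sketch using a standard label selector):

oc get pods -A -l AffinityRequired=true\n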

"},{"location":"developers/","title":"Developers","text":""},{"location":"developers/#steps","title":"Steps","text":"

To test newly added tests / existing tests locally, follow these steps:

  • Clone the repo
  • Set runtime environment variables as required.

    For example, to deploy partner deployments in a custom namespace, set this in the test config:

    targetNameSpaces:\n  - name: mynamespace\n
  • Also, skip intrusive tests

export TNF_NON_INTRUSIVE_ONLY=true\n
  • Set the K8s config of the cluster where the test pods are running

    export KUBECONFIG=<<mypath/.kube/config>>\n
  • Execute the test suite, which will build and run the suite

    For example, to run networking tests

    ./script/development.sh networking\n
"},{"location":"developers/#dependencies-on-other-pr","title":"Dependencies on other PR","text":"

If you have dependencies on other Pull Requests, you can add a comment like this:

Depends-On: <url of the PR>\n

and the dependent PR will automatically be extracted and injected into your change during the GitHub Action CI jobs and the DCI jobs.

"},{"location":"exception/","title":"Exception Process","text":""},{"location":"exception/#exception-process","title":"Exception Process","text":"

There may be some test cases that are expected to always fail. The exceptions raised by the failed tests are published to the Red Hat website for that partner.

CATALOG provides the details of such exceptions.

"},{"location":"reference/","title":"Helpful Links","text":"
  • Contribution Guidelines
  • CATALOG
  • Best Practices Document v1.3
"},{"location":"runtime-env/","title":"Runtime environment variables","text":""},{"location":"runtime-env/#runtime-environment-variables","title":"Runtime environment variables","text":"

To run the test suite, some runtime environment variables must be set.

"},{"location":"runtime-env/#ocp-412-labels","title":"OCP >=4.12 Labels","text":"

The following labels need to be added to your default namespace in your cluster if you are running OCP >=4.12:

pod-security.kubernetes.io/enforce: privileged\npod-security.kubernetes.io/enforce-version: latest\n

You can manually label the namespace with:

oc label namespace/default pod-security.kubernetes.io/enforce=privileged\noc label namespace/default pod-security.kubernetes.io/enforce-version=latest\n
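To verify that both labels were applied (a sketch using standard oc flags):

oc get namespace default --show-labels\n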
"},{"location":"runtime-env/#disable-intrusive-tests","title":"Disable intrusive tests","text":"

To skip intrusive tests which may disrupt cluster operations, issue the following:

export TNF_NON_INTRUSIVE_ONLY=true\n

Likewise, to enable intrusive tests, set the following:

export TNF_NON_INTRUSIVE_ONLY=false\n

Intrusive tests are enabled by default.

"},{"location":"runtime-env/#preflight-integration","title":"Preflight Integration","text":"

When running the preflight suite of tests, there are a few environment variables that will need to be set:

PFLT_DOCKERCONFIG is a required variable for running the preflight test suite. It provides credentials to the underlying preflight library so that it can pull and manipulate images and image bundles for testing.

When running as a container, the docker config is mounted to the container via volume mount.

When running as a standalone binary, the environment variables are consumed directly from your local machine.

See more about this variable here.

TNF_ALLOW_PREFLIGHT_INSECURE (default: false) must be set to true if you are running against a private container registry that has self-signed certificates.

"},{"location":"runtime-env/#disconnected-environment","title":"Disconnected environment","text":"

In a disconnected environment, only specific versions of images are mirrored to the local repo. For those environments, the partner pod image quay.io/testnetworkfunction/cnf-test-partner and debug pod image quay.io/testnetworkfunction/debug-partner should be mirrored and TNF_PARTNER_REPO should be set to the local repo, e.g.:

export TNF_PARTNER_REPO=registry.dfwt5g.lab:5000/testnetworkfunction\n

Note that you can also specify the debug pod image to use with the SUPPORT_IMAGE environment variable, which defaults to debug-partner:4.5.4.

"},{"location":"test-container/","title":"Prebuilt container","text":""},{"location":"test-container/#test","title":"Test","text":"

The tests can be run within a prebuilt container in the OCP cluster.

Prerequisites for the OCP cluster

  • The cluster should have enough resources to drain nodes and reschedule pods. If that is not the case, then the lifecycle-pod-recreation test should be skipped.
"},{"location":"test-container/#with-quay-test-container-image","title":"With quay test container image","text":""},{"location":"test-container/#pull-test-image","title":"Pull test image","text":"

The test image is available in this repository on quay.io and can be pulled using:

podman pull quay.io/testnetworkfunction/cnf-certification-test\n
"},{"location":"test-container/#check-cluster-resources","title":"Check cluster resources","text":"

Some test suites, such as platform-alteration, require node access to get node configuration such as hugepages. To get the required information, the test suite does not SSH into nodes, but instead relies on the oc debug tool. This tool makes it easier to fetch information from nodes and also to debug running pods.

The oc debug tool launches a new container whose name ends with a -debug suffix; the container is destroyed once the debug session is done. Ensure that the cluster has enough resources to create debug pods, otherwise those tests will fail.
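To get a rough idea of the available headroom on the nodes before a run (a sketch; oc adm top requires cluster metrics, and the node name worker-0 is hypothetical):

oc adm top nodes\noc debug node/worker-0\n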

Note

It\u2019s recommended to clean up disk space and make sure there are enough resources to deploy another container image on every node before starting the tests.

"},{"location":"test-container/#run-the-tests","title":"Run the tests","text":"
./run-tnf-container.sh\n

Required arguments

  • -t to provide the path of the local directory that contains tnf config files
  • -o to provide the path of the local directory where test results (claim.json), the execution logs (tnf-execution.log), and the results artifacts file (results.tar.gz) will be available after the container exits.

Warning

This directory must exist in order for the claim file to be written.

Optional arguments

  • -l to specify the labels of the test cases to be run. See Ginkgo Spec Labels for more information on how to filter tests with labels.

Note

If -l is not specified, the tnf will run in \u2018diagnostic\u2019 mode. In this mode, no test case will run: it will only get information from the cluster (PUTs, CRDs, nodes info, etc\u2026) to save it in the claim file. This can be used to make sure the configuration was properly set and the autodiscovery found the right pods/crds\u2026

  • -i to provide the name of a custom TNF container image. Supports local images, as well as images from external registries.

  • -k to set a path to one or more kubeconfig files to be used by the container to authenticate with the cluster. Paths must be separated by a colon.

Note

If -k is not specified, autodiscovery is performed.

The autodiscovery first looks for paths in the $KUBECONFIG environment variable on the host system, and if the variable is not set or is empty, the default configuration stored in $HOME/.kube/config is checked.
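For example, to pass two kubeconfig files explicitly (a sketch; both paths are hypothetical):

./run-tnf-container.sh -k ~/.kube/config:~/.kube/other-config -t ~/tnf/config -o ~/tnf/output\n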

  • -n to set the network mode of the container. Defaults to host, which requires SELinux to be disabled. Alternatively, bridge mode can be used with SELinux if TNF_CONTAINER_CLIENT is set to docker or the test is run as root.

Note

See the docker run --network parameter reference for more information on how to configure network settings.

  • -b to set an external offline DB that will be used to verify the certification status of containers, helm charts and operators. Defaults to the DB included in the TNF container image.

Note

See the OCT tool for more information on how to create this DB.

Command to run

./run-tnf-container.sh -k ~/.kube/config -t ~/tnf/config\n-o ~/tnf/output -l \"networking,access-control\"\n

See General tests for a list of available keywords.

"},{"location":"test-container/#run-with-docker","title":"Run with docker","text":"

By default, run-tnf-container.sh utilizes podman. However, an alternate container virtualization client can be configured using TNF_CONTAINER_CLIENT. This is particularly useful for operating systems that do not readily support podman.

In order to configure the test harness to use docker, issue the following prior to run-tnf-container.sh:

export TNF_CONTAINER_CLIENT=docker\n
"},{"location":"test-container/#output-targz-file-with-results-and-web-viewer-files","title":"Output tar.gz file with results and web viewer files","text":"

After running all the test cases, a compressed file will be created with all the results files and web artifacts to review them.

By default, only the claim.json, the cnf-certification-tests_junit.xml file, and this new tar.gz file are created after the test suite has finished, as this is probably all that most partners/users will need.

Two env vars control the generation of the web artifacts and the new tar.gz file:

  • TNF_OMIT_ARTIFACTS_ZIP_FILE=true/false : Defaults to false in the launch scripts. If set to true, the tar.gz generation will be skipped.
  • TNF_INCLUDE_WEB_FILES_IN_OUTPUT_FOLDER=true/false : Defaults to false in the launch scripts. If set to true, the web viewer/parser files will also be copied to the output (claim) folder.
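For example, to skip the tar.gz generation while keeping the web viewer files next to the claim file, both variables above can be combined (a sketch):

export TNF_OMIT_ARTIFACTS_ZIP_FILE=true\nexport TNF_INCLUDE_WEB_FILES_IN_OUTPUT_FOLDER=true\n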
"},{"location":"test-container/#with-local-test-container-image","title":"With local test container image","text":""},{"location":"test-container/#build-locally","title":"Build locally","text":"
podman build -t cnf-certification-test:v4.5.4 \\\n  --build-arg TNF_VERSION=v4.5.4 .\n
  • TNF_VERSION value is set to a branch, a tag, or a hash of a commit that will be installed into the image
"},{"location":"test-container/#build-from-an-unofficial-source","title":"Build from an unofficial source","text":"

The unofficial source could be a fork of the TNF repository.

Use the TNF_SRC_URL build argument to override the URL to a source repository.

podman build -t cnf-certification-test:v4.5.4 \\\n  --build-arg TNF_VERSION=v4.5.4 \\\n  --build-arg TNF_SRC_URL=https://github.com/test-network-function/cnf-certification-test .\n
"},{"location":"test-container/#run-the-tests-2","title":"Run the tests 2","text":"

Specify the custom TNF image using the -i parameter.

./run-tnf-container.sh -i cnf-certification-test:v4.5.4\n-t ~/tnf/config -o ~/tnf/output -l \"networking,access-control\"\n

Note: see General tests for a list of available keywords.

"},{"location":"test-output/","title":"Test Output","text":""},{"location":"test-output/#test-output","title":"Test Output","text":""},{"location":"test-output/#claim-file","title":"Claim File","text":"

The test suite generates an output file, named the claim file. This file is considered the proof of the CNF\u2019s test run and is evaluated by Red Hat when certification status is considered.

This file describes the following:

  • The system(s) under test
  • The tests that are executed
  • The outcome of the executed / skipped tests

Files that need to be submitted for certification

When submitting results back to Red Hat for certification, please include the above-mentioned claim file, the JUnit file, and any available console logs.

How to add a CNF platform test result to the existing claim file?

go run cmd/tools/cmd/main.go claim-add --claimfile=claim.json\n--reportdir=/home/$USER/reports\n

Args: --claimfile is an existing claim.json file; --reportdir is the path to the test results that you want to include.

The test result files from the given report dir will be appended under the results section of the claim file, using the file name as the key. The tool will ignore a test result if its key name is already present under the results section of the claim file.

 \"results\": {\n \"cnf-certification-tests_junit\": {\n \"testsuite\": {\n \"-errors\": \"0\",\n \"-failures\": \"2\",\n \"-name\": \"CNF Certification Test Suite\",\n \"-tests\": \"14\",\n ...\n

Reference

For more details on the contents of the claim file, see:

  • schema.
  • Guide.
"},{"location":"test-output/#execution-logs","title":"Execution logs","text":"

The test suite also saves a copy of the execution logs at [test output directory]/tnf-execution.log

"},{"location":"test-output/#results-artifacts-zip-file","title":"Results artifacts zip file","text":"

After running all the test cases, a compressed file will be created with all the results files and web artifacts to review them. The file has a UTC date-time prefix and looks like this:

20230620-110654-cnf-test-results.tar.gz

The sample prefix \u201c20230620-110654\u201d means \u201cJune 20th 2023, 11:06:54\u201d.

This is the content of the tar.gz file:

  • claim.json
  • cnf-certification-tests_junit.xml
  • claimjson.js
  • classification.js
  • results.html

This file serves two different purposes:

  1. Make it easier to store and send the test results for review.
  2. View the results in the HTML web page. In addition, the web page (either results-embed.html or results.html) has a selector for workload type and allows the partner to introduce feedback for each of the failing test cases for later review by Red Hat. It\u2019s important to note that this web page needs the claimjson.js and classification.js files to be in the same folder as the HTML files to work properly.
"},{"location":"test-output/#show-results-after-running-the-test-code","title":"Show Results after running the test code","text":"

A standalone HTML page is available to decode the results. For more details, see: https://github.com/test-network-function/parser

"},{"location":"test-output/#compare-claim-files-from-two-different-cnf-certification-suite-runs","title":"Compare claim files from two different CNF Certification Suite runs","text":"

Partners can use the tnf claim compare tool to compare two claim files. The differences are shown in a table per section. This tool can be helpful when the result of some test cases differs between two (consecutive) runs, as it shows configuration differences in both the CNF Cert Suite config and the cluster nodes that could be the root cause of the discrepancy in some test case results.

All the compared sections, except the test case results, are compared blindly, traversing the whole JSON tree and subtrees to get a list of all the fields and their values. Three tables are shown:

  • Differences: same fields with different values.
  • Fields in claim 1 only: JSON fields in claim file 1 that don\u2019t exist in claim file 2.
  • Fields in claim 2 only: JSON fields in claim file 2 that don\u2019t exist in claim file 1.

Let\u2019s say one of the nodes of the claim.json file contains this struct:

{\n  \"field1\": \"value1\",\n  \"field2\": {\n    \"field3\": \"value2\",\n    \"field4\": {\n      \"field5\": \"value3\",\n      \"field6\": \"value4\"\n    }\n  }\n}\n

When parsing that JSON struct\u2019s fields, the tool will produce a list of fields like this:

/field1=value1\n/field2/field3=value2\n/field2/field4/field5=value3\n/field2/field4/field6=value4\n

Once this list of field path+value strings has been obtained from both claim files, the lists are compared in order to find the differences or the fields that only exist in one of the files.

This is a fake example of a node \u201cclus0-0\u201d whose first CNI (index 0) has a different cniVersion and the ipMasq flag of its first plugin (also index 0) has changed to false in the second run. Also, the plugin has another \u201cnewFakeFlag\u201d config flag in claim 2 that didn\u2019t exist in claim file 1.

...\nCNIs: Differences\nFIELD                           CLAIM 1      CLAIM 2\n/clus0-0/0/cniVersion           1.0.0        1.0.1\n/clus0-1/0/plugins/0/ipMasq     true         false\n\nCNIs: Only in CLAIM 1\n<none>\n\nCNIs: Only in CLAIM 2\n/clus0-1/0/plugins/0/newFakeFlag=true\n...\n

Currently, the following sections are compared, in this order:

  • claim.versions
  • claim.Results
  • claim.configurations.Config
  • claim.nodes.cniPlugins
  • claim.nodes.csiDriver
  • claim.nodes.nodesHwInfo
  • claim.nodes.nodeSummary
"},{"location":"test-output/#how-to-build-the-tnf-tool","title":"How to build the tnf tool","text":"

The tnf tool is located in the repo\u2019s cmd/tnf folder. In order to compile it, just run:

make build-tnf-tool\n
"},{"location":"test-output/#examples","title":"Examples","text":""},{"location":"test-output/#compare-a-claim-file-against-itself-no-differences-expected","title":"Compare a claim file against itself: no differences expected","text":""},{"location":"test-output/#different-test-cases-results","title":"Different test cases results","text":"

Let\u2019s assume we have two claim files, claim1.json and claim2.json, obtained from two CNF Certification Suite runs in the same cluster.

During the second run, there was a test case that failed. Let\u2019s simulate it by manually modifying the second run\u2019s claim file to switch one test case\u2019s state from \u201cpassed\u201d to \u201cfailed\u201d.

"},{"location":"test-output/#different-cluster-configurations","title":"Different cluster configurations","text":"

First, let\u2019s simulate that the second run took place in a cluster with a different OCP version. As we store the OCP version in the claim file (section claim.versions), we can also modify it manually. The versions section comparison appears at the very beginning of the tnf claim compare output:

Now, let\u2019s simulate that the cluster was a bit different when the second CNF Certification Suite run was performed. First, let\u2019s make a manual change in claim2.json to emulate a different CNI version in the first node.

Finally, we\u2019ll simulate that, for some reason, the first node had one label removed when the second run was performed:

"},{"location":"test-spec/","title":"Available Test Specs","text":""},{"location":"test-spec/#test-specifications","title":"Test Specifications","text":""},{"location":"test-spec/#available-test-specs","title":"Available Test Specs","text":"

There are two categories for CNF tests.

  • General

These tests are designed to test any commodity CNF running on OpenShift, and include specifications such as Default network connectivity.

  • CNF-Specific

These tests are designed to verify that some unique aspects of the CNF under test behave correctly. This could include specifications such as issuing a GET request to a web server, or passing traffic through an IPSEC tunnel.

"},{"location":"test-spec/#general-tests","title":"General tests","text":"

These tests belong to multiple suites that can be run in any combination as is appropriate for the CNFs under test.

Info

Test suites group tests by topic area.

All of the following suites require OpenShift 4.6.0 or later:

  • access-control: tests service account, namespace and cluster/pod role binding for the pods under test. It also tests the pods/containers configuration.
  • affiliated-certification: verifies that the containers and operators discovered or listed in the configuration file are certified by Red Hat.
  • lifecycle: verifies the pods deployment, creation, shutdown and survivability.
  • networking: contains tests that check connectivity and networking config related best practices.
  • operator: designed to test basic Kubernetes Operator functionality.
  • platform-alteration: verifies that key platform configuration is not modified by the CNF under test.
  • observability: contains tests that check that CNF logging follows best practices and that CRDs have status fields.

Info

Please refer to CATALOG.md for more details.

"},{"location":"test-spec/#cnf-specific-tests","title":"CNF-specific tests","text":"

TODO

"},{"location":"test-standalone/","title":"Standalone test executable","text":""},{"location":"test-standalone/#standalone-test-executable","title":"Standalone test executable","text":"

Prerequisites

The repo must be cloned, and all the commands should be run from the cloned repo.

mkdir ~/workspace\ncd ~/workspace\ngit clone git@github.com:test-network-function/cnf-certification-test.git\ncd cnf-certification-test\n

Note

By default, cnf-certification-test emits results to cnf-certification-test/cnf-certification-tests_junit.xml.

"},{"location":"test-standalone/#1-install-dependencies","title":"1. Install dependencies","text":"

Depending on how you want to run the test suite, different dependencies will be needed.

If you are planning on running the test suite as a container, the only pre-requisite is Docker or Podman.

If you are planning on running the test suite as a standalone binary, there are pre-requisites that will need to be installed in your environment prior to runtime.

Run the following command to install these dependencies:

make install-tools\n
Required dependencies and minimum versions:

  • GoLang: 1.21
  • golangci-lint: 1.55.1
  • jq: 1.6
  • OpenShift Client: 4.12

Other binary dependencies required to run tests can be installed using the following command:

Note

  • You must also make sure that $GOBIN (default $GOPATH/bin) is on your $PATH.
  • Efforts to containerise this offering are considered a work in progress.
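A typical way to satisfy the $GOBIN requirement (a sketch, assuming the default GOPATH layout):

export PATH=$PATH:$(go env GOPATH)/bin\n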
"},{"location":"test-standalone/#2-build-the-test-suite","title":"2. Build the Test Suite","text":"

In order to build the test executable, first make sure you have satisfied the dependencies.

make build-cnf-tests\n

Gotcha: The make build* commands run unit tests where appropriate. They do NOT test the CNF.

"},{"location":"test-standalone/#3-test-a-cnf","title":"3. Test a CNF","text":"

A CNF is tested by specifying which suites to run using the run-cnf-suites.sh helper script.

Run any combination of the suite keywords listed in the General tests section, e.g.

./run-cnf-suites.sh -l \"lifecycle\"\n./run-cnf-suites.sh -l \"networking,lifecycle\"\n./run-cnf-suites.sh -l \"operator,networking\"\n./run-cnf-suites.sh -l \"networking,platform-alteration\"\n./run-cnf-suites.sh -l \"networking,lifecycle,affiliated-certification,operator\"\n

Note

As with \u201crun-tnf-container.sh\u201d, if -l is not specified here, the tnf will run in \u2018diagnostic\u2019 mode.

By default, the claim file will be output into the same location as the test executable. The -o argument for run-cnf-suites.sh can be used to provide a new location where the output files will be saved. For more detailed control over the outputs, see the output of cnf-certification-test.test --help.

    cd cnf-certification-test && ./cnf-certification-test.test --help\n
"},{"location":"test-standalone/#run-a-single-test","title":"Run a single test","text":"

All tests have unique labels, which can be used to filter which tests are to be run. This is useful when debugging a single test.

To select the test to be executed, run run-cnf-suites.sh with the following command line:

./run-cnf-suites.sh -l operator-install-source\n

Note

The test labels work the same as the suite labels, so you can select more than one test with the filtering mechanism shown before.

"},{"location":"test-standalone/#run-all-of-the-tests","title":"Run all of the tests","text":"

You can run all of the tests (including the intrusive tests and the extended suite) with the following commands:

./run-cnf-suites.sh -l all\n
"},{"location":"test-standalone/#run-a-subset","title":"Run a subset","text":"

You can find all the labels attached to the tests by running the following command:

./run-cnf-suites.sh --list\n

You can also check the CATALOG.md to find all test labels.

"},{"location":"test-standalone/#labels-for-offline-environments","title":"Labels for offline environments","text":"

Some tests require connectivity to Red Hat servers to validate certification status. To run the tests in an offline environment, skip those tests using the -l option.

./run-cnf-suites.sh -l '!online'\n

Alternatively, if an offline DB for containers, helm charts and operators is available, there is no need to skip those tests if the environment variable TNF_OFFLINE_DB is set to the DB location. This DB can be generated using the OCT tool.
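For example (a sketch; the DB path is hypothetical):

export TNF_OFFLINE_DB=~/oct/offline-db\n./run-cnf-suites.sh -l all\n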

Note: Only partner-certified images are stored in the offline database. If Red Hat images are checked against the offline database, they will show up as not certified. The online database includes both Partner and Red Hat images.

"},{"location":"test-standalone/#output-targz-file-with-results-and-web-viewer-files","title":"Output tar.gz file with results and web viewer files","text":"

After running all the test cases, a compressed file will be created with all the results files and web artifacts to review them.

By default, only the claim.json, the cnf-certification-tests_junit.xml file, and this new tar.gz file are created after the test suite has finished, as this is probably all that most partners/users will need.

Two env vars control the generation of the web artifacts and the new tar.gz file:

  • TNF_OMIT_ARTIFACTS_ZIP_FILE=true/false : Defaults to false in the launch scripts. If set to true, the tar.gz generation will be skipped.
  • TNF_INCLUDE_WEB_FILES_IN_OUTPUT_FOLDER=true/false : Defaults to false in the launch scripts. If set to true, the web viewer/parser files will also be copied to the output (claim) folder.
"},{"location":"test-standalone/#build-test-a-cnf","title":"Build + Test a CNF","text":"

Refer to the Developers\u2019 Guide

"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Home","text":""},{"location":"#overview","title":"Overview","text":"

This repository provides a set of Cloud-Native Network Functions (CNF) test cases and the framework to add more test cases.

CNF

The app (containers/pods/operators) we want to certify according to Telco partner/Red Hat\u2019s best practices.

TNF/Certification Test Suite

The tool we use to certify a CNF.

The purpose of the tests and the framework is to test the interaction of CNF with OpenShift Container Platform (OCP).

Info

This test suite is provided for the CNF Developers to test their CNFs\u2019 readiness for certification. Please see \u201cCNF Developers\u201d for more information.

Features

  • The test suite generates a report (claim.json) and saves the test execution log (tnf-execution.log) in a configurable output directory.

  • The catalog of the existing test cases and test building blocks is available in CATALOG.md

"},{"location":"#architecture","title":"Architecture","text":"

There are three building blocks in this framework.

  • the CNF represents the CNF to be certified. The certification suite identifies the resources (containers/pods/operators etc) belonging to the CNF via labels or static data entries in the config file

  • the Certification container/exec is the certification test suite running on the platform or in a container. The executable verifies the configuration of the CNF under test and its interactions with OpenShift

  • the Debug pods are part of a Kubernetes daemonset responsible for running various privileged commands on Kubernetes nodes. Debug pods are useful for running platform tests and test commands (e.g. ping) in container namespaces without changing the container image content. The debug daemonset is instantiated via the privileged-daemonset repository.

"},{"location":"cnf-developers/","title":"CNF Developers","text":""},{"location":"cnf-developers/#cnf-developers-guidelines","title":"CNF Developers Guidelines","text":"

Developers of CNFs, particularly those targeting CNF Certification with Red Hat on OpenShift, can use this suite to test the interaction of their CNF with OpenShift. If interested in CNF Certification please contact Red Hat.

Requirements

  • OpenShift 4.10 installation to run the CNFs
  • At least one extra machine to host the test suite
"},{"location":"cnf-developers/#to-add-private-test-cases","title":"To add private test cases","text":"

Refer to this documentation: https://github.com/test-network-function/cnfextensions

Reference

The cnf-certification-test-partner repository provides a sample example for modeling the test setup.

"},{"location":"configuration/","title":"Test Configuration","text":""},{"location":"configuration/#cnf-certification-configuration","title":"CNF Certification configuration","text":"

The CNF Certification Test uses a YAML configuration file to certify a specific CNF workload. This file specifies the CNF resources to be certified, as well as any exceptions or other general configuration options.

By default, a file named tnf_config.yml will be used. Here\u2019s an example of the CNF Config File. For a description of each config option, see the section CNF Config File options.

"},{"location":"configuration/#cnf-config-generator","title":"CNF Config Generator","text":"

The CNF config file can be created using the CNF Config Generator, which is part of the TNF tool shipped with the CNF Certification. The purpose of this tool is to help users configure the CNF Certification by providing a logical structure of the available options as well as the information required to make use of them. The result is a CNF config file in YAML format that the CNF Certification will parse to adapt the certification process to a specific CNF workload.

To compile the TNF tool:

make build-tnf-tool\n

To launch the CNF Config Generator:

./tnf generate config\n

Here\u2019s an example of how to use the tool:

"},{"location":"configuration/#cnf-config-file-options","title":"CNF Config File options","text":""},{"location":"configuration/#cnf-resources","title":"CNF resources","text":"

These options allow configuring the workload resources of the CNF to be verified. Only the resources that the CNF uses are required to be configured. The rest can be left empty. Usually, a basic configuration includes at least Namespaces and Pods.

"},{"location":"configuration/#targetnamespaces","title":"targetNameSpaces","text":"

The namespaces in which the CNF under test will be deployed.

targetNameSpaces:\n  - name: tnf\n
"},{"location":"configuration/#podsundertestlabels","title":"podsUnderTestLabels","text":"

The labels that each Pod of the CNF under test must have to be verified by the CNF Certification Suite.

Highly recommended

The labels should be defined in the Pod definition rather than added after the Pod is created, as labels added later on will be lost in case the Pod gets rescheduled. In the case of Pods defined as part of a Deployment, it\u2019s best to use the same label as the one defined in the spec.selector.matchLabels section of the Deployment YAML. The prefix field can be used to avoid naming collisions with other labels.

podsUnderTestLabels:\n  - \"test-network-function.com/generic: target\"\n
"},{"location":"configuration/#operatorsundertestlabels","title":"operatorsUnderTestLabels","text":"

The labels that each operator\u2019s CSV of the CNF under test must have to be verified by the CNF Certification Suite.

If a new label is used for this purpose, make sure it is added to the CNF operator\u2019s CSVs.

operatorsUnderTestLabels:\n  - \"test-network-function.com/operator: target\" \n
"},{"location":"configuration/#targetcrdfilters","title":"targetCrdFilters","text":"

The CRD name suffix used to filter the CNF\u2019s CRDs among all the CRDs present in the cluster. For each CRD, it can also be specified whether it\u2019s scalable or not in order to avoid some lifecycle test cases.

targetCrdFilters:\n - nameSuffix: \"group1.tnf.com\"\n   scalable: false\n - nameSuffix: \"anydomain.com\"\n   scalable: true\n

With the config shown above, all CRDs in the cluster whose names have the suffix group1.tnf.com or anydomain.com (e.g. crd1.group1.tnf.com or mycrd.mygroup.anydomain.com) will be tested.

"},{"location":"configuration/#manageddeployments-managedstatefulsets","title":"managedDeployments / managedStatefulSets","text":"

The Deployments/StatefulSets managed by a Custom Resource whose scaling is controlled using the \u201cscale\u201d subresource of the CR.

The CRD defining that CR should be included in the CRD filters with the scalable property set to true. If so, the test case lifecycle-{deployment/statefulset}-scaling will be skipped, otherwise it will fail.

managedDeployments:\n  - name: jack\nmanagedStatefulsets:\n  - name: jack\n
"},{"location":"configuration/#exceptions","title":"Exceptions","text":"

These options allow adding exceptions to skip several checks for different resources. The exceptions must be justified in order to pass the CNF Certification.

"},{"location":"configuration/#acceptedkerneltaints","title":"acceptedKernelTaints","text":"

The list of kernel modules loaded by the CNF that make the Linux kernel mark itself as tainted but that should skip verification.

Test cases affected: platform-alteration-tainted-node-kernel.

acceptedKernelTaints:\n  - module: vboxsf\n  - module: vboxguest\n
"},{"location":"configuration/#skiphelmchartlist","title":"skipHelmChartList","text":"

The list of Helm charts that the CNF uses whose certification status will not be verified.

If no exception is configured, the certification status for all Helm charts will be checked in the OpenShift Helm Charts repository.

Test cases affected: affiliated-certification-helmchart-is-certified.

skipHelmChartList:\n  - name: coredns\n
"},{"location":"configuration/#validprotocolnames","title":"validProtocolNames","text":"

The list of allowed protocol names to be used for container port names.

The name field of a container port must be of the form protocol[-suffix] where protocol must be allowed by default or added to this list. The optional suffix can be chosen by the application. Protocol names allowed by default: grpc, grpc-web, http, http2, tcp, udp.

Test cases affected: manageability-container-port-name-format.

validProtocolNames:\n  - \"http3\"\n  - \"sctp\"\n
"},{"location":"configuration/#servicesignorelist","title":"servicesIgnoreList","text":"

The list of Services that will skip verification.

Services included in this list will be filtered out at the autodiscovery stage and will not be subject to checks in any test case.

Test cases affected: networking-dual-stack-service, access-control-service-type.

servicesignorelist:\n  - \"hazelcast-platform-controller-manager-service\"\n  - \"hazelcast-platform-webhook-service\"\n  - \"new-pro-controller-manager-metrics-service\"\n
"},{"location":"configuration/#skipscalingtestdeployments-skipscalingteststatefulsets","title":"skipScalingTestDeployments / skipScalingTestStatefulSets","text":"

The list of Deployments/StatefulSets that do not support scale in/out operations.

Deployments/StatefulSets included in this list will skip any scaling operation check.

Test cases affected: lifecycle-deployment-scaling, lifecycle-statefulset-scaling.

skipScalingTestDeployments:\n  - name: deployment1\n    namespace: tnf\nskipScalingTestStatefulSetNames:\n  - name: statefulset1\n    namespace: tnf\n
"},{"location":"configuration/#cnf-certification-settings","title":"CNF Certification settings","text":""},{"location":"configuration/#debugdaemonsetnamespace","title":"debugDaemonSetNamespace","text":"

This is an optional field with the name of the namespace where a privileged DaemonSet will be deployed. The namespace will be created if it does not exist. If this field is not set, the default namespace for this DaemonSet is cnf-suite.

debugDaemonSetNamespace: cnf-cert\n

This DaemonSet, called tnf-debug, is deployed and used internally by the CNF Certification tool to issue some shell commands that are needed in certain test cases. Some of these test cases might fail or be skipped if it wasn\u2019t deployed correctly.

"},{"location":"configuration/#other-settings","title":"Other settings","text":"

The autodiscovery mechanism will attempt to identify the default network device and all the IP addresses of the Pods it needs for network connectivity tests, though that information can be explicitly set using annotations if needed.

"},{"location":"configuration/#pod-ips","title":"Pod IPs","text":"
  • The k8s.v1.cni.cncf.io/networks-status annotation is checked and all IPs from it are used. This annotation is automatically managed in OpenShift but may not be present in K8s.
  • If it is not present, then only known IPs associated with the Pod are used (the Pod .status.ips field).
"},{"location":"configuration/#network-interfaces","title":"Network Interfaces","text":"
  • The k8s.v1.cni.cncf.io/networks-status annotation is checked and the interface from the first entry found with \u201cdefault\u201d=true is used. This annotation is automatically managed in OpenShift but may not be present in K8s.

The label test-network-function.com/skip_connectivity_tests excludes Pods from all connectivity tests.

The label test-network-function.com/skip_multus_connectivity_tests excludes Pods from Multus connectivity tests. Tests on the default interface are still run.

"},{"location":"configuration/#affinity-requirements","title":"Affinity requirements","text":"

For CNF workloads that require Pods to use Pod or Node Affinity rules, the label AffinityRequired: true must be included in the Pod YAML. This ensures that the affinity best practices are tested and prevents anti-affinity test cases from failing.

"},{"location":"developers/","title":"Developers","text":""},{"location":"developers/#steps","title":"Steps","text":"

To test newly added tests / existing tests locally, follow these steps:

  • Clone the repo
  • Set runtime environment variables as required.

    For example, to deploy partner deployments in a custom namespace, set this in the test config:

    targetNameSpaces:\n  - name: mynamespace\n
  • Also, skip intrusive tests

export TNF_NON_INTRUSIVE_ONLY=true\n
  • Set the K8s config of the cluster where the test pods are running

    export KUBECONFIG=<<mypath/.kube/config>>\n
  • Execute the test suite, which will build and run the suite

    For example, to run networking tests

    ./script/development.sh networking\n
"},{"location":"developers/#dependencies-on-other-pr","title":"Dependencies on other PR","text":"

If you have dependencies on other Pull Requests, you can add a comment like this:

Depends-On: <url of the PR>\n

and the dependent PR will automatically be extracted and injected into your change during the GitHub Action CI jobs and the DCI jobs.

"},{"location":"exception/","title":"Exception Process","text":""},{"location":"exception/#exception-process","title":"Exception Process","text":"

There may be some test cases that are expected to always fail. The exceptions raised by the failed tests are published to the Red Hat website for that partner.

CATALOG provides the details of such exceptions.

"},{"location":"reference/","title":"Helpful Links","text":"
  • Contribution Guidelines
  • CATALOG
  • Best Practices Document v1.3
"},{"location":"runtime-env/","title":"Runtime environment variables","text":""},{"location":"runtime-env/#runtime-environment-variables","title":"Runtime environment variables","text":"

To run the test suite, some runtime environment variables must be set.

"},{"location":"runtime-env/#ocp-412-labels","title":"OCP >=4.12 Labels","text":"

The following labels need to be added to your default namespace in your cluster if you are running OCP >=4.12:

pod-security.kubernetes.io/enforce: privileged\npod-security.kubernetes.io/enforce-version: latest\n

You can manually label the namespace with:

oc label namespace/default pod-security.kubernetes.io/enforce=privileged\noc label namespace/default pod-security.kubernetes.io/enforce-version=latest\n
"},{"location":"runtime-env/#disable-intrusive-tests","title":"Disable intrusive tests","text":"

To skip intrusive tests which may disrupt cluster operations, issue the following:

export TNF_NON_INTRUSIVE_ONLY=true\n

Likewise, to enable intrusive tests, set the following:

export TNF_NON_INTRUSIVE_ONLY=false\n

Intrusive tests are enabled by default.

"},{"location":"runtime-env/#preflight-integration","title":"Preflight Integration","text":"

When running the preflight suite of tests, there are a few environment variables that will need to be set:

PFLT_DOCKERCONFIG is a required variable for running the preflight test suite. It provides credentials to the underlying preflight library so that it can pull and manipulate images and image bundles for testing.

When running as a container, the docker config is mounted to the container via volume mount.

When running as a standalone binary, the environment variables are consumed directly from your local machine.

See more about this variable here.

TNF_ALLOW_PREFLIGHT_INSECURE (default: false) must be set to true if you are running against a private container registry that has self-signed certificates.

"},{"location":"runtime-env/#disconnected-environment","title":"Disconnected environment","text":"

In a disconnected environment, only specific versions of images are mirrored to the local repo. For those environments, the partner pod image quay.io/testnetworkfunction/cnf-test-partner and debug pod image quay.io/testnetworkfunction/debug-partner should be mirrored and TNF_PARTNER_REPO should be set to the local repo, e.g.:

export TNF_PARTNER_REPO=registry.dfwt5g.lab:5000/testnetworkfunction\n

Note that you can also specify the debug pod image to use with the SUPPORT_IMAGE environment variable, which defaults to debug-partner:4.5.5.
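A minimal mirroring sketch for the partner image, reusing the local registry from the example above (the latest tag is an assumption; repeat for the debug-partner image with the pinned version):

podman pull quay.io/testnetworkfunction/cnf-test-partner:latest\npodman push quay.io/testnetworkfunction/cnf-test-partner:latest registry.dfwt5g.lab:5000/testnetworkfunction/cnf-test-partner:latest\n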

"},{"location":"test-container/","title":"Prebuilt container","text":""},{"location":"test-container/#test","title":"Test","text":"

The tests can be run within a prebuilt container in the OCP cluster.

Prerequisites for the OCP cluster

  • The cluster should have enough resources to drain nodes and reschedule pods. If that is not the case, then the lifecycle-pod-recreation test should be skipped.
"},{"location":"test-container/#with-quay-test-container-image","title":"With quay test container image","text":""},{"location":"test-container/#pull-test-image","title":"Pull test image","text":"

The test image is available in this repository on quay.io and can be pulled using:

podman pull quay.io/testnetworkfunction/cnf-certification-test\n
"},{"location":"test-container/#check-cluster-resources","title":"Check cluster resources","text":"

Some test suites, such as platform-alteration, require node access to get node configuration such as hugepages. To get the required information, the test suite does not SSH into nodes, but instead relies on the oc debug tool. This tool makes it easier to fetch information from nodes and also to debug running pods.

The oc debug tool launches a new container whose name ends with a -debug suffix; the container is destroyed once the debug session is done. Ensure that the cluster has enough resources to create debug pods, otherwise those tests will fail.

Note

It\u2019s recommended to clean up disk space and make sure there are enough resources to deploy another container image on every node before starting the tests.

"},{"location":"test-container/#run-the-tests","title":"Run the tests","text":"
./run-tnf-container.sh\n

Required arguments

  • -t to provide the path of the local directory that contains tnf config files
  • -o to provide the path of the local directory where test results (claim.json), the execution logs (tnf-execution.log), and the results artifacts file (results.tar.gz) will be available after the container exits.

Warning

This directory must exist in order for the claim file to be written.

Optional arguments

  • -l to specify the labels of the test cases to be run. See Ginkgo Spec Labels for more information on how to filter tests with labels.

Note

If -l is not specified, the tnf will run in \u2018diagnostic\u2019 mode. In this mode, no test case will run: it will only get information from the cluster (PUTs, CRDs, nodes info, etc\u2026) to save it in the claim file. This can be used to make sure the configuration was properly set and the autodiscovery found the right pods/crds\u2026

  • -i to provide a name to a custom TNF container image. Supports local images, as well as images from external registries.

  • -k to set a path to one or more kubeconfig files to be used by the container to authenticate with the cluster. Paths must be separated by a colon.

Note

If -k is not specified, autodiscovery is performed.

The autodiscovery first looks for paths in the $KUBECONFIG environment variable on the host system, and if the variable is not set or is empty, the default configuration stored in $HOME/.kube/config is checked.
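
As a sketch, passing two kubeconfig files explicitly looks like this (both kubeconfig paths are illustrative):

./run-tnf-container.sh -k ~/.kube/config:~/.kube/other-config -t ~/tnf/config -o ~/tnf/output -l \"networking\"\n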

  • -n to set the network mode of the container. Defaults to host, which requires SELinux to be disabled. Alternatively, bridge mode can be used with SELinux enabled if TNF_CONTAINER_CLIENT is set to docker or the test is run as root.

Note

See the docker run --network parameter reference for more information on how to configure network settings.

  • -b to set an external offline DB that will be used to verify the certification status of containers, helm charts and operators. Defaults to the DB included in the TNF container image.

Note

See the OCT tool for more information on how to create this DB.

Command to run

./run-tnf-container.sh -k ~/.kube/config -t ~/tnf/config\n-o ~/tnf/output -l \"networking,access-control\"\n

See General tests for a list of available keywords.

"},{"location":"test-container/#run-with-docker","title":"Run with docker","text":"

By default, run-tnf-container.sh utilizes podman. However, an alternate container virtualization client can be configured using TNF_CONTAINER_CLIENT. This is particularly useful for operating systems that do not readily support podman.

In order to configure the test harness to use docker, issue the following prior to run-tnf-container.sh:

export TNF_CONTAINER_CLIENT=docker\n
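
A complete docker-based run might then look like this (the config and output paths and the label are illustrative):

export TNF_CONTAINER_CLIENT=docker\n./run-tnf-container.sh -t ~/tnf/config -o ~/tnf/output -l \"networking\"\n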
"},{"location":"test-container/#output-targz-file-with-results-and-web-viewer-files","title":"Output tar.gz file with results and web viewer files","text":"

After running all the test cases, a compressed file will be created containing all the results files and web artifacts needed to review them.

By default, only the claim.json, the cnf-certification-tests_junit.xml file and this new tar.gz file are created after the test suite has finished, as this is probably all that most partners/users will need.

Two environment variables control the generation of the web artifacts and the new tar.gz file (an example follows the list):

  • TNF_OMIT_ARTIFACTS_ZIP_FILE=true/false: Defaults to false in the launch scripts. If set to true, the tar.gz generation will be skipped.
  • TNF_INCLUDE_WEB_FILES_IN_OUTPUT_FOLDER=true/false: Defaults to false in the launch scripts. If set to true, the web viewer/parser files will also be copied to the output (claim) folder.
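
For instance, to skip the tar.gz generation but keep the web viewer files next to the claim file, one possible combination is:

export TNF_OMIT_ARTIFACTS_ZIP_FILE=true\nexport TNF_INCLUDE_WEB_FILES_IN_OUTPUT_FOLDER=true\n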
"},{"location":"test-container/#with-local-test-container-image","title":"With local test container image","text":""},{"location":"test-container/#build-locally","title":"Build locally","text":"
podman build -t cnf-certification-test:v4.5.5 \\\n  --build-arg TNF_VERSION=v4.5.5 .\n
  • TNF_VERSION value is set to a branch, a tag, or a hash of a commit that will be installed into the image
"},{"location":"test-container/#build-from-an-unofficial-source","title":"Build from an unofficial source","text":"

The unofficial source could be a fork of the TNF repository.

Use the TNF_SRC_URL build argument to override the URL to a source repository.

podman build -t cnf-certification-test:v4.5.5 \\\n  --build-arg TNF_VERSION=v4.5.5 \\\n  --build-arg TNF_SRC_URL=https://github.com/test-network-function/cnf-certification-test .\n
"},{"location":"test-container/#run-the-tests-2","title":"Run the tests 2","text":"

Specify the custom TNF image using the -i parameter.

./run-tnf-container.sh -i cnf-certification-test:v4.5.5\n-t ~/tnf/config -o ~/tnf/output -l \"networking,access-control\"\n

Note: see General tests for a list of available keywords.

"},{"location":"test-output/","title":"Test Output","text":""},{"location":"test-output/#test-output","title":"Test Output","text":""},{"location":"test-output/#claim-file","title":"Claim File","text":"

The test suite generates an output file, named the claim file. This file is considered the proof of the CNF\u2019s test run and is evaluated by Red Hat when certification status is being considered.

This file describes the following:

  • The system(s) under test
  • The tests that are executed
  • The outcome of the executed / skipped tests

Files that need to be submitted for certification

When submitting results back to Red Hat for certification, please include the above-mentioned claim file, the JUnit file, and any available console logs.

How to add a CNF platform test result to the existing claim file?

go run cmd/tools/cmd/main.go claim-add --claimfile=claim.json\n--reportdir=/home/$USER/reports\n

Args: --claimfile is an existing claim.json file; --reportdir is the path to the test results that you want to include.

The test result files from the given report directory will be appended under the results section of the claim file, using the file name as the key/value pair. The tool will ignore a test result if its key name is already present under the results section of the claim file.

 \"results\": {\n \"cnf-certification-tests_junit\": {\n \"testsuite\": {\n \"-errors\": \"0\",\n \"-failures\": \"2\",\n \"-name\": \"CNF Certification Test Suite\",\n \"-tests\": \"14\",\n ...\n

Reference

For more details on the contents of the claim file

  • schema.
  • Guide.
"},{"location":"test-output/#execution-logs","title":"Execution logs","text":"

The test suite also saves a copy of the execution logs at [test output directory]/tnf-execution.log

"},{"location":"test-output/#results-artifacts-zip-file","title":"Results artifacts zip file","text":"

After running all the test cases, a compressed file will be created containing all the results files and web artifacts needed to review them. The file has a UTC date-time prefix and looks like this:

20230620-110654-cnf-test-results.tar.gz

The \u201c20230620-110654\u201d sample prefix means \u201cJune 20th 2023, 11:06:54\u201d in UTC.

This is the content of the tar.gz file:

  • claim.json
  • cnf-certification-tests_junit.xml
  • claimjson.js
  • classification.js
  • results.html

This file serves two different purposes:

  1. Make it easier to store and send the test results for review.
  2. View the results in the HTML web page. In addition, the web page (either results-embed.html or results.html) has a selector for workload type and allows the partner to introduce feedback for each of the failing test cases for later review by Red Hat. It\u2019s important to note that this web page needs the claimjson.js and classification.js files to be in the same folder as the HTML files to work properly.
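
As a sketch of the review workflow (the archive name is taken from the example above), extract the archive and open results.html from the extracted folder, so that claimjson.js and classification.js sit next to it:

tar -xzvf 20230620-110654-cnf-test-results.tar.gz\n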
"},{"location":"test-output/#show-results-after-running-the-test-code","title":"Show Results after running the test code","text":"

A standalone HTML page is available to decode the results. For more details, see: https://github.com/test-network-function/parser

"},{"location":"test-output/#compare-claim-files-from-two-different-cnf-certification-suite-runs","title":"Compare claim files from two different CNF Certification Suite runs","text":"

Partners can use the tnf claim compare tool to compare two claim files. The differences are shown in a table per section. This tool can be helpful when the result of some test cases differs between two (consecutive) runs, as it shows configuration differences in both the CNF Cert Suite config and the cluster nodes that could be the root cause of the discrepancies.

All the compared sections, except the test case results, are compared blindly, traversing the whole JSON tree and subtrees to get a list of all the fields and their values. Three tables are shown:

  • Differences: same fields with different values.
  • Fields in claim 1 only: json fields in claim file 1 that don\u2019t exist in claim 2.
  • Fields in claim 2 only: json fields in claim file 2 that don\u2019t exist in claim 1.

Let\u2019s say one of the nodes of the claim.json file contains this struct:

{\n  \"field1\": \"value1\",\n  \"field2\": {\n    \"field3\": \"value2\",\n    \"field4\": {\n      \"field5\": \"value3\",\n      \"field6\": \"value4\"\n    }\n  }\n}\n

When parsing the fields of that JSON struct, the tool produces a list of fields like this:

/field1=value1\n/field2/field3=value2\n/field2/field4/field5=value3\n/field2/field4/field6=value4\n

Once this list of field path+value strings has been obtained from both claim files, the lists are compared to find the differences and the fields that exist in only one of the files.
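
As an illustration only, not the tool\u2019s actual implementation, a jq one-liner (jq is already a dependency of this project) can produce the same kind of flattened path=value list from a claim file:

jq -r 'paths(scalars) as $p | \"/\" + ($p | map(tostring) | join(\"/\")) + \"=\" + (getpath($p) | tostring)' claim.json\n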

This is a fake example of a node \u201cclus0-0\u201d whose first CNI (index 0) has a different cniVersion, and whose first plugin (also index 0) has its ipMasq flag changed to false in the second run. Also, the plugin has another \u201cnewFakeFlag\u201d config flag in claim 2 that did not exist in claim file 1.

...\nCNIs: Differences\nFIELD                           CLAIM 1      CLAIM 2\n/clus0-0/0/cniVersion           1.0.0        1.0.1\n/clus0-1/0/plugins/0/ipMasq     true         false\n\nCNIs: Only in CLAIM 1\n<none>\n\nCNIs: Only in CLAIM 2\n/clus0-1/0/plugins/0/newFakeFlag=true\n...\n

Currently, the following sections are compared, in this order:

  • claim.versions
  • claim.Results
  • claim.configurations.Config
  • claim.nodes.cniPlugins
  • claim.nodes.csiDriver
  • claim.nodes.nodesHwInfo
  • claim.nodes.nodeSummary
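
Assuming these section names map directly onto the JSON structure of the claim file, any of them can be inspected quickly with jq, e.g.:

jq '.claim.versions' claim1.json\n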
"},{"location":"test-output/#how-to-build-the-tnf-tool","title":"How to build the tnf tool","text":"

The tnf tool is located in the repo\u2019s cmd/tnf folder. In order to compile it, just run:

make build-tnf-tool\n
"},{"location":"test-output/#examples","title":"Examples","text":""},{"location":"test-output/#compare-a-claim-file-against-itself-no-differences-expected","title":"Compare a claim file against itself: no differences expected","text":""},{"location":"test-output/#different-test-cases-results","title":"Different test cases results","text":"

Let\u2019s assume we have two claim files, claim1.json and claim2.json, obtained from two CNF Certification Suite runs in the same cluster.

During the second run, there was a test case that failed. Let\u2019s simulate it by manually modifying the second run\u2019s claim file to switch one test case\u2019s state from \u201cpassed\u201d to \u201cfailed\u201d.

"},{"location":"test-output/#different-cluster-configurations","title":"Different cluster configurations","text":"

First, let\u2019s simulate that the second run took place in a cluster with a different OCP version. As we store the OCP version in the claim file (section claim.versions), we can also modify it manually. The versions section comparison appears at the very beginning of the tnf claim compare output.

Now, let\u2019s simulate that the cluster was a bit different when the second CNF Certification Suite run was performed. First, let\u2019s make a manual change in claim2.json to emulate a different CNI version in the first node.

Finally, we\u2019ll simulate that, for some reason, the first node had one label removed when the second run was performed.

"},{"location":"test-spec/","title":"Available Test Specs","text":""},{"location":"test-spec/#test-specifications","title":"Test Specifications","text":""},{"location":"test-spec/#available-test-specs","title":"Available Test Specs","text":"

There are two categories for CNF tests.

  • General

These tests are designed to test any commodity CNF running on OpenShift, and include specifications such as Default network connectivity.

  • CNF-Specific

These tests are designed to verify that unique aspects of the CNF under test behave correctly. This could include specifications such as issuing a GET request to a web server, or passing traffic through an IPSEC tunnel.

"},{"location":"test-spec/#general-tests","title":"General tests","text":"

These tests belong to multiple suites that can be run in any combination as is appropriate for the CNFs under test.

Info

Test suites group tests by topic area.

All of the following suites require a minimum OpenShift version of 4.6.0:

  • access-control: tests service account, namespace and cluster/pod role binding for the pods under test. It also tests the pods/containers configuration.
  • affiliated-certification: verifies that the containers and operators discovered or listed in the configuration file are certified by Red Hat.
  • lifecycle: verifies the pods deployment, creation, shutdown and survivability.
  • networking: contains tests that check connectivity and networking config related best practices.
  • operator: designed to test basic Kubernetes Operator functionality.
  • platform-alteration: verifies that key platform configuration is not modified by the CNF under test.
  • observability: contains tests that check CNF logging follows best practices and that CRDs have status fields.

Info

Please refer to CATALOG.md for more details.

"},{"location":"test-spec/#cnf-specific-tests","title":"CNF-specific tests","text":"

TODO

"},{"location":"test-standalone/","title":"Standalone test executable","text":""},{"location":"test-standalone/#standalone-test-executable","title":"Standalone test executable","text":"

Prerequisites

Clone the repo and run all the commands from the cloned repo:

mkdir ~/workspace\ncd ~/workspace\ngit clone git@github.com:test-network-function/cnf-certification-test.git\ncd cnf-certification-test\n

Note

By default, cnf-certification-test emits results to cnf-certification-test/cnf-certification-tests_junit.xml.

"},{"location":"test-standalone/#1-install-dependencies","title":"1. Install dependencies","text":"

Depending on how you want to run the test suite, different dependencies will be needed.

If you are planning on running the test suite as a container, the only prerequisite is Docker or Podman.

If you are planning on running the test suite as a standalone binary, there are prerequisites that will need to be installed in your environment prior to runtime.

Run the following command to install these dependencies:

make install-tools\n
Dependency and minimum version:

  • GoLang: 1.21
  • golangci-lint: 1.55.1
  • jq: 1.6
  • OpenShift Client: 4.12

Other binary dependencies required to run tests can be installed using the following command:

Note

  • You must also make sure that $GOBIN (default $GOPATH/bin) is on your $PATH.
  • Efforts to containerise this offering are considered a work in progress.
"},{"location":"test-standalone/#2-build-the-test-suite","title":"2. Build the Test Suite","text":"

In order to build the test executable, first make sure you have satisfied the dependencies.

make build-cnf-tests\n

Gotcha: The make build* commands run unit tests where appropriate. They do NOT test the CNF.

"},{"location":"test-standalone/#3-test-a-cnf","title":"3. Test a CNF","text":"

A CNF is tested by specifying which suites to run using the run-cnf-suites.sh helper script.

Run any combination of the suite keywords listed in the General tests section, e.g.:

./run-cnf-suites.sh -l \"lifecycle\"\n./run-cnf-suites.sh -l \"networking,lifecycle\"\n./run-cnf-suites.sh -l \"operator,networking\"\n./run-cnf-suites.sh -l \"networking,platform-alteration\"\n./run-cnf-suites.sh -l \"networking,lifecycle,affiliated-certification,operator\"\n

Note

As with \u201crun-tnf-container.sh\u201d, if -l is not specified here, the tnf will run in \u2018diagnostic\u2019 mode.

By default, the claim file will be output to the same location as the test executable. The -o argument of run-cnf-suites.sh can be used to provide a different location where the output files will be saved. For more detailed control over the outputs, see the output of cnf-certification-test.test --help.

    cd cnf-certification-test && ./cnf-certification-test.test --help\n
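
For example, to write the claim file and logs to a custom directory (the path is illustrative):

./run-cnf-suites.sh -l \"lifecycle\" -o ~/tnf/output\n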
"},{"location":"test-standalone/#run-a-single-test","title":"Run a single test","text":"

All tests have unique labels, which can be used to filter which tests are to be run. This is useful when debugging a single test.

To select the test to be executed, run run-cnf-suites.sh with the following command line:

./run-cnf-suites.sh -l operator-install-source\n

Note

The test labels work the same as the suite labels, so you can select more than one test with the filtering mechanism shown before.

"},{"location":"test-standalone/#run-all-of-the-tests","title":"Run all of the tests","text":"

You can run all of the tests (including the intrusive tests and the extended suite) with the following commands:

./run-cnf-suites.sh -l all\n
"},{"location":"test-standalone/#run-a-subset","title":"Run a subset","text":"

You can find all the labels attached to the tests by running the following command:

./run-cnf-suites.sh --list\n

You can also check the CATALOG.md to find all test labels.

"},{"location":"test-standalone/#labels-for-offline-environments","title":"Labels for offline environments","text":"

Some tests do require connectivity to Red Hat servers to validate certification status. To run the tests in an offline environment, skip those tests using the -l option.

./run-cnf-suites.sh -l '!online'\n

Alternatively, if an offline DB for containers, helm charts and operators is available, there is no need to skip those tests if the environment variable TNF_OFFLINE_DB is set to the DB location. This DB can be generated using the OCT tool.
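
A minimal sketch, assuming the DB was generated with the OCT tool and copied to a local path (the path is illustrative):

export TNF_OFFLINE_DB=/path/to/offline-db\n./run-cnf-suites.sh -l all\n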

Note: Only partner-certified images are stored in the offline database. If Red Hat images are checked against the offline database, they will show up as not certified. The online database includes both partner and Red Hat images.

"},{"location":"test-standalone/#output-targz-file-with-results-and-web-viewer-files","title":"Output tar.gz file with results and web viewer files","text":"

After running all the test cases, a compressed file will be created containing all the results files and web artifacts needed to review them.

By default, only the claim.json, the cnf-certification-tests_junit.xml file and this new tar.gz file are created after the test suite has finished, as this is probably all that most partners/users will need.

Two environment variables control the generation of the web artifacts and the new tar.gz file:

  • TNF_OMIT_ARTIFACTS_ZIP_FILE=true/false: Defaults to false in the launch scripts. If set to true, the tar.gz generation will be skipped.
  • TNF_INCLUDE_WEB_FILES_IN_OUTPUT_FOLDER=true/false: Defaults to false in the launch scripts. If set to true, the web viewer/parser files will also be copied to the output (claim) folder.
"},{"location":"test-standalone/#build-test-a-cnf","title":"Build + Test a CNF","text":"

Refer to the Developers\u2019 Guide.

"}]} \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz index 711e3054fc7b2f31bc531eb9073d025208c89c73..e4456c8dfa68e19440d31812d8dbcc8741072a2c 100644 GIT binary patch delta 12 Tcmb=gXOr*d;NXs)$W{pe6=MT6 delta 12 Tcmb=gXOr*d;P@Ohk*yK{8RrBX diff --git a/test-container/index.html b/test-container/index.html index ea2f132b9..42ab4ab4f 100644 --- a/test-container/index.html +++ b/test-container/index.html @@ -1166,8 +1166,8 @@

Output tar.gz file

With local test container image

Build locally

-
podman build -t cnf-certification-test:v4.5.4 \
-  --build-arg TNF_VERSION=v4.5.4 \
+
podman build -t cnf-certification-test:v4.5.5 \
+  --build-arg TNF_VERSION=v4.5.5 \
 
  • TNF_VERSION value is set to a branch, a tag, or a hash of a commit that will be installed into the image
  • @@ -1175,13 +1175,13 @@

    Build locallyBuild from an unofficial source

    The unofficial source could be a fork of the TNF repository.

    Use the TNF_SRC_URL build argument to override the URL to a source repository.

    -
    podman build -t cnf-certification-test:v4.5.4 \
    -  --build-arg TNF_VERSION=v4.5.4 \
    +
    podman build -t cnf-certification-test:v4.5.5 \
    +  --build-arg TNF_VERSION=v4.5.5 \
       --build-arg TNF_SRC_URL=https://github.com/test-network-function/cnf-certification-test .
     

    Run the tests 2

    Specify the custom TNF image using the -i parameter.

    -
    ./run-tnf-container.sh -i cnf-certification-test:v4.5.4
    +
    ./run-tnf-container.sh -i cnf-certification-test:v4.5.5
     -t ~/tnf/config -o ~/tnf/output -l "networking,access-control"
     

    Note: see General tests for a list of available keywords.