created` message is emitted.
## Creating a StatefulSet with file system volumes
-Create a StatefulSet yaml file, similar to the following demo-statefulset-file-system.yaml file.
-
-
-kind: StatefulSet
-apiVersion: apps/v1
-metadata:
- name: demo-statefulset-file-system
-spec:
- selector:
- matchLabels:
- app: demo-statefulset
- serviceName: demo-statefulset
- replicas: 1
- template:
+Create a StatefulSet YAML file, similar to the following `demo-statefulset-file-system.yaml` file.
+
+Be sure to indicate the `volumeMounts`, listing each volume's name and path. In this example, the `mountPath` is listed as `"/data"`.
+
+ kind: StatefulSet
+ apiVersion: apps/v1
metadata:
- labels:
- app: demo-statefulset
+ name: demo-statefulset-file-system
spec:
- containers:
- - name: demo-container
- image: registry.access.redhat.com/ubi8/ubi:latest
- command: [ "/bin/sh", "-c", "--" ]
- args: [ "while true; do sleep 30; done;" ]
- volumeMounts:
+ selector:
+ matchLabels:
+ app: demo-statefulset
+ serviceName: demo-statefulset
+ replicas: 1
+ template:
+ metadata:
+ labels:
+ app: demo-statefulset
+ spec:
+ containers:
+ - name: demo-container
+ image: registry.access.redhat.com/ubi8/ubi:latest
+ command: [ "/bin/sh", "-c", "--" ]
+ args: [ "while true; do sleep 30; done;" ]
+ volumeMounts:
+ - name: demo-volume-file-system
+ mountPath: "/data"
+ volumes:
- name: demo-volume-file-system
- mountPath: "/data"
- volumes:
- - name: demo-volume-file-system
- persistentVolumeClaim:
- claimName: demo-pvc-file-system
-
+ persistentVolumeClaim:
+ claimName: demo-pvc-file-system
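+
+After the file is created, it can be applied with a command similar to the following (assuming the example file name used in this section):
+
+```
+kubectl apply -f demo-statefulset-file-system.yaml
+```
+
+The `statefulset.apps/demo-statefulset-file-system created` message is emitted.
+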
## Creating a StatefulSet with raw block volume
-Create a StatefulSet yaml file, similar to the following demo-statefulset-raw-block.yaml file.
-
-
-kind: StatefulSet
-apiVersion: apps/v1
-metadata:
- name: demo-statefulset-raw-block
-spec:
- selector:
- matchLabels:
- app: demo-statefulset
- serviceName: demo-statefulset
- replicas: 1
- template:
+Create a StatefulSet YAML file, similar to the following `demo-statefulset-raw-block.yaml` file.
+
+Be sure to indicate the `volumeDevices`, listing each volume's name and path. In this example, the `devicePath` is listed as `"/dev/block"`.
+
+ kind: StatefulSet
+ apiVersion: apps/v1
metadata:
- labels:
- app: demo-statefulset
+ name: demo-statefulset-raw-block
spec:
- containers:
- - name: demo-container
- image: registry.access.redhat.com/ubi8/ubi:latest
- command: [ "/bin/sh", "-c", "--" ]
- args: [ "while true; do sleep 30; done;" ]
- volumeDevices:
+ selector:
+ matchLabels:
+ app: demo-statefulset
+ serviceName: demo-statefulset
+ replicas: 1
+ template:
+ metadata:
+ labels:
+ app: demo-statefulset
+ spec:
+ containers:
+ - name: demo-container
+ image: registry.access.redhat.com/ubi8/ubi:latest
+ command: [ "/bin/sh", "-c", "--" ]
+ args: [ "while true; do sleep 30; done;" ]
+ volumeDevices:
+ - name: demo-volume-raw-block
+ devicePath: "/dev/block"
+ volumes:
- name: demo-volume-raw-block
- devicePath: "/dev/block"
- volumes:
- - name: demo-volume-raw-block
- persistentVolumeClaim:
- claimName: demo-pvc-raw-block
-
+ persistentVolumeClaim:
+ claimName: demo-pvc-raw-block
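+
+After the file is created, it can be applied with a command similar to the following (assuming the example file name used in this section):
+
+```
+kubectl apply -f demo-statefulset-raw-block.yaml
+```
+
+The `statefulset.apps/demo-statefulset-raw-block created` message is emitted.
+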
## Creating a StatefulSet with both raw block and file system volumes
-Create a StatefulSet yaml file, similar to the following demo-statefulset-combined.yaml file.
-
-
-kind: StatefulSet
-apiVersion: apps/v1
-metadata:
- name: demo-statefulset-combined
-spec:
- selector:
- matchLabels:
- app: demo-statefulset
- serviceName: demo-statefulset
- replicas: 1
- template:
+Create a StatefulSet YAML file, similar to the following `demo-statefulset.yaml` file.
+
+In a StatefulSet file that uses both volume modes, it is important to indicate both the `volumeMounts` and `volumeDevices` parameters.
+
+ kind: StatefulSet
+ apiVersion: apps/v1
metadata:
- labels:
- app: demo-statefulset
+ name: demo-statefulset
spec:
- containers:
- - name: demo-container
- image: registry.access.redhat.com/ubi8/ubi:latest
- command: [ "/bin/sh", "-c", "--" ]
- args: [ "while true; do sleep 30; done;" ]
- volumeMounts:
+ selector:
+ matchLabels:
+ app: demo-statefulset
+ serviceName: demo-statefulset
+ replicas: 1
+ template:
+ metadata:
+ labels:
+ app: demo-statefulset
+ spec:
+ containers:
+ - name: demo-container
+ image: registry.access.redhat.com/ubi8/ubi:latest
+ command: [ "/bin/sh", "-c", "--" ]
+ args: [ "while true; do sleep 30; done;" ]
+ volumeMounts:
+ - name: demo-volume-file-system
+ mountPath: "/data"
+ volumeDevices:
+ - name: demo-volume-raw-block
+ devicePath: "/dev/block"
+ volumes:
- name: demo-volume-file-system
- mountPath: "/data"
- volumeDevices:
+ persistentVolumeClaim:
+ claimName: demo-pvc-file-system
- name: demo-volume-raw-block
- devicePath: "/dev/block"
- volumes:
- - name: demo-volume-file-system
- persistentVolumeClaim:
- claimName: demo-pvc-file-system
- - name: demo-volume-raw-block
- persistentVolumeClaim:
- claimName: demo-pvc-raw-block
-
-
+ persistentVolumeClaim:
+ claimName: demo-pvc-raw-block
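+
+After the file is created, it can be applied with a command similar to the following (assuming the example file name used in this section):
+
+```
+kubectl apply -f demo-statefulset.yaml
+```
+
+The `statefulset.apps/demo-statefulset created` message is emitted.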
diff --git a/docs/content/configuration/csi_ug_config_create_storageclasses.md b/docs/content/configuration/csi_ug_config_create_storageclasses.md
index 52d3373d7..750dc8d3e 100644
--- a/docs/content/configuration/csi_ug_config_create_storageclasses.md
+++ b/docs/content/configuration/csi_ug_config_create_storageclasses.md
@@ -1,14 +1,16 @@
# Creating a StorageClass
-Create a storage class yaml file in order to define the storage system pool name, secret reference, `SpaceEfficiency`, and `fstype`.
+Create a storage class YAML file in order to define the storage parameters, such as pool name, secret reference, `SpaceEfficiency`, and `fstype`.
+
+**Note:** If you are using the CSI Topology feature, in addition to the information and parameter definitions provided here, be sure to follow the steps in [Creating a StorageClass with topology awareness](csi_ug_config_create_storageclasses_topology.md).
Use the following procedure to create and apply the storage classes.
**Note:** This procedure is applicable for both Kubernetes and Red Hat® OpenShift®. For Red Hat OpenShift, replace `kubectl` with `oc` in all relevant commands.
-Create a storage class yaml file, similar to the following demo-storageclass.yaml.
+Create a storage class YAML file, similar to the following `demo-storageclass.yaml` file, and update the storage parameters as needed.
-Update the capabilities, pools, and array secrets, as needed.
+When configuring the file, be sure to use the same array secret and array secret namespace as defined in [Creating a Secret](csi_ug_config_create_secret.md).
Use the `SpaceEfficiency` parameters for each storage system, as defined in [the following table](#spaceefficiency). These values are not case-sensitive.
@@ -17,8 +19,8 @@ _**Table:** `SpaceEfficiency` parameter definitions
|Storage system type|SpaceEfficiency parameter options|
|-------------------|---------------------------------|
|IBM FlashSystem® A9000 and A9000R|Always includes deduplication and compression. No need to specify during configuration.|
-|IBM Spectrum® Virtualize Family|- thick (default value)<br>- thin<br>- compressed<br>- deduplicated<br>**Note:** If not specified, the default value is thick.|
-|IBM® DS8000® Family| - none (default value)<br>- thin<br>**Note:** If not specified, the default value is none.|
+|IBM Spectrum® Virtualize Family|- `thick` (default value)<br>- `thin`<br>- `compressed`<br>- `deduplicated`<br>**Note:** If not specified, the default value is `thick`.|
+|IBM® DS8000® Family| - `none` (default value)<br>- `thin`<br>**Note:** If not specified, the default value is `none`.|
- The IBM DS8000 Family `pool` value is the pool ID and not the pool name as is used in other storage systems.
- Be sure that the `pool` value is the name of an existing pool on the storage system.
@@ -29,36 +31,29 @@ _**Table:** `SpaceEfficiency` parameter definitions
- The `csi.storage.k8s.io/fstype` parameter is optional. The values that are allowed are _ext4_ or _xfs_. The default value is _ext4_.
- The `volume_name_prefix` parameter is optional.
-**Note:** For IBM DS8000 Family, the maximum prefix length is five characters. The maximum prefix length for other systems is 20 characters. For storage systems that use Spectrum Virtualize, the `CSI_` prefix is added as default if not specified by the user.
+**Note:** For IBM DS8000 Family, the maximum prefix length is 5 characters. The maximum prefix length for other systems is 20 characters. For storage systems that use Spectrum Virtualize, the `CSI` prefix is added as default if not specified by the user.
- kind: StorageClass
- apiVersion: storage.k8s.io/v1
- metadata:
- name: demo-storageclass
- provisioner: block.csi.ibm.com
- parameters:
- SpaceEfficiency: deduplicated # Optional.
- pool: demo-pool
-
- csi.storage.k8s.io/provisioner-secret-name: demo-secret
- csi.storage.k8s.io/provisioner-secret-namespace: default
- csi.storage.k8s.io/controller-publish-secret-name: demo-secret
- csi.storage.k8s.io/controller-publish-secret-namespace: default
- csi.storage.k8s.io/controller-expand-secret-name: demo-secret
- csi.storage.k8s.io/controller-expand-secret-namespace: default
-
- csi.storage.k8s.io/fstype: xfs # Optional. Values ext4\xfs. The default is ext4.
- volume_name_prefix: demoPVC # Optional.
- allowVolumeExpansion: true
+ kind: StorageClass
+ apiVersion: storage.k8s.io/v1
+ metadata:
+ name: demo-storageclass
+ provisioner: block.csi.ibm.com
+ parameters:
+ pool: demo-pool
+ SpaceEfficiency: thin # Optional.
+ volume_name_prefix: demo-prefix # Optional.
+
+ csi.storage.k8s.io/fstype: xfs # Optional. Values ext4/xfs. The default is ext4.
+ csi.storage.k8s.io/secret-name: demo-secret
+ csi.storage.k8s.io/secret-namespace: default
+ allowVolumeExpansion: true
Apply the storage class.
```
- kubectl apply -f demo-storageclass.yaml
+ kubectl apply -f <filename>.yaml
```
-The `storageclass.storage.k8s.io/demo-storageclass created` message is emitted.
-
-
+The `storageclass.storage.k8s.io/<storageclass-name> created` message is emitted.
\ No newline at end of file
diff --git a/docs/content/configuration/csi_ug_config_create_storageclasses_topology.md b/docs/content/configuration/csi_ug_config_create_storageclasses_topology.md
new file mode 100644
index 000000000..14e87a773
--- /dev/null
+++ b/docs/content/configuration/csi_ug_config_create_storageclasses_topology.md
@@ -0,0 +1,46 @@
+# Creating a StorageClass with topology awareness
+
+When using the CSI Topology feature, different parameters must be taken into account when creating a storage class YAML file with specific `by_management_id` requirements. Use this information to help define a StorageClass that is topology aware.
+
+**Note:** For information and parameter definitions that are not related to topology awareness, be sure to see the information provided in [Creating a StorageClass](csi_ug_config_create_storageclasses.md), in addition to the current section.
+
+The StorageClass file must be defined to contain topology information, based on the labels that were already defined on the nodes in the cluster (see [Compatibility and requirements](../installation/csi_ug_requirements.md)). This determines which storage pools are served as candidates for PersistentVolumeClaim (PVC) requests, as well as the subset of nodes that can use the volumes provisioned by the CSI driver.
+
+With topology awareness, the StorageClass must have the `volumeBindingMode` set to `WaitForFirstConsumer` (as shown in the `.yaml` example below). This setting ensures that any PVC requested with this specific StorageClass waits to be provisioned until the CSI driver can see the worker node topology.
+
+The `by_management_id` parameter is optional and values such as the `pool`, `SpaceEfficiency`, and `volume_name_prefix` may all be specified.
+
+The various `by_management_id` parameters are chosen within the following hierarchical order:
+1. From within the `by_management_id` parameter, per system (if specified).
+2. Outside of the parameter, as a cross-system default (if not specified within the `by_management_id` parameter for the relevant `management-id`).
+
+
+ ```
+kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+ name: demo-storageclass-config-secret
+provisioner: block.csi.ibm.com
+volumeBindingMode: WaitForFirstConsumer
+parameters:
+ # non-csi.storage.k8s.io parameters may be specified in by_management_id per system and/or outside by_management_id as the cross-system default.
+
+ by_management_id: '{"demo-management-id-1":{"pool":"demo-pool-1","SpaceEfficiency":"deduplicated","volume_name_prefix":"demo-prefix-1"},
+ "demo-management-id-2":{"pool":"demo-pool-2","volume_name_prefix":"demo-prefix-2"}}' # Optional.
+ pool: demo-pool
+ SpaceEfficiency: thin # Optional.
+ volume_name_prefix: demo-prefix # Optional.
+
+ csi.storage.k8s.io/fstype: xfs # Optional. Values ext4/xfs. The default is ext4.
+ csi.storage.k8s.io/secret-name: demo-config-secret
+ csi.storage.k8s.io/secret-namespace: default
+allowVolumeExpansion: true
+ ```
+Apply the storage class.
+
+ ```
+ kubectl apply -f <filename>.yaml
+ ```
+The `storageclass.storage.k8s.io/<storageclass-name> created` message is emitted.
+
+
diff --git a/docs/content/configuration/csi_ug_config_create_vol_replicationclass.md b/docs/content/configuration/csi_ug_config_create_vol_replicationclass.md
new file mode 100644
index 000000000..a7355145a
--- /dev/null
+++ b/docs/content/configuration/csi_ug_config_create_vol_replicationclass.md
@@ -0,0 +1,33 @@
+# Creating a VolumeReplicationClass
+
+Create a VolumeReplicationClass YAML file to enable volume replication.
+
+**Note:** Remote copy function is referred to as the more generic volume replication within this documentation set. Not all supported products use the remote-copy function terminology.
+
+In order to enable volume replication for your storage system, create a VolumeReplicationClass YAML file, similar to the following `demo-volumereplicationclass.yaml`.
+
+When configuring the file, be sure to use the same array secret and array secret namespace as defined in [Creating a Secret](csi_ug_config_create_secret.md).
+
+For information on obtaining your storage system `system_id`, see [Finding a `system_id`](csi_ug_config_replication_find_systemid.md).
+
+```
+apiVersion: replication.storage.openshift.io/v1alpha1
+kind: VolumeReplicationClass
+metadata:
+ name: demo-volumereplicationclass
+spec:
+ provisioner: block.csi.ibm.com
+ parameters:
+ system_id: demo-system-id
+ copy_type: async # Optional. Values sync/async. The default is sync.
+
+ replication.storage.openshift.io/replication-secret-name: demo-secret
+ replication.storage.openshift.io/replication-secret-namespace: default
+```
+
+After the YAML file is created, apply it by using the `kubectl apply -f` command.
+
+```
+kubectl apply -f <filename>.yaml
+```
+The `volumereplicationclass.replication.storage.openshift.io/<volumereplicationclass-name> created` message is emitted.
\ No newline at end of file
diff --git a/docs/content/configuration/csi_ug_config_create_vol_snapshotclass.md b/docs/content/configuration/csi_ug_config_create_vol_snapshotclass.md
index b0967e790..c72aae29a 100644
--- a/docs/content/configuration/csi_ug_config_create_vol_snapshotclass.md
+++ b/docs/content/configuration/csi_ug_config_create_vol_snapshotclass.md
@@ -2,37 +2,37 @@
Create a VolumeSnapshotClass YAML file to enable creation and deletion of volume snapshots.
-**Note:**
+**Note:** IBM® FlashCopy® function is referred to as the more generic volume snapshots and cloning within this documentation set. Not all supported products use the FlashCopy function terminology.
-- IBM® FlashCopy® function is referred to as the more generic volume snapshots and cloning within this documentation set. Not all supported products use the FlashCopy function terminology.
-- For volume snapshot support, the minimum orchestration platform version requirements are Red Hat® OpenShift® 4.4 and Kubernetes 1.17.
-
-In order to enable creation and deletion of volume snapshots for your storage system, create a VolumeSnapshotClass YAML file, similar to the following demo-snapshotclass.yaml.
+In order to enable creation and deletion of volume snapshots for your storage system, create a VolumeSnapshotClass YAML file, similar to the following `demo-volumesnapshotclass.yaml`.
When configuring the file, be sure to use the same array secret and array secret namespace as defined in [Creating a Secret](csi_ug_config_create_secret.md).
- The `snapshot_name_prefix` parameter is optional.
-   **Note:** For IBM DS8000® Family, the maximum prefix length is five characters. The maximum prefix length for other systems is 20 characters. For storage systems using Spectrum Virtualize, the `CSI_` prefix is added as default if not specified by the user.
+   **Note:** For IBM DS8000® Family, the maximum prefix length is five characters. The maximum prefix length for other systems is 20 characters. For storage systems that use Spectrum Virtualize, the `CSI` prefix is added as default if not specified by the user.
-- The `pool` parameter is not available on IBM FlashSystem A9000 and A9000R storage systems. For these storage systems the snapshot must be created on the same pool as the source.
+- The `pool` parameter is not available on IBM FlashSystem A9000 and A9000R storage systems. For these storage systems, the snapshot must be created on the same pool as the source.
-```screen
+```
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
- name: demo-snapshotclass
+ name: demo-volumesnapshotclass
driver: block.csi.ibm.com
deletionPolicy: Delete
parameters:
+ pool: demo-pool # Optional. Use to create the snapshot on a different pool than the source.
+ SpaceEfficiency: thin # Optional. Use to create the snapshot with a different space efficiency than the source.
+ snapshot_name_prefix: demo-prefix # Optional.
+
csi.storage.k8s.io/snapshotter-secret-name: demo-secret
csi.storage.k8s.io/snapshotter-secret-namespace: default
- snapshot_name_prefix: demoSnapshot # Optional.
- pool: demo-pool # Optional. Use to create the snapshot on a different pool than the source.
```
After the YAML file is created, apply it by using the `kubectl apply -f` command.
```
 kubectl apply -f <filename>.yaml
-```
\ No newline at end of file
+```
+The `volumesnapshotclass.snapshot.storage.k8s.io/<volumesnapshotclass-name> created` message is emitted.
\ No newline at end of file
diff --git a/docs/content/configuration/csi_ug_config_create_vol_snapshotclass_topology.md b/docs/content/configuration/csi_ug_config_create_vol_snapshotclass_topology.md
new file mode 100644
index 000000000..2946eb8af
--- /dev/null
+++ b/docs/content/configuration/csi_ug_config_create_vol_snapshotclass_topology.md
@@ -0,0 +1,44 @@
+# Creating a VolumeSnapshotClass with topology awareness
+
+When using the CSI Topology feature, different parameters must be taken into account when creating a VolumeSnapshotClass YAML file with specific `by_management_id` requirements. Use this information to help define a VolumeSnapshotClass that is topology aware and enables the creation and deletion of volume snapshots.
+
+**Note:**
+ - For information and parameter definitions that are not related to topology awareness, be sure to see the information that is provided in [Creating a VolumeSnapshotClass](csi_ug_config_create_vol_snapshotclass.md), in addition to the current section.
+
+ - IBM® FlashCopy® function is referred to as the more generic volume snapshots and cloning within this documentation set. Not all supported products use the FlashCopy function terminology.
+
+In order to enable creation and deletion of volume snapshots for your storage system, create a VolumeSnapshotClass YAML file, similar to the following `demo-volumesnapshotclass-config-secret.yaml`.
+
+The `by_management_id` parameter is optional and values such as the `pool`, `SpaceEfficiency`, and `snapshot_name_prefix` can all be specified.
+
+The various `by_management_id` parameters are chosen within the following hierarchical order:
+1. From within the `by_management_id` parameter, per system (if specified).
+2. Outside of the parameter, as a cross-system default (if not specified within the `by_management_id` parameter for the relevant `management-id`).
+
+```
+apiVersion: snapshot.storage.k8s.io/v1beta1
+kind: VolumeSnapshotClass
+metadata:
+ name: demo-volumesnapshotclass-config-secret
+driver: block.csi.ibm.com
+deletionPolicy: Delete
+parameters:
+ # non-csi.storage.k8s.io parameters may be specified in by_management_id per system and/or outside by_management_id as the cross-system default.
+
+ by_management_id: '{"demo-management-id-1":{"pool":"demo-pool-1","SpaceEfficiency":"deduplicated","snapshot_name_prefix":"demo-prefix-1"},
+ "demo-management-id-2":{"pool":"demo-pool-2","snapshot_name_prefix":"demo-prefix-2"}}' # Optional.
+ pool: demo-pool # Optional. Use to create the snapshot on a different pool than the source.
+ SpaceEfficiency: thin # Optional. Use to create the snapshot with a different space efficiency than the source.
+ snapshot_name_prefix: demo-prefix # Optional.
+
+ csi.storage.k8s.io/snapshotter-secret-name: demo-config-secret
+ csi.storage.k8s.io/snapshotter-secret-namespace: default
+```
+
+After the YAML file is created, apply it by using the `kubectl apply -f` command.
+
+```
+kubectl apply -f <filename>.yaml
+```
+
+The `volumesnapshotclass.snapshot.storage.k8s.io/<volumesnapshotclass-name> created` message is emitted.
\ No newline at end of file
diff --git a/docs/content/configuration/csi_ug_config_expand_pvc.md b/docs/content/configuration/csi_ug_config_expand_pvc.md
index 3284667c2..e94b2a81a 100644
--- a/docs/content/configuration/csi_ug_config_expand_pvc.md
+++ b/docs/content/configuration/csi_ug_config_expand_pvc.md
@@ -2,9 +2,9 @@
Use this information to expand existing volumes.
-**Important:** Before expanding an existing volume, be sure that the relevant StorageClass yaml `allowVolumeExpansion` parameter is set to true. For more information, see [Creating a StorageClass](csi_ug_config_create_storageclasses.md).
+**Important:** Before expanding an existing volume, be sure that the `allowVolumeExpansion` parameter in the relevant StorageClass YAML file is set to `true`. For more information, see [Creating a StorageClass](csi_ug_config_create_storageclasses.md).
-To expand an existing volume, open the relevant PersistentVolumeClaim (PVC) yaml file and increase the `storage` parameter value. For example, if the current `storage` value is set to _1Gi_, you can change it to _10Gi_, as needed. For more information about PVC configuration, see [Creating a PersistentVolumeClaim (PVC)](csi_ug_config_create_pvc.md).
+To expand an existing volume, open the relevant PersistentVolumeClaim (PVC) YAML file and increase the `storage` parameter value. For example, if the current `storage` value is set to _1Gi_, you can change it to _10Gi_, as needed. For more information about PVC configuration, see [Creating a PersistentVolumeClaim (PVC)](csi_ug_config_create_pvc.md).
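+
+For illustration, a minimal sketch of the relevant PVC fields follows (the names are example values from this documentation set; only the `storage` value changes during expansion):
+
+```
+kind: PersistentVolumeClaim
+apiVersion: v1
+metadata:
+  name: demo-pvc-file-system
+spec:
+  accessModes:
+    - ReadWriteOnce
+  storageClassName: demo-storageclass
+  resources:
+    requests:
+      storage: 10Gi  # Increased from 1Gi.
+```
+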
Be sure to use the `kubectl apply` command in order to apply your changes.
diff --git a/docs/content/configuration/csi_ug_config_replication_find_systemid.md b/docs/content/configuration/csi_ug_config_replication_find_systemid.md
new file mode 100644
index 000000000..1e314e308
--- /dev/null
+++ b/docs/content/configuration/csi_ug_config_replication_find_systemid.md
@@ -0,0 +1,8 @@
+# Finding a `system_id`
+
+Find the `system_id` parameter of the remote storage system in order to create a VolumeReplicationClass YAML file and enable replication.
+
+To find the `system_id` parameter on your Spectrum Virtualize storage system, use the `lspartnership` command.
+
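+A hedged sketch of invoking the command remotely follows (the `superuser` name and `<management-ip>` are placeholders; the `id` column of the remote partnership row typically provides the `system_id` value):
+
+```
+ssh superuser@<management-ip> lspartnership
+```
+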
+For more information, see **Command-line interface** > **Copy Service commands** > **lspartnership** within your specific product documentation on [IBM Docs](https://www.ibm.com/docs/en).
+
diff --git a/docs/content/configuration/csi_ug_config_topology.md b/docs/content/configuration/csi_ug_config_topology.md
new file mode 100644
index 000000000..a7d09fe7d
--- /dev/null
+++ b/docs/content/configuration/csi_ug_config_topology.md
@@ -0,0 +1,9 @@
+# Configuring for CSI Topology
+
+Use this information for specific configuration steps when using CSI Topology with the IBM® block storage CSI driver.
+
+**Important:** Be sure that all of the topology requirements are met before starting. For more information, see [Compatibility and requirements](../installation/csi_ug_requirements.md).
+
+- [Creating a Secret with topology awareness](csi_ug_config_create_secret_topology.md)
+- [Creating a StorageClass with topology awareness](csi_ug_config_create_storageclasses_topology.md)
+- [Creating a VolumeSnapshotClass with topology awareness](csi_ug_config_create_vol_snapshotclass_topology.md)
\ No newline at end of file
diff --git a/docs/content/csi_overview.md b/docs/content/csi_overview.md
index b16250df4..4d5d9a8b6 100644
--- a/docs/content/csi_overview.md
+++ b/docs/content/csi_overview.md
@@ -8,8 +8,8 @@ By leveraging CSI (Container Storage Interface) drivers for IBM storage systems,
IBM storage orchestration for containers includes the following driver types for storage provisioning:
-- The IBM block storage CSI driver, for block storage (documented here).
-- The IBM Spectrum® Scale CSI driver, for file storage. For specific Spectrum Scale and Spectrum Scale CSI driver product information, see [IBM Spectrum Scale documentation](https://www.ibm.com/docs/en/spectrum-scale/).
+- The IBM block storage CSI driver, for block storage (documented here).
+- The IBM Spectrum® Scale CSI driver, for file storage. For specific Spectrum Scale and Spectrum Scale CSI driver product information, see [IBM Spectrum Scale documentation](https://www.ibm.com/docs/en/spectrum-scale/).
For details about volume provisioning with Kubernetes, refer to [Persistent volumes on Kubernetes](https://kubernetes.io/docs/concepts/storage/volumes/).
diff --git a/docs/content/installation/csi_ug_install_operator_github.md b/docs/content/installation/csi_ug_install_operator_github.md
index bc87eed3f..ef71d8ce8 100644
--- a/docs/content/installation/csi_ug_install_operator_github.md
+++ b/docs/content/installation/csi_ug_install_operator_github.md
@@ -4,28 +4,30 @@ The operator for IBM® block storage CSI driver can be installed directly with G
Use the following steps to install the operator and driver, with [GitHub](https://github.com/IBM/ibm-block-csi-operator) (github.com/IBM/ibm-block-csi-operator).
-**Note:** Before you begin, you may need to create a user-defined namespace. Create the project namespace, using the `kubectl create ns ` command.
+**Note:** Before you begin, it is best practice to create a user-defined namespace. Create the project namespace, using the `kubectl create ns <namespace>` command.
1. Install the operator.
1. Download the manifest from GitHub.
```
- curl https://raw.githubusercontent.com/IBM/ibm-block-csi-operator/v1.6.0/deploy/installer/generated/ibm-block-csi-operator.yaml > ibm-block-csi-operator.yaml
+ curl https://raw.githubusercontent.com/IBM/ibm-block-csi-operator/v1.7.0/deploy/installer/generated/ibm-block-csi-operator.yaml > ibm-block-csi-operator.yaml
```
- 2. **Optional:** Update the image fields in the ibm-block-csi-operator.yaml.
+ 2. (Optional) Update the image fields in the `ibm-block-csi-operator.yaml`.
- 3. Install the operator, using a user-defined namespace.
+     **Note:** If a user-defined namespace was created, edit the namespace from `default` to `<namespace>`.
+
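+     One possible way to make this edit (a sketch only; it assumes the manifest specifies `namespace: default` and that `<namespace>` is replaced with your namespace name):
+
+     ```
+     sed -i 's/namespace: default/namespace: <namespace>/g' ibm-block-csi-operator.yaml
+     ```
+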
+ 3. Install the operator.
```
- kubectl -n apply -f ibm-block-csi-operator.yaml
+ kubectl apply -f ibm-block-csi-operator.yaml
```
4. Verify that the operator is running. (Make sure that the Status is _Running_.)
- ```screen
- $ kubectl get pod -l app.kubernetes.io/name=ibm-block-csi-operator -n
+ ```
+ $> kubectl get pod -l app.kubernetes.io/name=ibm-block-csi-operator -n <namespace>
NAME READY STATUS RESTARTS AGE
ibm-block-csi-operator-5bb7996b86-xntss 1/1 Running 0 10m
```
@@ -35,22 +37,24 @@ Use the following steps to install the operator and driver, with [GitHub](https:
1. Download the manifest from GitHub.
```
- curl https://raw.githubusercontent.com/IBM/ibm-block-csi-operator/v1.6.0/deploy/crds/csi.ibm.com_v1_ibmblockcsi_cr.yaml > csi.ibm.com_v1_ibmblockcsi_cr.yaml
+ curl https://raw.githubusercontent.com/IBM/ibm-block-csi-operator/v1.7.0/config/samples/csi.ibm.com_v1_ibmblockcsi_cr.yaml > csi.ibm.com_v1_ibmblockcsi_cr.yaml
```
- 2. **Optional:** Update the image repository field, tag field, or both in the csi.ibm.com_v1_ibmblockcsi_cr.yaml.
+ 2. (Optional) Update the image repository field, tag field, or both in the `csi.ibm.com_v1_ibmblockcsi_cr.yaml`.
- 3. Install the csi.ibm.com_v1_ibmblockcsi_cr.yaml.
+     **Note:** If a user-defined namespace was created, edit the namespace from `default` to `<namespace>`.
+
+ 3. Install the `csi.ibm.com_v1_ibmblockcsi_cr.yaml`.
```
- kubectl -n apply -f csi.ibm.com_v1_ibmblockcsi_cr.yaml
+ kubectl apply -f csi.ibm.com_v1_ibmblockcsi_cr.yaml
```
4. Verify that the driver is running:
- ```bash
- $ kubectl get pods -n -l csi
+ ```
+ $> kubectl get pods -n <namespace> -l csi
NAME READY STATUS RESTARTS AGE
- ibm-block-csi-controller-0 6/6 Running 0 9m36s
+ ibm-block-csi-controller-0 7/7 Running 0 9m36s
ibm-block-csi-node-jvmvh 3/3 Running 0 9m36s
ibm-block-csi-node-tsppw 3/3 Running 0 9m36s
ibm-block-csi-operator-5bb7996b86-xntss 1/1 Running 0 10m
diff --git a/docs/content/installation/csi_ug_install_operator_openshift.md b/docs/content/installation/csi_ug_install_operator_openshift.md
index 2e14f1eae..b9a95dfaa 100644
--- a/docs/content/installation/csi_ug_install_operator_openshift.md
+++ b/docs/content/installation/csi_ug_install_operator_openshift.md
@@ -38,9 +38,9 @@ The Red Hat OpenShift Container Platform uses the following `SecurityContextCons
10. Click **Create Instance** to create the IBM block storage CSI driver (`IBMBlockCSI`).
- A yaml file opens in the web console. This file can be left as-is, or edited as needed.
+ A YAML file opens in the web console. This file can be left as-is, or edited as needed.
-11. Update the yaml file to include your user-defined namespace.
+11. Update the YAML file to include your user-defined namespace.
12. Click **Create**.
diff --git a/docs/content/installation/csi_ug_install_operator_operatorhub.md b/docs/content/installation/csi_ug_install_operator_operatorhub.md
index 9d956e8ba..cd5a82cf9 100644
--- a/docs/content/installation/csi_ug_install_operator_operatorhub.md
+++ b/docs/content/installation/csi_ug_install_operator_operatorhub.md
@@ -4,4 +4,4 @@ When using OperatorHub.io, the operator for IBM® block storage CSI driver can b
To install the CSI driver from OperatorHub.io, go to https://operatorhub.io/operator/ibm-block-csi-operator-community and follow the installation instructions, after clicking the **Install** button.
-**Note:** To ensure that the operator installs the driver, be sure to apply the yaml that is located as part of the ibm-block-csi-operator-community page mentioned above.
+**Note:** To ensure that the operator installs the driver, be sure to apply the YAML file that is provided on the ibm-block-csi-operator-community page mentioned above.
diff --git a/docs/content/installation/csi_ug_requirements.md b/docs/content/installation/csi_ug_requirements.md
index bb86dbfe3..53e8260ea 100644
--- a/docs/content/installation/csi_ug_requirements.md
+++ b/docs/content/installation/csi_ug_requirements.md
@@ -1,12 +1,12 @@
# Compatibility and requirements
-For the complete and up-to-date information about the compatibility and requirements for using the IBM® block storage CSI driver, refer to its latest release notes. The release notes detail supported operating system and container platform versions, as well as microcode versions of the supported storage systems.
+For the complete and up-to-date information about the compatibility and requirements for using the IBM® block storage CSI driver, refer to its latest release notes. The release notes detail supported operating system and container platform versions, and microcode versions of the supported storage systems.
Before beginning the installation of the CSI (Container Storage Interface) driver, be sure to verify that you comply with the following prerequisites.
For IBM Cloud® Satellite users, see [cloud.ibm.com/docs/satellite](https://cloud.ibm.com/docs/satellite) for full system requirements.
-**Important:** When using Satellite, complete the following checks, configurations, and the installation process before assigning the hosts to your locations. In addition, **do not** create a Kubernetes cluster. This is done through Satellite.
+**Important:** When using Satellite, complete the following checks, configurations, and the installation process before assigning the hosts to your locations. In addition, **do not** create a Kubernetes cluster. Creating the Kubernetes cluster is done through Satellite.
- The CSI driver requires the following ports to be opened on the worker nodes' OS firewall:
- **For all iSCSI users**
@@ -17,7 +17,7 @@ For IBM Cloud® Satellite users, see [cloud.ibm.com/docs/satellite](https://clou
Port 7778
- - **IBM Spectrum® Virtualize Family includes IBM® SAN Volume Controller and IBM FlashSystem® family members built with IBM Spectrum® Virtualize (including FlashSystem 5xxx, 7200, 9100, 9200, 9200R)**
+ - **IBM Spectrum® Virtualize Family includes IBM® SAN Volume Controller and IBM FlashSystem® family members that are built with IBM Spectrum® Virtualize (including FlashSystem 5xxx, 7200, 9100, 9200, 9200R)**
Port 22
@@ -27,17 +27,15 @@ For IBM Cloud® Satellite users, see [cloud.ibm.com/docs/satellite](https://clou
- Be sure that multipathing is installed and running.
-Perform these steps for each worker node in Kubernetes cluster to prepare your environment for installing the CSI (Container Storage Interface) driver.
+Complete these steps for each worker node in the Kubernetes cluster to prepare your environment for installing the CSI (Container Storage Interface) driver.
-1. **For RHEL OS users:** Ensure iSCSI connectivity. If using RHCOS or if the packages are already installed, skip this step and continue to step 2.
-
-2. Configure Linux® multipath devices on the host.
+1. Configure Linux® multipath devices on the host.
**Important:** Be sure to configure each worker with storage connectivity according to your storage system instructions. For more information, find your storage system documentation in [IBM Documentation](http://www.ibm.com/docs/).
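+
+   For RHEL worker nodes, one common way to enable multipathing is shown in the following sketch (it assumes the `device-mapper-multipath` package is installed; see the package list in step 3):
+
+   ```
+   mpathconf --enable --with_multipathd y
+   ```
+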
   **Additional configuration steps for OpenShift® Container Platform users (RHEL and RHCOS).** Other users can continue to step 2.
- Download and save the following yaml file:
+ Download and save the following YAML file:
```
curl https://raw.githubusercontent.com/IBM/ibm-block-csi-operator/master/deploy/99-ibm-attach.yaml > 99-ibm-attach.yaml
@@ -45,25 +43,53 @@ Perform these steps for each worker node in Kubernetes cluster to prepare your e
This file can be used for both Fibre Channel and iSCSI configurations. To support iSCSI, uncomment the last two lines in the file.
- **Important:** The 99-ibm-attach.yaml configuration file overrides any files that already exist on your system. Only use this file if the files mentioned are not already created.
If one or more have been created, edit this yaml file, as necessary.
+ **Important:** The `99-ibm-attach.yaml` configuration file overrides any files that exist on your system. Only use this file if the files mentioned are not already created.
If one or more were created, edit this YAML file, as necessary.
- Apply the yaml file.
+ Apply the YAML file.
`oc apply -f 99-ibm-attach.yaml`
-
-3. If needed, enable support for volume snapshots (FlashCopy® function) on your Kubernetes cluster.
- For more information and instructions, see the Kubernetes blog post, [Kubernetes 1.20: Kubernetes Volume Snapshot Moves to GA](https://kubernetes.io/blog/2020/12/10/kubernetes-1.20-volume-snapshot-moves-to-ga/).
+2. Configure storage system connectivity.
- Install both the Snapshot CRDs and the Common Snapshot Controller once per cluster.
+ 1. Define the host of each Kubernetes node on the relevant storage systems with the valid WWPN (for Fibre Channel) or IQN (for iSCSI) of the node.
- The instructions and relevant yaml files to enable volume snapshots can be found at: [https://github.com/kubernetes-csi/external-snapshotter#usage](https://github.com/kubernetes-csi/external-snapshotter#usage)
+ 2. For Fibre Channel, configure the relevant zoning from the storage to the host.
-4. Configure storage system connectivity.
+ 3. Ensure proper connectivity.
- 1. Define the host of each Kubernetes node on the relevant storage systems with the valid WWPN (for Fibre Channel) or IQN (for iSCSI) of the node.
+3. **For RHEL OS users:** Ensure that the following packages are installed.
- 2. For Fibre Channel, configure the relevant zoning from the storage to the host.
+   If using RHCOS or if the packages are already installed, this step may be skipped. Otherwise, the packages can be installed as shown in the sketch after this list.
+
+ - sg3_utils
+ - iscsi-initiator-utils
+ - device-mapper-multipath
+ - xfsprogs (if XFS file system is required)
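+
+   A typical command for installing the packages listed above on RHEL (a sketch; adjust the package list as needed):
+
+   ```
+   yum install sg3_utils iscsi-initiator-utils device-mapper-multipath xfsprogs
+   ```
+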
+4. (Optional) If you plan to use volume snapshots (FlashCopy® function), enable support on your Kubernetes cluster.
+
+ For more information and instructions, see the Kubernetes blog post, [Kubernetes 1.20: Kubernetes Volume Snapshot Moves to GA](https://kubernetes.io/blog/2020/12/10/kubernetes-1.20-volume-snapshot-moves-to-ga/).
+
+ Install both the Snapshot CRDs and the Common Snapshot Controller once per cluster.
+
+ The instructions and relevant YAML files to enable volume snapshots can be found at: [https://github.com/kubernetes-csi/external-snapshotter#usage](https://github.com/kubernetes-csi/external-snapshotter#usage)
+
+5. (Optional) If you plan to use volume replication (remote copy function), enable support on your orchestration platform cluster and storage system.
+
+ 1. To enable support on your Kubernetes cluster, install the following replication CRDs once per cluster.
+
+ ```
+ curl -O https://raw.githubusercontent.com/csi-addons/volume-replication-operator/v0.1.0/config/crd/bases/replication.storage.openshift.io_volumereplicationclasses.yaml
+ kubectl apply -f ./replication.storage.openshift.io_volumereplicationclasses.yaml
+
+ curl -O https://raw.githubusercontent.com/csi-addons/volume-replication-operator/v0.1.0/config/crd/bases/replication.storage.openshift.io_volumereplications.yaml
+ kubectl apply -f ./replication.storage.openshift.io_volumereplications.yaml
+   ```
+
+ 2. To enable support on your storage system, see the following section within your Spectrum Virtualize product documentation on [IBM Documentation](https://www.ibm.com/docs/en/): **Administering** > **Managing Copy Services** > **Managing remote-copy partnerships**.
+6. (Optional) To use CSI Topology, at least one node in the cluster must have a label with the prefix `topology.block.csi.ibm.com` to introduce topology awareness.
+
+   **Important:** This label prefix must be present on the nodes in the cluster **before** installing the IBM® block storage CSI driver. If the nodes do not have the proper label prefix before installation, CSI Topology cannot be used with the CSI driver.
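+
+   For example, a node can be labeled with a key under this prefix as follows (the `zone` key name and the `demo-zone-1` value are illustrative assumptions; use the topology labels defined for your environment):
+
+   ```
+   kubectl label node <node-name> topology.block.csi.ibm.com/zone=demo-zone-1
+   ```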
+ For more information, see [Configuring for CSI Topology](../configuration/csi_ug_config_topology.md).
\ No newline at end of file
diff --git a/docs/content/installation/csi_ug_uninstall_github.md b/docs/content/installation/csi_ug_uninstall_github.md
index 59a12e8b5..ecd0bb194 100644
--- a/docs/content/installation/csi_ug_uninstall_github.md
+++ b/docs/content/installation/csi_ug_uninstall_github.md
@@ -3,16 +3,16 @@
Use this information to uninstall the IBM® CSI (Container Storage Interface) operator and driver with GitHub.
Perform the following steps in order to uninstall the CSI driver and operator.
-1. Delete the IBMBlockCSI custom resource.
+1. Delete the IBMBlockCSI custom resource.
```
- kubectl -n delete -f csi.ibm.com_v1_ibmblockcsi_cr.yaml
+ kubectl delete -f csi.ibm.com_v1_ibmblockcsi_cr.yaml
```
-2. Delete the operator.
+2. Delete the operator.
```
- kubectl -n delete -f ibm-block-csi-operator.yaml
+ kubectl delete -f ibm-block-csi-operator.yaml
```
diff --git a/docs/content/installation/csi_ug_uninstall_operatorhub.md b/docs/content/installation/csi_ug_uninstall_operatorhub.md
index 55b059144..122c4c0b4 100644
--- a/docs/content/installation/csi_ug_uninstall_operatorhub.md
+++ b/docs/content/installation/csi_ug_uninstall_operatorhub.md
@@ -2,6 +2,6 @@
Use this information to uninstall the IBM® CSI (Container Storage Interface) operator and driver with OperatorHub.io.
-To uninstall the CSI driver with OperatorHub.io, use the `kubectl delete -f` command to delete the yaml files, one at a time, in the reverse order of the installation steps that are documented in https://operatorhub.io/operator/ibm-block-csi-operator-community.
+To uninstall the CSI driver with OperatorHub.io, use the `kubectl delete -f` command to delete the YAML files, one at a time, in the reverse order of the installation steps that are documented in https://operatorhub.io/operator/ibm-block-csi-operator-community.
**Note:** To see the installation steps, click **Install** on the OperatorHub.io webpage.
\ No newline at end of file
diff --git a/docs/content/installation/csi_ug_upgrade.md b/docs/content/installation/csi_ug_upgrade.md
index 95a8330c1..85d7ee7fd 100644
--- a/docs/content/installation/csi_ug_upgrade.md
+++ b/docs/content/installation/csi_ug_upgrade.md
@@ -2,7 +2,7 @@
Use this information to upgrade the IBM® block storage CSI driver.
-- The OpenShift web console and OperatorHub.io both automatically upgrade the the CSI (Container Storage Interface) driver when a new version is released.
+- The OpenShift web console and OperatorHub.io both automatically upgrade the CSI (Container Storage Interface) driver when a new version is released.
- With OpenShift web console, the **Approval Strategy** must be set to **Automatic**.
To check if your operator is running at the latest release level, from the OpenShift web console, browse to **Operators** > **Installed Operators**. Check the status of the Operator for IBM block storage CSI driver. Ensure that the **Upgrade Status** is _Up to date_.
diff --git a/docs/content/release_notes/csi_rn_changelog_1.5.1.md b/docs/content/release_notes/csi_rn_changelog_1.5.1.md
new file mode 100644
index 000000000..53ce7a615
--- /dev/null
+++ b/docs/content/release_notes/csi_rn_changelog_1.5.1.md
@@ -0,0 +1,3 @@
+# 1.5.1 (July 2021)
+
+IBM® block storage CSI driver 1.5.1 was a maintenance release, adding improved Red Hat OpenShift 4.6 integration.
\ No newline at end of file
diff --git a/docs/content/release_notes/csi_rn_changelog_1.6.0.md b/docs/content/release_notes/csi_rn_changelog_1.6.0.md
index fd01997ab..aa3d9b4f1 100644
--- a/docs/content/release_notes/csi_rn_changelog_1.6.0.md
+++ b/docs/content/release_notes/csi_rn_changelog_1.6.0.md
@@ -1,3 +1,3 @@
# 1.6.0 (June 2021)
-IBM® block storage CSI driver 1.6.0 adds additional support for Kubernetes 1.21 and Red Hat® OpenShift® 4.8.
\ No newline at end of file
+IBM® block storage CSI driver 1.6.0 added support for Kubernetes 1.21 and Red Hat® OpenShift® 4.8.
\ No newline at end of file
diff --git a/docs/content/release_notes/csi_rn_changelog_1.7.0.md b/docs/content/release_notes/csi_rn_changelog_1.7.0.md
new file mode 100644
index 000000000..c5df058a1
--- /dev/null
+++ b/docs/content/release_notes/csi_rn_changelog_1.7.0.md
@@ -0,0 +1,6 @@
+# 1.7.0 (September 2021)
+
+IBM® block storage CSI driver 1.7.0 adds new support and enhancements:
+- Now supports the CSI Topology feature
+- New volume replication (remote copy) support for IBM Spectrum Virtualize Family storage systems
+- Additional support for Kubernetes 1.22
\ No newline at end of file
diff --git a/docs/content/release_notes/csi_rn_compatibility.md b/docs/content/release_notes/csi_rn_compatibility.md
index e7b79702e..2952aebb1 100644
--- a/docs/content/release_notes/csi_rn_compatibility.md
+++ b/docs/content/release_notes/csi_rn_compatibility.md
@@ -1,3 +1,3 @@
# Compatibility and requirements
-This section specifies the compatibility and requirements of version 1.6.0 of IBM® block storage CSI driver.
+This section specifies the compatibility and requirements of version 1.7.0 of IBM® block storage CSI driver.
diff --git a/docs/content/release_notes/csi_rn_edition_notice.md b/docs/content/release_notes/csi_rn_edition_notice.md
index fc22cb546..6d4b68466 100644
--- a/docs/content/release_notes/csi_rn_edition_notice.md
+++ b/docs/content/release_notes/csi_rn_edition_notice.md
@@ -1,4 +1,4 @@
-# First Edition (June 2021)
+# First Edition (September 2021)
-This edition applies to version 1.6.0 of the IBM® block storage CSI driver software package. Newer document editions may be issued for the same product version in order to add missing information, update information, or amend typographical errors. The edition is reset to 'First Edition' for every new product version.
+This edition applies to version 1.7.0 of the IBM® block storage CSI driver software package. Newer document editions may be issued for the same product version in order to add missing information, update information, or amend typographical errors. The edition is reset to 'First Edition' for every new product version.
diff --git a/docs/content/release_notes/csi_rn_knownissues.md b/docs/content/release_notes/csi_rn_knownissues.md
index dd39b1bf4..1874de6c1 100644
--- a/docs/content/release_notes/csi_rn_knownissues.md
+++ b/docs/content/release_notes/csi_rn_knownissues.md
@@ -1,6 +1,6 @@
# Known issues
-This section details the known issues in IBM® block storage CSI driver 1.6.0, along with possible solutions or workarounds (if available).
+This section details the known issues in IBM® block storage CSI driver 1.7.0, along with possible solutions or workarounds (if available).
The following severity levels apply to known issues:
@@ -12,11 +12,12 @@ The following severity levels apply to known issues:
**Important:**
-- **The issues listed below apply to IBM block storage CSI driver 1.6.0**. As long as a newer version has not yet been released, a newer release notes edition for IBM block storage CSI driver 1.6.0 might be issued to provide a more updated list of known issues and workarounds.
-- When a newer version is released for general availability, the release notes of this version will no longer be updated. Accordingly, check the release notes of the newer version to learn whether any newly discovered issues affect IBM block storage CSI driver 1.6.0 or whether the newer version resolves any of the issues listed below.
+- **The issues listed below apply to IBM block storage CSI driver 1.7.0**. As long as a newer version has not yet been released, a newer release notes edition for IBM block storage CSI driver 1.7.0 might be issued to provide a more updated list of known issues and workarounds.
+- When a newer version is released for general availability, the release notes of this version will no longer be updated. Accordingly, check the release notes of the newer version to learn whether any newly discovered issues affect IBM block storage CSI driver 1.7.0 or whether the newer version resolves any of the issues listed below.
|Ticket ID|Severity|Description|
|---------|--------|-----------|
+|**CSI-3382**|Service|After CSI Topology label deletion, volume provisioning does not work, even when not using any topology-aware YAML files.<br>**Workaround:** To allow volume provisioning through the CSI driver, delete the operator pod.<br>After the deletion, a new operator pod is created and the controller pod is automatically restarted, allowing for volume provisioning.|
|**CSI-2157**|Service|In extremely rare cases, too many Fibre Channel worker node connections may result in a failure when the CSI driver attempts to attach a pod. As a result, the `Host for node: {0} was not found, ensure all host ports are configured on storage` error message may be found in the IBM block storage CSI driver controller logs.<br>**Workaround:** Ensure that all host ports are properly configured on the storage system. If the issue continues and the CSI driver can still not attach a pod, contact IBM Support.|
|**CSI-702**|Service|Modifying the controller or node **affinity** settings may not take effect.<br>**Workaround:** If needed, delete the controller StatefulSet and/or the node DaemonSet after modifying the **affinity** settings in the IBMBlockCSI custom resource.|
diff --git a/docs/content/release_notes/csi_rn_limitations.md b/docs/content/release_notes/csi_rn_limitations.md
index c769f8b15..b766a6f55 100644
--- a/docs/content/release_notes/csi_rn_limitations.md
+++ b/docs/content/release_notes/csi_rn_limitations.md
@@ -4,7 +4,7 @@ As opposed to known issues, limitations are functionality restrictions that are
## IBM® DS8000® usage limitations
-When using the CSI (Container Storage Interface) driver with DS8000 Family products, connectivity limit on the storage side may be reached because of too many open connections. This occurs due to connection closing lag times from the storage side.
+Connectivity limits on the storage side might be reached with DS8000 Family products because of too many open connections. This occurs due to connection closing lag times from the storage side.
## Volume snapshot limitations
@@ -28,5 +28,17 @@ The following limitations apply when using volume clones with the IBM block stor
The following limitations apply when expanding volumes with the IBM block storage CSI driver:
- When using the CSI driver with IBM Spectrum Virtualize Family and IBM DS8000 Family products, during size expansion of a PersistentVolumeClaim (PVC), the size remains unchanged until all snapshots of the specific PVC are deleted.
-- When expanding a PVC while not in use by a pod, the volume size immediately increases on the storage side. PVC size only increases, however, after a pod begins to use the PVC.
-- When expanding a filesystem PVC for a volume that was previously formatted but is now no longer being used by a pod, any copy or replication operations performed on the PVC (such as snapshots or cloning, and so on) results in a copy with the newer, larger, size on the storage. However, its filesystem has the original, smaller, size.
\ No newline at end of file
+- When expanding a PVC while not in use by a pod, the volume size immediately increases on the storage side. However, PVC size only increases after a pod uses the PVC.
+- When expanding a filesystem PVC for a volume that was previously formatted but is now no longer being used by a pod, any copy or replication operations performed on the PVC (such as snapshots or cloning) results in a copy with the newer, larger size on the storage. However, its filesystem has the original, smaller size.
+
+## Volume replication limitations
+
+When a role switch is conducted, the switch is not reflected within the other orchestration platform's replication objects.
+
+**Important:** When using volume replication on volumes that were created with a driver version lower than 1.7.0:
+
+ 1. Change the reclaim policy of the relevant PersistentVolumes to `Retain` (a sample command follows this list).
+ 2. Delete the relevant PersistentVolumes.
+ 3. Import the volumes by using the latest import procedure (version 1.7.0 or later). See **CSI driver configuration** > **Advanced configuration** > **Importing an existing volume** in the user information.
+
+ For more information, see the [Change the Reclaim Policy of a PersistentVolume](https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/) information in the Kubernetes documentation.
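+
+ A typical command for changing the reclaim policy, as shown in the Kubernetes documentation linked above (`<pv-name>` is a placeholder for the relevant PersistentVolume):
+
+ ```
+ kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
+ ```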
\ No newline at end of file
diff --git a/docs/content/release_notes/csi_rn_supported_orchestration.md b/docs/content/release_notes/csi_rn_supported_orchestration.md
index 31b1d9c8c..1c783ef58 100644
--- a/docs/content/release_notes/csi_rn_supported_orchestration.md
+++ b/docs/content/release_notes/csi_rn_supported_orchestration.md
@@ -4,12 +4,13 @@ The following table details orchestration platforms suitable for deployment of t
|Orchestration platform|Version|Architecture|
|----------------------|-------|------------|
-|Kubernetes|1.20|x86|
|Kubernetes|1.21|x86|
-|Red Hat® OpenShift®|4.7|x86, IBM Z®, IBM Power Systems™1|
+|Kubernetes|1.22|x86|
|Red Hat OpenShift|4.8|x86, IBM Z, IBM Power Systems<sup>1</sup>|
-1IBM Power Systems architecture is only supported on Spectrum Virtualize Family storage systems.
+<sup>1</sup>IBM Power Systems architecture is only supported on Spectrum Virtualize and DS8000 Family storage systems.
-**Note:** As of this document's publication date, IBM Cloud® Satellite only supports RHEL 7 on x86 architecture for Red Hat OpenShift. For the latest support information, see [cloud.ibm.com/docs/satellite](https://cloud.ibm.com/docs/satellite).
+**Note:**
+- As of this document's publication date, IBM Cloud® Satellite only supports RHEL 7 on x86 architecture for Red Hat OpenShift. For the latest support information, see [cloud.ibm.com/docs/satellite](https://cloud.ibm.com/docs/satellite).
+- For the latest orchestration platform support information, see the [Lifecycle and support matrix](https://www.ibm.com/docs/en/stg-block-csi-driver?topic=SSRQ8T/landing/csi_lifecycle_support_matrix.html).
diff --git a/docs/content/release_notes/csi_rn_supported_os.md b/docs/content/release_notes/csi_rn_supported_os.md
index 3d48795c6..0acdf260c 100644
--- a/docs/content/release_notes/csi_rn_supported_os.md
+++ b/docs/content/release_notes/csi_rn_supported_os.md
@@ -5,9 +5,10 @@ The following table lists operating systems required for deployment of the IBM®
|Operating system|Architecture|
|----------------|------------|
|Red Hat® Enterprise Linux® (RHEL) 7.x|x86, IBM Z®|
-|Red Hat Enterprise Linux CoreOS (RHCOS)|x86, IBM Z®2, IBM Power Systems™1|
+|Red Hat Enterprise Linux CoreOS (RHCOS)|x86, IBM Z, IBM Power Systems™<sup>1</sup>|
-1IBM Power Systems architecture is only supported on Spectrum Virtualize Family storage systems.
-2IBM Z and IBM Power Systems architectures are only supported using CLI installation.
+<sup>1</sup>IBM Power Systems architecture is only supported on Spectrum Virtualize and DS8000 Family storage systems.
+
+**Note:** For the latest operating system support information, see the [Lifecycle and support matrix](https://www.ibm.com/docs/en/stg-block-csi-driver?topic=SSRQ8T/landing/csi_lifecycle_support_matrix.html).
diff --git a/docs/content/release_notes/csi_rn_supported_storage.md b/docs/content/release_notes/csi_rn_supported_storage.md
index db25f75bb..c65710854 100644
--- a/docs/content/release_notes/csi_rn_supported_storage.md
+++ b/docs/content/release_notes/csi_rn_supported_storage.md
@@ -1,18 +1,18 @@
# Supported storage systems
-IBM® block storage CSI driver 1.6.0 supports different IBM storage systems as listed in the following table.
+IBM® block storage CSI driver 1.7.0 supports different IBM storage systems as listed in the following table.
|Storage system|Microcode version|
|--------------|-----------------|
-|IBM FlashSystem™ A9000|12.x|
-|IBM FlashSystem A9000R|12.x|
+|IBM FlashSystem™ A9000|12.3.0.a or later|
+|IBM FlashSystem A9000R|12.3.0.a or later|
|IBM Spectrum Virtualize™ Family including IBM SAN Volume Controller (SVC) and IBM FlashSystem® family members built with IBM Spectrum® Virtualize (including FlashSystem 5xxx, 7200, 9100, 9200, 9200R)|7.8 and above, 8.x|
|IBM Spectrum Virtualize as software only|7.8 and above, 8.x|
|IBM DS8000® Family|8.x and higher with same API interface|
**Note:**
-- Newer microcode versions may also be compatible. When a newer microcode version becomes available, contact IBM Support to inquire whether the new microcode version is compatible with the current version of the CSI driver.
-- The IBM Spectrum Virtualize Family and IBM SAN Volume Controller storage systems run the IBM Spectrum Virtualize software. In addition, IBM Spectrum Virtualize package is available as a deployable solution that can be run on any compatible hardware.
+- For the latest microcode storage support information, see the [Lifecycle and support matrix](https://www.ibm.com/docs/en/stg-block-csi-driver?topic=SSRQ8T/landing/csi_lifecycle_support_matrix.html).
+- The IBM Spectrum Virtualize Family and IBM SAN Volume Controller storage systems run the IBM Spectrum Virtualize software. In addition, the IBM Spectrum Virtualize package is available as a deployable solution that can be run on any compatible hardware.
diff --git a/docs/content/release_notes/csi_rn_whatsnew.md b/docs/content/release_notes/csi_rn_whatsnew.md
index 3335a6488..65e20822c 100644
--- a/docs/content/release_notes/csi_rn_whatsnew.md
+++ b/docs/content/release_notes/csi_rn_whatsnew.md
@@ -1,12 +1,22 @@
-# What's new in 1.6.0
+# What's new in 1.7.0
-IBM® block storage CSI driver 1.6.0 introduces the enhancements detailed in the following section.
+IBM® block storage CSI driver 1.7.0 introduces the enhancements that are detailed in the following section.
-**General availability date**: 18 June 2021
+**General availability date:** 30 September 2021
-## Additional supported orchestration platforms for deployment
+## Now supports CSI Topology
-This version adds support for orchestration platforms Kubernetes 1.21 and Red Hat OpenShift 4.8, suitable for deployment of the CSI (Container Storage Interface) driver.
+IBM® block storage CSI driver 1.7.0 is now topology aware. Using this feature, volume access can be limited to a subset of nodes, based on regions and availability zones. Nodes can be located in various availability zones within a region, or across different regions. Using the CSI Topology feature can ease volume provisioning for workloads within a multi-zone architecture.
+
+For more information, see [CSI Topology Feature](https://kubernetes-csi.github.io/docs/topology.html).
+
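To make the topology flow concrete, the following is a minimal, hypothetical Go sketch (not this driver's actual implementation) of how a CSI node plugin advertises a zone segment in its `NodeGetInfo` response, using the CSI spec Go bindings. The label key mirrors the `topology.block.csi.ibm.com/zone` key used in the driver's tests later in this change; the `nodeService` type and its fields are assumptions made for illustration.

```
package sketch

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// nodeService is a hypothetical, minimal node plugin used only for illustration.
type nodeService struct {
	nodeID string
	zone   string
}

// NodeGetInfo advertises the node's topology segments; the external sidecars
// use these segments to restrict volume provisioning to matching nodes.
func (n *nodeService) NodeGetInfo(ctx context.Context, req *csi.NodeGetInfoRequest) (*csi.NodeGetInfoResponse, error) {
	return &csi.NodeGetInfoResponse{
		NodeId: n.nodeID,
		AccessibleTopology: &csi.Topology{
			Segments: map[string]string{
				"topology.block.csi.ibm.com/zone": n.zone,
			},
		},
	}, nil
}
```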
+## New volume replication support for IBM Spectrum Virtualize Family storage systems
+
+When using IBM Spectrum Virtualize Family storage systems, the CSI driver now supports volume replication (remote copy).
+
+## Added support for the Kubernetes 1.22 orchestration platform
+
+This version adds support for the Kubernetes 1.22 orchestration platform, suitable for deployment of the CSI (Container Storage Interface) driver.
diff --git a/docs/content/troubleshooting/csi_ug_troubleshooting_detect_errors.md b/docs/content/troubleshooting/csi_ug_troubleshooting_detect_errors.md
deleted file mode 100644
index f257cbc2d..000000000
--- a/docs/content/troubleshooting/csi_ug_troubleshooting_detect_errors.md
+++ /dev/null
@@ -1,21 +0,0 @@
-# Detecting errors
-
-Use this information to help pinpoint potential causes for stateful pod failure.
-
-This is an overview of actions that you can take to pinpoint a potential cause for a stateful pod failure.
-
-**Note:** This procedures is applicable for both Kubernetes and Red Hat® OpenShift®. For Red Hat OpenShift, replace `kubectl` with `oc` in all relevant commands.
-
-1. Verify that the CSI driver is running. (Make sure the `csi-controller` pod status is _Running_).
-
- ```
- $> kubectl get all -n -l csi
- ```
-
-2. If `pod/ibm-block-csi-controller-0` is not in a _Running_ state, run the following command:
-
- ```
- kubectl describe -n pod/ibm-block-csi-controller-0
- ```
-
- View the logs (see [Log collection](csi_ug_troubleshooting_logs.md)).
diff --git a/docs/content/troubleshooting/csi_ug_troubleshooting_logs.md b/docs/content/troubleshooting/csi_ug_troubleshooting_logs.md
index 5e04c06de..fa19d2387 100644
--- a/docs/content/troubleshooting/csi_ug_troubleshooting_logs.md
+++ b/docs/content/troubleshooting/csi_ug_troubleshooting_logs.md
@@ -1,12 +1,12 @@
-# Log collection
+# Log and status collection
-Use the CSI (Container Storage Interface) driver logs for problem identification.
+Use the CSI (Container Storage Interface) driver debug information for problem identification.
**Note:** These procedures are applicable for both Kubernetes and Red Hat® OpenShift®. For Red Hat OpenShift, replace `kubectl` with `oc` in all relevant commands.
-To collect and display logs, related to the different components of IBM® block storage CSI driver, use the following Kubernetes commands:
+To collect and display status and logs related to the different components of IBM® block storage CSI driver, use the following Kubernetes commands:
-## Log collection for CSI pods, daemonset, and StatefulSet
+## Status collection for CSI pods, DaemonSet, and StatefulSet
`kubectl get all -n <namespace> -l csi`
@@ -20,4 +20,22 @@ To collect and display logs, related to the different components of IBM® block
## Log collection for Operator for IBM block storage CSI driver
-`kubectl log -f -n ibm-block-csi-operator- -c ibm-block-csi-operator`
\ No newline at end of file
+`kubectl logs -f -n <namespace> ibm-block-csi-operator-<id> -c ibm-block-csi-operator`
+
+## Detecting errors
+
+To help pinpoint potential causes for stateful pod failure:
+
+1. Verify that all CSI pods are running.
+
+ ```
+    kubectl get pods -n <namespace> -l csi
+ ```
+
+2. If a pod is not in a _Running_ state, run the following command:
+
+ ```
+    kubectl describe -n <namespace> pod/<pod-name>
+ ```
+
+ View the logs.
\ No newline at end of file
diff --git a/docs/content/troubleshooting/csi_ug_troubleshooting_misc.md b/docs/content/troubleshooting/csi_ug_troubleshooting_misc.md
index e30049806..b23bb2ec3 100644
--- a/docs/content/troubleshooting/csi_ug_troubleshooting_misc.md
+++ b/docs/content/troubleshooting/csi_ug_troubleshooting_misc.md
@@ -12,7 +12,7 @@ kubectl get -n csidriver,sa,clusterrole,clusterrolebinding,stateful
```
## Error during pod creation
-**Note:** This troubleshooting procedure is relevant for volumes using file system types only (not for volumes using raw block volume types).
+**Note:** This troubleshooting procedure is relevant for volumes using file system volume mode only (not for volumes using raw block volume mode).
If the following error occurs during stateful application pod creation (the pod status is _ContainerCreating_):
diff --git a/docs/content/troubleshooting/csi_ug_troubleshooting_misc.md.dcsbackup b/docs/content/troubleshooting/csi_ug_troubleshooting_misc.md.dcsbackup
deleted file mode 100644
index 7be9a50ac..000000000
--- a/docs/content/troubleshooting/csi_ug_troubleshooting_misc.md.dcsbackup
+++ /dev/null
@@ -1,46 +0,0 @@
-# Miscellaneous troubleshooting
-
-Use this information to help pinpoint potential causes for stateful pod failure.
-
-**Note:** These procedures are applicable for both Kubernetes and Red Hat® OpenShift®. For Red Hat OpenShift, replace `kubectl` with `oc` in all relevant commands.
-
-- [General troubleshooting](#general_troubleshooting)
-- [Error during pod creation](#error_during_pod_creation) (for volumes using StatefulSet only)
-
-## General troubleshooting
-Use the following command for general troubleshooting:
-
-```
-kubectl get -n csidriver,sa,clusterrole,clusterrolebinding,statefulset,pod,daemonset | grep ibm-block-csi
-```
-
-## Error during pod creation
-**Note:** This troubleshooting procedure is relevant for volumes using file system types only (not for volumes using raw block volume types).
-
-If the following error occurs during stateful application pod creation (the pod status is _ContainerCreating_):
-
-```screen
- -8e73-005056a49b44" : rpc error: code = Internal desc = 'fsck' found errors on device /dev/dm-26 but could not correct them: fsck from util-linux 2.23.2
- /dev/mapper/mpathym: One or more block group descriptor checksums are invalid. FIXED.
- /dev/mapper/mpathym: Group descriptor 0 checksum is 0x0000, should be 0x3baa.
-
- /dev/mapper/mpathym: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
- (i.e., without -a or -p options)
-```
-1. Log in to the relevant worker node and run the `fsck` command to repair the filesystem manually.
-
- `fsck /dev/dm-`
-
- The pod should come up immediately. If the pod is still in a _ContainerCreating_ state, continue to the next step.
-
-2. Run the `# multipath -ll` command to see if there are faulty multipath devices.
-
- If there are faulty multipath devices:
-
- 1. Restart multipath daemon, using the `systemctl restart multipathd` command.
- 2. Rescan any iSCSI devices, using the `rescan-scsi-bus.sh` command.
- 3. Restart the multipath daemon again, using the `systemctl restart multipathd` command.
-
- The multipath devices should be running properly and the pod should come up immediately.
-
-
diff --git a/docs/content/troubleshooting/csi_ug_troubleshooting_misc.md.tminfo b/docs/content/troubleshooting/csi_ug_troubleshooting_misc.md.tminfo
deleted file mode 100644
index 0e8c1a902..000000000
--- a/docs/content/troubleshooting/csi_ug_troubleshooting_misc.md.tminfo
+++ /dev/null
@@ -1,10 +0,0 @@
-PROCESS START: Trademark Scanner 2127-20210504
-PROCESS START: 06-10-2021 11:58:18
-PROCESS: Scan Document
-HOMEDIR: C:\Users\410341756\Documents\GitHub\ibm-block-csi-driver\docs
-SETTING: Trademark list version is 20201214
-SCAN FILE: C:\Users\410341756\Documents\GitHub\ibm-block-csi-driver\docs\csi_ug_troubleshooting_misc.md
-SCAN MESSAGE: Line #4: Trademarks were updated. [OpenShift] [Red Hat]
-SCAN RESULT: Trademarks:2 List: [Red Hat](x2) [OpenShift](x2)
-PROCESS END: 06-10-2021 11:58:18
-Scanning Complete
diff --git a/docs/content/troubleshooting/csi_ug_troubleshooting_node_crash.md b/docs/content/troubleshooting/csi_ug_troubleshooting_node_crash.md
index 1a7eba0f5..7f18f3cb4 100644
--- a/docs/content/troubleshooting/csi_ug_troubleshooting_node_crash.md
+++ b/docs/content/troubleshooting/csi_ug_troubleshooting_node_crash.md
@@ -9,15 +9,15 @@ When a worker node shuts down or crashes, all pods in a StatefulSet that reside
For example:
-```screen
-$> kubectl get nodes
-NAME STATUS ROLES AGE VERSION
-k8s-master Ready master 6d
-k8s-node1 Ready 6d
-k8s-node3 NotReady 6d
-
-$> kubectl get pods --all-namespaces -o wide | grep default
-default sanity-statefulset-0 1/1 Terminating 0 19m 10.244.2.37 k8s-node3
+```
+$> kubectl get nodes
+NAME         STATUS     ROLES    AGE   VERSION
+k8s-master   Ready      master   6d
+k8s-node1    Ready      <none>   6d
+k8s-node2    NotReady   <none>   6d
+
+$> kubectl get pods --all-namespaces -o wide | grep default
+default   sanity-statefulset-0   1/1   Terminating   0   19m   10.244.2.37   k8s-node2
```
## Recovering a crashed node
@@ -45,32 +45,30 @@ Follow the following procedure to recover from a crashed node (see a [full examp
kubectl delete pod --grace-period=0 --force
```
-5. Verify that the pod is now in a _Running_ state and that the pod has moved to worker-node1.
+5. Verify that the pod is now in a _Running_ state and that the pod has moved to a _Ready_ node.
For example:
-```screen
-$> kubectl get nodes
-NAME STATUS ROLES AGE VERSION
-k8s-master Ready master 6d
-k8s-node1 Ready 6d
-k8s-node3 NotReady 6d
+    $> kubectl get nodes
+    NAME         STATUS     ROLES    AGE   VERSION
+    k8s-master   Ready      master   6d
+    k8s-node1    Ready      <none>   6d
+    k8s-node2    NotReady   <none>   6d
-$> kubectl get pods --all-namespaces -o wide | grep default
-default sanity-statefulset-0 1/1 Terminating 0 19m 10.244.2.37 k8s-node3
+    $> kubectl get pods --all-namespaces -o wide | grep default
+    default   sanity-statefulset-0   1/1   Terminating   0   19m   10.244.2.37   k8s-node2
-$> kubectl get volumeattachment
-NAME AGE
-csi-5944e1c742d25e7858a8e48311cdc6cc85218f1156dd6598d4cf824fb1412143 10m
+ $> kubectl get volumeattachment
+ NAME AGE
+ csi-5944e1c742d25e7858a8e48311cdc6cc85218f1156dd6598d4cf824fb1412143 10m
-$> kubectl delete volumeattachment csi-5944e1c742d25e7858a8e48311cdc6cc85218f1156dd6598d4cf824fb1412143
-volumeattachment.storage.k8s.io "csi-5944e1c742d25e7858a8e48311cdc6cc85218f1156dd6598d4cf824fb1412143" deleted
+ $> kubectl delete volumeattachment csi-5944e1c742d25e7858a8e48311cdc6cc85218f1156dd6598d4cf824fb1412143
+ volumeattachment.storage.k8s.io "csi-5944e1c742d25e7858a8e48311cdc6cc85218f1156dd6598d4cf824fb1412143" deleted
-$> kubectl delete pod sanity-statefulset-0 --grace-period=0 --force
-warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
-pod "sanity-statefulset-0" deleted
+ $> kubectl delete pod sanity-statefulset-0 --grace-period=0 --force
+ warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
+ pod "sanity-statefulset-0" deleted
-$> kubectl get pods --all-namespaces -o wide | grep default
-default sanity-statefulset-0 1/1 Running 0 26s 10.244.1.210 k8s-node1
-```
+ $> kubectl get pods --all-namespaces -o wide | grep default
+ default sanity-statefulset-0 1/1 Running 0 26s 10.244.1.210 k8s-node1
diff --git a/docs/content/using/csi_ug_using_sample.md b/docs/content/using/csi_ug_using_sample.md
index ea6e5b731..44032b42a 100644
--- a/docs/content/using/csi_ug_using_sample.md
+++ b/docs/content/using/csi_ug_using_sample.md
@@ -2,38 +2,24 @@
You can use the CSI (Container Storage Interface) driver for running stateful containers with a storage volume provisioned from IBM® block storage systems.
-These examples illustrate a basic configuration required for running a stateful container with volumes provisioned on an IBM Spectrum® Virtualize Family storage system.
+These instructions illustrate the general flow of the basic configuration required for running a stateful container with volumes provisioned on a storage system.
-While these examples specify the use of IBM Spectrum Virtualize products, the same configuration is used on all supported storage system types.
-
-**Note:** The secret names given can be user specified. When giving secret names when managing different system storage types, be sure to give system type indicators to each name.
-
-The following are examples of different types of secret names that can be given per storage type.
-
-|Storage system name|Secret name|
-|-------------------|-----------|
-|IBM FlashSystem® A9000<br>IBM FlashSystem A9000R|a9000-array1|
-|IBM Spectrum Virtualize Family including IBM SAN Volume Controller and<br>IBM FlashSystem family members built with IBM Spectrum Virtualize<br>(including FlashSystem 5xxx, 7200, 9100, 9200, 9200R)|storwize-array1|
-|IBM DS8000® Family products|DS8000-array1|
-
-**Note:** This procedure is applicable for both Kubernetes and Red Hat® OpenShift®. For Red Hat OpenShift, replace `kubectl` with `oc` in all relevant commands.
+**Note:** The secret names given are user specified. To keep the configuration orderly and to help with any debugging that may be required, include a system type indicator in each secret name when managing different storage system types.
Use this information to run a stateful container on StatefulSet volumes using either file systems or raw block volumes.
-1. Create an array secret, as described in [Creating a Secret](../configuration/csi_ug_config_create_secret.md).
+1. Create an array secret, as described in [Creating a Secret](../configuration/csi_ug_config_create_secret.md).
-2. Create a storage class, as described in [Creating a StorageClass](../configuration/csi_ug_config_create_storageclasses.md).
+2. Create a storage class, as described in [Creating a StorageClass](../configuration/csi_ug_config_create_storageclasses.md).
- **Remember:** The `SpaceEfficiency` values for Spectrum Virtualize Family are: thick, thin, compressed, or deduplicated. These values are not case specific.
+ **Remember:** The `SpaceEfficiency` values for Spectrum Virtualize Family are: `thick`, `thin`, `compressed`, or `deduplicated`. These values are not case specific.
- For DS8000 Family systems, the default value is standard, but can be set to thin, if required. These values are not case specific. For more information, see [Creating a StorageClass](../configuration/csi_ug_config_create_storageclasses.md).
+ For DS8000 Family systems, the default value is `none`, but can be set to `thin`, if required. These values are not case specific. For more information, see [Creating a StorageClass](../configuration/csi_ug_config_create_storageclasses.md).
This parameter is not applicable for IBM FlashSystem A9000 and A9000R systems. These systems always include deduplication and compression.
-3. Create a PVC with the size of 1 Gb, as described in [Creating a PersistentVolumeClaim (PVC)](../configuration/csi_ug_config_create_pvc.md).
-
-4. Display the existing PVC and the created persistent volume (PV).
-
-5. Create a StatefulSet, as described in [Creating a StatefulSet](../configuration/csi_ug_config_create_statefulset.md).
+3. Create a PVC with a size of 1 Gi, as described in [Creating a PersistentVolumeClaim (PVC)](../configuration/csi_ug_config_create_pvc.md).
+4. (Optional) Display the existing PVC and the created persistent volume (PV).
+5. Create a StatefulSet, as described in [Creating a StatefulSet](../configuration/csi_ug_config_create_statefulset.md).
\ No newline at end of file
diff --git a/node/pkg/driver/device_connectivity/device_connectivity_helper_scsigeneric.go b/node/pkg/driver/device_connectivity/device_connectivity_helper_scsigeneric.go
index 8837190ca..b69a8106c 100644
--- a/node/pkg/driver/device_connectivity/device_connectivity_helper_scsigeneric.go
+++ b/node/pkg/driver/device_connectivity/device_connectivity_helper_scsigeneric.go
@@ -73,6 +73,7 @@ const (
multipathdCmd = "multipathd"
multipathCmd = "multipath"
VolumeIdDelimiter = ":"
+ VolumeStorageIdsDelimiter = ";"
)
func NewOsDeviceConnectivityHelperScsiGeneric(executer executer.ExecuterInterface) OsDeviceConnectivityHelperScsiGenericInterface {
@@ -132,12 +133,21 @@ func (r OsDeviceConnectivityHelperScsiGeneric) RescanDevices(lunId int, arrayIde
logger.Debugf("Rescan : finish rescan lun on lun id : {%v}, with array identifiers : {%v}", lunId, arrayIdentifiers)
return nil
}
+func getVolumeUuid(volumeId string) string {
+    volumeIdParts := strings.Split(volumeId, VolumeIdDelimiter)
+    idsPart := volumeIdParts[len(volumeIdParts)-1]
+    splitIdsPart := strings.Split(idsPart, VolumeStorageIdsDelimiter)
+    if len(splitIdsPart) == 2 {
+        return splitIdsPart[1]
+    }
+    return splitIdsPart[0]
+}
func (r OsDeviceConnectivityHelperScsiGeneric) GetMpathDevice(volumeId string) (string, error) {
logger.Infof("GetMpathDevice: Searching multipath devices for volume : [%s] ", volumeId)
- volumeIdParts := strings.Split(volumeId, VolumeIdDelimiter)
- volumeUuid := volumeIdParts[len(volumeIdParts)-1]
+ volumeUuid := getVolumeUuid(volumeId)
volumeUuidLower := strings.ToLower(volumeUuid)
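For clarity, here is a small, self-contained Go sketch of the parsing behavior that `getVolumeUuid` introduces (the sample volume IDs are hypothetical): the UUID is the last colon-delimited section of the volume ID, and when that section itself holds a semicolon-delimited pair of identifiers, the second one is taken.

```
package main

import (
	"fmt"
	"strings"
)

// volumeUuid mirrors the getVolumeUuid logic above, for illustration only.
func volumeUuid(volumeId string) string {
	parts := strings.Split(volumeId, ":")
	idsPart := parts[len(parts)-1]
	ids := strings.Split(idsPart, ";")
	if len(ids) == 2 {
		return ids[1]
	}
	return ids[0]
}

func main() {
	// Hypothetical IDs: "<system>:<uuid>" and "<system>:<internal-id>;<uuid>".
	fmt.Println(volumeUuid("SVC:600507640082000b2800000000000001"))
	fmt.Println(volumeUuid("SVC:42;600507640082000b2800000000000001"))
}
```

Both calls print the same UUID, so callers such as `GetMpathDevice` are unaffected by whether the storage-side internal ID is present.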
diff --git a/node/pkg/driver/node.go b/node/pkg/driver/node.go
index bb558e27c..06e4cea53 100644
--- a/node/pkg/driver/node.go
+++ b/node/pkg/driver/node.go
@@ -19,6 +19,9 @@ package driver
import (
"context"
"fmt"
+ "path"
+ "strings"
+
"github.com/container-storage-interface/spec/lib/go/csi"
"github.com/ibm/ibm-block-csi-driver/node/goid_info"
"github.com/ibm/ibm-block-csi-driver/node/logger"
@@ -27,8 +30,6 @@ import (
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
"k8s.io/utils/mount"
- "path"
- "strings"
)
var (
@@ -711,7 +712,7 @@ func (d *NodeService) NodeGetInfo(ctx context.Context, req *csi.NodeGetInfoReque
}
logger.Debugf("discovered topology labels : %v", topologyLabels)
- fcExists := d.NodeUtils.IsPathExists(FCPath)
+ fcExists := d.NodeUtils.IsFCExists()
if fcExists {
fcWWNs, err = d.NodeUtils.ParseFCPorts()
if err != nil {
diff --git a/node/pkg/driver/node_test.go b/node/pkg/driver/node_test.go
index 542b66a93..70db3e0a4 100644
--- a/node/pkg/driver/node_test.go
+++ b/node/pkg/driver/node_test.go
@@ -20,16 +20,17 @@ import (
"context"
"errors"
"fmt"
- "github.com/container-storage-interface/spec/lib/go/csi"
- "github.com/golang/mock/gomock"
- "github.com/ibm/ibm-block-csi-driver/node/mocks"
- "github.com/ibm/ibm-block-csi-driver/node/pkg/driver/device_connectivity"
"path"
"path/filepath"
"reflect"
"strings"
"testing"
+ "github.com/container-storage-interface/spec/lib/go/csi"
+ "github.com/golang/mock/gomock"
+ "github.com/ibm/ibm-block-csi-driver/node/mocks"
+ "github.com/ibm/ibm-block-csi-driver/node/pkg/driver/device_connectivity"
+
"github.com/ibm/ibm-block-csi-driver/node/pkg/driver"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
@@ -1134,7 +1135,7 @@ func TestNodeGetCapabilities(t *testing.T) {
}
func TestNodeGetInfo(t *testing.T) {
- topologySegments := map[string]string{"topology.kubernetes.io/zone": "testZone"}
+ topologySegments := map[string]string{"topology.block.csi.ibm.com/zone": "testZone"}
testCases := []struct {
name string
@@ -1217,7 +1218,7 @@ func TestNodeGetInfo(t *testing.T) {
fake_nodeutils := mocks.NewMockNodeUtilsInterface(mockCtrl)
d := newTestNodeService(fake_nodeutils, nil, nil)
fake_nodeutils.EXPECT().GetTopologyLabels(context.TODO(), d.Hostname).Return(topologySegments, nil)
- fake_nodeutils.EXPECT().IsPathExists(driver.FCPath).Return(tc.fcExists)
+ fake_nodeutils.EXPECT().IsFCExists().Return(tc.fcExists)
if tc.fcExists {
fake_nodeutils.EXPECT().ParseFCPorts().Return(tc.return_fcs, tc.return_fc_err)
}
diff --git a/node/pkg/driver/node_utils.go b/node/pkg/driver/node_utils.go
index 4490df1ef..47aadaf3b 100644
--- a/node/pkg/driver/node_utils.go
+++ b/node/pkg/driver/node_utils.go
@@ -19,14 +19,16 @@ package driver
import (
"context"
"fmt"
- "github.com/ibm/ibm-block-csi-driver/node/pkg/driver/device_connectivity"
+ "io"
"io/ioutil"
- "k8s.io/apimachinery/pkg/util/errors"
"os"
"path"
"strconv"
"strings"
+ "github.com/ibm/ibm-block-csi-driver/node/pkg/driver/device_connectivity"
+ "k8s.io/apimachinery/pkg/util/errors"
+
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
@@ -38,7 +40,7 @@ import (
var (
getOpts = metav1.GetOptions{}
- topologyPrefixes = [...]string{"topology.kubernetes.io", "topology.block.csi.ibm.com"}
+ topologyPrefixes = [...]string{"topology.block.csi.ibm.com"}
)
const (
@@ -52,6 +54,7 @@ const (
resizeFsTimeoutMilliseconds = 30 * 1000
TimeOutMultipathdCmd = 10 * 1000
multipathdCmd = "multipathd"
+ minFilesInNonEmptyDir = 1
)
//go:generate mockgen -destination=../../mocks/mock_node_utils.go -package=mocks github.com/ibm/ibm-block-csi-driver/node/pkg/driver NodeUtilsInterface
@@ -67,6 +70,7 @@ type NodeUtilsInterface interface {
ClearStageInfoFile(filePath string) error
StageInfoFileIsExist(filePath string) bool
IsPathExists(filePath string) bool
+ IsFCExists() bool
IsDirectory(filePath string) bool
RemoveFileOrDirectory(filePath string) error
MakeDir(dirPath string) error
@@ -239,6 +243,26 @@ func (n NodeUtils) ParseFCPorts() ([]string, error) {
return fcPorts, nil
}
+func (n NodeUtils) IsFCExists() bool {
+ return n.IsPathExists(FCPath) && !n.isEmptyDir(FCPath)
+}
+
+func (n NodeUtils) isEmptyDir(path string) bool {
+    f, err := os.Open(path)
+    if err != nil {
+        logger.Warningf("Failed to open directory %s, error: %s", path, err.Error())
+        return true
+    }
+    defer f.Close()
+
+    _, err = f.Readdir(minFilesInNonEmptyDir)
+    if err != nil {
+        if err != io.EOF {
+            logger.Warningf("Check if directory %s is empty returned error %s", path, err.Error())
+        }
+        return true
+    }
+
+    return false
+}
+
func (n NodeUtils) IsPathExists(path string) bool {
_, err := os.Stat(path)
if err != nil {
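As an aside, the `Readdir(1)` idiom used by `isEmptyDir` is the standard Go way to test a directory for emptiness: `io.EOF` means there was nothing left to read. A minimal standalone sketch, independent of the driver's types:

```
package main

import (
	"fmt"
	"io"
	"os"
)

// isEmptyDir reports whether the directory at path contains no entries.
// Readdir(1) returns io.EOF when the directory is empty.
func isEmptyDir(path string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()

	if _, err = f.Readdir(1); err == io.EOF {
		return true, nil
	}
	return false, err
}

func main() {
	empty, err := isEmptyDir(os.TempDir())
	fmt.Println(empty, err)
}
```

With this guard, `IsFCExists` treats an `FCPath` directory that exists but has no entries as "no Fibre Channel ports", rather than attempting to parse ports from it.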
diff --git a/node/pkg/driver/node_utils_test.go b/node/pkg/driver/node_utils_test.go
index 9bb7d0b5b..e5e441bfe 100644
--- a/node/pkg/driver/node_utils_test.go
+++ b/node/pkg/driver/node_utils_test.go
@@ -19,16 +19,17 @@ package driver_test
import (
"errors"
"fmt"
- gomock "github.com/golang/mock/gomock"
- mocks "github.com/ibm/ibm-block-csi-driver/node/mocks"
- driver "github.com/ibm/ibm-block-csi-driver/node/pkg/driver"
- "github.com/ibm/ibm-block-csi-driver/node/pkg/driver/device_connectivity"
- executer "github.com/ibm/ibm-block-csi-driver/node/pkg/driver/executer"
"io/ioutil"
"os"
"reflect"
"syscall"
"testing"
+
+ gomock "github.com/golang/mock/gomock"
+ mocks "github.com/ibm/ibm-block-csi-driver/node/mocks"
+ driver "github.com/ibm/ibm-block-csi-driver/node/pkg/driver"
+ "github.com/ibm/ibm-block-csi-driver/node/pkg/driver/device_connectivity"
+ executer "github.com/ibm/ibm-block-csi-driver/node/pkg/driver/executer"
)
var (
diff --git a/node/pkg/driver/version_test.go b/node/pkg/driver/version_test.go
index fe5001785..ffef8696c 100644
--- a/node/pkg/driver/version_test.go
+++ b/node/pkg/driver/version_test.go
@@ -46,7 +46,7 @@ func TestGetVersion(t *testing.T) {
version, err := GetVersion(dir)
expected := VersionInfo{
- DriverVersion: "1.6.0",
+ DriverVersion: "1.7.0",
GitCommit: "",
BuildDate: "",
GoVersion: runtime.Version(),
@@ -76,7 +76,7 @@ func TestGetVersionJSON(t *testing.T) {
}
expected := fmt.Sprintf(`{
- "driverVersion": "1.6.0",
+ "driverVersion": "1.7.0",
"gitCommit": "",
"buildDate": "",
"goVersion": "%s",
diff --git a/reusables/doc-resources.md b/reusables/doc-resources.md
deleted file mode 100644
index e8a0029aa..000000000
--- a/reusables/doc-resources.md
+++ /dev/null
@@ -1,22 +0,0 @@
-IBM resources
-
-- [IBM SAN Volume Controller documentation](https://www.ibm.com/docs/en/sanvolumecontroller)\(ibm.com/docs/en/sanvolumecontroller)
-- [IBM Spectrum Scale documentation](https://www.ibm.com/docs/en/spectrum-scale)\(ibm.com/docs/en/spectrum-scale\)
-- [IBM FlashSystem® 5200, 5000, 5100, Storwize® V5100 and V5000E documentation](http://www.ibm.com/docs/en/f555sv-and-v)\(ibm.com/docs/en/f555sv-and-v\)
-- [IBM FlashSystem™ 7200 and Storwize V7000 documentation](https://www.ibm.com/docs/en/flashsystem-7x00)\(ibm.com/docs/en/flashsystem-7x00\)
-- [IBM Spectrum Virtualize as Software Only documentation](https://www.ibm.com/docs/en/spectrumvirtualsoftw)\(ibm.com/docs/en/spectrumvirtualsoftw\)
-- [IBM FlashSystem 9200 and 9100 documentation](https://www.ibm.com/docs/en/flashsystem-9x00)\(ibm.com/docs/en/flashsystem-9x00\)
-- [IBM FlashSystem A9000 documentation](https://www.ibm.com/docs/en/flashsystem-a9000)\(ibm.com/docs/en/flashsystem-a9000\)
-- [IBM FlashSystem A9000R documentation](https://www.ibm.com/docs/en/flashsystem-a9000r)\(ibm.com/docs/en/flashsystem-a9000r\)
-- [IBM DS8880 documentation](https://www.ibm.com/docs/en/ds8880) \(ibm.com/docs/en/ds8880\)
-- [IBM DS8900 documentation](https://www.ibm.com/docs/en/ds8900) \(ibm.com/docs/en/ds8900\)
-- [IBM Spectrum® Access for IBM Cloud® Private Blueprint](https://www-01.ibm.com/common/ssi/cgi-bin/ssialias?htmlfid=TSW03569USEN&) \(ibm.com/downloads/cas/KK5PGD8E\)
-
- Used as the FlexVolume driver based solution for OpenShift® 3.11, using [IBM Storage Enabler for Containers](https://www.ibm.com/docs/en/stgenablercontainers)\(ibm.com/docs/en/stgenablercontainers\)
-
-- [IBM Storage for Red Hat® OpenShift Blueprint](http://www.redbooks.ibm.com/abstracts/redp5565.html?Open) \(http://www.redbooks.ibm.com/abstracts/redp5565.html?Open\)
-
-External resources
-- [Persistent volumes on Kubernetes](https://kubernetes.io/docs/concepts/storage/volumes/) \(kubernetes.io/docs/concepts/storage/volumes\)
-- [Kubernetes Documentation](https://kubernetes.io/docs/home/) \(kubernetes.io/docs/home/\)
-- [Kubernetes Blog](https://kubernetes.io/blog/) \(kubernetes.io//blog\)
diff --git a/scripts/ci/jenkins_pipeline_csi b/scripts/ci/Jenkinsfile
similarity index 93%
rename from scripts/ci/jenkins_pipeline_csi
rename to scripts/ci/Jenkinsfile
index 6689dbfc5..95c72e5bb 100644
--- a/scripts/ci/jenkins_pipeline_csi
+++ b/scripts/ci/Jenkinsfile
@@ -1,4 +1,9 @@
pipeline {
+ parameters {
+ string(name: 'IMAGE_VERSION', defaultValue: "1.7.0")
+ string(name: 'DOCKER_REGISTRY', defaultValue: DEFAULT_DOCKER_REGISTRY)
+ string(name: 'EMAIL_TO', defaultValue: "")
+ }
environment {
registryCredentialsID = 'csi_w3_user'
}
diff --git a/scripts/ci/jenkins_pipeline_community_csi_test b/scripts/ci/jenkins_pipeline_community_csi_test
deleted file mode 100644
index 4f8cee84f..000000000
--- a/scripts/ci/jenkins_pipeline_community_csi_test
+++ /dev/null
@@ -1,83 +0,0 @@
-pipeline {
- agent {
- label 'docker-engine'
- }
- environment {
- CONTROLLER_LOGS = "csi_controller_logs"
- }
-
- stages {
- stage('Environment Setup') {
- agent {
- label 'ansible_rhel73'
- }
- steps {
- script{
- echo "checking out XAVI"
- if (env.XAVILIB_BRANCH == null) {
- env.XAVILIB_BRANCH = 'develop'
- }
- // Just bring XAVI repo (use it in different stage)
- xaviCheckOutScm(path: 'testing/', name: 'xavi', branch: "${env.XAVILIB_BRANCH}")
-
-
- // Generate the new storage conf yaml with relevant envs
- env.pwd = sh(returnStdout: true, script: 'pwd').trim()
- echo " env.pwd ${env.pwd}"
-
- env.new_conf_yaml_name = "${env.pwd}/scripts/ci/storage_conf_new.yaml"
- sh 'echo new conf yaml ${new_conf_yaml_name}'
-
- env.full_storage_conf_yaml_path = "${env.pwd}/scripts/ci/storage_conf.yaml"
- echo "full storage conf yaml path : ${env.full_storage_conf_yaml_path}"
-
- echo "replacing username and password in storage-conf file"
- // this will replace the username and password env vars in the yaml file.
- sh '''
- ( echo "cat < ${new_conf_yaml_name}";
- cat ${full_storage_conf_yaml_path};
- echo "EOF";
- ) > ${new_conf_yaml_name}
- . ${new_conf_yaml_name}
- cat ${new_conf_yaml_name}
- '''
-
- echo "getting pool name from yaml file"
- env.POOL_NAME = sh(returnStdout: true, script: 'cat ${full_storage_conf_yaml_path} | grep " pools:" -A 4 | grep name | cut -d ":" -f2').trim()
- echo "pool name ${POOL_NAME}"
-
- }
- }
- }
- stage('Configure Storage') {
- agent {
- label 'ansible_rhel73'
- }
- steps {
- echo "found storage yaml so running ansible to configure storage using yaml file : ${env.new_conf_yaml_name}"
- script {
- configureStorage(storage_arrays: "${env.STORAGE_ARRAYS}", vars_file: "${env.new_conf_yaml_name}")
- }
- }
- }
-
- stage ('CSI-controller: build and start controller server and csi sanity tests') {
- steps {
- sh './scripts/ci/run_community_csi_test.sh'
- }
- }
- }
-
- post {
- always {
- sh './scripts/ci/community_csi_test_cleanup.sh csi-controller'
- sh './scripts/ci/community_csi_test_cleanup.sh csi-sanity-test'
- archiveArtifacts "${env.CONTROLLER_LOGS}, ${env.CONTROLLER_LOGS}_node"
- sh 'ls build/reports'
- junit 'build/reports/*.xml'
- sh '[ -d build/reports ] && rm -rf build/reports'
- sh '[ -f `${env.CONTROLLER_LOGS}` ] && rm -f csi_controller_logs'
-
- }
- }
-}
diff --git a/scripts/ci/run_csi_test_client.sh b/scripts/ci/run_csi_test_client.sh
index c21879751..7629e40f2 100755
--- a/scripts/ci/run_csi_test_client.sh
+++ b/scripts/ci/run_csi_test_client.sh
@@ -7,6 +7,12 @@ if [ $3 = 'community_svc' ] ; then
else
csi_params='csi_params'
fi
+
+common_tests_to_skip_file="scripts/csi_test/common_csi_tests_to_skip"
+array_specific_tests_to_skip_file="scripts/csi_test/$3_csi_tests_to_skip"
+tests_to_skip_file="scripts/csi_test/csi_tests_to_skip"
+cat $common_tests_to_skip_file $array_specific_tests_to_skip_file > $tests_to_skip_file
+
#/tmp/k8s_dir is the directory of the csi grpc\unix socket that shared between csi server and csi-test docker
-docker build -f Dockerfile-csi-test --build-arg CSI_PARAMS=${csi_params} -t csi-sanity-test . && docker run --user=root -e STORAGE_ARRAYS=${STORAGE_ARRAYS} -e USERNAME=${USERNAME} -e PASSWORD=${PASSWORD} -e POOL_NAME=${POOL_NAME} -v /tmp/k8s_dir:/tmp/k8s_dir:rw -v$2:/tmp/test_results:rw --rm --name $1 csi-sanity-test
+docker build -f Dockerfile-csi-test --build-arg CSI_PARAMS=${csi_params} -t csi-sanity-test . && docker run --user=root -e STORAGE_ARRAYS=${STORAGE_ARRAYS} -e USERNAME=${USERNAME} -e PASSWORD=${PASSWORD} -e POOL_NAME=${POOL_NAME} -v /tmp:/tmp:rw -v$2:/tmp/test_results:rw --rm --name $1 csi-sanity-test
diff --git a/scripts/csi_test/common_csi_tests_to_skip b/scripts/csi_test/common_csi_tests_to_skip
new file mode 100644
index 000000000..691004e40
--- /dev/null
+++ b/scripts/csi_test/common_csi_tests_to_skip
@@ -0,0 +1,5 @@
+GetCapacity
+ListVolumes
+volume lifecycle
+ListSnapshots
+NodeGetVolumeStats
diff --git a/scripts/csi_test/community_a9k_csi_tests_to_skip b/scripts/csi_test/community_a9k_csi_tests_to_skip
new file mode 100644
index 000000000..e69de29bb
diff --git a/scripts/csi_test/community_ds8k_csi_tests_to_skip b/scripts/csi_test/community_ds8k_csi_tests_to_skip
new file mode 100644
index 000000000..a48deba96
--- /dev/null
+++ b/scripts/csi_test/community_ds8k_csi_tests_to_skip
@@ -0,0 +1 @@
+should create volume from an existing source snapshot
diff --git a/scripts/csi_test/community_svc_csi_tests_to_skip b/scripts/csi_test/community_svc_csi_tests_to_skip
new file mode 100644
index 000000000..572a33b98
--- /dev/null
+++ b/scripts/csi_test/community_svc_csi_tests_to_skip
@@ -0,0 +1,4 @@
+should create volume from an existing source snapshot
+should create volume from an existing source volume
+should succeed when requesting to create a snapshot with already existing name and same source volume ID
+should fail when requesting to create a snapshot with already existing name and different source volume ID
diff --git a/scripts/csi_test/csi_params b/scripts/csi_test/csi_params
index b8bd87cf6..b524c8581 100644
--- a/scripts/csi_test/csi_params
+++ b/scripts/csi_test/csi_params
@@ -1,3 +1,3 @@
#SpaceEfficiency: Compression
pool: POOL_NAME
-#volume_name_prefix: v_olga
+#volume_name_prefix: vol_
diff --git a/scripts/csi_test/csi_tests_to_run b/scripts/csi_test/csi_tests_to_run
deleted file mode 100644
index 8cb29b064..000000000
--- a/scripts/csi_test/csi_tests_to_run
+++ /dev/null
@@ -1,18 +0,0 @@
-ControllerGetCapabilities
-CreateVolume
-DeleteVolume
-ExpandVolume
-NodeGetCapabilities
-NodeGetInfo
-CreateSnapshot
-DeleteSnapshot
-NodePublishVolume
-NodeUnpublishVolume
-NodeStageVolume
-NodeUnstageVolume
-GetPluginCapabilities
-GetPluginInfo
-Probe
-ControllerPublishVolume
-ControllerUnpublishVolume
-Node Service
diff --git a/scripts/csi_test/entrypoint-csi-tests.sh b/scripts/csi_test/entrypoint-csi-tests.sh
index 7b1544b69..a75b0f72c 100755
--- a/scripts/csi_test/entrypoint-csi-tests.sh
+++ b/scripts/csi_test/entrypoint-csi-tests.sh
@@ -8,8 +8,8 @@ sed -i -e "s/PASSWORD/${PASSWORD}/g" ${SECRET_FILE}
echo "update params file"
sed -i -e "s/POOL_NAME/${POOL_NAME}/g" ${PARAM_FILE}
-# get tests to run
-TESTS=`cat ${TESTS_TO_RUN_FILE}| sed -Ez '$ s/\n+$//' | tr '\n' "|"`
+# get tests to skip
+TESTS=`cat ${TESTS_TO_SKIP_FILE}| sed -Ez '$ s/\n+$//' | tr '\n' "|"`
/usr/local/go/src/github.com/kubernetes-csi/csi-test/cmd/csi-sanity/csi-sanity \
--csi.endpoint ${ENDPOINT} \
@@ -19,5 +19,5 @@ TESTS=`cat ${TESTS_TO_RUN_FILE}| sed -Ez '$ s/\n+$//' | tr '\n' "|"`
--csi.junitfile ${JUNIT_OUTPUT} \
--ginkgo.v \
--ginkgo.debug \
---ginkgo.focus "${TESTS}"
+--ginkgo.skip "${TESTS}"
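The `sed`/`tr` pipeline above collapses the concatenated skip-list file into a single `|`-separated pattern for csi-sanity's `--ginkgo.skip` regular expression. A rough Go equivalent of that transformation, shown only to clarify what the shell does (the file name comes from the scripts above):

```
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Read the combined skip list produced by run_csi_test_client.sh.
	data, err := os.ReadFile("scripts/csi_test/csi_tests_to_skip")
	if err != nil {
		panic(err)
	}
	// Drop trailing newlines, then join the test names with "|" so ginkgo
	// treats them as alternatives in its skip regex.
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	fmt.Println(strings.Join(lines, "|"))
}
```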