Installing IBM Fusion Data Foundation using the command line (CLI) and dynamic storage classes on an Installer-Provisioned Infrastructure (IPI) OpenShift cluster
Red Hat provides documentation that shows how to use the CLI to install and configure Data Foundation using the Local Storage operator and disks from LocalStorage storage classes. This document shows how to use dynamic disks on an IPI cluster instead, in this case an OpenShift cluster installed on VMware.
In this example we use OpenShift 4.18 and Fusion 2.11. On a freshly installed OpenShift cluster on VMware we need to install the IBM Fusion software, and before proceeding we need to work through the prerequisites listed in the Fusion documentation.
To run the steps you need to be logged in to the OpenShift cluster with the oc command as an administrative user:
oc login -u kubeadmin
Then type the password, or go to the user name in the upper right corner of the OpenShift web console, select Copy login command, click Display Token, and use:
oc login --token=sha256~<your token> --server=https://api.<your-cluster.your-domain>:6443
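To confirm the login worked and the user has sufficient privileges, a quick check (assuming kubeadmin or another cluster-admin user):

```shell
# Show the current user and the API server you are connected to
oc whoami
oc whoami --show-server

# Confirm cluster-admin rights; this should print "yes"
oc auth can-i '*' '*' --all-namespaces
```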
The main prerequisite steps are:
- Obtaining the entitlement key, which allows downloading software images from the IBM registry. Licensed customers already have access to entitlement keys; a trial version is also available.
- Creating the image pull secret so OpenShift can access the IBM registry.
- Adding the IBM Operator Catalog so that all IBM software is available under Operators -> OperatorHub.
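The entitlement key itself comes from the IBM Container Library, and the exact merge procedure may differ in your environment. As a sketch, one common way to add the IBM registry credentials to the cluster's global pull secret (the `cp` user name and `cp.icr.io` registry host are the IBM defaults; `<your-entitlement-key>` is a placeholder for your own key, and `jq` must be available locally):

```shell
# Download the current global pull secret to the working directory
oc extract secret/pull-secret -n openshift-config --to=. --confirm

# Merge in the IBM entitled registry credentials
jq --arg auth "$(echo -n 'cp:<your-entitlement-key>' | base64 -w0)" \
   '.auths["cp.icr.io"] = {"auth": $auth}' .dockerconfigjson > pull-secret.json

# Upload the merged pull secret back to the cluster
oc set data secret/pull-secret -n openshift-config \
   --from-file=.dockerconfigjson=pull-secret.json
```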
With all the prerequisites met we can start configuring the operators.
IBM provides documentation on how to install the IBM Fusion operator using the command line.
The basic steps are to create a deployfusion.yaml file with the following content:
apiVersion: v1
kind: Namespace
metadata:
  name: ibm-spectrum-fusion-ns
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: isf-og
  namespace: ibm-spectrum-fusion-ns
spec:
  targetNamespaces:
  - ibm-spectrum-fusion-ns
  upgradeStrategy: Default
---
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: ibm-operator-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: icr.io/cpopen/ibm-operator-catalog:latest
  displayName: IBM Operator Catalog
  publisher: IBM
  updateStrategy:
    registryPoll:
      interval: 45m
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: isf-operator
  namespace: ibm-spectrum-fusion-ns
spec:
  channel: v2.0
  name: isf-operator
  sourceNamespace: openshift-marketplace
  source: ibm-operator-catalog
  installPlanApproval: Automatic
Apply the file to the cluster:
oc apply -f deployfusion.yaml
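Before moving on it is worth confirming that the Fusion operator installed cleanly, for example:

```shell
# The operator pods run in the ibm-spectrum-fusion-ns namespace
oc get pods -n ibm-spectrum-fusion-ns

# The ClusterServiceVersion PHASE column should eventually show Succeeded
oc get csv -n ibm-spectrum-fusion-ns
```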
Then we need to accept the Fusion license by creating the accept_license.yaml file:
apiVersion: prereq.isf.ibm.com/v1
kind: SpectrumFusion
metadata:
  name: spectrumfusion
  namespace: ibm-spectrum-fusion-ns
spec:
  license:
    accept: true
And then apply the file:
oc apply -f accept_license.yaml
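To check that the resource was created and the license acceptance was recorded, you can inspect the SpectrumFusion object:

```shell
# The spec should show license.accept: true, and the status should reflect it
oc get spectrumfusion spectrumfusion -n ibm-spectrum-fusion-ns -o yaml
```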
We need to create the project/namespace for Data Foundation with namespace.yaml:
apiVersion: v1
kind: Namespace
metadata:
  labels:
    openshift.io/cluster-monitoring: "true"
  name: openshift-storage
spec: {}
Then apply namespace.yaml
oc apply -f namespace.yaml
Create the operatorgroup.yaml file
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-storage-operatorgroup
  namespace: openshift-storage
spec:
  targetNamespaces:
  - openshift-storage
Apply operatorgroup.yaml file
oc apply -f operatorgroup.yaml
We need to create the ISF Data Foundation catalog by creating isf-datafoundation-catalog.yaml:
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: isf-data-foundation-catalog
  namespace: openshift-marketplace
spec:
  displayName: Data Foundation Catalog
  image: icr.io/cpopen/isf-data-foundation-catalog:v4.18
  publisher: IBM
  sourceType: grpc
  updateStrategy:
    registryPoll:
      interval: 60m
Apply the isf-datafoundation-catalog.yaml
oc apply -f isf-datafoundation-catalog.yaml
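The catalog takes a moment to come online; a quick way to check (the `olm.catalogSource` label is applied by OLM to catalog registry pods):

```shell
# The catalog registry pod should reach Running in openshift-marketplace
oc get pods -n openshift-marketplace -l olm.catalogSource=isf-data-foundation-catalog

# The catalog source itself should report a READY connection state
oc get catalogsource isf-data-foundation-catalog -n openshift-marketplace
```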
We finally install the Data Foundation operator with the subscription.yaml file:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: odf-operator
  namespace: openshift-storage
spec:
  channel: "stable-4.18"
  installPlanApproval: Automatic
  name: odf-operator
  source: isf-data-foundation-catalog
  sourceNamespace: openshift-marketplace
Install the operator:
oc apply -f subscription.yaml
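The operator installation takes a few minutes; you can follow its progress before proceeding:

```shell
# Check the subscription was picked up by OLM
oc get subscription odf-operator -n openshift-storage

# Watch the ClusterServiceVersion until the PHASE column shows Succeeded
oc get csv -n openshift-storage -w
```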
We need to patch the Console operator so that Data Foundation appears in the OpenShift web console:
oc patch console.operator cluster -n openshift-storage --type json -p '[{"op": "add", "path": "/spec/plugins", "value": ["odf-console"]}]'
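Note that the patch above sets the plugin list as a whole, so it would overwrite any console plugins that are already enabled. If spec.plugins already exists on your cluster, appending with the JSON Patch `-` array index is safer (the append form fails if the list does not exist yet, in which case the original command applies):

```shell
# Append odf-console instead of replacing the whole plugin list
oc patch console.operator cluster --type json \
  -p '[{"op": "add", "path": "/spec/plugins/-", "value": "odf-console"}]'

# Verify the plugin is now listed
oc get console.operator cluster -o jsonpath='{.spec.plugins}'
```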
With the operators installed we create ocs-storagecluster.yaml, pointing to a storage class that OpenShift already knows, in this case the VMware thin-csi class.
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  arbiter: {}
  encryption:
    kms: {}
  externalStorage: {}
  managedResources:
    cephBlockPools: {}
    cephCluster: {}
    cephConfig: {}
    cephDashboard: {}
    cephFilesystems: {}
    cephNonResilientPools: {}
    cephObjectStoreUsers: {}
    cephObjectStores: {}
    cephToolbox: {}
  mirroring: {}
  monPVCTemplate:
    metadata: {}
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 50Gi
      storageClassName: thin-csi
      volumeMode: Filesystem
    status: {}
  resources: {}
  storageDeviceSets:
  - config: {}
    count: 1
    dataPVCTemplate:
      metadata: {}
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 2Ti
        storageClassName: thin-csi
        volumeMode: Block
      status: {}
    name: fusion-storage
    placement: {}
    preparePlacement: {}
    replica: 3
    resources: {}
Then apply the ocs-storagecluster.yaml file:
oc apply -f ocs-storagecluster.yaml
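Deploying the Ceph cluster behind the StorageCluster takes a while; you can monitor progress as the pods come up:

```shell
# Watch the pods; the Ceph mon, mgr, osd, and related pods should all reach Running
oc get pods -n openshift-storage -w

# The storage cluster PHASE should eventually read Ready
oc get storagecluster ocs-storagecluster -n openshift-storage
```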
After all the pods are created, the storage cluster is ready to use.
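At this point the Data Foundation storage classes should be available for application workloads; the exact names can vary by release, but they typically carry the ocs-storagecluster prefix:

```shell
# Data Foundation typically creates block (ceph-rbd), filesystem (cephfs),
# and object storage classes named after the storage cluster
oc get storageclass | grep ocs-storagecluster
```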