# Deploy Cyberwatch on an existing Kubernetes cluster

This page describes the steps to deploy Cyberwatch on an existing Kubernetes cluster. This procedure assumes that the user has a basic knowledge of the Kubernetes orchestrator and of Helm.

## Deployment steps
1. Have a cluster that meets the software's technical prerequisites.

2. Log in to the Helm repository:

   ```sh
   helm registry login harbor.cyberwatch.fr
   ```

   Fill in the username prefixed with `cbw$`, then fill in the password. These credentials are the ones in your Cyberwatch license; if you do not have it, please contact us at support@cyberwatch.com.
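If you need to script this step, recent Helm versions can read the password from standard input instead of prompting. The username below is a placeholder, and the `CYBERWATCH_PASSWORD` variable is an assumption for illustration; single quotes keep the shell from expanding the `$` in the username:

```shell
# Non-interactive login sketch; 'cbw$example' is a placeholder username.
# Single quotes prevent the shell from expanding "$" in the username.
echo "$CYBERWATCH_PASSWORD" | helm registry login harbor.cyberwatch.fr \
  --username 'cbw$example' --password-stdin
```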
3. Create and edit the configuration file `values.yml`.

   Store the `values.yml` file in a secure way: it is required to update the Docker images or to update the Helm chart.

   The following steps describe how to set up a minimal configuration file for deploying the Cyberwatch application. Here is an example of `values.yml` in its minimal configuration:

   ```yaml
   global:
     # storageClass:
     image:
       registryCredentials:
         - name: cyberwatch-credentials
           registry: harbor.cyberwatch.fr/cbw-on-premise
           username: changeme
           password: changeme
   nginx:
     resolver: changeme
   ingress:
     enabled: true
     hosts:
       - hostname: cyberwatch.example.com
         ingressClassName: nginx
     tls:
       enabled: true
   thirdParties:
     enabled: false
   database:
     password: "changeme"
     root_password: "changeme"
   redis:
     password: "changeme"
   key:
     base: "changeme"
     credential: "changeme"
   node:
     name: cyberwatch-node-name
     type: single
   ```

4. Set the credentials used to pull the Docker images. The username and password are the same as those used to log in to the Helm chart repository:

   ```yaml
   global:
     image:
       registryCredentials:
         - name: cyberwatch-credentials
           registry: harbor.cyberwatch.fr/cbw-on-premise
           username: changeme
           password: changeme
   ```

5. Configure the `global.storageClass` field, which defines the type of storage used by the `VolumeClaims` that keep the persistent data.

   By default, the Helm chart configures the application so that the data is saved on the machine that runs the containers, using `hostPath`-type `VolumeClaims`. This behavior is only suitable if the Kubernetes cluster is made of one node. On a Kubernetes cluster with multiple nodes, Cyberwatch recommends using a `StorageClass`:

   ```yaml
   global:
     # storageClass:
   ```

   List the `StorageClass` objects available on the cluster:

   ```sh
   kubectl get sc
   ```

   Uncomment the `global.storageClass` field and assign it the name of a `StorageClass` available on the cluster. For example:

   ```yaml
   global:
     storageClass: csi-cinder-classic
   ```
If necessary, further information is available in the comments of the Helm chart's default configuration file.
6. Configure the `nginx.resolver` field with the IP address of the DNS service of the Kubernetes cluster.

   Get the IP address of the `kube-dns` DNS resolver:

   ```sh
   kubectl -n kube-system get svc kube-dns
   ```

   Assign the IP address of the DNS resolver of the Kubernetes cluster to the `nginx.resolver` field. Example:

   ```yaml
   nginx:
     resolver: 10.3.0.10
   ```
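If you prefer to extract just the address, a jsonpath query can print the Service's cluster IP directly (assuming the resolver Service is named `kube-dns`, as above):

```shell
# Print only the ClusterIP of the kube-dns Service
kubectl -n kube-system get svc kube-dns -o jsonpath='{.spec.clusterIP}'
```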
7. Configure the `ingress`.

   One or more ingresses can be configured in the `ingress.hosts` field. Each ingress must have a unique `hostname` and an `ingressClassName`. The `IngressClass` objects available on the cluster can be listed using the command below:

   ```sh
   kubectl get ingressclasses
   ```

   Assign the selected value to the `ingressClassName` field, and the domain name which will accept requests to the `hostname` field. Example:

   ```yaml
   ingress:
     enabled: true
     hosts:
       - hostname: cyberwatch.example.com
         ingressClassName: nginx
     tls:
       enabled: true
   ```

   The IP address that corresponds to the domain name must be the IP address of the cluster load balancer.
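To find the address for the DNS record, you can query the ingress controller's Service. The namespace and Service name below are typical for an ingress-nginx installation and are assumptions; adjust them to match your cluster:

```shell
# Print the external IP of the ingress controller's LoadBalancer Service
# (namespace and Service name depend on how the controller was installed)
kubectl -n ingress-nginx get svc ingress-nginx-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```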
If necessary, further information is available in the comments of the Helm chart's default configuration file.
8. Disable the `thirdParties` container by setting the `thirdParties.enabled` parameter to `false`:

   ```yaml
   thirdParties:
     enabled: false
   ```

9. Configure the secrets for the application, the database, and Redis.

   To generate these secrets, use the following command:

   ```sh
   cat <<-EOF
   database:
     password: "$(openssl rand -hex 16)"
     root_password: "$(openssl rand -hex 16)"
   redis:
     password: "$(openssl rand -hex 16)"
   key:
     base: "$(openssl rand -hex 64)"
     credential: "$(openssl rand -hex 64)"
   EOF
   ```
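As an aside, each `openssl rand -hex N` call emits 2×N hexadecimal characters, which is easy to verify locally before pasting the generated values into `values.yml`:

```shell
# openssl rand -hex N emits 2*N hex characters
secret=$(openssl rand -hex 16)
echo "${#secret}"   # prints 32
```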
10. Configure the name of the node in the Cyberwatch application with the `node.name` parameter:

    ```yaml
    node:
      name: cyberwatch-node-name
      type: single
    ```

11. Create the `cyberwatch` namespace on the cluster:

    ```sh
    kubectl create namespace cyberwatch
    ```

12. Generate an SSH key pair and save the keys as secrets:

    ```sh
    ssh-keygen -q -N '' -f ./id_ed25519 -t ed25519
    kubectl -n cyberwatch create secret generic web-scanner-ssh-authorized-keys --from-file=authorized_keys="./id_ed25519.pub"
    kubectl -n cyberwatch create secret generic ssh-private-key --from-file="./id_ed25519"
    ```

13. Deploy the Helm chart to your cluster:

    ```sh
    helm -n cyberwatch install cyberwatch oci://harbor.cyberwatch.fr/cbw-on-premise/cyberwatch-chart -f values.yml
    ```

    The deployment of the Helm chart uses the settings of the `values.yml` file to configure the application.

14. Verify the status of all the pods:

    ```sh
    kubectl -n cyberwatch get pods
    ```

15. When all the pods are running, register the Administrator account from the web interface.

    Accessing the Cyberwatch instance through its IP address will return a 404 error; it is necessary to use the domain name defined above.
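Rather than polling the pod list by hand, `kubectl wait` can block until every pod reports Ready; the timeout value here is an arbitrary example:

```shell
# Block until all pods in the namespace are Ready, or fail after 15 minutes
kubectl -n cyberwatch wait --for=condition=Ready pod --all --timeout=15m
```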
## (Optional) Retrieve the Helm chart's default configuration file

The documentation above shows the steps to follow to set up a minimal configuration of Cyberwatch.

It is possible to download the default configuration file of the Cyberwatch Helm chart, in order to start from a complete file that shows which default values can be changed.

Using this file is recommended if you wish to deviate from the minimal configuration described in this documentation, for example to set up a TLS certificate.

To retrieve the Helm chart's default configuration file:

```sh
helm show values oci://harbor.cyberwatch.fr/cbw-on-premise/cyberwatch-chart > values.yml
```

This file can then be modified according to your needs, and the Helm chart deployed from this configuration.
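Later configuration changes follow the same pattern. As a sketch, re-deploying an edited `values.yml` uses `helm upgrade` with the release name and namespace from the steps above:

```shell
# Apply an updated values.yml to the existing release
helm -n cyberwatch upgrade cyberwatch oci://harbor.cyberwatch.fr/cbw-on-premise/cyberwatch-chart -f values.yml
```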