Mirantis Kubernetes Engine
Mirantis Kubernetes Engine (MKE) is a container orchestration platform for developing and running modern applications at scale, on private clouds, public clouds, and on bare metal.
MKE as a container orchestration platform is especially beneficial in the following scenarios:
- Orchestrating more than one container
- Robust and scalable applications deployment
- Multi-tenant software offerings
The following sections describe how to deploy a single-node YugabyteDB cluster on Mirantis MKE using kubectl and helm.
For simplicity, this page describes the steps for a single-node cluster; a multi-node deployment requires more than one machine or VM.
Prerequisite
Before installing a single-node YugabyteDB cluster, ensure that you have the Docker runtime installed on the host on which you are installing MKE. To download and install Docker, refer to the Docker documentation for your environment.
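If you want to confirm that the Docker runtime is ready before installing MKE, the following optional check verifies that the Docker CLI is installed and that the daemon is reachable (the version numbers on your host will differ):

```sh
# Check the installed Docker CLI version and confirm the daemon responds.
docker --version
docker info --format '{{.ServerVersion}}'
```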
Install and configure Mirantis
- Install the MKE Docker image as follows:

  ```sh
  docker image pull mirantis/ucp:3.5.8

  docker container run --rm -it --name ucp \
    -v /var/run/docker.sock:/var/run/docker.sock \
    mirantis/ucp:3.5.8 install \
    --host-address <node-ip> \
    --interactive
  ```

  Replace `<node-ip>` with the IP address of your machine.

  When prompted, enter the username and password that you want to set; these are used to access the MKE web UI.
- Install and configure kubectl with MKE, and install Helm using the instructions in the MKE documentation.
- Create a new storage class using the following steps:

  - Copy the following content to a file named `storage.yaml`:

    ```yaml
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: yb-storage
    provisioner: kubernetes.io/no-provisioner
    volumeBindingMode: WaitForFirstConsumer
    ```

  - Apply the configuration using the following command:

    ```sh
    kubectl apply -f storage.yaml
    ```
- Make the new storage class the default as follows:

  ```sh
  kubectl patch storageclass yb-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
  ```
- Verify that the storage class is created using the following command:

  ```sh
  kubectl get storageclass
  ```

  ```output
  NAME                   PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
  yb-storage (default)   kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  23s
  ```
- Create four PersistentVolumes (PVs) using the following steps:

  - Copy the following PersistentVolume configuration to a file named `volume.yaml`:

    ```yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: task-pv-volume1
      labels:
        type: local
    spec:
      storageClassName: yb-storage
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      hostPath:
        path: "/mnt/data"
    ```

  - Apply the configuration using the following command:

    ```sh
    kubectl apply -f volume.yaml
    ```

  - Repeat the preceding two steps to create the remaining PVs: `task-pv-volume2`, `task-pv-volume3`, and `task-pv-volume4`. For each volume, change the metadata name in `volume.yaml` and re-run the `kubectl apply` command on the same file. (A scripted alternative is sketched after this list.)
- Verify that the PersistentVolumes are created using the following command:

  ```sh
  kubectl get pv
  ```

  ```output
  NAME              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
  task-pv-volume1   10Gi       RWO            Retain           Available           yb-storage              103s
  task-pv-volume2   10Gi       RWO            Retain           Available           yb-storage              82s
  task-pv-volume3   10Gi       RWO            Retain           Available           yb-storage              70s
  task-pv-volume4   10Gi       RWO            Retain           Available           yb-storage              60s
  ```
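Instead of editing `volume.yaml` by hand for each of the four volumes, you can generate them in a loop. This is a minimal sketch that assumes the `volume.yaml` shown above (with `task-pv-volume1` as the name) and keeps the same `hostPath` for every volume; it only substitutes the volume name before applying each manifest:

```sh
# Create task-pv-volume1 through task-pv-volume4 from the single volume.yaml
# template by rewriting the metadata name and applying each result.
for i in 1 2 3 4; do
  sed "s/task-pv-volume1/task-pv-volume${i}/" volume.yaml | kubectl apply -f -
done
```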
Deploy YugabyteDB
Download YugabyteDB Helm chart
To download the YugabyteDB Helm chart, perform the following:
- Add the charts repository using the following command:

  ```sh
  helm repo add yugabytedb https://charts.yugabyte.com
  ```

- Fetch updates from the repository using the following command:

  ```sh
  helm repo update
  ```

- Validate the chart version as follows:

  ```sh
  helm search repo yugabytedb/yugabyte --version 2.17.2
  ```

  ```output
  NAME                  CHART VERSION   APP VERSION     DESCRIPTION
  yugabytedb/yugabyte   2.17.2          2.17.2.0-b216   YugabyteDB is the high-performance distributed ...
  ```
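If you want to browse every chart version available in the repository rather than validating a single one, you can optionally run the same search with the `--versions` flag; the listing grows as new releases are published:

```sh
# List all published versions of the YugabyteDB chart.
helm search repo yugabytedb/yugabyte --versions
```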
Create a cluster
Create a single-node YugabyteDB cluster using the following command:
```sh
kubectl create namespace yb-demo

helm install yb-demo yugabytedb/yugabyte \
--version 2.17.2 \
--set resource.master.requests.cpu=0.5,resource.master.requests.memory=0.5Gi,\
resource.tserver.requests.cpu=0.5,resource.tserver.requests.memory=0.5Gi,\
replicas.master=1,replicas.tserver=1 --namespace yb-demo
```
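Before checking the pods, you can optionally confirm that the Helm release itself deployed cleanly. This is a small sketch using standard Helm commands; the exact output depends on your Helm version:

```sh
# Show the status of the yb-demo release and list releases in the namespace.
helm status yb-demo --namespace yb-demo
helm list --namespace yb-demo
```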
Check cluster status with kubectl
Run the following command to verify that you have two services with one running pod in each: one YB-Master pod (yb-master-0) and one YB-Tserver pod (yb-tserver-0):
```sh
kubectl --namespace yb-demo get pods
```

```output
NAME           READY   STATUS              RESTARTS   AGE
yb-master-0    0/2     ContainerCreating   0          5s
yb-tserver-0   0/2     ContainerCreating   0          4s
```
For details on the roles of these pods in a YugabyteDB cluster, refer to Architecture.
The status of all the pods changes to Running in a few seconds, as per the following output:

```output
NAME           READY   STATUS    RESTARTS   AGE
yb-master-0    2/2     Running   0          13s
yb-tserver-0   2/2     Running   0          12s
```
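You can also list the services that the chart created in the namespace; the service names and any external IPs you see depend on the chart version and your environment:

```sh
# List the services exposed for the yb-demo release.
kubectl --namespace yb-demo get services
```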
Connect to the database
Connect to your cluster using ysqlsh, and interact with it using distributed SQL. ysqlsh is installed with YugabyteDB and is located in the bin directory of the YugabyteDB home directory.
- To start ysqlsh with kubectl, run the following command:

  ```sh
  kubectl --namespace yb-demo exec -it yb-tserver-0 -- sh -c "cd /home/yugabyte && ysqlsh -h yb-tserver-0 --echo-queries"
  ```

  ```output
  ysqlsh (11.2-YB-2.23.1.0-b0)
  Type "help" for help.

  yugabyte=#
  ```

- To load sample data and explore an example using ysqlsh, refer to Retail Analytics.
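As a quick smoke test, you can pipe a few SQL statements into ysqlsh through kubectl. This is an illustrative sketch only; the `demo` table and its values are made up for the example:

```sh
# Create a sample table, insert a row, and read it back through ysqlsh
# running inside the yb-tserver-0 pod. The demo table is illustrative only.
kubectl --namespace yb-demo exec -i yb-tserver-0 -- sh -c "cd /home/yugabyte && ysqlsh -h yb-tserver-0" <<'SQL'
CREATE TABLE demo (id INT PRIMARY KEY, name TEXT);
INSERT INTO demo (id, name) VALUES (1, 'hello');
SELECT * FROM demo;
SQL
```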
Access the MKE web UI
To control your cluster visually with MKE, refer to Access the MKE web UI.