We have a master and four worker nodes already installed and configured in a Kubernetes cluster:
192.168.1.88 k8s-master
192.168.1.41 node01
192.168.1.42 node02
192.168.1.43 node03
192.168.1.44 node04
Each node has been given an unformatted hard disk: /dev/sdb
Install Ansible
yum install epel-release
yum install ansible
Once installed, we add the nodes at the end of the /etc/ansible/hosts inventory:
[nodes]
node01 ansible_host=192.168.1.41
node02 ansible_host=192.168.1.42
node03 ansible_host=192.168.1.43
node04 ansible_host=192.168.1.44
We create an SSH key:
ssh-keygen -t rsa -b 2048
and copy it to each node:
for NODE in node01 node02 node03 node04; do ssh-copy-id root@${NODE}; done
With the public key in place, we test that Ansible can reach the nodes:
ansible nodes -m ping
Install GlusterFS
ansible nodes -a "yum install -y centos-release-gluster"
ansible nodes -a "yum install -y glusterfs-server"
ansible nodes -a "systemctl start glusterd"
ansible nodes -a "systemctl enable glusterd"
Install Heketi on the master node
wget https://github.com/heketi/heketi/releases/download/v9.0.0/heketi-client-v9.0.0.linux.amd64.tar.gz
tar xzvf heketi-client-v9.0.0.linux.amd64.tar.gz
cp heketi-client/bin/heketi-cli /usr/local/sbin/
Deploy the Heketi service
We label the nodes that will form the storage cluster:
kubectl label node node01 storagenode=glusterfs
kubectl label node node02 storagenode=glusterfs
kubectl label node node03 storagenode=glusterfs
kubectl label node node04 storagenode=glusterfs
We create a secret that will hold the Heketi password; the value must be base64-encoded. The password we will use is "password", which in base64 is cGFzc3dvcmQ=.
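The base64 value can be generated on the command line; note the -n flag, which keeps echo from appending a newline that would otherwise end up inside the encoded secret:

```shell
# Encode the Heketi admin password for the Secret manifest;
# -n suppresses the trailing newline so it is not encoded too
echo -n 'password' | base64
# cGFzc3dvcmQ=
```

To double-check a value, decode it back with `base64 -d`.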
nano heketi-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
data:
  key: cGFzc3dvcmQ=
type: kubernetes.io/glusterfs
kubectl apply -f heketi-secret.yaml
We prepare the Heketi deployment manifest:
nano heketi-deployment.json
{
"kind": "List",
"apiVersion": "v1",
"items": [
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "heketi",
"labels": {
"glusterfs": "heketi-service",
"deploy-heketi": "support"
},
"annotations": {
"description": "Exposes Heketi Service",
"tags": "kubernetes,k8s,heketi",
"traefik.backend.loadbalancer": "wrr",
"traefik.backend.weight": "10",
"traefik.enable": "true",
"traefik.frontend.entryPoints": "http,https",
"traefik.frontend.rule": "Host:heketi-api.example.com",
"traefik.tags": "kubernetes"
}
},
"spec": {
"type": "LoadBalancer",
"selector": {
"name": "heketi"
},
"ports": [
{
"name": "heketi",
"port": 8080,
"targetPort": 8080
}
]
}
},
{
"kind": "Deployment",
"apiVersion": "apps/v1",
"metadata": {
"name": "heketi",
"labels": {
"glusterfs": "heketi-deployment"
},
"annotations": {
"description": "Defines how to deploy Heketi"
}
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"name": "heketi"
}
},
"strategy": {
"rollingUpdate": {
"maxSurge": 0,
"maxUnavailable": 1
},
"type": "RollingUpdate"
},
"template": {
"metadata": {
"name": "heketi",
"labels": {
"name": "heketi",
"glusterfs": "heketi-pod"
}
},
"spec": {
"terminationGracePeriodSeconds": 0,
"nodeSelector": {
"storagenode": "glusterfs"
},
"containers": [
{
"image": "heketi/heketi:9",
"imagePullPolicy": "Always",
"name": "heketi",
"env": [
{
"name": "HEKETI_EXECUTOR",
"value": "ssh"
},
{
"name": "HEKETI_SSH_USER",
"value": "root"
},
{
"name": "HEKETI_SSH_PORT",
"value": "22"
},
{
"name": "HEKETI_SSH_KEYFILE",
"value": "/root/.ssh/id_rsa"
},
{
"name": "HEKETI_ADMIN_KEY",
"valueFrom": {
"secretKeyRef": {
"name": "heketi-secret",
"key": "key"
}
}
}
],
"ports": [
{
"containerPort": 8080
}
],
"volumeMounts": [
{
"name": "heketi-ssh-key",
"mountPath": "/root/.ssh"
},
{
"name": "heketi-db",
"mountPath": "/var/lib/heketi"
}
],
"readinessProbe": {
"timeoutSeconds": 3,
"initialDelaySeconds": 3,
"httpGet": {
"path": "/hello",
"port": 8080
}
},
"livenessProbe": {
"timeoutSeconds": 3,
"initialDelaySeconds": 15,
"httpGet": {
"path": "/hello",
"port": 8080
}
}
}
],
"volumes": [
{
"name": "heketi-ssh-key",
"hostPath": {
"path": "/root/.ssh"
}
},
{
"name": "heketi-db",
"hostPath": {
"path": "/data/heketi/db"
}
}
]
}
}
}
}
]
}
The private key must be placed on every node, since the Heketi pod mounts /root/.ssh from whichever host it is scheduled on:
for NODE in node01 node02 node03 node04; do scp /root/.ssh/id_rsa root@${NODE}:/root/.ssh/id_rsa; done
kubectl apply -f heketi-deployment.json
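Once the deployment is applied, the pod should come up on one of the labelled nodes and answer on the same /hello endpoint the probes use. A quick sanity check (the service IP below is an example; substitute the one `kubectl get service heketi` reports):

```shell
# Confirm the Heketi pod is running and ready
kubectl get pods -l glusterfs=heketi-pod

# Hit the health endpoint used by the readiness/liveness probes;
# Heketi replies with a short greeting when it is up
curl http://192.168.1.241:8080/hello
```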
We prepare the storage cluster topology:
nano heketi-topology.json
{
"clusters": [
{
"nodes": [
{
"node": {
"hostnames": {
"manage": [
"192.168.1.41"
],
"storage": [
"192.168.1.41"
]
},
"zone": 1
},
"devices": [
"/dev/sdb"
]
},
{
"node": {
"hostnames": {
"manage": [
"192.168.1.42"
],
"storage": [
"192.168.1.42"
]
},
"zone": 2
},
"devices": [
"/dev/sdb"
]
},
{
"node": {
"hostnames": {
"manage": [
"192.168.1.43"
],
"storage": [
"192.168.1.43"
]
},
"zone": 3
},
"devices": [
"/dev/sdb"
]
},
{
"node": {
"hostnames": {
"manage": [
"192.168.1.44"
],
"storage": [
"192.168.1.44"
]
},
"zone": 4
},
"devices": [
"/dev/sdb"
]
}
]
}
]
}
We check the IP assigned to the heketi service (192.168.1.241 in this example):
kubectl get service heketi
heketi-cli --user admin --secret password --server http://192.168.1.241:8080 topology load --json heketi-topology.json
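If the load succeeds, heketi-cli can show the resulting layout; each of the four nodes should appear with /dev/sdb registered as a device:

```shell
# Dump the topology Heketi now manages: clusters, nodes and devices
heketi-cli --user admin --secret password --server http://192.168.1.241:8080 topology info
```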
With this the storage cluster is created; all that remains is to connect it to Kubernetes so that volumes are provisioned automatically.
Create a StorageClass
We declare a StorageClass and give it a name, in this case "slow", since these are not SSD disks. First we look up the cluster ID:
heketi-cli --user admin --secret password --server http://192.168.1.241:8080 cluster list
With the storage cluster ID, the IP of the Heketi service, and the user and password, we can now define the StorageClass:
nano storageClass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://192.168.1.241:8080"
  clusterid: "4725b4b28ba2c6a7deaa851220c0c443"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  volumetype: "replicate:3"
kubectl apply -f storageClass.yaml
We annotate it as the default StorageClass so that Helm charts pick it up automatically:
kubectl patch storageclass slow -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
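To verify dynamic provisioning end to end, we can create a small test claim against the new class (the claim name and 1Gi size are arbitrary examples):

```shell
# Create a 1Gi PVC backed by the "slow" GlusterFS StorageClass;
# GlusterFS volumes support the ReadWriteMany access mode
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-test-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: slow
  resources:
    requests:
      storage: 1Gi
EOF

# After a few seconds the claim should show STATUS "Bound"
kubectl get pvc gluster-test-claim
```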
Install Helm
wget https://get.helm.sh/helm-v2.14.3-linux-amd64.tar.gz
tar -xzvf helm-v2.14.3-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
mv linux-amd64/tiller /usr/local/bin/tiller
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
As a test, we deploy WordPress, whose chart requests persistent volumes:
helm install stable/wordpress
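Since "slow" is now the default StorageClass, the chart's claims should be provisioned by Heketi without any extra configuration; this can be confirmed by listing the claims and volumes:

```shell
# Each PVC created by the chart should be Bound to a
# dynamically provisioned GlusterFS persistent volume
kubectl get pvc
kubectl get pv
```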