GCP Private Kubernetes cluster for Helm installation of Elasticsearch

In this article, I will list the steps needed to deploy an Elasticsearch cluster on a private Google Cloud Platform (GCP) Kubernetes cluster using Helm: from building a custom Docker image with Elasticsearch, to creating the private Kubernetes cluster itself, and more.

Let’s begin.

Docker image

First, we need to create our own Docker image for Elasticsearch, based on the official image provided by Elastic. This step is mandatory, because a private GCP cluster has no connectivity to the official repository.

So, on my computer, I’m creating a Dockerfile with the following content:

FROM docker.elastic.co/elasticsearch/elasticsearch:7.5.0
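
The image is tagged elasticsearch-gcs below; if the intent is to snapshot indices to Google Cloud Storage, this Dockerfile is also the place to bake in the repository-gcs plugin. The extra line below is an assumption on my part, not something the base image ships with:

# Assumption: only needed if you want GCS snapshot support
RUN bin/elasticsearch-plugin install --batch repository-gcs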

The next step is to build and tag the image (replace PROJECT_ID with your GCP project ID).

docker build --tag elasticsearch-gcs:7.5.0 .
docker tag elasticsearch-gcs:7.5.0 gcr.io/PROJECT_ID/elasticsearch-gcs:7.5.0

Then, I push the newly created image to the internal GCP Container Registry of my project.

docker push gcr.io/PROJECT_ID/elasticsearch-gcs:7.5.0

If needed, I first authenticate so that Docker is allowed to push to the registry:

gcloud auth configure-docker
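
To confirm the image actually landed in the registry, list its tags (same PROJECT_ID as above):

gcloud container images list-tags gcr.io/PROJECT_ID/elasticsearch-gcs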

VPC network and client VM

We will create a dedicated VPC network for the cluster.

gcloud compute networks create elasticsearch-network --subnet-mode=custom

Create a subnet in our VPC network.

gcloud compute networks subnets create elasticsearch-subnet \
    --network=elasticsearch-network --range=10.50.0.0/16

Create a VM which will be used to run kubectl commands.

gcloud compute instances create es-cluster-proxy \
    --subnet=elasticsearch-subnet \
    --scopes=cloud-platform

Make a note of the VM’s internal IP, or retrieve it with:

export CLIENT_IP=$(gcloud compute instances describe es-cluster-proxy \
    --format="value(networkInterfaces[0].networkIP)")

Create a firewall rule to allow SSH access to the VPC network.

gcloud compute firewall-rules create elasticsearch-proxy-ssh --network elasticsearch-network  \
    --allow tcp:22
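
To double-check, the rules attached to our network can be listed:

gcloud compute firewall-rules list --filter="network:elasticsearch-network"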

Connectivity

Serverless VPC access

Create a Serverless VPC Access connector so that Cloud Functions can reach the VPC network. Do not forget to change the REGION parameter (e.g. europe-west2).

gcloud compute networks vpc-access connectors create elasticsearch-cluster \
--network elasticsearch-network \
--region REGION \
--range 10.9.8.0/28

Don’t forget to update all Cloud Functions to use the newly created VPC connector, by adding the following flag to their deployment:

--vpc-connector projects/PROJECT_ID/locations/REGION/connectors/elasticsearch-cluster
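
For instance, redeploying an existing function (my-function is a placeholder name here, and the other flags required by your function still apply) could look like this:

gcloud functions deploy my-function \
    --region REGION \
    --vpc-connector projects/PROJECT_ID/locations/REGION/connectors/elasticsearch-cluster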

VPC network peering

In order for our backend API to be able to make requests to the cluster, we need to peer the networks together, that is, the default network with our newly created elasticsearch-network.

gcloud compute networks peerings create elasticsearch-default-peer \
    --network=default \
    --peer-project PROJECT_ID \
    --peer-network elasticsearch-network \
    --auto-create-routes
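
The peering can then be checked; it should appear as ACTIVE:

gcloud compute networks peerings list --network=default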

Private cluster

Creation

Let’s continue by creating a private cluster, with no public IP on the Compute Engine instances used by the cluster. Do not forget to change the ZONE parameter (e.g. europe-west2-a) and to configure the cluster so that it fits your needs.

gcloud container clusters create "elasticsearch-cluster" \
  --zone "ZONE" \
  --no-enable-basic-auth \
  --cluster-version "1.14.10-gke.21" \
  --machine-type "n1-standard-1" \
  --image-type="cos_containerd" \
  --disk-type "pd-standard" \
  --disk-size "50" \
  --metadata disable-legacy-endpoints=true \
  --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
  --num-nodes "3" \
  --enable-stackdriver-kubernetes \
  --enable-ip-alias \
  --enable-private-nodes \
  --enable-private-endpoint \
  --master-ipv4-cidr 172.16.0.32/28 \
  --network elasticsearch-network \
  --subnetwork=elasticsearch-subnet \
  --no-issue-client-certificate \
  --addons HorizontalPodAutoscaling,HttpLoadBalancing \
  --enable-autoupgrade \
  --enable-autorepair

# Optional, can also be set at creation time instead of updating later:
#   --enable-master-authorized-networks \
#   --master-authorized-networks <IP>/32
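
Once creation completes (this takes several minutes), a quick way to confirm that the control plane got a private endpoint is:

gcloud container clusters describe elasticsearch-cluster --zone ZONE \
    --format="value(privateClusterConfig.privateEndpoint)"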

Allow access to VM

We update our cluster to authorize access from our previously created VM.

gcloud container clusters update elasticsearch-cluster \
    --enable-master-authorized-networks \
    --master-authorized-networks <VM_IP>/32

Note that this can also be done at cluster creation.

Elasticsearch cluster

Connect to VM

Connect to the proxy VM, either from the GCP console in the browser or using gcloud.

gcloud compute ssh es-cluster-proxy

Install tools

Install kubectl.

sudo apt-get install kubectl

Install helm.

curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh

Add the Elastic Helm charts repository.

helm repo add elastic https://helm.elastic.co
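
Refresh the local chart index and check that the chart is visible:

helm repo update
helm search repo elastic/elasticsearch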

Connect to cluster

Get cluster credentials.

gcloud container clusters get-credentials elasticsearch-cluster \
--zone ZONE --internal-ip
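
If the authorized network and credentials are correct, kubectl should now reach the private endpoint:

kubectl get nodes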

Prepare configuration

Create a YAML file named es-config.yml (it is referenced by the install command below) to customize the Elasticsearch installation (replace PROJECT_ID).

---
service:
  type: LoadBalancer
  annotations:
    cloud.google.com/load-balancer-type: Internal
replicas: 2

image: "gcr.io/PROJECT_ID/elasticsearch-gcs"
imageTag: "7.5.0"
resources:
  requests:
    cpu: "100m"

Here, we add an annotation so that an internal load balancer is deployed, exposing our cluster on the internal network only. We also point the chart at the image to be used, that is, our internal image.

Install elasticsearch

Install Elasticsearch with Helm.

helm install --values ./es-config.yml elasticsearch elastic/elasticsearch
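
The pods take a little while to come up; they can be watched with a label selector (app=elasticsearch-master is the label the chart applies with its default clusterName and nodeGroup values, so adjust it if you changed those):

kubectl get pods -l app=elasticsearch-master -w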

Check

In order to verify the deployment, make a request to the IP of the internal load balancer (a 10.50.0.xxx address taken from our subnet range).

curl http://10.50.0.xxx:9200
curl "http://10.50.0.xxx:9200/_cluster/state?pretty"
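
The health endpoint gives a more compact summary; with 2 replicas, the status should eventually reach green:

curl "http://10.50.0.xxx:9200/_cluster/health?pretty"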

Uninstall

If, for any reason, you need to uninstall Elasticsearch from the cluster, connect back to the VM, point kubectl at the cluster, then:

helm uninstall elasticsearch
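
Note that the persistent volume claims created by the chart’s StatefulSet survive the uninstall; if the data is no longer needed, remove them as well (assuming the default app=elasticsearch-master label):

kubectl delete pvc -l app=elasticsearch-master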

Notes

# Get cluster state
kubectl get all

# Get pods
kubectl get pods

# Check a pod
kubectl describe pod <pod-name>

# List GCP images
gcloud container images list

# Configure password
kubectl create secret generic elastic-credentials --from-literal=password=test --from-literal=username=elastic

# Delete secrets
kubectl delete secrets elastic-credentials
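
The elastic-credentials secret created above can be wired into the chart through the extraEnvs value of es-config.yml; this follows the pattern from the chart’s security examples, so double-check it against the chart version you use:

extraEnvs:
  - name: ELASTIC_USERNAME
    valueFrom:
      secretKeyRef:
        name: elastic-credentials
        key: username
  - name: ELASTIC_PASSWORD
    valueFrom:
      secretKeyRef:
        name: elastic-credentials
        key: password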

Sources

This article is the result of aggregating multiple sources of documentation; it is advised to go through each of them to get a better understanding of the whole process.
