Getting Started with Google Container Engine - Tutorial
Tue, May 23, 2017
Kubernetes provides orchestration for running containerized applications. In this tutorial we’ll walk through the basics of using Google Container Engine, or GKE, which provides a managed Kubernetes service.
This article combines and customizes material from several other tutorials, such as Quickstart for Google Container Engine, Container Cluster Operations, and the Kubernetes Reference Documentation, to provide a base for future articles.
- Create a cluster
- Switch between clusters
- Deploy a Docker application
- Expose the application as a Service
- Scale the application
Before you begin
This tutorial requires a few components to be set up and ready for use before starting.
Enable Container Engine
Take the following steps to enable the Google Container Engine API:
- Visit the Container Engine page in the Google Cloud Platform Console.
- Create or select a project.
- Wait for the API and related services to be enabled. This can take several minutes.
- Enable billing for your project.
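If you prefer to stay in the terminal, the API can also be enabled with the Cloud SDK. A sketch, assuming gcloud is already installed and authenticated against your project (the exact subcommand and service name have changed across SDK releases, so check `gcloud help` for your version):

```shell
# Enable the Container Engine (GKE) API for the active project.
gcloud services enable container.googleapis.com

# Confirm it shows up in the list of enabled services.
gcloud services list --enabled
```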
Install Google Cloud SDK command line client
- Install gcloud, the Google Cloud SDK
Install kubectl command line client
Once your Google Cloud SDK is set up, you can install the Kubernetes client, kubectl, with the following command:
$ gcloud components install kubectl
Create a Cluster
Before we can start using GKE we’ll need to create a cluster.
Set a default Compute Engine zone.
$ gcloud config set compute/zone us-central1-b
You can view your defaults in the gcloud command-line tool by running the following command:
$ gcloud config list
Create a cluster (this step can take a few minutes to complete).
$ gcloud container clusters create example-cluster
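With no extra flags, create uses the project defaults for node count and machine type. A sketch of a more explicit invocation (the flag names are standard gcloud flags; the values here are illustrative, not recommendations):

```shell
# Example: a three-node cluster of small instances in an explicit zone.
gcloud container clusters create example-cluster \
    --num-nodes=3 \
    --machine-type=n1-standard-1 \
    --zone=us-central1-b
```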
Switching Between Clusters
You may have access to multiple clusters in various projects. Often it’s necessary to point your client at a different cluster.
Review the clusters available in the project:
$ gcloud container clusters list
Set the default cluster for the gcloud command-line tool:
$ gcloud config set container/cluster example-cluster
Google Container Engine uses the kubectl command to manage resources in your cluster. If you have more than one cluster, you must tell kubectl which cluster to target. To configure kubectl for a specific cluster, run the following command in your shell or terminal window:
$ gcloud container clusters get-credentials example-cluster
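Under the hood, get-credentials writes an entry into your kubeconfig file. You can inspect and switch between those entries with kubectl itself; the context name below is illustrative (GKE contexts follow the pattern gke_PROJECT_ZONE_CLUSTER):

```shell
# List the contexts kubectl knows about; the current one is starred.
kubectl config get-contexts

# Point kubectl at a different cluster's context.
kubectl config use-context gke_my-project_us-central1-b_example-cluster
```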
kubectl is configured to use Application Default Credentials to authenticate to the cluster. Ensure it has the right credentials by running
$ gcloud auth application-default login
This opens a browser window prompting you to log in with your desired account.
Deploy a Docker Application
In this step we’re going to deploy a prebuilt sample Node application described in the official docs. In future tutorials we’ll be creating our own Docker applications, building and deploying them.
Deploy and run the sample application:
$ kubectl run hello-node --image=gcr.io/google-samples/node-hello:1.0 --port=8080
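For reference, kubectl run here is shorthand for creating a Deployment. A roughly equivalent declarative form is sketched below; the apiVersion depends on your cluster release (newer clusters use apps/v1, older ones may require extensions/v1beta1), and the label key mirrors the run=NAME label that kubectl run applies:

```shell
# Declarative equivalent of the kubectl run command above (a sketch).
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node
spec:
  replicas: 1
  selector:
    matchLabels:
      run: hello-node
  template:
    metadata:
      labels:
        run: hello-node
    spec:
      containers:
      - name: hello-node
        image: gcr.io/google-samples/node-hello:1.0
        ports:
        - containerPort: 8080
EOF
```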
Review the pod status:
$ kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
hello-node-3526609615-pl732   1/1       Running   0          15m
Expose the Application as a Service
At this point the application is running in our container. While other applications within the cluster can access it, we currently don’t have a way to access it from outside the cluster.
To expose the application externally we’ll define a Service for it:
$ kubectl expose deployment hello-node --type="LoadBalancer"
service "hello-node" exposed
This command creates a Service resource within the cluster. The --type="LoadBalancer" option requests that Google Cloud Platform provision a load balancer for your container, which is billed per the regular load balancer pricing.
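As with the deployment, the expose command has a declarative equivalent. A sketch (the selector assumes the run=hello-node label that kubectl run applies to its pods):

```shell
# Declarative form of the Service created by kubectl expose (a sketch).
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: hello-node
spec:
  type: LoadBalancer
  selector:
    run: hello-node
  ports:
  - port: 8080
    targetPort: 8080
EOF
```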
Copy the external IP address for the hello-node app:
$ kubectl get service hello-node
NAME         CLUSTER-IP     EXTERNAL-IP      PORT(S)          AGE
hello-node   10.3.250.198   22.214.171.124   8080:32083/TCP   58s
View the app (replace EXTERNAL-IP with the external IP address you obtained in the previous step).
$ open http://EXTERNAL-IP:8080
Note: You might need to wait several minutes for an external IP address to be assigned. If you don’t see one, run kubectl get service hello-node again.
Scale the application
Scaling your application is typically managed by setting a base number of pods and then utilizing autoscaling capabilities. In this example, though, we’re going to demonstrate scaling manually.
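For completeness, the autoscaling approach mentioned above can be sketched with kubectl autoscale; the thresholds below are illustrative, and CPU-based autoscaling requires cluster metrics to be available:

```shell
# Let Kubernetes scale between 1 and 5 replicas based on CPU usage.
kubectl autoscale deployment hello-node --min=1 --max=5 --cpu-percent=80
```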
If we list the pods we’ll see 1/1 under READY. This means we requested one replica and there is indeed one running.
$ kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
hello-node-3526609615-pl732   1/1       Running   0          45m
When we deployed the Node application, Kubernetes also created a Deployment (with an underlying replica set) to manage how many replicas of the application are running. We can see this by listing the deployments:
$ kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-node   1         1         1            1           54m
Now let’s scale the deployment so that we have three pods running:
$ kubectl scale --replicas=3 deployment/hello-node
deployment "hello-node" scaled
And review that everything has now been updated:
$ kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-node   3         3         3            3           54m
$ kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
hello-node-3526609615-cf9jn   1/1       Running   0          24s
hello-node-3526609615-p3nqk   1/1       Running   0          24s
hello-node-3526609615-pl732   1/1       Running   0          55m
Cleaning Up
To avoid incurring charges to your Google Cloud Platform account for the resources used in this quickstart:
Delete the service you created to remove the load balancing resources provisioned to serve it:
$ kubectl delete service hello-node
Wait a few minutes for the service to spin down, then use the following command to delete the cluster you created:
$ gcloud container clusters delete example-cluster
#fieldnotes/kubernetes #google #gcp #kubernetes #gke