
Friday, 26 January 2018

Running gcloud/kubectl commands in a Docker container.

In this blog, we will see how to authenticate with the Google Cloud console from a Docker container using a service account.

I had a scenario where I needed to run some gcloud commands from the Docker container as a prerequisite for running kubectl commands.

Example: initializing the .kube folder with the config file (Google Cloud cluster config).

Steps:

1. Create a service account with the privileges required for calling the Google APIs.
2. Download the service account JSON key file to your local machine.
3. Create a Dockerfile that includes the Google Cloud SDK and other components (kubectl, in my case); a sketch is shown after this list.
4. Pass the service account information to the Docker container using environment variables.
5. Create the service account JSON file on the fly inside the container from the provided environment variable values.
6. Run the gcloud service account activation command and pass the service account JSON file to it.
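
For step 3, a minimal Dockerfile sketch could look like the following. It is only an illustration: it assumes the official google/cloud-sdk Alpine image and the helper scripts described below (generate.sh and init.sh); adjust names and versions to your setup.

FROM google/cloud-sdk:alpine

# kubectl is not bundled with the Alpine SDK image, so install it as a component
RUN gcloud components install kubectl --quiet

WORKDIR /workspace

# helper scripts described in the steps below
COPY generate.sh init.sh ./

# generate the key file and authenticate when the container starts
CMD ["sh", "./init.sh"]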

In Brief: 

The first three steps are simple, and a lot of documentation is available for them, so I will start with the fourth one.

The service account information should not be copied directly into the image. It must be passed in through secrets or environment variables, which makes it more secure and configurable.

We can write a shell script that creates the service account JSON file dynamically inside the container from the environment variables. We can copy that shell script into the container and either keep it as the entry point or run it manually to generate the service account JSON file.

Here is the link for creating a JSON file dynamically inside the container.
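
For reference, a minimal generate.sh could look like the sketch below. It assumes the service account key is passed in an environment variable named GCLOUD_SERVICE_KEY; the variable name is illustrative.

#!/bin/ash
# Write the service account key passed via the environment into ./secrets/account.json
mkdir -p ./secrets
echo "$GCLOUD_SERVICE_KEY" > ./secrets/account.json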

Once the file is generated, we can use the following commands to activate the service account and perform other operations:

./secrets is the folder where the account.json file is generated from the environment variable.

1. gcloud auth activate-service-account --key-file ./secrets/account.json
2. gcloud --quiet config set project $project
3. gcloud --quiet config set compute/zone $zone
4. gcloud container clusters get-credentials $cluster_name --zone $zone --project $project

We can also wrap the above four gcloud commands in one shell script and run that script file instead of running the commands individually.

Let's name the file init.sh:

#!/bin/ash

sh ./generate.sh

gcloud auth activate-service-account --key-file ./secrets/account.json
gcloud --quiet config set project $project
gcloud --quiet config set compute/zone $zone
gcloud container clusters get-credentials $cluster_name --zone $zone --project $project

Here, sh ./generate.sh generates the service account JSON file in the secrets folder.

Now, let's just run the init file, and we are done:

sh ./init.sh
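
Putting it together, the container could be started with something like the following. The image name and the way the key is passed are illustrative; in a real setup the key would typically come from a secret rather than the command line.

docker run -it \
  -e GCLOUD_SERVICE_KEY="$(cat account.json)" \
  -e project=my-gcp-project \
  -e zone=us-central1-a \
  -e cluster_name=my-cluster \
  my-gcloud-image /bin/ash -c "sh ./init.sh"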

In the next blog, I will show you how to provision a Google Container Engine cluster using Terraform.

Sunday, 16 July 2017

Debugging a Kubernetes Pod (Node.js Application)

Debugging a Node.js application is very easy if it is running locally, but when it is deployed on Kubernetes, it requires a lot more effort.

Every time you find a bug, you rebuild your image, redeploy your pod, and start debugging again.

In this approach, we will attach a debugger to a running pod (Node.js instance) in Kubernetes and debug our application using the Chrome DevTools.

We update our image with a bash script that decides whether to run the application in debug mode or normal mode. The script checks whether the environment variable "DEBUG_MODE" is defined; if it is not, it runs the application in normal mode. We pass that environment variable through the deployment yaml/json file.

The main advantage of using a bash script is that once you have completed your debugging and want to start the pod in normal mode, you just remove the environment variable from the yaml and restart the pod, and it runs normally again. This saves the time spent updating code and rebuilding the image.

Let's start with the implementation :

1. Create the bash script.
2. Update the Dockerfile.
3. Create a pod with the newly built image.
4. Port-forward the pod.

1. Creating Bash Script : 

I am using node:alpine as the base image because it is pretty lightweight, so the shell is /bin/ash instead of /bin/bash. Change the first line of the script based on the base image you are using.

In this script, I am using an optional "DEBUG_FILE" variable, which allows us to provide a file path while debugging.

The script is pretty simple: it first checks whether "DEBUG_MODE" is defined (without checking its value); if it is defined, it starts Node with the inspector enabled so that the Chrome DevTools can attach (node --debug-brk --inspect app.js).

Note: replace app.js in the bash script with your own startup file name.

#!/bin/ash
echo "
check-mode.sh checks whether debugging is ON or not while initiating a container.
It accepts two environment variables :
a. DEBUG_MODE (mandatory for debugging)
b. DEBUG_FILE (optional file path for debugging)
Example : docker run -it -e DEBUG_MODE=debug -e DEBUG_FILE=app.js 'imagename' /bin/ash
Example : kubectl --namespace=app-debug port-forward backend-0 9229:9229"

if [ -z "$DEBUG_MODE" ]
then
    echo "DEBUG_MODE is not defined, initiating without debugging.."
    node app.js
else
    echo
    echo "---- 1. Environment Variable DEBUG_MODE is Defined -----"
    echo "---- 2. Checking whether Environment Variable DEBUG_FILE is defined,
    and whether the file exists at that path ----"

    if [ ! -z "$DEBUG_FILE" ] && [ -f "$DEBUG_FILE" ]
    then
        echo "---- 3. Environment Variable DEBUG_FILE is defined and the file exists ----"
        echo
        node --debug-brk --inspect "$DEBUG_FILE"
    else
        echo "----- 3. DEBUG_FILE or the file path doesn't exist ----"
        echo "----- 4. Debugging the default entry point app.js ----"
        echo
        node --debug-brk --inspect app.js
    fi
fi


2. Update the Dockerfile:

FROM node:6.10.3-alpine

ENV NODE_ENV=development app="/home/app"

RUN mkdir "/home/app"

WORKDIR "$app"

# Copy the package manifest first so dependencies can be installed
COPY "package.json" "$app"

RUN npm install --production

COPY "app.js" "$app"

COPY "check-mode.sh" "$app"

# Application port; the debugger port (9229) does not need to be exposed for kubectl port-forward
EXPOSE 3000

RUN chmod +x $app/check-mode.sh

# The script decides between normal and debug mode at container start
ENTRYPOINT $app/check-mode.sh


3. Create a pod with the newly built image:

After the new image is successfully built using the above Dockerfile, we can create a new pod on Kubernetes with the newly built image. Also make sure to pass the "DEBUG_MODE" environment variable in the pod yaml/json. The value of the env variable doesn't matter right now, as the script only checks whether it is defined or not.
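
For reference, the container section of the pod yaml/json could include an env block like the sketch below; the pod and image names are placeholders, and only DEBUG_MODE needs to be present (DEBUG_FILE is optional):

"containers": [{
    "name": "testenv",
    "image": "your-registry/app-debug:latest",
    "env": [
        { "name": "DEBUG_MODE", "value": "debug" },
        { "name": "DEBUG_FILE", "value": "app.js" }
    ]
}]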

After the pod is created, you can see in the logs that the debugger is listening on some port; the default port is generally 9229, but it can vary.

Here is the docker run output:

docker run -it -e DEBUG_MODE=debug -e DEBUG_FILE=app.js 30657b10fb02 /bin/ash
Externally, I have passed the environment variables using the -e flag.

Here is the Kubernetes pod output:

The pod logs output shows that the debugger is running on port 9229.



Environment variable declared in the pod yaml/json.

Now, in the final step, we will port-forward the pod to local using the kubectl command line and attach it to chrome://inspect.


4. Port-Forward the pod :

To attach the running debugger to the local chrome://inspect, we need to port-forward it to local.

Using kubectl, we can port-forward the running pod to the local machine.

Command : kubectl --namespace=<your namespace name> port-forward <pod name> <debugger port in the pod>

Example : kubectl --namespace=default port-forward testenv-0 9229:9229

Here is the output you will get after port-forwarding :


After successfully port-forwarding, we can open the Chrome DevTools to start debugging:

a. Type chrome://inspect in a new browser tab.
b. Under Remote Target, you will see the startup file of your pod.



Now, after your debugging is completed, just remove the environment variable from the pod yaml/json and restart the pod. It will work as a normal instance.

This is a one-time investment: anytime you want to attach a debugger to a pod, just update the environment variable. You don't need to rebuild your image and redeploy.

Note: if you ever face an issue with the bash script while building the Docker image, open the script file in the Sublime Text editor, go to View -> Line Endings -> Unix, and save the file again.

Sunday, 25 June 2017

Accessing the Kubernetes API Server From The Pod.

The Kubernetes API server can be accessed from a pod at the following URL: https://kubernetes.default.

To authenticate against the API server, we also need to pass the service account token and the "ca cert". Once that is done, we can perform all the operations that are permitted for that service account.

Let's say we have deployed our code as a pod in the Kubernetes cluster, and the same code is responsible for creating other stateful sets / replica sets / services / namespaces. In that case we need to authenticate to the API server, and using kubernetes-client we can create our deployments.


I am using the godaddy kubernetes-client library for creating namespaces, deployments, and statefulsets.

The "token" and "ca cert" resides at the following location in the pod :

a. token : /var/run/secrets/kubernetes.io/serviceaccount/token
b. ca-cert : /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
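
To quickly verify that these credentials work, a raw API call with curl from inside the pod could look like the sketch below (this is just a sanity check, not part of the library setup; depending on the service account's permissions it returns either the namespace list or a permissions error, but either way authentication is working):

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     -H "Authorization: Bearer $TOKEN" \
     https://kubernetes.default/api/v1/namespaces

The same two files are used below to build the request options for the kubernetes-client objects: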


// assuming the godaddy kubernetes-client library and Node's built-in fs module
const fs = require('fs');
const Api = require('kubernetes-client');

let k8score, k8s;

let getRequestInfo = () => {
    return {
        url: "https://kubernetes.default",
        ca: fs.readFileSync('/var/run/secrets/kubernetes.io/serviceaccount/ca.crt').toString(),
        auth: {
            bearer: fs.readFileSync('/var/run/secrets/kubernetes.io/serviceaccount/token').toString()
        },
        timeout: 1500
    };
};

let initK8objs = () => {
    let k8obj = getRequestInfo();
    k8score = new Api.Core(k8obj);
    k8s = new Api.Api(k8obj);
};

Once the authentication is done, we can use the k8score and k8s objects created above to perform CRUD operations on the API server.

Ex: k8s.group("v1").ns().post('/json-path') will create a new namespace.
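
For comparison, the equivalent operation against the raw API (without the client library) is a POST to the namespaces endpoint; the namespace name here is just an example:

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -X POST \
     --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     -H "Authorization: Bearer $TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"apiVersion":"v1","kind":"Namespace","metadata":{"name":"demo-namespace"}}' \
     https://kubernetes.default/api/v1/namespaces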

The other way of authenticating is to pass the cluster username and password along with the "ca-cert", also known as basic authentication. In the case below, we need to pass the user and password to the pod through either environment variables or secrets.

const core = new Api.Core({
  url: 'https://kubernetes.default',
  ca: fs.readFileSync('cluster-ca.pem'),
  auth: {
    user: 'user',
    pass: 'pass'
  }
});

Sunday, 7 May 2017

Kubernetes Services and Their Endpoints.

It is very important to specify a proper selector while creating a Kubernetes service. The selectors should be unique so that each service discovers only its own pods.

If you specify a common selector for all the services, a service may end up pointing to multiple pods, and then it is very difficult to identify the real cause of problems. I was sometimes getting a connection-refused error when accessing a pod using a NodePort service.

In my case I had two pods (one named backend-manager and the other named engine), and I had created two NodePort services for them (bm and engine).

And here are the service definitions:

1. Engine Service :

"engine": { //Engine /
"apiVersion":"v1",
"kind":"Service",
"metadata":{
"name":"engine",
"namespace":`${tennantId}-${salt}`,
"labels":{
"app":"backend",
"tier":"engine"
}
},
"spec":{
"type":"NodePort",
"selector":{
"app":"backend"
},
"ports":[{
"port":3000,
"targetPort": 3000
}]
}
}


2. BM Service:

"bm": { //Backend Manager
"apiVersion":"v1",
"kind":"Service",
"metadata":{
"name":"bm",
"namespace":`${tennantId}-${salt}`,
"labels":{
"app":"backend",
"tier":"bm"
}
},
"spec":{
"type":"NodePort",
"selector":{
"app":"backend"
},
"ports":[{
"port":3001,
"targetPort": 3001
}]
}
}


As you can see, I mistakenly used the same selector in both services.

Let's look at the endpoints for the service using the URL below.

http://localhost:8000/api/v1/namespaces/myapps-fv92n/endpoints/bm

And here is the output :





As you can clearly see, different pods (both BM and Engine) are listed under the service endpoint (under subsets->addresses, highlighted above). Actually, only BM should be listed.

I then modified both service definitions and added one more selector to each (a corrected bm spec is sketched after the list below):

a. "tier:bm" in BM service.
b. "tier:engine" in Engine Service.

And here is the output for the service endpoint; only BM is now listed under the bm service endpoint.



I was sometimes getting the connection-refused error while accessing the pods using the NodePort service, and after updating the service definitions this issue was resolved.


Below are the steps for constructing the endpoint URL:

We can get the endpoints by sending a GET request to the following URL.

The URL contains the following parts :

1. localhost:8000 : I have created a proxy with kubectl proxy --port=8000.

2. api/v1 : the API version.

3. namespaces : keyword.

4. {{namespace-name}} : your namespace name; if you created the services under the default namespace, use default.

5. endpoints : keyword.

6. {{service-name}} : in my case it is bm.

URL :  http://localhost:8000/api/v1/namespaces/myapps-fv92n/endpoints/bm
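
Alternatively, instead of constructing the URL by hand, kubectl can fetch the same object (the namespace and service names are taken from this example):

kubectl get endpoints bm --namespace=myapps-fv92n -o json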

Sunday, 23 April 2017

Accessing an Externally Hosted MongoDB inside Kubernetes or Minikube


We can directly access the externally hosted MongoDB inside the pod by using its public IP.

But if the IP of the externally hosted MongoDB changes in the future, we would have to update all the pods that access the database.

The better option is to create a service without a selector, so no endpoints will be created for it automatically. After that, we manually create the endpoints and provide the externally hosted MongoDB address there.

This way, when we access the service, it automatically routes to the endpoints created for it.

And if the public IP of the DB changes later on, we will not need to update the pods; we will only need to update the endpoints.

Below are the json files for it:

1. MongoDB Service without Selector:


{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {
        "name": "mongodb"
    },
    "spec": {
        "selector": {},
        "ports": [{
            "protocol": "TCP",
            "port": 27017,
            "targetPort": 27017
        }]
    }
}
As you can see, we have not provided any selector values, so no endpoints will be created for it automatically.
By default, MongoDB is accessible on its default port (27017).

2. Endpoints for the above service:

{
    "kind": "Endpoints",
    "apiVersion": "v1",
    "metadata": {
        "name": "mongodb"
    },
    "subsets": [{
        "addresses": [{
            "ip": "30.188.60.252"  // This is the external endpoint.
        }],
        "ports": [{
            "port": 27017,
            "protocol": "TCP"
        }]
    }]
}

Note: the name property value must match the newly created service name; that is how the endpoints and the service are associated.
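
If the external IP ever changes, only the Endpoints object needs to be updated and re-applied. Assuming the manifest above is saved as mongodb-endpoints.json (the file name is illustrative), that would be:

kubectl apply -f mongodb-endpoints.json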



Now, to access the external MongoDB, we can just use the service name directly.

Ex :  let constr = "mongodb://abcd:abcd@mongodb";

Or we can also use the service IP, which will proxy the traffic to the endpoint.
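
As a usage sketch, a Node.js application inside the cluster could connect through the service name like this (the credentials are the placeholders from the example above, and the official mongodb driver with its 3.x callback style is assumed):

const { MongoClient } = require('mongodb');

MongoClient.connect('mongodb://abcd:abcd@mongodb', (err, client) => {
    if (err) throw err;
    // "mongodb" in the connection string is the Kubernetes service name,
    // which resolves through the cluster DNS to the endpoint defined above.
    console.log('Connected to the external MongoDB through the service');
    client.close();
});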