Saturday, 23 June 2018

In Node.js, Update a Nested JSON Key with a New Value.

Most of the time while working with JSON, we come across a scenario where we would like to update an existing JSON key with a new value and return the updated JSON. The key could exist at the nth level, and the value can be in any form (JSON, string, number, ...).

Directly updating the root node is not that difficult.

The scenario becomes more complex when we need to update a node at the nth level with a new value.

I have written a basic API (with Node.js + Ramda) that can update a node at any level and return the updated JSON.

Here are the Git repositories for it:

1. https://github.com/UtkarshYeolekar/update-jsonkey  (node js + ramda)

2. https://github.com/UtkarshYeolekar/update-jsonkey-express/ (node js + express + ramda)

Let me explain this with an example:

Suppose we have the following JSON structure:

```
{
  "testing": {
    "test1": {
      "a": 11,
      "b": 232
    },
    "test2": {
      "xy": 233,
      "zz": "abc xyz",
      "json": {
        "msm": "sds",
        "abc": "weuewoew"
      }
    }
  }
}
```

Example 1:

Now suppose we need to update the value of the key "abc", which is not a direct child of the key "test2". We will need to iterate down to the "json" node and then update the value of "abc".

The key path is testing -> test2 -> json -> abc; we need to traverse this full path to update the "abc" node.

To update the node "abc", the API call would be:

Function prototype: api.updateJson(keyPathFromRoot, newValue, existingJson)

api.updateJson("/testing/test2/json/abc", "newvalue", json)


Example 2:

Now suppose we need to update the node "test2" with a new JSON value.

let newValue =
{
  "key1" : "value1",
  "key2" : "value2"
}


The API call would be :

api.updateJson("/testing/test2/",newValue, json)

Note that here the key path only goes as far as the node "test2". The key path always runs from the root down to the node we need to update.

Both Git repositories contain enough documentation to get started; see the README.md file in each repo.


Hope it helps.

Sunday, 15 April 2018

Sharing a Host Directory/Folder with a Docker Container.

In this blog, we are going to learn how we can mount an existing host folder/directory into a Docker container.

Imagine a scenario where you need to share local files with a Docker container, and whenever you modify the files or folders on your host machine, i.e. outside the container, you need that to be updated in the container as well.

This is possible by mounting a host directory into the Docker container. Let's check out the steps for it.

In this example, I am using the boot2docker VM. So the host here is the boot2docker VM, not the machine it is running on. But since boot2docker is a Linux VM running on VirtualBox, we can have folders from the machine mounted into the VM as host folders.

We can check this by going to Oracle VirtualBox -> boot2docker VM -> Settings -> Shared Folders. Here you can see that c/Users is already mounted there.



To mount host folders other than c/Users, we first need to share them with the VM; only then can we mount them into the container. For this session, we will use the already shared folder.
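
For reference, a folder can also be shared with the VM from the host's command line via VBoxManage (the VM name and paths below are assumptions; boot2docker's default VM name is "boot2docker-vm"):

```
# run on the host machine, not inside the VM:
# share C:\data with the boot2docker VM under the name "data"
VBoxManage sharedfolder add "boot2docker-vm" --name "data" --hostpath "C:\data" --automount
```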

Let's start mounting the host folder:

1. Let's first create a folder under the c/Users directory for hosting our code files. I have created a folder named "terraform" under c/Users, which contains some JavaScript and JSON files.

2. Now let's mount the terraform folder into the container at the /home/app/config path.
 
Command: docker run -v "hostFolder:folderInContainer" imageName

docker run -it -v "/c/Users/terraform:/home/app/config" terraform /bin/ash

Here I am mounting the terraform host folder at the /home/app/config directory in the container, so all the contents of the terraform directory will be listed under the config folder in the container.

3. Let's check whether our files/content exist in the config folder: just "cd" to the config folder and execute the "ls" command to list the files, as shown below.
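
Assuming the mount path used above, the check inside the container looks like this:

```
# inside the container
cd /home/app/config
ls
# the JavaScript and JSON files from the host's /c/Users/terraform
# folder should be listed here
```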


As we can see in the screenshot above, a couple of JSON and JavaScript files are listed there.

The good thing about this is that whenever we make changes to the host folder outside the container, they are automatically synced/reflected in the container. Try it yourself by adding a couple of files and folders to the host folder and then listing the files in the container; you will be amazed to see that the changes are reflected there as well.
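
For example, with the same paths as above:

```
# on the host (boot2docker VM): add a file to the shared folder
echo "hello" > /c/Users/terraform/hello.txt

# inside the container: the new file shows up immediately
ls /home/app/config
```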

Thanks for reading this blog.



Friday, 26 January 2018

Running gcloud/kubectl commands in a Docker container.

In this blog, we will see how we can authenticate with Google Cloud from a Docker container using a service account.

I had a scenario where I needed to run some gcloud commands from a Docker container as a prerequisite for running kubectl commands.

Example: initializing the .kube folder with the config file (the Google Cloud cluster config).

Steps:

1. Create a service account with the privileges you require for calling the Google APIs.
2. Download the service account JSON file to your local machine.
3. Create a Dockerfile that includes the Google Cloud SDK and other components, such as kubectl in my case (see the sketch after this list).
4. Pass the service account information to the Docker container using environment variables.
5. Create the service account JSON file on the fly in the Docker container using the provided environment variable values.
6. Run the gcloud service account activation command and pass the service account JSON file to it.
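
As a sketch for step 3 (the base image and file names here are assumptions, not a prescribed setup), the Dockerfile could look something like this:

```
# Alpine-based Cloud SDK image with kubectl added
FROM google/cloud-sdk:alpine
RUN gcloud components install kubectl

# copy in the helper scripts discussed below
WORKDIR /home/app
COPY generate.sh init.sh ./
RUN mkdir -p /home/app/secrets
```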

In Brief: 

The first three steps are simple, and a lot of documentation is available for them (the Dockerfile sketch above covers step 3). I will start with the fourth one.

Service account information should not be copied directly into the image. It must be passed in through secrets or environment variables, which makes it more secure and configurable.
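
For example, the values can be supplied at run time through Docker's --env-file option (the image and file names here are placeholders):

```
# service-account.env holds type=..., project_id=..., private_key=..., etc.
docker run -it --env-file ./service-account.env my-gcloud-image /bin/ash
```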

We can write a shell script that creates the service account JSON file dynamically in the container using the environment variables. We can copy that shell script into the container and keep it as an entry point, or run it manually to generate the service account JSON file.

See the next post below on creating a JSON file dynamically inside the container.

Once the file is generated, we can use the following commands to activate the service account and perform other operations:

./secrets is the folder where the account.json file is generated from the environment variables.

1. gcloud auth activate-service-account --key-file ./secrets/account.json
2. gcloud --quiet config set project $project
3. gcloud --quiet config set compute/zone $zone
4. gcloud container clusters get-credentials $cluster_name --zone $zone --project $project

We can also wrap the above four gcloud commands in one shell script and run that script file instead of running the commands independently.

Let's name the file init.sh:

#!/bin/ash

# generate the service account JSON file from the environment variables
sh ./generate.sh

# activate the service account and point gcloud/kubectl at the cluster
gcloud auth activate-service-account --key-file ./secrets/account.json
gcloud --quiet config set project $project
gcloud --quiet config set compute/zone $zone
gcloud container clusters get-credentials $cluster_name --zone $zone --project $project

Here, ./generate.sh generates the service account JSON file in the secrets folder.

Now let's just run the init file, and we are done:

sh ./init.sh
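
After the init script completes, the kubeconfig is populated and kubectl should be wired to the cluster. A quick sanity check (assuming kubectl is installed in the image):

```
# list the cluster's nodes to confirm authentication worked
kubectl get nodes
```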

In the next blog, I will show you how we can provision a Google Container Engine cluster using Terraform.

How to create/generate a JSON file dynamically using a shell script.

In this post, we will see how we can dynamically generate/create a JSON file using a shell script.

Some days back, I had a scenario where I needed to generate a JSON file in a Docker container using environment variables, whose values were passed into the Docker container through an environment file.

We will start by writing a shell/bash script. Let's name it generate.sh:

#!/bin/ash

# Write the service account JSON file, substituting the environment
# variable values into the template. The variables are quoted so the
# output is valid JSON (assuming the values themselves contain no
# unescaped double quotes).
cat > /home/app/secrets/account.json << EOF
{
  "type": "$type",
  "project_id": "$project_id",
  "private_key": "$private_key",
  "client_email": "$client_email",
  "client_id": "$client_id",
  "auth_uri": "$auth_uri",
  "token_uri": "$token_uri"
}
EOF

We can now save this file. Here $type, $project_id, $private_key, and so on are the environment variables.
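
For reference, an environment file feeding these variables could look like this (all values are placeholders; note that docker run --env-file passes values literally, so no surrounding quotes are needed there):

```
# service-account.env (placeholder values)
type=service_account
project_id=my-gcp-project
private_key=-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n
client_email=my-sa@my-gcp-project.iam.gserviceaccount.com
client_id=123456789
auth_uri=https://accounts.google.com/o/oauth2/auth
token_uri=https://oauth2.googleapis.com/token
```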

Now we can run this shell script by executing the following command:

sh generate.sh

And this will generate a JSON file in the /home/app/secrets/ folder.

In the shebang, I have used #!/bin/ash because I was using an Alpine Docker image, where ash is the default shell.