Sunday, 13 August 2017

Authenticating With The Docker Hub V2 API

This example shows how to authenticate with the Docker Hub v2 API and then fetch the details/tags of a private repository.

Recently, I got this task and was looking for an example. A public repository can be accessed directly using the v2 API, but to get a private one we must authenticate first.

I read their documentation and tried it with the Postman client first; once that was working, I wrote some sample code.

First, let's go through the Postman client:

There are two steps:

1. Getting the Auth Token by passing the username and password (POST)
2. Using that auth token, query the Docker Hub v2 API (GET)


1. Getting the Auth Token : To get the auth token, we need to send a POST request to https://hub.docker.com/v2/users/login/ with the username and password in the body. In return, it gives back the auth token.


Post request to get the auth token.

2. Using that Auth token, query the Docker Hub v2 API : Using the auth token obtained above, we can query the v2 API to get the private repository's tag information. The auth token needs to be passed in the request headers.

Repository endpoint : https://hub.docker.com/v2/repositories/username/private-repo/tags



This will in turn return the tags of the repository.
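
If you want to try the same two calls outside Postman, roughly equivalent curl commands look like this (the username, password and repository names are placeholders):

# 1. Get the auth token
curl -s -H "Content-Type: application/json" \
     -d '{"username": "myuser", "password": "mypassword"}' \
     https://hub.docker.com/v2/users/login/

# 2. Use the token to fetch the private repository tags
curl -s -H "Authorization: Bearer <token-from-step-1>" \
     https://hub.docker.com/v2/repositories/myuser/private-repo/tags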

In Node.js:

1. Getting Auth token :

let dockerConfig = require('./config.js').dockerConfig,
    rp = require('request-promise'),
    _ = require('lodash'),
    R = require('ramda');


let getAuthToken = (username, password) => {
    // POST the credentials to the login endpoint; resolves with { token: "..." }
    let options = {
        method: 'POST',
        uri: `${dockerConfig.loginEndpoint}`,
        body: {
            "username": `${username}`,
            "password": `${password}`
        },
        json: true
    };
    return rp(options);
}

2. Getting the private repository tags using token :

let getImageTags = (username, repository, authtoken) => {
    // GET the tags of the repository, passing the auth token in the headers
    let options = {
        method: 'GET',
        uri: `${dockerConfig.repositoryEndPoint}/${username}/${repository}/tags`,
        headers: {
            Authorization: `Bearer ${authtoken}`
        },
        json: true
    };
    return rp(options);
}

Both functions return promises; we can call them like this:

getAuthToken(dockerConfig.username, dockerConfig.password)
    .then((tokenInfo) => {
        console.log("token received");
        return getImageTags(dockerConfig.username, dockerConfig.repository, tokenInfo.token);
    })
    .then((tags) => {
        if (!_.isUndefined(tags) && !_.isNull(tags) && tags.count > 0) {
            let result = tags.results.map((tag) => (R.pick(["name"], tag)));
            console.log(result);
        }
        else
            console.log("No tags found");
    })
    .catch((err) => {
        console.error("Error occurred ", err.message);
    });


3. Config file looks like this; you will need to update it with your Docker Hub info.

module.exports = {
    dockerConfig: {
        loginEndpoint: "https://hub.docker.com/v2/users/login/",
        username: "username",
        password: "password",
        repository: "private_repo",
        repositoryEndPoint: "https://hub.docker.com/v2/repositories",
        tagsEndPoint: "tags"
    }
};

Here is the full working example; you can just clone it and start working:
https://github.com/UtkarshYeolekar/docker-auth-example

Hope it helps, Thanks!


Sunday, 16 July 2017

Debugging a Kubernetes Pod (Node.js Application)

Debugging a Node.js application is very easy when it is running locally, but when it is deployed on Kubernetes it requires a lot more effort.

Every time you find a bug, you re-build your image, re-deploy your pod and start debugging again.

In this approach, we will attach a debugger to a running pod (Node.js instance) in Kubernetes and debug our application using the Chrome DevTools.

We update our instance image with a bash script that decides whether to run the application in debug mode or normal mode. The script checks whether the environment variable "DEBUG_MODE" is defined; if not, it runs the application in normal mode. We pass that environment variable through the deployment yaml/json file.

The main advantage of using a bash script is that once you have finished debugging and want to start the pod in normal mode, you just remove the environment variable from the yaml and restart the pod; it then runs normally again. This saves the time spent updating code and re-building the image.

Let's start with the implementation :

1. Bash Script
2. Update Dockerfile.
3. Create Pod with the newly created image.
4. Port-Forward the pod.

1. Creating Bash Script : 

I am using node:alpine as the base image, which is pretty lightweight, so the shell is /bin/ash instead of /bin/bash. Do change the first line of the script based on the base image you are using.

In this script, I am using an optional "DEBUG_FILE" variable, which allows us to provide a file path to debug.

The script is pretty simple: it first checks whether "DEBUG_MODE" is defined (it does not check for any particular value); if it is defined, it starts Node with the inspector attached (node --debug-brk --inspect app.js) so that Chrome DevTools can connect.

Note: Update the startup file name in place of app.js in the bash script.

#!/bin/ash
echo "
check-mode.sh checks whether debugging is ON or not while initiating a container.
It accepts two environment variables :
a. DEBUG_MODE (mandatory for debugging)
b. DEBUG_FILE (optional file path for debugging)
Example : docker run -it -e DEBUG_MODE=debug -e DEBUG_FILE=app.js 'imagename' /bin/ash
Example : kubectl --namespace=app-debug port-forward backend-0 9229:9229"

if [ -z "$DEBUG_MODE" ]
then
    echo "DEBUG_MODE is not defined, initiating without debugging.."
    node app.js
else
    echo
    echo "---- 1. Environment Variable DEBUG_MODE is Defined -----"
    echo "---- 2. Checking whether Environment Variable DEBUG_FILE is defined,
    and whether the file exists at that path ----"

    if [ ! -z "$DEBUG_FILE" ] && [ -f "$DEBUG_FILE" ]
    then
        echo "---- 3. Environment Variable DEBUG_FILE is defined and the file exists ----"
        echo
        node --debug-brk --inspect "$DEBUG_FILE"
    else
        echo "----- 3. DEBUG_FILE or the file path doesn't exist ----"
        echo "----- 4. Debugging the default entry point app.js ----"
        echo
        node --debug-brk --inspect app.js
    fi
fi


2. Update the Dockerfile :

FROM node:6.10.3-alpine

ENV NODE_ENV=development app="/home/app"

RUN mkdir "/home/app"

WORKDIR "$app"

# package.json needs to be present before installing the dependencies
COPY "package.json" "$app"

RUN npm install --production

COPY "app.js" "$app"

COPY "check-mode.sh" "$app"

EXPOSE 3000

RUN chmod +x $app/check-mode.sh

ENTRYPOINT $app/check-mode.sh


3. Create Pod with the newly created image:

After the new image is successfully built using the above Dockerfile, we can create a new pod on Kubernetes with the newly created image. Also make sure to pass the "DEBUG_MODE" environment variable in the pod yaml/json. The value of the variable doesn't matter right now, as the script only checks whether it is defined or not.
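
For reference, a minimal sketch of a pod definition with that variable set (the names and image here are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: testenv-0
spec:
  containers:
  - name: backend
    image: your-registry/your-debug-image   # the image built above
    env:
    - name: DEBUG_MODE      # presence alone enables debugging
      value: "debug"
    - name: DEBUG_FILE      # optional startup file to debug
      value: "app.js"
    ports:
    - containerPort: 9229   # debugger port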

After the pod is created, you can see in the logs that the debugger is listening on some port; the default is usually 9229, but it can vary.

Here is the docker run output:

docker run -it -e DEBUG_MODE=debug -e DEBUG_FILE=app.js 30657b10fb02 /bin/ash
I have passed the environment variables externally using the -e flag.

Here is the Kubernetes pod output:

Pod logs output, showing that the debugger is running on port 9229.



environment variable declared in the pod yaml/json.

Now, in the final step, we will port-forward the pod to local using the kubectl command line and attach it to chrome://inspect.


4. Port-Forward the pod :

To attach the running debugger to the local chrome://inspect, we need to port-forward it locally.

Using kubectl, we can port-forward the running pod to the local machine.

Command : kubectl --namespace="your namespace name" port-forward "pod name" "local port":"debugger port in the pod"

Example : kubectl --namespace=default port-forward testenv-0 9229:9229

Here is the output you will get after port-forwarding :


After successfully port-forwarding, we can open Chrome DevTools to start debugging:

a. Type chrome://inspect in a new browser tab.
b. Under Remote Target, you will see the startup file of your pod.



Now, after your debugging is completed, just remove the environment variable from the pod yaml/json and restart the pod. It will run as a normal instance.

This is only a one-time investment; any time you want to attach a debugger to a pod, just update the environment variable. You don't need to rebuild your image and re-deploy.

Note : If you ever face an issue copying the bash script file while building the docker image, open the bash script in the Sublime Text editor, go to View -> Line Endings -> Unix and save the file again.

Sunday, 25 June 2017

Accessing the Kubernetes API Server From The Pod.

The Kubernetes API server can be accessed from a pod at the following URL: https://kubernetes.default.

To authenticate against the API server, we also need to pass the service account token and the CA cert. Once that is done, we can perform all the operations that are permitted to that service account.

Let's say we have deployed our code as a pod in the Kubernetes cluster, and that same code is responsible for creating other stateful sets/replica sets/services/namespaces. In that case, we need to authenticate against the API server, and using a kubernetes client we can create our deployments.


I am using the godaddy kubernetes-client library for creating namespaces, deployments and statefulsets.

The "token" and "ca cert" resides at the following location in the pod :

a. token : /var/run/secrets/kubernetes.io/serviceaccount/token
b. ca-cert : /var/run/secrets/kubernetes.io/serviceaccount/ca.crt


const fs = require('fs'),
    Api = require('kubernetes-client'); // godaddy kubernetes-client

let k8score, k8s;

let getRequestInfo = () => {
    return {
        url: "https://kubernetes.default",
        ca: fs.readFileSync('/var/run/secrets/kubernetes.io/serviceaccount/ca.crt').toString(),
        auth: {
            bearer: fs.readFileSync('/var/run/secrets/kubernetes.io/serviceaccount/token').toString(),
        },
        timeout: 1500
    };
}

let initK8objs = () => {
    let k8obj = getRequestInfo();
    k8score = new Api.Core(k8obj);
    k8s = new Api.Api(k8obj);
}

And once that is set up, we can use the k8score and k8s objects created above to perform CRUD operations against the API server.

ex : k8s.group("v1").ns().post('/json-path')  will create a new namespace.
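
If you prefer not to use a client library at all, the same token and CA cert can be used to call the API server's REST endpoints directly. Here is a minimal sketch using request-promise (the same library used in the Docker Hub post above), assuming the service account is allowed to list namespaces:

let rp = require('request-promise'),
    fs = require('fs');

let listNamespaces = () => {
    return rp({
        method: 'GET',
        uri: 'https://kubernetes.default/api/v1/namespaces',
        ca: fs.readFileSync('/var/run/secrets/kubernetes.io/serviceaccount/ca.crt').toString(),
        auth: {
            bearer: fs.readFileSync('/var/run/secrets/kubernetes.io/serviceaccount/token').toString()
        },
        json: true
    });
};

listNamespaces()
    .then((res) => console.log(res.items.map((ns) => ns.metadata.name)))
    .catch((err) => console.error("API server call failed:", err.message));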

The other way of authenticating is passing the cluster username and password along with the "ca-cert", also known as basic authentication. In that case, we need to pass the user and password to the pod either via environment variables or using secrets.

const core = new Api.Core({
  url: 'https://kubernetes.default',
  ca: fs.readFileSync('cluster-ca.pem'),
  auth: {
    user: 'user',
    pass: 'pass'
  }
});

Sunday, 7 May 2017

Kubernetes Services And Their Endpoints.

It is very important to specify a proper selector while creating a Kubernetes service. The selectors should be unique, so that each service discovers only the pods it is meant for.

If you specify a common selector for all the services, a service may end up pointing to multiple pods, and then it becomes very difficult to identify the real cause of problems. In my case, I was sometimes getting a connection-refused error when accessing a pod through a NodePort service.

In my case I had two pods (one named backend-manager and the other named engine), and I created two NodePort services for them (bm and engine).

And here are the service yamls :

1. Engine Service :

"engine": { //Engine /
"apiVersion":"v1",
"kind":"Service",
"metadata":{
"name":"engine",
"namespace":`${tennantId}-${salt}`,
"labels":{
"app":"backend",
"tier":"engine"
}
},
"spec":{
"type":"NodePort",
"selector":{
"app":"backend"
},
"ports":[{
"port":3000,
"targetPort": 3000
}]
}
}


2. BM Service :

"bm": { //Backend Manager
"apiVersion":"v1",
"kind":"Service",
"metadata":{
"name":"bm",
"namespace":`${tennantId}-${salt}`,
"labels":{
"app":"backend",
"tier":"bm"
}
},
"spec":{
"type":"NodePort",
"selector":{
"app":"backend"
},
"ports":[{
"port":3001,
"targetPort": 3001
}]
}
}


As you can see, I have mistakenly specified the same selector in both services.

Let's look at the endpoints for the service, using the URL below.

http://localhost:8000/api/v1/namespaces/myapps-fv92n/endpoints/bm

And here is the output :





As you can clearly see, different pods (both BM and Engine) are listed under the service endpoint (under subsets->addresses, highlighted above). Actually, only BM should be listed.

I then modified both service yamls and added one more, more specific selector (see the snippet below):

a. "tier:bm" in BM service.
b. "tier:engine" in Engine Service.

And here is the output for the service endpoint; now only BM is listed under the bm service endpoint.



Earlier, I was sometimes getting the connection-refused error while accessing the pods through the NodePort service; after updating the service yamls this issue was resolved.


Below are the steps for constructing the endpoint URL :

We can get the endpoints by sending a GET request to the following URL.

The URL contains the following parts :

1. localhost:8000 : I have created a proxy with kubectl proxy --port=8000.

2. api/v1 : version

3. namespaces : keyword

4. {{namespace-name}} : Your namespace name; if you have created the services under the default namespace, use default.

5. endpoints : keyword.

6. {{service-name}} : In my case it is bm

URL :  http://localhost:8000/api/v1/namespaces/myapps-fv92n/endpoints/bm
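
Alternatively, if kubectl is already configured for the cluster, the same endpoint information can be fetched without the proxy:

kubectl get endpoints bm --namespace=myapps-fv92n -o json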

Monday, 1 May 2017

Promisifying Redis Client With Bluebird Example (With Pub-Sub Also)

A few days back, I was struggling with handling callbacks in redis-client, and the code was getting more and more complex with callbacks.

I researched whether there is anything like promises for Redis, and found that it is supported by promisifying node_redis with bluebird.
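
A minimal sketch of what that promisification looks like (the full wrapper code follows below; the key and value here are just placeholders):

let bluebird = require('bluebird'),
    redis = require('redis');

// adds promise-returning *Async variants (getAsync, setAsync, existsAsync, ...)
bluebird.promisifyAll(redis.RedisClient.prototype);
bluebird.promisifyAll(redis.Multi.prototype);

let client = redis.createClient(6379, '127.0.0.1');

client.setAsync('greeting', 'hello')
    .then(() => client.getAsync('greeting'))
    .then((value) => {
        console.log(value); // hello
        client.quit();
    });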

I have written simple example code that promisifies the node_redis client.

I have written a wrapper around the out-of-the-box node_redis functions (get, set, exists, etc.) that returns a promise, so you can call the wrapper functions directly and write all your callback code in then().

In this example, I first initialize the redis client and then use the exists function to check whether the key exists in Redis; if not, I create the key and publish it to the subscriber, and if the key already exists, I increment the old value by 1 and publish it to the subscribers.

Here is the code for it :

Git hub repository : https://github.com/UtkarshYeolekar/promisify-redis-client

1. redis.js : Contains all the wrapper functions.

You can remove the logger and use console.log directly. You can also remove the redis config and specify the port and host directly in createClient().


let bluebird = require("bluebird"),
    redis = require('redis'),
    logger = require("./logger"),
    redisConfig = require("./config.js").redisConfig,
    maxReconnectingTry = 4,
    tryReconnecting = 0,
    isRedisConnected = false,
    // the subscriber passes a callback function; when the redis client
    // receives a message, that callback is invoked.
    callback;
bluebird.promisifyAll(redis.RedisClient.prototype);

let redisClient = null;
module.exports = {

        initRedisClient : () =>{
            redisClient =  redis.createClient(redisConfig().port,redisConfig().host)
            logger.debug("Initalizing Redis Client");

            redisClient.on('ready',function() {
            logger.debug(" subs Redis is ready");
            });

            redisClient.on('connect',function(){
                logger.debug('subs connected to redis');
                isRedisConnected = true;
            });

            redisClient.on("message", function(channel, message) {
                logger.info("message recieved on channel :", channel);
                callback(channel,message);
            });

            redisClient.on("error", function (err) {
                logger.debug("Error occurred while connecting to redis " + err);
                isRedisConnected = false;
            });

            redisClient.on('reconnecting',function(err){
                    tryReconnecting++;
                    logger.warn('reconnecting');
                    if(tryReconnecting >= maxReconnectingTry)
                    {
                        logger.error(err);
                        redisClient.quit();
                    }
            });
        },
        getKeyValue: (key) => {
            return redisClient.getAsync(key)
                .then((res, err) => err ? Promise.reject("getKeyValue : "+err) : Promise.resolve(res));
        },
        setKeyValue: (key, value) => {
            return redisClient.setAsync(key, value)
                .then((res, err) => err ? Promise.reject("setkeyvalue : "+ err) : Promise.resolve(res));
        },
        doesKeyExist: key => {
            return redisClient.existsAsync(key)
                .then((res, err) => !res || err ? Promise.resolve(false) : Promise.resolve(res));
        },
        deleteKey: key => {
            return redisClient.delAsync(key)
                .then((res, err) => res ? Promise.resolve(res) : Promise.reject("deleteKey :"+err));
        },
        publishMessage: (channel,message) => redisClient.publish(channel,message),
        endConnection: () => redisClient.quit(),
        subscribeChannel: (channel,cb) => {
             redisClient.subscribe(channel)
             callback = cb;
        }
    }


2. update-redis.js : Uses the wrapper functions to update the redis key and publish a message to the channel.

In this code :
1. I check whether the key exists in Redis.
2. If the key doesn't exist, I create it and publish the message to the channel (subscriber).
3. If the key exists, I get its old value, increment it by 1 and publish it to the channel (subscriber).

I am using "winston" for logging. So, also in this code, you can remove the logger and can use console.log().

let redis = require("./redis.js"),
    logger = require("./logger.js"),
    _baseVersion = 1,
    _currentVersion , _deploymentVersion , _previousDeployedVersion = null;

const versionLabel = "v";
const key = "_deploymentVersion";
const channel = "deployment";


/*redis().deleteKey(key)
.then((res) => redis().doesKeyExist(key))*/
module.exports = {
    updateRedis: () => {
    redis.initRedisClient();
      return redis.doesKeyExist(key)
        .then((res) => res ? redis.getKeyValue(key) : null)
        .then((res) => {
            if (res != null) {
                logger.info("Current Deployed Version", res);
                _previousDeployedVersion = res;
                _currentVersion = parseInt(_previousDeployedVersion.split(versionLabel)[1]) + 1;
                _deploymentVersion = versionLabel + _currentVersion;
            }
            else
                _deploymentVersion = versionLabel + _baseVersion;
            return redis.setKeyValue(key, _deploymentVersion); // return so the chain waits for the write
        })
        .then((res) => {
            logger.info("version updated to : ", _deploymentVersion);
            let message = JSON.stringify({ "_deploymentVersion": _deploymentVersion });
            return redis.publishMessage(channel, message);
        }).then((res) => {
            logger.info("message published to channel :", channel);
            redis.endConnection();
            return Promise.resolve("Redis Updated and message published");
        })
        .catch((res) => {
            logger.error("catch block :->", res);
            redis.endConnection();
            return Promise.reject("Error in updating redis",res);
        });

    }
}

3. subscriber.js : Here we initialize one more client and subscribe to the deployment channel created above, so whenever a message is published on the channel, we get notified.


let redis = require("./redis.js");

redis.initRedisClient();
redis.subscribeChannel('deployment', (channel, message) => {
    console.log(message);
});


Let's require both files (update-redis and subscriber) in one JS file and name it app.js.

//Updating redis key and publishing it to a channel
let redis = require('./update-redis.js');

//subscribing the channel.
let subs = require('./subscriber.js');

redis.updateRedis().then((res)=>{
    console.log("res",res);
})
.catch((err)=>{
    console.log("eree",err);
});


Now, we can run the code directly by typing node app.js.

You can download the full code from here. It contains the logger and the config file also.

Steps :

1. Download the code.
2. npm install
3. Make sure your redis service is up and running.
4. node app.js

Note : A publisher and a subscriber cannot work on the same client; we need two separate clients for that.


Do provide your valuable feedback 😊


Sunday, 23 April 2017

Accessing Externally Hosted Mongo-DB Inside Kubernetes or Minikube


We can directly access an externally hosted MongoDB inside a pod by using its public IP.

But if the externally hosted MongoDB's IP changes in the future, we would have to update all the pods that access the database.

The better option is to create a service without a selector, so no endpoints are created for it automatically. After that, we manually create the endpoints and provide the externally hosted MongoDB address there.

This way, when we access the service, it automatically routes to the endpoints created for it.

And if the DB's public IP changes later, we don't need to update the pods; we only need to update the endpoints.

Below are the json files for it:

  1.   Mongodb Service without Selector:


{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {
        "name": "mongodb"
    },
    "spec": {
        "ports": [{
            "protocol": "TCP",
            "port": 27017,
            "targetPort": 27017
        }]
    }
}
As you can see, we have not provided any selector for it, so no endpoint will be created for it automatically.
By default, MongoDB is accessible on its default port (27017).

2. End-Point for the above service:

{
    "kind": "Endpoints",
    "apiVersion": "v1",
    "metadata": {
        "name": "mongodb"
    },
    "subsets": [{
        "addresses": [{
            "ip": "30.188.60.252"  // This is the external endpoint.
        }],
        "ports": [{
            "port": 27017,
            "protocol": "TCP"
        }]
    }]
}

Note : The name in the Endpoints metadata must match the newly created service's name; that is how the endpoints and the service get associated.



And to access the external MongoDB now, we can just use the service name directly.

Ex :  let constr = "mongodb://abcd:abcd@mongodb";

Or we can use the service IP, which will proxy the traffic to the endpoint.
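
For example, a minimal connection sketch assuming the official mongodb Node.js driver (2.x-style callback; the credentials and database name are placeholders):

let MongoClient = require('mongodb').MongoClient;

// "mongodb" is the service name created above; it resolves to the external endpoint
let constr = "mongodb://abcd:abcd@mongodb:27017/mydb";

MongoClient.connect(constr, (err, db) => {
    if (err) return console.error("connection failed:", err.message);
    console.log("connected to the external MongoDB through the service");
    db.close();
});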

  

Sunday, 16 April 2017

Using SharePoint Designer 2013 workflow, update/create item in other site collection.

In this blog, I am taking the general scenario of updating/creating a list item from one site collection in another site collection.
The Statement:
"I have a master list in one site collection and copy of the same list in another site collection. I would like to make sure that whenever there is an update to an item on the master list, the same item in the copy list in the other site collection gets updated ."
Implementation :
Steps in brief :
  1. We will use the REST API in the SPD workflow for the cross-site-collection call.
  2. On the source master list, we will write two workflows (2013 template): one on item added and the other on item updated.
  3. The item-added workflow will create a copy of the list item in the destination list using the REST API.
  4. The item-updated workflow will update that copy in the destination list whenever the master list entry is updated.
  5. To allow the master list workflow to create/update entries in the other site collection, we need to grant the workflow app permission on the target site.
  6. List Schema :
Master List : Name "Employee", Columns : {"EmpName", "CTC"}
Destination List : Name "Employee Backup", Columns : {"EmpName", "CTC", "MasterListItemID"}
MasterListItemID will hold the item ID of the master list item.
Let's create the first workflow (on Item Added):

Three dictionary variables are required for the POST request:
  1. header : contains the Accept/Content-Type keys, both with the value "application/json;odata=verbose"

Header Dictionary contains accept & content-type keys

2. metadata : this dictionary will contain only one key, "type", whose value will be SP.Data.[title of target list]ListItem


3. parameters : this dictionary will contain the __metadata key and the column values.

the "MasterListItemId" column will hold a current item ID.
Parameter dictionary..(the __metadata will be of type dictionary)

We will associate all these dictionary variables with the "Call HTTP Web Service" action. The method will be POST, as we are creating an item.
Request URL will be : https://targetsitecollection/_api/web/lists/getbytitle('Target_List_Name')/Items
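
For reference, the JSON body that these dictionaries effectively produce would look roughly like this; the exact type name depends on the target list's internal name, so verify it via /_api/web/lists/getbytitle('Target_List_Name')/ListItemEntityTypeFullName (the values shown are illustrative):

{
    "__metadata": { "type": "SP.Data.Employee_x0020_BackupListItem" },
    "EmpName": "John Doe",
    "CTC": "100000",
    "MasterListItemID": "12"
}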

Save this workflow and update its settings so that it triggers on item added only.
Now, to allow this workflow to create items in the target site collection, we need to grant it a permission.
Go to the site settings of the source site (holding the master list) -> Site app permissions


Copy the app identifier (the ID between the last | and the @) for the next steps (in my example this was 8f20f240-ddde-45dc-a08a-66834769220d).
Now manually add this app identifier on the target site collection (the site holding the target list). To do this, follow the steps below:
a. Open the appinv.aspx page :
http://{the Site Collection}/{target-site}/_layouts/15/appinv.aspx.
b. Paste the app identifier of the source site, look up the rest of the information, and add the following XML to the App's Permission Request XML.
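
As a sketch, a commonly used permission request XML for this kind of workflow access looks like the following; adjust the Scope and Right to what your workflow actually needs:

<AppPermissionRequests AllowAppOnlyPolicy="true">
    <AppPermissionRequest Scope="http://sharepoint/content/sitecollection/web" Right="FullControl" />
</AppPermissionRequests>
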
That completes the first part. Now let's publish the workflow, create a new item in the master list and check the workflow status; if it has completed, the item will have been created successfully in the target list.
Note : Make sure that the user who creates the item in the master list also has permission on the target site collection/site. Otherwise, we can put the actions inside an "App Step" in the workflow.
-----------------------------------------------------------------------

Now, let's write the second workflow, which triggers when a master list item is updated.
The item-updated workflow will have two stages :
  1. Fetch the item ID of the copy of the master list item in the target list, because to update the target list item using the REST API we need its item ID.
  2. Update the target list item.
As we have stored the master list item ID in the target list item field (MasterListItemID), whenever a master list item is updated we take the current item ID and query the target list for the item whose "MasterListItemID" equals the current item ID.


So, the first request will be :
https://target_site_collection/_api/web/lists/getbytitle('target_list')/Items?$filter=MasterListItemID eq [%Current Item:ID%]
This will in turn return the item ID of the target list item, and we will then use that ID to update the list item.
For now we are expecting only one result in the response, so I am directly using "d/results(0)/ID" to get the item ID. Later on, we can check the response item count and add some validation.
The header dictionary will hold just two keys (Accept/Content-Type), and the request will be a GET request.

Now that the first stage is completed, let's write the second stage for updating the item.

This stage will also use three dictionary objects (header, metadata, parameters).
The metadata and parameters dictionaries will be the same as in the first workflow, so create them as described in the "Item Added Workflow".
The header dictionary will contain two additional keys, as we are updating an existing item:
X-HTTP-Method and If-Match are the two additional keys.
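
Put together, the header dictionary for the update call typically looks like this (MERGE and * are the standard values for updating an item regardless of its current version):

{
    "Accept": "application/json;odata=verbose",
    "Content-Type": "application/json;odata=verbose",
    "X-HTTP-Method": "MERGE",
    "If-Match": "*"
}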

In the "call a web service" action we will point to the target list item and will use the item id that we have stored in the previous stage.
https://target_site_collection/_api/web/lists/getbytitle('target_list')/items([%Variable:SecondaryListItemID%])
Now just save and publish the workflow and try updating an item.
Note : The permission only needs to be granted once, and we already did that while writing the first workflow.
References :
http://blog.portiva.nl/2016/11/03/sharepoint-designer-call-http-web-service-to-create-item-in-other-site-collection/

Using Google Apps Script, Upload a File to Google Drive and Insert Data into a Spreadsheet

To insert the data into the spreadsheet, I have created a sample form with a limited number of fields; on the form action, I submit a POST request to the server.
The form's action attribute points to the Google Apps Script endpoint.
Please do watch this video for better clarity.

Also, to upload multiple files to Google Drive, please refer to the updated code in the repository mentioned below.

Upload multiple files to google drive using google app script.

Here is the snippet for the html form:
<article id="content1" contenteditable="true">
<p>
<form id="uploadForm" action="Your script end point" method="POST">
<input type="hidden" value="" name="fileContent" id="fileContent">
<input type="hidden" value="" name="filename" id="filename">
<label> Name : </label><input required type="text" value="" name="name" id="name">
<label> Email :</label> <input required type="text" value="" name="email" id="email">
<label> Contact : </label><input required type="text" value="" name="contact" id="contact">
<label> SkillSets :</label> <input required type="text" value="" name="skillsets" id="skillsets">
<label> LinkedIn Account:</label><input type="text" value="" name="linkedinUrl" id="linkedinUrl">
</form>
<input required id="attach" name="attach" type="file"/>
<input value="Submit" type="button" onclick="UploadFile();" />
<script>
// read the selected file as a data URL, copy it into the hidden fields and submit the form
function UploadFile() {
    var reader = new FileReader();
    var file = document.getElementById('attach').files[0];
    reader.onload = function () {
        document.getElementById('fileContent').value = reader.result;
        document.getElementById('filename').value = file.name;
        document.getElementById('uploadForm').submit();
    };
    reader.readAsDataURL(file);
}
</script>
And here is the Google Apps Script snippet :
<article id="content2" contenteditable="true">
<p>
// Do change this to your email address.
var emailTo = "emailaddress@anydomain.com";
function doPost(e) {
try {
var data = e.parameter.fileContent;
var filename = e.parameter.filename;
var email = e.parameter.email;
var name = e.parameter.name;
var result=uploadFileToGoogleDrive(data,filename,name,email,e);
return ContentService // return json success results
.createTextOutput(
JSON.stringify({"result":"success",
"data": JSON.stringify(result) }))
.setMimeType(ContentService.MimeType.JSON);
} catch(error) { // if error return this
Logger.log(error);
return ContentService
.createTextOutput(JSON.stringify({"result":"error", "error": error}))
.setMimeType(ContentService.MimeType.JSON);
}
}
// new property service GLOBAL
var SCRIPT_PROP = PropertiesService.getScriptProperties();
// see: https://developers.google.com/apps-script/reference/properties/
/**
* select the sheet
*/
function setup() {
var doc = SpreadsheetApp.getActiveSpreadsheet();
SCRIPT_PROP.setProperty("key", doc.getId());
}
/**
* record_data inserts the data received from the html form submission
* e is the data received from the POST
*/
function record_data(e,fileUrl) {
try {
var doc = SpreadsheetApp.openById(SCRIPT_PROP.getProperty("key"));
var sheet = doc.getSheetByName('responses'); // select the responses sheet
var headers = sheet.getRange(1, 1, 1, sheet.getLastColumn()).getValues()[0];
var nextRow = sheet.getLastRow()+1; // get next row
var row = [ new Date() ]; // first element in the row should always be a timestamp
// loop through the header columns
for (var i = 1; i < headers.length; i++) { // start at 1 to avoid Timestamp column
if(headers[i].length > 0 && headers[i] == "resume") {
row.push(fileUrl); // add data to row
}
else if(headers[i].length > 0) {
row.push(e.parameter[headers[i]]); // add data to row
}
}
// more efficient to set values as [][] array than individually
sheet.getRange(nextRow, 1, 1, row.length).setValues([row]);
}
catch(error) {
Logger.log(error);
}
finally {
return;
}
}
function uploadFileToGoogleDrive(data, file, name, email,e) {
try {
var dropbox = "Demo";
var folder, folders = DriveApp.getFoldersByName(dropbox);
if (folders.hasNext()) {
folder = folders.next();
} else {
folder = DriveApp.createFolder(dropbox);
}
var contentType = data.substring(5,data.indexOf(';')),
bytes = Utilities.base64Decode(data.substr(data.indexOf('base64,')+7)),
blob = Utilities.newBlob(bytes, contentType, file);
var file = folder.createFolder([name, email].join("-")).createFile(blob);
var fileUrl=file.getUrl();
//Generating Email Body
var html =
'<body>' +
'<h2> New Job Application </h2>' +
'<p>Name : '+e.parameters.name+'</p>' +
'<p>Email : '+e.parameters.email+'</p>' +
'<p>Contact : '+e.parameters.contact+'</p>' +
'<p>Skill Sets : '+e.parameters.skillsets+'</p>' +
'<p>LinkedIn Url : '+e.parameters.linkedinUrl+'</p>' +
'<p>File Name : '+e.parameters.filename+'</p>' +
'<p><a href='+file.getUrl()+'>Resume Link</a></p><br />' +
'</body>';
record_data(e,fileUrl);
MailApp.sendEmail(emailTo, "New Job Application Received", "New Job Application Request Received", {htmlBody: html});
return file.getUrl();
} catch (f) {
return ContentService // return json success results
.createTextOutput(
JSON.stringify({"result":"file upload failed",
"data": JSON.stringify(f) }))
.setMimeType(ContentService.MimeType.JSON);
}
}