Saturday, December 28, 2013

how to unit test javascript without setting your hair on fire?

Does this code look familiar? Most of us, at some point, would have written JavaScript this way to do a simple login validation. Right?


function formSubmit() {
    var userName = $("#userName").val();
    var password = $("#password").val();

    if (!isValid(userName, password)) {
        return false;
    }

    $.ajax({
        type: 'POST',
        url: 'login',
        dataType: 'text',
        data: 'username=' + userName + '&password=' + password,
        success: function (data) {
            alert('Login Success');
        },
        error: function (xhr, err) {
            alert('Login Failed. ' + xhr.status);
        }
    });
}

The real problem comes when we try to write unit tests for this kind of JavaScript. The issue is that the code is completely mixed up with HTML and inline event handlers.
So what? Big Deal. Why not write test cases to test HTML dependencies and DOM manipulations?
Yeah! You could. But you would end up spending all your time writing test cases rather than actual functions.

How about we refactor the code so that it's testable?

function formSubmit() {
    var cred = readValuesFromUI(); // separate out the function which reads values from the UI
    var val = validate(cred);      // separate out validations
    if (val.isValid) {
        doLogin(cred);             // ajax call to the server
    } else {
        alert('Invalid Credentials.');
    }
}
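The extracted helpers might look something like this. This is a minimal sketch, and the function bodies are assumptions since the post only shows the refactored formSubmit. The point is that validate() becomes a pure function you can test without any DOM:

```javascript
// Hypothetical implementations of the extracted helpers. validate() is a pure
// function: credentials in, verdict out. No jQuery, no DOM, trivially testable.
function validate(cred) {
    var ok = !!cred &&
        typeof cred.userName === 'string' && cred.userName.length > 0 &&
        typeof cred.password === 'string' && cred.password.length > 0;
    return { isValid: ok };
}

// The DOM reads stay isolated in a thin wrapper,
// which tests can replace wholesale.
function readValuesFromUI() {
    return { userName: $("#userName").val(), password: $("#password").val() };
}
```

Now a spec for validate() needs no HTML fixture at all; only readValuesFromUI and doLogin still touch the page.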

Jasmine Specs to the rescue
Jasmine is a behavior-driven testing framework for JavaScript.
Say you have a function which returns the sum of two numbers

function sum(num1, num2) {
    return num1 + num2;
}

describe("this is a suite which tests the function sum with various inputs", function() {
    it("this is a spec which tests the +ve scenario", function() {
        var s = sum(1, 2);
        expect(s).toEqual(3);
    });
});


'expect' is equivalent to 'assert' and 'toEqual' is the matcher.
Let's spice this up a little bit. Say we validate the input params (num1 & num2) before returning the sum.

function sum(num1, num2) {
    var isValid = validateParams(num1, num2);
    if (isValid) {
        return num1 + num2;
    } else {
        return -1;
    }
}

function validateParams(num1, num2) {
    // some logic
}

Now, what if the validateParams function fails while we are testing the sum function? We are not bothered about the internals of validateParams when testing sum, right? All we need is for it to return true/false based on the input so that sum can go ahead and do its job. So why not mock validateParams?

Jasmine provides a way to spy on functions. No matter how the validateParams function works, we would still be able to test the sum function.

describe("this is a suite", function() {
    it("this is a spec where validateParams returns true", function() {
        spyOn(window, 'validateParams').andReturn(true);
        var s = sum(1, 2);
        expect(s).toEqual(3);
    });

    it("this is a spec where validateParams returns false", function() {
        spyOn(window, 'validateParams').andReturn(false);
        var s = sum(1, 2);
        expect(s).toEqual(-1);
    });
});

This way, it's easier to fake ajax calls and spy on library functions. I have put together a simple login app with bare-minimum JavaScript. It's integrated with Maven, and the specs can be run as part of your build without having to use a browser.
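To illustrate the idea behind faking an ajax call, here is a hand-rolled sketch of what a Jasmine spy does for you. The doLogin function and the stub $ object are assumptions for the sake of a self-contained example, not the sample app's actual code:

```javascript
// A stand-in for jQuery so the sketch is self-contained; in a real spec you
// would spy on the real $.ajax instead.
var $ = {
    ajax: function (opts) { /* the real thing would hit the network */ }
};

// Hypothetical version of the doLogin helper from the refactored code above.
function doLogin(cred) {
    $.ajax({
        type: 'POST',
        url: 'login',
        data: { username: cred.userName, password: cred.password },
        success: function (data) { /* update the UI */ }
    });
}

// The "spy": record the arguments and fire the success callback synchronously,
// so the test needs no server at all.
var recorded = null;
$.ajax = function (opts) {
    recorded = opts;
    if (opts.success) { opts.success('ok'); }
};

doLogin({ userName: 'jane', password: 'secret' });
```

In Jasmine proper, spyOn($, 'ajax').andCallFake(...) gives you the same effect with call tracking built in.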

Jasmine, through the jasmine-jquery plugin, also lets you use JSON fixtures to supply test data to your functions.

getJSONFixture('*.json') loads the JSON data and makes it available to your specs. By default, fixtures are loaded from this location:

spec/javascripts/fixtures/json.
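Outside the browser, you can approximate what getJSONFixture does with a tiny stub. The fixture name and its contents below are made up for illustration:

```javascript
// In jasmine-jquery, getJSONFixture('credentials.json') would read the file
// spec/javascripts/fixtures/json/credentials.json. This stub fakes that lookup
// with an in-memory map so the example runs anywhere.
var fixtures = {
    'credentials.json': { userName: 'jane', password: 'secret' }
};

function getJSONFixture(name) {
    if (!(name in fixtures)) { throw new Error('Fixture not found: ' + name); }
    return fixtures[name];
}

// Specs can now feed the fixture data into the function under test.
var cred = getJSONFixture('credentials.json');
```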

Jasmine has a lot more to offer, with support to fake events, fake timers etc. Do read the documentation.

Steps:

1. Clone the project from github.
2. Navigate to the path where you find pom.xml (pom.xml is configured with jasmine-maven-plugin and the saga-maven-plugin to generate coverage report)
3. Run "mvn clean test" to run the specs and "mvn clean verify" to run the specs & generate the coverage report.
4. Open the total-report.html and see the magic unfold in front of your eyes ;-)

Jasmine specs run from the terminal


Coverage Report



To conclude:

Although I ended up writing unit tests for two whole modules, there was nothing conclusive about them. Agreed, these tests help you verify the flow (which block of code gets called for what kind of input), but that's about it.
I had to do quite a bit of refactoring to make the existing JavaScript functions testable.
Similar opinions are expressed in this Stack Overflow thread as well.

Do let me know what you think.

PS: Title courtesy - This Post

Sunday, August 4, 2013

My tryst with the Atmosphere Framework.


Recently, I was asked to implement a feature which involved sending notifications from the server to the client about the state of a server resource (in my case, the database). Until then, we were polling once every four seconds to get the job done. Before we started development, we outlined the set of requirements so that we could work towards them.

- Update all connected clients about the state of the server via Broadcast.
- Pick a framework which doesn’t involve any external server.
- Integrate seamlessly with existing application.
- Be easy to implement and be scalable.

Once we had the set of requirements clear, we embarked on the process of identifying the framework that would fit the bill. We (each of us tried a PoC with a different technology) evaluated Node.js, Vert.x, ActiveMQ and Atmosphere.

We chose Atmosphere since it satisfied most of our requirements.
- Doesn't require any external server.
- Has a jersey module which integrates seamlessly with any webapp.
- Switches back to the fallback transport if the browser/server doesn’t support the specified transport.

POC

At a high level, the POC has the following components:
- Atmosphere's server side component which takes care of suspending the incoming request and broadcasting the data back to the clients when invoked.
- Atmosphere's client side component which is responsible for initiating a request and handling the response it receives via the broadcaster.
- A REST resource and a client (for testing purposes) which take care of generating the content that needs to be broadcast.
- BroadcastFilters which intercept the broadcast before it is sent to the clients.
- Interceptors which intercept the incoming requests.
- EventsLogger which logs the Framework related events like onSuspend, onPresuspend, onReconnect, etc.

The POC, per se, is pretty straightforward. I have borrowed the concepts from several sample programs that the author himself has shared. What's more interesting is the number of problems that I ran into while developing the POC and testing it.

Issues that I faced:

- In our project, we have Tomcat servers which host the application, and Apache httpd 2.4 takes care of authentication via LDAP. Apache 2.4 doesn't support WebSockets by default, so the request did not get through. I had to upgrade the server from 2.4 to 2.4.6, which has the mod_proxy_wstunnel module that supports WebSockets.

- After upgrading Apache, I had issues with IE8. IE8 doesn't support WebSockets, and the framework downgrades the transport to long-polling (which IE8 does support). In spite of this, the request didn't get through. The client tries to establish a connection until it reaches the max number of retries and eventually errors out.

- Assume that a client tries to subscribe to a topic to which he doesn't have access. In that case, I tried to return the response to the client without suspending the request, with this message: "Access restricted. You don't have sufficient privileges to receive updates for this topic."

- I couldn't get the cache working. When the client loses its connection to a server, the broadcast messages are supposed to go into the cache, and when the client reconnects, the messages should be delivered to it from the cache.

- Open 3 browsers: Safari, Firefox and Chrome, each with three tabs. Open the application and hit subscribe on the three tabs one after the other. You can see that the connection gets shared, meaning there is only one WebSocket connection opened per browser, and it is shared between that browser's tabs. This is due to the shared attribute in the request, which we set to true. Now, when I run the REST client which pushes data to the application, which in turn broadcasts it to the clients, only the active tab receives the broadcasts. Not sure if that's the intended behaviour, but it does happen.

Please try this out and let me know if it works. :)

My Dev environment:
Ubuntu 12.04 and Windows 7
Browsers:
Firefox 22.0
Chrome Version 28.0.1500.95 m
Safari : 5.1.7 for Windows

Step 1: Deploy the application and open it in any of the browsers.
Step 2: Choose a topic from the drop-down. As of now, the FeedRestResourceClient publishes content only for the topic 'Computers & Technology'.
Step 3: Run the FeedRestResourceClient.
Step 4: You'll see that the client has received the broadcast.




Reference Links:


~ cheers.!

Monday, June 10, 2013

just about a month in the new company..........

On May 15th, I completed one month with Lister. I was going to write a post at that time, but I hadn't done anything substantial apart from attending day-long inductions and spending time getting to know things. Now that I've started checking in code on a regular basis, I feel now's the best time to share my experiences and to pause and reflect on my days so far.

I am not doing anything different.

To begin with, I am not doing anything different from what I've been doing in my previous companies [1, 2]. In my previous roles, I was more of an individual contributor who takes up a requirement, codes and commits the changes against the said tasks. Even now, I am doing the same thing but within the confines of a team.

The team here uses a number of open source tools and frameworks [1, 2, 3] and I get to play around with them, learning as I go. This part of my job is the most exciting, to say the least.

Small Company

Like I mentioned here, it's a small company, an everybody-knows-everybody kind of environment. It's been just over a month, and within this span of time I've come to know many of the teams (at least in my bay): where people studied, where they worked previously, all without having to put in any extra effort. Feels good to get to know people.

Commute is grueling

I spend 4 hours commuting daily, and by the time I reach home, I think about nothing but hitting the sack so that I can get some sleep before starting it all over again. But the good part: I am able to spend time reading books. I have read 2 books [1, 2] this month and I am gonna finish another one. If you look at it, it might appear as if I've been given the gift of time. :) Time almost comes to a standstill when I'm traveling. ;-)

To sum up

Apart from the work, I have been keeping myself busy reading stuff, trying out things that are out of my comfort zone and documenting them here. Things are pretty nascent at this stage, and once they take shape, I'll share more with you guys.
Overall, I wouldn't say that I am disappointed, but at the same time, I have started worrying about this: What if I get a comfortable hold of the open source frameworks and tools I use? What next? Will it become monotonous? How am I going to keep myself engaged and psyched?

But at the moment all I can do is to hope for the best.

See you soon. :)

~ cheers.!

Monday, June 3, 2013

Shutting down Live Score Card Android App

Last year, out of boredom, I wrote a live wallpaper (Android) which would display a live scorecard. I scraped data from a very popular site.

Wait, allow me to explain. I had no intention to steal data, nor did I intend to create something like this.

In my opinion, I followed a proper way

- I tried contacting several teams (sites) asking if they offer any APIs. Many didn't respond, and a few replied stating that they offer only paid solutions, starting at a minimum of 5K per month (* refer to the attached mail chain).

- The other URLs that I got from Stack Overflow said nothing about restricted access (even their robots.txt said nothing). At the same time, the data they provided wasn't enough to build a fully featured app.

At that point, I thought, "Why let trivial things such as access and data bother app development? Let me finish building it and then I'll decide whether to continue it or not. Anyhow, I'm not going to make money."

All things said, the scraper was pretty lightweight, and given the scale of the website, a cron job accessing the site and scraping a single page is no big deal (technically, of course ;-) ). So, I went ahead and scraped their URL. Apart from a handful of friends who helped me out by testing the app on their devices, I used it extensively for 3-4 months, then totally forgot about it until recently, when I was asked about the app in one of my interviews. :D

Now, out of sheer guilt, I have disabled the API which pulled data from their site. :'(

Sorry guys.
Nothing personal.

Happy coding :)
~ cheers.!

---
* mail chain

---------- Forwarded message ----------
From: Pankaj Chhaparwal 
Date: Mon, Sep 3, 2012 at 5:51 PM
Subject: RE: RSS Feeds to get live scores
To: karthick r 

The smallest pkg we have is for rs 5000 per month. 

Regards,
Pankaj
 

From: karthick r [xxxx] 
Sent: Monday, September 03, 2012 5:35 PM
To: Pankaj Chhaparwal
Subject: Re: RSS Feeds to get live scores
 
Hi, 
Thanks for the prompt reply.
Please let me know the applicable rates. It'll help me decide. 

Regards,
Karthick.R

On Mon, Sep 3, 2012 at 5:14 PM, Pankaj Chhaparwal  wrote:

Hi Karthick, 
We can provide you this content, but we only offer a paid solution. 
Let me know if you would be interested. 

Regards,
Pankaj Chhaparwal



From: xx@gmail.com
Sent: Monday, September 03, 2012 2:36 PM
To: xxxxxxxxxx
Subject: RSS Feeds to get live scores
 
Hi, I am working on an android app to display live scores. Just want to know if you are providing any API to pull the match/score data. Once developed, this app will be available for others to download from Google Play for Free. Please let me know if you provide any such APIs. Thanks, Karthick Website: http://about.me/r.karthick 
Company: Developer
-- 
regards,
r.karthick

Tuesday, May 28, 2013

Monitor exceptions using logstash


To monitor exceptions, we are going to need a little more than grep. Replace the filter section of the test.conf file attached in the previous post with this.

filter {

    multiline {
        patterns_dir => "D:/logstash/logstash-1.1.9-monolithic/patterns"
        pattern => "^(%{MONTH}|%{YEAR}-)"
        negate => true
        what => "previous"
        type => "loglevel"
    }

    grok {
        patterns_dir => "D:/logstash/logstash-1.1.9-monolithic/patterns"
        pattern => ["(?m)(?<logdate>%{MONTH} %{MONTHDAY}, %{YEAR} %{DATA} [AP]{1}M{1}) %{NOTSPACE:package} %{WORD:method}.*%{LOGLEVEL:loglevel}: %{GREEDYDATA:msg}"]
        singles => true
    }

    grep {
        # Answers the question - what are you looking for?
        # In this example, I am interested in a particular exception.
        # @message maps to one log statement/event, and I have defined a grep to
        # match the word 'CannotLoadBeanClassException' in the message.
        match => ["@message","CannotLoadBeanClassException"]
        type => "loglevel"
    }
}


Grep matches a word in a single line (CannotLoadBeanClassException), but we need a bit more to capture the entire exception stack trace, don't we?
Fret not. Logstash's multiline filter to the rescue. Multiline uses a Grok pattern to identify the start of an event.

More about Grok filters >> Here

My tomcat logs are in this format:

May 28, 2013 6:04:30 PM org.apache.catalina.core.StandardContext startInternal
SEVERE: Error listenerStart

pattern => "^(%{MONTH}|%{YEAR}-)" indicates that any line that begins in this format starts a new event; it is the anchor line of the multiline event.
negate => true
With negate set to true, the lines which don't match the given pattern (lines 101-119) are treated as part of the multiline event and are appended to the previously encountered line which does match the pattern (line 100). Phew! ;-)

For instance, say there is a match at line 100, and assume that the subsequent lines contain the stack trace.
Line 100:: May 28, 2013 6:04:30 PM org.apache.catalina.core.StandardContext startInternal
.................
.................
/* A match occurred at line 100, so the subsequent 20 lines are appended to line 100. */
....................
....................
Line 120::: May 28, 2013 6:04:30 PM org.apache.catalina.core.StandardContext stop

This way, we will get the entire stack trace.



Now that we have the stack trace with us, it's just a matter of configuring the appropriate output:
send the message via email, or invoke an HTTP endpoint.

As Sridhar pointed out, there should be an option for users to subscribe to a specific exception rather than being spammed with all exception stack traces.


# Define grep blocks for the exceptions that you want to monitor; when
# there is a match, you can add certain fields and use them later.

grep {
    match => ["@message","NullPointerException"]
    add_field => ["exception_message", "Exception message - %{@message}"]
    add_field => ["exception_subject","NullPointerException occurred"]
    add_field => ["recipients_email","johnDoe@gmail.com"]
    type => "loglevel"
}

grep {
    match => ["@message","IndexOutOfBoundsException"]
    add_field => ["exception_message", "Exception message - %{@message}"]
    add_field => ["exception_subject","IndexOutOfBoundsException occurred"]
    add_field => ["recipients_email","janeDoe@gmail.com"]
    type => "loglevel"
}

# This way, you can customize the message sent for each exception.
# Again, recipients, subject and message are JSON attributes.
# url points to the HTTP endpoint which takes care of sending out mails.
http {
    content_type => "application/json"
    format => "json"
    http_method => "post"
    url => "http://localhost:8080/services/notification/email"
    mapping => ["recipients","%{recipients_email}","subject","%{exception_subject}","message","%{exception_message}"]
    type => "loglevel"
}

And if there are n number of exceptions that you need to monitor, you can define them in separate conf files and provide the folder as input during logstash start-up using logstash's command line flags. That way, it'll be easier to maintain the conf files. One file for every exception might be overkill, but how about one conf file per module?

Makes sense? B-)

Do let me know if you try this out.

Happy Coding :)
~ cheers.!

Logstash - Getting started

Remember this?

Problem statement for starters:
Consider this scenario. Any enterprise application these days comprises one to a few moving components. Moving components as in components that are hosted on separate servers. A simple J2EE application which does basic CRUD operations via a user interface has 2 components:
  1. Server 1 - holds the business logic and the UI.
  2. Server 2 - the database server.
Now, ideally, as a developer, I would want to know if there is a problem with either one of the components. I would like to be notified if there is a problem. This problem has two parts to it:
1. To parse the log messages.
2. To notify the concerned parties.

1. To parse the Log messages. 
About Logstash, from its description (in their own words):
"Logstash is a tool for managing events and logs. You can use it to collect logs, parse them, and store them for later use (like searching)."
It is fully and freely open source, under the Apache 2.0 license.
Logstash's configuration consists of three parts
Inputs – Where to look for logs? The log source.
Filters – What are we looking for in the given logs? Say, a particular exception or message.
Outputs – What to do once the exception/message is found? Should I index it, or do something else? Decide and configure it up front.

Logstash requires these to be configured in a *.conf file, and this file needs to be passed at start-up.
Sample test.conf file:

input {
    file {
        # Answers the question - where? Logstash will look for files with the
        # pattern catalina.*.log.
        # sincedb is a file which logstash uses to keep track of the log lines
        # that have been processed so far.
        type => "loglevel"
        path => "D:/Karthick/Softwares/Tomcat/tomcat-7_2_3030/logs/catalina.*.log"
        sincedb_path => "D:/logstash/sincedb"
    }
}
filter {
    grep {
        # Answers the question - what are you looking for?
        # In this example, I am interested in server start-up.
        # @message maps to one log statement/event, and I have defined a grep to
        # match the phrase 'Server startup' in the message.
        match => ["@message","Server startup"]
        type => "loglevel"
    }
}
output {
    stdout {
        # Answers the question - what to do if there is a match?
        # For now, we'll just output it to the console.
        message => "Grep'd message - %{@message}"
    }
}

Steps:
- Download the logstash jar from this location.
- Place the jar inside a working directory (D:/logstash in my case) and extract it.
- Copy test.conf into the working directory (D:/logstash).
- Open a command prompt, navigate to the working directory and run this command:

java -cp logstash-1.1.9-monolithic logstash.runner agent -f test.conf -v

Start your local Tomcat (since I've used Tomcat logs as my source).

Once logstash is done parsing the log file, you'll see the output in the logstash console.


Next post : Monitor exceptions using logstash

Happy Coding :)
~ cheers.!

Wednesday, May 1, 2013

Outages - my observation



Amazon.com was down for a brief period last Monday. A few hours, give or take. Hacker News was the first to report it. Or rather, I got to know about the outage via HN.

The news item read "Was Amazon down?", pointing to the Amazon home page. Chaos ensued. It triggered a debate. People raised questions about the infrastructure. Some of them made sense, but the rest were just moot points about the lost revenue per minute, per hour, et al. Outages are nothing new to companies like Amazon and eBay, and when they occur there is heavy revenue loss. Agreed. But it's not that these companies don't care about them.

If you think about it, major global banks have their own maintenance windows [moratorium periods] during which they suspend activity and run tests to ensure that things are working as expected. Many offer a limited range of services during such periods. To be fair, e-commerce companies don't have that luxury. You cannot display a "Sorry boss. We have exceeded our daily limit of 10000 users. Do login tomorrow to make a purchase" message to the 10,001st user who logged in hoping to cash in on the discounts.

When I was with eBay, I had a chance to observe how teams, in general, cared about outages. Keeping the servers up and running 24x7 is of utmost priority to these sites, or to any e-commerce site with global usage for that matter, as these sites largely depend on the number of visitors. For anyone to make a purchase, the site has to be up and running. Fewer outages translate to serving more customers, which in turn translates to more revenue (at least technically). That's the reason why these companies emphasize having Site Reliability and SWAT teams on their toes 24x7 to handle outages of any kind.

That said, I vividly remember reading this article, which analyzes the downtime and performance of sites during the 2011 US holiday season. If you look at it, both eBay and Amazon had a staggering 100% uptime. Mind-blowing, isn't it?

So, say I have a site which caters to a reasonable audience across the globe. Now, how do I make sure that it's up and running all the time, or with minimal downtime?

Companies like eBay and Amazon can afford to have the necessary equipment in place and teams across geographies to monitor their health. Also, at their scale, with their number of servers, all it takes is to remove an ailing machine from traffic; for the rest of the machines, it's business as usual. This gives the support teams time to figure out the issue and fix it. At the other end of the spectrum, setting up a team to monitor one or two servers is overkill. A friend of mine was working on an internal service deployed on a Tomcat accessible only to a specific group. He wrote a simple Java utility, exposed as a Windows service, which would ping the machine at periodic intervals to check if it was up and running. The problem lies with the mid-sized teams with, say, about 10-20 servers. How can they monitor their systems' health without manual intervention?

Maybe they can build a dashboard like this one. But it requires someone to hit the page to know the status of the system. One way would be to periodically monitor the logs for exceptions and notify a concerned list. Anything else?

The larger picture: how to ensure that the services are available 24x7?

Please pitch in with your ideas.

PS: I have used the term site and company interchangeably in this article.

Happy coding :)
~ cheers.!

Factory pattern in Spring


Recently, I was working on a requirement to send notifications via email and SMS in my project. My initial design was to have a common interface (NotificationService) with two methods: sendNotification and validateRequest. Both the SmsNotifier and EmailNotifier would implement the NotificationService interface, and access to the notification service would be through a REST endpoint (POST).


And since I'd autowired the dependencies in the resource class, I had to figure out a way to inject implementations dynamically. So I opted for a factory pattern. This is a straightforward requirement, but let's see how to achieve it with Spring.

package com.spring.prototype.service;

public interface NotificationService {
    String sendNotification();
}

package com.spring.prototype.service;

import org.springframework.stereotype.Component;

@Component("email")
public class EmailNotificationService implements NotificationService {

    public String sendNotification() {
        return "Send notifications via email";
    }
}

package com.spring.prototype.service;

import org.springframework.stereotype.Component;

@Component("sms")
public class SmsNotificationService implements NotificationService {

    public String sendNotification() {
        return "Send notifications via sms";
    }
}

Solution: ServiceLocatorFactoryBean, which takes two inputs:
serviceLocatorInterface – the factory interface (here, NotificationFactory) whose lookup method Spring implements to return a bean by name.
serviceMappings – which maps names to the actual implementations.

Add these configurations in the context XML.
Autowire the factory instead of the interface, and let the input decide which implementation to choose.

<beans:bean
  class="org.springframework.beans.factory.config.ServiceLocatorFactoryBean"
  id="printStrategyFactory">
  <beans:property name="serviceLocatorInterface"
   value="com.spring.prototype.factory.NotificationFactory">
  </beans:property>
  <beans:property name="serviceMappings">
   <beans:props>
    <beans:prop key="email">email</beans:prop>
    <beans:prop key="sms">sms</beans:prop>
   </beans:props></beans:property>
</beans:bean>


Resource Class:
------------------

@Component
@Path("/notification")
@Produces(MediaType.TEXT_PLAIN)
public class NotificationResource {

    @Autowired
    NotificationFactory factory;

    @POST
    @Path("{type}")
    public String sendNotification(@PathParam("type") String type) {
        return factory.getNotificationService(type).sendNotification();
    }
}
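Stripped of the Spring machinery, the pattern itself is just a map from a key to an implementation. A minimal sketch in plain JavaScript (the names are illustrative, nothing here is from the project or Spring-specific):

```javascript
// Plain-JS sketch of the service-locator idea behind ServiceLocatorFactoryBean:
// a registry maps a key ('email' / 'sms') to an implementation, and the factory
// hands back the right one at runtime.
var services = {
    email: { sendNotification: function () { return 'Send notifications via email'; } },
    sms:   { sendNotification: function () { return 'Send notifications via sms'; } }
};

function getNotificationService(type) {
    var service = services[type];
    if (!service) { throw new Error('Unknown notification type: ' + type); }
    return service;
}
```

What ServiceLocatorFactoryBean adds on top of this is that the registry entries are Spring-managed beans, resolved lazily by name.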

Full project is available on github.

Note: In practice, you would end up writing a lot more code to actually send mail/SMS. This post deals only with implementing a factory pattern in Spring and autowiring the dependencies. Let me know what you think.
 
Happy coding :)

~ cheers.!