Tuesday, July 9, 2013

Unit testing philosophies - TDD and BDD

I am sure nobody denies the benefits of unit testing. Until two or three years ago everybody agreed that unit testing was beneficial to a project, but very few teams had the mechanisms (or tools) in place to actually do it.
With Visual Studio 2010 and the MS Test unit testing framework, it has become very easy to write and maintain unit tests within the same solution as your project.
MS Test unit tests can be executed individually or as a group, and the results are available in the Test Results window. The result is actually a file with a .trx extension located in the 'TestResults' folder inside your project folder. You can easily export the .trx file to an HTML file using a free command-line utility available on CodePlex at http://trxtohtml.codeplex.com/
The HTML file can then be sent by email and viewed in a browser.

TDD:
Test Driven Development is a unit testing approach in which a class's methods are tested. The philosophy is fail, code, refactor: write a failing test first, write just enough code to make it pass, then refactor. It is important to note that this philosophy reduces dead or unnecessary code in the project, which means less code and hence less maintenance.
However, the TDD approach speaks mainly to developers, since it tests the methods of a class. For a non-technical PM it is difficult to get much information out of TDD test results, so there remains a disconnect between the PM and the development team.
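To make the cycle concrete, here is roughly what a TDD-style MS Test could look like. The Calculator class below is hypothetical and is only there to illustrate the idea; in practice the test is written first (and fails), then just enough code is written to make it pass.

// Minimal TDD-style sketch using MS Test. The Calculator class is made up for illustration.
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CalculatorTests
{
    [TestMethod]
    public void Add_TwoNumbers_ReturnsTheirSum()
    {
        var calculator = new Calculator();   // written before Calculator exists: the 'fail' step

        int result = calculator.Add(2, 3);

        Assert.AreEqual(5, result);
    }
}

// The simplest code that makes the test pass: the 'code' step, refactored later if needed.
public class Calculator
{
    public int Add(int a, int b)
    {
        return a + b;
    }
}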

BDD:
Behavior Driven Development raises testing one level of abstraction, from testing class methods (as in TDD) to testing scenarios. These scenarios can be the same as the use cases everyone has been familiar with since UML was introduced, so this form of testing is easily understood by BAs and PMs from the end-user perspective.
One of the popular BDD frameworks for .NET is NSpec, which is freely available via the NuGet package manager.
Let's go through a sample to understand it better.

I am interested in testing the Account class, shown below:

class Account
{
    // Current balance of the account
    public double balance { get; set; }

    // Numeric code identifying the type of account
    public int AccountType { get; set; }

    public Account()
    {
    }

    // Cash can be withdrawn only while the balance covers the requested amount
    public bool CanWithdraw(double withdrawAmount)
    {
        return balance > withdrawAmount;
    }
}

And the BDD tests can be written as follows,
class describe_contexts : nspec
{
    private Account account;

    void describe_Account()
    {
        context["when withdrawing cash"] = () =>
        {
            // 'before' runs ahead of each example in this context
            before = () => account = new Account();

            context["account is in credit"] = () =>
            {
                before = () => account.balance = 500;
                it["the account dispenses cash"] = () => account.CanWithdraw(60).should_be_true();
            };

            context["account is overdrawn"] = () =>
            {
                before = () => account.balance = -500;
                it["the account does not dispense cash"] = () => account.CanWithdraw(60).should_be_false();
            };
        };
    }
}

And to run this test, type the following command in the Package Manager Console,
PM> nspecrunner <Project name>\bin\debug\<Project name>.dll

You will be surprised to see how easy it is to understand the output of this test.

describe contexts
  describe Account
    when withdrawing cash
      account is in credit
        the account dispenses cash
      account is overdrawn
        the account does not dispense cash

That's it. The output is readable and understandable by anyone, which means these tests can form the basis of your requirements document and be shared with all the project stakeholders.
In effect, the project documentation becomes part of the solution and is kept up to date on the fly.

I hope you find this article useful and, going forward, adopt more and more BDD in your projects.


Wednesday, April 10, 2013

Well Designed and Maintainable Code - Points to consider


A well designed and maintainable code-base helps reduce the cost of development (including change requests and code enhancements).
(A) What makes a code-base well designed and maintainable?
Every software application is developed with the intention of solving a problem. The problem is specified by means of requirements (SRS). It is in the interest of all stakeholders that the requirements be elicited as fully as possible.
As explained above, if software is developed against a set of requirements (or use cases, if UML is used), then it must be possible to test each and every requirement implemented by the software distinctly. Testability ------ (1)
Since the software implements a pre-defined set of requirements, it is technically possible to write the entire application in one single file! But is that a good approach, say, if I were developing a library?
Hence, it is always better to divide the entire application into smaller parts (layers/modules). Separation of Concerns (layers) is another key ---- (2)
Each module in the software should cater to only one specific piece of functionality (see the sketch after this list). Single Responsibility Principle --- (3)
All the modules in the software must be loosely coupled and highly cohesive in order to create a maintainable application. The impact of a change in one module on other modules must be minimal ---------- (4)
Developers are often faced with the same kinds of problems and scenarios across a multitude of applications. In such cases it is better to go for tried-and-tested solutions, i.e. Design Patterns -------- (5)
Apart from the above: sufficient documentation by means of code comments and project artefacts, adherence to a consistent naming convention, and use of OOP principles (polymorphism, encapsulation, inheritance) wherever necessary ---- (6)
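
To make the Single Responsibility point concrete, here is a small sketch; the ReportGenerator and ReportEmailer classes are made up purely for illustration.

// Hypothetical example: instead of one class that both builds and emails a report,
// each concern lives in its own class.
using System;
using System.Collections.Generic;

public class ReportGenerator
{
    // Only knows how to assemble the report text.
    public string Generate(IEnumerable<string> lines)
    {
        return string.Join(Environment.NewLine, lines);
    }
}

public class ReportEmailer
{
    // Only knows how to deliver a report; generation is somebody else's job.
    public void Send(string report, string recipient)
    {
        // SMTP details omitted for brevity.
    }
}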

But if you are in a technical interview, such plain answers may not satisfy the interviewer, so explain it as follows.
A well designed and maintainable piece of software exhibits the following principles:
1) Single Responsibility Principle - every class must serve one responsibility
2) Open/Closed Principle - a module must be open for extension but closed for modification
3) Liskov Substitution - instances of derived classes should be able to replace base class references without breaking the code
4) Interface Segregation - prefer client-specific interfaces over one generic interface
5) Dependency Inversion - all modules should depend on abstractions, not concrete classes; use dependency injection instead of creating instances of temporary 'Helper' classes yourself (see the sketch below)
To summarise, maintainable software must be SOLID in design.
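
As a rough sketch of Dependency Inversion and constructor injection, consider the following; the ILogger, ConsoleLogger and OrderProcessor types are made up for illustration.

// OrderProcessor depends on the ILogger abstraction; the concrete logger is injected.
using System;

public interface ILogger
{
    void Log(string message);
}

public class ConsoleLogger : ILogger
{
    public void Log(string message)
    {
        Console.WriteLine(message);
    }
}

public class OrderProcessor
{
    private readonly ILogger logger;

    // The dependency is injected; OrderProcessor never calls 'new ConsoleLogger()'.
    public OrderProcessor(ILogger logger)
    {
        this.logger = logger;
    }

    public void Process(int orderId)
    {
        logger.Log("Processing order " + orderId);
    }
}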

(B) What underlying principles or qualities do you look for in a system of classes that tell you it is well designed and maintainable?
Software consists of modules/components that act in cohesion to produce the necessary functionality. Every module is implemented by means of a set of classes (in OOP languages; structural languages like 'C' do not support classes).
Classes must be written so that they adhere to the principles of OOP. For instance, making all the data members of a class 'public' is possible, but it is a bad practice and strongly discouraged. Data Hiding (see the sketch after this list) -------- (1)
Classes can inherit from other classes to maximise code re-use and create a hierarchy of similar classes. 'Is-A' relationship, inheritance ------ (2)
If the developer has a reason for not allowing a class to be inherited, they can do so by marking the class with the 'sealed' keyword (in C#). Usage of appropriate modifiers ---- (3)
As far as possible, avoid creating 'new' instances of one class inside another class. Instead, inject the dependency by means of a constructor or a .NET property. ------- (4)
No hard-coding of any sort in the classes ---------------- (5)
Limiting one class to one .cs file is a good practice ------ (6)
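
Here is a small sketch of data hiding and the 'sealed' modifier mentioned above; the BankAccount class is made up for illustration.

// Hypothetical example of data hiding and 'sealed'.
using System;

public sealed class BankAccount   // sealed: not intended to be inherited
{
    // The balance is hidden; callers cannot set it to an arbitrary value.
    private double balance;

    public double Balance
    {
        get { return balance; }
    }

    public void Deposit(double amount)
    {
        if (amount <= 0)
            throw new ArgumentException("Deposit amount must be positive.", "amount");
        balance += amount;
    }
}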
Apart from the above, the points about code comments, naming conventions and testability (unit testing) of classes still hold true and cannot be relaxed.

Monday, February 18, 2013

Implementing POST requests in .NET

Data transfer between client and server takes place over sockets and in byte format, which means that anything a client needs to send to a server must be converted into bytes, either implicitly or explicitly.
POST requests differ from other HTTP requests (apart from PUT) in that the client sends data in the body of the HTTP message. Hence we need to understand how to send this data on the client side and, at the same time, how to extract the body from the HTTP message on the server side for further processing. Most sites only talk about GET requests, which are far simpler for any client to consume; it is the POST/PUT requests that need to be understood very well.

As mentioned earlier, the data needs to be sent as a stream of bytes, so if you need to transfer an object of a class it must first be serialized into bytes. This post uses JSON (de)serialization.
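
The snippets below use a survey object of type SurveyData, whose definition is not part of this post. Assuming a minimal shape just to make the samples self-contained, it could look something like this:

// Hypothetical SurveyData class; the real class can carry whatever fields your survey needs.
public class SurveyData
{
    public string SurveyName { get; set; }
    public int Score { get; set; }
}

// ...
SurveyData survey = new SurveyData { SurveyName = "Customer feedback", Score = 4 };
string json = Newtonsoft.Json.JsonConvert.SerializeObject(survey);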

Let's take a look at some client- and server-side code in .NET.

Client Side:


// Requires the System, System.IO, System.Net and System.Text namespaces, plus the Newtonsoft.Json NuGet package.
string endPoint = "http://localhost:20909/surveyupload"; // endpoint address to send the data to
var request = (HttpWebRequest)WebRequest.Create(endPoint);
request.Method = "POST";

string PostData = Newtonsoft.Json.JsonConvert.SerializeObject(survey); // JSON serialization

if (!string.IsNullOrEmpty(PostData))
{
    // Convert the JSON string to bytes and write it into the request body
    byte[] byteContent = Encoding.UTF8.GetBytes(PostData);

    using (var writeStream = request.GetRequestStream())
    {
        writeStream.Write(byteContent, 0, byteContent.Length);
    }
}

using (var response = (HttpWebResponse)request.GetResponse())
{
    var responseValue = string.Empty;

    if (response.StatusCode != HttpStatusCode.OK)
    {
        var message = String.Format("Request failed. Received HTTP {0}", response.StatusCode);
        throw new ApplicationException(message);
    }

    // grab the response
    using (var responseStream = response.GetResponseStream())
    {
        if (responseStream != null)
        {
            using (var reader = new StreamReader(responseStream))
            {
                responseValue = reader.ReadToEnd();
            }
        }
    }
}


Server side:
The following method is a sample function implemented on the server side that maps to the endpoint the client posts to.


public string GetFile(System.IO.Stream data)
{
    string JSONInString = string.Empty;

    // Read the raw request body (the bytes written by the client) as a string
    using (StreamReader reader = new StreamReader(data))
    {
        JSONInString = reader.ReadToEnd();
    }

    // Deserialize the streamed data into an object of the known type
    SurveyData ReceivedData = JsonConvert.DeserializeObject<SurveyData>(JSONInString);

    if (ReceivedData != null)
        return "Success";
    else
        return "Failed";
}
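
The post does not show how this method is exposed as the 'surveyupload' endpoint. Assuming the service is a WCF REST service, the contract might look roughly like this; the ISurveyService name and the exact attribute values are illustrative, not taken from the actual project.

// Assumed WCF service contract mapping the 'surveyupload' endpoint to GetFile.
// Requires references to System.ServiceModel and System.ServiceModel.Web.
using System.IO;
using System.ServiceModel;
using System.ServiceModel.Web;

[ServiceContract]
public interface ISurveyService
{
    [OperationContract]
    [WebInvoke(Method = "POST", UriTemplate = "/surveyupload")]
    string GetFile(Stream data);
}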

That's it. It is very easy to implement POST requests in .NET. The client request above can be elaborated further depending on your requirements, for example by adding extra headers or setting the content type, user agent and other properties of the HTTP request.
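
For example, a couple of optional tweaks to the request built earlier (the values shown are just placeholders, and must be set before the request stream or response is obtained):

// Optional: declare the body format and identify the client.
request.ContentType = "application/json";        // tells the server the body is JSON
request.UserAgent = "SurveyUploader/1.0";        // placeholder user-agent string
request.Headers.Add("X-Api-Key", "your-key");    // hypothetical custom header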

For details regarding the importance of REST services, please refer to, http://intelligentfactory.blogspot.co.uk/2012/09/rest-services-primer-all-about-rest.html