Monthly Archives: February 2011

Writing an Application Using TDD (Part 1) Introduction

I’m writing a post series about writing an interpreter using TDD (Test-Driven Development). My intention is to show the use of TDD in production code. Since adopting TDD, I produce better code (I hope ;-) in less time (no more long debugging sessions ;-). There are other benefits: if you build an application with TDD, you get good code coverage, and you write down the use cases and expected results of the application. All that gives confidence to improve the application later, maybe by another team. In my opinion, delivering an application with TDD is a plus for the end customer and for the healthy evolution of successful software. You can get the same benefits using tests alone, but TDD adds the right iterative process for building production code.

But the interpreter example is like a “long code kata”. You could say: “Hey, it’s not the kind of code I write every day, for fun and money”. And you would be right. So, it’s time to start a post series about writing an application.

I chose the technologies to use:

– .NET 3.5 (maybe, I’ll switch to 4.0)

– ASP.NET MVC 2 (again, I could use 3, at some point)

– Visual Studio 2008 (the other candidate is VS 2010), using its test features, as in my previous posts.

I still have to select the persistence technology. I’m thinking of NHibernate 3.x + Fluent NHibernate or ConfORM; the alternative is Entity Framework 4 with Code First.

There are two paths to follow in this iterative example:

– Simple domain: start by writing presentation code and tests, then an application layer, then the persistence stuff

– More complex domain: write the domain code and tests, then add the presentation, then the persistence.

The first one is a kind of “top-down” approach. It’s the opposite of what I was using in my interpreter series (note the absence of mocks and stubs in my “bottom-up” approach: writing the expressions, commands, then part of the lexer, part of the parser, etc.). With this new way of doing TDD, I want to show the incremental building of functionality, even without having a database, a persistence model, or other technical stuff. ASP.NET MVC and TDD enable us to write the initial use cases in a simple way, over a simple domain, embracing agile and incremental development. Another benefit is that we can deliver something “that works” early, to get feedback from the customer.

After exploring that way of doing development with TDD, I will switch to a slightly more complex domain. In that case, I will prefer to start writing the domain with tests, to focus on the core of the application, adding some presentation stuff but without spending much time on complex interface work.

The format of a post series is important: there are many examples of application code written with TDD (many open source projects), but most of them show only the “final” stage of a long path. Writing a simple example incrementally is a better way to grasp the TDD style of software development.

Well, enough for today. This is an intro post, to present the idea and start the engines! Comments and suggestions welcome.

Keep tuned!

Angel “Java” Lopez

Links, news, Resources: Object Oriented Programming (1)

I’m a compulsive link collector 😉 (you can check my delicious links to see what I mean). I usually share my links, news, and discoveries via my Twitter feed, too. It’s time to share some of the links by topic. Let’s start with one: Object-Oriented Programming. I will use this list for my own consumption, too 😉

Avoiding Dependencies
Insidious Dependencies
Avoid Entrenched Dependencies
Interesting entries by Steve Smith, to learn about dependencies between our objects and their consequences.

InfoQ: Classes Are Premature Optimizations

Justin Love discusses the difference between the classic OOP programming model based on classes and prototypal inheritance built on objects as done in JavaScript, and how they affect performance.

He mentions Smalltalk and Self, too. Link via @HernanWilkinson.

Mixins vs Traits
Stackoverflow discussion: What is the difference between Mixins and Traits? Link via @jfroma.

Interfaces are not abstractions
To discuss:

the proliferation of interfaces that typically follow from TDD or use of DI may not be the pure goodness we tend to believe.

Modelos de Software con Objetos
@HernanWilkinson’s Spanish blog, about software models with objects.

My idea is to post what I know about modeling/developing with objects, based on what we teach in the OOP and DAO courses at the UBA, and on my personal experience

Going completely prototypal in JavaScript

In this post we present the Proto API which implements inheritance in JavaScript in a purely prototypal fashion. This contrasts it with all other JavaScript inheritance APIs that the author is aware of.

Hardware support for Objects: The MUSHROOM Project
A Distributed Multi-user Object-Oriented Programming Environment, with object-based memory.

In 1986 a group at the University of Manchester embarked upon an investigation into how developments in computer architecture could benefit the performance of dynamic object-oriented languages. We reasoned that if the object-oriented approach was of such great benefit to software development, then it would be all the more attractive if there was little loss of performance. Over the next five years we developed an architecture to support object-oriented languages, called the Mushroom architecture. The aim of the research was, starting with carte blanche, to discover what sort of architecture was best suited to Smalltalk-like languages.

Job Security through Code Obscurity
Use objects to obfuscate your code, specialize via inheritance, use lots of patterns, obscure code flow with virtuals and templates, include everything and more.

A non-hierarchical approach to object-oriented programming
Remember Flavors, the object-oriented extension to Lisp.

Organic Programming
A Generative, Iterative and Pattern Language Independent (GIPLI) approach to creating timeless Domain Specific Languages (DSLs).

Solving the Expression Problem with OOP

The problem is, in a nutshell, how do you build an extensible data model and an extensible operation model that meets three goals: Code-level modularization, Separate compilation, Static type safety

Adding Dynamic Interfaces to Smalltalk

In this article we present SmallInterfaces; a new ontology of dynamic interfaces which makes a powerful use of the dynamic nature of Smalltalk. SmallInterfaces adds interfaces as honorary members to Smalltalk’s extensive reflection mechanism, in a manner portable across the many Smalltalk variants.


Trylon
Trylon is a computer language. It is basically a cross between Python and Smalltalk. It uses indentation for program structure, like Python, and it uses Smalltalk’s expression syntax (but with precedence). Its objects are dynamically typed, but its programs are statically compiled (via C).

Object-Oriented PHP for Beginners

you’ll learn the concepts behind object-oriented programming (OOP), a style of coding in which related actions are grouped into classes to aid in creating more-compact, effective code.

To have getters or not? Encapsulation vs. use

SOLID by example
Source code examples in .NET, to understand SOLID principles.

The Open/Closed Principle: Concerns about Change in Software Design

This post investigates the applicability of the “Open/Closed Principle” when we add new functionality to a software design whose source code is entirely under our control.

Agent nouns are code smells
Discussion about whether class names ending with agent nouns are a code smell (agent nouns such as “helper”, “manager”).

Coding: The agent noun class
Mark Needham’s response to the above post.

OOP: What does an object’s responsibility entail? at Mark Needham

I believe that an object should be responsible for deciding how its data is used rather than having another object reach into it, retrieve its data and then decide what to do with it.

LDNUG : Mixing functional and object oriented approaches to programming in C#
Video of a talk by Mike Wagg and Mark Needham

Enough for this post. I will write more topic-link posts.

Keep tuned!

Angel “Java” Lopez

Azure: Fractal application

In January, I reimplemented my Fractal application, now using Azure (see my Azure-related posts). The idea is to calculate each sector of a fractal image using the power of worker roles, store the results in blobs, and consume them from a WinForms application.

This is the solution:

The source code is in my AjCodeKatas Google project. The code is at:

If you are too lazy to use SVN, this is the current frozen code:

The projects in the solution:

AzureFractal: the Azure cloud definition.

Fractal: it contains my original code from previous fractal applications, as an independent class library.

Fractal.Azure: serialization utilities for fractal info, and a service facade to post that info to an Azure message queue.

AzureLibrary: utility classes I used in other Azure examples; they evolve with each example.

FractalWorkerRole: the worker role that consumes messages indicating what sector (rectangle) of the Mandelbrot fractal to calculate.

Fractal.GUI: a WinForms client project that sends and receives messages to/from the worker role, using Azure queues.

You should configure the solution to have multiple startup projects:

The WinForms application sends a message to a queue, with the info about the fractal sector to calculate:

private void Calculate()
{
    Bitmap bitmap = new Bitmap(pcbFractal.Width, pcbFractal.Height);
    pcbFractal.Image = bitmap;
    realWidth = realDelta * pcbFractal.Width;
    imgHeight = imgDelta * pcbFractal.Height;
    realMin = realCenter - realWidth / 2;
    imgMin = imgCenter - imgHeight / 2;
    int width = pcbFractal.Width;
    int height = pcbFractal.Height;
    Guid id = Guid.NewGuid();
    SectorInfo sectorinfo = new SectorInfo()
    {
        Id = id,
        FromX = 0,
        FromY = 0,
        Width = width,
        Height = height,
        RealMinimum = realMin,
        ImgMinimum = imgMin,
        Delta = realDelta,
        MaxIterations = colors.Length,
        MaxValue = 4
    };
    Calculator calculator = new Calculator();
    // ... the sector info is serialized and sent to the queue (elided in this excerpt)
}

The worker role reads messages from the queue and deserializes the SectorInfo:

while (true)
{
    CloudQueueMessage msg = queue.GetMessage();
    if (msg != null)
    {
        Trace.WriteLine(string.Format("Processing {0}", msg.AsString));
        SectorInfo info = SectorUtilities.FromMessageToSectorInfo(msg);
        // ... the sector is split or processed, as shown in the next fragments
    }
}

If the sector is too big, new messages are generated:

if (info.Width > 100 || info.Height > 100)
{
    Trace.WriteLine("Splitting message...");
    for (int x = 0; x < info.Width; x += 100)
        for (int y = 0; y < info.Height; y += 100)
        {
            SectorInfo newinfo = info.Clone();
            newinfo.FromX = x + info.FromX;
            newinfo.FromY = y + info.FromY;
            newinfo.Width = Math.Min(100, info.Width - x);
            newinfo.Height = Math.Min(100, info.Height - y);
            // newinfo is serialized into a new message and enqueued
            // (helper name assumed, mirroring FromMessageToSectorInfo)
            CloudQueueMessage newmsg = SectorUtilities.FromSectorInfoToMessage(newinfo);
            queue.AddMessage(newmsg);
        }
}
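
The splitting arithmetic above can be checked in isolation. This is a minimal sketch (SplitSizes is an illustrative helper of mine, not part of the original code) showing how a dimension is covered by chunks of at most 100, with the last one clamped by Math.Min:

```csharp
using System;
using System.Collections.Generic;

class SectorSplit
{
    // Compute the sizes of the sub-sectors produced by stepping
    // through `total` in chunks of `max`, clamping the last chunk.
    public static List<int> SplitSizes(int total, int max)
    {
        var sizes = new List<int>();
        for (int x = 0; x < total; x += max)
            sizes.Add(Math.Min(max, total - x));
        return sizes;
    }
}
```

For a 250-pixel-wide sector, for example, the chunk sizes are 100, 100, and 50: the whole width is covered with no overlap.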

If the sector is small enough, then it is processed:

Trace.WriteLine("Processing message...");
Sector sector = calculator.CalculateSector(info);
string blobname = string.Format("{0}.{1}.{2}.{3}.{4}",
    info.Id, sector.FromX, sector.FromY, sector.Width, sector.Height);
CloudBlob blob = blobContainer.GetBlobReference(blobname);
MemoryStream stream = new MemoryStream();
BinaryWriter writer = new BinaryWriter(stream);
foreach (int value in sector.Values)
    writer.Write(value);
writer.Flush();
stream.Seek(0, SeekOrigin.Begin);
blob.UploadFromStream(stream);
CloudQueueMessage outmsg = new CloudQueueMessage(blobname);
// outmsg is sent to the output queue, to notify the client (queue field name assumed)
outputQueue.AddMessage(outmsg);

A blob with the result is generated, and a message is sent to another queue to notify the client application.
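The blob-name convention is what ties the two sides together: the worker encodes the sector coordinates in the name, and the client parses them back. A round-trip sketch (the Encode/Decode helper names are mine, not from the original code); note this works because a Guid’s string form contains hyphens, never dots:

```csharp
using System;

class BlobNames
{
    // Encode the sector coordinates in the blob name, as the worker role does.
    public static string Encode(Guid id, int fromX, int fromY, int width, int height)
    {
        return string.Format("{0}.{1}.{2}.{3}.{4}", id, fromX, fromY, width, height);
    }

    // Parse the coordinates back, as the WinForms client does.
    public static (Guid Id, int FromX, int FromY, int Width, int Height) Decode(string blobname)
    {
        string[] p = blobname.Split('.');
        return (new Guid(p[0]), int.Parse(p[1]), int.Parse(p[2]),
                int.Parse(p[3]), int.Parse(p[4]));
    }
}
```

Encoding metadata in the blob name keeps the notification message trivial: a single string is enough for the client to fetch and place the sector.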

The WinForms client has a thread with a loop reading messages from the second queue:

string blobname = msg.AsString;
CloudBlob blob = this.blobContainer.GetBlobReference(blobname);
MemoryStream stream = new MemoryStream();
blob.DownloadToStream(stream);
string[] parameters = blobname.Split('.');
Guid id = new Guid(parameters[0]);
int fromx = Int32.Parse(parameters[1]);
int fromy = Int32.Parse(parameters[2]);
int width = Int32.Parse(parameters[3]);
int height = Int32.Parse(parameters[4]);
int[] values = new int[width * height];
stream.Seek(0, SeekOrigin.Begin);
BinaryReader reader = new BinaryReader(stream);
for (int k = 0; k < values.Length; k++)
    values[k] = reader.ReadInt32();
this.Invoke((Action<int, int, int, int, int[]>)((x, y, w, h, v) =>
    this.DrawValues(x, y, w, h, v)), fromx, fromy, width, height, values);

Note the use of .Invoke to run the drawing of the image on the UI thread.

This is the WinForms app after clicking the Calculate button. Note the sectors arriving:

Some blob sectors have not arrived yet. You can drag the mouse to select a new sector:

You can change the colors by clicking the New Colors button:

This is a sample application, a proof of concept. You will probably get better performance using a single machine. But the idea is that you can defer work to worker roles, especially if the work can be done in parallel (imagine a parallel render farm for animations). If you run this application in Azure, with many worker roles, performance could improve.

Next steps: implement a distributed web crawler, and try a distributed genetic algorithm, running in the Azure cloud.

Keep tuned!

Angel “Java” Lopez

Azure: A simple Application using Tables

Continuing with my Azure examples, this time I wrote a simple CRUD web application using Tables, via the Azure Storage Client.

It’s a classic ASP.NET application; this is the view for the Customer/Index action:

You can download the solution from my AjCodeKatas Google project. The code is at:

If you want the current frozen version:

The simple entity Customer:

public class Customer : TableServiceEntity
{
    public Customer()
        : this(Guid.NewGuid().ToString())
    {
    }

    public Customer(string id)
        : base(id, string.Empty)
    {
    }

    public string Name { get; set; }
    public string Address { get; set; }
    public string Notes { get; set; }
}

I’m using the PartitionKey as the primary key, filling it with a Guid. The RowKey is the empty string. In a less simple application, I could save the invoices of a customer using the same partition key, identifying each invoice by its RowKey.
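As a sketch of that design, the keys for a hypothetical Invoice entity could be composed like this (the Invoice idea comes from the paragraph above, but the helper and the zero-padded row key are my assumptions, not code from the post):

```csharp
using System;

class InvoiceKeys
{
    // Hypothetical key layout: all invoices of a customer share the
    // customer's partition key, so they live in the same partition
    // and can be queried together efficiently.
    public static string PartitionKeyFor(string customerId)
    {
        return customerId;
    }

    // The row key identifies the invoice inside the partition.
    // Zero-padding makes lexicographic order match numeric order,
    // which matters because table storage sorts row keys as strings.
    public static string RowKeyFor(int invoiceNumber)
    {
        return invoiceNumber.ToString("D8");
    }
}
```

The padding choice is the kind of detail that is easy to miss: without it, invoice "10" would sort before invoice "2".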

A DataContext is in charge of exposing an IQueryable of Customers:

public class DataContext : TableServiceContext
{
    public const string CustomerTableName = "Customers";

    public DataContext(string baseAddress, StorageCredentials credentials)
        : base(baseAddress, credentials)
    {
        this.IgnoreResourceNotFoundException = true;
    }

    public DataContext(CloudStorageAccount storageAccount)
        : base(storageAccount.TableEndpoint.AbsoluteUri, storageAccount.Credentials)
    {
        this.IgnoreResourceNotFoundException = true;
    }

    public IQueryable<Customer> Customers
    {
        get { return this.CreateQuery<Customer>(CustomerTableName); }
    }
}

Note the IgnoreResourceNotFoundException: when true, retrieving a nonexistent customer returns a null value instead of raising an exception.

There is a service to access and manage Customers:

public class CustomerServices
{
    private DataContext context;

    public CustomerServices(DataContext context)
    {
        this.context = context;
    }

    public Customer GetCustomerById(string id)
    {
        return this.context.Customers.Where(c => c.PartitionKey == id).SingleOrDefault();
    }

    public IEnumerable<Customer> GetCustomerList()
    {
        // the table service cannot order server-side, so materialize first
        return this.context.Customers.ToList().OrderBy(c => c.Name);
    }

    public void AddCustomer(Customer customer)
    {
        this.context.AddObject(DataContext.CustomerTableName, customer);
        this.context.SaveChanges();
    }

    public void UpdateCustomer(Customer customer)
    {
        this.context.AttachTo(DataContext.CustomerTableName, customer, "*");
        this.context.UpdateObject(customer);
        this.context.SaveChanges();
    }

    public void DeleteCustomerById(string id)
    {
        Customer c = this.GetCustomerById(id);
        this.context.DeleteObject(c);
        this.context.SaveChanges();
    }
}

Note the Attach using an ETag (third parameter) of “*” (any). This way, I can update the customer by attaching the “in-memory-created” instance to the data context, without retrieving it from the database. This approach is viable when I have all the data of the customer. In most applications you change only some fields, so you should retrieve the object, change it, and then save the changes.
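The ETag rule behind that third parameter can be sketched independently of the Azure API (the Matches helper is illustrative, not a library call): a concrete ETag only matches the stored one when the entity has not changed since it was read (optimistic concurrency), while “*” matches anything, which is why the blind attach works:

```csharp
class ETags
{
    // "*" is the wildcard: the update applies regardless of the stored version.
    // Any other value matches only if the entity is still at that version.
    public static bool Matches(string requestETag, string storedETag)
    {
        return requestETag == "*" || requestETag == storedETag;
    }
}
```

The trade-off: with “*” a concurrent update by someone else is silently overwritten, while a concrete ETag makes the conflicting update fail so you can retry.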

Using the service to retrieve the customers:

CloudStorageAccount storage = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
CustomerServices services = new CustomerServices(new DataContext(storage));
this.grdCustomerList.DataSource = services.GetCustomerList();

Note: it’s a sample application, simple and direct. A real application should separate the view model from the business model, and maybe use an ASP.NET MVC front end. I will write that example, using MVC. In another series (outside Azure), I want to write an app using ASP.NET MVC and TDD.

Next steps in Azure: a distributed fractal application, a distributed web crawler, and a distributed genetic algorithm, all using worker roles.

Keep tuned!

Angel “Java” Lopez

A Minimal Http Server in C#

Two months ago, I wrote a post implementing a minimal HTTP server in Java:

A Minimal Http Server in Java

These days, I began to explore node.js. I want to implement a minimal server in AjSharp that maps incoming requests to dynamic functions, à la node.js/JavaScript. But before that, I wanted a pure C# minimal implementation. I could use TcpListener (like the Java ServerSocket I used in the previous post), as in this project (2001):

Create your own Web Server using C#

But in .NET, we have a minimal web listener in the class System.Net.HttpListener. See:

HttpListener for dummies: a simple “HTTP Request” reflector

So, I wrote the minimal console code:

class Program
{
    static string rootDirectory;

    static void Main(string[] args)
    {
        rootDirectory = args[0];
        HttpListener listener = new HttpListener();
        // the rest of the arguments are the URI prefixes to listen on
        for (int k = 1; k < args.Length; k++)
            listener.Prefixes.Add(args[k]);
        listener.Start();
        while (true)
        {
            HttpListenerContext context = listener.GetContext();
            Process(context);
        }
    }

    private static void Process(HttpListenerContext context)
    {
        string filename = context.Request.Url.AbsolutePath;
        filename = filename.Substring(1);
        if (string.IsNullOrEmpty(filename))
            filename = "index.html";
        filename = Path.Combine(rootDirectory, filename);
        using (Stream input = new FileStream(filename, FileMode.Open))
        {
            byte[] buffer = new byte[1024 * 16];
            int nbytes;
            while ((nbytes = input.Read(buffer, 0, buffer.Length)) > 0)
                context.Response.OutputStream.Write(buffer, 0, nbytes);
        }
        context.Response.OutputStream.Close();
    }
}

The web server returns the content of static files from a root directory. The first argument is the root directory, and the rest of the arguments are the URI prefixes to listen on:
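The URL-to-file mapping in Process can be sketched as a pure function (ResolveFile is my name for it; the logic mirrors the fragment above):

```csharp
using System.IO;

class UrlMapping
{
    // Map the request's absolute path to a file under the root directory,
    // defaulting to index.html when the root path "/" is requested.
    public static string ResolveFile(string rootDirectory, string absolutePath)
    {
        string filename = absolutePath.Substring(1); // drop the leading '/'
        if (string.IsNullOrEmpty(filename))
            filename = "index.html";
        return Path.Combine(rootDirectory, filename);
    }
}
```

Note that, like the original code, this does no sanitization of “..” segments; a real server should reject path-traversal requests before touching the file system.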

The result, using the static files I have in my Tomcat docs directory:

The output at console:

Next steps: use this code in AjSharp, or extend it to support a pool of threads, or to support /run/<commandtoexecute> in the server.

Keep tuned!

Angel “Java” Lopez