Angel "Java" Lopez on Blog

July 21, 2011

Social Online Games Programming (Part 2) Tankster and Windows Azure Toolkit For Social Games


Yesterday, July 20th, Microsoft released a preview version of Windows Azure Toolkit for Social Games, and published a beta version (with code) of a first demo game.

You can download the code from Codeplex:

You can play the game at:

The main projects in the current solution:

Tankster.GamePlay is a web role. The only worker role is Tankster.WorkerRole. Tankster.Core is a class library. There is interesting code in Tankster.Common: Azure utilities to access repositories, a job engine; all its code is game-agnostic.

These are my first short comments about the code and implemented features (remember, it is a beta! Some of these features/implementations could change in the next release):

– Client technology: HTML5, JavaScript, EaselJS (for canvas programming).
– Server technology: ASP.NET MVC 3, some Razor test views (interesting topic: how to test the game without the REAL game), WCF Web API (another interesting topic to discuss: alternative technologies to receive the game activity).
– There is a client game model and entities in JavaScript. See src/model, src/game.

– There is a server game model (see Tankster.Core class library project)

– You can play in single-player mode, but you can also choose multi-player online. Then, the game uses the ACS portal to log in using federated authentication:

– The client code resides in a single page: index.html (with a lot of referenced JavaScript files).
– Client code sends JSON data (commands) to WCF Web API endpoints, using Ajax/jQuery. There are published services exposing a REST-like interface.


– Most of the game activity is sent to the game/command service. The service updates the game status residing in a blob in Azure Storage. Code excerpt:

// Add gameAction
var gameAction = new GameAction
{
    Id = Guid.NewGuid(),
    Type = commandType,
    CommandData = commandData,
    UserId = this.CurrentUserId,
    Timestamp = DateTime.UtcNow
};
game.GameActions.Add(gameAction);

// Cleanup game actions list: discard actions older than 10 seconds
for (int i = 0; i < game.GameActions.Count(); i++)
    if (game.GameActions[i].Timestamp < DateTime.UtcNow.AddSeconds(-10))
        game.GameActions.RemoveAt(i--);

– The game status is polled by JavaScript clients from blob storage. In this way, the ASP.NET MVC web role has less workload.
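As a rough sketch of that polling idea (names here are illustrative, not the toolkit's actual client code), the client can remember which action ids it already applied and pick only the new ones from each polled game-state document:

```javascript
// Hypothetical sketch: select the game actions not applied yet.
// gameState is the JSON document polled from the blob; seenIds is a
// plain object used as a set of already-applied action ids.
function selectNewActions(gameState, seenIds) {
  var fresh = [];
  for (var i = 0; i < gameState.gameActions.length; i++) {
    var action = gameState.gameActions[i];
    if (!seenIds[action.id]) {
      seenIds[action.id] = true; // remember it, so the next poll skips it
      fresh.push(action);
    }
  }
  return fresh;
}
```

A polling loop (jQuery style, as the post suggests) would call something like `$.getJSON(gameStateBlobUrl, function (state) { selectNewActions(state, seenIds).forEach(applyAction); })` on a timer; `gameStateBlobUrl` and `applyAction` are assumed names.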

– The blob resides in the same domain, so no cross-domain JSON call is needed. But the game is prepared to make cross-domain Ajax calls, replacing the XmlHttpRequest object with a Flash component.

– The Skirmish game mode (five players in a game) is implemented by queuing the new players in a Game Queue, managed by the worker role.
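The queuing idea can be sketched like this (a toy illustration of the matchmaking rule, not the toolkit's worker-role code; all names are made up):

```javascript
// Toy Skirmish matchmaking: collect waiting players and start a game
// as soon as five are available. Illustrative only.
var GAME_SIZE = 5;

function enqueuePlayer(waiting, games, playerId) {
  waiting.push(playerId);
  if (waiting.length >= GAME_SIZE) {
    // take the first five waiting players and start a game with them
    games.push(waiting.splice(0, GAME_SIZE));
  }
}
```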

– Statistics are processed in the worker role: some game actions are sent to an Azure queue, so the statistics processing doesn’t disturb the client game.

– User status, game status and Skirmish Game Queue status are stored in blobs.

– Statistics data uses SQL Azure.

– The only worker role uses a Job Engine to process many tasks. Example:

// Game Queue for Skirmish game (excerpt; surrounding fluent setup omitted,
// chained method name illustrative)
    .SetupContext((message, context) =>
    {
        context.Add("userId", message.UserId);
    })
    .Run(new SkirmishGameQueueCommand(userRepository,
        gameRepository, workerContext))

There are a lot of points to comment on and discuss, feed for upcoming posts. Surely the solution will evolve and new versions will be published (this week? next week?). But it is interesting to have a published game online AND the code to analyze it.

Keep tuned!

Angel “Java” Lopez

June 22, 2011

OAuth, OAuth2 and Azure Access Control Service (ACS): Links

Filed under: .NET, Azure, OAuth, OAuth2, Rest — ajlopez @ 9:37 am

The past two weeks I was working on a proof-of-concept application for a customer of mine, implementing OAuth2 using ACS (Azure Access Control Service). These are the principal links I used.

First, links about what OAuth is, its history, etc.:

OAuth (Open Authorization) is an open standard for authorization. It allows users to share their private resources (e.g. photos, videos, contact lists) stored on one site with another site without having to hand out their credentials, typically username and password.

The Authoritative Guide to OAuth 1.0

The new OAuth 2:

OAuth in ACS and WCF:

Windows Azure AppFabric Access Control Service (ACS): WCF SWT/REST OAuth Scenario
Securing WCF Services with ACS

ACS (Azure Access Control Service) Added Support for OAuth 2.0 Protocol

This is the key web scenario example with code that I studied [1]:
It uses SWT (Simple Web Token) tokens to protect REST services. Read the setup to understand what is needed (Service Identity configuration) in Azure ACS.

The second key scenario example is desktop flow:

DataMarket OAuth Samples – Rich Client (2)
DataMarket OAuth Samples – Web Client
Again, these examples use SWT.

I found these last two examples at:

I should review the code at:
Code Sample: OAuth 2.0 Certificate Authentication
contained in
Access Control Service Samples and Documentation

I could extend example [1] to support a WinForms client.

Keep tuned!

Angel “Java” Lopez

June 20, 2011

Links, News and Resources: Windows Azure (1)

Filed under: .NET, Azure, Links — ajlopez @ 2:23 pm


Some links I found interesting about Azure:

Inside Windows Azure, the Cloud Operating System with Mark Russinovich

Introducing System.Web.Providers – ASP.NET Universal Providers for Session, Membership, Roles and User Profile on SQL Compact and SQL Azure – Scott Hanselman

Is Apple iCloud Powered by Microsoft Windows Azure?

Remote Desktop in Windows Azure–what’s it doing?

Node.js, Ruby, and Python in Windows Azure: A Look at What’s Possible

Node.js, CoffeeScript, and the Windows Azure Service Management API — Gist

Windows Azure Q&A with Roger Jennings — Visual Studio Magazine

Windows Azure Toolkit for Windows Phone 7

Azure VM Assistant (AzureVMAssist) : Windows Azure VM Information, Investigation and Diagnostics Utility

Edit and Apply New WIF’s Config Settings in Your Windows Azure WebRole… Without Redeploying!

Getting started with SQL Azure Development

Windows Azure | Pricing Estimator | Cost Calculator | Pricing Calculator

How to Deploy a Hadoop Cluster on Windows Azure

Scalable and Simple CQRS Views in the Cloud – Blog – CQRS and Cloud Computing

Patterns: Windows Azure – Upgrading your table storage schema without disrupting your service

My Azure Links:

Keep tuned! More links coming soon 😉

Angel “Java” Lopez

June 15, 2011

Links, News and Resources: Node.js (1)

Filed under: Azure, JavaScript, Links, NodeJs — ajlopez @ 9:10 am


You know: I’m a link collector. This is a short selection of my Node.js links:

Evented I/O for V8 JavaScript.

An example of a web server written in Node which responds with "Hello World" for every request.

var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(1337, "127.0.0.1");
console.log('Server running at http://127.0.0.1:1337/');

Modules <– Key feature

Mastering Node – Open Source Nodejs eBook

The CommonJS API will fill that gap by defining APIs that handle many common application needs, ultimately providing a standard library as rich as those of Python, Ruby and Java.

CommonJS Modules implementation
To understand modules in CommonJS

Learning Server-Side JavaScript with Node.js

A package manager for node

Learning Server-Side Javascript with Node.js

V8 JavaScript Engine

Playing with Node.js, Ubuntu, Sqlite3 and node-Sqlite

A geek with a hat » Comparing clojure and node.js for speed

Felix’s Node.js Guide

Why a JavaScript hater thinks everyone needs to learn JavaScript in the next year

Nave: version manager for node.js
It’s like rvm for ruby

The Node Ahead: JavaScript leaps from browser into future

Up and Running with Node.js

First Steps with Node.js: exciting stuff

NodeJS Tutorial with CouchDB and Haml – ErdNodeFlips

Node Tutorial Part 2

Deep inside Node.js with Ryan Dahl

JavaScript require in 100 lines of code
To understand the scope and function of require() in JavaScript
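The core idea of that article can be condensed into a few lines: each module body runs inside a function (so its top-level variables stay private) and the resulting exports are cached. A toy version, not the article's actual code:

```javascript
// Toy require(): each module is a function receiving (module, exports,
// require); its exports are cached so the body runs only once.
function makeRequire(sources) {
  var cache = {};
  function require(name) {
    if (cache[name]) return cache[name].exports; // already loaded
    var module = (cache[name] = { exports: {} });
    sources[name](module, module.exports, require);
    return module.exports;
  }
  return require;
}
```

For example, `makeRequire({ math: function (module, exports) { exports.add = function (a, b) { return a + b; }; } })` yields a `require` whose repeated calls for 'math' return the same cached exports object.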

How to Install Node.JS on Windows – David Trejo’s Thoughts

Node.js, CoffeeScript, and the Windows Azure Service Management API

Node.js, Ruby, and Python in Windows Azure: MIX Talk

Node.js, Ruby, and Python in Windows Azure: A Look at What’s Possible | MIX11 | Channel 9

My links

Keep tuned!

Angel "Java" Lopez

June 13, 2011

Running AjSharp in Azure

Filed under: .NET, AjSharp, Azure, Distributed Computing, Open Source Projects — ajlopez @ 9:37 am

My weekend code kata was something I had been thinking about since last year: run AjSharp in Azure worker roles. The idea: a worker role instance can receive text via queue messages containing AjSharp code, and execute it. The output is sent as a message to another queue.

The result was committed to my AjCodeKata project: you must download trunk\Azure\AzureAjSharp AND trunk\AjLanguage (where the AjSharp projects reside).

The solution:

The projects:

AzureAjSharp.WorkerRole: sample worker role, with these lines added:

CloudStorageAccount account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
Processor processor = new Processor(account);
processor.Start();

Azure.AjSharp: the class library. It contains the Processor class. The constructor needs a cloud account and the names of the requests queue, the responses queue and the blob container. The requests queue has messages with AjSharp code to execute; the responses queue has the output text of those executions. The above processor.Start() command starts the reading and processing of AjSharp code.

AzureAjSharp.Console: it reads lines from the console, and when it reads a “send” line, the text is converted to a cloud message and sent to the requests queue. It has a thread that reads the responses queue and prints the results.

AzureLibrary: auxiliary classes.

AjSharpVS2010, AjLanguageVS2010: AjSharp implementation.

When I run the console application, I can send AjSharp code to worker roles:

And more: AjSharp supports Include(“filetobeincluded”); where the file contains AjSharp code. I modified the launch of the AjSharp machine to have an Include subroutine implementation that reads the content from a blob container.

A graph:

Then, I uploaded some simple code (the files are in the Examples folder of the Azure.AjSharp project) to the ajsfiles blob container (DevStorage in this test):

(I’m using Neudesic Azure Storage Explorer, but I could use CloudBerry Explorer for Azure Storage: it supports folders in a tree).

This is the test running (using Include) HelloWorld.ajs and ForOneToTen.ajs:

Next steps:

– Write more utilities in AjSharp, to be included when they are needed: file and directory utilities, download and upload of blobs, sending and receiving messages using queues, broadcasting messages to all worker instances, downloading and loading of assemblies, etc. The sky is the limit! 😉

Then, you (or your program) can dynamically send tasks and receive results. Nice to have: GUIDs to identify tasks and their results; a web interface; results stored as text blobs; a cache (and flush) of included blob files, etc.

Keep tuned!

Angel “Java” Lopez

February 15, 2011

Azure: Fractal application

Filed under: .NET, Azure, Cloud Computing, Distributed Computing — ajlopez @ 10:21 am

In January, I reimplemented my fractal application, now using Azure (my Azure-related posts). The idea is to calculate each sector of a fractal image using the power of worker roles, store the results in blobs, and consume them from a WinForms application.

This is the solution:

The source code is in my AjCodeKatas Google project. The code is at:

If you don’t want to use SVN, this is the current frozen code:

The projects in the solution:

AzureFractal: the Azure cloud definition.

Fractal: it contains my original code from previous fractal applications. An independent class library.

Fractal.Azure: serialization utilities for the fractal info, and a service facade to post that info to an Azure message queue.

AzureLibrary: utility classes I used in other Azure examples. They evolve with each example.

FractalWorkerRole: the worker role that consumes messages indicating which sector (rectangle) of the Mandelbrot fractal to calculate.

Fractal.GUI: a client WinForms project that sends and receives messages to/from the worker role, using Azure queues.

You should configure the solution to have multiple startup projects:

The WinForms application sends a message to a queue, with the info about the fractal sector to calculate:

private void Calculate()
{
    Bitmap bitmap = new Bitmap(pcbFractal.Width, pcbFractal.Height);
    pcbFractal.Image = bitmap;
    realWidth = realDelta * pcbFractal.Width;
    imgHeight = imgDelta * pcbFractal.Height;
    realMin = realCenter - realWidth / 2;
    imgMin = imgCenter - imgHeight / 2;
    int width = pcbFractal.Width;
    int height = pcbFractal.Height;
    Guid id = Guid.NewGuid();
    SectorInfo sectorinfo = new SectorInfo()
    {
        Id = id,
        FromX = 0,
        FromY = 0,
        Width = width,
        Height = height,
        RealMinimum = realMin,
        ImgMinimum = imgMin,
        Delta = realDelta,
        MaxIterations = colors.Length,
        MaxValue = 4
    };
    Calculator calculator = new Calculator();
    // ... the sector info is then serialized and sent to the queue
}

The worker role reads messages from the queue and deserializes the SectorInfo:

while (true)
{
    CloudQueueMessage msg = queue.GetMessage();
    if (msg != null)
    {
        Trace.WriteLine(string.Format("Processing {0}", msg.AsString));
        SectorInfo info = SectorUtilities.FromMessageToSectorInfo(msg);
        // ...
    }
}

If the sector is too big, new messages are generated:

if (info.Width > 100 || info.Height > 100)
{
    Trace.WriteLine("Splitting message...");
    for (int x = 0; x < info.Width; x += 100)
        for (int y = 0; y < info.Height; y += 100)
        {
            SectorInfo newinfo = info.Clone();
            newinfo.FromX = x + info.FromX;
            newinfo.FromY = y + info.FromY;
            newinfo.Width = Math.Min(100, info.Width - x);
            newinfo.Height = Math.Min(100, info.Height - y);
            // serialize and enqueue the new, smaller sector
            CloudQueueMessage newmsg =
                SectorUtilities.FromSectorInfoToMessage(newinfo);
            queue.AddMessage(newmsg);
        }
}

If the sector is small enough, then it is processed:

Trace.WriteLine("Processing message...");
Sector sector = calculator.CalculateSector(info);
string blobname = string.Format("{0}.{1}.{2}.{3}.{4}",
    info.Id, sector.FromX, sector.FromY, sector.Width, sector.Height);
CloudBlob blob = blobContainer.GetBlobReference(blobname);
MemoryStream stream = new MemoryStream();
BinaryWriter writer = new BinaryWriter(stream);
foreach (int value in sector.Values)
    writer.Write(value);
writer.Flush();
stream.Seek(0, SeekOrigin.Begin);
blob.UploadFromStream(stream);
CloudQueueMessage outmsg = new CloudQueueMessage(blobname);
outqueue.AddMessage(outmsg); // notify the client via the results queue

A blob with the result is generated, and a message is sent to another queue to notify the client application.
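The blob-name convention above, "{id}.{fromX}.{fromY}.{width}.{height}", works because a GUID contains dashes but no dots. A sketch of the encode/decode pair (in JavaScript for brevity; the post's code is C#):

```javascript
// Encode and decode the sector blob name "{id}.{fromX}.{fromY}.{width}.{height}".
// split('.') recovers the five fields, since a GUID has dashes, not dots.
function encodeSectorName(s) {
  return [s.id, s.fromX, s.fromY, s.width, s.height].join('.');
}

function decodeSectorName(name) {
  var p = name.split('.');
  return { id: p[0], fromX: +p[1], fromY: +p[2], width: +p[3], height: +p[4] };
}
```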

The WinForms application has a thread with a loop reading messages from the second queue:

string blobname = msg.AsString;
CloudBlob blob = this.blobContainer.GetBlobReference(blobname);
MemoryStream stream = new MemoryStream();
blob.DownloadToStream(stream);
string[] parameters = blobname.Split('.');
Guid id = new Guid(parameters[0]);
int fromx = Int32.Parse(parameters[1]);
int fromy = Int32.Parse(parameters[2]);
int width = Int32.Parse(parameters[3]);
int height = Int32.Parse(parameters[4]);
int[] values = new int[width * height];
stream.Seek(0, SeekOrigin.Begin);
BinaryReader reader = new BinaryReader(stream);
for (int k = 0; k < values.Length; k++)
    values[k] = reader.ReadInt32();
this.Invoke((Action<int, int, int, int, int[]>)((x, y, h, w, v) =>
    this.DrawValues(x, y, h, w, v)), fromx, fromy, width, height, values);

Note the use of .Invoke to run the drawing of the image in the UI thread.

This is the WinForms app, after clicking the Calculate button. Note that the sectors are arriving:

Some blob sectors have not arrived yet. You can drag the mouse to select a new sector:

You can change the colors by clicking the New Colors button:

This is a sample application, a “proof of concept”. Probably you will get better performance using a single machine. But the idea is that you can defer work to worker roles, especially if the work can be done in parallel (imagine a parallel render farm for animations). If you run such an application in Azure, with many worker roles, the performance could be improved.

Next steps: implement a distributed web crawler and try a distributed genetic algorithm, running in the Azure cloud.

Keep tuned!

Angel “Java” Lopez

February 8, 2011

Azure: A simple Application using Tables

Filed under: .NET, ASP.NET, Azure, Cloud Computing — ajlopez @ 10:24 am

Continuing with my Azure examples, this time I wrote a simple CRUD web application using Tables, via the Azure Storage Client.

It’s a classic ASP.NET application; this is the view for the Customer/Index action:

You can download the solution from my AjCodeKatas Google project. The code is at:

If you want the current frozen version:

The simple entity Customer:

public class Customer : TableServiceEntity
{
    public Customer()
        : this(Guid.NewGuid().ToString()) { }

    public Customer(string id)
        : base(id, string.Empty) { }

    public string Name { get; set; }
    public string Address { get; set; }
    public string Notes { get; set; }
}

I’m using the PartitionKey as the primary key, filling it with a GUID. The RowKey is the empty string. In a less simple application, I could save the invoices of a customer using the same partition key, identifying each invoice with its RowKey.
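Summarized as a tiny sketch (JavaScript for brevity; function names are illustrative):

```javascript
// Illustrative key scheme: a customer row uses a GUID PartitionKey and an
// empty RowKey; its invoices would share that PartitionKey and carry their
// own RowKey, so a customer and its invoices live in the same partition.
function customerKeys(customerId) {
  return { partitionKey: customerId, rowKey: '' };
}

function invoiceKeys(customerId, invoiceId) {
  return { partitionKey: customerId, rowKey: invoiceId };
}
```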

A DataContext is in charge of exposing an IQueryable of Customers:

public class DataContext : TableServiceContext
{
    public const string CustomerTableName = "Customers";

    public DataContext(string baseAddress, StorageCredentials credentials)
        : base(baseAddress, credentials)
    {
        this.IgnoreResourceNotFoundException = true;
    }

    public DataContext(CloudStorageAccount storageAccount)
        : base(storageAccount.TableEndpoint.AbsoluteUri, storageAccount.Credentials)
    {
        this.IgnoreResourceNotFoundException = true;
    }

    public IQueryable<Customer> Customers
    {
        get { return this.CreateQuery<Customer>(CustomerTableName); }
    }
}

Note the IgnoreResourceNotFoundException: if true, I can retrieve a nonexistent customer and, instead of raising an exception, get a null value.

There is a service to access and manage Customers:

public class CustomerServices
{
    private DataContext context;

    public CustomerServices(DataContext context)
    {
        this.context = context;
    }

    public Customer GetCustomerById(string id)
    {
        return this.context.Customers.Where(c => c.PartitionKey == id).SingleOrDefault();
    }

    public IEnumerable<Customer> GetCustomerList()
    {
        return this.context.Customers.ToList().OrderBy(c => c.Name);
    }

    public void AddCustomer(Customer customer)
    {
        this.context.AddObject(DataContext.CustomerTableName, customer);
        this.context.SaveChanges();
    }

    public void UpdateCustomer(Customer customer)
    {
        this.context.AttachTo(DataContext.CustomerTableName, customer, "*");
        this.context.UpdateObject(customer);
        this.context.SaveChanges();
    }

    public void DeleteCustomerById(string id)
    {
        Customer c = this.GetCustomerById(id);
        this.context.DeleteObject(c);
        this.context.SaveChanges();
    }
}

Note the Attach using an ETag (third parameter) of “*” (any). This way, I can update the customer by attaching the “in-memory-created” one to the data context, without retrieving it from the database. This approach is viable if I have all the data of the customer. In most applications you change only some fields, so you should retrieve the object, change it, and then save the changes.

Using the service to retrieve the customers:

CloudStorageAccount storage = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
CustomerServices services = new CustomerServices(new DataContext(storage));
this.grdCustomerList.DataSource = services.GetCustomerList();

Note: it’s a sample application, simple and direct. A real application should separate the view model from the business model and, maybe, use an ASP.NET MVC front end. I will write this example using MVC. In another series (outside Azure), I want to write an app using ASP.NET MVC AND TDD.

Next steps in Azure: a distributed fractal application, a distributed web crawler, and a distributed genetic algorithm, using worker roles.

Keep tuned!

Angel “Java” Lopez

December 13, 2010

Azure: Multithreads in Worker Role, an example

Filed under: .NET, Azure, Cloud Computing, Distributed Computing — ajlopez @ 9:41 am

In my previous post, I implemented a simple worker role, consuming and producing numbers from/to a queue. Now, I have a new app:

The worker role implements the generation of a Collatz sequence. See:

You can download the solution from my AjCodeKatas Google project. The code is at:

The initial page is simple:

The number range is sent to the queue:

protected void btnProcess_Click(object sender, EventArgs e)
{
    int from = Convert.ToInt32(txtFromNumber.Text);
    int to = Convert.ToInt32(txtToNumber.Text);
    for (int k = from; k <= to; k++)
    {
        CloudQueueMessage msg = new CloudQueueMessage(k.ToString());
        queue.AddMessage(msg);
    }
}

The worker role gets each of these messages and calculates the Collatz sequence:

I added a new feature in Azure.Library: a MessageProcessor that can consume messages from a queue, in its own thread:

public MessageProcessor(CloudQueue queue, Func<CloudQueueMessage, bool> process)
{
    this.queue = queue;
    this.process = process;
}

public void Start()
{
    Thread thread = new Thread(new ThreadStart(this.Run));
    thread.Start();
}

public void Run()
{
    while (true)
        try
        {
            CloudQueueMessage msg = this.queue.GetMessage();
            if (this.ProcessMessage(msg))
                this.queue.DeleteMessage(msg);
        }
        catch (Exception ex)
        {
            Trace.WriteLine(ex.Message, "Error");
        }
}

public virtual bool ProcessMessage(CloudQueueMessage msg)
{
    if (msg != null && this.process != null)
        return this.process(msg);
    Trace.WriteLine("Working", "Information");
    return false;
}

Then, the worker role launches a fixed number (12) of MessageProcessors. In this way, each instance is dedicated to processing many messages. I guess that this is not needed in this example, but it was an easy “proof of concept” to test the idea. Part of the Run method in the worker role:

QueueUtilities qutil = new QueueUtilities(account);
CloudQueue queue = qutil.CreateQueueIfNotExists("numbers");
CloudQueueClient qclient = account.CreateCloudQueueClient();

for (int k = 0; k < 11; k++)
{
    CloudQueue q = qclient.GetQueueReference("numbers");
    MessageProcessor p = new MessageProcessor(q, this.ProcessMessage);
    p.Start();
}

MessageProcessor processor = new MessageProcessor(queue, this.ProcessMessage);
processor.Start();

The ProcessMessage method is in charge of the real work:

private bool ProcessMessage(CloudQueueMessage msg)
{
    int number = Convert.ToInt32(msg.AsString);
    List<int> numbers = new List<int>() { number };
    while (number > 1)
    {
        if ((number % 2) == 0)
            number = number / 2;
        else
            number = number * 3 + 1;
        numbers.Add(number);
    }
    StringBuilder builder = new StringBuilder();
    foreach (int n in numbers)
    {
        builder.Append(n);
        builder.Append(" ");
    }
    Trace.WriteLine(builder.ToString(), "Information");
    return true;
}

The code of this example is in my AjCodeKatas Google project.

Next steps: more distributed apps (genetic algorithm, web crawler…)

Keep tuned!

Angel “Java” Lopez

December 9, 2010

Azure: a simple application

Filed under: .NET, Azure, Cloud Computing, Distributed Computing — ajlopez @ 9:19 am

This is my first post here about Azure programming. An easy start: an application with one web role and one worker role:

You can download the solution from my AjCodeKatas Google project. The code is at:

In the initial web page you can enter a number to process:

If you send the number 10, this data is sent to a queue:

protected void btnProcess_Click(object sender, EventArgs e)
{
    int number = Convert.ToInt32(txtNumber.Text);
    CloudQueueMessage msg = new CloudQueueMessage(number.ToString());
    queue.AddMessage(msg);
}

The worker role reads the queue. It decrements the number, and if the result is still positive, it is reinjected into the queue:

public override void Run()
{
    // This is a sample worker implementation. Replace with your logic.
    Trace.WriteLine("NumberWorkerRole entry point called", "Information");
    CloudStorageAccount account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
    QueueUtilities qutil = new QueueUtilities(account);
    CloudQueue queue = qutil.CreateQueueIfNotExists("numbers");
    while (true)
    {
        CloudQueueMessage msg = queue.GetMessage();
        if (msg != null)
        {
            int number = Convert.ToInt32(msg.AsString);
            Trace.WriteLine(string.Format("Processing number: {0}", number), "Information");
            queue.DeleteMessage(msg);
            number--; // decrement; reinject if still positive
            if (number > 0)
            {
                CloudQueueMessage newmsg = new CloudQueueMessage(number.ToString());
                queue.AddMessage(newmsg);
            }
        }
        else
            Trace.WriteLine("Working", "Information");
    }
}

You can see the output in the Development Fabric UI:

Note the use of AzureLibrary to create a Queue:

public CloudQueue CreateQueueIfNotExists(string queuename)
{
    CloudQueueClient queueStorage = this.account.CreateCloudQueueClient();
    CloudQueue queue = queueStorage.GetQueueReference(queuename);
    Trace.WriteLine("Creating queue...", "Information");
    Boolean queuecreated = false;
    while (queuecreated == false)
        try
        {
            queue.CreateIfNotExist();
            queuecreated = true;
        }
        catch (StorageClientException e)
        {
            if (e.ErrorCode == StorageErrorCode.TransportError)
                Trace.TraceError(string.Format("Connect failure! The most likely reason is that the local " +
                    "Development Storage tool is not running or your storage account configuration is incorrect. " +
                    "Message: '{0}'", e.Message));
            else
                throw;
        }
    return queue;
}

I borrowed part of this code from Azure SDK samples.

Next steps to explore:

– Add instrumentation to worker role

– Use more instances, and generate more messages (an explosion-like pattern)

– Add multithreading support in the worker role

– Example using table and blob storage

And the big ones:

– Inject and run AjSharp (or AjTalk) code at worker roles

– Implement a distributed application using roles (distributed genetic algorithm, distributed fractal or ray tracer, Monte Carlo simulation, etc.)

Keep tuned!

Angel “Java” Lopez

October 31, 2009

NHibernate running in the Azure Cloud

Filed under: .NET, Azure, Cloud Computing, NHibernate — ajlopez @ 1:12 pm

Yesterday, I was talking with Fabio Maulo (@fabiomaulo) about many things related to software development, teaching programming and, of course, NHibernate. We both live in Buenos Aires, Argentina, and it was a pleasure to talk with him, as usual. I’m following Fabio on Twitter, and I’m a subscriber of his blog. Fabio has been collaborating with the NHibernate project for years, and he is a recognized developer in the .NET software community.

He told me details about a site built using NHibernate, and running on SQL Azure. You can see it online (Spanish content, Mexican site):

Fabio and his team worked hard to write this site, in less than a month (I’m waiting for the team’s posts, with more detailed info, so I’ll write only about the public parts here).

Curiously, the site runs using WebForms, but without ViewState, and without form tags embracing the full body inner HTML. We are all waiting for Maulo and his team to explain the implementation details. The code is based on Model View Presenter, and it was built using tests, mocks and stubs, from presentation to persistence. Hey, Fabio! Please, write about the process and architecture decisions! 🙂

More info about NHibernate and Azure:

NHibernate on the cloud: SQL Azure Ayende NHibernate test results on Azure

Quick news NHibernate with SQL Azure Fabio’s first steps “All work… even the SchemaExport.” !!

NHibernate dialect for SQL Azure Adjustments for SchemaExport

I’m collecting links about NHibernate and Azure at:

There is an excellent post series from Brad Abrams, explaining Azure, SQL Azure, NHibernate, Silverlight, .NET RIA Services, and more:

Index for Business Apps Example for Silverlight 3 RTM and .NET RIA Services July Update

Related to NHibernate and Azure, in that series:

Part 20: NHibernate
Part 23: Azure

Any other project using NHibernate on the cloud?

Angel “Java” Lopez
