Message Passing Interface, CCR, DSS, and Pure MPI.NET

Recently, during my research about grid computing, Microsoft Robotics Studio, DSS and CCR, I found a very interesting paper:

High Performance Multi-Paradigm Messaging Runtime Integrating Grids and Multicore Systems

The authors are Xiaohong Qiu, Geoffrey C. Fox, Huapeng Yuan and Seung-Hee Bae, from Indiana University Bloomington, and George Chrysanthakopoulos and Henrik Frystyk Nielsen, from Microsoft Research. Nielsen and Chrysanthakopoulos are the “creators” of the Concurrency and Coordination Runtime (CCR) and Decentralized Software Services (DSS), pillar technologies of Microsoft Robotics Studio that can be used beyond robotics. More on these technologies at:

The paper abstract is:

eScience applications need to use distributed Grid environments where each component is an individual or cluster of multicore machines. These are expected to have 64-128 cores 5 years from now and need to support scalable parallelism. Users will want to compose heterogeneous components into single jobs and run seamlessly in both distributed fashion and on a future “Grid on a chip” with different subsets of cores supporting individual components. We support this with a simple programming model made up of two layers supporting traditional parallel and Grid programming paradigms (workflow) respectively. We examine for a parallel clustering application, the Concurrency and Coordination Runtime CCR from Microsoft as a multi-paradigm runtime that integrates the two layers. Our work uses managed code (C#) and for AMD and Intel processors shows around a factor of 5 better performance than Java. CCR has MPI pattern and dynamic threading latencies of a few microseconds that are competitive with the performance of standard MPI for C.

What is MPI? The acronym refers to Message Passing Interface. According to Wikipedia:

Message Passing Interface (MPI) is both a computer specification and an implementation that allows many computers to communicate with one another. It is used in computer clusters.

There is a Microsoft Implementation:

Microsoft Message Passing Interface (MS MPI) is an implementation of the MPI2 specification by Microsoft for use in Windows Compute Cluster Server to interconnect and communicate (via messages) between high-performance computing nodes. It is mostly compatible with the MPICH2 reference implementation, with some exceptions for job launch and management. MS MPI includes bindings for the C and FORTRAN languages. It supports the use of Microsoft Visual Studio for debugging purposes.

Oh! FORTRAN….. Those good old days! ;-). I remember working with Gregory Chaitin’s implementation of Lisp in FORTRAN, last century. But no going back to the past; paraphrasing David Hilbert: out of this paradise that Java and .NET have created, nobody will expel us…. ;-). You can read the original quote in this interesting thread.

But I digress. Back to the topic.

The main sites about MPI are:

I was thinking of implementing some MPI ideas with .NET or Java, when I visited this site:

PureMpi.NET is a completely managed implementation of the message passing interface. The object-oriented API is simple, and easy to use for parallel programming. It has been developed based on the latest .NET technologies, including Windows Communication Foundation (WCF). This allows you to declaratively specify the binding and endpoint configuration for your environment, and performance needs. When using the SDK, a programmer will definitely see the MPI’ness of the interfaces come through, and will enjoy taking full advantage of .NET features – including generics, delegates, asynchronous results, exception handling, and extensibility points.

PureMpi.NET allows you to create high performance, production quality parallel systems, with all the benefits of .NET

It is an implementation that you can download and use with VS2005 or VS2008. It uses generics to implement typed channels in MPI.
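Because the channels are generic, the same calls can carry any serializable type, not just strings. Here is a sketch, reusing the BeginSend/Receive/EndSend calls that appear in the program below, with a double[] payload as my own example (custom types would probably have to be declared in the dataContractSerializer section of the configuration file):

ProcessorGroup.Process("MPIEnvironment", delegate(IDictionary<string, Comm> comms)
{
    Comm comm = comms["MPI_COMM_WORLD"];

    // Each rank sends a typed array to rank 0; the payload type
    // is fixed at compile time by the generic parameter
    double[] data = { comm.Rank, comm.Rank * 2.0 };
    IAsyncResult result = comm.BeginSend<double[]>(0, "",
        data, TimeSpan.FromSeconds(30), null, null);

    // Rank 0 gathers one array from each rank
    if (comm.Rank == 0)
        for (int i = 0; i < comm.Size; i++)
        {
            double[] received = comm.Receive<double[]>(i,
                Constants.AnyTag, TimeSpan.FromSeconds(30));
            Console.WriteLine("From rank " + i + ": " + received.Length + " values");
        }

    comm.EndSend<double[]>(result);
});

A mistaken type at the receiving end becomes a compile-time or deserialization error, instead of the silent buffer misinterpretation possible with the classic C API.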

I downloaded the library, and installed it on a machine with Visual Studio 2008. The installation program added a new project template, Mpi.NET:

I created a project that looks like this:

I modified Program.cs to:


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Mpi;

namespace Mpi.NET1
{
    class Program
    {
        static void Main(string[] args)
        {
            ProcessorGroup.Process("MPIEnvironment",
                delegate(IDictionary<string, Comm> comms)
                {
                    Comm comm = comms["MPI_COMM_WORLD"];
                    Console.WriteLine(comm.Rank);

                    // Every rank sends its rank number to rank 0
                    IAsyncResult result = comm.BeginSend<string>(0, "",
                        "Rank: " + comm.Rank,
                        TimeSpan.FromSeconds(30), null, null);

                    // Rank 0 receives one message from each rank
                    if (comm.Rank == 0)
                    {
                        for (int i = 0; i < comm.Size; i++)
                        {
                            string receivedMsg = comm.Receive<string>(i,
                                Constants.AnyTag, TimeSpan.FromSeconds(30));
                            Console.WriteLine(receivedMsg);
                        }
                    }

                    comm.EndSend<string>(result);
                });
        }
    }
}

The ProcessorGroup class is in charge of the processes to run. Note the use of a delegate to specify the process body. An MPI process receives a dictionary of Comm objects, the channels it uses to communicate with other MPI processes.
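As a variation, the processor body can be a named method instead of an anonymous delegate, as long as it matches the Processor delegate signature. A sketch (EchoRank is my own name; ProcessorGroup, Comm and "MPI_COMM_WORLD" come from the listings in this post):

using System;
using System.Collections.Generic;
using Mpi;

class RankEcho
{
    // Hypothetical named processor: just prints this process' rank
    static void EchoRank(IDictionary<string, Comm> comms)
    {
        Comm comm = comms["MPI_COMM_WORLD"];
        Console.WriteLine("I am rank " + comm.Rank + " of " + comm.Size);
    }

    static void Main()
    {
        // Same environment name as in App.config
        ProcessorGroup.Process("MPIEnvironment", EchoRank);
    }
}

This keeps Main short and lets you unit-test the processor body separately.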

The ProcessorGroup class has this structure (according to the metadata info):


namespace Mpi
{
    public class ProcessorGroup : IDisposable
    {
        public ProcessorGroup(Environment environment, Processor processor);
        public ProcessorGroup(string environment, Processor processor);

        public Environment Environment { get; }
        public ICollection<IAsyncResult> Results { get; }

        public void Dispose();
        protected virtual void Dispose(bool disposing);

        public static void Process(string environmentConfigName, Processor processor);
        public void Start();
        public void WaitForCompletion();
    }
}
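Judging from that metadata, the static Process method looks like a convenience over creating the group yourself. Something like this sketch should be roughly equivalent (my assumption: Start launches the configured processors and WaitForCompletion blocks until they all finish):

using System;
using System.Collections.Generic;
using Mpi;

class ExplicitGroup
{
    static void Main()
    {
        // Explicitly managing the group instead of calling ProcessorGroup.Process;
        // using ensures Dispose is called when the group is done
        using (ProcessorGroup group = new ProcessorGroup("MPIEnvironment",
            delegate(IDictionary<string, Comm> comms)
            {
                Comm comm = comms["MPI_COMM_WORLD"];
                Console.WriteLine("Rank " + comm.Rank + " running");
            }))
        {
            group.Start();             // launch the configured processors
            group.WaitForCompletion(); // block until all of them finish
        }
    }
}

The explicit form also exposes the Results collection of IAsyncResult, which the static helper hides.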

The number and configuration of processors can be defined in the App.config file:


<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="Mpi" type="Mpi.ConfigurationSection, Mpi"/>
  </configSections>
  <Mpi>
    <Environments>
      <Environment name="MPIEnvironment">
        <Hosts>
          <Host comms="MPI_COMM_WORLD" client="MpiClient1" service="MpiService1" />
          <Host comms="MPI_COMM_WORLD" client="MpiClient2" service="MpiService2" />
          <Host comms="MPI_COMM_WORLD" client="MpiClient3" service="MpiService3" />
        </Hosts>
      </Environment>
    </Environments>
  </Mpi>
  <system.serviceModel>
    <client>
      <endpoint address="net.tcp://localhost:8080/MpiService" binding="netTcpBinding"
          bindingConfiguration="" contract="Mpi.IMpiService" name="MpiClient1">
        <identity>
          <userPrincipalName value="" />
        </identity>
      </endpoint>
      <endpoint address="net.tcp://localhost:8081/MpiService" binding="netTcpBinding"
          bindingConfiguration="" contract="Mpi.IMpiService" name="MpiClient2">
        <identity>
          <userPrincipalName value="" />
        </identity>
      </endpoint>
      <endpoint address="net.tcp://localhost:8082/MpiService" binding="netTcpBinding"
          bindingConfiguration="" contract="Mpi.IMpiService" name="MpiClient3">
        <identity>
          <userPrincipalName value="" />
        </identity>
      </endpoint>
    </client>
    <behaviors>
      <serviceBehaviors>
        <behavior name="MpiServiceBehavior">
          <serviceDebug httpHelpPageEnabled="false" httpsHelpPageEnabled="false"
              includeExceptionDetailInFaults="true" />
        </behavior>
      </serviceBehaviors>
    </behaviors>
    <services>
      <service behaviorConfiguration="MpiServiceBehavior" name="MpiService1">
        <endpoint address="net.tcp://localhost:8080/MpiService" binding="netTcpBinding"
            bindingConfiguration="" name="MpiServiceEndpoint" contract="Mpi.IMpiService" />
      </service>
      <service behaviorConfiguration="MpiServiceBehavior" name="MpiService2">
        <endpoint address="net.tcp://localhost:8081/MpiService" binding="netTcpBinding"
            bindingConfiguration="" name="MpiServiceEndpoint" contract="Mpi.IMpiService" />
      </service>
      <service behaviorConfiguration="MpiServiceBehavior" name="MpiService3">
        <endpoint address="net.tcp://localhost:8082/MpiService" binding="netTcpBinding"
            bindingConfiguration="" name="MpiServiceEndpoint" contract="Mpi.IMpiService" />
      </service>
    </services>
  </system.serviceModel>
  <system.runtime.serialization>
    <dataContractSerializer>
      <declaredTypes>
      </declaredTypes>
    </dataContractSerializer>
  </system.runtime.serialization>
</configuration>

Oh! They use <host..>…  This reminds me of AjMessages… 😉

Running the program produces:

Well, it’s not a great program, I must admit, but it’s my first MPI program. There are 3 “ranks”, according to the config file above.

You’ll find many running examples included with the Pure MPI.NET distribution. For me, it’s an interesting implementation of MPI ideas, with twists adapted from the .NET world: generics and delegates are welcome.

Grid and MPI? Maybe. I must study the references mentioned in the cited paper. Although the paper is dedicated to high-performance issues, it has a good conceptual discussion of the execution model and its relations with MPI, CCR and DSS.

Angel “Java” Lopez
