Angel "Java" Lopez on Blog

February 17, 2009

Computer Go and Windows HPC Server

Filed under: Artificial Intelligence, High Performance Computing — ajlopez @ 8:10 am

Last year, at PDC 2008, the Windows HPC Server team presented a cluster of computers playing the game of Go. This video shows the gorgeous Surface interface:

(If the game of Go is new to you, visit:

http://www.gobase.org

There is a lot of information, including the rules of the game. There is a section dedicated only to Computer Go:

http://gobase.org/information/computers/

)

David Fotland is the author of the program in the video. David is a renowned computer Go developer. There is an email from David explaining his program and his work with Windows HPC Server:

http://computer-go.org/pipermail/computer-go/2008-November/017025.html

(That is THE mailing list to follow if you want to learn more about the computer Go problem.) David implemented a Monte Carlo approach, using MPI and the Windows HPC cluster.

ManyFacesOfGo won the computer world championship last year, running on a Windows HPC Server cluster (competition results). Note: the second-place program also ran on a cluster. There is more info about the (commercial) ManyFacesOfGo program at:

http://www.smart-games.com/

2008 was a year full of surprises in the computer Go arena. For now, programs can't beat a professional or strong amateur human player, but the odds are changing. You can read:

Latest Advance in Artificial Intelligence: Computer Wins a Game Against a Go Master

and the Wikipedia page on Computer Go:

http://en.wikipedia.org/wiki/Computer_Go

After decades of poor results, computer programs are beginning to beat strong human players, but there is still a lot of room for improvement. The complexity of the game precludes pure brute-force methods: I guess the solution will be a mixture of brute force, clustering, Monte Carlo, and more classical planning methods.
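To illustrate the Monte Carlo part, here is a minimal sketch of Monte Carlo move selection in C#. The game model is a placeholder (the RandomPlayoutWins method just returns dummy results); it is not Go code and not David's program, only the shape of the technique:

using System;

class MonteCarloSketch
{
    static Random random = new Random();

    // Placeholder for a real playout: play random moves to the end of the
    // game and report whether the move under evaluation led to a win.
    static bool RandomPlayoutWins(int move)
    {
        return random.NextDouble() < 0.40 + 0.01 * move; // dummy win rates
    }

    static void Main()
    {
        const int candidateMoves = 10;
        const int playoutsPerMove = 10000;
        int bestMove = -1;
        double bestRate = -1.0;

        for (int move = 0; move < candidateMoves; move++)
        {
            int wins = 0;

            for (int playout = 0; playout < playoutsPerMove; playout++)
                if (RandomPlayoutWins(move))
                    wins++;

            // Estimate the win rate of this move and keep the best one
            double rate = (double)wins / playoutsPerMove;

            if (rate > bestRate)
            {
                bestRate = rate;
                bestMove = move;
            }
        }

        Console.WriteLine("Best move: {0} (estimated win rate {1:P1})", bestMove, bestRate);
    }
}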

I have my own program framework, AjGo, to explore algorithms that can be used in this fascinating game, the “hard problem” of AI board games. This is a screenshot of the main form:

Spanish posts explaining the program:

AjGo- hacia un programa que juegue al go
Computer Go y el programa AjGo

I keep a collection of links about Computer Go at delicious and at my personal site:

http://delicious.com/ajlopez/computergo
Computer Go links

Angel “Java” Lopez
http://www.ajlopez.com/en
http://twitter.com/ajlopez

December 26, 2008

Fractals using MPI.NET and HPC

Filed under: .NET, C Sharp, High Performance Computing — ajlopez @ 5:50 am

I updated my fractal example to support MPI.NET (Message Passing Interface with .NET) and parametric tasks in Windows HPC Server 2008. The example can be downloaded from my ajcodekatas Google Code project:

http://code.google.com/p/ajcodekatas/source/browse/#svn/trunk/FractalExample

There are two solutions. Fractal.sln contains:

The Fractal.Console project is a console application that takes parameters from the command line. It uses those parameters to generate a serialized sector of the fractal, writing it to a file:

static void Main(string[] args)
{
    if (args[4].Equals("*"))
        args[4] = "0";

    SectorInfo sectorinfo = new SectorInfo()
    {
        RealMinimum = Convert.ToDouble(args[0]),
        ImgMinimum = Convert.ToDouble(args[1]),
        Delta = Convert.ToDouble(args[2]),
        FromX = Convert.ToInt32(args[3]),
        FromY = Convert.ToInt32(args[4]),
        Width = Convert.ToInt32(args[5]),
        Height = Convert.ToInt32(args[6]),
        MaxIterations = Convert.ToInt32(args[7]),
        MaxValue = Convert.ToInt32(args[8])
    };

    Calculator calculator = new Calculator();
    Sector sector = calculator.CalculateSector(sectorinfo);

    SectorSerializer serializer = new SectorSerializer();
    string filename = string.Format("{0}-{1}-{2}-{3}-{4}.bin",
        args[9], sectorinfo.FromX, sectorinfo.FromY, sectorinfo.Width, sectorinfo.Height);
    serializer.Serialize(sector, filename);
}

You can run the project in Debug mode, with parameters:
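For example (hypothetical values, chosen to match the cluster command shown later in this post): 0.3 0.3 0.01 0 0 1000 100 2000 4 sector. A run with those parameters writes sector-0-0-1000-100.bin in the working directory, following the name-FromX-FromY-Width-Height.bin pattern of the code above.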

Launch the new Fractal.GUIFiles project. It's a WinForms application, with a new Read button:

Click the Read button and load the generated sector file located in the Fractal.Console bin\Debug directory:

This is the result:

Creating the sectors with HPC

The console application can be used in a cluster. Suppose the application is installed in c:\FractalConsole on each node of the cluster, the head node is named HEAD-NODE, and the head node has a shared directory named \shared. Then, we can submit a parametric job:

job submit /parametric:0-500:100 c:\FractalConsole\Fractal.Console.exe 0.3 0.3 0.01 0 * 1000 100 2000 4 \\HEAD-NODE\shared\sector

This command submits a parametric job to the cluster. The asterisk in the parameter list will be replaced by the values 0, 100, 200, 300, 400 and 500 (this is the Y coordinate of the top-left point of the sector). Each execution produces a file with a serialized sector in the shared directory, which you can read and display using the Fractal.GUIFiles app.
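For example, with the command above the step 0 instance writes \\HEAD-NODE\shared\sector-0-0-1000-100.bin, the step 100 instance writes \\HEAD-NODE\shared\sector-0-100-1000-100.bin, and so on, up to step 500.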

Using MPI.NET

There is a second solution, Fractal.MPI:

This code uses MPI (Message Passing Interface). Rank 0 receives a sector, and then the sector is partitioned among all running ranks. Each instance writes a file representing a subsector of the original sector.
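The partitioning code is in the download; as a rough sketch of the idea (the band arithmetic below is my illustration, not necessarily how Fractal.MPI splits the sector):

using System;
using MPI;

class FractalMpiSketch
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            Intracommunicator comm = Communicator.world;

            int height = 1000;                    // full sector height (taken from the command line in the real code)
            int bandHeight = height / comm.Size;  // each rank computes one horizontal band
            int fromY = comm.Rank * bandHeight;

            Console.WriteLine("Rank {0} of {1} computes rows {2} to {3}",
                comm.Rank, comm.Size, fromY, fromY + bandHeight - 1);
            // ... each rank would calculate its subsector and serialize it to its own file ...
        }
    }
}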

To compile and run this example I installed the HPC Pack I downloaded from:

HPC Pack 2008 SDK download

and then, I installed MPI.NET Software

(I installed MPI.NET SDK.msi but I expanded MPI.NET-1.0.0.zip too: it has better examples, with VS solutions)

(note: if you want to run under XP Pro, you must download the previous version of the HPC SDK:
Microsoft Compute Cluster Pack SDK

The new SDK has an issue with XP. More info at:

http://social.microsoft.com/Forums/en-US/windowshpcdevs/thread/19deb181-15c2-40be-bb5e-2d4604b984a4
http://www.pluralsight.com/community/blogs/drjoe/archive/2008/10/10/32-bit-sdk-for-hpc-server-2008-fails-quot-the-procedure-entry-point-getprocessidofthread-could-not-be-located-quot.aspx
)

You can run the program using the mpiexec utility, which launches many instances of the same program:

mpiexec -n 10 Fractal.Mpi.Exe 0 0 0.01 0 0 500 1000 2000 4 sector

The sectors will be produced by ten instances, and you can read and display them using Fractal.GUIFiles.

You can run the above command in an HPC cluster, using:

job submit /numnodes=10 mpiexec c:\FractalMpi\Fractal.Mpi.Exe 0 0 0.01 0 0 500 1000 2000 4 \\HEAD-NODE\shared\sector

(assuming you have deployed the application on each node, inside the c:\FractalMpi folder)

There is a more complete example at:

Learning Parallel Programming — from shared-memory multi-threading to distributed-memory multi-processing

Angel “Java” Lopez
http://www.ajlopez.com/en
http://twitter.com/ajlopez

December 7, 2008

First steps with MPI.NET programming

Filed under: .NET, High Performance Computing — ajlopez @ 9:10 am

These days, I'm exploring MPI programming, using MPI.NET, a .NET wrapper over Microsoft MPI that can be used on Windows HPC Server 2008.

MPI stands for Message Passing Interface, an API that supports the writing of parallel programs. An MPI application runs as many instances, called ranks, and each instance can send messages to and receive messages from the others. The API can be consumed from languages like C or Fortran. MPI.NET is a wrapper that eases the writing of MPI programs in .NET.

You don't need a cluster to run an MPI executable. Each program can be tested locally, launching many ranks as processes on your local machine.

A few days ago, I wrote some sample code to test my understanding of MPI.NET. You can download the source from my SkyDrive in MpiNetFirstExamples.zip.

If you want to try another way, months ago I posted about another .NET implementation:

MPI Message Passing Interface in .NET

MPI

MPI (Message Passing Interface) is supported by Windows HPC. There is a Microsoft implementation:

Microsoft MPI (Windows)

that can be invoked from C++.

There is a .NET implementation over Microsoft MPI:

MPI.NET: High-Performance C# Library for Message Passing

It has source code and examples.

For these examples, I installed the HPC Pack I downloaded from:

HPC Pack 2008 SDK download

and then, I installed MPI.NET Software

(I installed MPI.NET SDK.msi but I expanded MPI.NET-1.0.0.zip too: it has better examples, with VS solutions)

When you install HPC Pack 2008 SDK, you get new programs:

And for MPI.NET:

If you expand the additional MPI.NET-1.0.0.zip you get a folder with more examples and documentation:

More about MPI in general:

MPI 2.0 Report
MPI Tutorials
Microsoft Message Passing Interface – Wikipedia, the free encyclopedia
Pure Mpi.NET

Hello World

As usual, a “Hello, World” MPI application is the first app to try. My solution looks like this:

The program.cs source is simple:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace MpiNetHelloWorld
{
    class Program
    {
        static void Main(string[] args)
        {
            using (new MPI.Environment(ref args))
            {
                Console.WriteLine("I'm {0} of {1}",
                    MPI.Communicator.world.Rank, MPI.Communicator.world.Size);
            }
        }
    }
}

Note the use of ref args in the initialization of MPI.Environment. MPI receives additional dedicated arguments, so it has to process them and remove them from the rest of the arguments.

You can run it alone, obtaining:
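I'm 0 of 1

(with a single instance, Rank is 0 and Size is 1)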

Not very impressive…. 😉

You can invoke it from the command line using mpiexec:

mpiexec -n 8 MpiNetHelloWorld.exe

then the output is:
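I'm 3 of 8
I'm 0 of 8
I'm 7 of 8
...

(one line per rank; the order is nondeterministic, so a real run may differ)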

There are 8 ranks (instances) running on the same machine. If you have a cluster with MPI support (such as Windows HPC Server 2008), you could run the program on all the nodes of the cluster.

Ringing the nodes

In the previous example, no communication between nodes occurred. A classic example is to send messages in a ring, from rank 0 to 1, to 2, and at the end, back to 0. This is my solution:

The program.cs code is:

class Program
{
    static void Main(string[] args)
    {
        using (MPI.Environment environment = new MPI.Environment(ref args))
        {
            Intracommunicator comm = MPI.Communicator.world;

            if (comm.Size < 2)
            {
                Console.WriteLine("At least two processes are needed");
                return;
            }

            Console.WriteLine("I'm {0} of {1}", MPI.Communicator.world.Rank, MPI.Communicator.world.Size);

            if (comm.Rank == 0) // It's the root
            {
                string sendmessage = string.Format("Hello from {0} to {1}", comm.Rank, comm.Rank + 1);
                comm.Send(sendmessage, comm.Rank + 1, 0);

                string recmessage;
                comm.Receive<string>(comm.Size - 1, 0, out recmessage);
                Console.WriteLine("Received: {0}", recmessage);
            }
            else
            {
                string recmessage;
                comm.Receive<string>(comm.Rank - 1, 0, out recmessage);
                Console.WriteLine("Received: {0}", recmessage);

                string sendmessage = string.Format("Hello from {0} to {1}", comm.Rank, (comm.Rank + 1) % comm.Size);
                comm.Send(sendmessage, (comm.Rank + 1) % comm.Size, 0);
            }
        }
    }
}
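If you run it with, say, mpiexec -n 4 (the executable name depends on the project name in the solution), each rank prints the greeting received from its predecessor, and rank 0 finally prints the greeting sent by rank 3, closing the ring.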

Scattering messages

There is another example (MpiNetScatter solution), where an array of integers is scattered to all ranks from rank 0:

class Program
{
    static void Main(string[] args)
    {
        using (MPI.Environment environment = new MPI.Environment(ref args))
        {
            Intracommunicator comm = Communicator.world;

            if (comm.Rank == 0)
            {
                int[] numbers = new int[comm.Size];

                for (int k = 0; k < numbers.Length; k++)
                    numbers[k] = k * k;

                int r = comm.Scatter(numbers);
                Console.WriteLine("Received {0} at {1}", r, comm.Rank);
            }
            else
            {
                int r = comm.Scatter<int>(0);
                Console.WriteLine("Received {0} at {1}", r, comm.Rank);
            }
        }
    }
}
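Running it with four ranks, for example mpiexec -n 4 MpiNetScatter.exe (I'm assuming the executable name from the solution name), rank 0 builds and scatters the array {0, 1, 4, 9}: rank 0 prints 0, rank 1 prints 1, rank 2 prints 4, and rank 3 prints 9.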

Threads and MPI

We can improve the ring example using new features from MPI-2, supported by the Microsoft implementation: sending and receiving messages from multiple threads. The solution is MpiNetMultiThreadRing. The code:

class Program
{
    static void Main(string[] args)
    {
        using (MPI.Environment environment = new MPI.Environment(ref args))
        {
            Intracommunicator comm = MPI.Communicator.world;

            if (comm.Size < 2)
            {
                Console.WriteLine("At least two processes are needed");
                return;
            }

            MultiComm multicomm = new MultiComm(MPI.Communicator.world);
            Thread thread = new Thread(new ThreadStart(multicomm.Run));
            thread.Start();

            MultiComm multicomm2 = new MultiComm(MPI.Communicator.world);
            Thread thread2 = new Thread(new ThreadStart(multicomm2.Run));
            thread2.Start();

            thread.Join();
            thread2.Join();
        }
    }
}

I wrote a helper class, MultiComm, that has methods to send and receive messages. It uses a lock: the MPI implementation doesn't support the use of MPI commands from more than one thread simultaneously, so I have to synchronize the methods that access MPI from different threads. It's a shame, but it is what is supported.
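The real MultiComm is in the downloadable solution; this is a hypothetical sketch of the pattern, showing only the lock discipline (the ring arithmetic mirrors the earlier example):

using System;
using MPI;

class MultiComm
{
    private Intracommunicator comm;
    private object mpiLock = new object();

    public MultiComm(Intracommunicator comm)
    {
        this.comm = comm;
    }

    // Thread entry point: send a greeting to the next rank, then receive
    // one from the previous rank.
    public void Run()
    {
        int next = (this.comm.Rank + 1) % this.comm.Size;
        int previous = (this.comm.Rank + this.comm.Size - 1) % this.comm.Size;

        this.Send(string.Format("Hello from {0}", this.comm.Rank), next);
        Console.WriteLine("Received: {0}", this.Receive(previous));
    }

    private void Send(string message, int destination)
    {
        lock (this.mpiLock) // only one thread at a time may call into MPI
            this.comm.Send(message, destination, 0);
    }

    private string Receive(int source)
    {
        string message;

        lock (this.mpiLock) // only one thread at a time may call into MPI
            this.comm.Receive<string>(source, 0, out message);

        return message;
    }
}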

Conclusion

MPI implies a new way of thinking about applications. There is no easy path to MPIfying an algorithm or application. I should play with asynchronous message passing: in the above examples, when an instance sends a message, the other party should be listening to receive it. Despite its idiosyncrasies, MPI is an interesting field to explore, with a wide community and interesting applications.
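A first step in that direction could be non-blocking sends. Here is a sketch of what I mean, assuming MPI.NET's ImmediateSend method, which returns a Request that can be waited on later:

using System;
using MPI;

class AsyncRingSketch
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            Intracommunicator comm = Communicator.world;
            int next = (comm.Rank + 1) % comm.Size;
            int previous = (comm.Rank + comm.Size - 1) % comm.Size;

            // Start the send without blocking...
            Request request = comm.ImmediateSend(
                string.Format("Hello from {0}", comm.Rank), next, 0);

            // ...so this rank can already listen for the incoming message.
            string message;
            comm.Receive<string>(previous, 0, out message);

            request.Wait(); // make sure our own send completed

            Console.WriteLine("Rank {0} received: {1}", comm.Rank, message);
        }
    }
}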

Angel “Java” Lopez
http://www.ajlopez.com/en
http://twitter.com/ajlopez

December 4, 2008

Augmented Reality with Windows HPC Server

Filed under: Grid Computing, High Performance Computing — ajlopez @ 7:09 am

These days, my team is working with Windows High Performance Computing Server 2008. During my research on HPC, I found this demo (via a Twitter search about HPC):

This work is from the people of the High Performance Computing Center Stuttgart (HLRS).

Augmented reality is a kind of virtual reality that combines real images with virtual ones. You can use a transparent headset and see a 3D schematic of an engine while you are repairing it. The group at HLRS is working with the Microsoft Technical Computing Initiative on such scenarios; more info at Augmented Reality in the automotive industry.

There are photos of their installation at

Microsoft HPC Institute – HLRS – University of Stuttgart

It’s like my own hardware at home…. 🙂

I found additional videos at

Augmented Reality mit Windows HPC

More videos about HPC and MPI debugging at

HLRS

More information about Augmented Reality

What Is the Metaverse and Should HPC Care?
Augmented reality – Wikipedia, the free encyclopedia
Mixed reality – Wikipedia, the free encyclopedia

International Symposium on Mixed and Augmented Reality (ISMAR)

http://www.augmented.org/
How Augmented Reality Will Work

I want my personal Holodeck!

Angel “Java” Lopez
http://www.ajlopez.com/en
http://twitter.com/ajlopez

November 4, 2008

Windows High Performance Computing (HPC) and Programming Resources

Filed under: .NET, Grid Computing, High Performance Computing — ajlopez @ 9:48 am

Since last year, I have been researching distributed and grid computing, and I found many useful resources and information (links at the end of this post). One of the topics I found is Microsoft's implementation of High Performance Computing (HPC). This post is a list of resources I think are relevant to the topic.

First, the page of Windows HPC Server 2008:

http://www.microsoft.com/hpc

The first video to watch is the PDC 2008 session from last week:

HPC Session at last PDC
http://channel9.msdn.com/pdc2008/ES13/

It's an excellent presentation, covering the new Windows HPC Server 2008: nodes, tasks and jobs, management tools, programming options, MPI and MPI.NET programming, computer Go on HPC (a beautiful idea); the whole presentation deserves a dedicated post.

I liked a short but interesting video showing the management console, at:

http://channel9.msdn.com/shows/The+HPC+Show/Five-Minute-Intro-to-the-HPC-Server-2008-Management-Console/

THE blog to read is Windows HPC survival guide

An example post: No scientist left behind with CRAY Supercomputer running Windows HPC Server 2008

They collected a set of resources at HPC Resource Kit

All the videos related to HPC at:

HPC | Tags | Channel 9

(interesting topics: WCF and HPC programming, HPC Basic Profile: open web services you can invoke from Java and other languages)

There is a community site dedicated to Windows HPC:

http://www.windowshpc.net/

with files, resources, source code and examples. 

Software to use

To start writing software for HPC, install Microsoft HPC Pack (Windows). I downloaded it from:

HPC Pack 2008 SDK download

and then, install MPI.NET Software

(I installed MPI.NET SDK.msi but I expanded MPI.NET-1.0.0.zip: it has better examples, with VS solutions)

You don’t need the HPC server to run these examples.

An excellent tutorial, implementing a fractal application using HPC 2008 at:

Learning Parallel Programming — from shared-memory multi-threading to distributed-memory multi-processing

Additional Links

If you want to explore the HPC programming possibilities, these are the topics to research:

HPC

http://www.hpccommunity.org/ HPC Community
http://www.hpcwire.com/ High Productivity Computing
http://www.ddj.com/hpc-high-performance-computing/
YouTube – An Overview of High Performance Computing and Challenges for the Future
http://en.wikipedia.org/wiki/High-performance_computing

MPI

MPI (Message Passing Interface) is supported by Windows HPC. There is a Microsoft implementation:

Microsoft MPI (Windows)

that can be invoked from C++.

There is a .NET implementation over Microsoft MPI:

MPI.NET: High-Performance C# Library for Message Passing

It has source code and examples.

(An old .NET wrapper, a CodePlex project: MPI .Net – Home)

I posted about another .NET implementation:

MPI Message Passing Interface in .NET

More about MPI

MPI 2.0 Report
MPI Tutorials
Microsoft Message Passing Interface – Wikipedia, the free encyclopedia
Pure Mpi.NET

Parallel Programming

Introduction to Parallel Computing: a very complete resource (thanks to jgarcia)
Microsoft Innovation Day – November 5, 2006: they presented something related to DryadLINQ
Multithreading and Concurrency in .NET: a very complete list of technologies available in .NET
http://www.microsoft.com/ccrdss Now CCR/DSS is a separate package (formerly in Microsoft Robotics)
Adobe Press – 9780321603944 – Software Pipelines: The Key to Capitalizing on the Multi-core Revolution
Burton Smith: On General Purpose Super Computing and the History and Future of Parallelism | Going Deep | Channel 9
Welcome to Hadoop!
Dryad – Home: a Microsoft Research project
YouTube – Dryad: A general-purpose distributed execution platform: a presentation at Google Talks
Concurrency: What Every Dev Must Know About Multithreaded Apps
Overview of concurrency in .NET Framework 3.5 | Igor Ostrovsky Blogging
Parallel Programming with .NET
Parallel Computing Developer Center from Microsoft
Parallel Virtual Machine – Wikipedia, the free encyclopedia
http://msdn.microsoft.com/msdnmag/issues/07/10/PLINQ/default.aspx Parallel LINQ

Map Reduce

Writing An Hadoop MapReduce Program In Python
Dryad: Distributed Data-Parallel Programs from Sequential Building Blocks
Google Research Publication: MapReduce

Delicious

My delicious links about HPC, MPI, Parallel programming, Grid Computing, Map Reduce algorithms, CCR/DSS:

http://delicious.com/ajlopez/hpc
http://delicious.com/ajlopez/mpi
http://delicious.com/ajlopez/parallel
http://delicious.com/ajlopez/gridcomputing
http://delicious.com/ajlopez/mapreduce
http://delicious.com/ajlopez/ccr
http://delicious.com/ajlopez/dss

Computer Go is a fascinating topic:

http://delicious.com/ajlopez/computergo

Angel “Java” Lopez
http://www.ajlopez.com/en
http://twitter.com/ajlopez
