From: Eray Ozkural <examachine@g...>
Subject: Re: [Caml-list] Looking for information regarding use of OCaml in scientific computing and simulation
On Tue, Dec 22, 2009 at 3:11 PM, Eray Ozkural <examachine@gmail.com> wrote:
> However, let's not forget about the new GPU architectures, which are
> sort of hybrid. The newer GPUs will have more exotic on-chip
> interconnection networks.

To clarify further: as the number of cores increases, you would
expect more of a MIMD architecture, rather than the SMP-like shared
memory+cache, or the SIMD that some architectures were based on (like
the Cell processor). Or a hybrid one, who knows? What is certain at
the moment is that the architectures are getting *more* complex to
program and to optimize for.

I don't think we can neglect the use of a shared memory space. Among
other things, with multiple cores, it allows us to directly implement
the PRAM algorithms that are prevalent in the parallel computing
literature. On the other hand, most existing parallel applications
assume the traditional cluster architecture. I don't think we can say
at the moment that this or that programming *paradigm* is the best,
but I think it's high time we experimented with higher-level
constructs that fit functional languages. Sure, you can have
parallelism with just a multi-threading construct and an atomicity
construct, but I think there is much more to parallelism than that
for a high-level language. That is the stuff of parallel assembly; it
would be the sort of code that a compiler generates, perhaps.
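As a rough sketch of what such a higher-level construct might look
like (an illustration only, using the Domain module from OCaml 5,
which postdates this discussion), here is a naive parallel map where
each element is evaluated in its own domain:

```ocaml
(* Naive parallel map: spawn one domain per element, then join the
   results in order. A real implementation would chunk the input and
   cap the number of domains at the core count; this is only a sketch
   of the programming interface, not an efficient scheduler. *)
let par_map f xs =
  xs
  |> List.map (fun x -> Domain.spawn (fun () -> f x))
  |> List.map Domain.join

let () =
  let squares = par_map (fun x -> x * x) [1; 2; 3; 4] in
  List.iter (fun n -> Printf.printf "%d " n) squares;
  print_newline ()
```

The point is that the user writes an ordinary `map` over a list, and
all the threading and joining is hidden behind the combinator, which
is exactly the kind of construct that fits a functional language.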

I also anticipate that it should not be very difficult to write a
parallelizing compiler for OCaml. It would be great to directly target
all those nifty register files etc. in the NVIDIA GPUs. :) And it
seems perhaps the only sane way to make the kind of fine-grained
optimizations that a complex architecture calls for.

Best,

-- 
Eray Ozkural, PhD candidate.  Comp. Sci. Dept., Bilkent University, Ankara
http://groups.yahoo.com/group/ai-philosophy
http://myspace.com/arizanesil http://myspace.com/malfunct