From: David McClain <dmcclain1@m...>
Subject: Re: [Caml-list] Wish List for Large Mutable Objects
Some good points here by Ville-Pertti,

Indeed, the Scientific mode requires a balanced modulus operation on each
array index, not the one presently offered by OCaml's Pervasives module. But
this modulus is used in lieu of bounds checking anyway, and the world has
come to accept the slight cost of array bounds checking.

There are really two issues that got mixed together here, only because
BigArray mixed them up... One is the use of Scientific mode for some arrays.
The other is memory-mapped arrays. These are really two separate issues, and
the extra cost of accessing mmapped arrays is worth paying compared with
slower buffered file I/O. It would not be an acceptable cost for normal
memory-bound arrays.

Some processors do have alignment requirements, but every file system I was
referring to always guarantees a minimal alignment based on the underlying
array element type. These alignments generally coincide with the most
stringent alignment requirements in use today. Some processors, like the G4,
appear to be more lax about alignment requirements, but my bet is that
misaligned data causes some slowdown. I think the x86 architectures operate
this way too.

However, you do raise an interesting point about endianness. The more
portable file formats have generally adopted network byte ordering,
typically by incorporating the old Sun XDR data representations. And indeed,
for memory-mapped arrays, this byte swapping would be an extra cost. But
even so, it would still be far faster than buffered file I/O. My own tests
show that a more or less random access pattern over the mmapped array is 200
times faster than fread/fwrite-style data access. So any additional machine
cycles can easily be hidden in that performance difference.
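To give a feel for that per-element cost, here is an illustrative OCaml
decoder (my sketch, not anyone's library code) for one network-order
(big-endian) double pulled out of raw bytes -- the kind of swap an XDR-style
mmapped file would require on a little-endian machine:

```ocaml
(* Assemble 8 bytes, most significant first, into an Int64 bit
   pattern, then reinterpret the bits as an IEEE-754 double. *)
let be_double bytes off =
  let bits = ref 0L in
  for k = 0 to 7 do
    bits := Int64.logor (Int64.shift_left !bits 8)
                        (Int64.of_int (Char.code (Bytes.get bytes (off + k))))
  done;
  Int64.float_of_bits !bits

let () =
  (* 1.0 in IEEE-754 big-endian is 3F F0 00 00 00 00 00 00 *)
  let b = Bytes.of_string "\x3f\xf0\x00\x00\x00\x00\x00\x00" in
  assert (be_double b 0 = 1.0)
```

A handful of shifts and ORs per element -- easily buried under the two
orders of magnitude separating mmapped access from fread/fwrite.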

But again, let's separate these two issues. I generally know when I'm
accessing a mmapped array and when I'm not; I had to offer up a filename in
order to do the mmapping... The only reason these two conversation threads
merged is that when I read the BigArray documentation, I found out that it
offers a primitive form of mmapped access in addition to normal memory-bound
array access.
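For reference, that mmapped flavor looks roughly like this in OCaml -- a
minimal sketch using Unix.map_file (the same facility was Bigarray.Array1.map_file
in older releases), with error handling omitted:

```ocaml
(* Map [n] native doubles from the file at [path] as a Bigarray.
   With [true] (shared), the file is grown to fit and writes are
   carried back to disk by the OS. *)
let map_doubles path n =
  let fd = Unix.openfile path [ Unix.O_RDWR; Unix.O_CREAT ] 0o644 in
  let ga = Unix.map_file fd Bigarray.float64 Bigarray.c_layout true [| n |] in
  Unix.close fd;  (* the mapping outlives the descriptor *)
  Bigarray.array1_of_genarray ga
```

Once mapped, element access is ordinary a.{i} indexing, with the OS paging
in the file behind the scenes rather than going through buffered I/O.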

Not sure what multiple mappings you were referring to... I meant to allow a
kind of scatter-gather COW (copy-on-write) on normal memory-bound arrays.
Memory-mapped arrays are a problem apart from this. Despite what might
appear as a cost overhead, the savings can be quite significant when
combined with smart array slicing and sectioning.

For example, in my NML, whenever I do an array slice (more complex
operations than BigArray supports), what I actually do is pay the price of
all the if-then-else branching on only the first descent, generating a tree
of lambda closures on the way back out, so that all the actual copying
occurs without any more testing along the way. Sort of like reaching down
your throat and pulling yourself inside out... heh!
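As a toy illustration of the idea (the names are hypothetical and this is
nowhere near NML's actual slicing machinery): all the validation and
branching happens once, while compiling the plan; replaying the plan is
straight-line blits with no tests left in the loop:

```ocaml
(* One contiguous piece of a compiled slice plan. *)
type slice = { src_off : int; dst_off : int; len : int }

(* The "first descent": validate each piece once and close over the
   source array, returning a list of test-free copy closures. *)
let compile_copy (slices : slice list) (src : float array) =
  List.map
    (fun s ->
      if s.src_off < 0 || s.src_off + s.len > Array.length src then
        invalid_arg "compile_copy";
      fun dst -> Array.blit src s.src_off dst s.dst_off s.len)
    slices

(* Replaying the plan: no branching, just the copies. *)
let run_copy plan dst = List.iter (fun f -> f dst) plan

let () =
  let src = [| 1.; 2.; 3.; 4.; 5. |] in
  let plan = compile_copy
      [ { src_off = 0; dst_off = 0; len = 2 };
        { src_off = 3; dst_off = 2; len = 2 } ] src in
  let dst = Array.make 4 0. in
  run_copy plan dst;
  assert (dst = [| 1.; 2.; 4.; 5. |])
```

The same compiled plan can be replayed against many destination arrays,
which is where the amortization pays off.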

The speed of these compound slicings is enormously faster than conventional
imperative logic. So while some operations are more costly, others benefit
greatly from the higher-order logic. In fact, a simple-minded analysis shows
that if you ever intend to read or write an array whose representation needs
converting, it pays to create a native double array once, paying the cost of
the representation conversion just once, and then make repeated, faster,
non-mutating accesses to the underlying data. Keeping the array around in a
foreign format just adds incremental costs that will exceed this one-time
copying cost if you hit every element several times.

But as often as not, we do slice interesting sections from the data. I'm
not sure this ever happens without first hitting every element in a
vectorized math op... My guess is no... and so the cost of copying must
occur no matter what.

David McClain
Senior Corporate Scientist
Avisere, Inc.

+1.520.390.7738 (USA)
david.mcclain@avisere.com



-------------------
To unsubscribe, mail caml-list-request@inria.fr Archives: http://caml.inria.fr
Bug reports: http://caml.inria.fr/bin/caml-bugs FAQ: http://caml.inria.fr/FAQ/
Beginner's list: http://groups.yahoo.com/group/ocaml_beginners