From: Brian Hurt <bhurt@s...>
Subject: Re: [Caml-list] Array 4 MB size limit


On Sat, 20 May 2006, Jozef Kosoru wrote:

> The 4G address space limit is off-topic here. Please note that the OCaml
> array size limit is 4M, not 4G.

It is entirely on-topic.  How much headroom do you think you have?

Let's perform an experiment here.  Let's assume that Xavier decides you're 
right, and releases a new version of Ocaml which removes the 4M array 
limit.  How big an array will you be able to create then?  More than 
4M, yes- but how much more?

The hardware imposes a hard and fast 4G limit.  On 32-bit x86 you can't 
get a process larger than this.  But note, this includes everything- not just 
4G of data, but also all code, stack, shared libraries, address space set 
aside for the OS, everything.  The OS alone wants to grab at least 1G, if 
not 2G of address space (remember that it needs to mmap that hot new 512M 
graphics card you just bought- so claiming 1G as a minimum isn't extreme, 
as 512M will go to the graphics card).  To make life simple, let's say 
that everything except the huge array will fit in the upper 2G, leaving 2G 
for the huge array.
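
(You can ask the runtime itself, by the way- Sys.word_size and 
Sys.max_array_length are both in the standard library.  A minimal check; 
on a 32-bit build this prints 32 and 4194303, i.e. 2^22 - 1 elements:)

    (* Report the runtime's word size and its array-length limit.
       On a 32-bit build: word size 32, max array length 4194303. *)
    let () =
      Printf.printf "word size: %d bits\n" Sys.word_size;
      Printf.printf "max array length: %d elements\n" Sys.max_array_length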

Now if it's a huge array of integers, each element only takes up 4 bytes, 
which means we can fit 512M integers in this array.  This sounds nice- on 
paper.  Wow, we could make an array 128 times as big as we can now! 
Except the proper way to look at this is that it's a mere 7 bits larger 
(2^29 elements versus the current 2^22).
And that's best case.

If we change it to an array of floats, each element now takes 8 bytes, and 
that 2G can only hold 256M floats.  Our 128x improvement has now dropped 
to a 64x improvement- and we're starting to get a bad feeling about where 
this is going.
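
(Written out, it's just the division from the last two paragraphs- a 
minimal sketch, nothing Ocaml-specific about the arithmetic:)

    (* Elements that fit in 2G of address space, and the improvement
       factor over the current 4M (2^22) element limit. *)
    let () =
      let two_gig = 2.0 ** 31.0 in
      let current_limit = 2.0 ** 22.0 in
      List.iter
        (fun (name, bytes_per_element) ->
          let elements = two_gig /. bytes_per_element in
          Printf.printf "%s: %.0fM elements, %.0fx improvement\n"
            name (elements /. (2.0 ** 20.0)) (elements /. current_limit))
        [ ("4-byte ints", 4.0); ("8-byte floats", 8.0) ]

This prints 512M/128x for ints and 256M/64x for floats- the numbers above.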

If we start trying to hold 3-dimensional points, that's 3 floats- and 
our 64x improvement has now dropped to about a 21x improvement.  Skaller's 
about to pipe up right about here and proclaim the advantages of 
single-precision floats, which would get you back up to about a 42x 
improvement.  But also note that I'm ignoring the boxing costs here- 
adding the tag word for the structure plus the pointer in the array (a 
structure is boxed) ups us from a size of 24 bytes to 32 bytes (for double 
precision- 12 bytes to 20 bytes for single precision), making our 21x 
improvement (42x improvement) a mere 16x (25x) improvement.
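
(Here's the layout I'm counting, as a sketch of the double-precision case 
on a 32-bit build.  An all-float record is stored unboxed behind a single 
header word, and the array holds a pointer to each record:)

    (* One boxed 3D point on a 32-bit build: 4-byte header + three
       unboxed 8-byte doubles = 28 bytes, plus the 4-byte pointer
       sitting in the array slot: 32 bytes per element. *)
    type point = { x : float; y : float; z : float }

    let () =
      let two_gig = 2.0 ** 31.0 in
      let bytes_per_point = 4.0 +. 24.0 +. 4.0 in
      Printf.printf "points in 2G: %.0fM (%.0fx over 4M)\n"
        (two_gig /. bytes_per_point /. (2.0 ** 20.0))
        (two_gig /. bytes_per_point /. (2.0 ** 22.0))

which prints 64M points and the 16x figure.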

Now, what happens if we try to hold a triangle of three points (i.e. for 
3D polygon mapping)?  How many polygons can we hold?  Well, a polygon 
takes 9 floats (3 floats each for 3 points), so that's 72 bytes there. 
Assume the polygon is a structure and each point is a structure, and that 
means 4 tag words (1 for each of the 3 points + 1 for the polygon 
structure) and 4 pointers of 4 bytes each, for a grand total of 104 bytes 
per polygon.  2G can now hold about 19M unique polygons- a mere 5x 
advantage.
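
(Same accounting, one level deeper- the 104 bytes is the 9 doubles plus 
4 header words plus 4 pointers, as a sketch:)

    (* One boxed triangle on a 32-bit build: the triangle block
       (4-byte header + three 4-byte field pointers = 16 bytes),
       three point blocks at 28 bytes each (header + 3 doubles = 84),
       plus the 4-byte array slot: 16 + 84 + 4 = 104 bytes. *)
    type point = { x : float; y : float; z : float }
    type triangle = { a : point; b : point; c : point }

    let () =
      let bytes_per_triangle = 16.0 +. 84.0 +. 4.0 in
      Printf.printf "triangles in 2G: %.1fM\n"
        ((2.0 ** 31.0) /. bytes_per_triangle /. (2.0 ** 20.0))

which prints 19.7M- the "about 19M" above.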

The 4M limit is "artificial" and "low"- but not by as much as you might 
think.

And the 4G limit starts hitting you in other ways.  Consider the classic 
programming pattern of reading the entire file into a list (or array) of 
strings.  Considering that I can get a 300G hard disk for about $110:
http://froogle.google.com/froogle_cluster?q=SATA+hard+disk&pid=4829187676318423498&oid=15465117802862862360&btnG=Search+Froogle&scoring=mrd&hl=en

It costs about $0.73USD to buy 2G of hard disk space.  I try to slurp that 
73 cent file into memory, and boom- welcome to the 4G limit.  Heck, you 
have to do special incantations just to open and read a file that big, let 
alone trying to store the whole thing in memory.  This is why I claim that 
once hard disks start getting larger than the address space, the address 
space needs to increase.
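
(The classic pattern looks like this- a minimal sketch, and nothing in it 
is wrong.  On a 32-bit runtime it's the address space that fails, not the 
code: a 2G file simply doesn't fit.  A single-string slurp dies even 
earlier, since Sys.max_string_length is only about 16M on 32 bits:)

    (* Classic slurp: read every line of a file into a list.
       Perfectly reasonable- until the file is 2G and the process
       only has about 2G of address space to hold it. *)
    let read_all_lines path =
      let ic = open_in path in
      let rec loop acc =
        match input_line ic with
        | line -> loop (line :: acc)
        | exception End_of_file -> close_in ic; List.rev acc
      in
      loop []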

And I'm not even going to go into the "subtle" bugs here, like ptrdiff_t 
overflows, or the fact that as the address space fills, a wild pointer is 
more likely to point to valid memory, causing problems that are harder to 
debug than simple segfaults.

On the other side, increasing the array size has a cost.  Among other 
things, it slows down garbage collection.  If done badly, it could break 
existing code.  The C99 standard did this- by introducing long long, it 
*silently* broke conformant code.  I've yet to forgive them for doing 
this.  More to the point, it silently broke *my* code.  Which is why I'm 
violently allergic to suggestions that we "patch around" the 32-bit 
limitation.  When people suggest this, I tend to hear "I want to break 
your code".  Because that's how it worked last time.

> I disagree. Actually I think the opposite. A suggestion to move to 64-bit
> is just patching the symptoms of the real problem - a design flaw in the
> OCaml programming language. And this issue will not go away on 64 bits.
> Just like 22 bits for the array size is a completely artificial limit on
> 32-bit platforms, (N - 10) imposes once again such an illogical 54-bit
> limit on 64-bit computers. And I don't think "it makes complete sense on
> a 64-bit architecture".

By this logic, Ocaml should be able to support array sizes of 10^100 on 
16-bit platforms.  Why is Ocaml imposing stupid artificial limits?

I note with humor that the Java programming language a) defined ints to be 
32 bits, and b) made the index type for arrays, vectors, and many other 
data structures ints and not longs.  When you move to 64 bits, it's Java, 
much more so than Ocaml, that hits artificial and low limits on array 
size.  Ocaml will be able to have arrays millions of times larger than the 
largest possible array in Java.
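
(The arithmetic, assuming a 64-bit Ocaml runtime, where 
Sys.max_array_length is 2^54 - 1, against Java's int-indexed arrays, 
which top out at 2^31 - 1 elements:)

    (* 2^54 / 2^31 = 2^23, i.e. about 8.4 million: a 64-bit Ocaml
       array can be millions of times longer than any Java array. *)
    let () =
      let ocaml_max = 2.0 ** 54.0 in
      let java_max = 2.0 ** 31.0 in
      Printf.printf "ratio: %.1f million\n" (ocaml_max /. java_max /. 1e6)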

> If you create OCaml programs for yourself then maybe you can switch to
> 64 bits overnight (and perhaps you already did so). But if you need to
> distribute your application to customers/users then it's much more
> complicated. And explaining to them that they should upgrade all
> workstations because your software has various 4M limitations here and
> there sounds like a bad joke, doesn't it?

You call them 32-bit limits, which they are.  I admit that this is a 
problem- but it's not one created by Ocaml.

>
> 32-bit x86 platform is still good enough for many purposes and its 2^32
> limit is acceptable in most cases while 2^22 is not.

Except that you're comparing apples to oranges again.  The 2^22 limit 
is in *words*, while the 2^32 limit is in *bytes*.  A 2^22-word array is 
2^24 bytes on a 32-bit machine, and a process only gets about 2^31 of those 
2^32 bytes- so it's really more of a 2^24 vs. 2^31 comparison.

Brian