Date: 2004-05-28 (20:54)
From: Eric Dahlman <edahlman@a...>
Subject: Re: [Caml-list] Out_of_memory exception in output_value

On May 28, 2004, at 2:44 PM, skaller wrote:

> On Sat, 2004-05-29 at 02:47, Jacques Carette wrote:
>>> It turns out that during output_value, the OCaml code allocates a
>>> block of memory to contain the marshalled representation of the data.
>>> Each time it runs out of room, it doubles the size of that block 
>>> using
>>> realloc().
>> Wouldn't it be more system-friendly to try successively factors *2, 
>> *1.5,
>> *1.1, and *1.05 before actually failing?

I am not sure that it would have much benefit for all of the 
complexity it would introduce.  I expect the set of interesting cases 
whose marshaled representation is too large for the present approach 
but still small enough to fit with a smaller factor is really small.  
Keep in mind that when the buffer is grown, the assumption is that you 
will have to copy all of the already-marshaled data (the lower half of 
the new block) into the new object, since it is almost guaranteed that 
the block cannot be grown in place: something else will almost surely 
have been allocated after it.  A better approach would be to have an 
output_really_big_object which allocated a huge buffer in a single go, 
or an approach which streamed the marshaled data so as not to have to 
keep it all in memory.  (I don't know whether that approach is possible 
using OCaml's marshaling; I didn't look.)  At any rate, this still 
addresses what I believe to be a small class of useful situations: on 
the whole I suspect that the data will either be small enough to work 
with the present approach or so big that the fractional approach will 
also fail.
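For what it's worth, the standard library's Marshal.to_channel writes straight to a channel rather than handing you a string, which at least removes the need to hold the marshaled string yourself.  Whether the runtime builds the whole representation in an internal buffer first is an implementation detail (the format's header records the total size, so it may well still buffer).  A minimal roundtrip sketch:

```ocaml
(* Sketch: marshal directly to a channel instead of via a string in
   memory.  Whether this avoids the runtime's internal doubling buffer
   is an implementation detail; the on-disk header records the total
   size, so it may still buffer internally. *)
let () =
  let file = Filename.temp_file "marshal" ".bin" in
  let big = Array.init 100_000 (fun i -> i) in
  let oc = open_out_bin file in
  Marshal.to_channel oc big [];        (* write directly to the channel *)
  close_out oc;
  let ic = open_in_bin file in
  let (back : int array) = Marshal.from_channel ic in
  close_in ic;
  Sys.remove file;
  assert (back = big);
  print_endline "roundtrip ok"
```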

> I have a published book chapter part of which deals with
> this issue in some detail, including some performance
> measurements.
> The general solution is much cleaner -- to use
> a user supplied allocation function something like:
> new_size = f old_size requested_size
> Doubling the size, or indeed even using a pure multiplier,
> is only one possible option: simply adding a fixed amount
> is another option, and an arbitrary function handles
> both cases and many more. So a general solution is to make
> the operation polymorphic by making the calculator function
> an argument (in practice with a sensible default).
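The proposal quoted above could be sketched in OCaml as a buffer whose growth policy is a caller-supplied function new_size = f old_size requested_size.  None of these names come from the standard library or from skaller's book; they are purely illustrative:

```ocaml
(* Illustrative sketch: a growable byte buffer parameterized by a
   user-supplied growth function.  All names here are hypothetical. *)
type buf = {
  mutable data : Bytes.t;
  mutable len : int;
  grow : int -> int -> int;   (* old_size -> requested_size -> new_size *)
}

let create ?(grow = fun old req -> max (2 * old) req) n =
  { data = Bytes.create (max n 1); len = 0; grow }

let add_char b c =
  let cap = Bytes.length b.data in
  if b.len >= cap then begin
    let new_cap = max (b.grow cap (b.len + 1)) (b.len + 1) in
    let data = Bytes.create new_cap in
    Bytes.blit b.data 0 data 0 b.len;  (* the copy realloc would do *)
    b.data <- data
  end;
  Bytes.set b.data b.len c;
  b.len <- b.len + 1

let () =
  (* Default doubling policy vs. a fixed-increment policy. *)
  let b1 = create 4 in
  let b2 = create ~grow:(fun old _ -> old + 16) 4 in
  String.iter (fun c -> add_char b1 c; add_char b2 c) "hello, world";
  assert (Bytes.sub_string b1.data 0 b1.len = "hello, world");
  assert (Bytes.sub_string b2.data 0 b2.len = "hello, world");
  print_endline "ok"
```

Both policies produce the same contents; only the allocation pattern differs, which is the point of making the calculator a parameter.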

Since you will have to copy the data on a realloc, doubling has the 
nice effect of giving us a constant amortized bound on the cost of 
reallocation in a calculation.  This does not hold for approaches like 
growing in fixed increments, which will convert an algorithm that is 
linear in the size of the data into one that is quadratic.  I think 
that doubling may well be the best "guesstimate" for a general library 
to make.
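To make the quadratic blow-up concrete, here is a small (illustrative, not from any library) calculation of the total bytes copied across reallocations while appending n bytes one at a time under each policy:

```ocaml
(* Count the bytes copied by successive reallocations while appending
   n bytes one at a time.  Doubling copies O(n) bytes in total, i.e.
   amortized constant per append; growing by a fixed increment k
   copies O(n^2 / k) bytes in total. *)
let copied_bytes grow n =
  let cap = ref 1 and len = ref 0 and copied = ref 0 in
  for _ = 1 to n do
    if !len >= !cap then begin
      copied := !copied + !len;  (* existing data moved to the new block *)
      cap := max (grow !cap) (!len + 1)
    end;
    incr len
  done;
  !copied

let () =
  let n = 100_000 in
  let doubling = copied_bytes (fun c -> 2 * c) n in
  let fixed = copied_bytes (fun c -> c + 64) n in
  Printf.printf "doubling: %d bytes copied, fixed+64: %d bytes copied\n"
    doubling fixed;
  assert (doubling < 2 * n);       (* linear total: amortized O(1) *)
  assert (fixed > 50 * doubling)   (* quadratic blow-up *)
```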

> My experiments were with variable length arrays used
> to hold big integers, so some operations produced
> small increases (like addition) whereas other
> produced increases much faster (like multiplication).

In this domain-specific case you have a wealth of information and can 
calculate a very tight bound on the size of the resulting array, so 
you should never have to grow the result as you go.  Just allocate it 
correctly at the beginning, or am I missing something?

> A quick and easy fix would be to use a global variable
> containing the function which the client can reset.
> Yeah, I hate global state but here the actual function
> chosen has no semantic impact, it only affects performance
> (unless you run out of memory .. of course). So this
> time a global might be OK :)

I don't know about this one; it really smacks of premature 
optimization and abstraction inversion.  The present approach to 
buffer growth has a constant amortized cost.  If one were in the 
position of significantly improving on that for a given problem domain 
by carefully tweaking such an allocation function, then I would almost 
guarantee that it would be better to create a custom solution for that 
domain.  That would allow explicitly using the domain information to 
create a better algorithm, rather than just tweaking a general 
mechanism.

Just my two cents,
