
Repeated custom block allocations lead to space leaks #4616

Closed
vicuna opened this issue Sep 22, 2008 · 3 comments

vicuna commented Sep 22, 2008

Original bug ID: 4616
Reporter: @mshinwell
Assigned to: @damiendoligez
Status: closed (set by @alainfrisch on 2014-04-23T15:39:04Z)
Resolution: suspended
Priority: normal
Severity: major
Version: 3.10.2
Target version: 4.01.1+dev
Category: ~DO NOT USE (was: OCaml general)
Monitored by: @ygrek dpowers sweeks till letaris bhurt-aw @dbuenzli @Chris00 @mmottl

Bug description

This one concerns native-code compilation; I haven't checked whether the behaviour differs for bytecode.

Allocation of a custom block with a finalizer, or of size > Max_young_wosize, increments a counter (caml_extra_heap_resources) by the user-provided mem/max ratio. caml_adjust_gc_speed is then called, which causes a minor GC (including a major collection slice) to be performed at the next suitable time.

This is at odds with the manual, which, as I read it, promises a full major collection: "If you allocate many custom blocks with used / max = 1 / N, the GC will then do one full cycle (examining every object in the heap and calling finalization functions on those that are unreachable) every N allocations."
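To make the discrepancy concrete, one can watch the GC's major-collection counter while allocating custom blocks. A minimal sketch (the loop count and array size are arbitrary choices, not taken from measurements):

let () =
  let before = (Gc.stat ()).Gc.major_collections in
  for _i = 1 to 1000 do
    ignore (Bigarray.Array1.create Bigarray.int Bigarray.c_layout (32 * 1024))
  done;
  let after = (Gc.stat ()).Gc.major_collections in
  (* If the manual's claim held literally, this count would track the
     declared resource usage; in practice only slices are scheduled. *)
  Printf.printf "major collections during the loop: %d\n" (after - before)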

The collection that is performed when the ratio tips over the 1.0 boundary uses a complex heuristic to determine how much to collect (== the argument to caml_major_collection_slice). Suppose that heuristic seriously underestimates the amount that's been allocated recently. Then, the slice collection might fail to collect an amount commensurate with the amount allocated, and we can end up with the counter being reset to 0.0 and yet still have a load of garbage on the heap (with associated allocations on the C heap). If we keep on allocating the custom blocks, then the same happens over again, with memory usage in a sort of feedback cycle of allocations and collections. This can lead to bloated programs with oscillatory memory usage.

One way to easily trigger this behaviour is using a program that allocates and then lets go of a lot of Bigarray values. I have seen one circumstance where an executable making heavy use of Bigarrays was slowly growing up through the gigabytes of memory used. Unfortunately it had to be killed before it was obvious whether the growth was part of a stabilisation cycle; it's not clear whether this problem can lead to completely unbounded usage.

Here is a simple example which exhibits the problem. At first sight this program might appear not to use very much memory at all -- but on my machine it oscillates between 200 MB and 280 MB:

let create () =
  Bigarray.Array1.create Bigarray.int Bigarray.c_layout (32 * 1024)

(* Each iteration allocates two fresh bigarrays and drops them both:
   the recursive call passes along only [x], which the next iteration
   ignores (the parameter of [f] is a wildcard). *)
let rec f _ =
  let x = create () in
  let y = create () in
  Bigarray.Array1.blit x y;
  f x

let () = f (create ())

Adding some full major collections dramatically reduces the memory usage. (Similarly, the program that was in the gigabytes sits below 10 MB with some regular forced full major collections.) I wonder if the heuristic needs adjusting somehow to cope with this pattern of allocation better?
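For the record, a sketch of that workaround applied to the example above, forcing a full major collection at a fixed interval (the interval of 1000 iterations is an arbitrary choice):

let create () =
  Bigarray.Array1.create Bigarray.int Bigarray.c_layout (32 * 1024)

let rec f n x =
  (* Arbitrary interval; anything that bounds the garbage outstanding
     between forced collections will do. *)
  if n mod 1000 = 0 then Gc.full_major ();
  let y = create () in
  Bigarray.Array1.blit x y;
  f (n + 1) y

let () = f 0 (create ())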

vicuna commented May 20, 2010

Comment author: @damiendoligez

The simple example actually behaves as expected: the bigarray library declares to the GC that bigarrays may use up to 256 MB of memory, so oscillating between 200 and 280 MB is pretty good.
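For context, the arithmetic behind that figure (assuming a 64-bit machine, where a Bigarray.int element is 8 bytes; the 256 MB maximum is the one cited above):

let () =
  let bytes_per_array = 32 * 1024 * 8 in     (* each array: 256 KB *)
  let declared_max = 256 * 1024 * 1024 in    (* bigarray's declared max *)
  (* used/max = 1/1024 per allocation, so the GC owes one full cycle of
     work per 1024 allocations, i.e. per 256 MB of bigarray data. *)
  Printf.printf "one full cycle per %d allocations\n"
    (declared_max / bytes_per_array)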

I'm still interested in a smallish example that exhibits the problem, especially if you also test it with Gc.allocation_policy = 1 to avoid fragmentation.
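For anyone reproducing this, a one-liner to switch policies (assuming an OCaml version whose Gc.control record includes the allocation_policy field; 0 is next-fit, 1 is first-fit):

let () =
  Gc.set { (Gc.get ()) with Gc.allocation_policy = 1 }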

vicuna commented Jun 14, 2013

Comment author: @xavierleroy

This PR has been in the "feedback" state for 3 years. I move to close it unless we gather new information very soon.

vicuna commented Apr 23, 2014

Comment author: @alainfrisch

As suggested by Xavier.
