
Is it possible to get rid of the "fatal" out of memory ? #7185

Closed
vicuna opened this issue Mar 17, 2016 · 7 comments

Comments


vicuna commented Mar 17, 2016

Original bug ID: 7185
Reporter: @mlasson
Assigned to: @damiendoligez
Status: assigned (set by @mshinwell on 2016-12-08T09:13:56Z)
Resolution: open
Priority: low
Severity: feature
Category: runtime system and C interface
Monitored by: @nojb @gasche @diml @ygrek @dbuenzli @alainfrisch

Bug description

As you all know, there are essentially two ways an OCaml program can fail due to a lack of memory.

Either:

  • it raises a catchable Out_of_memory exception; this is usually raised by a C binding that wants to report an allocation failure, or by the runtime when caml_alloc_shr fails while we are not in a minor collection,

  • or it terminates with a fatal error, when caml_alloc_shr is unable to expand the major heap in the middle of a minor collection.

The first situation can be exercised by allocating big chunks of memory:
let _ =
  try
    ignore
      (Array.init Sys.max_array_length
         (fun k -> Array.init Sys.max_array_length (fun _ -> k)))
  with Out_of_memory -> Printf.printf "I've survived\n%!" (* Printed *)

whereas the second is likely to happen while reaching the limit with small increments of memory:
let rec init acc n =
  if n >= 0 then init (n :: acc) (n - 1)
  else acc

let _ =
  try ignore (init [] max_int)
  with Out_of_memory -> Printf.printf "I've survived\n%!" (* Not printed *)

In some applications, reaching the memory limit is a normal way to use a program (e.g. in scientific or financial computing it is natural to push a system to its limits, and those limits are often quite difficult to estimate without actually running the computation). In such cases, a fatal error makes it much harder to report the failure and its reason to the user in a decent way.

Would it really be impossible to change the GC so that it avoids fatal errors and always raises Out_of_memory?
While we are in the middle of a minor collection, could we somehow undo the unfinished work of the GC (i.e. move everything we have just copied to the major heap back into the minor heap) in order to raise the exception as if we had never tried to collect? Or has some information been definitively lost in the process?


vicuna commented Mar 17, 2016

Comment author: @alainfrisch

It seems we could indeed scan the minor heap linearly, detect blocks which have been copied to the major heap in order to copy them back. The hard part might be to undo the rewriting of pointers from other blocks into these moved blocks, since the information is not kept, AFAICT. Perhaps one could instead scan from the roots again.

Damien: do you think it is somehow doable?

While the problem is most noticeable on 32-bit architectures, it is also applicable to other systems with constraints on memory usage.


vicuna commented Mar 17, 2016

Comment author: @jhjourdan

Another solution, I think, would be to pre-allocate a block of memory for the major heap that is never touched, except when running out of memory.

When encountering an out-of-memory condition, we:
1- use this "emergency" block to empty the minor heap
2- unwind the stack to find the exception handler, thus releasing local roots
3- re-run the major GC (and the compactor) until enough memory has been freed
4- re-allocate the emergency region
5- launch the exception handler


vicuna commented Mar 17, 2016

Comment author: @alainfrisch

This is a very interesting approach. Damien: do you see any obstacle or downside to this approach (except that one "wastes" the equivalent of the minor heap size)?


vicuna commented Mar 18, 2016

Comment author: @lpw25

What happens if you can't reallocate the emergency buffer in step 4?


vicuna commented Mar 18, 2016

Comment author: @alainfrisch

What happens if you can't reallocate the emergency buffer in step 4?

I can see several variants:

  • Immediately try to reallocate the buffer in step 4; if it fails, simply continue unwinding the stack until the next handler, and iterate. The downside is that this can silently skip handlers that would, for instance, log the error or restore some invariants. Of course, if those handlers need to allocate, it is clear enough that they cannot be guaranteed to run under an OOM condition; but when they manage to do their job without allocating, one can do better:

  • Instead of reallocating the buffer immediately, keep the allocation pointer at the top of the minor heap (even though the heap is empty) so that the next allocation triggers the GC. Only reallocate the buffer at that point (and re-raise Out_of_memory if this is not possible). This preserves the expected semantics that the OOM exception is only raised at allocation points, and it guarantees that e.g. a try...finally block that does not allocate can always do its job.

  • Immediately try to reallocate the buffer in step 4; if it fails, abort the process as today.

  • Immediately try to reallocate the buffer in step 4; if it fails, split the minor heap into two equal halves (the new minor heap and the new emergency buffer). Fall back to the previous variant if the minor heap becomes too small.


vicuna commented Dec 8, 2016

Comment author: @mshinwell

As far as I recall, we were considering turning all out-of-memory errors into fatal errors, to reduce the number of possibilities for asynchronous exceptions (cf. #852).

@doligez Can you comment?


github-actions bot commented May 9, 2020

This issue has been open one year with no activity. Consequently, it is being marked with the "stale" label. What this means is that the issue will be automatically closed in 30 days unless more comments are added or the "stale" label is removed. Comments that provide new information on the issue are especially welcome: is it still reproducible? did it appear in other contexts? how critical is it? etc.
