
Compilation of record functional modification is costly #6217

Closed
vicuna opened this issue Oct 30, 2013 · 7 comments

Comments


vicuna commented Oct 30, 2013

Original bug ID: 6217
Reporter: @ppedrot
Assigned to: @gasche
Status: resolved (set by @gasche on 2016-06-26T15:39:23Z)
Resolution: fixed
Priority: normal
Severity: tweak
Version: 4.01.0
Target version: 4.03.1+dev
Fixed in version: 4.04.0 +dev / +beta1 / +beta2
Category: back end (clambda to assembly)
Tags: junior_job
Monitored by: @bobzhang @gasche @hcarty

Bug description

Currently, compilation of the { r with ... } expression seems to have a length threshold that triggers a change in the way it is compiled. For instance,

type t = {
  t1 : int option;
  t2 : int option;
  t3 : int option;
  t4 : int option;
  t5 : int option;
}

let set1 x t =
  { t with t1 = Some x }

produces a direct allocation on the minor heap, together with a handful of assembly moves that copy the record field by field. The problem is that adding a [t6 : int option] field changes this to an [Obj.dup] followed by a [caml_modify].

I don't know whether this threshold was chosen based on benchmarks, but [caml_modify] is really costly, and for the record (no pun intended) this overhead was indeed observable in Coq. Maybe the threshold should be increased?
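For reference, here is a hedged sketch of the six-field variant described above (the type name `t6` and the test values are illustrative, not from the report). Its semantics are identical to the five-field version; only the generated code differs:

```ocaml
(* Six-field variant: per the report, adding t6 pushes { r with ... }
   past the compiler's threshold, switching the generated code from a
   fieldwise copy to Obj.dup + caml_modify. Source-level behavior is
   unchanged either way. *)
type t6 = {
  t1 : int option;
  t2 : int option;
  t3 : int option;
  t4 : int option;
  t5 : int option;
  t6 : int option;
}

let set1 x t = { t with t1 = Some x }

let () =
  let init =
    { t1 = None; t2 = None; t3 = None; t4 = None; t5 = None; t6 = None }
  in
  let updated = set1 42 init in
  assert (updated.t1 = Some 42);
  assert (updated.t6 = None);
  print_endline "ok"
```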


vicuna commented Sep 15, 2014

Comment author: @damiendoligez

There is another problem with this: if the record is immutable it's possible that, in the near future, the call to [caml_modify] will break the GC's invariants.


vicuna commented Sep 17, 2014

Comment author: bobot

The current heuristic for choosing the piece-by-piece copy is 3 + 2 * number_of_modified_fields >= number_of_fields (transl_record in translcore.ml).

Instead of modifying this heuristic, and to take Damien's comment into account, couldn't we just replace the setfield

Psetfield(lbl.lbl_pos, maybe_pointer expr)

with

Psetfield(lbl.lbl_pos, false)

since we are sure that the assigned values are older than the duplicated record?
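At the source level, the "Obj.dup followed by a setfield" strategy under discussion can be sketched by hand with the Obj module. This is illustrative only: the compiler emits this at the lambda level inside transl_record, and the record type and function names below are hypothetical:

```ocaml
(* Illustrative sketch only, not the compiler's implementation:
   what Obj.dup + setfield amounts to when written by hand. *)
type r = { x : int option; y : int option }

let update_via_dup (r : r) (v : int) : r =
  let o = Obj.dup (Obj.repr r) in
  (* Setting a pointer-valued field goes through the write barrier
     (caml_modify) -- the cost this issue is about. bobot's proposal
     corresponds to skipping the barrier because the assigned value
     is older than the freshly duplicated block. *)
  Obj.set_field o 0 (Obj.repr (Some v));
  (Obj.obj o : r)

let () =
  let r = { x = None; y = Some 1 } in
  let r' = update_via_dup r 5 in
  assert (r'.x = Some 5);
  assert (r'.y = Some 1);
  print_endline "ok"
```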


vicuna commented Dec 5, 2014

Comment author: @gasche

Note: François Pottier also hit this limit when experimenting with converting the Menhir parser generator from a mutable field update to a pure record update. While the pure update comes at basically no cost when the parser state is a record of 5 fields, growing it to 6 fields incurs an overall 30% performance overhead in the generated parsers.


vicuna commented Jan 15, 2015

Comment author: @damiendoligez

@bobot, it's not enough to be sure that the assigned values are older than the record, you also have to be sure that the record is in the minor heap.


vicuna commented Dec 4, 2015

Comment author: madroach

@doligez
Wouldn't this be as easy to do as comparing the record size to Max_young_wosize?


vicuna commented Apr 10, 2016

Comment author: Otini

I have run some benchmarks, and the fieldwise copy is indeed always faster as long as the new record is allocated on the minor heap, that is, as long as it has fewer than 256 fields. For bigger records (up to 513 fields) the second method is 1.5 to 4 times faster.
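A minimal microbenchmark sketch along these lines (this is not Otini's actual benchmark; the record type, field names, and iteration count are all illustrative) could look like:

```ocaml
(* Hedged sketch of a functional-update microbenchmark: times n
   repeated { r with ... } updates of a small record. Only the
   relative timings across record sizes are meaningful. *)
type s = { a : int; b : int; c : int; d : int; e : int; f : int }

let bench n =
  let r0 = { a = 0; b = 0; c = 0; d = 0; e = 0; f = 0 } in
  let acc = ref r0 in
  let t0 = Sys.time () in
  for i = 1 to n do
    acc := { !acc with a = i }
  done;
  let elapsed = Sys.time () -. t0 in
  (* Use the result so the loop is not optimized away. *)
  assert ((!acc).a = n);
  elapsed

let () = Printf.printf "updates took %.4fs\n" (bench 1_000_000)
```

Comparing this loop compiled with record sizes on either side of the threshold is enough to expose the caml_modify overhead discussed above.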


vicuna commented Jun 26, 2016

Comment author: @gasche

This issue was solved by merging Otini's pull request into trunk (b3216f0):

#538

Now the immutable field-by-field copy is always preferred below max_young_wosize (the maximal size of objects that go in the minor heap), that is currently 256.

(The pull request is a bit noisy with three commits instead of one, apologies for forgetting to rebase.)
