Compilation on 4.04.0 takes twice as long as 4.03.0 #7456

Closed
vicuna opened this issue Jan 9, 2017 · 7 comments


vicuna commented Jan 9, 2017

Original bug ID: 7456
Reporter: @kayceesrk
Assigned to: @mshinwell
Status: resolved (set by @mshinwell on 2017-03-10T10:21:02Z)
Resolution: duplicate
Priority: normal
Severity: major
Platform: x86-64
OS: Ubuntu
OS Version: 16.04
Version: 4.04.0
Target version: 4.05.0 +dev/beta1/beta2/beta3/rc1
Category: misc
Monitored by: @gasche @ygrek

Bug description

While trying to reproduce issue #7452, I noticed that compilation with 4.04.0 takes twice as long as compiling the same file with 4.03.0. I've attached the timing info:

4.03.0

all: 31.864s
parsing(test.ml): 0.544s
typing(test.ml): 2.516s
transl(test.ml): 0.476s
generate(test.ml): 28.324s
cmm(sourcefile(test.ml)): 0.572s
compile_phrases(sourcefile(test.ml)): 23.968s
selection(sourcefile(test.ml)): 1.696s
comballoc(sourcefile(test.ml)): 0.176s
cse(sourcefile(test.ml)): 1.036s
deadcode(sourcefile(test.ml)): 0.600s
spill(sourcefile(test.ml)): 2.548s
liveness(sourcefile(test.ml)): 2.664s
split(sourcefile(test.ml)): 1.008s
regalloc(sourcefile(test.ml)): 12.932s
linearize(sourcefile(test.ml)): 0.204s
emit(sourcefile(test.ml)): 0.980s
scheduling(sourcefile(test.ml)): 0.012s
assemble(sourcefile(test.ml)): 0.004s

real 1m7.576s
user 1m6.860s
sys 0m0.540s

4.04.0

all: 82.040s
parsing(test.ml): 0.560s
typing(test.ml): 2.480s
transl(test.ml): 0.516s
generate(test.ml): 78.480s
cmm(sourcefile(test.ml)): 0.552s
compile_phrases(sourcefile(test.ml)): 30.496s
selection(sourcefile(test.ml)): 1.812s
comballoc(sourcefile(test.ml)): 0.256s
cse(sourcefile(test.ml)): 1.268s
deadcode(sourcefile(test.ml)): 0.716s
spill(sourcefile(test.ml)): 3.088s
split(sourcefile(test.ml)): 1.656s
liveness(sourcefile(test.ml)): 3.576s
regalloc(sourcefile(test.ml)): 16.236s
linearize(sourcefile(test.ml)): 0.356s
emit(sourcefile(test.ml)): 1.316s
scheduling(sourcefile(test.ml)): 0.000s
assemble(sourcefile(test.ml)): 0.004s

real 2m19.632s
user 2m18.336s
sys 0m1.140s

Steps to reproduce

Requires opam package extprot to be installed.
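
The report does not give the exact build command; below is a minimal driver sketch, assuming an ocamlfind-based compile of the attached test.ml (the flags and package invocation are an assumption, not the reporter's command line):

```ocaml
(* Hypothetical reproduction driver: compile the attached test.ml with
   -dtimings so the per-pass breakdown above is printed. The ocamlfind
   invocation is an assumption; adjust it to match the original build. *)
let () =
  let cmd = "ocamlfind ocamlopt -package extprot -dtimings -c test.ml" in
  Printf.printf "Running: %s\n%!" cmd;
  exit (Sys.command cmd)
```

Running the same command under a 4.03.0 and a 4.04.0 opam switch and comparing the -dtimings output should reproduce the figures above.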


vicuna commented Jan 9, 2017

Comment author: @gasche

(Just looking at the data and drawing the obvious conclusion.) The overhead seems to come from "generate", then "compile_phrases" (although the latter could simply be a consequence of "generate" producing noticeably more code to compile: all backend passes seem to be 30%-60% slower, which is consistent with more code being produced):

4.03.0:

generate(test.ml): 28.324s
compile_phrases(sourcefile(test.ml)): 23.968s

4.04.0:

generate(test.ml): 78.480s
compile_phrases(sourcefile(test.ml)): 30.496s
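
For concreteness, the two quoted lines give these ratios (a quick check using only the numbers above; the snippet is plain arithmetic, not compiler tooling):

```ocaml
(* Ratios of the quoted timings: 4.03.0 vs 4.04.0, in seconds. *)
let () =
  let ratio name t403 t404 =
    Printf.printf "%-16s %.3fs -> %.3fs (%.2fx)\n" name t403 t404 (t404 /. t403)
  in
  ratio "generate" 28.324 78.480;        (* ~2.77x *)
  ratio "compile_phrases" 23.968 30.496  (* ~1.27x *)
```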


vicuna commented Jan 10, 2017

Comment author: @kayceesrk

The numbers themselves are quite suspect and don't seem to add up. For example, with the 4.03.0 numbers:

all: 31.864s

parsing(test.ml): 0.544s
typing(test.ml): 2.516s
transl(test.ml): 0.476s
generate(test.ml): 28.324s
-------
31.86s
-------

cmm(sourcefile(test.ml)): 0.572s
compile_phrases(sourcefile(test.ml)): 23.968s
selection(sourcefile(test.ml)): 1.696s
comballoc(sourcefile(test.ml)): 0.176s
cse(sourcefile(test.ml)): 1.036s
deadcode(sourcefile(test.ml)): 0.600s
spill(sourcefile(test.ml)): 2.548s
liveness(sourcefile(test.ml)): 2.664s
split(sourcefile(test.ml)): 1.008s
regalloc(sourcefile(test.ml)): 12.932s
linearize(sourcefile(test.ml)): 0.204s
emit(sourcefile(test.ml)): 0.980s
scheduling(sourcefile(test.ml)): 0.012s
assemble(sourcefile(test.ml)): 0.004s
--------
55.440s
--------

On 4.04.0:

all: 82.040s

parsing(test.ml): 0.560s
typing(test.ml): 2.480s
transl(test.ml): 0.516s
generate(test.ml): 78.480s
-------
82.036s
-------

cmm(sourcefile(test.ml)): 0.552s
compile_phrases(sourcefile(test.ml)): 30.496s
selection(sourcefile(test.ml)): 1.812s
comballoc(sourcefile(test.ml)): 0.256s
cse(sourcefile(test.ml)): 1.268s
deadcode(sourcefile(test.ml)): 0.716s
spill(sourcefile(test.ml)): 3.088s
split(sourcefile(test.ml)): 1.656s
liveness(sourcefile(test.ml)): 3.576s
regalloc(sourcefile(test.ml)): 16.236s
linearize(sourcefile(test.ml)): 0.356s
emit(sourcefile(test.ml)): 1.316s
scheduling(sourcefile(test.ml)): 0.000s
assemble(sourcefile(test.ml)): 0.004s
--------
61.332s
--------

I suspect an unrelated bug in the timing infrastructure.
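
The mismatch is what you would expect if the per-pass lines are nested inside "generate"/"compile_phrases" rather than being disjoint: summing nested timers double-counts. A minimal sketch of that effect (toy code, not the compiler's actual -dtimings machinery):

```ocaml
(* Toy nested timers: "generate" wraps the two inner passes, so summing
   all three printed lines over-counts the real total, much like the
   sums above. Illustration only. *)
let time name f =
  let t0 = Sys.time () in
  let result = f () in
  Printf.printf "%s: %.3fs\n" name (Sys.time () -. t0);
  result

(* Burn roughly [s] seconds of CPU time. *)
let busy s =
  let t0 = Sys.time () in
  while Sys.time () -. t0 < s do () done

let () =
  time "generate" (fun () ->
    time "compile_phrases" (fun () -> busy 0.2);
    time "regalloc" (fun () -> busy 0.1))
```

Here "generate" alone already accounts for the whole run, so adding the inner lines to it over-counts; that reading is consistent with the later note that the -dtimings output was changed to make the aggregation clearer.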


vicuna commented Jan 10, 2017

Comment author: joris

Thank you for looking at this. I noticed this too, but I'm so used to the really long compile time of this file that I didn't even think of submitting a PR.


vicuna commented Feb 19, 2017

Comment author: @xavierleroy

So, suspect numbers or real issue? Has anyone looked at the sizes of intermediate representations to get an idea of where the code gets bigger? Anyone willing to investigate?


vicuna commented Feb 22, 2017

Comment author: @alainfrisch

Bumping severity. Such a regression in compiler performance is not a minor issue.


vicuna commented Feb 23, 2017

Comment author: @mshinwell

I will look at this.


vicuna commented Mar 10, 2017

Comment author: @mshinwell

1. I don't think there is anything wrong with -dtimings. Recent fixes on trunk have altered the output so it should be clearer which timings aggregate with which others.

2. The performance regression is fixed by "Fix hash table performance problem in Emitaux (PR#7456)" #1092.
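
The linked pull request title points at hash table performance in Emitaux. As a generic illustration of that class of problem, and not necessarily the actual change made in #1092, the sketch below shows how an OCaml Hashtbl degrades when the default structural hash only sees a bounded prefix of each key; the key shape (slow_key) is invented for the demo:

```ocaml
(* Illustration only: a generic Hashtbl degradation demo, not the actual
   Emitaux fix. The default Hashtbl.hash traverses only a bounded prefix
   of a structured key, so these keys, which differ only in their last
   element, all hash identically. *)
let slow_key i = List.init 100 (fun j -> if j = 99 then i else 0)

let () =
  let tbl = Hashtbl.create 16 in
  for i = 0 to 9_999 do
    Hashtbl.add tbl (slow_key i) i
  done;
  Printf.printf "hash of every key: %d\n" (Hashtbl.hash (slow_key 0));
  (* All entries share one bucket, so this lookup is a linear scan with
     structural comparison of 100-element lists rather than an O(1) hit. *)
  Printf.printf "found: %d\n" (Hashtbl.find tbl (slow_key 1234))
```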

@vicuna vicuna closed this as completed Mar 10, 2017
@vicuna vicuna added this to the 4.05.0 milestone Mar 14, 2019
@vicuna vicuna added the bug label Mar 20, 2019