Date: 2004-09-05 (20:12)
From: skaller <skaller@u...>
Subject: Re: [Caml-list] Announcing the OMake build system version 0.9.1
On Mon, 2004-09-06 at 04:45, Marcin 'Qrczak' Kowalczyk wrote:
> skaller <skaller@users.sourceforge.net> writes:
> > together these two functions conspire to determine which
> > input files generate which outputs.
> So I must either explicitly tell the dependencies, or run a script
> which computes them. Same as in make.

This is getting too confusing.

For interscript itself, the answer is no: it automatically
extracts all sources no matter what you change.

No dependency information is required, and no targets,
and no makefile.

It does this by fixpoint iteration -- the interscript
program knows nothing about the sources you're generating.
It does know (a) the set of LP sources, because you give
it the first one on the command line and it follows includes;
and (b) every file you generated, because you called it
to open the file and it keeps track.

What I'm suggesting is that this process is quite general.
It isn't restricted to 'extract source from LP source'.
In particular I leverage that in interscript itself,
by allowing the client to generate sources any way
they think fit using embedded Python.

This idea extends to any kind of build process.
There's no difference in principle between extracting
sources from a tarball, running interscript, or compiling
a C program to an object file.

All such processes build by fixpoint iteration.
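A minimal sketch of such a fixpoint build loop, under the assumption that outputs can be hashed to detect change between passes; `fixpoint_build` and `snapshot` are hypothetical names, not interscript's actual API:

```python
import hashlib
import pathlib

def snapshot(paths):
    """Hash every tracked output so a change between passes is detectable."""
    return {p: hashlib.sha256(pathlib.Path(p).read_bytes()).hexdigest()
            for p in paths if pathlib.Path(p).exists()}

def fixpoint_build(steps, outputs, max_passes=50):
    """Run every build step repeatedly until no output changes.

    `steps` is a list of zero-argument callables; `outputs` lists the
    files they may write.  Every step runs unconditionally on every
    pass -- the loop stops only when a whole pass changes nothing,
    which is the convergence check mentioned in the text.
    """
    before = snapshot(outputs)
    for n in range(1, max_passes + 1):
        for step in steps:
            step()                      # no dependency info needed
        after = snapshot(outputs)
        if after == before:             # a pass that changed nothing: done
            return n
        before = after
    raise RuntimeError("build did not converge")
```

Note that even a trivial build takes two passes here: one to do the work and one to observe that nothing changed, exactly as argued below.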

Clearly the exact build commands must be given
to the build system -- whether that is done the hard
way or the commands are generated doesn't matter,
since the fixpoint concept can rebuild the command
script just like any other output.

Clearly if you do NOT encode some dependency information,
the process will be inefficient.

If you lie, it may fail. If you provide partial information,
the build speeds up -- the more you provide, the better.

> Assume that I forget that compilation of a particular file also reads
> another included file, i.e. forget one dependency link. Then after
> changing the included file, the target will not be rebuilt (I don't
> believe that it will, because then all other targets are in the same
> situation and everything would be rebuilt, which is impractical).
> So *all* dependencies must be specified. Same as in make.

You are missing the idea -- not only is everything rebuilt
every time (in principle), it is rebuilt AT LEAST TWICE.

There is no way to check convergence in one pass.

So in the first instance the dependency information
is used to optimise the build order. This reduces
the number of passes required.

For example, in the worst case, to build 26 files A-Z,
where each depends on the previous one alphabetically,
if the build order is ZY .. CBA then it will take 26
passes to build, and a 27th to verify convergence.

That's not 26 compiles, which you think is too slow --
it's 26 SQUARED compiles, since every pass attempts
every file.

If you reorder the list using partial dependency
information, that reduces the number of passes -- NOT
the number of compiles per pass. If you have enough
information to order the compiles ABCDE ... Z then it
takes one pass to build (and one to verify convergence).
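The pass-counting argument can be checked with a small simulation; `passes_to_converge` is a hypothetical helper, and a "compile" here succeeds only once all of a file's dependencies have been built:

```python
def passes_to_converge(order, deps):
    """Count full passes until one pass builds nothing new.

    `order` is the fixed sequence in which files are attempted each
    pass; `deps` maps a file to the files it depends on.  A file
    builds only when all its dependencies are already built; the
    final, empty pass is the convergence check.
    """
    built = set()
    passes = 0
    while True:
        passes += 1
        changed = False
        for f in order:
            if f not in built and all(d in built for d in deps.get(f, [])):
                built.add(f)
                changed = True
        if not changed:
            return passes

# 26 files A-Z, each depending on its alphabetical predecessor.
files = [chr(c) for c in range(ord("A"), ord("Z") + 1)]
chain = {f: [files[i - 1]] for i, f in enumerate(files) if i > 0}

worst = passes_to_converge(list(reversed(files)), chain)  # → 27 (26 to build + 1 to verify)
best = passes_to_converge(files, chain)                   # → 2  (1 to build + 1 to verify)
```

Each pass still attempts every file, so the worst-case order really does cost on the order of 26 squared compile attempts, while the dependency-sorted order costs one pass plus verification.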

Eliminating some of those compiles is a related but
distinct problem. You may well need full dependency
information to eliminate every eliminable compilation.

As mentioned, I personally don't bother with that
in the Felix package. I just use a hand-coded linear
order (one pass always converges), check
each object file against its *.ml file,
and then recompile everything thereafter -- that
always works, because it accounts for all the
dependencies (they've already been checked).

This algorithm doesn't do the minimum compiles,
but ocamlopt.opt is so damn fast it just doesn't matter:
in practice it reduces my compile time by 5-10 times
(since most work is on the backend implementation).
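The "compile everything thereafter" strategy above can be sketched as follows; `stale_suffix` and `obj_of` are hypothetical names, not part of interscript or the Felix build scripts:

```python
import os

def stale_suffix(sources, obj_of):
    """Return the suffix of the linear build order that must be rebuilt.

    `sources` is the hand-maintained linear order; `obj_of` maps a
    source path to its object path.  The first source newer than its
    object (or missing its object) marks the start of the rebuild:
    everything from there on is recompiled, which is safe because the
    linear order already encodes all the dependencies.
    """
    for i, src in enumerate(sources):
        obj = obj_of(src)
        if (not os.path.exists(obj)
                or os.path.getmtime(src) > os.path.getmtime(obj)):
            return sources[i:]          # rebuild this file and all after it
    return []                           # everything is up to date
```

This over-compiles relative to a full dependency graph, but never under-compiles, which is the trade-off described above.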

So actually -- I'm not using the fixpoint iteration
at all to compile ocaml code. Its main use in my
package is to handle configuration (which can require
three passes since there's stuff that generates stuff
that generates stuff which the first generator needs .. :)

In practice the fixpoint stuff is quite useful:
first I naturally write things in dependency order
anyhow, and secondly all those passes aren't
done every time -- I just run one pass usually.
Remember that the system uses a persistent store,
so that one pass is probably pass 457 or something :)
I try to strike a balance between reasonable build
speed, and reasonable amount of script to specify
the actual build -- that's important for platform
independence and robustness -- remember the key use
of the build process isn't me building the package,
it's for my clients to build the package -- and they
really do have to compile everything :)

> > If you try to include a file that is generated,
> > all is well even if it is generated *after* you
> > include it.
> This implies that a compile error doesn't abort the compilation

Yes. More correctly, an error in the build process doesn't
stop the build from continuing (in general). If I want it
to, I have to actively program it.

> (because the error might result from lack of source which will be made
> later). So if I made a fatal error in a C header file included in many
> C sources, the time to rebuild all of them and detect the error
> multiple times will be wasted.

Yes. As I have said, the fixpoint idea works, but is
not automatically efficient -- you still need to do some
work to make it efficient.

However, unlike make, you can do it as your project grows,
and you can use arbitrary executable scripts.

> 'Make' would
> know that it makes no sense to compile B before C.

Make is brain dead; it only knows what you tell it.
It can't compute anything you can't do in Python
in a roughly similar number of lines of code.
The converse is not true -- Python is much more
expressive than make.

In both cases if you want a completely optimal
build order you have to maintain a complete
dependency graph.
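For instance, given the full graph, Python's standard library can already produce an optimal build order in a couple of lines; a sketch using `graphlib` (available since Python 3.9), with a hypothetical three-file graph:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency graph: C depends on B, B depends on A.
deps = {"B": {"A"}, "C": {"B"}}

# A topological order compiles every file after its dependencies,
# so one pass builds everything -- the optimal build order.
order = list(TopologicalSorter(deps).static_order())  # → ['A', 'B', 'C']
```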

> > As a counter-example: latex doesn't always converge.
> It's the only counter-example I know.

You will discover another source of recursive build
dependencies if you try to bootstrap your language --
and especially if you *also* try to write the build
tool that manages that in your language too .. :))

John Skaller, mailto:skaller@users.sf.net
voice: 061-2-9660-0850, 
snail: PO BOX 401 Glebe NSW 2037 Australia
Checkout the Felix programming language http://felix.sf.net

To unsubscribe, mail caml-list-request@inria.fr Archives: http://caml.inria.fr
Bug reports: http://caml.inria.fr/bin/caml-bugs FAQ: http://caml.inria.fr/FAQ/
Beginner's list: http://groups.yahoo.com/group/ocaml_beginners