From: Michael Walter <michael.walter@g...>
Subject: Re: [Caml-list] The boon of static type checking
On Sat, 12 Feb 2005 09:22:10 -0600 (CST), Brian Hurt <bhurt@spnz.org> wrote:
> On Mon, 7 Feb 2005, Michael Walter wrote:
> 
> > On Sun, 6 Feb 2005 23:34:02 -0600 (CST), Brian Hurt <bhurt@spnz.org> wrote:
> > > Probably a bad idea, but I've got to jump in here.
> > >
> > > Full disclosure: I *hate* C++.  Mainly because I've actually written real
> > > programs in it.  The next time I have to use C++ in any sort of serious
> > > way I'm registering c++sucks.com and starting a website to catalog all the
> > > different ways C++ sucks.  Feel free to stop reading at this point.
> > :-)
> >
> > > ...
> > > > g++ seems to generate better
> > > > code than ocamlopt for similar simple problems
> > > > (see Alioth for quantitative evidence given silly
> > > > set of sample 'problems')
> > >
> > > Yep.  And, conservatively, 10 times as much effort has gone into the gcc
> > > optimizer as the Ocaml optimizer.  Possibly 100 times.  For, according to
> > > Alioth, about a 10% improvement.  It's only with gcc 3.x that C++ managed
> > > to beat Ocaml on performance.
> > More effort having gone into gcc and better performance of gcc are
> > arguments pro gcc, right? ;-)
> 
> If the 10-30% performance advantage (best case) is the difference between
> success and failure, then maybe.  Of course, going to a professional C/C++
> compiler like Intel's cc, or IBM's xlc, will buy you another 5-10% over
> GCC, as they've put maybe 10x more effort into their compilers than has
> gone into gcc.
> 
> This is, of course, assuming that a) you are falling into the best case
> situation, and b) you'd have implemented the same algorithm in both cases,
> and c) time to implement is irrelevant.  Of course, if time to implement
> really is irrelevant, then going to hand-tuned assembly will buy you
> another 10-30%, generally, and occasionally 2x performance (SSE/Altivec
> optimizations).
Time to implement is obviously relevant.

> > > > IMHO the single major inefficiency in C++ is also a source
> > > > of efficiency -- lack of a garbage collector.
> > >
> > > It's a source of efficiency on the small scale- it's easy to write a 1,000
> > > line program with hand allocation.  Rather harder to write a 10,000 line
> > > program, and a major bitch to write a 100,000 line program without garbage
> > > collection.
> > Personally I like it that in C++ you actually have the choice to use
> > appropriate garbage collection schemes when you desire to do (yep,
> > multiple kind of GCs for different subsystems/data/... is a win).
> > Makes it easier with > 1,000,000 line programs :-)
> 
> Yes!  Having a choice means you can fuck it up!
Sure. That's part of the game: trading the "shooting yourself in the
foot" factor against the benefits you get from it.

> And I disbelieve the "makes it easier with large programs" statement.
I was saying that it's easier to write a > 1,000,000 line program
(possibly partially) with GC than without it.

> It's contrary to all evidence I've seen, and all my experience.  The
> complexity of a program is, I've postulated, a function of the number of
> interactions between different parts of the code.  And that therefore the
> innate complexity approximately scales with the square of the number of
> lines of code- so a 10,000 line program is 100 times as complicated as a
> 1,000 line program.  Brooks has evidence of this as well.
I sense bad abstractions.

> Now, if there are multiple different "memory management domains", that
> require different behaviors, you are now introducing new interactions to
> the program.  This is introducing complexity.
And reducing complexity for all the code which uses GC'ed memory
management. Again, a tradeoff.

> [example feat. wrong abstractions & shooting yourself in the foot is fun]

> > > Don't assume that inlining is optimization.  Actually, it generally isn't.
> > > Having actually timed it on modern hardware, a function call costs like
> > > 2-3 clock cycles these days.  Plus 1-2 clock cycles per argument.  This is
> > > compared to the 10-30 clock cycles a mispredicted branch costs, the 20+
> > > clock cycles an L1 cache miss/L2 cache hit costs, and the 100-350+ clock
> > > cycles of an L2 cache miss/memory fetch.
> > Inlining for very small functions generally is an optimization.
> 
> Very small functions, yes.  But it's less of an optimization than people
> think, and (especially in C++) it gets way overused.
I don't think so. In my experience basically no one uses
__forceinline except for "very small functions" (in a probably
misguided attempt to outsmart the compiler), and everyone lets the
compiler decide which functions to inline.

What I'm saying is that choosing a language is a tradeoff, and the
kind of tradeoff C++ gives you can be a very good one (if not the
best) for particular problem domains. You can see evidence for such a
domain in the time spent on improving already very good compilers :-)

Michael