STM support in OCaml
From: Gerd Stolpmann <info@g...>
Subject: Re: [Caml-list] STM support in OCaml
On Thursday, 09.03.2006 at 09:06 +1100, skaller wrote:
> On Wed, 2006-03-08 at 14:45 -0600, Brian Hurt wrote:
> 
> > One comment I will make is that a mutex is expensive, but not *that* 
> > expensive.  I just wrote a quick program (available if anyone cares) in 
> > GNU C that measures the cost, in clocks, of locking and unlocking a posix 
> > mutex.  On my desktop box (AMD Athlon XP 2200+ 1.8GHz), I'm getting a cost 
> > of like 44 clock cycles.  Which makes it less expensive than an L2 cache 
> > miss.

> I have no idea if Linux, for example, running SMP kernel,
> is smart enough to know if a mutex is shared between two
> processing units or not: AFAIK Linux doesn't support
> interprocess mutex. Windows does. Be interesting to
> compare.

Of course POSIX supports interprocess mutexes: mutexes are inherited
across fork(). And since Linux implements threads as processes, you can
even place mutexes in shared memory (but do not ask me how).
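
For the record, the shared-memory variant is not that mysterious either.
Here is a minimal sketch of how it might look (my illustration, not part
of any of the programs discussed here), assuming PTHREAD_PROCESS_SHARED
is available and omitting error checking:

/* Sketch: a pthread mutex shared between parent and child via an
 * anonymous shared mapping (assumes PTHREAD_PROCESS_SHARED support). */
#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* Map one page that both processes will see after fork(). */
    pthread_mutex_t *m = mmap(NULL, sizeof *m,
                              PROT_READ | PROT_WRITE,
                              MAP_SHARED | MAP_ANONYMOUS, -1, 0);

    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(m, &attr);

    if (fork() == 0) {                  /* child */
        pthread_mutex_lock(m);
        printf("child holds the mutex\n");
        pthread_mutex_unlock(m);
        _exit(0);
    }
    pthread_mutex_lock(m);              /* parent */
    printf("parent holds the mutex\n");
    pthread_mutex_unlock(m);
    wait(NULL);
    return 0;
}

Because the mapping is MAP_SHARED, parent and child really do contend on
the same mutex, not on two private copies of it.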

Anyway, measuring the cost of a mutex is probably not simple. Mutexes are
highly optimized for the common cases. And today "SMP" often behaves more
like NUMA, especially with multi-core CPUs.
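
Just to make concrete what such a measurement might look like, here is a
rough sketch (not Brian's actual program, and wall-clock based, so it only
gives a ballpark figure for the uncontended fast path):

/* Sketch: average cost of an uncontended lock/unlock pair.
 * Wall-clock based; numbers vary with CPU, libc and kernel. */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define ITERS 10000000L

int main(void)
{
    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < ITERS; i++) {
        pthread_mutex_lock(&m);
        pthread_mutex_unlock(&m);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("%.1f ns per lock/unlock pair\n", ns / ITERS);
    return 0;
}

A loop like this only exercises the user-space fast path of a free lock; a
congested lock, or one whose cache line lives on another CPU, is a
different story.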

> As mentioned before the only data I have at the moment
> is a two thread counter increment experiment on a dual
> CPU G5 box, where the speed up from 2 CPUs vs 1 was
> a factor of 15 .. times SLOWER.

You mean for a highly congested mutex you saw that slowdown. 
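
My guess is that the experiment looked roughly like the following (a
reconstruction, not skaller's code): two threads bumping one counter under
one mutex, so the lock and the counter's cache line bounce between the
CPUs on every iteration.

/* Sketch of a deliberately congested counter: two threads fighting
 * over one mutex and one cache line.  Reconstruction for illustration. */
#include <pthread.h>
#include <stdio.h>

#define ITERS 10000000L

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static long counter;

static void *worker(void *arg)
{
    (void)arg;
    for (long i = 0; i < ITERS; i++) {
        pthread_mutex_lock(&m);
        counter++;                  /* the contended critical section */
        pthread_mutex_unlock(&m);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);   /* second CPU joins in */
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);
    return 0;
}

Run it with one worker and then with two and compare the wall-clock times;
on a dual-CPU box the two-thread run can easily be much slower. That is
the congestion at work, not the intrinsic cost of the mutex.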

Gerd
-- 
------------------------------------------------------------
Gerd Stolpmann * Viktoriastr. 45 * 64293 Darmstadt * Germany 
gerd@gerd-stolpmann.de          http://www.gerd-stolpmann.de
Phone: +49-6151-153855                  Fax: +49-6151-997714
------------------------------------------------------------