From: Pierre-Evariste Dagand <pedagand@g...>
Subject: Re: [Caml-list] Re: Where's my non-classical shared memory concurrency technology?
Hi all,

>  i am under the impression that STM is harder to optimize since
>  generally you don't know what the transactions collided on. whereas
>  with a "hot lock" you can see precisely what code uses that lock and
>  what it locks.

I'm not so sure... In fact, my work for the past four months has been to
build a toy language to experiment with the Automatic Mutual Exclusion
semantics defined in [1]. At the beginning I was, just like most of
you, quite sceptical about STM and its performance.

Moreover, in this language *everything* runs inside a transaction:
unless you opt out explicitly with the keyword "unprotected", every
memory access is recorded in a transaction. This has the nice
property of producing, by default, provably thread-safe code.
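To give a feel for this "protected by default" discipline, here is a minimal OCaml sketch of the interface — NOT the toy language itself. A single global lock stands in for the real transaction machinery, and all the names ("protected", "unprotected", "big_lock") are hypothetical:

```ocaml
(* A single global lock as a stand-in for real STM machinery. *)
let big_lock = Mutex.create ()

(* By default, a computation runs "inside a transaction":
   here, mutual exclusion over all shared memory. *)
let protected (body : unit -> 'a) : 'a =
  Mutex.lock big_lock;
  match body () with
  | v -> Mutex.unlock big_lock; v
  | exception e -> Mutex.unlock big_lock; raise e

(* Opting out with "unprotected": full speed, no isolation. *)
let unprotected (body : unit -> 'a) : 'a = body ()

let counter = ref 0
let () = protected (fun () -> incr counter)
```

The point of the sketch is only the shape of the API: protection is the default, and opting out is a visible, explicit act in the source.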

But, without transactional processors, this has a high performance
cost, as you can imagine. The second part of my work has been to
develop a kind of profiler that points the developer at the "hot
transactions". I finished it just a minute ago, so I can
show you a nice, typical output:

Frequency   Definition pos.   Transactions
80.%        88                { [ 31247 | read ] } <-> { [ 31247 | read ] [ 31318 | write ] }
                              { [ 31247 | read ] } <-> { [ 31247 | read ] [ 31318 | write ] }
                              { [ 31247 | read ] } <-> { [ 31247 | read ] [ 31318 | write ] }
                              { [ 31247 | read ] } <-> { [ 31247 | read ] [ 31318 | write ] }
20.%        49                { [ 31425 | read ] } <-> { [ 31425 | read ] [ 31450 | write ] }

So you first develop your code with everything in a transaction,
then you run the profiler. Here, you see that the
reference cell defined at character 88 of the source code is at the
root of 80% of the conflicts, and you know which transactions
conflict. At that point, you can either change the code to get
less contention on this reference cell (my choice here) or shorten the
transactions by explicitly committing them more frequently.

(This run profiles my implementation of a concurrent queue; the cell
at character 88 holds the size of the queue, which is
incremented/decremented when we push/pop.)
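Here is a hypothetical reconstruction of that hot spot, just to illustrate the "reduce contention" fix. The types and the chosen remedy are my guesses for this post, not the actual profiled code:

```ocaml
(* The hot spot: every push and pop reads and writes the same
   [size] cell, so all of them conflict with each other. *)
type 'a queue = { mutable items : 'a list; mutable size : int }

let push q x =
  q.items <- x :: q.items;
  q.size <- q.size + 1

(* One way to get less contention: stop maintaining [size]
   eagerly and derive it only on demand, so push/pop no longer
   touch a shared counter. *)
type 'a lazy_size_queue = { mutable items' : 'a list }

let push' q x = q.items' <- x :: q.items'
let length q = List.length q.items'
```

The trade-off is the usual one: `length` becomes O(n), but pushes and pops stop colliding on a single cell.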

Hence, concerning performance, you will be able to identify the
"hot transactions" easily and reduce the number of conflicts. But I
can hear someone in the room arguing that they don't even want to
bother with transactions because of a 4242% slowdown. Good news!
With the "unprotected" keyword, you can work outside of any
transaction, at full speed, *without sacrificing thread-safety* (and
this is ensured by the type system).
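One way a type system can enforce this — a sketch of the general technique, not necessarily how my toy language does it — is to make transactional cells abstract and require a capability token that only "protected" hands out. Unprotected code simply cannot name shared state. All names below are invented, and a single lock again stands in for real STM:

```ocaml
module Tx : sig
  type 'a tvar                          (* abstract shared cell *)
  type tx                               (* capability token *)
  val tvar : 'a -> 'a tvar
  val read : tx -> 'a tvar -> 'a
  val write : tx -> 'a tvar -> 'a -> unit
  val protected : (tx -> 'a) -> 'a
end = struct
  type 'a tvar = 'a ref
  type tx = Tx
  let big_lock = Mutex.create ()
  let tvar v = ref v
  let read Tx r = !r
  let write Tx r v = r := v
  let protected body =
    Mutex.lock big_lock;
    match body Tx with
    | v -> Mutex.unlock big_lock; v
    | exception e -> Mutex.unlock big_lock; raise e
end
```

Since the only way to obtain a `tx` value is inside `Tx.protected`, code running outside a transaction cannot read or write a `tvar` at all — the unsafety is ruled out at compile time rather than checked at run time.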

To sum up, the idea is "thread-safe by default", then "earn more
(speedup) by working more". Some people have been elected on such
ideas; I might get the Turing award for this one...

If someone is interested in this toy language, let me know. The
great and crazy part now would be to transfer this technology to
OCaml. For the programmer, that's just 3 new keywords in the
language. For the person who implements AME in OCaml, well...
that's a lot of work...



Pierre-Evariste DAGAND