From: Ulf Wiger (TN/EAB) <ulf.wiger@e...>
Subject: Re: [Caml-list] Re: Where's my non-classical shared memory concurrency technology?
Gerd Stolpmann wrote:
> This is simply nonsense. Different concurrency techniques
> have different problems.

True.

> For example, in event
> handling-based concurrency you do not need locks, hence
> you cannot run into deadlocks.

Yes you can. We've even had to write design rules to this
effect to educate our commercial Erlang programmers.

There seems to be a common belief that switching from
synchronous to asynchronous communication will eliminate
the risk of deadlock, but as Jon noted, if two
threads/processes each wait for an event from the other,
they may deadlock*. Asynchronous programming just makes
the deadlock much more difficult to detect than if the
processes communicated synchronously.

* In the sense that neither can continue. Deadlock doesn't
require the presence of locks at all.
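A minimal sketch of this kind of lock-free deadlock, using
Python threads and queues as stand-ins for message-passing
processes (the names and timeout are mine, purely for
illustration): each side waits for the other's message before
sending its own, so neither ever proceeds.

```python
import queue
import threading

def actor(inbox: queue.Queue, outbox: queue.Queue, log: list) -> None:
    try:
        # Wait for the peer's message first...
        msg = inbox.get(timeout=0.2)
        log.append(msg)
        # ...and only then send a reply (never reached).
        outbox.put("reply")
    except queue.Empty:
        # Stand-in for deadlock detection: neither side made progress.
        log.append("timed out")

q1, q2 = queue.Queue(), queue.Queue()
log1, log2 = [], []
t1 = threading.Thread(target=actor, args=(q1, q2, log1))
t2 = threading.Thread(target=actor, args=(q2, q1, log2))
t1.start(); t2.start()
t1.join(); t2.join()
print(log1, log2)  # both sides time out: a deadlock with no locks
```

No mutex appears anywhere, yet the system is stuck in exactly
the "neither can continue" sense above; without the timeout,
both threads would block forever.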

Going back to Jon's observation that you cannot exploit
multicore with event-based programming, I'm inclined to
agree, even though I think that message-passing concurrency
is quite suitable for making use of multiple cores (albeit
addressing a wholly different problem from data parallelism).

The problem with event-based programming is that it doesn't
scale complexity-wise, and just as when programming with
mutexes, introducing true parallelism just makes it worse.
While there may be some simple applications which actually
can scale up this way, doing so with more "interesting"
concurrency patterns is courting disaster.

I could list some juicy examples of important commercial
products that are limited to a single core for this very
reason, but alas I'm not permitted to. I have to ask you
to take my word for it.

When scaling up message-passing (or event-based) concurrency,
you have to do one of two things:

1) ensure that your code is stable in the face of timing
    variations and message reordering
2) calculate the entire event/state matrix

For hard real-time, you must do (2) anyway. For soft real-time,
you don't have to, since a missed deadline can be viewed as
a temporary glitch rather than a system error. And (2) suffers
from the same problems as model checking - it doesn't scale
well.
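Option (1) can be made concrete with a small sketch (mine, not
from the original post): a message handler is stable under
reordering when the final state does not depend on arrival
order, for instance because every update commutes with every
other.

```python
import itertools

def run(handler, initial, messages):
    """Deliver messages one at a time, folding them into the state."""
    state = initial
    for m in messages:
        state = handler(state, m)
    return state

# A commutative handler: accumulate the set of item ids seen.
# Set union is order-insensitive, so any delivery order works.
def add_item(state, msg):
    return state | {msg}

msgs = ["a", "b", "c"]
results = {run(add_item, frozenset(), p)
           for p in itertools.permutations(msgs)}
print(len(results))  # 1 -- every ordering yields the same final state
```

A handler whose updates do not commute (say, "set the value to
the last message seen") would yield a different result per
ordering, and then you are back to enumerating the event/state
matrix of option (2).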

For a phone system (soft real-time), if you pick up the phone
and don't get dial tone, you replace the handset, then pick
it up again - normally, it will work then. Everyone's experienced
this, and it doesn't bother us unless it happens often.
Similarly, we can accept it if a call occasionally takes a few
seconds longer than usual.

The same behavior would be extremely unnerving - possibly fatal -
if the brakes on your car (hard real-time) started exhibiting it.

BR,
Ulf W