Mantis Bug Tracker

View Issue Details

ID:               0004351
Project:          OCaml
Category:         OCaml general
View Status:      public
Date Submitted:   2007-07-26 18:33
Last Update:      2009-03-31 13:09
Reporter:         mottl
Assigned To:      xleroy
Priority:         normal
Severity:         tweak
Reproducibility:  always
Status:           closed
Resolution:       fixed
Product Version:  3.10+dev
Fixed in Version: 3.11.0

Summary: 0004351: Performance improvements to thread synchronization
Description:

It seems that several parts of the runtime system do not make optimal
use of locking primitives. Here are some recommendations, which are
easy to implement and which would give quite noticeable performance
improvements in heavily threaded code:

Mutex.unlock releases the runtime lock during the call to
pthread_mutex_unlock. This seems completely unnecessary, since this
function never blocks and should always return very quickly. Note that
leaving the blocking section itself unlocks a mutex and may even cause
a context switch, so why more than double the effort? Fix: drop the
calls to enter/leave_blocking_section, together with the GC protection
of the mutex value, which then becomes unnecessary.
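
A minimal sketch of the resulting stub, assuming the helper names used
in otherlibs/systhreads/posix.c at the time (Mutex_val,
caml_pthread_check); it is meant to read as a patch against that file,
not as a standalone compilation unit:

  #include <pthread.h>
  #include <caml/mlvalues.h>   /* value, Val_unit */

  /* Mutex.unlock without the blocking section: pthread_mutex_unlock
     never blocks, so the runtime lock is kept and the wrapper needs
     no GC protection. */
  value caml_mutex_unlock(value wrapper)        /* ML */
  {
    int retcode;
    pthread_mutex_t * mut = Mutex_val(wrapper);
    retcode = pthread_mutex_unlock(mut);        /* fast, non-blocking */
    caml_pthread_check(retcode, "Mutex.unlock");
    return Val_unit;
  }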

Note that caml_condition_signal and caml_condition_broadcast also release
the runtime lock, which may not be a good idea for similar reasons.
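
Under the same assumptions (and in the same file), the signal stub
would shrink to a plain wrapper; Condition_val is likewise an assumed
accessor:

  /* Condition.signal without the blocking section:
     pthread_cond_signal never blocks either. */
  value caml_condition_signal(value wrapper)    /* ML */
  {
    int retcode = pthread_cond_signal(Condition_val(wrapper));
    caml_pthread_check(retcode, "Condition.signal");
    return Val_unit;
  }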

There is also a problem with Mutex.lock: this function essentially makes
an unrealistically pessimistic assumption, namely that the mutex to be
acquired is already locked. Only then would it be justified to always
release the runtime lock.

But it is good practice to avoid lock contention, so in the vast
majority of cases sane user code will attempt to acquire an uncontended
mutex. In this most frequent case there is no reason to release the
runtime lock. Fix: attempt to lock the mutex with "pthread_mutex_trylock"
first. Only if this fails, release the runtime lock and do a blocking
"pthread_mutex_lock". In this infrequent case the almost certain context
switch that follows will dwarf the cost of trying to lock twice, while
the frequent case becomes considerably more efficient, as sketched below.
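
A sketch of the trylock-first Mutex.lock stub, under the same naming
assumptions as the earlier fragments:

  #include <errno.h>            /* EBUSY */
  #include <caml/memory.h>      /* Begin_root/End_roots */
  #include <caml/signals.h>     /* enter/leave_blocking_section */

  /* Trylock-first Mutex.lock: the uncontended fast path keeps the
     runtime lock; only on contention (EBUSY) is it released before
     blocking.  Begin_root keeps the wrapper alive across the
     blocking call. */
  value caml_mutex_lock(value wrapper)          /* ML */
  {
    int retcode;
    pthread_mutex_t * mut = Mutex_val(wrapper);
    retcode = pthread_mutex_trylock(mut);       /* fast path */
    if (retcode == EBUSY) {
      Begin_root(wrapper)
        enter_blocking_section();               /* slow path: release runtime lock */
        retcode = pthread_mutex_lock(mut);
        leave_blocking_section();
      End_roots();
    }
    caml_pthread_check(retcode, "Mutex.lock");
    return Val_unit;
  }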

The same problem (pessimistic locking assumptions) also seems to be
present in the code for locking I/O channels.

The proposed fixes would not only reduce the computational effort required
to lock/unlock mutexes under realistic conditions, but would also reduce
(very expensive!) context-switches under these conditions if there are
many threads competing for the OCaml runtime lock.

I have attached two files in a tarball that demonstrate the drastic
difference in an admittedly artificial benchmark. "foo.ml" contains the
OCaml code, and "foo_stubs.c" contains improved bindings for thread
synchronization, which you may want to incorporate into your code in
otherlibs/systhreads/posix.c. I haven't written any fixes for the
I/O mutexes, but this should be straightforward.

Just compile the code using:

  ocamlopt -cc "gcc" -thread unix.cmxa threads.cmxa foo_stubs.c foo.ml -o foo

Running this code takes around 56ms on my machine. Commenting out the
code block in foo.ml that binds the "fast" locking primitives (i.e. to
use the standard calls) will result in a program that takes around 1500ms,
which is a pretty substantial slowdown (> 25x).
Tags: No tags attached.
Attached Files: locking_improvements.tar.gz (1,158 bytes) 2007-07-26 18:33

- Notes
(0004122)
jm (reporter)
2007-07-27 03:41
edited on: 2007-07-27 06:04

Intel P4 mobile, Linux 2.6.22, ocamlopt 3.10.1+dev0 (2007-05-21)


$ L=42; F=0; for i in `seq $L`; do A=`./foo`; F=$(($F+${A%ms})); done; echo $(($F/$L))
761

$ L=42; F=0; for i in `seq $L`; do A=`./foo_fast`; F=$(($F+${A%ms})); done; echo $(($F/$L))
127

$ echo $((761/127))
5

Still a _nice_ improvement, Mr. Mottl =)

(0004127)
xleroy (administrator)
2007-08-01 16:01

Very interesting ideas, thanks. I especially like the suggestion to use mutex_trylock before releasing the runtime lock.

The reason for releasing the runtime lock around non-blocking mutex and
condition variable operations is to offer opportunities for rescheduling
around these (communication) operations, and therefore make it less
likely that a thread can starve other threads. I agree we're probably
rescheduling too much with the current implementation. As usual, we need
to find a compromise between throughput and fairness. I put that on my
"to do" list.
(0004222)
xleroy (administrator)
2007-10-31 10:13

Tentative implementation in CVS trunk. POSIX version tested OK, Win32 version remains to be tested.

- Issue History
Date Modified Username Field Change
2007-07-26 18:33 mottl New Issue
2007-07-26 18:33 mottl File Added: locking_improvements.tar.gz
2007-07-27 03:41 jm Note Added: 0004122
2007-07-27 06:04 jm Note Edited: 0004122
2007-08-01 16:01 xleroy Note Added: 0004127
2007-08-01 16:01 xleroy Assigned To => xleroy
2007-08-01 16:01 xleroy Status new => acknowledged
2007-10-31 10:13 xleroy Note Added: 0004222
2007-10-31 10:13 xleroy Status acknowledged => resolved
2007-10-31 10:13 xleroy Resolution open => fixed
2009-03-31 13:09 xleroy Status resolved => closed
2009-03-31 13:09 xleroy Fixed in Version => 3.11.0

