[Caml-list] Thread and sockets
Date: 2001-08-09 (09:33)
From: Xavier Leroy <Xavier.Leroy@i...>
Subject: Re: [Caml-list] Thread and sockets
> Hello,
> I'm building a Caml client/server program using posix threads and
> sockets. Right now, I have 3 threads:
>  1. normal thread for user computation
>  2. a thread doing a blocking call to Unix.accept to setup new
>     connections
>  3. a thread doing a blocking call to select (with infinite
>     timeout) to get messages on opened connections
> I've observed that my design is bad and I have several race
> conditions. Worse problems are when I open and close connections. For
> example, if thread (3) is blocked and a new connection is opened by
> thread (2), thread (3) won't read new messages. I have similar problems
> when sockets are closed. How do I do it properly?

You could merge threads 2 and 3, by including the accepting socket in
the "read" set of file descriptors given to select.  The accepting
socket will generate a "ready for read" event when a connection
attempt is pending; you can then do "accept" on the socket to pick the
new connection.
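A minimal sketch of that merged loop (modern OCaml, `Unix` module; the buffer size and the message-processing step are placeholders of mine, not from the original):

```ocaml
(* Single event loop: the listening socket sits in the "read" set
   alongside the open connections.  When select reports it readable,
   a connection attempt is pending and accept will not block. *)
let serve listen_sock =
  let connections = ref [] in
  while true do
    let readable, _, _ =
      Unix.select (listen_sock :: !connections) [] [] (-1.0) in
    List.iter
      (fun fd ->
         if fd = listen_sock then begin
           (* "ready for read" on the listening socket: accept the
              pending connection and add it to the read set *)
           let conn, _addr = Unix.accept listen_sock in
           connections := conn :: !connections
         end else begin
           let buf = Bytes.create 4096 in
           let n = Unix.read fd buf 0 (Bytes.length buf) in
           if n = 0 then begin
             (* read returned 0: the peer closed this connection *)
             Unix.close fd;
             connections := List.filter (fun c -> c <> fd) !connections
           end else
             (* ... process the n bytes in buf ... *)
             ignore (Bytes.sub buf 0 n)
         end)
      readable
  done
```

A negative timeout makes `Unix.select` wait indefinitely, so the loop only wakes up when some socket (listening or connected) actually has an event.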

> I think I should open a dedicated thread for each new socket [1].
> However I don't know how to stop a thread waiting inside a blocking
> call. What happens if the remote peer closes the socket on which a
> thread is doing a select (or a read)? Does this thread get an exception?

It gets an "end of file" condition.  That is, "select" says that the
socket is ready for reading, and "read" will return whatever data is
left on this socket, then 0.  
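A tiny illustration of this behavior, using `Unix.socketpair` to stand in for a network connection (the function name is mine, not from the original):

```ocaml
(* After the peer closes, select reports the descriptor as readable;
   read then drains any data still in flight and finally returns 0. *)
let drain_until_eof fd =
  let buf = Bytes.create 4096 in
  let rec loop total =
    let readable, _, _ = Unix.select [fd] [] [] (-1.0) in
    if readable = [] then total
    else
      let n = Unix.read fd buf 0 (Bytes.length buf) in
      if n = 0 then total          (* 0 bytes = end of file: peer closed *)
      else loop (total + n)        (* leftover data on the socket *)
  in
  loop 0

let () =
  let a, b = Unix.socketpair Unix.PF_UNIX Unix.SOCK_STREAM 0 in
  ignore (Unix.write_substring a "bye" 0 3);
  Unix.close a;                    (* the "remote" side closes *)
  assert (drain_until_eof b = 3)   (* 3 leftover bytes, then read gives 0 *)
```

No exception is raised on the reading side; end of file is signalled purely through the `read ... = 0` convention, exactly as for a regular file.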

Generally speaking, there are three possible architectures for
threaded servers:

1- One thread per connection: the main thread calls "accept" and
creates a new thread for each connection.  Easy to implement, but can
lead to far too many threads.
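Architecture 1 fits in a few lines (a sketch: `handle_connection` is a placeholder for the per-client logic, and the `Thread` module requires linking with the threads library):

```ocaml
(* One thread per connection: the main thread blocks in accept and
   hands each freshly accepted socket to a new thread. *)
let one_thread_per_connection listen_sock handle_connection =
  while true do
    let conn, _addr = Unix.accept listen_sock in
    ignore (Thread.create
              (fun fd ->
                 (try handle_connection fd
                  with _ -> ());     (* one failing client must not leak fds *)
                 Unix.close fd)
              conn)
  done
```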

2- User-level multiplexing: one thread calls "select" to handle all
connections (e.g. threads 2 + 3 in your example, modified as described
above).  Very economical in system resources, but requires
event-driven programming.

3- Thread pool: one thread accepts connection requests and queues them
somewhere.  A fixed number of worker threads pick connections from
this queue and process them.  Excellent model when each connection can
be handled quickly (e.g. a Web server), but if connection processing
takes an unbounded amount of time, this can lead to starvation of the
clients.  The number of worker threads can be adjusted dynamically
based on the load (number of pending connections), but still an upper
bound on the number of worker threads is needed.
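Architecture 3 can be sketched with the standard `Queue`, `Mutex`, and `Condition` modules (`handle_connection` and the fixed `nworkers` are placeholders; a real server would also bound the queue and recycle errors):

```ocaml
(* Thread pool: one accepting thread enqueues connections; a fixed
   number of workers dequeue and serve them. *)
let thread_pool listen_sock nworkers handle_connection =
  let q = Queue.create () in
  let m = Mutex.create () in
  let nonempty = Condition.create () in
  (* workers: sleep until a connection is queued, then serve it *)
  for _ = 1 to nworkers do
    ignore (Thread.create
              (fun () ->
                 while true do
                   Mutex.lock m;
                   while Queue.is_empty q do Condition.wait nonempty m done;
                   let fd = Queue.pop q in
                   Mutex.unlock m;
                   (try handle_connection fd with _ -> ());
                   Unix.close fd
                 done)
              ())
  done;
  (* accepting thread: queue each new connection and wake one worker *)
  while true do
    let conn, _addr = Unix.accept listen_sock in
    Mutex.lock m;
    Queue.push conn q;
    Condition.signal nonempty;
    Mutex.unlock m
  done
```

The `Condition.wait` loop re-checks `Queue.is_empty` after each wakeup, which guards against spurious wakeups and against another worker having already taken the connection.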

Hope this helps,

- Xavier Leroy