
Presentation of Part IV

The fourth part introduces the concepts of parallel programming and presents models for shared and distributed memory. It is not necessary to have access to a parallel supercomputer to express concurrent algorithms or to implement distributed applications. In this preamble we define the different terms used in the following chapters.

In sequential programming, one instruction is executed after another: each instruction causally depends on those before it. Sequential programs have the property of being deterministic: for a given input, a program either always terminates or never terminates, and when it terminates it always produces the same result. Determinism means that the same circumstances always lead to the same effects. The only exceptions, which we have already met in Objective CAML, are functions that take external information as input, such as the function Sys.time.
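The distinction can be seen directly in Objective CAML. The following sketch contrasts a pure, deterministic function (the name square is illustrative) with Sys.time, which reads information external to the program:

```ocaml
(* A pure function is deterministic: for the same input it always
   produces the same result. *)
let square x = x * x

(* Sys.time returns the processor time consumed so far, an external
   piece of information: two calls may return different values. It is
   one of the exceptions to determinism mentioned above. *)
let () =
  Printf.printf "square 3 = %d\n" (square 3);
  Printf.printf "Sys.time is non-negative: %b\n" (Sys.time () >= 0.)
```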

In parallel programming, a program is split into several active processes. Each process is sequential, but several instructions, belonging to different processes, are executed in parallel, ``at the same time.'' Sequence gives way to concurrency: the processes are causally independent. The same circumstances may lead to different, mutually exclusive effects (only one of them is produced). An immediate consequence is the loss of determinism: the same program with the same input may or may not terminate, and when it does terminate, different results may be produced.

In order to control the execution of a parallel program, two new notions must be introduced. From the point of view of causality, synchronization ensures that several independent circumstances are established before an effect may take place. Communication carries a temporal constraint: a message cannot be received before it is sent. Communication comes in different variants: directed from one process to another (point-to-point), or as a distribution (one-to-all or all-to-all).

The two models of parallel programming shown in figure 17.13 differ in how execution is controlled: in their forms of synchronization and communication.

Figure 17.13: Models of parallelism

Each Pi is a sequential process. The set of these processes, interacting via shared memory (M) or via a medium, constitutes a parallel application.

Shared Memory Model
Communication is implicit in the shared memory model. A piece of information is sent by writing it into a zone of the shared memory; it is received when another process reads that zone. Synchronization, in contrast, has to be explicit, using constructs for mutual exclusion and waiting conditions.
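As a minimal sketch of this implicit communication (in a single thread of control, so no locking is needed yet; the names shared_zone, writer and reader are illustrative), sending amounts to writing a mutable zone that another party later reads:

```ocaml
(* The shared zone: a mutable reference standing in for a region of
   the shared memory M. *)
let shared_zone = ref 0

(* "Sending" information is writing it into the zone ... *)
let writer v = shared_zone := v

(* ... and "receiving" is reading the zone. With several threads,
   these accesses would need explicit mutual exclusion. *)
let reader () = !shared_zone

let () =
  writer 42;
  Printf.printf "read %d\n" (reader ())
```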

This model is used when shared resources are accessed concurrently; the construction of operating systems is a typical example.

Distributed Memory Model
In this model each sequential process Pi has a private memory Mi to which no other process has access. To transmit information, the processes must communicate through a medium. The difficulties in this model arise from the implementation of the medium; the programs that take care of it are called protocols.

Protocols are organized in layers. The higher-level protocols implement more elaborate services, using the lower-level services.
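This layering can be sketched as follows; the medium, send_raw and send_msg names are illustrative, not taken from a library. The lower layer moves raw text, while the higher layer provides a more elaborate service built on it: each message is framed with a length prefix so that message boundaries survive the transfer.

```ocaml
(* The raw medium: a buffer accumulating transmitted bytes. *)
let medium = Buffer.create 16

(* Lower-level protocol: transmit raw text, with no structure. *)
let send_raw s = Buffer.add_string medium s

(* Higher-level protocol: frame each message with its length,
   using the lower-level service to do the actual transmission. *)
let send_msg m = send_raw (Printf.sprintf "%d:%s" (String.length m) m)

let () =
  send_msg "hello";
  print_endline (Buffer.contents medium)
```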

There exist several types of communication. They depend on the capability of the medium to store information, and on the blocking or non-blocking character of the sender and the receiver. We speak of synchronous communication when the transfer of information cannot take place before a global synchronization between sender and receiver occurs. In this case both sender and receiver may be blocked.

If the medium has storage capabilities, it can store messages for later transmission. Communication can then be asynchronous and non-blocking. It may be necessary to specify the storage capacity of the medium, the order of transmission, the delay, and the reliability of transmissions.
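A storing medium can be sketched with a FIFO queue (the names medium, send and receive are hypothetical). The sender never blocks: its message waits in the medium until the receiver takes it, possibly much later.

```ocaml
(* The medium: a queue that stores messages for later transmission. *)
let medium : string Queue.t = Queue.create ()

(* Non-blocking send: the message is stored, not yet delivered. *)
let send msg = Queue.add msg medium

(* Receive returns None when no message has arrived yet, instead of
   blocking the receiver. *)
let receive () =
  if Queue.is_empty medium then None
  else Some (Queue.take medium)

let () =
  send "hello";
  match receive () with
  | Some m -> Printf.printf "received %s\n" m
  | None -> print_endline "no message yet"
```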

Finally, if the transmission is non-blocking and the medium cannot store messages, the communication is volatile: only the receiving processes that are ready receive the sent message, which is lost for the other processes.

In the distributed memory model, communication is explicit, but synchronization is implicit (it is produced by the communication itself). This model is thus the dual of the shared memory model.

Physical and Logical Parallelism
The distributed memory model applies to both physical and logical parallelism. Physical parallelism refers, for example, to a computer network. Examples of logical parallelism are Unix processes communicating via pipes, or lightweight processes communicating via channels. In either case, there are no global values known to all processes, such as a global clock.

The distributed memory model is closer to physical parallelism, where there is no effectively shared memory. Nevertheless, shared memory can be simulated across a computer network.

The fourth part will show how to construct parallel applications with Objective CAML using the two models just presented. It relies on the Unix library, which interfaces Unix system calls with Objective CAML, and on the Thread library, which implements lightweight processes. A major part of the Unix library is ported to Windows, in particular the functions on file descriptors. These are used to read from and write to files, but also for communication pipes and for network sockets.

Chapter 18 describes the essential concepts of the Unix library. It concentrates on the communication of a process with the outside world and with other processes. The notion of process in this chapter is that of a ``heavyweight process,'' as in Unix. Such processes are created by the fork system call, which duplicates the execution context and the data memory, producing a parent-child chain of processes. Interaction between processes is implemented by signals or by communication pipes.
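As a small preview of that chapter (a sketch for Unix systems, to be compiled with the Unix library), the following program forks a child process that writes a message into a pipe, which the parent then reads:

```ocaml
let () =
  let fd_in, fd_out = Unix.pipe () in
  match Unix.fork () with
  | 0 ->
      (* child: inherits both ends of the pipe; it closes the read
         end, writes its message, and exits *)
      Unix.close fd_in;
      let msg = Bytes.of_string "hello from child" in
      ignore (Unix.write fd_out msg 0 (Bytes.length msg));
      Unix.close fd_out;
      exit 0
  | pid ->
      (* parent: closes the write end, reads the child's message,
         then waits for the child to terminate *)
      Unix.close fd_out;
      let buf = Bytes.create 64 in
      let n = Unix.read fd_in buf 0 64 in
      ignore (Unix.waitpid [] pid);
      print_endline (Bytes.sub_string buf 0 n)
```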

Chapter 19 concentrates on the lightweight processes of the Thread library. In contrast to the heavyweight processes mentioned above, they duplicate nothing but the execution context of an existing process; memory is shared between the creator and the thread. Depending on the programming style, Objective CAML's lightweight processes support the shared memory model (imperative style) or the separated memory model (purely functional style). The Thread library contains several modules for starting and stopping threads, managing locks for mutual exclusion, waiting on conditions, and communicating between threads via channels. In this model there is no gain in execution time, not even on multi-processor machines, but the formulation of parallel algorithms is made easier.
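As a preview (a sketch, to be compiled with the threads library; the names counter, lock and incr_n are illustrative), two threads increment a shared counter under a mutex. Thanks to the explicit mutual exclusion, the final value is deterministic despite the concurrency:

```ocaml
(* Shared memory between the threads: a counter and the lock that
   protects it. *)
let counter = ref 0
let lock = Mutex.create ()

(* Each thread increments the shared counter n times, taking the
   lock around the critical section. *)
let incr_n n () =
  for _ = 1 to n do
    Mutex.lock lock;
    incr counter;
    Mutex.unlock lock
  done

let () =
  let t1 = Thread.create (incr_n 1000) () in
  let t2 = Thread.create (incr_n 1000) () in
  Thread.join t1;
  Thread.join t2;
  Printf.printf "counter = %d\n" !counter
```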

Chapter 20 is devoted to the construction of distributed Internet applications. The Internet is presented from the point of view of its low-level protocols. With the help of communication sockets, several processes running on different machines can communicate with each other. Communication through sockets is asynchronous and point-to-point. The roles of the processes taking part in the communication of a distributed application are in general asymmetrical, as in client-server architectures: the server is a process that accepts requests and tries to respond to them, while the client sends a request to the server and waits for a response. Many services accessible on the Internet follow this architecture.
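As a minimal preview (Unix library; the request text is purely illustrative), a pair of connected sockets already exhibits point-to-point exchange between a "client" and a "server" end. Real clients and servers on different machines would instead use socket, bind, listen, accept and connect, as chapter 20 explains:

```ocaml
let () =
  (* two connected socket endpoints, here within a single process *)
  let client, server = Unix.socketpair Unix.PF_UNIX Unix.SOCK_STREAM 0 in
  let request = Bytes.of_string "GET /" in
  (* the client end sends its request ... *)
  ignore (Unix.write client request 0 (Bytes.length request));
  (* ... and the server end receives it *)
  let buf = Bytes.create 64 in
  let n = Unix.read server buf 0 64 in
  Printf.printf "server got: %s\n" (Bytes.sub_string buf 0 n)
```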

Chapter 21 presents a library and two complete applications. The library makes it possible to define the communication between clients and servers starting from a given protocol. The first application revisits the robots of chapter 17 to give a distributed version. The second constructs an HTTP server managing a request form, taking up again the association management presented in chapter 6.
