This manual documents the release 5.1 of the OCaml system. It is organized as follows.
OCaml runs on several operating systems. The parts of this manual that are specific to one operating system are presented as shown below:
Unix: This is material specific to the Unix family of operating systems, including Linux and macOS.
Windows: This is material specific to Microsoft Windows (Vista, 7, 8, 10, 11).
The OCaml system is copyright © 1996–2023 Institut National de Recherche en Informatique et en Automatique (INRIA). INRIA holds all ownership rights to the OCaml system.
The OCaml system is open source and can be freely redistributed. See the file LICENSE in the distribution for licensing information.
The OCaml documentation and user’s manual is copyright © 2023 Institut National de Recherche en Informatique et en Automatique (INRIA).
The OCaml documentation and user's manual is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
The sample code in the user's manual and in the reference documentation of the standard library is licensed under a Creative Commons CC0 1.0 Universal (CC0 1.0) Public Domain Dedication License.
The complete OCaml distribution can be accessed via the ocaml.org website. This site contains a lot of additional information on OCaml.
Part I
This part of the manual is a tutorial introduction to the OCaml language. A good familiarity with programming in a conventional language (say, C or Java) is assumed, but no prior exposure to functional languages is required. The present chapter introduces the core language. Chapter 2 deals with the module system, chapter 3 with the object-oriented features, chapter 4 with labeled arguments, chapter 5 with polymorphic variants, chapter 6 with the limitations of polymorphism, and chapter 8 gives some advanced examples.
For this overview of OCaml, we use the interactive system, which is started by running ocaml from the Unix shell or Windows command prompt. This tutorial is presented as the transcript of a session with the interactive system: lines starting with # represent user input; the system responses are printed below, without a leading #.
Under the interactive system, the user types OCaml phrases terminated by ;; in response to the # prompt, and the system compiles them on the fly, executes them, and prints the outcome of evaluation. Phrases are either simple expressions, or let definitions of identifiers (either values or functions).
The OCaml system computes both the value and the type for each phrase. Even function parameters need no explicit type declaration: the system infers their types from their usage in the function. Notice also that integers and floating-point numbers are distinct types, with distinct operators: + and * operate on integers, but +. and *. operate on floats.
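For illustration, a short session might look like this (a reconstructed example, not necessarily the original transcript):

# 1 + 2 * 3;;
- : int = 7
# 1.0 *. 6.28;;
- : float = 6.28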
Recursive functions are defined with the let rec binding:
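For instance, a recursive definition of the Fibonacci function might be written as follows (illustrative sketch):

let rec fib n =
  if n < 2 then n else fib (n - 1) + fib (n - 2);;
(* val fib : int -> int *)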
In addition to integers and floating-point numbers, OCaml offers the usual basic data types:
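Booleans, characters and strings are examples of these basic types; a possible transcript:

# 1 < 2 = false;;
- : bool = false
# 'a';;
- : char = 'a'
# "Hello" ^ " " ^ "world";;
- : string = "Hello world"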
Predefined data structures include tuples, arrays, and lists. There are also general mechanisms for defining your own data structures, such as records and variants, which will be covered in more detail later; for now, we concentrate on lists. Lists are either given in extension as a bracketed list of semicolon-separated elements, or built from the empty list [] (pronounce “nil”) by adding elements in front using the :: (“cons”) operator.
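For example (illustrative transcript):

# let l = ["is"; "a"; "tale"];;
val l : string list = ["is"; "a"; "tale"]
# "Life" :: l;;
- : string list = ["Life"; "is"; "a"; "tale"]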
As with all other OCaml data structures, lists do not need to be explicitly allocated and deallocated from memory: all memory management is entirely automatic in OCaml. Similarly, there is no explicit handling of pointers: the OCaml compiler silently introduces pointers where necessary.
As with most OCaml data structures, inspecting and destructuring lists is performed by pattern-matching. List patterns have exactly the same form as list expressions, with identifiers representing unspecified parts of the list. As an example, here is insertion sort on a list:
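One possible definition, in the spirit of the classic example (names sort and insert are also used in the discussion below):

let rec sort lst =
  match lst with
  | [] -> []
  | head :: tail -> insert head (sort tail)
and insert elt lst =
  match lst with
  | [] -> [elt]
  | head :: tail ->
      if elt <= head then elt :: lst else head :: insert elt tail;;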
The type inferred for sort, 'a list -> 'a list, means that sort can actually apply to lists of any type, and returns a list of the same type. The type 'a is a type variable, and stands for any given type. The reason why sort can apply to lists of any type is that the comparisons (=, <=, etc.) are polymorphic in OCaml: they operate between any two values of the same type. This makes sort itself polymorphic over all list types.
The sort function above does not modify its input list: it builds and returns a new list containing the same elements as the input list, in ascending order. There is actually no way in OCaml to modify a list in-place once it is built: we say that lists are immutable data structures. Most OCaml data structures are immutable, but a few (most notably arrays) are mutable, meaning that they can be modified in-place at any time.
The OCaml notation for the type of a function with multiple arguments is
arg1_type -> arg2_type -> ... -> return_type. For example,
the type inferred for insert, 'a -> 'a list -> 'a list, means that insert
takes two arguments, an element of any type 'a and a list with elements of
the same type 'a and returns a list of the same type.
OCaml is a functional language: functions in the full mathematical sense are supported and can be passed around freely just as any other piece of data. For instance, here is a deriv function that takes any float function as argument and returns an approximation of its derivative function:
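A definition along these lines works (illustrative sketch):

let deriv f dx = fun x -> (f (x +. dx) -. f x) /. dx;;
(* val deriv : (float -> float) -> float -> float -> float *)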
Even function composition is definable:
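For instance (illustrative sketch):

let compose f g = fun x -> f (g x);;
(* val compose : ('a -> 'b) -> ('c -> 'a) -> 'c -> 'b *)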
Functions that take other functions as arguments are called “functionals”, or “higher-order functions”. Functionals are especially useful to provide iterators or similar generic operations over a data structure. For instance, the standard OCaml library provides a List.map functional that applies a given function to each element of a list, and returns the list of the results:
This functional, along with a number of other list and array functionals, is predefined because it is often useful, but there is nothing magic with it: it can easily be defined as follows.
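A possible definition (sketch):

let rec map f l =
  match l with
  | [] -> []
  | hd :: tl -> f hd :: map f tl;;
(* val map : ('a -> 'b) -> 'a list -> 'b list *)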
User-defined data structures include records and variants. Both are defined with the type declaration. Here, we declare a record type to represent rational numbers.
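One way to write this declaration, together with an addition function over rationals (illustrative sketch; the field names num and denom are reused in the examples below):

type ratio = { num : int; denom : int };;

let add_ratio r1 r2 =
  { num = r1.num * r2.denom + r2.num * r1.denom;
    denom = r1.denom * r2.denom };;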
Record fields can also be accessed through pattern-matching:
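For instance, assuming the ratio type above, the integer part of a rational can be obtained by matching on the record (illustrative sketch):

let integer_part r =
  match r with
  | { num = num; denom = denom } -> num / denom;;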
Since there is only one case in this pattern matching, it is safe to expand directly the argument r in a record pattern:
Unneeded fields can be omitted:
Optionally, missing fields can be made explicit by ending the list of fields with a trailing wildcard _:
When both sides of the = sign are the same, it is possible to avoid repeating the field name by eliding the =field part:
This short notation for fields also works when constructing records:
Finally, it is possible to update a few fields of a record at once:
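For example (illustrative sketch, reusing the ratio type; the function name is hypothetical):

let with_unit_denom r = { r with denom = 1 };;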
With this functional update notation, the record on the left-hand side of with is copied except for the fields on the right-hand side which are updated.
The declaration of a variant type lists all possible forms for values of that type. Each case is identified by a name, called a constructor, which serves both for constructing values of the variant type and inspecting them by pattern-matching. Constructor names are capitalized to distinguish them from variable names (which must start with a lowercase letter). For instance, here is a variant type for doing mixed arithmetic (integers and floats):
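One possible declaration (sketch; the constructor names Int, Float and Error are reused below):

type number = Int of int | Float of float | Error;;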
This declaration expresses that a value of type number is either an integer, a floating-point number, or the constant Error representing the result of an invalid operation (e.g. a division by zero).
Enumerated types are a special case of variant types, where all alternatives are constants:
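For instance (illustrative sketch):

type sign = Positive | Negative;;

let sign_int n = if n >= 0 then Positive else Negative;;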
To define arithmetic operations for the number type, we use pattern-matching on the two numbers involved:
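A definition along these lines works (sketch; the overflow check of a production implementation is omitted):

let add_num n1 n2 =
  match (n1, n2) with
  | (Int i1, Int i2) -> Int (i1 + i2)
  | (Int i1, Float f2) -> Float (float i1 +. f2)
  | (Float f1, Int i2) -> Float (f1 +. float i2)
  | (Float f1, Float f2) -> Float (f1 +. f2)
  | (Error, _) | (_, Error) -> Error;;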
Another interesting example of variant type is the built-in 'a option type which represents either a value of type 'a or an absence of value:
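Its definition in the standard library is equivalent to:

type 'a option = None | Some of 'a;;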
This type is particularly useful when defining functions that can fail in common situations, for instance
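a division function that returns None instead of failing on a zero divisor (illustrative sketch; the name safe_divide is hypothetical):

let safe_divide x y =
  if y = 0 then None else Some (x / y);;
(* val safe_divide : int -> int -> int option *)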
The most common usage of variant types is to describe recursive data structures. Consider for example the type of binary trees:
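One possible declaration (sketch; the names Empty and Node are reused below):

type 'a btree = Empty | Node of 'a * 'a btree * 'a btree;;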
This definition reads as follows: a binary tree containing values of type 'a (an arbitrary type) is either empty, or is a node containing one value of type 'a and two subtrees also containing values of type 'a, that is, two 'a btree.
Operations on binary trees are naturally expressed as recursive functions following the same structure as the type definition itself. For instance, here are functions performing lookup and insertion in ordered binary trees (elements increase from left to right):
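Possible definitions, assuming the 'a btree type above (illustrative sketch):

let rec member x btree =
  match btree with
  | Empty -> false
  | Node (y, left, right) ->
      if x = y then true
      else if x < y then member x left
      else member x right;;

let rec insert x btree =
  match btree with
  | Empty -> Node (x, Empty, Empty)
  | Node (y, left, right) ->
      if x <= y then Node (y, insert x left, right)
      else Node (y, left, insert x right);;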
(This subsection can be skipped on the first reading.)
Astute readers may have wondered what happens when two or more record fields or constructors share the same name.
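For concreteness, suppose declarations along these lines are in scope (a reconstructed sketch; the names are chosen to match the discussion below):

type first_record  = { x : int; y : int; z : int }
type middle_record = { x : int; z : int }
type last_record   = { x : int };;

type first_variant = A | B | C
type last_variant  = A;;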
The answer is that when confronted with multiple options, OCaml tries to use locally available information to disambiguate between the various fields and constructors. First, if the type of the record or variant is known, OCaml can pick unambiguously the corresponding field or constructor. For instance:
In the first example, (r:first_record) is an explicit annotation telling OCaml that the type of r is first_record. With this annotation, OCaml knows that r.x refers to the x field of the first record type. Similarly, the type annotation in the second example makes it clear to OCaml that the constructors A, B and C come from the first variant type. By contrast, in the last example, OCaml has inferred by itself that the type of r can only be first_record, and there is no need for explicit type annotations.
Those explicit type annotations can in fact be used anywhere. Most of the time they are unnecessary, but they are useful to guide disambiguation, to debug unexpected type errors, or combined with some of the more advanced features of OCaml described in later chapters.
Secondly, for records, OCaml can also deduce the right record type by looking at the whole set of fields used in an expression or pattern:
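For instance (illustrative sketch; the function name matches the one discussed below):

let project_and_rotate { x; y; _ } =
  { x = - y; y = x; z = 0 };;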
Since the fields x and y can only appear simultaneously in the first record type, OCaml infers that the type of project_and_rotate is first_record -> first_record.
As a last resort, if there is not enough information to disambiguate between different fields or constructors, OCaml picks the last defined type amongst all locally valid choices:
Here, OCaml has inferred that the possible choices for the type of {x;z} are first_record and middle_record, since the type last_record has no field z. OCaml then picks the type middle_record as the last defined type between the two possibilities.
Beware that this last-resort disambiguation is local: once OCaml has chosen a disambiguation, it sticks to this choice, even if this later leads to a type error:
Moreover, being the last defined type is a quite unstable position that may change surreptitiously after adding or moving around a type definition, or after opening a module (see chapter 2). Consequently, adding explicit type annotations to guide disambiguation is more robust than relying on the last defined type disambiguation.
Though all examples so far were written in purely applicative style, OCaml is also equipped with full imperative features. This includes the usual while and for loops, as well as mutable data structures such as arrays. Arrays are either created by listing semicolon-separated element values between [| and |] brackets, or allocated and initialized with the Array.make function, then filled up later by assignments. For instance, the function below sums two vectors (represented as float arrays) componentwise.
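A definition along these lines works (illustrative sketch):

let add_vect v1 v2 =
  let len = min (Array.length v1) (Array.length v2) in
  let res = Array.make len 0.0 in
  for i = 0 to len - 1 do
    res.(i) <- v1.(i) +. v2.(i)
  done;
  res;;
(* val add_vect : float array -> float array -> float array *)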
Record fields can also be modified by assignment, provided they are declared mutable in the definition of the record type:
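For instance (illustrative sketch):

type mutable_point = { mutable x : float; mutable y : float };;

let translate p dx dy =
  p.x <- p.x +. dx;
  p.y <- p.y +. dy;;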
OCaml has no built-in notion of variable – identifiers whose current value can be changed by assignment. (The let binding is not an assignment, it introduces a new identifier with a new scope.) However, the standard library provides references, which are mutable indirection cells, with operators ! to fetch the current contents of the reference and := to assign the contents. Variables can then be emulated by let-binding a reference. For instance, here is an in-place insertion sort over arrays:
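One possible implementation (sketch):

let insertion_sort a =
  for i = 1 to Array.length a - 1 do
    let val_i = a.(i) in
    let j = ref i in
    while !j > 0 && val_i < a.(!j - 1) do
      a.(!j) <- a.(!j - 1);
      j := !j - 1
    done;
    a.(!j) <- val_i
  done;;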
References are also useful to write functions that maintain a current state between two calls to the function. For instance, the following pseudo-random number generator keeps the last returned number in a reference:
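A sketch of such a generator (the particular constants are illustrative, not a recommended linear congruential scheme):

let current_rand = ref 0;;

let random () =
  current_rand := !current_rand * 25713 + 1345;
  !current_rand;;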
Again, there is nothing magical with references: they are implemented as a single-field mutable record, as follows.
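An equivalent definition looks like this (sketch; it shadows the predefined ref type and operators):

type 'a ref = { mutable contents : 'a };;

let ( ! ) r = r.contents;;
let ( := ) r newval = r.contents <- newval;;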
In some special cases, you may need to store a polymorphic function in a data structure, keeping its polymorphism. Doing this requires user-provided type annotations, since polymorphism is only introduced automatically for global definitions. However, you can explicitly give polymorphic types to record fields.
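For instance (illustrative sketch):

type idref = { mutable id : 'a. 'a -> 'a };;

let r = { id = fun x -> x };;
let g s = (s.id 1, s.id true);;
(* val g : idref -> int * bool *)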
OCaml provides exceptions for signalling and handling exceptional conditions. Exceptions can also be used as a general-purpose non-local control structure, although this should not be overused since it can make the code harder to understand. Exceptions are declared with the exception construct, and signalled with the raise operator. For instance, the function below for taking the head of a list uses an exception to signal the case where an empty list is given.
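One possible definition (sketch):

exception Empty_list;;

let head l =
  match l with
  | [] -> raise Empty_list
  | hd :: _ -> hd;;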
Exceptions are used throughout the standard library to signal cases where the library functions cannot complete normally. For instance, the List.assoc function, which returns the data associated with a given key in a list of (key, data) pairs, raises the predefined exception Not_found when the key does not appear in the list:
Exceptions can be trapped with the try…with construct:
The with part does pattern matching on the exception value with the same syntax and behavior as match. Thus, several exceptions can be caught by one try…with construct:
Also, finalization can be performed by trapping all exceptions, performing the finalization, then re-raising the exception:
An alternative to try…with is to catch the exception while pattern matching:
Note that this construction is only useful if the exception is raised during the evaluation of the expression between match and with. Exception patterns can be combined with ordinary patterns at the toplevel,
but they cannot be nested inside other patterns. For instance, the pattern Some (exception A) is invalid.
When exceptions are used as a control structure, it can be useful to make them as local as possible by using a locally defined exception. For instance, with
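a fixpoint computation along these lines (illustrative sketch),

let fixpoint f x =
  let exception Done in
  let x = ref x in
  try
    while true do
      let y = f !x in
      if !x = y then raise Done else x := y
    done;
    assert false
  with Done -> !x;;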
the function f cannot raise a Done exception, which removes an entire class of misbehaving functions.
OCaml allows us to defer some computation until later when we need the result of that computation.
We use lazy (expr) to delay the evaluation of some expression expr. For example, we can defer the computation of 1+1 until we need the result of that expression, 2. Let us see how we initialize a lazy expression.
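A possible definition (sketch; the name lazy_two is reused below):

let lazy_two = lazy (print_endline "lazy_two evaluation"; 1 + 1);;
(* val lazy_two : int lazy_t = <lazy> *)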
We added print_endline "lazy_two evaluation" to see when the lazy expression is being evaluated.
The value of lazy_two is displayed as <lazy>, which means the expression has not been evaluated yet, and its final value is unknown.
Note that lazy_two has type int lazy_t. However, the type 'a lazy_t is an internal type name, so the type 'a Lazy.t should be preferred when possible.
When we finally need the result of a lazy expression, we can call Lazy.force on that expression to force its evaluation. The function force comes from standard-library module Lazy.
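For instance, forcing lazy_two as defined above gives:

# Lazy.force lazy_two;;
lazy_two evaluation
- : int = 2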
Notice that our function call above prints “lazy_two evaluation” and then returns the plain value of the computation.
Now if we look at the value of lazy_two, we see that it is not displayed as <lazy> anymore but as lazy 2.
This is because Lazy.force memoizes the result of the forced expression. In other words, every subsequent call of Lazy.force on that expression returns the result of the first computation without recomputing the lazy expression. Let us force lazy_two once again.
The expression is not evaluated this time; notice that “lazy_two evaluation” is not printed. The result of the initial computation is simply returned.
Lazy patterns provide another way to force a lazy expression.
We can also use lazy patterns in pattern matching.
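An illustrative sketch (the argument names match the description below):

let maybe_eval lazy_guard lazy_expr =
  match lazy_guard, lazy_expr with
  | lazy false, _ ->
      "lazy_guard forces to false; lazy_expr is not forced"
  | lazy true, lazy _ ->
      "lazy_guard forces to true, so lazy_expr is forced too";;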
The lazy expression lazy_expr is forced only if the lazy_guard value yields true once computed. Indeed, a simple wildcard pattern (not lazy) never forces the lazy expression’s evaluation. However, a pattern with the keyword lazy, even if it is a wildcard, always forces the evaluation of the deferred computation.
We finish this introduction with a more complete example representative of the use of OCaml for symbolic processing: formal manipulations of arithmetic expressions containing variables. The following variant type describes the expressions we shall manipulate:
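One possible declaration (sketch; the constructor names are reused in the following functions):

type expression =
  | Const of float
  | Var of string
  | Sum of expression * expression      (* e1 + e2 *)
  | Diff of expression * expression     (* e1 - e2 *)
  | Prod of expression * expression     (* e1 * e2 *)
  | Quot of expression * expression     (* e1 / e2 *)
;;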
We first define a function to evaluate an expression given an environment that maps variable names to their values. For simplicity, the environment is represented as an association list.
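A definition along these lines works (illustrative sketch):

exception Unbound_variable of string;;

let rec eval env exp =
  match exp with
  | Const c -> c
  | Var v ->
      (try List.assoc v env with Not_found -> raise (Unbound_variable v))
  | Sum (f, g) -> eval env f +. eval env g
  | Diff (f, g) -> eval env f -. eval env g
  | Prod (f, g) -> eval env f *. eval env g
  | Quot (f, g) -> eval env f /. eval env g;;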
Now for a real symbolic processing, we define the derivative of an expression with respect to a variable dv:
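One possible implementation (sketch):

let rec deriv exp dv =
  match exp with
  | Const _ -> Const 0.0
  | Var v -> if v = dv then Const 1.0 else Const 0.0
  | Sum (f, g) -> Sum (deriv f dv, deriv g dv)
  | Diff (f, g) -> Diff (deriv f dv, deriv g dv)
  | Prod (f, g) -> Sum (Prod (f, deriv g dv), Prod (deriv f dv, g))
  | Quot (f, g) ->
      Quot (Diff (Prod (deriv f dv, g), Prod (f, deriv g dv)),
            Prod (g, g));;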
As shown in the examples above, the internal representation (also called abstract syntax) of expressions quickly becomes hard to read and write as the expressions get larger. We need a printer and a parser to go back and forth between the abstract syntax and the concrete syntax, which in the case of expressions is the familiar algebraic notation (e.g. 2*x+1).
For the printing function, we take into account the usual precedence rules (i.e. * binds tighter than +) to avoid printing unnecessary parentheses. To this end, we maintain the current operator precedence and print parentheses around an operator only if its precedence is less than the current precedence.
There is a printf function in the Printf module (see chapter 2) that allows you to make formatted output more concisely. It follows the behavior of the printf function from the C standard library. The printf function takes a format string that describes the desired output as a text interspersed with specifiers (for instance %d, %f). The specifiers are then substituted by the following arguments, in the order in which they appear in the format string:
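For example (illustrative transcript):

# Printf.printf "%d is an integer value, %F is a float, %S is a string\n"
    3 4.5 "five";;
3 is an integer value, 4.5 is a float, "five" is a string
- : unit = ()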
The OCaml type system checks that the type of the arguments and the specifiers are compatible. If you pass it an argument of a type that does not correspond to the format specifier, the compiler will display an error message:
The fprintf function is like printf except that it takes an output channel as the first argument. The %a specifier can be useful to define custom printers (for custom types). For instance, we can create a printing template that converts an integer argument to signed decimal:
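One way to write such a printer and use it with %a (illustrative sketch; pp_int is a hypothetical name):

let pp_int ppf n = Printf.fprintf ppf "%d" n;;
(* val pp_int : out_channel -> int -> unit *)

# Printf.printf "Integer: %a\n" pp_int 42;;
Integer: 42
- : unit = ()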
The advantage of those printers based on the %a specifier is that they can be composed together to create more complex printers step by step. We can define a combinator that turns a printer for a type 'a into a printer for 'a option:
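One possible definition (sketch):

let pp_option printer ppf = function
  | None -> Printf.fprintf ppf "None"
  | Some v -> Printf.fprintf ppf "Some(%a)" printer v;;
(* val pp_option :
     (out_channel -> 'a -> unit) -> out_channel -> 'a option -> unit *)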
If the value of its argument is None, the printer returned by pp_option printer prints None; otherwise it uses the provided printer to print the contents of the Some constructor.
Here is how to rewrite the pretty-printer using fprintf:
Due to the way that format strings are built, storing a format string requires an explicit type annotation:
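For instance (illustrative sketch; the name int_msg is hypothetical):

let int_msg : _ format = "%i is an integer\n";;

let () = Printf.printf int_msg 3;;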
All examples given so far were executed under the interactive system. OCaml code can also be compiled separately and executed non-interactively using the batch compilers ocamlc and ocamlopt. The source code must be put in a file with extension .ml. It consists of a sequence of phrases, which will be evaluated at runtime in their order of appearance in the source file. Unlike in interactive mode, types and values are not printed automatically; the program must call printing functions explicitly to produce some output. The ;; used in the interactive examples is not required in source files created for use with OCaml compilers, but can be helpful to mark the end of a top-level expression unambiguously even when there are syntax errors. Here is a sample standalone program to print the greatest common divisor (gcd) of two numbers:
(* File gcd.ml *)
let rec gcd a b =
  if b = 0 then a
  else gcd b (a mod b);;

let main () =
  let a = int_of_string Sys.argv.(1) in
  let b = int_of_string Sys.argv.(2) in
  Printf.printf "%d\n" (gcd a b);
  exit 0;;
main ();;
Sys.argv is an array of strings containing the command-line parameters. Sys.argv.(1) is thus the first command-line parameter. The program above is compiled and executed with the following shell commands:
$ ocamlc -o gcd gcd.ml
$ ./gcd 6 9
3
$ ./gcd 7 11
1
More complex standalone OCaml programs are typically composed of multiple source files, and can link with precompiled libraries. Chapters 13 and 16 explain how to use the batch compilers ocamlc and ocamlopt. Recompilation of multi-file OCaml projects can be automated using third-party build systems, such as dune.
This chapter introduces the module system of OCaml.
A primary motivation for modules is to package together related definitions (such as the definitions of a data type and associated operations over that type) and enforce a consistent naming scheme for these definitions. This avoids running out of names or accidentally confusing names. Such a package is called a structure and is introduced by the struct…end construct, which contains an arbitrary sequence of definitions. The structure is usually given a name with the module binding. For instance, here is a structure packaging together a type of FIFO queues and their operations:
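One possible implementation, in the spirit of the classic example (a sketch; it maintains the invariant that an empty front list implies an empty rear list, which matters for the signature discussion below):

module Fifo =
  struct
    type 'a queue = { front : 'a list; rear : 'a list }
    (* smart constructor enforcing the invariant *)
    let make front rear =
      match front with
      | [] -> { front = List.rev rear; rear = [] }
      | _  -> { front; rear }
    let empty = { front = []; rear = [] }
    let add x q = make q.front (x :: q.rear)
    exception Empty
    let top = function
      | { front = []; _ } -> raise Empty
      | { front = x :: _; _ } -> x
    let pop = function
      | { front = []; _ } -> raise Empty
      | { front = _ :: f; rear } -> make f rear
  end;;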
Outside the structure, its components can be referred to using the “dot notation”, that is, identifiers qualified by a structure name. For instance, Fifo.add is the function add defined inside the structure Fifo and Fifo.queue is the type queue defined in Fifo.
Another possibility is to open the module, which brings all identifiers defined inside the module into the scope of the current structure.
Opening a module enables lighter access to its components, at the cost of making it harder to identify in which module an identifier has been defined. In particular, opened modules can shadow identifiers present in the current scope, potentially leading to confusing errors:
A partial solution to this conundrum is to open modules locally, making the components of the module available only in the concerned expression. This can also make the code both easier to read (since the open statement is closer to where it is used) and easier to refactor (since the code fragment is more self-contained). Two constructions are available for this purpose:
and
In the second form, when the body of a local open is itself delimited by parentheses, braces, or brackets, the parentheses of the local open can be omitted. For instance,
This second form also works for patterns:
It is also possible to copy the components of a module inside another module by using an include statement. This can be particularly useful to extend existing modules. As an illustration, we could add functions that return an optional value rather than an exception when the queue is empty.
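A possible extension of the Fifo module above (illustrative sketch; the names top_opt and pop_opt are reused below):

module FifoOpt = struct
  include Fifo
  let top_opt q = try Some (top q) with Empty -> None
  let pop_opt q = try Some (pop q) with Empty -> None
end;;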
Signatures are interfaces for structures. A signature specifies which components of a structure are accessible from the outside, and with which type. It can be used to hide some components of a structure (e.g. local function definitions) or export some components with a restricted type. For instance, the signature below specifies the queue operations empty, add, top and pop, but not the auxiliary function make. Similarly, it makes the queue type abstract (by not providing its actual representation as a concrete type). This ensures that users of the Fifo module cannot violate data structure invariants that operations rely on, such as “if the front list is empty, the rear list must also be empty”.
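Such a signature might be written as follows (sketch):

module type FIFO = sig
  type 'a queue
  exception Empty
  val empty : 'a queue
  val add : 'a -> 'a queue -> 'a queue
  val top : 'a queue -> 'a
  val pop : 'a queue -> 'a queue
end;;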
Restricting the Fifo structure to this signature results in another view of the Fifo structure where the make function is not accessible and the actual representation of queues is hidden:
The restriction can also be performed during the definition of the structure, as in
module Fifo = (struct ... end : FIFO);;
An alternate syntax is provided for the above:
module Fifo : FIFO = struct ... end;;
Like for modules, it is possible to include a signature to copy its components inside the current signature. For instance, we can extend the FIFO signature with the top_opt and pop_opt functions:
Functors are “functions” from modules to modules. Functors let you create parameterized modules and then provide other modules as parameter(s) to get a specific implementation. For instance, a Set module implementing sets as sorted lists could be parameterized to work with any module that provides an element type and a comparison function compare (such as OrderedString):
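A possible version of this functor, together with an argument signature and a string instance (illustrative sketch; the names ORDERED_TYPE, Elt and OrderedString are reused in the discussion below):

type comparison = Less | Equal | Greater;;

module type ORDERED_TYPE = sig
  type t
  val compare : t -> t -> comparison
end;;

module Set =
  functor (Elt : ORDERED_TYPE) ->
  struct
    type element = Elt.t
    type set = element list
    let empty = []
    let rec add x s =
      match s with
      | [] -> [ x ]
      | hd :: tl ->
          (match Elt.compare x hd with
           | Equal -> s                 (* x is already in s *)
           | Less -> x :: s             (* x is smaller than all elements of s *)
           | Greater -> hd :: add x tl)
    let rec member x s =
      match s with
      | [] -> false
      | hd :: tl ->
          (match Elt.compare x hd with
           | Equal -> true
           | Less -> false
           | Greater -> member x tl)
  end;;

module OrderedString = struct
  type t = string
  let compare x y = if x = y then Equal else if x < y then Less else Greater
end;;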
By applying the Set functor to a structure implementing an ordered type, we obtain set operations for this type:
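For instance, with the definitions sketched above:

module StringSet = Set(OrderedString);;

# StringSet.member "bar" (StringSet.add "foo" StringSet.empty);;
- : bool = false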
As in the Fifo example, it would be good style to hide the actual implementation of the type set, so that users of the structure will not rely on sets being lists, and we can switch later to another, more efficient representation of sets without breaking their code. This can be achieved by restricting Set by a suitable functor signature:
In an attempt to write the type constraint above more elegantly, one may wish to name the signature of the structure returned by the functor, then use that signature in the constraint:
The problem here is that SET specifies the type element abstractly, so that the type equality between element in the result of the functor and t in its argument is forgotten. Consequently, WrongStringSet.element is not the same type as string, and the operations of WrongStringSet cannot be applied to strings. As demonstrated above, it is important that the type element in the signature SET be declared equal to Elt.t; unfortunately, this is impossible above since SET is defined in a context where Elt does not exist. To overcome this difficulty, OCaml provides a with type construct over signatures that allows enriching a signature with extra type equalities:
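For instance, assuming a SET signature along the lines sketched below, the functor result can be constrained while keeping element equal to Elt.t (illustrative sketch):

module type SET = sig
  type element
  type set
  val empty : set
  val add : element -> set -> set
  val member : element -> set -> bool
end;;

module AbstractSet (Elt : ORDERED_TYPE) : (SET with type element = Elt.t) =
  Set (Elt);;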
As in the case of simple structures, an alternate syntax is provided for defining functors and restricting their result:
module AbstractSet2(Elt: ORDERED_TYPE) : (SET with type element = Elt.t) = struct ... end;;
Abstracting a type component in a functor result is a powerful technique that provides a high degree of type safety, as we now illustrate. Consider an ordering over character strings that is different from the standard ordering implemented in the OrderedString structure. For instance, we compare strings without distinguishing upper and lower case.
Note that the two types AbstractStringSet.set and NoCaseStringSet.set are not compatible, and values of these two types do not match. This is the correct behavior: even though both set types contain elements of the same type (strings), they are built upon different orderings of that type, and different invariants need to be maintained by the operations (the elements must be kept strictly increasing with respect to the standard ordering in one case, and to the case-insensitive ordering in the other). Applying operations from AbstractStringSet to values of type NoCaseStringSet.set could give incorrect results, or build lists that violate the invariants of NoCaseStringSet.
All examples of modules so far have been given in the context of the interactive system. However, modules are most useful for large, batch-compiled programs. For these programs, it is a practical necessity to split the source into several files, called compilation units, that can be compiled separately, thus minimizing recompilation after changes.
In OCaml, compilation units are special cases of structures and signatures, and the relationship between the units can be explained easily in terms of the module system. A compilation unit A comprises two files:
These two files together define a structure named A as if the following definition was entered at top-level:
module A: sig (* contents of file A.mli *) end = struct (* contents of file A.ml *) end;;
The files that define the compilation units can be compiled separately using the ocamlc -c command (the -c option means “compile only, do not try to link”); this produces compiled interface files (with extension .cmi) and compiled object code files (with extension .cmo). When all units have been compiled, their .cmo files are linked together using the ocamlc command. For instance, the following commands compile and link a program composed of two compilation units Aux and Main:
$ ocamlc -c Aux.mli                       # produces Aux.cmi
$ ocamlc -c Aux.ml                        # produces Aux.cmo
$ ocamlc -c Main.mli                      # produces Main.cmi
$ ocamlc -c Main.ml                       # produces Main.cmo
$ ocamlc -o theprogram Aux.cmo Main.cmo
The program behaves exactly as if the following phrases were entered at top-level:
module Aux: sig (* contents of Aux.mli *) end
          = struct (* contents of Aux.ml *) end;;
module Main: sig (* contents of Main.mli *) end
           = struct (* contents of Main.ml *) end;;
In particular, Main can refer to Aux: the definitions and declarations contained in Main.ml and Main.mli can refer to definitions in Aux.ml, using the Aux.ident notation, provided these definitions are exported in Aux.mli.
The order in which the .cmo files are given to ocamlc during the linking phase determines the order in which the module definitions occur. Hence, in the example above, Aux appears first and Main can refer to it, but Aux cannot refer to Main.
Note that only top-level structures can be mapped to separately-compiled files; functors and module types cannot. However, all module-class objects can appear as components of a structure, so the solution is to put the functor or module type inside a structure, which can then be mapped to a file.
(Chapter written by Jérôme Vouillon, Didier Rémy and Jacques Garrigue)
This chapter gives an overview of the object-oriented features of OCaml.
Note that the relationship between object, class and type in OCaml is different than in mainstream object-oriented languages such as Java and C++, so you shouldn’t assume that similar keywords mean the same thing. Object-oriented features are used much less frequently in OCaml than in those languages. OCaml has alternatives that are often more appropriate, such as modules and functors. Indeed, many OCaml programs do not use objects at all.
The class point below defines one instance variable x and two methods get_x and move. The initial value of the instance variable is 0. The variable x is declared mutable, so the method move can change its value.
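Such a class might be written as follows (illustrative sketch):

class point =
  object
    val mutable x = 0
    method get_x = x
    method move d = x <- x + d
  end;;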
We now create a new point p, instance of the point class.
Note that the type of p is point. This is an abbreviation automatically defined by the class definition above. It stands for the object type <get_x : int; move : int -> unit>, listing the methods of class point along with their types.
We now invoke some methods of p:
The evaluation of the body of a class only takes place at object creation time. Therefore, in the following example, the instance variable x is initialized to different values for two different objects.
The class point can also be abstracted over the initial values of the x coordinate.
Like in function definitions, the definition above can be abbreviated as:
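That is, a parameterized version of point might look like this (illustrative sketch; this version of point is assumed in the inheritance examples below):

class point x_init =
  object
    val mutable x = x_init
    method get_x = x
    method move d = x <- x + d
  end;;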
An instance of the class point is now a function that expects an initial parameter to create a point object:
The parameter x_init is, of course, visible in the whole body of the definition, including methods. For instance, the method get_offset in the class below returns the position of the object relative to its initial position.
Expressions can be evaluated and bound before defining the object body of the class. This is useful to enforce invariants. For instance, points can be automatically adjusted to the nearest point on a grid, as follows:
(One could also raise an exception if the x_init coordinate is not on the grid.) In fact, the same effect could be obtained here by calling the definition of class point with the value of the origin.
An alternate solution would have been to define the adjustment in a special allocation function:
However, the former pattern is generally more appropriate, since the code for adjustment is part of the definition of the class and will be inherited.
This ability provides class constructors as can be found in other languages. Several constructors can be defined this way to build objects of the same class but with different initialization patterns; an alternative is to use initializers, as described below in section 3.4.
There is another, more direct way to create an object: create it without going through a class.
The syntax is exactly the same as for class expressions, but the result is a single object rather than a class. All the constructs described in the rest of this section also apply to immediate objects.
Unlike classes, which cannot be defined inside an expression, immediate objects can appear anywhere, using variables from their environment.
Immediate objects have two weaknesses compared to classes: their types are not abbreviated, and you cannot inherit from them. But these two weaknesses can be advantages in some situations, as we will see in sections 3.3 and 3.10.
A method or an initializer can invoke methods on self (that is, the current object). For that, self must be explicitly bound, here to the variable s (s could be any identifier, even though we will often choose the name self.)
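For instance, a printable point might be written as follows (illustrative sketch):

class printable_point x_init =
  object (s)
    val mutable x = x_init
    method get_x = x
    method move d = x <- x + d
    method print = print_int s#get_x
  end;;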
Dynamically, the variable s is bound at the invocation of a method. In particular, when the class printable_point is inherited, the variable s will be correctly bound to the object of the subclass.
A common problem with self is that, as its type may be extended in subclasses, you cannot fix it in advance. Here is a simple example.
You can ignore the first two lines of the error message. What matters is the last one: putting self into an external reference would make it impossible to extend it through inheritance. We will see in section 3.12 a workaround to this problem. Note however that, since immediate objects are not extensible, the problem does not occur with them.
Let-bindings within class definitions are evaluated before the object is constructed. It is also possible to evaluate an expression immediately after the object has been built. Such code is written as an anonymous hidden method called an initializer. Therefore, it can access self and the instance variables.
Initializers cannot be overridden. On the contrary, all initializers are evaluated sequentially. Initializers are particularly useful to enforce invariants. Another example can be seen in section 8.1.
It is possible to declare a method without actually defining it, using the keyword virtual. This method will be provided later in subclasses. A class containing virtual methods must be flagged virtual, and cannot be instantiated (that is, no object of this class can be created). It still defines type abbreviations (treating virtual methods as other methods.)
Instance variables can also be declared as virtual, with the same effect as with methods.
Private methods are methods that do not appear in object interfaces. They can only be invoked from other methods of the same object.
Note that this is not the same thing as private and protected methods in Java or C++, which can be called from other objects of the same class. This is a direct consequence of the independence between types and classes in OCaml: two unrelated classes may produce objects of the same type, and there is no way at the type level to ensure that an object comes from a specific class. However a possible encoding of friend methods is given in section 3.17.
Private methods are inherited (they are by default visible in subclasses), unless they are hidden by signature matching, as described below.
Private methods can be made public in a subclass.
The annotation virtual here is only used to mention a method without providing its definition. Since we didn’t add the private annotation, this makes the method public, keeping the original definition.
An alternative definition is
The constraint on self’s type requires a public move method, and this is sufficient to override private.
One could think that a private method should remain private in a subclass. However, since the method is visible in a subclass, it is always possible to pick its code and define a method of the same name that runs that code, so yet another (heavier) solution would be:
Of course, private methods can also be virtual. Then, the keywords must appear in this order: method private virtual.
Class interfaces are inferred from class definitions. They may also be defined directly and used to restrict the type of a class. Like class declarations, they also define a new type abbreviation.
In addition to program documentation, class interfaces can be used to constrain the type of a class. Both concrete instance variables and concrete private methods can be hidden by a class type constraint. Public methods and virtual members, however, cannot.
Or, equivalently:
The interface of a class can also be specified in a module signature, and used to restrict the inferred signature of a module.
We illustrate inheritance by defining a class of colored points that inherits from the class of points. This class has all instance variables and all methods of class point, plus a new instance variable c and a new method color.
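One possible definition, assuming the parameterized point class sketched earlier (illustrative):

class colored_point x (c : string) =
  object
    inherit point x
    val c = c
    method color = c
  end;;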
A point and a colored point have incompatible types, since a point has no method color. However, the function get_x below is a generic function applying method get_x to any object p that has this method (and possibly some others, which are represented by an ellipsis in the type). Thus, it applies to both points and colored points.
Methods need not be declared previously, as shown by the example:
Multiple inheritance is allowed. Only the last definition of a method is kept: the redefinition in a subclass of a method that was visible in the parent class overrides the definition in the parent class. Previous definitions of a method can be reused by binding the related ancestor. Below, super is bound to the ancestor printable_point. The name super is a pseudo value identifier that can only be used to invoke a super-class method, as in super#print.
A private method that has been hidden in the parent class is no longer visible, and is thus not overridden. Since initializers are treated as private methods, all initializers along the class hierarchy are evaluated, in the order they are introduced.
Note that for clarity’s sake, the method print is explicitly marked as overriding another definition by annotating the method keyword with an exclamation mark !. If the method print were not overriding the print method of printable_point, the compiler would raise an error:
This explicit overriding annotation also works for val and inherit:
Reference cells can be implemented as objects. The naive definition fails to typecheck:
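For instance, a definition like the following is rejected (sketch):

class oref x_init =
  object
    val mutable x = x_init
    method get = x
    method set y = x <- y
  end;;
(* rejected: the type of x is an unbound type variable *)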
The reason is that at least one of the methods has a polymorphic type (here, the type of the value stored in the reference cell), thus either the class should be parametric, or the method type should be constrained to a monomorphic type. A monomorphic instance of the class could be defined by:
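For instance (illustrative sketch):

class oref (x_init : int) =
  object
    val mutable x = x_init
    method get = x
    method set y = x <- y
  end;;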
Note that since immediate objects do not define a class type, they have no such restriction.
On the other hand, a class for polymorphic references must explicitly list the type parameters in its declaration. Class type parameters are listed between [ and ]. The type parameters must also be bound somewhere in the class body by a type constraint.
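A polymorphic version might therefore be written as follows (sketch):

class ['a] oref x_init =
  object
    val mutable x = (x_init : 'a)
    method get = x
    method set y = x <- y
  end;;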
The type parameter in the declaration may actually be constrained in the body of the class definition. In the class type, the actual value of the type parameter is displayed in the constraint clause.
Let us consider a more complex example: define a circle, whose center may be any kind of point. We put an additional type constraint in method move, since no free variables must remain unaccounted for by the class type parameters.
An alternate definition of circle, using a constraint clause in the class definition, is shown below. The type #point used below in the constraint clause is an abbreviation produced by the definition of class point. This abbreviation unifies with the type of any object belonging to a subclass of class point. It actually expands to < get_x : int; move : int -> unit; .. >. This leads to the following alternate definition of circle, which has slightly stronger constraints on its argument, as we now expect center to have a method get_x.
The class colored_circle is a specialized version of class circle that requires the type of the center to unify with #colored_point, and adds a method color. Note that when specializing a parameterized class, the instance of type parameter must always be explicitly given. It is again written between [ and ].
While parameterized classes may be polymorphic in their contents, they are not enough to allow polymorphism of method use.
A classical example is defining an iterator.
At first glance, we seem to have a polymorphic iterator; however, this does not work in practice.
Our iterator works, as its first use for summation shows. However, since objects themselves are not polymorphic (only their constructors are), using the fold method fixes its type for this individual object. Our next attempt to use it as a string iterator fails.
The problem here is that quantification was wrongly located: it is not the class we want to be polymorphic, but the fold method. This can be achieved by giving an explicitly polymorphic type in the method definition.
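For instance, an integer-list iterator with a truly polymorphic fold might be written like this (illustrative sketch):

class intlist (l : int list) =
  object
    method empty = (l = [])
    method fold : 'b. ('b -> int -> 'b) -> 'b -> 'b =
      fun f accu -> List.fold_left f accu l
  end;;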
As you can see in the class type shown by the compiler, while polymorphic method types must be fully explicit in class definitions (appearing immediately after the method name), quantified type variables can be left implicit in class descriptions. Why require types to be explicit? The problem is that (int -> int -> int) -> int -> int would also be a valid type for fold, and it happens to be incompatible with the polymorphic type we gave (automatic instantiation only works for toplevel type variables, not for inner quantifiers, where it becomes an undecidable problem). So the compiler cannot choose between those two types, and must be helped.
However, the type can be completely omitted in the class definition if it is already known, through inheritance or type constraints on self. Here is an example of method overriding.
The following idiom separates description and definition.
Note here the (self : int #iterator) idiom, which ensures that this object implements the interface iterator.
Polymorphic methods are called in exactly the same way as normal methods, but you should be aware of some limitations of type inference. Namely, a polymorphic method can only be called if its type is known at the call site. Otherwise, the method will be assumed to be monomorphic, and given an incompatible type.
The workaround is easy: you should put a type constraint on the parameter.
Of course the constraint may also be an explicit method type. Only occurrences of quantified variables are required.
Another use of polymorphic methods is to allow some form of implicit subtyping in method arguments. We have already seen in section 3.8 how some functions may be polymorphic in the class of their argument. This can be extended to methods.
Note here the special syntax (#point0 as 'a) we have to use to quantify the extensible part of #point0. As for the variable binder, it can be omitted in class specifications. If you want polymorphism inside an object field, it must be quantified independently.
In method m1, o must be an object with at least a method n1, itself polymorphic. In method m2, the argument of n2 and x must have the same type, which is quantified at the same level as 'a.
Subtyping is never implicit. There are, however, two ways to perform subtyping. The most general construction is fully explicit: both the domain and the codomain of the type coercion must be given.
We have seen that points and colored points have incompatible types. For instance, they cannot be mixed in the same list. However, a colored point can be coerced to a point, hiding its color method:
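For instance, with the point and colored_point classes sketched earlier (illustrative):

let colored_point_to_point cp = (cp : colored_point :> point);;
(* val colored_point_to_point : colored_point -> point *)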
An object of type t can be seen as an object of type t' only if t is a subtype of t'. For instance, a point cannot be seen as a colored point.
Indeed, narrowing coercions without runtime checks would be unsafe. Runtime type checks might raise exceptions, and they would require the presence of type information at runtime, which is not the case in the OCaml system. For these reasons, there is no such operation available in the language.
Be aware that subtyping and inheritance are not related. Inheritance is a syntactic relation between classes while subtyping is a semantic relation between types. For instance, the class of colored points could have been defined directly, without inheriting from the class of points; the type of colored points would remain unchanged and thus still be a subtype of points.
The domain of a coercion can often be omitted. For instance, one can define:
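a coercion with an implicit domain (illustrative sketch):

let to_point cp = (cp :> point);;
(* val to_point : #point -> point *)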
In this case, the function colored_point_to_point is an instance of the function to_point. This is not always true, however. The fully explicit coercion is more precise and is sometimes unavoidable. Consider, for example, the following class:
The object type c0 is an abbreviation for <m : 'a; n : int> as 'a. Consider now the type declaration:
The object type c1 is an abbreviation for the type <m : 'a> as 'a. The coercion from an object of type c0 to an object of type c1 is correct:
However, the domain of the coercion cannot always be omitted. In that case, the solution is to use the explicit form. Sometimes, a change in the class-type definition can also solve the problem.
While class types c1 and c2 are different, both object types c1 and c2 expand to the same object type (same method names and types). Yet, when the domain of a coercion is left implicit and its co-domain is an abbreviation of a known class type, then the class type, rather than the object type, is used to derive the coercion function. This allows leaving the domain implicit in most cases when coercing from a subclass to its superclass. The type of a coercion can always be seen as below:
Note the difference between these two coercions: in the case of to_c2, the type #c2 = < m : 'a; .. > as 'a is polymorphically recursive (according to the explicit recursion in the class type of c2); hence the success of applying this coercion to an object of class c0. On the other hand, in the first case, c1 was only expanded and unrolled twice to obtain < m : < m : c1; .. >; .. > (remember #c1 = < m : c1; .. >), without introducing recursion. You may also note that the type of to_c2 is #c2 -> c2 while the type of to_c1 is more general than #c1 -> c1. This is not always true, since there are class types for which some instances of #c are not subtypes of c, as explained in section 3.16. Yet, for parameterless classes the coercion (_ :> c) is always more general than (_ : #c :> c).
A common problem may occur when one tries to define a coercion to a class c while defining class c. The problem is due to the type abbreviation not being completely defined yet, and so its subtypes are not clearly known. Then, a coercion (_ :> c) or (_ : #c :> c) is taken to be the identity function, as in
As a consequence, if the coercion is applied to self, as in the following example, the type of self is unified with the closed type c (a closed object type is an object type without ellipsis). This would constrain the type of self to be closed and is thus rejected. Indeed, the type of self cannot be closed: this would prevent any further extension of the class. Therefore, a type error is generated when the unification of this type with another type would result in a closed object type.
However, the most common instance of this problem, coercing self to its current class, is detected as a special case by the type checker, and properly typed.
This allows the following idiom, keeping a list of all objects belonging to a class or its subclasses:
This idiom can in turn be used to retrieve an object whose type has been weakened:
The type < m : int > we see here is just the expansion of c, due to the use of a reference; we have succeeded in getting back an object of type c.
The previous coercion problem can often be avoided by first
defining the abbreviation, using a class type:
It is also possible to use a virtual class. Inheriting from this class simultaneously forces all methods of c to have the same type as the methods of c'.
One could think of defining the type abbreviation directly:
However, the abbreviation #c' cannot be defined directly in a similar way. It can only be defined by a class or a class-type definition. This is because a #-abbreviation carries an implicit anonymous variable .. that cannot be explicitly named. The closest you can get to it is:
with an extra type variable capturing the open object type.
It is possible to write a version of class point without assignments on the instance variables. The override construct {< ... >} returns a copy of “self” (that is, the current object), possibly changing the value of some instance variables.
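Such a class might be written as follows (illustrative sketch):

class functional_point y =
  object
    val x = y
    method get_x = x
    method move d = {< x = x + d >}
    method move_to x = {< x >}
  end;;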
As with records, the form {< x >} is an elided version of {< x = x >} which avoids the repetition of the instance variable name. Note that the type abbreviation functional_point is recursive, which can be seen in the class type of functional_point: the type of self is 'a and 'a appears inside the type of the method move.
The above definition of functional_point is not equivalent to the following:
While objects of either class will behave the same, objects of their subclasses will be different. In a subclass of bad_functional_point, the method move will keep returning an object of the parent class. On the contrary, in a subclass of functional_point, the method move will return an object of the subclass.
Functional update is often used in conjunction with binary methods as illustrated in section 8.2.1.
Objects can also be cloned, whether they are functional or imperative. The library function Oo.copy makes a shallow copy of an object. That is, it returns a new object that has the same methods and instance variables as its argument. The instance variables are copied but their contents are shared. Assigning a new value to an instance variable of the copy (using a method call) will not affect instance variables of the original, and conversely. A deeper assignment (for example if the instance variable is a reference cell) will of course affect both the original and the copy.
The type of Oo.copy is the following:
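val copy : (< .. > as 'a) -> 'a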
The keyword as in that type binds the type variable 'a to the object type < .. >. Therefore, Oo.copy takes an object with any methods (represented by the ellipsis), and returns an object of the same type. The type of Oo.copy is different from type < .. > -> < .. > as each ellipsis represents a different set of methods. Ellipsis actually behaves as a type variable.
In fact, Oo.copy p will behave as p#copy assuming that a public method copy with body {< >} has been defined in the class of p.
Objects can be compared using the generic comparison functions = and <>. Two objects are equal if and only if they are physically equal. In particular, an object and its copy are not equal.
Other generic comparisons such as (<, <=, ...) can also be used on objects. The relation < defines an unspecified but strict ordering on objects. The ordering relationship between two objects is fixed permanently once the two objects have been created, and it is not affected by mutation of fields.
Cloning and override have a non-empty intersection. They are interchangeable when used within an object and without overriding any field:
Only the override can be used to actually override fields, and only the Oo.copy primitive can be used externally.
Cloning can also be used to provide facilities for saving and restoring the state of objects.
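A possible backup facility (illustrative sketch):

class backup =
  object (self : 'mytype)
    val mutable copy = None
    method save = copy <- Some {< copy = None >}
    method restore = match copy with Some x -> x | None -> self
  end;;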
The above definition will only backup one level. The backup facility can be added to any class by using multiple inheritance.
We can define a variant of backup that retains all copies. (We also add a method clear to manually erase all copies.)
Recursive classes can be used to define objects whose types are mutually recursive.
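For instance (illustrative sketch):

class window =
  object
    val mutable top_widget = (None : widget option)
    method top_widget = top_widget
  end
and widget (w : window) =
  object
    val window = w
    method window = window
  end;;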
Although their types are mutually recursive, the classes widget and window are themselves independent.
A binary method is a method which takes an argument of the same type as self. The class comparable below is a template for classes with a binary method leq of type 'a -> bool where the type variable 'a is bound to the type of self. Therefore, #comparable expands to < leq : 'a -> bool; .. > as 'a. We see here that the binder as also allows writing recursive types.
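Such a template might be written as follows (illustrative sketch):

class virtual comparable =
  object (_ : 'a)
    method virtual leq : 'a -> bool
  end;;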
We then define a subclass money of comparable. The class money simply wraps floats as comparable objects. We will extend money below with more operations. We have to use a type constraint on the class parameter x because the primitive <= is a polymorphic function in OCaml. The inherit clause ensures that the type of objects of this class is an instance of #comparable.
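One possible definition (sketch):

class money (x : float) =
  object
    inherit comparable
    val repr = x
    method value = repr
    method leq p = repr <= p#value
  end;;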
Note that the type money is not a subtype of type comparable, as the self type appears in contravariant position in the type of method leq. Indeed, an object m of class money has a method leq that expects an argument of type money since it accesses its value method. Considering m of type comparable would allow a call to method leq on m with an argument that does not have a method value, which would be an error.
Similarly, the type money2 below is not a subtype of type money.
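For instance, money2 might extend money with a times method (illustrative sketch; this definition is referred to again below):

class money2 x =
  object
    inherit money x
    method times k = {< repr = k *. repr >}
  end;;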
It is however possible to define functions that manipulate objects of type either money or money2: the function min will return the minimum of any two objects whose type unifies with #comparable. The type of min is not the same as #comparable -> #comparable -> #comparable, as the abbreviation #comparable hides a type variable (an ellipsis). Each occurrence of this abbreviation generates a new variable.
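For instance (illustrative sketch):

let min (x : #comparable) y = if x#leq y then x else y;;
(* val min : (#comparable as 'a) -> 'a -> 'a *)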
This function can be applied to objects of type money or money2.
More examples of binary methods can be found in sections 8.2.1 and 8.2.3.
Note the use of override for method times. Writing new money2 (k *. repr) instead of {< repr = k *. repr >} would not behave well with inheritance: in a subclass money3 of money2 the times method would return an object of class money2 but not of class money3 as would be expected.
The class money could naturally carry another binary method. Here is a direct definition:
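A sketch of such a direct definition:

class money (x : float) =
  object (self : 'a)
    val repr = x
    method value = repr
    method print = print_float repr
    method times k = {< repr = k *. repr >}
    method leq (p : 'a) = repr <= p#value
    method plus (p : 'a) = {< repr = repr +. p#value >}
  end;;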
The above class money reveals a problem that often occurs with binary methods. In order to interact with other objects of the same class, the representation of money objects must be revealed, using a method such as value. If we remove all binary methods (here plus and leq), the representation can easily be hidden inside objects by removing the method value as well. However, this is not possible as soon as some binary method requires access to the representation of objects of the same class (other than self).
Here, the representation of the object is known only to a particular object. To make it available to other objects of the same class, we are forced to make it available to the whole world. However we can easily restrict the visibility of the representation using the module system.
Another example of friend functions may be found in section 8.2.3. These examples occur when a group of objects (here objects of the same class) and functions should see each other’s internal representation, while that representation should be hidden from the outside. The solution is always to define all friends in the same module, give access to the representation, and use a signature constraint to make the representation abstract outside the module.
(Chapter written by Jacques Garrigue)
If you have a look at modules ending in Labels in the standard library, you will see that function types have annotations you did not have in the functions you defined yourself.
Such annotations of the form name: are called labels. They are meant to document the code, allow more checking, and give more flexibility to function application. You can give such names to arguments in your programs, by prefixing them with a tilde ~.
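For example (illustrative sketch):

let f ~x ~y = x - y;;
(* val f : x:int -> y:int -> int *)

let x = 3 and y = 2 in f ~x ~y;;
(* - : int = 1 *)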
When you want to use distinct names for the variable and the label appearing in the type, you can use a naming label of the form ~name:. This also applies when the argument is not a variable.
Labels obey the same rules as other identifiers in OCaml, that is you cannot use a reserved keyword (like in or to) as a label.
Formal parameters and arguments are matched according to their respective labels, the absence of label being interpreted as the empty label. This allows commuting arguments in applications. One can also partially apply a function on any argument, creating a new function of the remaining parameters.
If several arguments of a function bear the same label (or no label), they will not commute among themselves, and order matters. But they can still commute with other arguments.
An interesting feature of labeled arguments is that they can be made optional. For optional parameters, the question mark ? replaces the tilde ~ of non-optional ones, and the label is also prefixed by ? in the function type. Default values may be given for such optional parameters.
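For instance, a sketch along these lines (consistent with the bump function discussed later in this chapter) gives the optional parameter step a default value:
let bump ?(step = 1) x = x + step
(* val bump : ?step:int -> int -> int *)
let three = bump 2             (* the default step = 1 is used *)
let four = bump ~step:2 2      (* the default is overridden *)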
A function taking some optional arguments must also take at least one non-optional argument. The criterion for deciding whether an optional argument has been omitted is the non-labeled application of an argument appearing after this optional argument in the function type. Note that if that argument is labeled, you will only be able to eliminate optional arguments by totally applying the function, omitting all optional arguments and omitting all labels for all remaining arguments.
Optional parameters may also commute with non-optional or unlabeled ones, as long as they are applied simultaneously. By nature, optional arguments do not commute with unlabeled arguments applied independently.
Here (test () ()) is already (0,0,0) and cannot be further applied.
Optional arguments are actually implemented as option types. If you do not give a default value, you have access to their internal representation, type 'a option = None | Some of 'a. You can then provide different behaviors when an argument is present or not.
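A minimal sketch of this behaviour (the function greet is illustrative):
let greet ?name () =
  match name with
  | None -> "Hello, world"
  | Some n -> "Hello, " ^ n
(* val greet : ?name:string -> unit -> string *)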
It may also be useful to relay an optional argument from a function call to another. This can be done by prefixing the applied argument with ?. This question mark disables the wrapping of the optional argument in an option type.
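For instance, assuming the bump function sketched above, the optional argument can be relayed unchanged:
let bump_twice ?step x = bump ?step (bump ?step x)
(* val bump_twice : ?step:int -> int -> int *)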
While they make function applications more comfortable to write, labels and optional arguments have the pitfall that they cannot be inferred as completely as the rest of the language.
You can see it in the following two examples.
The first case is simple: g is passed ~y and then ~x, but f expects ~x and then ~y. This is correctly handled if we know the type of g to be x:int -> y:int -> int in advance, but otherwise this causes the above type clash. The simplest workaround is to apply formal parameters in a standard order.
The second example is more subtle: while we intended the argument bump to be of type ?step:int -> int -> int, it is inferred as step:int -> int -> 'a. These two types being incompatible (internally normal and optional arguments are different), a type error occurs when applying bump_it to the real bump.
We will not try here to explain in detail how type inference works. One must just understand that there is not enough information in the above program to deduce the correct type of g or bump. That is, there is no way to know whether an argument is optional or not, or which is the correct order, by looking only at how a function is applied. The strategy used by the compiler is to assume that there are no optional arguments, and that applications are done in the right order.
The right way to solve this problem for optional parameters is to add a type annotation to the argument bump.
In practice, such problems appear mostly when using objects whose methods have optional arguments, so writing the type of object arguments is often a good idea.
Normally the compiler generates a type error if you attempt to pass to a function a parameter whose type is different from the expected one. However, in the specific case where the expected type is a non-labeled function type, and the argument is a function expecting optional parameters, the compiler will attempt to transform the argument to have it match the expected type, by passing None for all optional parameters.
This transformation is coherent with the intended semantics, including side effects. That is, if the application of the optional parameters produces side effects, these are delayed until the received function is actually applied to an argument.
Like for names, choosing labels for functions is not an easy task. A good labeling is one which
We explain here the rules we applied when labeling OCaml libraries.
To speak in an “object-oriented” way, one can consider that each function has a main argument, its object, and other arguments related to its action, the parameters. To permit the combination of functions through functionals in commuting label mode, the object will not be labeled: its role is clear from the function itself. The parameters are labeled with names that recall their nature or their role. The best labels combine nature and role. When this is not possible, the role is to be preferred, since the nature will often be given by the type itself. Obscure abbreviations should be avoided.
ListLabels.map : f:('a -> 'b) -> 'a list -> 'b list
UnixLabels.write : file_descr -> buf:bytes -> pos:int -> len:int -> unit
When there are several objects of same nature and role, they are all left unlabeled.
ListLabels.iter2 : f:('a -> 'b -> unit) -> 'a list -> 'b list -> unit
When there is no preferable object, all arguments are labeled.
BytesLabels.blit : src:bytes -> src_pos:int -> dst:bytes -> dst_pos:int -> len:int -> unit
However, when there is only one argument, it is often left unlabeled.
BytesLabels.create : int -> bytes
This principle also applies to functions of several arguments whose return type is a type variable, as long as the role of each argument is not ambiguous. Labeling such functions may lead to awkward error messages when one attempts to omit labels in an application, as we have seen with ListLabels.fold_left.
Here are some of the label names you will find throughout the libraries.
Label | Meaning |
f: | a function to be applied |
pos: | a position in a string, array or byte sequence |
len: | a length |
buf: | a byte sequence or string used as buffer |
src: | the source of an operation |
dst: | the destination of an operation |
init: | the initial value for an iterator |
cmp: | a comparison function, e.g. Stdlib.compare |
mode: | an operation mode or a flag list |
All these are only suggestions, but keep in mind that the choice of labels is essential for readability. Bizarre choices will make the program harder to maintain.
Ideally, the right function name with the right labels should be enough to understand the function's meaning. Since one can get this information with OCamlBrowser or the ocaml toplevel, the documentation is only needed when a more detailed specification is required.
(Chapter written by Jacques Garrigue)
Variants as presented in section 1.4 are a powerful tool to build data structures and algorithms. However they sometimes lack flexibility when used in modular programming. This is due to the fact that every constructor is assigned to a unique type when defined and used. Even if the same name appears in the definition of multiple types, the constructor itself belongs to only one type. Therefore, one cannot decide that a given constructor belongs to multiple types, or consider a value of some type to belong to some other type with more constructors.
With polymorphic variants, this original assumption is removed. That is, a variant tag does not belong to any type in particular, the type system will just check that it is an admissible value according to its use. You need not define a type before using a variant tag. A variant type will be inferred independently for each of its uses.
In programs, polymorphic variants work like usual ones. You just have to prefix their names with a backquote character `.
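A sketch of the kind of phrases that produce the variant types discussed below:
let l = [`On; `Off]
(* val l : [> `Off | `On ] list *)
let f = function
  | `On -> 1
  | `Off -> 0
  | `Number n -> n
(* val f : [< `Number of int | `Off | `On ] -> int *)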
[>`Off|`On] list means that to match this list, you should at least be able to match `Off and `On, without argument. [<`On|`Off|`Number of int] means that f may be applied to `Off, `On (both without argument), or `Number n where n is an integer. The > and < inside the variant types show that they may still be refined, either by defining more tags or by allowing less. As such, they contain an implicit type variable. Because each of the variant types appears only once in the whole type, their implicit type variables are not shown.
The above variant types were polymorphic, allowing further refinement. When writing type annotations, one will most often describe fixed variant types, that is types that cannot be refined. This is also the case for type abbreviations. Such types do not contain < or >, but just an enumeration of the tags and their associated types, just like in a normal datatype definition.
Type-checking polymorphic variants is a subtle thing, and some expressions may result in more complex type information.
Here we are seeing two phenomena. First, since this matching is open (the last case catches any tag), we obtain the type [> `A | `B] rather than [< `A | `B] in a closed matching. Then, since x is returned as is, input and return types are identical. The notation as 'a denotes such type sharing. If we apply f to yet another tag `E, it gets added to the list.
Here f1 and f2 both accept the variant tags `A and `B, but the argument of `A is int for f1 and string for f2. In the type of f, the tag `C, which is only accepted by f1, disappears, and both argument types appear for `A, as int & string. This means that if we pass the variant tag `A to f, its argument should be both int and string. Since there is no such value, f cannot be applied to `A, and `B is the only accepted input.
Even if a value has a fixed variant type, one can still give it a larger type through coercions. Coercions are normally written with both the source type and the destination type, but in simple cases the source type may be omitted.
You may also selectively coerce values through pattern matching.
When an or-pattern composed of variant tags is wrapped inside an alias-pattern, the alias is given a type containing only the tags enumerated in the or-pattern. This allows for many useful idioms, like incremental definition of functions.
To make this even more comfortable, you may use type definitions as abbreviations for or-patterns. That is, if you have defined type myvariant = [`Tag1 of int | `Tag2 of bool], then the pattern #myvariant is equivalent to writing (`Tag1(_ : int) | `Tag2(_ : bool)).
Such abbreviations may be used alone,
or combined with aliases.
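A sketch of both uses, assuming the myvariant type defined in the previous paragraph:
type myvariant = [ `Tag1 of int | `Tag2 of bool ]

(* the abbreviation used alone *)
let f = function
  | #myvariant -> "myvariant"
  | `Tag3 -> "Tag3"

(* combined with an alias, reusing a function defined on myvariant *)
let g1 = function `Tag1 n -> string_of_int n | `Tag2 b -> string_of_bool b
let g = function
  | #myvariant as x -> g1 x
  | `Tag3 -> "Tag3"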
After seeing the power of polymorphic variants, one may wonder why they were added to core language variants, rather than replacing them.
The answer is twofold. The first aspect is that, while polymorphic variants are reasonably efficient, the lack of static type information allows fewer optimizations, making them slightly heavier than core language variants. However, noticeable differences would only appear on huge data structures.
More important is the fact that polymorphic variants, while type-safe, result in a weaker type discipline. That is, core language variants actually do much more than ensure type safety: they also check that you use only declared constructors, that all constructors present in a data structure are compatible, and they enforce typing constraints on their parameters.
For this reason, you must be more careful about making types explicit when you use polymorphic variants. When you write a library, this is easy since you can describe exact types in interfaces, but for simple programs you are probably better off with core language variants.
Beware also that some idioms make trivial errors very hard to find. For instance, the following code is probably wrong but the compiler has no way to see it.
You can avoid such risks by annotating the definition itself.
This chapter covers more advanced questions related to the limitations of polymorphic functions and types. There are some situations in OCaml where the type inferred by the type checker may be less generic than expected. Such non-genericity can stem either from interactions between side effects and typing, or from the difficulties of implicit polymorphic recursion and higher-rank polymorphism.
This chapter details each of these situations and, when possible, how to recover genericity.
Maybe the most frequent examples of non-genericity derive from the interactions between polymorphic types and mutation. A simple example appears when typing the following expression
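A minimal sketch of such a definition:
let store = ref None
(* val store : '_weak1 option ref *)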
Since the type of None is 'a option and the function ref has type 'b -> 'b ref, a natural deduction for the type of store would be 'a option ref. However, the inferred type, '_weak1 option ref, is different. Type variables whose names start with a _weak prefix like '_weak1 are weakly polymorphic type variables, sometimes shortened to “weak type variables”. A weak type variable is a placeholder for a single type that is currently unknown. Once the specific type t behind the placeholder type '_weak1 is known, all occurrences of '_weak1 will be replaced by t. For instance, we can define another option reference and store an int inside:
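For instance (a sketch; the numbering of the weak variable is illustrative):
let another_store = ref None
(* val another_store : '_weak2 option ref *)
let () = another_store := Some 0
(* another_store now has type int option ref *)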
After storing an int inside another_store, the type of another_store has been updated from '_weak2 option ref to int option ref. This distinction between weakly and generic polymorphic type variables protects OCaml programs from unsoundness and runtime errors. To understand where unsoundness might come from, consider this simple function, which swaps a value x with the value stored inside a store reference, if there is such a value:
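A sketch of such a swap function:
let swap store x =
  match !store with
  | None -> store := Some x; x
  | Some y -> store := Some x; y
(* val swap : 'a option ref -> 'a -> 'a *)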
We can apply this function to our store
After these three swaps the stored value is 3. Everything is fine up to now. We can then try to swap 3 with a more interesting value, for instance a function:
At this point, the type checker rightfully complains that it is not possible to swap an integer and a function, and that an int should always be traded for another int. Furthermore, the type checker prevents us from manually changing the type of the value stored by store:
Indeed, looking at the type of store, we see that the weak type '_weak1 has been replaced by the type int
Therefore, after placing an int in store, we cannot use it to store any value other than an int. More generally, weak types protect the program from undue mutation of values with a polymorphic type.
Moreover, weak types cannot appear in the signature of toplevel modules: types must be known at compilation time. Otherwise, different compilation units could replace the weak type with different and incompatible types. For this reason, compiling the following small piece of code
let option_ref = ref None
yields a compilation error
Error: The type of this expression, '_weak1 option ref, contains type variables that cannot be generalized
To solve this error, it is enough to add an explicit type annotation to specify the type at declaration time:
let option_ref: int option ref = ref None
This is in any case a good practice for such global mutable variables. Otherwise, they will pick up the type of their first use, and if there is a mistake at that point, it can result in confusing type errors when later, correct uses are flagged as errors.
Identifying the exact context in which polymorphic types should be replaced by weak types in a modular way is a difficult question. Indeed the type system must handle the possibility that functions may hide persistent mutable states. For instance, the following function uses an internal reference to implement a delayed identity function
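A sketch of such a function, using an internal reference to return the previously received argument:
let fake_id =
  let last = ref None in
  fun x ->
    match !last with
    | None -> last := Some x; x
    | Some y -> last := Some x; y
(* val fake_id : '_weak3 -> '_weak3 *)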
It would be unsound to apply this fake_id function to values with different types. The function fake_id is therefore rightfully assigned the type '_weak3 -> '_weak3 rather than 'a -> 'a. At the same time, it ought to be possible to use a local mutable state without impacting the type of a function.
To circumvent these dual difficulties, the type checker considers that any value returned by a function might rely on persistent mutable state behind the scenes and should be given a weak type. This restriction on the type of mutable values and the results of function application is called the value restriction. Note that the value restriction is conservative: there are situations where it is too cautious and gives a weak type to a value that could safely be generalized to a polymorphic type:
Quite often, this happens when defining functions using higher order functions. To avoid this problem, a solution is to add an explicit argument to the function:
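For instance (a sketch; the particular higher-order definition is illustrative):
let weak_id = List.map (fun x -> x)
(* val weak_id : '_weak4 list -> '_weak4 list *)
let id_again l = List.map (fun x -> x) l
(* val id_again : 'a list -> 'a list *)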
With this argument, id_again is seen as a function definition by the type checker and can therefore be generalized. This kind of manipulation is called eta-expansion in lambda calculus, and is sometimes referred to by that name.
There is another partial solution to the problem of unnecessary weak types, which is implemented directly within the type checker. Briefly, it is possible to prove that weak types that only appear as type parameters in covariant positions (also called positive positions) can be safely generalized to polymorphic types. For instance, the type 'a list is covariant in 'a:
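For instance (a sketch):
let f () = []
let empty = f ()
(* val empty : 'a list *)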
Note that the type inferred for empty is 'a list and not the '_weak5 list that should have occurred with the value restriction.
The value restriction combined with this generalization for covariant type parameters is called the relaxed value restriction.
Variance describes how type constructors behave with respect to subtyping. Consider for instance a pair of types x and xy, with x a subtype of xy, denoted x :> xy:
As x is a subtype of xy, we can convert a value of type x to a value of type xy:
Similarly, if we have a value of type x list, we can convert it to a value of type xy list, since we could convert each element one by one:
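A sketch of the definitions and coercions discussed in the three paragraphs above, using polymorphic variants to encode x and xy (the particular encoding is illustrative):
type x = [ `X ]
type xy = [ `X | `Y ]

let x : x = `X
let x' = (x :> xy)            (* x :> xy on a single value *)

let l : x list = [ `X; `X ]
let l' = (l :> xy list)       (* covariance: x list :> xy list *)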
In other words, x :> xy implies that x list :> xy list, therefore the type constructor 'a list is covariant (it preserves subtyping) in its parameter 'a.
Contrarily, if we have a function that can handle values of type xy
it can also handle values of type x:
Note that we can rewrite the type of f and f' as
In this case, x :> xy implies xy proc :> x proc. Notice that the second subtyping relation reverses the order of x and xy: the type constructor 'a proc is contravariant in its parameter 'a. More generally, the function type constructor 'a -> 'b is covariant in its return type 'b and contravariant in its argument type 'a.
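A sketch, reusing the x and xy types from the previous sketch:
type 'a proc = 'a -> unit
let f : xy proc = fun _ -> ()
let f' = (f : xy proc :> x proc)   (* contravariance: xy proc :> x proc *)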
A type constructor can also be invariant in some of its type parameters, neither covariant nor contravariant. A typical example is a reference:
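For instance (a sketch, reusing the type x above):
let x_ref : x ref = ref `X
(* The coercion (x_ref : x ref :> xy ref) is rejected: 'a ref is invariant in 'a. *)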
If we were able to coerce a value x of type x ref to the type xy ref, bound to a variable xy, we could use xy to store the value `Y inside the reference and then use x to read this content as a value of type x, which would break the type system.
More generally, as soon as a type variable appears in a position describing mutable state, it becomes invariant. As a corollary, covariant variables will never denote mutable locations and can be safely generalized. For a better description, interested readers can consult the original article by Jacques Garrigue at http://www.math.nagoya-u.ac.jp/~garrigue/papers/morepoly-long.pdf
Together, the relaxed value restriction and type parameter covariance help to avoid eta-expansion in many situations.
Moreover, when the type definitions are exposed, the type checker is able to infer variance information on its own and one can benefit from the relaxed value restriction even unknowingly. However, this is not the case anymore when defining new abstract types. As an illustration, we can define a module type collection as:
In this situation, when coercing the module List2 to the module type COLLECTION, the type checker forgets that 'a List2.t was covariant in 'a. Consequently, the relaxed value restriction does not apply anymore:
To keep the relaxed value restriction, we need to declare the abstract type 'a COLLECTION.t as covariant in 'a:
We then recover polymorphism:
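A sketch of the whole situation, consistent with the module names used above:
module type COLLECTION = sig
  type +'a t                     (* the explicit covariance annotation *)
  val empty : unit -> 'a t
end

module List2 : COLLECTION = struct
  type 'a t = 'a list
  let empty () = []
end

let empty = List2.empty ()
(* val empty : 'a List2.t, rather than a weak '_weak type *)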
The second major class of non-genericity is directly related to the problem of type inference for polymorphic functions. In some circumstances, the type inferred by OCaml might not be general enough to allow the definition of some recursive functions, in particular recursive functions acting on non-regular algebraic data types.
With a regular polymorphic algebraic data type, the type parameters of the type constructor are constant within the definition of the type. For instance, we can look at arbitrarily nested lists, defined as:
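A sketch of such a definition (the constructor names are illustrative):
type 'a regular_nested = List of 'a list | Nested of 'a regular_nested list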
Note that the type constructor regular_nested always appears as 'a regular_nested in the definition above, with the same parameter 'a. Equipped with this type, one can compute a maximal depth with a classic recursive function
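For instance (a sketch):
let rec maximal_depth = function
  | List _ -> 1
  | Nested l ->
      1 + List.fold_left (fun acc x -> max acc (maximal_depth x)) 0 l
(* val maximal_depth : 'a regular_nested -> int *)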
Non-regular recursive algebraic data types correspond to polymorphic algebraic data types whose parameter types vary between the left and right side of the type definition. For instance, it might be interesting to define a datatype that ensures that all lists are nested at the same depth:
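A sketch of such a definition:
type 'a nested = List of 'a list | Nested of 'a list nested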
Intuitively, a value of type 'a nested is a list of lists … of lists of elements, with k levels of nesting. We can then adapt the maximal_depth function defined on regular_nested into a depth function that computes this k. As a first try, we may define
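A sketch of this first attempt, which is rejected:
let rec depth = function
  | List _ -> 1
  | Nested n -> 1 + depth n
(* rejected: in the Nested branch, n has type 'a list nested,
   whereas this depth only accepts arguments of type 'a nested *)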
The type error here comes from the fact that during the definition of depth, the type checker first assigns to depth the type 'a -> 'b. When typing the pattern matching, 'a -> 'b becomes 'a nested -> 'b, then 'a nested -> int once the List branch is typed. However, when typing the application depth n in the Nested branch, the type checker encounters a problem: depth n is applied to 'a list nested, so depth must have the type 'a list nested -> 'b. Unifying this constraint with the previous one leads to the impossible constraint 'a list nested = 'a nested. In other words, within its own definition, the recursive function depth is applied to values of type 'a nested for different types 'a, due to the non-regularity of the type constructor nested. This creates a problem because the type checker introduced a new type variable 'a only at the definition of the function depth, whereas here we need a different type variable for every application of the function depth.
The solution of this conundrum is to use an explicitly polymorphic type annotation for the type 'a:
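A sketch of the corrected definition:
let rec depth : 'a. 'a nested -> int = function
  | List _ -> 1
  | Nested n -> 1 + depth n
(* val depth : 'a nested -> int *)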
In the type of depth, 'a. 'a nested -> int, the type variable 'a is universally quantified. In other words, 'a. 'a nested -> int reads as “for all types 'a, depth maps 'a nested values to integers”, whereas the standard type 'a nested -> int can be interpreted as “let 'a be a type variable; then depth maps 'a nested values to integers”. There are two major differences between these two type expressions. First, the explicit polymorphic annotation indicates to the type checker that it needs to introduce a new type variable every time the function depth is applied. This solves our problem with the definition of the function depth.
Second, it also notifies the type checker that the type of the function should be polymorphic. Indeed, without explicit polymorphic type annotation, the following type annotation is perfectly valid
since 'a,'b and 'c denote type variables that may or may not be polymorphic. Whereas, it is an error to unify an explicitly polymorphic type with a non-polymorphic type:
An important remark here is that it is not necessary to write the full type of depth explicitly: it is sufficient to add annotations only for the universally quantified type variables:
With explicit polymorphic annotations, it becomes possible to implement any recursive function that depends only on the structure of the nested lists and not on the type of the elements. For instance, a more complex example would be to compute the total number of elements of the nested lists:
Similarly, it may be necessary to use more than one explicitly polymorphic type variable, for instance to compute the nested list of list lengths of a nested list:
Explicit polymorphic annotations are however not sufficient to cover all the cases where the inferred type of a function is less general than expected. A similar problem arises when using polymorphic functions as arguments of higher-order functions. For instance, we may want to compute the average depth or length of two nested lists:
It would be natural to factorize these two definitions as:
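For instance, the natural factorization would be (a sketch):
let average f x y = (f x + f y) / 2
(* val average : ('a -> int) -> 'a -> 'a -> int *)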
However, the type of average len is less generic than the type of average_len, since it requires the type of the first and second argument to be the same:
As previously with polymorphic recursion, the problem stems from the fact that type variables are introduced only at the start of the let definitions. When we compute both f x and f y, the type of x and y are unified together. To avoid this unification, we need to indicate to the type checker that f is polymorphic in its first argument. In some sense, we would want average to have type
val average: ('a. 'a nested -> int) -> 'b nested -> 'c nested -> int
Note that this syntax is not valid in OCaml: average has a universally quantified type 'a inside the type of one of its arguments, whereas for polymorphic recursion the universally quantified type was introduced before the rest of the type. This position of the universally quantified type means that average is a second-rank polymorphic function. This kind of higher-rank function is not directly supported by OCaml: type inference for second-rank polymorphic functions and beyond is undecidable, so using such higher-rank functions requires handling these universally quantified types manually.
In OCaml, there are two ways to introduce this kind of explicit universally quantified types: universally quantified record fields,
and universally quantified object methods:
To solve our problem, we can therefore use either the record solution:
or the object one:
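A sketch of both encodings, assuming the depth function defined earlier (the names nested_size and size are illustrative):
(* the record solution: a field with an explicitly polymorphic type *)
type nested_size = { size : 'a. 'a nested -> int }
let average_record r x y = (r.size x + r.size y) / 2
(* val average_record : nested_size -> 'b nested -> 'c nested -> int *)
let _ = average_record { size = depth } (List [1; 2]) (Nested (List [[1]; [2; 3]]))

(* the object solution: a polymorphic method *)
let average_object (o : < size : 'a. 'a nested -> int >) x y =
  (o#size x + o#size y) / 2
let _ =
  average_object (object method size : 'a. 'a nested -> int = depth end)
    (List [1; 2]) (Nested (List [[1]; [2; 3]]))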
Generalized algebraic datatypes, or GADTs, extend usual sum types in two ways: constraints on type parameters may change depending on the value constructor, and some type variables may be existentially quantified. Adding constraints is done by giving an explicit return type, where type parameters are instantiated:
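A sketch of such a definition, consistent with the term type used in the rest of this section:
type _ term =
  | Int : int -> int term
  | Add : (int -> int -> int) term
  | App : ('b -> 'a) term * 'b term -> 'a term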
This return type must use the same type constructor as the type being defined, and have the same number of parameters. Variables are made existential when they appear inside a constructor’s argument, but not in its return type. Since the use of a return type often eliminates the need to name type parameters in the left-hand side of a type definition, one can replace them with anonymous types _ in that case.
The constraints associated to each constructor can be recovered through pattern-matching. Namely, if the type of the scrutinee of a pattern-matching contains a locally abstract type, this type can be refined according to the constructor used. These extra constraints are only valid inside the corresponding branch of the pattern-matching. If a constructor has some existential variables, fresh locally abstract types are generated, and they must not escape the scope of this branch.
We write an eval function:
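A sketch of such a function, using the polymorphic syntax for locally abstract types:
let rec eval : type a. a term -> a = function
  | Int n -> n
  | Add -> (fun x y -> x + y)
  | App (f, x) -> (eval f) (eval x)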
And use it:
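For instance (a sketch):
let two = eval (App (App (Add, Int 1), Int 1))
(* val two : int = 2 *)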
It is important to remark that the function eval is using the polymorphic syntax for locally abstract types. When defining a recursive function that manipulates a GADT, explicit polymorphic recursion should generally be used. For instance, the following definition fails with a type error:
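A sketch of such a failing definition, which uses a locally abstract type but no polymorphic annotation:
let rec eval (type a) : a term -> a = function
  | Int n -> n
  | Add -> (fun x y -> x + y)
  | App (f, x) -> (eval f) (eval x)
(* rejected: the existential type introduced by App escapes its scope *)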
In the absence of an explicit polymorphic annotation, a monomorphic type is inferred for the recursive function. If a recursive call occurs inside the function definition at a type that involves an existential GADT type variable, this variable flows to the type of the recursive function, and thus escapes its scope. In the above example, this happens in the branch App(f,x) when eval is called with f as an argument. In this branch, the type of f is ($App_'b -> a) term. The prefix $ in $App_'b denotes an existential type named by the compiler (see 7.5). Since the type of eval is 'a term -> 'a, the call eval f makes the existential type $App_'b flow to the type variable 'a and escape its scope. This triggers the above error.
Type inference for GADTs is notoriously hard. This is due to the fact that some types may become ambiguous when escaping from a branch. For instance, in the Int case above, n could have either type int or a, and they are not equivalent outside of that branch. As a first approximation, type inference will always work if a pattern-matching is annotated with types containing no free type variables (both on the scrutinee and the return type). This is the case in the above example, thanks to the type annotation containing only locally abstract types.
In practice, type inference is a bit more clever than that: type annotations do not need to be immediately on the pattern-matching, and the types do not have to be always closed. As a result, it is usually enough to only annotate functions, as in the example above. Type annotations are propagated in two ways: for the scrutinee, they follow the flow of type inference, in a way similar to polymorphic methods; for the return type, they follow the structure of the program, they are split on functions, propagated to all branches of a pattern matching, and go through tuples, records, and sum types. Moreover, the notion of ambiguity used is stronger: a type is only seen as ambiguous if it was mixed with incompatible types (equated by constraints), without type annotations between them. For instance, the following program types correctly.
Here the return type int is never mixed with a, so it is seen as non-ambiguous, and can be inferred. When using such partial type annotations we strongly suggest specifying the -principal mode, to check that inference is principal.
The exhaustiveness check is aware of GADT constraints, and can automatically infer that some cases cannot happen. For instance, the following pattern matching is correctly seen as exhaustive (the Add case cannot happen).
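For instance (a sketch, reusing the term type and the eval function above):
let get_int : int term -> int = function
  | Int n -> n
  | App (f, x) -> (eval f) (eval x)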
Usually, the exhaustiveness check only tries to check whether the cases omitted from the pattern matching are typable or not. However, you can force it to try harder by adding refutation cases, written as a full stop. In the presence of a refutation case, the exhaustiveness check will first compute the intersection of the pattern with the complement of the cases preceding it. It then checks whether the resulting patterns can really match any concrete values by trying to type-check them. Wild cards in the generated patterns are handled in a special way: if their type is a variant type with only GADT constructors, then the pattern is split into the different constructors, in order to check whether any of them is possible (this splitting is not done for arguments of these constructors, to avoid non-termination). We also split tuples and variant types with only one case, since they may contain GADTs inside. For instance, the following code is deemed exhaustive:
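A sketch of such a refutation case (the type t and the function deep follow the names used in the next paragraph):
type _ t = Int : int t | Bool : bool t

let deep : (char t * int) option -> char = function
  | None -> 'c'
  | _ -> .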
Namely, the inferred remaining case is Some _, which is split into Some (Int, _) and Some (Bool, _), which are both untypable because deep expects a non-existing char t as the first element of the tuple. Note that the refutation case could be omitted here, because it is automatically added when there is only one case in the pattern matching.
Another addition is that the redundancy check is now aware of GADTs: a case will be detected as redundant if it could be replaced by a refutation case using the same pattern.
The term type we have defined above is an indexed type, where a type parameter reflects a property of the value contents. Another use of GADTs is singleton types, where a GADT value represents exactly one type. This value can be used as runtime representation for this type, and a function receiving it can have a polytypic behavior.
Here is an example of a polymorphic function that takes the runtime representation of some type t and a value of the same type, then pretty-prints the value as a string:
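A sketch of such a function, with a singleton type typ as the runtime representation (the names are illustrative):
type _ typ =
  | TInt : int typ
  | TString : string typ
  | TPair : 'a typ * 'b typ -> ('a * 'b) typ

let rec to_string : type t. t typ -> t -> string =
  fun ty v ->
  match ty with
  | TInt -> string_of_int v
  | TString -> Printf.sprintf "%S" v
  | TPair (t1, t2) ->
      let v1, v2 = v in
      Printf.sprintf "(%s, %s)" (to_string t1 v1) (to_string t2 v2)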
Another frequent application of GADTs is equality witnesses.
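A sketch of an equality witness and of a safe cast built by matching on it:
type (_, _) eq = Eq : ('a, 'a) eq

let cast : type a b. (a, b) eq -> a -> b =
  fun eq x -> match eq with Eq -> x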
Here type eq has only one constructor, and by matching on it one adds a local constraint allowing the conversion between a and b. By building such equality witnesses, one can make equal types which are syntactically different.
Here is an example using both singleton types and equality witnesses to implement dynamic types.
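A sketch of such an implementation, assuming the typ and eq types defined above:
let rec eq_typ : type a b. a typ -> b typ -> (a, b) eq option =
  fun a b ->
  match a, b with
  | TInt, TInt -> Some Eq
  | TString, TString -> Some Eq
  | TPair (a1, a2), TPair (b1, b2) ->
      (match eq_typ a1 b1, eq_typ a2 b2 with
       | Some Eq, Some Eq -> Some Eq
       | _ -> None)
  | _ -> None

type dyn = Dyn : 'a typ * 'a -> dyn

let get_dyn : type a. a typ -> dyn -> a option =
  fun ty dyn ->
  match dyn with
  | Dyn (ty', v) ->
      (match eq_typ ty ty' with
       | None -> None
       | Some Eq -> Some v)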
The typing of pattern matching in the presence of GADTs can generate many existential types. When necessary, error messages refer to these existential types using compiler-generated names. Currently, the compiler generates these names according to the following nomenclature:
As shown by the last item, the current behavior is imperfect and may be improved in future versions.
As explained above, pattern-matching on a GADT constructor may introduce existential types. Syntax has been introduced which allows them to be named explicitly. For instance, the following code names the type of the argument of f and uses this name.
All existential type variables of the constructor must be introduced by the (type ...) construct and bound by a type annotation on the outside of the constructor argument.
GADT pattern-matching may also add type equations to non-local abstract types. The behaviour is the same as with local abstract types. Reusing the above eq type, one can write:
Of course, not all abstract types can be refined, as this would contradict the exhaustiveness check. Namely, builtin types (those defined by the compiler itself, such as int or array), and abstract types defined by the local module, are non-instantiable, and as such cause a type error rather than introduce an equation.
(Chapter written by Didier Rémy)
In this chapter, we show some larger examples using objects, classes and modules. We review many of the object features simultaneously on the example of a bank account. We show how modules taken from the standard library can be expressed as classes. Lastly, we describe a programming pattern known as virtual types through the example of window managers.
In this section, we illustrate most aspects of objects and inheritance by refining, debugging, and specializing the following initial, naive definition of a simple bank account. (We reuse the module Euro defined at the end of chapter 3.)
We now refine this definition with a method to compute interest.
We make the method interest private, since clearly it should not be called freely from the outside. Here, it is only made accessible to subclasses that will manage monthly or yearly updates of the account.
We now fix a bug in the current definition: the deposit method can be used to withdraw money by depositing negative amounts. We can fix this directly:
However, the bug might be fixed more safely by the following definition:
In particular, this does not require the knowledge of the implementation of the method deposit.
To keep track of operations, we extend the class with a mutable field history and a private method trace to add an operation in the log. Then each method to be traced is redefined.
One may wish to open an account and simultaneously deposit some initial amount. Although the initial implementation did not address this requirement, it can be achieved by using an initializer.
A better alternative is:
Indeed, the latter is safer since the call to deposit will automatically benefit from safety checks and from the trace. Let’s test it:
Closing an account can be done with the following polymorphic function:
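A sketch of such a function (it is polymorphic over any object providing balance and withdraw):
let close account = account#withdraw account#balance
(* val close : < balance : 'a; withdraw : 'a -> 'b; .. > -> 'b *)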
Of course, this applies to all sorts of accounts.
Finally, we gather several versions of the account into a module Account abstracted over some currency.
This shows the use of modules to group several class definitions that can in fact be thought of as a single unit. This unit would be provided by a bank for both internal and external uses. This is implemented as a functor that abstracts over the currency so that the same code can be used to provide accounts in different currencies.
The class bank is the real implementation of the bank account (it could have been inlined). This is the one that will be used for further extensions, refinements, etc. Conversely, the client will only be given the client view.
Hence, the clients do not have direct access to the balance, nor the history of their own accounts. Their only way to change their balance is to deposit or withdraw money. It is important to give the clients a class and not just the ability to create accounts (such as the promotional discount account), so that they can personalize their account. For instance, a client may refine the deposit and withdraw methods so as to do his own financial bookkeeping, automatically. On the other hand, the function discount is given as such, with no possibility for further personalization.
It is important to provide the client’s view as a functor Client so that client accounts can still be built after a possible specialization of the bank. The functor Client may remain unchanged and be passed the new definition to initialize a client’s view of the extended account.
The functor Client may also be redefined when some new features of the account can be given to the client.
One may wonder whether it is possible to treat primitive types such as integers and strings as objects. Although this is usually uninteresting for integers or strings, there may be some situations where this is desirable. The class money above is such an example. We show here how to do it for strings.
A naive definition of strings as objects could be:
However, the method escaped returns an object of the class ostring, and not an object of the current class. Hence, if the class is further extended, the method escaped will only return an object of the parent class.
As seen in section 3.16, the solution is to use functional update instead. We need to create an instance variable containing the representation s of the string.
As shown in the inferred type, the methods escaped and sub now return objects of the same type as the one of the class.
Another difficulty is the implementation of the method concat. In order to concatenate a string with another string of the same class, one must be able to access the instance variable externally. Thus, a method repr returning s must be defined. Here is the correct definition of strings:
Another constructor of the class string can be defined to return a new string of a given length:
Here, exposing the representation of strings is probably harmless. We could also hide the representation of strings, as we hid the currency in the class money of section 3.17.
There is sometimes an alternative between using modules or classes for parametric data types. Indeed, there are situations when the two approaches are quite similar. For instance, a stack can be straightforwardly implemented as a class:
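A sketch of such a class:
class ['a] stack = object
  val mutable l : 'a list = []
  method push x = l <- x :: l
  method pop =
    match l with
    | [] -> None
    | x :: rest -> l <- rest; Some x
end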
However, writing a method for iterating over a stack is more problematic. A method fold would have type ('b -> 'a -> 'b) -> 'b -> 'b. Here 'a is the parameter of the stack. The parameter 'b is not related to the class 'a stack but to the argument that will be passed to the method fold. A naive approach is to make 'b an extra parameter of class stack:
However, the method fold of a given object can only be applied to functions that all have the same type:
A better solution is to use polymorphic methods, which were introduced in OCaml version 3.05. Polymorphic methods make it possible to treat the type variable 'b in the type of fold as universally quantified, giving fold the polymorphic type 'b. ('b -> 'a -> 'b) -> 'b -> 'b. An explicit type declaration on the method fold is required, since the type checker cannot infer the polymorphic type by itself.
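A sketch of such a class with a polymorphic fold method:
class ['a] stack2 = object
  val mutable l : 'a list = []
  method push x = l <- x :: l
  method pop =
    match l with
    | [] -> None
    | x :: rest -> l <- rest; Some x
  method fold : 'b. ('b -> 'a -> 'b) -> 'b -> 'b =
    fun f accu -> List.fold_left f accu l
end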
A simplified version of object-oriented hash tables should have the following class type.
A simple implementation, which is quite reasonable for small hash tables, is to use an association list:
A better implementation, and one that scales up better, is to use a true hash table… whose elements are small hash tables!
Implementing sets leads to another difficulty. Indeed, the method union needs to be able to access the internal representation of another object of the same class.
This is another instance of friend functions as seen in section 3.17. Indeed, this is the same mechanism used in the module Set in the absence of objects.
In the object-oriented version of sets, we only need to add an additional method tag to return the representation of a set. Since sets are parametric in the type of elements, the method tag has a parametric type 'a tag, concrete within the module definition but abstract in its signature. From outside, it will then be guaranteed that two objects with a method tag of the same type will share the same representation.
The following example, known as the subject/observer pattern, is often presented in the literature as a difficult inheritance problem with inter-connected classes. The general pattern amounts to defining a pair of two classes that recursively interact with one another.
The class observer has a distinguished method notify that requires two arguments, a subject and an event, to execute an action.
The class subject remembers a list of observers in an instance variable, and has a distinguished method notify_observers to broadcast the message notify to all observers with a particular event e.
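A sketch of these two classes (this is an illustration of the pattern, not necessarily the exact listing used later):
class virtual ['subject, 'event] observer = object
  method virtual notify : 'subject -> 'event -> unit
end

class ['observer, 'event] subject = object (self)
  val mutable observers : 'observer list = []
  method add_observer obs = observers <- obs :: observers
  method notify_observers (e : 'event) =
    List.iter (fun obs -> obs#notify self e) observers
end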
The difficulty usually lies in defining instances of the pattern above by inheritance. This can be done in a natural and obvious manner in OCaml, as shown on the following example manipulating windows.
As can be expected, the type of window is recursive.
However, the two classes of window_subject and window_observer are not mutually recursive.
Classes window_observer and window_subject can still be extended by inheritance. For instance, one may enrich the subject with new behaviors and refine the behavior of the observer.
We can also create a different kind of observer:
and attach several observers to the same object:
In this chapter, we shall look at the parallel programming facilities in OCaml. The OCaml standard library exposes low-level primitives for parallel programming; we recommend that users rely on higher-level parallel programming libraries such as domainslib. This tutorial first covers high-level parallel programming using domainslib, followed by the low-level primitives exposed by the compiler.
OCaml distinguishes concurrency and parallelism and provides distinct mechanisms for expressing them. Concurrency is overlapped execution of tasks (section 12.24.2) whereas parallelism is simultaneous execution of tasks. In particular, parallel tasks overlap in time but concurrent tasks may or may not overlap in time. Tasks may execute concurrently by yielding control to each other. While concurrency is a program structuring mechanism, parallelism is a mechanism to make your programs run faster. If you are interested in the concurrent programming mechanisms in OCaml, please refer to the section 12.24 on effect handlers and the chapter 33 on the threads library.
Domains are the units of parallelism in OCaml. The module Domain provides the primitives to create and manage domains. New domains can be spawned using the spawn function.
The spawn function executes the given computation in parallel with the calling domain.
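For instance, a toplevel interaction might look like this (a sketch; the first output line is printed by the spawned domain):
# Domain.spawn (fun _ -> print_endline "I ran in parallel");;
I ran in parallel
- : unit Domain.t = <abstr>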
Domains are heavy-weight entities. Each domain maps 1:1 to an operating system thread. Each domain also has its own runtime state, which includes domain-local structures for allocating memory. Hence, they are relatively expensive to create and tear down.
It is recommended that programs do not spawn more domains than there are available cores.
In this tutorial, we shall be implementing, running and measuring the performance of parallel programs. The results observed are dependent on the number of cores available on the target machine. This tutorial is being written on a 2.3 GHz Quad-Core Intel Core i7 MacBook Pro with 4 cores and 8 hardware threads. It is reasonable to expect roughly 4x performance on 4 domains for parallel programs with little coordination between the domains, and when the machine is not under load. Beyond 4 domains, the speedup is likely to be less than linear. We shall also use the command-line benchmarking tool hyperfine for benchmarking our programs.
We shall use the program to compute the nth Fibonacci number using recursion as a running example. The sequential program for computing the nth Fibonacci number is given below.
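A sketch of such a program, consistent with the fib.ml file compiled below:
(* fib.ml *)
let n = int_of_string Sys.argv.(1)

let rec fib n = if n < 2 then 1 else fib (n - 1) + fib (n - 2)

let () = Printf.printf "fib(%d) = %d\n" n (fib n)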
The program can be compiled and benchmarked as follows.
$ ocamlopt -o fib.exe fib.ml
$ ./fib.exe 42
fib(42) = 433494437
$ hyperfine './fib.exe 42'   # Benchmarking
Benchmark 1: ./fib.exe 42
  Time (mean ± sd):     1.193 s ±  0.006 s    [User: 1.186 s, System: 0.003 s]
  Range (min … max):    1.181 s …  1.202 s    10 runs
We see that it takes around 1.2 seconds to compute the 42nd Fibonacci number.
Spawned domains can be joined using the join function to get their results. The join function waits for target domain to terminate. The following program computes the nth Fibonacci number twice in parallel.
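A sketch of such a program, consistent with the fib_twice.ml file benchmarked below:
(* fib_twice.ml *)
let n = int_of_string Sys.argv.(1)

let rec fib n = if n < 2 then 1 else fib (n - 1) + fib (n - 2)

let main () =
  let d1 = Domain.spawn (fun _ -> fib n) in
  let d2 = Domain.spawn (fun _ -> fib n) in
  let r1 = Domain.join d1 in
  Printf.printf "fib(%d) = %d\n" n r1;
  let r2 = Domain.join d2 in
  Printf.printf "fib(%d) = %d\n" n r2

let _ = main ()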
The program spawns two domains which compute the nth Fibonacci number. The spawn function returns a Domain.t value which can be joined to get the result of the parallel computation. The join function blocks until the computation runs to completion.
$ ocamlopt -o fib_twice.exe fib_twice.ml
$ ./fib_twice.exe 42
fib(42) = 433494437
fib(42) = 433494437
$ hyperfine './fib_twice.exe 42'
Benchmark 1: ./fib_twice.exe 42
  Time (mean ± sd):     1.249 s ±  0.025 s    [User: 2.451 s, System: 0.012 s]
  Range (min … max):    1.221 s …  1.290 s    10 runs
As one can see, computing the nth Fibonacci number twice took almost the same time as computing it once, thanks to parallelism.
Let us attempt to parallelise the Fibonacci function. The two recursive calls may be executed in parallel. However, naively parallelising the recursive calls by spawning domains for each one will not work as it spawns too many domains.
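A sketch of such a naive attempt, consistent with the fib_par1.ml file below:
(* fib_par1.ml *)
let n = int_of_string Sys.argv.(1)

let rec fib_par n =
  if n < 2 then 1
  else
    let d1 = Domain.spawn (fun _ -> fib_par (n - 1)) in
    let d2 = Domain.spawn (fun _ -> fib_par (n - 2)) in
    Domain.join d1 + Domain.join d2

let () = Printf.printf "fib(%d) = %d\n" n (fib_par n)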
$ ocamlopt -o fib_par1.exe fib_par1.ml
$ ./fib_par1.exe 42
Fatal error: exception Failure("failed to allocate domain")
OCaml has a limit of 128 domains that can be active at the same time. An attempt to spawn more domains will raise an exception. How then can we parallelise the Fibonacci function?
The OCaml standard library provides only low-level primitives for concurrent and parallel programming, leaving high-level programming libraries to be developed and distributed outside the core compiler distribution. Domainslib is such a library for nested-parallel programming, which is epitomised by the parallelism available in the recursive Fibonacci computation. Let us use domainslib to parallelise the recursive Fibonacci program. It is recommended that you install domainslib using the opam package manager. This tutorial uses domainslib version 0.5.0.
Domainslib provides an async/await mechanism for spawning parallel tasks and awaiting their results. On top of this mechanism, domainslib provides parallel iterators. At its core, domainslib has an efficient implementation of work-stealing queue in order to efficiently share tasks with other domains. A parallel implementation of the Fibonacci program is given below.
(* fib_par2.ml *)
let num_domains = int_of_string Sys.argv.(1)
let n = int_of_string Sys.argv.(2)

let rec fib n = if n < 2 then 1 else fib (n - 1) + fib (n - 2)

module T = Domainslib.Task

let rec fib_par pool n =
  if n > 20 then begin
    let a = T.async pool (fun _ -> fib_par pool (n-1)) in
    let b = T.async pool (fun _ -> fib_par pool (n-2)) in
    T.await pool a + T.await pool b
  end else fib n

let main () =
  let pool = T.setup_pool ~num_domains:(num_domains - 1) () in
  let res = T.run pool (fun _ -> fib_par pool n) in
  T.teardown_pool pool;
  Printf.printf "fib(%d) = %d\n" n res

let _ = main ()
The program takes the number of domains and the input to the Fibonacci function as the first and the second command-line arguments respectively.
Let us start with the main function. First, we set up a pool of domains on which the nested parallel tasks will run. The domain invoking the run function will also participate in executing the tasks submitted to the pool. We invoke the parallel Fibonacci function fib_par in the run function. Finally, we tear down the pool and print the result.
For sufficiently large inputs (n > 20), the fib_par function spawns the left and the right recursive calls asynchronously in the pool using the async function. The async function returns a promise for the result. The result of an asynchronous computation is obtained by awaiting the promise using the await function. The await function call blocks until the promise is resolved.
For small inputs, the fib_par function simply calls the sequential Fibonacci function fib. It is important to switch to sequential mode for small problem sizes. If not, the cost of parallelisation will outweigh the work available.
For simplicity, we use ocamlfind to compile this program. It is recommended that the users use dune to build their programs that utilise libraries installed through opam.
$ ocamlfind ocamlopt -package domainslib -linkpkg -o fib_par2.exe fib_par2.ml
$ ./fib_par2.exe 1 42
fib(42) = 433494437
$ hyperfine './fib.exe 42' './fib_par2.exe 2 42' \
    './fib_par2.exe 4 42' './fib_par2.exe 8 42'
Benchmark 1: ./fib.exe 42
  Time (mean ± sd):     1.217 s ±  0.018 s    [User: 1.203 s, System: 0.004 s]
  Range (min … max):    1.202 s …  1.261 s    10 runs

Benchmark 2: ./fib_par2.exe 2 42
  Time (mean ± sd):    628.2 ms ±   2.9 ms    [User: 1243.1 ms, System: 4.9 ms]
  Range (min … max):   625.7 ms … 634.5 ms    10 runs

Benchmark 3: ./fib_par2.exe 4 42
  Time (mean ± sd):    337.6 ms ±  23.4 ms    [User: 1321.8 ms, System: 8.4 ms]
  Range (min … max):   318.5 ms … 377.6 ms    10 runs

Benchmark 4: ./fib_par2.exe 8 42
  Time (mean ± sd):    250.0 ms ±   9.4 ms    [User: 1877.1 ms, System: 12.6 ms]
  Range (min … max):   242.5 ms … 277.3 ms    11 runs

Summary
  './fib_par2.exe 8 42' ran
    1.35 ± 0.11 times faster than './fib_par2.exe 4 42'
    2.51 ± 0.10 times faster than './fib_par2.exe 2 42'
    4.87 ± 0.20 times faster than './fib.exe 42'
The results show that, with 8 domains, the parallel Fibonacci program runs 4.87 times faster than the sequential version.
Many numerical algorithms use for-loops. The parallel-for primitive provides a straightforward way to parallelise such code. Let us take the spectral-norm benchmark from the computer language benchmarks game and parallelise it. The sequential version of the program is given below.
Observe that the program has nested loops in eval_A_times_u and eval_At_times_u. Each iteration of the outer loop body reads from u but writes to disjoint memory locations in v. Hence, the iterations of the outer loop are not dependent on each other and can be executed in parallel.
The parallel version of spectral norm is shown below.
(* spectralnorm_par.ml *)
let num_domains = try int_of_string Sys.argv.(1) with _ -> 1
let n = try int_of_string Sys.argv.(2) with _ -> 32

let eval_A i j = 1. /. float((i+j)*(i+j+1)/2+i+1)

module T = Domainslib.Task

let eval_A_times_u pool u v =
  let n = Array.length v - 1 in
  T.parallel_for pool ~start:0 ~finish:n ~body:(fun i ->
    let vi = ref 0. in
    for j = 0 to n do
      vi := !vi +. eval_A i j *. u.(j)
    done;
    v.(i) <- !vi)

let eval_At_times_u pool u v =
  let n = Array.length v - 1 in
  T.parallel_for pool ~start:0 ~finish:n ~body:(fun i ->
    let vi = ref 0. in
    for j = 0 to n do
      vi := !vi +. eval_A j i *. u.(j)
    done;
    v.(i) <- !vi)

let eval_AtA_times_u pool u v =
  let w = Array.make (Array.length u) 0.0 in
  eval_A_times_u pool u w;
  eval_At_times_u pool w v

let () =
  let pool = T.setup_pool ~num_domains:(num_domains - 1) () in
  let u = Array.make n 1.0 and v = Array.make n 0.0 in
  T.run pool (fun _ ->
    for _i = 0 to 9 do
      eval_AtA_times_u pool u v;
      eval_AtA_times_u pool v u
    done);
  let vv = ref 0.0 and vBv = ref 0.0 in
  for i = 0 to n - 1 do
    vv := !vv +. v.(i) *. v.(i);
    vBv := !vBv +. u.(i) *. v.(i)
  done;
  T.teardown_pool pool;
  Printf.printf "%0.9f\n" (sqrt(!vBv /. !vv))
Observe that the parallel_for function is isomorphic to the for-loop in the sequential version. No other change is required except for the boilerplate code to set up and tear down the pools.
$ ocamlopt -o spectralnorm.exe spectralnorm.ml
$ ocamlfind ocamlopt -package domainslib -linkpkg -o spectralnorm_par.exe \
    spectralnorm_par.ml
$ hyperfine './spectralnorm.exe 4096' './spectralnorm_par.exe 2 4096' \
    './spectralnorm_par.exe 4 4096' './spectralnorm_par.exe 8 4096'
Benchmark 1: ./spectralnorm.exe 4096
  Time (mean ± sd):     1.989 s ±  0.013 s    [User: 1.972 s, System: 0.007 s]
  Range (min … max):    1.975 s …  2.018 s    10 runs

Benchmark 2: ./spectralnorm_par.exe 2 4096
  Time (mean ± sd):     1.083 s ±  0.015 s    [User: 2.140 s, System: 0.009 s]
  Range (min … max):    1.064 s …  1.102 s    10 runs

Benchmark 3: ./spectralnorm_par.exe 4 4096
  Time (mean ± sd):    698.7 ms ±  10.3 ms    [User: 2730.8 ms, System: 18.3 ms]
  Range (min … max):   680.9 ms … 721.7 ms    10 runs

Benchmark 4: ./spectralnorm_par.exe 8 4096
  Time (mean ± sd):    921.8 ms ±  52.1 ms    [User: 6711.6 ms, System: 51.0 ms]
  Range (min … max):   838.6 ms … 989.2 ms    10 runs

Summary
  './spectralnorm_par.exe 4 4096' ran
    1.32 ± 0.08 times faster than './spectralnorm_par.exe 8 4096'
    1.55 ± 0.03 times faster than './spectralnorm_par.exe 2 4096'
    2.85 ± 0.05 times faster than './spectralnorm.exe 4096'
On the author’s machine, the program scales reasonably well up to 4 domains but performs worse with 8 domains. Recall that the machine only has 4 physical cores. Debugging and fixing this performance issue is beyond the scope of this tutorial.
An important aspect of the scalability of parallel OCaml programs is the scalability of the garbage collector (GC). The OCaml GC is designed to have both low latency and good parallel scalability. OCaml has a generational garbage collector with a small minor heap and a large major heap. New objects (up to a certain size) are allocated in the minor heap. Each domain has its own domain-local minor heap arena into which new objects are allocated without synchronising with the other domains. When a domain exhausts its minor heap arena, it calls for a stop-the-world collection of the minor heaps. In the stop-the-world section, all the domains collect their minor heap arenas in parallel, evacuating the survivors to the major heap.
For the major heap, each domain maintains domain-local, size-segmented pools of memory into which large objects and survivors from the minor collection are allocated. Having domain-local pools avoids synchronisation for most major heap allocations. The major heap is collected by a concurrent mark-and-sweep algorithm that involves a few short stop-the-world pauses for each major cycle.
Overall, the users should expect the garbage collector to scale well with increasing number of domains, with the latency remaining low. For more information on the design and evaluation of the garbage collector, please have a look at the ICFP 2020 paper on Retrofitting Parallelism onto OCaml.
Modern processors and compilers aggressively optimise programs. These optimisations speed up sequential programs without otherwise affecting their observable behaviour, but they can make surprising behaviours visible in parallel programs. To benefit from these optimisations, OCaml adopts a relaxed memory model that precisely specifies which of these relaxed behaviours programs may observe. While relaxed memory models are difficult to program against directly, the OCaml memory model provides recipes that retain the simplicity of sequential reasoning.
Firstly, immutable values may be freely shared between multiple domains and may be accessed in parallel. For mutable data structures such as reference cells, arrays and mutable record fields, programmers should avoid data races. Reference cells, arrays and mutable record fields are said to be non-atomic data structures. A data race is said to occur when two domains concurrently access a non-atomic memory location without synchronisation and at least one of the accesses is a write. OCaml provides a number of ways to introduce synchronisation including atomic variables (section 9.7) and mutexes (section 9.5).
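For instance, a minimal sketch of an atomic counter that several domains may increment without introducing a data race:
let counter = Atomic.make 0

let () =
  let d1 = Domain.spawn (fun _ -> for _ = 1 to 1000 do Atomic.incr counter done) in
  let d2 = Domain.spawn (fun _ -> for _ = 1 to 1000 do Atomic.incr counter done) in
  Domain.join d1; Domain.join d2;
  Printf.printf "%d\n" (Atomic.get counter)   (* always prints 2000 *)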
Importantly, for data race free (DRF) programs, OCaml provides sequentially consistent (SC) semantics: the observed behaviour of such programs can be explained by an interleaving of the operations from the different domains. This property is known as the DRF-SC guarantee. Moreover, in OCaml, the DRF-SC guarantee is modular: if a part of a program is data race free, then the OCaml memory model ensures that this part behaves sequentially consistently, even if other parts of the program have data races. Even for programs with data races, OCaml provides strong guarantees: while the user may observe non-sequentially-consistent behaviours, there are no crashes.
For more details on the relaxed behaviours in the presence of data races, please have a look at the chapter on the hard bits of the memory model (chapter 10).
Domains may perform blocking synchronisation with the help of the Mutex, Condition and Semaphore modules. These modules are the same as those used to synchronise threads created by the threads library (chapter 33). For clarity, in the rest of this chapter, we shall refer to the threads created by the threads library as systhreads. The following program implements a concurrent stack using a mutex and a condition variable.
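A sketch of such a concurrent stack (the field names follow the description below; this is an illustration, not necessarily the exact listing):
type 'a stack = {
  mutable contents : 'a list;
  mutex : Mutex.t;
  nonempty : Condition.t;
}

let create () =
  { contents = []; mutex = Mutex.create (); nonempty = Condition.create () }

let push s v =
  Mutex.lock s.mutex;
  s.contents <- v :: s.contents;
  Condition.signal s.nonempty;          (* wake up one waiting domain, if any *)
  Mutex.unlock s.mutex

let pop s =
  Mutex.lock s.mutex;
  while s.contents = [] do
    Condition.wait s.nonempty s.mutex   (* releases the mutex while waiting *)
  done;
  let v = List.hd s.contents in
  s.contents <- List.tl s.contents;
  Mutex.unlock s.mutex;
  v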
The concurrent stack is implemented using a record with three fields: a mutable field contents which stores the elements in the stack, a mutex to control access to the contents field, and a condition variable nonempty, which is used to signal blocked domains waiting for the stack to become non-empty.
The push operation locks the mutex and updates the contents field with a new list whose head is the element being pushed and whose tail is the old list. The condition variable nonempty is signalled while the lock is held, in order to wake up any domains waiting on this condition. If there are waiting domains, one of them is woken up. If there are none, then the signal operation has no effect.
The pop operation locks the mutex and checks whether the stack is empty. If so, the calling domain waits on the condition variable nonempty using the wait primitive. The wait call atomically suspends the execution of the current domain and unlocks the mutex. When this domain is woken up again (when the wait call returns), it holds the lock on the mutex. The domain then reads the contents of the stack again. If the pop operation sees that the stack is non-empty, it updates the contents to the tail of the old list and returns the head.
The use of the mutex to control access to the shared resource contents introduces sufficient synchronisation between the domains using the stack. Hence, there are no data races when multiple domains use the stack in parallel.
How do systhreads interact with domains? The systhreads created on a particular domain remain pinned to that domain. Only one systhread at a time is allowed to run OCaml code on a particular domain. However, systhreads belonging to a particular domain may run C library or system code in parallel. Systhreads belonging to different domains may execute in parallel.
When using systhreads, the thread created for executing the computation given to Domain.spawn is also treated as a systhread. For example, the following program creates a total of two domains (including the initial domain), each running two systhreads (including the initial systhread of each domain).
(* dom_thr.ml *)
let m = Mutex.create ()
let r = ref None (* protected by m *)

let task () =
  let my_thr_id = Thread.(id (self ())) in
  let my_dom_id :> int = Domain.self () in
  Mutex.lock m;
  begin match !r with
  | None ->
      Printf.printf "Thread %d running on domain %d saw initial write\n%!"
        my_thr_id my_dom_id
  | Some their_thr_id ->
      Printf.printf "Thread %d running on domain %d saw the write by thread %d\n%!"
        my_thr_id my_dom_id their_thr_id
  end;
  r := Some my_thr_id;
  Mutex.unlock m

let task' () =
  let t = Thread.create task () in
  task ();
  Thread.join t

let main () =
  let d = Domain.spawn task' in
  task' ();
  Domain.join d

let _ = main ()
$ ocamlopt -I +threads unix.cmxa threads.cmxa -o dom_thr.exe dom_thr.ml
$ ./dom_thr.exe
Thread 1 running on domain 1 saw initial write
Thread 0 running on domain 0 saw the write by thread 1
Thread 2 running on domain 1 saw the write by thread 0
Thread 3 running on domain 0 saw the write by thread 2
This program uses a shared reference cell protected by a mutex to communicate between the different systhreads running on two different domains. The systhread identifiers uniquely identify systhreads in the program. The initial domain has domain id 0, and its initial systhread has thread id 0. The newly spawned domain gets domain id 1.
During parallel execution with multiple domains, C code running on a domain may run in parallel with any C code running in other domains even if neither of them has released the “domain lock”. Prior to OCaml 5.0, C bindings may have assumed that if the OCaml runtime lock is not released, then it would be safe to manipulate global C state (e.g. initialise a function-local static value). This is no longer true in the presence of parallel execution with multiple domains.
Mutexes, condition variables and semaphores are used to implement blocking synchronisation between domains. For non-blocking synchronisation, OCaml provides atomic variables through the Atomic module. As the name suggests, non-blocking synchronisation does not provide mechanisms for suspending and waking up domains. On the other hand, the primitives used in non-blocking synchronisation are often compiled to the atomic read-modify-write instructions provided by the hardware. As an example, the following program increments a non-atomic counter and an atomic counter in parallel.
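(* incr.ml -- a sketch of such a program; the structure (two domains, each
   performing n increments of both counters) is chosen to match the output
   shown below, and the names are illustrative. *)
let () =
  let n = int_of_string Sys.argv.(1) in
  let non_atomic = ref 0 in
  let atomic = Atomic.make 0 in
  let work () =
    for _ = 1 to n do
      incr non_atomic;   (* non-atomic increment *)
      Atomic.incr atomic (* atomic increment *)
    done
  in
  let d = Domain.spawn work in
  work ();
  Domain.join d;
  Printf.printf "Non-atomic ref count: %d\n" !non_atomic;
  Printf.printf "Atomic ref count: %d\n" (Atomic.get atomic)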
$ ocamlopt -o incr.exe incr.ml
$ ./incr.exe 1_000_000
Non-atomic ref count: 1187193
Atomic ref count: 2000000
Observe that the result from using the non-atomic counter is lower than what one would naively expect. This is because the non-atomic incr function is equivalent to:
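(* A sketch of the expansion; the variable name curr matches the discussion below. *)
let incr r =
  let curr = !r in (* load the current value *)
  r := curr + 1    (* store the incremented value *)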
Observe that the load and the store are two separate operations, and the increment operation as a whole is not performed atomically. When two domains execute this code in parallel, both of them may read the same value of the counter curr and update it to curr + 1. Hence, instead of two increments, the effect will be that of a single increment. On the other hand, the atomic counter performs the load and the store atomically with the help of hardware support for atomicity. The atomic counter returns the expected result.
Atomic variables can be used for low-level synchronisation between domains. The following example uses an atomic variable to exchange a message between two domains.
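(* A sketch of such an exchange: the receiver spins until the sender publishes
   a message in the atomic reference r; the message value is illustrative. *)
let r = Atomic.make None

let sender () = Atomic.set r (Some "hello")

let rec receiver () =
  match Atomic.get r with
  | None -> Domain.cpu_relax (); receiver ()
  | Some m -> m

let main () =
  let d = Domain.spawn receiver in
  sender ();
  print_endline (Domain.join d)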
While the sender and the receiver compete to access r, this is not a data race since r is an atomic reference.
The Atomic module is used to implement non-blocking, lock-free data structures. The following program implements a lock-free stack.
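One possible implementation, consistent with the description that follows (the type and function names are illustrative):

type 'a t = 'a list Atomic.t

let make () : 'a t = Atomic.make []

let rec push s v =
  let cur = Atomic.get s in
  if not (Atomic.compare_and_set s cur (v :: cur)) then begin
    Domain.cpu_relax ();
    push s v
  end

let rec pop s =
  let cur = Atomic.get s in
  match cur with
  | [] -> None
  | v :: rest ->
      if Atomic.compare_and_set s cur rest then Some v
      else begin Domain.cpu_relax (); pop s end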
The atomic stack is represented by an atomic reference that holds a list. The push and pop operations use the compare_and_set primitive to attempt to atomically update the atomic reference. The expression compare_and_set r seen v sets the value of r to v if and only if its current value is physically equal to seen. Importantly, the comparison and the update occur atomically. The expression evaluates to true if the comparison succeeded (and the update happened) and false otherwise.
If the compare_and_set fails, then some other domain is also attempting to update the atomic reference at the same time. In this case, the push and pop operations call Domain.cpu_relax to back off for a short duration, allowing competing domains to make progress, before retrying the failed operation. This lock-free stack implementation is also known as a Treiber stack.
This chapter describes the details of the OCaml relaxed memory model. The relaxed memory model describes what values an OCaml program is allowed to witness when reading a memory location. If you are interested in high-level parallel programming in OCaml, please have a look at chapter 9 on parallel programming.
This chapter is aimed at experts who would like to understand the details of the OCaml memory model from a practitioner’s perspective. For a formal definition of the OCaml memory model, its guarantees and the compilation to hardware memory models, please have a look at the PLDI 2018 paper on Bounding Data Races in Space and Time. The memory model presented in this chapter is an extension of the one presented in the PLDI 2018 paper. This chapter also covers some pragmatic aspects of the memory model that are not covered in the paper.
The simplest memory model that we could give to our programs is sequential consistency. Under sequential consistency, the values observed by the program can be explained through some interleaving of the operations from different domains in the program. For example, consider the following program with two domains d1 and d2 executing in parallel:
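(* A sketch of the program: a and b are shared reference cells, initially 1. *)
let a = ref 1
let b = ref 1

let d1 () =
  let r1 = !a * 2 in
  let r2 = !b in
  let r3 = !a * 2 in
  (r1, r2, r3)

let d2 () = b := 0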
The reference cells a and b are initially 1. The user may observe r1 = 2, r2 = 0, r3 = 2 if the write to b in d2 occurred before the read of b in d1. Here, the observed behaviour can be explained in terms of interleaving of the operations from different domains.
Let us now assume that a and b are aliases of each other.
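(* The same program, sketched with a and b as aliases of a single cell ab;
   the main function asserts the property discussed below. *)
let ab = ref 1
let a = ab
let b = ab

let d1 () =
  let r1 = !a * 2 in
  let r2 = !b in
  let r3 = !a * 2 in
  (r1, r2, r3)

let d2 () = b := 0

let main () =
  let h = Domain.spawn d2 in
  let (_r1, r2, r3) = d1 () in
  Domain.join h;
  if r2 = 0 then assert (r3 = 0)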
In the above program, the variables ab, a and b refer to the same reference cell. One would expect that the assertion in the main function will never fail. The reasoning is that if r2 is 0, then the write in d2 occurred before the read of b in d1. Given that a and b are aliases, the second read of a in d1 should also return 0.
Surprisingly, this assertion may fail in OCaml due to compiler optimisations. The OCaml compiler observes the common sub-expression !a * 2 in d1 and optimises the program to:
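(* d1 after common sub-expression elimination (sketch): the second occurrence
   of !a * 2 is replaced by the previously computed r1. *)
let d1 () =
  let r1 = !a * 2 in
  let r2 = !b in
  let r3 = r1 in
  (r1, r2, r3)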
This optimisation is known as common sub-expression elimination (CSE). Such optimisations are valid and necessary for good performance, and they do not change the sequential meaning of the program. However, CSE breaks sequential reasoning in the presence of parallelism.
In the optimised program above, even if the write to b in d2 occurs between the first and the second reads in d1, the program will observe the value 2 for r3, causing the assertion to fail. The observed behaviour cannot be explained by an interleaving of operations from different domains in the source program. Thus, the CSE optimisation is said to be invalid under sequential consistency.
One way to explain the observed behaviour is as if the operations performed on a domain were reordered. For example, suppose the second and the third reads in d1 were reordered as follows:
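let d1 () =
  let r1 = !a * 2 in
  let r3 = !a * 2 in (* moved before the read of b *)
  let r2 = !b in
  (r1, r2, r3)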
With this reordering, the observed behaviour (2, 0, 2) returned by d1 can be explained.
The other source of reordering is by the hardware. Modern hardware architectures have complex cache hierarchies with multiple levels of cache. While cache coherence ensures that reads and writes to a single memory location respect sequential consistency, the guarantees on programs that operate on different memory locations are much weaker. Consider the following program:
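(* A sketch of the program: a and b are initially 0; each domain writes one
   cell and then reads the other. *)
let a = ref 0
let b = ref 0

let d1 () = a := 1; !b
let d2 () = b := 1; !a

let main () =
  let h1 = Domain.spawn d1 in
  let h2 = Domain.spawn d2 in
  let r1 = Domain.join h1 in
  let r2 = Domain.join h2 in
  assert (not (r1 = 0 && r2 = 0))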
Under sequential consistency, we would never expect the assertion to fail. However, even on x86, which offers much stronger guarantees than ARM, the writes performed at a CPU core are not immediately published to all of the other cores. Since a and b are different memory locations, the reads of a and b may both witness the initial values, leading to the assertion failure.
This behaviour can be explained if a load is allowed to be reordered before a preceding store to a different memory location. This reordering can happen due to the presence of in-core store-buffers on modern processors. Each core effectively has a FIFO buffer of pending writes to avoid the need to block while a write completes. The writes to a and b may be in the store-buffers of cores c1 and c2 running the domains d1 and d2, respectively. The reads of b and a running on the cores c1 and c2, respectively, will not see the writes if the writes have not propagated from the buffers to the main memory.
The aim of the OCaml relaxed memory model is to describe precisely which orderings are preserved in OCaml programs. The compiler and the hardware are free to optimise the program as long as they respect the ordering guarantees of the memory model. While programming directly against the relaxed memory model is difficult, the memory model also describes the conditions under which a program only exhibits sequentially consistent behaviours. This guarantee is known as data race freedom implies sequential consistency (DRF-SC). In this section, we describe this guarantee. In order to do so, we first need a number of definitions.
OCaml classifies memory locations into atomic and non-atomic locations. Reference cells, array fields and mutable record fields are non-atomic memory locations. Immutable objects are non-atomic locations with an initialising write but no further updates. Atomic memory locations are those that are created using the Atomic module.
Let us imagine that OCaml programs are executed by an abstract machine that executes one action at a time, arbitrarily picking one of the available domains at each step. We classify actions into two kinds: inter-domain and intra-domain. An inter-domain action is one which can be observed by, or influenced by, actions on other domains; examples include reads and writes of non-atomic and atomic memory locations, operations on mutexes, and spawning and joining domains.
On the other hand, intra-domain actions can neither be observed by nor influence the execution of other domains. Examples include evaluating an arithmetic expression, calling a function, etc. The memory model specification ignores such intra-domain actions. In the rest of this chapter, we use the term action to mean inter-domain actions.
A totally ordered list of actions executed by the abstract machine is called an execution trace. There might be several possible execution traces for a given program due to non-determinism.
For a given execution trace, we define an irreflexive, transitive happens-before relation that captures the causality between actions in the OCaml program. The happens-before relation is defined as the smallest transitive relation satisfying, in particular, the following properties: actions performed by the same domain are in happens-before order (program order); an atomic write happens-before any atomic read that observes the value it wrote; unlocking a mutex happens-before any subsequent locking of the same mutex; the spawn of a domain happens-before every action of the spawned domain; and every action of a domain happens-before the completion of a join on that domain.
In a given trace, two actions are said to be conflicting if they access the same non-atomic location, at least one is a write and neither is an initialising write to that location.
We say that a program has a data race if there exists some execution trace of the program with two conflicting actions and there does not exist a happens-before relationship between the conflicting accesses. A program without data races is said to be correctly synchronised.
DRF-SC guarantee: A program without data races will only exhibit sequentially consistent behaviours.
DRF-SC is a strong guarantee for programmers. Programmers can use sequential reasoning, i.e., reasoning by executing one inter-domain action after another, to determine whether their program has a data race. In particular, they do not need to reason about the reorderings described in section 10.1 in order to determine whether their program has a data race. Once a particular program has been determined to be data race free, they do not need to worry about reorderings in their code at all.
In this section, we look at examples of using DRF-SC for program reasoning. We use functions named dN to represent domains executing in parallel with each other. That is, we assume that there is a main function that runs the dN functions in parallel as follows:
let main () =
  let h1 = Domain.spawn d1 in
  let h2 = Domain.spawn d2 in
  ...
  ignore @@ Domain.join h1;
  ignore @@ Domain.join h2
Here is a simple example with a data race:
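(* A minimal racy program (sketch). *)
let r = ref 0
let d1 () = r := 1
let d2 () = !r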
Here, r is a non-atomic reference. The two domains race to access it, and the access in d1 is a write. Since there is no happens-before relationship between the conflicting accesses, there is a data race.
Both of the programs that we saw in section 10.1 have data races. It is therefore no surprise that they exhibit non-sequentially-consistent behaviours.
Accessing disjoint array indices or distinct mutable fields of a record in parallel is not a data race. For example, consider the following two programs:
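(* Program 1 (sketch, with illustrative names): parallel writes to distinct
   cells of the same array. *)
let a = Array.make 2 0
let d1 () = a.(0) <- 1
let d2 () = a.(1) <- 2

(* Program 2: parallel writes to distinct mutable fields of the same record. *)
type pair = { mutable left : int; mutable right : int }
let p = { left = 0; right = 0 }
let d1 () = p.left <- 1
let d2 () = p.right <- 2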
Neither program has a data race.
Concurrent accesses to atomic locations do not constitute a data race.
Atomic variables may be used to implement non-blocking communication between domains, as in the following message-passing example:
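(* A sketch: d1 publishes msg and then sets the atomic flag; d2 reads msg only
   if it has seen the flag set. The labels a, b, c, d name the actions
   discussed below. *)
let msg = ref 0
let flag = Atomic.make false

let d1 () =
  msg := 42;            (* a *)
  Atomic.set flag true  (* b *)

let d2 () =
  if Atomic.get flag    (* c *)
  then Some !msg        (* d *)
  else None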
Observe that the actions a and d write and read from the same non-atomic location msg, respectively, and hence are conflicting. We need to establish that a and d have a happens-before relationship in order to show that this program does not have a data race.
The action a precedes b in program order, and hence, a happens-before b. Similarly, c happens-before d. If d2 observes the atomic variable flag to be true, then b precedes c in happens-before order. Since happens-before is transitive, the conflicting actions a and d are in happens-before order. If d2 observes the flag to be false, then the read of msg is not done. Hence, there is no conflicting access in this execution trace. Hence, the program does not have a data race.
The following modified version of the message-passing program does have a data race:
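(* A sketch: d2 now reads msg unconditionally. *)
let msg = ref 0
let flag = Atomic.make false

let d1 () =
  msg := 42;            (* a *)
  Atomic.set flag true  (* b *)

let d2 () =
  let seen = Atomic.get flag in (* c *)
  let v = !msg in               (* d *)
  (seen, v)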
The domain d2 now unconditionally reads the non-atomic reference msg. Consider the execution trace:
Atomic.get flag;      (* c *)
!msg;                 (* d *)
msg := 42;            (* a *)
Atomic.set flag true  (* b *)
In this trace, d and a are conflicting operations. But there is no happens-before relationship between them. Hence, this program has a data race.
The OCaml memory model offers strong guarantees even for programs with data races. It offers what is known as the local data race freedom sequential consistency (LDRF-SC) guarantee. A formal definition of this property is beyond the scope of this chapter. Interested readers are encouraged to read the PLDI 2018 paper on Bounding Data Races in Space and Time.
Informally, LDRF-SC says that the data race free parts of the program remain sequentially consistent. That is, even if the program has data races, those parts of the program that are disjoint from the parts with data races are amenable to sequential reasoning.
Consider the following snippet:
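(* A fragment: c is newly allocated, written, and then read. *)
let c = ref 0 in
c := 42;
let a = !c in
...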
Observe that c is a newly allocated reference. Can the read of c return a value which is not 42? That is, can a ever be anything other than 42? Surprisingly, in the C++ and Java memory models, the answer is yes. Under the C++ memory model, if the program has a data race, even in unrelated parts, then the semantics of the whole program is undefined. If this snippet were linked with a library that had a data race, then, under the C++ memory model, the read may return any value. Since data races on unrelated locations can affect program behaviour, we say that the C++ memory model is not bounded in space.
Unlike C++, the Java memory model is bounded in space. But the Java memory model is not bounded in time: data races in the future may affect past behaviour. For example, consider the translation of this example to Java. We assume a prior definition of class C { int x; } and a shared non-volatile variable C g. Now the snippet may be part of a larger program with parallel threads:
// Thread 1
C c = new C();
c.x = 42;
a = c.x;
g = c;

// Thread 2
g.x = 7;
The read of c.x and the write to g in the first thread are made to separate memory locations. Hence, the Java memory model allows them to be reordered. As a result, the write in the second thread may occur before the read of c.x, and hence the read of c.x may return 7.
The OCaml equivalent of the Java code above is:
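(* A sketch of the equivalent OCaml code, assuming a shared cell
   g : int ref option ref (initially ref None) defined elsewhere. *)

(* Domain 1 *)
let c = ref 0 in
c := 42;
let a = !c in
g := Some c

(* Domain 2 *)
match !g with
| Some c -> c := 7
| None -> ()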
Observe that there is a data race on both g and c. Consider only the first three instructions in the snippet:
let c = ref 0 in
c := 42;
let a = !c in
...
The OCaml memory model is bounded both in space and in time. The only memory location in this fragment is c. Reasoning only about this fragment, neither the data race in space (the race on g) nor the data race in time (the future race on c) is relevant. Hence, the fragment will have sequentially consistent behaviour, and the value returned by !c will be 42.
The OCaml memory model guarantees that even for programs with data races, memory safety is preserved. While programs with data races may observe non-sequentially consistent behaviours, they will not crash.
In this section, we describe the semantics of the OCaml memory model. A formal definition of the operational view of the memory model is presented in section 3 of the PLDI 2018 paper on Bounding Data Races in Space and Time. This section presents an informal description of the memory model with the help of an example.
Given an OCaml program, which may possibly contain data races, the operational semantics specifies the values that may be observed by the read of a memory location. For simplicity, we restrict the actions to just the accesses to atomic and non-atomic locations, ignoring domain spawn and join operations and the operations on mutexes.
We describe the semantics of the OCaml memory model in a straightforward small-step operational manner. That is, the semantics is described by an abstract machine that executes one action at a time, arbitrarily picking one of the available domains at each step. This is similar to the abstract machine that we had used to describe the happens-before relationship in section 10.2.2.
In the semantics, we model non-atomic locations as finite maps from timestamps t to values v. We take timestamps to be rational numbers. The timestamps are totally ordered and dense: there is a timestamp between any two distinct timestamps.
For example,
a: [t1 -> 1; t2 -> 2]
b: [t3 -> 3; t4 -> 4; t5 -> 5]
c: [t6 -> 5; t7 -> 6; t8 -> 7]
represents three non-atomic locations a, b and c and their histories. The location a has two writes at timestamps t1 and t2 with values 1 and 2, respectively. When we write a: [t1 -> 1; t2 -> 2], we assume that t1 < t2. We assume that the locations are initialised with a history that has a single entry at timestamp 0 that maps to the initial value.
Each domain is equipped with a frontier, which is a map from non-atomic locations to timestamps. Intuitively, each domain’s frontier records, for each non-atomic location, the latest write known to that domain. More recent writes may have occurred, but they are not guaranteed to be visible to the domain.
For example,
d1: [a -> t1; b -> t3; c -> t7]
d2: [a -> t1; b -> t4; c -> t7]
represents two domains d1 and d2 and their frontiers.
Let us now define the semantics of non-atomic reads and writes. Suppose domain d1 performs a read of b. A non-atomic read may return an arbitrary element of the history of that location, as long as it is not older than the timestamp in the domain’s frontier. In this case, since d1’s frontier at b is t3, the read may return the value 3, 4 or 5. A non-atomic read does not change the frontier of the current domain.
Suppose domain d2 writes the value 10 to c (c := 10). We pick a new timestamp t9 for this write such that it is later than d2’s frontier at c. Note a subtlety here: this new timestamp might not be later than everything else in the history, but merely later than any other write known to the writing domain. Hence, t9 may be inserted in c’s history either (a) between t7 and t8 or (b) after t8. Let us pick the former option for our discussion. Since the new write appears after all the writes known by the domain d2 to the location c, d2’s frontier at c is also updated. The new state of the abstract machine is:
(* Non-atomic locations *)
a: [t1 -> 1; t2 -> 2]
b: [t3 -> 3; t4 -> 4; t5 -> 5]
c: [t6 -> 5; t7 -> 6; t9 -> 10; t8 -> 7] (* new write at t9 *)

(* Domains *)
d1: [a -> t1; b -> t3; c -> t7]
d2: [a -> t1; b -> t4; c -> t9] (* frontier updated at c *)
Atomic locations carry not only values but also synchronisation information. We model an atomic location as a pair of the value held by the location and a frontier. The frontier models the synchronisation information, which is merged with the frontiers of the domains that operate on the location. In this way, non-atomic writes made by one domain can become known to another domain by communicating through an atomic location.
For example,
(* Atomic locations *)
A: 10, [a -> t1; b -> t5; c -> t7]
B: 5,  [a -> t2; b -> t4; c -> t6]
shows two atomic variables A and B with values 10 and 5, respectively, and frontiers of their own. We use upper-case variable names to indicate atomic locations.
During an atomic read, the frontier of the location is merged into that of the domain performing the read. For example, suppose d1 reads B. The read returns 5, and d1’s frontier is updated by merging it with B’s frontier, choosing the later timestamp for each location. The abstract machine state before the atomic read is:
(* Non-atomic locations *)
a: [t1 -> 1; t2 -> 2]
b: [t3 -> 3; t4 -> 4; t5 -> 5]
c: [t6 -> 5; t7 -> 6; t9 -> 10; t8 -> 7]

(* Domains *)
d1: [a -> t1; b -> t3; c -> t7]
d2: [a -> t1; b -> t4; c -> t9]

(* Atomic locations *)
A: 10, [a -> t1; b -> t5; c -> t7]
B: 5,  [a -> t2; b -> t4; c -> t6]
As a result of the atomic read, the abstract machine state is updated to:
(* Non-atomic locations *)
a: [t1 -> 1; t2 -> 2]
b: [t3 -> 3; t4 -> 4; t5 -> 5]
c: [t6 -> 5; t7 -> 6; t9 -> 10; t8 -> 7]

(* Domains *)
d1: [a -> t2; b -> t4; c -> t7] (* frontier updated at a and b *)
d2: [a -> t1; b -> t4; c -> t9]

(* Atomic locations *)
A: 10, [a -> t1; b -> t5; c -> t7]
B: 5,  [a -> t2; b -> t4; c -> t6]
During an atomic write, the value held by the atomic location is updated. The frontiers of both the writing domain and the location being written are updated to the merge of the two frontiers. For example, if d2 writes 20 to A in the current machine state, the machine state is updated to:
(* Non-atomic locations *)
a: [t1 -> 1; t2 -> 2]
b: [t3 -> 3; t4 -> 4; t5 -> 5]
c: [t6 -> 5; t7 -> 6; t9 -> 10; t8 -> 7]

(* Domains *)
d1: [a -> t2; b -> t4; c -> t7]
d2: [a -> t1; b -> t5; c -> t9] (* frontier updated at b *)

(* Atomic locations *)
A: 20, [a -> t1; b -> t5; c -> t9] (* value updated; frontier updated at c *)
B: 5,  [a -> t2; b -> t4; c -> t6]
Let us revisit an example from earlier (section 10.1), reproduced below.
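(* The program from section 10.1, repeated here (sketch). *)
let a = ref 0
let b = ref 0

let d1 () = a := 1; !b
let d2 () = b := 1; !a

let main () =
  let h1 = Domain.spawn d1 in
  let h2 = Domain.spawn d2 in
  let r1 = Domain.join h1 in
  let r2 = Domain.join h2 in
  assert (not (r1 = 0 && r2 = 0))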
This program has a data race on a and b, and hence, the program may exhibit non sequentially consistent behaviour. Let us use the semantics to show that the program may exhibit r1 = 0 && r2 = 0.
The initial state of the abstract machine is:
(* Non-atomic locations *)
a: [t0 -> 0]
b: [t1 -> 0]

(* Domains *)
d1: [a -> t0; b -> t1]
d2: [a -> t0; b -> t1]
There are several possible schedules for executing this program. Let us consider the following schedule:
1: a := 1  @ d1
2: b := 1  @ d2
3: !b      @ d1
4: !a      @ d2
After the first action a:=1 by d1, the machine state is:
(* Non-atomic locations *)
a: [t0 -> 0; t2 -> 1] (* new write at t2 *)
b: [t1 -> 0]

(* Domains *)
d1: [a -> t2; b -> t1] (* frontier updated at a *)
d2: [a -> t0; b -> t1]
After the second action b:=1 by d2, the machine state is:
(* Non-atomic locations *)
a: [t0 -> 0; t2 -> 1]
b: [t1 -> 0; t3 -> 1] (* new write at t3 *)

(* Domains *)
d1: [a -> t2; b -> t1]
d2: [a -> t0; b -> t3] (* frontier updated at b *)
Now, for the third action !b by d1, observe that d1’s frontier at b is at t1. Hence, the read may return either 0 or 1. Let us assume that it returns 0. The machine state is not updated by the non-atomic read.
Similarly, for the fourth action !a by d2, d2’s frontier at a is at t0. Hence, this read may also return either 0 or 1. Let us assume that it returns 0. Hence, the assertion in the original program, assert (not (r1 = 0 && r2 = 0)), will fail for this particular execution.
There are certain operations which are not memory model compliant.
Part II |
This document is intended as a reference manual for the OCaml language. It lists the language constructs, and gives their precise syntax and informal semantics. It is by no means a tutorial introduction to the language. A good working knowledge of OCaml is assumed.
No attempt has been made at mathematical rigor: words are employed with their intuitive meaning, without further definition. As a consequence, the typing rules have been left out, for lack of the mathematical framework required to express them, even though they are definitely part of a full formal definition of the language.
The syntax of the language is given in BNF-like notation. Terminal symbols are set in typewriter font (like this). Non-terminal symbols are set in italic font (like that). Square brackets […] denote optional components. Curly brackets {…} denote zero, one or several repetitions of the enclosed components. Curly brackets with a trailing plus sign {…}+ denote one or several repetitions of the enclosed components. Parentheses (…) denote grouping.
The following characters are considered as blanks: space, horizontal tabulation, carriage return, line feed and form feed. Blanks are ignored, but they separate adjacent identifiers, literals and keywords that would otherwise be confused as one single identifier, literal or keyword.
Comments are introduced by the two characters (*, with no intervening blanks, and terminated by the characters *), with no intervening blanks. Comments are treated as blank characters. Comments do not occur inside string or character literals. Nested comments are handled correctly.
Identifiers are sequences of letters, digits, _ (the underscore character), and ' (the single quote), starting with a letter or an underscore. The letters include at least the 52 lowercase and uppercase letters from the ASCII set. The current implementation also recognizes as letters some characters from the ISO 8859-1 set (characters 192–214 and 216–222 as uppercase letters; characters 223–246 and 248–255 as lowercase letters). This feature is deprecated and should be avoided for future compatibility.
All characters in an identifier are meaningful. The current implementation accepts identifiers up to 16000000 characters in length.
In many places, OCaml makes a distinction between capitalized identifiers and identifiers that begin with a lowercase letter. The underscore character is considered a lowercase letter for this purpose.
An integer literal is a sequence of one or more digits, optionally preceded by a minus sign. By default, integer literals are in decimal (radix 10). The following prefixes select a different radix:
Prefix | Radix |
0x, 0X | hexadecimal (radix 16) |
0o, 0O | octal (radix 8) |
0b, 0B | binary (radix 2) |
(The initial 0 is the digit zero; the O for octal is the letter O.) An integer literal can be followed by one of the letters l, L or n to indicate that this integer has type int32, int64 or nativeint respectively, instead of the default type int for integer literals. The interpretation of integer literals that fall outside the range of representable integer values is undefined.
For convenience and readability, underscore characters (_) are accepted (and ignored) within integer literals.
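For example, the following literals are all valid (the bound names are illustrative):

let a = 0xFF      (* 255, hexadecimal *)
let b = 0o17      (* 15, octal *)
let c = 0b1010    (* 10, binary *)
let d = 123l      (* int32 *)
let e = 456L      (* int64 *)
let f = 789n      (* nativeint *)
let g = 1_000_000 (* underscores are ignored *)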
Floating-point decimal literals consist of an integer part, a fractional part and an exponent part. The integer part is a sequence of one or more digits, optionally preceded by a minus sign. The fractional part is a decimal point followed by zero, one or more digits. The exponent part is the character e or E followed by an optional + or - sign, followed by one or more digits. It is interpreted as a power of 10. The fractional part or the exponent part can be omitted but not both, to avoid ambiguity with integer literals. The interpretation of floating-point literals that fall outside the range of representable floating-point values is undefined.
Floating-point hexadecimal literals are denoted with the 0x or 0X prefix. The syntax is similar to that of floating-point decimal literals, with the following differences. The integer part and the fractional part use hexadecimal digits. The exponent part starts with the character p or P. It is written in decimal and interpreted as a power of 2.
For convenience and readability, underscore characters (_) are accepted (and ignored) within floating-point literals.
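For example (the bound names are illustrative):

let a = 3.14
let b = 1e10         (* exponent part only *)
let c = 2.           (* fractional part with no digits *)
let d = 0x1p-3       (* hexadecimal: 1 * 2^(-3) = 0.125 *)
let e = 0x1.8p1      (* hexadecimal: 1.5 * 2^1 = 3. *)
let f = 6.022_140e23 (* underscores are ignored *)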
Character literals are delimited by ' (single quote) characters. The two single quotes enclose either one character different from ' and \, or one of the escape sequences below:
Sequence | Character denoted |
\\ | backslash (\) |
\" | double quote (") |
\' | single quote (') |
\n | linefeed (LF) |
\r | carriage return (CR) |
\t | horizontal tabulation (TAB) |
\b | backspace (BS) |
\space | space (SPC) |
\ddd | the character with ASCII code ddd in decimal |
\xhh | the character with ASCII code hh in hexadecimal |
\oooo | the character with ASCII code ooo in octal |
String literals are delimited by " (double quote) characters. The two double quotes enclose a sequence of either characters different from " and \, or escape sequences from the table given above for character literals, or a Unicode character escape sequence.
A Unicode character escape sequence is substituted by the UTF-8 encoding of the specified Unicode scalar value. The Unicode scalar value, an integer in the ranges 0x0000...0xD7FF or 0xE000...0x10FFFF, is defined using 1 to 6 hexadecimal digits; leading zeros are allowed.
To allow splitting long string literals across lines, the sequence \newline spaces-or-tabs (a backslash at the end of a line followed by any number of spaces and horizontal tabulations at the beginning of the next line) is ignored inside string literals.
Quoted string literals provide an alternative lexical syntax for string literals. They are useful to represent strings of arbitrary content without escaping. Quoted strings are delimited by a matching pair of { quoted-string-id | and | quoted-string-id } with the same quoted-string-id on both sides. Quoted strings do not interpret any character in a special way but require that the sequence | quoted-string-id } does not occur in the string itself. The identifier quoted-string-id is a (possibly empty) sequence of lowercase letters and underscores that can be freely chosen to avoid such issues.
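For example (the bound names are illustrative):

let a = "line one\nline two"  (* escape sequences are interpreted *)
let b = "caf\u{00E9}"         (* Unicode escape: the UTF-8 encoding of U+00E9 *)
let c = {|the sequence \n is not an escape here|}
let d = {ext|this one may even contain |} since the delimiter id is "ext"|ext}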
The current implementation places practically no restrictions on the length of string literals.
To avoid ambiguities, naming labels in expressions cannot just be defined syntactically as the sequence of the three tokens ~, ident and :, and have to be defined at the lexical level.
Naming labels come in two flavours: label for normal arguments and optlabel for optional ones. They are simply distinguished by their first character, either ~ or ?.
Despite label and optlabel being lexical entities in expressions, their expansions ~ label-name : and ? label-name : will be used in grammars, for the sake of readability. Note also that inside type expressions, this expansion can be taken literally, i.e. there are really 3 tokens, with optional blanks between them.
See also the following language extensions: extension operators, extended indexing operators, and binding operators.
Sequences of “operator characters”, such as <=> or !!, are read as a single token from the infix-symbol or prefix-symbol class. These symbols are parsed as prefix and infix operators inside expressions, but otherwise behave like normal identifiers.
The identifiers below are reserved as keywords, and cannot be employed otherwise:
and as assert asr begin class constraint do done downto else end exception external false for fun function functor if in include inherit initializer land lazy let lor lsl lsr lxor match method mod module mutable new nonrec object of open or private rec sig struct then to true try type val virtual when while with
The following character sequences are also keywords:
!= # & && ' ( ) * + , - -. -> . .. .~ : :: := :> ; ;; < <- = > >] >} ? [ [< [> [| ] _ ` { {< | |] || } ~
Note that the following identifiers are keywords of the now unmaintained Camlp4 system and should be avoided for backwards compatibility reasons.
parser value $ $$ $: <: << >> ??
Lexical ambiguities are resolved according to the “longest match” rule: when a character sequence can be decomposed into two tokens in several different ways, the decomposition retained is the one with the longest first token.
Preprocessors that generate OCaml source code can insert line number directives in their output so that error messages produced by the compiler contain line numbers and file names referring to the source file before preprocessing, instead of after preprocessing. A line number directive starts at the beginning of a line, is composed of a # (sharp sign), followed by a positive integer (the source line number), followed by a character string (the source file name). Line number directives are treated as blanks during lexical analysis.
This section describes the kinds of values that are manipulated by OCaml programs.
Integer values are integer numbers from −2^30 to 2^30−1, that is −1073741824 to 1073741823. The implementation may support a wider range of integer values: on 64-bit platforms, the current implementation supports integers ranging from −2^62 to 2^62−1.
Floating-point values are numbers in floating-point representation. The current implementation uses double-precision floating-point numbers conforming to the IEEE 754 standard, with 53 bits of mantissa and an exponent ranging from −1022 to 1023.
Character values are represented as 8-bit integers between 0 and 255. Character codes between 0 and 127 are interpreted following the ASCII standard. The current implementation interprets character codes between 128 and 255 following the ISO 8859-1 standard.
String values are finite sequences of characters. The current implementation supports strings containing up to 2^24 − 5 characters (16777211 characters); on 64-bit platforms, the limit is 2^57 − 9.
Tuples of values are written (v1, …, vn), standing for the n-tuple of values v1 to vn. The current implementation supports tuples of up to 2^22 − 1 elements (4194303 elements).
Record values are labeled tuples of values. The record value written { field1 = v1; …; fieldn = vn } associates the value vi to the record field fieldi, for i = 1 … n. The current implementation supports records with up to 2^22 − 1 fields (4194303 fields).
Arrays are finite, variable-sized sequences of values of the same type. The current implementation supports arrays containing up to 2^22 − 1 elements (4194303 elements) unless the elements are floating-point numbers (2097151 elements in this case); on 64-bit platforms, the limit is 2^54 − 1 for all arrays.
Variant values are either a constant constructor, or a non-constant constructor applied to a number of values. The former case is written constr; the latter case is written constr (v1, ... , vn ), where the vi are said to be the arguments of the non-constant constructor constr. The parentheses may be omitted if there is only one argument.
The following constants are treated like built-in constant constructors:
Constant | Constructor |
false | the boolean false |
true | the boolean true |
() | the “unit” value |
[] | the empty list |
The current implementation limits each variant type to have at most 2^46 non-constant constructors and 2^30−1 constant constructors.
Polymorphic variants are an alternate form of variant values, not belonging explicitly to a predefined variant type, and following specific typing rules. They can be either constant, written `tag-name, or non-constant, written `tag-name(v).
Functional values are mappings from values to values.
Objects are composed of a hidden internal state which is a record of instance variables, and a set of methods for accessing and modifying these variables. The structure of an object is described by the toplevel class that created it.
Identifiers are used to give names to several classes of language objects and to refer to these objects by name later.
These eleven name spaces are distinguished both by the context and by the capitalization of the identifier: whether the first letter of the identifier is in lowercase (written lowercase-ident below) or in uppercase (written capitalized-ident). Underscore is considered a lowercase letter for this purpose.
See also the following language extension: extended indexing operators.
As shown above, prefix and infix symbols as well as some keywords can be used as value names, provided they are written between parentheses. The capitalization rules are summarized in the table below.
Name space | Case of first letter |
Values | lowercase |
Constructors | uppercase |
Labels | lowercase |
Polymorphic variant tags | uppercase |
Exceptions | uppercase |
Type constructors | lowercase |
Record fields | lowercase |
Classes | lowercase |
Instance variables | lowercase |
Methods | lowercase |
Modules | uppercase |
Module types | any |
Note on polymorphic variant tags: the current implementation accepts lowercase variant tags in addition to capitalized variant tags, but we suggest you avoid lowercase variant tags for portability and compatibility with future OCaml versions.
A named object can be referred to either by its name (following the usual static scoping rules for names) or by an access path prefix . name, where prefix designates a module and name is the name of an object defined in that module. The first component of the path, prefix, is either a simple module name or an access path name1 . name2 …, in case the defining module is itself nested inside other modules. For referring to type constructors, module types, or class types, the prefix can also contain simple functor applications (as in the syntactic class extended-module-path above) in case the defining module is the result of a functor application.
Label names, tag names, method names and instance variable names need not be qualified: the former three are global labels, while the latter are local to a class.
See also the following language extensions: first-class modules, attributes and extension nodes.
The table below shows the relative precedences and associativity of operators and non-closed type constructions. The constructions with higher precedences come first.
Operator | Associativity |
Type constructor application | – |
# | – |
* | – |
-> | right |
as | – |
Type expressions denote types in definitions of data types as well as in type constraints over patterns and expressions.
The type expression ' ident stands for the type variable named ident. The type expression _ stands for either an anonymous type variable or anonymous type parameters. In data type definitions, type variables are names for the data type parameters. In type constraints, they represent unspecified types that can be instantiated by any type to satisfy the type constraint. In general the scope of a named type variable is the whole top-level phrase where it appears, and it can only be generalized when leaving this scope. Anonymous variables have no such restriction. In the following cases, the scope of named type variables is restricted to the type expression where they appear: 1) for universal (explicitly polymorphic) type variables; 2) for type variables that only appear in public method specifications (as those variables will be made universal, as described in section 11.9.1); 3) for variables used as aliases, when the type they are aliased to would be invalid in the scope of the enclosing definition (i.e. when it contains free universal type variables, or locally defined types.)
The type expression ( typexpr ) denotes the same type as typexpr.
The type expression typexpr1 -> typexpr2 denotes the type of functions mapping arguments of type typexpr1 to results of type typexpr2.
label-name : typexpr1 -> typexpr2 denotes the same function type, but the argument is labeled label-name.
? label-name : typexpr1 -> typexpr2 denotes the type of functions mapping an optional labeled argument of type typexpr1 to results of type typexpr2. That is, the physical type of the function will be typexpr1 option -> typexpr2.
The type expression typexpr1 * … * typexprn denotes the type of tuples whose elements belong to types typexpr1, … typexprn respectively.
Type constructors with no parameter, as in typeconstr, are type expressions.
The type expression typexpr typeconstr, where typeconstr is a type constructor with one parameter, denotes the application of the unary type constructor typeconstr to the type typexpr.
The type expression (typexpr1,…,typexprn) typeconstr, where typeconstr is a type constructor with n parameters, denotes the application of the n-ary type constructor typeconstr to the types typexpr1 through typexprn.
In the type expression _ typeconstr , the anonymous type expression _ stands in for anonymous type parameters and is equivalent to (_, …,_) with as many repetitions of _ as the arity of typeconstr.
The type expression typexpr as ' ident denotes the same type as typexpr, and also binds the type variable ident to type typexpr both in typexpr and in other types. In general the scope of an alias is the same as for a named type variable, and covers the whole enclosing definition. If the type variable ident actually occurs in typexpr, a recursive type is created. Recursive types for which there exists a recursive path that does not contain an object or polymorphic variant type constructor are rejected, except when the -rectypes mode is selected.
If ' ident denotes an explicit polymorphic variable, and typexpr denotes either an object or polymorphic variant type, the row variable of typexpr is captured by ' ident, and quantified upon.
Polymorphic variant types describe the values a polymorphic variant may take.
The first case is an exact variant type: all possible tags are known, with their associated types, and they can all be present. Its structure is fully known.
The second case is an open variant type, describing a polymorphic variant value: it gives the list of all tags the value could take, with their associated types. This type is still compatible with a variant type containing more tags. A special case is the unknown type, which does not define any tag, and is compatible with any variant type.
The third case is a closed variant type. It gives information about all the possible tags and their associated types, and which tags are known to potentially appear in values. The exact variant type (first case) is just an abbreviation for a closed variant type where all possible tags are also potentially present.
In all three cases, tags may be either specified directly in the `tag-name [of typexpr] form, or indirectly through a type expression, which must expand to an exact variant type, whose tag specifications are inserted in its place.
Full specifications of variant tags are only used for non-exact closed types. They can be understood as a conjunctive type for the argument: it is intended to have all the types enumerated in the specification.
Such conjunctive constraints may be unsatisfiable. In such a case the corresponding tag may not be used in a value of this type. This does not mean that the whole type is not valid: one can still use other available tags. Conjunctive constraints are mainly intended as output from the type checker. When they are used in source programs, unsolvable constraints may cause early failures.
An object type < [method-type { ; method-type }] > is a record of method types.
Each method may have an explicit polymorphic type: { ' ident }+ . typexpr. Explicit polymorphic variables have a local scope, and an explicit polymorphic type can only be unified to an equivalent one, where only the order and names of polymorphic variables may change.
The type < { method-type ; } .. > is the type of an object whose method names and types are described by method-type1, …, method-typen, and possibly some other methods represented by the ellipsis. This ellipsis actually is a special kind of type variable (called row variable in the literature) that stands for any number of extra method types.
The type # classtype-path is a special kind of abbreviation. This abbreviation unifies with the type of any object belonging to a subclass of the class type classtype-path. It is handled in a special way as it usually hides a type variable (an ellipsis, representing the methods that may be added in a subclass). In particular, it vanishes when the ellipsis gets instantiated. Each type expression # classtype-path defines a new type variable, so type # classtype-path -> # classtype-path is usually not the same as type (# classtype-path as ' ident) -> ' ident.
There are no type expressions describing (defined) variant types nor record types, since those are always named, i.e. defined before use and referred to by name. Type definitions are described in section 11.8.1.
See also the following language extension: extension literals.
The syntactic class of constants comprises literals from the four base types (integers, floating-point numbers, characters, character strings), the integer variants, and constant constructors from both normal and polymorphic variants, as well as the special constants false, true, (), [], and [||], which behave like constant constructors, and begin end, which is equivalent to ().
See also the following language extensions: first-class modules, attributes and extension nodes.
The table below shows the relative precedences and associativity of operators and non-closed pattern constructions. The constructions with higher precedences come first.
Operator | Associativity |
.. | – |
lazy (see section 11.6) | – |
Constructor application, Tag application | right |
:: | right |
, | – |
| | left |
as | – |
Patterns are templates that allow selecting data structures of a given shape, and binding identifiers to components of the data structure. This selection operation is called pattern matching; its outcome is either “this value does not match this pattern”, or “this value matches this pattern, resulting in the following bindings of names to values”.
A pattern that consists in a value name matches any value, binding the name to the value. The pattern _ also matches any value, but does not bind any name.
Patterns are linear: a variable cannot be bound several times by a given pattern. In particular, there is no way to test for equality between two parts of a data structure using only a pattern:
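(* For instance, the following definition is rejected, since the variable x
   is bound twice in the same pattern (the function name is illustrative): *)
let have_equal_components = function
  | (x, x) -> true
  | _ -> false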
However, we can use a when guard for this purpose:
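let have_equal_components = function
  | (x, y) when x = y -> true
  | _ -> false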
A pattern consisting in a constant matches the values that are equal to this constant.
The pattern pattern1 as value-name matches the same values as pattern1. If the matching against pattern1 is successful, the name value-name is bound to the matched value, in addition to the bindings performed by the matching against pattern1.
The pattern ( pattern1 ) matches the same values as pattern1. A type constraint can appear in a parenthesized pattern, as in ( pattern1 : typexpr ). This constraint forces the type of pattern1 to be compatible with typexpr.
The pattern pattern1 | pattern2 represents the logical “or” of the two patterns pattern1 and pattern2. A value matches pattern1 | pattern2 if it matches pattern1 or pattern2. The two sub-patterns pattern1 and pattern2 must bind exactly the same identifiers to values having the same types. Matching is performed from left to right. More precisely, in case some value v matches pattern1 | pattern2, the bindings performed are those of pattern1 when v matches pattern1. Otherwise, value v matches pattern2 whose bindings are performed.
The pattern constr ( pattern1 , … , patternn ) matches all variants whose constructor is equal to constr, and whose arguments match pattern1 … patternn. It is a type error if n is not the number of arguments expected by the constructor.
The pattern constr _ matches all variants whose constructor is constr.
The pattern pattern1 :: pattern2 matches non-empty lists whose heads match pattern1, and whose tails match pattern2.
The pattern [ pattern1 ; … ; patternn ] matches lists of length n whose elements match pattern1 …patternn, respectively. This pattern behaves like pattern1 :: … :: patternn :: [].
The pattern `tag-name pattern1 matches all polymorphic variants whose tag is equal to tag-name, and whose argument matches pattern1.
If the type [('a,'b,…)] typeconstr = [ `tag-name1 of typexpr1 | … | `tag-namen of typexprn] is defined, then the pattern #typeconstr is a shorthand for the following or-pattern: ( `tag-name1(_ : typexpr1) | … | `tag-namen(_ : typexprn)). It matches all values of type [< typeconstr ].
The pattern pattern1 , … , patternn matches n-tuples whose components match the patterns pattern1 through patternn. That is, the pattern matches the tuple values (v1, …, vn) such that patterni matches vi for i = 1,… , n.
The pattern { field1 [= pattern1] ; … ; fieldn [= patternn] } matches records that define at least the fields field1 through fieldn, and such that the value associated to fieldi matches the pattern patterni, for i = 1,… , n. A single identifier fieldk stands for fieldk = fieldk , and a single qualified identifier module-path . fieldk stands for module-path . fieldk = fieldk . The record value can define more fields than field1 …fieldn; the values associated to these extra fields are not taken into account for matching. Optionally, a record pattern can be terminated by ; _ to convey the fact that not all fields of the record type are listed in the record pattern and that it is intentional. Optional type constraints can be added field by field with { field1 : typexpr1 = pattern1 ;… ;fieldn : typexprn = patternn } to force the type of fieldk to be compatible with typexprk.
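For example, the following function uses field punning and a final ; _ to match only the fields of interest (the type and names are illustrative):

type point = { x : int; y : int; label : string }

let on_diagonal = function
  | { x; y; _ } -> x = y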
The pattern [| pattern1 ; … ; patternn |] matches arrays of length n such that the i-th array element matches the pattern patterni, for i = 1,… , n.
The pattern ' c ' .. ' d ' is a shorthand for the pattern ' c1 ' | ' c2 ' | … | ' cn '
where c1, c2, …, cn are the characters that occur between c and d in the ASCII character set. For instance, the pattern '0'..'9' matches all characters that are digits.
(Introduced in Objective Caml 3.11)
The pattern lazy pattern matches a value v of type Lazy.t, provided pattern matches the result of forcing v with Lazy.force. A successful match of a pattern containing lazy sub-patterns forces the corresponding parts of the value being matched, even those that imply no test such as lazy value-name or lazy _. Matching a value with a pattern-matching where some patterns contain lazy sub-patterns may imply forcing parts of the value, even when the pattern selected in the end has no lazy sub-pattern.
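For example (the names are illustrative):

let force_sum = function
  | (lazy x, lazy y) -> x + y

let v = force_sum (lazy 1, lazy 2)  (* forces both components; v = 3 *)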
For more information, see the description of module Lazy in the standard library (module Lazy).
(Introduced in OCaml 4.02)
A new form of exception pattern, exception pattern , is allowed only as a toplevel pattern or inside a toplevel or-pattern under a match...with pattern-matching (other occurrences are rejected by the type-checker).
Cases with such a toplevel pattern are called “exception cases”, as opposed to regular “value cases”. Exception cases are applied when the evaluation of the matched expression raises an exception. The exception value is then matched against all the exception cases and re-raised if none of them accept the exception (as with a try...with block). Since the bodies of all exception and value cases are outside the scope of the exception handler, they are all considered to be in tail-position: if the match...with block itself is in tail position in the current function, any function call in tail position in one of the case bodies results in an actual tail call.
A pattern match must contain at least one value case. It is an error if all cases are exceptions, because there would be no code to handle the return of a value.
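For example, the following function, whose name is chosen for illustration, handles the exception raised by int_of_string as an additional case of the match:

let parse_int s =
  match int_of_string s with
  | n -> Some n
  | exception Failure _ -> None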
For patterns, local opens are limited to the module-path.(pattern) construction. This construction locally opens the module referred to by the module path module-path in the scope of the pattern pattern.
When the body of a local open pattern is delimited by [ ], [| |], or { }, the parentheses can be omitted. For example, module-path.[pattern] is equivalent to module-path.([pattern]), and module-path.[| pattern |] is equivalent to module-path.([| pattern |]).
See also the following language extensions: first-class modules, overriding in open statements, syntax for Bigarray access, attributes, extension nodes and extended indexing operators.
The table below shows the relative precedences and associativity of operators and non-closed constructions. The constructions with higher precedence come first. For infix and prefix symbols, we write “*…” to mean “any symbol starting with *”.
Construction or operator | Associativity |
prefix-symbol | – |
. .( .[ .{ (see section 12.11) | – |
#… | left |
function application, constructor application, tag application, assert, lazy | left |
- -. (prefix) | – |
**… lsl lsr asr | right |
*… /… %… mod land lor lxor | left |
+… -… | left |
:: | right |
@… ^… | right |
=… <… >… |… &… $… != | left |
& && | right |
or || | right |
, | – |
<- := | right |
if | – |
; | right |
let match fun function try | – |
The interactive toplevel can be used to test or refresh one’s understanding of these precedences.
An expression consisting in a constant evaluates to this constant. For example, 3.14 or [||].
An expression consisting in an access path evaluates to the value bound to this path in the current evaluation environment. The path can be either a value name or an access path to a value component of a module.
The expressions ( expr ) and begin expr end have the same value as expr. The two constructs are semantically equivalent, but it is good style to use begin … end inside control structures:
if … then begin … ; … end else begin … ; … end
and ( … ) for the other grouping situations.
Parenthesized expressions can contain a type constraint, as in ( expr : typexpr ). This constraint forces the type of expr to be compatible with typexpr.
Parenthesized expressions can also contain coercions ( expr [: typexpr] :> typexpr) (see subsection 11.7.7 below).
Function application is denoted by juxtaposition of (possibly labeled) expressions. The expression expr argument1 … argumentn evaluates the expression expr and those appearing in argument1 to argumentn. The expression expr must evaluate to a functional value f, which is then applied to the values of argument1, …, argumentn.
The order in which the expressions expr, argument1, …, argumentn are evaluated is not specified.
Arguments and parameters are matched according to their respective labels. Argument order is irrelevant, except among arguments with the same label, or no label.
If a parameter is specified as optional (label prefixed by ?) in the type of expr, the corresponding argument will be automatically wrapped with the constructor Some, except if the argument itself is also prefixed by ?, in which case it is passed as is.
If a non-labeled argument is passed, and its corresponding parameter is preceded by one or several optional parameters, then these parameters are defaulted, i.e. the value None will be passed for them. All other missing parameters (without corresponding argument), both optional and non-optional, will be kept, and the result of the function will still be a function of these missing parameters to the body of f.
In all cases but exact match of order and labels, without optional parameters, the function type should be known at the application point. This can be ensured by adding a type constraint. Principality of the derivation can be checked in the -principal mode.
As a special case, OCaml supports labels-omitted full applications: if the function has a known arity, all the arguments are unlabeled, and their number matches the number of non-optional parameters, then labels are ignored and non-optional parameters are matched in their definition order. Optional arguments are defaulted. This omission of labels is discouraged and results in a warning, see 13.5.1.
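As an illustration, a small sketch with a made-up function f taking a labeled, an optional and an unlabeled parameter:
let f ~x ?(y = 10) z = x + y + z
let a = f ~x:1 ~y:2 3      (* all arguments given: 6 *)
let b = f ~x:1 2           (* the unlabeled 2 goes to z, y is defaulted: 13 *)
let c = f 1 2              (* labels-omitted full application: x = 1, z = 2, y defaulted,
                              result 13, with the warning mentioned above *)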
Two syntactic forms are provided to define functions. The first form is introduced by the keyword function:
function pattern1 -> expr1 | … | patternn -> exprn
This expression evaluates to a functional value with one argument. When this function is applied to a value v, this value is matched against each pattern pattern1 to patternn. If one of these matchings succeeds, that is, if the value v matches the pattern patterni for some i, then the expression expri associated to the selected pattern is evaluated, and its value becomes the value of the function application. The evaluation of expri takes place in an environment enriched by the bindings performed during the matching.
If several patterns match the argument v, the one that occurs first in the function definition is selected. If none of the patterns matches the argument, the exception Match_failure is raised.
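For example, a one-argument function defined by cases over a list (illustrative):
let describe = function
  | [] -> "empty"
  | [_] -> "a single element"
  | _ :: _ :: _ -> "two or more elements"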
The other form of function definition is introduced by the keyword fun:
fun pattern1 … patternn -> expr
This expression is equivalent to:
fun pattern1 -> … -> fun patternn -> expr
An optional type constraint typexpr can be added before -> to enforce the type of the result to be compatible with the constraint typexpr:
is equivalent to
Beware of the small syntactic difference between a type constraint on the last parameter
and one on the result
The parameter patterns ~lab and ~(lab [: typ]) are shorthands for respectively ~lab:lab and ~lab:(lab [: typ]), and similarly for their optional counterparts.
A function of the form fun ? lab :( pattern = expr0 ) -> expr is equivalent to
where ident is a fresh variable, except that it is unspecified when expr0 is evaluated.
After these two transformations, expressions are of the form
If we ignore labels, which will only be meaningful at function application, this is equivalent to
That is, the fun expression above evaluates to a curried function with n arguments: after applying this function n times to the values v1 … vn, the values will be matched in parallel against the patterns pattern1 … patternn. If the matching succeeds, the function returns the value of expr in an environment enriched by the bindings performed during the matchings. If the matching fails, the exception Match_failure is raised.
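Concretely, such a curried function can be applied one argument at a time (a small sketch):
let add = fun x y -> x + y
let add3 = add 3           (* partial application: a function of type int -> int *)
let seven = add3 4         (* evaluates to 7 *)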
The cases of a pattern matching (in the function, match and try constructs) can include guard expressions, which are arbitrary boolean expressions that must evaluate to true for the match case to be selected. Guards occur just before the -> token and are introduced by the when keyword:
… | patterni when condi -> expri | …
Matching proceeds as described before, except that if the value matches some pattern patterni which has a guard condi, then the expression condi is evaluated (in an environment enriched by the bindings performed during matching). If condi evaluates to true, then expri is evaluated and its value returned as the result of the matching, as usual. But if condi evaluates to false, the matching is resumed against the patterns following patterni.
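For example, a sign function whose first case carries a when guard (illustrative):
let sign = function
  | n when n > 0 -> 1
  | 0 -> 0
  | _ -> -1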
The let and let rec constructs bind value names locally. The construct
let pattern1 = expr1 and … and patternn = exprn in expr
evaluates expr1 … exprn in some unspecified order and matches their values against the patterns pattern1 … patternn. If the matchings succeed, expr is evaluated in the environment enriched by the bindings performed during matching, and the value of expr is returned as the value of the whole let expression. If one of the matchings fails, the exception Match_failure is raised.
An alternate syntax is provided to bind variables to functional values: instead of writing
let name = fun pattern1 … patternn -> expr
in a let expression, one may instead write
let name pattern1 … patternn = expr
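For example, these two definitions of a successor function are equivalent (illustrative):
let succ = fun n -> n + 1
let succ n = n + 1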
Recursive definitions of names are introduced by let rec:
let rec pattern1 = expr1 and … and patternn = exprn in expr
The only difference with the let construct described above is that the bindings of names to values performed by the pattern-matching are considered already performed when the expressions expr1 to exprn are evaluated. That is, the expressions expr1 to exprn can reference identifiers that are bound by one of the patterns pattern1, …, patternn, and expect them to have the same value as in expr, the body of the let rec construct.
The recursive definition is guaranteed to behave as described above if the expressions expr1 to exprn are function definitions (fun … or function …), and the patterns pattern1 … patternn are just value names, as in:
let rec name1 = fun … and … and namen = fun … in expr
This defines name1 … namen as mutually recursive functions local to expr.
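A concrete instance of this form (a small illustrative sketch):
let rec even n = n = 0 || odd (n - 1)
and odd n = n <> 0 && even (n - 1)
in even 10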
The behavior of other forms of let rec definitions is implementation-dependent. The current implementation also supports a certain class of recursive definitions of non-functional values, as explained in section 12.1.
(Introduced in OCaml 4.04)
It is possible to define local exceptions in expressions: let exception constr-decl in expr .
The syntactic scope of the exception constructor is the inner expression, but nothing prevents exception values created with this constructor from escaping this scope. Two executions of the definition above result in two incompatible exception constructors (as for any exception definition). For instance:
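The sketch below (made-up names) shows a typical use of a local exception for an early exit; the exception value never escapes the expression, so the fact that each evaluation of the definition creates a distinct constructor is harmless here:
let find_negative l =
  let exception Found of int in
  try
    List.iter (fun x -> if x < 0 then raise (Found x)) l;
    None
  with Found x -> Some x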
(Introduced in OCaml 3.12)
Polymorphic type annotations in let-definitions behave in a way similar to polymorphic methods:
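For example, such an annotation permits polymorphic recursion; in the sketch below the type nested is made up for the illustration:
type 'a nested = Flat of 'a list | Nested of 'a list nested

let rec depth : 'a. 'a nested -> int = function
  | Flat _ -> 1
  | Nested n -> 1 + depth n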
These annotations explicitly require the defined value to be polymorphic, and allow one to use this polymorphism in recursive occurrences (when using let rec). Note however that this is a normal polymorphic type, unifiable with any instance of itself.
The expression expr1 ; expr2 evaluates expr1 first, then expr2, and returns the value of expr2.
The expression if expr1 then expr2 else expr3 evaluates to the value of expr2 if expr1 evaluates to the boolean true, and to the value of expr3 if expr1 evaluates to the boolean false.
The else expr3 part can be omitted, in which case it defaults to else ().
The expression
match expr with pattern1 -> expr1 | … | patternn -> exprn
matches the value of expr against the patterns pattern1 to patternn. If the matching against patterni succeeds, the associated expression expri is evaluated, and its value becomes the value of the whole match expression. The evaluation of expri takes place in an environment enriched by the bindings performed during matching. If several patterns match the value of expr, the one that occurs first in the match expression is selected.
If none of the patterns match the value of expr, the exception Match_failure is raised.
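For example (a small illustrative sketch):
let value_or_zero opt =
  match opt with
  | Some x -> x
  | None -> 0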
The expression expr1 && expr2 evaluates to true if both expr1 and expr2 evaluate to true; otherwise, it evaluates to false. The first component, expr1, is evaluated first. The second component, expr2, is not evaluated if the first component evaluates to false. Hence, the expression expr1 && expr2 behaves exactly as
if expr1 then expr2 else false
The expression expr1 || expr2 evaluates to true if one of the expressions expr1 and expr2 evaluates to true; otherwise, it evaluates to false. The first component, expr1, is evaluated first. The second component, expr2, is not evaluated if the first component evaluates to true. Hence, the expression expr1 || expr2 behaves exactly as
if expr1 then true else expr2
The boolean operators & and or are deprecated synonyms for (respectively) && and ||.
The expression while expr1 do expr2 done repeatedly evaluates expr2 while expr1 evaluates to true. The loop condition expr1 is evaluated and tested at the beginning of each iteration. The whole while … done expression evaluates to the unit value ().
The expression for name = expr1 to expr2 do expr3 done first evaluates the expressions expr1 and expr2 (the boundaries) into integer values n and p. Then, the loop body expr3 is repeatedly evaluated in an environment where name is successively bound to the values n, n+1, …, p−1, p. The loop body is never evaluated if n > p.
The expression for name = expr1 downto expr2 do expr3 done evaluates similarly, except that name is successively bound to the values n, n−1, …, p+1, p. The loop body is never evaluated if n < p.
In both cases, the whole for expression evaluates to the unit value ().
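For example, summing the integers from 1 to 10 with a for loop and a reference (illustrative):
let sum = ref 0 in
for i = 1 to 10 do
  sum := !sum + i
done;
!sum                       (* evaluates to 55 *)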
The expression
try expr with pattern1 -> expr1 | … | patternn -> exprn
evaluates the expression expr and returns its value if the evaluation of expr does not raise any exception. If the evaluation of expr raises an exception, the exception value is matched against the patterns pattern1 to patternn. If the matching against patterni succeeds, the associated expression expri is evaluated, and its value becomes the value of the whole try expression. The evaluation of expri takes place in an environment enriched by the bindings performed during matching. If several patterns match the value of expr, the one that occurs first in the try expression is selected. If none of the patterns matches the value of expr, the exception value is raised again, thereby transparently “passing through” the try construct.
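For instance, a lookup that converts the Not_found exception into an option (the function name is made up):
let assoc_opt' key l =
  try Some (List.assoc key l) with Not_found -> None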
The expression expr1 , … , exprn evaluates to the n-tuple of the values of expressions expr1 to exprn. The evaluation order of the subexpressions is not specified.
The expression constr expr evaluates to the unary variant value whose constructor is constr, and whose argument is the value of expr. Similarly, the expression constr ( expr1 , … , exprn ) evaluates to the n-ary variant value whose constructor is constr and whose arguments are the values of expr1, …, exprn.
For lists, some syntactic sugar is provided. The expression expr1 :: expr2 stands for the constructor ( :: ) applied to the arguments ( expr1 , expr2 ), and therefore evaluates to the list whose head is the value of expr1 and whose tail is the value of expr2. The expression [ expr1 ; … ; exprn ] is equivalent to expr1 :: … :: exprn :: [], and therefore evaluates to the list whose elements are the values of expr1 to exprn.
The expression `tag-name expr evaluates to the polymorphic variant value whose tag is tag-name, and whose argument is the value of expr.
The expression { field1 [= expr1] ; … ; fieldn [= exprn ]} evaluates to the record value { field1 = v1; …; fieldn = vn } where vi is the value of expri for i = 1,… , n. A single identifier fieldk stands for fieldk = fieldk, and a qualified identifier module-path . fieldk stands for module-path . fieldk = fieldk. The fields field1 to fieldn must all belong to the same record type; each field of this record type must appear exactly once in the record expression, though they can appear in any order. The order in which expr1 to exprn are evaluated is not specified. Optional type constraints can be added after each field { field1 : typexpr1 = expr1 ;… ; fieldn : typexprn = exprn } to force the type of fieldk to be compatible with typexprk.
The expression { expr with field1 [= expr1] ; … ; fieldn [= exprn] } builds a fresh record with fields field1 … fieldn equal to expr1 … exprn, and all other fields having the same value as in the record expr. In other terms, it returns a shallow copy of the record expr, except for the fields field1 … fieldn, which are initialized to expr1 … exprn. As previously, single identifier fieldk stands for fieldk = fieldk, a qualified identifier module-path . fieldk stands for module-path . fieldk = fieldk and it is possible to add an optional type constraint on each field being updated with { expr with field1 : typexpr1 = expr1 ; … ; fieldn : typexprn = exprn }.
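As an illustration, with a made-up record type point:
type point = { x : float; y : float; label : string }

let p = { x = 1.0; y = 2.0; label = "p" }
let q = { p with y = 0.0 }       (* a shallow copy of p, with the field y replaced *)
let dx = q.x -. p.x              (* field access: evaluates to 0. *)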
The expression expr1 . field evaluates expr1 to a record value, and returns the value associated to field in this record value.
The expression expr1 . field <- expr2 evaluates expr1 to a record value, which is then modified in-place by replacing the value associated to field in this record by the value of expr2. This operation is permitted only if field has been declared mutable in the definition of the record type. The whole expression expr1 . field <- expr2 evaluates to the unit value ().
The expression [| expr1 ; … ; exprn |] evaluates to a n-element array, whose elements are initialized with the values of expr1 to exprn respectively. The order in which these expressions are evaluated is unspecified.
The expression expr1 .( expr2 ) returns the value of element number expr2 in the array denoted by expr1. The first element has number 0; the last element has number n−1, where n is the size of the array. The exception Invalid_argument is raised if the access is out of bounds.
The expression expr1 .( expr2 ) <- expr3 modifies in-place the array denoted by expr1, replacing element number expr2 by the value of expr3. The exception Invalid_argument is raised if the access is out of bounds. The value of the whole expression is ().
The expression expr1 .[ expr2 ] returns the value of character number expr2 in the string denoted by expr1. The first character has number 0; the last character has number n−1, where n is the length of the string. The exception Invalid_argument is raised if the access is out of bounds.
The expression expr1 .[ expr2 ] <- expr3 modifies in-place the string denoted by expr1, replacing character number expr2 by the value of expr3. The exception Invalid_argument is raised if the access is out of bounds. The value of the whole expression is (). Note: this possibility is offered only for backward compatibility with older versions of OCaml and will be removed in a future version. New code should use byte sequences and the Bytes.set function.
Symbols from the class infix-symbol, as well as the keywords *, +, -, -., =, !=, <, >, or, ||, &, &&, :=, mod, land, lor, lxor, lsl, lsr, and asr can appear in infix position (between two expressions). Symbols from the class prefix-symbol, as well as the keywords - and -. can appear in prefix position (in front of an expression).
Infix and prefix symbols do not have a fixed meaning: they are simply interpreted as applications of functions bound to the names corresponding to the symbols. The expression prefix-symbol expr is interpreted as the application ( prefix-symbol ) expr. Similarly, the expression expr1 infix-symbol expr2 is interpreted as the application ( infix-symbol ) expr1 expr2.
The table below lists the symbols defined in the initial environment and their initial meaning. (See the description of the core library module Stdlib in chapter 27 for more details). Their meaning may be changed at any time using let ( infix-op ) name1 name2 = …
Note: the operators &&, ||, and ~- are handled specially and it is not advisable to change their meaning.
The keywords - and -. can appear both as infix and prefix operators. When they appear as prefix operators, they are interpreted respectively as the functions (~-) and (~-.).
Operator | Initial meaning |
+ | Integer addition. |
- (infix) | Integer subtraction. |
~- - (prefix) | Integer negation. |
* | Integer multiplication. |
/ | Integer division. Raise Division_by_zero if second argument is zero. |
mod | Integer modulus. Raise Division_by_zero if second argument is zero. |
land | Bitwise logical “and” on integers. |
lor | Bitwise logical “or” on integers. |
lxor | Bitwise logical “exclusive or” on integers. |
lsl | Bitwise logical shift left on integers. |
lsr | Bitwise logical shift right on integers. |
asr | Bitwise arithmetic shift right on integers. |
+. | Floating-point addition. |
-. (infix) | Floating-point subtraction. |
~-. -. (prefix) | Floating-point negation. |
*. | Floating-point multiplication. |
/. | Floating-point division. |
** | Floating-point exponentiation. |
@ | List concatenation. |
^ | String concatenation. |
! | Dereferencing (return the current contents of a reference). |
:= | Reference assignment (update the reference given as first argument with the value of the second argument). |
= | Structural equality test. |
<> | Structural inequality test. |
== | Physical equality test. |
!= | Physical inequality test. |
< | Test “less than”. |
<= | Test “less than or equal”. |
> | Test “greater than”. |
>= | Test “greater than or equal”. |
&& & | Boolean conjunction. |
|| or | Boolean disjunction. |
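As an illustration of the operator definition form mentioned before the table, the operator +! below is made up for the example; it receives the precedence of + because it starts with that character:
let ( +! ) (a, b) (c, d) = (a + c, b + d)
let v = (1, 2) +! (3, 4)         (* evaluates to (4, 6) *)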
When class-path evaluates to a class body, new class-path evaluates to a new object containing the instance variables and methods of this class.
When class-path evaluates to a class function, new class-path evaluates to a function expecting the same number of arguments and returning a new object of this class.
Creating directly an object through the object class-body end construct is operationally equivalent to defining locally a class class-name = object class-body end —see sections 11.9.2 and following for the syntax of class-body— and immediately creating a single object from it by new class-name.
The typing of immediate objects is slightly different from explicitly defining a class in two respects. First, the inferred object type may contain free type variables. Second, since the class body of an immediate object will never be extended, its self type can be unified with a closed object type.
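For example, an immediate object (illustrative):
let counter = object
  val mutable n = 0
  method incr = n <- n + 1
  method value = n
end

let () = counter#incr
let one = counter#value          (* evaluates to 1 *)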
The expression expr # method-name invokes the method method-name of the object denoted by expr.
If method-name is a polymorphic method, its type should be known at the invocation site. This is true for instance if expr is the name of a fresh object (let ident = new class-path … ) or if there is a type constraint. Principality of the derivation can be checked in the -principal mode.
The instance variables of a class are visible only in the body of the methods defined in the same class or a class that inherits from the class defining the instance variables. The expression inst-var-name evaluates to the value of the given instance variable. The expression inst-var-name <- expr assigns the value of expr to the instance variable inst-var-name, which must be mutable. The whole expression inst-var-name <- expr evaluates to ().
An object can be duplicated using the library function Oo.copy (see module Oo). Inside a method, the expression {< [inst-var-name [= expr] { ; inst-var-name [= expr] }] >} returns a copy of self with the given instance variables replaced by the values of the associated expressions. A single instance variable name id stands for id = id. Other instance variables have the same value in the returned object as in self.
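For instance, a functional point whose move method returns an updated copy of self (illustrative):
class point x_init = object
  val x = x_init
  method get_x = x
  method move d = {< x = x + d >}
end

let p = new point 0
let q = p#move 3                 (* q#get_x is 3, p#get_x is still 0 *)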
Expressions whose type contains object or polymorphic variant types can be explicitly coerced (weakened) to a supertype. The expression (expr :> typexpr) coerces the expression expr to type typexpr. The expression (expr : typexpr1 :> typexpr2) coerces the expression expr from type typexpr1 to type typexpr2.
The former operator will sometimes fail to coerce an expression expr from a type typ1 to a type typ2 even if type typ1 is a subtype of type typ2: in the current implementation it only expands two levels of type abbreviations containing objects and/or polymorphic variants, keeping only recursion when it is explicit in the class type (for objects). As an exception to the above algorithm, if both the inferred type of expr and typ are ground (i.e. do not contain type variables), the former operator behaves as the latter one, taking the inferred type of expr as typ1. In case of failure with the former operator, the latter one should be used.
It is only possible to coerce an expression expr from type typ1 to type typ2, if the type of expr is an instance of typ1 (like for a type annotation), and typ1 is a subtype of typ2. The type of the coerced expression is an instance of typ2. If the types contain variables, they may be instantiated by the subtyping algorithm, but this is only done after determining whether typ1 is a potential subtype of typ2. This means that typing may fail during this latter unification step, even if some instance of typ1 is a subtype of some instance of typ2. In the following paragraphs we describe the subtyping relation used.
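For example, coercing an object to a supertype that forgets one of its methods (illustrative):
let full = object method x = 1 method y = 2 end
let restricted = (full :> < x : int >)     (* the method y is no longer accessible through restricted *)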
A fixed object type admits as subtype any object type that includes all its methods. The types of the methods shall be subtypes of those in the supertype. Namely,
is a supertype of
which may contain an ellipsis .. if every typi is a supertype of the corresponding typ′i.
A monomorphic method type can be a supertype of a polymorphic method type. Namely, if typ is an instance of typ′, then 'a1 … 'an . typ′ is a subtype of typ.
Inside a class definition, newly defined types are not available for subtyping, as the type abbreviations are not yet completely defined. There is an exception for coercing self to the (exact) type of its class: this is allowed if the type of self does not appear in a contravariant position in the class type, i.e. if there are no binary methods.
A polymorphic variant type typ is a subtype of another polymorphic variant type typ′ if the upper bound of typ (i.e. the maximum set of constructors that may appear in an instance of typ) is included in the lower bound of typ′, and the types of arguments for the constructors of typ are subtypes of those in typ′. Namely,
which may be a shrinkable type, is a subtype of
which may be an extensible type, if every typi is a subtype of typ′i.
Other types do not introduce new subtyping, but they may propagate the subtyping of their arguments. For instance, typ1 * typ2 is a subtype of typ′1 * typ′2 when typ1 and typ2 are respectively subtypes of typ′1 and typ′2. For function types, the relation is more subtle: typ1 -> typ2 is a subtype of typ′1 -> typ′2 if typ1 is a supertype of typ′1 and typ2 is a subtype of typ′2. For this reason, function types are covariant in their second argument (like tuples), but contravariant in their first argument. Mutable types, like array or ref are neither covariant nor contravariant, they are nonvariant, that is they do not propagate subtyping.
For user-defined types, the variance is automatically inferred: a parameter is covariant if it has only covariant occurrences, contravariant if it has only contravariant occurrences, variance-free if it has no occurrences, and nonvariant otherwise. A variance-free parameter may change freely through subtyping, it does not have to be a subtype or a supertype. For abstract and private types, the variance must be given explicitly (see section 11.8.1), otherwise the default is nonvariant. This is also the case for constrained arguments in type definitions.
OCaml supports the assert construct to check debugging assertions. The expression assert expr evaluates the expression expr and returns () if expr evaluates to true. If it evaluates to false the exception Assert_failure is raised with the source file name and the location of expr as arguments. Assertion checking can be turned off with the -noassert compiler option. In this case, expr is not evaluated at all.
As a special case, assert false is reduced to raise (Assert_failure ...), which gives it a polymorphic type. This means that it can be used in place of any expression (for example as a branch of any pattern-matching). It also means that the assert false “assertions” cannot be turned off by the -noassert option.
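For example (illustrative):
let average l =
  assert (l <> []);
  List.fold_left ( + ) 0 l / List.length l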
The expression lazy expr returns a value v of type Lazy.t that encapsulates the computation of expr. The argument expr is not evaluated at this point in the program. Instead, its evaluation will be performed the first time the function Lazy.force is applied to the value v, returning the actual value of expr. Subsequent applications of Lazy.force to v do not evaluate expr again. Applications of Lazy.force may be implicit through pattern matching (see 11.6).
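For instance (an illustrative sketch):
let v = lazy (print_endline "computing"; 6 * 7)

let a = Lazy.force v             (* prints "computing" and evaluates to 42 *)
let b = Lazy.force v             (* evaluates to 42 without printing again *)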
The expression let module module-name = module-expr in expr locally binds the module expression module-expr to the identifier module-name during the evaluation of the expression expr. It then returns the value of expr. For example:
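A local instantiation of the Set.Make functor, used only inside one function (illustrative):
let sort_uniq_strings l =
  let module S = Set.Make (String) in
  S.elements (S.of_list l)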
The expressions let open module-path in expr and module-path.(expr) are strictly equivalent. These constructions locally open the module referred to by the module path module-path in the respective scope of the expression expr.
When the body of a local open expression is delimited by [ ], [| |], or { }, the parentheses can be omitted. For expression, parentheses can also be omitted for {< >}. For example, module-path.[expr] is equivalent to module-path.([expr]), and module-path.[| expr |] is equivalent to module-path.([| expr |]).
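For example (illustrative):
let total = List.(fold_left ( + ) 0 [1; 2; 3])     (* same as List.fold_left ( + ) 0 [1; 2; 3] *)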
Type definitions bind type constructors to data types: either variant types, record types, type abbreviations, or abstract data types. They also bind the value constructors and record fields associated with the definition.
See also the following language extensions: private types, generalized algebraic datatypes, attributes, extension nodes, extensible variant types and inline records.
Type definitions are introduced by the type keyword, and consist in one or several simple definitions, possibly mutually recursive, separated by the and keyword. Each simple definition defines one type constructor.
A simple definition consists in a lowercase identifier, possibly preceded by one or several type parameters, and followed by an optional type equation, then an optional type representation, and then a constraint clause. The identifier is the name of the type constructor being defined.
type colour =
  | Red | Green | Blue | Yellow | Black | White
  | RGB of {r : int; g : int; b : int}

type 'a tree = Lf | Br of 'a * 'a tree * 'a tree

type t = {decoration : string; substance : t'}
and t' = Int of int | List of t list
In the right-hand side of type definitions, references to one of the type constructor name being defined are considered as recursive, unless type is followed by nonrec. The nonrec keyword was introduced in OCaml 4.02.2.
The optional type parameters are either one type variable ' ident, for type constructors with one parameter, or a list of type variables ('ident1,…,'identn), for type constructors with several parameters. Each type parameter may be prefixed by a variance constraint + (resp. -) indicating that the parameter is covariant (resp. contravariant), and an injectivity annotation ! indicating that the parameter can be deduced from the whole type. These type parameters can appear in the type expressions of the right-hand side of the definition, optionally restricted by a variance constraint ; i.e. a covariant parameter may only appear on the right side of a functional arrow (more precisely, follow the left branch of an even number of arrows), and a contravariant parameter only the left side (left branch of an odd number of arrows). If the type has a representation or an equation, and the parameter is free (i.e. not bound via a type constraint to a constructed type), its variance constraint is checked but subtyping etc. will use the inferred variance of the parameter, which may be less restrictive; otherwise (i.e. for abstract types or non-free parameters), the variance must be given explicitly, and the parameter is invariant if no variance is given.
The optional type equation = typexpr makes the defined type equivalent to the type expression typexpr: one can be substituted for the other during typing. If no type equation is given, a new type is generated: the defined type is incompatible with any other type.
The optional type representation describes the data structure representing the defined type, by giving the list of associated constructors (if it is a variant type) or associated fields (if it is a record type). If no type representation is given, nothing is assumed on the structure of the type besides what is stated in the optional type equation.
The type representation = [|] constr-decl { | constr-decl } describes a variant type. The constructor declarations constr-decl1, …, constr-decln describe the constructors associated to this variant type. The constructor declaration constr-name of typexpr1 * … * typexprn declares the name constr-name as a non-constant constructor, whose arguments have types typexpr1 …typexprn. The constructor declaration constr-name declares the name constr-name as a constant constructor. Constructor names must be capitalized.
The type representation = { field-decl { ; field-decl } [;] } describes a record type. The field declarations field-decl1, …, field-decln describe the fields associated to this record type. The field declaration field-name : poly-typexpr declares field-name as a field whose argument has type poly-typexpr. The field declaration mutable field-name : poly-typexpr behaves similarly; in addition, it allows physical modification of this field. Immutable fields are covariant, mutable fields are non-variant. Both mutable and immutable fields may have explicitly polymorphic types. The polymorphism of the contents is statically checked whenever a record value is created or modified. Extracted values may have their types instantiated.
The two components of a type definition, the optional equation and the optional representation, can be combined independently, giving rise to four typical situations: if neither an equation nor a representation is given, the definition introduces an abstract type, incompatible with any other type; an equation without a representation introduces a type abbreviation, an alternate name for the type on its right-hand side; a representation without an equation introduces a new variant or record type; and an equation together with a representation re-exports a variant or record type, whose representation must agree with the type given in the equation.
The type variables appearing as type parameters can optionally be prefixed by + or - to indicate that the type constructor is covariant or contravariant with respect to this parameter. This variance information is used to decide subtyping relations when checking the validity of :> coercions (see section 11.7.7).
For instance, type +'a t declares t as an abstract type that is covariant in its parameter; this means that if the type τ is a subtype of the type σ, then τ t is a subtype of σ t. Similarly, type -'a t declares that the abstract type t is contravariant in its parameter: if τ is a subtype of σ, then σ t is a subtype of τ t. If no + or - variance annotation is given, the type constructor is assumed non-variant in the corresponding parameter. For instance, the abstract type declaration type 'a t means that τ t is neither a subtype nor a supertype of σ t if τ is subtype of σ.
The variance indicated by the + and - annotations on parameters is enforced only for abstract and private types, or when there are type constraints. Otherwise, for abbreviations, variant and record types without type constraints, the variance properties of the type constructor are inferred from its definition, and the variance annotations are only checked for conformance with the definition.
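For instance, two simple abbreviations whose annotations merely confirm the inferred variances (illustrative):
type +'a producer = unit -> 'a       (* 'a occurs only to the right of an arrow: covariant *)
type -'a consumer = 'a -> unit       (* 'a occurs only to the left of an arrow: contravariant *)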
Injectivity annotations are only necessary for abstract types and private row types, since they can otherwise be deduced from the type declaration: all parameters are injective for record and variant type declarations (including extensible types); for type abbreviations a parameter is injective if it has an injective occurrence in its defining equation (be it private or not). For constrained type parameters in type abbreviations, they are injective if either they appear at an injective position in the body, or if all their type variables are injective; in particular, if a constrained type parameter contains a variable that doesn’t appear in the body, it cannot be injective.
The construct constraint ' ident = typexpr allows the specification of type parameters. Any actual type argument corresponding to the type parameter ident has to be an instance of typexpr (more precisely, ident and typexpr are unified). Type variables of typexpr can appear in the type equation and the type declaration.
Exception definitions add new constructors to the built-in variant
type exn
of exception values. The constructors are declared as
for a definition of a variant type.
The form exception constr-decl generates a new exception, distinct from all other exceptions in the system. The form exception constr-name = constr gives an alternate name to an existing exception.
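For example (illustrative):
exception Parse_error of string * int        (* a new exception carrying a message and a position *)
exception Lookup_failure = Not_found         (* an alternate name for an existing exception *)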
Classes are defined using a small language, similar to the module language.
Class types are the class-level equivalent of type expressions: they specify the general shape and type properties of classes.
See also the following language extensions: attributes and extension nodes.
The expression classtype-path is equivalent to the class type bound to the name classtype-path. Similarly, the expression [ typexpr1 , … typexprn ] classtype-path is equivalent to the parametric class type bound to the name classtype-path, in which type parameters have been instantiated to respectively typexpr1, …typexprn.
The class type expression typexpr -> class-type is the type of class functions (functions from values to classes) that take as argument a value of type typexpr and return as result a class of type class-type.
The class type expression object [( typexpr )] { class-field-spec } end is the type of a class body. It specifies its instance variables and methods. In this type, typexpr is matched against the self type, therefore providing a name for the self type.
A class body will match a class body type if it provides definitions for all the components specified in the class body type, and these definitions meet the type requirements given in the class body type. Furthermore, all methods either virtual or public present in the class body must also be present in the class body type (on the other hand, some instance variables and concrete private methods may be omitted). A virtual method will match a concrete method, which makes it possible to forget its implementation. An immutable instance variable will match a mutable instance variable.
Local opens are supported in class types since OCaml 4.06.
The inheritance construct inherit class-body-type provides for inclusion of methods and instance variables from other class types. The instance variable and method types from class-body-type are added into the current class type.
A specification of an instance variable is written val [mutable] [virtual] inst-var-name : typexpr, where inst-var-name is the name of the instance variable and typexpr its expected type. The flag mutable indicates whether this instance variable can be physically modified. The flag virtual indicates that this instance variable is not initialized. It can be initialized later through inheritance.
An instance variable specification will hide any previous specification of an instance variable of the same name.
The specification of a method is written method [private] method-name : poly-typexpr, where method-name is the name of the method and poly-typexpr its expected type, possibly polymorphic. The flag private indicates that the method cannot be accessed from outside the object.
The polymorphism may be left implicit in public method specifications: any type variable which is not bound to a class parameter and does not appear elsewhere inside the class specification will be assumed to be universal, and made polymorphic in the resulting method type. Writing an explicit polymorphic type will disable this behaviour.
If several specifications are present for the same method, they must have compatible types. Any non-private specification of a method forces it to be public.
A virtual method specification is written method [private] virtual method-name : poly-typexpr, where method-name is the name of the method and poly-typexpr its expected type.
The construct constraint typexpr1 = typexpr2 forces the two type expressions to be equal. This is typically used to specify type parameters: in this way, they can be bound to specific type expressions.
Class expressions are the class-level equivalent of value expressions: they evaluate to classes, thus providing implementations for the specifications expressed in class types.
See also the following language extensions: locally abstract types, attributes and extension nodes.
The expression class-path evaluates to the class bound to the name class-path. Similarly, the expression [ typexpr1 , … typexprn ] class-path evaluates to the parametric class bound to the name class-path, in which type parameters have been instantiated respectively to typexpr1, …typexprn.
The expression ( class-expr ) evaluates to the same class as class-expr.
The expression ( class-expr : class-type ) checks that class-type matches the type of class-expr (that is, that the implementation class-expr meets the type specification class-type). The whole expression evaluates to the same class as class-expr, except that all components not specified in class-type are hidden and can no longer be accessed.
Class application is denoted by juxtaposition of (possibly labeled) expressions. It denotes the class whose constructor is the first expression applied to the given arguments. The arguments are evaluated as for expression application, but the constructor itself will only be evaluated when objects are created. In particular, side-effects caused by the application of the constructor will only occur at object creation time.
The expression fun [[?]label-name:]pattern -> class-expr evaluates to a function from values to classes. When this function is applied to a value v, this value is matched against the pattern pattern and the result is the result of the evaluation of class-expr in the extended environment.
Conversion from functions with default values to functions with patterns only works identically for class functions as for normal functions.
The expression
fun pattern1 … patternn -> class-expr
is a short form for
fun pattern1 -> … -> fun patternn -> class-expr
The let and let rec constructs bind value names locally, as for the core language expressions.
If a local definition occurs at the very beginning of a class definition, it will be evaluated when the class is created (just as if the definition was outside of the class). Otherwise, it will be evaluated when the object constructor is called.
Local opens are supported in class expressions since OCaml 4.06.
The expression object class-body end denotes a class body. This is the prototype for an object: it lists the instance variables and methods of an object of this class.
A class body is a class value: it is not evaluated at once. Rather, its components are evaluated each time an object is created.
In a class body, the pattern ( pattern [: typexpr] ) is matched against self, therefore providing a binding for self and self type. Self can only be used in methods and initializers.
Self type cannot be a closed object type, so that the class remains extensible.
Since OCaml 4.01, it is an error if the same method or instance variable name is defined several times in the same class body.
The inheritance construct inherit class-expr allows reusing methods and instance variables from other classes. The class expression class-expr must evaluate to a class body. The instance variables, methods and initializers from this class body are added into the current class. The addition of a method will override any previously defined method of the same name.
An ancestor can be bound by appending as lowercase-ident to the inheritance construct. lowercase-ident is not a true variable and can only be used to select a method, i.e. in an expression lowercase-ident # method-name. This gives access to the method method-name as it was defined in the parent class even if it is redefined in the current class. The scope of this ancestor binding is limited to the current class. The ancestor method may be called from a subclass but only indirectly.
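For instance, a subclass that extends an inherited method through the ancestor binding super (an illustrative sketch; the class names are made up):
class basic_counter = object
  val mutable n = 0
  method incr = n <- n + 1
  method value = n
end

class logging_counter = object
  inherit basic_counter as super
  method incr = print_endline "tick"; super#incr
end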
The definition val [mutable] inst-var-name = expr adds an instance variable inst-var-name whose initial value is the value of expression expr. The flag mutable allows physical modification of this variable by methods.
An instance variable can only be used in the methods and initializers that follow its definition.
Since version 3.10, redefinitions of a visible instance variable with the same name do not create a new variable, but are merged, using the last value for initialization. They must have identical types and mutability. However, if an instance variable is hidden by omitting it from an interface, it will be kept distinct from other instance variables with the same name.
A variable specification is written val [mutable] virtual inst-var-name : typexpr. It specifies whether the variable is modifiable, and gives its type.
Virtual instance variables were added in version 3.10.
A method definition is written method method-name = expr. The definition of a method overrides any previous definition of this method. The method will be public (that is, not private) if any of its definitions states so.
A private method, method private method-name = expr, is a method that can only be invoked on self (from other methods of the same object, defined in this class or one of its subclasses). This invocation is performed using the expression value-name # method-name, where value-name is directly bound to self at the beginning of the class definition. Private methods do not appear in object types. A method may have both public and private definitions, but as soon as there is a public one, all subsequent definitions will be made public.
Methods may have an explicitly polymorphic type, allowing them to be used polymorphically in programs (even for the same object). The explicit declaration may be done in one of three ways: (1) by giving an explicit polymorphic type in the method definition, immediately after the method name, i.e. method [private] method-name : { ' ident }+ . typexpr = expr; (2) by a forward declaration of the explicit polymorphic type through a virtual method definition; (3) by importing such a declaration through inheritance and/or constraining the type of self.
Some special expressions are available in method bodies for manipulating instance variables and duplicating self:
The expression inst-var-name <- expr modifies in-place the current object by replacing the value associated to inst-var-name by the value of expr. Of course, this instance variable must have been declared mutable.
The expression {< inst-var-name1 = expr1 ; … ; inst-var-namen = exprn >} evaluates to a copy of the current object in which the values of instance variables inst-var-name1, …, inst-var-namen have been replaced by the values of the corresponding expressions expr1, …, exprn.
A method specification is written method [private] virtual method-name : poly-typexpr. It specifies whether the method is public or private, and gives its type. If the method is intended to be polymorphic, the type must be explicitly polymorphic.
Since OCaml 3.12, the keywords inherit!, val! and method! have the same semantics as inherit, val and method, but they additionally require the definition they introduce to be overriding. Namely, method! requires method-name to be already defined in this class, val! requires inst-var-name to be already defined in this class, and inherit! requires class-expr to override some definitions. If no such overriding occurs, an error is signaled.
As a side-effect, these 3 keywords avoid the warnings 7 (method override) and 13 (instance variable override). Note that warning 7 is disabled by default.
The construct constraint typexpr1 = typexpr2 forces the two type expressions to be equal. This is typically used to specify type parameters: in that way they can be bound to specific type expressions.
A class initializer initializer expr specifies an expression that will be evaluated whenever an object is created from the class, once all its instance variables have been initialized.
A class definition class class-binding { and class-binding } is recursive. Each class-binding defines a class-name that can be used in the whole expression except for inheritance. It can also be used for inheritance, but only in the definitions that follow its own.
A class binding binds the class name class-name to the value of expression class-expr. It also binds the class type class-name to the type of the class, and defines two type abbreviations: class-name and # class-name. The first one is the type of objects of this class, while the second is more general as it unifies with the type of any object belonging to a subclass (see section 11.4).
A class must be flagged virtual if one of its methods is virtual (that is, appears in the class type, but is not actually defined). Objects cannot be created from a virtual class.
The class type parameters correspond to the ones of the class type and of the two type abbreviations defined by the class binding. They must be bound to actual types in the class definition using type constraints. So that the abbreviations are well-formed, type variables of the inferred type of the class must either be type parameters or be bound in the constraint clause.
This is the counterpart in signatures of class definitions. A class specification matches a class definition if they have the same type parameters and their types match.
A class type definition class class-name = class-body-type defines an abbreviation class-name for the class body type class-body-type. As for class definitions, two type abbreviations class-name and # class-name are also defined. The definition can be parameterized by some type parameters. If any method in the class type body is virtual, the definition must be flagged virtual.
Two class type definitions match if they have the same type parameters and they expand to matching types.
Module types are the module-level equivalent of type expressions: they specify the general shape and type properties of modules.
See also the following language extensions: recovering the type of a module, substitution inside a signature, type-level module aliases, attributes, extension nodes, generative functors, and module type substitutions.
The expression modtype-path is equivalent to the module type bound to the name modtype-path. The expression ( module-type ) denotes the same type as module-type.
Signatures are type specifications for structures. Signatures sig … end are collections of type specifications for value names, type names, exceptions, module names and module type names. A structure will match a signature if the structure provides definitions (implementations) for all the names specified in the signature (and possibly more), and these definitions meet the type requirements given in the signature.
An optional ;; is allowed after each specification in a signature. It serves as a syntactic separator with no semantic meaning.
A specification of a value component in a signature is written val value-name : typexpr, where value-name is the name of the value and typexpr its expected type.
The form external value-name : typexpr = external-declaration is similar, except that it requires in addition the name to be implemented as the external function specified in external-declaration (see chapter 22).
A specification of one or several type components in a signature is written type typedef { and typedef } and consists of a sequence of mutually recursive definitions of type names.
Each type definition in the signature specifies an optional type equation = typexpr and an optional type representation = constr-decl … or = { field-decl … }. The implementation of the type name in a matching structure must be compatible with the type expression specified in the equation (if given), and have the specified representation (if given). Conversely, users of that signature will be able to rely on the type equation or type representation, if given. More precisely, we have the following four situations: if neither an equation nor a representation is given, the type is abstract, any implementation is accepted, and nothing about it is revealed to users of the signature; if only an equation is given, the implementation must be compatible with it and users may rely on it; if only a representation is given, the implementation must be a variant or record type with that representation, and users may rely on its constructors or fields; if both are given, the implementation must satisfy both and users may rely on both.
The specification exception constr-decl in a signature requires the matching structure to provide an exception with the name and arguments specified in the definition, and makes the exception available to all users of the structure.
A specification of one or several classes in a signature is written class class-spec { and class-spec } and consists of a sequence of mutually recursive definitions of class names.
Class specifications are described more precisely in section 11.9.4.
A specification of one or several class types in a signature is written class type classtype-def { and classtype-def } and consists of a sequence of mutually recursive definitions of class type names. Class type specifications are described more precisely in section 11.9.5.
A specification of a module component in a signature is written module module-name : module-type, where module-name is the name of the module component and module-type its expected type. Modules can be nested arbitrarily; in particular, functors can appear as components of structures and functor types as components of signatures.
For specifying a module component that is a functor, one may write
module module-name ( module-name1 : module-type1 ) … ( module-namen : module-typen ) : module-type
instead of
module module-name : functor ( module-name1 : module-type1 ) -> … -> functor ( module-namen : module-typen ) -> module-type
A module type component of a signature can be specified either as a manifest module type or as an abstract module type.
An abstract module type specification module type modtype-name allows the name modtype-name to be implemented by any module type in a matching signature, but hides the implementation of the module type to all users of the signature.
A manifest module type specification module type modtype-name = module-type requires the name modtype-name to be implemented by the module type module-type in a matching signature, but makes the equality between modtype-name and module-type apparent to all users of the signature.
The expression open module-path in a signature does not specify any components. It simply affects the parsing of the following items of the signature, allowing components of the module denoted by module-path to be referred to by their simple names name instead of path accesses module-path . name. The scope of the open stops at the end of the signature expression.
The expression include module-type in a signature performs textual inclusion of the components of the signature denoted by module-type. It behaves as if the components of the included signature were copied at the location of the include. The module-type argument must refer to a module type that is a signature, not a functor type.
The module type expression functor ( module-name : module-type1 ) -> module-type2 is the type of functors (functions from modules to modules) that take as argument a module of type module-type1 and return as result a module of type module-type2. The module type module-type2 can use the name module-name to refer to type components of the actual argument of the functor. If the type module-type2 does not depend on type components of module-name, the module type expression can be simplified with the alternative short syntax module-type1 -> module-type2 . No restrictions are placed on the type of the functor argument; in particular, a functor may take another functor as argument (“higher-order” functor).
When the result module type is itself a functor,
one may use the abbreviated form
Assuming module-type denotes a signature, the expression module-type with mod-constraint { and mod-constraint } denotes the same signature where type equations have been added to some of the type specifications, as described by the constraints following the with keyword. The constraint type [type-parameters] typeconstr = typexpr adds the type equation = typexpr to the specification of the type component named typeconstr of the constrained signature. The constraint module module-path = extended-module-path adds type equations to all type components of the sub-structure denoted by module-path, making them equivalent to the corresponding type components of the structure denoted by extended-module-path.
For instance, if the module type name S is bound to the signature
sig type t module M: (sig type u end) end
then S with type t=int denotes the signature
sig type t=int module M: (sig type u end) end
and S with module M = N denotes the signature
sig type t module M: (sig type u=N.u end) end
A functor taking two arguments of type S that share their t component is written
functor (A: S) (B: S with type t = A.t) ...
Constraints are added left to right. After each constraint has been applied, the resulting signature must be a subtype of the signature before the constraint was applied. Thus, the with operator can only add information on the type components of a signature, but never remove information.
Module expressions are the module-level equivalent of value expressions: they evaluate to modules, thus providing implementations for the specifications expressed in module types.
See also the following language extensions: recursive modules, first-class modules, overriding in open statements, attributes, extension nodes and generative functors.
The expression module-path evaluates to the module bound to the name module-path.
The expression ( module-expr ) evaluates to the same module as module-expr.
The expression ( module-expr : module-type ) checks that the type of module-expr is a subtype of module-type, that is, that all components specified in module-type are implemented in module-expr, and their implementation meets the requirements given in module-type. In other terms, it checks that the implementation module-expr meets the type specification module-type. The whole expression evaluates to the same module as module-expr, except that all components not specified in module-type are hidden and can no longer be accessed.
Structures struct … end are collections of definitions for value names, type names, exceptions, module names and module type names. The definitions are evaluated in the order in which they appear in the structure. The scopes of the bindings performed by the definitions extend to the end of the structure. As a consequence, a definition may refer to names bound by earlier definitions in the same structure.
For compatibility with toplevel phrases (chapter 14), optional ;; are allowed after and before each definition in a structure. These ;; have no semantic meaning. Similarly, an expr preceded by ;; is allowed as a component of a structure. It is equivalent to let _ = expr, i.e. expr is evaluated for its side-effects but is not bound to any identifier. If expr is the first component of a structure, the preceding ;; can be omitted.
A value definition let [rec] let-binding { and let-binding } binds value names in the same way as a let … in … expression (see section 11.7.2). The value names appearing in the left-hand sides of the bindings are bound to the corresponding values in the right-hand sides.
A value definition external value-name : typexpr = external-declaration implements value-name as the external function specified in external-declaration (see chapter 22).
A definition of one or several type components is written type typedef { and typedef } and consists of a sequence of mutually recursive definitions of type names.
Exceptions are defined with the syntax exception constr-decl or exception constr-name = constr.
A definition of one or several classes is written class class-binding { and class-binding } and consists of a sequence of mutually recursive definitions of class names. Class definitions are described more precisely in section 11.9.3.
A definition of one or several class types is written class type classtype-def { and classtype-def } and consists of a sequence of mutually recursive definitions of class type names. Class type definitions are described more precisely in section 11.9.5.
The basic form for defining a module component is module module-name = module-expr, which evaluates module-expr and binds the result to the name module-name.
One can write
module module-name ( module-name1 : module-type1 ) … ( module-namen : module-typen ) = module-expr
instead of
module module-name = functor ( module-name1 : module-type1 ) -> … -> functor ( module-namen : module-typen ) -> module-expr
Another derived form is
module module-name : module-type = module-expr
which is equivalent to
module module-name = ( module-expr : module-type )
A definition for a module type is written module type modtype-name = module-type. It binds the name modtype-name to the module type denoted by the expression module-type.
The expression open module-path in a structure does not define any components nor perform any bindings. It simply affects the parsing of the following items of the structure, allowing components of the module denoted by module-path to be referred to by their simple names name instead of path accesses module-path . name. The scope of the open stops at the end of the structure expression.
The expression include module-expr in a structure re-exports in the current structure all definitions of the structure denoted by module-expr. For instance, if a module S is defined as in the sketch below, defining the module B with an include S is equivalent to repeating the definitions of S inside B, followed by the definitions written in B itself.
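A minimal sketch (the module contents are made up for the illustration; the module C anticipates the comparison with open below):
module S = struct
  type t = int
  let x = 1
end

module B = struct
  include S
  let y = x + 1          (* B exports t, x and y *)
end

module C = struct
  open S
  let y = x + 1          (* C exports only y *)
end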
The difference between open and include is that open simply provides short names for the components of the opened structure, without defining any components of the current structure, while include also adds definitions for the components of the included structure.
The expression functor ( module-name : module-type ) -> module-expr evaluates to a functor that takes as argument modules of the type module-type, binds module-name to these modules, evaluates module-expr in the extended environment, and returns the resulting modules as results. No restrictions are placed on the type of the functor argument; in particular, a functor may take another functor as argument (“higher-order” functor).
When the result module expression is itself a functor,
one may use the abbreviated form
The expression module-expr1 ( module-expr2 ) evaluates module-expr1 to a functor and module-expr2 to a module, and applies the former to the latter. The type of module-expr2 must match the type expected for the arguments of the functor module-expr1.
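For instance, a small functor applied to the standard library module Int (an illustrative sketch; the signature and module names are made up):
module type ORDERED = sig
  type t
  val compare : t -> t -> int
end

module MakeInterval (X : ORDERED) = struct
  type t = { low : X.t; high : X.t }
  let make low high =
    if X.compare low high <= 0 then { low; high } else { low = high; high = low }
end

module IntInterval = MakeInterval (Int)

let i = IntInterval.make 3 1     (* { low = 1; high = 3 } *)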
Compilation units bridge the module system and the separate compilation system. A compilation unit is composed of two parts: an interface and an implementation. The interface contains a sequence of specifications, just as the inside of a sig … end signature expression. The implementation contains a sequence of definitions and expressions, just as the inside of a struct … end module expression. A compilation unit also has a name unit-name, derived from the names of the files containing the interface and the implementation (see chapter 13 for more details). A compilation unit behaves roughly as the module definition
A compilation unit can refer to other compilation units by their names, as if they were regular modules. For instance, if U is a compilation unit that defines a type t, other compilation units can refer to that type under the name U.t; they can also refer to U as a whole structure. Except for names of other compilation units, a unit interface or unit implementation must not have any other free variables. In other terms, the type-checking and compilation of an interface or implementation proceeds in the initial environment
where name1 … namen are the names of the other compilation units available in the search path (see chapter 13 for more details) and specification1 … specificationn are their respective interfaces.
This chapter describes language extensions and convenience features that are implemented in OCaml, but not described in chapter 11.
(Introduced in Objective Caml 1.00)
As mentioned in section 11.7.2, the let rec binding construct, in addition to the definition of recursive functions, also supports a certain class of recursive definitions of non-functional values, such as
let rec name1 = 1 :: name2 and name2 = 2 :: name1 in expr
which binds name1 to the cyclic list 1::2::1::2::…, and name2 to the cyclic list 2::1::2::1::… Informally, the class of accepted definitions consists of those definitions where the defined names occur only inside function bodies or as argument to a data constructor.
More precisely, consider the expression:
let rec name1 = expr1 and … and namen = exprn in expr
It will be accepted if each one of expr1 … exprn is statically constructive with respect to name1 … namen, is not immediately linked to any of name1 … namen, and is not an array constructor whose arguments have abstract type.
An expression e is said to be statically constructive with respect to the variables name1 … namen if at least one of the following conditions is true: