[Caml-list] Understanding why Ocaml doesn't support operator overloading.
Date: 2002-11-29 (15:26)
From: Xavier Leroy <xavier.leroy@i...>
Subject: Re: [Caml-list] Understanding why Ocaml doesn't support operator overloading.
> Some time ago, when looking at Ocaml for the first time, I got
> baffled by the lack of operator overloading. I am still wondering
> why this is the case.  Could someone please point me to more
> information about this?
> I remember reading something about operator overloading and type inference
> being hard to combine.

I don't know how technical you'd like the answer to be, so let me
start with a simple explanation that doesn't get into all the
technical details.

The problem is indeed the combination of overloading and type
inference.  The catch-22 is this:
- overloading determines the meaning of an operator from the types of
  its arguments (e.g. to determine whether + is integer addition or
  floating-point addition);
- type inference relies (among other things) on the fact that each
  operator has a unique type to determine the types of its arguments
  (e.g. if one of the arguments is a function parameter).

If you don't see the circularity, consider

        let f x = x + x

If "+" is overloaded on integers and on floats, you get two possible
types for f: int -> int or float -> float.  Neither of these types is
better than the other; if the compiler commits to one of them, say
int -> int, later applications of f (e.g. to a float) will be rejected.
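In OCaml itself there is no ambiguity to resolve, because "+" has the
single type int -> int -> int.  A quick illustration (not from the
original message) of what the compiler actually infers:

```ocaml
(* With OCaml's monomorphic (+) : int -> int -> int, the definition
   of f has exactly one inferred type: int -> int. *)
let f x = x + x

let () =
  assert (f 3 = 6)
  (* An application such as f 1.5 is rejected at compile time:
     the argument has type float but int was expected. *)
```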

In technical terms, we say that the principal types property fails.
This property says that the inferred type is always the "best" in the
sense that it subsumes all other possible types.  It's a crucial
property in order to do type inference, both from a theoretical and a
practical (usability) standpoint.

There are several ways to get around the problem with "f" above.
A simple one is to reject the definition as ambiguous and require the
programmer to disambiguate, e.g. by putting a type declaration on "x".
Another, equally simple, is to resolve ambiguities using a default
strategy, e.g. favor "int" over "float".  Neither way is very nice,
since both break the principal types property.
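For the record, OCaml's own resolution is neither of these: it avoids
overloading entirely by giving integer and float arithmetic distinct
operators, so each version of "f" is written unambiguously (a small
illustration, not part of the original message):

```ocaml
(* OCaml preserves principal types by not overloading at all:
   (+) works only on int, (+.) only on float. *)
let f_int x = x + x      (* inferred: int -> int *)
let f_float x = x +. x   (* inferred: float -> float *)

let () =
  assert (f_int 2 = 4);
  assert (f_float 2.0 = 4.0)
```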

Many type inference systems have been proposed for overloading that
preserve the principal types property.  The most famous example (but
not the only one) is Haskell's type classes.  If you look at the
literature, you'll see that they all involve significant typing
machinery; some even have significant impact on the design of the
whole language (as in the case of Haskell).  I'm not criticizing here,
just pointing out that combining type inference and overloading is not
a trivial extension of ML-style type inference.
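Haskell's type classes can't be transliterated into OCaml, but their
underlying dictionary-passing idea can be mimicked by passing modules
explicitly, as in the sketch below.  This is a hand-rolled illustration
using modern OCaml's first-class modules, not a language feature for
overloading; the `Addable` signature and all names are invented for the
example:

```ocaml
(* A hand-written "dictionary": a module describing how to add
   values of some type t. *)
module type Addable = sig
  type t
  val add : t -> t -> t
end

module Int_add : Addable with type t = int = struct
  type t = int
  let add = ( + )
end

module Float_add : Addable with type t = float = struct
  type t = float
  let add = ( +. )
end

(* The "overloaded" function: the caller supplies the dictionary
   explicitly, so type inference never has to guess which one. *)
let double (type a) (module A : Addable with type t = a) (x : a) : a =
  A.add x x

let () =
  assert (double (module Int_add) 3 = 6);
  assert (double (module Float_add) 1.5 = 3.0)
```

The difference from Haskell is exactly the point of the message: here
the dictionary is chosen by the programmer at each call site, so no
extra typing machinery is needed.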

Hope this (partially) answers your question.

- Xavier Leroy