Date: 2007-01-01 (19:58)
From: Erik de Castro Lopo <mle+ocaml@m...>
Subject: Re: [Caml-list] ANNOUNCE : libsndfile-ocaml alpha
Matthieu Dubuget wrote:

> Is it really the case? I thought that it was possible to create a
> bigarray wrapping a C array without
> copying data. I do not know how to achieve this for float arrays?

When reading files libsndfile always does at least one copy: from
the disk to the array supplied by the caller. This single copy
occurs only if the data in the file is in the same format and
endianness as the format requested by the caller. When the two
formats differ, two copies are required: from the disk to a buffer
internal to libsndfile, and then a copy/data conversion into the
buffer supplied by the caller.

The above doesn't change regardless of whether the caller supplies
an OCaml float array or a bigarray.

In addition, I regard the most common case to be the one where
a data conversion takes place between the file format and the format
requested by the caller.

> This could be interesting for read-only access or in-place modifications?

I don't see how this would be different float array vs bigarray.

> Another very interesting feature of bigarrays is the memory mapping of a
> file as a big array, very useful to work with BIG files.

Firstly, libsndfile doesn't use mem-mapping because in the most common
case the format on disk differs from the format requested by the caller.
Secondly, I consider a big file to be one containing, say, an hour of
multiple channels (say 8) of 32 bit float data at a high sample rate
(say 96kHz). That file is:

    96000 * 60 * 60 * 8 * 4 bytes => 
    11059.200 Mbytes =>
    11.059 Gbytes

Nobody is going to load the whole of that file into memory at once. Instead,
the most sensible and most general approach is to process it in chunks.

  Erik de Castro Lopo
"I saw `cout' being shifted "Hello world" times to the left 
and stopped right there." -- Steve Gonedes