Unix.read isn't POSIX-conformant even if the OS is #8352
Comments
Comment author: administrator
Right. Now that you mention this, I found a discussion of it at: http://pauillac.inria.fr/~aschmitt/cwn/2003.04.22.html#3 Thus the remaining options are:
Comment author: administrator
That doesn't work. A blocking section is not allowed to access the
-- Damien
Comment author: @damiendoligez
select() doesn't guarantee that data is available on the file descriptor when read() is called, because the data might disappear (i.e. be removed by some other process) between the call to select() and the call to read(). In that case, we are left with a thread that blocks outside of a blocking section, a major problem for multithreaded programs. Our only realistic option is to document this behaviour.
Comment author: gerd
The POSIX text is not very exact. Traditionally, read() does not return fewer bytes than requested for regular files (independently of how you read the POSIX specs - all POSIX-based OSes do this). There are other cases which are also not exactly specified (e.g. behaviour for devices or message-based channels), and here the OCaml behaviour hurts more. E.g. you cannot receive a 64K Internet datagram - it is cut off at 16K.
In Ocamlnet the chosen workaround is to use bigarrays as primary buffers, i.e. you have something like

val mem_read : Unix.file_descr -> buffer -> int -> int -> int

where

type buffer = (char, int8_unsigned_elt, c_layout) Array1.t

The user now has the option to make the buffer as large as necessary. This solution has the nice property that you can even save one data copy if you can process the data directly in the bigarray buffer. If you copy the bigarray to a string (btw, this function is missing in the runtime), you have exactly the same overhead as the current solution for Unix.read. As bigarrays are malloc-ed memory, we do not run into the problem that the buffer can be moved around by the GC when triggered by another thread.
So, my suggestion is:
This might imply some restructuring of the libraries; especially, the C part of Bigarrays would have to be moved to the normal stdlib. Btw, write, recv, send have similar problems.
This issue has been open one year with no activity. Consequently, it is being marked with the "stale" label. What this means is that the issue will be automatically closed in 30 days unless more comments are added or the "stale" label is removed. Comments that provide new information on the issue are especially welcome: is it still reproducible? did it appear in other contexts? how critical is it? etc.
The fixed limit on Unix read buffer sizes is still there and still undocumented (although it's larger than when this was reported), so this is still an issue.
The fixed limit is here to stay. We've been saying it should be documented since 2003, but nobody cares to. So let's not care to for one more year.
One thing has changed: Bigarray is now part of the stdlib, so Gerd's solution has become rather easy to implement.
These are variants of `Unix.read` and `Unix.single_write` that operate over a big array of characters instead of a byte array. Since big arrays are allocated outside the heap, we don't need to go through the 65536-byte bounce buffer, and can read or write more than 65536 bytes in one system call. Also: document the effect of the bounce buffer on the `read`, `write` and `single_write` functions. Fixes: ocaml#8352 Closes: ocaml#10897
Can someone reopen this? Xavier seems to be working on it (see linked commits above).
Thanks for noticing, reopened.
Those commits you see are an experiment of mine, using "bigbytes" (1D bigarrays of characters). I'm not satisfied with the results, which is why I haven't submitted it as a PR, and probably will not in the future. A more promising direction, outlined at the latest developers' meeting, is to have a way to allocate byte arrays that are guaranteed not to be moved by the GC. This would be very useful for the Ctypes library too. So, let's try to investigate this alternative instead of investing more effort into bigbytes.
As a rule of thumb: everything GitHub shows you of my activity which is not submitted as a PR should be ignored.
Is there more public information/discussion about that? This could be quite a game changer for some API designs. The bigbytes/bytes split is a big pain of the system at the moment.
There will be by the end of this message :-) Here are my relevant notes from the dev meeting:
Additional context from my memory: currently OCaml 5 does not support compaction, and every meeting for a few years has briefly discussed this at some point. Many workflows don't fragment that much and don't need compaction. (This is especially true with the best-fit policy for the major allocator, but note that OCaml 5 does not currently support best-fit either, and we are not sure whether the current allocator has the same fragmentation-avoiding qualities or not.)

Some people use compaction explicitly on critical programs that first go through an initialization regime where they do a fair amount of allocation work, and then move to a steady state where they don't need most of the initialization-time data anymore. They found that explicitly compacting at the end of the initialization regime is very helpful for performance in the steady state, due to the smaller memory footprint.

This time @avsm mentioned that non-compaction is very useful for some patterns. @stedolan suggested that it may be enough for those patterns to have pinned / non-moving bytes, and allow compaction of the rest of the system. At the meeting, there was a consensus that this would be a good compromise.
#12365 adds I/O functions that use 1D bigarrays as buffers, with unlimited buffer size.
Original bug ID: 1909
Reporter: administrator
Status: acknowledged
Resolution: open
Priority: normal
Severity: feature
Category: otherlibs
Bug description
Put this text into /tmp/foo.ml:
let len: int = 32000;;
let buf: string = String.create len;;
let infd: Unix.file_descr = Unix.openfile "/usr/bin/tcsh" [Unix.O_RDONLY] 0;;
let bytes: int = Unix.read infd buf 0 len;;
let _ = Format.eprintf "bytes is %d, len is %d.\n@?" bytes len;;
(Here /usr/bin/tcsh can be replaced by any convenient readable file
that has at least 32000 bytes.)
Give these commands:
cd /tmp
/usr/bin/ocamlc unix.cma foo.ml -o foo
./foo
It prints:
bytes is 16384, len is 32000.
I think this is wrong. bytes should have been 32000. This is because
the documentation at http://caml.inria.fr/ocaml/htmlman/manual035.html says:
Refer to sections 2 and 3 of the Unix manual for more details on
the behavior of these functions.
and "man 2 read" for Linux and some other operating systems says that
the read system call is POSIX conformant. To see the POSIX spec,
register at
http://www.unix-systems.org/version3/online.html
and visit
http://www.opengroup.org/onlinepubs/007904975/functions/read.html
It says:
Upon successful completion, where nbyte is greater than 0, read()
shall mark for update the st_atime field of the file, and shall return
the number of bytes read. This number shall never be greater than
nbyte. The value returned may be less than nbyte if the number of
bytes left in the file is less than nbyte, if the read() request was
interrupted by a signal, or if the file is a pipe or FIFO or special
file and has fewer than nbyte bytes immediately available for
reading. For example, a read() from a file associated with a terminal
may return one typed line of data.
The common English meaning (but not the meaning used by mathematicians
and programmers and perhaps lawyers) for "if" in
The value returned may be less than nbyte if ...blah blah...
is if-and-only-if. If they really meant a mathematical "if" there,
then that whole sentence is meaningless because it would be saying
that if these conditions hold, the value returned may be less than
nbyte, and if these conditions don't hold, then we aren't saying
anything so there is no constraint and the value returned may still be
less than nbyte. They must have meant "only if".
I submitted a bug to POSIX to change that "if" to "only if".
If you run foo under strace, you can see that the length argument
passed to the C "read" system call is 16384. This is a consequence of
the code in otherlibs/unix/read.c:
Begin_root (buf);
numbytes = Long_val(len);
if (numbytes > UNIX_BUFFER_SIZE) numbytes = UNIX_BUFFER_SIZE;
enter_blocking_section();
ret = read(Int_val(fd), iobuf, (int) numbytes);
leave_blocking_section();
if (ret == -1) uerror("read", Nothing);
memmove (&Byte(buf, Long_val(ofs)), iobuf, ret);
End_roots();
The length limit is a consequence of reading the bytes into a buffer
before copying them, and I don't see any point in doing that. You
could do this just as well:
Begin_root (buf);
numbytes = Long_val(len);
enter_blocking_section();
ret = read(Int_val(fd), &Byte(buf, Long_val(ofs)), (int) numbytes);
leave_blocking_section();
if (ret == -1) uerror("read", Nothing);
End_roots();
(If you copy this code, remember to remove the declaration of iobuf.)
I'm using the Debian unstable ocaml 3.07-7.
Tim Freeman tim@fungible.com
GPG public key fingerprint ECDF 46F8 3B80 BB9E 575D 7180 76DF FE00 34B1 5C78
Computers don't like it when you anthropomorphize them. -- Chris Phoenix