How to write a GC for size > memory
Date: 2009-11-13 (05:10)
From: Goswin von Brederlow <goswin-v-b@w...>
Subject: How to write a GC for size > memory

As I might have mentioned in the past, I'm writing C bindings for
libaio and libfuse in order to write a filesystem in OCaml.

The problem I'm now facing is freeing blocks on the filesystem when
they are no longer used. The filesystem has a huge B-Tree containing
inodes, file attributes, and directory entries (let's call them simple
values, as none of them are pointer-like) or references to data blocks
of a file (let's call them pointers). The filesystem is purely
copy-on-write except for the root of the B-Tree, which is stored
round-robin in a select few blocks at the beginning of the drive.

Simplified, I have:

type key = int
type block_t = int (* pointer to block *)
type entry = Data of int | Block of block_t
type leaf = (key * entry) array
type node = (key * block_t) array
type tree = Leaf of leaf | Node of node
type roots = tree array

To free unused blocks and to defragment the filesystem, I thought I
would implement a moving GC. This actually seems perfect for a GC, as
I have no mutable structures (except the roots, which I can watch).

But here are my problems:

1) I can't put a mark bit into every (data) block.
   a) There is no space: a block holds exactly 4k of data.
   b) I can't modify the block in place anyway without risking damage in a
      crash.

2) I have a lot of blocks, 1-2 billion of them. Keeping even a single
   bit per block in memory (roughly 250 MB for 2 billion blocks)
   exceeds what I can spare.

3) The B-Tree exceeds the spare memory too, so it needs to be swapped
   in and out with a minimum of disk thrashing.

I'm thinking of using a GC mixed with reference counting for this. The
available space is split into chunks of 65536 blocks. For each chunk
I add an entry in the B-Tree counting the number of blocks allocated
(0-65536 and monotonically rising) and the number of dropped
references to blocks in that chunk. The number of dropped references
gives me a metric for how urgent a GC run over the chunk is. Overall
there will be 16k-64k chunks, which doesn't cost much space (and
therefore can fit in memory). To optimize locality, a chunk can only
contain B-Tree nodes (few chunks) or data (many many chunks), but
never both.
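As a rough sketch of that accounting, here is one possible OCaml encoding. The field names, the chunk-kind tag, and the choice of dropped references as the urgency metric are my guesses at one reasonable shape, not a fixed design:

```ocaml
(* Sketch of per-chunk accounting: two counters per chunk, kept in
   memory, plus a picker that selects the most urgent chunk to GC. *)

let blocks_per_chunk = 65536

type chunk_kind = Btree_nodes | Data_blocks  (* a chunk never mixes both *)

type chunk = {
  id : int;
  kind : chunk_kind;
  mutable allocated : int;  (* 0..65536, monotonically rising *)
  mutable dropped : int;    (* references dropped since the last sweep *)
}

(* Urgency metric: the more dropped references, the more reclaimable
   space the chunk is likely to hold. *)
let urgency c = c.dropped

(* Pick the most worthy chunk for the next GC run, if any. *)
let pick_worthy (chunks : chunk array) : chunk option =
  Array.fold_left
    (fun best c ->
      if c.dropped = 0 then best
      else
        match best with
        | Some b when urgency b >= urgency c -> best
        | _ -> Some c)
    None chunks
```

At 16k-64k chunks this is a few hundred kilobytes at most, so it can stay resident even when everything else is paged.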

The GC then has 3 modes:

1) defrag mode, lots of free space left

Go through the B-Tree and copy blocks from anywhere into a new chunk
so that data (or B-Tree nodes) becomes sequential. Keeping the B-Tree
nodes sequential keeps sequential access to keys fast (sweeps in the
GC and certain FS operations: mostly readdir() and unlink()).

2) freeing chunk mode, space is not abundant or there are chunks with a large drop count

Pick a worthy chunk. Sweep the B-Tree completely and copy used blocks
from that chunk somewhere else. After the sweep there can't be any
references to that chunk, so set the allocated and dropped counts to
0. The chunk is now free again. Since I sweep the whole B-Tree, I can
calculate the used count for each chunk here, so the next run can pick
a worthy chunk.
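One possible shape for that full sweep, sketched in memory on the types from the post (redeclared here so the snippet stands alone). [load] stands in for reading a B-Tree node from disk by block address; real code would go through the paging layer:

```ocaml
(* Sketch of the freeing-chunk sweep: walk the whole B-Tree and count
   live ("used") block references per chunk, so the next run can pick
   a worthy victim. *)

let blocks_per_chunk = 65536
let chunk_of_block b = b / blocks_per_chunk

type key = int
type block_t = int
type entry = Data of int | Block of block_t
type tree =
  | Leaf of (key * entry) array
  | Node of (key * block_t) array

let sweep_used_counts ~(load : block_t -> tree) (roots : tree array) =
  let used : (int, int) Hashtbl.t = Hashtbl.create 97 in
  let count b =
    let c = chunk_of_block b in
    let n = try Hashtbl.find used c with Not_found -> 0 in
    Hashtbl.replace used c (n + 1)
  in
  let rec walk = function
    | Leaf entries ->
        Array.iter
          (fun (_, e) -> match e with Block b -> count b | Data _ -> ())
          entries
    | Node children ->
        Array.iter (fun (_, b) -> count b; walk (load b)) children
  in
  Array.iter walk roots;
  used
```

Evacuating the victim chunk's live blocks would then happen during the same walk, rewriting nodes copy-on-write as usual.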

3) emergency compaction mode, space is running out

Declare an emergency and suspend filesystem operations. This can't be
concurrent as the FS would run out of space.

Pick a worthy chunk and allocate a bitmap in memory for it. Sweep the
B-Tree completely and mark all used blocks in the bitmap. Compact the
chunk and update the B-Tree (another sweep). Set the allocated count
to the number of used blocks and the used count to the number of
references. The chunk is then partially allocated again. Since space
is tight, don't regenerate the other chunks' used counts, to avoid
B-Tree updates eating space. Ideally this should only use free blocks
in the chunk it compacts to record the changes to the B-Tree.
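The bitmap itself is cheap: one bit per block of a single 65536-block chunk is only 8 KiB, allocated just for the chunk being compacted. A minimal sketch:

```ocaml
(* Sketch of the emergency-compaction bitmap over one chunk. [i] is a
   block's offset within the chunk, 0..65535. *)

let blocks_per_chunk = 65536

let create_bitmap () : Bytes.t = Bytes.make (blocks_per_chunk / 8) '\000'

let mark bm i =
  let byte = Char.code (Bytes.get bm (i / 8)) in
  Bytes.set bm (i / 8) (Char.chr (byte lor (1 lsl (i mod 8))))

let is_marked bm i =
  Char.code (Bytes.get bm (i / 8)) land (1 lsl (i mod 8)) <> 0

(* The number of marked blocks becomes the chunk's new allocated
   count after compaction. *)
let popcount bm =
  let n = ref 0 in
  Bytes.iter
    (fun c ->
      let x = ref (Char.code c) in
      while !x <> 0 do incr n; x := !x land (!x - 1) done)
    bm;
  !n
```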

Normally the GC would switch between defrag and freeing-chunk
mode. Both would be concurrent with normal filesystem operations, and
possibly only run when the filesystem is idle. The compaction mode
would only happen in situations where the FS would otherwise have to
return ENOSPC.

To further improve this, I would like to add a generational component
to the GC. I have no mutables (except the roots), so an old chunk (or
an old B-Tree node) can hold no references to a newer chunk. Also, new
files are far more likely to be deleted again than old files, and new
B-Tree nodes are more likely to be modified again than old B-Tree
nodes. This seems to scream for a generational approach, which should
allow the GC to only scan a fraction of the B-Tree on each sweep. But
I'm not quite sure how to go about that.
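One way the no-old-to-new invariant could be exploited, as a sketch: tag each chunk with the epoch at which it was written, and let a young-generation sweep prune its descent at any B-Tree node stored in an old chunk, since such a node was written before every young chunk existed. [epoch_of_chunk] and [load] are assumed primitives, and the types are repeated from the post so the snippet stands alone:

```ocaml
(* Sketch of a generational sweep that skips whole old subtrees. *)

let blocks_per_chunk = 65536
let chunk_of_block b = b / blocks_per_chunk

type key = int
type block_t = int
type entry = Data of int | Block of block_t
type tree =
  | Leaf of (key * entry) array
  | Node of (key * block_t) array

let rec young_sweep ~load ~epoch_of_chunk ~young_epoch ~visit tree =
  let is_young b = epoch_of_chunk (chunk_of_block b) >= young_epoch in
  match tree with
  | Leaf entries ->
      Array.iter
        (fun (_, e) ->
          match e with
          | Block b when is_young b -> visit b
          | _ -> ())
        entries
  | Node children ->
      Array.iter
        (fun (_, b) ->
          (* A child node in an old chunk cannot reference any young
             chunk, so its whole subtree is pruned. *)
          if is_young b then begin
            visit b;
            young_sweep ~load ~epoch_of_chunk ~young_epoch ~visit (load b)
          end)
        children
```

This only visits blocks in young chunks, which is the "fraction of the B-Tree" a generational sweep should touch; promoting survivors to an older epoch would happen when the defrag pass copies them.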

Comments, ideas, improvements, and URLs welcome.