From: Pierre Etchemaïté <petchema@c...>
Subject: Importance of weak hash tables initial size ?

	Hi all,

I noticed that while the initial size of a regular hash table does not
matter much for performance (besides the cost of the extra resizes and
element rehashing needed to reach the final size), creating a weak hash
table with a small initial size leads to a table with a much higher
load factor than the same program gets with a larger initial size.

In short, large weak hash tables have significantly worse access-time
performance when created with a small initial size.
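Below is a minimal sketch of the kind of measurement that shows this.
It assumes the stdlib's Weak.Make functor and the stats function of the
resulting Weak.S module; the key count and the two initial sizes are
arbitrary choices for illustration:

  (* Build a weak set of strings. *)
  module WS = Weak.Make (struct
    type t = string
    let equal = (=)
    let hash = Hashtbl.hash
  end)

  (* Fill a table created with [init] buckets with [n] live strings and
     report how loaded its bucket array ends up. *)
  let report n init =
    let t = WS.create init in
    (* Keep the strings reachable so the GC does not empty the table. *)
    let keep = Array.init n (fun i -> "key-" ^ string_of_int i) in
    Array.iter (WS.add t) keep;
    let buckets, entries, _, _, _, biggest = WS.stats t in
    Printf.printf "init=%7d  buckets=%7d  entries=%7d  load=%.2f  biggest=%d\n"
      init buckets entries
      (float_of_int entries /. float_of_int buckets) biggest;
    keep

  let () =
    let n = 100_000 in
    let _small = report n 7 in    (* small initial size *)
    let _large = report n n in    (* generous initial size *)
    ()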

Specifically, what is the rationale behind the

  t.limit <- t.limit + 2

in weak.ml?

It may be fine to use a higher load-factor limit than Hashtbl does
(Hashtbl uses a constant limit of 2), since each bucket of a weak hash
table is somewhat overallocated, but I fail to understand why the load
factor limit should also be raised with each resize...
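If I read the code correctly, the effect compounds: a table created
small goes through many more resizes, and every resize bumps the limit,
so the final load-factor cap ends up depending on the initial size
rather than being a constant. Here is a rough model of that behaviour,
not the actual weak.ml code; the doubling growth and the starting limit
of 3 are assumptions made only to illustrate the compounding:

  (* Rough model: suppose the bucket array doubles whenever the average
     bucket length reaches [limit], and that [limit] is bumped by 2 at
     each resize, as in the line quoted above. *)
  let final_state ~initial_size ~entries =
    let rec go buckets limit resizes =
      if entries <= buckets * limit then (buckets, limit, resizes)
      else go (2 * buckets) (limit + 2) (resizes + 1)
    in
    go (max 1 initial_size) 3 0

  let () =
    List.iter
      (fun init ->
        let buckets, limit, resizes =
          final_state ~initial_size:init ~entries:1_000_000 in
        Printf.printf "init=%7d -> %2d resizes, final limit=%2d, buckets=%8d\n"
          init resizes limit buckets)
      [ 7; 1_024; 1_000_000 ]

With this model, the table created with a tiny initial size ends up
with a limit an order of magnitude higher than the one created at its
final size, which would explain the difference I'm seeing.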

Best regards,
Pierre.