hashtable crash #44
Comments
Could you reduce this problem to a test case? For example, can you modify tests/hashtable.c so that it reproduces this problem and then paste your modified version of that file?
My modification in tests/hashtable.c:
// if (howmany > 1000) {
load_data(&ht, 100000000);
Core was generated by `/data1/hadoop/pengcheng/libphenom/tests/.libs/lt-hashtable.t'.
warning: Source file is more recent than executable.
Thanks very much!
This is the bt full output:
Program terminated with signal 11, Segmentation fault.
warning: Source file is more recent than executable.
Addresses #44. The problem was that the compiler was producing int32_t-sized results when figuring out sizes and offsets. Combine this with a large number of elements (large enough for those computations to overflow 32 bits) and the table would be allocated smaller than we needed. This solves the problem in the fewest keystrokes by promoting a couple of fields to 64 bits, but has the side effect of changing the size of the hashtable struct.
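As an illustration of that overflow class (a minimal sketch, not libphenom's actual allocation code): the table_size value below is taken from the gdb dump later in this issue, while the per-bucket byte count is a hypothetical figure chosen so that the 32-bit product visibly wraps.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
  uint32_t table_size = 134217728;   /* 2^27 slots, from the `p *ht` dump below */
  uint32_t bucket_bytes = 48;        /* hypothetical per-bucket size, for illustration */

  /* 32-bit arithmetic: the product (about 6 GiB) wraps modulo 2^32,
     so an allocation based on it would be far smaller than needed. */
  uint32_t bad = table_size * bucket_bytes;

  /* Promoting one operand to 64 bits before multiplying keeps the
     full result, which is the essence of the fix described above. */
  uint64_t good = (uint64_t)table_size * bucket_bytes;

  printf("32-bit result: %" PRIu32 " bytes\n", bad);
  printf("64-bit result: %" PRIu64 " bytes\n", good);
  return 0;
}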
Resolved by the attached commit; thanks for reporting this! Also note: the hash table will work more efficiently for you if you call ph_ht_init() with a size_hint closer to the number of elements you expect to store. If you don't know the final size at the time that you call ph_ht_init()…
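For example, here is a minimal sketch of that sizing advice applied to the init call quoted in the issue body below; the wrapper struct, the header path, and the PH_OK check are assumptions rather than anything stated in this thread.

#include <stdbool.h>
#include "phenom/hashtable.h"   /* assumed header path for the ph_ht_* API */

/* Hypothetical container struct standing in for the reporter's `mth`. */
struct my_state {
  ph_ht_t ht;
};

/* Size the table for the number of elements we actually expect,
   instead of the small hint (10000) used in the original report. */
#define EXPECTED_ELEMS 67108864

static bool init_table(struct my_state *mth) {
  ph_result_t reph = ph_ht_init(&mth->ht, EXPECTED_ELEMS,
                                &ph_ht_string_key_def, &ph_ht_uint32_val_def);
  return reph == PH_OK;   /* assumed success constant */
}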
You're welcome. Thank you for your advice.
Yes
Your message has been received, thank you.
When I set 67108864 or more elements, the hashtable crashes.
(gdb) p *ht
$5 = {nelems = 67108864, table_size = 134217728, elem_size = 16, mask = 134217727, kdef = 0x7ffff7ff6ea0, vdef = 0x7ffff7ff68e0,
table = 0x7ffe68e92020 ""}
My init call:
reph = ph_ht_init(&(mth->ht), 10000,
&ph_ht_string_key_def, &ph_ht_uint32_val_def);