On 30 Mar 2001, at 9:18, Chemolli Francesco (USI) <ChemolliF@GruppoCredit.it> wrote:
> ...
> > Generally it showed very few fragmentation problems, and even came
> > back to normal after sudden spikes of mallocs, but it is still prone
> > to leaving several chunks around that have only a few or single used
> > items. But I guess this is unavoidable with any kind of chunked
> > allocs. The main source of chunked fragmentation was the StoreEntry
> > pool and its close relatives. It seemed desirable to limit chunk
> > size for multimillion-item pools. On the other hand, other pools
> > enjoyed large chunks. So I was wondering about adding a hint during
> > mempool initialisation.
>
> Sounds interesting. Can you give a few numbers? (Which pools get the
> most from chunking, which pools get the least, what the statistical
> distribution of allocations looks like, what the expected cost of the
> chunked approach is in terms of added complexity, and what the memory
> overhead is.)
Unfortunately, at some point I got stuck with some nasty memory
corruption that only showed up with my chunked pools. I believe it was
a bug somewhere else, but I didn't find it. Then I ran out of time, so
I'm not running it on a production box right now.
The benefits were many-sided. The heaviest users were StoreEntry, MD5
and heap_node. I modified dlmalloc a bit so that it was eager to
allocate chunks with mmap(), so unfilled chunks didn't contribute to
heap fragmentation. Of course, dlmalloc wastes 1 page per mmapped
alloc, so the memory benefits came with larger chunks. While chasing
the corruption, I even went as far as to mprotect() pages after a
free, and it helped me find a case where aio was trashing a freed
disk buffer.
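Roughly, the trick looks like this (just a minimal sketch of the idea,
not the code I actually used; it assumes the freed area is a
page-aligned mmap()ed region):

  #include <sys/mman.h>
  #include <unistd.h>
  #include <stdio.h>
  #include <stdlib.h>

  /* grab a page-aligned region, like a pool chunk would be */
  static void *debug_alloc(size_t size)
  {
      void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      return (p == MAP_FAILED) ? NULL : p;
  }

  /* instead of unmapping, revoke all access so that any later
   * touch of the "freed" memory faults immediately */
  static void debug_free(void *p, size_t size)
  {
      if (mprotect(p, size, PROT_NONE) != 0)
          perror("mprotect");
  }

  int main(void)
  {
      size_t pagesz = (size_t) sysconf(_SC_PAGESIZE);
      char *buf = debug_alloc(pagesz);
      if (!buf)
          return 1;
      buf[0] = 'x';             /* normal use */
      debug_free(buf, pagesz);
      buf[0] = 'y';             /* use-after-free: SIGSEGV here, not later */
      return 0;
  }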
It is hard to tell which pools benefit the least. Perhaps those that
make only very few allocations during a squid run, given that I picked
a minimal chunk size of 16K. Some pools stayed empty most of the time.
Idle fragmentation was highest for pools with spikes in allocations,
like client socket buffers and client request buffers, i.e. things
that depend on sudden request spikes from clients. But this is
understandable, and I think it was not very bad.
The idle pool limit worked OK, staying only slightly over the
configured limits.
You can see a sample of the cachemgr memory utilisation output at:
http://www.online.ee/~andre/squid/mempool/memutil_chunked.html
The code itself is at:
http://www.online.ee/~andre/squid/mempool/MemPool.c
Memory overhead is quite small: 24 bytes per chunk, plus dlmalloc
overhead and 1 page if mmapped. The minimal object size is 1 pointer
(4 or 8 bytes). Overall, in-use memory typically stayed within 95-99%
of the total allocated.
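To show where the one-pointer minimum comes from (a sketch only; the
field names are hypothetical, the real thing is in MemPool.c above):
each chunk carries a small header, and a free object stores the
free-list link inside itself.

  /* Hypothetical per-chunk bookkeeping (see MemPool.c for the real
   * layout).  Free objects hold the free-list link inside themselves,
   * hence the one-pointer minimum object size. */
  typedef struct _MemChunk {
      void *freeList;           /* first free object in this chunk */
      void *objCache;           /* start of the chunk's object array */
      int inuse_count;          /* objects currently handed out */
      struct _MemChunk *next;   /* next chunk in the same pool */
  } MemChunk;                   /* a few words per chunk, roughly the
                                 * 24 bytes mentioned above */

  /* freeing pushes the object onto its chunk's free list */
  static void chunkPush(MemChunk *chunk, void *obj)
  {
      *(void **) obj = chunk->freeList;
      chunk->freeList = obj;
      chunk->inuse_count--;
  }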
Of course, all it eventually buys us is avoiding the dlmalloc
per-allocation overhead, which is quite small. Perhaps it would buy
more if we needed to grow the StoreEntry struct by a few bytes: then
the dlmalloc overhead would increase by 16 bytes per object, while
with a chunked pool the growth is only padded to 4- or 8-byte
alignment depending on the CPU. I got the impression that memory
usage dropped by 25-30%, but this wasn't a long-term run. Besides,
there are lots of very small allocations not yet moved onto pools.
CPU overhead is quite minimal. Most of it is during a free, as we have
to find the chunk the object belongs to: the more chunks in a pool,
the more overhead. But in terms of CPU ticks it wasn't bad at all. I
also zero out freed objects, and on average this takes more time than
finding the chunk. Basically, CPU times were very close to those of
the old mempool.
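The free path then looks roughly like this (again only a sketch,
building on the hypothetical MemChunk above; the MemPool struct here
is made up as well, not the one in MemPool.c):

  #include <string.h>           /* memset() */

  /* made-up pool descriptor, just enough for the sketch */
  typedef struct {
      MemChunk *chunks;         /* all chunks belonging to this pool */
      size_t obj_size;          /* size of one pooled object */
      int chunk_capacity;       /* objects per chunk */
  } MemPool;

  static MemChunk *poolFindChunk(MemPool *pool, void *obj)
  {
      /* linear walk: the more chunks a pool has, the costlier a free */
      MemChunk *chunk;
      for (chunk = pool->chunks; chunk; chunk = chunk->next) {
          char *base = (char *) chunk->objCache;
          if ((char *) obj >= base &&
              (char *) obj < base + pool->obj_size * pool->chunk_capacity)
              return chunk;
      }
      return NULL;              /* not from this pool */
  }

  static void poolFree(MemPool *pool, void *obj)
  {
      MemChunk *chunk = poolFindChunk(pool, obj);
      if (!chunk)
          return;               /* or assert() in a debug build */
      memset(obj, 0, pool->obj_size);   /* zeroing of freed objects */
      chunkPush(chunk, obj);
  }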
The added complexity is transparent to the callers, and not extremely
high.
> As an added thought, if a hint is given on pool creation, it
> might be useful to change the chunk size (in number of entries)
> for each pool, to allow for tweaking. Where there is a tendency
> to have many chunks with few items in each, make the chunks
> smaller. Conversely, when there is no such problem, make them
> bigger to benefit the most. Just my .02 euro.
Yup, that's what I did. To not break the existing mempool API, I added
a function memPoolTune() that can be called before the first alloc to
change the chunk size.
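Usage would be along these lines (hypothetical: memPoolCreate() and
memPoolAlloc() are the existing calls, but the exact memPoolTune()
arguments here are only a guess, the one real rule being that it must
run before the first alloc):

  /* hypothetical usage; memPoolTune()'s argument list is a guess */
  MemPool *storeentry_pool;

  storeentry_pool = memPoolCreate("StoreEntry", sizeof(StoreEntry));

  /* multimillion-object pool: keep chunks modest so half-empty
   * chunks don't pin too much memory */
  memPoolTune(storeentry_pool, 64 * 1024);    /* chunk size in bytes */

  /* allocation itself stays unchanged */
  StoreEntry *e = memPoolAlloc(storeentry_pool);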
------------------------------------
Andres Kroonmaa <andre@online.ee>
CTO, Delfi Online
Tel: 6501 731, Fax: 6501 708
Pärnu mnt. 158, Tallinn,
11317 Estonia