Thanks for the response, Adrian. Earlier I was using only AUFS on each drive,
and the system choked on I/O wait above 200 req/sec. But after I added COSS
into the mix, performance improved vastly.
It is doing more than 16,000 req/min on average (HTTP + ICP; is this the
right way to count, or should I count only HTTP requests?), so I guess the peak
would be well over 25,000 req/min. The server hardware is a Core 2 Duo with
8 GB RAM, of which 2 GB is cache_mem (heap GDSF); the rest is left free for
the OS to use as disk buffers. 1 x 80 GB IDE drive for OS & logs, plus
4 x 160 GB SATA 3.0 Gbps HDDs. Each SATA drive holds 1 x 32 GB COSS store
(heap GDSF) (overwrite-percent=0% max-stripe-waste=32768 membufs=256 MB
maxfullbufs=32 MB max-size=32768) and 1 x 65 GB AUFS store (heap LFUDA)
(L1=16 & L2=256).
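For clarity, the cache_dir lines for the first drive look roughly like this
(sizes are in MB, the other three drives are identical apart from the mount
point, and I'm reconstructing this from memory, so the exact option spelling
may be slightly off):

  cache_dir coss /cache1/coss 32768 max-size=32768 overwrite-percent=0 max-stripe-waste=32768 membufs=256 maxfullbufs=32
  cache_dir aufs /cache1/aufs 66560 16 256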
The hit ratio is over 45% (sometimes peaking at 80%). I've just put a delay
pool on rapidshare and some other popular download sites, and that improved
the byte hit ratio to an average of 23%, sometimes peaking at 32% (without
YouTube or any other fancy caching). Each day the cache passes around
330 GB of data.
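In case it helps anyone else reading, the delay pool is nothing fancy; it's
basically a single class-1 pool keyed off a dstdomain ACL, something along
these lines (the domain list and rates here are illustrative, not my exact
values):

  acl big_dl dstdomain .rapidshare.com
  delay_pools 1
  delay_class 1 1
  delay_parameters 1 64000/64000
  delay_access 1 allow big_dl
  delay_access 1 deny all

The idea is simply to throttle the big sequential downloads from those sites.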
Since you're the COSS expert, I would really love to hear what you think of
my COSS configuration options above. Do you think I can improve them?
As for the L1 and L2 numbers in AUFS, can you suggest any benchmark tests I
can run and report back on?
Also, if anybody else can share their ideas / experience, it would be great!
I'm a bit puzzled about the following:
1. Although I've set different cache_replacement_policy values for the two
store types (GDSF for COSS and LFUDA for AUFS), as suggested by the HP
whitepaper referenced in the config file, the "Current Squid Configuration"
page in cachemgr shows only LFUDA above all eight cache_dir entries. Does
that mean all of them are using LFUDA? Isn't GDSF better for smaller
objects?
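One thing I'm wondering: if I've read squid.conf.documented correctly,
cache_replacement_policy is supposed to apply to the cache_dir lines that
come after it, so maybe I need to interleave the directives per store type,
something like this (sketch only, paths illustrative), and the cachemgr page
is just reporting the last value set?

  cache_replacement_policy heap GDSF
  cache_dir coss /cache1/coss 32768 max-size=32768
  cache_dir coss /cache2/coss 32768 max-size=32768
  # ... COSS stripes for the other two drives ...
  cache_replacement_policy heap LFUDA
  cache_dir aufs /cache1/aufs 66560 16 256
  cache_dir aufs /cache2/aufs 66560 16 256
  # ... AUFS dirs for the other two drives ...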
2. When I had only one type of storage (AUFS), it was easy to work out the
average number of objects per cache_dir. However, now that I have two store
types on each of the four HDDs, I can't tell how many of the 11,000,000-plus
objects reported in cachemgr are in the COSS stores and how many are in the
AUFS stores. Is there a way to find that out?
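The closest thing I've found so far is the cachemgr "storedir" report, which
I pull with:

  squidclient mgr:storedir

but I'm not sure whether it breaks the object counts down per cache_dir or
only shows capacity / map usage, which is exactly what I'm hoping someone can
confirm.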
Sorry for the large post. :( But I couldn't find good answers to these
questions in a more recent context, say after 2.6.STABLE15 was released.
If there is any way I can run some tests / benchmarks for the community,
please let me know. I'll be glad to accommodate.
Regards
HASSAN
----- Original Message -----
From: "Adrian Chadd" <adrian.chadd_at_gmail.com>
To: "Nyamul Hassan" <mnhassan_at_usa.net>
Cc: "Squid Users" <squid-users_at_squid-cache.org>
Sent: Saturday, December 06, 2008 00:32
Subject: Re: [squid-users] Number of Spindles
Things have changed somewhat since that algorithm was decided upon.
Directory searches were linear and the amount of buffer cache /
directory name cache available wasn't huge.
Having large directories took time to search and took RAM to cache.
No one's really sat down and done any hard-core tuning - or at least,
they've done it but haven't published the results anywhere. :)
Adrian
2008/12/3 Nyamul Hassan <mnhassan_at_usa.net>:
> Why are there no (or only marginal / insignificant) improvements beyond 3
> spindles? Is it because Squid is a single-threaded application?
>
> On this note, what impact do the L1 and L2 directories have on AUFS
> performance? I understand that these are there to control the number of
> objects in each folder. But what would be a good number of files to keep
> in a directory, performance-wise?
>
> Regards
> HASSAN
>
>
>
> ----- Original Message ----- From: "Amos Jeffries" <squid3_at_treenet.co.nz>
> To: "Henrik Nordstrom" <henrik_at_henriknordstrom.net>
> Cc: "Nyamul Hassan" <mnhassan_at_usa.net>; "Squid Users"
> <squid-users_at_squid-cache.org>
> Sent: Monday, December 01, 2008 04:33
> Subject: Re: [squid-users] Number of Spindles
>
>
>>> On Sun 2008-11-30 at 09:56 +0600, Nyamul Hassan wrote:
>>>
>>>> "The primary purpose of these tests is to show that Squid's performance
>>>> doesn't increase in proportion to the number of disk drives. Excluding
>>>> other
>>>> factors, you may be able to get better performance from three systems
>>>> with
>>>> one disk drive each, rather than a single system with three drives."
>>>
>>> There is a significant difference up to 3 drives in my tests.
>>>
>>
>> Um, can you clarify please? Do you mean your experience differs from what
>> was described, or that separate systems are faster up to 3 drives?
>>
>> Amos
>>
>>
>>
>
>