On Thu, Jan 8, 2009 at 1:26 PM, Vianney Lejeune <via.lej_at_free.fr> wrote:
>>>
>> > cache_mem 250 MB
>>>
>>> maximum_object_size_in_memory 50 KB
>>
>> Memory, memory, memory. The more you can throw at the problem, the more
>> objects can be kept and served while hot. Squid with 64-bit can easily
>> handle many GBs of memory cache (at the cost of a slow shutdown while it
>> saves the hottest objects to disk for the next round).
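A sketch of what that might look like in squid.conf on a 64-bit box with RAM to spare (the numbers here are purely illustrative, not a recommendation -- size cache_mem to the memory you can actually dedicate to Squid):

```
# Illustrative only: give Squid a large in-memory hot-object cache.
cache_mem 4096 MB

# Raise the per-object memory cap above the 50 KB quoted earlier so
# medium-sized hot objects can also be served from RAM.
maximum_object_size_in_memory 512 KB
```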
>>
>>
>>> cache_replacement_policy heap LFUDA
>>
>> It's been a while since I looked at these, but to maximize byte hit
>> ratio you want the policy that considers object size as well as
>> 'coldness', so it removes the smaller cool objects before the larger,
>> equally cool ones.
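For reference, heap LFUDA (as in the config quoted above) is the policy that behaves this way: it favors byte hit ratio by keeping large popular objects. A hedged sketch, assuming you also want to tune the in-memory policy separately:

```
# heap LFUDA favors byte hit ratio: large popular objects stay,
# equally cold smaller objects are evicted first.
cache_replacement_policy heap LFUDA

# The memory cache can use a different policy; heap GDSF instead
# favors request hit ratio by preferring to keep small objects.
memory_replacement_policy heap GDSF
```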
>>
>>> cache_dir ufs /data/spool/squid 30000 16 256
>
>
> By the way, what are the ideal settings for cache_mem, cache size, and so
> on? Is there any formula? Are 2*500 GB HDs faster than 1*1 TB?
Yes, as each of those can handle I/O operations concurrently. In
general, the more disks the better the performance: Squid performance
is usually constrained by disk head seek times.
See http://wiki.squid-cache.org/SquidFaq/RAID
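To put that concretely: the usual approach is one cache_dir per physical spindle rather than RAID, so seeks proceed in parallel. A sketch with two hypothetical 500 GB disks (paths and sizes are assumptions; aufs requires Squid built with async I/O support, otherwise use ufs):

```
# One cache_dir per physical disk; leave headroom below disk capacity.
cache_dir aufs /data/spool/squid1 450000 64 256
cache_dir aufs /data/spool/squid2 450000 64 256
```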
-- /kinkie
Received on Thu Jan 08 2009 - 15:37:27 MST
This archive was generated by hypermail 2.2.0 : Thu Jan 08 2009 - 12:00:02 MST