On 5/08/2013 4:17 a.m., babajaga wrote:
> As I guessed in my first reply, you are reaching the maximum number of
> cached objects in your cache_dir, as Amos explained, which leaves part
> of your disk space unusable.
>
> However, as an alternative to using rock, you can set up a second
> ufs/aufs cache_dir.
> (Especially on a production system, I would suggest 2 ufs/aufs.)
Erm. On fast or high-traffic proxies Squid drives disk I/O to the limits
of the hardware. If you place 2 UFS-based cache_dir on one physical disk
spindle with lots of small objects, they will fight for I/O resources,
dramatically reducing both performance and disk lifetime relative to the
traffic speed. The Rock and COSS cache types avoid this by aggregating
the small objects into large blocks which are read and written all at
once.
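A minimal sketch of that kind of split (paths and sizes here are
illustrative, not taken from this thread; rock needs Squid 3.2 or later,
where a rock slot holds objects up to 32 KB):

    # small objects go to the rock store, which packs them into
    # fixed-size slots and does large sequential I/O
    cache_dir rock /var/cache/squid/rock 1000 max-size=32768
    # larger objects go to a single aufs dir; min-size keeps the
    # two stores from overlapping
    cache_dir aufs /var/cache/squid/aufs 4000 16 256 min-size=32769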
> BUT, and this is valid for both alternatives: be careful to avoid
> double caching by applying consistent limits on the size of cached
> objects.
You won't get double caching within one proxy process. This only happens
with multiple proxies or with SMP workers.
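For illustration (directive values are assumptions, not from this
thread): under SMP a rock cache_dir is shared between the workers, so
each object is stored once, whereas ufs/aufs dirs are per-worker and
each worker keeps its own copy:

    workers 2
    # shared between both workers: one copy of each small object
    cache_dir rock /var/cache/squid/rock 1000 max-size=32768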
> Note, that there are several limits to be considered:
> maximum_object_size_in_memory xxxxxx KB
> maximum_object_size yyyyyyy KB
> minimum_object_size 0 KB
> cache_dir aufs /var/cacheA/squid27 250 16 256 min-size=0 max-size=zzzzzzzz
> cache_dir aufs /var/cacheB/squid27 250 16 256 min-size=zzzzzzz+1 max-size=yyyyyyyy KB
>
> And when doing this, you should use the newest squid release, or good
> old 2.7 :-)
> The reason is that a few squid 3.x versions had a bug when evaluating
> the combination of different limit options, with the consequence that
> certain cachable objects were not stored on disk.
That bug still exists; until it is fixed, the important thing is to
place maximum_object_size above the cache_dir lines in squid.conf.
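A sketch of the workaround ordering (values are illustrative; only the
relative order of the two directives matters on the affected 3.x
releases):

    # set the global size limit first ...
    maximum_object_size 512 MB
    # ... so the cache_dir picks up the intended limit when parsed
    cache_dir aufs /var/cache/squid 4000 16 256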
Amos
Received on Sun Aug 04 2013 - 22:35:28 MDT