Thanks. I just switched my cache drives to aufs. Can you explain in detail
what other changes I should make in my squid.conf for better cache results?
We have an almost 45 Mb link, with 30 Mb of it for proxy services. Should I
add more hard drives for caching, or just tune Squid and the Linux kernel?
Remember, we are using RHEL ES 4. I know BSD gives high availability, but we
can't use it.
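For reference, the diskd-to-aufs switch is a one-word change per cache_dir line; the sizes and L1/L2 directory counts below simply mirror the existing config:

```
# aufs uses POSIX threads instead of a separate disk daemon process;
# same 7000 MB, 16 L1 and 256 L2 directories as the diskd setup
cache_dir aufs /cache1 7000 16 256
cache_dir aufs /cache2 7000 16 256
# ...and likewise for the remaining cache directories
```

On the kernel side, a sketch of commonly tuned sysctls for a busy proxy on a RHEL ES 4 (2.6) kernel is below; the exact values are assumptions to be adjusted for the workload, not recommendations:

```
# /etc/sysctl.conf - raise descriptor and socket limits for ~3000 users
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_fin_timeout = 30
net.core.somaxconn = 1024
```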
Adrian Chadd wrote:
>
> G'day,
>
> A few notes.
>
> * Diskd isn't stable, and won't be until I commit my next set of patches
> to 2.7 and 3.0; use aufs for now.
>
> * Caching windows updates will be possible in Squid-2.7. It'll require
> some
> rules and a custom rewrite helper.
>
> * 3.0 isn't yet as fast as 2.6 or 2.7.
>
>
> Adrian
>
> On Tue, Feb 12, 2008, pokeman wrote:
>>
>> Well, in my experience Squid's cache does not work well under heavy load.
>> I have a 4-core machine with 7 SCSI drives and 4 GB RAM; the average
>> workload in peak hours is 3000 users on 30 Mb of bandwidth, running
>> RHEL ES 4. I have searched many articles on cache performance, especially
>> for Windows Update; saving the .psf files has been a real headache these
>> days. I heard Squid 3.0 performs better, but why couldn't the Squid
>> developers find a solution for caching Windows Update in 2.6? Please
>> tell me if I am doing something wrong in my squid.conf.
>>
>>
>> http_port 3128 transparent
>> range_offset_limit 0 KB
>> cache_mem 512 MB
>> pipeline_prefetch on
>> shutdown_lifetime 2 seconds
>> coredump_dir /var/log/squid
>> ignore_unknown_nameservers on
>> acl all src 0.0.0.0/0.0.0.0
>> acl ourusers src 192.168.100.0/24
>> hierarchy_stoplist cgi-bin ?
>> maximum_object_size 16 MB
>> minimum_object_size 0 KB
>> maximum_object_size_in_memory 64 KB
>> cache_replacement_policy heap LFUDA
>> memory_replacement_policy heap GDSF
>> cache_dir diskd /cache1 7000 16 256
>> cache_dir diskd /cache2 7000 16 256
>> cache_dir diskd /cache3 7000 16 256
>> cache_dir diskd /cache4 7000 16 256
>> cache_dir diskd /cache5 7000 16 256
>> cache_dir diskd /cache6 7000 16 256
>> cache_dir diskd /cache7 7000 16 256
>> cache_access_log none
>> cache_log /var/log/squid/cache.log
>> cache_store_log none
>> dns_nameservers 127.0.0.1
>> refresh_pattern windowsupdate.com/.*\.(cab|exe|dll) 43200 100% 43200
>> refresh_pattern download.microsoft.com/.*\.(cab|exe|dll) 43200 100% 43200
>> refresh_pattern au.download.windowsupdate.com/.*\.(cab|exe|psf) 43200 100% 43200
>> refresh_pattern ^ftp: 1440 20% 10080
>> refresh_pattern ^gopher: 1440 0% 1440
>> refresh_pattern cgi-bin 0 0% 0
>> refresh_pattern \? 0 0% 4320
>> refresh_pattern . 0 20% 4320
>> negative_ttl 1 minutes
>> positive_dns_ttl 24 hours
>> negative_dns_ttl 1 minutes
>> acl manager proto cache_object
>> acl localhost src 127.0.0.1/255.255.255.255
>> acl to_localhost dst 127.0.0.0/8
>> acl SSL_ports port 443 563
>> acl Safe_ports port 1195 1107 1174 1212 1000
>> acl Safe_ports port 80 # http
>> acl Safe_ports port 82 # http
>> acl Safe_ports port 81 # http
>> acl Safe_ports port 21 # ftp
>> acl Safe_ports port 443 563 # https, snews
>> acl Safe_ports port 70 # gopher
>> acl Safe_ports port 210 # wais
>> acl Safe_ports port 1025-65535 # unregistered ports
>> acl Safe_ports port 280 # http-mgmt
>> acl Safe_ports port 488 # gss-http
>> acl Safe_ports port 591 # filemaker
>> acl Safe_ports port 777 # multiling http
>> acl CONNECT method CONNECT
>> http_access allow manager localhost
>> http_access deny manager
>> http_access deny !Safe_ports
>> http_access deny CONNECT !SSL_ports
>> http_access allow ourusers
>> http_access deny all
>> http_reply_access allow all
>> cache allow all
>> icp_access allow ourusers
>> icp_access deny all
>> cache_mgr info@fariya.com
>> visible_hostname CE-Fariya
>> dns_testnames localhost
>> reload_into_ims on
>> quick_abort_min 0 KB
>> quick_abort_max 0 KB
>> log_fqdn off
>> half_closed_clients off
>> client_db off
>> ipcache_size 16384
>> ipcache_low 90
>> ipcache_high 95
>> fqdncache_size 8129
>> log_icp_queries off
>> strip_query_terms off
>> store_dir_select_algorithm round-robin
>> client_persistent_connections off
>> server_persistent_connections on
>> persistent_request_timeout 1 minute
>> client_lifetime 60 minutes
>> pconn_timeout 10 seconds
>>
>>
>>
>> Adrian Chadd wrote:
>> >
>> > On Thu, Jan 31, 2008, Chris Woodfield wrote:
>> >> Interesting. What sort of size threshold do you see where performance
>> >> begins to drop off? Is it just a matter of larger objects reducing
>> >> hitrate (due to few objects being cacheable in memory) or a bottleneck
>> >> in squid itself that causes issues?
>> >
>> > It's a bottleneck in the Squid code which makes accessing the n'th 4 KB
>> > chunk in memory take O(n) time.
>> >
>> > It's one of the things I'd like to fix after Squid-2.7 is released.
>> >
>> >
>> >
>> > Adrian
>> >
>> >
>> >
>>
>> --
>> View this message in context:
>> http://www.nabble.com/Mem-Cache-flush-tp14951540p15449954.html
>> Sent from the Squid - Users mailing list archive at Nabble.com.
>
> --
> - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid
> Support -
> - $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -
>
>
--
View this message in context: http://www.nabble.com/Mem-Cache-flush-tp14951540p15452542.html
Sent from the Squid - Users mailing list archive at Nabble.com.

Received on Wed Feb 13 2008 - 02:34:45 MST
This archive was generated by hypermail pre-2.1.9 : Sat Mar 01 2008 - 12:00:05 MST