Hi all,

I have set up AUFS cache_dirs alongside Rock cache_dirs. I am now seeing the log message quoted below in Rock's cache.log. What does it mean?
Here is what I have done: 5 hard disks hold AUFS cache_dirs, and 2 hard disks hold Rock cache_dirs. My configuration is as follows:
workers 8
#dns_v4_first on
cpu_affinity_map process_numbers=1,2,3,4,5,6,7,8,9 cores=2,4,6,8,10,12,14,16,18
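(As an aside on the mapping above: with `workers 8`, kid processes 1 through 8 are the workers, and further kids such as diskers and the coordinator follow. A sketch of a mapping that pins just the eight workers, one per core, might look like the following; the core numbers are illustrative and should be adjusted to the actual CPU topology:)

```
# sketch only -- core numbers are assumptions, not a recommendation
workers 8
cpu_affinity_map process_numbers=1,2,3,4,5,6,7,8 cores=2,4,6,8,10,12,14,16
```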
#####################################################################
if ${process_number} = 4
include /etc/squid/aufs1.conf
endif
###################################################################
if ${process_number} = 2
include /etc/squid/aufs2.conf
endif
################################################################
if ${process_number} = 6
include /etc/squid/aufs3.conf
endif
#################################################################
if ${process_number} = 7
include /etc/squid/aufs4.conf
endif
#################################################################
if ${process_number} = 8
include /etc/squid/aufs5.conf
endif
===========================================================================
Each aufsN.conf contains an AUFS cache_dir directive for its disk.
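For reference, a minimal sketch of what one of the included files might contain; the path and size values here are placeholders, not my actual settings:

```
# /etc/squid/aufs1.conf -- hypothetical example
# cache_dir aufs <directory> <size-MB> <L1-dirs> <L2-dirs>
cache_dir aufs /cache1 100000 64 256
```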
But after all of that, I am still seeing low bandwidth savings. Is the following error harmful?
Worker I/O push queue overflow: ipcIo7.30506r9
regards
-----
Dr.x
--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/Worker-I-O-push-queue-overflow-ipcIo7-30506r9-tp4664857.html
Sent from the Squid - Users mailing list archive at Nabble.com.
Received on Sun Feb 16 2014 - 15:05:54 MST
This archive was generated by hypermail 2.2.0 : Fri Feb 21 2014 - 12:00:06 MST