Hello,
Please take a look at these errors:
2013/08/20 07:55:26 kid1| ctx: exit level 0
2013/08/20 07:55:26 kid1| Attempt to open socket for EUI retrieval failed: (24) Too many open files
2013/08/20 07:55:26 kid1| comm_open: socket failure: (24) Too many open files
2013/08/20 07:55:26 kid1| Reserved FD adjusted from 100 to 64542 due to failures
2013/08/20 07:55:26 kid1| WARNING! Your cache is running out of filedescriptors
2013/08/20 07:55:26 kid1| comm_open: socket failure: (24) Too many open files
ulimit -n is 65535 (I have configured it in limits.conf myself).
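For reference, the limits.conf entries look roughly like this ("proxy" is just a placeholder for whatever user Squid runs as here):

    # /etc/security/limits.conf
    proxy   soft    nofile  65535
    proxy   hard    nofile  65535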
When Squid starts, it reports no errors:
2013/08/20 13:38:11 kid1| Starting Squid Cache version 3.3.8 for x86_64-unknown-linux-gnu...
2013/08/20 13:38:11 kid1| Process ID 8087
2013/08/20 13:38:11 kid1| Process Roles: worker
2013/08/20 13:38:11 kid1| With 65535 file descriptors available
Running lsof when the problem occurs shows no more than 8000 open files.
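As a cross-check, the worker's descriptors can also be counted directly through /proc (8087 is the worker PID from the startup log above; adjust it for the current run):

    # descriptors currently held by the worker process
    ls /proc/8087/fd | wc -l

    # the limits the running process actually sees
    grep 'open files' /proc/8087/limits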
Why does it say "Too many open files"? Could SELinux be the cause of this issue?
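(If anyone wants to check the SELinux angle, recent AVC denials should show up with something like the following; the audit log path is the usual default and may differ here:)

    ausearch -m avc -ts recent
    # or, without the audit tools installed:
    grep -i 'avc.*denied' /var/log/audit/audit.log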
Thanks