Re: [squid-users] Reverse proxy always misses cache items (dynamic pages)

From: Amos Jeffries <squid3_at_treenet.co.nz>
Date: Thu, 28 Nov 2013 18:24:02 +1300

On 28/11/2013 4:14 p.m., juan_fla wrote:
> I have Squid 3.3.8 as reverse proxy for a website with MediaWiki, on a
> FreeBSD 6.2 server.
>
> All the requests are missed. (PHP files with parameter). At this point my
> only idea is that something is wrong with my refresh_pattern. I didn't set
> up a cache directory, it appears that Squid creates the cache on each
> individual directory, but I don't know if that could be a reason as well.

If you do not explicitly add cache_dir lines to your config, Squid
will cache to memory only, and that cache will be emptied on every
shutdown/restart of Squid.
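
For example, a persistent disk cache can be added with a line like
this (the path and the size/directory numbers here are only
illustrative, adjust them to your disk):

  cache_dir ufs /var/spool/squid 1024 16 256

That is 1024 MB of cache under /var/spool/squid, with 16 first-level
and 256 second-level subdirectories.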

> Squid.conf:
> ----------------------------
> http_access allow manager localhost
>
> http_port 3129 accel defaultsite=my_domain.org
> cache_peer 127.0.0.1 parent 3130 0 no-query originserver name=myAccel
> login=PASS forceddomain=my_domain.org
>
> acl our_sites dstdomain my_domain.org
> acl our_sites2 dstdomain localhost
> http_access allow our_sites
> http_access allow our_sites2
>
> http_access allow localhost
>
> refresh_pattern -i (/cgi-bin/|.php|\?) 0 20% 4320

This will cause an RFC violation. The default rule is very specifically
designed to tread that line: it prevents old 1990s-style CGI
installations from being cached while allowing modern frameworks like
PHP to use their provided cache-control headers.
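
For comparison, the stock defaults shipped in squid.conf look like
this; note the dynamic-content rule matches only "/cgi-bin/" and "?",
not ".php", so PHP responses carrying proper Cache-Control headers
remain cacheable:

  refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
  refresh_pattern . 0 20% 4320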

> refresh_pattern . 0 20% 4320
>
> cache_peer_access myAccel allow our_sites
> cache_peer_access myAccel allow our_sites2
>
> acl Safe_ports port 80 # http
> acl Safe_ports port 21 # ftp
> acl Safe_ports port 443 563 # https, snews
> acl Safe_ports port 70 # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535 # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> http_access deny !Safe_ports

What I see happening here is that unsafe ports (e.g. SMTP, for
relaying spam email) are blocked *unless* the attacker forges your
domain name on the faked request line, because the "http_access allow
our_sites" rules are tested first.
 Your peer server is being left vulnerable to a whole category of
request-smuggling attacks.
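
A safer ordering puts the port check before any allow rules and
finishes with a final deny. A sketch, reusing the ACL names from your
config:

  http_access deny !Safe_ports
  http_access allow manager localhost
  http_access allow localhost
  http_access allow our_sites
  http_access allow our_sites2
  http_access deny all

Because http_access rules are evaluated top-down and stop at the
first match, a request to an unsafe port is now rejected regardless
of what Host or domain the attacker forges.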

>
> ----------------------------
> LocalSettings.php includes
> ----------------------------
> $wgUseSquid = true;
> $wgSquidServers = array('my_ip_address');
> $wgSquidServersNoPurge = array('127.0.0.1');
>
> ----------------------------
> php.ini
> ----------------------------
> session.cache_limiter = nocache
> session.cache_expire = 180
> ----------------------------
> Access.log
> ----------------------------
> 1385607796.200 410 127.0.0.1 TCP_MISS/200 53326 GET
> http://localhost:3129/index.php? - FIRSTUP_PARENT/127.0.0.1 text/html

While testing, log the full URL. It's no use trying to figure out why
they MISS without the full URL to prove they are actually identical
URLs. If even one byte of the full URL is different, the two objects
will be stored in different cache locations.
 NP: you can't HIT on an object using http://localhost... if it was
cached using http://my_domain..., and the reverse.

The request and response headers will tell you the full story of what
Squid is permitted/forbidden to do with the response objects.
Try setting debug_options 11,2 and see what gets logged for the client
requests and server responses.
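
For example (keeping all other debug sections at the normal level 1
while raising section 11, the HTTP traffic section, to level 2):

  debug_options ALL,1 11,2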

Amos
Received on Thu Nov 28 2013 - 05:24:11 MST

This archive was generated by hypermail 2.2.0 : Thu Nov 28 2013 - 12:00:06 MST