Hi Amos,
Thanks for your help in understanding my request. I have attempted to create
a rock store but was unsuccessful. There doesn't seem to be very good
guidance on the proper step-by-step process of creating a rock store, and I
ran into crashes the last time I attempted it. Also, I am using an x86
platform (32-bit) with multiple cores, and when I attempted to use SMP mode
with multiple workers my intercept mode instantly stopped functioning. I
couldn't figure out what was wrong, so I'd love to get better guidance on
this as well.
Best regards,
The Geek Guy
Lawrence Pingree
http://www.lawrencepingree.com/resume/
Author of "The Manager's Guide to Becoming Great"
http://www.Management-Book.com
-----Original Message-----
From: Amos Jeffries [mailto:squid3_at_treenet.co.nz]
Sent: Tuesday, April 29, 2014 1:20 AM
To: squid-users_at_squid-cache.org
Subject: Re: [squid-users] feature requests
On 29/04/2014 4:17 p.m., Lawrence Pingree wrote:
>
> I would like to request two features that could potentially help with
> performance.
>
See item #1 "Wait" ...
<http://wiki.squid-cache.org/SquidFaq/AboutSquid#How_to_add_a_new_Squid_feature.2C_enhance.2C_of_fix_something.3F>
Some comments to think about before you make the formal feature request bug.
Don't let these hold you back, but they are the bigger details that will
need to be overcome for these features to be accepted and useful.
I will also suggest you test out the 3.HEAD Squid code to see what we have
done recently with collapsed_forwarding, SMP support and large rock caches.
Perhaps the startup issues that make you want these are now resolved.
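For reference, a minimal squid.conf sketch combining those features might
look like the following. The path and sizes are placeholders only; rock
options have changed between releases, so check the release notes of the
version you test:

```
# Run 2 SMP worker processes (they share the rock cache_dir)
workers 2

# Merge concurrent requests for the same URL into one server fetch
collapsed_forwarding on

# A shared rock cache_dir: 1024 MB, small objects only
cache_dir rock /var/cache/squid 1024 max-size=31000
```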
> 1. I would like to specify a max age for memory-stored hot objects
> different than those specified in the generic cache refresh patterns.
refresh_patterns are not generic. They are as targeted as the regex pattern
you write.
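For example, a refresh_pattern can be made to apply only to the objects its
regex matches. The patterns and numbers below are purely illustrative:

```
# Applies only to common image types: min 1h, 50% of age, max 1 day
refresh_pattern -i \.(gif|png|jpe?g)$ 60 50% 1440

# Fallback rule for everything else
refresh_pattern . 0 20% 4320
```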
The only difference between memory, disk or network sources for a cache is
access latency. Objects are "promoted" from disk to memory when used, and
pushed from memory to disk when more memory space is needed.
I suspect this feature will result in disk objects' maximum age stabilizing
at the same value the memory cache is set to.
 - With a memory age limit higher than the disk limit, objects needing to be
pushed to disk get erased because they are too old for disk.
 - With a memory age limit lower than the disk limit, objects promoted from
disk get erased or revalidated to come within the memory limit (erasing the
obsoleted disk copy).
So either way, anything not meeting the memory limit is erased. Disk will
only be used for objects younger than the memory limit which need to
overspill into the slower storage area, where they can age a bit before next
use ... which is effectively how it works today.
Additionally, there is the fact that objects *are* cached past their max-age
values. All that happens in HTTP/1.1 when an old object is requested is a
revalidation check over the network (used to be a re-fetch in HTTP/1.0). The
revalidation MAY supply a whole new object, or just a few new headers.
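As a sketch of what such a revalidation looks like on the wire (headers
abbreviated, names illustrative):

```
GET /logo.png HTTP/1.1
Host: example.com
If-None-Match: "abc123"

HTTP/1.1 304 Not Modified
ETag: "abc123"
Cache-Control: max-age=3600
```

A 304 supplies only fresh headers for the stored copy; a 200 response would
carry a whole new object instead.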
 - A memory age limit higher than the disk limit causes disk objects
(already slow) to incur additional network lag for revalidation which is not
applied to the in-memory objects.
 - A memory age limit lower than the disk limit places the extra network lag
on memory objects instead.
... what benefit is gained from adding latency to one of the storage areas
which is not applicable to the same object when it is stored to the other
area?
The overarching limit on all this is the *size* of the storage areas, not
the object age. If you are in the habit of setting very large max-age values
on refresh_pattern to increase caching, take a look at your storage LRU/LFU
age statistics sometime. You might be in for a bit of a surprise.
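One way to check (assuming the cache manager interface is enabled and the
squidclient tool is installed) is:

```
squidclient mgr:info | grep -i 'expiration'
```

In many versions this prints a "Storage LRU Expiration Age" line showing how
long objects actually survive before eviction; on a busy cache it is often
far shorter than a large configured max-age.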
>
> 2. I would like to pre-load hot disk objects during startup so that
> squid is automatically re-launched with the memory cache populated.
> I'd limit this to the maximum memory cache size amount.
>
This one is not as helpful as it seems when done by a cache. Loading on
demand solves several performance problems which pre-loading encounters in
full.
1) Loading the objects takes time, resulting in a slower time to first
request.
Loading on-demand we can guarantee that the first client starts receiving
its response as fast as possible. There is no waiting for GB of other
objects to fully load first, or even the end of the current object to
complete loading.
2) Loading based on previous experience is at best an educated guess.
That can still load the wrong things, wasting the time spent.
Loading on-demand guarantees that only the currently hot objects are loaded,
regardless of what was hot a few seconds, minutes or days ago when the proxy
shut down. This frees up CPU cycles and disk wait time for servicing more
relevant requests.
3) A large portion of traffic in HTTP/1.1 needs to be validated over the
network, using the new client's request header details, before use.
This comes back to (1). As soon as the headers are loaded the network
revalidation can begin and happen while other traffic is loaded from disk.
For pre-loaded but still cold cache objects the revalidation still has to be
done, and delays the clients further.
The individual tasks of loading, revalidating, and delivering will take the
same amount of CPU cycles/time regardless of when you do them.
The nice thing about hot objects is that they are requested more frequently
than other objects. So the probability of the very popular object being the
first one demanded is extremely high. Getting it into memory and available
for delivery without delay allows service for a larger portion of the
traffic than any other loaded object would.
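As a rough illustration of why (assuming a Zipf-like popularity
distribution, which web traffic often approximates; the exponent and object
counts below are made-up numbers, not measurements):

```python
# Sketch: fraction of requests received by the top n_top objects out of
# n_total, under an assumed Zipf popularity distribution with exponent s.
def zipf_top_fraction(n_top, n_total, s=1.0):
    # Object at popularity rank r receives weight proportional to 1/r^s.
    weights = [1.0 / (rank ** s) for rank in range(1, n_total + 1)]
    return sum(weights[:n_top]) / sum(weights)

# With these assumed numbers, the hottest 1% of 10,000 objects receive
# roughly half of all requests.
print(zipf_top_fraction(100, 10000))
```

So getting that small hot set into memory first covers a disproportionate
share of the incoming demand.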
[ keep in mind this next bit is just the maths and general traffic shapes,
the actual graph scales and values will depend on your particular clients ]
You can see the interaction of hot objects and on-demand loading in the
traffic speed and client requests graph spikes on startup.
With on-demand loading they are both exponential curves starting from some
initial value [traffic speed low, requests peaking] and returning quickly to
your normal operating values. The total length/width of that curve is the
time taken to fill the memory cache with the currently hot objects.
Pre-loading the entire cache makes them start with a linear curve growing
further *away* from normal to a maxima/minima value at the end of the
loading action, followed by a similar exponential curve back to normal over
the time taken to revalidate the hot objects.
The exponential return curve for pre-loading is similar to, and possibly
shorter than, the return curve for on-demand loading, but the total startup
time spent away from normal values is very often longer due to the linear
growth during pre-loading [reasons being problems (1) and (2) above].
FYI: The main Squid developers have concentrated efforts in other areas,
such as: perfecting the COSS caching model into the one now called "rock" to
load multiple objects in single fast disk loads[a], a shared memory model
with multiple processes dedicated to the tasks in parallel, removing disk
access in transactions that don't need it, and improved HTTP protocol
handling. Interest, sponsorship and patches for these projects are very
welcome.
[a] Ironically, the biggest unresolved issue with rock today is that it
holds up startup doing a pre-loading scan of its database slots, very
similar to your feature #2.
HTH
Amos
Received on Thu May 01 2014 - 22:02:01 MDT