see below,
Andres Kroonmaa wrote:
>On 21 Nov 2001, at 7:32, Henrik Nordstrom <hno@squid-cache.org> wrote:
>
>>Jon Kay wrote:
>>
>>>Unlike cache digests, hints have a notion of version through
>>>last-modified time, and always go for the most recent known version.
>>>They also have some other handy metadata.
>>>
>>How much metadata is involved?
>> * Data sent over the networking
>> * Disk space
>> * Memory requirement
>>
>
> Same worries here. It seems that every cache in the cloud keeps a copy
> of the metadata of every other cache. This metadata is already the main
> memory consumer for a single cache. What impact does pushcache add to
> that memory usage?
> It seems to me that it would be more memory-efficient to use a single
> central "hint-cache" that does nothing but centralise hint data and
> provide it to every box in the cloud via some ICP-like protocol.
> Have you considered such an approach?
>
> Otherwise it's interesting.
>
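A back-of-envelope sketch of the memory scaling behind that worry (all figures are my own illustrative assumptions, not measured numbers from pushcache or Squid): if every cache in a cloud of N peers keeps a hint/digest table for each of the other N-1 caches, per-cloud metadata memory grows quadratically, whereas a single central hint-cache that holds each cache's hints once grows linearly.

```python
# Hypothetical parameters, purely for illustration:
BYTES_PER_OBJECT_HINT = 16      # assumed metadata per object (URL hash, mtime, etc.)
OBJECTS_PER_CACHE = 1_000_000   # assumed objects held per cache

def full_mesh_bytes(n_caches):
    """Every cache stores hint tables for every other cache: O(N^2)."""
    return n_caches * (n_caches - 1) * OBJECTS_PER_CACHE * BYTES_PER_OBJECT_HINT

def central_bytes(n_caches):
    """One central hint-cache stores every cache's hints once: O(N)."""
    return n_caches * OBJECTS_PER_CACHE * BYTES_PER_OBJECT_HINT

for n in (4, 16, 64):
    print(n, full_mesh_bytes(n) // 2**20, central_bytes(n) // 2**20)  # MiB
```

The gap widens quickly with cloud size, which is the intuition behind centralising the hint data rather than replicating it to every box.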
Firstly, a thought from left field: which would be easier, updating
pushcache to HEAD, or modifying the design assumptions of cache digests?

Secondly, on Andres' thoughts regarding a 'central hint-cache': this is
exactly what 'central-squid-server' (http://www.senet.com.au/css) was
about. One needs to be clear about the design goals, though: minimising
traffic, or minimising latency for cache users. I set about 'proving'
that this central squid server would actually deliver traffic savings
for a large, loose confederation of users such as the Australian
universities network (www.aarnet.edu.au). CSS can operate in a
hierarchical model where a single CSS is local to a cluster of caches
but also peers with a hierarchy of CSS servers. As I recall, ICP was
routed around this hierarchy until an actual destination cache was
determined and returned.

Anyway, I recall that I was never able to convince myself of the
bandwidth savings. The only thing that really made a difference was
differential tariffs, i.e. overseas bandwidth being more expensive than
national bandwidth, so that not all bytes were equal. This is one of the
reasons I eventually let things slip, although the question perhaps
deserves some more in-depth modelling. One of the issues was the
diminishing returns I expected from increasing the cluster size. A more
fact-based revisit might find that the locality-of-reference gains from
higher request rates to the 'distributed caching cluster' bring more
hit-rate improvement than I expected.
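To make the differential-tariff point concrete, here is a small sketch (the tariff figures are invented for illustration; nothing here comes from AARNet's actual pricing): with tariffed links, the money a cache hit saves depends on where the bytes would have come from, so two hit mixes with identical byte volumes can have very different value.

```python
# Hypothetical tariffs, dollars per GiB, chosen only to illustrate the asymmetry:
TARIFF = {"overseas": 18.0, "national": 2.0}

def monetary_savings(hit_bytes_by_origin):
    """Dollar value of bytes served from cache instead of fetched from origin."""
    gib = 1024 ** 3
    return sum(TARIFF[origin] * nbytes / gib
               for origin, nbytes in hit_bytes_by_origin.items())

# Same 100 GiB of hits, different origin mix:
mostly_national = {"national": 90 * 1024**3, "overseas": 10 * 1024**3}
mostly_overseas = {"national": 10 * 1024**3, "overseas": 90 * 1024**3}
print(monetary_savings(mostly_national))  # 90*2 + 10*18 = 360.0
print(monetary_savings(mostly_overseas))  # 10*2 + 90*18 = 1640.0
```

Under this kind of pricing, a modest hit rate on overseas content can be worth far more than a high hit rate on national content, which is why raw byte-volume savings alone understate the case.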
Roger.
>
>------------------------------------
> Andres Kroonmaa <andre@online.ee>
> CTO, Microlink Online
> Tel: 6501 731, Fax: 6501 725
> Pärnu mnt. 158, Tallinn,
> 11317 Estonia
>
>
Received on Wed Nov 21 2001 - 03:17:33 MST