
Re: how to enhance browsing quality for top ten sites on my squid??!!

On 2/11/2013 1:36 p.m., Dr.x wrote:
Alex Rousskov wrote:
On 11/01/2013 01:26 PM, Dr.x wrote:

from the cache manager we have the top ten sites.

my question is: how can I let Squid optimize those sites?

as an example, I mean how can I let Squid use cache_mem for caching them
instead of cache_dir?

You may experiment with the memory_cache_mode directive, but most likely
the default is what you want. The two caches (memory and disk) are not
exclusive of each other -- the same entry may be in both caches at the
same time. Squid will use the [faster] memory cache when it can.
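For reference, both caches are tuned in squid.conf. A minimal sketch of the directives Alex mentions, with illustrative values rather than recommendations:

  # Memory set aside for the in-memory object cache (not Squid's
  # total memory footprint).
  cache_mem 256 MB

  # Which responses may be kept in the memory cache:
  #   always  - any cachable response (the default)
  #   disk    - only objects already stored in the disk cache
  #   network - only objects freshly fetched from the network
  memory_cache_mode always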

If you verified that a popular object is usually returned from disk
while your memory_cache_mode and memory cache size restrictions allow
for that object to be cached and preserved in memory, then there is
probably a bug somewhere.
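One way to verify is to watch access.log: memory-cache hits are logged as TCP_MEM_HIT, while TCP_HIT generally means the object was read back from the disk cache. A made-up excerpt in the default log format:

  1383261000.123      2 192.0.2.10 TCP_MEM_HIT/200 15342 GET http://example.com/logo.png - HIER_NONE/- image/png
  1383261005.456     14 192.0.2.10 TCP_HIT/200 15342 GET http://example.com/logo.png - HIER_NONE/- image/png

If a popular object repeatedly shows TCP_HIT instead of TCP_MEM_HIT despite fitting the memory cache limits, that is the symptom Alex describes.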


HTH,

Alex.

hi Alex,
again, modifying cache_mem is just a suggestion from me; it is not mandatory
to be done. But I am asking: if I have top ten sites,

which methods can I use to enhance browsing of these sites and make caching
better?

The amount of memory cache available for Squid to use, and the size limits on the objects stored there, are the main ones. After that comes how broken the popular sites' HTTP protocol usage is; *careful* tuning of refresh_pattern can help there, *if* there is actually something wrong with the site's HTTP protocol usage. Just keep in mind that not all popular things are cacheable (Twitter or Facebook channel feeds, for example).
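As a concrete sketch of those knobs (the directives are real squid.conf options; the values and the example pattern are illustrative only, not recommendations):

  # Largest response size eligible for the in-memory cache.
  maximum_object_size_in_memory 512 KB

  # Example refresh_pattern for static files. Fields after the regex
  # are: min age (minutes), percent of object age, max age (minutes).
  refresh_pattern -i \.(gif|jpg|png|css|js)$ 1440 80% 10080

Note that refresh_pattern only influences heuristic freshness for responses lacking explicit expiry information; it cannot make genuinely uncacheable content cacheable without its override options, which can violate HTTP.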

FWIW; in my experience, as sites get more popular and larger they tend to learn that using HTTP properly and working with caching scales better than working against it. Possibly because they start to need a CDN, which relies on caching; but that is still a win-win for everyone. It tends to be the vast set of smaller amateur sites that are either ignorant of proper HTTP usage or misunderstand it and get things wrong.

that's it!



is it better to lengthen the object timeout for specific sites? """ just a
suggestion from me, and it may be wrong """

Well, that depends on the site. Possibly. "Fail fast" is a good motto for improving user experience: the faster failures happen and are detected, the faster recovery can produce a better result. However, some destinations will simply be at the other end of very long or slow network connections. For those, a longer timeout is more appropriate than for a nearby high-speed destination.
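The timeouts involved are global squid.conf directives, not per-site ones; a minimal sketch (the values shown are the usual defaults):

  # How long to wait for a TCP connection to a server or peer.
  connect_timeout 1 minute

  # How long to wait between reads on the server-side connection.
  read_timeout 15 minutes

  # Overall limit on finding a usable forwarding path for a request.
  forward_timeout 4 minutes

Since these apply to all traffic, lowering them to "fail fast" for nearby sites will also affect the slow, distant destinations mentioned above.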

Amos



