Re: Cache strategy

i'm new to the list and was following your caching discussion with some
interest, as i was just implementing something similar for an
open-source interactive photo-development software and was thinking
about using GEGL instead.

for this application, there are two main tasks which have to be fast:
zooming/panning (especially quick 1:1 views) and history stack popping
(to compare if your latest changes really improved the image).

as i wanted to develop images and not a powerful library, i used a very
simple approach to caching:

- there is a fixed-size cache with lines large enough to hold all
relevant pixels for display (e.g. 3*float*512*512), with about 5 such
cache lines

- the first operation scales and crops the input buffer to fit this size

- each operation allocates its output buffer in this cache, tagged with
a hash that depends on all parameters of all operations from original
down to here.

- the graph of operations is simply traversed as needed and the cache
lines are replaced by a least-recently-used scheme.
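the scheme above can be sketched roughly like this (a minimal python
illustration, not actual code from the application or from GEGL; the op
names and the string standing in for a pixel buffer are made up):

```python
# sketch of a fixed-size LRU cache where each op's output is tagged with
# a hash over the parameters of all operations from the source down to it
import hashlib
from collections import OrderedDict

NUM_LINES = 5  # fixed number of cache lines, as in the post

class LRUCache:
    def __init__(self, capacity=NUM_LINES):
        self.capacity = capacity
        self.lines = OrderedDict()  # hash -> output buffer

    def get(self, key):
        if key in self.lines:
            self.lines.move_to_end(key)  # mark as most recently used
            return self.lines[key]
        return None

    def put(self, key, buf):
        self.lines[key] = buf
        self.lines.move_to_end(key)
        if len(self.lines) > self.capacity:
            self.lines.popitem(last=False)  # evict least recently used

def chain_hash(ops):
    """hash depending on all parameters of all ops from the source down"""
    h = hashlib.sha1()
    for name, params in ops:
        h.update(name.encode())
        h.update(repr(sorted(params.items())).encode())
    return h.hexdigest()

def evaluate(ops, cache):
    """traverse the chain; reuse a cached output where the hash matches"""
    buf = None
    for i in range(len(ops)):
        key = chain_hash(ops[: i + 1])
        cached = cache.get(key)
        if cached is not None:
            buf = cached
            continue
        name, params = ops[i]
        buf = f"output of {name}{params} over [{buf}]"  # stand-in for pixels
        cache.put(key, buf)
    return buf

cache = LRUCache()
pipeline = [("scale_crop", {"zoom": 1.0}),
            ("exposure", {"ev": 0.5}),
            ("curve", {"gamma": 2.2})]
evaluate(pipeline, cache)

# changing only the last op re-reads the previous op's cached output,
# which bumps that line's recency instead of recomputing it
pipeline[-1] = ("curve", {"gamma": 1.8})
evaluate(pipeline, cache)
```

note that the hash covers the whole upstream parameter chain, so any
edit anywhere above an op automatically invalidates its line without an
explicit invalidation pass.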

obviously, this does not work terribly well with zooming and panning.
to address this, there is a second pixel pipeline with a downscaled
buffer, which is always evaluated as a whole, not only the visible parts
(this should really be replaced by something as cool as the tile buffer
in GeglBuffer).

Martin Nordholts wrote:
> For example, when you 
> interactively apply geometric transforms on a layer, you will want to 
> have caching right before the compositor op that composes the 
> transformed layer onto the projection. You will not want to have caches 
> on the compositor ops all the time though because it would cause a giant 
> memory overhead.

this is handled well because changing the last operation in the graph
requires the output of the previous one, which refreshes the "most
recently used" status of that line and prevents the important previous
cache line from being swapped out.

if this swaps out other important cache lines along the way, a mechanism
to detect such a situation (parameters of the same op changing very
often and quickly) can be implemented; something like that is required
for the history stack anyway, as it would otherwise overflow too.

so i was wondering whether such a global (in the parent gegl node?),
straightforward LRU cache with an appropriate hash could work for the
tiles in GEGL as well (one tile = one cache line, or one mip-map level
of a tile = one cache line)? did you try it already? if so, how did it go?
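applied to tiles, the idea might look like this (again just an
illustrative python sketch, not GEGL API; the cache key simply combines
the operation-chain hash with tile coordinates and mip-map level):

```python
# sketch: one tile (or one mip level of a tile) per cache line, evicted LRU
import hashlib
from collections import OrderedDict

class TileCache:
    def __init__(self, capacity=256):
        self.capacity = capacity
        self.lines = OrderedDict()  # (chain hash, tx, ty, mip) -> tile

    def lookup(self, chain_hash, tx, ty, mip):
        k = (chain_hash, tx, ty, mip)
        if k in self.lines:
            self.lines.move_to_end(k)  # refresh recency
            return self.lines[k]
        return None

    def store(self, chain_hash, tx, ty, mip, tile):
        k = (chain_hash, tx, ty, mip)
        self.lines[k] = tile
        self.lines.move_to_end(k)
        if len(self.lines) > self.capacity:
            self.lines.popitem(last=False)  # evict least recently used

def op_chain_hash(ops):
    """hash over all parameters of all upstream ops"""
    h = hashlib.sha1()
    for name, params in ops:
        h.update(f"{name}:{sorted(params.items())}".encode())
    return h.hexdigest()

tc = TileCache(capacity=4)
h = op_chain_hash([("blur", {"radius": 2.0})])
for tx in range(3):
    tc.store(h, tx, 0, 0, f"tile {tx}")
tc.lookup(h, 0, 0, 0)           # touch tile 0 so it stays hot
tc.store(h, 3, 0, 0, "tile 3")
tc.store(h, 4, 0, 0, "tile 4")  # evicts tile 1, the least recently used
```

the nice property is that visible tiles and coarse mip levels stay hot
under panning/zooming, while off-screen tiles fall out on their own.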

> Another thing worth mentioning is that caches on every node doesn't 
> scale well to concurrent evaluation of the graph since the evaluators 
> would need to all the time synchronize usage of the caches, preventing 
> nice scaling of performance as you use more CPU cores/CPUs.

this is definitely true, although i hope there will always be enough
work on disjoint tiles or just parallel read accesses on the same tile
(to have two threads writing the same tile doesn't really make sense
anyway).

-jo
Gegl-developer mailing list
