There are two forms of parallelization that we can tackle above.
This is quite correct.
I could be wrong, but my intuition tells me that implementing the
latter parallelization is difficult to tackle in my GSoC's timeframe.
Let us start with the former, then, and we will see how far we get.
The reason why a certain level of communication is required to
implement both projects is that we need to agree on where tiles and
rectangular pixel data are stored/cached (GPU memory vs. main memory).
I am not sure about this. Your code would access the memory map and find out where the tile is residing. Then, correspondingly, you would apply the operation on the CPU or the GPU.
We need to discuss the structure of the memory map. I was also thinking (not quite sure) that since we are implementing two-level parallelization, why not create the memory map at the lowest level? I mean, if I put a rectangle into the GPU, I would record that all of its tiles are in the GPU, instead of recording the rectangle itself. The entries can be grouped, but I feel that in the unified memory map the individual tiles must be recorded. It would be better if someone looks into this logic.
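To make the idea concrete, here is a minimal sketch of what a tile-granular memory map could look like. This is purely illustrative: the names TileLocation, TileMapEntry and tile_map_mark_rect are my assumptions, not existing GEGL API. The point is that uploading a rectangle to the GPU is recorded per tile, so later lookups stay uniform.

```c
#include <stddef.h>

/* Hypothetical sketch, not GEGL API: each tile records where its
 * pixel data currently lives, so CPU and GPU code paths can both
 * consult one unified map. */
typedef enum {
  TILE_IN_CPU,  /* tile data lives in main memory */
  TILE_IN_GPU   /* tile data lives in GPU memory  */
} TileLocation;

typedef struct {
  int          x, y;      /* tile coordinates within the buffer */
  TileLocation location;  /* where the pixel data currently is  */
} TileMapEntry;

/* Even though a whole rectangle is uploaded at once, we mark every
 * covered tile individually: the map stays tile-granular. */
static void
tile_map_mark_rect (TileMapEntry *map, int n_tiles,
                    int x0, int y0, int x1, int y1,
                    TileLocation where)
{
  for (int i = 0; i < n_tiles; i++)
    if (map[i].x >= x0 && map[i].x < x1 &&
        map[i].y >= y0 && map[i].y < y1)
      map[i].location = where;
}
```

An operation would then look up map[i].location for the tiles it touches and dispatch to the CPU or GPU path accordingly.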
A node whose associated operation publishes
GPU-support will create GPU-based buffers only, CPU/RAM-based buffers
I would say let them flow freely; you can track them through the memory map.
Cached GeglTiles in my current plans are always stored in GPU memory
for GPU-based buffers
Are you sure?
How and when do we store tiles in GPU memory?
If GPU memory is free, we try to use it, but we will have to find a function to optimize the storage and data transfer.
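A very naive version of that policy could look like the sketch below. Everything here is an assumption for discussion: choose_tile_storage, the TileStorage enum and the gpu_pool_free counter are made-up names, and a real optimizer would also weigh transfer cost and access patterns, not just free space.

```c
#include <stddef.h>

/* Hypothetical sketch, not GEGL API: prefer GPU memory while the
 * pool has room, otherwise fall back to main memory. */
typedef enum { STORE_GPU, STORE_CPU } TileStorage;

static size_t gpu_pool_free = 4 * 1024 * 1024;  /* toy 4 MiB GPU pool */

static TileStorage
choose_tile_storage (size_t tile_bytes)
{
  if (tile_bytes <= gpu_pool_free)
    {
      gpu_pool_free -= tile_bytes;  /* reserve GPU memory */
      return STORE_GPU;
    }

  /* GPU pool exhausted: keep the tile in main memory for now. */
  return STORE_CPU;
}
```

Whatever function we settle on, the decision it makes is exactly what gets written into the memory map discussed above.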
I don't know about the locking issue and would prefer if someone explained it better.
Btech, Material Science and Metallurgy,
Gegl-developer mailing list