RE: [PATCH 2/2 v4] drm/exynos: added userptr feature.

Hi Jerome,

> -----Original Message-----
> From: Jerome Glisse [mailto:j.glisse@xxxxxxxxx]
> Sent: Tuesday, May 15, 2012 4:27 AM
> To: Inki Dae
> Cc: airlied@xxxxxxxx; dri-devel@xxxxxxxxxxxxxxxxxxxxx; minchan@xxxxxxxxxx;
> kosaki.motohiro@xxxxxxxxx; kyungmin.park@xxxxxxxxxxx;
> sw0312.kim@xxxxxxxxxxx; jy0922.shim@xxxxxxxxxxx
> Subject: Re: [PATCH 2/2 v4] drm/exynos: added userptr feature.
> 
> On Mon, May 14, 2012 at 2:17 AM, Inki Dae <inki.dae@xxxxxxxxxxx> wrote:
> > this feature is used to import a user space region, allocated by malloc()
> > or mmapped, into a gem. for this, we use get_user_pages() to get all the
> > pages backing the VMAs within the user address space. However, we should
> > pay attention to the following when using this userptr feature.
> >
> > The migration issue.
> > - Pages reserved by CMA for a device using DMA can be used by the
> > kernel, and if the device driver then wants those pages while the
> > kernel is using them, their contents are copied into other, newly
> > allocated pages to migrate them; only then can the device driver
> > use the pages for itself. Thus, once migrated, the pages being
> > accessed by DMA could be replaced by others, so the DMA may end up
> > accessing pages it doesn't want.
> >
> > The COW issue.
> > - While a device's DMA is using the pages backing the VMAs, if the
> > current process forks, the pages being accessed by the DMA would
> > be copied into the child's address space (Copy On Write). These
> > pages may then lose coherency with the parent's if the child
> > writes to them, so we need to flag VM_DONTCOPY to prevent the
> > pages from being COWed.
> 
> Note that this is a massive change in the behavior of anonymous
> mappings; it effectively changes the Linux API from the application's
> point of view on your platform. Any application that has memory
> mapped by your ioctl will have different fork behavior than other
> applications. I think this should be stressed; it's one of the things
> I am really uncomfortable with. I would rather not have the dont-copy
> flag: let the pages be COWed and have the child not work with the
> 3d/2d/drm driver. That would mean that your driver (an OpenGL
> implementation, for instance) would have to detect fork and work
> around it, which is what the nvidia closed-source driver does.
> 

First of all, thank you for your comments.

Right, the VM_DONTCOPY flag changes the behavior user space originally
expects. Do you think this approach is safe but simply not generic? In any
case, our issue was that the pages backing the VMAs get copied into the
child's address space (COW), so we used the VM_DONTCOPY flag to prevent them
from being COWed.

On this point, I have three questions:

1. Without the VM_DONTCOPY flag, you point out that an application using our
userptr feature has the COW issue: the parent's pages being accessed by a
device's DMA would be copied into the child's address space once the child
writes to them, after which the DMA could access pages the user no longer
intends. I'm not sure, but I think such behavior is generic and has no
problem, and it is the user's responsibility to decide whether to fork. Do
you think this COW behavior could create an issue I'm not aware of, such
that we would have to prevent it somehow?

2. So we added the VM_DONTCOPY flag to prevent the pages from being COWed,
but this changes the behavior user space expects. Do you think this is not a
generic approach, or that it could create other issues as well?

3. Lastly, what is the difference between setting VM_DONTCOPY and detecting
fork? I mean, the device driver would have to do whatever is needed after
detecting the fork, and I suspect that work would also change the behavior
user space expects.

Please let me know if I'm missing something.

Thanks,
Inki Dae

> Anyway this is a big red flag for me, as apps on your platform using
> your drm driver might start relying on completely non-conformant
> fork behavior.
> 
> > But the use of get_user_pages is safe from such a migration issue
> > because the pages from get_user_pages can be neither migrated nor
> > swapped out. However, the issue below could still occur.
> >
> > The deterioration of system performance by a malicious process.
> > - Any malicious process can request all the pages of the entire
> > system memory through this userptr ioctl, which in turn would block
> > all other processes and degrade system performance because the
> > pages could not be swapped out.
> >
> > So this feature limits the user-requested userptr size to a
> > pre-defined value, and this value can be changed only by the root
> > user.
> >
> > Signed-off-by: Inki Dae <inki.dae@xxxxxxxxxxx>
> > Signed-off-by: Kyungmin Park <kyungmin.park@xxxxxxxxxxx>
> > ---
> >  drivers/gpu/drm/exynos/exynos_drm_drv.c |    2 +
> >  drivers/gpu/drm/exynos/exynos_drm_gem.c |  391 +++++++++++++++++++++++++++++++
> >  drivers/gpu/drm/exynos/exynos_drm_gem.h |   17 ++-
> >  include/drm/exynos_drm.h                |   26 ++-
> >  4 files changed, 433 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/exynos/exynos_drm_drv.c b/drivers/gpu/drm/exynos/exynos_drm_drv.c
> > index 1e68ec2..e8ae3f1 100644
> > --- a/drivers/gpu/drm/exynos/exynos_drm_drv.c
> > +++ b/drivers/gpu/drm/exynos/exynos_drm_drv.c
> > @@ -220,6 +220,8 @@ static struct drm_ioctl_desc exynos_ioctls[] = {
> >        DRM_IOCTL_DEF_DRV(EXYNOS_GEM_MAP_OFFSET,
> >                        exynos_drm_gem_map_offset_ioctl, DRM_UNLOCKED |
> >                        DRM_AUTH),
> > +       DRM_IOCTL_DEF_DRV(EXYNOS_GEM_USERPTR,
> > +                       exynos_drm_gem_userptr_ioctl, DRM_UNLOCKED),
> >        DRM_IOCTL_DEF_DRV(EXYNOS_GEM_MMAP,
> >                        exynos_drm_gem_mmap_ioctl, DRM_UNLOCKED | DRM_AUTH),
> >        DRM_IOCTL_DEF_DRV(EXYNOS_GEM_GET,
> > diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c
> > index e6abb66..3c8a5f3 100644
> > --- a/drivers/gpu/drm/exynos/exynos_drm_gem.c
> > +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c
> > @@ -68,6 +68,80 @@ static int check_gem_flags(unsigned int flags)
> >        return 0;
> >  }
> >
> > +static struct vm_area_struct *get_vma(struct vm_area_struct *vma)
> > +{
> > +       struct vm_area_struct *vma_copy;
> > +
> > +       vma_copy = kmalloc(sizeof(*vma_copy), GFP_KERNEL);
> > +       if (!vma_copy)
> > +               return NULL;
> > +
> > +       if (vma->vm_ops && vma->vm_ops->open)
> > +               vma->vm_ops->open(vma);
> > +
> > +       if (vma->vm_file)
> > +               get_file(vma->vm_file);
> > +
> > +       memcpy(vma_copy, vma, sizeof(*vma));
> > +
> > +       vma_copy->vm_mm = NULL;
> > +       vma_copy->vm_next = NULL;
> > +       vma_copy->vm_prev = NULL;
> > +
> > +       return vma_copy;
> > +}
> > +
> > +
> > +static void put_vma(struct vm_area_struct *vma)
> > +{
> > +       if (!vma)
> > +               return;
> > +
> > +       if (vma->vm_ops && vma->vm_ops->close)
> > +               vma->vm_ops->close(vma);
> > +
> > +       if (vma->vm_file)
> > +               fput(vma->vm_file);
> > +
> > +       kfree(vma);
> > +}
> > +
> > +/*
> > + * cow_userptr_vma - flag VM_DONTCOPY on VMAs in the user address space.
> > + *
> > + * this function flags VM_DONTCOPY on VMAs in the user address space to
> > + * prevent the pages backing them from being COWed.
> > + */
> > +static int cow_userptr_vma(struct exynos_drm_gem_buf *buf, unsigned int no_cow)
> > +{
> > +       struct vm_area_struct *vma;
> > +       unsigned long start, end;
> > +
> > +       start = buf->userptr;
> > +       end = buf->userptr + buf->size - 1;
> > +
> > +       down_write(&current->mm->mmap_sem);
> > +
> > +       do {
> > +               vma = find_vma(current->mm, start);
> > +               if (!vma) {
> > +                       up_write(&current->mm->mmap_sem);
> > +                       return -EFAULT;
> > +               }
> > +
> > +               if (no_cow)
> > +                       vma->vm_flags |= VM_DONTCOPY;
> > +               else
> > +                       vma->vm_flags &= ~VM_DONTCOPY;
> > +
> > +               start = vma->vm_end + 1;
> > +       } while (vma->vm_end < end);
> > +
> > +       up_write(&current->mm->mmap_sem);
> > +
> > +       return 0;
> > +}
> > +
> >  static void update_vm_cache_attr(struct exynos_drm_gem_obj *obj,
> >                                        struct vm_area_struct *vma)
> >  {
> > @@ -256,6 +330,50 @@ static void exynos_drm_gem_put_pages(struct drm_gem_object *obj)
> >        /* add some codes for UNCACHED type here. TODO */
> >  }
> >
> > +static void exynos_drm_put_userptr(struct drm_gem_object *obj)
> > +{
> > +       struct drm_device *dev = obj->dev;
> > +       struct exynos_drm_private *priv = dev->dev_private;
> > +       struct exynos_drm_gem_obj *exynos_gem_obj;
> > +       struct exynos_drm_gem_buf *buf;
> > +       struct vm_area_struct *vma;
> > +       int npages;
> > +
> > +       exynos_gem_obj = to_exynos_gem_obj(obj);
> > +       buf = exynos_gem_obj->buffer;
> > +       vma = exynos_gem_obj->vma;
> > +
> > +       if (vma && (vma->vm_flags & VM_PFNMAP) && (vma->vm_pgoff)) {
> > +               put_vma(exynos_gem_obj->vma);
> > +               goto out;
> > +       }
> > +
> > +       npages = buf->size >> PAGE_SHIFT;
> > +
> > +       if (exynos_gem_obj->flags & EXYNOS_BO_USERPTR && !buf->pfnmap)
> > +               cow_userptr_vma(buf, 0);
> > +
> > +       npages--;
> > +       while (npages >= 0) {
> > +               if (buf->write)
> > +                       set_page_dirty_lock(buf->pages[npages]);
> > +
> > +               put_page(buf->pages[npages]);
> > +               npages--;
> > +       }
> > +
> > +out:
> > +       mutex_lock(&dev->struct_mutex);
> > +       priv->userptr_limit += buf->size;
> > +       mutex_unlock(&dev->struct_mutex);
> > +
> > +       kfree(buf->pages);
> > +       buf->pages = NULL;
> > +
> > +       kfree(buf->sgt);
> > +       buf->sgt = NULL;
> > +}
> > +
> >  static int exynos_drm_gem_handle_create(struct drm_gem_object *obj,
> >                                        struct drm_file *file_priv,
> >                                        unsigned int *handle)
> > @@ -295,6 +413,8 @@ void exynos_drm_gem_destroy(struct exynos_drm_gem_obj *exynos_gem_obj)
> >
> >        if (exynos_gem_obj->flags & EXYNOS_BO_NONCONTIG)
> >                exynos_drm_gem_put_pages(obj);
> > +       else if (exynos_gem_obj->flags & EXYNOS_BO_USERPTR)
> > +               exynos_drm_put_userptr(obj);
> >        else
> >                exynos_drm_free_buf(obj->dev, exynos_gem_obj->flags, buf);
> >
> > @@ -606,6 +726,277 @@ int exynos_drm_gem_mmap_ioctl(struct drm_device *dev, void *data,
> >        return 0;
> >  }
> >
> > +/*
> > + * exynos_drm_get_userptr - get the pages backing VMAs in the user
> > + * address space.
> > + *
> > + * this function is used for the gpu driver to access user space
> > + * directly, avoiding a memcpy for better performance.
> > + */
> > +static int exynos_drm_get_userptr(struct drm_device *dev,
> > +                               struct exynos_drm_gem_obj *obj,
> > +                               unsigned long userptr,
> > +                               unsigned int write)
> > +{
> > +       unsigned int get_npages;
> > +       unsigned long npages = 0;
> > +       struct vm_area_struct *vma;
> > +       struct exynos_drm_gem_buf *buf = obj->buffer;
> > +       int ret;
> > +
> > +       down_read(&current->mm->mmap_sem);
> > +       vma = find_vma(current->mm, userptr);
> > +
> > +       /* the memory region mmaped with VM_PFNMAP. */
> > +       if (vma && (vma->vm_flags & VM_PFNMAP) && (vma->vm_pgoff)) {
> > +               unsigned long this_pfn, prev_pfn, pa;
> > +               unsigned long start, end, offset;
> > +               struct scatterlist *sgl;
> > +               int ret;
> > +
> > +               start = userptr;
> > +               offset = userptr & ~PAGE_MASK;
> > +               end = start + buf->size;
> > +               sgl = buf->sgt->sgl;
> > +
> > +               for (prev_pfn = 0; start < end; start += PAGE_SIZE) {
> > +                       ret = follow_pfn(vma, start, &this_pfn);
> > +                       if (ret)
> > +                               goto err;
> > +
> > +                       if (prev_pfn == 0) {
> > +                               pa = this_pfn << PAGE_SHIFT;
> > +                               buf->dma_addr = pa + offset;
> > +                       } else if (this_pfn != prev_pfn + 1) {
> > +                               ret = -EINVAL;
> > +                               goto err;
> > +                       }
> > +
> > +                       sg_dma_address(sgl) = (pa + offset);
> > +                       sg_dma_len(sgl) = PAGE_SIZE;
> > +                       prev_pfn = this_pfn;
> > +                       pa += PAGE_SIZE;
> > +                       npages++;
> > +                       sgl = sg_next(sgl);
> > +               }
> > +
> > +               obj->vma = get_vma(vma);
> > +               if (!obj->vma) {
> > +                       ret = -ENOMEM;
> > +                       goto err;
> > +               }
> > +
> > +               up_read(&current->mm->mmap_sem);
> > +               buf->pfnmap = true;
> > +
> > +               return npages;
> > +err:
> > +               buf->dma_addr = 0;
> > +               up_read(&current->mm->mmap_sem);
> > +
> > +               return ret;
> > +       }
> > +
> > +       up_read(&current->mm->mmap_sem);
> > +
> > +       /*
> > +        * flag VM_DONTCOPY on the VMAs in the user address space to
> > +        * prevent the pages backing them from being COWed.
> > +        *
> > +        * The COW issue.
> > +        * - while a device's DMA is using the pages backing the VMAs,
> > +        * if the current process forks, the pages being accessed by
> > +        * the DMA would be copied into the child's address space
> > +        * (Copy On Write). these pages may lose coherency with the
> > +        * parent's if the child writes to them, so we need to flag
> > +        * VM_DONTCOPY to prevent the pages from being COWed.
> > +        */
> > +       ret = cow_userptr_vma(buf, 1);
> > +       if (ret < 0) {
> > +               DRM_ERROR("failed to set VM_DONTCOPY to vma.\n");
> > +               cow_userptr_vma(buf, 0);
> > +               return 0;
> > +       }
> > +
> > +       buf->write = write;
> > +       npages = buf->size >> PAGE_SHIFT;
> > +
> > +       down_read(&current->mm->mmap_sem);
> > +
> > +       /*
> > +        * Basically, the pages from get_user_pages() can be neither
> > +        * migrated by CMA nor swapped out.
> > +        *
> > +        * The migration issue.
> > +        * - Pages reserved by CMA for a device using DMA can be used
> > +        * by the kernel, and if the device driver then wants those
> > +        * pages while the kernel is using them, their contents are
> > +        * copied into other, newly allocated pages to migrate them;
> > +        * only then can the device driver use the pages for itself.
> > +        * Thus, once migrated, the pages being accessed by DMA could
> > +        * be replaced by others, so the DMA may end up accessing
> > +        * pages it doesn't want.
> > +        *
> > +        * But the use of get_user_pages is safe from such a migration
> > +        * issue because the pages from get_user_pages can be neither
> > +        * migrated nor swapped out.
> > +        */
> > +       get_npages = get_user_pages(current, current->mm, userptr,
> > +                                       npages, write, 1, buf->pages, NULL);
> > +       up_read(&current->mm->mmap_sem);
> > +       if (get_npages != npages)
> > +               DRM_ERROR("failed to get user_pages.\n");
> > +
> > +       buf->userptr = userptr;
> > +       buf->pfnmap = false;
> > +
> > +       return get_npages;
> > +}
> > +
> > +int exynos_drm_gem_userptr_ioctl(struct drm_device *dev, void *data,
> > +                                     struct drm_file *file_priv)
> > +{
> > +       struct exynos_drm_private *priv = dev->dev_private;
> > +       struct exynos_drm_gem_obj *exynos_gem_obj;
> > +       struct drm_exynos_gem_userptr *args = data;
> > +       struct exynos_drm_gem_buf *buf;
> > +       struct scatterlist *sgl;
> > +       unsigned long size, userptr;
> > +       unsigned int npages;
> > +       int ret, get_npages;
> > +
> > +       DRM_DEBUG_KMS("%s\n", __FILE__);
> > +
> > +       if (!args->size) {
> > +               DRM_ERROR("invalid size.\n");
> > +               return -EINVAL;
> > +       }
> > +
> > +       ret = check_gem_flags(args->flags);
> > +       if (ret)
> > +               return ret;
> > +
> > +       size = roundup_gem_size(args->size, EXYNOS_BO_USERPTR);
> > +
> > +       if (size > priv->userptr_limit) {
> > +               DRM_ERROR("exceeded maximum userptr size.\n");
> > +               return -EINVAL;
> > +       }
> > +
> > +       mutex_lock(&dev->struct_mutex);
> > +
> > +       /*
> > +        * Limited userptr size.
> > +        * - A user request with userptr SHOULD BE limited to the
> > +        * userptr_limit size. A malicious process could otherwise
> > +        * block all other processes and degrade system performance,
> > +        * so the size a user can request is limited to the
> > +        * userptr_limit value, and that value can be changed only
> > +        * by the root user.
> > +        */
> > +       if (priv->userptr_limit >= size)
> > +               priv->userptr_limit -= size;
> > +       else {
> > +               DRM_DEBUG_KMS("insufficient userptr size.\n");
> > +               mutex_unlock(&dev->struct_mutex);
> > +               return -EINVAL;
> > +       }
> > +
> > +       mutex_unlock(&dev->struct_mutex);
> > +
> > +       userptr = args->userptr;
> > +
> > +       buf = exynos_drm_init_buf(dev, size);
> > +       if (!buf)
> > +               return -ENOMEM;
> > +
> > +       exynos_gem_obj = exynos_drm_gem_init(dev, size);
> > +       if (!exynos_gem_obj) {
> > +               ret = -ENOMEM;
> > +               goto err_free_buffer;
> > +       }
> > +
> > +       buf->sgt = kzalloc(sizeof(struct sg_table), GFP_KERNEL);
> > +       if (!buf->sgt) {
> > +               DRM_ERROR("failed to allocate buf->sgt.\n");
> > +               ret = -ENOMEM;
> > +               goto err_release_gem;
> > +       }
> > +
> > +       npages = size >> PAGE_SHIFT;
> > +
> > +       ret = sg_alloc_table(buf->sgt, npages, GFP_KERNEL);
> > +       if (ret < 0) {
> > +               DRM_ERROR("failed to initialize sg table.\n");
> > +               goto err_free_sgt;
> > +       }
> > +
> > +       buf->pages = kzalloc(npages * sizeof(struct page *), GFP_KERNEL);
> > +       if (!buf->pages) {
> > +               DRM_ERROR("failed to allocate buf->pages\n");
> > +               ret = -ENOMEM;
> > +               goto err_free_table;
> > +       }
> > +
> > +       exynos_gem_obj->buffer = buf;
> > +
> > +       get_npages = exynos_drm_get_userptr(dev, exynos_gem_obj, userptr, 1);
> > +       if (get_npages != npages) {
> > +               DRM_ERROR("failed to get user_pages.\n");
> > +               ret = get_npages;
> > +               goto err_release_userptr;
> > +       }
> > +
> > +       ret = exynos_drm_gem_handle_create(&exynos_gem_obj->base, file_priv,
> > +                                               &args->handle);
> > +       if (ret < 0) {
> > +               DRM_ERROR("failed to create gem handle.\n");
> > +               goto err_release_userptr;
> > +       }
> > +
> > +       sgl = buf->sgt->sgl;
> > +
> > +       /*
> > +        * if buf->pfnmap is false then fill the sgl with the pages;
> > +        * otherwise the sgl was already updated in the pfnmap path,
> > +        * so nothing needs to be done here.
> > +        */
> > +       if (!buf->pfnmap) {
> > +               unsigned int i = 0;
> > +
> > +               /* set all pages to sg list. */
> > +               while (i < npages) {
> > +                       sg_set_page(sgl, buf->pages[i], PAGE_SIZE, 0);
> > +                       sg_dma_address(sgl) = page_to_phys(buf->pages[i]);
> > +                       i++;
> > +                       sgl = sg_next(sgl);
> > +               }
> > +       }
> > +
> > +       /* always use EXYNOS_BO_USERPTR as memory type for userptr. */
> > +       exynos_gem_obj->flags |= EXYNOS_BO_USERPTR;
> > +
> > +       return 0;
> > +
> > +err_release_userptr:
> > +       get_npages--;
> > +       while (get_npages >= 0)
> > +               put_page(buf->pages[get_npages--]);
> > +       kfree(buf->pages);
> > +       buf->pages = NULL;
> > +err_free_table:
> > +       sg_free_table(buf->sgt);
> > +err_free_sgt:
> > +       kfree(buf->sgt);
> > +       buf->sgt = NULL;
> > +err_release_gem:
> > +       drm_gem_object_release(&exynos_gem_obj->base);
> > +       kfree(exynos_gem_obj);
> > +       exynos_gem_obj = NULL;
> > +err_free_buffer:
> > +       exynos_drm_free_buf(dev, 0, buf);
> > +       return ret;
> > +}
> > +
> >  int exynos_drm_gem_get_ioctl(struct drm_device *dev, void *data,
> >                                      struct drm_file *file_priv)
> >  {
> >        struct exynos_drm_gem_obj *exynos_gem_obj;
> > diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.h b/drivers/gpu/drm/exynos/exynos_drm_gem.h
> > index 3334c9f..72bd993 100644
> > --- a/drivers/gpu/drm/exynos/exynos_drm_gem.h
> > +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.h
> > @@ -29,27 +29,35 @@
> >  #define to_exynos_gem_obj(x)   container_of(x,\
> >                        struct exynos_drm_gem_obj, base)
> >
> > -#define IS_NONCONTIG_BUFFER(f)         (f & EXYNOS_BO_NONCONTIG)
> > +#define IS_NONCONTIG_BUFFER(f) ((f & EXYNOS_BO_NONCONTIG) ||\
> > +                                       (f & EXYNOS_BO_USERPTR))
> >
> >  /*
> >  * exynos drm gem buffer structure.
> >  *
> >  * @kvaddr: kernel virtual address to allocated memory region.
> > + * @userptr: user space address.
> >  * @dma_addr: bus address(accessed by dma) to allocated memory region.
> >  *     - this address could be physical address without IOMMU and
> >  *     device address with IOMMU.
> > + * @write: whether pages will be written to by the caller.
> >  * @sgt: sg table to transfer page data.
> >  * @pages: contain all pages to allocated memory region.
> >  * @page_size: could be 4K, 64K or 1MB.
> >  * @size: size of allocated memory region.
> > + * @pfnmap: indicate whether memory region from userptr is mmaped with
> > + *     VM_PFNMAP or not.
> >  */
> >  struct exynos_drm_gem_buf {
> >        void __iomem            *kvaddr;
> > +       unsigned long           userptr;
> >        dma_addr_t              dma_addr;
> > +       unsigned int            write;
> >        struct sg_table         *sgt;
> >        struct page             **pages;
> >        unsigned long           page_size;
> >        unsigned long           size;
> > +       bool                    pfnmap;
> >  };
> >
> >  /*
> > @@ -64,6 +72,8 @@ struct exynos_drm_gem_buf {
> >  *     continuous memory region allocated by user request
> >  *     or at framebuffer creation.
> >  * @size: total memory size to physically non-continuous memory region.
> > + * @vma: a pointer to the vma to user address space and used to release
> > + *     the pages to user space.
> >  * @flags: indicate memory type of allocated buffer and cache attribute.
> >  *
> >  * P.S. this object would be transferred to user as kms_bo.handle so
> > @@ -73,6 +83,7 @@ struct exynos_drm_gem_obj {
> >        struct drm_gem_object           base;
> >        struct exynos_drm_gem_buf       *buffer;
> >        unsigned long                   size;
> > +       struct vm_area_struct           *vma;
> >        unsigned int                    flags;
> >  };
> >
> > @@ -130,6 +141,10 @@ int exynos_drm_gem_map_offset_ioctl(struct drm_device *dev, void *data,
> >  int exynos_drm_gem_mmap_ioctl(struct drm_device *dev, void *data,
> >                              struct drm_file *file_priv);
> >
> > +/* map user space allocated by malloc to pages. */
> > +int exynos_drm_gem_userptr_ioctl(struct drm_device *dev, void *data,
> > +                                     struct drm_file *file_priv);
> > +
> >  /* get buffer information to memory region allocated by gem. */
> >  int exynos_drm_gem_get_ioctl(struct drm_device *dev, void *data,
> >                                      struct drm_file *file_priv);
> > diff --git a/include/drm/exynos_drm.h b/include/drm/exynos_drm.h
> > index 52465dc..33fa1e2 100644
> > --- a/include/drm/exynos_drm.h
> > +++ b/include/drm/exynos_drm.h
> > @@ -77,6 +77,23 @@ struct drm_exynos_gem_mmap {
> >  };
> >
> >  /**
> > + * User-requested user space importing structure
> > + *
> > + * @userptr: user space address allocated by malloc.
> > + * @size: size to the buffer allocated by malloc.
> > + * @flags: indicate user-desired cache attribute to map the allocated
> > + *     buffer to kernel space.
> > + * @handle: a returned handle to created gem object.
> > + *     - this handle will be set by gem module of kernel side.
> > + */
> > +struct drm_exynos_gem_userptr {
> > +       uint64_t userptr;
> > +       uint64_t size;
> > +       unsigned int flags;
> > +       unsigned int handle;
> > +};
> > +
> > +/**
> >  * A structure to gem information.
> >  *
> >  * @handle: a handle to gem object created.
> > @@ -135,8 +152,10 @@ enum e_drm_exynos_gem_mem_type {
> >        EXYNOS_BO_CACHABLE      = 1 << 1,
> >        /* write-combine mapping. */
> >        EXYNOS_BO_WC            = 1 << 2,
> > +       /* user space memory allocated by malloc. */
> > +       EXYNOS_BO_USERPTR       = 1 << 3,
> >        EXYNOS_BO_MASK          = EXYNOS_BO_NONCONTIG | EXYNOS_BO_CACHABLE |
> > -                                       EXYNOS_BO_WC
> > +                                       EXYNOS_BO_WC | EXYNOS_BO_USERPTR
> >  };
> >
> >  struct drm_exynos_g2d_get_ver {
> > @@ -173,7 +192,7 @@ struct drm_exynos_g2d_exec {
> >  #define DRM_EXYNOS_GEM_CREATE          0x00
> >  #define DRM_EXYNOS_GEM_MAP_OFFSET      0x01
> >  #define DRM_EXYNOS_GEM_MMAP            0x02
> > -/* Reserved 0x03 ~ 0x05 for exynos specific gem ioctl */
> > +#define DRM_EXYNOS_GEM_USERPTR         0x03
> >  #define DRM_EXYNOS_GEM_GET             0x04
> >  #define DRM_EXYNOS_USER_LIMIT          0x05
> >  #define DRM_EXYNOS_PLANE_SET_ZPOS      0x06
> > @@ -193,6 +212,9 @@ struct drm_exynos_g2d_exec {
> >  #define DRM_IOCTL_EXYNOS_GEM_MMAP      DRM_IOWR(DRM_COMMAND_BASE + \
> >                DRM_EXYNOS_GEM_MMAP, struct drm_exynos_gem_mmap)
> >
> > +#define DRM_IOCTL_EXYNOS_GEM_USERPTR   DRM_IOWR(DRM_COMMAND_BASE + \
> > +               DRM_EXYNOS_GEM_USERPTR, struct drm_exynos_gem_userptr)
> > +
> >  #define DRM_IOCTL_EXYNOS_GEM_GET       DRM_IOWR(DRM_COMMAND_BASE + \
> >                DRM_EXYNOS_GEM_GET,     struct drm_exynos_gem_info)
> >
> > --
> > 1.7.4.1
> >

_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/dri-devel
