CephFS use cases + MDS limitations

Hi Ceph community,

I’d like to get a feel for some of the problems that CephFS users are
encountering with single-MDS deployments. There have been requests for
a stable distributed metadata/MDS service [1], and I’m guessing that’s
because your workloads perform many, many metadata operations. Some of
you mentioned opening many files in a directory for checkpointing,
recursive stats on a directory, etc. [2] (sketched below, after the
list), and I’d like more details, such as:
- workloads/applications that stress the MDS service badly enough
that you would call for multi-MDS support
- use cases for the Ceph file system (I’m not particularly interested
in CephFS being used to host VMs, since many of those deployments are
migrating to RBD)
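
To make that concrete, here is a minimal sketch (Python; the mount
point and file count are assumptions for illustration, not from any
reported workload) of the N-files-in-one-directory checkpoint pattern
plus a follow-up recursive stat. Both phases are almost pure metadata
traffic against a single MDS; only the file contents go to the OSDs:

  import os

  CEPHFS_MOUNT = "/mnt/cephfs"   # assumed CephFS mount point
  CKPT_DIR = os.path.join(CEPHFS_MOUNT, "checkpoints")
  NUM_RANKS = 10000              # e.g., one checkpoint file per rank

  os.makedirs(CKPT_DIR, exist_ok=True)

  # Phase 1: every "rank" creates its own checkpoint file. Each
  # open(..., "w") is a create+open request handled by the MDS;
  # only the write itself is data traffic to the OSDs.
  for rank in range(NUM_RANKS):
      with open(os.path.join(CKPT_DIR, "ckpt.%d" % rank), "w") as f:
          f.write("checkpoint data\n")

  # Phase 2: a recursive stat over the directory tree -- a second
  # burst of per-file metadata requests (readdir + getattr) that
  # all land on the same MDS.
  total = 0
  for root, dirs, files in os.walk(CKPT_DIR):
      for name in files:
          total += os.stat(os.path.join(root, name)).st_size
  print("%d files, %d bytes checkpointed" % (NUM_RANKS, total))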

I’m just trying to get an idea of what’s out there and the problems
CephFS users encounter as a result of a bottlenecked MDS, whether a
single node or a cluster.

Thanks!

Michael

[1] CephFS MDS Status Discussion,
http://ceph.com/dev-notes/cephfs-mds-status-discussion/
[2] CephFS First Product Release Discussion,
http://thread.gmane.org/gmane.comp.file-systems.ceph.devel/13524