Re: Mourning the demise of mkcephfs

On Tue, Nov 12, 2013 at 2:22 AM, Wido den Hollander <wido@xxxxxxxx> wrote:
> On 11/11/2013 06:51 PM, Dave (Bob) wrote:
>>
>> The utility mkcephfs seemed to work: it was very simple to use and
>> apparently effective.
>>
>> It has been deprecated in favour of something called ceph-deploy, which
>> does not work for me.
>>
>> I've ignored the deprecation messages until now, but in going from 0.70
>> to 0.72 I find that mkcephfs has finally gone.
>>
>> I have tried ceph-deploy, and it seems to be tied to specific
>> 'distributions' in some way.
>>
>> It is unusable for me at present, because it reports:
>>
>> [ceph_deploy][ERROR ] UnsupportedPlatform: Platform is not supported:
>>
>>
>> I therefore need to go back to first principles, but the documentation
>> seems to have dropped descriptions of driving ceph without smoke and
>> mirrors.
>>
>> The direct approach may be more laborious, but at least it would not
>> depend on anything except ceph itself.
>>
>
> I'm not a very big fan of ceph-deploy myself either.

Why not? I definitely want to make it better for users, and any
feedback is super useful. What kind of caveats have you found that
led you to not use it (or to use something completely different)?

> Most of the installations I do are done by bootstrapping the monitors
> and OSDs manually.
>
> I have some homebrew scripts for this, but I mainly use Puppet to make
> sure all the packages and configuration are present on the nodes;
> afterwards it's just a matter of adding the OSDs and formatting their
> disks once.
>
> The guide to bootstrapping a monitor:
> http://eu.ceph.com/docs/master/dev/mon-bootstrap/
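>
> In rough strokes that comes down to something like the following sketch
> (the hostname, IP and fsid are placeholders, the paths assume the
> defaults; double-check the exact flags against the docs for your version):
>
>   # keyrings: one for the monitors, one for client.admin
>   ceph-authtool --create-keyring /tmp/ceph.mon.keyring \
>     --gen-key -n mon. --cap mon 'allow *'
>   ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring \
>     --gen-key -n client.admin \
>     --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
>   ceph-authtool /tmp/ceph.mon.keyring \
>     --import-keyring /etc/ceph/ceph.client.admin.keyring
>
>   # build an initial monmap with one monitor, then initialise its data dir
>   # (mon-a and 192.168.0.10 are example values; reuse the fsid in ceph.conf)
>   monmaptool --create --add mon-a 192.168.0.10 --fsid $(uuidgen) /tmp/monmap
>   mkdir -p /var/lib/ceph/mon/ceph-mon-a
>   ceph-mon --mkfs -i mon-a --monmap /tmp/monmap \
>     --keyring /tmp/ceph.mon.keyring
>
>   # start the monitor (or let your init system / Puppet handle it)
>   ceph-mon -i mon-a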
>
> When the monitor cluster is running you can start generating cephx keys for
> the OSDs and add them to the cluster:
> http://eu.ceph.com/docs/master/rados/operations/add-or-rm-osds/
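>
> For each OSD that is roughly the following (the id, weight, host name and
> paths are just examples; the data directory would normally be a freshly
> formatted disk mounted there, and again the docs are the reference for the
> exact syntax):
>
>   # allocate an OSD id in the cluster map (prints the new id, e.g. 0)
>   ceph osd create
>
>   # initialise the data directory and generate the OSD's cephx key
>   mkdir -p /var/lib/ceph/osd/ceph-0
>   ceph-osd -i 0 --mkfs --mkkey
>
>   # register the key and place the OSD in the CRUSH map, then start it
>   ceph auth add osd.0 osd 'allow *' mon 'allow rwx' \
>     -i /var/lib/ceph/osd/ceph-0/keyring
>   ceph osd crush add osd.0 1.0 root=default host=node1
>   ceph-osd -i 0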
>
> I don't know if the docs are 100% correct. I've done this so many times
> that I do a lot of things without even reading the docs, so there might
> be a typo in them somewhere. If so, report it so it can be fixed.
>
> While I think that ceph-deploy works for a lot of people, I fully
> understand that some people just want to manually bootstrap a Ceph
> cluster from scratch.
>
> Wido
>
>
>> Maybe I need to step back a version or two, set up my cluster with
>> mkcephfs, then switch back to the latest to use it.
>>
>> I'll search the documentation again.
>
>
> --
> Wido den Hollander
> 42on B.V.
>
> Phone: +31 (0)20 700 9902
> Skype: contact42on
>