On 7/7/2014 5:24 PM, André-Sebastian Liebe wrote:
On 07/07/2014 03:54 PM, Konstantinos Skarlatos wrote:
On 7/7/2014 4:38 PM, Konstantinos Skarlatos wrote:
Hello List,
can anyone tell me how much time is acceptable and expected for a
multi-disk btrfs array on classical hard disk drives to mount?
I'm having a bit of trouble with my current systemd setup, because it
can no longer mount my btrfs raid after I added the 5th drive. With
the 4-drive setup it occasionally failed to mount; now it fails every
time because the default timeout of 1m 30s is reached and the mount is
aborted.
My last 10 manual mounts took between 1m 57s and 2m 12s to finish.
I have the exact same problem, and have to manually mount my large
multi-disk btrfs filesystems, so I would be interested in a solution
as well.
Hi Konstantinos, you can work around this by manually creating a systemd
mount unit.
- First review the autogenerated systemd mount unit (systemctl show
<your-mount-unit>.mount). You can get the unit name by running
'systemctl' and looking for your failed mount.
- Then take the needed values (After, Before, Conflicts,
RequiresMountsFor, Where, What, Options, Type, WantedBy) and put them
into a new systemd mount unit file (for example under
/usr/lib/systemd/system/<your-mount-unit>.mount).
- Now add a TimeoutSec entry with a large enough value under [Mount].
- If you later want to automount your raid, add the WantedBy under [Install].
- Now issue a 'systemctl daemon-reload' and check syslog for error
messages.
- If there are no errors, you can enable your manual mount unit with
'systemctl enable <your-mount-unit>.mount' and safely comment out your
old fstab entry (so systemd no longer autogenerates a unit from it);
see the command sketch after the unit file below.
-- 8< ----------- 8< ----------- 8< ----------- 8< ----------- 8< ----------- 8< ----------- 8< -----------
[Unit]
Description=Mount /data/pool0
After=dev-disk-by\x2duuid-066141c6\x2d16ca\x2d4a30\x2db55c\x2de606b90ad0fb.device systemd-journald.socket local-fs-pre.target system.slice -.mount
Before=umount.target
Conflicts=umount.target
RequiresMountsFor=/data /dev/disk/by-uuid/066141c6-16ca-4a30-b55c-e606b90ad0fb
[Mount]
Where=/data/pool0
What=/dev/disk/by-uuid/066141c6-16ca-4a30-b55c-e606b90ad0fb
Options=rw,relatime,skip_balance,compress
Type=btrfs
TimeoutSec=3min
[Install]
WantedBy=dev-disk-by\x2duuid-066141c6\x2d16ca\x2d4a30\x2db55c\x2de606b90ad0fb.device
-- 8< ----------- 8< ----------- 8< ----------- 8< ----------- 8< ----------- 8< ----------- 8< -----------
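For reference, assuming the file above is saved as data-pool0.mount (the
escaped unit name systemd expects for Where=/data/pool0; adjust it to your
own mount point), the whole sequence would look roughly like this:

# cp data-pool0.mount /usr/lib/systemd/system/
# systemctl daemon-reload
# systemctl start data-pool0.mount    (test that the mount finishes within the new timeout)
# systemctl enable data-pool0.mount   (only needed if you want it mounted automatically at boot)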
Hi André,
This unit file works for me, thank you for creating it! Can somebody put
it on the wiki?
My hardware setup contains:
- Intel Core i7 4770
- Kernel 3.15.2-1-ARCH
- 32GB RAM
- dev 1-4 are 4TB Seagate ST4000DM000 (5900rpm)
- dev 5 is a 4TB Western Digital WDC WD40EFRX (5400rpm)
Thanks in advance
André-Sebastian Liebe
--------------------------------------------------------------------------------------------------
# btrfs fi sh
Label: 'apc01_pool0' uuid: 066141c6-16ca-4a30-b55c-e606b90ad0fb
Total devices 5 FS bytes used 14.21TiB
devid 1 size 3.64TiB used 2.86TiB path /dev/sdd
devid 2 size 3.64TiB used 2.86TiB path /dev/sdc
devid 3 size 3.64TiB used 2.86TiB path /dev/sdf
devid 4 size 3.64TiB used 2.86TiB path /dev/sde
devid 5 size 3.64TiB used 2.88TiB path /dev/sdb
Btrfs v3.14.2-dirty
# btrfs fi df /data/pool0/
Data, single: total=14.28TiB, used=14.19TiB
System, RAID1: total=8.00MiB, used=1.54MiB
Metadata, RAID1: total=26.00GiB, used=20.20GiB
unknown, single: total=512.00MiB, used=0.00
--
Konstantinos Skarlatos
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html