On 12/25/2011 06:16 AM, Brendan Conoboy wrote:
On 12/24/2011 09:25 PM, Dennis Gilmore wrote:
Just throwing out what I'm seeing; let's see what ideas we can come up
with. Performance is better than before, and Seneca has done a great
job and put a lot of hard work into the reorg, and I'm thankful for that.
We just have another bottleneck to address.
Allocating builders to individual disks rather than a single RAID volume will help.
Care to explain why?
Since hfp/sfp F15 builds happen at the same time,
having 2 hfp disks and 2 sfp disks is a good start. Split it up again by
some other criteria: you want to ensure that any one builder only
causes one spindle to be used, and any 2 builders each only cause 2
spindles to be used (1-to-1), and so forth. As long as any single
spindle isn't doing multiple simultaneous mock inits, it should scale well.
You can largely avoid this by making sure your RAID and FS are aligned.
Most mock IOPS are small, and provided the layers are aligned right, it should
scale very well on RAID 0/10 (RAID 5/6 will inevitably cripple just about
anything when it comes to writes regardless of what you do, unless the
vast majority of your writes are a multiple of your stripe width, which
is rarely the case).
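On the alignment point: ext4 can be told the RAID geometry at mkfs time via its stride and stripe-width extended options. A minimal sketch of the arithmetic, assuming a hypothetical 64 KiB RAID chunk, a 4 KiB filesystem block, and RAID 10 over 8 disks (4 data-bearing spindles); the /dev/md0 device name is purely illustrative:

```shell
# Hypothetical geometry: 64 KiB RAID chunk, 4 KiB ext4 block,
# RAID 10 over 8 disks => 4 data-bearing spindles.
chunk_kb=64
block_kb=4
data_disks=4

stride=$((chunk_kb / block_kb))        # FS blocks per RAID chunk
stripe_width=$((stride * data_disks))  # FS blocks per full stripe

# Print the mkfs invocation rather than running it (device is illustrative).
echo "mkfs.ext4 -E stride=${stride},stripe-width=${stripe_width} /dev/md0"
```

With those numbers that prints stride=16,stripe-width=64; plug in your actual chunk size and spindle count.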
Allocate the large files the builders are using all at once
(no sparse files) to ensure large reads/writes don't require seeks.
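One way to do that preallocation is fallocate(1), which reserves the extents up front without writing zeroes or leaving sparse holes; the file name and size here are just examples:

```shell
# Preallocate a builder image in one go (1 MiB here for illustration),
# so later large sequential I/O doesn't have to seek between fragments.
fallocate -l 1M builder-scratch.img

# Confirm the blocks are actually allocated, not sparse: the allocated
# block count (%b, 512-byte units) should cover the apparent size (%s).
stat -c 'apparent=%s allocated_blocks=%b' builder-scratch.img
```

Note fallocate needs filesystem support (ext4 has it); on filesystems without it you'd have to fall back to writing the file out in full.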
Is this a "proper" SAN or just another Linux box with some disks in it?
Is NFS backed by a SAN "volume"?
Use the async NFS export option if it isn't already in use.
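The async option goes in /etc/exports; a sketch, with the export path, subnet, and the other options all assumptions to adapt:

```
# /etc/exports -- async lets the server ack writes before they hit disk
# (fine for throw-away build scratch, not for data you care about)
/srv/build  192.168.1.0/24(rw,async,no_root_squash)
```

Then exportfs -ra to apply without a full NFS restart.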
Disable journaling (normally dangerous, but this is throw-away space).
Not really dangerous - the only danger is that you might have to wait
for fsck to do its thing on an unclean shutdown (which can take hours
on a full TB-scale disk, granted).
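Dropping the journal on an existing ext4 filesystem is a one-liner with tune2fs; sketched here against a throw-away image file so it can be tried without touching a real disk (on a real builder you'd point it at the unmounted scratch device instead):

```shell
# e2fsprogs tools often live outside a normal user's PATH
export PATH="$PATH:/sbin:/usr/sbin"

# Build a small throw-away ext4 image to demonstrate on.
truncate -s 64M scratch.img
mkfs.ext4 -q -F scratch.img

# Drop the journal (the "normally dangerous" tweak: unclean shutdowns
# then need a full fsck instead of a fast journal replay).
tune2fs -O ^has_journal scratch.img

# has_journal should now be gone from the feature list.
tune2fs -l scratch.img | grep -q has_journal || echo "journal removed"
```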
Speaking of "dangerous" tweaks, you could LD_PRELOAD libeatmydata (add
to a profile.d file in the mock config, and add the package to the
buildsys-build group). That will eat fsync() calls which will smooth out
commits and make a substantial difference to performance. Since it's
scratch space anyway it doesn't matter WRT safety.
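Wiring libeatmydata into the chroot via profile.d could look like the fragment below; the drop-in filename and the library path are assumptions and vary by distro and arch:

```
# /etc/profile.d/eatmydata.sh inside the build chroot:
# turns fsync()/fdatasync() into no-ops for every build process
export LD_PRELOAD=/usr/lib64/libeatmydata.so
```

Plus adding the package to the buildsys-build group, as noted, so the library actually exists inside the chroot.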
Use noatime mounts if that won't break package builds that check filesystem atimes.
The zsh build will break on NFS whatever you do. It will also break on a
local FS with noatime. There may be other packages that suffer from this
issue, but I don't recall them off the top of my head. Anyway, that is an
issue for build policy: have one builder using block-level storage
with atime enabled and the rest on NFS.
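For the builders that do go noatime, it's a plain mount option in /etc/fstab; the device and mount point here are illustrative:

```
# /etc/fstab -- skip atime updates on the build scratch FS
/dev/md0  /srv/build  ext4  defaults,noatime  0 0
```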
Once all that is done, tweak the number of nfsds so that
there are as many as possible without most of them going into deep
sleep. Perhaps somebody else can suggest some optimal sysctl and ext4
settings.
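On Red Hat-ish servers of that era the nfsd thread count is set in /etc/sysconfig/nfs; the value below is a starting point to tune against observed load, not a recommendation:

```
# /etc/sysconfig/nfs -- the default of 8 threads is usually far too few
# for many concurrent builders; raise it and watch /proc/net/rpc/nfsd
# to see whether threads are saturated or mostly idle
RPCNFSDCOUNT=64
```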
As mentioned in a previous post, have a look here:
The deadline scheduler might also help on the NAS/SAN end, plus all the
usual tweaks (e.g. make sure the write caches on the disks are enabled;
if the disks support write-read-verify, disable it, etc.)
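Those last tweaks could be applied at boot from something like /etc/rc.d/rc.local on the storage node; the device name is an assumption, and this covers the scheduler and write cache only:

```
# /etc/rc.d/rc.local fragment (storage node)

# Use the deadline elevator on the disks backing the build storage
echo deadline > /sys/block/sda/queue/scheduler

# Enable the on-disk write cache (-W1); whether the drive supports
# write-read-verify at all shows up in the hdparm -I feature list
hdparm -W1 /dev/sda
```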