Re: [PATCH] [RFC] xfs: wire up aio_fsync method

On Tue, Jun 17, 2014 at 08:20:55PM -0700, Jens Axboe wrote:
> On 2014-06-17 20:13, Dave Chinner wrote:
> >On Tue, Jun 17, 2014 at 07:24:10PM -0700, Jens Axboe wrote:
> >>On 2014-06-17 17:28, Dave Chinner wrote:
> >>>[cc linux-mm]
> >>>
> >>>On Tue, Jun 17, 2014 at 07:23:58AM -0600, Jens Axboe wrote:
> >>>>On 2014-06-16 16:27, Dave Chinner wrote:
> >>>>>On Mon, Jun 16, 2014 at 01:30:42PM -0600, Jens Axboe wrote:
> >>>>>>On 06/16/2014 01:19 AM, Dave Chinner wrote:
> >>>>>>>On Sun, Jun 15, 2014 at 08:58:46PM -0600, Jens Axboe wrote:
> >>>>>>>>On 2014-06-15 20:00, Dave Chinner wrote:
> >>>>>>>>>On Mon, Jun 16, 2014 at 08:33:23AM +1000, Dave Chinner wrote:
> >>>>>>>>>FWIW, the non-linear system CPU overhead of a fs_mark test I've been
> >>>>>>>>>running isn't anything related to XFS.  The async fsync workqueue
> >>>>>>>>>results in several thousand worker threads dispatching IO
> >>>>>>>>>concurrently across 16 CPUs:
> >....
> >>>>>>>>>I know that the tag allocator has been rewritten, so I tested
> >>>>>>>>>against a current Linus kernel with the XFS aio-fsync
> >>>>>>>>>patch. The results are all over the place - from several sequential
> >>>>>>>>>runs of the same test (removing the files in between so each test
> >>>>>>>>>starts from an empty fs):
> >>>>>>>>>
> >>>>>>>>>Wall time	sys time	IOPS	 files/s
> >>>>>>>>>4m58.151s	11m12.648s	30,000	 13,500
> >>>>>>>>>4m35.075s	12m45.900s	45,000	 15,000
> >>>>>>>>>3m10.665s	11m15.804s	65,000	 21,000
> >>>>>>>>>3m27.384s	11m54.723s	85,000	 20,000
> >>>>>>>>>3m59.574s	11m12.012s	50,000	 16,500
> >>>>>>>>>4m12.704s	12m15.720s	50,000	 17,000

....
> >But the IOPS rate has definitely increased with this config
> >- I just saw 90k, 100k and 110k IOPS in the last 3 iterations of the
> >workload (the above profile is from the 100k IOPS period). However,
> >the wall time was still only 3m58s, which again tends to implicate
> >the write() portion of the benchmark for causing the slowdowns
> >rather than the fsync() portion that is dispatching all the IO...
> 
> Some contention for this case is hard to avoid, and the above looks
> better than 3.15 does. So the big question is whether it's worth
> fixing the gaps with multiple waitqueues (and if that actually still
> buys us anything), or whether we should just disable them.
> 
> If I can get you to try one more thing, can you apply this patch and
> give it a whirl? Get rid of the other patches I sent first; this one
> has everything.
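
(Aside, for anyone following the thread without the tag allocator
context: the "multiple waitqueues" Jens mentions are the usual trick of
spreading sleepers across several wait queues so that every waiter and
every wakeup isn't hammering the same cachelines. The snippet below is
only a rough, generic sketch of that technique, not the actual blk-mq
code - NR_WAITQ, tag_waitq, pick_waitq and wake_next_waiter are all
invented names for illustration.)

#include <linux/wait.h>
#include <linux/atomic.h>
#include <linux/smp.h>

#define NR_WAITQ	8	/* illustrative only; power of two */

static wait_queue_head_t tag_waitq[NR_WAITQ];
static atomic_t wake_index;

/* Called once at init time to set up the queues. */
static void init_tag_waitqs(void)
{
	int i;

	for (i = 0; i < NR_WAITQ; i++)
		init_waitqueue_head(&tag_waitq[i]);
	atomic_set(&wake_index, 0);
}

/* Waiters hash onto different queues so they don't all pound one lock. */
static wait_queue_head_t *pick_waitq(void)
{
	return &tag_waitq[raw_smp_processor_id() & (NR_WAITQ - 1)];
}

/* Frees wake the queues round-robin so no single queue is starved. */
static void wake_next_waiter(void)
{
	int i = atomic_inc_return(&wake_index) & (NR_WAITQ - 1);

	if (waitqueue_active(&tag_waitq[i]))
		wake_up(&tag_waitq[i]);
}

(The question being weighed above is whether spreading waiters like
this still buys anything now that the allocator has been rewritten, or
whether the extra wakeup machinery should simply be disabled.)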

Not much difference in the CPU usage profiles or baseline
performance. It runs at 3m10s from empty memory, and ~3m45s when
memory starts full of clean pages. System time varies from 10m40s to
12m55s with no real correlation to overall runtime.

From observation of all the performance metrics I graph in real
time, however, the pattern of the peaks and troughs from run to run
and even iteration to iteration is much more regular than with the
previous patches. So from that perspective it is an improvement.
Again, all the variability in the graphs shows up when free memory
runs out...
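
(For background on where the "several thousand worker threads" in the
quoted fs_mark numbers come from: the RFC under test pushes each fsync
off to a workqueue so the aio submitter can return immediately. The
snippet below is only a minimal sketch of that shape, not the actual
XFS patch - example_aio_fsync, fsync_work and example_fsync_wq are
invented names.)

#include <linux/workqueue.h>
#include <linux/slab.h>
#include <linux/fs.h>
#include <linux/aio.h>

/* Assumed to be created elsewhere with alloc_workqueue(); which flags
 * the real patch uses isn't shown here. */
static struct workqueue_struct *example_fsync_wq;

struct fsync_work {
	struct work_struct	work;
	struct kiocb		*iocb;
	int			datasync;
};

static void fsync_work_fn(struct work_struct *work)
{
	struct fsync_work *fw = container_of(work, struct fsync_work, work);
	int error;

	/* Worker context: this is where the flush IO actually gets issued. */
	error = vfs_fsync(fw->iocb->ki_filp, fw->datasync);
	aio_complete(fw->iocb, error, 0);
	kfree(fw);
}

static int example_aio_fsync(struct kiocb *iocb, int datasync)
{
	struct fsync_work *fw;

	fw = kmalloc(sizeof(*fw), GFP_KERNEL);
	if (!fw)
		return -ENOMEM;

	INIT_WORK(&fw->work, fsync_work_fn);
	fw->iocb = iocb;
	fw->datasync = datasync;

	/* Return to the submitter straight away; completion is reported
	 * through the aio ring when the worker finishes. */
	queue_work(example_fsync_wq, &fw->work);
	return -EIOCBQUEUED;
}

(Because each queued flush blocks on IO, the workqueue keeps spawning
fresh workers to keep the CPUs busy, which is where the highly
concurrent IO dispatch described above comes from.)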

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx