Re: Deadlock/high load

On 12/06/14 16:15, Alin Dobre wrote:
> Hi all,
> 
> I have a problem that triggers quite often on our production machines.
> I don't really know what triggers it or how to reproduce it, but the
> machine enters some sort of deadlock state where it consumes all the
> I/O and the load average goes very high within seconds (it even gets to
> over 200); sometimes within about a minute or even less the machine is
> unresponsive and we have to reset it. Occasionally the load just stays
> high (~25) for hours and never comes back down, but that is rare. In
> general, the machine is either already unresponsive or about to become
> unresponsive.
> 
> The last machine that encountered this has 40 cores and the btrfs
> filesystem is running over SSDs. We encountered this on a plain 3.14
> kernel, and also on the latest 3.14.6 kernel plus all the patches whose
> summary is marked "btrfs:" that made it into 3.15, straightforwardly
> backported (cherry-picked) to 3.14.
> 
> There is also no suspicious (malicious) activity from the running
> processes.
> 
> I noticed there was another report on 3.13 which was solved by a 3.15-rc
> patch, but it doesn't seem to be the same thing.
> 
> Since the only chance to obtain something was via a SysRq dump, here's
> what I could get from the last "w" trigger (tasks that are in
> uninterruptible (blocked) state), showing only the tasks related to
> btrfs:

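For reference, the "w" dump is captured roughly like this (just a
sketch; it assumes SysRq is enabled and that the kernel ring buffer is
large enough to hold the whole dump):

  # allow the SysRq task-dump function (1 enables all SysRq functions)
  echo 1 > /proc/sys/kernel/sysrq
  # dump tasks in uninterruptible (blocked) state to the kernel log
  echo w > /proc/sysrq-trigger
  # read the dump back from the kernel ring buffer
  dmesg | less
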
I tried to reproduce this on a slower/older machine with older SSDs and
couldn't get anywhere; that machine held up. However, when I tried one
of our faster/newer machines, also with newer and faster SSDs, I managed
to reproduce it twice.

I should mention that the disks are set up in an MD RAID6, and btrfs
with the single profile for both data and metadata sits on top of that.
I ran bonnie++ to reproduce it (bonnie++ -d /home/bonnie -s 4g -m test
-r 1024 -x 100 -u bonnie) inside a container that was memory-capped to
1GB (hence the -r 1024) with the help of cgroups; a rough sketch of the
setup is below.
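
Stripped down to the essentials, it looks roughly like this (a sketch
using a bare cgroup v1 memory controller; the cgroup name and paths are
made up, and the real runs were inside a full container):

  # create a memory cgroup and cap it at 1GB (name/path are illustrative)
  mkdir /sys/fs/cgroup/memory/bonnie
  echo 1G > /sys/fs/cgroup/memory/bonnie/memory.limit_in_bytes
  # move the current shell into the cgroup so bonnie++ inherits the cap
  echo $$ > /sys/fs/cgroup/memory/bonnie/tasks
  # run the workload; -r 1024 tells bonnie++ to assume ~1GB of RAM
  bonnie++ -d /home/bonnie -s 4g -m test -r 1024 -x 100 -u bonnie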

Just before the machine stopped being fully responsive, I had three
processes consuming 100% CPU: md128_raid6, btrfs-transact and
kworker/u82:6. The overall load was fairly low, but atop stopped working
at a load average of about 5.

I couldn't dump the SysRq blocked processes this time, but the above
three processes also appear in my initial report.

As per Liu Bo's request, here is the output of btrfs filesystem df,
taken at the moment when atop was already unresponsive:
Data, single: total=73.01GiB, used=28.05GiB
System, single: total=4.00MiB, used=16.00KiB
Metadata, single: total=3.01GiB, used=1.04GiB
unknown, single: total=368.00MiB, used=0.00

Another thing to mention is that our production machines also have a
fairly high rate of snapshotting (or, more rarely, plain creation) and
deletion operations on subvolumes that have quotas enabled; the churn
looks roughly like the sketch below.
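
Illustratively, the pattern is something like this (the paths and names
here are made up, not our real layout):

  # quotas (qgroups) are enabled on the filesystem
  btrfs quota enable /home
  # subvolumes are snapshotted frequently...
  btrfs subvolume snapshot /home/containers/c1 /home/snapshots/c1-$(date +%s)
  # ...and old snapshots/subvolumes are deleted just as often
  btrfs subvolume delete /home/snapshots/c1-1402584000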

Cheers,
Alin.