Re: xfstests/224 lockup/slowdown (was: Please hammer my for-linus branch)

Hi,

I'm seeing a machine lockup in xfstests/224; logs are attached. Friday's
xfstests round with 3.5-rc4 was OK, all tests passed.

The 'dd' processes are in D-state with these stack traces:

 5597 pts/0    D+     0:00 dd status=noxfer if=/dev/zero of=/mnt/a2/testfile.8 bs=4k conv=notrunc
[<ffffffffa001bb3e>] reserve_metadata_bytes+0x33e/0x8f0 [btrfs]
[<ffffffffa001cd64>] btrfs_delalloc_reserve_metadata+0x134/0x3b0 [btrfs]
[<ffffffffa001d16b>] btrfs_delalloc_reserve_space+0x3b/0x60 [btrfs]
[<ffffffffa004132b>] __btrfs_buffered_write+0x17b/0x380 [btrfs]
[<ffffffffa0041783>] btrfs_file_aio_write+0x253/0x4e0 [btrfs]
[<ffffffff81144892>] do_sync_write+0xe2/0x120
[<ffffffff8114519e>] vfs_write+0xce/0x190
[<ffffffff811454e4>] sys_write+0x54/0xa0
[<ffffffff818b4fa9>] system_call_fastpath+0x16/0x1b
[<ffffffffffffffff>] 0xffffffffffffffff

and (I'm not sure if there are more):

 5666 pts/0    D+     0:00 dd status=noxfer if=/dev/zero of=/mnt/a2/testfile.6 bs=4k conv=notrunc
[<ffffffffa001bb3e>] reserve_metadata_bytes+0x33e/0x8f0 [btrfs]
[<ffffffffa001c56a>] btrfs_block_rsv_add+0x3a/0x60 [btrfs]
[<ffffffffa003155e>] start_transaction+0x26e/0x330 [btrfs]
[<ffffffffa0031903>] btrfs_start_transaction+0x13/0x20 [btrfs]
[<ffffffffa003cae0>] btrfs_dirty_inode+0xb0/0xe0 [btrfs]
[<ffffffffa003cdad>] btrfs_update_time+0xcd/0x180 [btrfs]
[<ffffffffa00416f8>] btrfs_file_aio_write+0x1c8/0x4e0 [btrfs]
[<ffffffff81144892>] do_sync_write+0xe2/0x120
[<ffffffff8114519e>] vfs_write+0xce/0x190
[<ffffffff811454e4>] sys_write+0x54/0xa0
[<ffffffff818b4fa9>] system_call_fastpath+0x16/0x1b

All btrfs kernel threads are idle.
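
For reference, D-state stacks like the above can be grabbed with
something like this (requires a kernel with CONFIG_STACKTRACE so that
/proc/<pid>/stack exists):

  # dump the kernel stack of every D-state (uninterruptible) process
  for pid in $(ps -eo pid=,stat= | awk '$2 ~ /^D/ {print $1}'); do
      echo "=== pid $pid ==="
      cat /proc/$pid/stack
  done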

Mount options: -o space_cache
Mkfs: fresh, default options

# btrfs fi df /mnt/a2
System: total=4.00MiB, used=4.00KiB
Data+Metadata: total=1020.00MiB, used=987.32MiB
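
For completeness, the setup boils down to something like this (the
device name /dev/sdb is just a placeholder; xfstests supplies the real
one via its SCRATCH_DEV configuration):

  mkfs.btrfs /dev/sdb                      # fresh, default options
  mount -o space_cache /dev/sdb /mnt/a2
  btrfs fi df /mnt/a2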

[meanwhile]

While I was grabbing lockdep stats, the test resumed and eventually finished:

224 236s ...    [14:57:42] [15:46:56] 2954s

but there was no disk activity; I wonder whether touching /proc/lockdep
or /proc/lock_stat affected this.
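
By 'touching' I mean just reading them, e.g.:

  # both files exist only with CONFIG_LOCKDEP / CONFIG_LOCK_STAT enabled
  cat /proc/lockdep   > lockdep.txt
  cat /proc/lock_stat > lock_stat.txt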

I'm finishing this report anyway and will redo the tests.
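
The redo is just the usual single-test run from the xfstests tree:

  cd xfstests
  ./check 224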

Looking at the logs again, the first process snapshot (D-state
processes only) is much longer than the snapshot containing all
processes. Unfortunately I don't have timestamps recorded, but this
suggests the test is making very slow progress, so slow that I
considered it stalled when looking at the I/O graphs.


david

Attachment: for-linus-hung-224-all.txt.gz
Description: Binary data

Attachment: for-linus-hung-224-D.txt.gz
Description: Binary data

