Hello,
syzbot found the following issue on:
HEAD commit: 115472395b0a Linux 5.15.104
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=15d614f5c80000
kernel config:  https://syzkaller.appspot.com/x/.config?x=e597b110d58e7b4
dashboard link: https://syzkaller.appspot.com/bug?extid=df79f3637753ffa39784
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2
userspace arch: arm64
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/76798ca1c9b6/disk-11547239.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/3b608633c8f5/vmlinux-11547239.xz
kernel image: https://storage.googleapis.com/syzbot-assets/8836fafb618b/Image-11547239.gz.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: [email protected]
INFO: task syz-executor.1:12086 blocked for more than 143 seconds.
      Not tainted 5.15.104-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.1 state:D stack: 0 pid:12086 ppid: 4101 flags:0x00000009
Call trace:
__switch_to+0x308/0x5e8 arch/arm64/kernel/process.c:518
context_switch kernel/sched/core.c:5023 [inline]
__schedule+0xf10/0x1e38 kernel/sched/core.c:6369
schedule+0x11c/0x1c8 kernel/sched/core.c:6452
xlog_grant_head_wait+0x390/0xa88 fs/xfs/xfs_log.c:256
xlog_grant_head_check+0x218/0x3d8
xfs_log_reserve+0x384/0xc1c fs/xfs/xfs_log.c:464
xfs_trans_reserve+0x1e8/0x5a4 fs/xfs/xfs_trans.c:197
xfs_trans_alloc+0x4c4/0xaa4 fs/xfs/xfs_trans.c:288
xfs_qm_qino_alloc+0x348/0x864 fs/xfs/xfs_qm.c:779
xfs_qm_init_quotainos+0x500/0x72c fs/xfs/xfs_qm.c:1544
xfs_qm_init_quotainfo+0x9c/0x8bc fs/xfs/xfs_qm.c:644
xfs_qm_mount_quotas+0x90/0x578 fs/xfs/xfs_qm.c:1428
xfs_mountfs+0x11e4/0x1778 fs/xfs/xfs_mount.c:904
xfs_fs_fill_super+0xd64/0xf60 fs/xfs/xfs_super.c:1658
get_tree_bdev+0x360/0x54c fs/super.c:1294
xfs_fs_get_tree+0x28/0x38 fs/xfs/xfs_super.c:1705
vfs_get_tree+0x90/0x274 fs/super.c:1499
do_new_mount+0x25c/0x8c8 fs/namespace.c:2994
path_mount+0x590/0x104c fs/namespace.c:3324
do_mount fs/namespace.c:3337 [inline]
__do_sys_mount fs/namespace.c:3545 [inline]
__se_sys_mount fs/namespace.c:3522 [inline]
__arm64_sys_mount+0x510/0x5e0 fs/namespace.c:3522
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
el0_svc+0x7c/0x1f0 arch/arm64/kernel/entry-common.c:596
el0t_64_sync_handler+0x84/0xe4 arch/arm64/kernel/entry-common.c:614
el0t_64_sync+0x1a0/0x1a4 arch/arm64/kernel/entry.S:584
Showing all locks held in the system:
1 lock held by khungtaskd/27:
#0: ffff800014a91660 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0xc/0x44 include/linux/rcupdate.h:268
2 locks held by getty/3733:
#0: ffff0000d2ac5098 (&tty->ldisc_sem){++++}-{0:0}, at: ldsem_down_read+0x40/0x50 drivers/tty/tty_ldsem.c:340
#1: ffff80001a0ae2e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x414/0x1200 drivers/tty/n_tty.c:2147
1 lock held by syz-executor.4/4100:
2 locks held by kworker/1:18/19952:
#0: ffff0000c0020d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x66c/0x11b8 kernel/workqueue.c:2279
#1: ffff80001c9a7c00 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x6ac/0x11b8 kernel/workqueue.c:2281
2 locks held by kworker/u4:11/21504:
#0: ffff0001b4831d18 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested kernel/sched/core.c:475 [inline]
#0: ffff0001b4831d18 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock kernel/sched/sched.h:1325 [inline]
#0: ffff0001b4831d18 (&rq->__lock){-.-.}-{2:2}, at: rq_lock kernel/sched/sched.h:1620 [inline]
#0: ffff0001b4831d18 (&rq->__lock){-.-.}-{2:2}, at: __schedule+0x328/0x1e38 kernel/sched/core.c:6283
#1: ffff0001b481fc48 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x438/0x66c kernel/sched/psi.c:891
2 locks held by kworker/0:8/30987:
#0: ffff0000c0021d38 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x66c/0x11b8 kernel/workqueue.c:2279
#1: ffff800026397c00 ((work_completion)(&rew.rew_work)){+.+.}-{0:0}, at: process_one_work+0x6ac/0x11b8 kernel/workqueue.c:2281
2 locks held by syz-executor.1/12086:
#0: ffff0000d00740e0 (&type->s_umount_key#76/1){+.+.}-{3:3}, at: alloc_super+0x1b8/0x844 fs/super.c:229
#1: ffff0000d0074650 (sb_internal#3){.+.+}-{0:0}, at: xfs_qm_qino_alloc+0x348/0x864 fs/xfs/xfs_qm.c:779
1 lock held by udevd/12111:
2 locks held by udevd/12120:
#0: ffff0000cba96918 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_put+0xe0/0x6b0 block/bdev.c:912
#1: ffff0000cb9f3468 (&lo->lo_mutex){+.+.}-{3:3}, at: __loop_clr_fd+0xa8/0x9b8 drivers/block/loop.c:1348
2 locks held by udevd/12126:
#0: ffff0000cb9d9118 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_put+0xe0/0x6b0 block/bdev.c:912
#1: ffff0000cb9c2468 (&lo->lo_mutex){+.+.}-{3:3}, at: __loop_clr_fd+0xa8/0x9b8 drivers/block/loop.c:1348
1 lock held by syz-executor.3/15904:
#0: ffff0000cba96918 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x12c/0x89c block/bdev.c:817
2 locks held by syz-executor.5/15913:
#0: ffff00011aaf03f0 (&type->i_mutex_dir_key#3){++++}-{3:3}, at: inode_lock include/linux/fs.h:787 [inline]
#0: ffff00011aaf03f0 (&type->i_mutex_dir_key#3){++++}-{3:3}, at: lock_mount+0x68/0x26c fs/namespace.c:2248
#1: ffff800014a95be8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:290 [inline]
#1: ffff800014a95be8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x320/0x660 kernel/rcu/tree_exp.h:840
1 lock held by syz-executor.0/15915:
#0: ffff0000cb9d9118 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x12c/0x89c block/bdev.c:817
1 lock held by syz-executor.4/15925:
=============================================
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at [email protected].
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.