| In the Linux kernel, the following vulnerability has been resolved:
RDMA/ib_srp: Fix a deadlock
Remove the flush_workqueue(system_long_wq) call since flushing
system_long_wq is deadlock-prone and since that call is redundant with a
preceding cancel_work_sync() |
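A minimal sketch of the pattern behind that change (the work item shown is illustrative, not necessarily the exact one touched upstream): cancel_work_sync() both cancels a pending item and waits for a running instance, so no workqueue-wide flush is needed afterwards.

	/* before: cancel, then flush the whole shared workqueue */
	cancel_work_sync(&target->tl_err_work);
	flush_workqueue(system_long_wq);	/* deadlock-prone and redundant */

	/* after: the cancel alone already waits for a running instance */
	cancel_work_sync(&target->tl_err_work);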
| In the Linux kernel, the following vulnerability has been resolved:
mm: vmscan: remove deadlock due to throttling failing to make progress
A soft lockup bug in kcompactd was reported in a private bugzilla with
the following visible in dmesg;
watchdog: BUG: soft lockup - CPU#33 stuck for 26s! [kcompactd0:479]
watchdog: BUG: soft lockup - CPU#33 stuck for 52s! [kcompactd0:479]
watchdog: BUG: soft lockup - CPU#33 stuck for 78s! [kcompactd0:479]
watchdog: BUG: soft lockup - CPU#33 stuck for 104s! [kcompactd0:479]
The machine had 256G of RAM with no swap and an earlier failed
allocation indicated that node 0 where kcompactd was run was potentially
unreclaimable;
Node 0 active_anon:29355112kB inactive_anon:2913528kB active_file:0kB
inactive_file:0kB unevictable:64kB isolated(anon):0kB isolated(file):0kB
mapped:8kB dirty:0kB writeback:0kB shmem:26780kB shmem_thp:
0kB shmem_pmdmapped: 0kB anon_thp: 23480320kB writeback_tmp:0kB
kernel_stack:2272kB pagetables:24500kB all_unreclaimable? yes
Vlastimil Babka investigated a crash dump and found that a task
migrating pages was trying to drain PCP lists;
PID: 52922 TASK: ffff969f820e5000 CPU: 19 COMMAND: "kworker/u128:3"
Call Trace:
__schedule
schedule
schedule_timeout
wait_for_completion
__flush_work
__drain_all_pages
__alloc_pages_slowpath.constprop.114
__alloc_pages
alloc_migration_target
migrate_pages
migrate_to_node
do_migrate_pages
cpuset_migrate_mm_workfn
process_one_work
worker_thread
kthread
ret_from_fork
This failure is specific to CONFIG_PREEMPT=n builds. The root of the
problem is that kcompactd0 is not rescheduling on a CPU while a task that
has isolated a large number of pages from the LRU is waiting on
kcompactd0 to reschedule so the pages can be released. While
shrink_inactive_list() only loops once around too_many_isolated, reclaim
can continue without rescheduling if sc->skipped_deactivate == 1 which
could happen if there was no file LRU and the inactive anon list was not
low. |
| In the Linux kernel, the following vulnerability has been resolved:
net, neigh: Do not trigger immediate probes on NUD_FAILED from neigh_managed_work
syzkaller was able to trigger a deadlock for NTF_MANAGED entries [0]:
kworker/0:16/14617 is trying to acquire lock:
ffffffff8d4dd370 (&tbl->lock){++-.}-{2:2}, at: ___neigh_create+0x9e1/0x2990 net/core/neighbour.c:652
[...]
but task is already holding lock:
ffffffff8d4dd370 (&tbl->lock){++-.}-{2:2}, at: neigh_managed_work+0x35/0x250 net/core/neighbour.c:1572
The neighbor entry had turned to NUD_FAILED state, where __neigh_event_send()
triggered an immediate probe via neigh_probe(), as per commit cd28ca0a3dd1
("neigh: reduce arp latency"), while the table lock was held.
One option to fix this situation is to defer the neigh_probe() back to
the neigh_timer_handler(), as was done before cd28ca0a3dd1. For the case
of NTF_MANAGED, this deferral is acceptable given this only happens on
actual failure state and regular / expected state is NUD_VALID with the
entry already present.
The fix adds a parameter to __neigh_event_send() in order to communicate
whether immediate probe is allowed or disallowed. Existing call-sites
of neigh_event_send() default as-is to immediate probe. However, the
neigh_managed_work() disables it via use of neigh_event_send_probe().
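A sketch of the call-site split described above; the helper names follow the commit message, but the exact upstream signatures may differ.

static inline int neigh_event_send_probe(struct neighbour *neigh,
					 struct sk_buff *skb,
					 const bool immediate_ok)
{
	return __neigh_event_send(neigh, skb, immediate_ok);
}

static inline int neigh_event_send(struct neighbour *neigh, struct sk_buff *skb)
{
	/* existing call-sites keep the immediate-probe behaviour */
	return neigh_event_send_probe(neigh, skb, true);
}

/* neigh_managed_work() instead calls neigh_event_send_probe(neigh, NULL, false),
 * so a NUD_FAILED entry is probed later from neigh_timer_handler() rather than
 * synchronously under tbl->lock. */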
[0] <TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
print_deadlock_bug kernel/locking/lockdep.c:2956 [inline]
check_deadlock kernel/locking/lockdep.c:2999 [inline]
validate_chain kernel/locking/lockdep.c:3788 [inline]
__lock_acquire.cold+0x149/0x3ab kernel/locking/lockdep.c:5027
lock_acquire kernel/locking/lockdep.c:5639 [inline]
lock_acquire+0x1ab/0x510 kernel/locking/lockdep.c:5604
__raw_write_lock_bh include/linux/rwlock_api_smp.h:202 [inline]
_raw_write_lock_bh+0x2f/0x40 kernel/locking/spinlock.c:334
___neigh_create+0x9e1/0x2990 net/core/neighbour.c:652
ip6_finish_output2+0x1070/0x14f0 net/ipv6/ip6_output.c:123
__ip6_finish_output net/ipv6/ip6_output.c:191 [inline]
__ip6_finish_output+0x61e/0xe90 net/ipv6/ip6_output.c:170
ip6_finish_output+0x32/0x200 net/ipv6/ip6_output.c:201
NF_HOOK_COND include/linux/netfilter.h:296 [inline]
ip6_output+0x1e4/0x530 net/ipv6/ip6_output.c:224
dst_output include/net/dst.h:451 [inline]
NF_HOOK include/linux/netfilter.h:307 [inline]
ndisc_send_skb+0xa99/0x17f0 net/ipv6/ndisc.c:508
ndisc_send_ns+0x3a9/0x840 net/ipv6/ndisc.c:650
ndisc_solicit+0x2cd/0x4f0 net/ipv6/ndisc.c:742
neigh_probe+0xc2/0x110 net/core/neighbour.c:1040
__neigh_event_send+0x37d/0x1570 net/core/neighbour.c:1201
neigh_event_send include/net/neighbour.h:470 [inline]
neigh_managed_work+0x162/0x250 net/core/neighbour.c:1574
process_one_work+0x9ac/0x1650 kernel/workqueue.c:2307
worker_thread+0x657/0x1110 kernel/workqueue.c:2454
kthread+0x2e9/0x3a0 kernel/kthread.c:377
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
</TASK> |
| In the Linux kernel, the following vulnerability has been resolved:
IB/core: Fix a nested dead lock as part of ODP flow
Fix a nested deadlock in the ODP flow by using mmput_async().
From the call trace below [1] we can see that calling mmput() while
umem_odp->umem_mutex is held, as required by
ib_umem_odp_map_dma_and_lock(), might trigger, in the same task, the
exit_mmap()->__mmu_notifier_release()->mlx5_ib_invalidate_range() flow,
which may deadlock when trying to lock the same mutex.
Moving to mmput_async() solves the problem, as the above exit_mmap() flow
will then run in another task and will execute once the lock becomes
available.
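As a hedged sketch of that change (variable names are illustrative and the surrounding mapping code is elided):

	mutex_lock(&umem_odp->umem_mutex);
	/* ... fault in and DMA-map the pages while the mutex is held ... */

	/*
	 * Dropping the last mm reference with plain mmput() here could run
	 * exit_mmap() -> __mmu_notifier_release() -> mlx5_ib_invalidate_range()
	 * in this same task, which would try to take umem_mutex again.
	 * mmput_async() defers that teardown to another context.
	 */
	mmput_async(owning_mm);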
[1]
[64843.077665] task:kworker/u133:2 state:D stack: 0 pid:80906 ppid:
2 flags:0x00004000
[64843.077672] Workqueue: mlx5_ib_page_fault mlx5_ib_eqe_pf_action [mlx5_ib]
[64843.077719] Call Trace:
[64843.077722] <TASK>
[64843.077724] __schedule+0x23d/0x590
[64843.077729] schedule+0x4e/0xb0
[64843.077735] schedule_preempt_disabled+0xe/0x10
[64843.077740] __mutex_lock.constprop.0+0x263/0x490
[64843.077747] __mutex_lock_slowpath+0x13/0x20
[64843.077752] mutex_lock+0x34/0x40
[64843.077758] mlx5_ib_invalidate_range+0x48/0x270 [mlx5_ib]
[64843.077808] __mmu_notifier_release+0x1a4/0x200
[64843.077816] exit_mmap+0x1bc/0x200
[64843.077822] ? walk_page_range+0x9c/0x120
[64843.077828] ? __cond_resched+0x1a/0x50
[64843.077833] ? mutex_lock+0x13/0x40
[64843.077839] ? uprobe_clear_state+0xac/0x120
[64843.077860] mmput+0x5f/0x140
[64843.077867] ib_umem_odp_map_dma_and_lock+0x21b/0x580 [ib_core]
[64843.077931] pagefault_real_mr+0x9a/0x140 [mlx5_ib]
[64843.077962] pagefault_mr+0xb4/0x550 [mlx5_ib]
[64843.077992] pagefault_single_data_segment.constprop.0+0x2ac/0x560
[mlx5_ib]
[64843.078022] mlx5_ib_eqe_pf_action+0x528/0x780 [mlx5_ib]
[64843.078051] process_one_work+0x22b/0x3d0
[64843.078059] worker_thread+0x53/0x410
[64843.078065] ? process_one_work+0x3d0/0x3d0
[64843.078073] kthread+0x12a/0x150
[64843.078079] ? set_kthread_struct+0x50/0x50
[64843.078085] ret_from_fork+0x22/0x30
[64843.078093] </TASK> |
| In the Linux kernel, the following vulnerability has been resolved:
cgroup: Add missing cpus_read_lock() to cgroup_attach_task_all()
syzbot is hitting a percpu_rwsem_assert_held(&cpu_hotplug_lock) warning at
cpuset_attach() [1], because commit 4f7e7236435ca0ab ("cgroup: Fix
threadgroup_rwsem <-> cpus_read_lock() deadlock") missed that
cpuset_attach() is also called from cgroup_attach_task_all().
Add cpus_read_lock() as cgroup_procs_write_start() does. |
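A hedged sketch of the fix, mirroring the lock order used by cgroup_procs_write_start() (the attach loop itself is elided):

int cgroup_attach_task_all(struct task_struct *from, struct task_struct *tsk)
{
	int ret = 0;

	mutex_lock(&cgroup_mutex);
	cpus_read_lock();	/* added: cpuset_attach() asserts cpu_hotplug_lock */
	percpu_down_write(&cgroup_threadgroup_rwsem);
	/* ... attach tsk to from's cgroup in every hierarchy, as before ... */
	percpu_up_write(&cgroup_threadgroup_rwsem);
	cpus_read_unlock();
	mutex_unlock(&cgroup_mutex);
	return ret;
}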
| In the Linux kernel, the following vulnerability has been resolved:
io_uring: lock overflowing for IOPOLL
syzbot reports an issue with overflow filling for IOPOLL:
WARNING: CPU: 0 PID: 28 at io_uring/io_uring.c:734 io_cqring_event_overflow+0x1c0/0x230 io_uring/io_uring.c:734
CPU: 0 PID: 28 Comm: kworker/u4:1 Not tainted 6.2.0-rc3-syzkaller-16369-g358a161a6a9e #0
Workqueue: events_unbound io_ring_exit_work
Call trace:
io_cqring_event_overflow+0x1c0/0x230 io_uring/io_uring.c:734
io_req_cqe_overflow+0x5c/0x70 io_uring/io_uring.c:773
io_fill_cqe_req io_uring/io_uring.h:168 [inline]
io_do_iopoll+0x474/0x62c io_uring/rw.c:1065
io_iopoll_try_reap_events+0x6c/0x108 io_uring/io_uring.c:1513
io_uring_try_cancel_requests+0x13c/0x258 io_uring/io_uring.c:3056
io_ring_exit_work+0xec/0x390 io_uring/io_uring.c:2869
process_one_work+0x2d8/0x504 kernel/workqueue.c:2289
worker_thread+0x340/0x610 kernel/workqueue.c:2436
kthread+0x12c/0x158 kernel/kthread.c:376
ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:863
There is no real problem for normal IOPOLL as flush is also called with
uring_lock taken, but it's getting more complicated for IOPOLL|SQPOLL,
for which __io_cqring_overflow_flush() happens from the CQ waiting path. |
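A hedged sketch of how the IOPOLL reaping loop can serialize the overflow posting (names follow the 6.2-era code referenced in the trace; this is an illustration, not the verbatim upstream diff):

	/* in io_do_iopoll(), for each completed request */
	req->cqe.flags = io_put_kbuf(req, 0);
	if (unlikely(!__io_fill_cqe_req(ctx, req))) {
		/* the overflow list is normally protected by ->completion_lock,
		 * which the iopoll path does not otherwise hold */
		spin_lock(&ctx->completion_lock);
		io_req_cqe_overflow(req);
		spin_unlock(&ctx->completion_lock);
	}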
| In the Linux kernel, the following vulnerability has been resolved:
Input: cyapa - add missing input core locking to suspend/resume functions
Grab input->mutex during the suspend/resume functions, as is done in
other input drivers. This fixes the following warning during a system
suspend/resume cycle on the Samsung Exynos5250-based Snow Chromebook:
------------[ cut here ]------------
WARNING: CPU: 1 PID: 1680 at drivers/input/input.c:2291 input_device_enabled+0x68/0x6c
Modules linked in: ...
CPU: 1 PID: 1680 Comm: kworker/u4:12 Tainted: G W 6.6.0-rc5-next-20231009 #14109
Hardware name: Samsung Exynos (Flattened Device Tree)
Workqueue: events_unbound async_run_entry_fn
unwind_backtrace from show_stack+0x10/0x14
show_stack from dump_stack_lvl+0x58/0x70
dump_stack_lvl from __warn+0x1a8/0x1cc
__warn from warn_slowpath_fmt+0x18c/0x1b4
warn_slowpath_fmt from input_device_enabled+0x68/0x6c
input_device_enabled from cyapa_gen3_set_power_mode+0x13c/0x1dc
cyapa_gen3_set_power_mode from cyapa_reinitialize+0x10c/0x15c
cyapa_reinitialize from cyapa_resume+0x48/0x98
cyapa_resume from dpm_run_callback+0x90/0x298
dpm_run_callback from device_resume+0xb4/0x258
device_resume from async_resume+0x20/0x64
async_resume from async_run_entry_fn+0x40/0x15c
async_run_entry_fn from process_scheduled_works+0xbc/0x6a8
process_scheduled_works from worker_thread+0x188/0x454
worker_thread from kthread+0x108/0x140
kthread from ret_from_fork+0x14/0x28
Exception stack(0xf1625fb0 to 0xf1625ff8)
...
---[ end trace 0000000000000000 ]---
...
------------[ cut here ]------------
WARNING: CPU: 1 PID: 1680 at drivers/input/input.c:2291 input_device_enabled+0x68/0x6c
Modules linked in: ...
CPU: 1 PID: 1680 Comm: kworker/u4:12 Tainted: G W 6.6.0-rc5-next-20231009 #14109
Hardware name: Samsung Exynos (Flattened Device Tree)
Workqueue: events_unbound async_run_entry_fn
unwind_backtrace from show_stack+0x10/0x14
show_stack from dump_stack_lvl+0x58/0x70
dump_stack_lvl from __warn+0x1a8/0x1cc
__warn from warn_slowpath_fmt+0x18c/0x1b4
warn_slowpath_fmt from input_device_enabled+0x68/0x6c
input_device_enabled from cyapa_gen3_set_power_mode+0x13c/0x1dc
cyapa_gen3_set_power_mode from cyapa_reinitialize+0x10c/0x15c
cyapa_reinitialize from cyapa_resume+0x48/0x98
cyapa_resume from dpm_run_callback+0x90/0x298
dpm_run_callback from device_resume+0xb4/0x258
device_resume from async_resume+0x20/0x64
async_resume from async_run_entry_fn+0x40/0x15c
async_run_entry_fn from process_scheduled_works+0xbc/0x6a8
process_scheduled_works from worker_thread+0x188/0x454
worker_thread from kthread+0x108/0x140
kthread from ret_from_fork+0x14/0x28
Exception stack(0xf1625fb0 to 0xf1625ff8)
...
---[ end trace 0000000000000000 ]--- |
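A hedged sketch of the locking added around the resume path (driver-data retrieval and error handling are simplified; the suspend side is symmetric):

static int cyapa_resume(struct device *dev)
{
	struct cyapa *cyapa = dev_get_drvdata(dev);

	/* Hold the input core mutex while changing power modes, as other
	 * input drivers do, so input_device_enabled() sees the device
	 * properly locked. */
	mutex_lock(&cyapa->input->mutex);
	cyapa_reinitialize(cyapa);
	mutex_unlock(&cyapa->input->mutex);

	return 0;
}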
| In the Linux kernel, the following vulnerability has been resolved:
virtio/vsock: Fix uninit-value in virtio_transport_recv_pkt()
KMSAN reported the following uninit-value access issue:
=====================================================
BUG: KMSAN: uninit-value in virtio_transport_recv_pkt+0x1dfb/0x26a0 net/vmw_vsock/virtio_transport_common.c:1421
virtio_transport_recv_pkt+0x1dfb/0x26a0 net/vmw_vsock/virtio_transport_common.c:1421
vsock_loopback_work+0x3bb/0x5a0 net/vmw_vsock/vsock_loopback.c:120
process_one_work kernel/workqueue.c:2630 [inline]
process_scheduled_works+0xff6/0x1e60 kernel/workqueue.c:2703
worker_thread+0xeca/0x14d0 kernel/workqueue.c:2784
kthread+0x3cc/0x520 kernel/kthread.c:388
ret_from_fork+0x66/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:304
Uninit was stored to memory at:
virtio_transport_space_update net/vmw_vsock/virtio_transport_common.c:1274 [inline]
virtio_transport_recv_pkt+0x1ee8/0x26a0 net/vmw_vsock/virtio_transport_common.c:1415
vsock_loopback_work+0x3bb/0x5a0 net/vmw_vsock/vsock_loopback.c:120
process_one_work kernel/workqueue.c:2630 [inline]
process_scheduled_works+0xff6/0x1e60 kernel/workqueue.c:2703
worker_thread+0xeca/0x14d0 kernel/workqueue.c:2784
kthread+0x3cc/0x520 kernel/kthread.c:388
ret_from_fork+0x66/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:304
Uninit was created at:
slab_post_alloc_hook+0x105/0xad0 mm/slab.h:767
slab_alloc_node mm/slub.c:3478 [inline]
kmem_cache_alloc_node+0x5a2/0xaf0 mm/slub.c:3523
kmalloc_reserve+0x13c/0x4a0 net/core/skbuff.c:559
__alloc_skb+0x2fd/0x770 net/core/skbuff.c:650
alloc_skb include/linux/skbuff.h:1286 [inline]
virtio_vsock_alloc_skb include/linux/virtio_vsock.h:66 [inline]
virtio_transport_alloc_skb+0x90/0x11e0 net/vmw_vsock/virtio_transport_common.c:58
virtio_transport_reset_no_sock net/vmw_vsock/virtio_transport_common.c:957 [inline]
virtio_transport_recv_pkt+0x1279/0x26a0 net/vmw_vsock/virtio_transport_common.c:1387
vsock_loopback_work+0x3bb/0x5a0 net/vmw_vsock/vsock_loopback.c:120
process_one_work kernel/workqueue.c:2630 [inline]
process_scheduled_works+0xff6/0x1e60 kernel/workqueue.c:2703
worker_thread+0xeca/0x14d0 kernel/workqueue.c:2784
kthread+0x3cc/0x520 kernel/kthread.c:388
ret_from_fork+0x66/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:304
CPU: 1 PID: 10664 Comm: kworker/1:5 Not tainted 6.6.0-rc3-00146-g9f3ebbef746f #3
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-1.fc38 04/01/2014
Workqueue: vsock-loopback vsock_loopback_work
=====================================================
The following simple reproducer can cause the issue described above:
#include <sys/socket.h>
#include <linux/vm_sockets.h>

int main(void)
{
	int sock;
	struct sockaddr_vm addr = {
		.svm_family = AF_VSOCK,
		.svm_cid = VMADDR_CID_ANY,
		.svm_port = 1234,
	};

	/* Connecting to a port with no listener makes the transport answer
	 * with a reset packet via virtio_transport_reset_no_sock(). */
	sock = socket(AF_VSOCK, SOCK_STREAM, 0);
	connect(sock, (struct sockaddr *)&addr, sizeof(addr));
	return 0;
}
This issue occurs because the `buf_alloc` and `fwd_cnt` fields of the
`struct virtio_vsock_hdr` are not initialized when a new skb is allocated
in `virtio_transport_init_hdr()`. This patch resolves the issue by
initializing these fields during allocation. |
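A minimal sketch of the fix described above (only the newly initialized fields are shown):

	/* in virtio_transport_init_hdr(), after the existing field setup */
	hdr->buf_alloc	= cpu_to_le32(0);
	hdr->fwd_cnt	= cpu_to_le32(0);
	/* For connected sockets these get real credit values later; for
	 * packets built via virtio_transport_reset_no_sock() they were
	 * previously sent out uninitialized. */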
| In the Linux kernel, the following vulnerability has been resolved:
drm/amdkfd: Fix lock dependency warning with srcu
======================================================
WARNING: possible circular locking dependency detected
6.5.0-kfd-yangp #2289 Not tainted
------------------------------------------------------
kworker/0:2/996 is trying to acquire lock:
(srcu){.+.+}-{0:0}, at: __synchronize_srcu+0x5/0x1a0
but task is already holding lock:
((work_completion)(&svms->deferred_list_work)){+.+.}-{0:0}, at:
process_one_work+0x211/0x560
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #3 ((work_completion)(&svms->deferred_list_work)){+.+.}-{0:0}:
__flush_work+0x88/0x4f0
svm_range_list_lock_and_flush_work+0x3d/0x110 [amdgpu]
svm_range_set_attr+0xd6/0x14c0 [amdgpu]
kfd_ioctl+0x1d1/0x630 [amdgpu]
__x64_sys_ioctl+0x88/0xc0
-> #2 (&info->lock#2){+.+.}-{3:3}:
__mutex_lock+0x99/0xc70
amdgpu_amdkfd_gpuvm_restore_process_bos+0x54/0x740 [amdgpu]
restore_process_helper+0x22/0x80 [amdgpu]
restore_process_worker+0x2d/0xa0 [amdgpu]
process_one_work+0x29b/0x560
worker_thread+0x3d/0x3d0
-> #1 ((work_completion)(&(&process->restore_work)->work)){+.+.}-{0:0}:
__flush_work+0x88/0x4f0
__cancel_work_timer+0x12c/0x1c0
kfd_process_notifier_release_internal+0x37/0x1f0 [amdgpu]
__mmu_notifier_release+0xad/0x240
exit_mmap+0x6a/0x3a0
mmput+0x6a/0x120
do_exit+0x322/0xb90
do_group_exit+0x37/0xa0
__x64_sys_exit_group+0x18/0x20
do_syscall_64+0x38/0x80
-> #0 (srcu){.+.+}-{0:0}:
__lock_acquire+0x1521/0x2510
lock_sync+0x5f/0x90
__synchronize_srcu+0x4f/0x1a0
__mmu_notifier_release+0x128/0x240
exit_mmap+0x6a/0x3a0
mmput+0x6a/0x120
svm_range_deferred_list_work+0x19f/0x350 [amdgpu]
process_one_work+0x29b/0x560
worker_thread+0x3d/0x3d0
other info that might help us debug this:
Chain exists of:
srcu --> &info->lock#2 --> (work_completion)(&svms->deferred_list_work)
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock((work_completion)(&svms->deferred_list_work));
                               lock(&info->lock#2);
                               lock((work_completion)(&svms->deferred_list_work));
  sync(srcu); |
| In the Linux kernel, the following vulnerability has been resolved:
hwrng: core - Fix page fault dead lock on mmap-ed hwrng
There is a deadlock in the hwrng device read path. It triggers when
the user reads from /dev/hwrng into memory that is also mmap-ed from
/dev/hwrng. The resulting page fault triggers a recursive read, which
then deadlocks.
Fix this by using a stack buffer when calling copy_to_user(). |
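One plausible shape of that fix, as a hedged sketch (the real rng_dev_read() loop, its partial-read handling and the buffer size differ):

	u8 stack_buf[64];
	ssize_t n;

	mutex_lock(&reading_mutex);
	n = rng_get_data(rng, stack_buf, min(size, sizeof(stack_buf)),
			 !(filp->f_flags & O_NONBLOCK));
	mutex_unlock(&reading_mutex);

	/* A page fault here may recurse into the hwrng read path, but the
	 * lock and the shared rng_buffer are no longer in use. */
	if (n > 0 && copy_to_user(ubuf, stack_buf, n))
		return -EFAULT;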
| In the Linux kernel, the following vulnerability has been resolved:
ceph: fix deadlock or deadcode of misusing dget()
The lock order is incorrect between a dentry and its parent: we should
always make sure that the parent gets the lock first.
But since this dead code is never used and the parent dir will always
be set by the callers, let's just remove it. |
| In the Linux kernel, the following vulnerability has been resolved:
phy: lynx-28g: serialize concurrent phy_set_mode_ext() calls to shared registers
The protocol converter configuration registers PCC8, PCCC, PCCD
(implemented by the driver), as well as others, control protocol
converters from multiple lanes (each represented as a different
struct phy). So, if there are simultaneous calls to phy_set_mode_ext()
to lanes sharing the same PCC register (either for the "old" or for the
"new" protocol), corruption of the values programmed to hardware is
possible, because lynx_28g_rmw() has no locking.
Add a spinlock in the struct lynx_28g_priv shared by all lanes, and take
the global spinlock from the phy_ops :: set_mode() implementation. There
are no other callers which modify PCC registers. |
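A sketch of the serialization described above (abbreviated; only the lock placement is shown):

struct lynx_28g_priv {
	/* ... existing members ... */
	spinlock_t pcc_lock;	/* protects RMW on the shared PCC registers */
};

static int lynx_28g_set_mode(struct phy *phy, enum phy_mode mode, int submode)
{
	struct lynx_28g_lane *lane = phy_get_drvdata(phy);
	struct lynx_28g_priv *priv = lane->priv;

	spin_lock(&priv->pcc_lock);
	/* ... lynx_28g_rmw() updates of PCC8/PCCC/PCCD for the old and new
	 * protocol, now serialized across all lanes ... */
	spin_unlock(&priv->pcc_lock);

	return 0;
}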
| In the Linux kernel, the following vulnerability has been resolved:
bus: mhi: host: Drop chan lock before queuing buffers
Ensure the read and write locks for the channel are not taken in
succession by dropping the read lock from parse_xfer_event() so that a
callback given to the client can potentially queue buffers and acquire
the write lock in that process. Any queueing of buffers should be done
without the channel read lock held, as it can result in multiple locks
and a soft lockup.
[mani: added fixes tag and cc'ed stable] |
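A hedged sketch of the change in parse_xfer_event() (simplified):

	read_lock_bh(&mhi_chan->lock);
	/* ... process the transfer completion, build 'result' ... */

	/* Drop the channel read lock before calling into the client: the
	 * xfer callback may queue new buffers, which takes the write lock. */
	read_unlock_bh(&mhi_chan->lock);
	mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
	read_lock_bh(&mhi_chan->lock);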
| In the Linux kernel, the following vulnerability has been resolved:
serial: imx: fix tx statemachine deadlock
When using the serial port as an RS485 port, the tx state machine is used
to control the RTS pin to drive the RS485 transceiver TX_EN pin. When the
TTY port is closed in the middle of a transmission (for instance during a
userland application crash), imx_uart_shutdown disables the interface
and disables the Transmission Complete interrupt. After that,
imx_uart_stop_tx bails on an incomplete transmission, expecting to be
retriggered by the TC interrupt. But this interrupt is now disabled, so the
tx state machine never transitions out of SEND. The state machine is
deadlocked and TX_EN remains low, making the interface useless.
imx_uart_stop_tx now checks for incomplete transmission AND whether TC
interrupts are enabled before bailing to be retriggered. This makes sure
the state machine handling is reached, and is properly set to
WAIT_AFTER_SEND. |
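A hedged sketch of the added condition in imx_uart_stop_tx() (register names follow the i.MX UART driver; the rest of the function is elided):

	u32 ucr4 = imx_uart_readl(sport, UCR4);
	u32 usr2 = imx_uart_readl(sport, USR2);

	/* Only bail out and wait for the TC interrupt to retrigger us if that
	 * interrupt is still enabled; after imx_uart_shutdown() it is not, so
	 * continue and let the state machine reach WAIT_AFTER_SEND. */
	if (!(usr2 & USR2_TXDC) && (ucr4 & UCR4_TCEN))
		return;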
| In the Linux kernel, the following vulnerability has been resolved:
mptcp: fix deadlock in __mptcp_push_pending()
__mptcp_push_pending() may call mptcp_flush_join_list() with subflow
socket lock held. If such call hits mptcp_sockopt_sync_all() then
subsequently __mptcp_sockopt_sync() could try to lock the subflow
socket for itself, causing a deadlock.
sysrq: Show Blocked State
task:ss-server state:D stack: 0 pid: 938 ppid: 1 flags:0x00000000
Call Trace:
<TASK>
__schedule+0x2d6/0x10c0
? __mod_memcg_state+0x4d/0x70
? csum_partial+0xd/0x20
? _raw_spin_lock_irqsave+0x26/0x50
schedule+0x4e/0xc0
__lock_sock+0x69/0x90
? do_wait_intr_irq+0xa0/0xa0
__lock_sock_fast+0x35/0x50
mptcp_sockopt_sync_all+0x38/0xc0
__mptcp_push_pending+0x105/0x200
mptcp_sendmsg+0x466/0x490
sock_sendmsg+0x57/0x60
__sys_sendto+0xf0/0x160
? do_wait_intr_irq+0xa0/0xa0
? fpregs_restore_userregs+0x12/0xd0
__x64_sys_sendto+0x20/0x30
do_syscall_64+0x38/0x90
entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x7f9ba546c2d0
RSP: 002b:00007ffdc3b762d8 EFLAGS: 00000246 ORIG_RAX: 000000000000002c
RAX: ffffffffffffffda RBX: 00007f9ba56c8060 RCX: 00007f9ba546c2d0
RDX: 000000000000077a RSI: 0000000000e5e180 RDI: 0000000000000234
RBP: 0000000000cc57f0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00007f9ba56c8060
R13: 0000000000b6ba60 R14: 0000000000cc7840 R15: 41d8685b1d7901b8
</TASK>
Fix the issue by using __mptcp_flush_join_list() instead of plain
mptcp_flush_join_list() inside __mptcp_push_pending(), as suggested by
Florian. The sockopt sync will be deferred to the workqueue. |
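A minimal sketch of the substitution described above:

	/* in __mptcp_push_pending(): the caller may already hold a subflow
	 * socket lock, so use the variant that does not sync sockopts inline
	 * and instead lets the worker do it later. */
	__mptcp_flush_join_list(msk);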
| In the Linux kernel, the following vulnerability has been resolved:
net: systemport: Add global locking for descriptor lifecycle
The descriptor list is a shared resource across all of the transmit queues,
and the locking mechanism used today only protects concurrency within a given
transmit queue between transmission and reclaiming. This creates an
opportunity for the SYSTEMPORT hardware to work on corrupted descriptors when
there are multiple producers at once, which is the case when using multiple
transmit queues.
This was particularly noticeable when using multiple flows/transmit queues and
it showed up in interesting ways in that UDP packets would get a correct UDP
header checksum being calculated over an incorrect packet length. Similarly TCP
packets would get an equally correct checksum computed by the hardware over an
incorrect packet length.
The SYSTEMPORT hardware maintains an internal descriptor list that it
re-arranges whenever the driver produces a new descriptor by writing to the
WRITE_PORT_{HI,LO} registers. There is, however, some delay in the hardware
before it re-organizes its descriptors, and it is possible that concurrent TX
queues eventually break this internal allocation scheme to the point where
the length/status part of the descriptor gets used for an incorrect data
buffer.
The fix is to impose a global serialization for all TX queues in the short
section where we are writing to the WRITE_PORT_{HI,LO} registers which solves
the corruption even with multiple concurrent TX queues being used. |
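A hedged sketch of that serialization in the transmit path (the lock name is illustrative):

	unsigned long desc_flags;

	/* One lock shared by all TX queues: only the short window in which
	 * the descriptor is handed to the WRITE_PORT_{HI,LO} registers needs
	 * global serialization. */
	spin_lock_irqsave(&priv->desc_lock, desc_flags);
	tdma_port_write_desc_addr(priv, dma_desc, ring->index);
	spin_unlock_irqrestore(&priv->desc_lock, desc_flags);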
| In the Linux kernel, the following vulnerability has been resolved:
mm/mempolicy: do not allow illegal MPOL_F_NUMA_BALANCING | MPOL_LOCAL in mbind()
syzbot reported access to uninitialized memory in mbind() [1].
The issue came with commit bda420b98505 ("numa balancing: migrate on fault
among multiple bound nodes").
That commit added a new bit in MPOL_MODE_FLAGS, but only checked the valid
combination (MPOL_F_NUMA_BALANCING can only be used with MPOL_BIND) in
do_set_mempolicy().
This patch moves the check into sanitize_mpol_flags() so that it is also
applied by mbind().
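A sketch of the relocated check, close to (but not necessarily identical with) the resulting helper:

static inline int sanitize_mpol_flags(int *mode, unsigned short *flags)
{
	*flags = *mode & MPOL_MODE_FLAGS;
	*mode &= ~MPOL_MODE_FLAGS;

	if ((unsigned int)(*mode) >= MPOL_MAX)
		return -EINVAL;
	if ((*flags & MPOL_F_STATIC_NODES) && (*flags & MPOL_F_RELATIVE_NODES))
		return -EINVAL;
	/* Moved here so that both set_mempolicy() and mbind() reject the
	 * illegal MPOL_F_NUMA_BALANCING combinations. */
	if (*flags & MPOL_F_NUMA_BALANCING) {
		if (*mode != MPOL_BIND)
			return -EINVAL;
		*flags |= (MPOL_F_MOF | MPOL_F_MORON);
	}
	return 0;
}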
[1]
BUG: KMSAN: uninit-value in __mpol_equal+0x567/0x590 mm/mempolicy.c:2260
__mpol_equal+0x567/0x590 mm/mempolicy.c:2260
mpol_equal include/linux/mempolicy.h:105 [inline]
vma_merge+0x4a1/0x1e60 mm/mmap.c:1190
mbind_range+0xcc8/0x1e80 mm/mempolicy.c:811
do_mbind+0xf42/0x15f0 mm/mempolicy.c:1333
kernel_mbind mm/mempolicy.c:1483 [inline]
__do_sys_mbind mm/mempolicy.c:1490 [inline]
__se_sys_mbind+0x437/0xb80 mm/mempolicy.c:1486
__x64_sys_mbind+0x19d/0x200 mm/mempolicy.c:1486
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x54/0xd0 arch/x86/entry/common.c:82
entry_SYSCALL_64_after_hwframe+0x44/0xae
Uninit was created at:
slab_alloc_node mm/slub.c:3221 [inline]
slab_alloc mm/slub.c:3230 [inline]
kmem_cache_alloc+0x751/0xff0 mm/slub.c:3235
mpol_new mm/mempolicy.c:293 [inline]
do_mbind+0x912/0x15f0 mm/mempolicy.c:1289
kernel_mbind mm/mempolicy.c:1483 [inline]
__do_sys_mbind mm/mempolicy.c:1490 [inline]
__se_sys_mbind+0x437/0xb80 mm/mempolicy.c:1486
__x64_sys_mbind+0x19d/0x200 mm/mempolicy.c:1486
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x54/0xd0 arch/x86/entry/common.c:82
entry_SYSCALL_64_after_hwframe+0x44/0xae
=====================================================
Kernel panic - not syncing: panic_on_kmsan set ...
CPU: 0 PID: 15049 Comm: syz-executor.0 Tainted: G B 5.15.0-rc2-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1ff/0x28e lib/dump_stack.c:106
dump_stack+0x25/0x28 lib/dump_stack.c:113
panic+0x44f/0xdeb kernel/panic.c:232
kmsan_report+0x2ee/0x300 mm/kmsan/report.c:186
__msan_warning+0xd7/0x150 mm/kmsan/instrumentation.c:208
__mpol_equal+0x567/0x590 mm/mempolicy.c:2260
mpol_equal include/linux/mempolicy.h:105 [inline]
vma_merge+0x4a1/0x1e60 mm/mmap.c:1190
mbind_range+0xcc8/0x1e80 mm/mempolicy.c:811
do_mbind+0xf42/0x15f0 mm/mempolicy.c:1333
kernel_mbind mm/mempolicy.c:1483 [inline]
__do_sys_mbind mm/mempolicy.c:1490 [inline]
__se_sys_mbind+0x437/0xb80 mm/mempolicy.c:1486
__x64_sys_mbind+0x19d/0x200 mm/mempolicy.c:1486
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x54/0xd0 arch/x86/entry/common.c:82
entry_SYSCALL_64_after_hwframe+0x44/0xae |
| In the Linux kernel, the following vulnerability has been resolved:
ice: fix locking for Tx timestamp tracking flush
Commit 4dd0d5c33c3e ("ice: add lock around Tx timestamp tracker flush")
added a lock around the Tx timestamp tracker flow which is used to
cleanup any left over SKBs and prepare for device removal.
This lock is problematic because it is being held around a call to
ice_clear_phy_tstamp. The clear function takes a mutex to send a PHY
write command to firmware. This could lead to a deadlock if the mutex
actually sleeps, and causes the following warning on a kernel with
preemption debugging enabled:
[ 715.419426] BUG: sleeping function called from invalid context at kernel/locking/mutex.c:573
[ 715.427900] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 3100, name: rmmod
[ 715.435652] INFO: lockdep is turned off.
[ 715.439591] Preemption disabled at:
[ 715.439594] [<0000000000000000>] 0x0
[ 715.446678] CPU: 52 PID: 3100 Comm: rmmod Tainted: G W OE 5.15.0-rc4+ #42 bdd7ec3018e725f159ca0d372ce8c2c0e784891c
[ 715.458058] Hardware name: Intel Corporation S2600STQ/S2600STQ, BIOS SE5C620.86B.02.01.0010.010620200716 01/06/2020
[ 715.468483] Call Trace:
[ 715.470940] dump_stack_lvl+0x6a/0x9a
[ 715.474613] ___might_sleep.cold+0x224/0x26a
[ 715.478895] __mutex_lock+0xb3/0x1440
[ 715.482569] ? stack_depot_save+0x378/0x500
[ 715.486763] ? ice_sq_send_cmd+0x78/0x14c0 [ice 9a7e1ec00971c89ecd3fe0d4dc7da2b3786a421d]
[ 715.494979] ? kfree+0xc1/0x520
[ 715.498128] ? mutex_lock_io_nested+0x12a0/0x12a0
[ 715.502837] ? kasan_set_free_info+0x20/0x30
[ 715.507110] ? __kasan_slab_free+0x10b/0x140
[ 715.511385] ? slab_free_freelist_hook+0xc7/0x220
[ 715.516092] ? kfree+0xc1/0x520
[ 715.519235] ? ice_deinit_lag+0x16c/0x220 [ice 9a7e1ec00971c89ecd3fe0d4dc7da2b3786a421d]
[ 715.527359] ? ice_remove+0x1cf/0x6a0 [ice 9a7e1ec00971c89ecd3fe0d4dc7da2b3786a421d]
[ 715.535133] ? pci_device_remove+0xab/0x1d0
[ 715.539318] ? __device_release_driver+0x35b/0x690
[ 715.544110] ? driver_detach+0x214/0x2f0
[ 715.548035] ? bus_remove_driver+0x11d/0x2f0
[ 715.552309] ? pci_unregister_driver+0x26/0x250
[ 715.556840] ? ice_module_exit+0xc/0x2f [ice 9a7e1ec00971c89ecd3fe0d4dc7da2b3786a421d]
[ 715.564799] ? __do_sys_delete_module.constprop.0+0x2d8/0x4e0
[ 715.570554] ? do_syscall_64+0x3b/0x90
[ 715.574303] ? entry_SYSCALL_64_after_hwframe+0x44/0xae
[ 715.579529] ? start_flush_work+0x542/0x8f0
[ 715.583719] ? ice_sq_send_cmd+0x78/0x14c0 [ice 9a7e1ec00971c89ecd3fe0d4dc7da2b3786a421d]
[ 715.591923] ice_sq_send_cmd+0x78/0x14c0 [ice 9a7e1ec00971c89ecd3fe0d4dc7da2b3786a421d]
[ 715.599960] ? wait_for_completion_io+0x250/0x250
[ 715.604662] ? lock_acquire+0x196/0x200
[ 715.608504] ? do_raw_spin_trylock+0xa5/0x160
[ 715.612864] ice_sbq_rw_reg+0x1e6/0x2f0 [ice 9a7e1ec00971c89ecd3fe0d4dc7da2b3786a421d]
[ 715.620813] ? ice_reset+0x130/0x130 [ice 9a7e1ec00971c89ecd3fe0d4dc7da2b3786a421d]
[ 715.628497] ? __debug_check_no_obj_freed+0x1e8/0x3c0
[ 715.633550] ? trace_hardirqs_on+0x1c/0x130
[ 715.637748] ice_write_phy_reg_e810+0x70/0xf0 [ice 9a7e1ec00971c89ecd3fe0d4dc7da2b3786a421d]
[ 715.646220] ? do_raw_spin_trylock+0xa5/0x160
[ 715.650581] ? ice_ptp_release+0x910/0x910 [ice 9a7e1ec00971c89ecd3fe0d4dc7da2b3786a421d]
[ 715.658797] ? ice_ptp_release+0x255/0x910 [ice 9a7e1ec00971c89ecd3fe0d4dc7da2b3786a421d]
[ 715.667013] ice_clear_phy_tstamp+0x2c/0x110 [ice 9a7e1ec00971c89ecd3fe0d4dc7da2b3786a421d]
[ 715.675403] ice_ptp_release+0x408/0x910 [ice 9a7e1ec00971c89ecd3fe0d4dc7da2b3786a421d]
[ 715.683440] ice_remove+0x560/0x6a0 [ice 9a7e1ec00971c89ecd3fe0d4dc7da2b3786a421d]
[ 715.691037] ? _raw_spin_unlock_irqrestore+0x46/0x73
[ 715.696005] pci_device_remove+0xab/0x1d0
[ 715.700018] __device_release_driver+0x35b/0x690
[ 715.704637] driver_detach+0x214/0x2f0
[ 715.708389] bus_remove_driver+0x11d/0x2f0
[ 715.712489] pci_unregister_driver+0x26/0x250
[ 71
---truncated--- |
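A hedged sketch of the reworked flush (index and quad handling simplified): the spinlock now covers only the tracker bookkeeping, while the sleeping PHY write runs outside it.

	for (idx = 0; idx < tx->len; idx++) {
		u8 phy_idx = idx + tx->quad_offset;

		spin_lock(&tx->lock);
		if (tx->tstamps[idx].skb) {
			dev_kfree_skb_any(tx->tstamps[idx].skb);
			tx->tstamps[idx].skb = NULL;
		}
		clear_bit(idx, tx->in_use);
		spin_unlock(&tx->lock);

		/* Sends a PHY write via firmware and may sleep on a mutex, so
		 * it must not run under the spinlock. */
		ice_clear_phy_tstamp(hw, tx->quad, phy_idx);
	}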
| In the Linux kernel, the following vulnerability has been resolved:
iio: adis16475: fix deadlock on frequency set
With commit 39c024b51b560
("iio: adis16475: improve sync scale mode handling"), two deadlocks were
introduced:
1) The call to 'adis_write_reg_16()' was not changed to its unlocked
version.
2) The lock was not being released on the success path of the function.
This change fixes both these issues. |
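A hedged sketch of the corrected frequency-set path (the decimation-rate math is elided):

static int adis16475_set_freq(struct adis16475 *st, const u32 freq)
{
	int ret;
	u16 dec = 0;	/* placeholder; the real code derives this from freq */

	mutex_lock(&st->adis.state_lock);
	/* ... compute the decimation rate 'dec' from freq and the sync mode ... */

	/* 1) use the unlocked register write, state_lock is already held */
	ret = __adis_write_reg_16(&st->adis, ADIS16475_REG_DEC_RATE, dec);
	if (ret)
		goto out;

	/* ... update the cached sampling frequency ... */
out:
	/* 2) single exit point, so the lock is released on success too */
	mutex_unlock(&st->adis.state_lock);
	return ret;
}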
| In the Linux kernel, the following vulnerability has been resolved:
powerpc/mm: Fix lockup on kernel exec fault
The powerpc kernel is not prepared to handle exec faults from the kernel.
In particular, the function is_exec_fault() will return 'false' when an
exec fault is taken by the kernel, because the check is based on reading
current->thread.regs->trap, which contains the trap taken from user mode.
For instance, when provoking a LKDTM EXEC_USERSPACE test,
current->thread.regs->trap is set to SYSCALL trap (0xc00), and
the fault taken by the kernel is not seen as an exec fault by
set_access_flags_filter().
Commit d7df2443cd5f ("powerpc/mm: Fix spurious segfaults on radix
with autonuma") made it clear and handled it properly. But later on
commit d3ca587404b3 ("powerpc/mm: Fix reporting of kernel execute
faults") removed that handling, introducing test based on error_code.
And here is the problem, because on the 603 all upper bits of SRR1
get cleared when the TLB instruction miss handler bails out to ISI.
Until commit cbd7e6ca0210 ("powerpc/fault: Avoid heavy
search_exception_tables() verification"), an exec fault from kernel
at a userspace address was indirectly caught by the lack of entry for
that address in the exception tables. But after that commit the
kernel mainly relies on KUAP or on core mm handling to catch wrong
user accesses. Here the access is not wrong, so mm handles it.
It is a minor fault because PAGE_EXEC is not set;
set_access_flags_filter() should set PAGE_EXEC and voila.
But as is_exec_fault() returns false, as explained at the beginning,
set_access_flags_filter() bails out without setting the PAGE_EXEC flag,
which leads to the minor exec fault being taken forever.
As the kernel is not prepared to handle such exec faults, the thing to
do is to fire in bad_kernel_fault() for any exec fault taken by the
kernel, as it was prior to commit d3ca587404b3. |
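A hedged sketch of the resulting check in bad_kernel_fault() (simplified; the real predicate also handles write protection and KUAP):

static bool bad_kernel_fault(struct pt_regs *regs, unsigned long error_code,
			     unsigned long address, bool is_write)
{
	int is_exec = TRAP(regs) == INTERRUPT_INST_STORAGE;

	/* The kernel never legitimately executes from a user or
	 * non-executable page, so treat any kernel-mode exec fault as bad
	 * instead of letting the mm code retry the minor fault forever. */
	if (is_exec) {
		pr_crit_ratelimited("kernel tried to execute %s page (%lx) - exploit attempt? (uid: %d)\n",
				    address >= TASK_SIZE ? "exec-protected" : "user",
				    address,
				    from_kuid(&init_user_ns, current_uid()));
		return true;
	}

	/* ... write-protection and KUAP checks follow ... */
	return false;
}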