| In the Linux kernel, the following vulnerability has been resolved:
soc/tegra: regulators: Fix locking up when voltage-spread is out of range
Fix a voltage coupler lockup that happens when the voltage-spread is
out of range due to a bug in the code. The max-spread requirement must
be accounted for even when the CPU regulator has no consumers. This
problem is observed on the Tegra30-based Ouya game console once
system-wide DVFS is enabled in the device tree. |
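A minimal sketch of the max-spread rule in plain C, with hypothetical names (the real logic lives in the Tegra voltage-coupler driver): a regulator's target voltage must be clamped to within max-spread of its coupled partner even when the regulator has no consumers and therefore no request of its own.

/* Hedged sketch, hypothetical names: clamp a target voltage so it stays
 * within max_spread microvolts of the coupled partner's voltage. The bug
 * class fixed by the commit is skipping this clamp when the regulator
 * has no consumers. */
static int clamp_to_max_spread(int target_uv, int partner_uv, int max_spread)
{
        if (target_uv < partner_uv - max_spread)
                target_uv = partner_uv - max_spread;
        if (target_uv > partner_uv + max_spread)
                target_uv = partner_uv + max_spread;
        return target_uv;
}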
| In the Linux kernel, the following vulnerability has been resolved:
nvmet-tcp: fix incorrect locking in state_change sk callback
We are not changing anything in the TCP connection state, so
we should take a read lock rather than a write lock (a userspace
sketch of this pattern follows the trace below).
This caused a deadlock when running nvmet-tcp and nvme-tcp
on the same system, where state_change callbacks on the
host and on the controller side have a causal relationship,
making lockdep report on this with blktests:
================================
WARNING: inconsistent lock state
5.12.0-rc3 #1 Tainted: G I
--------------------------------
inconsistent {IN-SOFTIRQ-W} -> {SOFTIRQ-ON-R} usage.
nvme/1324 [HC0[0]:SC0[0]:HE1:SE1] takes:
ffff888363151000 (clock-AF_INET){++-?}-{2:2}, at: nvme_tcp_state_change+0x21/0x150 [nvme_tcp]
{IN-SOFTIRQ-W} state was registered at:
__lock_acquire+0x79b/0x18d0
lock_acquire+0x1ca/0x480
_raw_write_lock_bh+0x39/0x80
nvmet_tcp_state_change+0x21/0x170 [nvmet_tcp]
tcp_fin+0x2a8/0x780
tcp_data_queue+0xf94/0x1f20
tcp_rcv_established+0x6ba/0x1f00
tcp_v4_do_rcv+0x502/0x760
tcp_v4_rcv+0x257e/0x3430
ip_protocol_deliver_rcu+0x69/0x6a0
ip_local_deliver_finish+0x1e2/0x2f0
ip_local_deliver+0x1a2/0x420
ip_rcv+0x4fb/0x6b0
__netif_receive_skb_one_core+0x162/0x1b0
process_backlog+0x1ff/0x770
__napi_poll.constprop.0+0xa9/0x5c0
net_rx_action+0x7b3/0xb30
__do_softirq+0x1f0/0x940
do_softirq+0xa1/0xd0
__local_bh_enable_ip+0xd8/0x100
ip_finish_output2+0x6b7/0x18a0
__ip_queue_xmit+0x706/0x1aa0
__tcp_transmit_skb+0x2068/0x2e20
tcp_write_xmit+0xc9e/0x2bb0
__tcp_push_pending_frames+0x92/0x310
inet_shutdown+0x158/0x300
__nvme_tcp_stop_queue+0x36/0x270 [nvme_tcp]
nvme_tcp_stop_queue+0x87/0xb0 [nvme_tcp]
nvme_tcp_teardown_admin_queue+0x69/0xe0 [nvme_tcp]
nvme_do_delete_ctrl+0x100/0x10c [nvme_core]
nvme_sysfs_delete.cold+0x8/0xd [nvme_core]
kernfs_fop_write_iter+0x2c7/0x460
new_sync_write+0x36c/0x610
vfs_write+0x5c0/0x870
ksys_write+0xf9/0x1d0
do_syscall_64+0x33/0x40
entry_SYSCALL_64_after_hwframe+0x44/0xae
irq event stamp: 10687
hardirqs last enabled at (10687): [<ffffffff9ec376bd>] _raw_spin_unlock_irqrestore+0x2d/0x40
hardirqs last disabled at (10686): [<ffffffff9ec374d8>] _raw_spin_lock_irqsave+0x68/0x90
softirqs last enabled at (10684): [<ffffffff9f000608>] __do_softirq+0x608/0x940
softirqs last disabled at (10649): [<ffffffff9cdedd31>] do_softirq+0xa1/0xd0
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(clock-AF_INET);
<Interrupt>
lock(clock-AF_INET);
*** DEADLOCK ***
5 locks held by nvme/1324:
#0: ffff8884a01fe470 (sb_writers#4){.+.+}-{0:0}, at: ksys_write+0xf9/0x1d0
#1: ffff8886e435c090 (&of->mutex){+.+.}-{3:3}, at: kernfs_fop_write_iter+0x216/0x460
#2: ffff888104d90c38 (kn->active#255){++++}-{0:0}, at: kernfs_remove_self+0x22d/0x330
#3: ffff8884634538d0 (&queue->queue_lock){+.+.}-{3:3}, at: nvme_tcp_stop_queue+0x52/0xb0 [nvme_tcp]
#4: ffff888363150d30 (sk_lock-AF_INET){+.+.}-{0:0}, at: inet_shutdown+0x59/0x300
stack backtrace:
CPU: 26 PID: 1324 Comm: nvme Tainted: G I 5.12.0-rc3 #1
Hardware name: Dell Inc. PowerEdge R640/06NR82, BIOS 2.10.0 11/12/2020
Call Trace:
dump_stack+0x93/0xc2
mark_lock_irq.cold+0x2c/0xb3
? verify_lock_unused+0x390/0x390
? stack_trace_consume_entry+0x160/0x160
? lock_downgrade+0x100/0x100
? save_trace+0x88/0x5e0
? _raw_spin_unlock_irqrestore+0x2d/0x40
mark_lock+0x530/0x1470
? mark_lock_irq+0x1d10/0x1d10
? enqueue_timer+0x660/0x660
mark_usage+0x215/0x2a0
__lock_acquire+0x79b/0x18d0
? tcp_schedule_loss_probe.part.0+0x38c/0x520
lock_acquire+0x1ca/0x480
? nvme_tcp_state_change+0x21/0x150 [nvme_tcp]
? rcu_read_unlock+0x40/0x40
? tcp_mtu_probe+0x1ae0/0x1ae0
? kmalloc_reserve+0xa0/0xa0
? sysfs_file_ops+0x170/0x170
_raw_read_lock+0x3d/0xa0
? nvme_tcp_state_change+0x21/0x150 [nvme_tcp]
nvme_tcp_state_change+0x21/0x150 [nvme_tcp]
? sysfs_file_ops
---truncated--- |
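The shape of the fix, as a hedged userspace sketch using pthreads rather than the kernel's socket callback locking (names are hypothetical): a callback that only inspects shared state should take the read side of a reader-writer lock, reserving the write side for paths that actually change the state.

#include <pthread.h>

static pthread_rwlock_t state_lock = PTHREAD_RWLOCK_INITIALIZER;
static int conn_state;

/* Observer path: reads conn_state but never modifies it, so the read
 * lock suffices. Taking the write lock here is what produced the
 * inconsistent-lock-state report above. */
int state_change_cb(void)
{
        int state;

        pthread_rwlock_rdlock(&state_lock);
        state = conn_state;
        pthread_rwlock_unlock(&state_lock);
        return state;
}

/* Mutator path: actually changes the state, so it takes the write lock. */
void set_state(int new_state)
{
        pthread_rwlock_wrlock(&state_lock);
        conn_state = new_state;
        pthread_rwlock_unlock(&state_lock);
}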
| In the Linux kernel, the following vulnerability has been resolved:
Bluetooth: avoid deadlock between hci_dev->lock and socket lock
Commit eab2404ba798 ("Bluetooth: Add BT_PHY socket option") added a
dependency between socket lock and hci_dev->lock that could lead to
deadlock.
It turns out that hci_conn_get_phy() does not rely in any way on hdev
being immutable during its runtime, nor does it even look at any of
hdev's members, so there is no need to hold that lock (a sketch of the
resulting lock-free getter follows the trace below).
This fixes the lockdep splat below:
======================================================
WARNING: possible circular locking dependency detected
5.12.0-rc1-00026-g73d464503354 #10 Not tainted
------------------------------------------------------
bluetoothd/1118 is trying to acquire lock:
ffff8f078383c078 (&hdev->lock){+.+.}-{3:3}, at: hci_conn_get_phy+0x1c/0x150 [bluetooth]
but task is already holding lock:
ffff8f07e831d920 (sk_lock-AF_BLUETOOTH-BTPROTO_L2CAP){+.+.}-{0:0}, at: l2cap_sock_getsockopt+0x8b/0x610
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #3 (sk_lock-AF_BLUETOOTH-BTPROTO_L2CAP){+.+.}-{0:0}:
lock_sock_nested+0x72/0xa0
l2cap_sock_ready_cb+0x18/0x70 [bluetooth]
l2cap_config_rsp+0x27a/0x520 [bluetooth]
l2cap_sig_channel+0x658/0x1330 [bluetooth]
l2cap_recv_frame+0x1ba/0x310 [bluetooth]
hci_rx_work+0x1cc/0x640 [bluetooth]
process_one_work+0x244/0x5f0
worker_thread+0x3c/0x380
kthread+0x13e/0x160
ret_from_fork+0x22/0x30
-> #2 (&chan->lock#2/1){+.+.}-{3:3}:
__mutex_lock+0xa3/0xa10
l2cap_chan_connect+0x33a/0x940 [bluetooth]
l2cap_sock_connect+0x141/0x2a0 [bluetooth]
__sys_connect+0x9b/0xc0
__x64_sys_connect+0x16/0x20
do_syscall_64+0x33/0x80
entry_SYSCALL_64_after_hwframe+0x44/0xae
-> #1 (&conn->chan_lock){+.+.}-{3:3}:
__mutex_lock+0xa3/0xa10
l2cap_chan_connect+0x322/0x940 [bluetooth]
l2cap_sock_connect+0x141/0x2a0 [bluetooth]
__sys_connect+0x9b/0xc0
__x64_sys_connect+0x16/0x20
do_syscall_64+0x33/0x80
entry_SYSCALL_64_after_hwframe+0x44/0xae
-> #0 (&hdev->lock){+.+.}-{3:3}:
__lock_acquire+0x147a/0x1a50
lock_acquire+0x277/0x3d0
__mutex_lock+0xa3/0xa10
hci_conn_get_phy+0x1c/0x150 [bluetooth]
l2cap_sock_getsockopt+0x5a9/0x610 [bluetooth]
__sys_getsockopt+0xcc/0x200
__x64_sys_getsockopt+0x20/0x30
do_syscall_64+0x33/0x80
entry_SYSCALL_64_after_hwframe+0x44/0xae
other info that might help us debug this:
Chain exists of:
&hdev->lock --> &chan->lock#2/1 --> sk_lock-AF_BLUETOOTH-BTPROTO_L2CAP
Possible unsafe locking scenario:
CPU0 CPU1
---- ----
lock(sk_lock-AF_BLUETOOTH-BTPROTO_L2CAP);
lock(&chan->lock#2/1);
lock(sk_lock-AF_BLUETOOTH-BTPROTO_L2CAP);
lock(&hdev->lock);
*** DEADLOCK ***
1 lock held by bluetoothd/1118:
#0: ffff8f07e831d920 (sk_lock-AF_BLUETOOTH-BTPROTO_L2CAP){+.+.}-{0:0}, at: l2cap_sock_getsockopt+0x8b/0x610 [bluetooth]
stack backtrace:
CPU: 3 PID: 1118 Comm: bluetoothd Not tainted 5.12.0-rc1-00026-g73d464503354 #10
Hardware name: LENOVO 20K5S22R00/20K5S22R00, BIOS R0IET38W (1.16 ) 05/31/2017
Call Trace:
dump_stack+0x7f/0xa1
check_noncircular+0x105/0x120
? __lock_acquire+0x147a/0x1a50
__lock_acquire+0x147a/0x1a50
lock_acquire+0x277/0x3d0
? hci_conn_get_phy+0x1c/0x150 [bluetooth]
? __lock_acquire+0x2e1/0x1a50
? lock_is_held_type+0xb4/0x120
? hci_conn_get_phy+0x1c/0x150 [bluetooth]
__mutex_lock+0xa3/0xa10
? hci_conn_get_phy+0x1c/0x150 [bluetooth]
? lock_acquire+0x277/0x3d0
? mark_held_locks+0x49/0x70
? mark_held_locks+0x49/0x70
? hci_conn_get_phy+0x1c/0x150 [bluetooth]
hci_conn_get_phy+0x
---truncated--- |
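The essence of the fix, as a hedged sketch with hypothetical types: the getter reads only fields that are fixed for the connection's lifetime, so the device-wide lock acquisition can simply be dropped, removing the #0 edge of the dependency cycle reported above.

/* Hedged sketch, hypothetical types: before the fix, the getter took a
 * device-wide mutex while the caller already held the socket lock,
 * closing the cycle sk_lock -> chan->lock -> conn->chan_lock ->
 * hdev->lock. Since only immutable per-connection fields are read,
 * no lock is needed at all. */
struct conn_phy { unsigned int tx_phys, rx_phys; };

unsigned int conn_get_phy(const struct conn_phy *c)
{
        return c->tx_phys | c->rx_phys; /* immutable for the conn's lifetime */
}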
| In the Linux kernel, the following vulnerability has been resolved:
tracing: Restructure trace_clock_global() to never block
It was reported that a fix to the ring buffer recursion detection would
cause a hung machine when performing suspend / resume testing. The
following backtrace was extracted from debugging that case:
Call Trace:
trace_clock_global+0x91/0xa0
__rb_reserve_next+0x237/0x460
ring_buffer_lock_reserve+0x12a/0x3f0
trace_buffer_lock_reserve+0x10/0x50
__trace_graph_return+0x1f/0x80
trace_graph_return+0xb7/0xf0
? trace_clock_global+0x91/0xa0
ftrace_return_to_handler+0x8b/0xf0
? pv_hash+0xa0/0xa0
return_to_handler+0x15/0x30
? ftrace_graph_caller+0xa0/0xa0
? trace_clock_global+0x91/0xa0
? __rb_reserve_next+0x237/0x460
? ring_buffer_lock_reserve+0x12a/0x3f0
? trace_event_buffer_lock_reserve+0x3c/0x120
? trace_event_buffer_reserve+0x6b/0xc0
? trace_event_raw_event_device_pm_callback_start+0x125/0x2d0
? dpm_run_callback+0x3b/0xc0
? pm_ops_is_empty+0x50/0x50
? platform_get_irq_byname_optional+0x90/0x90
? trace_device_pm_callback_start+0x82/0xd0
? dpm_run_callback+0x49/0xc0
With the following RIP:
RIP: 0010:native_queued_spin_lock_slowpath+0x69/0x200
Since the fix to the recursion detection would allow a single recursion to
happen while tracing, this led to trace_clock_global() taking a spin
lock and then trying to take it again:
ring_buffer_lock_reserve() {
trace_clock_global() {
arch_spin_lock() {
queued_spin_lock_slowpath() {
/* lock taken */
(something else gets traced by function graph tracer)
ring_buffer_lock_reserve() {
trace_clock_global() {
arch_spin_lock() {
queued_spin_lock_slowpath() {
/* DEAD LOCK! */
Tracing should *never* block, as it can lead to strange lockups like the
above.
Restructure the trace_clock_global() code so that, instead of simply
taking a lock to update the recorded "prev_time", it simply uses it: for
two events that happen on two different CPUs and call this at the same
time, it really doesn't matter which one goes first. Use a trylock to
grab the lock for updating prev_time, and if it fails, simply try again
the next time. If the lock could not be taken, that means something else
is already updating it.
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=212761 |
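A compact userspace rendering of that strategy in C11, assuming hypothetical names: return a timestamp unconditionally, and only update the shared prev_time when a trylock succeeds, so the path can never block.

#include <stdatomic.h>
#include <stdint.h>

static atomic_flag prev_lock = ATOMIC_FLAG_INIT;
static _Atomic uint64_t prev_time;

/* Hedged sketch of the "never block" clock: keep the returned timestamp
 * monotonic against prev_time, but update prev_time only if the trylock
 * succeeds. If another CPU holds the lock, it is already updating
 * prev_time, so skipping the update is safe. */
uint64_t clock_global(uint64_t now)
{
        uint64_t prev = atomic_load(&prev_time);

        if (now < prev)         /* keep the clock monotonic */
                now = prev;

        /* trylock: never spin, never block */
        if (!atomic_flag_test_and_set_explicit(&prev_lock,
                                               memory_order_acquire)) {
                if (now > atomic_load(&prev_time))
                        atomic_store(&prev_time, now);
                atomic_flag_clear_explicit(&prev_lock, memory_order_release);
        }
        return now;
}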
| In the Linux kernel, the following vulnerability has been resolved:
mm/damon/dbgfs: fix 'struct pid' leaks in 'dbgfs_target_ids_write()'
The DAMON debugfs interface increases the reference counts of 'struct
pid's for targets from the 'target_ids' file write callback
('dbgfs_target_ids_write()'), but decreases the counts only in the DAMON
monitoring termination callback ('dbgfs_before_terminate()').
Therefore, when the 'target_ids' file is repeatedly written without
DAMON monitoring start/termination, the reference counts are not
decreased and the memory for the 'struct pid's cannot be freed. This
commit fixes the issue by decreasing the reference counts when
'target_ids' is written. |
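The same pattern in a hedged userspace form, with a hypothetical refcount API standing in for 'struct pid': a setter that installs freshly referenced objects must drop the references it already holds before overwriting them, otherwise repeated writes leak.

#include <stdlib.h>

/* Hypothetical refcounted target, standing in for 'struct pid'. */
struct target { int refcnt; };

static void target_put(struct target *t)
{
        if (t && --t->refcnt == 0)
                free(t);
}

static struct target *current_targets[8];
static int nr_targets;

/* Fix shape: drop the references taken by the previous write before
 * installing the new set, so repeated writes cannot leak. */
void set_targets(struct target **new_targets, int nr)
{
        int i;

        for (i = 0; i < nr_targets; i++)
                target_put(current_targets[i]);
        for (i = 0; i < nr; i++)
                current_targets[i] = new_targets[i]; /* refs transferred */
        nr_targets = nr;
}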
| In the Linux kernel, the following vulnerability has been resolved:
binder: fix async_free_space accounting for empty parcels
In 4.13, commit 74310e06be4d ("android: binder: Move buffer out of area shared with user space")
fixed a kernel structure visibility issue. As part of that patch,
sizeof(void *) was used as the buffer size for 0-length data payloads so
the driver could detect abusive clients sending 0-length asynchronous
transactions to a server by enforcing limits on async_free_size.
Unfortunately, on the "free" side, the accounting of async_free_space
did not add the sizeof(void *) back. The result was that up to 8 bytes of
async_free_space were leaked on every async transaction of 8 bytes or
less. These small transactions are uncommon, so this accounting issue
has gone undetected for several years.
The fix is to use "buffer_size" (the allocated buffer size) instead of
"size" (the logical buffer size) when updating async_free_space
during the free operation. These are the same except in this
corner case of asynchronous transactions with payloads < 8 bytes. |
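A hedged, self-contained illustration of the accounting symmetry, with hypothetical names: whatever rounding the allocator applies to the logical size must be applied again on free, or the pool slowly drains.

#include <stddef.h>

static size_t async_free_space = 1024;

/* The allocator reserves at least sizeof(void *) even for 0-length
 * payloads, mirroring the commit's description. */
static size_t alloc_size(size_t size)
{
        return size < sizeof(void *) ? sizeof(void *) : size;
}

void async_alloc(size_t size)
{
        async_free_space -= alloc_size(size); /* charge the rounded size */
}

void async_free(size_t size)
{
        /* Bug shape: crediting the raw 'size' here leaks up to
         * sizeof(void *) per small transaction. Fix: credit the same
         * rounded size that was charged. */
        async_free_space += alloc_size(size);
}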
| In the Linux kernel, the following vulnerability has been resolved:
Input: appletouch - initialize work before device registration
Syzbot has reported a warning in __flush_work(). This warning is caused
by work->func == NULL, which means missing work initialization.
This may happen, since input_dev->close() calls
cancel_work_sync(&dev->work), but dev->work initialization happens
_after_ the input_register_device() call.
So this patch moves the dev->work initialization before registering the
input device. |
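The ordering rule in miniature, as a hedged C sketch with hypothetical types: fully initialize an object, including any work/callback fields, before publishing it to code that may invoke its teardown path.

struct work { void (*func)(struct work *); };
struct device { struct work work; };

static void device_work_fn(struct work *w) { (void)w; /* do the work */ }

/* Once registered, close() may run and flush dev->work at any time,
 * so the work must be initialized first. */
int probe(struct device *dev, int (*register_device)(struct device *))
{
        dev->work.func = device_work_fn; /* init BEFORE registering */
        return register_device(dev);     /* now safe to publish */
}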
| In the Linux kernel, the following vulnerability has been resolved:
nitro_enclaves: Use get_user_pages_unlocked() call to handle mmap assert
After commit 5b78ed24e8ec ("mm/pagemap: add mmap_assert_locked()
annotations to find_vma*()"), the call to get_user_pages() will trigger
the mmap assert.
static inline void mmap_assert_locked(struct mm_struct *mm)
{
lockdep_assert_held(&mm->mmap_lock);
VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_lock), mm);
}
[ 62.521410] kernel BUG at include/linux/mmap_lock.h:156!
...........................................................
[ 62.538938] RIP: 0010:find_vma+0x32/0x80
...........................................................
[ 62.605889] Call Trace:
[ 62.608502] <TASK>
[ 62.610956] ? lock_timer_base+0x61/0x80
[ 62.614106] find_extend_vma+0x19/0x80
[ 62.617195] __get_user_pages+0x9b/0x6a0
[ 62.620356] __gup_longterm_locked+0x42d/0x450
[ 62.623721] ? finish_wait+0x41/0x80
[ 62.626748] ? __kmalloc+0x178/0x2f0
[ 62.629768] ne_set_user_memory_region_ioctl.isra.0+0x225/0x6a0 [nitro_enclaves]
[ 62.635776] ne_enclave_ioctl+0x1cf/0x6d7 [nitro_enclaves]
[ 62.639541] __x64_sys_ioctl+0x82/0xb0
[ 62.642620] do_syscall_64+0x3b/0x90
[ 62.645642] entry_SYSCALL_64_after_hwframe+0x44/0xae
Use get_user_pages_unlocked() when setting the enclave memory regions.
That is a pattern similar to using mmap_read_lock() together with
get_user_pages(). |
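The general pattern, sketched in userspace with pthreads and hypothetical names: an API that asserts its caller holds a lock can be paired with an _unlocked variant that takes and drops the lock itself, which is the relationship between get_user_pages() and get_user_pages_unlocked().

#include <assert.h>
#include <pthread.h>

static pthread_mutex_t map_lock = PTHREAD_MUTEX_INITIALIZER;
static int map_lock_held; /* stand-in for lockdep_assert_held() */

/* Core helper: requires the caller to hold map_lock, like
 * get_user_pages() requires mmap_lock after the assert was added. */
long pin_pages(void)
{
        assert(map_lock_held);
        /* ... walk and pin ... */
        return 0;
}

/* _unlocked variant: wraps the core helper in the lock itself, which
 * is the shape of the fix. */
long pin_pages_unlocked(void)
{
        long ret;

        pthread_mutex_lock(&map_lock);
        map_lock_held = 1;
        ret = pin_pages();
        map_lock_held = 0;
        pthread_mutex_unlock(&map_lock);
        return ret;
}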
| In the Linux kernel, the following vulnerability has been resolved:
fs/mount_setattr: always cleanup mount_kattr
Make sure that finish_mount_kattr() is called after mount_kattr has been
successfully built, in both the success and the failure case, to prevent
leaking any references we took when we built it. We returned early if
path lookup failed, thereby risking leaking an additional reference we
took when building mount_kattr for an idmapped mount request. |
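The fix shape as a hedged C sketch with hypothetical stub helpers: once the attribute object has been built and holds references, every exit path, including the early error return, must reach the same cleanup.

struct kattr { int refs_taken; };

static int  build_kattr(struct kattr *k)  { k->refs_taken = 1; return 0; }
static int  lookup_path(const char *p)    { return p ? 0 : -1; }
static void finish_kattr(struct kattr *k) { k->refs_taken = 0; }

int do_mount_setattr(const char *path)
{
        struct kattr kattr;
        int err = build_kattr(&kattr);  /* may take references */
        if (err)
                return err;             /* nothing built, nothing to drop */

        err = lookup_path(path);        /* this early-returned before the fix */
        finish_kattr(&kattr);           /* now runs on success AND failure */
        return err;
}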
| In the Linux kernel, the following vulnerability has been resolved:
locking/qrwlock: Fix ordering in queued_write_lock_slowpath()
While this code is executed with the wait_lock held, a reader can
acquire the lock without holding wait_lock. The writer side loops
checking the value with the atomic_cond_read_acquire(), but only truly
acquires the lock when the compare-and-exchange completes successfully,
and that exchange isn't ordered. This exposes the window between the
acquire and the cmpxchg to an A-B-A problem, which allows reads
following the lock acquisition to observe values speculatively before
the write lock is truly acquired.
We've seen a problem in epoll where the reader does an xchg while
holding the read lock, but the writer can see a value change out from
under it.
Writer                                | Reader
--------------------------------------------------------------------------
ep_scan_ready_list()                  |
|- write_lock_irq()                   |
   |- queued_write_lock_slowpath()    |
      |- atomic_cond_read_acquire()   |
                                      | read_lock_irqsave(&ep->lock, flags);
  --> (observes value before unlock)  | chain_epi_lockless()
  |                                   |   epi->next = xchg(&ep->ovflist, epi);
  |                                   | read_unlock_irqrestore(&ep->lock, flags);
  |                                   |
  |     atomic_cmpxchg_relaxed()      |
  |-- READ_ONCE(ep->ovflist);         |
A core can order the read of the ovflist ahead of the
atomic_cmpxchg_relaxed(). Switching the cmpxchg to use acquire
semantics addresses this issue, at which point the atomic_cond_read
can be switched to use relaxed semantics.
[peterz: use try_cmpxchg()] |
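The memory-ordering point in a hedged C11 sketch (a simplified rendering, not the kernel's qrwlock implementation): the compare-and-exchange that completes the lock acquisition must itself carry acquire semantics, so reads in the critical section cannot be speculated ahead of it; the preceding wait loop can then be relaxed.

#include <stdatomic.h>

#define QW_LOCKED 0xffu

static _Atomic unsigned int lock_cnts;

/* Hedged sketch of the fixed slowpath ordering: the cmpxchg that wins
 * the lock uses memory_order_acquire, closing the window in which a
 * later read (e.g. of ep->ovflist) could be observed before the lock
 * was truly acquired. */
void write_lock_slowpath_sketch(void)
{
        for (;;) {
                unsigned int expected = 0;

                /* wait (relaxed) until no readers or writer are present */
                while (atomic_load_explicit(&lock_cnts,
                                            memory_order_relaxed) != 0)
                        ;
                /* acquire on success orders all later reads after this */
                if (atomic_compare_exchange_weak_explicit(&lock_cnts,
                                &expected, QW_LOCKED,
                                memory_order_acquire,
                                memory_order_relaxed))
                        return;
        }
}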
| Microsoft Local Security Authority Subsystem Service Information Disclosure Vulnerability |
| Microsoft Edge (Chromium-based) Information Disclosure Vulnerability |
| Microsoft ODBC Driver Remote Code Execution Vulnerability |
| Microsoft Edge (Chromium-based) Security Feature Bypass Vulnerability |
| Secure Boot Security Feature Bypass Vulnerability |
| Secure Boot Security Feature Bypass Vulnerability |
| Secure Boot Security Feature Bypass Vulnerability |
| Secure Boot Security Feature Bypass Vulnerability |
| Secure Boot Security Feature Bypass Vulnerability |
| BitLocker Security Feature Bypass Vulnerability |