| In the Linux kernel, the following vulnerability has been resolved:
accel/amdxdna: Fix an integer overflow in aie2_query_ctx_status_array()
The unpublished smatch static checker reported a warning.
drivers/accel/amdxdna/aie2_pci.c:904 aie2_query_ctx_status_array()
warn: potential user controlled sizeof overflow
'args->num_element * args->element_size' '1-u32max(user) * 1-u32max(user)'
Even though this will not cause a real issue, it is better to put a
reasonable limit on element_size and num_element. Add conditions to make
sure the input element_size <= 4K and num_element <= 1K. |
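A minimal sketch of the kind of validation described above; the limit macros and helper name are assumptions for illustration, not the exact amdxdna code:
```
#include <linux/kernel.h>
#include <linux/sizes.h>
#include <linux/overflow.h>

/* Illustrative limits taken from the changelog; macro names are assumptions. */
#define MAX_CTX_ELEMENT_SIZE	SZ_4K
#define MAX_CTX_NUM_ELEMENT	1024

static int query_ctx_status_check(u32 num_element, u32 element_size,
				  size_t *total_out)
{
	/* Reject unreasonable requests before any size computation. */
	if (element_size > MAX_CTX_ELEMENT_SIZE ||
	    num_element > MAX_CTX_NUM_ELEMENT)
		return -EINVAL;

	/* With the limits above the product cannot overflow, but
	 * check_mul_overflow() keeps the computation self-documenting. */
	if (check_mul_overflow((size_t)num_element,
			       (size_t)element_size, total_out))
		return -EOVERFLOW;

	return 0;
}
```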
| In the Linux kernel, the following vulnerability has been resolved:
accel/ivpu: Fix page fault in ivpu_bo_unbind_all_bos_from_context()
Don't add the BO to vdev->bo_list in ivpu_gem_create_object().
When a failure happens inside drm_gem_shmem_create(), the BO is not
fully created and the ivpu_gem_bo_free() callback will not be called,
leaving a deleted BO on the list. |
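A hedged sketch of the pattern the fix implies: put the BO on the device list only after drm_gem_shmem_create() has succeeded, so a failed creation never leaves a stale entry behind. The helper and field names (to_ivpu_bo, bo_list_lock, bo_list_node) mirror the driver loosely and are not the exact upstream code:
```
#include <drm/drm_gem_shmem_helper.h>

static struct ivpu_bo *ivpu_bo_alloc_sketch(struct ivpu_device *vdev, u64 size)
{
	struct drm_gem_shmem_object *shmem;
	struct ivpu_bo *bo;

	shmem = drm_gem_shmem_create(&vdev->drm, size);
	if (IS_ERR(shmem))
		return ERR_CAST(shmem);	/* nothing was added to the list */

	bo = to_ivpu_bo(&shmem->base);

	/* Only a fully created BO is tracked on vdev->bo_list. */
	mutex_lock(&vdev->bo_list_lock);
	list_add_tail(&bo->bo_list_node, &vdev->bo_list);
	mutex_unlock(&vdev->bo_list_lock);

	return bo;
}
```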
| In the Linux kernel, the following vulnerability has been resolved:
wifi: ath12k: Fix MSDU buffer types handling in RX error path
Currently, packets received on the REO exception ring from
unassociated peers are of MSDU buffer type, while the driver expects
link descriptor type packets. These packets are not parsed further due
to a return check on packet type in ath12k_hal_desc_reo_parse_err(),
but the associated skb is not freed. This may lead to kernel
crashes and buffer leaks.
Hence, to fix this, update the RX error handler to explicitly drop
MSDU buffer type packets received on the REO exception ring.
This prevents further processing of invalid packets and ensures
stability in the RX error handling path.
Tested-on: QCN9274 hw2.0 PCI WLAN.WBE.1.4.1-00199-QCAHKSWPL_SILICONZ-1 |
| In the Linux kernel, the following vulnerability has been resolved:
crypto: aead - Fix reqsize handling
Commit afddce13ce81d ("crypto: api - Add reqsize to crypto_alg")
introduced the cra_reqsize field in struct crypto_alg to replace the
type-specific reqsize fields. Judging by the commit description, it was
introduced specifically for ahash and acomp, as subsequent commits add
the necessary changes in those algorithm frameworks.
However, it is now being recommended for use in all crypto algorithms
instead of setting reqsize via crypto_*_set_reqsize(). Hence, using
cra_reqsize in aead algorithms causes memory corruption and crashes,
because the underlying functions in the algorithm framework have not
been updated to set the reqsize from cra_reqsize. [1]
Add proper set_reqsize calls in the aead init function to properly
initialize reqsize for these algorithms in the framework.
[1]: https://gist.github.com/Pratham-T/24247446f1faf4b7843e4014d5089f6b |
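A hedged sketch of propagating cra_reqsize in an aead init path via crypto_aead_set_reqsize(); which init hook this belongs in (the generic aead framework init or a driver's ->init) is an assumption:
```
#include <crypto/internal/aead.h>

static int aead_init_reqsize_sketch(struct crypto_aead *aead)
{
	struct aead_alg *alg = crypto_aead_alg(aead);

	/* Without this, requests allocated for the tfm are undersized and
	 * writing the request context corrupts adjacent memory. */
	if (alg->base.cra_reqsize)
		crypto_aead_set_reqsize(aead, alg->base.cra_reqsize);

	return 0;
}
```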
| In the Linux kernel, the following vulnerability has been resolved:
bpf: Do not let BPF test infra emit invalid GSO types to stack
Yinhao et al. reported that their fuzzer tool was able to trigger a
skb_warn_bad_offload() from netif_skb_features() -> gso_features_check().
When a BPF program - triggered via BPF test infra - pushes the packet
to the loopback device via bpf_clone_redirect() then mentioned offload
warning can be seen. GSO-related features are then rightfully disabled.
We get into this situation due to convert___skb_to_skb() setting
gso_segs and gso_size but not gso_type. Technically, it makes sense
that this warning triggers since the GSO properties are malformed due
to the missing gso_type. Potentially, the gso_type could be marked non-trustworthy
through setting it at least to SKB_GSO_DODGY without any other specific
assumptions, but that also feels wrong given we should not go further
into the GSO engine in the first place.
The checks were added in 121d57af308d ("gso: validate gso_type in GSO
handlers") because there were malicious (syzbot) senders that combine
a protocol with a non-matching gso_type. If we wanted to drop such
packets, gso_features_check() currently only returns feature flags via
netif_skb_features(), so one location for potentially dropping such skbs
could be validate_xmit_unreadable_skb(), but then otoh it would be
an additional check in the fast-path for a very corner case. Given
bpf_clone_redirect() is the only place where BPF test infra could emit
such packets, let's reject them right there. |
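A hedged sketch of the kind of check implied above: treat GSO metadata with a size/segment count but no gso_type as invalid. Whether the real fix validates this in the test-run conversion or in bpf_clone_redirect() is an assumption here:
```
#include <linux/skbuff.h>

/* Return false for skbs whose GSO metadata would trip
 * skb_warn_bad_offload() further down the stack. */
static bool skb_gso_meta_valid(const struct sk_buff *skb)
{
	const struct skb_shared_info *shinfo = skb_shinfo(skb);

	if ((shinfo->gso_size || shinfo->gso_segs) && !shinfo->gso_type)
		return false;

	return true;
}
```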
| In the Linux kernel, the following vulnerability has been resolved:
bpf: Fix stackmap overflow check in __bpf_get_stackid()
Syzkaller reported a KASAN slab-out-of-bounds write in __bpf_get_stackid()
when copying stack trace data. The issue occurs when the perf trace
contains more stack entries than the stack map bucket can hold,
leading to an out-of-bounds write in the bucket's data array. |
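A minimal sketch of the clamp the fix implies: never copy more entries into the bucket than it was sized for. The bucket layout and variable names are assumptions based loosely on kernel/bpf/stackmap.c:
```
#include <linux/minmax.h>
#include <linux/string.h>
#include <linux/types.h>

static void fill_stack_bucket(u64 *bucket_ips, u32 bucket_capacity,
			      const u64 *trace_ips, u32 trace_nr)
{
	/* Anything beyond bucket_capacity would be the slab-out-of-bounds
	 * write syzkaller reported. */
	u32 nr = min(trace_nr, bucket_capacity);

	memcpy(bucket_ips, trace_ips, nr * sizeof(*bucket_ips));
}
```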
| In the Linux kernel, the following vulnerability has been resolved:
coresight: ETR: Fix ETR buffer use-after-free issue
When the ETR is enabled in CS_MODE_SYSFS, if the buffer size is changed
and the ETR is enabled again, sysfs_buf currently gets pointed at the
newly allocated memory (buf_new) and the old memory (buf_old) is freed.
But the etr_buf that is being used by the ETR still points to buf_old,
not buf_new, which results in a memory use-after-free.
Fix this by checking the ETR's mode before updating and releasing buf_old:
if the mode is CS_MODE_SYSFS, skip updating and releasing it. |
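A hedged sketch of the mode check described above; the helper names (coresight_get_mode(), tmc_free_etr_buf()) and fields approximate the coresight TMC driver and are not guaranteed to match the exact upstream fix:
```
static void tmc_etr_swap_sysfs_buf_sketch(struct tmc_drvdata *drvdata,
					  struct etr_buf *buf_new)
{
	struct etr_buf *buf_old = drvdata->sysfs_buf;

	/* While the ETR runs in sysfs mode it still reads buf_old; freeing
	 * it here would be the use-after-free this fix prevents. */
	if (coresight_get_mode(drvdata->csdev) == CS_MODE_SYSFS)
		return;

	drvdata->sysfs_buf = buf_new;
	if (buf_old)
		tmc_free_etr_buf(buf_old);
}
```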
| In the Linux kernel, the following vulnerability has been resolved:
perf/x86: Fix NULL event access and potential PEBS record loss
When intel_pmu_drain_pebs_icl() is called to drain PEBS records, the
perf_event_overflow() could be called to process the last PEBS record.
perf_event_overflow(), in turn, can trigger the interrupt throttle and
stop all events of the group, as the call chain below shows.
perf_event_overflow()
-> __perf_event_overflow()
->__perf_event_account_interrupt()
-> perf_event_throttle_group()
-> perf_event_throttle()
-> event->pmu->stop()
-> x86_pmu_stop()
The side effect of stopping the events is that all corresponding event
pointers in cpuc->events[] array are cleared to NULL.
Assume there are two PEBS events (event a and event b) in a group. When
intel_pmu_drain_pebs_icl() calls perf_event_overflow() to process the
last PEBS record of PEBS event a, interrupt throttle is triggered and
all pointers of event a and event b are cleared to NULL. Then
intel_pmu_drain_pebs_icl() tries to process the last PEBS record of
event b and encounters NULL pointer access.
To avoid this issue, move cpuc->events[] clearing from x86_pmu_stop()
to x86_pmu_del(). This is safe since cpuc->active_mask or
cpuc->pebs_enabled is always checked before accessing the event pointer
in cpuc->events[]. |
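A hedged sketch showing only the pointer-lifetime part of the change: stopping an event no longer clears its cpuc->events[] slot, deleting it does. The real x86_pmu_stop()/x86_pmu_del() do considerably more than shown here:
```
static void x86_pmu_stop_sketch(struct perf_event *event)
{
	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
	struct hw_perf_event *hwc = &event->hw;

	if (test_bit(hwc->idx, cpuc->active_mask)) {
		/* Stop counting, but keep cpuc->events[hwc->idx] intact so a
		 * concurrent PEBS drain can still find the sibling event. */
		__clear_bit(hwc->idx, cpuc->active_mask);
	}
}

static void x86_pmu_del_sketch(struct perf_event *event)
{
	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);

	/* The pointer is only dropped once the event leaves the PMU. */
	cpuc->events[event->hw.idx] = NULL;
}
```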
| In the Linux kernel, the following vulnerability has been resolved:
md: fix rcu protection in md_wakeup_thread
We attempted to use RCU to protect the pointer 'thread', but directly
passed the value when calling md_wakeup_thread(). This means that the
RCU pointer has been acquired before rcu_read_lock(), which renders
rcu_read_lock() ineffective and could lead to a use-after-free. |
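A minimal sketch of the intended RCU usage: dereference the __rcu pointer only inside the read-side critical section, rather than passing an already-loaded value in. This follows the shape of md_wakeup_thread() in drivers/md/md.c but is illustrative only:
```
#include <linux/rcupdate.h>

void md_wakeup_thread_sketch(struct md_thread __rcu *thread)
{
	struct md_thread *t;

	rcu_read_lock();
	t = rcu_dereference(thread);	/* load the pointer under RCU */
	if (t) {
		set_bit(THREAD_WAKEUP, &t->flags);
		wake_up(&t->wqueue);
	}
	rcu_read_unlock();
}
```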
| In the Linux kernel, the following vulnerability has been resolved:
md: avoid repeated calls to del_gendisk
There is a use-after-free (UAF) problem found by test case 23rdev-lifetime:
Oops: general protection fault, probably for non-canonical address 0xdead000000000122
RIP: 0010:bdi_unregister+0x4b/0x170
Call Trace:
<TASK>
__del_gendisk+0x356/0x3e0
mddev_unlock+0x351/0x360
rdev_attr_store+0x217/0x280
kernfs_fop_write_iter+0x14a/0x210
vfs_write+0x29e/0x550
ksys_write+0x74/0xf0
do_syscall_64+0xbb/0x380
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7ff5250a177e
The sequence is:
1. The rdev remove path takes reconfig_mutex.
2. The rdev remove path releases reconfig_mutex in mddev_unlock.
3. md stop calls do_md_stop and sets MD_DELETED.
4. The rdev remove path calls del_gendisk because MD_DELETED is set.
5. The md stop path releases reconfig_mutex and calls del_gendisk again.
So there is a race condition we should resolve. This patch adds a
flag, MD_DO_DELETE, to avoid the race condition. |
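A hedged sketch of how such a flag can make the delete idempotent, so whichever path gets there second backs off; MD_DO_DELETE is the flag named in the changelog, but the surrounding helper is illustrative only:
```
static void mddev_do_delete_sketch(struct mddev *mddev)
{
	/* Only the first path to set the bit performs the delete. */
	if (test_and_set_bit(MD_DO_DELETE, &mddev->flags))
		return;

	del_gendisk(mddev->gendisk);
}
```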
| In the Linux kernel, the following vulnerability has been resolved:
ns: initialize ns_list_node for initial namespaces
Make sure that the list is always initialized for initial namespaces. |
| In the Linux kernel, the following vulnerability has been resolved:
coresight: tmc: add the handle of the event to the path
The handle is essential for retrieving the AUX_EVENT of each CPU and is
required in perf mode. It has been added to the coresight_path so that
dependent devices can access it from the path when needed.
The existing bug can be reproduced with:
perf record -e cs_etm//k -C 0-9 dd if=/dev/zero of=/dev/null
Showing an oops as follows:
Unable to handle kernel paging request at virtual address 000f6e84934ed19e
Call trace:
tmc_etr_get_buffer+0x30/0x80 [coresight_tmc] (P)
catu_enable_hw+0xbc/0x3d0 [coresight_catu]
catu_enable+0x70/0xe0 [coresight_catu]
coresight_enable_path+0xb0/0x258 [coresight] |
| In the Linux kernel, the following vulnerability has been resolved:
md: init bioset in mddev_init
IO operations may be needed before md_run(), such as updating metadata
after a sysfs write. Without a bioset, this triggers a NULL pointer
dereference as below:
BUG: kernel NULL pointer dereference, address: 0000000000000020
Call Trace:
md_update_sb+0x658/0xe00
new_level_store+0xc5/0x120
md_attr_store+0xc9/0x1e0
sysfs_kf_write+0x6f/0xa0
kernfs_fop_write_iter+0x141/0x2a0
vfs_write+0x1fc/0x5a0
ksys_write+0x79/0x180
__x64_sys_write+0x1d/0x30
x64_sys_call+0x2818/0x2880
do_syscall_64+0xa9/0x580
entry_SYSCALL_64_after_hwframe+0x4b/0x53
Reproducer
```
mdadm -CR /dev/md0 -l1 -n2 /dev/sd[cd]
echo inactive > /sys/block/md0/md/array_state
echo 10 > /sys/block/md0/md/new_level
```
mddev_init() can only be called once per mddev, so there is no longer any
need to test whether the bioset has been initialized. |
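A hedged sketch of initializing the biosets as part of mddev_init(), so metadata writes triggered from sysfs before md_run() always have them; error handling for the remaining setup is omitted:
```
int mddev_init_sketch(struct mddev *mddev)
{
	int err;

	err = bioset_init(&mddev->bio_set, BIO_POOL_SIZE, 0, BIOSET_NEED_BVECS);
	if (err)
		return err;

	err = bioset_init(&mddev->sync_set, BIO_POOL_SIZE, 0, BIOSET_NEED_BVECS);
	if (err) {
		bioset_exit(&mddev->bio_set);
		return err;
	}

	/* ... the rest of the existing mddev_init() setup ... */
	return 0;
}
```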
| In the Linux kernel, the following vulnerability has been resolved:
fs/ntfs3: Initialize allocated memory before use
KMSAN reports: Multiple uninitialized values detected:
- KMSAN: uninit-value in ntfs_read_hdr (3)
- KMSAN: uninit-value in bcmp (3)
Memory is allocated by __getname(), which is a wrapper for
kmem_cache_alloc(). This memory is used before being properly
cleared. Change kmem_cache_alloc() to kmem_cache_zalloc() to
properly allocate and clear memory before use. |
| In the Linux kernel, the following vulnerability has been resolved:
wifi: mt76: wed: use proper wed reference in mt76 wed driver callbacks
MT7996 driver can use both wed and wed_hif2 devices to offload traffic
from/to the wireless NIC. In the current codebase we always assume the
primary wed device in wed callbacks, resulting in the following crash if
the hw runs wed_hif2 (e.g. a 6GHz link).
[ 297.455876] Unable to handle kernel read from unreadable memory at virtual address 000000000000080a
[ 297.464928] Mem abort info:
[ 297.467722] ESR = 0x0000000096000005
[ 297.471461] EC = 0x25: DABT (current EL), IL = 32 bits
[ 297.476766] SET = 0, FnV = 0
[ 297.479809] EA = 0, S1PTW = 0
[ 297.482940] FSC = 0x05: level 1 translation fault
[ 297.487809] Data abort info:
[ 297.490679] ISV = 0, ISS = 0x00000005, ISS2 = 0x00000000
[ 297.496156] CM = 0, WnR = 0, TnD = 0, TagAccess = 0
[ 297.501196] GCS = 0, Overlay = 0, DirtyBit = 0, Xs = 0
[ 297.506500] user pgtable: 4k pages, 39-bit VAs, pgdp=0000000107480000
[ 297.512927] [000000000000080a] pgd=08000001097fb003, p4d=08000001097fb003, pud=08000001097fb003, pmd=0000000000000000
[ 297.523532] Internal error: Oops: 0000000096000005 [#1] SMP
[ 297.715393] CPU: 2 UID: 0 PID: 45 Comm: kworker/u16:2 Tainted: G O 6.12.50 #0
[ 297.723908] Tainted: [O]=OOT_MODULE
[ 297.727384] Hardware name: Banana Pi BPI-R4 (2x SFP+) (DT)
[ 297.732857] Workqueue: nf_ft_offload_del nf_flow_rule_route_ipv6 [nf_flow_table]
[ 297.740254] pstate: 60400005 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[ 297.747205] pc : mt76_wed_offload_disable+0x64/0xa0 [mt76]
[ 297.752688] lr : mtk_wed_flow_remove+0x58/0x80
[ 297.757126] sp : ffffffc080fe3ae0
[ 297.760430] x29: ffffffc080fe3ae0 x28: ffffffc080fe3be0 x27: 00000000deadbef7
[ 297.767557] x26: ffffff80c5ebca00 x25: 0000000000000001 x24: ffffff80c85f4c00
[ 297.774683] x23: ffffff80c1875b78 x22: ffffffc080d42cd0 x21: ffffffc080660018
[ 297.781809] x20: ffffff80c6a076d0 x19: ffffff80c6a043c8 x18: 0000000000000000
[ 297.788935] x17: 0000000000000000 x16: 0000000000000001 x15: 0000000000000000
[ 297.796060] x14: 0000000000000019 x13: ffffff80c0ad8ec0 x12: 00000000fa83b2da
[ 297.803185] x11: ffffff80c02700c0 x10: ffffff80c0ad8ec0 x9 : ffffff81fef96200
[ 297.810311] x8 : ffffff80c02700c0 x7 : ffffff80c02700d0 x6 : 0000000000000002
[ 297.817435] x5 : 0000000000000400 x4 : 0000000000000000 x3 : 0000000000000000
[ 297.824561] x2 : 0000000000000001 x1 : 0000000000000800 x0 : ffffff80c6a063c8
[ 297.831686] Call trace:
[ 297.834123] mt76_wed_offload_disable+0x64/0xa0 [mt76]
[ 297.839254] mtk_wed_flow_remove+0x58/0x80
[ 297.843342] mtk_flow_offload_cmd+0x434/0x574
[ 297.847689] mtk_wed_setup_tc_block_cb+0x30/0x40
[ 297.852295] nf_flow_offload_ipv6_hook+0x7f4/0x964 [nf_flow_table]
[ 297.858466] nf_flow_rule_route_ipv6+0x438/0x4a4 [nf_flow_table]
[ 297.864463] process_one_work+0x174/0x300
[ 297.868465] worker_thread+0x278/0x430
[ 297.872204] kthread+0xd8/0xdc
[ 297.875251] ret_from_fork+0x10/0x20
[ 297.878820] Code: 928b5ae0 8b000273 91400a60 f943fa61 (79401421)
[ 297.884901] ---[ end trace 0000000000000000 ]---
Fix the issue by detecting the proper wed reference to use when running
wed callbacks. |
| In the Linux kernel, the following vulnerability has been resolved:
btrfs: fix double free of qgroup record after failure to add delayed ref head
In the previous code it was possible to run into a double kfree()
scenario when calling add_delayed_ref_head(). This could happen if the
record was reported to already exist by the
btrfs_qgroup_trace_extent_nolock() call, but then an error occurred
later in add_delayed_ref_head(). In this case, since
add_delayed_ref_head() returned an error, the caller went on to free the
record. Since add_delayed_ref_head() couldn't set this kfree'd pointer
to NULL, kfree() would then act on a non-NULL 'record' object that
pointed to memory already freed by the callee.
The problem comes from the fact that the responsibility to kfree the
object is on both the caller and the callee at the same time. Hence, the
fix is to shift ownership of the 'qrecord' object out of
add_delayed_ref_head(). That is, we will never attempt to kfree()
the given object inside this function, and will expect the caller to
act on the 'qrecord' object on its own. The only exception where the
'qrecord' object cannot be kfree'd is if it was inserted into the
tracing logic, which we already account for with the
'qrecord_inserted_ret' boolean. Hence, the caller has to kfree the object
only if add_delayed_ref_head() reports that it did not insert it into the
tracing logic.
As a side-effect of the above, we must guarantee that
'qrecord_inserted_ret' is properly initialized at the start of the
function, not at the end, and then set when an actual insert
happens. This way we avoid 'qrecord_inserted_ret' having an invalid
value on an early exit.
The documentation from the add_delayed_ref_head() has also been updated
to reflect on the exact ownership of the 'qrecord' object. |
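A hedged sketch of the resulting ownership rule: the caller frees the qgroup record unless add_delayed_ref_head() reports it was inserted into the tracing logic. The call signature is simplified relative to fs/btrfs/delayed-ref.c:
```
static int add_head_and_manage_record_sketch(struct btrfs_trans_handle *trans,
					      struct btrfs_delayed_ref_head *head,
					      struct btrfs_qgroup_extent_record *record)
{
	bool qrecord_inserted = false;
	int ret;

	ret = add_delayed_ref_head(trans, head, record, &qrecord_inserted);

	/* add_delayed_ref_head() never kfrees 'record' itself, so there is
	 * exactly one owner: free it here unless it was inserted. */
	if (!qrecord_inserted)
		kfree(record);

	return ret;
}
```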
| In the Linux kernel, the following vulnerability has been resolved:
erofs: limit the level of fs stacking for file-backed mounts
Otherwise, it could cause potential kernel stack overflow (e.g., EROFS
mounting itself). |
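A hedged sketch of the usual stacking-filesystem guard this describes, using the generic s_stack_depth/FILESYSTEM_MAX_STACK_DEPTH mechanism; where exactly EROFS applies it for file-backed mounts is an assumption:
```
#include <linux/fs.h>
#include <linux/errno.h>

static int erofs_check_stack_depth_sketch(struct super_block *sb,
					  struct super_block *backing_sb)
{
	/* One level deeper than the filesystem holding the backing file. */
	sb->s_stack_depth = backing_sb->s_stack_depth + 1;
	if (sb->s_stack_depth > FILESYSTEM_MAX_STACK_DEPTH)
		return -ELOOP;

	return 0;
}
```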
| In the Linux kernel, the following vulnerability has been resolved:
ksmbd: Fix resource leak in ksmbd_session_rpc_open()
When ksmbd_rpc_open() fails, ksmbd_session_rpc_open() must call
ksmbd_rpc_id_free() to undo the result of ksmbd_ipc_id_alloc(). |
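A hedged sketch of the error-path pairing: every successful ksmbd_ipc_id_alloc() is undone by ksmbd_rpc_id_free() when the open fails. The surrounding structure and return conventions are simplified for illustration:
```
static int session_rpc_open_sketch(struct ksmbd_session *sess, char *rpc_name)
{
	struct ksmbd_rpc_command *resp;
	int id;

	id = ksmbd_ipc_id_alloc();
	if (id < 0)
		return id;

	resp = ksmbd_rpc_open(sess, id);
	if (!resp) {
		ksmbd_rpc_id_free(id);	/* undo the allocation on failure */
		return -EINVAL;
	}

	kvfree(resp);
	return id;
}
```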
| In the Linux kernel, the following vulnerability has been resolved:
devlink: hold region lock when flushing snapshots
Netdevsim triggers a splat on reload, when it destroys regions
with snapshots pending:
WARNING: CPU: 1 PID: 787 at net/core/devlink.c:6291 devlink_region_snapshot_del+0x12e/0x140
CPU: 1 PID: 787 Comm: devlink Not tainted 6.1.0-07460-g7ae9888d6e1c #580
RIP: 0010:devlink_region_snapshot_del+0x12e/0x140
Call Trace:
<TASK>
devl_region_destroy+0x70/0x140
nsim_dev_reload_down+0x2f/0x60 [netdevsim]
devlink_reload+0x1f7/0x360
devlink_nl_cmd_reload+0x6ce/0x860
genl_family_rcv_msg_doit.isra.0+0x145/0x1c0
This is the locking assert in devlink_region_snapshot_del(),
we're supposed to be holding the region->snapshot_lock here. |
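A hedged sketch of the locking the assertion expects: take region->snapshot_lock around the snapshot flush performed while destroying the region. Names follow the devlink core loosely:
```
static void devlink_region_flush_snapshots_sketch(struct devlink_region *region)
{
	struct devlink_snapshot *snapshot, *tmp;

	mutex_lock(&region->snapshot_lock);
	list_for_each_entry_safe(snapshot, tmp, &region->snapshot_list, list)
		devlink_region_snapshot_del(region, snapshot);
	mutex_unlock(&region->snapshot_lock);
}
```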
| In the Linux kernel, the following vulnerability has been resolved:
clk: visconti: Fix memory leak in visconti_register_pll()
@pll->rate_table holds memory allocated by kmemdup(); if clk_hw_register()
fails, it must be freed, otherwise it causes a memory leak. This patch
fixes it. |
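A hedged sketch of the error path this describes: release the kmemdup()ed rate table (and the pll itself) when clk_hw_register() fails. Structure names approximate the visconti clock driver:
```
static struct clk_hw *visconti_register_pll_sketch(struct visconti_pll *pll,
						   struct device *dev)
{
	int ret;

	ret = clk_hw_register(dev, &pll->hw);
	if (ret) {
		kfree(pll->rate_table);	/* previously leaked on failure */
		kfree(pll);
		return ERR_PTR(ret);
	}

	return &pll->hw;
}
```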