var-202201-0496
Vulnerability from variot
An unprivileged write to the file handler flaw in the Linux kernel's control groups and namespaces subsystem was found in the way users can access certain less privileged processes that are controlled by cgroups and have a higher privileged parent process. The flaw affects both the cgroup1 and cgroup2 versions of control groups. A local user could use this flaw to crash the system or escalate their privileges on the system.

==========================================================================
Ubuntu Security Notice USN-5368-1
April 06, 2022
linux-azure-5.13, linux-oracle-5.13 vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 20.04 LTS
Summary:
Several security issues were fixed in the Linux kernel.
Software Description:
- linux-azure-5.13: Linux kernel for Microsoft Azure cloud systems
- linux-oracle-5.13: Linux kernel for Oracle Cloud systems
Details:
It was discovered that the BPF verifier in the Linux kernel did not properly restrict pointer types in certain situations. (CVE-2022-23222)
It was discovered that the network traffic control implementation in the Linux kernel contained a use-after-free vulnerability. (CVE-2022-1055)
Yiqi Sun and Kevin Wang discovered that the cgroups implementation in the Linux kernel did not properly restrict access to the cgroups v1 release_agent feature. (CVE-2022-0492)
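A quick way to see whether a host exposes the cgroup v1 release_agent files involved here is to check which cgroup hierarchy is mounted; this is an illustrative sketch, not part of the advisory:

$ mount -t cgroup2                                 # non-empty output means the unified cgroup v2 hierarchy is in use
$ ls /sys/fs/cgroup/*/release_agent 2>/dev/null    # cgroup v1 hierarchies expose release_agent at their root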
Jürgen Groß discovered that the Xen subsystem within the Linux kernel did not adequately limit the number of events driver domains (unprivileged PV backends) could send to other guest VMs. (CVE-2021-28711, CVE-2021-28712, CVE-2021-28713)
Jürgen Groß discovered that the Xen network backend driver in the Linux kernel did not adequately limit the amount of queued packets when a guest did not process them. An attacker in a guest VM can use this to cause a denial of service (excessive kernel memory consumption) in the network backend domain. (CVE-2021-28714, CVE-2021-28715)
Szymon Heidrich discovered that the USB Gadget subsystem in the Linux kernel did not properly restrict the size of control requests for certain gadget types, leading to possible out of bounds reads or writes. (CVE-2021-39698)
Eric Biederman discovered that the cgroup process migration implementation in the Linux kernel did not perform permission checks correctly in some situations. A local attacker could possibly use this to gain administrative privileges. (CVE-2021-4197)

It was discovered that the simulated networking device driver for the Linux kernel did not properly initialize memory in certain situations. (CVE-2021-4135)
Brendan Dolan-Gavitt discovered that the aQuantia AQtion Ethernet device driver in the Linux kernel did not properly validate meta-data coming from the device. (CVE-2021-43975)
It was discovered that the ARM Trusted Execution Environment (TEE) subsystem in the Linux kernel contained a race condition leading to a use-after-free vulnerability. (CVE-2021-45095)
It was discovered that the eBPF verifier in the Linux kernel did not properly perform bounds checking on mov32 operations. (CVE-2021-45402)
It was discovered that the Reliable Datagram Sockets (RDS) protocol implementation in the Linux kernel did not properly deallocate memory in some error conditions. (CVE-2021-45480)
It was discovered that the BPF subsystem in the Linux kernel did not properly track pointer types on atomic fetch operations in some situations. (CVE-2022-0264)
It was discovered that the TIPC Protocol implementation in the Linux kernel did not properly initialize memory in some situations. (CVE-2022-0382)
Samuel Page discovered that the Transparent Inter-Process Communication (TIPC) protocol implementation in the Linux kernel contained a stack-based buffer overflow. (CVE-2022-0435)
It was discovered that the KVM implementation for s390 systems in the Linux kernel did not properly prevent memory operations on PVM guests that were in non-protected mode. (CVE-2022-0516)
It was discovered that the ICMPv6 implementation in the Linux kernel did not properly deallocate memory in certain situations. (CVE-2022-27666)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 20.04 LTS:
  linux-image-5.13.0-1021-azure   5.13.0-1021.24~20.04.1
  linux-image-5.13.0-1025-oracle  5.13.0-1025.30~20.04.1
  linux-image-azure               5.13.0.1021.24~20.04.10
  linux-image-oracle              5.13.0.1025.30~20.04.1
After a standard system update you need to reboot your computer to make all the necessary changes.
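As an illustrative sketch of that standard update on Ubuntu 20.04 (not taken from the advisory; assumes sudo privileges):

$ sudo apt update                  # refresh package lists
$ sudo apt upgrade                 # installs the linux-image packages listed above
$ sudo reboot
$ uname -r                         # after reboot, verify the new kernel version is running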
ATTENTION: Due to an unavoidable ABI change the kernel updates have been given a new version number, which requires you to recompile and reinstall all third party kernel modules you might have installed. Unless you manually uninstalled the standard kernel metapackages (e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual, linux-powerpc), a standard system upgrade will automatically perform this as well.

The security impact is negligible as CAP_SYS_ADMIN inherently gives the ability to deny service.

Summary:
The Migration Toolkit for Containers (MTC) 1.6.5 is now available.

Description:
The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API.

Bugs fixed (https://bugzilla.redhat.com/):
2006044 - CVE-2021-39293 golang: archive/zip: malformed archive may cause panic or memory exhaustion (incomplete fix of CVE-2021-33196)
2007557 - CVE-2021-3807 nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes
2057579 - [MTC UI] Cancel button on "Migrations" page does not disappear when migration gets Failed/Succeeded with warnings
2072311 - HPAs of DeploymentConfigs are not being updated when migration from Openshift 3.x to Openshift 4.x
2074044 - [MTC] Rsync pods are not running as privileged
2074553 - Upstream Hook Runner image requires arguments be in a different order
-------------------------------------------------------------------------
Debian Security Advisory DSA-5173-1                   security@debian.org
https://www.debian.org/security/                           Ben Hutchings
July 03, 2022                         https://www.debian.org/security/faq
-------------------------------------------------------------------------
Package        : linux
CVE ID         : CVE-2021-4197 CVE-2022-0494 CVE-2022-0812 CVE-2022-0854
                 CVE-2022-1011 CVE-2022-1012 CVE-2022-1016 CVE-2022-1048
                 CVE-2022-1184 CVE-2022-1195 CVE-2022-1198 CVE-2022-1199
                 CVE-2022-1204 CVE-2022-1205 CVE-2022-1353 CVE-2022-1419
                 CVE-2022-1516 CVE-2022-1652 CVE-2022-1729 CVE-2022-1734
                 CVE-2022-1974 CVE-2022-1975 CVE-2022-2153 CVE-2022-21123
                 CVE-2022-21125 CVE-2022-21166 CVE-2022-23960 CVE-2022-26490
                 CVE-2022-27666 CVE-2022-28356 CVE-2022-28388 CVE-2022-28389
                 CVE-2022-28390 CVE-2022-29581 CVE-2022-30594 CVE-2022-32250
                 CVE-2022-32296 CVE-2022-33981
Debian Bug     : 922204 1006346 1013299
Several vulnerabilities have been discovered in the Linux kernel that may lead to a privilege escalation, denial of service or information leaks.
CVE-2021-4197
Eric Biederman reported that incorrect permission checks in the
cgroup process migration implementation can allow a local attacker
to escalate privileges.
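The corresponding fix, as the Red Hat advisory further below summarises it, is to "use open-time creds and namespace for migration perm checks": a migration happens when a process ID is written to a cgroup.procs file, and the patched kernels apply the permission check against the credentials that opened the file rather than those of the writer. A hedged sketch of the operation being guarded (the target cgroup path is a placeholder):

$ echo $$ > /sys/fs/cgroup/<target-cgroup>/cgroup.procs    # moving a task = writing its PID; this is the write the fix re-checks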
CVE-2022-0494
The scsi_ioctl() function was susceptible to an information leak only
exploitable by users with CAP_SYS_ADMIN or CAP_SYS_RAWIO
capabilities.
CVE-2022-0812
It was discovered that the RDMA transport for NFS (xprtrdma)
miscalculated the size of message headers, which could lead to a
leak of sensitive information between NFS servers and clients.
CVE-2022-0854
Ali Haider discovered a potential information leak in the DMA
subsystem. On systems where the swiotlb feature is needed, this
might allow a local user to read sensitive information.
CVE-2022-1011
Jann Horn discovered a flaw in the FUSE (Filesystem in User-Space)
implementation. A local user permitted to mount FUSE filesystems
could exploit this to cause a use-after-free and read sensitive
information.
CVE-2022-1012, CVE-2022-32296
Moshe Kol, Amit Klein, and Yossi Gilad discovered a weakness
in randomisation of TCP source port selection.
CVE-2022-1016
David Bouman discovered a flaw in the netfilter subsystem where
the nft_do_chain function did not initialize register data that
nf_tables expressions can read from and write to. A local attacker
can take advantage of this to read sensitive information.
CVE-2022-1048
Hu Jiahui discovered a race condition in the sound subsystem that
can result in a use-after-free.
CVE-2022-1184
A flaw was discovered in the ext4 filesystem driver which can lead
to a use-after-free. A local user permitted to mount arbitrary
filesystems could exploit this to cause a denial of service (crash
or memory corruption) or possibly for privilege escalation.
CVE-2022-1195
Lin Ma discovered race conditions in the 6pack and mkiss hamradio
drivers, which could lead to a use-after-free.
CVE-2022-1198
Duoming Zhou discovered a race condition in the 6pack hamradio
driver, which could lead to a use-after-free.
CVE-2022-1199, CVE-2022-1204, CVE-2022-1205
Duoming Zhou discovered race conditions in the AX.25 hamradio
protocol, which could lead to a use-after-free or null pointer
dereference.
CVE-2022-1353
The TCS Robot tool found an information leak in the PF_KEY
subsystem. A local user can receive a netlink message when an
IPsec daemon registers with the kernel, and this could include
sensitive information.
CVE-2022-1419
Minh Yuan discovered a race condition in the vgem virtual GPU
driver that can lead to a use-after-free. A local user permitted
to access the GPU device can exploit this to cause a denial of
service (crash or memory corruption) or possibly for privilege
escalation.
CVE-2022-1516
A NULL pointer dereference flaw was found in the implementation of
the X.25 set of standardized network protocols, which can result in
denial of service.
This driver is not enabled in Debian's official kernel
configurations.
CVE-2022-1652
Minh Yuan discovered a race condition in the floppy driver that
can lead to a use-after-free. A local user permitted to access a
floppy drive device can exploit this to cause a denial of service
(crash or memory corruption) or possibly for privilege escalation.
CVE-2022-1729
Norbert Slusarek discovered a race condition in the perf subsystem
which could result in local privilege escalation to root. The
default settings in Debian prevent exploitation unless more
permissive settings have been applied in the
kernel.perf_event_paranoid sysctl.
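The sysctl mentioned above can be inspected and tightened at runtime; an illustrative check rather than part of the advisory (higher values are more restrictive):

$ sysctl kernel.perf_event_paranoid                 # print the current setting
$ sudo sysctl -w kernel.perf_event_paranoid=3       # example: disallow all unprivileged use of perf events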
CVE-2022-1734
Duoming Zhou discovered race conditions in the nfcmrvl NFC driver
that could lead to a use-after-free, double-free or null pointer
dereference.
This driver is not enabled in Debian's official kernel
configurations.
CVE-2022-1974, CVE-2022-1975
Duoming Zhou discovered that the NFC netlink interface was
susceptible to denial of service.
CVE-2022-2153
"kangel" reported a flaw in the KVM implementation for x86
processors which could lead to a null pointer dereference.
CVE-2022-21123, CVE-2022-21125, CVE-2022-21166
Various researchers discovered flaws in Intel x86 processors,
collectively referred to as MMIO Stale Data vulnerabilities.
These are similar to the previously published Microarchitectural
Data Sampling (MDS) issues and could be exploited by local users
to leak sensitive information.
For some CPUs, the mitigations for these issues require updated
microcode. An updated intel-microcode package may be provided at
a later date. The updated CPU microcode may also be available as
part of a system firmware ("BIOS") update.
Further information on the mitigation can be found at
<https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html>
or in the linux-doc-4.19 package.
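On kernels carrying the mitigation, the per-CPU-bug state can also be read directly from sysfs; an illustrative check, not part of the advisory text:

$ cat /sys/devices/system/cpu/vulnerabilities/mmio_stale_data    # reports "Not affected", "Vulnerable", or the active mitigation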
CVE-2022-23960
Researchers at VUSec discovered that the Branch History Buffer in
Arm processors can be exploited to create information side-
channels with speculative execution. This issue is similar to
Spectre variant 2, but requires additional mitigations on some
processors.
This was previously mitigated for 32-bit Arm (armel and armhf)
architectures and is now also mitigated for 64-bit Arm (arm64).
This can be exploited to obtain sensitive information from a
different security context, such as from user-space to the kernel,
or from a KVM guest to the kernel.
CVE-2022-26490
Buffer overflows in the STMicroelectronics ST21NFCA core driver
can result in denial of service or privilege escalation.
This driver is not enabled in Debian's official kernel
configurations.
CVE-2022-27666
"valis" reported a possible buffer overflow in the IPsec ESP
transformation code.
CVE-2022-28356
"Beraphin" discovered that the ANSI/IEEE 802.2 LLC type 2 driver did
not properly perform reference counting on some error paths.
CVE-2022-28388
A double free vulnerability was discovered in the 8 devices
USB2CAN interface driver.
CVE-2022-28389
A double free vulnerability was discovered in the Microchip CAN
BUS Analyzer interface driver.
CVE-2022-28390
A double free vulnerability was discovered in the EMS CPC-USB/ARM7
CAN/USB interface driver.
CVE-2022-29581
Kyle Zeng discovered a reference-counting bug in the cls_u32
network classifier which can lead to a use-after-free.
CVE-2022-30594
Jann Horn discovered a flaw in the interaction between ptrace and
seccomp subsystems. A process sandboxed using seccomp() but still
permitted to use ptrace() could exploit this to remove the seccomp
restrictions.
CVE-2022-32250
Aaron Adams discovered a use-after-free in Netfilter which may
result in local privilege escalation to root.
CVE-2022-33981
Yuan Ming from Tsinghua University reported a race condition in
the floppy driver involving use of the FDRAWCMD ioctl, which could
lead to a use-after-free. A local user with access to a floppy
drive device could exploit this to cause a denial of service
(crash or memory corruption) or possibly for privilege escalation.
This ioctl is now disabled by default.
For the oldstable distribution (buster), these problems have been fixed in version 4.19.249-2.
Due to an issue in the signing service (Cf. Debian bug #1012741), the vport-vxlan module cannot be loaded for the signed kernel for amd64 in this update.
This update also corrects a regression in the network scheduler subsystem (bug #1013299).
For the 32-bit Arm (armel and armhf) architectures, this update enables optimised implementations of several cryptographic and CRC algorithms. For at least AES, this should remove a timing side-channel that could lead to a leak of sensitive information.
This update includes many more bug fixes from stable updates 4.19.236-4.19.249 inclusive, including for bug #1006346. The random driver has been backported from Linux 5.19, fixing numerous performance and correctness issues. Some changes will be visible:
- The entropy pool size is now 256 bits instead of 4096. You may need to adjust the configuration of system monitoring or user-space entropy gathering services to allow for this.

- On systems without a hardware RNG, the kernel may log more uses of /dev/urandom before it is fully initialised. These uses were previously under-counted and this is not a regression.
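The first of these changes can be confirmed after booting the updated kernel; a minimal check, assuming the usual procfs layout:

$ cat /proc/sys/kernel/random/poolsize    # 256 with the backported driver, 4096 before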
We recommend that you upgrade your linux packages.
For the detailed security status of linux please refer to its security tracker page at: https://security-tracker.debian.org/tracker/linux
Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/
Mailing list: debian-security-announce@lists.debian.org
Demi Marie Obenour and Simon Gaiser discovered that several Xen para-virtualization device frontends did not properly restrict the access rights of device backends.

Summary:
Red Hat OpenShift Container Platform release 4.10.25 is now available with updates to packages and images that fix several bugs and add enhancements.
This release includes a security update for Red Hat OpenShift Container Platform 4.10.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.10.25. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHSA-2022:5729
Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html
Security Fix(es):
- golang: encoding/pem: fix stack overflow in Decode (CVE-2022-24675)
- golang: regexp: stack exhaustion via a deeply nested expression (CVE-2022-24921)
- golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString (CVE-2022-23772)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.25-x86_64
The image digest is sha256:ed84fb3fbe026b3bbb4a2637ddd874452ac49c6ead1e15675f257e28664879cc
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.25-s390x
The image digest is sha256:a151628743b643e8ceda09dbd290aa4ac2787fc519365603a5612cb4d379d8e3
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.25-ppc64le
The image digest is sha256:5ee9476628f198cdadd8f7afe6f117e8102eaafba8345e95d2f479c260eb0574
All OpenShift Container Platform 4.10 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html
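A minimal sketch of the CLI path (the target version here mirrors this release; pick the one offered in your channel):

$ oc adm upgrade                   # show the update status and available versions
$ oc adm upgrade --to=4.10.25      # request the update to this release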
- Solution:
For OpenShift Container Platform 4.10 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString
2060058 - superfluous apirequestcount entries in audit log
2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression
2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode
2079034 - [4.10] Openshift Container Platform - Ingress Controller does not set allowPrivilegeEscalation in the router deployment
2094584 - VM with sysprep is failed to create
2095217 - VM SSH command generated by UI points at api VIP
2095319 - [4.10] Bootimage bump tracker
2098655 - gcp cluster rollback fails due to storage failure
2099526 - prometheus-adapter becomes inaccessible during rollout
2100894 - Possible to cause misconfiguration of container runtime soon after cluster creation
2100974 - Layout issue: No spacing in delete modals
2103175 - disabling ipv6 router advertisements using "all" does not disable it on secondary interfaces
2105110 - [VPA] recommender is logging errors for pods with init containers
2105275 - NodeIP is used instead of EgressIP
2105653 - egressIP panics with nil pointer dereference
2106385 - the cronjob object is created with a wrong api version batch/v1beta1 when created via the openshift console
2106842 - In CI 4.10 HAProxy must-gather takes longer than 10 minutes
2107276 - The ccoctl does not seem to know how to leverage the VMs service account to talk to GCP APIs.
2109125 - [4.10 Backport] Spoke BMH stuck "inspecting" when deployed via ZTP in 4.11 OCP hub
2109225 - Console 4.10 operand form refresh
2109235 - openshift-apiserver pods never going NotReady
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.

--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce

Relevant releases/architectures:
Red Hat Enterprise Linux Real Time EUS (v.8.4) - x86_64 Red Hat Enterprise Linux Real Time for NFV EUS (v.8.4) - x86_64
- Description:
The kernel-rt packages provide the Real Time Linux Kernel, which enables fine-tuning for systems with extremely high determinism requirements.
Security Fix(es):
* kernel: Small table perturb size in the TCP source port generation algorithm can lead to information leak (CVE-2022-1012)

* kernel: race condition in perf_event_open leads to privilege escalation (CVE-2022-1729)

* kernel: a use-after-free write in the netfilter subsystem can lead to privilege escalation to root (CVE-2022-32250)

* kernel: cgroup: Use open-time creds and namespace for migration perm checks (CVE-2021-4197)

* kernel: Race condition in races in sk_peer_pid and sk_peer_cred accesses (CVE-2021-4203)

* kernel: the copy-on-write implementation can grant unintended write access because of a race condition in a THP mapcount check (CVE-2020-29368)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Bug Fix(es):
* kernel-rt: update RT source tree to the RHEL-8.4.z10 source tree (BZ#2087922)

- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
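As an illustrative outline of that procedure for these kernel-rt packages (not taken from the advisory):

$ sudo yum update kernel-rt        # installs the 4.18.0-305.57.1.rt7.129.el8_4 packages
$ sudo reboot
$ uname -r                         # confirm the new kernel-rt version is running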
The system must be rebooted for this update to take effect.

Bugs fixed (https://bugzilla.redhat.com/):
1903244 - CVE-2020-29368 kernel: the copy-on-write implementation can grant unintended write access because of a race condition in a THP mapcount check
2035652 - CVE-2021-4197 kernel: cgroup: Use open-time creds and namespace for migration perm checks
2036934 - CVE-2021-4203 kernel: Race condition in races in sk_peer_pid and sk_peer_cred accesses
2064604 - CVE-2022-1012 kernel: Small table perturb size in the TCP source port generation algorithm can lead to information leak
2086753 - CVE-2022-1729 kernel: race condition in perf_event_open leads to privilege escalation
2092427 - CVE-2022-32250 kernel: a use-after-free write in the netfilter subsystem can lead to privilege escalation to root
- Package List:
Red Hat Enterprise Linux Real Time for NFV EUS (v.8.4):
Source: kernel-rt-4.18.0-305.57.1.rt7.129.el8_4.src.rpm
x86_64:
kernel-rt-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm
kernel-rt-core-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm
kernel-rt-debug-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm
kernel-rt-debug-core-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm
kernel-rt-debug-debuginfo-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm
kernel-rt-debug-devel-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm
kernel-rt-debug-kvm-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm
kernel-rt-debug-modules-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm
kernel-rt-debug-modules-extra-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm
kernel-rt-debuginfo-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm
kernel-rt-debuginfo-common-x86_64-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm
kernel-rt-devel-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm
kernel-rt-kvm-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm
kernel-rt-modules-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm
kernel-rt-modules-extra-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm
Red Hat Enterprise Linux Real Time EUS (v.8.4):
Source: kernel-rt-4.18.0-305.57.1.rt7.129.el8_4.src.rpm
x86_64:
kernel-rt-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm
kernel-rt-core-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm
kernel-rt-debug-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm
kernel-rt-debug-core-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm
kernel-rt-debug-debuginfo-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm
kernel-rt-debug-devel-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm
kernel-rt-debug-modules-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm
kernel-rt-debug-modules-extra-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm
kernel-rt-debuginfo-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm
kernel-rt-debuginfo-common-x86_64-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm
kernel-rt-devel-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm
kernel-rt-modules-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm
kernel-rt-modules-extra-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
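For example, a downloaded package's signature can be verified locally once the key is imported; the key path below is the usual RHEL location and is an assumption, not advisory text:

$ sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release    # assumed key location; see the URL above
$ rpm -K kernel-rt-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm        # reports the digest and signature status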
- Summary:
Red Hat Advanced Cluster Management for Kubernetes 2.5.0 is now generally available.

Description:
Red Hat Advanced Cluster Management for Kubernetes 2.5.0 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/
Security fixes:
* nodejs-json-schema: Prototype pollution vulnerability (CVE-2021-3918)

* containerd: Unprivileged pod may bind mount any privileged regular file on disk (CVE-2021-43816)

* minio: user privilege escalation in AddUser() admin API (CVE-2021-43858)

* openssl: Infinite loop in BN_mod_sqrt() reachable when parsing certificates (CVE-2022-0778)

* imgcrypt: Unauthorized access to encrypted container image on a shared system due to missing check in CheckAuthorization() code path (CVE-2022-24778)

* golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)

* node-fetch: exposure of sensitive information to an unauthorized actor (CVE-2022-0235)

* nconf: Prototype pollution in memory store (CVE-2022-21803)

* golang: crypto/elliptic IsOnCurve returns true for invalid field elements (CVE-2022-23806)

* nats-server: misusing the "dynamically provisioned sandbox accounts" feature authenticated user can obtain the privileges of the System account (CVE-2022-24450)

* Moment.js: Path traversal in moment.locale (CVE-2022-24785)

* golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)

* go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local users (CVE-2022-29810)

* opencontainers: OCI manifest and index parsing confusion (CVE-2021-41190)
Bug fixes:
* RFE Copy secret with specific secret namespace, name for source and name, namespace and cluster label for target (BZ# 2014557)

* RHACM 2.5.0 images (BZ# 2024938)

* [UI] When you delete host agent from infraenv no confirmation message appear (Are you sure you want to delete x?) (BZ#2028348)

* Clusters are in 'Degraded' status with upgrade env due to obs-controller not working properly (BZ# 2028647)

* create cluster pool -> choose infra type, As a result infra providers disappear from UI. (BZ# 2033339)

* Restore/backup shows up as Validation failed but the restore backup status in ACM shows success (BZ# 2034279)

* Observability - OCP 311 node role are not displayed completely (BZ# 2038650)

* Documented uninstall procedure leaves many leftovers (BZ# 2041921)

* infrastructure-operator pod crashes due to insufficient privileges in ACM 2.5 (BZ# 2046554)

* Acm failed to install due to some missing CRDs in operator (BZ# 2047463)

* Navigation icons no longer showing in ACM 2.5 (BZ# 2051298)

* ACM home page now includes /home/ in url (BZ# 2051299)

* proxy heading in Add Credential should be capitalized (BZ# 2051349)

* ACM 2.5 tries to create new MCE instance when install on top of existing MCE 2.0 (BZ# 2051983)

* Create Policy button does not work and user cannot use console to create policy (BZ# 2053264)

* No cluster information was displayed after a policyset was created (BZ# 2053366)

* Dynamic plugin update does not take effect in Firefox (BZ# 2053516)

* Replicated policy should not be available when creating a Policy Set (BZ# 2054431)

* Placement section in Policy Set wizard does not reset when users click "Back" to re-configure placement (BZ# 2054433)

- Bugs fixed (https://bugzilla.redhat.com/):
2014557 - RFE Copy secret with specific secret namespace, name for source and name, namespace and cluster label for target
2024702 - CVE-2021-3918 nodejs-json-schema: Prototype pollution vulnerability
2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion
2028224 - RHACM 2.5.0 images
2028348 - [UI] When you delete host agent from infraenv no confirmation message appear (Are you sure you want to delete x?)
2028647 - Clusters are in 'Degraded' status with upgrade env due to obs-controller not working properly
2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic
2033339 - create cluster pool -> choose infra type, As a result infra providers disappear from UI.
2034279 - Restore/backup shows up as Validation failed but the restore backup status in ACM shows success
2036252 - CVE-2021-43858 minio: user privilege escalation in AddUser() admin API
2038650 - Observability - OCP 311 node role are not displayed completely
2041921 - Documented uninstall procedure leaves many leftovers
2044434 - CVE-2021-43816 containerd: Unprivileged pod may bind mount any privileged regular file on disk
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2046554 - infrastructure-operator pod crashes due to insufficient privileges in ACM 2.5
2047463 - Acm failed to install due to some missing CRDs in operator
2051298 - Navigation icons no longer showing in ACM 2.5
2051299 - ACM home page now includes /home/ in url
2051349 - proxy heading in Add Credential should be capitalized
2051983 - ACM 2.5 tries to create new MCE instance when install on top of existing MCE 2.0
2052573 - CVE-2022-24450 nats-server: misusing the "dynamically provisioned sandbox accounts" feature authenticated user can obtain the privileges of the System account
2053264 - Create Policy button does not work and user cannot use console to create policy
2053366 - No cluster information was displayed after a policyset was created
2053429 - CVE-2022-23806 golang: crypto/elliptic IsOnCurve returns true for invalid field elements
2053516 - Dynamic plugin update does not take effect in Firefox
2054431 - Replicated policy should not be available when creating a Policy Set
2054433 - Placement section in Policy Set wizard does not reset when users click "Back" to re-configure placement
2054772 - credentialName is not parsed correctly in UI notifications/alerts when creating/updating a discovery config
2054860 - Cluster overview page crashes for on-prem cluster
2055333 - Unable to delete assisted-service operator
2055900 - If MCH is installed on existing MCE and both are in multicluster-engine namespace , uninstalling MCH terminates multicluster-engine namespace
2056485 - [UI] In infraenv detail the host list don't have pagination
2056701 - Non platform install fails agentclusterinstall CRD is outdated in rhacm2.5
2057060 - [CAPI] Unable to create ClusterDeployment due to service account restrictions (ACM + Bundled Assisted)
2058435 - Label cluster.open-cluster-management.io/backup-cluster stamped 'unknown' for velero backups
2059779 - spec.nodeSelector is missing in MCE instance created by MCH upon installing ACM on infra nodes
2059781 - Policy UI crashes when viewing details of configuration policies for backupschedule that does not exist
2060135 - [assisted-install] agentServiceConfig left orphaned after uninstalling ACM
2060151 - Policy set of the same name cannot be re-created after the previous one has been deleted
2060230 - [UI] Delete host modal has incorrect host's name populated
2060309 - multiclusterhub stuck in installing on "ManagedClusterConditionAvailable" [intermittent]
2060469 - The development branch of the Submariner addon deploys 0.11.0, not 0.12.0
2060550 - MCE installation hang due to no console-mce-console deployment available
2060603 - prometheus doesn't display managed clusters
2060831 - Observability - prometheus-operator failed to start on KS
2060934 - Cannot provision AWS OCP 4.9 cluster from Power Hub
2061260 - The value of the policyset placement should be filtered space when input cluster label expression
2061311 - Cleanup of installed spoke clusters hang on deletion of spoke namespace
2061659 - the network section in create cluster -> Networking include the brace in the network title
2061798 - [ACM 2.5] The service of Cluster Proxy addon was missing
2061838 - ACM component subscriptions are removed when enabling spec.disableHubSelfManagement in MCH
2062009 - No name validation is performed on Policy and Policy Set Wizards
2062022 - cluster.open-cluster-management.io/backup-cluster of velero schedules should populate the corresponding hub clusterID
2062025 - No validation is done on yaml's format or content in Policy and Policy Set wizards
2062202 - CVE-2022-0778 openssl: Infinite loop in BN_mod_sqrt() reachable when parsing certificates
2062337 - velero schedules get re-created after the backupschedule is in 'BackupCollision' phase
2062462 - Upgrade to 2.5 hang due to irreconcilable errors of grc-sub and search-prod-sub in MCH
2062556 - Always return the policyset page after created the policy from UI
2062787 - Submariner Add-on UI does not indicate on Broker error
2063055 - User with cluserrolebinding of open-cluster-management:cluster-manager-admin role can't see policies and clusters page
2063341 - Release imagesets are missing in the console for ocp 4.10
2063345 - Application Lifecycle- UI shows white blank page when the page is Refreshed
2063596 - claim clusters from clusterpool throws errors
2063599 - Update the message in clusterset -> clusterpool page since we did not allow to add clusterpool to clusterset by resourceassignment
2063697 - Observability - MCOCR reports object-storage secret without AWS access_key in STS enabled env
2064231 - Can not clean the instance type for worker pool when create the clusters
2064247 - prefer UI can add the architecture type when create the cluster
2064392 - multicloud oauth-proxy failed to log users in on web
2064477 - Click at "Edit Policy" for each policy leads to a blank page
2064509 - No option to view the ansible job details and its history in the Automation wizard after creation of the automation job
2064516 - Unable to delete an automation job of a policy
2064528 - Columns of Policy Set, Status and Source on Policy page are not sortable
2064535 - Different messages on the empty pages of Overview and Clusters when policy is disabled
2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server
2064722 - [Tracker] [DR][ACM 2.5] Applications are not getting deployed on managed cluster
2064899 - Failed to provision openshift 4.10 on bare metal
2065436 - "Filter" drop-down list does not show entries of the policies that have no top-level remediation specified
2066198 - Issues about disabled policy from UI
2066207 - The new created policy should be always shown up on the first line
2066333 - The message was confuse when the cluster status is Running
2066383 - MCE install failing on proxy disconnected environment
2066433 - Logout not working for ACM 2.5
2066464 - console-mce-console pods throw ImagePullError after upgrading to ocp 4.10
2066475 - User with view-only rolebinding should not be allowed to create policy, policy set and automation job
2066544 - The search box can't work properly in Policies page
2066594 - RFE: Can't open the helm source link of the backup-restore-enabled policy from UI
2066650 - minor issues in cluster curator due to the startup throws errors
2066751 - the image repo of application-manager did not updated to use the image repo in MCE/MCH configuration
2066834 - Hibernating cluster(s) in cluster pool stuck in 'Stopping' status after restore activation
2066842 - cluster pool credentials are not backed up
2066914 - Unable to remove cluster value during configuration of the label expressions for policy and policy set
2066940 - Validation fired out for https proxy when the link provided not starting with https
2066965 - No message is displayed in Policy Wizard to indicate a policy externally managed
2066979 - MIssing groups in policy filter options comparing to previous RHACM version
2067053 - I was not able to remove the image mirror content when create the cluster
2067067 - Can't filter the cluster info when clicked the cluster in the Placement section
2067207 - Bare metal asset secrets are not backed up
2067465 - Categories,Standards, and Controls annotations are not updated after user has deleted a selected template
2067713 - Columns on policy's "Results" are not sort-able as in previous release
2067728 - Can't search in the policy creation or policyset creation Yaml editor
2068304 - Application Lifecycle- Replicasets arent showing the logs console in Topology
2068309 - For policy wizard in dynamics plugin environment, buttons at the bottom should be sticky and the contents of the Policy should scroll
2068312 - Application Lifecycle - Argo Apps are not showing overview details and topology after upgrading from 2.4
2068313 - Application Lifecycle - Refreshing overview page leads to a blank page
2068328 - A cluster's "View history" page should not contain all clusters' violations history
2068387 - Observability - observability operator always CrashLoopBackOff in FIPS upgrading hub
2068993 - Observability - Node list is not filtered according to nodeType on OCP 311 dashboard
2069329 - config-policy-controller addon with "Unknown" status in OCP 3.11 managed cluster after upgrade hub to 2.5
2069368 - CVE-2022-24778 imgcrypt: Unauthorized access to encrypted container image on a shared system due to missing check in CheckAuthorization() code path
2069469 - Status of unreachable clusters is not reported in several places on GRC panels
2069615 - The YAML editor can't work well when login UI using dynamic console plugin
2069622 - No validation for policy template's name
2069698 - After claim a cluster from clusterpool, the cluster pages become very very slow
2069867 - Error occurs when trying to edit an application set/subscription
2069870 - ACM/MCE Dynamic Plugins - 404: Page Not Found Error Occurs - intermittent crashing
2069875 - Cluster secrets are not being created in the managed cluster's namespace
2069895 - Application Lifecycle - Replicaset and Pods gives error messages when Yaml is selected on sidebar
2070203 - Blank Application is shown when editing an Application with AnsibleJobs
2070782 - Failed Secret Propagation to the Same Namespace as the AnsibleJob CR
2070846 - [ACM 2.5] Can't re-add the default clusterset label after removing it from a managedcluster on BM SNO hub
2071066 - Policy set details panel does not work when deployed into namespace different than "default"
2071173 - Configured RunOnce automation job is not displayed although the policy has no violation
2071191 - MIssing title on details panel after clicking "view details" of a policy set card
2071769 - Placement must be always configured or error is reported when creating a policy
2071818 - ACM logo not displayed in About info modal
2071869 - Topology includes the status of local cluster resources when Application is only deployed to managed cluster
2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale
2072097 - Local Cluster is shown as Remote on the Application Overview Page and Single App Overview Page
2072104 - Inconsistent "Not Deployed" Icon Used Between 2.4 and 2.5 as well as the Overview and Topology
2072177 - Cluster Resource Status is showing App Definition Statuses as well
2072227 - Sidebar Statuses Need to Be Updated to Reflect Cluster List and Cluster Resource Statuses
2072231 - Local Cluster not included in the appsubreport for Helm Applications Deployed on All Clusters
2072334 - Redirect URL is now to the details page after created a policy
2072342 - Shows "NaN%" in the ring chart when add the disabled policy into policyset and view its details
2072350 - CRD Deployed via Application Console does not have correct deployment status and spelling
2072359 - Report the error when editing compliance type in the YAML editor and then submit the changes
2072504 - The policy has violations on the failed managed cluster
2072551 - URL dropdown is not being rendered with an Argo App with a new URL
2072773 - When a channel is deleted and recreated through the App Wizard, application creation stalls and warning pops up
2072824 - The edit/delete policyset button should be greyed when using viewer check
2072829 - When Argo App with jsonnet object is deployed, topology and cluster status would fail to display the correct statuses.
2073179 - Policy controller was unable to retrieve violation status in for an OCP 3.11 managed cluster on ARM hub
2073330 - Observability - memory usage data are not collected even collect rule is fired on SNO
2073355 - Get blank page when click policy with unknown status in Governance -> Overview page
2073508 - Thread responsible to get insights data from ks clusters is broken
2073557 - appsubstatus is not deleted for Helm applications when changing between 2 managed clusters
2073726 - Placement of First Subscription gets overlapped by the Cluster Node in Application Topology
2073739 - Console/App LC - Error message saying resource conflict only shows up in standalone ACM but not in Dynamic plugin
2073740 - Console/App LC- Apps are deployed even though deployment do not proceed because of "resource conflict" error
2074178 - Editing Helm Argo Applications does not Prune Old Resources
2074626 - Policy placement failure during ZTP SNO scale test
2074689 - CVE-2022-21803 nconf: Prototype pollution in memory store
2074803 - The import cluster YAML editor shows the klusterletaddonconfig was required on MCE portal
2074937 - UI allows creating cluster even when there are no ClusterImageSets
2075416 - infraEnv failed to create image after restore
2075440 - The policyreport CR is created for spoke clusters until restarted the insights-client pod
2075739 - The lookup function won't check the referred resource whether exist when using template policies
2076421 - Can't select existing placement for policy or policyset when editing policy or policyset
2076494 - No policyreport CR for spoke clusters generated in the disconnected env
2076502 - The policyset card doesn't show the cluster status(violation/without violation) again after deleted one policy
2077144 - GRC Ansible automation wizard does not display error of missing dependent Ansible Automation Platform operator
2077149 - App UI shows no clusters cluster column of App Table when Discovery Applications is deployed to a managed cluster
2077291 - Prometheus doesn't display acm_managed_cluster_info after upgrade from 2.4 to 2.5
2077304 - Create Cluster button is disabled only if other clusters exist
2077526 - ACM UI is very very slow after upgrade from 2.4 to 2.5
2077562 - Console/App LC- Helm and Object bucket applications are not showing as deployed in the UI
2077751 - Can't create a template policy from UI when the object's name is referring Golang text template syntax in this policy
2077783 - Still show violation for clusterserviceversions after enforced "Detect Image vulnerabilities " policy template and the operator is installed
2077951 - Misleading message indicated that a placement of a policy became one managed only by policy set
2078164 - Failed to edit a policy without placement
2078167 - Placement binding and rule names are not created in yaml when editing a policy previously created with no placement
2078373 - Disable the hyperlink of *ks node in standalone MCE environment since the search component was not exists
2078617 - Azure public credential details get pre-populated with base domain name in UI
2078952 - View pod logs in search details returns error
2078973 - Crashed pod is marked with success in Topology
2079013 - Changing existing placement rules does not change YAML file
2079015 - Uninstall pod crashed when destroying Azure Gov cluster in ACM
2079421 - Hyphen(s) is deleted unexpectedly in UI when yaml is turned on
2079494 - Hitting Enter in yaml editor caused unexpected keys "key00x:" to be created
2079533 - Clusters with no default clusterset do not get assigned default cluster when upgrading from ACM 2.4 to 2.5
2079585 - When an Ansible Secret is propagated to an Ansible Application namespace, the propagated secret is shown in the Credentials page
2079611 - Edit appset placement in UI with a different existing placement causes the current associated placement being deleted
2079615 - Edit appset placement in UI with a new placement throws error upon submitting
2079658 - Cluster Count is Incorrect in Application UI
2079909 - Wrong message is displayed when GRC fails to connect to an ansible tower
2080172 - Still create policy automation successfully when the PolicyAutomation name exceed 63 characters
2080215 - Get a blank page after go to policies page in upgraded env when using an user with namespace-role-binding of default view role
2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local users
2080503 - vSphere network name doesn't allow entering spaces and doesn't reflect YAML changes
2080567 - Number of cluster in violation in the table does not match other cluster numbers on the policy set details page
2080712 - Select an existing placement configuration does not work
2080776 - Unrecognized characters are displayed on policy and policy set yaml editors
2081792 - When deploying an application to a clusterpool claimed cluster after upgrade, the application does not get deployed to the cluster
2081810 - Type '-' character in Name field caused previously typed character backspaced in in the name field of policy wizard
2081829 - Application deployed on local cluster's topology is crashing after upgrade
2081938 - The deleted policy still be shown on the policyset review page when edit this policy set
2082226 - Object Storage Topology includes residue of resources after Upgrade
2082409 - Policy set details panel remains even after the policy set has been deleted
2082449 - The hypershift-addon-agent deployment did not have imagePullSecrets
2083038 - Warning still refers to the klusterlet-addon-appmgr pod rather than the application-manager pod
2083160 - When editing a helm app with failing resources to another, the appsubstatus and the managedclusterview do not get updated
2083434 - The provider-credential-controller did not support the RHV credentials type
2083854 - When deploying an application with ansiblejobs multiple times with different namespaces, the topology shows all the ansiblejobs rather than just the one within the namespace
2083870 - When editing an existing application and refreshing the Select an existing placement configuration, multiple occurrences of the placementrule gets displayed
2084034 - The status message looks messy in the policy set card, suggest one kind status one a row
2084158 - Support provisioning bm cluster where no provisioning network provided
2084622 - Local Helm application shows cluster resources as Not Deployed in Topology [Upgrade]
2085083 - Policies fail to copy to cluster namespace after ACM upgrade
2085237 - Resources referenced by a channel are not annotated with backup label
2085273 - Error querying for ansible job in app topology
2085281 - Template name error is reported but the template name was found in a different replicated policy
2086389 - The policy violations for hibernated cluster still be displayed on the policy set details page
2087515 - Validation thrown out in configuration for disconnect install while creating bm credential
2088158 - Object Storage Application deployed to all clusters is showing unemployed in topology [Upgrade]
2088511 - Some cluster resources are not showing labels that are defined in the YAML
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202201-0496", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "kernel", "scope": "lt", "trust": 1.0, "vendor": "linux", "version": "5.15.14" }, { "model": "kernel", "scope": "gte", "trust": 1.0, "vendor": "linux", "version": "4.15" }, { "model": "h410c", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "kernel", "scope": "lt", "trust": 1.0, "vendor": "linux", "version": "5.4.189" }, { "model": "h300s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h410s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "kernel", "scope": "gte", "trust": 1.0, "vendor": "linux", "version": "5.11" }, { "model": "h500s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h700s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "kernel", "scope": "gte", "trust": 1.0, "vendor": "linux", "version": "4.20" }, { "model": "communications cloud native core binding support function", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "22.1.1" }, { "model": "kernel", "scope": "lt", "trust": 1.0, "vendor": "linux", "version": "4.14.276" }, { "model": "kernel", "scope": "gte", "trust": 1.0, "vendor": "linux", "version": "5.5" }, { "model": "kernel", "scope": "gte", "trust": 1.0, "vendor": "linux", "version": "4.2" }, { "model": "kernel", "scope": "lt", "trust": 1.0, "vendor": "linux", "version": "5.10.111" }, { "model": "kernel", "scope": "lt", "trust": 1.0, "vendor": "linux", "version": "4.19.238" }, { "model": "communications cloud native core binding support function", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "22.2.0" }, { "model": "communications cloud native core binding support function", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "22.1.3" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "10.0" }, { "model": 
"brocade fabric operating system", "scope": "eq", "trust": 1.0, "vendor": "broadcom", "version": null } ], "sources": [ { "db": "NVD", "id": "CVE-2021-4197" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "167330" }, { "db": "PACKETSTORM", "id": "167952" }, { "db": "PACKETSTORM", "id": "167822" }, { "db": "PACKETSTORM", "id": "167459" } ], "trust": 0.4 }, "cve": "CVE-2021-4197", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "accessComplexity": "LOW", "accessVector": "LOCAL", "authentication": "NONE", "author": "nvd@nist.gov", "availabilityImpact": "COMPLETE", "baseScore": 7.2, "confidentialityImpact": "COMPLETE", "exploitabilityScore": 3.9, "id": "CVE-2021-4197", "impactScore": 10.0, "integrityImpact": "COMPLETE", "severity": "HIGH", "trust": 1.1, "vectorString": "AV:L/AC:L/Au:N/C:C/I:C/A:C", "version": "2.0" }, { "accessComplexity": "LOW", "accessVector": "LOCAL", "authentication": "NONE", "author": "VULHUB", "availabilityImpact": "COMPLETE", "baseScore": 7.2, "confidentialityImpact": "COMPLETE", "exploitabilityScore": 3.9, "id": "VHN-410862", "impactScore": 10.0, "integrityImpact": "COMPLETE", "severity": "HIGH", "trust": 0.1, "vectorString": "AV:L/AC:L/AU:N/C:C/I:C/A:C", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "LOCAL", "author": "nvd@nist.gov", "availabilityImpact": "HIGH", "baseScore": 7.8, "baseSeverity": "HIGH", "confidentialityImpact": "HIGH", "exploitabilityScore": 1.8, "id": "CVE-2021-4197", "impactScore": 5.9, "integrityImpact": "HIGH", "privilegesRequired": "LOW", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H", "version": "3.1" } ], "severity": [ { "author": "nvd@nist.gov", "id": "CVE-2021-4197", "trust": 1.0, "value": "HIGH" }, { "author": "VULHUB", "id": "VHN-410862", "trust": 0.1, "value": "HIGH" }, { "author": "VULMON", "id": "CVE-2021-4197", "trust": 0.1, "value": "HIGH" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-410862" }, { "db": "VULMON", "id": "CVE-2021-4197" }, { "db": "NVD", "id": "CVE-2021-4197" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "An unprivileged write to the file handler flaw in the Linux kernel\u0027s control groups and namespaces subsystem was found in the way users have access to some less privileged process that are controlled by cgroups and have higher privileged parent process. It is actually both for cgroup2 and cgroup1 versions of control groups. 
A local user could use this flaw to crash the system or escalate their privileges on the system. ==========================================================================\nUbuntu Security Notice USN-5368-1\nApril 06, 2022\n\nlinux-azure-5.13, linux-oracle-5.13 vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 20.04 LTS\n\nSummary:\n\nSeveral security issues were fixed in the Linux kernel. \n\nSoftware Description:\n- linux-azure-5.13: Linux kernel for Microsoft Azure cloud systems\n- linux-oracle-5.13: Linux kernel for Oracle Cloud systems\n\nDetails:\n\nIt was discovered that the BPF verifier in the Linux kernel did not\nproperly restrict pointer types in certain situations. (CVE-2022-23222)\n\nIt was discovered that the network traffic control implementation in the\nLinux kernel contained a use-after-free vulnerability. (CVE-2022-1055)\n\nYiqi Sun and Kevin Wang discovered that the cgroups implementation in the\nLinux kernel did not properly restrict access to the cgroups v1\nrelease_agent feature. (CVE-2022-0492)\n\nJ\\xfcrgen Gro\\xdf discovered that the Xen subsystem within the Linux kernel did\nnot adequately limit the number of events driver domains (unprivileged PV\nbackends) could send to other guest VMs. \n(CVE-2021-28711, CVE-2021-28712, CVE-2021-28713)\n\nJ\\xfcrgen Gro\\xdf discovered that the Xen network backend driver in the Linux\nkernel did not adequately limit the amount of queued packets when a guest\ndid not process them. An attacker in a guest VM can use this to cause a\ndenial of service (excessive kernel memory consumption) in the network\nbackend domain. (CVE-2021-28714, CVE-2021-28715)\n\nSzymon Heidrich discovered that the USB Gadget subsystem in the Linux\nkernel did not properly restrict the size of control requests for certain\ngadget types, leading to possible out of bounds reads or writes. (CVE-2021-39698)\n\nIt was discovered that the simulated networking device driver for the Linux\nkernel did not properly initialize memory in certain situations. (CVE-2021-4197)\n\nBrendan Dolan-Gavitt discovered that the aQuantia AQtion Ethernet device\ndriver in the Linux kernel did not properly validate meta-data coming from\nthe device. (CVE-2021-43975)\n\nIt was discovered that the ARM Trusted Execution Environment (TEE)\nsubsystem in the Linux kernel contained a race condition leading to a use-\nafter-free vulnerability. (CVE-2021-45095)\n\nIt was discovered that the eBPF verifier in the Linux kernel did not\nproperly perform bounds checking on mov32 operations. \n(CVE-2021-45402)\n\nIt was discovered that the Reliable Datagram Sockets (RDS) protocol\nimplementation in the Linux kernel did not properly deallocate memory in\nsome error conditions. (CVE-2021-45480)\n\nIt was discovered that the BPF subsystem in the Linux kernel did not\nproperly track pointer types on atomic fetch operations in some situations. (CVE-2022-0264)\n\nIt was discovered that the TIPC Protocol implementation in the Linux kernel\ndid not properly initialize memory in some situations. \n(CVE-2022-0382)\n\nSamuel Page discovered that the Transparent Inter-Process Communication\n(TIPC) protocol implementation in the Linux kernel contained a stack-based\nbuffer overflow. \n(CVE-2022-0435)\n\nIt was discovered that the KVM implementation for s390 systems in the Linux\nkernel did not properly prevent memory operations on PVM guests that were\nin non-protected mode. 
(CVE-2022-0516)\n\nIt was discovered that the ICMPv6 implementation in the Linux kernel did\nnot properly deallocate memory in certain situations. (CVE-2022-27666)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 20.04 LTS:\n linux-image-5.13.0-1021-azure 5.13.0-1021.24~20.04.1\n linux-image-5.13.0-1025-oracle 5.13.0-1025.30~20.04.1\n linux-image-azure 5.13.0.1021.24~20.04.10\n linux-image-oracle 5.13.0.1025.30~20.04.1\n\nAfter a standard system update you need to reboot your computer to make\nall the necessary changes. \n\nATTENTION: Due to an unavoidable ABI change the kernel updates have\nbeen given a new version number, which requires you to recompile and\nreinstall all third party kernel modules you might have installed. \nUnless you manually uninstalled the standard kernel metapackages\n(e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual,\nlinux-powerpc), a standard system upgrade will automatically perform\nthis as well. The security impact is negligible as\n CAP_SYS_ADMIN inherently gives the ability to deny service. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.6.5 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. Bugs fixed (https://bugzilla.redhat.com/):\n\n2006044 - CVE-2021-39293 golang: archive/zip: malformed archive may cause panic or memory exhaustion (incomplete fix of CVE-2021-33196)\n2007557 - CVE-2021-3807 nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes\n2057579 - [MTC UI] Cancel button on ?Migrations? page does not disappear when migration gets Failed/Succeeded with warnings\n2072311 - HPAs of DeploymentConfigs are not being updated when migration from Openshift 3.x to Openshift 4.x\n2074044 - [MTC] Rsync pods are not running as privileged\n2074553 - Upstream Hook Runner image requires arguments be in a different order\n\n5. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA512\n\n- -------------------------------------------------------------------------\nDebian Security Advisory DSA-5173-1 security@debian.org\nhttps://www.debian.org/security/ Ben Hutchings\nJuly 03, 2022 https://www.debian.org/security/faq\n- -------------------------------------------------------------------------\n\nPackage : linux\nCVE ID : CVE-2021-4197 CVE-2022-0494 CVE-2022-0812 CVE-2022-0854\n CVE-2022-1011 CVE-2022-1012 CVE-2022-1016 CVE-2022-1048\n CVE-2022-1184 CVE-2022-1195 CVE-2022-1198 CVE-2022-1199\n CVE-2022-1204 CVE-2022-1205 CVE-2022-1353 CVE-2022-1419\n CVE-2022-1516 CVE-2022-1652 CVE-2022-1729 CVE-2022-1734\n CVE-2022-1974 CVE-2022-1975 CVE-2022-2153 CVE-2022-21123\n CVE-2022-21125 CVE-2022-21166 CVE-2022-23960 CVE-2022-26490\n CVE-2022-27666 CVE-2022-28356 CVE-2022-28388 CVE-2022-28389\n CVE-2022-28390 CVE-2022-29581 CVE-2022-30594 CVE-2022-32250\n CVE-2022-32296 CVE-2022-33981\nDebian Bug : 922204 1006346 1013299\n\nSeveral vulnerabilities have been discovered in the Linux kernel that\nmay lead to a privilege escalation, denial of service or information\nleaks. \n\nCVE-2021-4197\n\n Eric Biederman reported that incorrect permission checks in the\n cgroup process migration implementation can allow a local attacker\n to escalate privileges. 
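The migration primitive at issue in CVE-2021-4197 is an ordinary file write: a process is moved between control groups by writing its PID into the destination group's cgroup.procs file, and the permission check happens around that write. The following is a minimal illustrative sketch of the operation, not an exploit; it assumes the conventional cgroup v2 mount point /sys/fs/cgroup, and the child group name is hypothetical. Patched kernels, per the fix title quoted later in this record ("cgroup: Use open-time creds and namespace for migration perm checks"), validate the credentials captured when cgroup.procs was opened rather than those of the later writer.

import os

# Minimal sketch of cgroup v2 process migration (context for CVE-2021-4197).
# Moving a task between cgroups is a write of its PID into the destination
# group's cgroup.procs handle; the flaw concerned which credentials the
# kernel checked for that write. Needs an existing cgroup and suitable
# privileges; "demo" is a hypothetical child group name.
CGROUP_ROOT = "/sys/fs/cgroup"

def migrate(pid: int, group: str) -> None:
    procs_path = os.path.join(CGROUP_ROOT, group, "cgroup.procs")
    # Fixed kernels check the opener's credentials and namespace captured
    # here, at open time, not those of a possibly more privileged writer.
    with open(procs_path, "w") as procs:
        procs.write(str(pid))

if __name__ == "__main__":
    migrate(os.getpid(), "demo")  # move the current process into "demo"

The same write interface exists in cgroup v1 hierarchies (the tasks and cgroup.procs files under each controller mount), which is why the description above notes that both cgroup1 and cgroup2 are affected.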
\n\nCVE-2022-0494\n\n The scsi_ioctl() was susceptible to an information leak only\n exploitable by users with CAP_SYS_ADMIN or CAP_SYS_RAWIO\n capabilities. \n\nCVE-2022-0812\n\n It was discovered that the RDMA transport for NFS (xprtrdma)\n miscalculated the size of message headers, which could lead to a\n leak of sensitive information between NFS servers and clients. \n\nCVE-2022-0854\n\n Ali Haider discovered a potential information leak in the DMA\n subsystem. On systems where the swiotlb feature is needed, this\n might allow a local user to read sensitive information. \n\nCVE-2022-1011\n\n Jann Horn discovered a flaw in the FUSE (Filesystem in User-Space)\n implementation. A local user permitted to mount FUSE filesystems\n could exploit this to cause a use-after-free and read sensitive\n information. \n\nCVE-2022-1012, CVE-2022-32296\n\n Moshe Kol, Amit Klein, and Yossi Gilad discovered a weakness\n in randomisation of TCP source port selection. \n\nCVE-2022-1016\n\n David Bouman discovered a flaw in the netfilter subsystem where\n the nft_do_chain function did not initialize register data that\n nf_tables expressions can read from and write to. A local attacker\n can take advantage of this to read sensitive information. \n\nCVE-2022-1048\n\n Hu Jiahui discovered a race condition in the sound subsystem that\n can result in a use-after-free. \n\nCVE-2022-1184\n\n A flaw was discovered in the ext4 filesystem driver which can lead\n to a use-after-free. A local user permitted to mount arbitrary\n filesystems could exploit this to cause a denial of service (crash\n or memory corruption) or possibly for privilege escalation. \n\nCVE-2022-1195\n\n Lin Ma discovered race conditions in the 6pack and mkiss hamradio\n drivers, which could lead to a use-after-free. \n\nCVE-2022-1198\n\n Duoming Zhou discovered a race condition in the 6pack hamradio\n driver, which could lead to a use-after-free. \n\nCVE-2022-1199, CVE-2022-1204, CVE-2022-1205\n\n Duoming Zhou discovered race conditions in the AX.25 hamradio\n protocol, which could lead to a use-after-free or null pointer\n dereference. \n\nCVE-2022-1353\n\n The TCS Robot tool found an information leak in the PF_KEY\n subsystem. A local user can receive a netlink message when an\n IPsec daemon registers with the kernel, and this could include\n sensitive information. \n\nCVE-2022-1419\n\n Minh Yuan discovered a race condition in the vgem virtual GPU\n driver that can lead to a use-after-free. A local user permitted\n to access the GPU device can exploit this to cause a denial of\n service (crash or memory corruption) or possibly for privilege\n escalation. \n\nCVE-2022-1516\n\n A NULL pointer dereference flaw in the implementation of the X.25\n set of standardized network protocols, which can result in denial\n of service. \n\n This driver is not enabled in Debian\u0027s official kernel\n configurations. \n\nCVE-2022-1652\n\n Minh Yuan discovered a race condition in the floppy driver that\n can lead to a use-after-free. A local user permitted to access a\n floppy drive device can exploit this to cause a denial of service\n (crash or memory corruption) or possibly for privilege escalation. \n\nCVE-2022-1729\n\n Norbert Slusarek discovered a race condition in the perf subsystem\n which could result in local privilege escalation to root. The\n default settings in Debian prevent exploitation unless more\n permissive settings have been applied in the\n kernel.perf_event_paranoid sysctl. 
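Since the note on CVE-2022-1729 above ties exploitability to the kernel.perf_event_paranoid sysctl, a host's posture can be audited by reading that value back. A minimal sketch follows; the threshold used reflects the mainline kernel's documented meaning of level 2 (no unprivileged kernel profiling), and labelling anything lower as permissive is this sketch's interpretation, not wording from the advisory.

from pathlib import Path

# Report the sysctl governing unprivileged perf_event_open(), the setting
# the DSA text above says gates exploitation of CVE-2022-1729.
SYSCTL = Path("/proc/sys/kernel/perf_event_paranoid")

def check_perf_paranoid() -> int:
    value = int(SYSCTL.read_text().strip())
    posture = "permissive" if value < 2 else "restrictive"
    print(f"kernel.perf_event_paranoid = {value} ({posture})")
    return value

if __name__ == "__main__":
    check_perf_paranoid()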
\n\nCVE-2022-1734\n\n    Duoming Zhou discovered race conditions in the nfcmrvl NFC driver\n    that could lead to a use-after-free, double-free or null pointer\n    dereference. \n\n    This driver is not enabled in Debian\u0027s official kernel\n    configurations. \n\nCVE-2022-1974, CVE-2022-1975\n\n    Duoming Zhou discovered that the NFC netlink interface was\n    susceptible to denial of service. \n\nCVE-2022-2153\n\n    \"kangel\" reported a flaw in the KVM implementation for x86\n    processors which could lead to a null pointer dereference. \n\nCVE-2022-21123, CVE-2022-21125, CVE-2022-21166\n\n    Various researchers discovered flaws in Intel x86 processors,\n    collectively referred to as MMIO Stale Data vulnerabilities. \n    These are similar to the previously published Microarchitectural\n    Data Sampling (MDS) issues and could be exploited by local users\n    to leak sensitive information. \n\n    For some CPUs, the mitigations for these issues require updated\n    microcode.  An updated intel-microcode package may be provided at\n    a later date.  The updated CPU microcode may also be available as\n    part of a system firmware (\"BIOS\") update. \n\n    Further information on the mitigation can be found at\n    \u003chttps://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html\u003e\n    or in the linux-doc-4.19 package. \n\nCVE-2022-23960\n\n    Researchers at VUSec discovered that the Branch History Buffer in\n    Arm processors can be exploited to create information side-\n    channels with speculative execution.  This issue is similar to\n    Spectre variant 2, but requires additional mitigations on some\n    processors. \n\n    This was previously mitigated for 32-bit Arm (armel and armhf)\n    architectures and is now also mitigated for 64-bit Arm (arm64). \n\n    This can be exploited to obtain sensitive information from a\n    different security context, such as from user-space to the kernel,\n    or from a KVM guest to the kernel. \n\nCVE-2022-26490\n\n    Buffer overflows in the STMicroelectronics ST21NFCA core driver\n    can result in denial of service or privilege escalation. \n\n    This driver is not enabled in Debian\u0027s official kernel\n    configurations. \n\nCVE-2022-27666\n\n    \"valis\" reported a possible buffer overflow in the IPsec ESP\n    transformation code. \n\nCVE-2022-28356\n\n    \"Beraphin\" discovered that the ANSI/IEEE 802.2 LLC type 2 driver did\n    not properly perform reference counting on some error paths. \n\nCVE-2022-28388\n\n    A double free vulnerability was discovered in the 8 devices\n    USB2CAN interface driver. \n\nCVE-2022-28389\n\n    A double free vulnerability was discovered in the Microchip CAN\n    BUS Analyzer interface driver. \n\nCVE-2022-28390\n\n    A double free vulnerability was discovered in the EMS CPC-USB/ARM7\n    CAN/USB interface driver. \n\nCVE-2022-29581\n\n    Kyle Zeng discovered a reference-counting bug in the cls_u32\n    network classifier which can lead to a use-after-free. \n\nCVE-2022-30594\n\n    Jann Horn discovered a flaw in the interaction between ptrace and\n    seccomp subsystems. A process sandboxed using seccomp() but still\n    permitted to use ptrace() could exploit this to remove the seccomp\n    restrictions. \n\nCVE-2022-32250\n\n    Aaron Adams discovered a use-after-free in Netfilter which may\n    result in local privilege escalation to root. \n\nCVE-2022-33981\n\n    Yuan Ming from Tsinghua University reported a race condition in\n    the floppy driver involving use of the FDRAWCMD ioctl, which could\n    lead to a use-after-free. 
A local user with access to a floppy\n drive device could exploit this to cause a denial of service\n (crash or memory corruption) or possibly for privilege escalation. \n This ioctl is now disabled by default. \n\nFor the oldstable distribution (buster), these problems have been\nfixed in version 4.19.249-2. \n\nDue to an issue in the signing service (Cf. Debian bug #1012741), the\nvport-vxlan module cannot be loaded for the signed kernel for amd64 in\nthis update. \n\nThis update also corrects a regression in the network scheduler\nsubsystem (bug #1013299). \n\nFor the 32-bit Arm (armel and armhf) architectures, this update\nenables optimised implementations of several cryptographic and CRC\nalgorithms. For at least AES, this should remove a timing side-\nchannel that could lead to a leak of sensitive information. \n\nThis update includes many more bug fixes from stable updates\n4.19.236-4.19.249 inclusive, including for bug #1006346. The random\ndriver has been backported from Linux 5.19, fixing numerous\nperformance and correctness issues. Some changes will be visible:\n\n- - The entropy pool size is now 256 bits instead of 4096. You may need\n to adjust the configuration of system monitoring or user-space\n entropy gathering services to allow for this. \n\n- - On systems without a hardware RNG, the kernel may log more uses of\n /dev/urandom before it is fully initialised. These uses were\n previously under-counted and this is not a regression. \n\nWe recommend that you upgrade your linux packages. \n\nFor the detailed security status of linux please refer to\nits security tracker page at:\nhttps://security-tracker.debian.org/tracker/linux\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP SIGNATURE-----\n\niQKTBAEBCgB9FiEERkRAmAjBceBVMd3uBUy48xNDz0QFAmLBuTxfFIAAAAAALgAo\naXNzdWVyLWZwckBub3RhdGlvbnMub3BlbnBncC5maWZ0aGhvcnNlbWFuLm5ldDQ2\nNDQ0MDk4MDhDMTcxRTA1NTMxRERFRTA1NENCOEYzMTM0M0NGNDQACgkQBUy48xND\nz0TdzQ//Yxq7eTZmPsDVvj1ArPIDwE4w/CPyoYeXiiSBhWD4ueYAvWp3moPmUZmc\na6is1JkP8MILLekkeAUJQjaxjHOn+kWIlfV7ZLJ7fzTrVjkHoQvzs8a8mv85ybaD\nsfQlVuEA7VPxfJI/4/31fIAuTPy1S+qd3r6qtESL2IQdZPFS8SOHwZrTt9DPGXhl\nXtY3XNm4fysgRmtDYNpqndluVXeTc39bXe9YBRG1bTdrI9QCTykSx2/HeZDOBiMQ\nWb7cjXAUoy0q3c5QncTcqtgN3ax549qx/1oGZGXDlycZFOIE8vHMY3FyBXXURPz4\nJgKkSf+NR87aeDi2SREjOm0CIp/laSc1VFxpf0TTT51kuPWhXzsleZ23eN2po106\nUTyDFsNtNToHgoDpPFA/3GsioqirzbwwVUs0qKDeFdC1VZjJ5H+1JzO4JPbWGOTo\nrtoz64JHU9oIA3OJs3rYpgIphd6fzUfia89tuflE5/MkeAWSVP7f0rpUgGQy8gzw\nTdsN4p7aCLhQezMpFVKADIB1WfkBtXncDrPC//pxxnRZuu2efrlYv6se+dnOJM9/\nWeDSm4hsi6u+MH7DBmVhDgjF/gatSbejud8rXYUcVKZArraj9k9rCArxcVKmJHMr\n6teKhjSMX1B27AUJtTqSU1eEmErxbA+yEHCSEOW+8JNnLQZWDSI=\n=j1cH\n-----END PGP SIGNATURE-----\n. (CVE-2022-1516)\n\nDemi Marie Obenour and Simon Gaiser discovered that several Xen para-\nvirtualization device frontends did not properly restrict the access rights\nof device backends. Summary:\n\nRed Hat OpenShift Container Platform release 4.10.25 is now available with\nupdates to packages and images that fix several bugs and add enhancements. \n\nThis release includes a security update for Red Hat OpenShift Container\nPlatform 4.10. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. 
A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.10.25. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2022:5729\n\nSpace precludes documenting all of the container images in this advisory. \nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html\n\nSecurity Fix(es):\n\n* golang: encoding/pem: fix stack overflow in Decode (CVE-2022-24675)\n* golang: regexp: stack exhaustion via a deeply nested expression\n(CVE-2022-24921)\n* golang: math/big: uncontrolled memory consumption due to an unhandled\noverflow via Rat.SetString (CVE-2022-23772)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s)\nlisted in the References section. \n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.25-x86_64\n\nThe image digest is\nsha256:ed84fb3fbe026b3bbb4a2637ddd874452ac49c6ead1e15675f257e28664879cc\n\n(For s390x architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.25-s390x\n\nThe image digest is\nsha256:a151628743b643e8ceda09dbd290aa4ac2787fc519365603a5612cb4d379d8e3\n\n(For ppc64le architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.25-ppc64le\n\nThe image digest is\nsha256:5ee9476628f198cdadd8f7afe6f117e8102eaafba8345e95d2f479c260eb0574\n\nAll OpenShift Container Platform 4.10 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html\n\n3. Solution:\n\nFor OpenShift Container Platform 4.10 see the following documentation,\nwhich will be updated shortly for this release, for important instructions\non how to upgrade your cluster and fully apply this asynchronous errata\nupdate:\n\nhttps://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString\n2060058 - superfluous apirequestcount entries in audit log\n2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression\n2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode\n2079034 - [4.10] Openshift Container Platform - Ingress Controller does not set allowPrivilegeEscalation in the router deployment\n2094584 - VM with sysprep is failed to create\n2095217 - VM SSH command generated by UI points at api VIP\n2095319 - [4.10] Bootimage bump tracker\n2098655 - gcp cluster rollback fails due to storage failure\n2099526 - prometheus-adapter becomes inaccessible during rollout\n2100894 - Possible to cause misconfiguration of container runtime soon after cluster creation\n2100974 - Layout issue: No spacing in delete modals\n2103175 - disabling ipv6 router advertisements using \"all\" does not disable it on secondary interfaces\n2105110 - [VPA] recommender is logging errors for pods with init containers\n2105275 - NodeIP is used instead of EgressIP\n2105653 - egressIP panics with nil pointer dereference\n2106385 - the cronjob object is created with a wrong api version batch/v1beta1 when created via the openshift console\n2106842 - In CI 4.10 HAProxy must-gather takes longer than 10 minutes\n2107276 - The ccoctl does not seem to know how to leverage the VMs service account to talk to GCP APIs. \n2109125 - [4.10 Backport] Spoke BMH stuck \"inspecting\" when deployed via ZTP in 4.11 OCP hub\n2109225 - Console 4.10 operand form refresh\n2109235 - openshift-apiserver pods never going NotReady\n\n5. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYuqt+dzjgjWX9erEAQgkaRAAgkfZMlPLAAHEPj9/u6cy7TrRLDvMpgV/\npcH4o92HJHTYaO8CIp0+njDPSAtzHPxOvGqew795DZWKJvn3fhuvQoUCXuXBVOF0\neH8yIcmH2Xh7dkUV385rRvwWkYEBt5BaXXUP5UOq/pByZMkd1emEjiZth7CWWqwg\nGasDNRaG+FiB1MhJDaZYbRZ1Dpjrm/UOep6r/AwfaZkbvvHstwHDqWUc1PMG3TMO\nzQwCC2W8Ng+QiCVAGqWQhcvcnwAD5WeN6sgnO2fzAJwnZD/O1QS8Q2s6KO8izvjm\ny7P9wZfE449ijXkk8X06WRRTR082h6PiUyAa4rYpSHy5yP/zTukT8K81qQdR5BqQ\nceDgac68/DgoHGn/7UebfYxxNa2aKXPtTb07a8Vd7YA/G1w3DGG5YGgyQ1LSQPJ2\nv9XF8ggY9r2YiV0TiS9XHzC9PsvMasYoHL+c31RI1QNKizJtn3HVlw3yE62BCTNC\nn9G+IjvdY1a8xDUV/mmthZJnNa4/QybEhiL30XNTwHXATszwS/9xq3J9/Un1f325\nfuneRCL+WPGnEB5MmczkSyomf2Clq7nfjJWWNcAwZganPXmXREWB0uL/5JiyCDjQ\n5LIJDtYYcoa9fYRtOMQUjzWJr4h1vpHwkRfWd8m+dXqkbTgE1YspS0sp/fmdcCek\nE4/PvJnIe00=X10h\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Real Time EUS (v.8.4) - x86_64\nRed Hat Enterprise Linux Real Time for NFV EUS (v.8.4) - x86_64\n\n3. Description:\n\nThe kernel-rt packages provide the Real Time Linux Kernel, which enables\nfine-tuning for systems with extremely high determinism requirements. 
\n\nSecurity Fix(es):\n\n* kernel: Small table perturb size in the TCP source port generation\nalgorithm can lead to information leak (CVE-2022-1012)\n\n* kernel: race condition in perf_event_open leads to privilege escalation\n(CVE-2022-1729)\n\n* kernel: a use-after-free write in the netfilter subsystem can lead to\nprivilege escalation to root (CVE-2022-32250)\n\n* kernel: cgroup: Use open-time creds and namespace for migration perm\nchecks (CVE-2021-4197)\n\n* kernel: Race condition in races in sk_peer_pid and sk_peer_cred accesses\n(CVE-2021-4203)\n\n* kernel: the copy-on-write implementation can grant unintended write\naccess because of a race condition in a THP mapcount check (CVE-2020-29368)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nBug Fix(es):\n\n* kernel-rt: update RT source tree to the RHEL-8.4.z10 source tree\n(BZ#2087922)\n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nThe system must be rebooted for this update to take effect. Bugs fixed (https://bugzilla.redhat.com/):\n\n1903244 - CVE-2020-29368 kernel: the copy-on-write implementation can grant unintended write access because of a race condition in a THP mapcount check\n2035652 - CVE-2021-4197 kernel: cgroup: Use open-time creds and namespace for migration perm checks\n2036934 - CVE-2021-4203 kernel: Race condition in races in sk_peer_pid and sk_peer_cred accesses\n2064604 - CVE-2022-1012 kernel: Small table perturb size in the TCP source port generation algorithm can lead to information leak\n2086753 - CVE-2022-1729 kernel: race condition in perf_event_open leads to privilege escalation\n2092427 - CVE-2022-32250 kernel: a use-after-free write in the netfilter subsystem can lead to privilege escalation to root\n\n6. 
Package List:\n\nRed Hat Enterprise Linux Real Time for NFV EUS (v.8.4):\n\nSource:\nkernel-rt-4.18.0-305.57.1.rt7.129.el8_4.src.rpm\n\nx86_64:\nkernel-rt-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-core-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debug-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debug-core-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debug-debuginfo-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debug-devel-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debug-kvm-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debug-modules-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debug-modules-extra-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debuginfo-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debuginfo-common-x86_64-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-devel-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-kvm-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-modules-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-modules-extra-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\n\nRed Hat Enterprise Linux Real Time EUS (v.8.4):\n\nSource:\nkernel-rt-4.18.0-305.57.1.rt7.129.el8_4.src.rpm\n\nx86_64:\nkernel-rt-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-core-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debug-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debug-core-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debug-debuginfo-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debug-devel-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debug-modules-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debug-modules-extra-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debuginfo-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-debuginfo-common-x86_64-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-devel-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-modules-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\nkernel-rt-modules-extra-4.18.0-305.57.1.rt7.129.el8_4.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. Summary:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.5.0 is now generally\navailable. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.5.0 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. 
See\nthe following Release Notes documentation, which will be updated shortly\nfor this release, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/\n\nSecurity fixes: \n\n* nodejs-json-schema: Prototype pollution vulnerability (CVE-2021-3918)\n\n* containerd: Unprivileged pod may bind mount any privileged regular file\non disk (CVE-2021-43816)\n\n* minio: user privilege escalation in AddUser() admin API (CVE-2021-43858)\n\n* openssl: Infinite loop in BN_mod_sqrt() reachable when parsing\ncertificates (CVE-2022-0778)\n\n* imgcrypt: Unauthorized access to encryted container image on a shared\nsystem due to missing check in CheckAuthorization() code path\n(CVE-2022-24778)\n\n* golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)\n\n* node-fetch: exposure of sensitive information to an unauthorized actor\n(CVE-2022-0235)\n\n* nconf: Prototype pollution in memory store (CVE-2022-21803)\n\n* golang: crypto/elliptic IsOnCurve returns true for invalid field elements\n(CVE-2022-23806)\n\n* nats-server: misusing the \"dynamically provisioned sandbox accounts\"\nfeature authenticated user can obtain the privileges of the System account\n(CVE-2022-24450)\n\n* Moment.js: Path traversal in moment.locale (CVE-2022-24785)\n\n* golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)\n\n* go-getter: writes SSH credentials into logfile, exposing sensitive\ncredentials to local uses (CVE-2022-29810)\n\n* opencontainers: OCI manifest and index parsing confusion (CVE-2021-41190)\n\nBug fixes:\n\n* RFE Copy secret with specific secret namespace, name for source and name,\nnamespace and cluster label for target (BZ# 2014557)\n\n* RHACM 2.5.0 images (BZ# 2024938)\n\n* [UI] When you delete host agent from infraenv no confirmation message\nappear (Are you sure you want to delete x?) (BZ#2028348)\n\n* Clusters are in \u0027Degraded\u0027 status with upgrade env due to obs-controller\nnot working properly (BZ# 2028647)\n\n* create cluster pool -\u003e choose infra type, As a result infra providers\ndisappear from UI. (BZ# 2033339)\n\n* Restore/backup shows up as Validation failed but the restore backup\nstatus in ACM shows success (BZ# 2034279)\n\n* Observability - OCP 311 node role are not displayed completely (BZ#\n2038650)\n\n* Documented uninstall procedure leaves many leftovers (BZ# 2041921)\n\n* infrastructure-operator pod crashes due to insufficient privileges in ACM\n2.5 (BZ# 2046554)\n\n* Acm failed to install due to some missing CRDs in operator (BZ# 2047463)\n\n* Navigation icons no longer showing in ACM 2.5 (BZ# 2051298)\n\n* ACM home page now includes /home/ in url (BZ# 2051299)\n\n* proxy heading in Add Credential should be capitalized (BZ# 2051349)\n\n* ACM 2.5 tries to create new MCE instance when install on top of existing\nMCE 2.0 (BZ# 2051983)\n\n* Create Policy button does not work and user cannot use console to create\npolicy (BZ# 2053264)\n\n* No cluster information was displayed after a policyset was created (BZ#\n2053366)\n\n* Dynamic plugin update does not take effect in Firefox (BZ# 2053516)\n\n* Replicated policy should not be available when creating a Policy Set (BZ#\n2054431)\n\n* Placement section in Policy Set wizard does not reset when users click\n\"Back\" to re-configured placement (BZ# 2054433)\n\n3. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2014557 - RFE Copy secret with specific secret namespace, name for source and name, namespace and cluster label for target\n2024702 - CVE-2021-3918 nodejs-json-schema: Prototype pollution vulnerability\n2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion\n2028224 - RHACM 2.5.0 images\n2028348 - [UI] When you delete host agent from infraenv no confirmation message appear (Are you sure you want to delete x?)\n2028647 - Clusters are in \u0027Degraded\u0027 status with upgrade env due to obs-controller not working properly\n2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic\n2033339 - create cluster pool -\u003e choose infra type , As a result infra providers disappear from UI. \n2034279 - Restore/backup shows up as Validation failed but the restore backup status in ACM shows success\n2036252 - CVE-2021-43858 minio: user privilege escalation in AddUser() admin API\n2038650 - Observability - OCP 311 node role are not displayed completely\n2041921 - Documented uninstall procedure leaves many leftovers\n2044434 - CVE-2021-43816 containerd: Unprivileged pod may bind mount any privileged regular file on disk\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2046554 - infrastructure-operator pod crashes due to insufficient privileges in ACM 2.5\n2047463 - Acm failed to install due to some missing CRDs in operator\n2051298 - Navigation icons no longer showing in ACM 2.5\n2051299 - ACM home page now includes /home/ in url\n2051349 - proxy heading in Add Credential should be capitalized\n2051983 - ACM 2.5 tries to create new MCE instance when install on top of existing MCE 2.0\n2052573 - CVE-2022-24450 nats-server: misusing the \"dynamically provisioned sandbox accounts\" feature authenticated user can obtain the privileges of the System account\n2053264 - Create Policy button does not work and user cannot use console to create policy\n2053366 - No cluster information was displayed after a policyset was created\n2053429 - CVE-2022-23806 golang: crypto/elliptic IsOnCurve returns true for invalid field elements\n2053516 - Dynamic plugin update does not take effect in Firefox\n2054431 - Replicated policy should not be available when creating a Policy Set\n2054433 - Placement section in Policy Set wizard does not reset when users click \"Back\" to re-configured placement\n2054772 - credentialName is not parsed correctly in UI notifications/alerts when creating/updating a discovery config\n2054860 - Cluster overview page crashes for on-prem cluster\n2055333 - Unable to delete assisted-service operator\n2055900 - If MCH is installed on existing MCE and both are in multicluster-engine namespace , uninstalling MCH terminates multicluster-engine namespace\n2056485 - [UI] In infraenv detail the host list don\u0027t have pagination\n2056701 - Non platform install fails agentclusterinstall CRD is outdated in rhacm2.5\n2057060 - [CAPI] Unable to create ClusterDeployment due to service account restrictions (ACM + Bundled Assisted)\n2058435 - Label cluster.open-cluster-management.io/backup-cluster stamped \u0027unknown\u0027 for velero backups\n2059779 - spec.nodeSelector is missing in MCE instance created by MCH upon installing ACM on infra nodes\n2059781 - Policy UI crashes when viewing details of configuration policies for backupschedule that does not exist\n2060135 - [assisted-install] agentServiceConfig left orphaned after uninstalling ACM\n2060151 - Policy set 
of the same name cannot be re-created after the previous one has been deleted\n2060230 - [UI] Delete host modal has incorrect host\u0027s name populated\n2060309 - multiclusterhub stuck in installing on \"ManagedClusterConditionAvailable\" [intermittent]\n2060469 - The development branch of the Submariner addon deploys 0.11.0, not 0.12.0\n2060550 - MCE installation hang due to no console-mce-console deployment available\n2060603 - prometheus doesn\u0027t display managed clusters\n2060831 - Observability - prometheus-operator failed to start on *KS\n2060934 - Cannot provision AWS OCP 4.9 cluster from Power Hub\n2061260 - The value of the policyset placement should be filtered space when input cluster label expression\n2061311 - Cleanup of installed spoke clusters hang on deletion of spoke namespace\n2061659 - the network section in create cluster -\u003e Networking include the brace in the network title\n2061798 - [ACM 2.5] The service of Cluster Proxy addon was missing\n2061838 - ACM component subscriptions are removed when enabling spec.disableHubSelfManagement in MCH\n2062009 - No name validation is performed on Policy and Policy Set Wizards\n2062022 - cluster.open-cluster-management.io/backup-cluster of velero schedules should populate the corresponding hub clusterID\n2062025 - No validation is done on yaml\u0027s format or content in Policy and Policy Set wizards\n2062202 - CVE-2022-0778 openssl: Infinite loop in BN_mod_sqrt() reachable when parsing certificates\n2062337 - velero schedules get re-created after the backupschedule is in \u0027BackupCollision\u0027 phase\n2062462 - Upgrade to 2.5 hang due to irreconcilable errors of grc-sub and search-prod-sub in MCH\n2062556 - Always return the policyset page after created the policy from UI\n2062787 - Submariner Add-on UI does not indicate on Broker error\n2063055 - User with cluserrolebinding of open-cluster-management:cluster-manager-admin role can\u0027t see policies and clusters page\n2063341 - Release imagesets are missing in the console for ocp 4.10\n2063345 - Application Lifecycle- UI shows white blank page when the page is Refreshed\n2063596 - claim clusters from clusterpool throws errors\n2063599 - Update the message in clusterset -\u003e clusterpool page since we did not allow to add clusterpool to clusterset by resourceassignment\n2063697 - Observability - MCOCR reports object-storage secret without AWS access_key in STS enabled env\n2064231 - Can not clean the instance type for worker pool when create the clusters\n2064247 - prefer UI can add the architecture type when create the cluster\n2064392 - multicloud oauth-proxy failed to log users in on web\n2064477 - Click at \"Edit Policy\" for each policy leads to a blank page\n2064509 - No option to view the ansible job details and its history in the Automation wizard after creation of the automation job\n2064516 - Unable to delete an automation job of a policy\n2064528 - Columns of Policy Set, Status and Source on Policy page are not sortable\n2064535 - Different messages on the empty pages of Overview and Clusters when policy is disabled\n2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server\n2064722 - [Tracker] [DR][ACM 2.5] Applications are not getting deployed on managed cluster\n2064899 - Failed to provision openshift 4.10 on bare metal\n2065436 - \"Filter\" drop-down list does not show entries of the policies that have no top-level remediation specified\n2066198 - Issues about disabled policy from UI\n2066207 - The new created policy should be always 
shown up on the first line\n2066333 - The message was confuse when the cluster status is Running\n2066383 - MCE install failing on proxy disconnected environment\n2066433 - Logout not working for ACM 2.5\n2066464 - console-mce-console pods throw ImagePullError after upgrading to ocp 4.10\n2066475 - User with view-only rolebinding should not be allowed to create policy, policy set and automation job\n2066544 - The search box can\u0027t work properly in Policies page\n2066594 - RFE: Can\u0027t open the helm source link of the backup-restore-enabled policy from UI\n2066650 - minor issues in cluster curator due to the startup throws errors\n2066751 - the image repo of application-manager did not updated to use the image repo in MCE/MCH configuration\n2066834 - Hibernating cluster(s) in cluster pool stuck in \u0027Stopping\u0027 status after restore activation\n2066842 - cluster pool credentials are not backed up\n2066914 - Unable to remove cluster value during configuration of the label expressions for policy and policy set\n2066940 - Validation fired out for https proxy when the link provided not starting with https\n2066965 - No message is displayed in Policy Wizard to indicate a policy externally managed\n2066979 - MIssing groups in policy filter options comparing to previous RHACM version\n2067053 - I was not able to remove the image mirror content when create the cluster\n2067067 - Can\u0027t filter the cluster info when clicked the cluster in the Placement section\n2067207 - Bare metal asset secrets are not backed up\n2067465 - Categories,Standards, and Controls annotations are not updated after user has deleted a selected template\n2067713 - Columns on policy\u0027s \"Results\" are not sort-able as in previous release\n2067728 - Can\u0027t search in the policy creation or policyset creation Yaml editor\n2068304 - Application Lifecycle- Replicasets arent showing the logs console in Topology\n2068309 - For policy wizard in dynamics plugin environment, buttons at the bottom should be sticky and the contents of the Policy should scroll\n2068312 - Application Lifecycle - Argo Apps are not showing overview details and topology after upgrading from 2.4\n2068313 - Application Lifecycle - Refreshing overview page leads to a blank page\n2068328 - A cluster\u0027s \"View history\" page should not contain all clusters\u0027 violations history\n2068387 - Observability - observability operator always CrashLoopBackOff in FIPS upgrading hub\n2068993 - Observability - Node list is not filtered according to nodeType on OCP 311 dashboard\n2069329 - config-policy-controller addon with \"Unknown\" status in OCP 3.11 managed cluster after upgrade hub to 2.5\n2069368 - CVE-2022-24778 imgcrypt: Unauthorized access to encryted container image on a shared system due to missing check in CheckAuthorization() code path\n2069469 - Status of unreachable clusters is not reported in several places on GRC panels\n2069615 - The YAML editor can\u0027t work well when login UI using dynamic console plugin\n2069622 - No validation for policy template\u0027s name\n2069698 - After claim a cluster from clusterpool, the cluster pages become very very slow\n2069867 - Error occurs when trying to edit an application set/subscription\n2069870 - ACM/MCE Dynamic Plugins - 404: Page Not Found Error Occurs - intermittent crashing\n2069875 - Cluster secrets are not being created in the managed cluster\u0027s namespace\n2069895 - Application Lifecycle - Replicaset and Pods gives error messages when Yaml is selected on sidebar\n2070203 - 
Blank Application is shown when editing an Application with AnsibleJobs\n2070782 - Failed Secret Propagation to the Same Namespace as the AnsibleJob CR\n2070846 - [ACM 2.5] Can\u0027t re-add the default clusterset label after removing it from a managedcluster on BM SNO hub\n2071066 - Policy set details panel does not work when deployed into namespace different than \"default\"\n2071173 - Configured RunOnce automation job is not displayed although the policy has no violation\n2071191 - MIssing title on details panel after clicking \"view details\" of a policy set card\n2071769 - Placement must be always configured or error is reported when creating a policy\n2071818 - ACM logo not displayed in About info modal\n2071869 - Topology includes the status of local cluster resources when Application is only deployed to managed cluster\n2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale\n2072097 - Local Cluster is shown as Remote on the Application Overview Page and Single App Overview Page\n2072104 - Inconsistent \"Not Deployed\" Icon Used Between 2.4 and 2.5 as well as the Overview and Topology\n2072177 - Cluster Resource Status is showing App Definition Statuses as well\n2072227 - Sidebar Statuses Need to Be Updated to Reflect Cluster List and Cluster Resource Statuses\n2072231 - Local Cluster not included in the appsubreport for Helm Applications Deployed on All Clusters\n2072334 - Redirect URL is now to the details page after created a policy\n2072342 - Shows \"NaN%\" in the ring chart when add the disabled policy into policyset and view its details\n2072350 - CRD Deployed via Application Console does not have correct deployment status and spelling\n2072359 - Report the error when editing compliance type in the YAML editor and then submit the changes\n2072504 - The policy has violations on the failed managed cluster\n2072551 - URL dropdown is not being rendered with an Argo App with a new URL\n2072773 - When a channel is deleted and recreated through the App Wizard, application creation stalls and warning pops up\n2072824 - The edit/delete policyset button should be greyed when using viewer check\n2072829 - When Argo App with jsonnet object is deployed, topology and cluster status would fail to display the correct statuses. 
\n2073179 - Policy controller was unable to retrieve violation status in for an OCP 3.11 managed cluster on ARM hub\n2073330 - Observabilityy - memory usage data are not collected even collect rule is fired on SNO\n2073355 - Get blank page when click policy with unknown status in Governance -\u003e Overview page\n2073508 - Thread responsible to get insights data from *ks clusters is broken\n2073557 - appsubstatus is not deleted for Helm applications when changing between 2 managed clusters\n2073726 - Placement of First Subscription gets overlapped by the Cluster Node in Application Topology\n2073739 - Console/App LC - Error message saying resource conflict only shows up in standalone ACM but not in Dynamic plugin\n2073740 - Console/App LC- Apps are deployed even though deployment do not proceed because of \"resource conflict\" error\n2074178 - Editing Helm Argo Applications does not Prune Old Resources\n2074626 - Policy placement failure during ZTP SNO scale test\n2074689 - CVE-2022-21803 nconf: Prototype pollution in memory store\n2074803 - The import cluster YAML editor shows the klusterletaddonconfig was required on MCE portal\n2074937 - UI allows creating cluster even when there are no ClusterImageSets\n2075416 - infraEnv failed to create image after restore\n2075440 - The policyreport CR is created for spoke clusters until restarted the insights-client pod\n2075739 - The lookup function won\u0027t check the referred resource whether exist when using template policies\n2076421 - Can\u0027t select existing placement for policy or policyset when editing policy or policyset\n2076494 - No policyreport CR for spoke clusters generated in the disconnected env\n2076502 - The policyset card doesn\u0027t show the cluster status(violation/without violation) again after deleted one policy\n2077144 - GRC Ansible automation wizard does not display error of missing dependent Ansible Automation Platform operator\n2077149 - App UI shows no clusters cluster column of App Table when Discovery Applications is deployed to a managed cluster\n2077291 - Prometheus doesn\u0027t display acm_managed_cluster_info after upgrade from 2.4 to 2.5\n2077304 - Create Cluster button is disabled only if other clusters exist\n2077526 - ACM UI is very very slow after upgrade from 2.4 to 2.5\n2077562 - Console/App LC- Helm and Object bucket applications are not showing as deployed in the UI\n2077751 - Can\u0027t create a template policy from UI when the object\u0027s name is referring Golang text template syntax in this policy\n2077783 - Still show violation for clusterserviceversions after enforced \"Detect Image vulnerabilities \" policy template and the operator is installed\n2077951 - Misleading message indicated that a placement of a policy became one managed only by policy set\n2078164 - Failed to edit a policy without placement\n2078167 - Placement binding and rule names are not created in yaml when editing a policy previously created with no placement\n2078373 - Disable the hyperlink of *ks node in standalone MCE environment since the search component was not exists\n2078617 - Azure public credential details get pre-populated with base domain name in UI\n2078952 - View pod logs in search details returns error\n2078973 - Crashed pod is marked with success in Topology\n2079013 - Changing existing placement rules does not change YAML file\n2079015 - Uninstall pod crashed when destroying Azure Gov cluster in ACM\n2079421 - Hyphen(s) is deleted unexpectedly in UI when yaml is turned on\n2079494 - Hitting Enter in yaml 
editor caused unexpected keys \"key00x:\" to be created\n2079533 - Clusters with no default clusterset do not get assigned default cluster when upgrading from ACM 2.4 to 2.5\n2079585 - When an Ansible Secret is propagated to an Ansible Application namespace, the propagated secret is shown in the Credentials page\n2079611 - Edit appset placement in UI with a different existing placement causes the current associated placement being deleted\n2079615 - Edit appset placement in UI with a new placement throws error upon submitting\n2079658 - Cluster Count is Incorrect in Application UI\n2079909 - Wrong message is displayed when GRC fails to connect to an ansible tower\n2080172 - Still create policy automation successfully when the PolicyAutomation name exceed 63 characters\n2080215 - Get a blank page after go to policies page in upgraded env when using an user with namespace-role-binding of default view role\n2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses\n2080503 - vSphere network name doesn\u0027t allow entering spaces and doesn\u0027t reflect YAML changes\n2080567 - Number of cluster in violation in the table does not match other cluster numbers on the policy set details page\n2080712 - Select an existing placement configuration does not work\n2080776 - Unrecognized characters are displayed on policy and policy set yaml editors\n2081792 - When deploying an application to a clusterpool claimed cluster after upgrade, the application does not get deployed to the cluster\n2081810 - Type \u0027-\u0027 character in Name field caused previously typed character backspaced in in the name field of policy wizard\n2081829 - Application deployed on local cluster\u0027s topology is crashing after upgrade\n2081938 - The deleted policy still be shown on the policyset review page when edit this policy set\n2082226 - Object Storage Topology includes residue of resources after Upgrade\n2082409 - Policy set details panel remains even after the policy set has been deleted\n2082449 - The hypershift-addon-agent deployment did not have imagePullSecrets\n2083038 - Warning still refers to the `klusterlet-addon-appmgr` pod rather than the `application-manager` pod\n2083160 - When editing a helm app with failing resources to another, the appsubstatus and the managedclusterview do not get updated\n2083434 - The provider-credential-controller did not support the RHV credentials type\n2083854 - When deploying an application with ansiblejobs multiple times with different namespaces, the topology shows all the ansiblejobs rather than just the one within the namespace\n2083870 - When editing an existing application and refreshing the `Select an existing placement configuration`, multiple occurrences of the placementrule gets displayed\n2084034 - The status message looks messy in the policy set card, suggest one kind status one a row\n2084158 - Support provisioning bm cluster where no provisioning network provided\n2084622 - Local Helm application shows cluster resources as `Not Deployed` in Topology [Upgrade]\n2085083 - Policies fail to copy to cluster namespace after ACM upgrade\n2085237 - Resources referenced by a channel are not annotated with backup label\n2085273 - Error querying for ansible job in app topology\n2085281 - Template name error is reported but the template name was found in a different replicated policy\n2086389 - The policy violations for hibernated cluster still be displayed on the policy set details page\n2087515 - Validation thrown 
out in configuration for disconnect install while creating bm credential\n2088158 - Object Storage Application deployed to all clusters is showing unemployed in topology [Upgrade]\n2088511 - Some cluster resources are not showing labels that are defined in the YAML\n\n5", "sources": [ { "db": "NVD", "id": "CVE-2021-4197" }, { "db": "VULHUB", "id": "VHN-410862" }, { "db": "VULMON", "id": "CVE-2021-4197" }, { "db": "PACKETSTORM", "id": "166636" }, { "db": "PACKETSTORM", "id": "169305" }, { "db": "PACKETSTORM", "id": "167330" }, { "db": "PACKETSTORM", "id": "169299" }, { "db": "PACKETSTORM", "id": "167443" }, { "db": "PACKETSTORM", "id": "167952" }, { "db": "PACKETSTORM", "id": "167822" }, { "db": "PACKETSTORM", "id": "167459" } ], "trust": 1.8 }, "exploit_availability": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/exploit_availability#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "reference": "https://www.scap.org.cn/vuln/vhn-410862", "trust": 0.1, "type": "unknown" } ], "sources": [ { "db": "VULHUB", "id": "VHN-410862" } ] }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2021-4197", "trust": 2.0 }, { "db": "PACKETSTORM", "id": "167443", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "167952", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "167822", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "167694", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167746", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168136", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168019", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "166392", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167097", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167748", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167886", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167714", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167852", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167072", "trust": 0.1 }, { "db": "CNNVD", "id": "CNNVD-202201-1396", "trust": 0.1 }, { "db": "CNVD", "id": "CNVD-2022-68560", "trust": 0.1 }, { "db": "VULHUB", "id": "VHN-410862", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2021-4197", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "166636", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169305", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167330", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169299", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167459", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-410862" }, { "db": "VULMON", "id": "CVE-2021-4197" }, { "db": "PACKETSTORM", "id": "166636" }, { "db": "PACKETSTORM", "id": "169305" }, { "db": "PACKETSTORM", "id": "167330" }, { "db": "PACKETSTORM", "id": "169299" }, { "db": "PACKETSTORM", "id": "167443" }, { "db": "PACKETSTORM", "id": "167952" }, { "db": "PACKETSTORM", "id": "167822" }, { "db": "PACKETSTORM", "id": "167459" }, { "db": "NVD", "id": "CVE-2021-4197" } ] }, "id": "VAR-202201-0496", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-410862" } ], "trust": 0.725 }, "last_update_date": "2024-09-19T22:23:20.539000Z", "patch": { 
"@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "Red Hat: Important: kernel-rt security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225633 - Security Advisory" }, { "title": "Red Hat: Important: kernel security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225626 - Security Advisory" }, { "title": "Red Hat: Moderate: OpenShift Container Platform 4.10.25 bug fix and security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225730 - Security Advisory" }, { "title": "Arch Linux Issues: ", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=CVE-2021-4197" }, { "title": "Ubuntu Security Notice: USN-5500-1: Linux kernel vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-5500-1" }, { "title": "Ubuntu Security Notice: USN-5541-1: Linux kernel (Azure) vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-5541-1" }, { "title": "Ubuntu Security Notice: USN-5515-1: Linux kernel vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-5515-1" }, { "title": "Amazon Linux 2: ALAS2KERNEL-5.4-2022-023", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2KERNEL-5.4-2022-023" }, { "title": "Amazon Linux 2: ALAS2KERNEL-5.10-2022-011", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2KERNEL-5.10-2022-011" }, { "title": "Ubuntu Security Notice: USN-5368-1: Linux kernel vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-5368-1" }, { "title": "Ubuntu Security Notice: USN-5513-1: Linux kernel (AWS) vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-5513-1" }, { "title": "Ubuntu Security Notice: USN-5278-1: Linux kernel (OEM) vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-5278-1" }, { "title": "Ubuntu Security Notice: USN-5505-1: Linux kernel vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-5505-1" }, { "title": "Amazon Linux AMI: ALAS-2022-1571", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux_ami\u0026qid=ALAS-2022-1571" }, { "title": "Red Hat: Important: kernel security, bug fix, and enhancement update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20221988 - Security Advisory" }, { "title": "Ubuntu Security Notice: USN-5337-1: Linux kernel vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-5337-1" }, { "title": "Ubuntu Security Notice: USN-5467-1: Linux kernel vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=USN-5467-1" }, { "title": "Red Hat: Moderate: Migration Toolkit for Containers (MTC) 1.6.5 security and bug fix update", "trust": 0.1, 
"url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20224814 - Security Advisory" }, { "title": "Debian Security Advisories: DSA-5127-1 linux -- security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=46ac8c0354184763812b1f853ffa31b9" }, { "title": "Red Hat: Important: Red Hat Advanced Cluster Management 2.5 security updates, images, and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20224956 - Security Advisory" }, { "title": "Red Hat: Moderate: Migration Toolkit for Containers (MTC) 1.7.2 security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225483 - Security Advisory" }, { "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.4.5 security updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225201 - Security Advisory" }, { "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.3.11 security updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=RHSA-20225392 - Security Advisory" }, { "title": "Amazon Linux 2: ALAS2-2022-1761", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=ALAS2-2022-1761" }, { "title": "Debian Security Advisories: DSA-5173-1 linux -- security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=acd6d70f5129be4a1390575252ec92a6" } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-4197" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-287", "trust": 1.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-410862" }, { "db": "NVD", "id": "CVE-2021-4197" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.2, "url": "https://security.netapp.com/advisory/ntap-20220602-0006/" }, { "trust": 1.2, "url": "https://www.debian.org/security/2022/dsa-5127" }, { "trust": 1.2, "url": "https://www.debian.org/security/2022/dsa-5173" }, { "trust": 1.2, "url": "https://bugzilla.redhat.com/show_bug.cgi?id=2035652" }, { "trust": 1.2, "url": "https://www.oracle.com/security-alerts/cpujul2022.html" }, { "trust": 1.0, "url": "https://lore.kernel.org/lkml/20211209214707.805617-1-tj%40kernel.org/t/" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4197" }, { "trust": 0.4, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.4, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.4, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-4197" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-4203" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1198" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3772" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1012" }, { "trust": 0.2, "url": 
"https://lore.kernel.org/lkml/20211209214707.805617-1-tj@kernel.org/t/" }, { "trust": 0.2, "url": "https://access.redhat.com/errata/rhsa-2022:5633" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-27666" }, { "trust": 0.2, "url": "https://www.debian.org/security/faq" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1353" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1199" }, { "trust": 0.2, "url": "https://www.debian.org/security/" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1158" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1016" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1205" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1195" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1048" }, { "trust": 0.2, "url": "https://security-tracker.debian.org/tracker/linux" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1516" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1204" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3752" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-4157" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3669" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3744" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-13974" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-45485" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3773" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-4002" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-29154" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-43976" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-0941" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-43389" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3634" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27820" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-4189" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-44733" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3752" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21781" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3634" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-19131" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3773" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-4037" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-29154" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-37159" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-4788" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3772" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-0404" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3669" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3764" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20322" }, { "trust": 0.2, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2021-3743" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-43056" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3612" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3764" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37159" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-41864" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-0941" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3612" }, { "trust": 0.2, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-26401" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-27820" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3743" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3737" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-1011" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13974" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20322" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-4083" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-45486" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0322" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-4788" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3737" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-26401" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0286" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0001" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3759" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3759" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4002" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-21781" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0002" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3744" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-19131" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-42739" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0404" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1011" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-1729" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-32250" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4203" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1729" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-1012" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-29368" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32250" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29368" }, { "trust": 0.2, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/287.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": 
"https://ubuntu.com/security/notices/usn-5500-1" }, { "trust": 0.1, "url": "https://security.archlinux.org/cve-2021-4197" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44733" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-28711" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-28715" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-39685" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45402" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0382" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-23222" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1055" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0264" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-39698" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-43975" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/linux-azure-5.13/5.13.0-1021.24~20.04.1" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4135" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0492" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0516" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45095" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/linux-oracle-5.13/5.13.0-1025.30~20.04.1" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0742" }, { "trust": 0.1, "url": "https://ubuntu.com/security/notices/usn-5368-1" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0435" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45480" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26490" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0168" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:4814" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-41617" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1271" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-39293" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1154" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-35492" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35492" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3807" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-39293" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-25032" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3807" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0812" }, { "trust": 0.1, "url": "https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html\u003e" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0494" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1184" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0854" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24958" }, { "trust": 0.1, "url": 
"https://launchpad.net/ubuntu/+source/linux-raspi/5.4.0-1065.75" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/linux-azure/5.4.0-1083.87" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/linux-aws/5.4.0-1078.84" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/linux/5.4.0-117.132" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/linux-hwe-5.4/5.4.0-117.132~18.04.1" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-23040" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-23039" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/linux-gke-5.4/5.4.0-1074.79~18.04.1" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/linux-raspi-5.4/5.4.0-1065.75~18.04.1" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/linux-ibm-5.4/5.4.0-1026.29~18.04.1" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-28390" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1966" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/linux-gkeop-5.4/5.4.0-1046.48~18.04.1" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/linux-azure-5.4/5.4.0-1083.87~18.04.1" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/linux-gcp/5.4.0-1078.84" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/linux-oracle/5.4.0-1076.83" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/linux-oracle-5.4/5.4.0-1076.83~18.04.1" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21499" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/linux-aws-5.4/5.4.0-1078.84~18.04.1" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/linux-azure-fde/5.4.0-1083.87+cvm1.1" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/linux-ibm/5.4.0-1026.29" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/linux-gke/5.4.0-1074.79" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/linux-kvm/5.4.0-1068.72" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26966" }, { "trust": 0.1, "url": "https://ubuntu.com/security/notices/usn-5467-1" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-28389" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/linux-gkeop/5.4.0-1046.48" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-28356" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-34169" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21540" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:5730" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24921" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21540" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:5729" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24921" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21541" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-23772" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-34169" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21541" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html" }, { "trust": 0.1, "url": 
"https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24675" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24675" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23772" }, { "trust": 0.1, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.1, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0235" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3918" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43858" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27191" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21803" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24778" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43565" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4157" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43816" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html-single/install/index#installing" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-41190" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4083" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24785" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23806" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-41190" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4037" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29810" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24450" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:4956" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0778" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3918" } ], "sources": [ { "db": "VULHUB", "id": "VHN-410862" }, { "db": "VULMON", "id": "CVE-2021-4197" }, { "db": "PACKETSTORM", "id": "166636" }, { "db": "PACKETSTORM", "id": "169305" }, { "db": "PACKETSTORM", "id": "167330" }, { "db": "PACKETSTORM", "id": "169299" }, { "db": "PACKETSTORM", "id": "167443" }, { "db": "PACKETSTORM", "id": "167952" }, { "db": "PACKETSTORM", "id": "167822" }, { "db": "PACKETSTORM", "id": "167459" }, { "db": "NVD", "id": "CVE-2021-4197" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-410862" }, { "db": "VULMON", "id": "CVE-2021-4197" }, { "db": "PACKETSTORM", "id": "166636" }, { "db": "PACKETSTORM", "id": "169305" }, { "db": "PACKETSTORM", "id": "167330" }, { "db": "PACKETSTORM", "id": "169299" }, { "db": "PACKETSTORM", "id": "167443" }, { "db": "PACKETSTORM", "id": "167952" }, { "db": "PACKETSTORM", "id": "167822" }, { "db": "PACKETSTORM", "id": "167459" }, { "db": "NVD", "id": "CVE-2021-4197" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": 
{ "@container": "@list" } }, "data": [ { "date": "2022-03-23T00:00:00", "db": "VULHUB", "id": "VHN-410862" }, { "date": "2022-03-23T00:00:00", "db": "VULMON", "id": "CVE-2021-4197" }, { "date": "2022-04-07T16:37:07", "db": "PACKETSTORM", "id": "166636" }, { "date": "2022-05-28T19:12:00", "db": "PACKETSTORM", "id": "169305" }, { "date": "2022-05-31T17:24:53", "db": "PACKETSTORM", "id": "167330" }, { "date": "2022-07-28T19:12:00", "db": "PACKETSTORM", "id": "169299" }, { "date": "2022-06-08T15:58:59", "db": "PACKETSTORM", "id": "167443" }, { "date": "2022-08-04T14:49:08", "db": "PACKETSTORM", "id": "167952" }, { "date": "2022-07-27T17:20:56", "db": "PACKETSTORM", "id": "167822" }, { "date": "2022-06-09T16:11:52", "db": "PACKETSTORM", "id": "167459" }, { "date": "2022-03-23T20:15:10.200000", "db": "NVD", "id": "CVE-2021-4197" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-02-03T00:00:00", "db": "VULHUB", "id": "VHN-410862" }, { "date": "2022-07-25T00:00:00", "db": "VULMON", "id": "CVE-2021-4197" }, { "date": "2023-11-07T03:40:21.077000", "db": "NVD", "id": "CVE-2021-4197" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "local", "sources": [ { "db": "PACKETSTORM", "id": "166636" }, { "db": "PACKETSTORM", "id": "167443" } ], "trust": 0.2 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Ubuntu Security Notice USN-5368-1", "sources": [ { "db": "PACKETSTORM", "id": "166636" } ], "trust": 0.1 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "arbitrary", "sources": [ { "db": "PACKETSTORM", "id": "166636" }, { "db": "PACKETSTORM", "id": "167443" } ], "trust": 0.2 } }
- Seen: The vulnerability was mentioned, discussed, or seen somewhere by the user.
- Confirmed: The vulnerability is confirmed from an analyst perspective.
- Exploited: This vulnerability was exploited and seen by the user reporting the sighting.
- Patched: This vulnerability was successfully patched by the user reporting the sighting.
- Not exploited: This vulnerability was not exploited or seen by the user reporting the sighting.
- Not confirmed: The user expresses doubt about the veracity of the vulnerability.
- Not patched: This vulnerability was not successfully patched by the user reporting the sighting.
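Where these sighting statuses need to be handled in code, they form a small closed vocabulary: three event types (seen, exploited, patched), one analyst judgement (confirmed), and three negations. A minimal sketch; the enum name and string values are chosen here for illustration and are not an official VARIoT schema:

    from enum import Enum

    class SightingStatus(Enum):
        # Names mirror the legend above; the string values are illustrative.
        SEEN = "seen"
        CONFIRMED = "confirmed"
        EXPLOITED = "exploited"
        PATCHED = "patched"
        NOT_EXPLOITED = "not-exploited"
        NOT_CONFIRMED = "not-confirmed"
        NOT_PATCHED = "not-patched"

    # The negated statuses record an absence or a doubt rather than an event,
    # so a consumer may want to treat them separately when aggregating sightings.
    NEGATIVE = {SightingStatus.NOT_EXPLOITED,
                SightingStatus.NOT_CONFIRMED,
                SightingStatus.NOT_PATCHED}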