LWN on Security and Kernel (Paywall Has Ended)

Filed under: Linux, Security
  • A container-confinement breakout

    The recently announced container-confinement breakout for containers started with runc is interesting from a few different perspectives. For one, it affects more than just runc-based containers as privileged LXC-based containers (and likely others) are also affected, though the LXC-based variety are harder to compromise than the runc ones. But it also, once again, shows that privileged containers are difficult—perhaps impossible—to create in a secure manner. Beyond that, it exploits some Linux kernel interfaces in novel ways and the fixes use a perhaps lesser-known system call that was added to Linux less than five years back.

    The runc tool implements the container runtime specification of the Open Container Initiative (OCI), so it is used by a number of different containerization solutions and orchestration systems, including Docker, Podman, Kubernetes, CRI-O, and containerd. The flaw, which uses the /proc/self/exe pseudo-file to gain control of the host operating system (and thus anything else, including other containers, running on the host), has been assigned CVE-2019-5736. It is a massive hole for containers that run with access to the host root user ID (i.e. UID 0), which, sadly, covers most of the containers being run today.

    There are a number of sources of information on the flaw, starting with the announcement from runc maintainer Aleksa Sarai linked above. The discoverers, Adam Iwaniuk and Borys Popławski, put out a blog post about how they found the hole, including some false steps along the way. In addition, one of the LXC maintainers who worked with Sarai on the runc fix, Christian Brauner, described the problems with privileged containers and how CVE-2019-5736 applies to LXC containers. There is a proof of concept (PoC) attached to Sarai's announcement, along with a more detailed PoC that he posted the day after the discoverers' blog post appeared.
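
    The lesser-known system call referred to above is memfd_create(), which was added in Linux 3.17 in 2014. The sketch below illustrates the general idea behind the fix: copy the current executable into a sealed, anonymous in-memory file and re-execute from it, so that /proc/self/exe no longer points at the on-disk binary that a malicious container process could overwrite. This is a simplified illustration of the technique, not the actual runc code.

        /*
         * Simplified illustration of the CVE-2019-5736 mitigation idea:
         * copy the running binary into a sealed memfd and re-exec it, so
         * that /proc/self/exe no longer refers to the on-disk file.
         * Not the actual runc code.
         */
        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/mman.h>
        #include <sys/sendfile.h>
        #include <sys/stat.h>
        #include <unistd.h>

        int main(int argc, char *argv[], char *envp[])
        {
            (void)argc;

            /* Crude guard so this demo re-execs itself only once. */
            if (getenv("RUNNING_FROM_MEMFD")) {
                printf("now running from a sealed in-memory copy\n");
                return 0;
            }

            int src = open("/proc/self/exe", O_RDONLY | O_CLOEXEC);
            if (src < 0) { perror("open"); return 1; }

            /* Anonymous, sealable in-memory file (Linux 3.17+). */
            int memfd = memfd_create("cloned-binary",
                                     MFD_CLOEXEC | MFD_ALLOW_SEALING);
            if (memfd < 0) { perror("memfd_create"); return 1; }

            struct stat st;
            if (fstat(src, &st) < 0) { perror("fstat"); return 1; }

            /* A real implementation would loop on short writes. */
            if (sendfile(memfd, src, NULL, st.st_size) != st.st_size) {
                perror("sendfile"); return 1;
            }

            /* Seal the copy: it can no longer be shrunk, grown, or written. */
            if (fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK | F_SEAL_GROW |
                                          F_SEAL_WRITE | F_SEAL_SEAL) < 0) {
                perror("F_ADD_SEALS"); return 1;
            }

            setenv("RUNNING_FROM_MEMFD", "1", 1);
            fexecve(memfd, argv, envp);   /* only returns on error */
            perror("fexecve");
            return 1;
        }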

  • The Thunderclap vulnerabilities

    It should come as no surprise that plugging untrusted devices into a computer system can lead to a wide variety of bad outcomes, though often enough it works just fine. We have reported on a number of these kinds of vulnerabilities (e.g. BadUSB in 2014) along the way. So it will not shock readers to find out that another vulnerability of this type has been discovered, though it may not sit well that, even after years of vulnerable plug-in buses, there are still no solid protections against these rogue devices. The most recent entrant in this space targets the Thunderbolt interface; the vulnerabilities found have been dubbed "Thunderclap".

    There are several different versions of Thunderbolt, using either Mini DisplayPort connectors (Thunderbolt 1 and 2) or USB Type-C (Thunderbolt 3). According to the long list of researchers behind Thunderclap, all of them are vulnerable to the problems found. Beyond that, PCI Express (PCIe) peripherals are also able to exploit the Thunderclap vulnerabilities, though they are a bit less prone to hotplugging. Thunderclap is the subject of a paper [PDF] and a web site; it is more than just a collection of vulnerabilities, however, as the researchers have also developed and released a hardware and software research platform. A high-level summary of the Thunderclap paper was posted to the Light Blue Touchpaper blog by Theo Markettos, one of the researchers, at the end of February.

  • Core scheduling

    Kernel developers are used to having to defend their work when posting it to the mailing lists, so when a longtime kernel developer describes their own work as "expensive and nasty", one tends to wonder what is going on. The patch set in question is core scheduling from Peter Zijlstra. It is intended to make simultaneous multithreading (SMT) usable on systems where cache-based side channels are a concern, but even its author is far from convinced that it should actually become part of the kernel.

    SMT increases performance by turning one physical CPU into two virtual CPUs that share the hardware; while one is waiting for data from memory, the other can be executing. Sharing a processor this closely has led to security issues and concerns for years, and many security-conscious users disable SMT entirely. The disclosure of the L1 terminal fault vulnerability in 2018 did not improve the situation; for many, SMT simply isn't worth the risks it brings with it.

    But performance matters too, so there is interest in finding ways to make SMT safe (or safer, at least) to use in environments with users who do not trust each other. The coscheduling patch set posted last September was one attempt to solve this problem, but it did not get far and has not been reposted. One obstacle to this patch set was almost certainly its complexity; it operated at every level of the scheduling domain hierarchy, and thus addressed more than just the SMT problem.

    Zijlstra's patch set is focused on scheduling at the core level only, meaning that it is intended to address SMT concerns but not to control higher-level groups of physical processors as a unit. Conceptually, it is simple enough. On kernels where core scheduling is enabled, a core_cookie field is added to the task structure; it is an unsigned long value. These cookies are used to define the trust boundaries; two processes with the same cookie value trust each other and can be allowed to run simultaneously on the same core.
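
    As a purely illustrative sketch (only the core_cookie field name comes from the description above; the structure, function, and values here are hypothetical), the trust check that decides whether two tasks may share a core might look something like this:

        /*
         * Illustrative sketch only -- not Zijlstra's actual patch.  Each task
         * carries an unsigned long core_cookie; two tasks may share the two
         * SMT threads of a physical core only if their cookies match.
         */
        #include <stdbool.h>
        #include <stdio.h>

        struct task_sketch {
            const char   *comm;         /* task name, for the demo below */
            unsigned long core_cookie;  /* trust-domain tag; equal tags trust each other */
        };

        /* Hypothetical check: may @next run on the sibling thread while @curr runs? */
        static bool cookies_compatible(const struct task_sketch *curr,
                                       const struct task_sketch *next)
        {
            return curr->core_cookie == next->core_cookie;
        }

        int main(void)
        {
            struct task_sketch vm_a0 = { "vm-a-vcpu0", 0x1001 };
            struct task_sketch vm_a1 = { "vm-a-vcpu1", 0x1001 };
            struct task_sketch vm_b0 = { "vm-b-vcpu0", 0x2002 };

            printf("%s + %s on one core? %s\n", vm_a0.comm, vm_a1.comm,
                   cookies_compatible(&vm_a0, &vm_a1) ? "yes" : "no");
            printf("%s + %s on one core? %s\n", vm_a0.comm, vm_b0.comm,
                   cookies_compatible(&vm_a0, &vm_b0) ? "yes" : "no");
            return 0;
        }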

  • A kernel unit-testing framework

    For much of its history, the kernel has had little in the way of formal testing infrastructure. It is not entirely an exaggeration to say that testing is what the kernel community kept users around for. Over the years, though, that situation has improved; internal features like kselftest and services like the 0day testing system have increased our test coverage considerably. The story is unlikely to end there; the next addition to the kernel's testing arsenal may be a unit-testing framework called KUnit.

    The KUnit patches, currently in their fourth revision, have been developed by Brendan Higgins at Google. The intent is to enable the easy and rapid testing of kernel components in isolation: unit testing, in other words. That distinguishes KUnit from the kernel's kselftest framework in a couple of significant ways. Kselftest is intended to verify that a given feature works in a running kernel; the tests run in user space and exercise the kernel that the system booted. They can thus be thought of as a sort of end-to-end test, ensuring that specific parts of the entire system are behaving as expected. These tests are important to have, but they do not necessarily test specific kernel subsystems in isolation from all of the others, and they require actually booting the kernel to be tested.

    KUnit, instead, is designed to run more focused tests, and they run inside the kernel itself. To make this easy to do in any setting, the framework makes use of user-mode Linux (UML) to actually run the tests. That may come as a surprise to those who think of UML as a dusty relic from before the kernel had proper virtualization support (its home page is hosted on SourceForge and offers a bleeding-edge 2.6.24 kernel for download), but UML has been maintained over the years. It makes a good platform for something like KUnit, allowing tests to run quickly without rebooting the host system or setting up virtualization.
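
    To give a flavor of what such a test can look like, here is a minimal sketch in the KUnit style. The macro and structure names follow the framework as it was eventually merged and may differ in detail from the fourth-revision patches under discussion; clamp_percent() is simply a made-up helper to exercise.

        /* Minimal KUnit-style test sketch; illustrative, not from the patches. */
        #include <kunit/test.h>

        /* A trivial kernel helper we might want to test in isolation. */
        static int clamp_percent(int value)
        {
            if (value < 0)
                return 0;
            if (value > 100)
                return 100;
            return value;
        }

        static void clamp_percent_test(struct kunit *test)
        {
            /* Each expectation is checked inside the (UML) kernel itself. */
            KUNIT_EXPECT_EQ(test, 0, clamp_percent(-5));
            KUNIT_EXPECT_EQ(test, 42, clamp_percent(42));
            KUNIT_EXPECT_EQ(test, 100, clamp_percent(1000));
        }

        static struct kunit_case clamp_percent_test_cases[] = {
            KUNIT_CASE(clamp_percent_test),
            {}
        };

        static struct kunit_suite clamp_percent_test_suite = {
            .name = "clamp_percent",
            .test_cases = clamp_percent_test_cases,
        };
        kunit_test_suite(clamp_percent_test_suite);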

  • Two topics in user-space access

    Kernel code must often access data that is stored in user space. Most of the time, this access is uneventful, but it is not without its dangers and cannot be done without exercising due care. A couple of recent discussions have made it clear that this care is not always being taken, and that not all kernel developers fully understand how user-space access should be performed. The good news is that kernel developers are currently working on a set of changes to make user-space access safer in the future.
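
    As a generic illustration of the kind of access being discussed (the structure and function names below are invented, and this is not code from the patches in question), kernel code is expected to never dereference a user-supplied pointer directly; it should instead go through helpers such as copy_from_user(), which validate the pointer and handle faults, and then sanity-check the copied values.

        /*
         * Generic sketch of careful user-space access in an ioctl-style
         * handler; names are hypothetical, for illustration only.
         */
        #include <linux/errno.h>
        #include <linux/types.h>
        #include <linux/uaccess.h>

        #define DEMO_MAX_LEN 4096

        struct demo_params {
            u32 flags;
            u64 buffer_len;
        };

        /* Called (hypothetically) from an ioctl handler with a user pointer. */
        static long demo_set_params(void __user *argp)
        {
            struct demo_params params;

            /*
             * Never dereference argp directly: copy_from_user() checks that
             * the pointer lies in user space and handles page faults,
             * returning the number of bytes that could not be copied.
             */
            if (copy_from_user(&params, argp, sizeof(params)))
                return -EFAULT;

            /* Validate the copied values; user space can pass anything. */
            if (params.buffer_len > DEMO_MAX_LEN)
                return -EINVAL;

            /* ... act on the now-validated local copy ... */
            return 0;
        }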


My personal journey from MIT to GPL

As I got started writing open source software, I generally preferred the MIT license. I actually made fun of the “copyleft” GPL licenses, on the grounds that they are less free. I still hold this opinion today: the GPL license is less free than the MIT license - but today, I believe this in a good way.

[...]

I don’t plan on relicensing my historical projects, but my new projects have used the GPL family of licenses for a while now. I think you should seriously consider it as well.


Security Leftovers

  • Yubico recalls government-grade security keys due to security bug

    If you buy a government-grade security key, the one thing you really want from it is government-grade security. It's the very dictionary definition of "you had one job." That's why it's somewhat embarrassing that Yubico has put out a recall notice on its FIPS series of authentication keys which, it turns out, aren't completely secure.

  • [Microsoft's] EternalBlue exploit surfaces in bog-standard mining attack

    A bog-standard attack aimed at planting a cryptocurrency miner has been found to be using advanced targeted attack tools as well, the security firm Trend Micro says, pointing out that this behaviour marks a departure from the norm.

Kernel: Systemd, DXVK, Intel and AMD

  • Systemd Is Now Seeing Continuous Fuzzing By Fuzzit
    In the hope of catching more bugs quickly, systemd now has continuous fuzzing integration via the new "Fuzzit" platform, which provides continuous fuzzing as a service. New this week to systemd is the continuous fuzzing integration, where every pull request or push triggers some quick checks, while a full daily run fuzzes all targets.
  • DXVK 1.2.2 Brings Minor CPU Overhead Optimizations, Game Fixes
    In time for those planning to spend some of this weekend gaming, DXVK lead developer Philip Rebohle announced the release of DXVK 1.2.2, which will hopefully soon be integrated into a Proton update for Steam Play but for now can be built from source. While certain upstream Wine developers have called DXVK a "dead end" and are optimistic about instead piping their WineD3D implementation over Vulkan, for Linux gamers who want to enjoy D3D11 Windows games today, the DXVK library continues to work splendidly, running many Direct3D games with much better performance than the current WineD3D OpenGL code.
  • Intel 19.23.13131 OpenCL NEO Stack Adds Comet Lake Support
    We've seen the Intel Comet Lake support get pieced together in recent months across the different components making up the Intel Linux graphics stack, with the compute-runtime being the latest addition. Comet Lake, as a refresher, is a planned successor to Coffee Lake/Whiskey Lake and is expected to come out this year as yet more 9th Gen hardware, though it should be interesting with its rumored 10-core designs. Since these are more processors with Gen9 graphics, the Comet Lake Linux support basically boils down to adding the new PCI IDs.
  • AMD Wires Its New Runtime Linker Into RadeonSI Gallium3D
    RadeonSI Gallium3D has already shifted over to using this new linker. Making use of .rodata should help with efficiency throughout the driver (more details in this forum thread), but at this point the change mostly lays the groundwork for further improvements to come.

Red Hat and Fedora Leftovers

  • Building IT Transformation Architecture with Red Hat OpenShift
    In the era of mobile applications, the business challenges facing enterprise IT organizations are more dynamic than ever. Many enterprises have difficulty responding in time because of the inherent complexity and risk of integrating emerging technologies into existing IT architectures. In this article, I will share my experience of how to utilize Red Hat OpenShift as a "Middle Platform" (中台) for enterprises to construct their bimodal IT architecture with an agile, scalable, and open strategy. Over the past year, I have discussed the challenges of digital transformation, and possible solutions, with many corporate customers, especially in the financial services industry. Most of their difficulties come from "core systems" that have been in service for more than 10 years.
  • Fedora Community Blog: FPgM report: 2019-24
    Here’s your report of what has happened in Fedora Program Management this week. Elections voting is open through 23:59 UTC on Thursday 20 June. I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.
  • Copr's Dist-Git
    In Copr, we use dist-git to store sources as well; however, our use case is different. In the past, Copr only allowed building from a URL: you provided a URL to your SRC.RPM, and Copr downloaded and built it. This was a problem when a user wanted to resubmit a build, because the original URL very often no longer existed. Therefore we came up with the idea of storing the SRC.RPM somewhere, and dist-git was the obvious first choice.