LWN on Security and Kernel (Paywall Has Ended)

Filed under: Linux, Security
  • A container-confinement breakout

    The recently announced container-confinement breakout for containers started with runc is interesting from a few different perspectives. For one, it affects more than just runc-based containers: privileged LXC-based containers (and likely others) are also vulnerable, though the LXC variety is harder to compromise than the runc one. But it also, once again, shows that privileged containers are difficult—perhaps impossible—to create in a secure manner. Beyond that, it exploits some Linux kernel interfaces in novel ways, and the fixes use a perhaps lesser-known system call that was added to Linux less than five years earlier.

    The runc tool implements the container runtime specification of the Open Container Initiative (OCI), so it is used by a number of different containerization solutions and orchestration systems, including Docker, Podman, Kubernetes, CRI-O, and containerd. The flaw, which uses the /proc/self/exe pseudo-file to gain control of the host operating system (and thus of anything else running on the host, including other containers), has been assigned CVE-2019-5736. It is a massive hole for containers that run with access to the host root user ID (i.e. UID 0), which, sadly, covers most of the containers being run today.

    There are a number of sources of information on the flaw, starting with the announcement from runc maintainer Aleksa Sarai linked above. The discoverers, Adam Iwaniuk and Borys Popławski, put out a blog post about how they found the hole, including some false steps along the way. In addition, one of the LXC maintainers who worked with Sarai on the runc fix, Christian Brauner, described the problems with privileged containers and how CVE-2019-5736 applies to LXC containers. There is a proof of concept (PoC) attached to Sarai's announcement, along with another more detailed PoC he posted the following day after the discoverers' blog post.
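    The fix illustrates that lesser-known system call: memfd_create(), added in Linux 3.17 (2014). The patched runc copies its own binary into an anonymous, sealed in-memory file before entering the container, so a process in the container that opens /proc/self/exe can no longer overwrite the runc binary on the host. A minimal user-space sketch of the sealing idea (function names are mine, not runc's):

```c
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Copy a binary into an anonymous in-memory file and seal it against
 * any further modification, resizing, or unsealing. */
int make_sealed_copy(const char *path)
{
    int fd = memfd_create("sealed-exe", MFD_CLOEXEC | MFD_ALLOW_SEALING);
    if (fd < 0)
        return -1;

    int src = open(path, O_RDONLY);
    if (src < 0) {
        close(fd);
        return -1;
    }

    char buf[4096];
    ssize_t n;
    while ((n = read(src, buf, sizeof(buf))) > 0) {
        if (write(fd, buf, n) != n) {
            close(src);
            close(fd);
            return -1;
        }
    }
    close(src);

    /* Forbid writes and resizing, and forbid removing the seals later. */
    if (fcntl(fd, F_ADD_SEALS,
              F_SEAL_SEAL | F_SEAL_SHRINK | F_SEAL_GROW | F_SEAL_WRITE) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

/* Returns 1 if a sealed copy of our own binary rejects writes with
 * EPERM, which is the property the fix relies on. */
int sealed_copy_rejects_writes(void)
{
    int fd = make_sealed_copy("/proc/self/exe");
    if (fd < 0)
        return 0;
    int rejected = (write(fd, "x", 1) == -1 && errno == EPERM);
    close(fd);
    return rejected;
}
```

    Once F_SEAL_WRITE is in place, any attempt to modify the in-memory copy fails with EPERM, so the attack's write-back through /proc/self/exe has nothing mutable to target.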

  • The Thunderclap vulnerabilities

    It should come as no surprise that plugging untrusted devices into a computer system can lead to a wide variety of bad outcomes—though often enough it works just fine. We have reported on a number of these kinds of vulnerabilities (e.g. BadUSB in 2014) along the way. So it will not shock readers to find out that another vulnerability of this type has been discovered, though it may not sit well that, even after years of vulnerable plug-in buses, there are still no solid protections against these rogue devices. The most recent entrant in this space targets the Thunderbolt interface; the vulnerabilities found have been dubbed "Thunderclap".

    There are several different versions of Thunderbolt, either using Mini DisplayPort connectors (Thunderbolt 1 and 2) or USB Type-C (Thunderbolt 3). According to the long list of researchers behind Thunderclap, all of those are vulnerable to the problems they found. Beyond that, PCI Express (PCIe) peripherals are also able to exploit the Thunderclap vulnerabilities, though they are a bit less prone to hotplugging. Thunderclap is the subject of a paper [PDF] and web site. It is more than just a bunch of vulnerabilities, however, as there is a hardware and software research platform that they have developed and released. A high-level summary of the Thunderclap paper was posted to the Light Blue Touchpaper blog by Theo Markettos, one of the researchers, at the end of February.

  • Core scheduling

    Kernel developers are used to having to defend their work when posting it to the mailing lists, so when a longtime kernel developer describes their own work as "expensive and nasty", one tends to wonder what is going on. The patch set in question is core scheduling from Peter Zijlstra. It is intended to make simultaneous multithreading (SMT) usable on systems where cache-based side channels are a concern, but even its author is far from convinced that it should actually become part of the kernel.

    SMT increases performance by turning one physical CPU into two virtual CPUs that share the hardware; while one is waiting for data from memory, the other can be executing. Sharing a processor this closely has led to security issues and concerns for years, and many security-conscious users disable SMT entirely. The disclosure of the L1 terminal fault vulnerability in 2018 did not improve the situation; for many, SMT simply isn't worth the risks it brings with it.

    But performance matters too, so there is interest in finding ways to make SMT safe (or safer, at least) to use in environments with users who do not trust each other. The coscheduling patch set posted last September was one attempt to solve this problem, but it did not get far and has not been reposted. One obstacle to this patch set was almost certainly its complexity; it operated at every level of the scheduling domain hierarchy, and thus addressed more than just the SMT problem.

    Zijlstra's patch set is focused on scheduling at the core level only, meaning that it is intended to address SMT concerns but not to control higher-level groups of physical processors as a unit. Conceptually, it is simple enough. On kernels where core scheduling is enabled, a core_cookie field is added to the task structure; it is an unsigned long value. These cookies are used to define the trust boundaries; two processes with the same cookie value trust each other and can be allowed to run simultaneously on the same core.
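    As a rough illustration of the cookie rule (this toy model is mine; the actual scheduler logic in Zijlstra's patches is far more involved), a second SMT sibling may only be given a task whose cookie matches the first sibling's pick, and is otherwise forced idle:

```c
#include <stddef.h>

/* Toy model of core scheduling: each task carries a cookie, and tasks
 * sharing a cookie trust each other to run on the same core. */
struct task {
    const char *name;
    int prio;                  /* higher value = higher priority */
    unsigned long core_cookie; /* trust-group identifier */
};

/* Pick the highest-priority task, optionally restricted to one cookie
 * and excluding a task already picked for the other sibling. */
const struct task *pick_task(const struct task *rq, size_t n,
                             const unsigned long *cookie,
                             const struct task *exclude)
{
    const struct task *best = NULL;
    for (size_t i = 0; i < n; i++) {
        if (&rq[i] == exclude)
            continue;
        if (cookie && rq[i].core_cookie != *cookie)
            continue; /* untrusted: could snoop via shared caches */
        if (!best || rq[i].prio > best->prio)
            best = &rq[i];
    }
    return best;
}

/* Returns 1 when, with two mutually untrusting tenants runnable, the
 * sibling is left idle rather than paired with an untrusted task. */
int sibling_forced_idle(void)
{
    const struct task rq[] = {
        { "tenant-a", 1, 100 },
        { "tenant-b", 3, 200 },
    };
    const struct task *first = pick_task(rq, 2, NULL, NULL);
    /* the sibling may only run a task with the same cookie */
    const struct task *second = pick_task(rq, 2, &first->core_cookie, first);
    return first->prio == 3 && second == NULL;
}
```

    The forced-idle case is where the "expensive" part of Zijlstra's description comes from: protecting the trust boundary can mean leaving half of a core unused.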

  • A kernel unit-testing framework

    March 1, 2019

    For much of its history, the kernel has had little in the way of formal testing infrastructure. It is not entirely an exaggeration to say that testing is what the kernel community kept users around for. Over the years, though, that situation has improved; internal features like kselftest and services like the 0day testing system have increased our test coverage considerably. The story is unlikely to end there, though; the next addition to the kernel's testing arsenal may be a unit-testing framework called KUnit.

    The KUnit patches, currently in their fourth revision, have been developed by Brendan Higgins at Google. The intent is to enable the easy and rapid testing of kernel components in isolation — unit testing, in other words. That distinguishes KUnit from the kernel's kselftest framework in a couple of significant ways. Kselftest is intended to verify that a given feature works in a running kernel; the tests run in user space and exercise the kernel that the system booted. They thus can be thought of as a sort of end-to-end test, ensuring that specific parts of the entire system are behaving as expected. These tests are important to have, but they do not necessarily test specific kernel subsystems in isolation from all of the others, and they require actually booting the kernel to be tested.

    KUnit, instead, is designed to run more focused tests, and they run inside the kernel itself. To make this easy to do in any setting, the framework makes use of user-mode Linux (UML) to actually run the tests. That may come as a surprise to those who think of UML as a dusty relic from before the kernel had proper virtualization support (its home page is hosted on SourceForge and offers a bleeding-edge 2.6.24 kernel for download), but UML has been maintained over the years. It makes a good platform for something like KUnit, allowing tests to run without rebooting the host system or setting up virtualization.
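    Real KUnit tests are C code built against the kernel's <kunit/test.h> and registered in a struct kunit_suite; the standalone mock below only imitates the shape of such a test, with a stand-in for the framework's expectation macros and a trivial function playing the role of the kernel code under test:

```c
#include <stdio.h>

/* Mock of the KUnit interface, for illustration only; the real
 * framework provides struct kunit and the KUNIT_EXPECT_* macros. */
struct kunit {
    int failures;
};

#define KUNIT_EXPECT_EQ(test, left, right)                      \
    do {                                                        \
        if ((left) != (right)) {                                \
            (test)->failures++;                                 \
            fprintf(stderr, "expectation failed at %s:%d\n",    \
                    __FILE__, __LINE__);                        \
        }                                                       \
    } while (0)

/* The unit under test: a trivial helper standing in for kernel code. */
static int roundup_pow2(int x)
{
    int p = 1;
    while (p < x)
        p <<= 1;
    return p;
}

/* A KUnit-style test case: one small function exercising the unit in
 * isolation, with no booted kernel or user-space harness required. */
static void roundup_pow2_test(struct kunit *test)
{
    KUNIT_EXPECT_EQ(test, roundup_pow2(1), 1);
    KUNIT_EXPECT_EQ(test, roundup_pow2(3), 4);
    KUNIT_EXPECT_EQ(test, roundup_pow2(8), 8);
}

/* Run the "suite"; returns the number of failed expectations. */
int run_suite(void)
{
    struct kunit test = { 0 };
    roundup_pow2_test(&test);
    return test.failures;
}
```

    The point of the pattern is that each case exercises one small unit directly, which is what lets KUnit tests run quickly under UML rather than needing a full booted system.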

  • Two topics in user-space access

    Kernel code must often access data that is stored in user space. Most of the time, this access is uneventful, but it is not without its dangers and cannot be done without exercising due care. A couple of recent discussions have made it clear that this care is not always being taken, and that not all kernel developers fully understand how user-space access should be performed. The good news is that kernel developers are currently working on a set of changes to make user-space access safer in the future.
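    The canonical pattern at the center of such discussions is: validate the user-supplied range first, then copy, with the copy routine reporting how many bytes it could not transfer — the calling convention the kernel's copy_from_user() actually uses. Below is a user-space mock of that pattern (the function names and the static "user window" standing in for user memory are illustrative, not kernel code):

```c
#include <stddef.h>
#include <string.h>

/* A static buffer standing in for a process's user address space. */
static char user_window[64] = "data from user space";

/* Analogous to access_ok(): reject ranges outside the user window,
 * taking care that off + len cannot overflow. */
static int range_ok(size_t off, size_t len)
{
    return off <= sizeof(user_window) && len <= sizeof(user_window) - off;
}

/* Mimics copy_from_user()'s contract: returns the number of bytes
 * that could NOT be copied, so 0 means complete success. */
size_t mock_copy_from_user(void *dst, size_t off, size_t len)
{
    if (!range_ok(off, len))
        return len;              /* nothing copied: all bytes remain */
    memcpy(dst, user_window + off, len);
    return 0;
}

/* Returns 1 if an in-range copy succeeds and an out-of-range one is
 * refused in full, as the convention requires. */
int checked_copy_demo(void)
{
    char buf[8];
    if (mock_copy_from_user(buf, 0, 4) != 0)
        return 0;
    if (mock_copy_from_user(buf, 60, 8) != 8)  /* past the window */
        return 0;
    return 1;
}
```

    Skipping the range check (or checking it with arithmetic that can overflow) is exactly the class of mistake the recent discussions turned up, since an unchecked user pointer lets user space steer the kernel into reading or writing arbitrary memory.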

More in Tux Machines

AMD Releases Firmware Update To Address SEV Vulnerability

A new security vulnerability affecting AMD's Secure Encrypted Virtualization (SEV) has been made public: CVE-2019-9836 covers an insecure cryptographic implementation in the AMD Secure Processor used by SEV. Fortunately, the issue is addressed by a firmware update.

today's howtos and programming bits

  • How to get the latest Wine on Linux Mint 19
  • How to Install KDE Plasma in Arch Linux (Guide)
  • 0 bytes left

    Around 2003–2004, a friend and I wrote a softsynth that was used in a 64 kB intro. Now, 14 years later, cTrix and Pselodux picked it up and made a really cool 32 kB tune with it! Who would have thought.

  • A month full of learning with Gnome-GSoC

    This month I was able to work with Libgit2-glib, where Albfan mentored me on how to port functions from Libgit2 to Libgit2-glib. Libgit2-glib now has the functionality to compare two buffers. I think this feature can also benefit other projects that require diffs from buffers, for example Builder for its diff view, and gedit.

  • Google Developers Are Looking At Creating A New libc For LLVM

    As part of Google's consolidation of its different toolchains around LLVM, the company is exploring the possibility of writing a new C library ("libc") implementation. Google is looking to develop a new C standard library within LLVM that will better suit its use cases, and likely those of others in the community too.

  • How We Made Conda Faster in 4.7

    We’ve witnessed a lot of community grumbling about Conda’s speed, and we’ve experienced it ourselves. Thanks to a contract from NASA via the SBIR program, we’ve been able to dedicate a lot of time recently to optimizing Conda.  We’d like to take this opportunity to discuss what we did, and what we think is left to do.

  • TensorFlow CPU optimizations in Anaconda

    By Stan Seibert, Anaconda, Inc. & Nathan Greeneltch, Intel Corporation

    TensorFlow is one of the most commonly used frameworks for large-scale machine learning, especially deep learning (we’ll call it “DL” for short). This popular framework has been increasingly used to solve a variety of complex research, business and social problems. Since 2016, Intel and Google have worked together to optimize TensorFlow for DL training and inference speed performance on CPUs. The Anaconda Distribution has included this CPU-optimized TensorFlow as the default for the past several TensorFlow releases.

    Performance optimizations for CPUs are provided by both software-layer graph optimizations and hardware-specific code paths. In particular, the software-layer graph optimizations use the Intel Math Kernel Library for Deep Neural Networks (Intel MKL-DNN), an open-source performance library for DL applications on Intel architecture. Hardware-specific code paths are further accelerated with advanced x86 processor instruction sets, specifically Intel Advanced Vector Extensions 512 (Intel AVX-512) and the new instructions found in the Intel Deep Learning Boost (Intel DL Boost) feature on 2nd-generation Intel Xeon Scalable processors. Let’s take a closer look at both optimization approaches and how to get these accelerations from Anaconda.

  • PyCoder’s Weekly: Issue #374 (June 25, 2019)

Video/Audio: Linux in the Ham Shack, How to install OpenMandriva Lx 4.0 and "Debian Package of the Day"

  • LHS Episode #290: Where the Wild Things Are

    Welcome to Episode 290 of Linux in the Ham Shack. In this short format show, the hosts discuss the recent ARRL Field Day, LIDs getting theirs, vandalism in Oregon, a Canonical flip-flop, satellite reception with SDR and much more. Thank you for tuning in and we hope you have a wonderful week.

  • How to install OpenMandriva Lx 4.0

    In this video, I am going to show how to install OpenMandriva Lx 4.0.

  • Jonathan Carter: PeerTube and LBRY

    I have many problems with YouTube; who doesn’t these days, right? I’m not going to go into all the nitty gritty of it in this post, but there’s a video from an LBRY advocate that does a good job of summarizing some of the issues using clips from YouTube creators. I have a channel on YouTube that I have lots of plans for. I started making videos last year and created 59 episodes of Debian Package of the Day. I’m proud that I got so far, because I tend to lose interest in things after I figure out how they work or how to do them. I suppose some people have assumed that my video channel is dead because I haven’t uploaded recently, but I’ve just been really busy and, in recent weeks, also a bit tired as a result. Things should pick up again soon.

Games: Steam Summer Sale, Last Moon, Ubuntu-Valve-Canonical Faceoff

  • Steam Summer Sale 2019 is live, here’s what to look out for Linux fans

    Another year, another massive sale is now live on Steam. Let’s take a look at what Valve are doing this year and what you should be looking out for. This time around, Valve aren’t doing any special trading cards. They’re trying something a little different! You will be entering the "Steam Grand Prix" by joining a team (go team Hare!), earning points for rewards and having a shot at winning some free games in the process. Sounds like a good bit of fun; the specific-game challenges are a nice touch.

  • Last Moon, a 2D action-RPG with a gorgeous vibrant style will be coming to Linux next year

    Sköll Studio managed to capture my attention recently, with some early footage of their action-RPG 'Last Moon' popping up in my feed, and it looks gorgeous. It takes inspiration from classics like The Legend of Zelda: A Link to the Past, Secret of Mana, Chrono Trigger and a ton more, and you can see it quite clearly. Last Moon takes place in a once-peaceful kingdom where an ancient and powerful mage has put a curse on the moon; as the Lunar Knight, you need to stop all this insanity and bring back peace.

  • Ubuntu Takes A U-Turn with 32-Bit Support

    Canonical will continue to support legacy applications and libraries. Canonical, the maker of the world’s most popular Linux-based distribution, Ubuntu, has revived support for 32-bit libraries after feedback from the WINE, Ubuntu Studio and Steam communities. Last week Canonical announced that its engineering teams had decided that Ubuntu should not continue to carry i386 forward as an architecture. “Consequently, i386 will not be included as an architecture for the 19.10 release, and we will shortly begin the process of disabling it for the eoan series across Ubuntu infrastructure,” wrote Will Cooke, Director of Ubuntu Desktop at Canonical.

  • Steam and Ubuntu clash over 32-bit libs

    It has been a tumultuous week for gaming on Linux. Last Tuesday afternoon, Canonical's Steve Langasek announced that 32-bit libs would be frozen (kept as-is, with no new builds or updates) as of this October's interim 19.10 release, codenamed "Eoan Ermine." Langasek was pretty clear that this did not mean abandoning support for running 32-bit applications, however.

  • Linux gamers take note: Steam won’t support the next version of Ubuntu

    Valve has announced that from the next version of Ubuntu (19.10), it will no longer support Steam on Ubuntu, the most popular flavor of Linux, due to the distro dropping support for 32-bit packages. This all kicked off when Canonical, developer of Ubuntu, announced that it was seemingly completely dropping support for 32-bit in Ubuntu 19.10. However, following a major outcry, a further clarification (or indeed, change of heart) came from the firm stating that there will actually be limited support for 32-bit going forward (although updates for 32-bit libraries will no longer be delivered, effectively leaving them in a frozen state).

  • Valve killing Steam Support for some Ubuntu users

    A few years ago the announcement that Steam would begin supporting Linux was a big deal: it meant that anyone who preferred to rock an open-source operating system over Mac OS or Windows 10 would have instant buy-it-and-play-it access to a large catalog of game titles that would have otherwise taken a whole lot of tweaking to get up and running or wouldn't have worked for them at all. For some, at least, the party may be coming to an end.

  • Steam is dropping support for Ubuntu, but not Linux entirely

    The availability of Steam on Linux has been a boon for gaming on the platform, especially with the recent addition of the Steam Play compatibility layer for running Windows-only games. Valve has always recommended that gamers run Ubuntu Linux, the most popular desktop Linux distribution, but that's now changing.

  • Canonical (sort of) backtracks: Ubuntu will continue to support (some) 32-bit software

    A few days after announcing it would effectively drop support for 32-bit software in future versions of the Ubuntu operating system, Canonical has decided to “change our plan and build selected 32-bit i386 packages.” The company’s original decision sparked some backlash when it became clear that some existing apps and games would no longer run on Ubuntu 19.10 if the change were to proceed as planned. Valve, for example, announced it would continue to support older versions of Ubuntu, allowing users to continue running its popular Steam game client. But moving forward, the company said it would be focusing its Steam for Linux efforts on a different GNU/Linux distribution.

  • Just kidding? Ubuntu 32-bit moving forward, no word yet from Valve

    Due in part to the feedback given to the group over the weekend, and because of their connections with Valve, Canonical did an about-face today. They’ve suggested that feedback from gamers, Ubuntu Studio, and the WINE community led them to change their plan, and that they will “build selected 32-bit i386 packages for Ubuntu 19.10 and 20.04 LTS.” Whether this will change Valve’s future with Ubuntu Steam, we’ll see.

  • Canonical backtracks on 32-bit Ubuntu cull, but warns that on your head be it

    CANONICAL HAS CONFIRMED a U-Turn on the controversial decision to drop 32-bit support for Ubuntu users later this year. The company has faced criticism from users who aren't happy with the plan to make Ubuntu purely 64-bit, which culminated at the weekend with Steam announcing it would pull support for Ubuntu. Many Steam games were never made in 64-bit and it would, therefore, devalue the offer. However, Canonical confirmed on Monday that following feedback from the community, it was clear that there is still a demand, and indeed a need for 32-bit binaries, and as such, it will provide "selected" builds for both Ubuntu 19.10 and the forthcoming Ubuntu 20.04. Canonical's announcement spoke of the highly passionate arguments from those who are in favour of maintaining both versions, thus forcing the team to take notice. However, it has made it clear that it's doing so under the weight of expectation, not because it agrees. "There is a real risk to anybody who is running a body of software that gets little testing. The facts are that most 32-bit x86 packages are hardly used at all," the firm said.