FlexiWAN Adopts an 'Open' Slant

Filed under: OSS, Security
  • Stealthy Start-Up Portends 'Second Wave of SD-WAN'
  • The First SD-WAN Open Source Driving the Second Wave of SD-WAN by flexiWAN
  • flexiWAN Launches With Open Source SD-WAN Architecture

    Will open source usher in the second-wave of SD-WAN? Startup flexiWAN's co-founder and CEO Amir Zmora thinks so.

  • FlexiWAN soft launches SD-WAN software based on open source architecture

    Israel-based start-up FlexiWAN has started conducting proof-of-concept trials to test its SD-WAN software product, which aims to use open source architecture as a differentiator. With this approach, the company hopes to attract IT managers by providing more control over the capabilities and elements within their networks.

  • FlexiWAN pushes SD-WAN into an open source architecture

    Among the goals of flexiWAN co-founder and CEO Amir Zmora is to give enterprises and service providers the ability to differentiate their SD-WAN services instead of relying on SD-WAN vendors to define them.

    After years of working in the VoIP space, and after attending numerous industry conferences where SD-WAN was a hot topic, Zmora said that he came to the realization that SD-WAN solutions were closed black boxes that didn't enable innovation.

    [...]

    Chua said he has been waiting to see an open-source approach to SD-WAN. He said there were two elements to SD-WAN: the SD-WAN element and the universal CPE element.

    "So, on the SD-WAN side of things, which is, I think, where he's (Zmora) starting, there are elements in place in open source where you can try to cobble things together to make an SD-WAN solution," Chua said. "So, there's IPSec or an open SSL VPN, firewalls, things like that.

    "What's missing is that cloud control policy elements that aren't quite there. So, there's no open source equivalent, that I know of, on the whole cloud control side for the centralized policies, centralized configuration and of all the different SD-WAN components out there."

More in Tux Machines

Kernel Articles at LWN (Paywall Just Expired)

  • Filesystem sandboxing with eBPF

    Bijlani is focused on a specific type of sandbox: a filesystem sandbox. The idea is to restrict access to sensitive data when running these untrusted programs. The rules would need to be dynamic as the restrictions might need to change based on the program being run. Some examples he gave were to restrict access to the ~/.ssh/id_rsa* files or to only allow access to files of a specific type (e.g. only *.pdf for a PDF reader).

    He went through some of the existing solutions to show why they did not solve his problem, comparing them on five attributes: allowing dynamic policies, usable by unprivileged users, providing fine-grained control, meeting the security needs for running untrusted code, and avoiding excessive performance overhead.

    Unix discretionary access control (DAC)—file permissions, essentially—is available to unprivileged users, but fails most of the other measures. Most importantly, it does not suffice to keep untrusted code from accessing files owned by the user running the code. SELinux mandatory access control (MAC) does check most of the boxes (as can be seen in the talk slides [PDF]), but is not available to unprivileged users. Namespaces (or chroot()) can be used to isolate filesystems and parts of filesystems, but cannot enforce security policies, he said.

    Using LD_PRELOAD to intercept calls to filesystem operations (e.g. open() or write()) is a way for unprivileged users to enforce dynamic policies, but it can be bypassed fairly easily. System calls can be invoked directly, rather than going through the library calls, or files can be mapped with mmap(), which will allow I/O to the files without making system calls. Similarly, ptrace() can be used, but it suffers from time-of-check-to-time-of-use (TOCTTOU) races, which would allow the security protections to be bypassed.
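
    The LD_PRELOAD weakness is easy to see in code. Below is a minimal, hypothetical interceptor for open() in C; it is not from Bijlani's tool, and the hard-coded ~/.ssh rule stands in for a real dynamic policy. Any program that issues the openat() system call directly, or maps a file with mmap(), never touches this wrapper.

        /* sandbox.c: toy LD_PRELOAD interceptor (illustrative only).
         * Build: gcc -shared -fPIC sandbox.c -o sandbox.so -ldl
         * Run:   LD_PRELOAD=./sandbox.so ./untrusted-program
         */
        #define _GNU_SOURCE
        #include <dlfcn.h>
        #include <errno.h>
        #include <fcntl.h>
        #include <stdarg.h>
        #include <string.h>

        int open(const char *path, int flags, ...)
        {
            /* Hard-coded stand-in for a dynamic policy: deny ~/.ssh. */
            if (strstr(path, "/.ssh/") != NULL) {
                errno = EACCES;
                return -1;
            }

            /* open() only takes a mode argument when O_CREAT is set. */
            mode_t mode = 0;
            if (flags & O_CREAT) {
                va_list ap;
                va_start(ap, flags);
                mode = (mode_t)va_arg(ap, int);
                va_end(ap);
            }

            /* Forward everything else to the real libc open(). */
            int (*real_open)(const char *, int, ...) =
                (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");
            return real_open(path, flags, mode);
        }

    A direct syscall(SYS_openat, ...) bypasses this entirely, which is exactly the shortcoming that pushes the design toward in-kernel enforcement with eBPF.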

  • Generalizing address-space isolation

    Linux systems have traditionally run with a single address space that is shared by user and kernel space. That changed with the advent of the Meltdown vulnerability, which forced the merging of kernel page-table isolation (KPTI) at the end of 2017. But, Mike Rapoport said during his 2019 Open Source Summit Europe talk, that may not be the end of the story for address-space isolation. There is a good case to be made for increasing the separation of address spaces, but implementing that may require some fundamental changes in how kernel memory management works.

    Currently, Linux systems still use a single address space, at least when they are running in kernel mode. It is efficient and convenient to have everything visible, but there are security benefits to be had from splitting the address space apart. Memory that is not actually mapped is a lot harder for an attacker to get at.

    The first step in that direction was KPTI. It has performance costs, especially around transitions between user and kernel space, but there was no other option that would address the Meltdown problem. For many, that's all the address-space isolation they would like to see, but that hasn't stopped Rapoport from working to expand its use.

  • Identifying buggy patches with machine learning

    The stable kernel releases are meant to contain as many important fixes as possible; to that end, the stable maintainers have been making use of a machine-learning system to identify patches that should be considered for a stable update. This exercise has had some success but, at the 2019 Open Source Summit Europe, Sasha Levin asked whether this process could be improved further. Might it be possible for a machine-learning system to identify patches that create bugs and intercept them, so that the fixes never become necessary?

    Any kernel patch that fixes a bug, Levin began, should include a tag marking it for the stable updates. Relying on that tag turns out to miss a lot of important fixes, though. About 3-4% of the mainline patch stream was being marked, but the number of patches that should be put into the stable releases is closer to 20% of the total. Rather than try to get developers to mark more patches, he developed his machine-learning system to identify fixes in the mainline patch stream automatically and queue them for manual review.

    This system uses a number of heuristics, he said. If the changelog contains language like "fixes" or "causes a panic", it's likely to be an important fix. Shorter patches tend to be candidates.
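
    As a toy illustration only (Levin's real system is a trained model, not this), the changelog-keyword and patch-size heuristics described above might look like the following C filter, which scores a patch read from stdin:

        /* score-patch.c: toy stable-candidate scorer (illustrative only).
         * Reads a patch (changelog + diff) on stdin and prints a score.
         * Build: gcc score-patch.c -o score-patch
         */
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            /* Keywords the talk mentioned as signals of an important fix. */
            static const char *keywords[] = { "fixes", "causes a panic" };
            char line[4096];
            int score = 0, nlines = 0;

            while (fgets(line, sizeof(line), stdin)) {
                for (size_t i = 0; i < sizeof(keywords) / sizeof(keywords[0]); i++)
                    if (strstr(line, keywords[i]))
                        score++;
                nlines++;
            }

            /* Shorter patches tend to be fix candidates; this threshold
             * is invented for the example. */
            if (nlines < 100)
                score++;

            printf("stable-candidate score: %d\n", score);
            return 0;
        }

    Something like ./score-patch < 0001-fix-null-deref.patch would then flag high-scoring patches for manual review.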

  • Next steps for kernel workflow improvement

    The kernel project's email-based development process is well established and has some strong defenders, but it is also showing its age. At the 2019 Kernel Maintainers Summit, it became clear that the kernel's processes are much in need of updating, and that the maintainers are beginning to understand that. It is one thing, though, to establish goals for an improved process; it is another to actually implement that process and convince developers to use it. At the 2019 Open Source Summit Europe, a group of 20 or so maintainers and developers met in the corner of a noisy exhibition hall to try to work out what some of the first steps in that direction might be.

    The meeting was organized and led by Konstantin Ryabitsev, who is in charge of kernel.org (among other responsibilities) at the Linux Foundation (LF). Developing the kernel by emailing patches is suboptimal, he said, especially when it comes to dovetailing with continuous-integration (CI) processes, but it still works well for many kernel developers. Any new processes will have to coexist with the old, or they will not be adopted. There are, it seems, some resources at the LF that can be directed toward improving the kernel's development processes, especially if it is clear that this work is something that the community wants.

Server Leftovers

  • Knative at 1: New Changes, New Opportunities

    This summer marked the one-year anniversary of Knative, an open-source project that provides the fundamental building blocks for serverless workloads in Kubernetes. In its relatively short life (so far), Knative is already delivering on its promise to boost organizations’ ability to leverage serverless and FaaS (functions as a service).

    Knative isn’t the only serverless offering for Kubernetes, but it has become a de facto standard because it arguably has a richer set of features and can be integrated more smoothly than the competition. And the Knative project continues to evolve to address businesses’ changing needs. In the last year alone, the platform has seen many improvements, giving organizations looking to expand their use of Kubernetes through serverless new choices, new considerations and new opportunities.

  • Redis Labs Leverages Kubernetes to Automate Database Recovery

    Redis Labs today announced it has enhanced the Operator software for deploying its database on Kubernetes clusters to include an automatic cluster recovery that enables customers to manage a stateful service as if it were stateless. Announced at Redis Day, the latest version of Kubernetes Operator for Redis Enterprise makes it possible to spin up a new instance of a Redis database in minutes.

    Howard Ting, chief marketing officer for Redis Labs, says as Kubernetes has continued to gain traction, it became apparent that IT organizations need tools to provision Redis Enterprise for Kubernetes clusters. That requirement led Redis Labs to embrace Operator software for Kubernetes developed by CoreOS, which has since been acquired by Red Hat.

    IT teams can either opt to recover databases manually using Kubernetes Operator or configure the tool to recover databases automatically anytime a database goes offline. In either case, he says, all datasets are loaded and balanced across the cluster without any need for manual workflows.

  • Dare to Transform IT with SUSE Global Services

Audiocasts/Shows: FLOSS Weekly and Linux Headlines

  • FLOSS Weekly 555: Emissions API

    Emissions API makes satellite-based emission data easy to access for everyone. The project strives to create an application interface that lowers the barrier to using the data for visualization and/or analysis.
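
    As a rough sketch of what client access to such an interface can look like, here is a minimal libcurl fetch in C; the URL is a placeholder, not a documented Emissions API endpoint:

        /* fetch.c: minimal HTTP GET with libcurl (illustrative only).
         * Build: gcc fetch.c -o fetch -lcurl
         */
        #include <stdio.h>
        #include <curl/curl.h>

        int main(void)
        {
            curl_global_init(CURL_GLOBAL_DEFAULT);
            CURL *curl = curl_easy_init();
            if (!curl)
                return 1;

            /* Placeholder URL: substitute a real Emissions API route. */
            curl_easy_setopt(curl, CURLOPT_URL,
                             "https://api.example.org/v2/carbonmonoxide/average.json");

            /* With no write callback set, the response body goes to stdout. */
            CURLcode res = curl_easy_perform(curl);
            if (res != CURLE_OK)
                fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

            curl_easy_cleanup(curl);
            curl_global_cleanup();
            return res == CURLE_OK ? 0 : 1;
        }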

  • 2019-11-13 | Linux Headlines

    It’s time to update your kernel again as yet more Intel security issues come to light, good news for container management and self-hosted collaboration, and Brave is finally ready for production.

Bill Wear, Developer Advocate for MAAS: foo.c

I remember my first foo. It was September, 1974, on a PDP-11/40, in the second-floor lab at the local community college. It was an amazing experience for a fourteen-year-old, admitted at 12 to audit night classes because his dad was a part-time instructor and full-time polymath.

I should warn you, I'm not the genius in the room. I maintained a B average in math and electrical engineering, but A+ averages in English, languages, programming, and organic chemistry (yeah, about that...). The genius was my Dad, the math wizard, the US Navy CIC Officer. More on him in a later blog; he's relevant to what MAAS does in a big way. Okay, so I'm more of a language (and logic) guy. But isn't code where math meets language and logic?

Fifth Edition Research UNIX had just been licensed to educational institutions at no cost, and since this college was situated squarely in the middle of the military-industrial complex, scoring a Hulking Giant was easy. Finding good code to run on it? That was another issue, until Bell Labs offered up a freebie.

It was amazing! Getting the computer to do things on its own, via ASM and FORTRAN, was not new to me. What was new was the simplicity of the whole thing. Mathematically, UNIX and C were incredibly complex, incorporating all kinds of network theory and topology and numerical methods that (frankly) haven't always been my favorite cup of tea. I'm not even sure if Computer Science was a thing yet. But the amazing part? Here was an OS which took all that complexity and translated it to simple logic: everything is a file; small is beautiful; do one thing well. Didn't matter that it was cranky and buggy and sometimes dumped your perfectly-okay program in the bit bucket. It was a thrill to be able to do something without having to obsess over the math underneath.

Also: How to upgrade to Ubuntu 20.04 Daily Builds from Ubuntu 19.10