
Not So Open Any More: Elasticsearch Relicensing and Implications for Open Source Search

Filed under: OSS, Legal

Elastic, the company founded by the creators of the Elasticsearch search server, recently announced a change to the license of its core product. Previously available under the permissive Apache License 2.0, future versions of the software will be dual-licensed, allowing users to choose between Elastic’s own license and the Server Side Public License (SSPL) created by MongoDB.

What does this change mean for users of the software? At this point I should note that although I am very familiar with open source search engines, I am not a lawyer — so please do take your own legal advice!


The SSPL is Not an Open Source License


    We’ve seen that several companies have abandoned their original dedication to the open source community by switching their core products from an open source license, one approved by the Open Source Initiative, to a “fauxpen” source license. The hallmark of a fauxpen source license is that those who made the switch claim that their product remains “open” under the new license, when the new license has actually taken away user rights.

    The license du jour is the Server Side Public License. This license was submitted to the Open Source Initiative for approval but later withdrawn by the license steward when it became clear that the license would not be approved.

    Open source licenses are the foundation for the open source software ecosystem, a system that fosters and facilitates the collaborative development of software. Fauxpen source licenses allow a user to view the source code but do not grant other highly important rights protected by the Open Source Definition, such as the right to make use of the program in any field of endeavor. By design, and as the most recent adopter, Elastic, explained in a post it unironically titled “Doubling Down on Open,” the new license means the company can “restrict cloud service providers from offering our software as a service,” in violation of OSD6. Elastic didn’t double down; it threw its cards in.

Banon: License changes to Elasticsearch and Kibana


    Shay Banon first announced that Elastic would move its Apache 2.0-licensed source code in Elasticsearch and Kibana to be dual licensed under Server Side Public License (SSPL) and the Elastic License. "To be clear, our distributions starting with 7.11 will be provided only under the Elastic License, which does not have any copyleft aspects. If you are building Elasticsearch and/or Kibana from source, you may choose between SSPL and the Elastic License to govern your use of the source code."

    In another post Banon added some clarification. "SSPL, a copyleft license based on GPL, aims to provide many of the freedoms of open source, though it is not an OSI approved license and is not considered open source."

Elasticsearch licence changed as owner chases more revenue


    The company behind Elasticsearch has changed its licensing in order to prevent "cloud service providers from offering our products as a service without sharing their modifications and the source code of their service management layers".

Elastic promises "open"—delivers proprietary


    Open-source software is famously able to be used by anyone for any purpose; those are some of the keystones of the open source definition. But some companies that run open-source projects are increasingly unhappy that others are reaping some of the profits from those projects. That has led to various "license reform" efforts meant to try to capture those profits. So far, those efforts have only led to non-open-source licenses, and thus to projects that are no longer open source. We are seeing that play out yet again with Elastic's mid-January announcement that it was changing the license on some of its projects.

    Elastic is switching the license of the Elasticsearch search-engine software and its data-visualization counterpart, Kibana, away from the Apache Software License version 2 (ASL) to the Server Side Public License (SSPL) starting with version 7.11. As with previous versions, those projects will be dual-licensed with the Elastic License, so that users who do not want to (or are unable to) comply with the SSPL can purchase a license from Elastic.

    We have seen this movie before, of course, but this time around the disingenuous way that Elastic is presenting its move is raising hackles within the FOSS community. It is clear that the company is quite unhappy with Amazon Web Services (AWS) for turning Elasticsearch and Kibana into for-profit products that compete with Elastic's own offerings. It is less clear that the license switch really addresses the problems that Elastic is complaining about, however. Beyond that, AWS is using the components under the license that Elastic freely chose when the ASL suited its objectives. Now that the ASL apparently does not suit Elastic, it is rather ridiculous for the company to proclaim that switching to a proprietary license is "doubling down on open"—it manifestly is not.


More in Tux Machines

Can Linux Run Video Games?

Linux is a widely used and popular open source operating system that was first released back in 1991. It differs from operating systems like Windows and macOS in that it is open source and highly customizable through its use of “distributions”. Distributions, or “distros”, are different versions of Linux that can be installed along with the Linux core software so that users can tailor their system to their specific needs. Some of the more popular Linux distributions are Ubuntu, Debian and Fedora.

For many years Linux had the reputation of being a terrible gaming platform, and it was believed that users wouldn’t be able to engage in this popular form of entertainment. The main reason for this is that commercially successful games just weren’t being developed for Linux. A few well-known video game titles like Doom, Quake and SimCity made it to Linux, but for the most part the platform was overlooked through the 1990s. Things have changed a lot since then, however, and there is an ever-expanding library of popular video games you can play on Linux. [...]

There are plenty of Windows games you can run on Linux, and no reason why you can’t play as well as you do when using Windows. If you are having trouble leveling up or winning the best loot, consider trying AskBoosters for help with your game. Aside from native Linux games and Windows games, there is a huge number of browser-based games that work on any system, including Linux.

Security: DFI and Canonical, IBM/Red Hat/CentOS and Oracle, Malware in GitHub

  • DFI and Canonical offer risk-free system updates and reduced software lead times for the IoT ecosystem

    DFI and Canonical signed the Ubuntu IoT Hardware Certification Partner Program. DFI is the world’s first industrial computer manufacturer to join the program, which aims to offer Ubuntu-certified IoT hardware ready for over-the-air software updates. The online update mechanism and the authorized DFI online application store combine with the application flexibility of DFI’s products to reduce the software and hardware development time needed to deploy new services.

    DFI’s RemoGuard IoT solution will provide real-time monitoring and partition-level system recovery through out-of-band management technology. In addition to the Ubuntu online software update, RemoGuard avoids service interruptions and reduces maintenance costs and response times, establishing a seamless IoT ecosystem.

    From the booming 5G mobile network to industrial robot applications, a large number of small base stations, edge-computing servers, and robots will be deployed in outdoor or harsh industrial environments. Ubuntu Core on DFI-certified hardware with RemoGuard brings the reassurance that no software update will bring the risks and challenges of on-site repair.

  • Update CentOS Linux for free

    As you may know, in December 2020 IBM/Red Hat announced that support for CentOS Linux 8 will end in December 2021; updates for CentOS Linux 6 ended on November 30, 2020. If your organization relies on CentOS, you are faced with finding an alternative OS: the lack of regular updates puts these systems at increasing risk of major vulnerabilities with every passing day.

    A popular solution with minimal disruption is to simply point your CentOS systems to receive updates from Oracle Linux. This can be done anonymously and at no charge to your organization. With Oracle Linux, you can continue to benefit from a similar, stable CentOS alternative. Oracle Linux updates and errata are freely available and can be applied to CentOS or Red Hat Enterprise Linux (RHEL) instances without reinstalling the operating system. Just connect to the Oracle Linux yum server and follow these instructions. Best of all, your apps continue to run as usual.
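As an illustration of what "pointing your CentOS systems at Oracle Linux" amounts to in practice, here is a sketch of a yum repository definition aimed at Oracle's base channel. The repo ID, URLs, and key path here are assumptions for illustration; follow Oracle's published instructions for the exact values.

```ini
# Sketch only: the repo ID, baseurl, and gpgkey below are illustrative
# assumptions. A file like this in /etc/yum.repos.d/ makes the system pull
# updates from the Oracle Linux yum server instead of the retired CentOS
# mirrors.
[ol8_baseos_latest]
name=Oracle Linux 8 BaseOS Latest ($basearch)
baseurl=https://yum.oracle.com/repo/OracleLinux/OL8/baseos/latest/$basearch/
gpgkey=https://yum.oracle.com/RPM-GPG-KEY-oracle-ol8
gpgcheck=1
enabled=1
```

After enabling such a repository (and disabling the CentOS ones), a routine `yum update` picks up Oracle's packages and errata.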

  • Malware in open-source web extensions

    Since the original creator has exclusive control over the account for the distribution channel (which is typically the user's only gateway to the program), it logically follows that they are responsible for transferring control to future maintainers, despite the fact that they may only have the copyright on a portion of the software. Additionally, as the distribution-channel account is the property of the project owner, they can sell that account and the accompanying maintainership. After all, while the code of the extension might be owned by its larger community, the distributing account certainly isn't.

    Such is what occurred for The Great Suspender, which was a Chrome extension on the Web Store that suspends inactive tabs, halting their scripts and releasing most of the resources from memory. In June 2020, Dean Oemcke, the creator and longtime maintainer, decided to move on from the project. He transferred the GitHub repository and the Web Store rights, announcing the change in a GitHub issue that said nothing about the identity of the new maintainer. The announcement even made a concerning mention of a purchase, which raises the question of who would pay money for a free extension, and why.

    Of course, as the vast majority of the users of The Great Suspender were not interested in its open-source nature, few of them noticed until October, when the new maintainer made a perfectly ordinary release on the Chrome Web Store. Well, perfectly ordinary except for the minor details that the release did not match the contents of the Git repository, was not tagged on GitHub, and lacked a changelog.

What goes into default Debian?

The venerable locate file-finding utility has long been available for Linux systems, though its origins are in the BSD world. It is a generally useful tool, but it has a cost beyond just the disk space it occupies in the filesystem: a periodic daemon program (updatedb) runs to keep the file-name database up to date. As a recent debian-devel discussion shows, though, people have differing ideas of just how important the tool is, and whether it should be part of the default installation of Debian.

There are several variants of locate floating around at this point. The original is described in a ;login: article from 1983; a descendant of that code lives on in the GNU Find Utilities alongside find and xargs. After that came Secure Locate (slocate), which checks permissions to show only the file names that users have access to, and its functional successor, mlocate, which does the same check but also merges new changes into the existing database rather than recreating it, for efficiency and filesystem-cache preservation. On many Linux distributions these days, mlocate is the locate of choice.
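The division of labor described above, a periodic indexing pass (updatedb) plus fast queries against the resulting database (locate), can be sketched in a few lines of Python. This is a toy illustration of the idea only, not how the real tools work: it does none of the permission checking of slocate or the incremental merging of mlocate.

```python
import os

def updatedb(root, dbpath):
    """Periodic pass: walk the tree once and record every file path."""
    with open(dbpath, "w") as db:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                db.write(os.path.join(dirpath, name) + "\n")

def locate(pattern, dbpath):
    """Query pass: substring search against the database, not the filesystem."""
    with open(dbpath) as db:
        return [line.rstrip("\n") for line in db if pattern in line]
```

In the real tools the database is a compact binary format, and updatedb typically runs from a daily cron job or systemd timer, which is exactly the background cost the debian-devel discussion is weighing.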

Christian Hergert: Sysprof and Podman

With the advent of immutable/re-provisional/read-only operating systems like Fedora’s Silverblue, people will be doing a lot more computing inside of containers on their desktops (as if they’re not already). When you want to profile an entire system with tools like perf, this can be problematic because the files that are mapped into memory could be coming from strange places like FUSE; in particular, fuse-overlayfs. There doesn’t seem to be a good way to decode all this indirection, which means that in Sysprof we’ve had broken ELF symbol decoding for things running inside podman containers (such as Fedora’s toolbox). For those of us who have to develop inside those containers, that can really be a drag.

The problem at the core is that Sysprof (and presumably other perf-based tooling) would think a file was mapped from somewhere like /usr/lib64/libglib-2.0.so according to /proc/$pid/maps. Usually we translate that using /proc/$pid/mountinfo to the real mount or subvolume. But if fuse-overlayfs is in the picture, you don’t get any insight into that. When symbols are decoded, it looks at the host’s /usr/lib/libglib-2.0.so and finds an inode mismatch, at which point it stops trying to decode the instruction address.

Also: Adding a New Disk Device to Fedora Linux
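The first half of the translation described here, reading the file-backed mappings out of /proc/$pid/maps before consulting mountinfo, can be sketched in Python. This is a minimal illustration of the /proc interface only, not Sysprof's actual implementation (which is C):

```python
def file_mappings(pid="self"):
    """Return {mapped file path: [address ranges]} for a process.

    Paths are as seen in the process's own mount namespace; a profiler must
    then translate them via /proc/<pid>/mountinfo before opening the file on
    the host, and fuse-overlayfs is where that translation breaks down.
    """
    mappings = {}
    with open(f"/proc/{pid}/maps") as maps:
        for line in maps:
            # Fields: address perms offset dev inode [pathname]
            parts = line.split(None, 5)
            if len(parts) == 6 and parts[5].startswith("/"):
                path = parts[5].rstrip("\n")
                mappings.setdefault(path, []).append(parts[0])
    return mappings
```

Running this against any process shows entries like /usr/lib64/libc.so.6; the inode-mismatch failure described above happens when the same path on the host names a different file than the one the container actually mapped.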