Server

Cockpit and the evolution of the Web User Interface

Filed under
Server

This article only touches upon some of the main functions available in Cockpit. Managing storage devices, networking, user accounts, and software control will be covered in an upcoming article, along with optional extensions such as the 389 directory service and the cockpit-ostree module used to handle packages in Fedora Silverblue.

The options continue to grow as more users adopt Cockpit. The interface is ideal for admins who want a lightweight interface to control their server(s).

Read more

Server: Managing GNU/Linux Servers and Cost of Micro-services Complexity

Filed under
Server
  • Keeping track of Linux users: When do they log in and for how long?

    The Linux command line provides some excellent tools for determining how frequently users log in and how much time they spend on a system. Pulling information from the /var/log/wtmp file that maintains details on user logins can be time-consuming, but with a couple of easy commands, you can extract a lot of useful information on user logins. (A short scripted sketch of one approach appears after this list.)

  • Daily user management tasks made easy for every Linux administrator

    In this article, we will go over some daily user-management tasks that a Linux administrator may need to perform.

  • The cost of micro-services complexity

    It has long been recognized by the security industry that complex systems are impossible to secure, and that pushing for simplicity helps increase trust by reducing assumptions and increasing our ability to audit. This is often captured under the acronym KISS, for "keep it stupid simple", a design principle popularized by the US Navy back in the 60s. For a long time, we thought the enemy were application monoliths that burden our infrastructure with years of unpatched vulnerabilities.

    So we split them up. We took them apart. We created micro-services where each function, each logical component, is its own individual service, designed, developed, operated and monitored in complete isolation from the rest of the infrastructure. And we composed them ad vitam æternam. Want to send an email? Call the REST API of micro-service X. Want to run a batch job? Invoke lambda function Y. Want to update a database entry? Post it to A which sends an event to B consumed by C stored in D transformed by E and inserted by F. We all love micro-services architecture. It’s like watching dominoes fall down. When it works, it’s visceral. It’s when it doesn’t that things get interesting. After nearly a decade of operating them, let me share some downsides and caveats encountered in large-scale production environments.

    [...]

    And finally, there’s security. We sure love auditing micro-services, with their tiny codebases that are always neat and clean. We love reviewing their infrastructure too, with those dynamic security groups and clean dataflows and dedicated databases and IAM controlled permissions. There’s a lot of security benefits to micro-services, so we’ve been heavily advocating for them for several years now.

    And then, one day, someone gets fed up with having to manage API keys for three dozen services in flat YAML files and suggests using OAuth for service-to-service authentication. Or perhaps Jean-Kevin drank the mTLS Kool-Aid at the FoolNix conference and made a PKI prototype on the flight back (side note: do you know how hard it is to securely run a PKI over 5 or 10 years? It’s hard). Or perhaps compliance mandates that every server, no matter how small, must run a security agent.
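
As a companion to the wtmp item above, here is a hedged sketch of one way to total login time per user; it assumes the standard last(1) command is available and simply parses its textual output, so the field layout it expects is an assumption rather than a guarantee.

    # Hypothetical sketch: total up session time per user by parsing the
    # output of last(1), which reads /var/log/wtmp. Parsing is best-effort.
    import re
    import subprocess
    from collections import defaultdict

    def login_minutes_per_user():
        # Typical line: "alice  pts/0  192.0.2.10  Mon Aug  5 09:12 - 11:47  (02:35)"
        out = subprocess.run(["last"], capture_output=True, text=True, check=True).stdout
        totals = defaultdict(int)
        for line in out.splitlines():
            match = re.search(r"^(\S+).*\((?:(\d+)\+)?(\d{2}):(\d{2})\)\s*$", line)
            if not match:
                continue  # skip still-logged-in sessions and the wtmp footer line
            user, days, hours, minutes = match.groups()
            if user in ("reboot", "shutdown", "wtmp"):
                continue  # ignore pseudo-entries that are not real users
            totals[user] += (int(days or 0) * 24 + int(hours)) * 60 + int(minutes)
        return dict(totals)

    if __name__ == "__main__":
        for user, minutes in sorted(login_minutes_per_user().items()):
            print(f"{user}: {minutes} minutes logged in")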

Announcing Oracle Linux 7 Update 7

Filed under
GNU
Linux
Red Hat
Server

Oracle is pleased to announce the general availability of Oracle Linux 7 Update 7. Individual RPM packages are available on the Unbreakable Linux Network (ULN) and the Oracle Linux yum server. ISO installation images will soon be available for download from the Oracle Software Delivery Cloud and Docker images will soon be available via Oracle Container Registry and Docker Hub.

Read more

Also: Oracle Linux 7 Update 7 Released

Server: Kata Containers in Tumbleweed, Ubuntu on 'Multi' 'Cloud', and Containers 101

Filed under
Server
  • Kubic Project: Kata Containers now available in Tumbleweed

    Kata Containers is an open source container runtime that is crafted to seamlessly plug into the containers ecosystem.

    We are now excited to announce that the Kata Containers packages are finally available in the official openSUSE Tumbleweed repository.

    It is worthwhile to spend a few words explaining why this is great news, considering the role of Kata Containers (a.k.a. Kata) in fulfilling the need for security in the containers ecosystem, and given its importance for openSUSE and Kubic.

  • Why multi-cloud has become a must-have for enterprises: six experts weigh in

    Remember the one-size-fits-all approach to cloud computing? That was five years ago. Today, multi-cloud architectures that use two, three, or more providers, across a mix of public and private platforms, are quickly becoming the preferred strategy at most companies.

    Despite the momentum, pockets of hesitation remain. Some sceptics are under the impression that deploying cloud platforms and services from multiple vendors can be a complex process. Others worry about security, regulatory, and performance issues.

  • Containers 101: Containers vs. Virtual Machines (And Why Containers Are the Future of IT Infrastructure)

    What exactly is a container and what makes it different -- and in some cases better -- than a virtual machine?

Server: Surveillance Computing, Kubernetes Ingress, MongoDB 4.2, Linux Foundation on 'DevOps'

Filed under
Server
  • Linux and Cloud Computing: Can Pigs Fly? Linux now Dominates Microsoft Azure Servers [Ed: This is not about "Linux" dominating Microsoft but Microsoft trying to dominate GNU/Linux]

    Over the last five years things have changed dramatically at Microsoft. Microsoft has embraced Linux. Earlier in the year, Sasha Levin, a Microsoft Linux kernel developer, said that more than half of the servers in Microsoft Azure are now running Linux.

  • Google Cloud Adds Compute, Memory-Intensive VMs

    Google added virtual machine (VM) types on Google Compute Engine, including machines based on second-generation Intel Xeon Scalable processors and new VMs for compute- and memory-heavy applications.

  • Kubernetes Ingress

    On a similar note, if your application doesn’t serve a purpose outside the Kubernetes cluster, does it really matter whether or not your cluster is well built? Probably not.

    To give you a concrete example, let’s say we have a classical web app composed of a frontend written in Nodejs and a backend written in Python which uses a MySQL database. You deploy two corresponding services on your Kubernetes cluster.

    You make a Dockerfile specifying how to package the frontend software into a container, and similarly you package your backend. Next, in your Kubernetes cluster, you deploy two services, each running a set of pods behind it. The web service can talk to the database cluster and vice versa. (A sketch of exposing such a frontend through an Ingress appears after this list.)

  • MongoDB 4.2 materialises with $merge operator and indexing help for unstructured data messes

    Document-oriented database MongoDB is now generally available in version 4.2 which introduces enhancements such as on-demand materialised views and wildcard indexing.

    Wildcard indexing can be useful in scenarios where unstructured, heterogeneous datasets make creating appropriate indexes hard. Admins can use the function to create a filter of sorts that matches fields, arrays, or sub-documents in a collection, and adds the hits to a sparse index. (A brief PyMongo sketch appears after this list.)

    [...]

    Speaking of cloud, last year MongoDB decided to step away from using the GNU Affero General Public License for the Community Edition of its database and switched to an altered version. The Server-Side Public License is meant to place a condition – namely, to open source the code used to serve the software from the cloud – on offering MongoDB as a service to clients.

  • Announcing New Course: DevOps and SRE Fundamentals-Implementing Continuous Delivery

    The Linux Foundation, the nonprofit organization enabling mass innovation through open source, announced today that enrollment is now open for the new DevOps and SRE Fundamentals – Implementing Continuous Delivery eLearning course. The course will help an organization be more agile and deliver features rapidly, while at the same time achieving non-functional requirements such as availability, reliability, scalability, and security.

    According to Chris Aniszczyk, CTO of the Cloud Native Computing Foundation, “The rise of cloud native computing and site reliability engineering are changing the way applications are built, tested, and deployed. The past few years have seen a shift towards having Site Reliability Engineers (SREs) on staff instead of just plain old sysadmins; building familiarity with SRE principles and continuous delivery open source projects are an excellent career investment.”
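
The Kubernetes Ingress item above builds its example around services talking to each other inside the cluster; as a rough illustration of the missing piece (exposing the frontend outside the cluster), here is a hedged sketch using the official Kubernetes Python client. The service name, port, hostname, and namespace are invented for illustration, and it assumes a recent client where Ingress lives under networking.k8s.io/v1.

    # Hypothetical sketch: route external HTTP traffic to a "frontend" Service
    # through an Ingress object, using the kubernetes Python client.
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside a pod

    ingress = client.V1Ingress(
        metadata=client.V1ObjectMeta(name="web-ingress"),
        spec=client.V1IngressSpec(
            rules=[
                client.V1IngressRule(
                    host="shop.example.com",          # assumed hostname
                    http=client.V1HTTPIngressRuleValue(
                        paths=[
                            client.V1HTTPIngressPath(
                                path="/",
                                path_type="Prefix",
                                backend=client.V1IngressBackend(
                                    service=client.V1IngressServiceBackend(
                                        name="frontend",  # assumed Service name
                                        port=client.V1ServiceBackendPort(number=80),
                                    )
                                ),
                            )
                        ]
                    ),
                )
            ]
        ),
    )

    # Create the Ingress; an ingress controller in the cluster then routes traffic.
    client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)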
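
For the MongoDB 4.2 item above, this is a minimal PyMongo sketch of the two features mentioned, wildcard indexing and $merge-driven materialized views; the connection string, collection names, and fields are made up for the example and assume a MongoDB 4.2+ server.

    # Hypothetical sketch: wildcard indexing and $merge with PyMongo on MongoDB 4.2+.
    from pymongo import MongoClient, ASCENDING

    db = MongoClient("mongodb://localhost:27017")["shop"]

    # Wildcard index: covers all fields, useful for heterogeneous documents.
    db.events.create_index([("$**", ASCENDING)])

    # $merge: write aggregation results into another collection,
    # effectively an on-demand materialized view.
    db.orders.aggregate([
        {"$group": {"_id": "$customer_id", "total": {"$sum": "$amount"}}},
        {"$merge": {"into": "customer_totals", "whenMatched": "replace"}},
    ])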

Server Side: IBM, Apache and CNCF

Filed under
Server
  • Take Your Time With IBM Stock as it Digests its Behemoth Linux Maker Deal

    Prior to the Red Hat deal, IBM was treading water. The company released earnings on July 17. For the second quarter of 2019, revenue was down year-over-year. Sales were $19.1 billion, down from $20 billion in the prior year’s quarter. The company’s Cloud and Business Services units saw slight growth (5% and 3% YoY, respectively), but declines in the Global Technology Services and Systems units countered this improvement. Despite this slight revenue slip, IBM managed to keep quarterly operating income steady at ~$2.8 billion.

    The Red Hat deal adds a variety of growth catalysts to the International Business Machines story. For one thing, the acquisition makes IBM a bigger player in the $1 trillion cloud computing space. The deal is expected to accelerate revenue growth and improve gross margins. The deal is also very synergistic. IBM can now sell Red Hat’s suite of solutions to their existing customer base. With IBM’s global reach, the company could expand Red Hat’s business better than Red Hat would have done as an independent company.

  • Apache Software Foundation's Code-Base Valued At $20 Billion USD

    The Apache Software Foundation has published their 2019 fiscal year report highlighting their more than 350 open-source projects and initiatives; the report also marks their 20th anniversary.

    The Apache Software Foundation's 2019 report values their code-base at more than $20 billion USD, using the COCOMO 2 model for estimating. For their 2019 fiscal year, the foundation turned a profit of $585k USD thanks to sponsors. There are more than 190 million lines of code within Apache repositories.

  • 9 open source cloud native projects to consider

    I mean, just look at that! And this is just a start. Just as NodeJS’s creation sparked the explosion of endless JavaScript tools, the popularity of container technology started the exponential growth of cloud-native applications.

    The good news is that there are several organizations that oversee and connect these dots together. One is the Open Container Initiative (OCI), which is a lightweight, open governance structure (or project), "formed under the auspices of the Linux Foundation for the express purpose of creating open industry standards around container formats and runtime." The other is the CNCF, "an open source software foundation dedicated to making cloud native computing universal and sustainable."

    In addition to building a community around cloud-native applications generally, CNCF also helps projects set up structured governance around their cloud-native applications. CNCF created the concept of maturity levels—Sandbox, Incubating, or Graduated—which correspond to the Innovators, Early Adopters, and Early Majority tiers on the diagram below.

More on Fedora (Flock), IBM/Red Hat and Servers/HPC

Filed under
Red Hat
Server
  • Stephen Gallagher: Flock 2019 Trip Report

    As usual, the conference began with Matthew Miller’s traditional “State of Fedora” address wherein he uses pretty graphs to confound and amaze us. Oh, and reminds us that we’ve come a long way in Fedora and we have much further to go together, still.

    Next was a keynote by Cate Huston of Automattic (now the proud owners of both WordPress and Tumblr, apparently!). She talked to us about the importance of understanding when a team has become dysfunctional and some techniques for getting back on track.

    After lunch, Adam Samalik gave his talk, “Modularity: to modularize or not to modularize?”, describing for the audience some of the cases where Fedora Modularity makes sense… and some cases where other packaging techniques are a better choice. This was one of the more useful sessions for me. Once Adam gave his prepared talk, the two of us took a series of great questions from the audience. I hope that we did a good job of disambiguating some things, but time will tell how that works out. We also got some suggestions for improvements we could make, which were translated into Modularity Team tickets: here and here.

  • IBM Cloud: No shift, Sherlock

    IBM’s cloud strategy has gone through a number of iterations as it attempts to offer a compelling hybrid cloud to shift its customers from traditional IT architectures to modern cloud computing.

    IBM is gambling that those customers who have yet to embrace the public cloud fully remain committed to private and hybrid cloud-based infrastructure and, if they do use public clouds, want a cloud-agnostic approach to move workloads. In July, IBM closed the $34bn purchase of Red Hat, an acquisition it hopes will finally enable it to deliver cloud-agnostic products and services.

    To tie in with the completion of the acquisition of Red Hat, IBM commissioned Forrester to look at the benefits to those organisations that are both Red Hat and IBM customers.

  • Red Hat Shares ― Not just open source, *enterprise* open source

    Open source software (OSS), by definition, has source code that’s available for anyone to see, learn from, use, modify, and distribute. It’s also the foundation for a model of collaborative invention that empowers communities of individuals and companies to innovate in a way that proprietary software doesn't allow.

    Enterprise open source software is OSS that’s supported and made more secure―by a company like Red Hat―for enterprise use. It plays a strategic role in many organizations and continues to gain popularity.

  • Taashee Linux Services Joins Bright Computing Partner Program

Databases: BlazingSQL, Apache Cassandra, CockroachDB

Filed under
Server
  • BlazingSQL, a GPU-accelerated SQL engine built on top of RAPIDS, is now open source

    Yesterday, the BlazingSQL team open-sourced BlazingSQL under the Apache 2.0 license. It is a lightweight, GPU-accelerated SQL engine built on top of the RAPIDS.ai ecosystem. RAPIDS.ai is a suite of software libraries and APIs for end-to-end execution of data science and analytics pipelines entirely on GPUs.

    Explaining his vision behind this step, Rodrigo Aramburu, CEO of BlazingSQL wrote in a Medium blog post, “As RAPIDS adoption continues to explode, open-sourcing BlazingSQL accelerates our development cycle, gets our product in the hands of more users, and aligns our licensing and messaging with the greater RAPIDS.ai ecosystem.”

    Aramburu calls RAPIDS “the next-generation analytics ecosystem” where BlazingSQL serves as the SQL standard. It also serves as an SQL interface for cuDF, a GPU DataFrame (GDF) library for loading, joining, aggregating, and filtering data.

  • GPU SQL engine BlazingSQL now open source

    A new open-source project wants to take analytics to the next level. BlazingSQL is a GPU-accelerated SQL engine built on the RAPIDS ecosystem. RAPIDS is an open-source suite of software libraries for executing end-to-end data science and analytics pipelines entirely on GPUs.

    According to the team, BlazingSQL was built to address the expense, complexity and sluggish pace users deal with when working on large data sets.

    “BlazingSQL addresses these customer concerns not only with an incredibly fast, distributed GPU SQL engine, but also a zealous focus on simplicity,” Rodrigo Aramburu, CEO of BlazingSQL, wrote in a blog post. “With a few lines of code, BlazingSQL can query your raw data, wherever it resides and interoperate with your existing analytics stack and RAPIDS.”

    BlazingSQL enables users to query datasets from enterprise data lakes directly into GPU memory as a GPU DataFrame (GDF). GDF is a project that offers support for interoperability between GPU applications. It also defines a common GPU in-memory data layer. (A minimal usage sketch appears after this list.)

  • DataStax: what is a ‘progressive’ cloud strategy?

    With its roots and foundations in the open source Apache Cassandra database, Santa Clara headquartered DataStax insists that it likes to keep things open.

    As such, the company is opening a wider aperture on its collaboration with VMware by offering DataStax production support on VMware vSAN, now in hybrid and multi-cloud configurations.

  • Cockroach Labs raises $55 million for ultra-resilient databases

    Cockroach Labs, the New York-based developer of the open source distributed database project CockroachDB, today announced that it’s closed a $55 million, oversubscribed series C round co-led by Altimeter Capital, Tiger Global, and GV (formerly Google Ventures). The raise, which saw participation from existing investors Benchmark, Index Ventures, Redpoint Ventures, FirstMark Capital, and Work-Bench, brings the company’s total capital raised to $108.5 million and comes after a year in which revenue doubled quarter-over-quarter.
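
To give a feel for the "few lines of code" described in the BlazingSQL items above, here is a hedged sketch of querying a raw CSV file with BlazingSQL; the file path, table name, and columns are invented for illustration, and a CUDA-capable GPU with the RAPIDS stack installed is assumed.

    # Hypothetical sketch: query a raw CSV with BlazingSQL and get back a cuDF GPU DataFrame.
    from blazingsql import BlazingContext

    bc = BlazingContext()

    # Register a table backed directly by the raw file (no separate load/ETL step).
    bc.create_table("taxi", "/data/taxi_trips.csv")

    # Run SQL on the GPU; the result is a cudf.DataFrame that other RAPIDS
    # libraries can consume without leaving GPU memory.
    gdf = bc.sql("""
        SELECT passenger_count, AVG(trip_distance) AS avg_distance
        FROM taxi
        GROUP BY passenger_count
    """)
    print(gdf)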

PostgreSQL: When open-source gets serious

Filed under
Server
OSS

The transition that much technology makes from academic research to commercial production environments is well documented.

In the area of software, the most shallow dive into any sector’s day-to-day production applications shows that the journey has been made, if not by the finished, user-facing app, then almost certainly in some aspect of the codebase.

Artificial Intelligence (AI) and grid computing, for example, both began in academe and are now found in fully commercial, production settings — often in open source.

While there are commercial offerings of AIaaS, most famously Watson from IBM, machine learning, AI, cognitive computing and the like are now embedded into many apps and services in daily use, although the technology might not be immediately apparent.

That’s the same shape of journey taken by Postgres (aka PostgreSQL), a database system that was devised as a successor to Ingres, released as open source, and is now the fastest-growing (in terms of deployments) database in the enterprise space.

And while, like all open-source software, the ongoing development and support of Postgres is community-driven, there are plenty of commercial companies that use the platform as the basis of their offerings.

There are small and not-so-small companies operating in this space; Devart, Severalnines, EnterpriseDB, Database Labs, and Aiven, to name but a handful.

Read more

Servers, SUSE, Red Hat and Fedora

Filed under
GNU
Linux
Red Hat
Server
SUSE
  • My Favorite Infrastructure

    PCI policy pays a lot of attention to systems that manage sensitive cardholder data. These systems are labeled as "in scope", which means they must comply with PCI-DSS standards. This scope extends to systems that interact with these sensitive systems, and there is a strong emphasis on compartmentation—separating and isolating the systems that are in scope from the rest of the systems, so you can put tight controls on their network access, including which administrators can access them and how.

    Our architecture started with a strict separation between development and production environments. In a traditional data center, you might accomplish this by using separate physical network and server equipment (or using abstractions to virtualize the separation). In the case of cloud providers, one of the easiest, safest and most portable ways to do it is by using completely separate accounts for each environment. In this way, there's no risk that a misconfiguration would expose production to development, and it has a side benefit of making it easy to calculate how much each environment is costing you per month.

    When it came to the actual server architecture, we divided servers into individual roles and gave them generic role-based names. We then took advantage of the Virtual Private Cloud feature in Amazon Web Services to isolate each of these roles into its own subnet, so we could isolate each type of server from others and tightly control access between them. (A hedged sketch of this role-per-subnet layout appears after this list.)

    By default, Virtual Private Cloud servers are either in the DMZ and have public IP addresses, or they have only internal addresses. We opted to put as few servers as possible in the DMZ, so most servers in the environment only had a private IP address. We intentionally did not set up a gateway server that routed all of these servers' traffic to the internet—their isolation from the internet was a feature!

    Of course, some internal servers did need some internet access. For those servers, it was only to talk to a small number of external web services. We set up a series of HTTP proxies in the DMZ that handled different use cases and had strict whitelists in place. That way we could restrict internet access from outside the host itself to just the sites it needed, while also not having to worry about collecting lists of IP blocks for a particular service (particularly challenging these days since everyone uses cloud servers).

    [...]

    Although I covered a lot of ground in this infrastructure write-up, I still covered only the higher-level details. For instance, deploying a fault-tolerant, scalable Postgres database could be an article all by itself. I also didn't talk much about the extensive documentation I wrote that, much like my articles in Linux Journal, walks the reader through how to use all of these tools we built.

    As I mentioned at the beginning of this article, this is only an example of an infrastructure design that I found worked well for me with my constraints. Your constraints might be different and might lead to a different design. The goal here is to provide you with one successful approach, so you might be inspired to adapt it to your own needs.

  • A Blunt Reminder About Security for Embedded Computing

    The ICS Advisory (ICSA-19-211-01) released on July 30th by the Cybersecurity and Infrastructure Security Agency (CISA) is chilling to read. According to the documentation, VxWorks is “exploitable remotely” and requires “low skill level to exploit.” Elaborating further, CISA risk assessment concludes, “Successful exploitation of these vulnerabilities could allow remote code execution.”

    The potential consequences of this security breach are astounding, particularly when I look back on my own personal experiences in this space, and now as an Account Executive for Embedded Systems here at SUSE.

    [...]

    At the time, VxWorks was the standard go-to OS in the majority of the embedded production platforms I worked with. It was an ideal way to replace the legacy stove-piped platforms with an Open Architecture (OA) COTS solution. In light of the recent CISA warning, however, it is concerning to know that many of those affected systems processed highly-classified intelligence data at home and abroad.

  • Red Hat Recognized as a Leader by Independent Research Firm in Infrastructure Automation Platforms Evaluation [Ed: Forrester is not “Independent Research Firm”; It’s taking bribes to lie.]
  • Why Red Hat can take over the cloud sooner than you think
  • Red Hat Enterprise Linux 7.7: Final Full Support Update
  • Transport Layer Security version 1.3 in Red Hat Enterprise Linux 8

    TLS 1.3 is the sixth iteration of the Secure Sockets Layer (SSL) protocol. Originally designed by Netscape in the mid-1990s to serve the purposes of online shopping, it quickly became the primary security protocol of the Internet. No longer limited to web browsing, it also secures email transfers, database access and business-to-business communication, among other things.

    Because it had its roots in the early days of public cryptography, when public knowledge about securely designing cryptographic protocols was limited, the first two iterations, SSLv2 and SSLv3, are now quite thoroughly broken. The next two iterations, TLS 1.0 and TLS 1.1, depend on the security of Message Digest 5 (MD5) and Secure Hash Algorithm 1 (SHA1). (A short sketch of requiring TLS 1.3 from a client appears after this list.)

  • Cute Qt applications in Fedora Workstation

    Fedora Workstation is all about Gnome and it has been since the beginning, but that doesn’t mean we don’t care about Qt applications; the opposite is true. Many users use Qt applications, even on Gnome, mainly because many KDE/Qt applications don’t have an adequate replacement written in Gtk, or they are just used to them and don’t really have a reason to switch to another one.

    For Qt integration, there is some sort of Gnome support in Qt itself, which includes a platform theme reading Gnome configuration, like fonts and icons. This platform theme also provides native file dialogs, but don’t expect a native look for Qt applications. There used to be a gtk2 style, which used gtk calls directly to render natively looking Qt widgets, but it was moved from qtbase to qt5-styleplugins, because it cannot be used today in combination with gtk3.

    For reasons mentioned above, we have been working on a Qt style to make Qt applications look native in Gnome. This style is named adwaita-qt, and from the name you can guess that it makes Qt applications look like Gtk applications with the Adwaita style. Adwaita-qt is actually not a new project; it’s been around for years and was developed by Martin Bříza. Unfortunately, Martin left Red Hat a long time ago, and since then a new version of Gnome’s Adwaita was released, completely changing the colors and making the Adwaita theme look more modern. Being the one who takes care of these things nowadays, I started slowly updating adwaita-qt to make it look like the current Gnome Adwaita theme, and voilà, a new version was released after 3 months of intermittent work.

  • Fedora Community Blog: Friday with Infra

    Friday with Infra is a new event run by the CPE (Community Platform Engineering) team that will help potential contributors start working on some of the applications we maintain. During this event, members of the CPE team will help you get started on those applications and assist you with any issue you may encounter. At the end of this event, you should be able to maintain the application by yourself.
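
For the "My Favorite Infrastructure" item above, here is a hedged boto3 sketch of the role-per-subnet layout it describes: one VPC with a small subnet per server role so traffic between roles can be tightly controlled. The CIDR blocks and role names are invented for illustration; this is not the author's actual configuration.

    # Hypothetical sketch: one VPC with a subnet per server role, so security
    # groups and network ACLs can tightly control traffic between roles.
    import boto3

    ROLE_SUBNETS = {                  # invented role -> CIDR mapping
        "web-dmz": "10.10.1.0/24",    # the only role meant to hold public IPs
        "app":     "10.10.2.0/24",
        "db":      "10.10.3.0/24",
        "proxy":   "10.10.4.0/24",    # whitelisting HTTP proxies for outbound access
    }

    ec2 = boto3.client("ec2")

    vpc_id = ec2.create_vpc(CidrBlock="10.10.0.0/16")["Vpc"]["VpcId"]

    subnet_ids = {}
    for role, cidr in ROLE_SUBNETS.items():
        resp = ec2.create_subnet(VpcId=vpc_id, CidrBlock=cidr)
        subnet_id = resp["Subnet"]["SubnetId"]
        subnet_ids[role] = subnet_id
        # Tag each subnet with its role so later automation can find it.
        ec2.create_tags(Resources=[subnet_id], Tags=[{"Key": "Role", "Value": role}])

    print(subnet_ids)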
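
For the TLS 1.3 item above, here is a small sketch that uses Python's standard ssl module to refuse anything older than TLS 1.3 when connecting to a server; it assumes Python 3.7+ linked against OpenSSL 1.1.1 or newer (as shipped in RHEL 8), and the hostname is only an example.

    # Hypothetical sketch: refuse anything older than TLS 1.3 and report the
    # negotiated protocol version. Requires Python 3.7+ with OpenSSL 1.1.1+.
    import socket
    import ssl

    HOST = "www.example.com"   # example hostname

    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_3   # reject TLS 1.2 and older

    with socket.create_connection((HOST, 443)) as tcp:
        with context.wrap_socket(tcp, server_hostname=HOST) as tls:
            print(tls.version())   # expected: 'TLSv1.3'
            print(tls.cipher())    # negotiated cipher suite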

More in Tux Machines

KNOPPIX 8.6.0 Public Release

Version 8.6 is based on Debian/stable (buster), with individual packages from Debian/testing and unstable (sid) (mainly graphics drivers and current productivity software), and uses Linux kernel 5.2.5 as well as Xorg 7.7 (core 1.20.4) to support current computer hardware. Read more. English: Knoppix 8.6 new public version is finally out!

Linux 5.3 Kernel Yielding The Best Performance Yet For AMD EPYC "Rome" CPU Performance

Among many different Linux/open-source benchmarks being worked on for the AMD EPYC "Rome" processors now that our initial launch benchmarks are out of the way are Linux distribution comparisons, checking out the BSD compatibility, and more. Some tests I wrapped up this weekend were seeing how recent Linux kernel releases perform on the AMD EPYC 7742 64-core / 128-thread processors. For some weekend analysis, here are benchmarks of Linux 4.18 through Linux 5.3 in its current development form. All tests were done on the same AMD EPYC 7742 2P server running Ubuntu 19.04 and using the latest kernels in each series via the Ubuntu Mainline Kernel PPA. Read more

Fedora 29 to 30 upgrade - How it went

Alas, my Fedora 30 experience started strong with the first review and soured since. The test on the old laptop with Nvidia graphics highlighted numerous problems, including almost ending up in an unbootable state due to the wrong driver version being selected by the software center. With the in-vivo upgrade, I almost ended up in a similar state due to some incompatibility with extensions. I wasn't pleased by other glitches and errors, and the performance improvement margin isn't as stellar as the clean install test. All in all, Fedora 30 feels like a rather buggy release, with tons of problems. I think versions 27 to 29 were quite robust overall, at least the Gnome version, but the latest edition is quite rough. That would mean I'd advise people upgrading to take care of their data, remember the possible snags like extensions, and triple check their hardware is up to the task, because apparently QA isn't cool anymore, and no one else will do this for you. All in all, Fedora 30 is very bleeding edge, finicky, definitely not for everyday use by ordinary desktop folks. It's a dev tool for devs, so if you want something stable and boring, search elsewhere. Read more

Neptune 6.0 Released, Which is based on Debian 10 (Buster)

Leszek is pleased to announce the new stable release, Neptune 6.0, on 1st Aug, 2019. It is the first stable release of Neptune 6.0, based on Debian 10 "Buster", featuring the KDE Plasma desktop with the typical Neptune tweaks and configurations. The base of the system is Linux kernel version 4.19.37, which provides the necessary hardware support. Plasma 5.14.5 features the stable and flexible desktop made by KDE that is loved by millions. Read more