Red Hat

Tweaking the look of Fedora Workstation with themes

Filed under
Red Hat

Changing the theme of a desktop environment is a common way to customize your daily experience with Fedora Workstation. This article discusses the four different types of visual themes you can change and how to switch to a new theme. It also covers how to install new themes from both the Fedora repositories and third-party sources.

When changing the theme of Fedora Workstation, there are four different themes that can be changed independently of each other. This allows a user to mix and match the theme types to customize their desktop in a multitude of combinations. The four theme types are the application (GTK) theme, the shell theme, the icon theme, and the cursor theme.
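
For a quick, concrete starting point, here is a minimal Python sketch (not from the article) showing one common way to switch these theme types on Fedora Workstation's GNOME desktop by shelling out to gsettings. The theme names used are only examples, and the shell theme key is provided by the User Themes GNOME Shell extension, which must be installed separately.

    import subprocess

    def set_gsetting(schema, key, value):
        """Apply a single GSettings key, e.g. a theme name."""
        subprocess.run(["gsettings", "set", schema, key, value], check=True)

    # Application (GTK), icon, and cursor themes live under org.gnome.desktop.interface.
    set_gsetting("org.gnome.desktop.interface", "gtk-theme", "Adwaita-dark")
    set_gsetting("org.gnome.desktop.interface", "icon-theme", "Adwaita")
    set_gsetting("org.gnome.desktop.interface", "cursor-theme", "Adwaita")

    # The shell theme is controlled by the User Themes extension's schema.
    set_gsetting("org.gnome.shell.extensions.user-theme", "name", "Adwaita")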

Read more

Red Hat OpenShift 4 is Now Available

Filed under
Red Hat
Server

As of today, Red Hat OpenShift 4 is generally available to Red Hat customers. This rearchitecting of how we install, upgrade, and manage the platform also brings with it the power of Kubernetes Operators, Red Hat Enterprise Linux CoreOS, and the Istio-based OpenShift Service Mesh. As transformational as our open hybrid cloud platform can be for managing software at scale, the more impressive transformation may lie ahead for your development and IT teams, as they can now offer more on-demand services in a more secure fashion.

Read more

Also: Kubernetes is a dump truck: Here's why

OSS: Federation, SUSE, Red Hat/Fedora and OSI Sessions

Filed under
Red Hat
OSS
SUSE
  • Federated conference videos

    So, foss-north 2019 happened. 260 visitors. 33 speakers. Four days of madness.

    During my opening of the second day I mentioned some social media statistics. Only 7 of our speakers had Mastodon accounts, but 30 had Twitter accounts.

  • Chameleon and the dragons

    Arriving at the conference venue by bike was quite pleasant (thanks to the bicycle paths almost everywhere in the city and the number of parks). One thing I forgot was a bike lock, but I met Richard Brown and he offered to lock our bikes together.

    The first thing that caught my attention was a QR code on the registration desk that said something like “This is not the first one, search better,” so I had to walk around and try to find the correct one. There were 10 of them in different places around the Biergarten, each asking a question about openSUSE (logos, abbreviations, versions and so on). Once you found all of them and answered correctly, you could pick up a prize at the registration desk. I really enjoyed this, so I proposed the idea for our own events.

    I missed the first half of the talks while fixing a problem with dynamic BuildRequires, and the second half talking with Michael Schröder about libsolv-related things. We discussed what modularity would mean for libsolv and some known corner cases, and I promised to write a document describing how it is supposed to be handled (some kind of test cases).

    Then there was a meetup of the OBS (Open Build Service) community (both developers and users) where OBS-related things were discussed. I wish we could have something like an "RPM build systems meetup" where people could discuss problems in different build systems (Koji, OBS) and share solutions.

  • Announcing Thorntail 2.4 general availability

    At this year’s Red Hat Summit, Red Hat announced Thorntail 2.4 general availability for Red Hat customers through a subscription to Red Hat Application Runtimes. Red Hat Application Runtimes provides application developers with a variety of application runtimes running on the Red Hat OpenShift Container Platform.

  • Container-related content you might have missed at Red Hat Summit

    If you weren’t lucky enough to attend the recent Red Hat Summit or you went but couldn’t make it to all the container-related sessions, worry not. We teamed up with Scott McCarty, Principal Technology Product Manager–Containers at Red Hat, to bring you an overview of what you missed.

  • Aging in the open: How this community changed us

    A passionate and dedicated community offers few of these comforts. Participating in something like the open organization community at Opensource.com—which turns four years old this week—means acquiescing to dynamism, to constant change. Every day brings novelty. Every correspondence is packed with possibility. Every interaction reveals undisclosed pathways.

    To a certain type of person (me again), it can be downright terrifying.

    But that unrelenting and genuine surprise is the very source of a community's richness, its sheer abundance. If a community is the nucleus of all those reactions that catalyze innovations and breakthroughs, then unpredictability and serendipity are its fuel. I've learned to appreciate it—more accurately, perhaps, to stand in awe of it. Four years ago, when the Opensource.com team heeded Jim Whitehurst's call to build a space for others to "share your thoughts and opinions… on how you think we can all lead and work better in the future" (see the final page of The Open Organization), we had little more than a mandate, a platform, and a vision. We'd be an open organization committed to studying, learning from, and propagating open organizations. The rest was a surprise—or rather, a series of surprises:

  • May 2019 License-Discuss Summary

    The corresponding License-Review summary is online at https://opensource.org/LicenseReview052019 and covers extensive debate on the Cryptographic Autonomy License, as well as discussion on a BSD license variant.

  • May 2019 License-Review Summary

    In May, the License-Review mailing list saw extensive debate on the Cryptographic Autonomy License. The list also discussed a BSD variant used by the Lawrence Berkeley National Laboratory, and the Master-Console license.

    The corresponding License-Discuss summary is online at https://opensource.org/LicenseDiscuss052019 and covers an announcement regarding the role of the License-Review list, discussion on the comprehensiveness of the approved license list, and other topics.

Create a CentOS homelab in an hour

Filed under
OS
Red Hat
HowTos

When working on new Linux skills (or, as I was, studying for a Linux certification), it is helpful to have a few virtual machines (VMs) available on your laptop so you can do some learning on the go.

But what happens if you are working somewhere without a good internet connection and you want to work on a web server? What about using other software that you don't already have installed? If you were depending on downloading it from the distribution's repositories, you may be out of luck. With a bit of preparation, you can set up a homelab that will allow you to install anything you need wherever you are, with or without a network connection.
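
As a rough illustration of that preparation (not the article's exact steps), the Python sketch below turns a directory of pre-downloaded RPMs into a local repository that dnf can use offline. The paths are placeholders, createrepo_c must already be installed, and the script assumes it is run as root.

    import subprocess
    from pathlib import Path

    RPM_DIR = "/var/lab/rpms"  # example location of pre-downloaded RPMs
    REPO_FILE = Path("/etc/yum.repos.d/homelab-local.repo")

    # Generate repodata so dnf/yum can treat the directory as a repository.
    subprocess.run(["createrepo_c", RPM_DIR], check=True)

    # Point the package manager at the local directory.
    REPO_FILE.write_text(
        "[homelab-local]\n"
        "name=Local homelab packages\n"
        f"baseurl=file://{RPM_DIR}\n"
        "enabled=1\n"
        "gpgcheck=0\n"
    )

    # After this, 'dnf install <package>' can resolve packages from the
    # local repository even without a network connection.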

Read more

Fedora Wants Art/Photography

Filed under
Red Hat

Richard Hughes: Breaking apart Dell UEFI Firmware CapsuleUpdate packages

Filed under
Red Hat
Hardware
GNOME

When firmware is uploaded to the LVFS we perform online checks on it. For example, one of the tests is looking for known badness like embedded UTF-8/UTF-16 BEGIN RSA PRIVATE KEY strings. As part of this we use CHIPSEC (in the form of chipsec_util -n uefi decode) which searches the binary for a UEFI volume header which is a simple string of _FVH and then decompresses the volumes which we then read back as component shards. This works well on plain EDK2 firmware, and the packages uploaded by Lenovo and HP which use IBVs of AMI and Phoenix. The nice side effect is that we can show the user what binaries have changed, as the vendor might have accidentally forgotten to mention something in the release notes.
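
To make those checks a bit more concrete, here is a hypothetical Python sketch (not the actual LVFS or CHIPSEC code) that scans a firmware file for the _FVH volume-header signature and for embedded UTF-8/UTF-16 private-key markers of the kind described above.

    import sys

    FVH_SIGNATURE = b"_FVH"
    KEY_MARKERS = [
        b"BEGIN RSA PRIVATE KEY",                     # UTF-8
        "BEGIN RSA PRIVATE KEY".encode("utf-16-le"),  # UTF-16
    ]

    def scan(path):
        data = open(path, "rb").read()

        # Report every occurrence of the UEFI firmware volume header signature.
        offset = data.find(FVH_SIGNATURE)
        while offset != -1:
            print(f"possible UEFI firmware volume header near offset {offset:#x}")
            offset = data.find(FVH_SIGNATURE, offset + 1)

        # Flag embedded private-key material.
        for marker in KEY_MARKERS:
            if marker in data:
                print("embedded private key marker found")

    if __name__ == "__main__":
        scan(sys.argv[1])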

Read more

A beginner's guide to Silverblue

Filed under
Red Hat

At Red Hat Summit 2019, I became fascinated with Fedora Silverblue, an immutable (i.e., unchangeable) variant of Fedora Workstation that primarily uses Flatpak to install apps. I've used Fedora for nearly three years (and Linux for about 22 years) and recently upgraded my machines (home and work) to Fedora 30. But I liked the idea of an immutable desktop and resolved to try it out when I got home.

According to the Fedora Silverblue User Guide:

"Fedora Silverblue is an immutable desktop operating system. It aims to be extremely stable and reliable. It also aims to be an excellent platform for developers and for those using container-focused workflows."

The day I returned from Red Hat Summit, I downloaded the latest image of Silverblue from the main Silverblue website. I burned it to a USB drive (do you really "burn" to a USB drive?) and tried to install it. The process failed, but I was jet-lagged, so I headed to bed suspecting that the problem might lie with the USB drive—I've found that about 50% of USB drives have problems when you try to install Linux from them. I woke up early (jet lag still), found a new USB drive, and tried again.

Read more

Also: PHP version 7.1.30, 7.2.19 and 7.3.6

GNOME and Fedora/Red Hat: Translation, Rust, Sysprof and EPEL

Filed under
Red Hat
GNOME
  • Why translation platforms matter

    In my opinion, the GNOME platform offers the best translation platform for the following reasons:

    • Its site contains both the team organization and the translation platform. It's easy to see who is responsible and their roles on the team. Everything is concentrated on a few screens.
    • It's easy to find what to work on, and you quickly realize you'll have to download files to your computer and send them back once you modify them. It's not very sexy, but the logic is easy to understand.
    • Once you send a file back, the platform can send an alert to the mailing list so the team knows the next steps and the translation can be easily discussed at the global level (rather than commenting on specific sentences).
    • It has 297 languages.
    • It shows clear percentages on progress, both on basic sentences and advanced menus and documentation.
    • Coupled with a predictable GNOME release schedule, everything is available for the community to work well because the tool promotes community work.

    If we look at the Debian translation team, which has been doing a good job for years translating an unimaginable amount of content for Debian (especially news), we see there is a highly codified translation process based exclusively on emails with a manual push in the repositories. This team also puts everything into the process, rather than the tools, and—despite the considerable energy this seems to require—it has worked for many years while being among the leading group of languages.

    My perception is that the primary issue for a successful translation platform is not the ability to handle the individual (technical, translation) work, but how it structures and supports the translation team's processes. This is what gives it sustainability.

    The production processes are the most important way to structure a team; by putting them together correctly, it's easy for newcomers to understand how processes work, adopt them, and explain them to the next group of newcomers.

    To build a sustainable community, the first consideration must be on a tool that supports collaborative work, then on its usability.

    This explains my frustration with the Zanata tool, which is efficient from a technical and interface standpoint, but poor when it comes to helping to structure a community. Given that translation is a community-driven process (possibly one of the most community-driven processes in open source software development), this is a critical problem for me.

  • Federico Mena-Quintero: Bzip2 in Rust - Basic infrastructure and CRC32 computation

    I have started a little experiment in porting bits of the widely-used bzip2/bzlib to Rust. I hope this can serve to refresh bzip2, which had its last release in 2010 and has been nominally unmaintained for years.

    I hope to make several posts detailing how this port is done. In this post, I'll talk about setting up a Rust infrastructure for bzip2 and my experiments in replacing the C code that does CRC32 computations (a rough sketch of that kind of CRC32 computation follows after this list).

  • Sysprof Developments

    Earlier this month, Matthias and I teamed up to push through some of our profiling tooling for GTK and GNOME. We took the occasional work I had done on Sysprof over the past few years and integrated that into the GTK-4.x tree.

    Sysprof uses a binary log file to store information about execution in a manner that is easy to write-buffer and read back using positioned reads. This helps keep the sampling overhead of Sysprof low. But the format is too detail-oriented for every application that supports it to implement on its own. To make this reusable, I created a libsysprof-capture-3.a static library that we embed from various layers of the platform.

    GTK-4.x is now using this. Builder itself uses it to log internal statistics, tracing data, and counters for troubleshooting. I’ve also put forward patches for GJS to integrate with it. Georges revamped and pushed forward a prototype by Jonas to integrate with Mutter/Shell and get us frame timings and Cogl pipeline data. With some work we can finish off the i915 data sources that Eric Anholt did to correlate GPU commands too.

    What this means for developers is that soon we’ll be able to capture system information from various layers in the stack and correlate them using similar clocks. We’re only scratching the surface right now, but it’s definitely promising. It’s already useful to quantify the true performance improvements of merge-requests in Mutter and Shell.

  • Sysprof Making Progress For Improved GNOME Profiling

    Christian Hergert, of GNOME Builder IDE fame, has recently been working on a round of improvements to Sysprof, the system profiling tool he also leads development on, which is used to determine the hot functions of a program and for related profiling, mostly around GNOME components.

    One of the main additions has been adding support to GTK4 for Sysprof's new engine and he is planning on plumbing that new engine support through to at least Mutter and GJS while potentially back-porting it to the likes of GTK3.

  • EPEL Proposal: EPEL Wagontrain (aka Steve Gallagher's EPEL 8 Branch Strategy)
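
    As an aside on the bzip2-in-Rust item above, here is a minimal, table-driven Python sketch of the MSB-first CRC-32 style bzip2 uses (polynomial 0x04C11DB7). It is illustrative only and is neither the original bzlib C code nor the Rust port.

        POLY = 0x04C11DB7  # CRC-32 polynomial, processed MSB-first as in bzip2

        def make_table():
            table = []
            for i in range(256):
                crc = i << 24
                for _ in range(8):
                    crc = ((crc << 1) ^ POLY if crc & 0x80000000 else crc << 1) & 0xFFFFFFFF
                table.append(crc)
            return table

        TABLE = make_table()

        def crc32_bzip2_style(data: bytes) -> int:
            """Table-driven, MSB-first CRC-32 with initial and final XOR of 0xFFFFFFFF."""
            crc = 0xFFFFFFFF
            for byte in data:
                crc = ((crc << 8) & 0xFFFFFFFF) ^ TABLE[((crc >> 24) ^ byte) & 0xFF]
            return crc ^ 0xFFFFFFFF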

What's new with Red Hat Enterprise Linux 8 and Red Hat Virtualization

Filed under
Red Hat

Red Hat Enterprise Linux (RHEL) 8 is based upon the principles of "operational consistency, security, and cloud foundation." Utilizing kernel 4.18.x, RHEL 8 is based on Fedora 28 and will run on Intel/AMD 64-bit processors as well as IBM Power LE, IBM z Systems, and ARM 64-bit.

Red Hat has sought to reduce complexity in RHEL 8, which comes with ten guaranteed years of enterprise support. Their model involves repositories for the base operating system as well as application streams for flexible lifecycle options, which offer multiple versions of databases, languages, various compilers, and other tools to help facilitate the use of RHEL for business models.
  • Built-in defaults in RHEL 8 include tuned profiles for database options (ready-to-go options out of the box) and Ansible system roles to provide a common configuration interface (ensuring standardization and reliability).
  • The RHEL 8 YUM package manager is now based on the Dandified Yum (DNF) technology, which supports modular content, better performance, and a stable API for integration with tooling (see the sketch after this list). User feedback indicated that "yum is a lot faster than it used to be, and all the commands work well."
  • Red Hat Insights (tools to provide system administrators with analytics, machine learning, and automation controls) are now included in RHEL 8, along with a session recording feature that can record and play back user terminal sessions for better security and training capabilities.
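
One concrete way tooling can integrate with that stable API is through DNF's Python bindings. The minimal sketch below is not from the article, and the package name queried is just an example.

    import dnf

    base = dnf.Base()
    base.read_all_repos()   # load the enabled repository definitions
    base.fill_sack()        # load package metadata (installed and available)

    # Query installed packages named "bash" (name chosen as an example).
    for pkg in base.sack.query().installed().filter(name="bash"):
        print(pkg.name, pkg.evr, pkg.arch)

    base.close()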

Read more

HPC Chips, IBM and Red Hat on Servers

Filed under
Red Hat
Server
Hardware
  • Tachyum Boots Linux on Universal Processor Chip

    Today Tachyum announced it has successfully deployed the Linux OS on its Prodigy Universal Processor architecture, a foundation for a 64-core, ultra-low-power, high-performance processor. Running an OS directly and natively on its chip, without the need for host processors or other expensive components, reduces the cost of at-scale data centers and enables nearly unlimited flexibility in use.

  • Powering the Future of HPC & AI with OpenPOWER

    It is coming up on one year since the Summit supercomputer based on IBM POWER9 at Oak Ridge National Lab claimed the number one spot on the Top500 ranking. This system represents the culmination of a significant collaboration between OpenPOWER Foundation members IBM, Nvidia, Mellanox, and Red Hat, with the goal of producing a well-balanced computing platform for not only traditional HPC workloads such as modelling and simulation, but also AI workloads. With this milestone approaching, we took the opportunity to catch up with Hugh Blemings, Executive Director at the OpenPOWER Foundation, to chat about the foundation and what lies ahead.

  • The limits of compatibility and supportability with containers

    Many folks who do container development have run Alpine container images. You might have run Fedora, Red Hat Enterprise Linux (RHEL), CentOS, Debian, and Ubuntu images as well. If you are adventurous, you may have even run Arch, Gentoo, or dare I say, really old container images - like, RHEL 5 old.

    If you have some experience running container images, you might be led to believe that anything will just work, all the time, because containers are often thought to be completely portable across time and space. And a lot of the time, they do work! (Until they don't.)

    It’s easy to assume that there is nothing to worry about when mixing and matching the container image userspace and host operating system. This post intends to give a realistic explanation of the limits of compatibility with container images, and to demonstrate why "bring your own image" (BYI) isn't a workable enterprise solution.

  • Unlocking new levels of operational efficiency in financial services

    The financial services industry is changing. While the fundamental principles that the industry is built on remain the same—such as trust, value and customer service—the way financial organizations deliver on these values is far different from what it once was. We are now in an always-on, ever-connected world where banking customers expect to have access to accounts, information and services whenever and wherever they want, and the way organizations handle these operations can make or break the overall customer experience - and the bottom line.

    Financial services institutions need to find a balance between driving new innovations and keeping costs in check—all while meeting regulatory requirements. This culture of real-time engagement and access to information is leading organizations to not only reexamine business operational processes but also to think critically about the capabilities their core back-end banking systems provide, making changes and modernizing systems to keep pace.

  • Multi-architecture OpenShift containers

    Following the initial release of RHEL8-based OpenJDK OpenShift container images, we have now pushed PPC64LE and Aarch64 architecture variants to the Red Hat Container Registry. This is the first time I've pushed Aarch64 images in particular, and I'm excited to work on Aarch64-related issues, should any crop up!
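
    As a hedged illustration of what "multi-architecture" means in practice, the Python sketch below (the image reference is an example, not necessarily one of the images discussed above) asks skopeo for an image's raw manifest and prints the platforms listed in its manifest list.

        import json
        import subprocess

        IMAGE = "docker://registry.access.redhat.com/ubi8/openjdk-11"  # example image reference

        raw = subprocess.run(
            ["skopeo", "inspect", "--raw", IMAGE],
            check=True, capture_output=True, text=True,
        ).stdout
        manifest = json.loads(raw)

        # A multi-architecture image is a manifest list: each entry names a platform.
        for entry in manifest.get("manifests", []):
            platform = entry.get("platform", {})
            print(platform.get("architecture"), platform.get("os"))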
