Fedora Magazine

Guides, information, and news about the Fedora operating system for users, developers, system administrators, and community members.

Open source game achievements


Learn how Gamerzilla brings an achievement system to open source games and enables all developers to implement achievements separate from the game platform.

Some open source games rival the quality of commercial games. While it is hard to match the quality of triple-A titles, open source games compete effectively with indie games. But gamer expectations change over time. Early games tracked a high score. Achievements expanded on that over time to promote replay: for example, you may have completed a level but not found all the secrets or collected all the coins. The Xbox 360 introduced the first multi-game online achievement system. Since that introduction, many game platforms have added an achievement system.

Open source games are largely left out of these achievement systems. You can publish an open source game on Steam, but it costs money, and Steam focuses on working with companies, not the free software community. Additionally, this locks players into a non-free platform.

Commercial game developers are not well served either, since some players enjoy achievements and refuse to purchase from other stores due to the inability to share their accomplishments. This lock-in gives the power to the platform holder. Each platform has a different system, forcing the developer to implement and test support multiple times. Smaller platforms are likely to be skipped entirely. Furthermore, the platform holder has access to the achievement data of every company using its system, which could be used for competitive advantage.

Architecture of Gamerzilla

Gamerzilla is an open source game achievement system which attempts to correct this situation. The design considered both open source and commercial games. You can run your own Gamerzilla server, use one provided by a game store, or rely on one run by a distribution or other group. Where you buy the game doesn’t matter: the achievement data uploads to your Gamerzilla server.

Game achievements require two things: a game and a Gamerzilla server. As game collections grow, however, that setup has a disadvantage. Each game needs credentials to upload to the Gamerzilla server. Many gamers turn to game launchers due to their large number of games and the ability to synchronize with one or more stores. By adding Gamerzilla support to the launcher, the individual games no longer need to know your credentials. Session results relay from the game launcher to the Gamerzilla server.

At one time, freegamedev.net provided the Hubzilla social networking system. We created an addon allowing us to jump start Gamerzilla development. Unfortunately server upgrades broke the service so freegamedev.net stopped offering it.

For Gamerzilla servers, two implementations exist. Maintaining Hubzilla is a complex task, so we developed a standalone Gamerzilla service using .Net and React. The API used by games remains the same so it doesn’t matter which implementation you connect to.

Game launcher development and support often lag. To facilitate adding support, we created libgamerzilla. The library handles all interaction between the game launcher, games, and the Gamerzilla server. Right now only GameHub has an implementation with Gamerzilla support, and merging it into the project is pending. On Fedora Linux, the libgamerzilla-server package serves as a temporary solution: it does not launch games, but it listens for achievements and relays them to your server.

Game support continues growing. As with game launchers, developers use libgamerzilla to handle the Gamerzilla integration. The library, written in C, is used from a variety of languages like Python and Nim. Games which already have an achievement system typically take only a few days to add support. For other games, collecting all the information needed to award the achievements occupies the bulk of the implementation time.

Setting up a server

The easiest server to set up is the Hubzilla addon. That, however, requires a working Hubzilla site, which is not the simplest thing to set up. The new .Net and React server can be set up relatively easily on Fedora Linux, although there are a lot of steps; the readme details them all. The long list of steps is, in part, due to the lack of a built release, which means you need to build both the .Net and the React code. Once built, the React code is served directly by Apache. A new service runs the .Net piece, and Apache proxies all requests for the Gamerzilla API to that service.
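The proxy piece can be sketched as a small Apache fragment. The /api/ prefix and port 5000 are assumptions for illustration only; the readme gives the real values. A minimal sketch that writes such a fragment to a file:

```shell
# Hypothetical Apache reverse-proxy fragment for the Gamerzilla API.
# The /api/ path and port 5000 are assumptions -- check the project readme.
cat > gamerzilla-proxy.conf <<'EOF'
# Forward API requests to the .Net service; everything else is
# served by Apache directly (the built React code).
ProxyPass /api/ http://localhost:5000/api/
ProxyPassReverse /api/ http://localhost:5000/api/
EOF
```

The fragment would then be dropped into Apache’s configuration directory with mod_proxy enabled; again, the exact locations depend on the readme’s instructions.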

With the setup steps done, Gamerzilla runs, but there are no users. There needs to be an easy way to create an administrator and register new users; unfortunately, this piece does not exist yet. At this time, users must be entered directly using the sqlite3 command line tool, as described in the readme. Users can be publicly visible or not. The approval flag allows new users to be kept from using the system immediately, but web registration still needs to be implemented. The user piece is designed with replacement in mind: it would not be hard to replace backend/Service/UserService.cs to integrate with an existing site. Gaming web sites could use this to offer Gamerzilla achievements to their users.

Currently the backend uses an SQLite database. No performance testing has been done. We expect that larger installations may need to modify the system to use a more robust database system.

Testing the system

There is no game launcher easily available at the moment. If you install libgamerzilla-server, the gamerzillaserver command becomes available on the command line. The first time you run it, you enter your URL and login information. Subsequent executions simply read the information from the configuration file. There is currently no way to correct a mistake except deleting the file at ~/.local/share/gamerzillaserver/server.cfg and running gamerzillaserver again.
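The reset procedure described above amounts to two commands, using the path given in the article:

```shell
# Remove the saved gamerzillaserver configuration so the next run
# prompts for the URL and login information again.
cfg="$HOME/.local/share/gamerzillaserver/server.cfg"
rm -f "$cfg"         # -f: succeed even if the file is not there
# gamerzillaserver   # run again to re-enter your details (interactive)
```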

Most games have no built releases with Gamerzilla support. Pinball Disc Room on itch.io does have support built into the Linux version; the web version has no achievements. There are only two achievements in the game: one for surviving for ten seconds and the other for unlocking and using the tunnel. With a little practice you can get an achievement. You need to check your Gamerzilla server, as the game provides no visual notification of the achievement.

Currently no game packaged in Fedora Linux supports Gamerzilla. SuperTuxKart merged support but is still awaiting a new release. Seahorse Adventures and Shippy 1984 added achievements, but new releases are not packaged yet. Some games with support we maintain independently, as the developers ignore pull requests and other attempts to contact them.

Future work

Gamerzilla needs more games. A variety of games currently support the system, and an addition occurs nearly every month. If you have a game you like, ask the developer to support Gamerzilla. If you are making a game and need help adding support, please let us know.

Server development proceeds at a slow pace, and we hope to have a functional registration system soon. After that we may set up a permanent hosting site. Right now you can see our test server. Some people expressed concern about the .Net backend. The API is not very complex and could be rewritten in Python fairly easily.

The largest unknown remains game launchers. GameHub wants a generic achievement interface. We could try to work with them to get that implemented. Adding support to the itch.io app could increase interest in the system. Another possibility is to do away with the game launcher entirely. Perhaps adding something like the gamerzillaserver to GNOME might be possible. You would then configure your URL and login information on a settings page. Any game launched could then record achievements.

How to check for update info and changelogs with rpm-ostree db

Wednesday 15th of September 2021 08:00:00 AM

This article will teach you how to check for updates, check the changed packages, and read the changelogs with rpm-ostree db and its subcommands.

The commands will be demoed on a Fedora Silverblue installation and should work on any OS that uses rpm-ostree.

Introduction

Let’s say you are interested in immutable systems. Using a base system that is read-only while you build your use cases on top of containers technology sounds very attractive and it persuades you to select a distro that uses rpm-ostree.

You now find yourself on Fedora Silverblue (or another similar distro) and you want to check for updates. But you hit a problem. While you can find the updated packages on Fedora Silverblue with GNOME Software, you can’t actually read their changelogs. You also can’t use dnf updateinfo to read them on the command line, since there’s no DNF on the host system.

So, what should you do? Well, rpm-ostree has subcommands that can help in this situation.

Checking for updates

The first step is to check for updates. Simply run rpm-ostree upgrade --check:

$ rpm-ostree upgrade --check
...
AvailableUpdate:
Version: 34.20210905.0 (2021-09-05T20:59:47Z)
Commit: d8bab818f5abcfb58d2c038614965bf26426d55667e52018fcd295b9bfbc88b4
GPGSignature: Valid signature by 8C5BA6990BDB26E19F2A1A801161AE6945719A39
SecAdvisories: 1 moderate
Diff: 4 upgraded

Notice that while the output doesn’t list the updated packages, it does show the Commit for the update as d8bab818f5abcfb58d2c038614965bf26426d55667e52018fcd295b9bfbc88b4. This will be useful later.

Next thing you need to do is find the Commit for the current deployment you are running. Run rpm-ostree status to get the BaseCommit of the current deployment:

$ rpm-ostree status
State: idle
Deployments:
● fedora:fedora/34/x86_64/silverblue
    Version: 34.20210904.0 (2021-09-04T19:16:37Z)
    BaseCommit: e279286dcd8b5e231cff15c4130a4b1f5a03b6735327b213ee474332b311dd1e
    GPGSignature: Valid signature by 8C5BA6990BDB26E19F2A1A801161AE6945719A39
    RemovedBasePackages: ...
    LayeredPackages: ...
...

For this example BaseCommit is e279286dcd8b5e231cff15c4130a4b1f5a03b6735327b213ee474332b311dd1e.

Now you can find the diff of the two commits with rpm-ostree db diff [commit1] [commit2]. In this command commit1 will be the BaseCommit from the current deployment and commit2 will be the Commit from the upgrade checking command.
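As a convenience, the two hashes can be pulled out with a little shell. This sketch parses saved copies of the outputs shown above; in a real script you would capture the live output of the two rpm-ostree commands instead:

```shell
# Stand-ins for the output of `rpm-ostree status` and
# `rpm-ostree upgrade --check` (hashes taken from the examples above).
status_output="BaseCommit: e279286dcd8b5e231cff15c4130a4b1f5a03b6735327b213ee474332b311dd1e"
check_output="Commit: d8bab818f5abcfb58d2c038614965bf26426d55667e52018fcd295b9bfbc88b4"

# awk picks the hash out of the matching line.
from=$(printf '%s\n' "$status_output" | awk '/BaseCommit:/ {print $2}')
to=$(printf '%s\n' "$check_output" | awk '/^Commit:/ {print $2}')

# The diff command is then:
echo "rpm-ostree db diff $from $to"
```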

$ rpm-ostree db diff e279286dcd8b5e231cff15c4130a4b1f5a03b6735327b213ee474332b311dd1e d8bab818f5abcfb58d2c038614965bf26426d55667e52018fcd295b9bfbc88b4
ostree diff commit from: e279286dcd8b5e231cff15c4130a4b1f5a03b6735327b213ee474332b311dd1e
ostree diff commit to: d8bab818f5abcfb58d2c038614965bf26426d55667e52018fcd295b9bfbc88b4
Upgraded:
  soundtouch 2.1.1-6.fc34 -> 2.1.2-1.fc34

The diff output shows that soundtouch was updated and indicates the version numbers. View the changelogs by adding --changelogs to the previous command:

$ rpm-ostree db diff e279286dcd8b5e231cff15c4130a4b1f5a03b6735327b213ee474332b311dd1e d8bab818f5abcfb58d2c038614965bf26426d55667e52018fcd295b9bfbc88b4 --changelogs
ostree diff commit from: e279286dcd8b5e231cff15c4130a4b1f5a03b6735327b213ee474332b311dd1e
ostree diff commit to: d8bab818f5abcfb58d2c038614965bf26426d55667e52018fcd295b9bfbc88b4
Upgraded:
  soundtouch 2.1.1-6.fc34.x86_64 -> 2.1.2-1.fc34.x86_64
    * dom ago 29 2021 Uwe Klotz <uwe.klotz@gmail.com> - 2.1.2-1
    - Update to new upstream version 2.1.2
      Bump version to 2.1.2 to correct incorrect version info in configure.ac
    * sex jul 23 2021 Fedora Release Engineering <releng@fedoraproject.org> - 2.1.1-7
    - Rebuilt for https://fedoraproject.org/wiki/Fedora_35_Mass_Rebuild

This output shows the commit notes as well as the version numbers.

Conclusion

Using rpm-ostree db, you now have functionality equivalent to dnf check-update and dnf updateinfo.

This will come in handy if you want to inspect detailed info about the updates you install.

Stuart D Gathman: How do you Fedora?

Monday 13th of September 2021 08:00:00 AM

We recently interviewed Fedora user Stuart D. Gathman on how he uses Fedora Linux. This is part of a series on Fedora Magazine where we profile users and how they use Fedora Linux to get things done. If you are interested in being interviewed for a future installment of this series, you can contact us via the feedback form.

Who are you and what do you do?

For 35 years, Stuart worked as a system programmer for a small company, where his projects included database servers, device drivers, protocol stacks, expert systems, accounting systems, aged AR/AP reports, and EDI. Currently, he does hourly consulting work for small businesses.

Stuart’s childhood heroes were his dad and George Müller. His favorite movies are “The Gods Must Be Crazy” and “The Mission”. He grew up in a pacifist denomination, so he feels “The Mission” is very relevant to him. He loves oven-roasted vegetables.

Composing and performing music, mesh networking, and refurbishing discarded computers to run Fedora Linux are some of his spare-time interests, as is history, especially ancient Western and 19th-century English/American.

“Love/charity, Hope, Faith, Virtue, and knowledge” are the five qualities someone should possess, according to Stuart.

Fedora Community

Stuart’s first Linux was Red Hat Linux 3 in 1996. ”At first, I wasn’t sure how free it was going to work out but I took the plunge with Fedora 8 and was very pleased with the quality and stability. It was so nice to have more recent applications already packaged that required a lot of effort on my part to package for Red Hat Linux. One doesn’t need to be a programmer to contribute. There are other skills that go into making a great Linux distro. The most important skill that can be learned is how to submit a useful bug report. I honestly love the way Fedora Linux is organized, as is, and I keep hoping my Pinebook (ARM) onboard wifi and mic will start working with the next kernel release”, says Stuart.

The one person who influenced him to contribute to Fedora Linux was Caleb James DeLisle. Stuart had been building local RHL, RHEL, and Fedora Linux packages for personal and work repositories since 1996, but hadn’t taken the plunge to become an official Fedora packager. In 2015, he began researching how to decentralize network connections, and on Mar 22, 2016, his Fedora Cjdns package was officially approved! After that, he added more packages supporting decentralized communication, like OpenAS2, and more are on the way.

Some of the skills Stuart uses in his work are building software from source, RPM packaging (similar to programming), understanding and following Fedora Linux packaging guidelines, and learning unfamiliar programming languages sufficiently to build their applications from the source.

Stuart’s suggestions to newbies who want to become involved in the Fedora project are: “First, learn to file bug reports. Then extend Fedora for your own use: write cheat sheets, create a diagram to help you and others understand an application or utility, add a cool photo to your desktop backgrounds, make your own sound effects, develop ‘best practices’ for using applications and utilities for a particular job or project. Write about it for Fedora Magazine. Outline how you would explain the benefits of Fedora to various audiences.” His biggest concern is that Fedora Linux might fall victim to politics unrelated to the project goals.

What hardware do you use?

Stuart is currently running a second-hand Dell Optiplex 790 desktop, a Raspberry Pi 3, and a Pinebook (ARM) small form factor notebook with an all-day battery, the latter two bought new. He also runs a Dell Poweredge T310 VM host running Fedora Linux in virtual machines, a Dell Poweredge SC440, a Dell Inspiron 1440, a Thinkpad T61 (all of which were being discarded), and a refurbished Dell Latitude 3570 laptop. The Inspiron 1440 is his favorite, as it is so comfortable to use, but its Core 2 Duo does not support hardware virtualization.

What software do you use?

Stuart is currently running Fedora Linux 33 and Fedora Linux 34. He wrote an article for Fedora Magazine about using LVM for system upgrades, an approach that allows easy booting between versions for testing new releases.

Stuart relies on a suite of applications in his work:

  • VYM for quickly building outlines
  • Nheko for messaging
  • Glabels for business cards
  • Gkrellm

MAKE MORE with Inkscape – Ink/Stitch

Friday 10th of September 2021 08:00:00 AM

Inkscape, the most used and loved tool of Fedora’s Design Team, is not just a program for doing nice vector graphics. With vector graphics (in our case SVG) a lot more can be done. Many programs can import this format. Also, Inkscape can do a lot more than just graphics. The first article of this series showed how to produce GCode with Inkscape. This article will examine another Inkscape extension – Ink/Stitch. Ink/Stitch is an extension for designing embroidery with Inkscape.

DIY Embroidery

In the last few years the do-it-yourself or maker scene has experienced a boom. You could say it all began with the inexpensive option of 3D printing, followed by similarly inexpensive CNC machines and laser cutters/engravers. The prices for more traditional machines, such as embroidery machines, have also fallen in recent years. Home embroidery machines are now available for 500 US dollars.

If you don’t want to or can’t buy one yourself, the nearest MakerSpace often has one. Even the prices for commercial single-head embroidery machines are down to 5,000 US dollars. They are an investment that can pay off quickly.

Software for Embroidery Design

Some of the home machines include their own software for designing embroidery. But most, if not all, of these applications are Windows-only. Also, the most used manufacturer-independent software in this area – Embird – is only available for Windows, although you could run it in Wine.

Another solution for Linux – Embroidermodder – is no longer really developed, and this is after having had a fundraising campaign.

Today, only one solution is left – Ink/Stitch.

The logo of the Ink/Stitch project

Open Source and Embroidery Design

Ink/Stitch started out using libembroidery; today pyembroidery is used. The manufacturers can’t be blamed: given the prices of these machines and the number of Linux users, it is hardly worthwhile for them to develop applications for Linux.

The Embroidery File Format Problem


There is a problem with the proliferation of file formats for embroidery machines, especially since manufacturers tend to cook up their own file formats for their machines. In some cases, even a single manufacturer may use several different file formats.

  • .10o – Toyota embroidery machines
  • .100 – Toyota embroidery machines
  • .CSD – Poem, Huskygram, and Singer EU embroidery home sewing machines.
  • .DSB – Baruda embroidery machines
  • .JEF – MemoryCraft 10000 machines.
  • .SEW – MemoryCraft 5700, 8000, and 9000 machines.
  • .PES – Brother and Babylock embroidery home sewing machines.
  • .PEC – Brother and Babylock embroidery home sewing machines.
  • .HUS – Husqvarna/Viking embroidery home sewing machines.
  • .PCS – Pfaff embroidery home sewing machines.
  • .VIP – old Pfaff format also used by Husqvarna machines.
  • .VP3 – newer Pfaff embroidery home sewing machines.
  • .DST – Tajima commercial embroidery sewing machines.
  • .EXP – Melco commercial embroidery sewing machines.
  • .XXX – Compucon, Singer embroidery home sewing machines.
  • .ZSK – ZSK machines on the American market

This is just a small selection of the file formats that are available for embroidery. You can find a more complete list here. If you are interested in deeper knowledge about these file formats, see here for more information.

File Formats of Ink/Stitch

Ink/Stitch can currently read the following file formats: 100, 10o, BRO, DAT, DSB, DST, DSZ, EMD, EXP, EXY, FXY, GT, INB, JEF, JPX, KSM, MAX, MIT, NEW, PCD, PCM, PCQ, PCS, PEC, PES, PHB, PHC, SEW, SHV, STC, STX, TAP, TBF, U01, VP3, XXX, ZXY and also GCode as TXT file.

For the more important task of writing/saving your work, Ink/Stitch supports far fewer formats: DST, EXP, JEF, PEC, PES, U01, VP3 and, of course, SVG, CSV, and GCode as TXT.

Besides the problem of all these file formats, there are other problems that a potential stitch program has to overcome.

Working with the different kinds of stitches is one difficulty. The integration of tools for drawing and lettering is another. But why invent such a thing from scratch? Why not take an existing vector program and just add the functions for embroidery to it? That was the idea behind the Ink/Stitch project over three years ago.

Install Ink/Stitch

Ink/Stitch is an extension for Inkscape. Inkscape’s new functionality for downloading and installing extensions is still experimental, and you will not find Ink/Stitch among the extensions offered there. You must download the extension manually. After it is downloaded, unzip the package into your directory for Inkscape extensions. The default location is ~/.config/inkscape/extensions (or /usr/share/inkscape/extensions for system-wide availability). If you have changed the defaults, you may need to check Inkscape’s settings to find the location of the extensions directory.
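A manual install for the current user might look like the following sketch. The archive name is an assumption; use the file you actually downloaded:

```shell
# Create the per-user Inkscape extensions directory if it does not exist yet.
EXT_DIR="$HOME/.config/inkscape/extensions"
mkdir -p "$EXT_DIR"

# Unpack the downloaded Ink/Stitch release into it
# (uncomment and substitute the real archive name):
# unzip inkstitch-linux.zip -d "$EXT_DIR"

echo "Extensions directory: $EXT_DIR"
```

Restart Inkscape afterwards so that it picks up the new extension.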

Customization – Install Add-ons for Ink/Stitch

The Ink/Stitch extension provides a function called Install Add-Ons for Inkscape, which you should run first.

The execution of this function – Extensions > Ink/Stitch > Thread Color Management > Install thread color palettes for Inkscape – will take a while.

Do not become nervous, as there is no progress bar or similar indicator to watch.

This function will install 70 color palettes of various yarn manufacturers and a symbol library for Ink/Stitch.

Inkscape with the swatches dialogue open, which shows the Madeira Rayon color palette

If you use the version 2.0.0 download from GitHub, the ZIP file contains the color palette files. You only need to unpack them into the right directory (~/.config/inkscape/palettes/). If you need a hoop template, you can download one and save it to ~/.config/inkscape/templates.
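A sketch of unpacking the palettes and a hoop template by hand, using the directories named above (the archive and template file names are assumptions):

```shell
PAL_DIR="$HOME/.config/inkscape/palettes"
TPL_DIR="$HOME/.config/inkscape/templates"
mkdir -p "$PAL_DIR" "$TPL_DIR"

# Palette files (*.gpl) from the release ZIP go into the palettes directory
# (uncomment and substitute the real archive name):
# unzip inkstitch-v2.0.0.zip '*.gpl' -d "$PAL_DIR"

# A downloaded hoop template is simply copied in place:
# cp hoop-template.svg "$TPL_DIR/"
```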

The next time you start Inkscape, you will find it under File > New From Template.

Lettering with Ink/Stitch

The easiest and most widely used way is to get an embroidery design using the Lettering function of Ink/Stitch. It is located under Extensions > Ink/Stitch > Lettering. Lettering for embroidery is not simple. What you expect are so-called satin-stitched letters, and for this, special font settings are needed.

Inkscape with a “Chopin” glyph for satin stitching defined for the Lettering function

You can convert paths to satin stitching. But this is more work intensive than using the Lettering function. Thanks to the work of an active community, the May 2021 release of Ink/Stitch 2.0 brought more predefined fonts for this. An English tutorial on how to create such fonts can be found here.

Version 2.0 also brings functions (Extensions > Ink/Stitch > Font Management) to make managing these kinds of fonts easier. There are also functions for creating these kinds of fonts, but you will need knowledge about font design with Inkscape to do so. First, you create an entire SVG font. It is then fed through a JSON script which converts the SVG font into the type of files that Ink/Stitch’s font management function works with.

On the left side, the Lettering dialogue; on the right, a preview of these settings

The function will open a dialogue window where you just have to put in your text, choose the size and font, and then it will render a preview.

Embroider Areas/Path-Objects

The easiest thing with Ink/Stitch is to embroider areas or paths. Just draw your path. When you use shapes, you have to convert them first and then run Extensions > Ink/Stitch > Fill Tools > Break Apart Fill Objects…

This breaks the path apart into its different parts. You have to use this function; the Path > Break Apart function of Inkscape won’t work for this.

Next, you can run Ink/Stitch’s built-in simulator: Extensions > Ink/Stitch > Visualise and Export > Simulator/Realistic Preview.

The new Fedora logo as Stitch Plan Preview

Be careful with the simulator: it takes a lot of system resources and it will take a while to start. You may find it easier to use the function Extensions > Ink/Stitch > Visualise and Export > Stitch Plan Preview. The latter renders the threading of the embroidery outside of the document.

Nicubunu’s Fedora hat icon as embroidery. The angles for the stitches of the head part and the brim are different so that it looks more realistic. The outline is done in satin stitching.

Simple Satin and Satin Embroidery

Ink/Stitch will convert each stroke with a continuous line (no dashes) to what it calls Zig-Zag or Simple Satin. Stitches are created along the path using the stroke width you have specified. This will work as long as there aren’t too many curves in the path.

Parameter settings dialogue and, on the right, the Fedora logo shape embroidered as a Zig-Zag line

This is simple, but it is far from the best way. It is better to use the Satin Tools for this. The functions for satin embroidery can be found under Extensions > Satin Tools. The most important is the conversion function, which converts paths to satin strokes.

Fedora logo shape as Satin Line embroidery

You can also reverse the stitch direction using Extensions > Satin Tools > Flip Satin Column Rails. This underlines the 3D effect satin embroidery gets, especially when you make puff embroidery. For machines that have this capability, you can also set the markings for the trims of jump stitches. To visualize these trims, Ink/Stitch uses the symbols that were installed from its own symbol library.

The Ink/Stitch Stitch Library

What is called the stitch library is simply the kind of stitches that Ink/Stitch can create. The Fill Stitch and Zig-Zag/Satin Stitch have already been introduced. But there are more.

  • Running Stitches: These are used for doing outline designs. The running stitch produces a series of small stitches following a line or curve. Each dashed line will be converted into a Running Stitch. The size of the dashes does not matter.
A running stitch – each dashed line will be converted into one
  • Bean Stitches: These can also be used for outline designs or add details to a design. The bean stitch describes a repetition of running stitches back and forth. This results in thicker threading.
Bean Stitches – creating a thicker line
  • Manual Stitch: In this mode, Ink/Stitch will use each node of a path as a needle penetration point; exactly as they are placed.
In manual mode – each node will be the needle penetration point
  • E-Stitch: The main use for E-stitch is a simple but strong cover stitch for appliqué items. It is often used for baby clothes because their skin tends to be more sensitive.
E-Stitch – mostly used for appliqués on baby clothes; a soft but strong connection

Embroidery Thread List

Some embroidery machines (especially those designed for commercial use) allow different threads to be fitted in advance according to what will be needed for the design. These machines will automatically switch to the right thread when needed. Some file formats for embroidery support this feature. But some do not. Ink/Stitch can apply custom thread lists to an embroidery design.

If you want to work on an existing design, you can import a thread list: Extensions > Ink/Stitch > Import Threadlist. Thread lists can also be exported, by saving the design via Save As in the *.zip format. You can also print them: Extensions > Ink/Stitch > Visualise and Export > Print PDF.

Conclusion

Writing software for embroidery design is not easy. Many functions are needed, and the diversity of (sometimes proprietary) file formats makes the task difficult. Ink/Stitch has managed to create a useful tool with many functions. It enables the user to get started with basic embroidery design. Some things could be done a little better, but it is definitely a good tool as-is, and I expect that it will become better over time. Machine embroidery can be an interesting hobby, and with Ink/Stitch the Fedora Linux user can begin designing breathtaking things.

Apps for daily needs part 5: video editors

Wednesday 8th of September 2021 08:00:00 AM

Video editing has become a popular activity. People need video editors for various reasons, such as work, education, or just a hobby. There are also now many platforms for sharing video on the internet. Almost all social media and chat messengers provide features for sharing videos. This article will introduce some of the open source video editors that you can use on Fedora Linux. You may need to install the software mentioned. If you are unfamiliar with how to add software packages in Fedora Linux, see my earlier article Things to do after installing Fedora 34 Workstation. Here is a list of a few apps for daily needs in the video editors category.

Kdenlive

When anyone asks about an open source video editor on Linux, the answer that often comes up is Kdenlive. It is a very popular video editor among open source users, because its feature set covers general purposes and is easy to use for someone who is not a professional.

Kdenlive supports multi-track, so you can combine audio, video, images, and text from multiple sources. This application also supports various video and audio formats without having to convert them first. In addition, Kdenlive provides a wide variety of effects and transitions to support your creativity in producing cool videos. Some of the features that Kdenlive provides are titler for creating 2D titles, audio and video scopes, proxy editing, timeline preview, keyframeable effects, and many more.

More information is available at this link: https://kdenlive.org/en/

Shotcut

Shotcut has more or less the same features as Kdenlive. This application is a general-purpose video editor. It has a fairly simple interface, but with complete features to meet the various needs of your video editing work.

Shotcut has a complete set of features for a video editor, ranging from simple editing to high-level capabilities. It also supports various video, audio, and image formats. You don’t need to worry about your work history, because this application has unlimited undo and redo. Shotcut also provides a variety of video and audio effects, so you have freedom to be creative in producing your video works. Some of the features offered are audio filters, audio mixing, cross-fade audio and video dissolve transitions, a tone generator, speed change, video compositing, 3-way color wheels, track compositing/blending modes, video filters, etc.

More information is available at this link: https://shotcut.org/

Pitivi

Pitivi will be the right choice if you want a video editor that has an intuitive and clean user interface. You will feel comfortable with how it looks and will have no trouble finding the features you need. This application is classified as very easy to learn, especially if you need an application for simple editing needs. However, Pitivi still offers a variety of features, like trimming & cutting, sound mixing, keyframeable audio effects, audio waveforms, volume keyframe curves, video transitions, etc.

More information is available at this link: https://www.pitivi.org/

Cinelerra

Cinelerra is a video editor that has been in development for a long time. There are tons of features for your video work such as built-in frame render, various video effects, unlimited layers, 8K support, multi camera support, video audio sync, render farm, motion graphics, live preview, etc. This application is maybe not suitable for those who are just learning. I think it will take you a while to get used to the interface, especially if you are already familiar with other popular video editor applications. But Cinelerra will still be an interesting choice as your video editor.

More information is available at this link: http://cinelerra.org/

Conclusion

This article presented four video editor apps available on Fedora Linux for your daily needs. There are many other video editors you can use on Fedora Linux, such as Olive (Fedora Linux repo), OpenShot (rpmfusion-free), and Flowblade (rpmfusion-free). Each video editor has its own advantages. Some are better at color correction, while others excel at a variety of transitions and effects. Some are better when it comes to how easily you can add text. Choose the application that suits your needs. Hopefully this article helps you choose the right video editor. If you have experience with these applications, please share it in the comments.

Contribute at Fedora Linux 35 Audio, i18n, GNOME 41, and Kernel test days

Sunday 5th of September 2021 08:00:00 AM

Fedora test days are events where anyone can help make sure changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed to Fedora Linux before, this is a perfect way to get started.

There are four upcoming test events in the coming weeks.

  • Tuesday, September 07 through Monday, September 13, is the i18n test week.
  • Thursday, September 09 through Thursday, September 16, is the GNOME 41 test week.
  • Sunday, September 12 through Sunday, September 19, is the Kernel 5.14 test week.
  • Wednesday, September 15 is the Fedora Linux 35 Audio test day.

Come and test with us to make Fedora Linux 35 even better. Read more below on how to do it.

i18n test week

GNOME is the default desktop environment for Fedora Workstation and thus for many Fedora users. A lot of our users use Fedora Linux in their preferred languages and it’s important that we test the changes. The wiki contains more details about how to participate. The test week is Sept 07 through Sept 13.

Audio test day

There is a recent proposal to replace the PulseAudio daemon with a functionally compatible implementation based on PipeWire. This means that all existing clients using the PulseAudio client library, as well as applications shipped as Flatpak, will continue to work as before. The test day is to verify that everything works as expected. This will occur on Wed, Sept 15.

Kernel test week

The kernel team is working on the final integration for kernel 5.14. This version was just recently released and will arrive soon in Fedora Linux. As a result, the Fedora kernel and QA teams have organized a test week for Sunday, Sept 12 through Sunday, Sept 19. Refer to the wiki page for links to the test images you’ll need to participate. This document clearly outlines the steps. The test image goes live 24hrs before the test week starts.

GNOME test week

GNOME is the default desktop environment for Fedora Workstation and thus for many Fedora Linux users. As a part of the planned change, the GNOME 41 megaupdate will land on Fedora and will then ship with Fedora Linux 35. To ensure that everything works fine, the Workstation WG and QA teams will have their test week from Thursday, Sept 09 through Sept 16. Refer to the wiki page for links and resources to test GNOME during test week.

How do test days work?

A test day or week is an event where anyone can help make sure changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. Test days are the perfect way to start contributing if you have not done so in the past.

The only requirement to get started is the ability to download test materials (which include some large files) and then read and follow directions step by step.

Detailed information about all the test days is available on the wiki pages linked above. If you are available on or around the days of the events, please do some testing and report your results.

Install ONLYOFFICE Docs on Fedora Linux with Podman and connect it with Nextcloud

Friday 3rd of September 2021 08:00:00 AM

If you need a reliable office suite for online editing and collaboration within your sync & share platform, you can try ONLYOFFICE Docs. In this tutorial, we learn how to install it on your Fedora Linux with Podman and discover the ONLYOFFICE-Nextcloud integration.

What is ONLYOFFICE Docs

ONLYOFFICE Docs (Document Server) is an open-source office suite distributed under GNU AGPL v3.0. It is comprised of web-based viewers and collaborative editors for text documents, spreadsheets, and presentations. The suite is highly compatible with OOXML formats (docx, xlsx, pptx).

A brief features overview includes:

  • Full set of editing and styling tools, operations with fonts and styles, paragraph and text formatting.
  • Inserting and customizing all kinds of objects: shapes, charts, text art, text boxes, etc.
  • Academic formatting and navigation: endnotes, footnotes, table of contents, bookmarks.
  • Content Controls for creating digital forms and templates.
  • Extending functionality with plugins, building your own plugins using API.
  • Collaborative features: real-time and paragraph-locking co-editing modes, review and track changes, comments and mentions, integrated chat, version history.
  • Flexible access permissions: edit, view, comment, fill forms, review, restriction on copying, downloading, and printing, custom filter for spreadsheets.

You can integrate ONLYOFFICE Docs with various cloud services such as Nextcloud, ownCloud, Seafile, Alfresco, Plone, etc. What’s more, developers can embed the editors into their own solutions. 

You can also use the suite together with ONLYOFFICE Groups, a free open-source collaboration platform distributed under Apache 2.0. The complete solution is available as ONLYOFFICE Workspace.

What is Podman

Podman is a daemonless container engine for developing, managing, and running OCI containers on your Linux system. Users can run containers either as root or in rootless mode. 

It is available by default on Fedora Workstation. If it’s not the case, install podman with the command:

sudo dnf install podman

What you need for ONLYOFFICE Docs installation
  • CPU: single core 2 GHz or better
  • RAM: 2 GB or more
  • HDD: at least 40 GB of free space
  • At least 4 GB of swap
Install and run ONLYOFFICE Docs

Start with the following commands for the root-privileged deployment. They create directories for mounting from the container to the host system:

$ sudo mkdir -p /app/onlyoffice/DocumentServer/logs \
                /app/onlyoffice/DocumentServer/data \
                /app/onlyoffice/DocumentServer/lib \
                /app/onlyoffice/DocumentServer/db

Now mount these directories via podman. When prompted, select the image from docker.io:

$ sudo podman run -i -t -d -p 80:80 -p 443:443 --restart=always \
    -v /app/onlyoffice/DocumentServer/logs:/var/log/onlyoffice:Z \
    -v /app/onlyoffice/DocumentServer/data:/var/www/onlyoffice/Data:Z \
    -v /app/onlyoffice/DocumentServer/lib:/var/lib/onlyoffice:Z \
    -v /app/onlyoffice/DocumentServer/db:/var/lib/postgresql:Z \
    -u root onlyoffice/documentserver:latest

Please note that rootless deployment is NOT recommended for ONLYOFFICE Docs.

To check that ONLYOFFICE is working correctly, run:

$ sudo podman exec $(sudo podman ps -q) sudo supervisorctl start ds:example

Then, open http://localhost/welcome and click the word “here” in the line “Once started the example will be available here”. Or look for the orange button that says “GO TO TEST EXAMPLE”. This opens the test example where you can create a document.

Alternatively, to install ONLYOFFICE Docs, you can build an image in podman:

$ git clone https://github.com/ONLYOFFICE/Docker-DocumentServer.git
$ cd Docker-DocumentServer/
$ sudo podman build --tag oods6.2.0:my -f ./Dockerfile

Or build an image from the Docker file in buildah (you need root access):

$ buildah bud --tag oods6.2.0buildah:mybuildah -f ./Dockerfile

Activate HTTPS

To secure the application via SSL, you basically need two things:

  • Private key (.key)
  • SSL certificate (.crt)

So you need to create and install the following files:

/app/onlyoffice/DocumentServer/data/certs/onlyoffice.key
/app/onlyoffice/DocumentServer/data/certs/onlyoffice.crt

You can get certificates in several ways depending on your requirements: buy from certification centers, request from Let’s Encrypt, or create a self-signed certificate through OpenSSL (note that self-signed certificates are not recommended for production use).
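For testing, a self-signed certificate can be generated with a single OpenSSL command. This is a sketch; the /CN=localhost subject is a placeholder for your server's real host name, and browsers will warn about self-signed certificates:

```shell
# Generate a private key and a self-signed certificate valid for one year.
# Replace /CN=localhost with your server's real host name.
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
    -keyout onlyoffice.key -out onlyoffice.crt \
    -subj "/CN=localhost"
```

The resulting onlyoffice.key and onlyoffice.crt files are the two files installed in the next step.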

Secure ONLYOFFICE Docs by switching to the HTTPS protocol:

$ sudo mkdir /app/onlyoffice/DocumentServer/data/certs
$ sudo cp onlyoffice.crt /app/onlyoffice/DocumentServer/data/certs/
$ sudo cp onlyoffice.key /app/onlyoffice/DocumentServer/data/certs/
$ sudo chown -R 100108:100111 /app/onlyoffice/DocumentServer/data/certs/
# find the podman container id
$ sudo podman ps -a
# restart the container to use the new certificate
$ sudo podman restart {container_id}

Now you can integrate ONLYOFFICE Docs with the platform you already use and start working with your documents.

ONLYOFFICE-Nextcloud integration example

To connect ONLYOFFICE Docs and Nextcloud (or any other DMS), you need a connector. This is an integration app that functions like a bridge between two services.  

In case you’re new to Nextcloud, you can install it with Podman following this tutorial.   

If you already have Nextcloud installed, you just need to install and activate the connector. Do this with the following steps:

  1. launch your Nextcloud as an admin,
  2. click your user icon in the upper right corner,
  3. switch to + Apps,
  4. find ONLYOFFICE in the list of available applications in the section “Office & text”,
  5. click the Download and enable button. 

ONLYOFFICE now appears in the Active apps section and you can go ahead with the configuration. 

Select your user icon again in the upper right corner -> Settings -> Administration -> ONLYOFFICE. On the settings page, you can configure:

  • The address of the machine with ONLYOFFICE installed
  • Secret key (JWT that protects docs from unauthorized access)
  • ONLYOFFICE and Nextcloud addresses for internal requests

You can also adjust additional settings which are not mandatory but will make your user experience more comfortable:

  • Restrict access to the editors to user groups
  • Enable/disable the Open file in the same tab option
  • Select file formats that will be opened by default with ONLYOFFICE
  • Customize editor interface
  • Enable watermarking
Conclusion

Installing ONLYOFFICE Docs on Fedora Linux with Podman is quite easy. It will give you a powerful office suite for integration into any Document Management System.

Getting ready for Fedora Linux

Wednesday 1st of September 2021 08:00:00 AM
Introduction

Why does Linux remain vastly invisible to ordinary folks who make general use of computers? This article steps through the process of moving to Fedora Linux Workstation for non-Linux users. It also describes features of the GUI (Graphical User Interface) and CLI (Command Line Interface) for the newcomer. This is a quick introduction, not an in-depth course.

Installation and configuration are straightforward

Supposedly, creating a bootable USB drive is the most baffling part of getting started with Linux for a beginner. In all fairness, installation with Fedora Media Writer and Anaconda is intuitive.

Step-by-step installation process
  1. Make a Fedora USB stick: 5 to 7 minutes depending on USB speed
  2. Understand disk partitions and Linux file systems
  3. Boot from a USB device
  4. Install with the Fedora installer, Anaconda: 15 to 20 minutes
  5. Software updates: 5 minutes

Following this procedure, it is easy to help family and friends install Fedora Linux.

Package management and configuration

Instead of configuring the OS manually and adding the tools and applications you need, you may choose a functional bundle from Fedora Labs for a specific use case. Design Suite, Scientific, Python Classroom, and more are available. Plus, the whole process completes without the command line.

Connecting devices and services
  • Add a USB printer: Fedora Linux detects most printers in a few seconds. Some may require additional drivers.
  • Configure a USB keyboard: Refer to this simple workaround for a mechanical keyboard.
  • Sync with Google Drive: Add an account either after installation, or at any time afterward.
Desktop customization is easy

The default GNOME desktop is decent and free from distractions.

A shortlist to highlight desktop benefits:

  • Simplicity: Clean design, fluid and elegant application grid.
  • Reduced user effort: No alerts for paid services or long lists of consent prompts.
  • Accommodating software: GNOME requires little specialist knowledge or technical ability.
  • Neat layout of system Settings: Larger icons and a better layout.

The image below shows the applications and desktops currently available. Get here by selecting “Activities” and then the “Show Applications” icon at the bottom of the screen at the far right. There you will find LibreOffice for your document, spreadsheet, and presentation creation. Also available is Firefox for your web browsing. More applications are added using the Software icon (second from right at the bottom of the screen).

GNOME desktop

Enable touchpad click (tapping)

A change for touchpad settings is required for laptop users.

  1. Go to Activities > Show Applications > Settings > Mouse & Touchpad > Touchpad
  2. Change the default touchpad behavior (double click) to tap-to-click (single tap) for the built-in touchpad
  3. Select ‘Tap to Click’
Add user accounts using the users settings tool

During installation, you set up your first login account. For training or demo purposes, it is common to create a new user account.

  1. Add users: Go to Settings > Users > Unlock > Authentication > Add user
  2. Click at the top of the screen at the far right, navigate to Power Off / Log out, and select Switch User to log in as the new user.
Fedora Linux is beginner-friendly

Yes, Fedora Linux caters to a broader selection of users. Since that is the case, why not dip into the shallow end of the Fedora community?

  • Fedora Docs: Clarity of self-help content is outstanding.
  • Ask Fedora: Get help for anything about Fedora Linux.
  • Magazine: Useful tips and user stories are engaging. Make a suggestion for an article.
  • Nest with Fedora: Warm welcome virtually from Fedora Linux community.
  • Release parties.
Command line interface is powerful

The command line is a way of giving instructions to a computer (the shell) using a terminal. To be fair, the real power behind Fedora Linux is the Bash shell, which empowers users to be problem solvers. The good news is that text-based commands are largely compatible across different versions of Linux. The Bash shell comes with Fedora Linux, so there is no need to install it.

The following will give you a feeling for the command line. However, you can accomplish many if not all day-to-day tasks without using the command line.

How to use commands?

Access the command line by selecting “Activities” and then the “Show Applications” icon at the bottom of the screen at the far right. Select Terminal.

Understand the shell prompt

The standard shell prompt looks like this:

[hank@fedora_test ~]$

The shell prompt waits for a command.

It shows the name of the user (hank), the computer being used (fedora_test), and the current working directory within the filesystem (~, meaning the user’s home directory). The last character of the prompt, $, indicates that this is a normal user’s prompt.
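You can print each piece of the prompt yourself with three basic commands (shown here as a quick exercise; your output will reflect your own user and machine names):

```shell
whoami     # the user name shown in the prompt (hank)
uname -n   # the computer name (fedora_test)
pwd        # the current working directory (~ is shorthand for /home/hank)
```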

Enter commands

What common tasks should a beginner try out on the command line?

  • Command line information is available from the Fedora Magazine and other sites.
  • Use ls and cd to list and navigate your file system.
  • Make new directories (folders) with mkdir.
  • Delete files with rm.
  • Use lsblk command to display partition details.
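A short practice session tying those commands together might look like this (run in a scratch directory so nothing important is touched):

```shell
cd "$(mktemp -d)"            # start in a fresh scratch directory
mkdir projects               # mkdir makes a new directory (folder)
cd projects                  # cd moves into it
echo "my first file" > notes.txt
ls                           # ls lists the contents: notes.txt
rm notes.txt                 # rm deletes the file
ls                           # now prints nothing
```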
How to deal with the error messages
  • Be attentive to error messages in the terminal. Common errors are missing arguments and typos in file names.
  • Pause to think about why that happened.
  • Figure out the correct syntax using the man command. For example:
    man ls
    displays the manual page for the ls command.
Perform administration tasks using sudo

When a user executes commands for installation, removal, or changing of software, the sudo command grants administrative or root access. The actions that require the sudo command are often called ‘administrative tasks’. Sudo stands for SuperUser DO. The syntax for the sudo command is as follows:

sudo [COMMAND]
  1. Replace COMMAND with the command to run as the root user.
  2. Enter password

What are the most used sudo commands to start with?

  • List privileges
sudo -l
  • Install a package
sudo dnf install [package name]
  • Update a package
sudo dnf update [package name]
  • List package groups
sudo dnf grouplist
  • Manage disk partitions
sudo fdisk -l

Built-in text editor is light and efficient

Nano is the default command-line text editor for Fedora Linux. vi is another editor often used on Fedora Linux. Both are light and fast. Which to use is a personal choice, really. Nano and vi remain essential tools for editing config files and writing scripts. Generally, Nano is much simpler to work with than vi, but vi can be more powerful once you get used to it.

How does a beginner benefit from a text editor?
  • Learn fundamentals of computing

Linux offers a vast range of customization options and monitoring. Shell scripts make it possible to add new functionality and the editor is used to create the scripts.

  • Build cool things for home automation

Raspberry Pi is a testing ground for building awesome projects for the home. Fedora Linux can be installed on the Raspberry Pi. Schools use the tiny microcomputer for IT training and experiments. Instead of a visual editor, it is easier to use the light and simple Nano editor to write files.

  • Test proof of concept with the public cloud services

Most public cloud suppliers offer a free sandbox account to spin up a virtual machine or configure the network. Cloud servers run Linux, so editing configuration files requires a text editor. Without installing additional software, it is easy to invoke Nano on a remote server.

How to use Nano text editor

Type nano and a file name after the shell prompt $ and press Enter.

[hank@fedora_test ~]$ nano [filename]

Note that many of the most used commands are displayed at the bottom of the nano screen. The symbol ^ in Nano means to press the Ctrl key.

  • Use the arrow keys on the keyboard to move up and down, left and right.
  • Edit file.
  • Get built-in help by pressing ^G
  • Exit by entering ^X and Y to save your file and return to the shell prompt.
Examples of file extensions used for configuration or shell scripts
  • .cfg: User-configurable files in the /etc directory.
  • .yaml: A popular type of configuration file with cross-language data portability.
  • .json: JSON is a lightweight & open standard format for storing and transporting data.
  • .sh: A shell script used universally for Unix/Linux systems.
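As a tiny example of the last type, here is a script you could create in Nano and run (the file name hello.sh is arbitrary):

```shell
# Create hello.sh, mark it executable, and run it
cat > hello.sh << 'EOF'
#!/bin/sh
# Greet from the machine running the script
echo "Hello from $(uname -n)"
EOF
chmod +x hello.sh    # make the script executable
./hello.sh
```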

Above all, this is not a comprehensive guide to Nano or vi. Yet adventurous learners should be aware of text editors as a next step in becoming accomplished with Fedora Linux.

Conclusion

Does Fedora Workstation simplify the user experience of a beginner with Linux? Yes, absolutely. It is entirely possible to create a desktop quickly and get the job done without installing additional software or extensions.

Taking it to the next level, how do you get more people into Fedora Linux?

  • Make a Fedora Linux device available at home. A repurposed computer with the above guide is a starting point.
  • Demonstrate cool things with Fedora Linux.
  • Share power user tips with shell scripts.
  • Get involved with Open Source Software community such as the Fedora project.

How to install only security and bugfixes updates with DNF

Monday 30th of August 2021 08:00:00 AM

This article explores how to filter the updates available to your Fedora Linux system by type. This way you can choose to, for example, only install security or bug fix updates. The article demos running the dnf commands inside a toolbox instead of on a real Fedora Linux install.

You might also want to read Use dnf updateinfo to read update changelogs before reading this article.

Introduction

If you have been managing system updates for Fedora Linux or any other GNU/Linux distro, you might have noticed how, when you run a system update (with dnf update, in the case of Fedora Workstation), you usually are not installing only security updates.

Due to how package management in a GNU/Linux distro works, generally (with the exception of software running in a container, under Flatpak, or similar technologies) you are updating every single package regardless of whether it’s a “system” software or an “app”.

DNF divides updates into three types: “security”, “bugfix”, and “enhancement”. And, as you will see, DNF allows filtering which types you want to operate on.

But, why would you want to update only a subset of packages?

Well, this might depend on how you personally choose to deal with system updates. If you are not comfortable at the moment with updating everything, then restricting the current update to only security updates might be a good choice. You could install bug fix updates as well, and leave enhancements and other types of updates for a future opportunity.

How to filter security and bug fix updates

Start by creating a Fedora Linux 34 toolbox:

toolbox create --distro fedora --release f34 updatefilter-demo

Then enter that toolbox:

toolbox enter updatefilter-demo

From this point on, the commands work the same as they would on a real Fedora Linux install.

First, run dnf check-update to see the unfiltered list of packages:

$ dnf check-update
audit-libs.x86_64          3.0.5-1.fc34       updates
avahi.x86_64               0.8-14.fc34        updates
avahi-libs.x86_64          0.8-14.fc34        updates
...
vim-minimal.x86_64         2:8.2.3318-1.fc34  updates
xkeyboard-config.noarch    2.33-1.fc34        updates
yum.noarch                 4.8.0-1.fc34       updates

DNF supports passing the type of updates to operate on as a parameter: --security for security updates, --bugfix for bug fix updates, and --enhancement for enhancement updates. These work with commands such as dnf check-update, dnf update, and dnf updateinfo.

For example, this is how you filter the list of available updates by security updates only:

$ dnf check-update --security
avahi.x86_64               0.8-14.fc34        updates
avahi-libs.x86_64          0.8-14.fc34        updates
curl.x86_64                7.76.1-7.fc34      updates
...
libgcrypt.x86_64           1.9.3-3.fc34       updates
nettle.x86_64              3.7.3-1.fc34       updates
perl-Encode.x86_64         4:3.12-460.fc34    updates

And now same thing but by bug fix updates only:

$ dnf check-update --bugfix
audit-libs.x86_64          3.0.5-1.fc34        updates
ca-certificates.noarch     2021.2.50-1.0.fc34  updates
coreutils.x86_64           8.32-30.fc34        updates
...
systemd-pam.x86_64         248.7-1.fc34        updates
systemd-rpm-macros.noarch  248.7-1.fc34        updates
yum.noarch                 4.8.0-1.fc34        updates

They can even be combined, so you can use two or more of them at the same time. For example, you can filter the list to show both security and bug fix updates:

$ dnf check-update --security --bugfix
audit-libs.x86_64          3.0.5-1.fc34       updates
avahi.x86_64               0.8-14.fc34        updates
avahi-libs.x86_64          0.8-14.fc34        updates
...
systemd-pam.x86_64         248.7-1.fc34       updates
systemd-rpm-macros.noarch  248.7-1.fc34       updates
yum.noarch                 4.8.0-1.fc34       updates

As mentioned, dnf updateinfo also works with this filtering, so you can filter dnf updateinfo, dnf updateinfo list and dnf updateinfo info. For example, for the list of security updates and their IDs:

$ dnf updateinfo list --security
FEDORA-2021-74ebf2f06f Moderate/Sec.  avahi-0.8-14.fc34.x86_64
FEDORA-2021-74ebf2f06f Moderate/Sec.  avahi-libs-0.8-14.fc34.x86_64
FEDORA-2021-83fdddca0f Moderate/Sec.  curl-7.76.1-7.fc34.x86_64
FEDORA-2021-e14e86e40e Moderate/Sec.  glibc-2.33-20.fc34.x86_64
FEDORA-2021-e14e86e40e Moderate/Sec.  glibc-common-2.33-20.fc34.x86_64
FEDORA-2021-e14e86e40e Moderate/Sec.  glibc-minimal-langpack-2.33-20.fc34.x86_64
FEDORA-2021-8b25e4642f Low/Sec.       krb5-libs-1.19.1-14.fc34.x86_64
FEDORA-2021-83fdddca0f Moderate/Sec.  libcurl-7.76.1-7.fc34.x86_64
FEDORA-2021-31fdc84207 Moderate/Sec.  libgcrypt-1.9.3-3.fc34.x86_64
FEDORA-2021-d1fc0b9d32 Moderate/Sec.  nettle-3.7.3-1.fc34.x86_64
FEDORA-2021-92e07de1dd Important/Sec. perl-Encode-4:3.12-460.fc34.x86_64

If desired, you can install only security updates:

# dnf update --security
================================================================================
 Package            Arch    Version          Repository  Size
================================================================================
Upgrading:
 avahi              x86_64  0.8-14.fc34      updates     289 k
 avahi-libs         x86_64  0.8-14.fc34      updates      68 k
 curl               x86_64  7.76.1-7.fc34    updates     297 k
...
 perl-Encode        x86_64  4:3.12-460.fc34  updates     1.7 M
Installing weak dependencies:
 glibc-langpack-en  x86_64  2.33-20.fc34     updates     563 k

Transaction Summary
================================================================================
Install   1 Package
Upgrade  11 Packages

Total download size: 9.7 M
Is this ok [y/N]:

Or even to install both security and bug fix updates while ignoring enhancement updates:

# dnf update --security --bugfix
================================================================================
 Package                     Arch    Version          Repo     Size
================================================================================
Upgrading:
 audit-libs                  x86_64  3.0.5-1.fc34     updates  116 k
 avahi                       x86_64  0.8-14.fc34      updates  289 k
 avahi-libs                  x86_64  0.8-14.fc34      updates   68 k
...
 rpm-plugin-systemd-inhibit  x86_64  4.16.1.3-1.fc34  fedora    23 k
 shared-mime-info            x86_64  2.1-2.fc34       fedora   374 k
 sqlite                      x86_64  3.34.1-2.fc34    fedora   755 k

Transaction Summary
================================================================================
Install  11 Packages
Upgrade  45 Packages

Total download size: 32 M
Is this ok [y/N]:

Install only specific updates

You may also choose to only install the updates with a specific ID, such as FEDORA-2021-74ebf2f06f for avahi, by using --advisory and specifying the ID:

# dnf update --advisory=FEDORA-2021-74ebf2f06f
================================================================================
 Package     Architecture  Version      Repository  Size
================================================================================
Upgrading:
 avahi       x86_64        0.8-14.fc34  updates     289 k
 avahi-libs  x86_64        0.8-14.fc34  updates      68 k

Transaction Summary
================================================================================
Upgrade  2 Packages

Total download size: 356 k
Is this ok [y/N]:

Or even multiple updates, with --advisories:

# dnf update --advisories=FEDORA-2021-74ebf2f06f,FEDORA-2021-83fdddca0f
================================================================================
 Package     Architecture  Version        Repository  Size
================================================================================
Upgrading:
 avahi       x86_64        0.8-14.fc34    updates     289 k
 avahi-libs  x86_64        0.8-14.fc34    updates      68 k
 curl        x86_64        7.76.1-7.fc34  updates     297 k
 libcurl     x86_64        7.76.1-7.fc34  updates     284 k

Transaction Summary
================================================================================
Upgrade  4 Packages

Total download size: 937 k
Is this ok [y/N]:

Conclusion

In the end it all comes down to how you personally prefer to manage your updates. But if you need, for whichever reason, to only install security updates, then these filters will surely come in handy!

Automatically Light Up a Sign When Your Webcam is in Use

Friday 27th of August 2021 08:00:00 AM

At the beginning of the COVID lockdown, with multiple people working from home, it was obvious there was a need to let others know when I’m in a meeting or on a live webcam. So naturally it took me one year to finally do something about it. Now I’m here to share what I learned along the way. You too can have your very own “do not disturb” sign automatically light up outside your door, to tell people not to walk in half-dressed on laundry day.

At first I was surprised Zoom doesn’t have this kind of feature built in. But then again I might use Teams, Meet, Hangouts, WebEx, Bluejeans, or any number of future video collaboration apps. Wouldn’t it make sense to just use a system-wide watch for active webcams or microphones? Like most problems in life, this one can be helped with the Linux kernel. A simple check of the uvcvideo module will show if a video device is in use. Without using events all that is left is to poll it for changes. I chose to build a taskbar icon for this. I would normally do this with my trusty C++. But I decided to step out of my usual comfort zone and use Python in case someone wanted to port it to other platforms. I also wanted to renew my lesser Python-fu and face my inner white space demons. I came up with the following ~90 lines of practical and simple but insecure Python:

https://github.com/jboero/livewebcam/blob/main/livewebcam

Aside from the icon bits, a daemon thread performs the following basic check every second, calling scripts as the state changes:

def run(self):
    while True:
        val = subprocess.check_output(['lsmod | grep \'^uvcvideo\' | awk \'{print $3}\''],
                                      shell=True, text=True).strip()
        if val != self.status:
            self.status = val
            if val == '0':
                val = subprocess.check_output(['~/bin/webcam_deactivated.sh'])
            else:
                val = subprocess.check_output(['~/bin/webcam_activated.sh'])
        time.sleep(1)

Rather than implementing module parsing myself, a hard-coded shell command got the job done. Now whatever scripts you choose to put in ~/bin/ will be used when at least one webcam activates or deactivates. I recently had a futile go at the kernel maintainers regarding a bug in usb_core triggered by uvcvideo. I would just as soon not go a step further and attempt an events patch to uvcvideo. Also, this leaves room for Mac or Windows users to port their own simple checks.

Now that I had a happy icon that sits in my KDE system tray I could implement scripts for on and off. This is where things got complicated. At first I was going to stick a magnetic bluetooth LED badge on my door to flash “LIVE” whenvever I was in a call. These things are ubiquitous on the internet and cost about $10 for basically an embedded ARM Cortex-M0 with an LED screen, bluetooth, and battery. They are basically a full Raspberry Pi Pico kit but soldered onto the board.

These Bluetooth LED badges with 48MHz ARM Cortex-M0 chips have a lot of potential, but they need custom firmware to be of any use.

Unfortunately these badges use a fixed firmware that is either listening for Bluetooth transmissions or showing your message – it doesn’t do both, which is silly. Many people have posted feedback that they should be so much more. Sure enough, someone has already tinkered with custom firmware. Unfortunately the firmware was for older USB variants and I’m not about to de-solder or buy an ISP programmer to flash EEPROM just for this. That would be a super interesting project for later and would be a great Rpi alternative, but all I want right now is a remote controlled light outside my door. I looked at everything from WiFi smart bulbs to replace my recessed lighting bulbs, to BTLE candles, which are an interesting option. Along the way I learned a lot about Bluetooth Low Energy, including how a kernel update can waste 4 hours of a weekend with Bluetooth stack crashes. BTLE is really interesting and makes a lot more sense after reading up on it. Sure enough, there is Python that can set the display message on your LED badge across the room, but once it is set, Bluetooth will stop listening for you to change it or shut it off. Darn. I guess I should just make do with USB, which actually has a standard command to control power to ports. Let’s see if something exists for this already.

A programmable Bluetooth LED sign costs £10 or for £30 you can have a single LED up to 59 inches away.

It looked like there were options out there, even if they weren’t ideal. Then suddenly I found it: a neon “ON AIR” sign for £15, and it’s as dumb as they come – it just takes 5V from USB power. Perfect.

Bingo – now all I needed to do was control the power to it.

The tool to control USB power is uhubctl, which is in the Fedora repos. Unfortunately most USB hubs don’t support per-port power control. In fact, very few have supported it over the last 20 years, which seems silly. Hubs will happily report that power has been disconnected even though no such disconnection has been made. I assume it’s just a few cents extra to build in this feature, but I’m not a USB hub manufacturer. Therefore I needed to source a pre-owned one. In the end I found a BYTECC BT-UH340 from the US. This was all I needed to finalize it. After adding udev rules to allow the wheel group to control USB power, I can now run a simple uhubctl -a off -l 1-1 -p 1 to turn anything off.
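For reference, such a rule might look like the sketch below. The vendor ID and file name are assumptions – look up your hub’s actual vendor ID with lsusb and adjust accordingly:

```
# /etc/udev/rules.d/52-usb-hub.rules  (the file name is just a convention)
# Let the wheel group switch port power on the hub.
# Replace 05e3 with your hub's vendor ID as shown by lsusb.
SUBSYSTEM=="usb", ATTR{idVendor}=="05e3", MODE="0664", GROUP="wheel"
```

Reload the rules with sudo udevadm control --reload-rules and re-plug the hub for the rule to take effect.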

The BYTECC BT-UH340 is one of the few hubs I could actually find that supports uhubctl power control.

Now, with a spare USB extension lead run to my door, I finally have a complete solution. There is an “ON AIR” sign on the outside of my door that lights up automatically whenever any of my webcams are in use. I would love to see a Mac port or improvements in pull requests. I’m sure it can all be better. Further, I would love to hone my IoT skills and sort out flashing those Bluetooth badges. If anybody wants to replicate this, please be my guest, and suggestions are always welcome.

Auto-updating podman containers with systemd

Wednesday 25th of August 2021 08:00:00 AM

Auto-updating containers can be very useful in some cases. Podman provides mechanisms to take care of container updates automatically. This article demonstrates how to use Podman auto-updates in your setups.

Podman

Podman is a daemonless Docker replacement that can handle rootful and rootless containers. It is fully aware of SELinux and firewalld. Furthermore, it comes pre-installed on Fedora Linux, so you can start using it right away.

If Podman is not installed on your machine, use one of the following commands to install it. Select the appropriate command for your environment.

# Fedora Workstation / Server / Spins
$ sudo dnf install -y podman

# Fedora Silverblue, IoT, CoreOS
$ rpm-ostree install podman

Podman is also available for many other Linux distributions like CentOS, Debian or Ubuntu. Please have a look at the Podman Install Instructions.

Auto-Updating Containers

Updating the operating system regularly is essential to get the newest features, bug fixes, and security updates. But what about containers? They are not part of the operating system.

Why Auto-Updating?

If you want to update your Operating System, it can be as easy as:

$ sudo dnf update

This will not take care of deployed containers. But why should you care about them? If you check the contents of a container, you will find the application (for example, MariaDB in the docker.io/library/mariadb container) and some dependencies, including basic utilities.

Running updates for containers can be tedious and time-consuming, since you have to:

  1. pull the new image
  2. stop and remove the running container
  3. start the container with the new image

This procedure must be done for every container. Updating 10 containers can easily end up taking 30-40 commands that must be run.

Automating these steps will save time and ensure that everything is up to date.
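As an illustration, here is a minimal shell sketch of those three steps. The container name and image are placeholders borrowed from the httpd example used later in this article, and the run wrapper only echoes each command so the sketch can be previewed safely – redefine it as run() { "$@"; } to execute for real:

```shell
#!/bin/sh
# Dry-run sketch of the manual container update steps.
# NAME and IMAGE are placeholders -- adjust them for your setup.
# run() only prints each command; redefine it as run() { "$@"; } to execute.
run() { echo "would run: $*"; }

NAME="web"
IMAGE="docker.io/library/httpd:2.4"

run podman image pull "$IMAGE"                                # 1. pull the new image
run podman container stop "$NAME"                             # 2. stop and remove the running container
run podman container rm "$NAME"
run podman container run -d --name "$NAME" -p 80:80 "$IMAGE"  # 3. start the container with the new image
```

Multiply this by ten containers and the appeal of automation is obvious.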

Podman and systemd

Podman has built-in support for systemd. This means you can start/stop/restart containers via systemd without the need for a separate daemon. The Podman auto-update feature requires you to run containers via systemd. This is the only way to automatically ensure that all desired containers are running properly. Earlier articles, like those for Bitwarden and Matrix Server, already took a look at this feature. For this article, I will use an even simpler Apache httpd container.

First, start the container with the desired settings.

# Run httpd container with some custom settings
$ sudo podman container run -d -t -p 80:80 --name web -v web-volume:/usr/local/apache2/htdocs/:Z docker.io/library/httpd:2.4

# Just a quick check of the container
$ sudo podman container ls
CONTAINER ID  IMAGE                        COMMAND           CREATED        STATUS            PORTS               NAMES
58e5b07febdf  docker.io/library/httpd:2.4  httpd-foreground  4 seconds ago  Up 5 seconds ago  0.0.0.0:80->80/tcp  web

# Also check the named volume
$ sudo podman volume ls
DRIVER  VOLUME NAME
local   web-volume

Now, set up systemd to handle the deployment. Podman will generate the necessary file.

# Generate systemd service file
$ sudo podman generate systemd --new --name --files web
/home/USER/container-web.service

This will generate the file container-web.service in your current directory. Review and edit the file to your liking. Here are the file contents, with newlines and formatting added to improve readability.

# container-web.service
[Unit]
Description=Podman container-web.service
Documentation=man:podman-generate-systemd(1)
Wants=network.target
After=network-online.target
RequiresMountsFor=%t/containers

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/bin/rm -f %t/container-web.pid %t/container-web.ctr-id
ExecStart=/usr/bin/podman container run \
  --conmon-pidfile %t/container-web.pid \
  --cidfile %t/container-web.ctr-id \
  --cgroups=no-conmon \
  --replace \
  -d \
  -t \
  -p 80:80 \
  --name web \
  -v web-volume:/usr/local/apache2/htdocs/ \
  docker.io/library/httpd:2.4
ExecStop=/usr/bin/podman container stop \
  --ignore \
  --cidfile %t/container-web.ctr-id \
  -t 10
ExecStopPost=/usr/bin/podman container rm \
  --ignore \
  -f \
  --cidfile %t/container-web.ctr-id
PIDFile=%t/container-web.pid
Type=forking

[Install]
WantedBy=multi-user.target default.target

Now, remove the current container, copy the file to the proper systemd directory, and start/enable the service.

# Remove the temporary container
$ sudo podman container rm -f web

# Copy the service file
$ sudo cp container-web.service /etc/systemd/system/container-web.service

# Reload systemd
$ sudo systemctl daemon-reload

# Enable and start the service
$ sudo systemctl enable --now container-web

# Another quick check
$ sudo podman container ls
$ sudo systemctl status container-web

Please be aware that the container can now only be managed via systemd. Starting and stopping the container with the podman command directly may interfere with systemd.

Now that the general setup is out of the way, have a look at auto-updating this container.

Manual Auto-Updates

The first thing to look at is manual auto-updates. Sounds weird? This feature lets you skip the three steps per container while keeping full control over the update time and date. This is very useful if you only want to update containers in a maintenance window or on the weekend.

Edit the /etc/systemd/system/container-web.service file and add the label shown below to it.

--label "io.containers.autoupdate=registry"

The changed file will have a section appearing like this:

...snip...
ExecStart=/usr/bin/podman container run \
  --conmon-pidfile %t/container-web.pid \
  --cidfile %t/container-web.ctr-id \
  --cgroups=no-conmon \
  --replace \
  -d \
  -t \
  -p 80:80 \
  --name web \
  -v web-volume:/usr/local/apache2/htdocs/ \
  --label "io.containers.autoupdate=registry" \
  docker.io/library/httpd:2.4
...snip...

Now reload systemd and restart the container service to apply the changes.

# Reload systemd
$ sudo systemctl daemon-reload

# Restart container-web service
$ sudo systemctl restart container-web

After this setup, you can run a single command to update a running instance to the latest available image for the tag in use. In this example, if a new 2.4 image is available in the registry, Podman will download the image and restart the container automatically.

# Update containers
$ sudo podman auto-update

Scheduled Auto-Updates

Podman also provides a systemd timer unit that enables container updates on a schedule. This can be very useful if you don’t want to handle the updates on your own. If you are running a small home server, this might be the right thing for you, so you are getting the latest updates every week or so.

Enable the systemd timer for podman as follows:

# Enable podman auto update timer unit
$ sudo systemctl enable --now podman-auto-update.timer
Created symlink /etc/systemd/system/timers.target.wants/podman-auto-update.timer → /usr/lib/systemd/system/podman-auto-update.timer.

Optionally, you can edit the schedule of the timer. By default, the update will run every Monday morning, which is ok for me. Edit the timer unit using this command:

$ sudo systemctl edit podman-auto-update.timer

This will bring up your default editor. Changing the schedule is beyond the scope of this article, but the link to systemd.timer below will help. The Demo section of Systemd Timers for Scheduling Tasks contains details as well.
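As a small sketch of what such an override can look like (the Saturday 03:00 schedule below is just an illustration – pick whatever OnCalendar value suits you):

```
# Drop-in created via: sudo systemctl edit podman-auto-update.timer
# The empty OnCalendar= line clears the default schedule first.
[Timer]
OnCalendar=
OnCalendar=Sat *-*-* 03:00:00
```

You can check the next scheduled run with systemctl list-timers podman-auto-update.timer.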

That’s it. Nothing more to do. Podman will now take care of image updates and also prune old images on a schedule.

Hints & Tips

Auto-updating seems like the perfect solution for container updates, but you should consider some things before doing so.

  • avoid using the “latest” tag, since it can include major updates
  • consider using tags like “2” or “2.4”, if the image provider has them
  • test auto-updates beforehand (does the container support updates without additional steps?)
  • consider having backups of your Podman volumes, in case something goes sideways
  • auto-updates might not be very useful for production setups, where you need full control over the image version in use
  • updating a container also restarts the container and prunes the old image
  • occasionally check if the updates are being applied

If you take care of the above hints, you should be good to go.

Docs & Links

If you want to learn more about this topic, please check out the links below. There is a lot of useful information in the official documentation and some blogs.

Conclusion

As you can see, without the use of additional tools, you can easily run auto-updates on Podman containers manually or on a schedule. Scheduling allows unattended updates overnight, and you will get all the latest security updates, features, and bug fixes. Some setups I have tested successfully are: MariaDB, Ghost Blog, WordPress, Gitea, Redis, and PostgreSQL.

Apps for daily needs part 4: audio editors

Monday 23rd of August 2021 08:00:00 AM

Audio editor applications or digital audio workstations (DAWs) were used in the past only by professionals, such as record producers, sound engineers, and musicians. But nowadays many people who are not professionals also need them. These tools are used for narration on presentations, video blogs, and even just as a hobby. This is especially true now that there are so many online platforms that make it easy for everyone to share audio works, such as music, songs, podcasts, and so on. This article will introduce some of the open source audio editors or DAWs that you can use on Fedora Linux. You may need to install the software mentioned. If you are unfamiliar with how to add software packages in Fedora Linux, see my earlier article Things to do after installing Fedora 34 Workstation. Here is a list of a few apps for daily needs in the audio editor or DAW category.

Audacity

I’m sure many already know Audacity. It is a popular multi-track audio editor and recorder that can be used for post-processing all types of audio. Most people use Audacity to record their voice, then edit the recording to make the results better. The results can be used as a podcast or a narration for a video blog. In addition, people also use Audacity to create music and songs. You can record live audio through a microphone or mixer. It also supports 32-bit sound quality.

Audacity has a lot of features that can support your audio work. It has support for plugins, and you can even write your own plugin. Audacity provides many built-in effects, such as noise reduction, amplification, compression, reverb, echo, limiter, and many more. You can try these effects while listening to the audio directly with the real-time preview feature. The built-in plugin manager lets you manage frequently used plugins and effects.

More information is available at this link: https://www.audacityteam.org/

LMMS

LMMS, or Linux MultiMedia Studio, is a comprehensive music creation application. You can use LMMS to produce music from scratch with your computer. You can create melodies and beats according to your creativity, and improve them with a selection of sound instruments and various effects. There are several built-in features related to musical instruments and effects, such as 16 built-in synthesizers, an embedded ZynAddSubFX, drop-in VST effect plug-in support, a bundled graphic and parametric equalizer, a built-in analyzer, and many more. LMMS also supports MIDI keyboards and other audio peripherals.

More information is available at this link: https://lmms.io/

Ardour

Ardour has capabilities similar to LMMS as a comprehensive music creation application. It says on its website that Ardour is a DAW application that is the result of collaboration between musicians, programmers, and professional recording engineers from around the world. Ardour has various functions that are needed by audio engineers, musicians, soundtrack editors, and composers.

Ardour provides complete features for recording, editing, mixing, and exporting. It has unlimited multichannel tracks, non-linear editor with unlimited undo/redo, a full featured mixer, built-in plugins, and much more. Ardour also comes with video playback tools, so it is also very helpful in the process of creating and editing soundtracks for video projects.

More information is available at this link: https://ardour.org/

TuxGuitar

TuxGuitar is a tablature and score editor. It comes with a tablature editor, score viewer, multitrack display, time signature management, and tempo management. It includes various effects, such as bend, slide, vibrato, etc. While TuxGuitar focuses on the guitar, it allows you to write scores for other instruments. It can also serve as a basic MIDI editor. You need to have an understanding of tablature and music scoring to be able to use it.

More information is available at this link: http://www.tuxguitar.com.ar/

Conclusion

This article presented four audio editors as apps for your daily needs and use on Fedora Linux. There are actually many other audio editors, or DAWs, that you can use on Fedora Linux. You can also use Mixxx, Rosegarden, Kwave, Qtractor, MuseScore, MusE, and many more. Hopefully this article can help you investigate and choose the right audio editor or DAW. If you have experience using these applications, please share your experiences in the comments.

MAKE MORE with Inkscape – G-Code Tools

Friday 20th of August 2021 08:00:00 AM

Inkscape, the most used and loved tool of Fedora’s Design Team, is not just a program for doing nice vector graphics. With vector graphics (in our case SVG) a lot more can be done. Many programs can import this format. Inkscape can also do a lot more than just graphics. This series will show you some things you can do besides graphics with Inkscape. This first article of the series will show how Inkscape’s G-Code Tools extension can be used to produce G-Code. G-Code, in turn, is useful for programming machines such as plotters and laser engravers.

What is G-Code and what is it used for

The construction of machines for the hobby sector is booming. The publication of the source code for RepRap 3D printers for self-construction and the availability of electronic components, such as the Arduino or Raspberry Pi, are probably some of the causes of this boom. Mechanical engineering as a hobby is finding more and more adopters. This trend hasn’t stopped with 3D printers. There are also CNC milling machines, plotters, laser engravers, cutters, and even more machines that you can build yourself.

You don’t have to design or build these machines yourself. You can purchase such machines relatively cheaply as a kit or already assembled. All these machines have one thing in common – they are computer-controlled. Computer Aided Manufacturing (CAM), which has been widespread in the manufacturing industry, is now also taking place at home.

G-Code or G programming language

The most widespread language for programming CAM machines is G-Code, also known as the G programming language. This language was developed at MIT in the 1950s. Since then, various organizations have developed versions of this programming language. Keep this in mind when you work with it: different countries have different standards for this language. The name comes from the fact that many instructions in this code begin with the letter G. This letter is used to transmit travel or path commands to the machine.

The commands go, in the truest sense of the word, from A (absolute or incremental rotation around the X-axis) to Z (absolute or incremental movement along the Z-axis). Commands prefixed with M (miscellaneous) transmit other instructions to the machine. Switching coolant on or off is an example of an M command. If you want a more complete list of G-Code commands, there is a table on Wikipedia.

%
G00 X0 Y0 F70      (rapid move to the origin)
G01 Z-1 F50        (plunge the tool 1 unit into the material)
G01 X0 Y20 F50     (straight cut to X0 Y20)
G02 X20 Y0 J-20    (clockwise arc to X20 Y0, arc center offset J-20)
G01 X0 Y0          (straight cut back to the origin)
G00 Z0 F70         (retract the tool)
M30                (end of program)
%

This small example would mill a quarter-circle wedge: two straight edges and an arc. You could write this G-Code in any editor of your choice. But when it comes to more complex things, you typically won’t do this sort of low-level coding by hand. For 3D printing, the slicer writes the G-Code for you. But what about when you want to use a plotter or a laser engraver?

Other Software for writing G-Code

So you will need a program to do this job for you. Sure, some CAD programs can write G-Code, but not all open source CAD programs can do this. There are a number of other open source solutions for this as well.

As you can see, there is no problem finding a tool for this job. What I dislike is the use of raster graphics. I use a CNC machine because it works more precisely than I could by hand. Tracing a raster graphic to make a path for G-Code is no longer precise. I find the use of vector graphics, which consist of paths anyway, much more precise.

Inkscape and G-Code Tools installation

When it comes to vector graphics, there is no way around Inkscape – at least not on Linux. There are a few other programs, but they do not have anywhere near the capability that Inkscape has, or they are designed for other purposes. So the question is, “Can Inkscape be used for creating G-Code?” And the answer is, “Yes!” Since version 0.91, Inkscape has been packaged with an extension called GCode Tools. This extension does exactly what we want – it converts paths to G-Code.

So all you have to do, if you have not already done it, is install Inkscape:

$ sudo dnf install inkscape

One thing to note from the start (where there is light, there is also shadow): the GCode Tools extension has a lot of functionality that is not well documented. The developer thinks it’s a good idea to use a forum for documentation. Also, basic knowledge of G-Code and CAM is necessary to understand the functions.

Another point to be aware of is that the development isn’t as vibrant as it was at the time the GCode Tools were packaged with Inkscape.

Getting started with Inkscape’s G-Code Tools extension

The first step is the same as for any other work in Inkscape – adjust your document properties. Open the document settings with Shift + Ctrl + D or by clicking the icon on the command bar, and set the document properties to the size of your work piece.

Next, set the orientation points by going to Extensions > Gcodetools > Orientation points. You can use the default settings. The default settings will probably give you something similar to what is shown below.

Inkscape with document setup and the orientation points
The Tool library

The next step is to edit the tool library (Extensions > Gcodetools > Tools library). This will open the dialog window for the tool settings. There you choose the tool you will use. The default tool is fine. After you have chosen the tool and hit Apply, a rectangle will appear on the canvas with the settings for the tool. These settings can be edited with the text tool (T). But this is a bit tricky.

Inkscape with the default tool library settings added into the document

The G-Code Tools extension will use these settings later. These tool settings are grouped together with an identifiable name. If you de-group these settings, this name will be lost.

There are two ways to avoid losing the identifier if you ungroup the tool settings. You can ungroup with four clicks using the selection tool. Or you can ungroup with Shift + Ctrl + G and then give the group a name again later using the XML editor.

In the first case, make sure the group is restored before you draw anything new. Otherwise the newly drawn object will be added to this group.

Now you can draw the paths you will later convert to G-Code. Objects like rectangles, circles, stars, and polygons, as well as text, must be converted to paths (Path > Object to Path or Shift + Ctrl + C).

Keep in mind that this function often does not produce clean paths. You will have to check and clean them afterwards. You can find an older article here that describes the process.

Hershey Fonts or Stroke Fonts

Regarding fonts, keep in mind that TTF and OTF are so-called outline fonts. This means the contour of each character is defined, and it will be engraved or cut as such. If you do not want this and want to use, for example, a script font, then you have to use stroke fonts instead. Inkscape itself ships with a small collection of them by default (see Extensions > Text > Hershey text).

The stroke fonts of the Hershey Text extension

Another article about how to make your own stroke fonts will follow. They are not only useful for engraving, but also for embroidery.

The Area Fill Functions

In some cases it might be necessary to fill paths with a pattern. The G-Code Tools extension has a function offering two ways to fill objects with patterns – zig zag and spiral. There is another function which currently is not working (Inkscape changed some parts of the extension system with the release of version 1.0). The latter function would fill the object using Inkscape’s offset functions. These functions are under Extensions > Gcodetools > Area.

The Fill Area function of the G-Code Tools extension: on the left the pattern fill, on the right the (currently not working) offset filling. The extension will execute the active tab!

The area fillings of the G-Code Tools extension: Zig zag on top and Spiral on the bottom. Note that the results will look different if you apply this function letter-by-letter instead of to the whole path.


For other and more varied area fillings you will often have to draw the paths by hand (about 90% of the time). The EggBot extension has a function for filling regions with hatches. You can also use the classic hatch patterns. But you will have to convert the fill pattern back to an object, otherwise the G-Code Tools extension cannot convert it. Besides these, Evilmadscientist has a good wiki page describing fill methods.

Converting paths to G-Code

To convert drawn paths to G-Code, use the function Extensions > Gcodetools > Paths to G-Code. This function will be run on the selected objects. If no object is selected, then all paths in the document will be converted.

There is currently no functionality to save G-Code using the file menu. This must be done from within the G-Code Tools extension dialog box when you convert the paths to G-Code. On the Preferences tab, you have to specify the path and the name for the output file.

On the canvas, differently colored lines and arrows will be rendered. Blue and green lines show curves (G02 and G03). Red lines show straight lines (G01). When you see this styling, you know that you are working with G-Code.

Fedora’s logo converted to G-Code with the Inkscape G-Code Tools

Conclusion

Opinions differ as to whether Inkscape is the right tool for creating G-Code. If you keep in mind that Inkscape works only in two dimensions, and don’t expect too much, you can create G-Code with it. For simple jobs, like plotting some lettering or logos, it is definitely enough. The main disadvantage of the G-Code Tools extension is that its documentation is lacking, which makes it difficult to get started. Another disadvantage is that there is currently not much active development of G-Code Tools. There are other extensions for Inkscape that also target G-Code, but they are either defunct or no longer actively developed. The Makerbot Unicorn GCode Output extension and the GCode Plot extension are examples of the latter. The need for an easy way to export G-Code directly definitely exists.

below: a time traveling resource monitor

Wednesday 18th of August 2021 08:00:00 AM

In this article, we introduce below: an Apache 2.0 licensed resource monitor for modern Linux systems. below allows you to replay previously recorded data.

Background

One of the kernel’s primary responsibilities is mediating access to resources. Sometimes this might mean parceling out physical memory such that multiple processes can share the same host. Other times it might mean ensuring equitable distribution of CPU time. In all these contexts, the kernel provides the mechanism and leaves the policy to “someone else”. In more recent times, this “someone else” is usually a runtime like systemd or dockerd. The runtime takes input from a scheduler or end user — something along the lines of what to run and how to run it — and turns the right knobs and pulls the right levers on the kernel such that the workload can — well — get to work.

In a perfect world this would be the end of the story. However, the reality is that resource management is a complex and rather opaque amalgam of technologies that has evolved over decades of computing. Despite some of this technology having various warts and dead ends, the end result — a container — works relatively well. While the user does not usually need to concern themselves with the details, it is crucial for infrastructure operators to have visibility into their stack. Visibility and debuggability are essential for detecting and investigating misconfigurations, bugs, and systemic issues.

To make matters more complicated, resource outages are often difficult to reproduce. It is not unusual to spend weeks waiting for an issue to reoccur so that the root cause can be investigated. Scale further compounds this issue: one cannot run a custom script on every host in the hopes of logging bits of crucial state if the bug happens again. Therefore, more sophisticated tooling is required. Enter below.

Motivation

Historically Facebook has been a heavy user of atop [0]. atop is a performance monitor for Linux that is capable of reporting the activity of all processes as well as various pieces of system level activity. One of the most compelling features atop has over tools like htop is the ability to record historical data as a daemon. This sounds like a simple feature, but in practice this has enabled debugging countless production issues. With long enough data retention, it is possible to go backwards in time and look at the host state before, during, and after the issue or outage.

Unfortunately, it became clear over the years that atop had certain deficiencies. First, cgroups [1] have emerged as the de facto way to control and monitor resources on a Linux machine. atop still lacks support for this fundamental building block. Second, atop stores data on disk with custom delta compression. This works fine under normal circumstances, but under heavy resource pressure the host is likely to lose data points. Since delta compression is in use, huge swaths of data can be lost for the periods of time when the data is most important. Third, the user experience has a steep learning curve. We frequently heard from atop power users that they love the dense layout and numerous keybindings. However, this is a double-edged sword. When someone new to the space wants to debug a production issue, they are solving two problems at once: the issue at hand and how to use atop.

below was designed and developed by and for the resource control team at Facebook with input from production atop users. The resource control team is responsible for, as the name suggests, resource management at scale. The team is comprised of kernel developers, container runtime developers, and hardware folks. Recognizing the opportunity for a next-generation system monitor, we designed below with the following in mind:

  • Ease of use: below must be both intuitive for new users as well as powerful for daily users
  • Opinionated statistics: below displays accurate and useful statistics. We try to avoid collecting and dumping stats just because we can.
  • Flexibility: when the default settings are not enough, we allow the user to customize their experience. Examples include configurable keybindings, configurable default view, and a scripting interface (the default being a terminal user interface).
Install

To install the package:

# dnf install -y below

To turn on the recording daemon:

# systemctl enable --now below

Quick tour

below’s most commonly used mode is replay mode. As the name implies, replay mode replays previously recorded data. Assuming you’ve already started the recording daemon, start a session by running:

$ below replay --time "5 minutes ago"

You will then see the cgroup view:

If you get stuck or forget a keybinding, press ? to access the help menu.

The very top of the screen is the status bar. The status bar displays information about the current sample. You can move forwards and backwards through samples by pressing t and T, respectively. The middle section is the system overview. The system overview contains statistics about the system as a whole that are generally always useful to see. The third and lowest section is the multipurpose view. The image above shows the cgroup view. Additionally, there are process and system views, accessible by pressing p and s, respectively.

Press <Up> and <Down> to move the list selection. Press <Enter> to collapse and expand cgroups. Suppose you’ve found an interesting cgroup and you want to see what processes are running inside it. To zoom into the process view, select the cgroup and press z:

Press z again to return to the cgroup view. The cgroup view can be somewhat long at times. If you have a vague idea of what you’re looking for, you can filter by cgroup name by pressing / and entering a filter:

At this point, you may have noticed a tab system we haven’t explored yet. To cycle forwards and backwards through tabs, press <Tab> and <Shift> + <Tab> respectively. We’ll leave this as an exercise to the reader.

Other features

Under the hood, below has a powerful design and architecture. Facebook is constantly upgrading to newer kernels, so we never assume a data source is available. This defensive design enables total backwards and forwards compatibility between kernels and below versions. Furthermore, each data point is zstd-compressed and stored in full. This avoids the issues with delta compression we’ve seen atop have at scale. Based on our tests, our per-sample compression can achieve a 5x compression ratio on average.

below also uses eBPF [2] to collect information about short-lived processes (processes that live for shorter than the data collection interval). In contrast, atop implements this feature with BSD process accounting, a known slow and priority-inversion-prone kernel interface.

For the user, below also supports live-mode and a dump interface. Live mode combines the recording daemon and the TUI session into one process. This is convenient for browsing system state without committing to a long running daemon or disk space for data storage. The dump interface is a scriptable interface to all the data below stores. Dump is both powerful and flexible — detailed data is available in CSV, JSON, and human readable format.

Conclusion

below is an Apache 2.0 licensed open source project that we (the below developers) think offers compelling advantages over existing tools in the resource monitoring space. We’ve spent a great deal of effort preparing below for open source use so we hope that readers and the community get a chance to try below out and report back with bugs and feature requests.

[0]: https://www.atoptool.nl/
[1]: https://en.wikipedia.org/wiki/Cgroups
[2]: https://ebpf.io/

Barrier: an introduction

Monday 16th of August 2021 08:00:00 AM
What is barrier?

To reduce the number of keyboards and mice, you can get a physical KVM switch, but the downside is that it requires you to select a device each time you want to swap. barrier is a virtual KVM switch that allows one keyboard and mouse to control anywhere from 2 to 15 computers, and you can even mix Linux, Windows, and Mac machines.

Don’t confuse Keyboard, Video and Mouse (KVM) with Kernel Virtual Machine (KVM); they are very different, and this article covers the former. If the Kernel Virtual Machine is of interest to you, this Red Hat article provides an overview of the latter type of KVM: https://www.redhat.com/en/topics/virtualization/what-is-KVM.

Installing barrier on Fedora Linux (KDE Plasma)

Press Alt+Ctrl+T to open a terminal, then enter the following to install barrier from the Fedora Linux repositories.

$ sudo dnf install barrier

Installing barrier on Windows and Mac

If you are looking to install on alternate operating systems you can find the Windows and Mac downloads here: https://github.com/debauchee/barrier/releases.

Nuances of version 2.3.3
  • barrier does not support Wayland, the default display protocol used in both Gnome and KDE, so you will need to switch your desktop to use the X11 protocol to use barrier.
  • If you are unable to move your mouse from the host to a client computer, make sure you do not have scroll lock enabled. If scroll lock is enabled it will prevent the pointer from moving to a client.
  • When using more than one Linux machine, verify you are using the same version of barrier on each one (thanks to @ilikelinux for pointing this requirement out). If you need to check your version, enter the following at the terminal.

$ dnf list barrier

To use X11 in KDE:
  1. Select the Fedora icon in the bottom left
  2. Select Leave on the bottom right of the menu
  3. Select Log Out
  4. Select OK
  5. Select Desktop Session on the bottom left side of the screen and select X11
  6. Log back in
Set up your barrier host

At the command line type barrier and the main screen will display.

$ barrier
  1. Select the check box next to Server (share this computer’s mouse and keyboard)
  2. Click the Configure Server… button
Barrier

You should now be on the Screens and links tab. Here you will see a recycle icon on the top left and a blue monitor icon on the top right.

To add a client, drag the blue monitor icon to the location you want your monitor to be when you move the mouse from your host to client device. Think of this as how you would want a multi-monitor setup to appear.

If you want to remove one drag the blue monitor to the recycle bin.

Barrier Server Configuration

After you have set up a client in the location grid, double-click its icon to open the Screen Settings dialog box.

Barrier Screen Settings
  1. Fill in the Screen name field with whatever name you would like.
  2. Under Aliases type a different name and select Add.

At this point, your host is ready to go and you can click the Start button on the bottom right of the screen.

Barrier

Note: Depending on your current firewall configuration, you might need to add an exception for the synergy service so that network connections to that port (24800/tcp) can get through to your barrier server. You probably want to restrict this access to only a select few source IP addresses (barrier clients).

Set up your barrier client

The barrier client side setup is very simple.

  1. Start barrier on the client machine.
  2. Select the check box next to Client (use another computer’s mouse and keyboard)
  3. From the host computer, copy the value of the IP addresses field into the Server IP field on the client.
  4. Click Start on the bottom right.
Client on Microsoft Windows 10

Conclusion

From this point on, you can move your mouse between each computer you added, and even copy and paste text back and forth just as if they were on the same computer. barrier has numerous options you can use to tweak the program under the Hotkeys and Advanced server settings tabs on the host. Now that you are up and running, go ahead and spend time messing around with different options to see what suits you best.

Note: barrier requires that each host has a physical display.

Use dnf updateinfo to read update changelogs

Friday 13th of August 2021 08:00:00 AM

This article will explore how to check the changelogs for the Fedora Linux operating system using the command line and dnf updateinfo. Instead of showing the commands running on a real Fedora Linux install, this article demonstrates running the dnf commands in toolbox.

Introduction

If you have used any type of computer recently (be it a desktop, laptop or even a smartphone), you most likely have had to deal with software updates. You might have an opinion about them. They might be a “necessary evil”, something that always breaks your setup and makes you waste hours fixing the new problems that appeared, or you might even like them.

No matter your opinion, there are reasons to update your software: mainly bug fixes, especially security-related bug fixes. After all, you most likely don’t want someone getting your private data by exploiting a bug that happens because of an interaction between the code of your web browser and the code that renders text on your screen.

If you manage your software updates in a manual or semi-manual fashion (in comparison to letting the operating system auto-update your software), one feature you should be aware of is “changelogs”.

A changelog is, as the name hints, a big list of changes between two releases of the same software. The changelog content can vary a lot. It may depend on the team, the type of software, its importance, and the number of changes. It can range from a very simple “several small bugs were fixed in this release”-type message, to a list of links to the bugs fixed on an issue tracker with a small description, to a big and detailed list of changes or elaborate blog posts.

Now, how do you check the changelogs for the updates?

If you use Fedora Workstation, the easy way to see a changelog with a GUI is with GNOME Software. Select the name of the package or the name of the software on the updates page and the changelog is displayed. You could also try your favorite GUI package manager, which will most likely show it to you as well. But how does one do the same thing via the CLI?

How to use dnf updateinfo

Start by creating a Fedora 34 toolbox called updateinfo-demo:

toolbox create --distro fedora --release f34 updateinfo-demo

Now, enter the toolbox:

toolbox enter updateinfo-demo

The commands from here on can also be used on a normal Fedora install.

First, check the updates available:

$ dnf check-update
audit-libs.x86_64 3.0.3-1.fc34 updates
ca-certificates.noarch 2021.2.50-1.0.fc34 updates
coreutils.x86_64 8.32-30.fc34 updates
coreutils-common.x86_64 8.32-30.fc34 updates
curl.x86_64 7.76.1-7.fc34 updates
dnf.noarch 4.8.0-1.fc34 updates
dnf-data.noarch 4.8.0-1.fc34 updates
expat.x86_64 2.4.1-1.fc34 updates
file-libs.x86_64 5.39-6.fc34 updates
glibc.x86_64 2.33-20.fc34 updates
glibc-common.x86_64 2.33-20.fc34 updates
glibc-minimal-langpack.x86_64 2.33-20.fc34 updates
krb5-libs.x86_64 1.19.1-14.fc34 updates
libcomps.x86_64 0.1.17-1.fc34 updates
libcurl.x86_64 7.76.1-7.fc34 updates
libdnf.x86_64 0.63.1-1.fc34 updates
libeconf.x86_64 0.4.0-1.fc34 updates
libedit.x86_64 3.1-38.20210714cvs.fc34 updates
libgcrypt.x86_64 1.9.3-3.fc34 updates
libidn2.x86_64 2.3.2-1.fc34 updates
libmodulemd.x86_64 2.13.0-1.fc34 updates
librepo.x86_64 1.14.1-1.fc34 updates
libsss_idmap.x86_64 2.5.2-1.fc34 updates
libsss_nss_idmap.x86_64 2.5.2-1.fc34 updates
libuser.x86_64 0.63-4.fc34 updates
libxcrypt.x86_64 4.4.23-1.fc34 updates
nano.x86_64 5.8-3.fc34 updates
nano-default-editor.noarch 5.8-3.fc34 updates
nettle.x86_64 3.7.3-1.fc34 updates
openldap.x86_64 2.4.57-5.fc34 updates
pam.x86_64 1.5.1-6.fc34 updates
python-setuptools-wheel.noarch 53.0.0-2.fc34 updates
python-unversioned-command.noarch 3.9.6-2.fc34 updates
python3.x86_64 3.9.6-2.fc34 updates
python3-dnf.noarch 4.8.0-1.fc34 updates
python3-hawkey.x86_64 0.63.1-1.fc34 updates
python3-libcomps.x86_64 0.1.17-1.fc34 updates
python3-libdnf.x86_64 0.63.1-1.fc34 updates
python3-libs.x86_64 3.9.6-2.fc34 updates
python3-setuptools.noarch 53.0.0-2.fc34 updates
sssd-client.x86_64 2.5.2-1.fc34 updates
systemd.x86_64 248.6-1.fc34 updates
systemd-libs.x86_64 248.6-1.fc34 updates
systemd-networkd.x86_64 248.6-1.fc34 updates
systemd-pam.x86_64 248.6-1.fc34 updates
systemd-rpm-macros.noarch 248.6-1.fc34 updates
vim-minimal.x86_64 2:8.2.3182-1.fc34 updates
xkeyboard-config.noarch 2.33-1.fc34 updates
yum.noarch 4.8.0-1.fc34 updates

OK, so run your first dnf updateinfo command:

$ dnf updateinfo
Updates Information Summary: available
    5 Security notice(s)
        4 Moderate Security notice(s)
        1 Low Security notice(s)
    11 Bugfix notice(s)
    8 Enhancement notice(s)
    3 other notice(s)

This is the summary of updates. As you can see there are security updates, bugfix updates, enhancement updates and some which are not specified.

Look at the list of updates and which types they belong to:

$ dnf updateinfo list
FEDORA-2021-e4866762d8 enhancement audit-libs-3.0.3-1.fc34.x86_64
FEDORA-2021-1f32e18471 bugfix      ca-certificates-2021.2.50-1.0.fc34.noarch
FEDORA-2021-b09e010a46 bugfix      coreutils-8.32-30.fc34.x86_64
FEDORA-2021-b09e010a46 bugfix      coreutils-common-8.32-30.fc34.x86_64
FEDORA-2021-83fdddca0f Moderate/Sec. curl-7.76.1-7.fc34.x86_64
FEDORA-2021-3b74285c43 bugfix      dnf-4.8.0-1.fc34.noarch
FEDORA-2021-3b74285c43 bugfix      dnf-data-4.8.0-1.fc34.noarch
FEDORA-2021-523ee0a81e enhancement expat-2.4.1-1.fc34.x86_64
FEDORA-2021-07625b9c81 unknown     file-libs-5.39-6.fc34.x86_64
FEDORA-2021-e14e86e40e Moderate/Sec. glibc-2.33-20.fc34.x86_64
FEDORA-2021-e14e86e40e Moderate/Sec. glibc-common-2.33-20.fc34.x86_64
FEDORA-2021-e14e86e40e Moderate/Sec. glibc-minimal-langpack-2.33-20.fc34.x86_64
FEDORA-2021-8b25e4642f Low/Sec.    krb5-libs-1.19.1-14.fc34.x86_64
FEDORA-2021-3b74285c43 bugfix      libcomps-0.1.17-1.fc34.x86_64
FEDORA-2021-83fdddca0f Moderate/Sec. libcurl-7.76.1-7.fc34.x86_64
FEDORA-2021-3b74285c43 bugfix      libdnf-0.63.1-1.fc34.x86_64
FEDORA-2021-ca22b882a5 enhancement libeconf-0.4.0-1.fc34.x86_64
FEDORA-2021-f9c139edd8 bugfix      libedit-3.1-38.20210714cvs.fc34.x86_64
FEDORA-2021-31fdc84207 Moderate/Sec. libgcrypt-1.9.3-3.fc34.x86_64
FEDORA-2021-bc56cf7c1f enhancement libidn2-2.3.2-1.fc34.x86_64
FEDORA-2021-da2ec14d7f bugfix      libmodulemd-2.13.0-1.fc34.x86_64
FEDORA-2021-3b74285c43 bugfix      librepo-1.14.1-1.fc34.x86_64
FEDORA-2021-1db6330a22 unknown     libsss_idmap-2.5.2-1.fc34.x86_64
FEDORA-2021-1db6330a22 unknown     libsss_nss_idmap-2.5.2-1.fc34.x86_64
FEDORA-2021-8226c82fe9 bugfix      libuser-0.63-4.fc34.x86_64
FEDORA-2021-e6916d6758 bugfix      libxcrypt-4.4.22-2.fc34.x86_64
FEDORA-2021-fed4036fd9 bugfix      libxcrypt-4.4.23-1.fc34.x86_64
FEDORA-2021-3122d2b8d2 unknown     nano-5.8-3.fc34.x86_64
FEDORA-2021-3122d2b8d2 unknown     nano-default-editor-5.8-3.fc34.noarch
FEDORA-2021-d1fc0b9d32 Moderate/Sec. nettle-3.7.3-1.fc34.x86_64
FEDORA-2021-97949d7a4e bugfix      openldap-2.4.57-5.fc34.x86_64
FEDORA-2021-e6916d6758 bugfix      pam-1.5.1-6.fc34.x86_64
FEDORA-2021-07931f7f08 bugfix      python-setuptools-wheel-53.0.0-2.fc34.noarch
FEDORA-2021-2056ce89d9 enhancement python-unversioned-command-3.9.6-1.fc34.noarch
FEDORA-2021-d613e00b72 enhancement python-unversioned-command-3.9.6-2.fc34.noarch
FEDORA-2021-2056ce89d9 enhancement python3-3.9.6-1.fc34.x86_64
FEDORA-2021-d613e00b72 enhancement python3-3.9.6-2.fc34.x86_64
FEDORA-2021-3b74285c43 bugfix      python3-dnf-4.8.0-1.fc34.noarch
FEDORA-2021-3b74285c43 bugfix      python3-hawkey-0.63.1-1.fc34.x86_64
FEDORA-2021-3b74285c43 bugfix      python3-libcomps-0.1.17-1.fc34.x86_64
FEDORA-2021-3b74285c43 bugfix      python3-libdnf-0.63.1-1.fc34.x86_64
FEDORA-2021-2056ce89d9 enhancement python3-libs-3.9.6-1.fc34.x86_64
FEDORA-2021-d613e00b72 enhancement python3-libs-3.9.6-2.fc34.x86_64
FEDORA-2021-07931f7f08 bugfix      python3-setuptools-53.0.0-2.fc34.noarch
FEDORA-2021-1db6330a22 unknown     sssd-client-2.5.2-1.fc34.x86_64
FEDORA-2021-3141f0eff1 bugfix      systemd-248.6-1.fc34.x86_64
FEDORA-2021-3141f0eff1 bugfix      systemd-libs-248.6-1.fc34.x86_64
FEDORA-2021-3141f0eff1 bugfix      systemd-networkd-248.6-1.fc34.x86_64
FEDORA-2021-3141f0eff1 bugfix      systemd-pam-248.6-1.fc34.x86_64
FEDORA-2021-3141f0eff1 bugfix      systemd-rpm-macros-248.6-1.fc34.noarch
FEDORA-2021-b8b1f6e54f enhancement vim-minimal-2:8.2.3182-1.fc34.x86_64
FEDORA-2021-67645ae09f enhancement xkeyboard-config-2.33-1.fc34.noarch
FEDORA-2021-3b74285c43 bugfix      yum-4.8.0-1.fc34.noarch

The output is in three columns. These show the ID for an update, the type of the update, and the package to which it refers.
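Since the list output is three plain whitespace-separated columns, it is easy to post-process with standard tools. The following sketch counts pending updates by type; the three hard-coded sample lines stand in for real dnf updateinfo list output, which you could pipe in directly instead.

```shell
# Count updates by type (the second column of `dnf updateinfo list`).
out=$(printf '%s\n' \
    "FEDORA-2021-83fdddca0f Moderate/Sec. curl-7.76.1-7.fc34.x86_64" \
    "FEDORA-2021-3b74285c43 bugfix dnf-4.8.0-1.fc34.noarch" \
    "FEDORA-2021-e14e86e40e Moderate/Sec. glibc-2.33-20.fc34.x86_64" |
    awk '{ count[$2]++ } END { for (t in count) print t, count[t] }' |
    LC_ALL=C sort)
echo "$out"
```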

If you want to see the Bodhi page for a specific update, just add the id to the end of this URL:
https://bodhi.fedoraproject.org/updates/.

For example, https://bodhi.fedoraproject.org/updates/FEDORA-2021-3141f0eff1 for systemd-248.6-1.fc34.x86_64 or https://bodhi.fedoraproject.org/updates/FEDORA-2021-b09e010a46 for coreutils-8.32-30.fc34.x86_64.
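If you look these up often, a tiny shell function saves the copy and paste. The function name here is made up for this sketch.

```shell
# Hypothetical helper: print the Bodhi page URL for a given update ID.
bodhi_url() {
    echo "https://bodhi.fedoraproject.org/updates/$1"
}

bodhi_url "FEDORA-2021-3141f0eff1"
# → https://bodhi.fedoraproject.org/updates/FEDORA-2021-3141f0eff1
```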

The next command will list the actual changelog.

dnf updateinfo info

The output from this command is quite long. So only a few interesting excerpts are provided below.

Start with a small one:

===============================================================================
  ca-certificates-2021.2.50-1.0.fc34
===============================================================================
  Update ID: FEDORA-2021-1f32e18471
       Type: bugfix
    Updated: 2021-06-18 22:08:02
Description: Update the ca-certificates list to the lastest upstream list.
   Severity: Low

Notice how this info has the update ID, type, updated time, description and severity. Very simple and easy to understand.

Now look at the systemd update which, in addition to the previous items, has some bugs associated with it in Red Hat Bugzilla, a more elaborate description, and a different severity.

===============================================================================
  systemd-248.6-1.fc34
===============================================================================
  Update ID: FEDORA-2021-3141f0eff1
       Type: bugfix
    Updated: 2021-07-24 22:00:30
       Bugs: 1963428 - if keyfile >= 1024*4096-1 service "systemd-cryptsetup@<partition name>" can't start
           : 1965815 - 50-udev-default.rules references group "sgx" which does not exist
           : 1975564 - systemd-cryptenroll SIGABRT when adding recovery key - buffer overflow
           : 1984651 - systemd[1]: Assertion 'a <= b' failed at src/libsystemd/sd-event/sd-event.c:2903, function sleep_between(). Aborting.
Description: - Create 'sgx' group (and also use soft-static uids for input and render, see https://pagure.io/setup/c/df3194a7295c2ca3cfa923981b046f4bd2754825 and https://pagure.io/packaging-committee/issue/1078 (#1965815)
           : - Various bugfixes (#1963428, #1975564)
           : - Fix for a regression introduced in the previous release with sd-event abort (#1984651)
           :
           : No need to log out or reboot.
   Severity: Moderate

Next look at a curl update. This is a security update with several CVEs associated with it. Each CVE has its respective Red Hat Bugzilla bug.

===============================================================================
  curl-7.76.1-7.fc34
===============================================================================
  Update ID: FEDORA-2021-83fdddca0f
       Type: security
    Updated: 2021-07-22 22:03:07
       Bugs: 1984325 - CVE-2021-22922 curl: wrong content via metalink is not being discarded [fedora-all]
           : 1984326 - CVE-2021-22923 curl: Metalink download sends credentials [fedora-all]
           : 1984327 - CVE-2021-22924 curl: bad connection reuse due to flawed path name checks [fedora-all]
           : 1984328 - CVE-2021-22925 curl: Incorrect fix for CVE-2021-22898 TELNET stack contents disclosure [fedora-all]
Description: - fix TELNET stack contents disclosure again (CVE-2021-22925)
           : - fix bad connection reuse due to flawed path name checks (CVE-2021-22924)
           : - disable metalink support to fix the following vulnerabilities
           :   CVE-2021-22923 - metalink download sends credentials
           :   CVE-2021-22922 - wrong content via metalink not discarded
   Severity: Moderate

This item shows a simple enhancement update.

===============================================================================
  python3-docs-3.9.6-1.fc34
  python3.9-3.9.6-1.fc34
===============================================================================
  Update ID: FEDORA-2021-2056ce89d9
       Type: enhancement
    Updated: 2021-07-08 22:00:53
Description: Update of Python 3.9 and python3-docs to latest release 3.9.6
   Severity: None

Finally an “unknown” type update.

===============================================================================
  file-5.39-6.fc34
===============================================================================
  Update ID: FEDORA-2021-07625b9c81
       Type: unknown
    Updated: 2021-06-11 22:16:57
       Bugs: 1963895 - Wrong detection of python bytecode mimetypes
Description: do not classify python bytecode files as text (#1963895)
   Severity: None

Conclusion

So, in what situation does dnf updateinfo become handy?

Well, you could use it if you prefer managing updates fully via the CLI, or if you are unable to successfully use the GUI tools at a specific moment.

In which case is checking the changelog useful?

Say you manage the updates yourself. Sometimes you might not consider it ideal to stop what you are doing to update your system. Instead of simply installing the updates, you check the changelogs first. This allows you to figure out whether you should prioritize your updates (maybe there’s an important security fix?) or whether to postpone a bit longer (no important fix, “I will do it later when I’m not doing anything important”).

Build your own Fedora IoT Remix

Wednesday 11th of August 2021 08:00:00 AM

Fedora IoT Edition is aimed at the Internet of Things. It was introduced in the article How to turn on an LED with Fedora IoT in 2018. It is based on RPM-OSTree as a core technology to gain some nifty properties and features which will be covered in a moment.

RPM-OSTree is a high-level tool built on libostree which is a set of tools establishing a “git-like” model for committing and exchanging filesystem trees, deployment of said trees, bootloader configuration and layered RPM package management. Such a system benefits from the following properties:

  • Transactional upgrade and rollback
  • Read-only filesystem areas
  • Potentially small updates through deltas
  • Branching, including rebase and multiple deployments
  • Reproducible filesystem
  • Specification of filesystem through version-controlled code

Exchange of filesystem trees and corresponding commits is done through OSTree repositories or remotes. When using one of the Fedora Editions based on RPM-OSTree there are remotes from which the system downloads commits and applies them, rather than downloading and installing separate RPMs.

A Remix in the Fedora ecosystem is an altered, opinionated version of the OS. It covers the needs of a specific niche. This article will dive into the world of building your own filesystem commits based on Fedora IoT Edition. You will become acquainted with the tools, terminology, design, and processes of such a system. If you follow the directions in this guide you will end up with your own Fedora IoT Remix.

Preparations

You will need some packages to get started. On non-ostree systems install the packages ostree and rpm-ostree. Both are available in the Fedora Linux package repositories. Additionally install git to access the Fedora IoT ostree spec sources.

sudo dnf install ostree rpm-ostree git

Assuming you have a spare, empty folder lying around to work with, start there by creating some files and folders that will be needed along the way.

mkdir .cache .build-repo .deploy-repo .tmp custom

The .cache directory is used by all build commands around rpm-ostree. The .build-repo and .deploy-repo folders store separate repositories to keep the build environment separate from the actual remix. The .tmp directory is used to combine the git-managed upstream sources (from Fedora IoT, for example) with the modifications kept in the custom directory.

As you build your own OSTree as a derivative of Fedora IoT, you will need the sources. Clone them into the folder .fedora-iot-spec. They contain several configuration files specifying how the ostree filesystem for Fedora IoT is built, what packages to include, etc.

git clone -b "f34" https://pagure.io/fedora-iot/ostree.git .fedora-iot-spec

OSTree repositories

Create repositories to build and store an OSTree filesystem and its contents: a place to store commits and manage their metadata. Wait, what? What is an OSTree commit anyway? Glad you asked! With rpm-ostree you build so-called libostree commits. The terminology is roughly based on git, and they essentially work in similar ways. Those commits store diffs from one state of the filesystem to the next. If you change a binary blob inside the tree, the commit contains this change. You can deploy this specific version of the filesystem at any time.
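To make the git analogy concrete, here is a tiny, disposable demonstration you can run in a scratch directory. It uses only the ostree CLI installed during the preparations; the branch name and file are placeholders, and the first line simply skips the demo if ostree is not installed.

```shell
# Skip gracefully on systems without the ostree CLI.
command -v ostree >/dev/null 2>&1 || exit 0

# Commit a small directory tree into a fresh repository, git-style.
cd "$(mktemp -d)"
mkdir tree
echo "hello" > tree/greeting.txt
ostree --repo=demo-repo init --mode=bare-user
ostree --repo=demo-repo commit --branch=demo/stable/x86_64 \
    --subject="first commit" tree/
ostree --repo=demo-repo log demo/stable/x86_64
```

Just like git log, ostree log shows the commit you just created, and you could check the tree back out at any recorded state.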

Use the ostree init command to create two ostree repositories.

ostree --repo=".build-repo" init --mode=bare-user
ostree --repo=".deploy-repo" init --mode=archive

The main difference between the repositories is their mode. Create the build repository in “bare-user” mode and the “production” repository in “archive” mode. The bare* mode is well suited for build environments. The “user” portion additionally allows non-root operation and storing extended attributes. Create the other repository in archive mode. It stores objects compressed, making them easy to move around. If all that doesn’t mean a thing to you, don’t worry. The specifics don’t matter for your primary goal here: to build your own Remix.

Let me share just a little anecdote on this: When I was working on building ostree-based systems on GitLab CI/CD pipelines and we had to move the repositories around different jobs, we once tried to move them uncompressed in bare-user mode via caches. We learned that, while this works with archive repos, it does not with bare* repos. Important filesystem attributes will get corrupted on the way.

Custom flavor

What’s a Remix without any customization? Not much! Create some configuration files as adjustment for your own OS. Assuming you want to deploy the Remix on a system with a hardware watchdog (a Raspberry Pi, for example) start with a watchdog configuration file:

./custom/watchdog.conf

watchdog-device = /dev/watchdog
max-load-1 = 24
max-load-15 = 9
realtime = yes
priority = 1
watchdog-timeout = 15 # Broadcom BCM2835 limitation

The postprocess-script is an arbitrary shell script executed inside the target filesystem tree as part of the build process. It allows for last-minute customization of the filesystem in a restricted and (by default) network-less environment. It’s a good place to ensure the correct file permissions are set for the custom watchdog configuration file.

./custom/treecompose-post.sh

#!/bin/sh
set -e

# Prepare watchdog
chown root:root /etc/watchdog.conf
chmod 0644 /etc/watchdog.conf

Plant a Treefile

Fedora IoT is pretty minimal and keeps its main focus on security and best practices. The rest is up to you and your use case. As a consequence, the watchdog package is not provided from the get-go. In RPM-OSTree, the spec file is called a Treefile and is encoded in JSON. In the Treefile you specify what packages to install, files and folders to exclude from packages, configuration files to add to the filesystem tree, and systemd units to enable by default.

./custom/treefile.json

{
    "ref": "OSTreeBeard/stable/x86_64",
    "ex-jigdo-spec": "fedora-iot.spec",
    "include": "fedora-iot-base.json",
    "boot-location": "modules",
    "packages": [
        "watchdog"
    ],
    "remove-files": [
        "etc/watchdog.conf"
    ],
    "add-files": [
        ["watchdog.conf", "/etc/watchdog.conf"]
    ],
    "units": [
        "watchdog.service"
    ],
    "postprocess-script": "treecompose-post.merged.sh"
}

The ref is basically the branch name within the repository. Use it to refer to this specific spec in rpm-ostree operations. With ex-jigdo-spec and include you link this Treefile to the configuration of the Fedora IoT sources. Additionally specify the Fedora Updates repo in the repos section. It is not part of the sources so you will have to add that yourself. More on that in a moment.

With packages you instruct rpm-ostree to install the watchdog package. Exclude the watchdog.conf file and replace it with the one from the custom directory by using remove-files and add-files. Now just enable the watchdog.service and you are good to go.

All available Treefile options are described in the official RPM-OSTree documentation.

Add another RPM repository

In its initial configuration, the OSTree only uses the initial Fedora 34 package repository. Add the Fedora 34 Updates repository as well. To do so, add the following file to your custom directory.

./custom/fedora-34-updates.repo

[fedora-34-updates]
name=Fedora 34 - $basearch - Updates
#baseurl=http://download.fedoraproject.org/pub/fedora/linux/updates/$releasever/Everything/$basearch/
metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-released-f34&arch=$basearch
enabled=1
repo_gpgcheck=0
type=rpm
gpgcheck=1
#metadata_expire=7d
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-34-$basearch
skip_if_unavailable=False

Now tell rpm-ostree in the spec for your Remix to include this repository. Use the treefile‘s repos section.

./custom/treefile.json

{
    ...
    "repos": [
        "fedora-34",
        "fedora-34-updates"
    ],
    ...
}

Build your own Fedora IoT Remix

You have all that you need to build your first OSTree-based filesystem. By now you have set up a project structure, downloaded the Fedora IoT upstream specs, added some customization, and initialized the ostree repositories. All you need to do now is throw everything together and mix a nicely flavored Fedora IoT Remix salsa.

cp ./.fedora-iot-spec/* .tmp/
cp ./custom/* .tmp/

Combine the postprocessing-scripts of the Fedora IoT upstream sources and your custom directory.

cat "./.fedora-iot-spec/treecompose-post.sh" "./custom/treecompose-post.sh" > ".tmp/treecompose-post.merged.sh" chmod +x ".tmp/treecompose-post.merged.sh"

Remember that you specified treecompose-post.merged.sh as your post-processing script earlier in treefile.json? That’s where this file comes from.

Note that all the files (systemd units, scripts, configurations) mentioned in the treefile are now available in .tmp. This folder is the build context that all the references are relative to.

You are only one command away from kicking off your first build of a customized Fedora IoT. Kick off the build with the rpm-ostree compose tree command, then grab a cup of coffee and wait for it to finish. That may take between 5 and 10 minutes depending on your host hardware. See you later!

sudo rpm-ostree compose tree --unified-core --cachedir=".cache" --repo=".build-repo" --write-commitid-to="$COMMIT_FILE" ".tmp/treefile.json"

Prepare for deployment

Oh, erm, you are back already? Ehem. Good! – The .build-repo now stores a complete filesystem tree of around 700 to 800 MB of compressed data. The last thing to do before you consider putting this on the network and deploying it on your device(s) (at least for now) is to add a commit with an arbitrary commit subject and metadata and to pull the result over to the deploy-repo.

sudo ostree --repo=".deploy-repo" pull-local ".build-repo" "OSTreeBeard/stable/x86_64"

The deploy-repo can now be placed on any file-serving webserver and then used as a new ostree remote … theoretically. I won’t go through the topic of security for ostree remotes just yet. As initial advice, though: always sign OSTree commits with GPG to ensure the authenticity of your updates. Apart from that, it’s only a matter of adding the remote configuration on your target and using rpm-ostree rebase to switch over to this Remix.
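To see what that remote configuration looks like in practice, the sketch below runs ostree remote add against a throwaway repository, so nothing on your system changes. The remote name ostreebeard and the URL are placeholders, the first line skips the demo if ostree is not installed, and the final rebase (shown as a comment) is what you would run on the real target.

```shell
# Skip gracefully on systems without the ostree CLI.
command -v ostree >/dev/null 2>&1 || exit 0

# Configure a remote on a scratch repository (placeholder name and URL).
cd "$(mktemp -d)"
ostree --repo=target-repo init --mode=bare-user
ostree --repo=target-repo remote add --no-gpg-verify ostreebeard \
    https://example.com/repo
ostree --repo=target-repo remote list

# On the actual Fedora IoT device, switching over would then be:
#   sudo rpm-ostree rebase ostreebeard:OSTreeBeard/stable/x86_64
```

On a real device you would add the remote to the system repository instead and drop --no-gpg-verify once your commits are signed.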

As a final thing before you leave to do outside stuff (like fresh air, sun, ice cream, or whatever), take a look around the newly built filesystem to ensure that everything is in place.

Explore the filesystem

Use ostree refs to list available refs in the repo or on your system.

$ ostree --repo=".deploy-repo" refs OSTreeBeard/stable/x86_64

Take a look at the commits of a ref with ostree log.

$ ostree --repo=".deploy-repo" log OSTreeBeard/stable/x86_64
commit 849c0648969c8c2e793e5d0a2f7393e92be69216e026975f437bdc2466c599e9
ContentChecksum: bcaa54cc9d8ffd5ddfc86ed915212784afd3c71582c892da873147333e441b26
Date: 2021-07-27 06:45:36 +0000
Version: 34
(no subject)

List the ostree filesystem contents with ostree ls.

$ ostree --repo=".build-repo" ls OSTreeBeard/stable/x86_64
d00755 0 0 0 /
l00777 0 0 0 /bin -> usr/bin
l00777 0 0 0 /home -> var/home
l00777 0 0 0 /lib -> usr/lib
l00777 0 0 0 /lib64 -> usr/lib64
l00777 0 0 0 /media -> run/media
l00777 0 0 0 /mnt -> var/mnt
l00777 0 0 0 /opt -> var/opt
l00777 0 0 0 /ostree -> sysroot/ostree
l00777 0 0 0 /root -> var/roothome
l00777 0 0 0 /sbin -> usr/sbin
l00777 0 0 0 /srv -> var/srv
l00777 0 0 0 /tmp -> sysroot/tmp
d00755 0 0 0 /boot
d00755 0 0 0 /dev
d00755 0 0 0 /proc
d00755 0 0 0 /run
d00755 0 0 0 /sys
d00755 0 0 0 /sysroot
d00755 0 0 0 /usr
d00755 0 0 0 /var

$ ostree --repo=".build-repo" ls OSTreeBeard/stable/x86_64 /usr/etc/watchdog.conf
-00644 0 0 208 /usr/etc/watchdog.conf

Take note that the watchdog.conf file is located under /usr/etc/watchdog.conf. On a booted deployment it is located at /etc/watchdog.conf as usual.

Where to go from here?

You took a brave step in building a customized Fedora IoT on your local machine. First I introduced you to the concepts and vocabulary so you could understand where you were and where you wanted to go. You then ensured all the tools were in place. You looked at the ostree repository modes and mechanics before analyzing a typical ostree configuration. To spice it up and make it a bit more interesting, you made an additional service and configuration ready to roll out on your device(s). To do that you added the Fedora Updates RPM repository and then kicked off the build process. Last but not least, you packaged the result up in a format ready to be placed somewhere on the network.

There are a lot more topics to cover. I could explain how to configure NGINX to serve ostree remotes effectively, how to ensure the security and authenticity of the filesystem and updates through GPG signatures, or how one manually alters the filesystem and what tooling is available for building it. There is also more to be explained about how to test the Remix and how to build flashable images and installation media.

Let me know in the comments what you think and what you care about. Tell me what you’d like to read next. If you already built Fedora IoT, I’m happy to read your stories too.

NMState: A declarative networking config tool

Monday 9th of August 2021 08:00:00 AM

This article describes and demonstrates NMState, a network manager that uses a declarative approach to configure hosts. This means you define the desired configuration state through an API and the tool applies the configuration through a provider.

Configuration approaches: imperative vs declarative

Networking management can be a very complex task depending on the size and diversity of the environment. In the early days of IT, networking management relied on manual procedures performed by network administrators on networking devices. Nowadays, Infrastructure as Code (IaC) allows automation of those tasks. There are, essentially, two approaches: imperative and declarative.

In an imperative approach you define “how” to arrive at a desired configuration state. The declarative paradigm defines “what” the desired configuration state is, so it does not specify which steps are required nor the order in which they must be performed. This approach is currently gaining adherents and you can find it in most of the management and orchestration tools in use today.
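To make the contrast concrete, here is a minimal, language-agnostic sketch in Python (illustrative only, not NMState code): the imperative style spells out each step in order, while a declarative reconciler derives the needed steps from the difference between the current and desired state.

```python
# Imperative: you spell out each step, in order.
def bring_up_imperative(iface):
    steps = []
    steps.append(f"create {iface}")
    steps.append(f"assign address to {iface}")
    steps.append(f"set {iface} up")
    return steps

# Declarative: you state only the desired end state; a reconciler
# computes whatever changes are needed to get there.
def reconcile(current, desired):
    return {k: v for k, v in desired.items() if current.get(k) != v}

current = {"state": "down", "mtu": 1500}
desired = {"state": "up", "mtu": 9000}
print(reconcile(current, desired))  # only the differences are applied
```

The reconciler applies only the delta, which is why declarative tools can be re-run safely: if the host is already in the desired state, there is nothing to do.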

NMState: a declarative tool

NMState is a network manager that allows you to configure hosts following a declarative approach. It means you define the desired configuration state through a northbound declarative API and this tool applies the configuration through a southbound provider.

Currently the only provider supported by NMState is NetworkManager, the main service addressing networking capabilities on Fedora Linux. However, other providers will be added gradually over NMState's development life cycle.

For further information regarding NMState please visit its project site or GitHub repository.

Installation

NMState is available on Fedora Linux 29+ and requires NetworkManager 1.26 or later installed and running on the system. The following shows the installation on Fedora Linux 34:

$ sudo dnf -y install nmstate
output omitted …
Installed:
  NetworkManager-config-server-1:1.30.4-1.fc34.noarch
  gobject-introspection-1.68.0-3.fc34.x86_64
  nispor-1.0.1-2.fc34.x86_64
  nmstate-1.0.3-2.fc34.noarch
  python3-gobject-base-3.40.1-1.fc34.x86_64
  python3-libnmstate-1.0.3-2.fc34.noarch
  python3-nispor-1.0.1-2.fc34.noarch
  python3-varlink-30.3.1-2.fc34.noarch

Complete!

At this point you can use nmstatectl as a command line tool for NMState. Refer to either nmstatectl --help or man nmstatectl for further information about this tool.

Using NMState

Start by checking the NMState version installed in the system:

$ nmstatectl version
1.0.3

Check the current configuration of a networking interface, e.g. the eth0 configuration:

$ nmstatectl show eth0
2021-06-29 10:28:21,530 root         DEBUG    NetworkManager version 1.30.4
2021-06-29 10:28:21,531 root         DEBUG    Async action: Retrieve applied config: ethernet eth0 started
2021-06-29 10:28:21,531 root         DEBUG    Async action: Retrieve applied config: ethernet eth1 started
2021-06-29 10:28:21,532 root         DEBUG    Async action: Retrieve applied config: ethernet eth0 finished
2021-06-29 10:28:21,533 root         DEBUG    Async action: Retrieve applied config: ethernet eth1 finished
---
dns-resolver:
  config: {}
  running:
    search: []
    server:
    - 192.168.122.1
route-rules:
  config: []
routes:
  config: []
  running:
  - destination: fe80::/64
    metric: 100
    next-hop-address: ''
    next-hop-interface: eth0
    table-id: 254
  - destination: 0.0.0.0/0
    metric: 100
    next-hop-address: 192.168.122.1
    next-hop-interface: eth0
    table-id: 254
  - destination: 192.168.122.0/24
    metric: 100
    next-hop-address: ''
    next-hop-interface: eth0
    table-id: 254
interfaces:
- name: eth0
  type: ethernet
  state: up
  ipv4:
    enabled: true
    address:
    - ip: 192.168.122.238
      prefix-length: 24
    auto-dns: true
    auto-gateway: true
    auto-route-table-id: 0
    auto-routes: true
    dhcp: true
  ipv6:
    enabled: true
    address:
    - ip: fe80::c3c9:c4f9:75b1:a570
      prefix-length: 64
    auto-dns: true
    auto-gateway: true
    auto-route-table-id: 0
    auto-routes: true
    autoconf: true
    dhcp: true
  lldp:
    enabled: false
  mac-address: 52:54:00:91:E4:4E
  mtu: 1500

As you can see above, the networking configuration shows four main sections:

  • dns-resolver: the nameserver configuration for this interface.
  • route-rules: the routing rules.
  • routes: both dynamic and static routes.
  • interfaces: the per-interface settings, including both ipv4 and ipv6.
Modify the configuration

You can modify the desired configuration state in two modes: 

  • Interactive: edit the interface configuration with nmstatectl edit. This command invokes the text editor defined by the EDITOR environment variable so the network state can be edited in yaml format. After you finish editing, NMState applies the new network configuration unless there are syntax errors.
  • File-based: apply the interface configuration with nmstatectl apply, which imports a desired configuration state from a previously created yaml or json file.
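As a sketch of the file-based mode, a desired-state document can also be generated programmatically and serialized to JSON (nmstatectl apply accepts yaml or json files). The interface name and address below are illustrative examples, not taken from a real host:

```python
import json

# Hypothetical desired state: static IPv4, IPv6 disabled, on "eth1".
desired_state = {
    "interfaces": [
        {
            "name": "eth1",
            "type": "ethernet",
            "state": "up",
            "ipv4": {
                "enabled": True,
                "dhcp": False,
                "address": [{"ip": "192.168.122.110", "prefix-length": 24}],
            },
            "ipv6": {"enabled": False},
        }
    ]
}

doc = json.dumps(desired_state, indent=2)
print(doc)
# Save this to eth1.json, then run: sudo nmstatectl apply eth1.json
```

Generating the document this way is convenient when templating many hosts, since the schema is just nested mappings and lists.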

The following sections show you how to change the networking configuration using NMState. These changes can be disruptive to the system, so the recommendation is to perform these tasks on a test system or guest VM until you get a better understanding of NMState.

The test system in use here has two Ethernet interfaces: eth0 and eth1:

$ ip -br -4 a
lo               UNKNOWN        127.0.0.1/8
eth0             UP             192.168.122.238/24
eth1             UP             192.168.122.108/24

Example of interactive configuration mode:

Change the MTU of the eth0 interface to 9000 bytes using the nmstatectl edit command as follows (the changed value is mtu: 9000 at the end):

$ sudo nmstatectl edit eth0
---
dns-resolver:
  config: {}
  running:
    search: []
    server:
    - 192.168.122.1
route-rules:
  config: []
routes:
  config: []
  running:
  - destination: fe80::/64
    metric: 100
    next-hop-address: ''
    next-hop-interface: eth0
    table-id: 254
  - destination: 0.0.0.0/0
    metric: 100
    next-hop-address: 192.168.122.1
    next-hop-interface: eth0
    table-id: 254
  - destination: 192.168.122.0/24
    metric: 100
    next-hop-address: ''
    next-hop-interface: eth0
    table-id: 254
interfaces:
- name: eth0
  type: ethernet
  state: up
  ipv4:
    enabled: true
    address:
    - ip: 192.168.122.123
      prefix-length: 24
    auto-dns: true
    auto-gateway: true
    auto-route-table-id: 0
    auto-routes: true
    dhcp: true
  ipv6:
    enabled: true
    address:
    - ip: fe80::c3c9:c4f9:75b1:a570
      prefix-length: 64
    auto-dns: true
    auto-gateway: true
    auto-route-table-id: 0
    auto-routes: true
    autoconf: true
    dhcp: true
  lldp:
    enabled: false
  mac-address: 52:54:00:91:E4:4E
  mtu: 9000

After saving and exiting the editor, NMState applies the new network desired state:

2021-06-29 11:29:05,726 root         DEBUG    Nmstate version: 1.0.3
2021-06-29 11:29:05,726 root         DEBUG    Applying desire state: {'dns-resolver': {'config': {}, 'running': {'search': [], 'server': ['192.168.122.1']}}, 'route-rules': {'config': []}, 'routes': {'config': [], 'running': [{'destination': 'fe80::/64', 'metric': 102, 'next-hop-address': '', 'next-hop-interface': 'eth0', 'table-id': 254}, {'destination': '0.0.0.0/0', 'metric': 102, 'next-hop-address': '192.168.122.1', 'next-hop-interface': 'eth0', 'table-id': 254}, {'destination': '192.168.122.0/24', 'metric': 102, 'next-hop-address': '', 'next-hop-interface': 'eth0', 'table-id': 254}]}, 'interfaces': [{'name': 'eth0', 'type': 'ethernet', 'state': 'up', 'ipv4': {'enabled': True, 'address': [{'ip': '192.168.122.238', 'prefix-length': 24}], 'auto-dns': True, 'auto-gateway': True, 'auto-route-table-id': 0, 'auto-routes': True, 'dhcp': True}, 'ipv6': {'enabled': True, 'address': [{'ip': 'fe80::5054:ff:fe91:e44e', 'prefix-length': 64}], 'auto-dns': True, 'auto-gateway': True, 'auto-route-table-id': 0, 'auto-routes': True, 'autoconf': True, 'dhcp': True}, 'lldp': {'enabled': False}, 'mac-address': '52:54:00:91:E4:4E', 'mtu': 9000}]}
--- output omitted ---
2021-06-29 11:29:05,760 root         DEBUG    Async action: Update profile uuid:2bdee700-f62b-365a-bd1d-69d9c31a9f0c iface:eth0 type:ethernet started
2021-06-29 11:29:05,792 root         DEBUG    Async action: Update profile uuid:2bdee700-f62b-365a-bd1d-69d9c31a9f0c iface:eth0 type:ethernet finished

Now, use the ip command and the eth0 configuration file to check that the MTU of eth0 is 9000 bytes.

$ ip link show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:91:e4:4e brd ff:ff:ff:ff:ff:ff
    altname enp1s0
$ sudo cat /etc/NetworkManager/system-connections/eth0.nmconnection
[sudo] password for admin:
[connection]
id=eth0
uuid=2bdee700-f62b-365a-bd1d-69d9c31a9f0c
type=ethernet
interface-name=eth0
lldp=0
permissions=
[ethernet]
cloned-mac-address=52:54:00:91:E4:4E
mac-address-blacklist=
mtu=9000
[ipv4]
dhcp-client-id=mac
dhcp-timeout=2147483647
dns-search=
method=auto
[ipv6]
addr-gen-mode=eui64
dhcp-duid=ll
dhcp-iaid=mac
dhcp-timeout=2147483647
dns-search=
method=auto
ra-timeout=2147483647
[proxy]

Example of file-based configuration mode:

Let’s use the file-based approach to set a new configuration state. In this case, disable the IPv6 configuration on the eth1 interface.

First, create a yaml file defining the desired state of the eth1 interface. Use nmstatectl show to save the current settings, then edit the file to disable IPv6: set enabled: false under ipv6 and remove the remaining ipv6 sub-keys:

$ nmstatectl show eth1 > eth1.yaml
$ vi eth1.yaml
---
dns-resolver:
  config: {}
  running:
    search: []
    server:
    - 192.168.122.1
route-rules:
  config: []
routes:
  config: []
  running:
  - destination: fe80::/64
    metric: 101
    next-hop-address: ''
    next-hop-interface: eth1
    table-id: 254
  - destination: 0.0.0.0/0
    metric: 101
    next-hop-address: 192.168.122.1
    next-hop-interface: eth1
    table-id: 254
  - destination: 192.168.122.0/24
    metric: 101
    next-hop-address: ''
    next-hop-interface: eth1
    table-id: 254
interfaces:
- name: eth1
  type: ethernet
  state: up
  ipv4:
    enabled: true
    address:
    - ip: 192.168.122.108
      prefix-length: 24
    auto-dns: true
    auto-gateway: true
    auto-route-table-id: 0
    auto-routes: true
    dhcp: true
  ipv6:
    enabled: false
  lldp:
    enabled: false
  mac-address: 52:54:00:3C:9B:04
  mtu: 1500

After saving the new configuration, use it to apply the new state:

$ sudo nmstatectl apply eth1.yaml
2021-06-29 12:17:21,531 root         DEBUG    Nmstate version: 1.0.3
2021-06-29 12:17:21,531 root         DEBUG    Applying desire state: {'dns-resolver': {'config': {}, 'running': {'search': [], 'server': ['192.168.122.1']}}, 'route-rules': {'config': []}, 'routes': {'config': [], 'running': [{'destination': 'fe80::/64', 'metric': 101, 'next-hop-address': '', 'next-hop-interface': 'eth1', 'table-id': 254}, {'destination': '0.0.0.0/0', 'metric': 101, 'next-hop-address': '192.168.122.1', 'next-hop-interface': 'eth1', 'table-id': 254}, {'destination': '192.168.122.0/24', 'metric': 101, 'next-hop-address': '', 'next-hop-interface': 'eth1', 'table-id': 254}]}, 'interfaces': [{'name': 'eth1', 'type': 'ethernet', 'state': 'up', 'ipv4': {'enabled': True, 'address': [{'ip': '192.168.122.108', 'prefix-length': 24}], 'auto-dns': True, 'auto-gateway': True, 'auto-route-table-id': 0, 'auto-routes': True, 'dhcp': True}, 'ipv6': {'enabled': False}, 'lldp': {'enabled': False}, 'mac-address': '52:54:00:3C:9B:04', 'mtu': 1500}]}
--- output omitted ---
2021-06-29 12:17:21,582 root         DEBUG    Async action: Update profile uuid:5d7244cb-673d-3b88-a675-32e31fad4347 iface:eth1 type:ethernet started
2021-06-29 12:17:21,587 root         DEBUG    Async action: Update profile uuid:5d7244cb-673d-3b88-a675-32e31fad4347 iface:eth1 type:ethernet finished
--- output omitted ---
Desired state applied:
---
dns-resolver:
  config: {}
  running:
    search: []
    server:
    - 192.168.122.1
route-rules:
  config: []
routes:
  config: []
  running:
  - destination: fe80::/64
    metric: 101
    next-hop-address: ''
    next-hop-interface: eth1
    table-id: 254
  - destination: 0.0.0.0/0
    metric: 101
    next-hop-address: 192.168.122.1
    next-hop-interface: eth1
    table-id: 254
  - destination: 192.168.122.0/24
    metric: 101
    next-hop-address: ''
    next-hop-interface: eth1
    table-id: 254
interfaces:
- name: eth1
  type: ethernet
  state: up
  ipv4:
    enabled: true
    address:
    - ip: 192.168.122.108
      prefix-length: 24
    auto-dns: true
    auto-gateway: true
    auto-route-table-id: 0
    auto-routes: true
    dhcp: true
  ipv6:
    enabled: false
  lldp:
    enabled: false
  mac-address: 52:54:00:3C:9B:04
  mtu: 1500

You can check that the eth1 interface does not have any IPv6 configured:

$ ip -br a
lo               UNKNOWN        127.0.0.1/8 ::1/128
eth0             UP             192.168.122.238/24 fe80::5054:ff:fe91:e44e/64
eth1             UP             192.168.122.108/24
$ sudo cat /etc/NetworkManager/system-connections/eth1.nmconnection
[connection]
id=eth1
uuid=5d7244cb-673d-3b88-a675-32e31fad4347
type=ethernet
interface-name=eth1
lldp=0
permissions=
[ethernet]
cloned-mac-address=52:54:00:3C:9B:04
mac-address-blacklist=
mtu=1500
[ipv4]
dhcp-client-id=mac
dhcp-timeout=2147483647
dns-search=
method=auto
[ipv6]
addr-gen-mode=eui64
dhcp-duid=ll
dhcp-iaid=mac
dns-search=
method=disabled
[proxy]

Applying changes temporarily

An interesting feature of NMState lets you configure a desired networking state temporarily. If you are satisfied with the configuration, you can commit it afterwards. Otherwise it rolls back when the timeout expires (60 seconds by default).

Modify the eth1 configuration from the previous example so it has a static IPv4 address instead of getting one dynamically via DHCP.

$ vi eth1.yaml
---
dns-resolver:
  config: {}
  running:
    search: []
    server:
    - 192.168.122.1
route-rules:
  config: []
routes:
  config: []
  running:
  - destination: fe80::/64
    metric: 101
    next-hop-address: ''
    next-hop-interface: eth1
    table-id: 254
  - destination: 0.0.0.0/0
    metric: 101
    next-hop-address: 192.168.122.1
    next-hop-interface: eth1
    table-id: 254
  - destination: 192.168.122.0/24
    metric: 101
    next-hop-address: ''
    next-hop-interface: eth1
    table-id: 254
interfaces:
- name: eth1
  type: ethernet
  state: up
  ipv4:
    enabled: true
    address:
    - ip: 192.168.122.110
      prefix-length: 24
    auto-dns: true
    auto-gateway: true
    auto-route-table-id: 0
    auto-routes: true
    dhcp: false
  ipv6:
    enabled: false
  lldp:
    enabled: false
  mac-address: 52:54:00:3C:9B:04
  mtu: 1500

Now, apply this configuration temporarily using the --no-commit option so it is valid for only 30 seconds. This is done by adding the --timeout option. Meanwhile, run the ip -br a command three times to see how the IPv4 address configured on the eth1 interface changes, and then how the configuration rolls back.

$ ip -br a && sudo nmstatectl apply --no-commit --timeout 30 eth1.yaml && sleep 10 && ip -br a && sleep 25 && ip -br a
lo               UNKNOWN        127.0.0.1/8 ::1/128
eth0             UP             192.168.122.238/24 fe80::5054:ff:fe91:e44e/64
eth1             UP             192.168.122.108/24
2021-06-29 17:29:18,266 root         DEBUG    Nmstate version: 1.0.3
2021-06-29 17:29:18,267 root         DEBUG    Applying desire state: {'dns-resolver': {'config': {}, 'running': {'search': [], 'server': ['192.168.122.1']}}, 'route-rules': {'config': []}, 'routes': {'config': [], 'running': [{'destination': 'fe80::/64', 'metric': 101, 'next-hop-address': '', 'next-hop-interface': 'eth1', 'table-id': 254}, {'destination': '0.0.0.0/0', 'metric': 101, 'next-hop-address': '192.168.122.1', 'next-hop-interface': 'eth1', 'table-id': 254}, {'destination': '192.168.122.0/24', 'metric': 101, 'next-hop-address': '', 'next-hop-interface': 'eth1', 'table-id': 254}]}, 'interfaces': [{'name': 'eth1', 'type': 'ethernet', 'state': 'up', 'ipv4': {'enabled': True, 'address': [{'ip': '192.168.122.110', 'prefix-length': 24}], 'dhcp': False}, 'ipv6': {'enabled': False}, 'lldp': {'enabled': False}, 'mac-address': '52:54:00:3C:9B:04', 'mtu': 1500}]}
--- output omitted ---
Desired state applied:
---
dns-resolver:
  config: {}
  running:
    search: []
    server:
    - 192.168.122.1
route-rules:
  config: []
routes:
  config: []
  running:
  - destination: fe80::/64
    metric: 101
    next-hop-address: ''
    next-hop-interface: eth1
    table-id: 254
  - destination: 0.0.0.0/0
    metric: 101
    next-hop-address: 192.168.122.1
    next-hop-interface: eth1
    table-id: 254
  - destination: 192.168.122.0/24
    metric: 101
    next-hop-address: ''
    next-hop-interface: eth1
    table-id: 254
interfaces:
- name: eth1
  type: ethernet
  state: up
  ipv4:
    enabled: true
    address:
    - ip: 192.168.122.110
      prefix-length: 24
    dhcp: false
  ipv6:
    enabled: false
  lldp:
    enabled: false
  mac-address: 52:54:00:3C:9B:04
  mtu: 1500
Checkpoint: NetworkManager|/org/freedesktop/NetworkManager/Checkpoint/7
lo               UNKNOWN        127.0.0.1/8 ::1/128
eth0             UP             192.168.122.238/24 fe80::5054:ff:fe91:e44e/64
eth1             UP             192.168.122.110/24
lo               UNKNOWN        127.0.0.1/8 ::1/128
eth0             UP             192.168.122.238/24 fe80::5054:ff:fe91:e44e/64
eth1             UP             192.168.122.108/24

As you can see from above, the eth1 IP address changed temporarily from 192.168.122.108 to 192.168.122.110 and then it returned to 192.168.122.108 after the timeout expired.

Conclusion

NMState is a declarative networking configuration tool that currently applies the desired networking configuration state in a host through the NetworkManager API. This state can be defined either interactively using a text editor or with a file-based approach creating a yaml or json file.

This kind of tool provides Infrastructure as Code: it allows the automation of networking tasks and reduces the potential misconfigurations or unstable networking scenarios that could arise using legacy configuration methods.

Use OpenCV on Fedora Linux ‒ part 2

Friday 6th of August 2021 08:00:00 AM

Welcome back to the OpenCV series where we explore how to make use of OpenCV on Fedora Linux. The first article covered the basic functions and use cases of OpenCV. In addition to that you learned about loading images, color mapping, and the difference between BGR and RGB color maps. You also learned how to separate and merge color channels and how to convert to different color spaces. This article will cover basic image manipulation and show you how to perform image transformations including:

  • Accessing individual image pixels
  • Modifying a range of image pixels
  • Cropping
  • Resizing
  • Flipping
Accessing individual pixels

import cv2
import numpy as np
import matplotlib.pyplot as plt

# Read image as gray scale.
img = cv2.imread(cv2.samples.findFile("gradient.png"), 0)
# Set color map to gray scale for proper rendering.
plt.imshow(img, cmap='gray')
# Print img pixels as 2D Numpy Array
print(img)
# Show image with Matplotlib
plt.show()

To access a pixel in a numpy matrix, you have to use matrix notation such as matrix[r,c], where r is the row number and c is the column number. Also note that the matrix is 0-indexed: if you want to access the first pixel, you need to specify matrix[0,0]. The following example prints one black pixel from the top-left corner and one white pixel from the top-right corner.

# print the first pixel
print(img[0,0])
# print the white pixel in the top right corner
print(img[0,299])

Modifying a range of image pixels

You can modify the values of pixels using the same notation described above.

gr_img = img.copy()

# Modify pixels one by one
#gr_img[20,20] = 200
#gr_img[20,21] = 200
#gr_img[20,22] = 200
#gr_img[20,23] = 200
#gr_img[20,24] = 200
# ...

# Modify the 20-80 pixel range
gr_img[20:80,20:80] = 200

plt.imshow(gr_img, cmap='gray')
print(gr_img)
plt.show()

Cropping images

Cropping an image is achieved by selecting a specific (pixel) region of the image.

import cv2 as cv
import matplotlib.pyplot as plt

img = cv.imread(cv.samples.findFile("starry_night.jpg"), cv.IMREAD_COLOR)
img_rgb = cv.cvtColor(img, cv.COLOR_BGR2RGB)

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(img_rgb)
ax1.set_title('Before Crop')
ax2.imshow(img_rgb[200:400, 300:600])
ax2.set_title('After Crop')
plt.show()

Resizing images

Syntax: dst = cv.resize( src, dsize[, dst[, fx[, fy[, interpolation]]]] )

The resize function resizes the src image down to or up to the specified size. The size and type are derived from the values of src, dsize, fx, and fy.

The resize function has two required arguments:

  • src: input image
  • dsize: output image size

Optional arguments that are often used include:

  • fx: The scale factor along the horizontal axis. When this is 0, the factor is computed as dsize.width/src.cols.
  • fy: The scale factor along the vertical axis. When this is 0, the factor is computed as dsize.height/src.rows.
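The relationship between dsize and the scale factors can be sketched in plain Python. This is arithmetic only (not OpenCV code), mirroring the fallback rule stated above: when fx or fy is 0, the factor is derived from dsize; when dsize is omitted, it is derived from the factors instead.

```python
def effective_scale(src_cols, src_rows, dsize=None, fx=0, fy=0):
    # When fx/fy are 0, derive them from dsize (dsize.width / src.cols,
    # dsize.height / src.rows); when dsize is None, derive it from fx/fy.
    if dsize is not None:
        fx = fx or dsize[0] / src_cols
        fy = fy or dsize[1] / src_rows
    else:
        dsize = (round(src_cols * fx), round(src_rows * fy))
    return dsize, fx, fy

# 600x400 source, explicit output size 300x200 -> factors of 0.5
print(effective_scale(600, 400, dsize=(300, 200)))  # ((300, 200), 0.5, 0.5)
# Same source, scale factors only -> output size computed
print(effective_scale(600, 400, fx=5, fy=5))        # ((3000, 2000), 5, 5)
```

This is why passing dsize=None with fx=5, fy=5, as in the example below, produces a 5x enlargement.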
import cv2 as cv
import matplotlib.pyplot as plt

img = cv.imread(cv.samples.findFile("starry_night.jpg"), cv.IMREAD_COLOR)
img_rgb = cv.cvtColor(img, cv.COLOR_BGR2RGB)

plt.figure(figsize=[18, 5])
plt.subplot(1, 3, 1)  # row 1, column 3, count 1
cropped_region = img_rgb[200:400, 300:600]
resized_img_5x = cv.resize(cropped_region, None, fx=5, fy=5)
plt.imshow(resized_img_5x)
plt.title("Resize Cropped Image with Scale 5X")

width = 200
height = 300
dimension = (width, height)
resized_img = cv.resize(img_rgb, dsize=dimension, interpolation=cv.INTER_AREA)
plt.subplot(1, 3, 2)
plt.imshow(resized_img)
plt.title("Resize Image with Custom Size")

desired_width = 500
aspect_ratio = desired_width / img_rgb.shape[1]
desired_height = int(resized_img.shape[0] * aspect_ratio)
dim = (desired_width, desired_height)
resized_cropped_region = cv.resize(img_rgb, dsize=dim, interpolation=cv.INTER_AREA)
plt.subplot(1, 3, 3)
plt.imshow(resized_cropped_region)
plt.title("Keep Aspect Ratio - Resize Image")
plt.show()

Flipping images

Syntax: dst = cv.flip( src, flipCode )

  • dst: output array of the same size and type as src.

The flip function flips the array in one of three different ways.

The flip function has two required arguments:

  • src: the input image
  • flipCode: a flag to specify how to flip the image
    • Use 0 to flip the image on the x-axis.
    • Use a positive value (for example, 1) to flip the image on the y-axis.
    • Use a negative value (for example, -1) to flip the image on both axes.
import cv2 as cv
import matplotlib.pyplot as plt

img = cv.imread(cv.samples.findFile("starry_night.jpg"), cv.IMREAD_COLOR)
img_rgb = cv.cvtColor(img, cv.COLOR_BGR2RGB)

img_rgb_flipped_horz = cv.flip(img_rgb, 1)
img_rgb_flipped_vert = cv.flip(img_rgb, 0)
img_rgb_flipped_both = cv.flip(img_rgb, -1)

plt.figure(figsize=[18, 5])
plt.subplot(141); plt.imshow(img_rgb_flipped_horz); plt.title("Horizontal Flip")
plt.subplot(142); plt.imshow(img_rgb_flipped_vert); plt.title("Vertical Flip")
plt.subplot(143); plt.imshow(img_rgb_flipped_both); plt.title("Both Flipped")
plt.subplot(144); plt.imshow(img_rgb); plt.title("Original")
plt.show()

Further information

More details about OpenCV are available in the documentation.

Thank you.

Apps for daily needs part 3: image editors

Wednesday 4th of August 2021 08:00:00 AM

Image editors are applications liked and needed by many people: professional designers, students, and hobbyists alike. Especially in this digital era, more and more people need image editors for various reasons. This article introduces some of the open source image editors that you can use on Fedora Linux. You may need to install the software mentioned. If you are unfamiliar with how to add software packages in Fedora Linux, see my earlier article Things to do after installing Fedora 34 Workstation. Here is a list of a few apps for daily needs in the image editors category.

GIMP

GIMP (GNU Image Manipulation Program) is a raster graphics editor used for photo retouching, image composition, and image authoring. It has almost the same functionality as Adobe Photoshop. You can use GIMP to do a lot of the things you can do with Photoshop. Because of that, GIMP has become the most popular application as an open source alternative to Adobe Photoshop.

GIMP has a lot of features for manipulating images, especially raster images. You can fix or change the colors of your photos using GIMP. You can select part of an image, crop it, and then merge it with other pieces of the image. GIMP also has many effects that you can apply to your images, including blur, shadow, noise, etc. Many people use GIMP to repair damaged photos, improve image quality, crop unwanted parts of images, create posters and various graphic design works, and much more. Moreover, you can add plugins and scripts to GIMP, making it even more fully featured.

More information is available at this link: https://www.gimp.org/

Inkscape

Inkscape is a popular open source application used to create and edit vector graphics. It is a feature-rich vector graphics editor which makes it competitive with other similar proprietary applications, such as Adobe Illustrator and Corel Draw. Because of that, many professional illustrators use it to create vector-based artwork.

You can use Inkscape for making artistic and technical illustrations, such as logos, diagrams, icons, desktop wallpapers, flowcharts, cartoons, and much more. Moreover, Inkscape can handle various graphic file formats. In addition, you can also add add-ons to make your work easier.

More information is available at this link: https://inkscape.org/

Krita

Krita looks like GIMP or Inkscape at first glance. But actually it is an application that is quite different, although it has some similar functions. Krita is an application for creating digital paintings like those made by artists. You can use Krita for making concept art, illustration, comics, texture, and matte paintings.

Krita has over 100 professionally made brushes that come preloaded. It also has a brush stabilizer feature with 3 different ways to smooth and stabilize your brush strokes. Moreover, you can customize your brushes with over 9 unique brush engines. Krita is the right application for those of you who like digital painting activities.

More information is available at this link: https://krita.org/en/

darktable

darktable is perfect for photographers or for those who want to improve the quality of their photos. It focuses on image editing, specifically non-destructive post-production of raw images, and provides professional color management that supports automatic display profile detection. In addition, you can use darktable to manage multiple images with filtering and sorting features, so you can search your collections by tags, rating, color labels, and more. It can import various image formats, such as JPEG, CR2, NEF, HDR, PFM, and RAF.

More information is available at this link: https://www.darktable.org/

Conclusion

This article presented four image editors that you can use on Fedora Linux, each representing a sub-category of image editor applications. There are many other image editors available for Fedora Linux: you can use RawTherapee or Photivo as a darktable alternative, Pinta as an alternative to GIMP, and MyPaint as an alternative to Krita. Hopefully this article helps you choose the right image editor. If you have experience using these applications, please share it in the comments.

More in Tux Machines

Programming Leftovers

  • Announcement : An AArch64 (Arm64) Darwin port is planned for GCC12

    As many of you know, Apple has now released an AArch64-based version of macOS and desktop/laptop platforms using the ‘M1’ chip to support it. This is in addition to the existing iOS mobile platforms (but shares some of their constraints). There is considerable interest in the user-base for a GCC port (starting with https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96168) - and, of great kudos to the gfortran team, one of the main drivers is folks using Fortran. Fortunately, I was able to obtain access to one of the DTKs, courtesy of the OSS folks, and using that managed to draft an initial attempt at the port last year (however, nowhere near ready for presentation in GCC11). Nevertheless (as an aside) despite being a prototype, the port is in use with many via hombrew, macports or self-builds - which has shaken out some of the fixable bugs. The work done in the prototype identified three issues that could not be coded around without work on generic parts of the compiler. I am very happy to say that two of our colleagues, Andrew Burgess and Maxim Blinov (both from embecosm) have joined me in drafting a postable version of the port and we are seeking sponsorship to finish this in the GCC12 timeframe. Maxim has a lightning talk on the GNU tools track at LPC (right after the steering committee session) that will focus on the two generic issues that we’re tackling (1 and 2 below). Here is a short summary of the issues and proposed solutions (detailed discussion of any of the parts below would better be in new threads).

  • Apple Silicon / M1 Port Planned For GCC 12 - Phoronix

    Developers are hoping for next year's GCC 12 release they will have Apple AArch64 support on Darwin in place for being able to support Apple Silicon -- initially the M1 SoC -- on macOS with GCC. LLVM/Clang has long been supporting AArch64 on macOS given that Apple leverages LLVM/Clang as part of their official Xcode toolchain as the basis for their compiler across macOS to iOS and other products. While the GNU Compiler Collection (GCC) supports AArch64 and macOS/Darwin, it hasn't supported the two of them together but there is a port in progress to change it.

  • Dirk Eddelbuettel: tidyCpp 0.0.5 on CRAN: More Protect’ion

    Another small release of the tidyCpp package arrived on CRAN overnight. The packages offers a clean C++ layer (as well as one small C++ helper class) on top of the C API for R which aims to make use of this robust (if awkward) C API a little easier and more consistent. See the vignette for motivating examples. The Protect class now uses the default methods for copy and move constructors and assignment allowing for wide use of the class. The small NumVec class now uses it for its data member.

  • QML Modules in Qt 6.2

    With Qt 6.2 there is, for the first time, a comprehensive build system API that allows you to specify a QML module as a complete, encapsulated unit. This is a significant improvement, but as the concept of QML modules was rather under-developed in Qt 5, even seasoned QML developers might now ask "What exactly is a QML module". In our previous post we have scratched the surface by introducing the CMake API used to define them. We'll take a closer look in this post.

  • Santiago Zarate: So you want to recover and old git branch because it has been overwritten?
  • Start using YAML now | Opensource.com

    YAML (YAML Ain't Markup Language) is a human-readable data serialization language. Its syntax is simple and human-readable. It does not contain quotation marks, opening and closing tags, or braces. It does not contain anything which might make it harder for humans to parse nesting rules. You can scan your YAML document and immediately know what's going on. [...] At this point, you know enough YAML to get started. You can play around with the online YAML parser to test yourself. If you work with YAML daily, then this handy cheatsheet will be helpful.

  • 40 C programming examples

    The C programming language is one of the most popular languages for novice programmers. It is a structured programming language that was originally developed for the UNIX operating system. It runs on many different operating systems, and it is easy to learn. This tutorial presents 40 useful C programming examples for users who want to learn C programming from the beginning.

Devices/Embedded: Asus Tinker Board 2 and More

  • Asus Tinker Board 2 single-board computer now available for $94 and up - Liliputing

    The Asus Tinker Board 2 is a Raspberry Pi-shaped single-board computer powered by a Rockchip RK3399 hexa-core processor and featuring 2GB to 4GB of RAM. First announced almost a year ago, the Tinker Board 2 is finally available for $99 and up. Asus also offers a Tinker Board 2S model that’s pretty similar except that it has 16GB of eMMC storage. Prices for that model start at about $120.

  • Raspberry Pi Weekly Issue #371 - Sir Clive Sinclair, 1940 – 2021

    This week ended with the incredibly sad news of the passing of Sir Clive Sinclair. He was one of the founding fathers of home computing and got many of us at Raspberry Pi hooked on programming as kids. Join us in sharing your Sinclair computing memories with us on Twitter and our blog, and we’ll see you next week.

  • cuplTag battery-powered NFC tag logs temperature and humidity (Crowdfunding) - CNX Software

    Temperature and humidity sensors would normally connect to a gateway sending data to the cloud; the coin-cell battery-powered cuplTag NFC tag instead sends data to your smartphone after a tap. cuplTag is controlled by an MSP430 16-bit microcontroller from Texas Instruments, which regularly reads sensor data and stores it in an EEPROM. The data can then be read over NFC, with the tag returning a URL carrying the sensor and battery readings, which is displayed in the phone's web browser (no app needed).

  • A first look at Microchip PolarFire SoC FPGA Icicle RISC-V development board - CNX Software

    Formally launched on Crowd Supply a little over a year ago, Microchip PolarFire SoC FPGA Icicle (codenamed MPFS-ICICLE-KIT-ES) was one of the first Linux & FreeBSD capable RISC-V development boards. The system is equipped with a PolarFire SoC FPGA comprising a RISC-V CPU subsystem with four 64-bit RISC-V (RV64GC) application cores, one 64-bit RISC-V real-time core (RV64IMAC), as well as FPGA fabric. Backers of the board have been able to play with it for several months now, but Microchip is sending the board to more people for evaluation/review, and I got one of my own to experiment with. It's good to have a higher-end development board instead of the usual hobbyist-grade fare. Today, I'll just have a look at the kit contents and the main components on the board before playing with Linux and FPGA development tools in an upcoming post or two.

  • What is IoT device management?

    Smart devices are everywhere around us. We carry one in our pocket, watch movies on another while a third cooks us dinner. Every day there are thousands of new devices connecting to the Internet. Research shows that by 2025, more than 150,000 IoT devices will come online every minute. With such vast numbers it is impossible to keep everything in working order just on your own. This brings the need for IoT device management. But what is IoT device management? To answer this question we first need to understand what the Internet of Things (IoT) is.

  • Beelink U59 mini PC with Intel Celeron N5095 Jasper Lake coming soon - Liliputing

    Beelink says the system ships with Windows 10, but it should also support Linux.

  • Beelink U59 Celeron N5095 Jasper Lake mini PC to ship with 16GB RAM, 512GB SSD - CNX Software

    Beelink U59 is an upcoming Jasper Lake mini PC based on the Intel Celeron N5095 15W quad-core processor that will ship with up to 16GB RAM, and 512 GB M.2 SSD storage. The mini PC will also offer two 4K HDMI 2.0 ports, a Gigabit Ethernet port, WiFi 5, as well as four USB 3.0 ports, and support for 2.5-inch SATA drives up to 7mm thick.

Graphics: Mesa, KWinFT, and RADV

  • Experimenting Is Underway For Rust Code Within Mesa - Phoronix

    Longtime Mesa developer Karol Herbst, who has worked extensively on the open-source NVIDIA "Nouveau" driver as well as the OpenCL/compute stack while employed by Red Hat, is now toying with the idea of Rust code inside Mesa. Herbst has begun investigating how Rust, which is known for its memory safety and concurrency benefits, could be used within Mesa -- both as an API implementation and for leveraging existing Mesa code from Rust.

  • KWinFT Continues Working On WLROOTS Render, Library Split

    KWinFT, a fork of KDE's KWin X11/Wayland compositor, continues making progress on fundamental display improvements and on ironing out its Wayland support. KWinFT has been transitioning to WLROOTS for its Wayland heavy-lifting, and that process remains ongoing. The project has also been splitting up its library code to make it more manageable and robust. Features still to be worked on include input methods, graphics tablet support, and PipeWire video stream integration. Currently two full-time developers work on the project, but they hope to scale up to four or five.

  • Raytracing Starting to Come Together – Bas Nieuwenhuizen – Open Source GPU Drivers

    I am back with another status update on raytracing in RADV. The good news is that things are finally starting to come together. After ~9 months of on-and-off work, we now have games working with raytracing.

  • Multiple Games Are Now Working With RADV's Ray-Tracing Code - Phoronix

    Not only is Intel progressing with its open-source ray-tracing driver support, but the Mesa Radeon Vulkan driver "RADV" has been rounding out its RT code too and now has multiple games rendering correctly. Bas Nieuwenhuizen has been spearheading the RADV work on Vulkan ray-tracing support, and after more than half a year of tackling it, things are starting to fall into place nicely. Games such as Quake II RTX with native Vulkan ray-tracing are working, along with the game Control via VKD3D-Proton for going from Direct3D 12 DXR to Vulkan RT. Metro Exodus is also working, while Ghostrunner and Doom Eternal are two games tested that are not yet working.

Audiocasts/Shows: Full Circle Weekly News, Juno Computers, Kali Linux 2021.3