Fedora Magazine

Guides, information, and news about the Fedora operating system for users, developers, system administrators, and community members.

Using i3 with multiple monitors

Monday 24th of June 2019 07:00:58 AM

Are you using multiple monitors with your Linux workstation? Seeing many things at once can be beneficial. But our workflows often involve many more windows than physical monitors — and that’s a good thing, because seeing too many things at once can be distracting. So being able to switch what appears on individual monitors is crucial.

Let’s talk about i3 — a popular tiling window manager that works great with multiple monitors. And there is one handy feature that many other window managers don’t have — the ability to switch workspaces on individual monitors independently.

Quick introduction to i3

The Fedora Magazine covered i3 about three years ago, and it was one of the most popular articles ever published! Although such articles often go stale, i3 is pretty stable and that one is still accurate today. So, not to repeat ourselves too much, this article only covers the bare minimum needed to get i3 up and running; if you’re new to i3 and want to learn more about the basics, you’re welcome to go ahead and read the earlier article.

To install i3 on your system, run the following command:

$ sudo dnf install i3

When that’s done, log out, choose i3 as your window manager on the login screen, and log back in.

When you run i3 for the first time, you’ll be asked if you wish to proceed with automatic configuration — answer yes here. After that, you’ll be asked to choose a “mod key”. If you’re not sure, just accept the default, which sets your Windows/Super key as the mod key. You’ll use this key for almost all the shortcuts within the window manager.
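For reference, the automatic configuration step records your choice in i3’s config file. The relevant line looks something like this (Mod4 is the Super/Windows key):

```
# ~/.config/i3/config (written by the configuration wizard)
set $mod Mod4
```

If you ever want a different mod key, editing this one line and reloading i3 is enough.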

At this point, you should see a little bar at the bottom and an empty screen. Let’s have a look at some of the basic shortcuts.

Open a terminal using:

$mod + enter

Switch to a second workspace using:

$mod + 2

Open Firefox in two steps, first by:

$mod + d

… and then by typing “firefox” and pressing enter.

Move it to the first workspace by:

$mod + shift + 1

… and switch to the first workspace by:

$mod + 1

At this point, you’ll see a terminal and a Firefox window side by side. To close a window, press:

$mod + shift + q

There are more shortcuts, but these should give you the minimum to get started with i3.

Ah! And to exit i3 (to log out) press:

$mod + shift + e

… and then confirm using your mouse at the top-right corner.

Getting multiple screens to work

Now that we have i3 up and running, let’s put all those screens to work!

To do that, we’ll need to use the command line, as i3 is very lightweight and doesn’t have a GUI for managing additional screens. But don’t worry if that sounds difficult — it’s actually quite straightforward!

The command we’ll use is called xrandr. If you don’t have xrandr on your system, install it by running:

$ sudo dnf install xrandr

When that’s installed, let’s just go ahead and run it:

$ xrandr

The output lists all the available outputs, and also indicates which ones have a screen attached to them (a monitor connected with a cable) by showing their supported resolutions. The good news is that we don’t really need to care about the specific resolutions to make them work.
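As a sketch of how that listing can be read programmatically, the following filter prints just the names of connected outputs. The xrandr sample embedded here is hypothetical so the filter can be tried anywhere; on a real system you would pipe xrandr itself into the awk command:

```shell
#!/bin/sh
# Hypothetical xrandr output; the second field is "connected" or "disconnected"
sample='Screen 0: minimum 320 x 200, current 3286 x 1080, maximum 8192 x 8192
eDP1 connected primary 1366x768+0+0 (normal left inverted right x axis y axis) 310mm x 170mm
   1366x768      60.00*+
HDMI-2 connected 1920x1080+1366+0 (normal left inverted right x axis y axis) 480mm x 270mm
   1920x1080     60.00*+
DP1 disconnected (normal left inverted right x axis y axis)'

# Print only the names of outputs that have a monitor attached
printf '%s\n' "$sample" | awk '$2 == "connected" { print $1 }'
```

On a live system, `xrandr | awk '$2 == "connected" { print $1 }'` gives the same result for your real outputs.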

This specific example shows a primary screen of a laptop (named eDP1), and a second monitor connected to the HDMI-2 output, physically positioned right of the laptop. To turn it on, run the following command:

$ xrandr --output HDMI-2 --auto --right-of eDP1

And that’s it! Your screen is now active.

Second screen active. The commands shown on this screenshot are slightly different than in the article, as they set a smaller resolution to make the screenshots more readable.

Managing workspaces on multiple screens

Switching workspaces and creating new ones on multiple screens is very similar to having just one screen. New workspaces get created on the screen that’s currently active — the one that has your mouse cursor on it.

So, to switch to a specific workspace (or to create a new one in case it doesn’t exist), press:

$mod + NUMBER

And you can switch workspaces on individual monitors independently!
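As an aside, if you want certain workspaces to always appear on a particular monitor, i3 can pin them in the config file. A small sketch, assuming the output names eDP1 and HDMI-2 from the xrandr example (your names will differ):

```
# Pin workspaces to outputs; output names come from xrandr
workspace 1 output eDP1
workspace 2 output HDMI-2
```

With lines like these, switching to workspace 1 always brings it up on the laptop screen.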

Workspace 2 on the left screen, workspace 4 on the right screen. Left screen switched to workspace 3, right screen still showing workspace 4. Right screen switched to workspace 4, left screen still showing workspace 3.

Moving workspaces between monitors

Just as we can move windows to different workspaces with the following shortcut:

$mod + shift + NUMBER

… we can move workspaces to different screens as well. However, there is no default shortcut for this action — so we have to create it first.

To create a custom shortcut, you’ll need to open the configuration file in a text editor of your choice (this article uses vim):

$ vim ~/.config/i3/config

And add the following lines to the very bottom of the configuration file:

# Moving workspaces between screens
bindsym $mod+p move workspace to output right

Save and close the file. Then, to reload and apply the configuration, press:

$mod + shift + r

Now you’ll be able to move your active workspace to the second monitor by:

$mod + p

Workspace 2 with Firefox on the left screen. Workspace 2 with Firefox moved to the second screen.
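A workspace moved this way stays on the second monitor until it is moved again. If you also want a binding for the opposite direction, a similar line can be added to the config (the $mod+o key choice here is just a suggestion; pick any free combination):

```
# Moving workspaces back to the left (key choice is arbitrary)
bindsym $mod+o move workspace to output left
```

Reload the configuration again after adding it, and the pair of shortcuts lets you shuttle workspaces in both directions.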

And that’s it! Enjoy your new multi-monitor experience, and to learn more about i3, you’re welcome to read the previous article about i3 on the Fedora Magazine, or consult the official i3 documentation.

Making Fedora 30

Friday 21st of June 2019 08:00:35 AM

What does it take to make a Linux distribution like Fedora 30? As you might expect, it’s not a simple process.

Changes in Fedora 30

Although Fedora 29 was released on October 30, 2018, work on Fedora 30 began long before that. The first change proposal was submitted in late August. By my count, contributors made nine separate change proposals for Fedora 30 before Fedora 29 shipped.

Some of these proposals come early because they have a big impact, like mass removal of Python 2 packages. By the time the proposal deadline arrived in early January, the community had submitted 50 change proposals.

Of course, not all change proposals make it into the shipped release. Some of them are more focused on how we build the release instead of what we release. Others don’t get done in time. System-wide changes must have a contingency plan. These changes are generally evaluated at one of three points in the schedule: when packages branch from Rawhide, at the beginning of the Beta freeze, and at the beginning of the Final freeze. For Fedora 30, 45 Change proposals were still active for the release.

Fedora has a calendar-based release schedule, but that doesn’t mean we ship whatever exists on a given date. We have a set of release criteria that we test against, and we don’t put out a release until all the blockers are resolved. This sometimes means a release is delayed, but it’s important that we ship reliable software.

For the Fedora 30 development cycle, we accepted 22 proposed blocker bugs and rejected 6. We also granted 33 freeze exceptions — bugs that can be fixed during the freeze because they impact the released artifacts or are otherwise important enough to include in the release.

Other contributions

Of course, there’s more to making a release than writing or packaging the code, testing it, and building the images. As with every release, the Fedora Design team created a new desktop background along with several supplemental wallpapers. The Fedora Marketing team wrote release announcements and put together talking points for the Ambassadors and Advocates to use when talking to the broader community.

If you’ve looked at our new website, that was the work of the Websites team in preparation for the Fedora 30 release.

The Documentation Team wrote Release Notes and updated other documentation. Translators provided translations to dozens of languages.

Many other people made contributions to the release of Fedora 30 in some way. It’s not easy to count everyone who has a hand in producing a Linux distribution, but we appreciate every one of our contributors. If you would like to join the Fedora Community but aren’t sure where to start, check out What Can I Do For Fedora?

Photo by Robin Sommer on Unsplash.

Critical Firefox vulnerability fixed in 67.0.3

Thursday 20th of June 2019 06:50:38 AM

On Tuesday, Mozilla issued a security advisory for Firefox, the default web browser in Fedora. This advisory concerns a CVE for a vulnerability based on type confusion that can happen when JavaScript objects are being manipulated. It can be used to crash your browser. There are apparently already attacks in the wild that exploit the issue. Read on for more information, and how to protect your system against this flaw.

At the same time the security advisory was issued, Mozilla also released Firefox 67.0.3 (and ESR 60.7.1) to fix the issue.

Updating Firefox in Fedora

Firefox 67.0.3 (with the security fixes) has already been pushed to the stable Fedora repositories. The security fix will be applied to your system with your next update. You can also update only the firefox package by running the following command:

$ sudo dnf update --refresh firefox

This command requires you to have sudo set up. Note that not every Fedora mirror syncs at the same rate. Community sites graciously donate the space and bandwidth these mirrors use to carry Fedora content. You may need to try again later if your selected mirror is still awaiting the latest update.

Get the latest Ansible 2.8 in Fedora

Wednesday 19th of June 2019 08:00:13 AM

Ansible is one of the most popular automation engines in the world. It lets you automate virtually anything, from setup of a local system to huge groups of platforms and apps. It’s cross platform, so you can use it with all sorts of operating systems. Read on for more information on how to get the latest Ansible in Fedora, some of its changes and improvements, and how to put it to use.

Releases and features

Ansible 2.8 was recently released with many fixes, features, and enhancements. It was available in Fedora mere days afterward as an official update in Fedora 29 and 30, as well as in EPEL. The follow-on version 2.8.1 was released two weeks ago. Again, the new release was available within a few days in Fedora.

Installation is, of course, easy to do from the official Fedora repositories using sudo:

$ sudo dnf -y install ansible

The 2.8 release has a long list of changes, and you can read them in the Porting Guide for 2.8. But they include some goodies, such as Python interpreter discovery. Ansible 2.8 now tries to figure out which Python is preferred by the platform it runs on. In cases where that fails, Ansible uses a fallback list. However, you can still use the ansible_python_interpreter variable to set the Python interpreter explicitly.
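Where discovery and the fallback list both pick the wrong interpreter, the variable can be pinned per host or group in the inventory. A minimal sketch, with hypothetical host and group names:

```yaml
# Hypothetical inventory snippet: force Python 2 for one legacy host group
all:
  children:
    legacy:
      hosts:
        oldbox.example.com:
      vars:
        ansible_python_interpreter: /usr/bin/python2
```

The same variable can also be set on a single host line or passed on the command line with -e.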

Another change makes Ansible more consistent across platforms. Since sudo is specific to UNIX/Linux and other platforms don’t have it, become is now used in more places. This includes command line switches. For example, --ask-sudo-pass has become --ask-become-pass, and the prompt is now BECOME password: instead.

There are many more features in the 2.8 and 2.8.1 releases. Do check out the official changelog on GitHub for all the details.

Using Ansible

Maybe you’re not sure if Ansible is something you could really use. Don’t worry, you’re not alone in thinking that, because it’s so powerful. But it turns out that it’s not hard to use even for simple or individual setups, like a home with a couple of computers (or even just one!).

We covered this topic earlier in the Fedora Magazine as well:

Using Ansible to set up a workstation

Give Ansible a try and see what you think. The great part about it is that Fedora stays quite up to date with the latest releases. Happy automating!

Personal assistant with Mycroft and Fedora

Friday 14th of June 2019 09:46:12 AM

Looking for an open source personal assistant? Mycroft lets you run an open source service that gives you better control over your data.

Install Mycroft on Fedora

Mycroft is currently not available in the official package collection, but it can be easily installed from the project source. The first step is to download the source from Mycroft’s GitHub repository.

$ git clone https://github.com/MycroftAI/mycroft-core.git

Mycroft is a Python application and the project provides a script that takes care of creating a virtual environment before installing Mycroft and its dependencies.

$ cd mycroft-core
$ ./dev_setup.sh

The installation script asks the user a few questions to guide the installation process. It is recommended to run the stable version and to get automatic updates.

When prompted to install the Mimic text-to-speech engine locally, answer No. As described in the installation process, building it can take a long time, and Mimic is available as an RPM package in Fedora, so it can be installed using dnf instead:

$ sudo dnf install mimic

Starting Mycroft

After the installation is complete, the Mycroft services can be started using the following script.

$ ./start-mycroft.sh all

In order to start using Mycroft, the device running the service needs to be registered. To do that, an account is needed; one can be created at https://home.mycroft.ai/.

Once the account is created, it is possible to add a new device at https://account.mycroft.ai/devices. Adding a new device requires a pairing code, which will be spoken to you by your device after all the services start.

The device is now ready to be used.

Using Mycroft

Mycroft provides a set of skills that are enabled by default or can be downloaded from the Marketplace. To start, you can simply ask Mycroft how it is doing, or what the weather is like.

Hey Mycroft, how are you?

Hey Mycroft, what's the weather like?

If you are interested in how things work, the start-mycroft.sh script provides a cli option that lets you interact with the services from the command line. It also displays logs, which is really useful for debugging.

Mycroft is always trying to learn new skills, and there are many ways to help by contributing to the Mycroft community.

Photo by Przemyslaw Marczynski on Unsplash

Installing alternative versions of RPMs in Fedora

Wednesday 12th of June 2019 08:00:29 AM

Modularity enables Fedora to provide alternative versions of RPM packages in the repositories. Several different applications, language runtimes, and tools are available in multiple versions, built natively for each Fedora release.

The Fedora Magazine has already covered Modularity in Fedora 28 Server Edition about a year ago. Back then, it was just an optional repository with additional content, and as the title hints, only available to the Server Edition. A lot has changed since then, and now Modularity is a core part of the Fedora distribution. And some packages have moved to modules completely. At the time of writing — out of the 49,464 binary RPM packages in Fedora 30 — 1,119 (2.26%) come from a module (more about the numbers).

Modularity basics

Because having too many packages in multiple versions could feel overwhelming (and hard to manage), packages are grouped into modules that represent an application, a language runtime, or any other sensible group.

Modules often come in multiple streams, usually representing a major version of the software. Streams are available in parallel, but only one stream of each module can be installed on a given system.

And so as not to overwhelm users with too many choices, each Fedora release comes with a set of defaults — so decisions only need to be made when desired.

Finally, to simplify installation, modules can be optionally installed using pre-defined profiles based on a use case. A database module, for example, could be installed as a client, a server, or both.

Modularity in practice

When you install an RPM package on your Fedora system, chances are it comes from a module stream. The reason why you might not have noticed is one of the core principles of Modularity — remaining invisible until there is a reason to know about it.

Let’s compare the following two situations. First, installing the popular i3 tiling window manager, and second, installing the minimalist dwm window manager:

$ sudo dnf install i3
...
Done!

As expected, the above command installs the i3 package and its dependencies on the system. Nothing else happened here. But what about the other one?

$ sudo dnf install dwm
...
Enabling module streams:
dwm 6.1
...
Done!

It feels the same, but something happened in the background — the default dwm module stream (6.1) got enabled, and the dwm package from the module got installed.

To be transparent, there is a message about the module auto-enablement in the output. But other than that, the user doesn’t need to know anything about Modularity in order to use their system the way they always did.

But what if they do? Let’s see how a different version of dwm could have been installed instead.

Use the following command to see what module streams are available:

$ sudo dnf module list
...
dwm latest ...
dwm 6.0 ...
dwm 6.1 [d] ...
dwm 6.2 ...
...
Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled

The output shows there are four streams of the dwm module, 6.1 being the default.

To install the dwm package in a different version — from the 6.2 stream for example — enable the stream and then install the package by using the two following commands:

$ sudo dnf module enable dwm:6.2
...
Enabling module streams:
dwm 6.2
...
Done!
$ sudo dnf install dwm
...
Done!

Finally, let’s have a look at profiles, with PostgreSQL as an example.

$ sudo dnf module list
...
postgresql 9.6 client, server ...
postgresql 10 client, server ...
postgresql 11 client, server ...
...

To install PostgreSQL 11 as a server, use the following command:

$ sudo dnf module install postgresql:11/server

Note that when a profile is specified, a single command both enables the stream and installs the packages — no separate enable step is needed.

It is possible to install multiple profiles at once. To add the client tools, use the following command:

$ sudo dnf module install postgresql:11/client

There are many other modules with multiple streams available to choose from. At the time of writing, there were 83 module streams in Fedora 30. That includes two versions of MariaDB, three versions of Node.js, two versions of Ruby, and many more.

Please refer to the official user documentation for Modularity for a complete set of commands including switching from one stream to another.

Applications for writing Markdown

Monday 10th of June 2019 08:56:12 AM

Markdown is a lightweight markup language that is useful for adding formatting while still maintaining readability when viewed as plain text. Markdown (and its derivatives) is used extensively as the primary form of markup for documents on services like GitHub and pagure. By design, Markdown is easily created and edited in a plain text editor; however, there are a multitude of editors available that provide a formatted preview of Markdown markup and/or a text editor that highlights Markdown syntax.
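As a quick illustration, the following snippet is valid Markdown, and it stays perfectly readable even before any rendering:

```markdown
# Project notes

A paragraph with *emphasis*, **strong emphasis**, and `inline code`.

- a bulleted list item
- another item, with a [link](https://getfedora.org/)
```

Every editor below renders markup like this into a formatted preview.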

This article covers 3 desktop applications for Fedora Workstation that help out when editing Markdown.

UberWriter

UberWriter is a minimal Markdown editor and previewer that allows you to edit in text, and preview the rendered document.

The editor itself has inline previews built in, so text marked up as bold is displayed bold. The editor also provides inline previews for images, formulas, footnotes, and more. Ctrl-clicking one of these items in the markup causes an instant preview of that element to appear.

In addition to the editor features, UberWriter also offers a full screen mode and a focus mode to help minimise distractions. Focus mode greys out all but the current paragraph to help you focus on that element of your document.

Install UberWriter on Fedora from the 3rd-party Flathub repositories. It can be installed directly from the Software application after setting up your system to install from Flathub.

Marker

Marker is a Markdown editor that provides a simple text editor to write Markdown in, and provides a live preview of the rendered document. The interface is designed with a split screen layout with the editor on the left, and the live preview on the right.

Additionally, Marker allows you to export your document in a range of different formats, including HTML, PDF, and the Open Document Format (ODF).

Install Marker on Fedora from the 3rd-party Flathub repositories. It can be installed directly from the Software application after setting up your system to install from Flathub.

Ghostwriter

Where the previous editors are more focused on a minimal user experience, Ghostwriter provides many more features and options to play with. Ghostwriter provides a text editor that is partially styled as you write in Markdown format. Bold text is bold, and headings are in a larger font to assist in writing the markup.

It also provides a split screen with a live updating preview of the rendered document.

Ghostwriter also includes a range of other features, including the ability to choose the Markdown flavour used for the preview, as well as the stylesheet used to render it.

Additionally, it provides a format menu (and keyboard shortcuts) to insert some of the frequent markdown ‘tags’ like bold, bullets, and italics.

Install Ghostwriter on Fedora from the 3rd-party Flathub repositories. It can be installed directly from the Software application after setting up your system to install from Flathub.

Contribute to Fedora Magazine

Friday 7th of June 2019 08:00:59 AM

Do you want to share a piece of Fedora news for the general public? Have a good idea for how to do something using Fedora? Do you or someone you know use Fedora in an interesting way?

We’re always looking for new contributors to write awesome, relevant content. The Magazine is run by the Fedora community — and that’s all of us. You can help too! It’s really easy. Read on to find out how.

What content do we need?

Glad you asked. We often feature material for desktop users, since there are many of them out there! But that’s not all we publish. We want the Magazine to feature lots of different content for the general public.

Sysadmins and power users

We love to publish articles for system administrators and power users who dive under the hood. Here are some recent examples:

Developers

We don’t forget about developers, either. We want to help people use Fedora to build and make incredible things. Here are some recent articles focusing on developers:

Interviews, projects, and links

We also feature interviews with people using Fedora in interesting ways. We even link to other useful content about Fedora. We’ve run interviews recently with people using Fedora to increase security, administer infrastructure, or give back to the community. You can help here, too — it’s as simple as exchanging some email and working with our helpful staff.

How do I get started?

It’s easy to start writing for Fedora Magazine! You just need to have decent skill in written English, since that’s the language in which we publish. Our editors can help polish your work for maximum impact.

Follow this easy process to get involved.

The Magazine team will guide you through getting started. The team also hangs out on #fedora-mktg on Freenode. Drop by, and we can help you get started.

Image courtesy Dustin Lee – originally posted to Unsplash as Untitled

Tweaking the look of Fedora Workstation with themes

Wednesday 5th of June 2019 08:00:29 AM

Changing the theme of a desktop environment is a common way to customize your daily experience with Fedora Workstation. This article discusses the 4 different types of visual themes you can change and how to change to a new theme. Additionally, this article will cover how to install new themes from both the Fedora repositories and 3rd party theme sources.

Theme Types

When changing the theme of Fedora Workstation, there are 4 different themes that can be changed independently of each other. This allows a user to mix and match the theme types to customize their desktop in a multitude of combinations. The 4 theme types are the Application (GTK) theme, the shell theme, the icon theme, and the cursor theme.

Application (GTK) themes

As the name suggests, Application themes change the styling of the applications that are displayed on a user’s desktop. Application themes control the style of the window borders and the window titlebar. Additionally, they also control the style of the widgets in the windows — like dropdowns, text inputs, and buttons. One point to note is that an application theme does not change the icons that are displayed in an application — this is achieved using the icon theme.

Two application windows with two different application themes. The default Adwaita theme on the left, the Adapta theme on the right.

Application themes are also known as GTK themes, as GTK (GIMP Toolkit) is the underlying technology that is used to render the windows and user interface widgets in those windows on Fedora Workstation.

Shell Themes

Shell themes change the appearance of the GNOME Shell. The GNOME Shell is the technology that displays the top bar (and the associated widgets like drop downs), as well as the overview screen and the applications list it contains.

Comparison of two Shell themes, with the Fedora Workstation default on top, and the Adapta shell theme on the bottom.

Icon Themes

As the name suggests, icon themes change the icons used in the desktop. Changing the icon theme will change the icons displayed both in the Shell, and in applications.

Comparison of two icon themes, with the Fedora 30 Workstation default Adwaita on the left, and the Yaru icon theme on the right

One important item to note with icon themes is that not all icon themes provide customized icons for every application. Consequently, changing the icon theme will not change all the icons in the applications list in the overview.

Comparison of two icon themes, with the Fedora 30 Workstation default Adwaita on the top, and the Yaru icon theme on the bottom.

Cursor Theme

The cursor theme allows a user to change how the mouse pointer is displayed. Most cursor themes change all the common cursors, including the pointer, drag handles and the loading cursor.

Comparison of multiple cursors of two different cursor themes. Fedora 30 default is on the left, the Breeze Snow theme on the right.

Changing the themes

Changing themes on Fedora Workstation is a simple process. To change all 4 types of themes, use the Tweaks application. Tweaks is a tool used to change a range of different options in Fedora Workstation. It is not installed by default, and can be installed using the Software application.

Alternatively, install Tweaks from the command line with the command:

sudo dnf install gnome-tweak-tool

In addition to Tweaks, to change the Shell theme, the User Themes GNOME Shell Extension needs to be installed and enabled. Check out this post for more details on installing extensions.

Next, launch Tweaks, and switch to the Appearance pane. The Themes section in the Appearance pane allows the changing of the multiple theme types. Simply choose the theme from the dropdown, and the new theme will apply automatically.

Installing themes

Armed with the knowledge of the types of themes, and how to change themes, it is time to install some themes. Broadly speaking, there are two ways to install new themes to your Fedora Workstation — installing theme packages from the Fedora repositories, or manually installing a theme. One point to note when installing themes, is that you may need to close and re-open the Tweaks application to make a newly installed theme appear in the dropdowns.

Installing from the Fedora repositories

The Fedora repositories contain a small selection of additional themes that, once installed, are available to be chosen in Tweaks. Theme packages are not available in the Software application, and have to be searched for and installed via the command line. Most theme packages have a consistent naming structure, so listing available themes is pretty easy.

To find Application (GTK) themes use the command:

dnf search gtk | grep theme

To find Shell themes:

dnf search shell-theme

Icon themes:

dnf search icon-theme

Cursor themes:

dnf search cursor-theme

Once you have found a theme to install, install the theme using dnf. For example:

sudo dnf install numix-gtk-theme

Installing themes manually

For a wider range of themes, there are a plethora of places on the internet to find new themes to use on Fedora Workstation. Two popular places to find themes are OpenDesktop and GNOMELook.

Typically, when downloading themes from these sites, the themes are encapsulated in an archive such as a tar.gz or zip file. In most cases, to install these themes, simply extract the contents into the correct directory, and the theme will appear in Tweaks. Note, too, that themes can be installed either globally (which must be done using sudo) so all users on the system can use them, or just for the current user.

For Application (GTK) themes, and GNOME Shell themes, extract the archive to the .themes/ directory in your home directory. To install for all users, extract to /usr/share/themes/

For Icon and Cursor themes, extract the archive to the .icons/ directory in your home directory. To install for all users, extract to /usr/share/icons/
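As a sketch, the whole manual install can be scripted. Everything below runs in a throwaway temporary directory, and the archive and theme name (my-theme) are stand-ins for a real download; for a real GTK or Shell theme you would extract into $HOME/.themes instead:

```shell
#!/bin/sh
set -e
WORK=$(mktemp -d)

# Stand-in for a downloaded archive: a minimal theme directory, tarred up
mkdir -p "$WORK/my-theme/gtk-3.0"
echo "/* stylesheet would live here */" > "$WORK/my-theme/gtk-3.0/gtk.css"
tar -czf "$WORK/my-theme.tar.gz" -C "$WORK" my-theme

# The actual install step: make sure the target exists, then extract into it.
# For a real theme, DEST would be "$HOME/.themes" (or /usr/share/themes with sudo).
DEST="$WORK/themes"
mkdir -p "$DEST"
tar -xzf "$WORK/my-theme.tar.gz" -C "$DEST"

ls "$DEST"
```

After extracting a real theme this way, close and re-open Tweaks and the new theme shows up in the dropdown.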

Submissions now open for the Fedora 31 supplemental wallpapers

Monday 3rd of June 2019 07:30:03 AM

Have you always wanted to start contributing to Fedora but don’t know how? Submitting a supplemental wallpaper is one of the easiest ways to start as a Fedora contributor. Keep reading to learn how.

Each release, the Fedora Design team works with the community on a set of 16 additional wallpapers. Users can install and use these to supplement the standard wallpaper. And submissions are now open for the Fedora 31 Supplemental Wallpapers.

Dates and deadlines

The submission phase opens June 3, 2019 and ends July 26, 2019 at 23:59 UTC.

Important note: in certain circumstances, submissions made during the last hours may not make it into the election if there is no time to do the legal research. The legal research is done by hand and is very time consuming. Please help by following the guidelines correctly and submitting only work that has a correct license.

Please refrain from submitting pictures of pets, especially cats.

Voting will open August 1, 2019 and remain open until August 16, 2019 at 23:59 UTC.

How to contribute to this package

Fedora uses the Nuancier application to manage the submissions and the voting process. To submit, you need a Fedora account. If you don’t have one, you can create one here. To vote, you must have membership in a group other than cla_done or cla_fpca.

For inspiration, you can look at former submissions and the previous winners. Here are some from the last election:

You may upload only two submissions into Nuancier. If you submit multiple versions of the same image, the team will choose one version, accept it as one submission, and deny the other.

Previous submissions that were not selected should not be resubmitted, and may be rejected. Creations that lack essential artistic quality may also be rejected.

Denied submissions in Nuancier count toward your limit. Therefore, if you make two submissions and both are rejected, you cannot submit more. Use your best judgment for your submissions.

Badges

You can also earn badges for contributing. One badge is for an accepted submission. Another badge is awarded if your submission is a chosen wallpaper. A third is awarded if you participate in the voting process. You must claim this badge during the voting process, as it is not granted automatically.

Use Firefox Send with ffsend in Fedora

Friday 31st of May 2019 08:00:51 AM

ffsend is the command line client of Firefox Send. This article will show how Firefox Send and ffsend work, and detail how to install and use them in Fedora.

What are Firefox Send and ffsend?

Firefox Send is a file sharing tool from Mozilla that allows sending encrypted files to other users. You can install Send on your own server, or use the Mozilla-hosted instance at send.firefox.com. The hosted version officially supports files up to 1 GB. Links expire after a configurable download count (default of 1) or after 24 hours, and then all the files are deleted from the Send server. This tool is still in an experimental phase, and therefore shouldn’t be used in production or to share important or sensitive data.

While Firefox Send is the tool itself and can be used with a web interface, ffsend is a command-line utility you can use with scripts and arguments. It has a wide range of configuration options and can be left working in the background without any human intervention.

How does it work?

ffsend can both upload and download files. The remote host can use either the ffsend tool or a web browser to download the file. Neither Firefox Send nor ffsend require the use of Firefox.

It’s important to highlight that ffsend uses client-side encryption. This means that files are encrypted before they’re uploaded. The secret needed to decrypt the file is shared as part of the link, so be careful when sharing it, because anyone with the link will be able to download the file. As an extra layer of protection, you can also protect the file with a password by using the following argument:

$ ffsend password URL -p PASSWORD

Other features

There are a few other features worth mentioning. Here’s a list:

  • Configurable download limit, between 1 and 20 times, before the link expires
  • Built-in extract and archiving functions
  • Track history of shared files
  • Inspect or delete shared files
  • Folders can be shared as well, either as they are or as compressed files
  • Generate a QR code, for easier download on a mobile phone
How to install in Fedora

While Firefox Send works with Firefox without installing anything extra, you’ll need to install the CLI tool to use ffsend. This tool is in the official repositories, so you only need a simple dnf command with sudo.

$ sudo dnf install ffsend

After that, you can use ffsend from the terminal.

Upload a file

Uploading a file is as simple as:

$ ffsend upload /etc/os-release
Upload complete
Share link: https://send.firefox.com/download/05826227d70b9a4b/#RM_HSBq6kuyeBem8Z013mg

The file can now be easily shared using the Share link URL.

Downloading a file

Downloading a file is as simple as uploading.

$ ffsend download https://send.firefox.com/download/05826227d70b9a4b/#RM_HSBq6kuyeBem8Z013mg
Download complete

Before downloading a file it might be useful to check whether the file exists and get information about it. ffsend provides two handy commands for that.

$ ffsend exists https://send.firefox.com/download/88a6324e2a99ebb6/#YRJDh8ZDQsnZL2KZIA-PaQ
Exists: true
Password: false
$ ffsend info https://send.firefox.com/download/88a6324e2a99ebb6/#YRJDh8ZDQsnZL2KZIA-PaQ
ID: 88a6324e2a99ebb6
Downloads: 0 of 1
Expiry: 23h59m (86388s)

Upload history

ffsend also provides a way to check the history of the uploads made with the tool. This can be really useful if you upload a lot of files during scripted tasks, for example, and you want to keep track of each file’s download status.

$ ffsend history
LINK EXPIRY
1 https://send.firefox.com/download/#8TJ9QNw 23h59m
2 https://send.firefox.com/download/KZIA-PaQ 23h54m

Delete a file

Another useful feature is the ability to delete a file.

$ ffsend delete https://send.firefox.com/download/2d9faa7f34bb1478/#phITKvaYBjCGSRI8TJ9QNw

Firefox Send is a great service, and the ffsend tool makes it really convenient to use from the terminal. More examples and documentation are available in ffsend‘s GitLab repository.

Fedora 28 End of Life

Wednesday 29th of May 2019 12:48:33 PM

With the recent release of Fedora 30, Fedora 28 officially entered End Of Life (EOL) status effective May 28, 2019. This impacts any systems still on Fedora 28. If you’re not sure what that means to you, read more below.

At this point, packages in the Fedora 28 repositories no longer receive security, bugfix, or enhancement updates. Furthermore, the community adds no new packages to the Fedora 28 collection starting at End of Life. Essentially, the Fedora 28 release will not change again, meaning users no longer receive the normal benefits of this leading-edge operating system.

There’s an easy, free way to keep those benefits. If you’re still running an End of Life version such as Fedora 28, now is the perfect time to upgrade to Fedora 29 or to Fedora 30. Upgrading gives you access to all the community-provided software in Fedora.

Looking back at Fedora 28

Fedora 28 was released on May 1, 2018. As part of their commitment to users, Fedora community members released over 9,700 updates.

This release featured, among many other improvements and upgrades:

  • GNOME 3.28
  • Easier options for third-party repositories
  • Automatic updates for the Fedora Atomic Host
  • The new Modular repository, allowing you to select from different versions of software for your system

Of course, the Project also offered numerous alternative spins of Fedora, and support for multiple architectures.

About the Fedora release cycle

The Fedora Project offers updates for a Fedora release until a month after the second subsequent version releases. For example, updates for Fedora 29 continue until one month after the release of Fedora 31. Fedora 30 continues to be supported up until one month after the release of Fedora 32.

The Fedora Project wiki contains more detailed information about the entire Fedora Release Life Cycle. The lifecycle includes milestones from development to release, and the post-release support period.

Packit – packaging in Fedora with minimal effort

Wednesday 29th of May 2019 07:30:11 AM


What is packit

Packit (https://packit.dev/) is a CLI tool that helps you automatically maintain your upstream projects in the Fedora operating system. But what does that really mean?

As a developer, you might want to update your package in Fedora. If you’ve done it in the past, you know it’s no easy task. If you haven’t, let me reiterate: it’s no easy task.

And this is exactly where packit can help: once you have your package in Fedora, you can maintain your SPEC file upstream and, with just one additional configuration file, packit will help you update your package in Fedora when you update your source code upstream.

Furthermore, packit can synchronize downstream changes to a SPEC file back into the upstream repository. This could be useful if the SPEC file of your package is changed in Fedora repositories and you would like to synchronize it into your upstream project.

Packit also provides a way to build an SRPM package based on an upstream repository checkout, which can be used for building RPM packages in COPR.

Last but not least, packit provides a status command. This command provides information about upstream and downstream repositories, like pull requests, releases, and more.

Packit also provides another two commands: build and create-update.

The packit build command performs a production build of your project in koji, the Fedora build system. You can specify which Fedora version you want to build against using the --dist-git-branch option. The packit create-update command creates a Bodhi update for a specific branch using the same option.

Installation

You can install packit on Fedora using dnf:

sudo dnf install -y packit

Configuration

As a demonstration use case, I have selected the upstream repository of colin (https://github.com/user-cont/colin). Colin is a tool to check generic rules and best practices for containers, Dockerfiles, and container images.

First of all, clone colin git repository:

$ git clone https://github.com/user-cont/colin.git
$ cd colin

Packit expects to run in the root of your git repository.

Packit (https://github.com/packit-service/packit/) needs information about your project, which has to be stored in the upstream repository in the .packit.yaml file (https://github.com/packit-service/packit/blob/master/docs/configuration.md#projects-configuration-file).

See colin’s packit configuration file:

$ cat .packit.yaml
specfile_path: colin.spec
synced_files:
 - .packit.yaml
 - colin.spec
upstream_project_name: colin
downstream_package_name: colin

What do the values mean?

  • specfile_path – a relative path to a spec file within the upstream repository (mandatory)
  • synced_files – a list of relative paths to files in the upstream repo which are meant to be copied to dist-git during an update
  • upstream_project_name – name of the upstream repository (e.g. in PyPI); this is used in the %prep section
  • downstream_package_name – name of the package in Fedora (mandatory)

For more information see the packit configuration documentation (https://github.com/packit-service/packit/blob/master/docs/configuration.md)

What can packit do?

A prerequisite for using packit is that you are in the working directory of a git checkout of your upstream project.

Before running any packit command, you need to complete several setup actions. These actions are mandatory for filing a PR into the upstream or downstream repositories and for access to the Fedora dist-git repositories.

Export GitHub token taken from https://github.com/settings/tokens:

$ export GITHUB_TOKEN=<YOUR_TOKEN>

Obtain your Kerberos ticket needed for the Fedora Account System (FAS):

$ kinit <yourname>@FEDORAPROJECT.ORG

Export your Pagure API keys taken from https://src.fedoraproject.org/settings#nav-api-tab:

$ export PAGURE_USER_TOKEN=<PAGURE_USER_TOKEN>

Packit also needs a fork token to create a pull request. The token is taken from https://src.fedoraproject.org/fork/YOU/rpms/PACKAGE/settings#apikeys-tab

Do it by running:

$ export PAGURE_FORK_TOKEN=<PAGURE_FORK_TOKEN>

Or store these tokens in the ~/.config/packit.yaml file:

$ cat ~/.config/packit.yaml

github_token: <GITHUB_TOKEN>
pagure_user_token: <PAGURE_USER_TOKEN>
pagure_fork_token: <PAGURE_FORK_TOKEN>
Propose a new upstream release in Fedora

The command for this first use case is called propose-update (https://github.com/jpopelka/packit/blob/master/docs/propose_update.md). The command creates a new pull request in Fedora dist-git repository using a selected or the latest upstream release.

$ packit propose-update

INFO: Running 'anitya' versioneer
Version in upstream registries is '0.3.1'.
Version in spec file is '0.3.0'.
WARNING  Version in spec file is outdated
Picking version of the latest release from the upstream registry.
Checking out upstream version 0.3.1
Using 'master' dist-git branch
Copying /home/vagrant/colin/colin.spec to /tmp/tmptfwr123c/colin.spec.
Archive colin-0.3.0.tar.gz found in lookaside cache (skipping upload).
INFO: Downloading file from URL https://files.pythonhosted.org/packages/source/c/colin/colin-0.3.0.tar.gz
100%[=============================>]     3.18M  eta 00:00:00
Downloaded archive: '/tmp/tmptfwr123c/colin-0.3.0.tar.gz'
About to upload to lookaside cache
won't be doing kinit, no credentials provided
PR created: https://src.fedoraproject.org/rpms/colin/pull-request/14

Once the command finishes, you can see a PR in the Fedora Pagure instance which is based on the latest upstream release. Once you review it, it can be merged.

Sync downstream changes back to the upstream repository

Another use case is to sync downstream changes into the upstream project repository.

The command for this purpose is called sync-from-downstream (https://github.com/jpopelka/packit/blob/master/docs/sync-from-downstream.md). Files synced into the upstream repository are mentioned in the packit.yaml configuration file under the synced_files value.

$ packit sync-from-downstream

upstream active branch master
using "master" dist-git branch
Copying /tmp/tmplvxqtvbb/colin.spec to /home/vagrant/colin/colin.spec.
Creating remote fork-ssh with URL git@github.com:phracek/colin.git.
Pushing to remote fork-ssh using branch master-downstream-sync.
PR created: https://github.com/user-cont/colin/pull/229

As soon as packit finishes, you can see the latest changes taken from the Fedora dist-git repository in the upstream repository. This can be useful, e.g. when Release Engineering performs mass-rebuilds and they update your SPEC file in the Fedora dist-git repository.

Get the status of your upstream project

If you are a developer, you may want to get all the information about the latest releases, tags, pull requests, etc. from the upstream and the downstream repository. Packit provides the status command for this purpose.

$ packit status
Downstream PRs:
 ID  Title                             URL
----  --------------------------------  ---------------------------------------------------------
 14  Update to upstream release 0.3.1  https://src.fedoraproject.org//rpms/colin/pull-request/14
 12  Upstream pr: 226                  https://src.fedoraproject.org//rpms/colin/pull-request/12
 11  Upstream pr: 226                  https://src.fedoraproject.org//rpms/colin/pull-request/11
  8 Upstream pr: 226                  https://src.fedoraproject.org//rpms/colin/pull-request/8

Dist-git versions:
f27: 0.2.0
f28: 0.2.0
f29: 0.2.0
f30: 0.2.0
master: 0.2.0

GitHub upstream releases:
0.3.1
0.3.0
0.2.1
0.2.0
0.1.0

Latest builds:
f27: colin-0.2.0-1.fc27
f28: colin-0.3.1-1.fc28
f29: colin-0.3.1-1.fc29
f30: colin-0.3.1-2.fc30

Latest bodhi updates:
Update                Karma  status
------------------  ------- --------
colin-0.3.1-1.fc29        1  stable
colin-0.3.1-1.fc28        1  stable
colin-0.3.0-2.fc28        0  obsolete

Create an SRPM

The last packit use case is to generate an SRPM package based on a git checkout of your upstream project. The packit command for SRPM generation is srpm.

$ packit srpm
Version in spec file is '0.3.1.37.g00bb80e'.
SRPM: /home/phracek/work/colin/colin-0.3.1.37.g00bb80e-1.fc29.src.rpm

Packit as a service

In the summer, the people behind packit would like to introduce packit as a service (https://github.com/packit-service/packit-service). In this case, the packit GitHub application will be installed into the upstream repository and packit will perform all the actions automatically, based on the events it receives from GitHub or fedmsg.

5 GNOME keyboard shortcuts to be more productive

Monday 27th of May 2019 08:00:33 AM

For some people, using GNOME Shell as a traditional desktop manager may be frustrating since it often requires more use of the mouse. In fact, GNOME Shell is also a desktop manager designed for and meant to be driven by the keyboard. Learn how to be more efficient with GNOME Shell with these 5 ways to use the keyboard instead of the mouse.

GNOME activities overview

The activities overview can be easily opened using the Super key from the keyboard. (The Super key usually has a logo on it.) This is really useful when it comes to starting an application. For example, it’s easy to start the Firefox web browser with the following key sequence: Super + f i r + Enter.

Message tray

In GNOME, notifications are available in the message tray. This is also the place where the calendar and world clocks are available. To open the message tray using the keyboard use the Super+m shortcut. To close the message tray simply use the same shortcut again.

Managing workspaces in GNOME

GNOME Shell uses dynamic workspaces, meaning it creates additional workspaces as they are needed. A great way to be more productive using GNOME is to use one workspace per application or per dedicated activity, and then use the keyboard to navigate between these workspaces.

Let’s look at a practical example. To open a Terminal in the current workspace press the following keys: Super + t e r + Enter. Then, to open a new workspace press Super + PgDn. Open Firefox (Super + f i r + Enter). To come back to the terminal, use Super + PgUp.

Managing an application window

Using the keyboard it is also easy to manage the size of an application window. Minimizing, maximizing and moving the application to the left or the right of the screen can be done with only a few keystrokes. Use Super+H to minimize a window, Super+Up to maximize it, and Super+Left or Super+Right to move it to the left or right half of the screen.

Securing telnet connections with stunnel

Wednesday 22nd of May 2019 08:00:51 AM

Telnet is a client-server protocol that connects to a remote server through TCP over port 23. Telnet does not encrypt data and is considered insecure, and passwords can be easily sniffed because data is sent in the clear. However, there are still legacy systems that need to use it. This is where stunnel comes to the rescue.

Stunnel is designed to add SSL encryption to programs that have insecure connection protocols. This article shows you how to use it, with telnet as an example.

Server Installation

Install stunnel along with the telnet server and client using sudo:

sudo dnf -y install stunnel telnet-server telnet

Add a firewall rule, entering your password when prompted:

firewall-cmd --add-service=telnet --perm
firewall-cmd --reload

Next, generate an RSA private key and an SSL certificate:

openssl genrsa 2048 > stunnel.key
openssl req -new -key stunnel.key -x509 -days 90 -out stunnel.crt

You will be prompted for the following information one line at a time. When asked for the Common Name you must enter the correct host name or IP address, but you can skip everything else by hitting the Enter key.

You are about to be asked to enter information that will be
incorporated into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:
State or Province Name (full name) []:
Locality Name (eg, city) [Default City]:
Organization Name (eg, company) [Default Company Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:
Email Address []
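If you would rather skip the interactive prompts entirely, you can pass the subject on the command line with -subj and generate the key and certificate in one non-interactive step. This is just an alternative sketch of the two commands above; the hostname shown is a placeholder, so substitute your server's real host name or IP address.

```shell
# Non-interactive equivalent of the genrsa + req steps above.
# "telnet.example.com" is a placeholder -- use your server's hostname.
openssl req -newkey rsa:2048 -nodes -keyout stunnel.key \
    -x509 -days 90 -subj "/CN=telnet.example.com" -out stunnel.crt
```

This is convenient in scripts, where there is no terminal to answer the prompts.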

Merge the RSA key and SSL certificate into a single .pem file, and copy that to the SSL certificate directory:

cat stunnel.crt stunnel.key > stunnel.pem
sudo cp stunnel.pem /etc/pki/tls/certs/

Now it’s time to define the service and the ports to use for encrypting your connection. Choose a port that is not already in use. This example uses port 450 for tunneling telnet. Edit or create the /etc/stunnel/telnet.conf file:

cert = /etc/pki/tls/certs/stunnel.pem
sslVersion = TLSv1
chroot = /var/run/stunnel
setuid = nobody
setgid = nobody
pid = /stunnel.pid
socket = l:TCP_NODELAY=1
socket = r:TCP_NODELAY=1
[telnet]
accept = 450
connect = 23

The accept option is the port the server will listen to for incoming telnet requests. The connect option is the internal port the telnet server listens to.

Next, make a copy of the systemd unit file that allows you to override the packaged version:

sudo cp /usr/lib/systemd/system/stunnel.service /etc/systemd/system

Edit the /etc/systemd/system/stunnel.service file to add two lines. These lines create a chroot jail for the service when it starts.

[Unit]
Description=TLS tunnel for network daemons
After=syslog.target network.target

[Service]
ExecStart=/usr/bin/stunnel
Type=forking
PrivateTmp=true
ExecStartPre=-/usr/bin/mkdir /var/run/stunnel
ExecStartPre=/usr/bin/chown -R nobody:nobody /var/run/stunnel

[Install]
WantedBy=multi-user.target

Next, configure SELinux to listen to telnet on the new port you just specified:

sudo semanage port -a -t telnetd_port_t -p tcp 450

Finally, add a new firewall rule:

firewall-cmd --add-port=450/tcp --perm
firewall-cmd --reload

Now you can enable and start telnet and stunnel.

systemctl enable telnet.socket stunnel@telnet.service --now

A note on the systemctl command is in order. Systemd and the stunnel package provide an additional template unit file by default. The template lets you drop multiple configuration files for stunnel into /etc/stunnel, and use the filename to start the service. For instance, if you had a foobar.conf file, you could start that instance of stunnel with systemctl start stunnel@foobar.service, without having to write any unit files yourself.
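As a sketch, such a hypothetical /etc/stunnel/foobar.conf might look like the following (the service name and ports are made up for illustration; only the cert, accept, and connect values would need to match your setup):

```
cert = /etc/pki/tls/certs/stunnel.pem
chroot = /var/run/stunnel
setuid = nobody
setgid = nobody
pid = /stunnel.pid

[foobar]
accept = 451
connect = 8080
```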

If you want, you can set this stunnel template service to start on boot:

systemctl enable stunnel@telnet.service

Client Installation

This part of the article assumes you are logged in as a normal user (with sudo privileges) on the client system. Install stunnel and the telnet client:

sudo dnf -y install stunnel telnet

Copy the stunnel.pem file from the remote server to your client /etc/pki/tls/certs directory. In this example, the IP address of the remote telnet server is 192.168.1.143.

sudo scp myuser@192.168.1.143:/etc/pki/tls/certs/stunnel.pem /etc/pki/tls/certs/

Create the /etc/stunnel/telnet.conf file:

cert = /etc/pki/tls/certs/stunnel.pem
client=yes
[telnet]
accept=450
connect=192.168.1.143:450

The accept option is the port that will be used for telnet sessions. The connect option is the IP address of your remote server and the port it’s listening on.

Next, enable and start stunnel:

systemctl enable stunnel@telnet.service --now

Test your connection. Since you have a connection established, you will telnet to localhost instead of the hostname or IP address of the remote telnet server:

[user@client ~]$ telnet localhost 450
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.

Kernel 5.0.9-301.fc30.x86_64 on an x86_64 (0)
server login: myuser
Password: XXXXXXX
Last login: Sun May  5 14:28:22 from localhost
[myuser@server ~]$

Getting set up with Fedora Project services

Monday 20th of May 2019 08:00:04 AM

In addition to providing an operating system, the Fedora Project provides numerous services for users and developers. Services such as Ask Fedora, the Fedora Project Wiki and the Fedora Project Mailing Lists provide users with valuable resources for learning how to best take advantage of Fedora. For developers of Fedora, there are many other services such as dist-git, Pagure, Bodhi, COPR and Bugzilla that are involved with the packaging and release process.

These services are available for use with a free account from the Fedora Accounts System (FAS). This account is the passport to all things Fedora! This article covers how to get set up with an account and configure Fedora Workstation for browser single sign-on.

Signing up for a Fedora account

To create a FAS account, browse to the account creation page. Here, you will fill out your basic identity data:

Account creation page

Once you enter your data, an email will be sent to the email address provided, with a temporary password. Pick a strong password and use it.

Password reset page

Next, the account details page appears. If you intend to become a contributor to the Fedora Project, you should complete the Contributor Agreement now. Otherwise, you are done and your account can now be used to log into the various Fedora services.

Account details page Configuring Fedora Workstation for single sign-On

Now that you have your account, you can sign into any of the Fedora Project services. Most of these services support single sign-on (SSO), allowing you to sign in without re-entering your username and password.

Fedora Workstation provides an easy workflow to add SSO credentials. The GNOME Online Accounts tool helps you quickly set up your system to access many popular services. To access it, go to the Settings menu.

GNOME Online Accounts

Click on the ⋮ button and select Enterprise Login (Kerberos), which provides a single text prompt for a principal. Enter fasname@FEDORAPROJECT.ORG (being sure to capitalize FEDORAPROJECT.ORG) and click Connect.

Kerberos principal dialog

GNOME prompts you to enter your password for FAS and gives you the option to save it. If you choose to save it, it is stored in GNOME Keyring and unlocked automatically at login. If you choose not to save it, you will need to open GNOME Online Accounts and enter your password each time you want to enable single sign-on.

Single sign-on with a web browser

Today, Fedora Workstation supports three web browsers “out of the box” with support for single sign-on with the Fedora Project services. These are Mozilla Firefox, GNOME Web, and Google Chrome. Due to a bug in Chromium, single sign-on does not currently work properly in many cases. As a result, this has not been enabled for Chromium in Fedora.

To sign on to a service, browse to it and select the “login” option for that service. For most Fedora services, this is the only thing you need to do and the browser handles the rest. Some services such as the Fedora Mailing Lists and Bugzilla support multiple login types. For them, you need to select the “Fedora” or “Fedora Account System” login type.

That’s it! You can now log into any of the Fedora Project services without re-entering your password.

Special consideration for Google Chrome

In order to enable single sign-on out of the box for Google Chrome, Fedora needed to take advantage of certain features in Chrome that are intended for use in “managed” environments. A managed environment is traditionally a corporate or other organization that sets certain security and/or monitoring requirements on the browser.

Recently, Google Chrome changed its behavior and it now reports “Managed by your organization” under the ⋮ menu in Google Chrome. That link leads to a page that states “If your Chrome browser is managed, your administrator can set up or restrict certain features, install extensions, monitor activity, and control how you use Chrome.” Fedora will never monitor your browser activity or restrict your actions.

Enter chrome://policy in the address bar to see exactly what settings Fedora has enabled in the browser. The AuthNegotiateDelegateWhitelist and AuthServerWhitelist options will be set to *.fedoraproject.org. These are the only changes Fedora makes.

Building Smaller Container Images

Thursday 16th of May 2019 08:00:35 AM

Linux containers have become a popular topic, and making sure that a container image is not bigger than it should be is considered a good practice. This article gives some tips on how to create smaller Fedora container images.

microdnf

Fedora’s DNF is written in Python, and it’s designed to be extensible with its wide range of plugins. But Fedora has an alternative base container image which uses a smaller package manager called microdnf, written in C. To use this minimal image in a Dockerfile, the FROM line should look like this:

FROM registry.fedoraproject.org/fedora-minimal:30

This is an important saving if your image does not need typical DNF dependencies like Python. For example, if you are making a NodeJS image.

Install and Clean up in one layer

To save space, it’s important to remove repository metadata using dnf clean all or its microdnf equivalent, microdnf clean all. But you should not do this in two steps, because that would actually store those files in one container image layer and then mark them for deletion in another layer. To do it properly, you should do the installation and cleanup in one step, like this:

FROM registry.fedoraproject.org/fedora-minimal:30
RUN microdnf install nodejs && microdnf clean all

Modularity with microdnf

Modularity is a way to offer you different versions of a stack to choose from. For example, you might want the non-LTS NodeJS version 11 for one project, the old LTS NodeJS version 8 for another, and the latest LTS NodeJS version 10 for yet another. You can specify which stream to use with a colon:

# dnf module list
# dnf module install nodejs:8

The dnf module install command implies two commands: one that enables the stream and one that installs nodejs from it.

# dnf module enable nodejs:8
# dnf install nodejs

Although microdnf does not offer any command related to modularity, it is possible to enable a module with a configuration file, and libdnf (which microdnf uses) seems to support modularity streams. The file looks like this:

/etc/dnf/modules.d/nodejs.module
[nodejs]
name=nodejs
stream=8
profiles=
state=enabled

A full Dockerfile using modularity with microdnf looks like this:

FROM registry.fedoraproject.org/fedora-minimal:30
RUN \
echo -e "[nodejs]\nname=nodejs\nstream=8\nprofiles=\nstate=enabled\n" > /etc/dnf/modules.d/nodejs.module && \
microdnf install nodejs zopfli findutils busybox && \
microdnf clean all

Multi-staged builds

In many cases you might have tons of build-time dependencies that are not needed to run the software, for example when building a Go binary, which statically links its dependencies. Multi-stage builds are an efficient way to separate the application build from the application runtime.

For example, the Dockerfile below builds confd, a Go application.

# building container
FROM registry.fedoraproject.org/fedora-minimal AS build
RUN mkdir /go && microdnf install golang && microdnf clean all
WORKDIR /go
RUN export GOPATH=/go; CGO_ENABLED=0 go get github.com/kelseyhightower/confd

FROM registry.fedoraproject.org/fedora-minimal
WORKDIR /
COPY --from=build /go/bin/confd /usr/local/bin
CMD ["confd"]

The multi-stage build is done by adding AS after the first FROM instruction, adding a second FROM with a base container image, and then using the COPY --from= instruction to copy content from the build container to the second container.

This Dockerfile can then be built and run using podman:

$ podman build -t myconfd .
$ podman run -it myconfd

Manage business documents with OpenAS2 on Fedora

Monday 13th of May 2019 08:00:09 AM

Business documents often require special handling. Enter Electronic Document Interchange, or EDI. EDI is more than simply transferring files using email or http (or ftp), because these are documents like orders and invoices. When you send an invoice, you want to be sure that:

1. It goes to the right destination, and is not intercepted by competitors.
2. Your invoice cannot be forged by a 3rd party.
3. Your customer can’t claim in court that they never got the invoice.

The first two goals can be accomplished by HTTPS or email with S/MIME, and in some situations, a simple HTTPS POST to a web API is sufficient. What EDI adds is the last part.

This article does not cover the messy topic of formats for the files exchanged. Even when using a standardized format like ANSI or EDIFACT, it is ultimately up to the business partners. It is not uncommon for business partners to use an ad-hoc CSV file format. This article shows you how to configure Fedora to send and receive in an EDI setup.

Centralized EDI

The traditional solution is to use a Value Added Network, or VAN. The VAN is a central hub that transfers documents between their customers. Most importantly, it keeps a secure record of the documents exchanged that can be used as evidence in disputes. The VAN can use different transfer protocols for each of its customers.

AS Protocols and MDN

The AS protocols are a specification for adding a digital signature with optional encryption to an electronic document. What it adds over HTTPS or S/MIME is the Message Disposition Notification, or MDN. The MDN is a signed and dated response that says, in essence, “We got your invoice.” It uses a secure hash to identify the specific document received. This addresses point #3 without involving a third party.
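To illustrate the idea behind that secure hash (this is just the concept, not the actual AS2 wire format), a cryptographic digest pins down exactly which bytes were exchanged, so a signed MDN carrying it identifies one specific document:

```shell
# Create a sample invoice and compute its SHA-256 digest.
# An AS2 MDN carries a similar digest of the received message,
# so both parties can prove which exact document was delivered.
printf 'INVOICE #1001: 42 units @ 10 USD\n' > invoice.txt
sha256sum invoice.txt
```

Changing even one byte of the invoice produces a completely different digest, which is what makes the receipt meaningful as evidence.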

The AS2 protocol uses HTTP or HTTPS for transport. Other AS protocols target FTP and SMTP. AS2 is used by companies big and small to avoid depending on (and paying) a VAN.

OpenAS2

OpenAS2 is an open source Java implementation of the AS2 protocol. It has been available in Fedora since Fedora 28, and can be installed with:

$ sudo dnf install openas2
$ cd /etc/openas2

Configuration is done with a text editor, and the config files are in XML. The first order of business before starting OpenAS2 is to change the factory passwords.

Edit /etc/openas2/config.xml and search for ChangeMe. Change those passwords. The default password on the certificate store is testas2, but that doesn’t matter much as anyone who can read the certificate store can read config.xml and get the password.

What to share with AS2 partners

There are 3 things you will exchange with an AS2 peer.

AS2 ID

Don’t bother looking up the official AS2 standard for legal AS2 IDs. While OpenAS2 implements the standard, your partners will likely be using a proprietary product which doesn’t. While AS2 allows much longer IDs, many implementations break with more than 16 characters. Using otherwise-legal AS2 ID characters like ‘:’, which can appear as path separators on a proprietary OS, is also a problem. Restrict your AS2 ID to upper- and lower-case letters, digits, and ‘_’, with no more than 16 characters.
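That conservative rule is easy to enforce with a quick shell check before you commit to an ID. A sketch (the function name and sample IDs are illustrative):

```shell
# Succeed only if the proposed AS2 ID is 1-16 characters drawn from
# letters, digits, and underscore -- the conservative subset above.
as2_id_ok() {
    echo "$1" | grep -Eq '^[A-Za-z0-9_]{1,16}$'
}

as2_id_ok "MyCompany_01" && echo "ok"
as2_id_ok "too:risky/id" || echo "rejected"
```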

SSL certificate

For real use, you will want to generate a certificate with SHA256 and RSA. OpenAS2 ships with two factory certs to play with. Don’t use these for anything real, obviously. The certificate file is in PKCS12 format. Java ships with keytool which can maintain your PKCS12 “keystore,” as Java calls it. This article skips using openssl to generate keys and certificates. Simply note that sudo keytool -list -keystore as2_certs.p12 will list the two factory practice certs.
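For example, keytool can generate a self-signed SHA256-with-RSA certificate directly into a PKCS12 keystore. This is a sketch: the alias, distinguished name, file name, and password below are placeholders for this illustration, not values OpenAS2 requires.

```shell
# Generate a 2048-bit RSA key pair with a SHA256-signed self-signed
# certificate, stored in a PKCS12 keystore. All names are placeholders.
keytool -genkeypair -alias mycompany \
    -keyalg RSA -keysize 2048 -sigalg SHA256withRSA \
    -dname "CN=MyCompany AS2" -validity 1825 \
    -storetype PKCS12 -keystore my_as2_certs.p12 \
    -storepass changeit
```

You would then import your partner's certificate into the same keystore and reference the aliases from the OpenAS2 configuration.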

AS2 URL

This is an HTTP URL that will access your OpenAS2 instance. HTTPS is also supported, but is redundant since AS2 already signs and optionally encrypts the payload. To use it you have to uncomment the https module configuration in config.xml and supply a certificate signed by a public CA. Setting that up would require another article and is unnecessary here.

By default, OpenAS2 listens on 10080 for HTTP and 10443 for HTTPS. OpenAS2 can talk to itself, so it ships with two partnerships using http://localhost:10080 as the AS2 URL. If you don’t find this a convincing demo, and can install a second instance (on a VM, for instance), you can use private IPs for the AS2 URLs. Or install Cjdns to get IPv6 mesh addresses that can be used anywhere, resulting in AS2 URLs like http://[fcbf:fc54:e597:7354:8250:2b2e:95e6:d6ba]:10080.

Most businesses will also want a list of IPs to add to their firewall. This is actually bad practice. An AS2 server has the same security risk as a web server, meaning you should isolate it in a VM or container. Also, the difficulty of keeping mutual lists of IPs up to date grows with the list of partners. The AS2 server rejects requests not signed by a configured partner.

OpenAS2 Partners

With that in mind, open partnerships.xml in your editor. At the top is a list of “partners.” Each partner has a name (referenced by the partnerships below as “sender” or “receiver”), AS2 ID, certificate, and email. You need a partner definition for yourself and those you exchange documents with. You can define multiple partners for yourself. OpenAS2 ships with two partners, OpenAS2A and OpenAS2B, which you’ll use to send a test document.

OpenAS2 Partnerships

Next is a list of “partnerships,” one for each direction. Each partnership configuration includes the sender, receiver, and the AS2 URL used to send the documents. By default, partnerships use synchronous MDN. The MDN is returned on the same HTTP transaction. You could uncomment the as2_receipt_option for asynchronous MDN, which is sent some time later. Use synchronous MDN whenever possible, as tracking pending MDNs adds complexity to your application.

The other partnership options select encryption, signature hash, and other protocol options. A fully implemented AS2 receiver can handle any combination of options, but AS2 partners may have incomplete implementations or policy requirements. For example, DES3 is a comparatively weak encryption algorithm, and may not be acceptable. It is the default because it is almost universally implemented.

If you went to the trouble to set up a second physical or virtual machine for this test, designate one as OpenAS2A and the other as OpenAS2B. Modify the as2_url on the OpenAS2A-to-OpenAS2B partnership to use the IP (or hostname) of OpenAS2B, and vice versa for the OpenAS2B-to-OpenAS2A partnership. Unless they are using the FedoraWorkstation firewall profile, on both machines you’ll need:

$ sudo firewall-cmd --zone=public --add-port=10080/tcp

Now start the openas2 service (on both machines if needed):

$ sudo systemctl start openas2

Resetting the MDN password

Starting the service initializes the MDN log database with the factory password, not the one you changed it to. This is a packaging bug to be fixed in the next release. To avoid frustration, here’s how to change the h2 database password:

$ sudo systemctl stop openas2
$ cat >h2passwd <<'DONE'
#!/bin/bash
AS2DIR="/var/lib/openas2"
java -cp "$AS2DIR"/lib/h2* org.h2.tools.Shell \
-url jdbc:h2:"$AS2DIR"/db/openas2 \
-user sa -password "$1" <<EOF
alter user sa set password '$2';
exit
EOF
DONE
$ sudo sh h2passwd ChangeMe yournewpasswordsetabove
$ sudo systemctl start openas2

Testing the setup

With that out of the way, let’s send a document. Assuming you are on OpenAS2A machine:

$ cat >testdoc <<'DONE'
This is not a real EDI format, but is nevertheless a document.
DONE
$ sudo chown openas2 testdoc
$ sudo mv testdoc /var/spool/openas2/toOpenAS2B
$ sudo journalctl -f -u openas2
... log output of sending file, Control-C to stop following log
^C

OpenAS2 does not send a document until it is writable by the openas2 user or group. Your business application should therefore copy the document into the spool directory (or generate it there) with restrictive permissions, and only then change the group or permissions to send it on its way. This avoids sending a partially written document.
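One simple pattern is to write the file without group-write permission and flip the group bit only once the document is complete. A sketch using a temporary directory as a stand-in for the real spool path (in production you would use /var/spool/openas2/toOpenAS2B and the openas2 group):

```shell
# Stage the document where OpenAS2 will not pick it up yet: without
# group-write permission, the openas2 service ignores it.
spool=$(mktemp -d)               # stand-in for the real spool directory
printf 'document body\n' > "$spool/invoice.edi"
chmod 640 "$spool/invoice.edi"   # readable, but not group-writable

# ... finish writing the document, then release it for sending:
chmod 660 "$spool/invoice.edi"   # in production, also chgrp openas2
```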

Now, on the OpenAS2B machine, /var/spool/openas2/OpenAS2A_OID-OpenAS2B_OID/inbox shows the message received. That should get you started!

Photo by Beatriz Pérez Moya on Unsplash.

Contribute at the Fedora Test Week for kernel 5.1

Sunday 12th of May 2019 05:29:06 PM

The kernel team is working on final integration for kernel 5.1. This version was recently released upstream and will arrive soon in Fedora. It includes many security fixes. As a result, the Fedora kernel and QA teams have organized a test week from Monday, May 13, 2019 through Saturday, May 18, 2019. Refer to the wiki page for links to the test images you’ll need to participate. Read below for details.

How does a test week work?

A test day/week is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the kernel test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results.

Happy testing, and we hope to see you on test day.

Check storage performance with dd

Friday 10th of May 2019 08:00:54 AM

This article includes some example commands to show you how to get a rough estimate of hard drive and RAID array performance using the dd command. Accurate measurements would have to take into account things like write amplification and system call overhead, which this guide does not. For a tool that might give more accurate results, you might want to consider using hdparm.
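For comparison, hdparm has a built-in timing mode that measures both cached and buffered reads. This is a sketch; it requires root, and /dev/sda is a placeholder for your actual device:

```shell
# Time cached reads (-T) and buffered device reads (-t) on /dev/sda.
# Requires root; substitute your own block device.
sudo hdparm -Tt /dev/sda
```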

To factor out performance issues related to the file system, these examples show how to test the performance of your drives and arrays at the block level by reading and writing directly to/from their block devices. WARNING: The write tests will destroy any data on the block devices against which they are run. Do not run them against any device that contains data you want to keep!

Four tests

Below are four example dd commands that can be used to test the performance of a block device:

  1. One process reading from $MY_DISK:

     # dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache

  2. One process writing to $MY_DISK:

     # dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct

  3. Two processes reading concurrently from $MY_DISK:

     # (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)

  4. Two processes writing concurrently to $MY_DISK:

     # (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct seek=200 &)

– The iflag=nocache and oflag=direct parameters are important when performing the read and write tests (respectively) because without them the dd command will sometimes show the resulting speed of transferring the data to/from RAM rather than the hard drive.

– The values for the bs and count parameters are somewhat arbitrary and what I have chosen should be large enough to provide a decent average in most cases for current hardware.

– The null and zero devices are used for the destination and source (respectively) in the read and write tests because they are fast enough that they will not be the limiting factor in the performance tests.

– The offset parameter on the second dd command in the concurrent tests ensures that the two copies of dd operate on different areas of the device. Note that skip offsets the input, so it is correct for the read test; for the write test the output must be offset instead, which dd spells seek=200.
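Because the write tests destroy data, it is worth adding a guard that refuses to run if the target device appears to be mounted. A minimal sketch, assuming a Linux /proc/mounts (MY_DISK is a placeholder you must set):

```shell
# Abort if $MY_DISK appears in /proc/mounts before a destructive test.
MY_DISK=/dev/sdX   # placeholder -- set to the device you intend to test
if grep -q "^$MY_DISK" /proc/mounts; then
    echo "refusing to test $MY_DISK: it is mounted" >&2
    exit 1
fi
```

This only catches the device itself; partitions and LVM volumes on top of it would need a broader check, for example with lsblk.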

16 examples

Below are demonstrations showing the results of running each of the above four tests against each of the following four block devices:

  1. MY_DISK=/dev/sda2 (used in examples 1-X)
  2. MY_DISK=/dev/sdb2 (used in examples 2-X)
  3. MY_DISK=/dev/md/stripped (used in examples 3-X)
  4. MY_DISK=/dev/md/mirrored (used in examples 4-X)

A video demonstration of these tests being run on a PC is provided at the end of this guide.

Begin by putting your computer into rescue mode to reduce the chances that disk I/O from background services might randomly affect your test results. WARNING: This will shutdown all non-essential programs and services. Be sure to save your work before running these commands. You will need to know your root password to get into rescue mode. The passwd command, when run as the root user, will prompt you to (re)set your root account password.

$ sudo -i
# passwd
# setenforce 0
# systemctl rescue

You might also want to temporarily disable logging to disk:

# sed -r -i.bak 's/^#?Storage=.*/Storage=none/' /etc/systemd/journald.conf
# systemctl restart systemd-journald.service

If you have a swap device, it can be temporarily disabled and used to perform the following tests:

# swapoff -a
# MY_DEVS=$(mdadm --detail /dev/md/swap | grep active | grep -o "/dev/sd.*")
# mdadm --stop /dev/md/swap
# mdadm --zero-superblock $MY_DEVS

Example 1-1 (reading from sda)

# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.7003 s, 123 MB/s

Example 1-2 (writing to sda)

# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.67117 s, 125 MB/s

Example 1-3 (reading concurrently from sda)

# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.42875 s, 61.2 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.52614 s, 59.5 MB/s

Example 1-4 (writing concurrently to sda)

# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct seek=200 &)
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.2435 s, 64.7 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.60872 s, 58.1 MB/s

Example 2-1 (reading from sdb)

# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.67285 s, 125 MB/s

Example 2-2 (writing to sdb)

# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.67198 s, 125 MB/s

Example 2-3 (reading concurrently from sdb)

# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.52808 s, 59.4 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.57736 s, 58.6 MB/s

Example 2-4 (writing concurrently to sdb)

# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct seek=200 &)
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.7841 s, 55.4 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.81475 s, 55.0 MB/s

Example 3-1 (reading from RAID0)

# mdadm --create /dev/md/stripped --homehost=any --metadata=1.0 --level=0 --raid-devices=2 $MY_DEVS
# MY_DISK=/dev/md/stripped
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 0.837419 s, 250 MB/s

Example 3-2 (writing to RAID0)

# MY_DISK=/dev/md/stripped
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 0.823648 s, 255 MB/s

Example 3-3 (reading concurrently from RAID0)

# MY_DISK=/dev/md/stripped
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.31025 s, 160 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.80016 s, 116 MB/s

Example 3-4 (writing concurrently to RAID0)

# MY_DISK=/dev/md/stripped
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct seek=200 &)
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.65026 s, 127 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.81323 s, 116 MB/s

Example 4-1 (reading from RAID1)

# mdadm --stop /dev/md/stripped
# mdadm --create /dev/md/mirrored --homehost=any --metadata=1.0 --level=1 --raid-devices=2 --assume-clean $MY_DEVS
# MY_DISK=/dev/md/mirrored
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.74963 s, 120 MB/s

Example 4-2 (writing to RAID1)

# MY_DISK=/dev/md/mirrored
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.74625 s, 120 MB/s

Example 4-3 (reading concurrently from RAID1)

# MY_DISK=/dev/md/mirrored
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.67171 s, 125 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.67685 s, 125 MB/s

Example 4-4 (writing concurrently to RAID1)

# MY_DISK=/dev/md/mirrored
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct seek=200 &)
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 4.09666 s, 51.2 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 4.1067 s, 51.1 MB/s

Restore your swap device and journald configuration

# mdadm --stop /dev/md/mirrored
# mdadm --create /dev/md/swap --homehost=any --metadata=1.0 --level=1 --raid-devices=2 $MY_DEVS
# mkswap /dev/md/swap
# swapon -a
# mv /etc/systemd/journald.conf.bak /etc/systemd/journald.conf
# systemctl restart systemd-journald.service
# reboot

Interpreting the results

Examples 1-1, 1-2, 2-1, and 2-2 show that each of my drives read and write at about 125 MB/s.

Examples 1-3, 1-4, 2-3, and 2-4 show that when two reads or two writes are done in parallel on the same drive, each process gets about half the drive’s bandwidth (about 60 MB/s).

The 3-x examples show the performance benefit of putting the two drives together in a RAID0 (data striping) array. The numbers, in all cases, show that the RAID0 array performs about twice as fast as either drive on its own. The trade-off is that you are twice as likely to lose everything, because each drive contains only half the data. A three-drive array would perform three times as fast as a single drive (all drives being equal) but would be three times as likely to suffer a catastrophic failure.
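The "twice as likely" claim can be made concrete: if each drive independently fails in a given period with probability p, a RAID0 array of n drives loses data with probability 1 - (1 - p)^n, which is approximately n*p for small p. A quick illustration with an assumed (hypothetical) 3% per-drive failure rate:

```shell
# Probability that an n-drive RAID0 array loses data, assuming an
# illustrative 3% independent failure rate per drive.
awk 'BEGIN {
    p = 0.03
    for (n = 1; n <= 3; n++)
        printf "n=%d: %.4f\n", n, 1 - (1 - p)^n
}'
```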

The 4-x examples show that the performance of the RAID1 (data mirroring) array is similar to that of a single disk, except when multiple processes read concurrently (example 4-3). In that case, the RAID1 array performs similarly to the RAID0 array. This means you will see a performance benefit with RAID1, but only when processes read concurrently: for example, when a background process accesses a large number of files while you are using a web browser or email client in the foreground. The main benefit of RAID1 is that your data is unlikely to be lost if a drive fails.

Video demo

Testing storage throughput using dd

Troubleshooting

If the above tests aren’t performing as you expect, you might have a bad or failing drive. Most modern hard drives have built-in Self-Monitoring, Analysis and Reporting Technology (SMART). If your drive supports it, the smartctl command can be used to query your hard drive for its internal statistics:

# smartctl --health /dev/sda
# smartctl --log=error /dev/sda
# smartctl -x /dev/sda

Another way that you might be able to tune your PC for better performance is by changing your I/O scheduler. Linux systems support several I/O schedulers and the current default for Fedora systems is the multiqueue variant of the deadline scheduler. The default performs very well overall and scales extremely well for large servers with many processors and large disk arrays. There are, however, a few more specialized schedulers that might perform better under certain conditions.

To view which I/O scheduler your drives are using, issue the following command:

$ for i in /sys/block/sd?/queue/scheduler; do echo "$i: $(<$i)"; done

You can change the scheduler for a drive by writing the name of the desired scheduler to the /sys/block/<device name>/queue/scheduler file:

# echo bfq > /sys/block/sda/queue/scheduler

You can make your changes permanent by creating a udev rule for your drive. The following example shows how to create a udev rule that will set all rotational drives to use the BFQ I/O scheduler:

# cat << END > /etc/udev/rules.d/60-ioscheduler-rotational.rules
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"
END

Here is another example that sets all solid-state drives to use the “none” (no-op) I/O scheduler:

# cat << END > /etc/udev/rules.d/60-ioscheduler-solid-state.rules
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="none"
END
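The new rules are applied automatically at boot. To apply them immediately without rebooting, you can reload the udev rules and replay the change events for block devices (standard udevadm usage; requires root):

```shell
# Reload udev rules and re-trigger "change" events for block devices so
# the new scheduler setting takes effect without a reboot.
udevadm control --reload
udevadm trigger --subsystem-match=block --action=change
```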

Changing your I/O scheduler won’t affect the raw throughput of your devices, but it might make your PC seem more responsive by prioritizing the bandwidth for the foreground tasks over the background tasks or by eliminating unnecessary block reordering.

Photo by James Donovan on Unsplash.
